@fluentcommerce/fc-connect-sdk 0.1.54 → 0.1.55
This diff reflects the content of publicly available package versions as published to their respective public registries, and is provided for informational purposes only.
- package/CHANGELOG.md +12 -0
- package/dist/cjs/clients/fluent-client.js +13 -6
- package/dist/cjs/utils/pagination-helpers.js +38 -2
- package/dist/cjs/versori/fluent-versori-client.js +11 -5
- package/dist/esm/clients/fluent-client.js +13 -6
- package/dist/esm/utils/pagination-helpers.js +38 -2
- package/dist/esm/versori/fluent-versori-client.js +11 -5
- package/dist/tsconfig.esm.tsbuildinfo +1 -1
- package/dist/tsconfig.tsbuildinfo +1 -1
- package/dist/tsconfig.types.tsbuildinfo +1 -1
- package/docs/00-START-HERE/EXPORT-VALIDATION.md +158 -158
- package/docs/00-START-HERE/cli-analyze-source-structure-guide.md +655 -655
- package/docs/00-START-HERE/cli-documentation-index.md +202 -202
- package/docs/00-START-HERE/cli-quick-reference.md +252 -252
- package/docs/00-START-HERE/decision-tree.md +552 -552
- package/docs/00-START-HERE/getting-started.md +1070 -1070
- package/docs/00-START-HERE/mapper-quick-decision-guide.md +235 -235
- package/docs/00-START-HERE/readme.md +237 -237
- package/docs/00-START-HERE/retailerid-configuration.md +404 -404
- package/docs/00-START-HERE/sdk-philosophy.md +794 -794
- package/docs/00-START-HERE/troubleshooting-quick-reference.md +1086 -1086
- package/docs/01-TEMPLATES/faq.md +686 -686
- package/docs/01-TEMPLATES/patterns/pattern-templates-guide.md +68 -68
- package/docs/01-TEMPLATES/patterns/patterns-csv-schema-validation-and-rejection-report.md +233 -233
- package/docs/01-TEMPLATES/patterns/patterns-custom-resolvers.md +407 -407
- package/docs/01-TEMPLATES/patterns/patterns-error-handling-retry.md +511 -511
- package/docs/01-TEMPLATES/patterns/patterns-field-mapping-universal.md +701 -701
- package/docs/01-TEMPLATES/patterns/patterns-large-file-splitting.md +1430 -1430
- package/docs/01-TEMPLATES/patterns/patterns-master-data-etl.md +2399 -2399
- package/docs/01-TEMPLATES/patterns/patterns-pagination-streaming.md +447 -447
- package/docs/01-TEMPLATES/patterns/patterns-state-duplicate-prevention.md +385 -385
- package/docs/01-TEMPLATES/readme.md +957 -957
- package/docs/01-TEMPLATES/standalone/standalone-asn-inbound-processing.md +1209 -1209
- package/docs/01-TEMPLATES/standalone/standalone-graphql-query-export.md +1140 -1140
- package/docs/01-TEMPLATES/standalone/standalone-graphql-to-parquet-partitioned-s3.md +432 -432
- package/docs/01-TEMPLATES/standalone/standalone-multi-channel-inventory-sync.md +1185 -1185
- package/docs/01-TEMPLATES/standalone/standalone-multi-source-aggregation.md +1462 -1462
- package/docs/01-TEMPLATES/standalone/standalone-s3-csv-batch-api.md +1390 -1390
- package/docs/01-TEMPLATES/standalone/standalone-s3-csv-inventory-to-batch.md +330 -330
- package/docs/01-TEMPLATES/standalone/standalone-scripts-guide.md +87 -87
- package/docs/01-TEMPLATES/standalone/standalone-sftp-xml-graphql.md +1444 -1444
- package/docs/01-TEMPLATES/standalone/standalone-webhook-payload-processing.md +688 -688
- package/docs/01-TEMPLATES/versori/business-examples/business-examples-dropship-order-routing.md +193 -193
- package/docs/01-TEMPLATES/versori/business-examples/business-examples-graphql-parquet-extraction.md +518 -518
- package/docs/01-TEMPLATES/versori/business-examples/business-examples-inter-location-transfers.md +2162 -2162
- package/docs/01-TEMPLATES/versori/business-examples/business-examples-pre-order-allocation.md +2226 -2226
- package/docs/01-TEMPLATES/versori/business-examples/business-scenarios-guide.md +87 -87
- package/docs/01-TEMPLATES/versori/patterns/versori-patterns-connection-validation-pattern.md +656 -656
- package/docs/01-TEMPLATES/versori/patterns/versori-patterns-dual-workflow-connector.md +835 -835
- package/docs/01-TEMPLATES/versori/patterns/versori-patterns-guide.md +108 -108
- package/docs/01-TEMPLATES/versori/patterns/versori-patterns-kv-state-management.md +1533 -1533
- package/docs/01-TEMPLATES/versori/patterns/versori-patterns-xml-response-patterns.md +1160 -1160
- package/docs/01-TEMPLATES/versori/versori-platform-guide.md +201 -201
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-asn-purchase-order.md +1906 -1906
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-dropship-routing.md +1074 -1074
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-flash-sale-reserve.md +1395 -1395
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-generic-xml-order.md +888 -888
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-payment-gateway-integration.md +2478 -2478
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-rma-returns-comprehensive.md +2240 -2240
- package/docs/01-TEMPLATES/versori/webhooks/template-webhook-xml-order-ingestion.md +2029 -2029
- package/docs/01-TEMPLATES/versori/webhooks/webhook-templates-guide.md +140 -140
- package/docs/01-TEMPLATES/versori/workflows/_examples/sample-data/inventory-mapping.json +20 -20
- package/docs/01-TEMPLATES/versori/workflows/_examples/sample-data/products_2025-01-22.csv +11 -11
- package/docs/01-TEMPLATES/versori/workflows/_examples/sample-data/sample-data-guide.md +34 -34
- package/docs/01-TEMPLATES/versori/workflows/_examples/workflow-examples-guide.md +36 -36
- package/docs/01-TEMPLATES/versori/workflows/extraction/extraction-modes-guide.md +1038 -1038
- package/docs/01-TEMPLATES/versori/workflows/extraction/extraction-workflows-guide.md +138 -138
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/graphql-extraction-guide.md +63 -63
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-fulfillments-to-sftp-csv.md +2062 -2062
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-fulfillments-to-sftp-xml.md +2294 -2294
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-inventory-positions-to-s3-csv.md +2461 -2461
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-inventory-positions-to-sftp-xml.md +2529 -2529
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-inventory-quantities-to-s3-csv.md +2464 -2464
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-inventory-quantities-to-s3-json.md +1959 -1959
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-orders-to-s3-csv.md +1953 -1953
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-orders-to-sftp-xml.md +2541 -2541
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-products-to-s3-json.md +2384 -2384
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-products-to-sftp-xml.md +2445 -2445
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-virtual-positions-to-s3-csv.md +2355 -2355
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-virtual-positions-to-s3-json.md +2042 -2042
- package/docs/01-TEMPLATES/versori/workflows/extraction/graphql-queries/template-extraction-virtual-positions-to-sftp-xml.md +2726 -2726
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/batch-api-guide.md +206 -206
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-cycle-count-reconciliation.md +2030 -2030
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-multi-channel-inventory-sync.md +1882 -1882
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-s3-csv-inventory-batch.md +2827 -2827
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-s3-json-inventory-batch.md +1952 -1952
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-s3-xml-inventory-batch.md +3289 -3289
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-sftp-csv-inventory-batch.md +3064 -3064
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-sftp-json-inventory-batch.md +3238 -3238
- package/docs/01-TEMPLATES/versori/workflows/ingestion/batch-api/template-ingestion-sftp-xml-inventory-batch.md +2977 -2977
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/event-api-guide.md +321 -321
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-payload-json-order-cancel-event.md +959 -959
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-payload-xml-order-cancel-event.md +1170 -1170
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-s3-csv-product-event.md +2312 -2312
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-s3-json-product-event.md +2999 -2999
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-s3-parquet-product-event.md +2836 -2836
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-s3-xml-product-event.md +2395 -2395
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-sftp-csv-product-event.md +2295 -2295
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-sftp-json-product-event.md +2602 -2602
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-sftp-parquet-product-event.md +2589 -2589
- package/docs/01-TEMPLATES/versori/workflows/ingestion/event-api/template-ingestion-sftp-xml-product-event.md +3578 -3578
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/graphql-mutations-guide.md +93 -93
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-payload-json-order-update-graphql.md +1260 -1260
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-payload-xml-order-update-graphql.md +1472 -1472
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-s3-csv-control-graphql.md +2417 -2417
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-s3-csv-location-graphql.md +2811 -2811
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-s3-csv-price-graphql.md +2619 -2619
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-s3-json-location-graphql.md +2807 -2807
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-s3-xml-location-graphql.md +2373 -2373
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-sftp-csv-control-graphql.md +2740 -2740
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-sftp-csv-location-graphql.md +2760 -2760
- package/docs/01-TEMPLATES/versori/workflows/ingestion/graphql-mutations/template-ingestion-sftp-json-location-graphql.md +1710 -1710
- package/docs/01-TEMPLATES/versori/workflows/ingestion/ingestion-workflows-guide.md +136 -136
- package/docs/01-TEMPLATES/versori/workflows/rubix-webhooks/rubix-webhooks-guide.md +520 -520
- package/docs/01-TEMPLATES/versori/workflows/rubix-webhooks/template-webhook-rubix-fulfilment-to-sftp-xml-inline.md +1418 -1418
- package/docs/01-TEMPLATES/versori/workflows/rubix-webhooks/template-webhook-rubix-fulfilment-to-sftp-xml-universal-mapper.md +1785 -1785
- package/docs/01-TEMPLATES/versori/workflows/rubix-webhooks/template-webhook-rubix-order-attribute-update.md +824 -824
- package/docs/01-TEMPLATES/versori/workflows/workflows-overview-guide.md +646 -646
- package/docs/02-CORE-GUIDES/advanced-services/advanced-services-batch-archival.md +724 -724
- package/docs/02-CORE-GUIDES/advanced-services/advanced-services-job-tracker.md +627 -627
- package/docs/02-CORE-GUIDES/advanced-services/advanced-services-partial-batch-recovery.md +561 -561
- package/docs/02-CORE-GUIDES/advanced-services/advanced-services-quick-reference.md +367 -367
- package/docs/02-CORE-GUIDES/advanced-services/advanced-services-readme.md +407 -407
- package/docs/02-CORE-GUIDES/advanced-services/readme.md +49 -49
- package/docs/02-CORE-GUIDES/api-reference/api-reference-quick-reference.md +548 -548
- package/docs/02-CORE-GUIDES/api-reference/event-api-input-output-reference.md +702 -1171
- package/docs/02-CORE-GUIDES/api-reference/examples/client-initialization.ts +286 -286
- package/docs/02-CORE-GUIDES/api-reference/graphql-error-classification.md +337 -337
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-01-client-api.md +399 -520
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-03-authentication.md +199 -199
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-04-graphql-mapping.md +925 -925
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-05-services.md +1198 -1198
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-06-data-sources.md +1083 -1083
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-07-parsers.md +1097 -1097
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-08-pagination.md +513 -513
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-08-types.md +545 -597
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-09-error-handling.md +527 -527
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-09-webhook-validation.md +514 -514
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-10-extraction.md +557 -557
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-10-utilities.md +412 -412
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-11-cli-tools.md +423 -423
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-11-error-handling.md +716 -716
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-12-analyze-source-structure.md +518 -518
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-12-partial-responses.md +212 -212
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-12-testing.md +300 -300
- package/docs/02-CORE-GUIDES/api-reference/modules/api-reference-13-resolver-builder.md +322 -322
- package/docs/02-CORE-GUIDES/api-reference/readme.md +279 -279
- package/docs/02-CORE-GUIDES/auto-pagination/auto-pagination-quick-reference.md +351 -351
- package/docs/02-CORE-GUIDES/auto-pagination/auto-pagination-readme.md +277 -277
- package/docs/02-CORE-GUIDES/auto-pagination/examples/auto-pagination-readme.md +178 -178
- package/docs/02-CORE-GUIDES/auto-pagination/examples/common-patterns.ts +351 -351
- package/docs/02-CORE-GUIDES/auto-pagination/examples/paginate-products.ts +384 -384
- package/docs/02-CORE-GUIDES/auto-pagination/examples/paginate-virtual-positions.ts +308 -308
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-01-foundations.md +470 -470
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-02-quick-start.md +713 -713
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-03-configuration.md +754 -754
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-04-advanced-patterns.md +732 -732
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-05-sdk-integration.md +847 -847
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-06-troubleshooting.md +359 -359
- package/docs/02-CORE-GUIDES/auto-pagination/modules/auto-pagination-07-api-reference.md +462 -462
- package/docs/02-CORE-GUIDES/auto-pagination/readme.md +54 -54
- package/docs/02-CORE-GUIDES/data-sources/data-sources-file-operations-error-handling.md +1487 -1487
- package/docs/02-CORE-GUIDES/data-sources/data-sources-quick-reference.md +836 -836
- package/docs/02-CORE-GUIDES/data-sources/data-sources-readme.md +276 -276
- package/docs/02-CORE-GUIDES/data-sources/data-sources-sftp-credential-access-security.md +553 -553
- package/docs/02-CORE-GUIDES/data-sources/examples/common-patterns.ts +409 -409
- package/docs/02-CORE-GUIDES/data-sources/examples/data-sources-readme.md +178 -178
- package/docs/02-CORE-GUIDES/data-sources/examples/s3-operations.ts +308 -308
- package/docs/02-CORE-GUIDES/data-sources/examples/sftp-operations.ts +371 -371
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-01-foundations.md +735 -735
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-02-s3-operations.md +1302 -1302
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-03-sftp-operations.md +1379 -1379
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-04-file-patterns.md +941 -941
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-05-advanced-topics.md +813 -813
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-06-integration-patterns.md +486 -486
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-07-troubleshooting.md +387 -387
- package/docs/02-CORE-GUIDES/data-sources/modules/data-sources-08-api-reference.md +417 -417
- package/docs/02-CORE-GUIDES/data-sources/readme.md +77 -77
- package/docs/02-CORE-GUIDES/error-handling-guide.md +936 -936
- package/docs/02-CORE-GUIDES/extraction/examples/02-core-guides-extraction-readme.md +116 -116
- package/docs/02-CORE-GUIDES/extraction/examples/common-patterns.ts +428 -428
- package/docs/02-CORE-GUIDES/extraction/examples/extract-inventory-basic.ts +187 -187
- package/docs/02-CORE-GUIDES/extraction/extraction-quick-reference.md +596 -596
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-01-foundations.md +514 -514
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-02-basic-extraction.md +823 -823
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-03-parquet-processing.md +507 -507
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-04-data-enrichment.md +546 -546
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-05-transformation.md +494 -494
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-06-export-formats.md +458 -458
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-06-performance.md +138 -138
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-07-api-reference.md +148 -148
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-07-optimization.md +692 -692
- package/docs/02-CORE-GUIDES/extraction/modules/02-core-guides-extraction-08-extraction-orchestrator.md +1008 -1008
- package/docs/02-CORE-GUIDES/extraction/readme.md +151 -151
- package/docs/02-CORE-GUIDES/ingestion/examples/_simple-kv-store.ts +40 -40
- package/docs/02-CORE-GUIDES/ingestion/examples/error-recovery.ts +728 -728
- package/docs/02-CORE-GUIDES/ingestion/examples/event-driven.ts +501 -501
- package/docs/02-CORE-GUIDES/ingestion/examples/local-file-ingestion.ts +88 -88
- package/docs/02-CORE-GUIDES/ingestion/examples/parquet-ingestion.ts +117 -117
- package/docs/02-CORE-GUIDES/ingestion/examples/performance-optimized.ts +647 -647
- package/docs/02-CORE-GUIDES/ingestion/examples/s3-csv-ingestion.ts +169 -169
- package/docs/02-CORE-GUIDES/ingestion/examples/sftp-csv-ingestion.ts +134 -134
- package/docs/02-CORE-GUIDES/ingestion/ingestion-quick-reference.md +546 -546
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-01-introduction.md +626 -626
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-02-quick-start.md +658 -658
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-03-data-sources.md +1052 -1052
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-04-field-mapping.md +763 -763
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-05-advanced-parsers.md +676 -676
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-06-batch-api.md +1295 -1295
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-07-api-reference.md +138 -138
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-07-state-management.md +1037 -1037
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-08-performance-optimization.md +1349 -1349
- package/docs/02-CORE-GUIDES/ingestion/modules/02-core-guides-ingestion-09-best-practices.md +1893 -1893
- package/docs/02-CORE-GUIDES/ingestion/readme.md +160 -160
- package/docs/02-CORE-GUIDES/logging-guide.md +585 -585
- package/docs/02-CORE-GUIDES/mapping/error-handling-patterns.md +401 -401
- package/docs/02-CORE-GUIDES/mapping/examples/02-core-guides-mapping-readme.md +128 -128
- package/docs/02-CORE-GUIDES/mapping/examples/common-patterns.ts +273 -273
- package/docs/02-CORE-GUIDES/mapping/examples/csv-location-ingestion.json +36 -36
- package/docs/02-CORE-GUIDES/mapping/examples/csv-mapping.ts +242 -242
- package/docs/02-CORE-GUIDES/mapping/examples/graphql-to-parquet-extraction.json +36 -36
- package/docs/02-CORE-GUIDES/mapping/examples/json-mapping.ts +213 -213
- package/docs/02-CORE-GUIDES/mapping/examples/json-product-to-mutation.json +48 -48
- package/docs/02-CORE-GUIDES/mapping/examples/xml-mapping.ts +291 -291
- package/docs/02-CORE-GUIDES/mapping/examples/xml-order-to-mutation.json +45 -45
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/graphql-mutation-mapping-quick-reference.md +463 -463
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/graphql-mutation-mapping-readme.md +227 -227
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-01-introduction.md +222 -222
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-02-quick-start.md +351 -351
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-03-schema-validation.md +569 -569
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-04-mapping-patterns.md +471 -471
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-05-configuration-reference.md +611 -611
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-06-advanced-xpath.md +148 -148
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-06-path-syntax.md +464 -464
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-07-api-reference.md +94 -94
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-07-array-handling.md +307 -307
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-08-custom-resolvers.md +544 -544
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-09-advanced-patterns.md +427 -427
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-10-hooks-and-variables.md +336 -336
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-11-error-handling.md +488 -488
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-12-arguments-vs-nodes.md +383 -383
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/modules/graphql-mutation-mapping-13-best-practices.md +477 -477
- package/docs/02-CORE-GUIDES/mapping/graphql-mutation-mapping/readme.md +62 -62
- package/docs/02-CORE-GUIDES/mapping/mapping-format-decision-tree.md +480 -480
- package/docs/02-CORE-GUIDES/mapping/mapping-graphql-alias-batching-guide.md +820 -820
- package/docs/02-CORE-GUIDES/mapping/mapping-javascript-objects.md +2369 -2369
- package/docs/02-CORE-GUIDES/mapping/mapping-mapper-comparison-guide.md +682 -682
- package/docs/02-CORE-GUIDES/mapping/modules/02-core-guides-mapping-07-api-reference.md +1327 -1327
- package/docs/02-CORE-GUIDES/mapping/modules/02-core-guides-mapping-08-error-handling.md +1142 -1142
- package/docs/02-CORE-GUIDES/mapping/modules/mapping-04-use-cases.md +891 -891
- package/docs/02-CORE-GUIDES/mapping/modules/mapping-06-helpers-resolvers.md +1126 -1126
- package/docs/02-CORE-GUIDES/mapping/modules/mapping-06-sdk-resolvers.md +199 -199
- package/docs/02-CORE-GUIDES/mapping/modules/mapping-07-api-reference.md +1319 -1319
- package/docs/02-CORE-GUIDES/mapping/readme.md +178 -178
- package/docs/02-CORE-GUIDES/mapping/resolver-registration.md +410 -410
- package/docs/02-CORE-GUIDES/mapping/resolvers/examples/common-patterns.ts +226 -226
- package/docs/02-CORE-GUIDES/mapping/resolvers/examples/custom-resolvers.ts +227 -227
- package/docs/02-CORE-GUIDES/mapping/resolvers/examples/sdk-resolvers-usage.ts +203 -203
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-readme.md +274 -274
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-api-reference.md +679 -679
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-cookbook.md +826 -826
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-guide.md +1330 -1330
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-helpers-reference.md +1437 -1437
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-parameters-reference.md +553 -553
- package/docs/02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-troubleshooting.md +854 -854
- package/docs/02-CORE-GUIDES/mapping/resolvers/readme.md +75 -75
- package/docs/02-CORE-GUIDES/parsers/examples/02-core-guides-parsers-readme.md +161 -161
- package/docs/02-CORE-GUIDES/parsers/examples/csv-parser-examples.ts +110 -110
- package/docs/02-CORE-GUIDES/parsers/examples/json-parser-examples.ts +33 -33
- package/docs/02-CORE-GUIDES/parsers/examples/parquet-parser-examples.ts +47 -47
- package/docs/02-CORE-GUIDES/parsers/examples/xml-parser-examples.ts +38 -38
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-01-foundations.md +355 -355
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-02-csv-parser.md +772 -772
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-03-json-parser.md +789 -789
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-04-xml-parser.md +857 -857
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-05-parquet-parser.md +603 -603
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-06-integration-patterns.md +702 -702
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-06-streaming.md +121 -121
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-07-api-reference.md +89 -89
- package/docs/02-CORE-GUIDES/parsers/modules/02-core-guides-parsers-07-troubleshooting.md +727 -727
- package/docs/02-CORE-GUIDES/parsers/parsers-quick-reference.md +482 -482
- package/docs/02-CORE-GUIDES/parsers/parsers-readme.md +258 -258
- package/docs/02-CORE-GUIDES/parsers/readme.md +65 -65
- package/docs/02-CORE-GUIDES/readme.md +194 -194
- package/docs/02-CORE-GUIDES/webhook-validation/examples/basic-validation.ts +108 -108
- package/docs/02-CORE-GUIDES/webhook-validation/examples/common-patterns.ts +316 -316
- package/docs/02-CORE-GUIDES/webhook-validation/examples/webhook-validation-readme.md +61 -61
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-01-foundations.md +440 -440
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-02-quick-start.md +525 -525
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-03-versori-integration.md +741 -741
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-04-platform-integration.md +629 -629
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-05-configuration.md +535 -535
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-06-error-handling.md +611 -611
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-06-troubleshooting.md +124 -124
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-07-api-reference.md +511 -511
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-08-rubix-webhooks.md +590 -590
- package/docs/02-CORE-GUIDES/webhook-validation/modules/webhook-validation-09-rubix-event-vs-http-call.md +432 -432
- package/docs/02-CORE-GUIDES/webhook-validation/readme.md +239 -239
- package/docs/02-CORE-GUIDES/webhook-validation/webhook-validation-quick-reference.md +392 -392
- package/docs/03-PATTERN-GUIDES/connector-scenarios/connector-scenarios-quick-reference.md +498 -498
- package/docs/03-PATTERN-GUIDES/connector-scenarios/connector-scenarios-readme.md +313 -313
- package/docs/03-PATTERN-GUIDES/connector-scenarios/examples/common-patterns.ts +612 -612
- package/docs/03-PATTERN-GUIDES/connector-scenarios/examples/connector-scenarios-readme.md +253 -253
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-01-foundations.md +452 -452
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-02-simple-scenarios.md +681 -681
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-03-intermediate-scenarios.md +637 -637
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-04-advanced-scenarios.md +650 -650
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-05-bidirectional-sync.md +233 -233
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-06-production-patterns.md +442 -442
- package/docs/03-PATTERN-GUIDES/connector-scenarios/modules/connector-scenarios-07-reference.md +445 -445
- package/docs/03-PATTERN-GUIDES/connector-scenarios/readme.md +31 -31
- package/docs/03-PATTERN-GUIDES/enterprise-integration-patterns.md +1528 -1528
- package/docs/03-PATTERN-GUIDES/error-handling/comprehensive-error-handling-guide.md +1437 -1437
- package/docs/03-PATTERN-GUIDES/error-handling/error-handling-quick-reference.md +390 -390
- package/docs/03-PATTERN-GUIDES/error-handling/examples/common-patterns.ts +438 -438
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-01-foundations.md +362 -362
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-02-error-types.md +850 -850
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-03-utf8-handling.md +456 -456
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-04-error-scenarios.md +658 -658
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-05-calling-patterns.md +671 -671
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-06-retry-strategies.md +1034 -1034
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-07-monitoring.md +653 -653
- package/docs/03-PATTERN-GUIDES/error-handling/modules/error-handling-08-api-reference.md +847 -847
- package/docs/03-PATTERN-GUIDES/error-handling/readme.md +36 -36
- package/docs/03-PATTERN-GUIDES/examples/__tests__/readme.md +40 -40
- package/docs/03-PATTERN-GUIDES/examples/__tests__/resolver-examples.test.js +282 -282
- package/docs/03-PATTERN-GUIDES/examples/test-data/03-pattern-guides-readme.md +110 -110
- package/docs/03-PATTERN-GUIDES/examples/test-data/canonical-inventory.json +123 -123
- package/docs/03-PATTERN-GUIDES/examples/test-data/canonical-order.json +171 -171
- package/docs/03-PATTERN-GUIDES/examples/test-data/readme.md +28 -28
- package/docs/03-PATTERN-GUIDES/extraction/extraction-readme.md +15 -15
- package/docs/03-PATTERN-GUIDES/extraction/readme.md +25 -25
- package/docs/03-PATTERN-GUIDES/file-operations/examples/common-patterns.ts +407 -407
- package/docs/03-PATTERN-GUIDES/file-operations/examples/file-operations-readme.md +142 -142
- package/docs/03-PATTERN-GUIDES/file-operations/file-operations-quick-reference.md +462 -462
- package/docs/03-PATTERN-GUIDES/file-operations/file-operations-readme.md +379 -379
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-01-foundations.md +430 -430
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-02-quick-start.md +484 -484
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-03-s3-operations.md +507 -507
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-04-sftp-operations.md +963 -963
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-05-streaming-performance.md +503 -503
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-06-archive-patterns.md +386 -386
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-06-error-handling.md +117 -117
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-07-api-reference.md +78 -78
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-07-testing-troubleshooting.md +567 -567
- package/docs/03-PATTERN-GUIDES/file-operations/modules/file-operations-08-api-reference.md +1055 -1055
- package/docs/03-PATTERN-GUIDES/file-operations/readme.md +32 -32
- package/docs/03-PATTERN-GUIDES/ingestion/ingestion-readme.md +15 -15
- package/docs/03-PATTERN-GUIDES/ingestion/readme.md +25 -25
- package/docs/03-PATTERN-GUIDES/integration-patterns/examples/batch-processing.ts +130 -130
- package/docs/03-PATTERN-GUIDES/integration-patterns/examples/common-patterns.ts +360 -360
- package/docs/03-PATTERN-GUIDES/integration-patterns/examples/delta-sync.ts +130 -130
- package/docs/03-PATTERN-GUIDES/integration-patterns/examples/integration-patterns-readme.md +100 -100
- package/docs/03-PATTERN-GUIDES/integration-patterns/examples/real-time-webhook.ts +398 -398
- package/docs/03-PATTERN-GUIDES/integration-patterns/integration-patterns-quick-reference.md +962 -962
- package/docs/03-PATTERN-GUIDES/integration-patterns/integration-patterns-readme.md +134 -134
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-01-real-time-processing.md +991 -991
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-02-batch-processing.md +1547 -1547
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-03-delta-sync.md +1108 -1108
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-04-webhook-patterns.md +1181 -1181
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-05-error-handling.md +1061 -1061
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-06-advanced-integration-services.md +1547 -1547
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-06-performance.md +109 -109
- package/docs/03-PATTERN-GUIDES/integration-patterns/modules/integration-patterns-07-api-reference.md +34 -34
- package/docs/03-PATTERN-GUIDES/integration-patterns/readme.md +30 -30
- package/docs/03-PATTERN-GUIDES/logging-minimal-mode.md +128 -128
- package/docs/03-PATTERN-GUIDES/multiple-connections/examples/common-patterns.ts +380 -380
- package/docs/03-PATTERN-GUIDES/multiple-connections/examples/multiple-connections-readme.md +139 -139
- package/docs/03-PATTERN-GUIDES/multiple-connections/examples/parallel-root-connections.ts +149 -149
- package/docs/03-PATTERN-GUIDES/multiple-connections/examples/real-world-scenarios.ts +405 -405
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-01-foundations.md +378 -378
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-02-quick-start.md +566 -566
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-03-targeting-connections.md +659 -659
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-04-parallel-queries.md +656 -656
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-05-best-practices.md +624 -624
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-06-api-reference.md +824 -824
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-06-versori.md +119 -119
- package/docs/03-PATTERN-GUIDES/multiple-connections/modules/multiple-connections-07-api-reference.md +87 -87
- package/docs/03-PATTERN-GUIDES/multiple-connections/multiple-connections-quick-reference.md +353 -353
- package/docs/03-PATTERN-GUIDES/multiple-connections/multiple-connections-readme.md +270 -270
- package/docs/03-PATTERN-GUIDES/multiple-connections/readme.md +30 -30
- package/docs/03-PATTERN-GUIDES/pagination/pagination-readme.md +14 -14
- package/docs/03-PATTERN-GUIDES/pagination/readme.md +24 -24
- package/docs/03-PATTERN-GUIDES/parquet/examples/common-patterns.ts +180 -180
- package/docs/03-PATTERN-GUIDES/parquet/examples/read-parquet.ts +48 -48
- package/docs/03-PATTERN-GUIDES/parquet/examples/write-parquet.ts +65 -65
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-01-introduction.md +393 -393
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-02-quick-start.md +572 -572
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-03-reading-parquet.md +525 -525
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-04-writing-parquet.md +554 -554
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-05-graphql-extraction.md +405 -405
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-06-performance.md +104 -104
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-06-s3-integration.md +511 -511
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-07-api-reference.md +90 -90
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-07-performance-optimization.md +525 -525
- package/docs/03-PATTERN-GUIDES/parquet/modules/03-pattern-guides-parquet-08-best-practices.md +712 -712
- package/docs/03-PATTERN-GUIDES/parquet/parquet-quick-reference.md +683 -683
- package/docs/03-PATTERN-GUIDES/parquet/parquet-readme.md +248 -248
- package/docs/03-PATTERN-GUIDES/parquet/readme.md +32 -32
- package/docs/03-PATTERN-GUIDES/parsers/parsers-readme.md +12 -12
- package/docs/03-PATTERN-GUIDES/parsers/readme.md +24 -24
- package/docs/03-PATTERN-GUIDES/readme.md +159 -159
- package/docs/03-PATTERN-GUIDES/webhooks/readme.md +24 -24
- package/docs/03-PATTERN-GUIDES/webhooks/webhooks-readme.md +8 -8
- package/docs/04-REFERENCE/architecture/architecture-01-overview.md +427 -427
- package/docs/04-REFERENCE/architecture/architecture-02-client-architecture.md +424 -424
- package/docs/04-REFERENCE/architecture/architecture-03-data-flow.md +690 -690
- package/docs/04-REFERENCE/architecture/architecture-04-service-layer.md +834 -834
- package/docs/04-REFERENCE/architecture/architecture-05-integration-architecture.md +655 -655
- package/docs/04-REFERENCE/architecture/architecture-06-state-management.md +653 -653
- package/docs/04-REFERENCE/architecture/architecture-adding-new-data-sources.md +686 -686
- package/docs/04-REFERENCE/architecture/readme.md +279 -279
- package/docs/04-REFERENCE/platforms/deno/readme.md +117 -117
- package/docs/04-REFERENCE/platforms/nodejs/readme.md +146 -146
- package/docs/04-REFERENCE/platforms/readme.md +135 -135
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-01-introduction.md +398 -398
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-02-quick-start.md +560 -560
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-03-authentication.md +757 -757
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-04-workflows.md +2476 -2476
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-05-connections.md +1167 -1167
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-06-kv-storage.md +990 -990
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-06-state-management.md +121 -121
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-07-api-reference.md +68 -68
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-07-deployment.md +731 -731
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-08-best-practices.md +1111 -1111
- package/docs/04-REFERENCE/platforms/versori/modules/platforms-versori-09-signature-reference.md +766 -766
- package/docs/04-REFERENCE/platforms/versori/platforms-versori-readme.md +299 -299
- package/docs/04-REFERENCE/platforms/versori/platforms-versori-s3-sftp-configuration-guide.md +1425 -1425
- package/docs/04-REFERENCE/platforms/versori/platforms-versori-webhook-api-key-security.md +816 -816
- package/docs/04-REFERENCE/platforms/versori/platforms-versori-webhook-connection-security.md +681 -681
- package/docs/04-REFERENCE/platforms/versori/platforms-versori-workflow-task-types.md +708 -708
- package/docs/04-REFERENCE/platforms/versori/readme.md +108 -108
- package/docs/04-REFERENCE/readme.md +148 -148
- package/docs/04-REFERENCE/resolver-signature/examples/advanced-resolvers.ts +482 -482
- package/docs/04-REFERENCE/resolver-signature/examples/async-resolvers.ts +496 -496
- package/docs/04-REFERENCE/resolver-signature/examples/basic-resolvers.ts +343 -343
- package/docs/04-REFERENCE/resolver-signature/examples/resolver-signature-readme.md +188 -188
- package/docs/04-REFERENCE/resolver-signature/examples/testing-resolvers.ts +463 -463
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-01-foundations.md +286 -286
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-02-parameter-reference.md +643 -643
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-03-basic-examples.md +521 -521
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-04-advanced-patterns.md +739 -739
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-05-sdk-resolvers.md +531 -531
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-06-migration-guide.md +650 -650
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-06-testing.md +125 -125
- package/docs/04-REFERENCE/resolver-signature/modules/resolver-signature-07-api-reference.md +794 -794
- package/docs/04-REFERENCE/resolver-signature/readme.md +64 -64
- package/docs/04-REFERENCE/resolver-signature/resolver-signature-quick-reference.md +270 -270
- package/docs/04-REFERENCE/resolver-signature/resolver-signature-readme.md +351 -351
- package/docs/04-REFERENCE/schema/fluent-commerce-schema.json +764 -764
- package/docs/04-REFERENCE/schema/readme.md +141 -141
- package/docs/04-REFERENCE/testing/examples/04-reference-testing-readme.md +158 -158
- package/docs/04-REFERENCE/testing/examples/fluent-testing.ts +62 -62
- package/docs/04-REFERENCE/testing/examples/health-check.ts +155 -155
- package/docs/04-REFERENCE/testing/examples/integration-test.ts +119 -119
- package/docs/04-REFERENCE/testing/examples/performance-test.ts +183 -183
- package/docs/04-REFERENCE/testing/examples/s3-testing.ts +127 -127
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-01-foundations.md +267 -267
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-02-s3-testing.md +599 -599
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-03-fluent-testing.md +589 -589
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-04-integration-testing.md +699 -699
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-05-debugging.md +478 -478
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-06-cicd-integration.md +463 -463
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-06-preflight-validation.md +131 -131
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-07-best-practices.md +499 -499
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-07-coverage-ci.md +165 -165
- package/docs/04-REFERENCE/testing/modules/04-reference-testing-08-api-reference.md +634 -634
- package/docs/04-REFERENCE/testing/readme.md +86 -86
- package/docs/04-REFERENCE/testing/testing-quick-reference.md +667 -667
- package/docs/04-REFERENCE/testing/testing-readme.md +286 -286
- package/docs/04-REFERENCE/troubleshooting/readme.md +144 -144
- package/docs/04-REFERENCE/troubleshooting/troubleshooting-deno-sftp-compatibility.md +392 -392
- package/docs/template-loading-matrix.md +242 -242
- package/package.json +5 -3
- package/docs/02-CORE-GUIDES/api-reference/cli-profile-integration.md +0 -377
@@ -1,1462 +1,1462 @@
# Standalone: Multi-Source Inventory Aggregation

**FC Connect SDK Use Case Guide**

> **SDK**: [@fluentcommerce/fc-connect-sdk](https://www.npmjs.com/package/@fluentcommerce/fc-connect-sdk)
> **Version**: Use latest - `npm install @fluentcommerce/fc-connect-sdk@latest`

**Context**: Node.js script that aggregates inventory from SFTP + S3, reconciles with Fluent Commerce, and updates differences

**Complexity**: High

**Runtime**: Node.js ≥18

**Estimated Lines**: ~800 lines

## What You'll Build

- Multi-source data collection (SFTP CSV + S3 JSON)
- Data aggregation and deduplication using a Map-based algorithm
- Reconciliation with Fluent Commerce current state
- Difference calculation (delta detection)
- GraphQL mutation execution for inventory updates
- Comprehensive logging and metrics tracking
- Reconciliation report generation
- Error handling and recovery
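The Map-based aggregation and deduplication listed above works by keying on SKU, summing quantities per location, and recording which source contributed each record. A minimal standalone sketch (the record shape and source names here are illustrative; the full script below applies the same algorithm to real SFTP/S3 data):

```javascript
// Merge records into Map<sku, { locations: Map<location, qty>, sources: Set }>.
// Records sharing a SKU and location have their quantities added together.
function aggregate(records, sourceName, inventoryMap = new Map()) {
  for (const { sku, location, quantity } of records) {
    if (!sku || !location) continue; // skip invalid records
    if (!inventoryMap.has(sku)) {
      inventoryMap.set(sku, { sku, locations: new Map(), sources: new Set() });
    }
    const item = inventoryMap.get(sku);
    item.locations.set(location, (item.locations.get(location) || 0) + (quantity || 0));
    item.sources.add(sourceName);
  }
  return inventoryMap;
}

const map = aggregate([{ sku: 'PROD-001', location: 'WH-01', quantity: 150 }], 'WAREHOUSE_SFTP');
aggregate([{ sku: 'PROD-001', location: 'WH-01', quantity: 25 }], 'STORE_S3', map);
console.log(map.get('PROD-001').locations.get('WH-01')); // 175
```

Because Map lookups are O(1), the whole pass stays O(n) in the total number of records, which is the design decision called out in the script's header comment.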
## SDK Methods Used

- `createClient(...)` - OAuth2 client creation
- `SftpDataSource(...)` - SFTP file operations
- `S3DataSource(...)` - S3 file operations
- `CSVParserService` - Parse SFTP CSV files
- `client.graphql({ query, variables })` - Query current Fluent inventory
- `client.graphqlMutation(mutation, variables)` - Execute inventory updates
- `UniversalMapper(...)` - Transform aggregated data
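Difference calculation (the "delta detection" step of this use case) reduces to comparing two key→quantity maps and keeping only the keys whose values differ. A minimal SDK-independent sketch; the `"SKU:LOCATION"` key shape matches what `queryCurrentInventory` in the full script returns:

```javascript
// Keep only positions whose aggregated quantity differs from Fluent's current state.
function computeDeltas(aggregated, current) {
  const deltas = [];
  for (const [key, qty] of aggregated) {
    if (current.get(key) !== qty) {
      const [sku, location] = key.split(':');
      deltas.push({ sku, location, qty });
    }
  }
  return deltas;
}

const aggregated = new Map([['PROD-001:WH-01', 175], ['PROD-002:WH-01', 75]]);
const current = new Map([['PROD-001:WH-01', 150], ['PROD-002:WH-01', 75]]);
console.log(computeDeltas(aggregated, current)); // only PROD-001:WH-01 changed
```

Note that this one-way scan only catches added or changed positions; SKUs present in Fluent but absent from the aggregate would need a second pass over `current`.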
## Complete Working Code

### 1. Environment Configuration

Create `.env` file:

```bash
# Fluent Commerce OAuth2
FLUENT_BASE_URL=https://api.fluentcommerce.com
FLUENT_CLIENT_ID=your-oauth2-client-id
FLUENT_CLIENT_SECRET=your-oauth2-client-secret
FLUENT_USERNAME=your-username
FLUENT_PASSWORD=your-password
FLUENT_RETAILER_ID=your-retailer-id

# Warehouse SFTP Source
WAREHOUSE_SFTP_HOST=warehouse-sftp.example.com
WAREHOUSE_SFTP_PORT=22
WAREHOUSE_SFTP_USER=warehouse-user
WAREHOUSE_SFTP_PASSWORD=warehouse-password
WAREHOUSE_SFTP_PATH=/inventory/updates

# Store S3 Source
STORE_S3_BUCKET=store-inventory
STORE_S3_REGION=us-east-1
STORE_AWS_ACCESS_KEY_ID=your-aws-access-key
STORE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
STORE_S3_PREFIX=stores/inventory/

# Reporting S3 Destination
REPORT_S3_BUCKET=inventory-reports
REPORT_S3_REGION=us-east-1
REPORT_AWS_ACCESS_KEY_ID=your-aws-access-key
REPORT_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
```

### 2. Package Configuration

Create `package.json`:

```json
{
  "name": "multi-source-inventory-aggregation",
  "version": "1.0.0",
  "type": "module",
  "description": "Multi-source inventory aggregation with Fluent Commerce",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "node --watch src/index.js",
    "aggregation:once": "node src/index.js",
    "aggregation:schedule": "node src/scheduler.js"
  },
  "dependencies": {
    "@fluentcommerce/fc-connect-sdk": "^0.1.39",
    "dotenv": "^16.0.0",
    "node-cron": "^3.0.0"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}
```

### 3. Main Aggregation Script

Create `src/index.js`:

```javascript
// FC Connect SDK
// Install: npm install @fluentcommerce/fc-connect-sdk@latest
// Docs: https://www.npmjs.com/package/@fluentcommerce/fc-connect-sdk
// GitHub: https://github.com/fluentcommerce/fc-connect-sdk

import {
  createClient,
  SftpDataSource,
  S3DataSource,
  CSVParserService,
  UniversalMapper,
  createConsoleLogger,
  toStructuredLogger,
} from '@fluentcommerce/fc-connect-sdk';
import dotenv from 'dotenv';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';

// Load environment variables
dotenv.config();

// Get current directory (ESM equivalent of __dirname)
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

/**
 * MULTI-SOURCE INVENTORY AGGREGATION SYSTEM
 *
 * This script implements a sophisticated inventory aggregation workflow:
 *
 * 1. Data Collection: Gather inventory from multiple sources (SFTP + S3)
 * 2. Aggregation: Combine data using Map-based deduplication
 * 3. Reconciliation: Compare with Fluent Commerce current state
 * 4. Update: Send only differences to Fluent via GraphQL mutations
 * 5. Reporting: Generate comprehensive reconciliation report
 *
 * DESIGN DECISIONS:
 * - Map-based aggregation: O(n) complexity for deduplication
 * - Delta-only updates: Minimize API calls by updating only differences
 * - Source tracking: Maintain provenance for debugging
 * - Location granularity: Track inventory per SKU per location
 * - Error recovery: Continue processing after non-critical failures
 */
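// --- Illustrative helper (not part of the original listing) ---
// Step 3 (Reconciliation) compares the aggregate against Fluent's current
// state. Both sides can be keyed "SKU:LOCATION" (the shape that
// queryCurrentInventory below returns); a helper like this flattens the
// nested Map<sku, AggregatedItem> into that form for comparison.
function flattenAggregate(inventoryMap) {
  const flat = new Map();
  for (const item of inventoryMap.values()) {
    for (const [location, qty] of item.locations) {
      flat.set(`${item.sku}:${location}`, qty);
    }
  }
  return flat;
}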

/**
 * Aggregated Inventory Item Structure
 *
 * This represents a single SKU's inventory across all locations.
 * The Map structure provides O(1) lookup for deduplication.
 *
 * @typedef {Object} AggregatedItem
 * @property {string} sku - Product SKU (unique identifier)
 * @property {Map<string, number>} locations - Location ID -> quantity mapping
 * @property {Set<string>} sources - Data sources that contributed to this item
 */

/**
 * Initialize Logger
 */
const logger = toStructuredLogger(createConsoleLogger(), {
  service: 'multi-source-aggregation',
  version: '1.0.0',
});

/**
 * Configuration Object
 *
 * Centralizes all configuration to make the script easily testable
 * and configurable without code changes.
 */
const config = {
  // Fluent Commerce
  fluent: {
    baseUrl: process.env.FLUENT_BASE_URL,
    clientId: process.env.FLUENT_CLIENT_ID,
    clientSecret: process.env.FLUENT_CLIENT_SECRET,
    username: process.env.FLUENT_USERNAME,
    password: process.env.FLUENT_PASSWORD,
    retailerId: process.env.FLUENT_RETAILER_ID,
  },

  // Warehouse SFTP Source
  warehouseSftp: {
    host: process.env.WAREHOUSE_SFTP_HOST,
    port: parseInt(process.env.WAREHOUSE_SFTP_PORT || '22', 10),
    username: process.env.WAREHOUSE_SFTP_USER,
    password: process.env.WAREHOUSE_SFTP_PASSWORD,
    remotePath: process.env.WAREHOUSE_SFTP_PATH || '/inventory/updates',
    filePattern: '*.csv',
  },

  // Store S3 Source
  storeS3: {
    bucket: process.env.STORE_S3_BUCKET,
    region: process.env.STORE_S3_REGION || 'us-east-1',
    accessKeyId: process.env.STORE_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.STORE_AWS_SECRET_ACCESS_KEY,
    prefix: process.env.STORE_S3_PREFIX || 'stores/inventory/',
  },

  // Reporting S3 Destination
  reportS3: {
    bucket: process.env.REPORT_S3_BUCKET,
    region: process.env.REPORT_S3_REGION || 'us-east-1',
    accessKeyId: process.env.REPORT_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.REPORT_AWS_SECRET_ACCESS_KEY,
  },

  // Processing options
  processing: {
    batchSize: 100, // Number of records to process in one GraphQL mutation
    maxRetries: 3,
    retryDelay: 1000, // milliseconds
  },
};
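// --- Hedged sketch (not part of the original listing) ---
// One way the retry settings above (config.processing.maxRetries and
// retryDelay) could be applied: a generic async wrapper with linear
// backoff. The name `withRetries` is illustrative only.
async function withRetries(operation, { maxRetries, retryDelay }) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < maxRetries) {
        // Wait longer after each failed attempt (linear backoff)
        await new Promise(resolve => setTimeout(resolve, retryDelay * attempt));
      }
    }
  }
  throw lastError;
}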
221
|
-
|
|
222
|
-
/**
|
|
223
|
-
* Validate Configuration
|
|
224
|
-
*
|
|
225
|
-
* Ensures all required environment variables are set before proceeding.
|
|
226
|
-
* Fails fast with clear error messages to prevent runtime issues.
|
|
227
|
-
*/
|
|
228
|
-
function validateConfig() {
|
|
229
|
-
const required = [
|
|
230
|
-
'FLUENT_BASE_URL',
|
|
231
|
-
'FLUENT_CLIENT_ID',
|
|
232
|
-
'FLUENT_CLIENT_SECRET',
|
|
233
|
-
'FLUENT_USERNAME',
|
|
234
|
-
'FLUENT_PASSWORD',
|
|
235
|
-
'FLUENT_RETAILER_ID',
|
|
236
|
-
'WAREHOUSE_SFTP_HOST',
|
|
237
|
-
'WAREHOUSE_SFTP_USER',
|
|
238
|
-
'WAREHOUSE_SFTP_PASSWORD',
|
|
239
|
-
'STORE_S3_BUCKET',
|
|
240
|
-
'STORE_AWS_ACCESS_KEY_ID',
|
|
241
|
-
'STORE_AWS_SECRET_ACCESS_KEY',
|
|
242
|
-
];
|
|
243
|
-
|
|
244
|
-
const missing = required.filter(key => !process.env[key]);
|
|
245
|
-
|
|
246
|
-
if (missing.length > 0) {
|
|
247
|
-
throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
|
|
248
|
-
}
|
|
249
|
-
|
|
250
|
-
logger.info('Configuration validated successfully');
|
|
251
|
-
}
|
|
252
|
-
|
|
253
|
-
/**
|
|
254
|
-
* Initialize Data Sources
|
|
255
|
-
*
|
|
256
|
-
* Creates SFTP and S3 data source instances for reading inventory files.
|
|
257
|
-
* Each source is configured with connection parameters and file patterns.
|
|
258
|
-
*
|
|
259
|
-
* @returns {Object} Data source instances
|
|
260
|
-
*/
|
|
261
|
-
function initializeDataSources() {
|
|
262
|
-
logger.info('Initializing data sources');
|
|
263
|
-
|
|
264
|
-
// SFTP Data Source for warehouse inventory (CSV format)
|
|
265
|
-
const sftpSource = new SftpDataSource(
|
|
266
|
-
{
|
|
267
|
-
type: 'SFTP_CSV',
|
|
268
|
-
connectionId: 'warehouse-sftp',
|
|
269
|
-
name: 'Warehouse SFTP',
|
|
270
|
-
settings: {
|
|
271
|
-
...config.warehouseSftp,
|
|
272
|
-
csvDelimiter: ',',
|
|
273
|
-
csvHeaders: ['sku', 'location', 'quantity', 'status'],
|
|
274
|
-
csvSkipEmptyLines: true,
|
|
275
|
-
csvTrimValues: true,
|
|
276
|
-
},
|
|
277
|
-
},
|
|
278
|
-
logger
|
|
279
|
-
);
|
|
280
|
-
|
|
281
|
-
// S3 Data Source for store inventory (JSON format)
|
|
282
|
-
const s3Source = new S3DataSource(
|
|
283
|
-
{
|
|
284
|
-
type: 'S3_CSV',
|
|
285
|
-
connectionId: 'store-s3',
|
|
286
|
-
name: 'Store Inventory S3',
|
|
287
|
-
s3Config: config.storeS3,
|
|
288
|
-
},
|
|
289
|
-
logger
|
|
290
|
-
);
|
|
291
|
-
|
|
292
|
-
// S3 Data Source for reporting (write destination)
|
|
293
|
-
const reportS3Source = new S3DataSource(
|
|
294
|
-
{
|
|
295
|
-
type: 'S3_CSV',
|
|
296
|
-
connectionId: 'report-s3',
|
|
297
|
-
name: 'Report S3',
|
|
298
|
-
s3Config: config.reportS3,
|
|
299
|
-
},
|
|
300
|
-
logger
|
|
301
|
-
);
|
|
302
|
-
|
|
303
|
-
logger.info('Data sources initialized successfully');
|
|
304
|
-
|
|
305
|
-
return { sftpSource, s3Source, reportS3Source };
|
|
306
|
-
}
|
|
307
|
-
|
|
308
|
-
/**
|
|
309
|
-
* Collect Inventory from SFTP Source
|
|
310
|
-
*
|
|
311
|
-
* WAREHOUSE INVENTORY COLLECTION
|
|
312
|
-
*
|
|
313
|
-
* Algorithm:
|
|
314
|
-
* 1. List CSV files from SFTP remote path
|
|
315
|
-
* 2. Download each CSV file
|
|
316
|
-
* 3. Parse CSV using CSVParserService
|
|
317
|
-
* 4. Aggregate data into Map structure (SKU -> locations -> quantity)
|
|
318
|
-
* 5. Track source provenance (WAREHOUSE_SFTP)
|
|
319
|
-
*
|
|
320
|
-
* CSV Format Expected:
|
|
321
|
-
* sku,location,quantity,status
|
|
322
|
-
* PROD-001,WH-01,150,AVAILABLE
|
|
323
|
-
* PROD-002,WH-01,75,AVAILABLE
|
|
324
|
-
*
|
|
325
|
-
* @param {SftpDataSource} sftpSource - SFTP data source instance
|
|
326
|
-
* @param {Map<string, AggregatedItem>} inventoryMap - Aggregation map
|
|
327
|
-
* @returns {Promise<number>} Number of files processed
|
|
328
|
-
*/
|
|
329
|
-
async function collectSftpInventory(sftpSource, inventoryMap) {
|
|
330
|
-
logger.info('Collecting inventory from SFTP (warehouse)');
|
|
331
|
-
|
|
332
|
-
try {
|
|
333
|
-
// List CSV files from SFTP server
|
|
334
|
-
const files = await sftpSource.listFiles({
|
|
335
|
-
filePattern: config.warehouseSftp.filePattern,
|
|
336
|
-
});
|
|
337
|
-
|
|
338
|
-
logger.info(`Found ${files.length} CSV files in SFTP`);
|
|
339
|
-
|
|
340
|
-
if (files.length === 0) {
|
|
341
|
-
logger.warn('No files found in SFTP source');
|
|
342
|
-
return 0;
|
|
343
|
-
}
|
|
344
|
-
|
|
345
|
-
// Process each CSV file
|
|
346
|
-
for (const file of files) {
|
|
347
|
-
logger.info(`Processing SFTP file: ${file.name}`);
|
|
348
|
-
|
|
349
|
-
try {
|
|
350
|
-
// Download file content
|
|
351
|
-
const content = await sftpSource.downloadFile(file.name);
|
|
352
|
-
|
|
353
|
-
// Parse CSV content
|
|
354
|
-
const csvParser = new CSVParserService();
|
|
355
|
-
const records = await csvParser.parse(content);
|
|
356
|
-
|
|
357
|
-
logger.info(`Parsed ${records.length} records from ${file.name}`);
|
|
358
|
-
|
|
359
|
-
// Aggregate records into inventory map
|
|
360
|
-
for (const record of records) {
|
|
361
|
-
const sku = record.sku;
|
|
362
|
-
const location = record.location;
|
|
363
|
-
const quantity = parseInt(record.quantity) || 0;
|
|
364
|
-
|
|
365
|
-
// Skip invalid records
|
|
366
|
-
if (!sku || !location) {
|
|
367
|
-
logger.warn('Skipping record with missing SKU or location', { record });
|
|
368
|
-
continue;
|
|
369
|
-
}
|
|
370
|
-
|
|
371
|
-
// Get or create aggregated item
|
|
372
|
-
if (!inventoryMap.has(sku)) {
|
|
373
|
-
inventoryMap.set(sku, {
|
|
374
|
-
sku: sku,
|
|
375
|
-
locations: new Map(),
|
|
376
|
-
sources: new Set(),
|
|
377
|
-
});
|
|
378
|
-
}
|
|
379
|
-
|
|
380
|
-
const item = inventoryMap.get(sku);
|
|
381
|
-
|
|
382
|
-
// Aggregate quantity by location
|
|
383
|
-
// If SKU+location exists from another source, ADD quantities
|
|
384
|
-
const currentQty = item.locations.get(location) || 0;
|
|
385
|
-
item.locations.set(location, currentQty + quantity);
|
|
386
|
-
|
|
387
|
-
// Track source
|
|
388
|
-
item.sources.add('WAREHOUSE_SFTP');
|
|
389
|
-
}
|
|
390
|
-
} catch (fileError) {
|
|
391
|
-
logger.error(`Failed to process SFTP file: ${file.name}`, fileError);
|
|
392
|
-
// Continue processing other files
|
|
393
|
-
}
|
|
394
|
-
}
|
|
395
|
-
|
|
396
|
-
logger.info(`SFTP collection complete: ${inventoryMap.size} unique SKUs aggregated`);
|
|
397
|
-
return files.length;
|
|
398
|
-
} catch (error) {
|
|
399
|
-
logger.error('Failed to collect inventory from SFTP', error);
|
|
400
|
-
throw error;
|
|
401
|
-
}
|
|
402
|
-
}
|
|
403
|
-
|
|
404
|
-
/**
|
|
405
|
-
* Collect Inventory from S3 Source
|
|
406
|
-
*
|
|
407
|
-
* STORE INVENTORY COLLECTION
|
|
408
|
-
*
|
|
409
|
-
* Algorithm:
|
|
410
|
-
* 1. List JSON files from S3 bucket/prefix
|
|
411
|
-
* 2. Download each JSON file
|
|
412
|
-
* 3. Parse JSON (expected structure: { inventory: [...] })
|
|
413
|
-
* 4. Aggregate data into Map structure
|
|
414
|
-
* 5. Track source provenance (STORE_S3)
|
|
415
|
-
*
|
|
416
|
-
* JSON Format Expected:
|
|
417
|
-
* {
|
|
418
|
-
* "storeId": "STORE-001",
|
|
419
|
-
* "timestamp": "2024-01-15T10:30:00Z",
|
|
420
|
-
* "inventory": [
|
|
421
|
-
* { "productId": "PROD-001", "storeId": "STORE-001", "availableQty": 25 },
|
|
422
|
-
* { "productId": "PROD-002", "storeId": "STORE-001", "availableQty": 50 }
|
|
423
|
-
* ]
|
|
424
|
-
* }
|
|
425
|
-
*
|
|
426
|
-
* @param {S3DataSource} s3Source - S3 data source instance
|
|
427
|
-
* @param {Map<string, AggregatedItem>} inventoryMap - Aggregation map
|
|
428
|
-
* @returns {Promise<number>} Number of files processed
|
|
429
|
-
*/
|
|
430
|
-
async function collectS3Inventory(s3Source, inventoryMap) {
|
|
431
|
-
logger.info('Collecting inventory from S3 (stores)');
|
|
432
|
-
|
|
433
|
-
try {
|
|
434
|
-
// List JSON files from S3
|
|
435
|
-
const files = await s3Source.listFiles({
|
|
436
|
-
prefix: config.storeS3.prefix,
|
|
437
|
-
});
|

    // Filter only JSON files
    const jsonFiles = files.filter(file => file.name.toLowerCase().endsWith('.json'));

    logger.info(`Found ${jsonFiles.length} JSON files in S3`);

    if (jsonFiles.length === 0) {
      logger.warn('No JSON files found in S3 source');
      return 0;
    }

    // Process each JSON file
    for (const file of jsonFiles) {
      logger.info(`Processing S3 file: ${file.path}`);

      try {
        // Download file content
        const content = await s3Source.downloadFile(file.path);

        // Parse JSON
        const storeData = JSON.parse(content);

        if (!storeData.inventory || !Array.isArray(storeData.inventory)) {
          logger.warn(`Invalid JSON structure in ${file.path}: missing 'inventory' array`);
          continue;
        }

        logger.info(`Parsed ${storeData.inventory.length} records from ${file.path}`);

        // Aggregate records into inventory map
        for (const record of storeData.inventory) {
          const sku = record.productId;
          const location = record.storeId;
          const quantity = parseInt(record.availableQty, 10) || 0;

          // Skip invalid records
          if (!sku || !location) {
            logger.warn('Skipping record with missing productId or storeId', { record });
            continue;
          }

          // Get or create aggregated item
          if (!inventoryMap.has(sku)) {
            inventoryMap.set(sku, {
              sku: sku,
              locations: new Map(),
              sources: new Set(),
            });
          }

          const item = inventoryMap.get(sku);

          // Aggregate quantity by location
          const currentQty = item.locations.get(location) || 0;
          item.locations.set(location, currentQty + quantity);

          // Track source
          item.sources.add('STORE_S3');
        }
      } catch (fileError) {
        logger.error(`Failed to process S3 file: ${file.path}`, fileError);
        // Continue processing other files
      }
    }

    logger.info(`S3 collection complete: ${inventoryMap.size} unique SKUs aggregated`);
    return jsonFiles.length;
  } catch (error) {
    logger.error('Failed to collect inventory from S3', error);
    throw error;
  }
}

/**
 * Query Current Fluent Inventory
 *
 * CURRENT STATE RETRIEVAL
 *
 * Queries Fluent Commerce to get the current inventory state.
 * This is necessary for reconciliation (comparing aggregated vs. current).
 *
 * GraphQL Query Pattern:
 * - Uses pagination (first: 1000) to handle large inventories
 * - Returns: ref, productRef, locationRef, qty, availableQty
 * - Maps to SKU+Location structure for comparison
 *
 * @param {FluentClient} fluentClient - Fluent Commerce API client
 * @returns {Promise<Map>} Map of "SKU:LOCATION" -> { qty, availableQty }
 */
async function queryCurrentInventory(fluentClient) {
  logger.info('Querying current inventory from Fluent Commerce');

  const query = `
    query GetCurrentInventory($retailerId: ID!, $first: Int!, $after: String) {
      inventoryPositions(
        retailerId: $retailerId,
        first: $first,
        after: $after
      ) {
        edges {
          cursor
          node {
            ref
            productRef
            locationRef
            qty
            availableQty
            status
            updatedOn
          }
        }
        pageInfo {
          hasNextPage
        }
      }
    }
  `;

  try {
    // Query with pagination support
    const result = await fluentClient.graphql({
      query,
      variables: {
        retailerId: config.fluent.retailerId,
        first: 1000, // Adjust based on expected inventory size
      },
    });

    if (!result.data || !result.data.inventoryPositions) {
      throw new Error('Invalid GraphQL response structure');
    }

    const edges = result.data.inventoryPositions.edges || [];
    logger.info(`Retrieved ${edges.length} inventory positions from Fluent`);

    // Build lookup map: "SKU:LOCATION" -> inventory data
    const currentInventory = new Map();

    for (const edge of edges) {
      const node = edge.node;
      const key = `${node.productRef}:${node.locationRef}`;

      currentInventory.set(key, {
        ref: node.ref,
        productRef: node.productRef,
        locationRef: node.locationRef,
        qty: node.qty,
        availableQty: node.availableQty,
        status: node.status,
        updatedOn: node.updatedOn,
      });
    }

    logger.info(`Current inventory indexed: ${currentInventory.size} positions`);
    return currentInventory;
  } catch (error) {
    logger.error('Failed to query current inventory', error);
    throw error;
  }
}

/**
 * Reconcile Inventory Differences
 *
 * RECONCILIATION ALGORITHM
 *
 * Compares aggregated inventory (from SFTP + S3) with Fluent current state.
 * Identifies differences and generates update commands.
 *
 * Algorithm:
 * 1. Iterate through aggregated inventory Map
 * 2. For each SKU+Location, compare aggregated qty vs. Fluent qty
 * 3. If difference exists (newQty !== currentQty), create update record
 * 4. Calculate adjustment type (RECEIPT if increase, DISPATCH if decrease)
 * 5. Build reconciliation report with detailed differences
 *
 * Update Decision Logic:
 * - No Fluent record: CREATE (new inventory position)
 * - Qty unchanged: SKIP (no update needed)
 * - Qty increased: RECEIPT adjustment
 * - Qty decreased: DISPATCH adjustment
 *
 * @param {Map<string, AggregatedItem>} inventoryMap - Aggregated inventory
 * @param {Map} currentInventory - Current Fluent inventory
 * @returns {Object} { updates: [], report: [] }
 */
function reconcileInventory(inventoryMap, currentInventory) {
  logger.info('Reconciling inventory differences');

  const updates = [];
  const reconciliationReport = [];
  let createCount = 0;
  let updateCount = 0;
  let skipCount = 0;

  // Iterate through aggregated inventory
  for (const [sku, aggregated] of inventoryMap.entries()) {
    for (const [location, newQty] of aggregated.locations.entries()) {
      const key = `${sku}:${location}`;

      // Look up current inventory position
      const current = currentInventory.get(key);
      const currentQty = current ? current.qty : 0;
      const difference = newQty - currentQty;

      // Skip if no change
      if (difference === 0 && current) {
        skipCount++;
        continue;
      }

      // Determine operation type
      let operation;
      if (!current) {
        operation = 'CREATE';
        createCount++;
      } else if (difference > 0) {
        operation = 'RECEIPT';
        updateCount++;
      } else {
        operation = 'DISPATCH';
        updateCount++;
      }

      // Create update record
      updates.push({
        ref: current ? current.ref : `${sku}-${location}`,
        productRef: sku,
        locationRef: location,
        qty: newQty,
        type: operation === 'CREATE' ? 'ADJUSTMENT' : operation,
        adjustmentQty: Math.abs(difference),
        reason: 'MULTI_SOURCE_SYNC',
        attributes: {
          sources: Array.from(aggregated.sources).join(','),
          previousQty: currentQty,
          newQty: newQty,
          difference: difference,
          syncedAt: new Date().toISOString(),
        },
      });

      // Add to reconciliation report
      reconciliationReport.push({
        sku,
        location,
        operation,
        previousQty: currentQty,
        newQty: newQty,
        difference,
        sources: Array.from(aggregated.sources),
        timestamp: new Date().toISOString(),
      });
    }
  }

  logger.info('Reconciliation complete', {
    totalDifferences: updates.length,
    creates: createCount,
    updates: updateCount,
    skipped: skipCount,
  });

  return { updates, reconciliationReport };
}

/**
 * Execute Inventory Updates
 *
 * BATCH UPDATE EXECUTION
 *
 * Sends inventory updates to Fluent Commerce via GraphQL mutations.
 * Processes updates in batches to avoid API limits.
 *
 * Mutation Pattern:
 * - Uses adjustInventory mutation for updates
 * - Batches of 100 records per mutation
 * - Tracks success/failure per batch
 * - Logs errors for retry/debugging
 *
 * @param {FluentClient} fluentClient - Fluent Commerce API client
 * @param {Array} updates - Array of update records
 * @returns {Promise<Object>} { success: number, failed: number, errors: [] }
 */
async function executeUpdates(fluentClient, updates) {
  logger.info(`Executing ${updates.length} inventory updates`);

  if (updates.length === 0) {
    logger.info('No updates to execute');
    return { success: 0, failed: 0, errors: [] };
  }

  const batchSize = config.processing.batchSize;
  let successCount = 0;
  let failedCount = 0;
  const errors = [];

  // Process in batches
  for (let i = 0; i < updates.length; i += batchSize) {
    const batch = updates.slice(i, i + batchSize);
    const batchNumber = Math.floor(i / batchSize) + 1;
    const totalBatches = Math.ceil(updates.length / batchSize);

    logger.info(`Processing batch ${batchNumber}/${totalBatches} (${batch.length} records)`);

    try {
      // GraphQL mutation for inventory adjustment
      const mutation = `
        mutation AdjustInventory($updates: [InventoryAdjustmentInput]!) {
          adjustInventory(input: $updates) {
            success
            failed
            errors {
              ref
              message
              code
            }
          }
        }
      `;

      const result = await fluentClient.graphqlMutation(mutation, {
        updates: batch,
      });

      if (result.data && result.data.adjustInventory) {
        const batchResult = result.data.adjustInventory;
        successCount += batchResult.success || 0;
        failedCount += batchResult.failed || 0;

        if (batchResult.errors && batchResult.errors.length > 0) {
          errors.push(...batchResult.errors);
          logger.warn(`Batch ${batchNumber} had ${batchResult.errors.length} errors`, {
            errors: batchResult.errors.slice(0, 5), // Log first 5 errors
          });
        }

        logger.info(
          `Batch ${batchNumber} complete: ${batchResult.success} success, ${batchResult.failed} failed`
        );
      }
    } catch (batchError) {
      logger.error(`Batch ${batchNumber} failed completely`, batchError);
      failedCount += batch.length;
      errors.push({
        batch: batchNumber,
        message: batchError.message,
        records: batch.length,
      });
    }
  }

  logger.info('Update execution complete', {
    totalUpdates: updates.length,
    success: successCount,
    failed: failedCount,
    errors: errors.length,
  });

  return { success: successCount, failed: failedCount, errors };
}

/**
 * Generate Reconciliation Report
 *
 * COMPREHENSIVE REPORTING
 *
 * Creates a detailed JSON report of the reconciliation process.
 * Includes statistics, detailed changes, and errors.
 * Uploads report to S3 for auditing and monitoring.
 *
 * Report Structure:
 * - timestamp: When reconciliation ran
 * - sourcesProcessed: Files processed per source
 * - statistics: Summary counts
 * - reconciliation: Detailed changes (first 1000)
 * - errors: Any errors encountered
 *
 * @param {S3DataSource} reportS3Source - S3 data source for report upload
 * @param {Object} stats - Statistics object
 * @returns {Promise<string>} Report S3 key
 */
async function generateReconciliationReport(reportS3Source, stats) {
  logger.info('Generating reconciliation report');

  const timestamp = new Date().toISOString();
  const reportKey = `reconciliation/report-${timestamp.replace(/[:.]/g, '-')}.json`;

  const report = {
    timestamp,
    version: '1.0.0',
    sourcesProcessed: {
      sftpFiles: stats.sftpFiles,
      s3Files: stats.s3Files,
    },
    statistics: {
      totalSKUs: stats.totalSKUs,
      totalLocations: stats.totalLocations,
      totalUpdates: stats.totalUpdates,
      creates: stats.creates,
      receipts: stats.receipts,
      dispatches: stats.dispatches,
      skipped: stats.skipped,
      success: stats.success,
      failed: stats.failed,
    },
    reconciliation: stats.reconciliationDetails.slice(0, 1000), // Limit to first 1000 for report size
    errors: stats.errors || [],
  };

  // Upload report to S3
  try {
    await reportS3Source.uploadFile(reportKey, JSON.stringify(report, null, 2), {
      contentType: 'application/json',
    });

    logger.info(`Reconciliation report uploaded: ${reportKey}`);
    return reportKey;
  } catch (error) {
    logger.error('Failed to upload reconciliation report', error);
    throw error;
  }
}

/**
 * Main Aggregation Function
 *
 * ORCHESTRATION ENTRY POINT
 *
 * Coordinates the entire multi-source aggregation workflow:
 * 1. Validate configuration
 * 2. Initialize data sources and Fluent client
 * 3. Collect inventory from SFTP + S3
 * 4. Query current Fluent inventory
 * 5. Reconcile differences
 * 6. Execute updates
 * 7. Generate report
 *
 * Error Handling Strategy:
 * - Configuration errors: Fail immediately
 * - Source collection errors: Continue with other sources
 * - Reconciliation errors: Fail (cannot proceed without comparison)
 * - Update errors: Track failures but continue batch processing
 *
 * @returns {Promise<Object>} Final statistics
 */
async function aggregateInventory() {
  const startTime = Date.now();

  logger.info('='.repeat(80));
  logger.info('MULTI-SOURCE INVENTORY AGGREGATION - START');
  logger.info('='.repeat(80));

  try {
    // Step 1: Validate configuration
    validateConfig();

    // Step 2: Initialize data sources
    const { sftpSource, s3Source, reportS3Source } = initializeDataSources();

    // Step 3: Create Fluent client
    logger.info('Creating Fluent Commerce client');
    const fluentClient = await createClient(config.fluent);
    logger.info('Fluent client created successfully');

    // Step 4: Initialize aggregation map
    // Map structure: SKU -> { sku, locations: Map<location, qty>, sources: Set<source> }
    const inventoryMap = new Map();

    // Step 5: Collect inventory from SFTP (warehouses)
    logger.info('-'.repeat(80));
    logger.info('PHASE 1: SFTP Collection');
    logger.info('-'.repeat(80));
    const sftpFiles = await collectSftpInventory(sftpSource, inventoryMap);

    // Step 6: Collect inventory from S3 (stores)
    logger.info('-'.repeat(80));
    logger.info('PHASE 2: S3 Collection');
    logger.info('-'.repeat(80));
    const s3Files = await collectS3Inventory(s3Source, inventoryMap);

    logger.info('-'.repeat(80));
    logger.info('AGGREGATION COMPLETE');
    logger.info('-'.repeat(80));
    logger.info(`Total unique SKUs: ${inventoryMap.size}`);

    // Calculate total locations
    let totalLocations = 0;
    for (const item of inventoryMap.values()) {
      totalLocations += item.locations.size;
    }
    logger.info(`Total locations: ${totalLocations}`);

    // Step 7: Query current Fluent inventory
    logger.info('-'.repeat(80));
    logger.info('PHASE 3: Current State Query');
    logger.info('-'.repeat(80));
    const currentInventory = await queryCurrentInventory(fluentClient);

    // Step 8: Reconcile differences
    logger.info('-'.repeat(80));
    logger.info('PHASE 4: Reconciliation');
    logger.info('-'.repeat(80));
    const { updates, reconciliationReport } = reconcileInventory(inventoryMap, currentInventory);

    // Step 9: Execute updates
    logger.info('-'.repeat(80));
    logger.info('PHASE 5: Update Execution');
    logger.info('-'.repeat(80));
    const updateResult = await executeUpdates(fluentClient, updates);

    // Step 10: Generate report
    logger.info('-'.repeat(80));
    logger.info('PHASE 6: Report Generation');
    logger.info('-'.repeat(80));

    const stats = {
      sftpFiles,
      s3Files,
      totalSKUs: inventoryMap.size,
      totalLocations,
      totalUpdates: updates.length,
      creates: updates.filter(u => u.type === 'ADJUSTMENT').length,
      receipts: updates.filter(u => u.type === 'RECEIPT').length,
      dispatches: updates.filter(u => u.type === 'DISPATCH').length,
      skipped: totalLocations - updates.length,
      success: updateResult.success,
      failed: updateResult.failed,
      errors: updateResult.errors,
      reconciliationDetails: reconciliationReport,
    };

    const reportKey = await generateReconciliationReport(reportS3Source, stats);

    // Step 11: Final summary
    const duration = Math.round((Date.now() - startTime) / 1000);

    logger.info('='.repeat(80));
    logger.info('MULTI-SOURCE INVENTORY AGGREGATION - COMPLETE');
    logger.info('='.repeat(80));
    logger.info(`Duration: ${duration} seconds`);
    logger.info(`Sources processed: ${sftpFiles} SFTP files, ${s3Files} S3 files`);
    logger.info(`Total SKUs: ${stats.totalSKUs}`);
    logger.info(
      `Total updates: ${stats.totalUpdates} (${stats.success} success, ${stats.failed} failed)`
    );
    logger.info(`Report: ${reportKey}`);
    logger.info('='.repeat(80));

    return stats;
  } catch (error) {
    logger.error('Multi-source aggregation failed', error);
    throw error;
  }
}

// Execute if run directly
if (import.meta.url === `file://${process.argv[1]}`) {
  aggregateInventory()
    .then(() => {
      logger.info('Aggregation completed successfully');
      process.exit(0);
    })
    .catch(error => {
      logger.error('Aggregation failed with error', error);
      process.exit(1);
    });
}

export { aggregateInventory, config };
```

### 4. Scheduler (Optional)

Create `src/scheduler.js` for scheduled execution:

```javascript
import cron from 'node-cron';
import { aggregateInventory } from './index.js';

/**
 * SCHEDULED AGGREGATION
 *
 * Runs aggregation on a schedule using cron syntax.
 * Default: Hourly at minute 0 (e.g., 1:00, 2:00, 3:00)
 *
 * Cron Pattern: '0 * * * *'
 * - Minute: 0 (at the top of the hour)
 * - Hour: * (every hour)
 * - Day of Month: * (every day)
 * - Month: * (every month)
 * - Day of Week: * (every day of week)
 */

console.log('Multi-Source Inventory Aggregation Scheduler');
console.log('Schedule: Hourly at minute 0');
console.log('Press Ctrl+C to stop');

// Schedule task
cron.schedule('0 * * * *', async () => {
  console.log(`\n${'='.repeat(80)}`);
  console.log(`Scheduled aggregation triggered: ${new Date().toISOString()}`);
  console.log('='.repeat(80));

  try {
    await aggregateInventory();
    console.log('Scheduled aggregation completed successfully');
  } catch (error) {
    console.error('Scheduled aggregation failed:', error);
  }
});

// Run immediately on startup (optional)
if (process.env.RUN_ON_STARTUP === 'true') {
  console.log('Running initial aggregation on startup...');
  aggregateInventory()
    .then(() => console.log('Initial aggregation complete'))
    .catch(error => console.error('Initial aggregation failed:', error));
}
```

## Key Patterns Explained

### Pattern 1: Multi-Source Data Collection

**Challenge**: Collecting inventory from heterogeneous sources (CSV from SFTP, JSON from S3)

**Solution**: Source-specific collection functions with unified aggregation structure

```javascript
// Each source has its own collection function
await collectSftpInventory(sftpSource, inventoryMap); // CSV → Map
await collectS3Inventory(s3Source, inventoryMap); // JSON → Map

// Both populate the same Map structure:
// Map<SKU, { locations: Map<location, qty>, sources: Set<source> }>
```

**Why This Works**:

- Abstraction: Each source function handles its own format
- Unified Structure: All sources aggregate into the same Map
- Source Tracking: Set tracks which sources contributed to each SKU
- Deduplication: Map automatically handles duplicate SKU+location combinations

### Pattern 2: Map-Based Aggregation & Deduplication

**Challenge**: Combine inventory from multiple sources, handling duplicates and quantities

**Solution**: Nested Map structure with additive aggregation

```javascript
// Aggregation data structure
const inventoryMap = new Map(); // SKU -> AggregatedItem

// AggregatedItem structure:
{
  sku: 'PROD-001',
  locations: Map<string, number>, // location -> quantity
  sources: Set<string> // ['WAREHOUSE_SFTP', 'STORE_S3']
}

// Aggregation algorithm (additive for duplicates)
if (!inventoryMap.has(sku)) {
  inventoryMap.set(sku, {
    sku: sku,
    locations: new Map(),
    sources: new Set()
  });
}

const item = inventoryMap.get(sku);
const currentQty = item.locations.get(location) || 0;
item.locations.set(location, currentQty + quantity); // ADD quantities
item.sources.add(sourceName);
```

**Why This Works**:

- O(1) lookup: Map provides fast SKU lookup
- O(1) duplicate detection: Map automatically handles duplicates
- Additive logic: Quantities from different sources are summed
- Source provenance: Track which sources contributed (debugging/auditing)
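
The additive behavior can be exercised in isolation. This is a minimal, runnable sketch of the aggregation step with the logger and data sources stripped out; the sample SKUs, locations, and quantities are hypothetical:

```javascript
// Minimal sketch of the additive aggregation (hypothetical sample records)
const inventoryMap = new Map();

function aggregate(sku, location, quantity, sourceName) {
  if (!inventoryMap.has(sku)) {
    inventoryMap.set(sku, { sku, locations: new Map(), sources: new Set() });
  }
  const item = inventoryMap.get(sku);
  // Duplicate SKU+location combinations sum their quantities
  item.locations.set(location, (item.locations.get(location) || 0) + quantity);
  item.sources.add(sourceName);
}

aggregate('PROD-001', 'WH-01', 100, 'WAREHOUSE_SFTP');
aggregate('PROD-001', 'WH-01', 50, 'STORE_S3'); // same SKU+location: 100 + 50
aggregate('PROD-001', 'ST-07', 25, 'STORE_S3');

console.log(inventoryMap.get('PROD-001').locations.get('WH-01')); // 150
```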

### Pattern 3: Reconciliation Algorithm

**Challenge**: Compare aggregated inventory with Fluent current state to find differences

**Solution**: Key-based lookup with delta calculation

```javascript
// Build lookup key: "SKU:LOCATION"
const key = `${sku}:${location}`;

// Look up current state
const current = currentInventory.get(key);
const currentQty = current ? current.qty : 0;

// Calculate delta
const difference = newQty - currentQty;

// Determine operation
if (!current) {
  operation = 'CREATE'; // New inventory position
} else if (difference === 0) {
  operation = 'SKIP'; // No change
} else if (difference > 0) {
  operation = 'RECEIPT'; // Increase
} else {
  operation = 'DISPATCH'; // Decrease
}
```

**Why This Works**:

- Key-based comparison: Fast O(1) lookup for each SKU+location
- Delta detection: Only send changes (minimizes API calls)
- Operation classification: Clear intent (CREATE/RECEIPT/DISPATCH)
- Skip optimization: Avoid unnecessary updates when qty unchanged
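
Pulling the branch logic into a standalone function (a sketch, mirroring `reconcileInventory` above) makes each case easy to verify with hypothetical positions:

```javascript
// Classify one SKU+location delta, matching the reconcileInventory branches
function classify(current, newQty) {
  const currentQty = current ? current.qty : 0;
  const difference = newQty - currentQty;
  if (difference === 0 && current) return 'SKIP'; // unchanged existing position
  if (!current) return 'CREATE'; // no existing position
  return difference > 0 ? 'RECEIPT' : 'DISPATCH';
}

console.log(classify(undefined, 25)); // CREATE (no existing position)
console.log(classify({ qty: 100 }, 150)); // RECEIPT (increase of 50)
console.log(classify({ qty: 100 }, 40)); // DISPATCH (decrease of 60)
console.log(classify({ qty: 100 }, 100)); // SKIP (no change)
```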

### Pattern 4: Batch Update Execution

**Challenge**: Send thousands of inventory updates without hitting API limits

**Solution**: Batch processing with error tracking

```javascript
const batchSize = 100;

for (let i = 0; i < updates.length; i += batchSize) {
  const batch = updates.slice(i, i + batchSize);

  try {
    const result = await fluentClient.graphqlMutation(mutation, {
      updates: batch,
    });

    // Track success/failure per batch
    successCount += result.data.adjustInventory.success;
    failedCount += result.data.adjustInventory.failed;
  } catch (batchError) {
    // Log error but continue processing other batches
    logger.error(`Batch ${batchNumber} failed`, batchError);
    failedCount += batch.length;
  }
}
```

**Why This Works**:

- Batch optimization: Reduces API calls (1000 updates → 10 API calls at batch size 100)
- Error isolation: One batch failure doesn't stop other batches
- Progress tracking: Log after each batch for monitoring
- Retry-friendly: Failed batches can be retried individually
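
The slicing arithmetic is worth seeing on its own. This hypothetical `chunk` helper does exactly what the loop above does; note that the final batch is simply shorter when the total is not a multiple of the batch size:

```javascript
// Partition an array into batches of at most `size` items
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// 1050 hypothetical update records at batch size 100
const updates = Array.from({ length: 1050 }, (_, i) => ({ ref: `POS-${i}` }));
const batches = chunk(updates, 100);

console.log(batches.length); // 11 (10 full batches + 1 partial)
console.log(batches[10].length); // 50
```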

### Pattern 5: Comprehensive Report Generation

**Challenge**: Provide audit trail and debugging information for reconciliation

**Solution**: Structured JSON report with statistics and details

```javascript
const report = {
  timestamp: new Date().toISOString(),
  version: '1.0.0',

  // Source processing statistics
  sourcesProcessed: {
    sftpFiles: 5,
    s3Files: 20,
  },

  // Aggregate statistics
  statistics: {
    totalSKUs: 1250,
    totalLocations: 3400,
    totalUpdates: 450,
    creates: 50,
    receipts: 200,
    dispatches: 200,
    skipped: 2950,
    success: 445,
    failed: 5,
  },

  // Detailed changes (first 1000)
  reconciliation: [
    {
      sku: 'PROD-001',
      location: 'WH-01',
      operation: 'RECEIPT',
      previousQty: 100,
      newQty: 150,
      difference: 50,
      sources: ['WAREHOUSE_SFTP'],
      timestamp: '2024-01-15T10:30:00Z',
    },
  ],

  // Errors encountered
  errors: [],
};

// Upload to S3 for persistence
await reportS3Source.uploadFile(reportKey, JSON.stringify(report, null, 2));
```

**Why This Works**:

- Audit trail: Permanent record of what was updated
- Debugging: Detailed information for troubleshooting failures
- Monitoring: Statistics for dashboards and alerts
- Compliance: Provenance tracking (which sources contributed)

## Deployment Options

### Option 1: Scheduled Execution (Hourly)

Run aggregation every hour using cron:

```bash
# Install dependencies
npm install

# Run scheduler (stays running, executes hourly)
npm run aggregation:schedule
```

**Use Case**: Regular inventory synchronization from multiple sources

### Option 2: Manual Execution (On-Demand)

Run aggregation once, manually triggered:

```bash
# Single execution
npm run aggregation:once
```

**Use Case**: Testing, troubleshooting, or ad-hoc reconciliation
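
The `aggregation:schedule` and `aggregation:once` commands used above are not provided by the SDK itself; they assume `package.json` script entries along these lines (script names and file paths are illustrative):

```json
{
  "scripts": {
    "aggregation:schedule": "node src/scheduler.js",
    "aggregation:once": "node src/index.js"
  }
}
```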
|
|
1278
|
-
|
|
1279
|
-
### Option 3: Containerized Deployment
|
|
1280
|
-
|
|
1281
|
-
Create `Dockerfile`:
|
|
1282
|
-
|
|
1283
|
-
```dockerfile
|
|
1284
|
-
FROM node:18-alpine
|
|
1285
|
-
|
|
1286
|
-
WORKDIR /app
|
|
1287
|
-
|
|
1288
|
-
COPY package*.json ./
|
|
1289
|
-
RUN npm ci --production
|
|
1290
|
-
|
|
1291
|
-
COPY src/ ./src/
|
|
1292
|
-
|
|
1293
|
-
CMD ["node", "src/scheduler.js"]
|
|
1294
|
-
```
|
|
1295
|
-
|
|
1296
|
-
Build and run:
|
|
1297
|
-
|
|
1298
|
-
```bash
|
|
1299
|
-
# Build Docker image
|
|
1300
|
-
docker build -t multi-source-aggregation .
|
|
1301
|
-
|
|
1302
|
-
# Run container
|
|
1303
|
-
docker run -d \
|
|
1304
|
-
--name inventory-aggregation \
|
|
1305
|
-
--env-file .env \
|
|
1306
|
-
--restart unless-stopped \
|
|
1307
|
-
multi-source-aggregation
|
|
1308
|
-
```
|
|
1309
|
-
|
|
1310
|
-
**Use Case**: Production deployment with container orchestration (Kubernetes, ECS)
|
|
1311
|
-
|
|
1312
|
-
## Testing

### Test with Mock Data

Create `tests/test-aggregation.js`:

```javascript
import { aggregateInventory, config } from '../src/index.js';

// Override config for testing
config.processing.batchSize = 10;

// Mock environment variables
process.env.FLUENT_BASE_URL = 'https://api.fluentcommerce.com';
process.env.FLUENT_CLIENT_ID = 'test-client-id';
// ... (other test env vars)

// Run aggregation
aggregateInventory()
  .then(stats => {
    console.log('Test completed successfully');
    console.log(JSON.stringify(stats, null, 2));
  })
  .catch(error => {
    console.error('Test failed:', error);
    process.exit(1);
  });
```

Run test:

```bash
node tests/test-aggregation.js
```

## Common Issues

### Issue 1: SFTP Connection Timeout

**Symptom**: `Error: Connection timeout after 30s`

**Solution**:

```bash
# Increase timeout in .env
WAREHOUSE_SFTP_TIMEOUT=60000
```

Or in code (`src/index.js`):

```javascript
settings: {
  ...config.warehouseSftp,
  connectionTimeout: 60000 // 60 seconds
}
```

### Issue 2: S3 Access Denied

**Symptom**: `Error: AccessDenied: User is not authorized`

**Solution**:

- Verify AWS credentials are correct
- Check IAM policy includes:
  - `s3:GetObject` on source bucket
  - `s3:PutObject` on report bucket
  - `s3:ListBucket` on both buckets

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::store-inventory", "arn:aws:s3:::store-inventory/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::inventory-reports/*"]
    }
  ]
}
```

### Issue 3: Fluent GraphQL Errors

**Symptom**: `Error: adjustInventory mutation failed`

**Solution**:

- Check Fluent API credentials
- Verify retailerId is correct
- Ensure inventory positions exist (or use CREATE operation)
- Check mutation input format matches Fluent schema

### Issue 4: Memory Issues with Large Datasets

**Symptom**: `JavaScript heap out of memory`

**Solution**:

```bash
# Increase Node.js heap size
node --max-old-space-size=4096 src/index.js
```

Or reduce the batch size in config (`src/index.js`):

```javascript
config.processing.batchSize = 50;
```
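
Reducing the batch size helps because `executeUpdates` slices the updates array into fixed-size chunks before each mutation, so the peak working set shrinks with the batch size. A minimal, self-contained sketch of that chunking logic (the `chunk` helper name is illustrative, not an SDK export):

```javascript
// Split an array into fixed-size batches, mirroring the
// `updates.slice(i, i + batchSize)` loop in executeUpdates.
function chunk(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Smaller batches mean more mutations but a lower peak memory footprint.
const batches = chunk([1, 2, 3, 4, 5], 2);
console.log(batches.length); // 3 -> [[1,2], [3,4], [5]]
```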

### Issue 5: Duplicate SKUs from Multiple Sources

**Symptom**: Quantities are doubled or incorrect

**Solution**:

- **Intended behavior**: Quantities are ADDED from multiple sources
- If you need OVERRIDE behavior (last source wins), modify the aggregation:

```javascript
// OVERRIDE mode (replace instead of add)
item.locations.set(location, quantity); // Don't add, just set
```
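
The two modes behave very differently when the same file is processed twice. A self-contained sketch of the contrast (`mergeQty` is a hypothetical helper, not an SDK function):

```javascript
// ADD mode: quantities from every source accumulate (the guide's default).
// OVERRIDE mode: the last source to report a SKU+location wins.
function mergeQty(locations, location, quantity, mode) {
  if (mode === 'OVERRIDE') {
    locations.set(location, quantity);
  } else {
    locations.set(location, (locations.get(location) || 0) + quantity);
  }
  return locations;
}

const add = mergeQty(mergeQty(new Map(), 'WH-01', 150, 'ADD'), 'WH-01', 150, 'ADD');
console.log(add.get('WH-01')); // 300 - the same feed processed twice doubles the qty

const override = mergeQty(mergeQty(new Map(), 'WH-01', 150, 'OVERRIDE'), 'WH-01', 150, 'OVERRIDE');
console.log(override.get('WH-01')); // 150 - reprocessing is idempotent
```

OVERRIDE is the safer choice when upstream systems may redeliver the same snapshot; ADD is correct only when each source owns distinct locations or sends true increments.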

### Issue 6: Report Upload Failures

**Symptom**: `Failed to upload reconciliation report`

**Solution**:

- Check report S3 bucket exists
- Verify write permissions
- Check S3 region matches configuration
- Ensure network connectivity to S3

## Related Guides

- **[Simple CSV Ingestion](./s3-csv-batch-api.md)** - Single source ingestion pattern
- **[SFTP to Fluent](../../02-CORE-GUIDES/ingestion/ingestion-readme.md)** - SFTP-specific workflows (see Versori templates)
- **[S3 Parquet Extraction](./graphql-query-export.md)** - S3 and Parquet handling
- **[Connector Scenarios](../../02-CORE-GUIDES/advanced-services/advanced-services-readme.md)** - Multi-source patterns
- **[Universal Mapping Guide](../../02-CORE-GUIDES/advanced-services/advanced-services-readme.md)** - Field transformation
- **[SDK Resolvers](../../02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-guide.md)** - Built-in transformations

## Next Steps

1. **Add Custom Transformations**: Use UniversalMapper for complex field mappings
2. **Implement Retry Logic**: Add exponential backoff for transient failures
3. **Add Monitoring**: Integrate with CloudWatch, Datadog, or other monitoring tools
4. **Implement Alerting**: Send notifications on failures or reconciliation anomalies
5. **Add State Management**: Track processed files to avoid reprocessing
6. **Optimize Performance**: Implement parallel processing for large datasets
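
For step 2, the usual shape is a retry wrapper that doubles the wait on each attempt, reusing the existing `config.processing.maxRetries` and `retryDelay` values. A minimal sketch (the `backoffDelays` and `withRetry` names are illustrative, not SDK APIs):

```javascript
// Delays for an exponential backoff schedule: base, 2*base, 4*base, ...
function backoffDelays(maxRetries, baseMs) {
  return Array.from({ length: maxRetries }, (_, i) => baseMs * 2 ** i);
}

// Retry an async operation, waiting between failed attempts.
async function withRetry(fn, { maxRetries = 3, retryDelay = 1000 } = {}) {
  const delays = backoffDelays(maxRetries, retryDelay);
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === maxRetries) break;
      await new Promise(resolve => setTimeout(resolve, delays[attempt]));
    }
  }
  throw lastError;
}

console.log(backoffDelays(3, 1000)); // [ 1000, 2000, 4000 ]
```

Wrapping the `fluentClient.graphqlMutation(...)` call in `executeUpdates` with such a helper keeps one transient network error from failing an entire batch.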

# Standalone: Multi-Source Inventory Aggregation

**FC Connect SDK Use Case Guide**

> **SDK**: [@fluentcommerce/fc-connect-sdk](https://www.npmjs.com/package/@fluentcommerce/fc-connect-sdk)
> **Version**: Use latest - `npm install @fluentcommerce/fc-connect-sdk@latest`

**Context**: Node.js script that aggregates inventory from SFTP + S3, reconciles with Fluent Commerce, and updates differences

**Complexity**: High

**Runtime**: Node.js ≥18

**Estimated Lines**: ~800

## What You'll Build

- Multi-source data collection (SFTP CSV + S3 JSON)
- Data aggregation and deduplication using a Map-based algorithm
- Reconciliation with Fluent Commerce current state
- Difference calculation (delta detection)
- GraphQL mutation execution for inventory updates
- Comprehensive logging and metrics tracking
- Reconciliation report generation
- Error handling and recovery
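
The aggregation and deduplication core above reduces to a Map keyed by SKU, holding per-location quantities plus a set of contributing sources. A condensed, self-contained sketch of the pattern the full script uses (the `addRecord` helper name is illustrative, not an SDK export):

```javascript
// Accumulate one source record into the shared aggregation map.
function addRecord(inventoryMap, sku, location, quantity, source) {
  if (!inventoryMap.has(sku)) {
    inventoryMap.set(sku, { sku, locations: new Map(), sources: new Set() });
  }
  const item = inventoryMap.get(sku);
  // Same SKU+location from another source: quantities are added.
  item.locations.set(location, (item.locations.get(location) || 0) + quantity);
  item.sources.add(source);
}

const inventory = new Map();
addRecord(inventory, 'PROD-001', 'WH-01', 150, 'WAREHOUSE_SFTP');
addRecord(inventory, 'PROD-001', 'STORE-001', 25, 'STORE_S3');
addRecord(inventory, 'PROD-001', 'WH-01', 50, 'STORE_S3'); // same SKU+location: add

const item = inventory.get('PROD-001');
console.log(item.locations.get('WH-01')); // 200
console.log([...item.sources]); // [ 'WAREHOUSE_SFTP', 'STORE_S3' ]
```

Both Map lookups are O(1), so aggregating n records across all sources stays O(n) regardless of how many files contribute.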

## SDK Methods Used

- `createClient(...)` - OAuth2 client creation
- `SftpDataSource(...)` - SFTP file operations
- `S3DataSource(...)` - S3 file operations
- `CSVParserService` - Parse SFTP CSV files
- `client.graphql({ query, variables })` - Query current Fluent inventory
- `client.graphqlMutation(mutation, variables)` - Execute inventory updates
- `UniversalMapper(...)` - Transform aggregated data

## Complete Working Code

### 1. Environment Configuration

Create `.env` file:

```bash
# Fluent Commerce OAuth2
FLUENT_BASE_URL=https://api.fluentcommerce.com
FLUENT_CLIENT_ID=your-oauth2-client-id
FLUENT_CLIENT_SECRET=your-oauth2-client-secret
FLUENT_USERNAME=your-username
FLUENT_PASSWORD=your-password
FLUENT_RETAILER_ID=your-retailer-id

# Warehouse SFTP Source
WAREHOUSE_SFTP_HOST=warehouse-sftp.example.com
WAREHOUSE_SFTP_PORT=22
WAREHOUSE_SFTP_USER=warehouse-user
WAREHOUSE_SFTP_PASSWORD=warehouse-password
WAREHOUSE_SFTP_PATH=/inventory/updates

# Store S3 Source
STORE_S3_BUCKET=store-inventory
STORE_S3_REGION=us-east-1
STORE_AWS_ACCESS_KEY_ID=your-aws-access-key
STORE_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
STORE_S3_PREFIX=stores/inventory/

# Reporting S3 Destination
REPORT_S3_BUCKET=inventory-reports
REPORT_S3_REGION=us-east-1
REPORT_AWS_ACCESS_KEY_ID=your-aws-access-key
REPORT_AWS_SECRET_ACCESS_KEY=your-aws-secret-key
```

### 2. Package Configuration

Create `package.json`:

```json
{
  "name": "multi-source-inventory-aggregation",
  "version": "1.0.0",
  "type": "module",
  "description": "Multi-source inventory aggregation with Fluent Commerce",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "node --watch src/index.js",
    "aggregation:once": "node src/index.js",
    "aggregation:schedule": "node src/scheduler.js"
  },
  "dependencies": {
    "@fluentcommerce/fc-connect-sdk": "^0.1.39",
    "dotenv": "^16.0.0",
    "node-cron": "^3.0.0"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}
```

### 3. Main Aggregation Script

Create `src/index.js`:

```javascript
// FC Connect SDK
// Install: npm install @fluentcommerce/fc-connect-sdk@latest
// Docs: https://www.npmjs.com/package/@fluentcommerce/fc-connect-sdk
// GitHub: https://github.com/fluentcommerce/fc-connect-sdk

import {
  createClient,
  SftpDataSource,
  S3DataSource,
  CSVParserService,
  UniversalMapper,
  createConsoleLogger,
  toStructuredLogger,
} from '@fluentcommerce/fc-connect-sdk';
import dotenv from 'dotenv';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';

// Load environment variables
dotenv.config();

// Get current directory (ESM equivalent of __dirname)
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

/**
 * MULTI-SOURCE INVENTORY AGGREGATION SYSTEM
 *
 * This script implements a sophisticated inventory aggregation workflow:
 *
 * 1. Data Collection: Gather inventory from multiple sources (SFTP + S3)
 * 2. Aggregation: Combine data using Map-based deduplication
 * 3. Reconciliation: Compare with Fluent Commerce current state
 * 4. Update: Send only differences to Fluent via GraphQL mutations
 * 5. Reporting: Generate comprehensive reconciliation report
 *
 * DESIGN DECISIONS:
 * - Map-based aggregation: O(n) complexity for deduplication
 * - Delta-only updates: Minimize API calls by updating only differences
 * - Source tracking: Maintain provenance for debugging
 * - Location granularity: Track inventory per SKU per location
 * - Error recovery: Continue processing after non-critical failures
 */

/**
 * Aggregated Inventory Item Structure
 *
 * This represents a single SKU's inventory across all locations.
 * The Map structure provides O(1) lookup for deduplication.
 *
 * @typedef {Object} AggregatedItem
 * @property {string} sku - Product SKU (unique identifier)
 * @property {Map<string, number>} locations - Location ID -> quantity mapping
 * @property {Set<string>} sources - Data sources that contributed to this item
 */

/**
 * Initialize Logger
 */
const logger = toStructuredLogger(createConsoleLogger(), {
  service: 'multi-source-aggregation',
  version: '1.0.0',
});

/**
 * Configuration Object
 *
 * Centralizes all configuration to make the script easily testable
 * and configurable without code changes.
 */
const config = {
  // Fluent Commerce
  fluent: {
    baseUrl: process.env.FLUENT_BASE_URL,
    clientId: process.env.FLUENT_CLIENT_ID,
    clientSecret: process.env.FLUENT_CLIENT_SECRET,
    username: process.env.FLUENT_USERNAME,
    password: process.env.FLUENT_PASSWORD,
    retailerId: process.env.FLUENT_RETAILER_ID,
  },

  // Warehouse SFTP Source
  warehouseSftp: {
    host: process.env.WAREHOUSE_SFTP_HOST,
    port: parseInt(process.env.WAREHOUSE_SFTP_PORT || '22', 10),
    username: process.env.WAREHOUSE_SFTP_USER,
    password: process.env.WAREHOUSE_SFTP_PASSWORD,
    remotePath: process.env.WAREHOUSE_SFTP_PATH || '/inventory/updates',
    filePattern: '*.csv',
  },

  // Store S3 Source
  storeS3: {
    bucket: process.env.STORE_S3_BUCKET,
    region: process.env.STORE_S3_REGION || 'us-east-1',
    accessKeyId: process.env.STORE_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.STORE_AWS_SECRET_ACCESS_KEY,
    prefix: process.env.STORE_S3_PREFIX || 'stores/inventory/',
  },

  // Reporting S3 Destination
  reportS3: {
    bucket: process.env.REPORT_S3_BUCKET,
    region: process.env.REPORT_S3_REGION || 'us-east-1',
    accessKeyId: process.env.REPORT_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.REPORT_AWS_SECRET_ACCESS_KEY,
  },

  // Processing options
  processing: {
    batchSize: 100, // Number of records to process in one GraphQL mutation
    maxRetries: 3,
    retryDelay: 1000, // milliseconds
  },
};

/**
 * Validate Configuration
 *
 * Ensures all required environment variables are set before proceeding.
 * Fails fast with clear error messages to prevent runtime issues.
 */
function validateConfig() {
  const required = [
    'FLUENT_BASE_URL',
    'FLUENT_CLIENT_ID',
    'FLUENT_CLIENT_SECRET',
    'FLUENT_USERNAME',
    'FLUENT_PASSWORD',
    'FLUENT_RETAILER_ID',
    'WAREHOUSE_SFTP_HOST',
    'WAREHOUSE_SFTP_USER',
    'WAREHOUSE_SFTP_PASSWORD',
    'STORE_S3_BUCKET',
    'STORE_AWS_ACCESS_KEY_ID',
    'STORE_AWS_SECRET_ACCESS_KEY',
  ];

  const missing = required.filter(key => !process.env[key]);

  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }

  logger.info('Configuration validated successfully');
}

/**
 * Initialize Data Sources
 *
 * Creates SFTP and S3 data source instances for reading inventory files.
 * Each source is configured with connection parameters and file patterns.
 *
 * @returns {Object} Data source instances
 */
function initializeDataSources() {
  logger.info('Initializing data sources');

  // SFTP Data Source for warehouse inventory (CSV format)
  const sftpSource = new SftpDataSource(
    {
      type: 'SFTP_CSV',
      connectionId: 'warehouse-sftp',
      name: 'Warehouse SFTP',
      settings: {
        ...config.warehouseSftp,
        csvDelimiter: ',',
        csvHeaders: ['sku', 'location', 'quantity', 'status'],
        csvSkipEmptyLines: true,
        csvTrimValues: true,
      },
    },
    logger
  );

  // S3 Data Source for store inventory (JSON format)
  const s3Source = new S3DataSource(
    {
      type: 'S3_CSV',
      connectionId: 'store-s3',
      name: 'Store Inventory S3',
      s3Config: config.storeS3,
    },
    logger
  );

  // S3 Data Source for reporting (write destination)
  const reportS3Source = new S3DataSource(
    {
      type: 'S3_CSV',
      connectionId: 'report-s3',
      name: 'Report S3',
      s3Config: config.reportS3,
    },
    logger
  );

  logger.info('Data sources initialized successfully');

  return { sftpSource, s3Source, reportS3Source };
}

/**
 * Collect Inventory from SFTP Source
 *
 * WAREHOUSE INVENTORY COLLECTION
 *
 * Algorithm:
 * 1. List CSV files from SFTP remote path
 * 2. Download each CSV file
 * 3. Parse CSV using CSVParserService
 * 4. Aggregate data into Map structure (SKU -> locations -> quantity)
 * 5. Track source provenance (WAREHOUSE_SFTP)
 *
 * CSV Format Expected:
 * sku,location,quantity,status
 * PROD-001,WH-01,150,AVAILABLE
 * PROD-002,WH-01,75,AVAILABLE
 *
 * @param {SftpDataSource} sftpSource - SFTP data source instance
 * @param {Map<string, AggregatedItem>} inventoryMap - Aggregation map
 * @returns {Promise<number>} Number of files processed
 */
async function collectSftpInventory(sftpSource, inventoryMap) {
  logger.info('Collecting inventory from SFTP (warehouse)');

  try {
    // List CSV files from SFTP server
    const files = await sftpSource.listFiles({
      filePattern: config.warehouseSftp.filePattern,
    });

    logger.info(`Found ${files.length} CSV files in SFTP`);

    if (files.length === 0) {
      logger.warn('No files found in SFTP source');
      return 0;
    }

    // Process each CSV file
    for (const file of files) {
      logger.info(`Processing SFTP file: ${file.name}`);

      try {
        // Download file content
        const content = await sftpSource.downloadFile(file.name);

        // Parse CSV content
        const csvParser = new CSVParserService();
        const records = await csvParser.parse(content);

        logger.info(`Parsed ${records.length} records from ${file.name}`);

        // Aggregate records into inventory map
        for (const record of records) {
          const sku = record.sku;
          const location = record.location;
          const quantity = parseInt(record.quantity, 10) || 0;

          // Skip invalid records
          if (!sku || !location) {
            logger.warn('Skipping record with missing SKU or location', { record });
            continue;
          }

          // Get or create aggregated item
          if (!inventoryMap.has(sku)) {
            inventoryMap.set(sku, {
              sku: sku,
              locations: new Map(),
              sources: new Set(),
            });
          }

          const item = inventoryMap.get(sku);

          // Aggregate quantity by location
          // If SKU+location exists from another source, ADD quantities
          const currentQty = item.locations.get(location) || 0;
          item.locations.set(location, currentQty + quantity);

          // Track source
          item.sources.add('WAREHOUSE_SFTP');
        }
      } catch (fileError) {
        logger.error(`Failed to process SFTP file: ${file.name}`, fileError);
        // Continue processing other files
      }
    }

    logger.info(`SFTP collection complete: ${inventoryMap.size} unique SKUs aggregated`);
    return files.length;
  } catch (error) {
    logger.error('Failed to collect inventory from SFTP', error);
    throw error;
  }
}

/**
 * Collect Inventory from S3 Source
 *
 * STORE INVENTORY COLLECTION
 *
 * Algorithm:
 * 1. List JSON files from S3 bucket/prefix
 * 2. Download each JSON file
 * 3. Parse JSON (expected structure: { inventory: [...] })
 * 4. Aggregate data into Map structure
 * 5. Track source provenance (STORE_S3)
 *
 * JSON Format Expected:
 * {
 *   "storeId": "STORE-001",
 *   "timestamp": "2024-01-15T10:30:00Z",
 *   "inventory": [
 *     { "productId": "PROD-001", "storeId": "STORE-001", "availableQty": 25 },
 *     { "productId": "PROD-002", "storeId": "STORE-001", "availableQty": 50 }
 *   ]
 * }
 *
 * @param {S3DataSource} s3Source - S3 data source instance
 * @param {Map<string, AggregatedItem>} inventoryMap - Aggregation map
 * @returns {Promise<number>} Number of files processed
 */
async function collectS3Inventory(s3Source, inventoryMap) {
  logger.info('Collecting inventory from S3 (stores)');

  try {
    // List JSON files from S3
    const files = await s3Source.listFiles({
      prefix: config.storeS3.prefix,
    });

    // Filter only JSON files
    const jsonFiles = files.filter(file => file.name.toLowerCase().endsWith('.json'));

    logger.info(`Found ${jsonFiles.length} JSON files in S3`);

    if (jsonFiles.length === 0) {
      logger.warn('No JSON files found in S3 source');
      return 0;
    }

    // Process each JSON file
    for (const file of jsonFiles) {
      logger.info(`Processing S3 file: ${file.path}`);

      try {
        // Download file content
        const content = await s3Source.downloadFile(file.path);

        // Parse JSON
        const storeData = JSON.parse(content);

        if (!storeData.inventory || !Array.isArray(storeData.inventory)) {
          logger.warn(`Invalid JSON structure in ${file.path}: missing 'inventory' array`);
          continue;
        }

        logger.info(`Parsed ${storeData.inventory.length} records from ${file.path}`);

        // Aggregate records into inventory map
        for (const record of storeData.inventory) {
          const sku = record.productId;
          const location = record.storeId;
          const quantity = parseInt(record.availableQty, 10) || 0;

          // Skip invalid records
          if (!sku || !location) {
            logger.warn('Skipping record with missing productId or storeId', { record });
            continue;
          }

          // Get or create aggregated item
          if (!inventoryMap.has(sku)) {
            inventoryMap.set(sku, {
              sku: sku,
              locations: new Map(),
              sources: new Set(),
            });
          }

          const item = inventoryMap.get(sku);

          // Aggregate quantity by location
          const currentQty = item.locations.get(location) || 0;
          item.locations.set(location, currentQty + quantity);

          // Track source
          item.sources.add('STORE_S3');
        }
      } catch (fileError) {
        logger.error(`Failed to process S3 file: ${file.path}`, fileError);
        // Continue processing other files
      }
    }

    logger.info(`S3 collection complete: ${inventoryMap.size} unique SKUs aggregated`);
    return jsonFiles.length;
  } catch (error) {
    logger.error('Failed to collect inventory from S3', error);
    throw error;
  }
}

/**
 * Query Current Fluent Inventory
 *
 * CURRENT STATE RETRIEVAL
 *
 * Queries Fluent Commerce to get the current inventory state.
 * This is necessary for reconciliation (comparing aggregated vs. current).
 *
 * GraphQL Query Pattern:
 * - Pages through results (first: 1000 per page, after cursor) to handle large inventories
 * - Returns: ref, productRef, locationRef, qty, availableQty
 * - Maps to SKU+Location structure for comparison
 *
 * @param {FluentClient} fluentClient - Fluent Commerce API client
 * @returns {Promise<Map>} Map of "SKU:LOCATION" -> { qty, availableQty }
 */
async function queryCurrentInventory(fluentClient) {
  logger.info('Querying current inventory from Fluent Commerce');

  const query = `
    query GetCurrentInventory($retailerId: ID!, $first: Int!, $after: String) {
      inventoryPositions(
        retailerId: $retailerId,
        first: $first,
        after: $after
      ) {
        edges {
          cursor
          node {
            ref
            productRef
            locationRef
            qty
            availableQty
            status
            updatedOn
          }
        }
        pageInfo {
          hasNextPage
        }
      }
    }
  `;

  try {
    // Build lookup map: "SKU:LOCATION" -> inventory data
    const currentInventory = new Map();
    let after = null;
    let hasNextPage = true;

    // Follow the cursor until every page has been consumed
    while (hasNextPage) {
      const result = await fluentClient.graphql({
        query,
        variables: {
          retailerId: config.fluent.retailerId,
          first: 1000, // Page size; adjust based on expected inventory size
          after,
        },
      });

      if (!result.data || !result.data.inventoryPositions) {
        throw new Error('Invalid GraphQL response structure');
      }

      const edges = result.data.inventoryPositions.edges || [];
      logger.info(`Retrieved ${edges.length} inventory positions from Fluent`);

      for (const edge of edges) {
        const node = edge.node;
        const key = `${node.productRef}:${node.locationRef}`;

        currentInventory.set(key, {
          ref: node.ref,
          productRef: node.productRef,
          locationRef: node.locationRef,
          qty: node.qty,
          availableQty: node.availableQty,
          status: node.status,
          updatedOn: node.updatedOn,
        });
      }

      hasNextPage = result.data.inventoryPositions.pageInfo.hasNextPage && edges.length > 0;
      after = edges.length > 0 ? edges[edges.length - 1].cursor : null;
    }

    logger.info(`Current inventory indexed: ${currentInventory.size} positions`);
    return currentInventory;
  } catch (error) {
    logger.error('Failed to query current inventory', error);
    throw error;
  }
}

/**
 * Reconcile Inventory Differences
 *
 * RECONCILIATION ALGORITHM
 *
 * Compares aggregated inventory (from SFTP + S3) with Fluent current state.
 * Identifies differences and generates update commands.
 *
 * Algorithm:
 * 1. Iterate through aggregated inventory Map
 * 2. For each SKU+Location, compare aggregated qty vs. Fluent qty
 * 3. If difference exists (newQty !== currentQty), create update record
 * 4. Calculate adjustment type (RECEIPT if increase, DISPATCH if decrease)
 * 5. Build reconciliation report with detailed differences
 *
 * Update Decision Logic:
 * - No Fluent record: CREATE (new inventory position)
 * - Qty unchanged: SKIP (no update needed)
 * - Qty increased: RECEIPT adjustment
 * - Qty decreased: DISPATCH adjustment
 *
 * @param {Map<string, AggregatedItem>} inventoryMap - Aggregated inventory
 * @param {Map} currentInventory - Current Fluent inventory
 * @returns {Object} { updates: [], reconciliationReport: [] }
 */
function reconcileInventory(inventoryMap, currentInventory) {
  logger.info('Reconciling inventory differences');

  const updates = [];
  const reconciliationReport = [];
  let createCount = 0;
  let updateCount = 0;
  let skipCount = 0;

  // Iterate through aggregated inventory
  for (const [sku, aggregated] of inventoryMap.entries()) {
    for (const [location, newQty] of aggregated.locations.entries()) {
      const key = `${sku}:${location}`;

      // Look up current inventory position
      const current = currentInventory.get(key);
      const currentQty = current ? current.qty : 0;
      const difference = newQty - currentQty;

      // Skip if no change
      if (difference === 0 && current) {
        skipCount++;
        continue;
      }

      // Determine operation type
      let operation;
      if (!current) {
        operation = 'CREATE';
        createCount++;
      } else if (difference > 0) {
        operation = 'RECEIPT';
        updateCount++;
      } else {
        operation = 'DISPATCH';
        updateCount++;
      }

      // Create update record
      updates.push({
        ref: current ? current.ref : `${sku}-${location}`,
        productRef: sku,
        locationRef: location,
        qty: newQty,
        type: operation === 'CREATE' ? 'ADJUSTMENT' : operation,
        adjustmentQty: Math.abs(difference),
        reason: 'MULTI_SOURCE_SYNC',
        attributes: {
          sources: Array.from(aggregated.sources).join(','),
          previousQty: currentQty,
          newQty: newQty,
          difference: difference,
          syncedAt: new Date().toISOString(),
        },
      });

      // Add to reconciliation report
      reconciliationReport.push({
        sku,
        location,
        operation,
        previousQty: currentQty,
        newQty: newQty,
        difference,
        sources: Array.from(aggregated.sources),
        timestamp: new Date().toISOString(),
      });
    }
  }

  logger.info('Reconciliation complete', {
    totalDifferences: updates.length,
    creates: createCount,
    updates: updateCount,
    skipped: skipCount,
  });

  return { updates, reconciliationReport };
}

/**
 * Execute Inventory Updates
 *
 * BATCH UPDATE EXECUTION
 *
 * Sends inventory updates to Fluent Commerce via GraphQL mutations.
 * Processes updates in batches to avoid API limits.
 *
 * Mutation Pattern:
 * - Uses adjustInventory mutation for updates
 * - Batches of 100 records per mutation
 * - Tracks success/failure per batch
 * - Logs errors for retry/debugging
 *
 * @param {FluentClient} fluentClient - Fluent Commerce API client
 * @param {Array} updates - Array of update records
 * @returns {Promise<Object>} { success: number, failed: number, errors: [] }
 */
async function executeUpdates(fluentClient, updates) {
  logger.info(`Executing ${updates.length} inventory updates`);

  if (updates.length === 0) {
    logger.info('No updates to execute');
    return { success: 0, failed: 0, errors: [] };
  }

  const batchSize = config.processing.batchSize;
  let successCount = 0;
  let failedCount = 0;
  const errors = [];

  // Process in batches
  for (let i = 0; i < updates.length; i += batchSize) {
    const batch = updates.slice(i, i + batchSize);
    const batchNumber = Math.floor(i / batchSize) + 1;
    const totalBatches = Math.ceil(updates.length / batchSize);

    logger.info(`Processing batch ${batchNumber}/${totalBatches} (${batch.length} records)`);

    try {
      // GraphQL mutation for inventory adjustment
      const mutation = `
        mutation AdjustInventory($updates: [InventoryAdjustmentInput]!) {
          adjustInventory(input: $updates) {
            success
            failed
            errors {
              ref
              message
              code
            }
          }
        }
      `;

      const result = await fluentClient.graphqlMutation(mutation, {
        updates: batch,
      });

      if (result.data && result.data.adjustInventory) {
        const batchResult = result.data.adjustInventory;
        successCount += batchResult.success || 0;
        failedCount += batchResult.failed || 0;

        if (batchResult.errors && batchResult.errors.length > 0) {
          errors.push(...batchResult.errors);
          logger.warn(`Batch ${batchNumber} had ${batchResult.errors.length} errors`, {
            errors: batchResult.errors.slice(0, 5), // Log first 5 errors
          });
        }

        logger.info(
          `Batch ${batchNumber} complete: ${batchResult.success} success, ${batchResult.failed} failed`
        );
      }
    } catch (batchError) {
      logger.error(`Batch ${batchNumber} failed completely`, batchError);
      failedCount += batch.length;
      errors.push({
        batch: batchNumber,
        message: batchError.message,
        records: batch.length,
      });
    }
  }

  logger.info('Update execution complete', {
    totalUpdates: updates.length,
    success: successCount,
    failed: failedCount,
    errors: errors.length,
  });

  return { success: successCount, failed: failedCount, errors };
}

/**
 * Generate Reconciliation Report
 *
 * COMPREHENSIVE REPORTING
 *
 * Creates a detailed JSON report of the reconciliation process.
 * Includes statistics, detailed changes, and errors.
 * Uploads the report to S3 for auditing and monitoring.
 *
 * Report Structure:
 * - timestamp: When reconciliation ran
 * - sourcesProcessed: Files processed per source
 * - statistics: Summary counts
 * - reconciliation: Detailed changes (first 1000)
 * - errors: Any errors encountered
 *
 * @param {S3DataSource} reportS3Source - S3 data source for report upload
 * @param {Object} stats - Statistics object
 * @returns {Promise<string>} Report S3 key
 */
async function generateReconciliationReport(reportS3Source, stats) {
  logger.info('Generating reconciliation report');

  const timestamp = new Date().toISOString();
  const reportKey = `reconciliation/report-${timestamp.replace(/[:.]/g, '-')}.json`;

  const report = {
    timestamp,
    version: '1.0.0',
    sourcesProcessed: {
      sftpFiles: stats.sftpFiles,
      s3Files: stats.s3Files,
    },
    statistics: {
      totalSKUs: stats.totalSKUs,
      totalLocations: stats.totalLocations,
      totalUpdates: stats.totalUpdates,
      creates: stats.creates,
      receipts: stats.receipts,
      dispatches: stats.dispatches,
      skipped: stats.skipped,
      success: stats.success,
      failed: stats.failed,
    },
    reconciliation: stats.reconciliationDetails.slice(0, 1000), // Limit to first 1000 for report size
    errors: stats.errors || [],
  };

  // Upload report to S3
  try {
    await reportS3Source.uploadFile(reportKey, JSON.stringify(report, null, 2), {
      contentType: 'application/json',
    });

    logger.info(`Reconciliation report uploaded: ${reportKey}`);
    return reportKey;
  } catch (error) {
    logger.error('Failed to upload reconciliation report', error);
    throw error;
  }
}

/**
 * Main Aggregation Function
 *
 * ORCHESTRATION ENTRY POINT
 *
 * Coordinates the entire multi-source aggregation workflow:
 * 1. Validate configuration
 * 2. Initialize data sources and Fluent client
 * 3. Collect inventory from SFTP + S3
 * 4. Query current Fluent inventory
 * 5. Reconcile differences
 * 6. Execute updates
 * 7. Generate report
 *
 * Error Handling Strategy:
 * - Configuration errors: Fail immediately
 * - Source collection errors: Continue with other sources
 * - Reconciliation errors: Fail (cannot proceed without comparison)
 * - Update errors: Track failures but continue batch processing
 *
 * @returns {Promise<Object>} Final statistics
 */
async function aggregateInventory() {
  const startTime = Date.now();

  logger.info('='.repeat(80));
  logger.info('MULTI-SOURCE INVENTORY AGGREGATION - START');
  logger.info('='.repeat(80));

  try {
    // Step 1: Validate configuration
    validateConfig();

    // Step 2: Initialize data sources
    const { sftpSource, s3Source, reportS3Source } = initializeDataSources();

    // Step 3: Create Fluent client
    logger.info('Creating Fluent Commerce client');
    const fluentClient = await createClient(config.fluent);
    logger.info('Fluent client created successfully');

    // Step 4: Initialize aggregation map
    // Map structure: SKU -> { sku, locations: Map<location, qty>, sources: Set<source> }
    const inventoryMap = new Map();

    // Step 5: Collect inventory from SFTP (warehouses)
    logger.info('-'.repeat(80));
    logger.info('PHASE 1: SFTP Collection');
    logger.info('-'.repeat(80));
    const sftpFiles = await collectSftpInventory(sftpSource, inventoryMap);

    // Step 6: Collect inventory from S3 (stores)
    logger.info('-'.repeat(80));
    logger.info('PHASE 2: S3 Collection');
    logger.info('-'.repeat(80));
    const s3Files = await collectS3Inventory(s3Source, inventoryMap);

    logger.info('-'.repeat(80));
    logger.info('AGGREGATION COMPLETE');
    logger.info('-'.repeat(80));
    logger.info(`Total unique SKUs: ${inventoryMap.size}`);

    // Calculate total locations
    let totalLocations = 0;
    for (const item of inventoryMap.values()) {
      totalLocations += item.locations.size;
    }
    logger.info(`Total locations: ${totalLocations}`);

    // Step 7: Query current Fluent inventory
    logger.info('-'.repeat(80));
    logger.info('PHASE 3: Current State Query');
    logger.info('-'.repeat(80));
    const currentInventory = await queryCurrentInventory(fluentClient);

    // Step 8: Reconcile differences
    logger.info('-'.repeat(80));
    logger.info('PHASE 4: Reconciliation');
    logger.info('-'.repeat(80));
    const { updates, reconciliationReport } = reconcileInventory(inventoryMap, currentInventory);

    // Step 9: Execute updates
    logger.info('-'.repeat(80));
    logger.info('PHASE 5: Update Execution');
    logger.info('-'.repeat(80));
    const updateResult = await executeUpdates(fluentClient, updates);

    // Step 10: Generate report
    logger.info('-'.repeat(80));
    logger.info('PHASE 6: Report Generation');
    logger.info('-'.repeat(80));

    const stats = {
      sftpFiles,
      s3Files,
      totalSKUs: inventoryMap.size,
      totalLocations,
      totalUpdates: updates.length,
      creates: updates.filter(u => u.type === 'ADJUSTMENT').length,
      receipts: updates.filter(u => u.type === 'RECEIPT').length,
      dispatches: updates.filter(u => u.type === 'DISPATCH').length,
      skipped: totalLocations - updates.length,
      success: updateResult.success,
      failed: updateResult.failed,
      errors: updateResult.errors,
      reconciliationDetails: reconciliationReport,
    };

    const reportKey = await generateReconciliationReport(reportS3Source, stats);

    // Step 11: Final summary
    const duration = Math.round((Date.now() - startTime) / 1000);

    logger.info('='.repeat(80));
    logger.info('MULTI-SOURCE INVENTORY AGGREGATION - COMPLETE');
    logger.info('='.repeat(80));
    logger.info(`Duration: ${duration} seconds`);
    logger.info(`Sources processed: ${sftpFiles} SFTP files, ${s3Files} S3 files`);
    logger.info(`Total SKUs: ${stats.totalSKUs}`);
    logger.info(
      `Total updates: ${stats.totalUpdates} (${stats.success} success, ${stats.failed} failed)`
    );
    logger.info(`Report: ${reportKey}`);
    logger.info('='.repeat(80));

    return stats;
  } catch (error) {
    logger.error('Multi-source aggregation failed', error);
    throw error;
  }
}

// Execute if run directly
if (import.meta.url === `file://${process.argv[1]}`) {
  aggregateInventory()
    .then(() => {
      logger.info('Aggregation completed successfully');
      process.exit(0);
    })
    .catch(error => {
      logger.error('Aggregation failed with error', error);
      process.exit(1);
    });
}

export { aggregateInventory, config };
```
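
One small detail worth noting in `generateReconciliationReport`: the ISO timestamp contains `:` and `.`, which are awkward in S3 object keys, so they are replaced with `-`. A standalone sketch of the resulting key shape (the timestamp value is an illustrative example, not a real run):

```javascript
// Report key sanitization: ':' and '.' in the ISO timestamp become '-'.
const timestamp = '2024-01-15T10:30:00.000Z'; // example value of new Date().toISOString()
const reportKey = `reconciliation/report-${timestamp.replace(/[:.]/g, '-')}.json`;

console.log(reportKey); // reconciliation/report-2024-01-15T10-30-00-000Z.json
```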

### 4. Scheduler (Optional)

Create `src/scheduler.js` for scheduled execution:

```javascript
import cron from 'node-cron';
import { aggregateInventory } from './index.js';

/**
 * SCHEDULED AGGREGATION
 *
 * Runs aggregation on a schedule using cron syntax.
 * Default: Hourly at minute 0 (e.g., 1:00, 2:00, 3:00)
 *
 * Cron Pattern: '0 * * * *'
 * - Minute: 0 (at the top of the hour)
 * - Hour: * (every hour)
 * - Day of Month: * (every day)
 * - Month: * (every month)
 * - Day of Week: * (every day of week)
 */

console.log('Multi-Source Inventory Aggregation Scheduler');
console.log('Schedule: Hourly at minute 0');
console.log('Press Ctrl+C to stop');

// Schedule task
cron.schedule('0 * * * *', async () => {
  console.log(`\n${'='.repeat(80)}`);
  console.log(`Scheduled aggregation triggered: ${new Date().toISOString()}`);
  console.log('='.repeat(80));

  try {
    await aggregateInventory();
    console.log('Scheduled aggregation completed successfully');
  } catch (error) {
    console.error('Scheduled aggregation failed:', error);
  }
});

// Run immediately on startup (optional)
if (process.env.RUN_ON_STARTUP === 'true') {
  console.log('Running initial aggregation on startup...');
  aggregateInventory()
    .then(() => console.log('Initial aggregation complete'))
    .catch(error => console.error('Initial aggregation failed:', error));
}
```

## Key Patterns Explained

### Pattern 1: Multi-Source Data Collection

**Challenge**: Collecting inventory from heterogeneous sources (CSV from SFTP, JSON from S3)

**Solution**: Source-specific collection functions with unified aggregation structure

```javascript
// Each source has its own collection function
await collectSftpInventory(sftpSource, inventoryMap); // CSV → Map
await collectS3Inventory(s3Source, inventoryMap); // JSON → Map

// Both populate the same Map structure:
// Map<SKU, { locations: Map<location, qty>, sources: Set<source> }>
```

**Why This Works**:

- Abstraction: Each source function handles its own format
- Unified Structure: All sources aggregate into the same Map
- Source Tracking: Set tracks which sources contributed to each SKU
- Deduplication: Map automatically handles duplicate SKU+location combinations

### Pattern 2: Map-Based Aggregation & Deduplication

**Challenge**: Combine inventory from multiple sources, handling duplicates and quantities

**Solution**: Nested Map structure with additive aggregation

```javascript
// Aggregation data structure
const inventoryMap = new Map(); // SKU -> AggregatedItem

// AggregatedItem structure (shown as a type sketch, not executable code):
// {
//   sku: 'PROD-001',
//   locations: Map<string, number>, // location -> quantity
//   sources: Set<string>            // e.g. ['WAREHOUSE_SFTP', 'STORE_S3']
// }

// Aggregation algorithm (additive for duplicates)
if (!inventoryMap.has(sku)) {
  inventoryMap.set(sku, {
    sku: sku,
    locations: new Map(),
    sources: new Set(),
  });
}

const item = inventoryMap.get(sku);
const currentQty = item.locations.get(location) || 0;
item.locations.set(location, currentQty + quantity); // ADD quantities
item.sources.add(sourceName);
```

**Why This Works**:

- O(1) lookup: Map provides fast SKU lookup
- O(1) duplicate detection: Map automatically handles duplicates
- Additive logic: Quantities from different sources are summed
- Source provenance: Track which sources contributed (debugging/auditing)
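
The additive behavior is easy to verify in isolation. This self-contained sketch (the SKU, location, and source names are illustrative) aggregates two feeds that report the same SKU+location and confirms the quantities are summed, not overwritten:

```javascript
// Standalone demo of additive Map-based aggregation (hypothetical data).
function addRecord(inventoryMap, sourceName, { sku, location, quantity }) {
  if (!inventoryMap.has(sku)) {
    inventoryMap.set(sku, { sku, locations: new Map(), sources: new Set() });
  }
  const item = inventoryMap.get(sku);
  // Duplicate SKU+location combinations are summed, not overwritten
  item.locations.set(location, (item.locations.get(location) || 0) + quantity);
  item.sources.add(sourceName);
}

const inventoryMap = new Map();
addRecord(inventoryMap, 'WAREHOUSE_SFTP', { sku: 'PROD-001', location: 'WH-01', quantity: 100 });
addRecord(inventoryMap, 'STORE_S3', { sku: 'PROD-001', location: 'WH-01', quantity: 25 });

const item = inventoryMap.get('PROD-001');
console.log(item.locations.get('WH-01')); // 125
console.log([...item.sources]); // [ 'WAREHOUSE_SFTP', 'STORE_S3' ]
```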

### Pattern 3: Reconciliation Algorithm

**Challenge**: Compare aggregated inventory with Fluent current state to find differences

**Solution**: Key-based lookup with delta calculation

```javascript
// Build lookup key: "SKU:LOCATION"
const key = `${sku}:${location}`;

// Look up current state
const current = currentInventory.get(key);
const currentQty = current ? current.qty : 0;

// Calculate delta
const difference = newQty - currentQty;

// Determine operation
if (difference === 0) {
  operation = 'SKIP'; // No change
} else if (!current) {
  operation = 'CREATE'; // New inventory position
} else if (difference > 0) {
  operation = 'RECEIPT'; // Increase
} else {
  operation = 'DISPATCH'; // Decrease
}
```

**Why This Works**:

- Key-based comparison: Fast O(1) lookup for each SKU+location
- Delta detection: Only send changes (minimizes API calls)
- Operation classification: Clear intent (CREATE/RECEIPT/DISPATCH)
- Skip optimization: Avoid unnecessary updates when qty unchanged
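
Extracted as a pure function (a sketch that mirrors the branch order above, not part of the SDK itself), the classification becomes trivial to unit-test:

```javascript
// Classify a single SKU+location delta (mirrors the guide's branch order).
// `current` is the existing Fluent position, or undefined if none exists.
function classify(current, newQty) {
  const currentQty = current ? current.qty : 0;
  const difference = newQty - currentQty;
  if (difference === 0) return 'SKIP'; // no change
  if (!current) return 'CREATE'; // new inventory position
  if (difference > 0) return 'RECEIPT'; // increase
  return 'DISPATCH'; // decrease
}

console.log(classify({ qty: 100 }, 100)); // SKIP
console.log(classify(undefined, 50)); // CREATE
console.log(classify({ qty: 100 }, 150)); // RECEIPT
console.log(classify({ qty: 100 }, 40)); // DISPATCH
```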

### Pattern 4: Batch Update Execution

**Challenge**: Send thousands of inventory updates without hitting API limits

**Solution**: Batch processing with error tracking

```javascript
const batchSize = 100;

for (let i = 0; i < updates.length; i += batchSize) {
  const batch = updates.slice(i, i + batchSize);

  try {
    const result = await fluentClient.graphqlMutation(mutation, {
      updates: batch,
    });

    // Track success/failure per batch
    successCount += result.data.adjustInventory.success;
    failedCount += result.data.adjustInventory.failed;
  } catch (batchError) {
    // Log error but continue processing other batches
    logger.error(`Batch ${batchNumber} failed`, batchError);
    failedCount += batch.length;
  }
}
```

**Why This Works**:

- Batch optimization: Reduces API calls (1000 updates → 10 API calls at batch size 100)
- Error isolation: One batch failure doesn't stop other batches
- Progress tracking: Log after each batch for monitoring
- Retry-friendly: Failed batches can be retried individually
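
The batching arithmetic (1000 updates at batch size 100 yielding 10 API calls) can be checked directly with a stub array; the record objects here are placeholders:

```javascript
// Batch slicing sketch: count batches and confirm no record is lost or duplicated.
const updates = Array.from({ length: 1000 }, (_, i) => ({ ref: `rec-${i}` }));
const batchSize = 100;

const batches = [];
for (let i = 0; i < updates.length; i += batchSize) {
  batches.push(updates.slice(i, i + batchSize));
}

console.log(batches.length); // 10
console.log(batches.reduce((n, b) => n + b.length, 0)); // 1000
console.log(Math.ceil(updates.length / batchSize)); // 10 (totalBatches, as in executeUpdates)
```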

### Pattern 5: Comprehensive Report Generation

**Challenge**: Provide audit trail and debugging information for reconciliation

**Solution**: Structured JSON report with statistics and details

```javascript
const report = {
  timestamp: new Date().toISOString(),
  version: '1.0.0',

  // Source processing statistics
  sourcesProcessed: {
    sftpFiles: 5,
    s3Files: 20,
  },

  // Aggregate statistics
  statistics: {
    totalSKUs: 1250,
    totalLocations: 3400,
    totalUpdates: 450,
    creates: 50,
    receipts: 200,
    dispatches: 200,
    skipped: 2950,
    success: 445,
    failed: 5,
  },

  // Detailed changes (first 1000)
  reconciliation: [
    {
      sku: 'PROD-001',
      location: 'WH-01',
      operation: 'RECEIPT',
      previousQty: 100,
      newQty: 150,
      difference: 50,
      sources: ['WAREHOUSE_SFTP'],
      timestamp: '2024-01-15T10:30:00Z',
    },
  ],

  // Errors encountered
  errors: [],
};

// Upload to S3 for persistence
await reportS3Source.uploadFile(reportKey, JSON.stringify(report, null, 2));
```

**Why This Works**:

- Audit trail: Permanent record of what was updated
- Debugging: Detailed information for troubleshooting failures
- Monitoring: Statistics for dashboards and alerts
- Compliance: Provenance tracking (which sources contributed)

## Deployment Options

### Option 1: Scheduled Execution (Hourly)

Run aggregation every hour using cron:

```bash
# Install dependencies
npm install

# Run scheduler (stays running, executes hourly)
npm run aggregation:schedule
```

**Use Case**: Regular inventory synchronization from multiple sources

### Option 2: Manual Execution (On-Demand)

Run aggregation once, manually triggered:

```bash
# Single execution
npm run aggregation:once
```

**Use Case**: Testing, troubleshooting, or ad-hoc reconciliation

### Option 3: Containerized Deployment

Create `Dockerfile`:

```dockerfile
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --production

COPY src/ ./src/

CMD ["node", "src/scheduler.js"]
```

Build and run:

```bash
# Build Docker image
docker build -t multi-source-aggregation .

# Run container
docker run -d \
  --name inventory-aggregation \
  --env-file .env \
  --restart unless-stopped \
  multi-source-aggregation
```

**Use Case**: Production deployment with container orchestration (Kubernetes, ECS)

## Testing

### Test with Mock Data

Create `tests/test-aggregation.js`:

```javascript
import { aggregateInventory, config } from '../src/index.js';

// Override config for testing
config.processing.batchSize = 10;

// Mock environment variables
process.env.FLUENT_BASE_URL = 'https://api.fluentcommerce.com';
process.env.FLUENT_CLIENT_ID = 'test-client-id';
// ... (other test env vars)

// Run aggregation
aggregateInventory()
  .then(stats => {
    console.log('Test completed successfully');
    console.log(JSON.stringify(stats, null, 2));
  })
  .catch(error => {
    console.error('Test failed:', error);
    process.exit(1);
  });
```

Run the test:

```bash
node tests/test-aggregation.js
```

## Common Issues

### Issue 1: SFTP Connection Timeout

**Symptom**: `Error: Connection timeout after 30s`

**Solution**: Increase the timeout in `.env`:

```bash
WAREHOUSE_SFTP_TIMEOUT=60000
```

Or in code (`src/index.js`):

```javascript
settings: {
  ...config.warehouseSftp,
  connectionTimeout: 60000, // 60 seconds
}
```

### Issue 2: S3 Access Denied

**Symptom**: `Error: AccessDenied: User is not authorized`

**Solution**:

- Verify AWS credentials are correct
- Check IAM policy includes:
  - `s3:GetObject` on source bucket
  - `s3:PutObject` on report bucket
  - `s3:ListBucket` on both buckets

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::store-inventory", "arn:aws:s3:::store-inventory/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::inventory-reports/*"]
    }
  ]
}
```

### Issue 3: Fluent GraphQL Errors

**Symptom**: `Error: adjustInventory mutation failed`

**Solution**:

- Check Fluent API credentials
- Verify retailerId is correct
- Ensure inventory positions exist (or use CREATE operation)
- Check mutation input format matches Fluent schema

### Issue 4: Memory Issues with Large Datasets

**Symptom**: `JavaScript heap out of memory`

**Solution**: Increase the Node.js heap size:

```bash
node --max-old-space-size=4096 src/index.js
```

Or reduce the batch size in config:

```javascript
config.processing.batchSize = 50;
```

### Issue 5: Duplicate SKUs from Multiple Sources

**Symptom**: Quantities are doubled or incorrect

**Solution**:

- **Intended behavior**: Quantities are ADDED from multiple sources
- If you need OVERRIDE behavior (last source wins), modify the aggregation:

```javascript
// OVERRIDE mode (replace instead of add)
item.locations.set(location, quantity); // Don't add, just set
```

### Issue 6: Report Upload Failures

**Symptom**: `Failed to upload reconciliation report`

**Solution**:

- Check the report S3 bucket exists
- Verify write permissions
- Check the S3 region matches configuration
- Ensure network connectivity to S3

## Related Guides

- **[Simple CSV Ingestion](./s3-csv-batch-api.md)** - Single source ingestion pattern
- **[SFTP to Fluent](../../02-CORE-GUIDES/ingestion/ingestion-readme.md)** - SFTP-specific workflows (see Versori templates)
- **[S3 Parquet Extraction](./graphql-query-export.md)** - S3 and Parquet handling
- **[Connector Scenarios](../../02-CORE-GUIDES/advanced-services/advanced-services-readme.md)** - Multi-source patterns
- **[Universal Mapping Guide](../../02-CORE-GUIDES/advanced-services/advanced-services-readme.md)** - Field transformation
- **[SDK Resolvers](../../02-CORE-GUIDES/mapping/resolvers/mapping-resolvers-resolver-guide.md)** - Built-in transformations

## Next Steps

1. **Add Custom Transformations**: Use UniversalMapper for complex field mappings
2. **Implement Retry Logic**: Add exponential backoff for transient failures
3. **Add Monitoring**: Integrate with CloudWatch, Datadog, or other monitoring
4. **Implement Alerting**: Send notifications on failures or reconciliation anomalies
5. **Add State Management**: Track processed files to avoid reprocessing
6. **Optimize Performance**: Implement parallel processing for large datasets
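
For step 2, a minimal exponential-backoff wrapper could look like the sketch below. The retry count and delay values are illustrative, not SDK defaults:

```javascript
// Minimal retry-with-exponential-backoff sketch for transient batch failures.
// maxRetries and baseDelayMs are illustrative values, not SDK defaults.
async function withRetry(fn, maxRetries = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage (hypothetical): retry a single failed batch mutation
// const result = await withRetry(() => fluentClient.graphqlMutation(mutation, { updates: batch }));
```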