@hotmeshio/hotmesh 0.12.0 → 0.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -11,14 +11,14 @@ npm install @hotmeshio/hotmesh
  ## Use HotMesh for
 
  - **Durable pipelines** — Orchestrate long-running, multi-step pipelines transactionally.
- - **Familiar Temporal syntax** — The `Durable` module provides a Temporal-compatible API (`Client`, `Worker`, `proxyActivities`, `sleepFor`, `startChild`, signals) that runs directly on Postgres. No app server required.
+ - **Crash-safe execution** — Every step is a committed row. If the process dies, it picks up where it left off.
  - **Distributed state machines** — Build stateful applications where every component can [fail and recover](https://github.com/hotmeshio/sdk-typescript/blob/main/services/collator/README.md).
  - **AI and training pipelines** — Multi-step AI workloads where each stage is expensive and must not be repeated on failure. A crashed pipeline resumes from the last committed step, not from the beginning.
 
  ## How it works in 30 seconds
 
  1. **You write workflow functions.** Plain TypeScript — branching, loops, error handling. HotMesh also supports a YAML syntax for declarative, functional workflows.
- 2. **HotMesh compiles them into a transactional execution plan.** Each step becomes a committed database row. If the process crashes mid-workflow, it resumes from the last committed step.
+ 2. **HotMesh compiles them into a transactional execution plan.** Each step becomes a committed database row backed by Postgres ACID guarantees and monotonic integers for ordering. If the process crashes mid-workflow, it resumes from the last committed step.
  3. **Your Postgres database is the engine.** It stores state, coordinates retries, and delivers messages. Every connected client participates in execution — there is no central server.
 
  ## Quickstart
@@ -56,7 +56,7 @@ export async function notifyBackorder(itemId: string): Promise<void> {
  }
  ```
 
- ### Option 1: Code (Temporal-compatible API)
+ ### Option 1: Code
 
  ```typescript
  // workflows.ts
@@ -305,8 +305,6 @@ ORDER BY
 
  What happened? Consult the database. What's still running? Query the semaphore. What failed? Read the row. The execution state isn't reconstructed from a log — it was committed transactionally as each step ran.
 
- You can also use the Temporal-compatible API:
-
  ```typescript
  const handle = client.workflow.getHandle('orders', 'orderWorkflow', 'order-456');
 
@@ -331,14 +329,6 @@ There is no proprietary dashboard. Workflow state lives in Postgres, so use what
 
  For a deep dive into the transactional execution model — how every step is crash-safe, how the monotonic collation ledger guarantees exactly-once delivery, and how cycles and retries remain correct under arbitrary failure — see the [Collation Design Document](https://github.com/hotmeshio/sdk-typescript/blob/main/services/collator/README.md). The symbolic system (how to design workflows) and lifecycle details (how to deploy workflows) are covered in the [Architectural Overview](https://zenodo.org/records/12168558).
 
- ## Familiar with Temporal?
-
- Durable is designed as a drop-in-compatible alternative for common Temporal patterns.
-
- **What's the same:** `Client`, `Worker`, `proxyActivities`, `sleepFor`, `startChild`/`execChild`, signals (`waitFor`/`signal`), retry policies, and the overall workflow-as-code programming model.
-
- **What's different:** Postgres is the only infrastructure dependency — it stores state and coordinates workers.
-
  ## Running tests
 
  Tests run inside Docker. Start the services and run the full suite:
@@ -351,8 +341,8 @@ docker compose exec hotmesh npm test
  Run a specific test group:
 
  ```bash
- docker compose exec hotmesh npm run test:durable # all Durable tests (Temporal pattern coverage proofs)
- docker compose exec hotmesh npm run test:durable:hello # single Durable test (hello world proxyActivity proof)
+ docker compose exec hotmesh npm run test:durable # all Durable tests
+ docker compose exec hotmesh npm run test:durable:hello # single Durable test (hello world)
  docker compose exec hotmesh npm run test:virtual # all Virtual network function (VNF) tests
  ```
 
@@ -21,6 +21,7 @@ declare class DurableWaitForError extends Error {
  declare class DurableProxyError extends Error {
  activityName: string;
  arguments: string[];
+ argumentMetadata?: Record<string, any>;
  backoffCoefficient: number;
  code: number;
  index: number;
@@ -32,6 +32,7 @@ class DurableProxyError extends Error {
  super(`ProxyActivity Interruption`);
  this.type = 'DurableProxyError';
  this.arguments = params.arguments;
+ this.argumentMetadata = params.argumentMetadata;
  this.workflowId = params.workflowId;
  this.workflowTopic = params.workflowTopic;
  this.parentWorkflowId = params.parentWorkflowId;
@@ -1,3 +1,4 @@
  /// <reference types="node" />
  import { AsyncLocalStorage } from 'async_hooks';
  export declare const asyncLocalStorage: AsyncLocalStorage<Map<string, any>>;
+ export declare const activityAsyncLocalStorage: AsyncLocalStorage<Map<string, any>>;
@@ -1,5 +1,6 @@
  "use strict";
  Object.defineProperty(exports, "__esModule", { value: true });
- exports.asyncLocalStorage = void 0;
+ exports.activityAsyncLocalStorage = exports.asyncLocalStorage = void 0;
  const async_hooks_1 = require("async_hooks");
  exports.asyncLocalStorage = new async_hooks_1.AsyncLocalStorage();
+ exports.activityAsyncLocalStorage = new async_hooks_1.AsyncLocalStorage();
@@ -1,7 +1,7 @@
  {
  "name": "@hotmeshio/hotmesh",
- "version": "0.12.0",
- "description": "Permanent-Memory Workflows & AI Agents",
+ "version": "0.13.0",
+ "description": "Durable Workflow",
  "main": "./build/index.js",
  "types": "./build/index.d.ts",
  "homepage": "https://github.com/hotmeshio/sdk-typescript/",
@@ -33,6 +33,7 @@
  "test:durable:fatal": "vitest run tests/durable/fatal",
  "test:durable:goodbye": "HMSH_LOGLEVEL=debug vitest run tests/durable/goodbye/postgres.test.ts",
  "test:durable:interceptor": "HMSH_LOGLEVEL=info vitest run tests/durable/interceptor/postgres.test.ts",
+ "test:durable:metadata": "HMSH_LOGLEVEL=info vitest run tests/durable/interceptor/postgres.test.ts -t 'argumentMetadata'",
  "test:durable:entity": "HMSH_LOGLEVEL=debug vitest run tests/durable/entity/postgres.test.ts",
  "test:durable:agent": "HMSH_LOGLEVEL=debug vitest run tests/durable/agent/postgres.test.ts",
  "test:durable:hello": "HMSH_TELEMETRY=debug HMSH_LOGLEVEL=info vitest run tests/durable/helloworld/postgres.test.ts",
@@ -29,7 +29,7 @@ import { PostgresClientType } from '../../types/postgres';
  * from completed jobs while preserving:
  * - `jdata` — workflow return data
  * - `udata` — user-searchable data
- * - `jmark` — timeline markers needed for Temporal-compatible export
+ * - `jmark` — timeline markers needed for workflow execution export
  *
  * Set `keepHmark: true` to also preserve `hmark` (activity state markers).
  *
@@ -63,11 +63,14 @@ import { PostgresClientType } from '../../types/postgres';
  * jobs: false, streams: false, attributes: true,
  * });
  *
- * // Cron 2 — Hourly: remove processed stream messages older than 24h
+ * // Cron 2 — Hourly: aggressive engine stream cleanup, conservative worker streams
  * await DBA.prune({
  * appId: 'myapp', connection,
- * expire: '24 hours',
- * jobs: false, streams: true,
+ * jobs: false, attributes: false,
+ * engineStreams: true,
+ * engineStreamsExpire: '24 hours',
+ * workerStreams: true,
+ * workerStreamsExpire: '90 days', // preserve for export fidelity
  * });
  *
  * // Cron 3 — Weekly: remove expired 'book' jobs older than 30 days
@@ -101,6 +104,13 @@ import { PostgresClientType } from '../../types/postgres';
  *
  * -- Prune everything older than 7 days and strip attributes
  * SELECT * FROM myapp.prune('7 days', true, true, true);
+ *
+ * -- Independent stream retention: engine 24h, worker 90 days
+ * SELECT * FROM myapp.prune(
+ * '7 days', true, false, false, NULL, false, false,
+ * true, true, -- prune_engine_streams, prune_worker_streams
+ * INTERVAL '24 hours', INTERVAL '90 days' -- per-table retention
+ * );
  * ```
  */
  declare class DBA {
@@ -32,7 +32,7 @@ const postgres_1 = require("../connector/providers/postgres");
  * from completed jobs while preserving:
  * - `jdata` — workflow return data
  * - `udata` — user-searchable data
- * - `jmark` — timeline markers needed for Temporal-compatible export
+ * - `jmark` — timeline markers needed for workflow execution export
  *
  * Set `keepHmark: true` to also preserve `hmark` (activity state markers).
  *
@@ -66,11 +66,14 @@ const postgres_1 = require("../connector/providers/postgres");
  * jobs: false, streams: false, attributes: true,
  * });
  *
- * // Cron 2 — Hourly: remove processed stream messages older than 24h
+ * // Cron 2 — Hourly: aggressive engine stream cleanup, conservative worker streams
  * await DBA.prune({
  * appId: 'myapp', connection,
- * expire: '24 hours',
- * jobs: false, streams: true,
+ * jobs: false, attributes: false,
+ * engineStreams: true,
+ * engineStreamsExpire: '24 hours',
+ * workerStreams: true,
+ * workerStreamsExpire: '90 days', // preserve for export fidelity
  * });
  *
  * // Cron 3 — Weekly: remove expired 'book' jobs older than 30 days
@@ -104,6 +107,13 @@ const postgres_1 = require("../connector/providers/postgres");
  *
  * -- Prune everything older than 7 days and strip attributes
  * SELECT * FROM myapp.prune('7 days', true, true, true);
+ *
+ * -- Independent stream retention: engine 24h, worker 90 days
+ * SELECT * FROM myapp.prune(
+ * '7 days', true, false, false, NULL, false, false,
+ * true, true, -- prune_engine_streams, prune_worker_streams
+ * INTERVAL '24 hours', INTERVAL '90 days' -- per-table retention
+ * );
  * ```
  */
  class DBA {
@@ -172,11 +182,17 @@ class DBA {
  strip_attributes BOOLEAN DEFAULT FALSE,
  entity_list TEXT[] DEFAULT NULL,
  prune_transient BOOLEAN DEFAULT FALSE,
- keep_hmark BOOLEAN DEFAULT FALSE
+ keep_hmark BOOLEAN DEFAULT FALSE,
+ prune_engine_streams BOOLEAN DEFAULT NULL,
+ prune_worker_streams BOOLEAN DEFAULT NULL,
+ engine_streams_retention INTERVAL DEFAULT NULL,
+ worker_streams_retention INTERVAL DEFAULT NULL
  )
  RETURNS TABLE(
  deleted_jobs BIGINT,
  deleted_streams BIGINT,
+ deleted_engine_streams BIGINT,
+ deleted_worker_streams BIGINT,
  stripped_attributes BIGINT,
  deleted_transient BIGINT,
  marked_pruned BIGINT
@@ -185,12 +201,22 @@ class DBA {
  AS $$
  DECLARE
  v_deleted_jobs BIGINT := 0;
- v_deleted_streams BIGINT := 0;
+ v_deleted_engine_streams BIGINT := 0;
+ v_deleted_worker_streams BIGINT := 0;
  v_stripped_attributes BIGINT := 0;
  v_deleted_transient BIGINT := 0;
  v_marked_pruned BIGINT := 0;
- v_temp_count BIGINT := 0;
+ v_do_engine BOOLEAN;
+ v_do_worker BOOLEAN;
+ v_engine_retention INTERVAL;
+ v_worker_retention INTERVAL;
  BEGIN
+ -- Resolve per-table overrides (fall back to legacy prune_streams + retention)
+ v_do_engine := COALESCE(prune_engine_streams, prune_streams);
+ v_do_worker := COALESCE(prune_worker_streams, prune_streams);
+ v_engine_retention := COALESCE(engine_streams_retention, retention);
+ v_worker_retention := COALESCE(worker_streams_retention, retention);
+
  -- 1. Hard-delete expired jobs older than the retention window.
  -- FK CASCADE on jobs_attributes handles attribute cleanup.
  -- Optionally scoped to an entity allowlist.
@@ -211,22 +237,23 @@ class DBA {
  GET DIAGNOSTICS v_deleted_transient = ROW_COUNT;
  END IF;
 
- -- 3. Hard-delete expired stream messages older than the retention window.
- -- Deletes from both engine_streams and worker_streams tables.
- IF prune_streams THEN
+ -- 3. Hard-delete expired engine stream messages.
+ IF v_do_engine THEN
  DELETE FROM ${schema}.engine_streams
  WHERE expired_at IS NOT NULL
- AND expired_at < NOW() - retention;
- GET DIAGNOSTICS v_deleted_streams = ROW_COUNT;
+ AND expired_at < NOW() - v_engine_retention;
+ GET DIAGNOSTICS v_deleted_engine_streams = ROW_COUNT;
+ END IF;
 
+ -- 4. Hard-delete expired worker stream messages.
+ IF v_do_worker THEN
  DELETE FROM ${schema}.worker_streams
  WHERE expired_at IS NOT NULL
- AND expired_at < NOW() - retention;
- GET DIAGNOSTICS v_temp_count = ROW_COUNT;
- v_deleted_streams := v_deleted_streams + v_temp_count;
+ AND expired_at < NOW() - v_worker_retention;
+ GET DIAGNOSTICS v_deleted_worker_streams = ROW_COUNT;
  END IF;
 
- -- 4. Strip execution artifacts from completed, live, un-pruned jobs.
+ -- 5. Strip execution artifacts from completed, live, un-pruned jobs.
  -- Always preserves: jdata, udata, jmark (timeline/export history).
  -- Optionally preserves: hmark (when keep_hmark is true).
  IF strip_attributes THEN
@@ -261,7 +288,9 @@ class DBA {
  END IF;
 
  deleted_jobs := v_deleted_jobs;
- deleted_streams := v_deleted_streams;
+ deleted_streams := v_deleted_engine_streams + v_deleted_worker_streams;
+ deleted_engine_streams := v_deleted_engine_streams;
+ deleted_worker_streams := v_deleted_worker_streams;
  stripped_attributes := v_stripped_attributes;
  deleted_transient := v_deleted_transient;
  marked_pruned := v_marked_pruned;
@@ -357,14 +386,24 @@ class DBA {
  const entities = options.entities ?? null;
  const pruneTransient = options.pruneTransient ?? false;
  const keepHmark = options.keepHmark ?? false;
+ // Per-table overrides (NULL = fall back to streams/expire in SQL)
+ const engineStreams = options.engineStreams ?? null;
+ const workerStreams = options.workerStreams ?? null;
+ const engineStreamsExpire = options.engineStreamsExpire ?? null;
+ const workerStreamsExpire = options.workerStreamsExpire ?? null;
  await DBA.deploy(options.connection, options.appId);
  const { client, release } = await DBA.getClient(options.connection);
  try {
- const result = await client.query(`SELECT * FROM ${schema}.prune($1::interval, $2::boolean, $3::boolean, $4::boolean, $5::text[], $6::boolean, $7::boolean)`, [expire, jobs, streams, attributes, entities, pruneTransient, keepHmark]);
+ const result = await client.query(`SELECT * FROM ${schema}.prune($1::interval, $2::boolean, $3::boolean, $4::boolean, $5::text[], $6::boolean, $7::boolean, $8::boolean, $9::boolean, $10::interval, $11::interval)`, [
+ expire, jobs, streams, attributes, entities, pruneTransient, keepHmark,
+ engineStreams, workerStreams, engineStreamsExpire, workerStreamsExpire,
+ ]);
  const row = result.rows[0];
  return {
  jobs: Number(row.deleted_jobs),
  streams: Number(row.deleted_streams),
+ engineStreams: Number(row.deleted_engine_streams),
+ workerStreams: Number(row.deleted_worker_streams),
  attributes: Number(row.stripped_attributes),
  transient: Number(row.deleted_transient),
  marked: Number(row.marked_pruned),
@@ -0,0 +1,30 @@
+ import { DurableActivityContext } from '../../types/durable';
+ /**
+ * The activity-internal API surface, exposed as `Durable.activity`. Methods
+ * on this class are designed to be called **inside** an activity function —
+ * they read from the activity's `AsyncLocalStorage` context that is populated
+ * by the activity worker before invoking the user's function.
+ *
+ * ## Usage
+ *
+ * ```typescript
+ * import { Durable } from '@hotmeshio/hotmesh';
+ *
+ * export async function processData(data: string): Promise<string> {
+ * const ctx = Durable.activity.getContext();
+ * console.log(`Activity ${ctx.activityName} for workflow ${ctx.workflowId}`);
+ * console.log(`Metadata:`, ctx.argumentMetadata);
+ * return `Processed: ${data}`;
+ * }
+ * ```
+ */
+ export declare class ActivityService {
+ /**
+ * Returns the current activity's execution context. Only available
+ * inside an activity function invoked by the durable activity worker.
+ *
+ * @returns The activity context including name, args, metadata, and parent workflow info.
+ * @throws If called outside of an activity execution context.
+ */
+ static getContext(): DurableActivityContext;
+ }
@@ -0,0 +1,46 @@
+ "use strict";
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.ActivityService = void 0;
+ const storage_1 = require("../../modules/storage");
+ /**
+ * The activity-internal API surface, exposed as `Durable.activity`. Methods
+ * on this class are designed to be called **inside** an activity function —
+ * they read from the activity's `AsyncLocalStorage` context that is populated
+ * by the activity worker before invoking the user's function.
+ *
+ * ## Usage
+ *
+ * ```typescript
+ * import { Durable } from '@hotmeshio/hotmesh';
+ *
+ * export async function processData(data: string): Promise<string> {
+ * const ctx = Durable.activity.getContext();
+ * console.log(`Activity ${ctx.activityName} for workflow ${ctx.workflowId}`);
+ * console.log(`Metadata:`, ctx.argumentMetadata);
+ * return `Processed: ${data}`;
+ * }
+ * ```
+ */
+ class ActivityService {
+ /**
+ * Returns the current activity's execution context. Only available
+ * inside an activity function invoked by the durable activity worker.
+ *
+ * @returns The activity context including name, args, metadata, and parent workflow info.
+ * @throws If called outside of an activity execution context.
+ */
+ static getContext() {
+ const store = storage_1.activityAsyncLocalStorage.getStore();
+ if (!store) {
+ throw new Error('Durable.activity.getContext() called outside of an activity execution context');
+ }
+ return {
+ activityName: store.get('activityName'),
+ arguments: store.get('arguments'),
+ argumentMetadata: store.get('argumentMetadata') ?? {},
+ workflowId: store.get('workflowId'),
+ workflowTopic: store.get('workflowTopic'),
+ };
+ }
+ }
+ exports.ActivityService = ActivityService;
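The new `ActivityService` rests on a simple `AsyncLocalStorage` pattern: the worker enters a populated store before invoking the user's activity function, and `getContext()` reads it back. A self-contained sketch of that pattern (the names `activityStore`, `getContext`, and `runActivity` are illustrative stand-ins, not the library's exports):

```typescript
// Minimal sketch of the AsyncLocalStorage pattern used by ActivityService:
// the worker wraps the user function in a context; getContext() reads it back.
import { AsyncLocalStorage } from 'node:async_hooks';

const activityStore = new AsyncLocalStorage<Map<string, any>>();

function getContext() {
  const store = activityStore.getStore();
  if (!store) {
    // Matches the library's behavior: calling outside a context throws.
    throw new Error('getContext() called outside of an activity execution context');
  }
  return {
    activityName: store.get('activityName'),
    workflowId: store.get('workflowId'),
    argumentMetadata: store.get('argumentMetadata') ?? {},
  };
}

// The worker enters the store before invoking the user's activity function.
function runActivity<T>(meta: Map<string, any>, fn: () => T): T {
  return activityStore.run(meta, fn);
}
```

Because `AsyncLocalStorage` propagates across `await` boundaries, the same pattern works unchanged for async activity functions.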
@@ -1,8 +1,8 @@
  import { HotMesh } from '../hotmesh';
  import { ClientConfig, ClientWorkflow, Connection, WorkflowOptions } from '../../types/durable';
  /**
- * The Durable `Client` service is functionally
- * equivalent to the Temporal `Client` service.
+ * The Durable `Client` service provides methods for
+ * starting, signaling, and querying workflows.
  * Start a new workflow execution by calling
  * `workflow.start`. Note the direct connection to
  * Postgres.
@@ -81,8 +81,8 @@ export declare class ClientService {
  */
  search: (hotMeshClient: HotMesh, index: string, query: string[]) => Promise<string[]>;
  /**
- * The Durable `Client` service is functionally
- * equivalent to the Temporal `Client` service.
+ * The Durable `Client` service provides methods for
+ * starting, signaling, and querying workflows.
  * Starting a workflow is the primary use case and
  * is accessed by calling workflow.start().
  */
@@ -11,8 +11,8 @@ const search_1 = require("./search");
  const handle_1 = require("./handle");
  const factory_1 = require("./schemas/factory");
  /**
- * The Durable `Client` service is functionally
- * equivalent to the Temporal `Client` service.
+ * The Durable `Client` service provides methods for
+ * starting, signaling, and querying workflows.
  * Start a new workflow execution by calling
  * `workflow.start`. Note the direct connection to
  * Postgres.
@@ -114,8 +114,8 @@ class ClientService {
  return await searchClient.sendIndexedQuery(index, query);
  };
  /**
- * The Durable `Client` service is functionally
- * equivalent to the Temporal `Client` service.
+ * The Durable `Client` service provides methods for
+ * starting, signaling, and querying workflows.
  * Starting a workflow is the primary use case and
  * is accessed by calling workflow.start().
  */
@@ -54,7 +54,7 @@ declare class ExporterService {
  */
  export(jobId: string, options?: ExportOptions): Promise<DurableJobExport>;
  /**
- * Export a workflow execution as a Temporal-compatible event history.
+ * Export a workflow execution as a structured event history.
  *
  * **Sparse mode** (default): transforms the main workflow's timeline
  * into a flat event list. No additional I/O beyond the initial export.
@@ -80,7 +80,7 @@ declare class ExporterService {
  private resolveSymbolField;
  /**
  * Pure transformation: convert a raw DurableJobExport into a
- * Temporal-compatible WorkflowExecution event history.
+ * WorkflowExecution event history.
  */
  transformToExecution(raw: DurableJobExport, workflowId: string, workflowTopic: string, options: ExecutionExportOptions): WorkflowExecution;
  /**
@@ -173,7 +173,7 @@ class ExporterService {
  return jobExport;
  }
  /**
- * Export a workflow execution as a Temporal-compatible event history.
+ * Export a workflow execution as a structured event history.
  *
  * **Sparse mode** (default): transforms the main workflow's timeline
  * into a flat event list. No additional I/O beyond the initial export.
@@ -510,28 +510,43 @@ class ExporterService {
  }
  }
  }
- // ── 3. Stream-based fallback for unenriched activity events ──
- // When job attributes have been pruned, recover inputs from worker_streams
+ // ── 3. Stream-based enrichment from worker_streams ──
+ // Fetches worker invocation messages to recover:
+ // - Workflow input arguments (for workflow_execution_started)
+ // - Activity inputs when job attributes have been pruned
  if (this.store.getStreamHistory) {
- const unenrichedEvents = execution.events.filter((e) => (e.event_type === 'activity_task_scheduled' ||
+ const unenrichedActivityEvents = execution.events.filter((e) => (e.event_type === 'activity_task_scheduled' ||
  e.event_type === 'activity_task_completed' ||
  e.event_type === 'activity_task_failed') &&
  e.attributes.input === undefined);
- if (unenrichedEvents.length > 0) {
- const streamHistory = await this.store.getStreamHistory(workflowId, {
- types: ['worker'],
- });
+ const startedEvent = execution.events.find((e) => e.event_type === 'workflow_execution_started');
+ const startedNeedsInput = startedEvent &&
+ (!(startedEvent.attributes.input) ||
+ Object.keys(startedEvent.attributes.input).length === 0);
+ if (unenrichedActivityEvents.length > 0 || startedNeedsInput) {
+ const streamHistory = await this.store.getStreamHistory(workflowId);
  // Build a map of aid -> stream message data (the worker invocation inputs)
  const streamInputsByAid = new Map();
  for (const entry of streamHistory) {
- if (entry.msg_type === 'worker' && entry.data) {
+ if (entry.data) {
  const key = `${entry.aid}:${entry.dad || ''}`;
  if (!streamInputsByAid.has(key)) {
  streamInputsByAid.set(key, entry.data);
  }
  }
  }
- for (const evt of unenrichedEvents) {
+ // Enrich workflow_execution_started with input arguments from the
+ // first worker invocation message (aid="worker", data.arguments=[...])
+ if (startedNeedsInput) {
+ const workerEntry = streamHistory.find((entry) => entry.aid === 'worker' &&
+ entry.jid === workflowId &&
+ entry.data?.arguments);
+ if (workerEntry) {
+ startedEvent.attributes.input = workerEntry.data.arguments;
+ }
+ }
+ // Enrich unenriched activity events
+ for (const evt of unenrichedActivityEvents) {
  const attrs = evt.attributes;
  // Try matching by activity_type + dimensional address
  const key = `${attrs.activity_type}:${attrs.timeline_key || ''}`;
@@ -571,7 +586,7 @@ class ExporterService {
  }
  /**
  * Pure transformation: convert a raw DurableJobExport into a
- * Temporal-compatible WorkflowExecution event history.
+ * WorkflowExecution event history.
  */
  transformToExecution(raw, workflowId, workflowTopic, options) {
  const events = [];
@@ -761,7 +776,7 @@ class ExporterService {
  for (let i = 0; i < events.length; i++) {
  events[i].event_id = i + 1;
  }
- // ── Back-references (Temporal-compatible) ────────────────────
+ // ── Back-references ────────────────────────────────────────
  const scheduledMap = new Map();
  const initiatedMap = new Map();
  for (const e of events) {
@@ -45,7 +45,7 @@ export declare class WorkflowHandleService {
  */
  export(options?: ExportOptions): Promise<DurableJobExport>;
  /**
- * Exports the workflow as a Temporal-like execution event history.
+ * Exports the workflow as an execution event history.
  *
  * **Sparse mode** (default): transforms the main workflow's timeline
  * into a flat event list with workflow lifecycle, activity, child workflow,
@@ -44,7 +44,7 @@ class WorkflowHandleService {
  return this.exporter.export(this.workflowId, options);
  }
  /**
- * Exports the workflow as a Temporal-like execution event history.
+ * Exports the workflow as an execution event history.
  *
  * **Sparse mode** (default): transforms the main workflow's timeline
  * into a flat event list with workflow lifecycle, activity, child workflow,
@@ -4,14 +4,15 @@ import { ClientService } from './client';
  import { ConnectionService } from './connection';
  import { Search } from './search';
  import { Entity } from './entity';
+ import { ActivityService } from './activity';
  import { WorkerService } from './worker';
  import { WorkflowService } from './workflow';
  import { WorkflowHandleService } from './handle';
  import { didInterrupt } from './workflow/interruption';
  /**
- * The Durable service provides a Temporal-compatible workflow framework backed
- * by Postgres. It offers entity-based memory management and composable,
- * fault-tolerant workflows.
+ * The Durable service provides a workflow framework backed by Postgres.
+ * It offers entity-based memory management and composable, fault-tolerant
+ * workflows authored in a familiar procedural style.
  *
  * ## Core Features
  *
@@ -277,12 +278,12 @@ declare class DurableClass {
  constructor();
  /**
- * The Durable `Client` service is functionally
- * equivalent to the Temporal `Client` service.
+ * The Durable `Client` service provides methods for
+ * starting, signaling, and querying workflows.
  */
  static Client: typeof ClientService;
  /**
- * The Durable `Connection` service is functionally
- * equivalent to the Temporal `Connection` service.
+ * The Durable `Connection` service
+ * manages database connections for the durable workflow engine.
  */
  static Connection: typeof ConnectionService;
  /**
@@ -301,8 +302,8 @@ declare class DurableClass {
  */
  static Handle: typeof WorkflowHandleService;
  /**
- * The Durable `Worker` service is functionally
- * equivalent to the Temporal `Worker` service.
+ * The Durable `Worker` service
+ * registers workflow and activity workers and connects them to the mesh.
  */
  static Worker: typeof WorkerService;
  /**
@@ -349,10 +350,16 @@ declare class DurableClass {
  */
  static registerActivityWorker: typeof WorkerService.registerActivityWorker;
  /**
- * The Durable `workflow` service is functionally
- * equivalent to the Temporal `Workflow` service
- * with additional methods for managing workflows,
- * including: `execChild`, `waitFor`, `sleep`, etc
+ * The Durable `activity` service provides context to
+ * executing activity functions. Call `Durable.activity.getContext()`
+ * inside an activity to access metadata, workflow ID, and other
+ * context passed from the parent workflow.
+ */
+ static activity: typeof ActivityService;
+ /**
+ * The Durable `workflow` service
+ * provides the workflow-internal API surface with methods for
+ * managing workflows, including: `execChild`, `waitFor`, `sleep`, etc
  */
  static workflow: typeof WorkflowService;
  /**
@@ -7,15 +7,16 @@ const client_1 = require("./client");
7
7
  const connection_1 = require("./connection");
8
8
  const search_1 = require("./search");
9
9
  const entity_1 = require("./entity");
10
+ const activity_1 = require("./activity");
10
11
  const worker_1 = require("./worker");
11
12
  const workflow_1 = require("./workflow");
12
13
  const handle_1 = require("./handle");
13
14
  const interruption_1 = require("./workflow/interruption");
14
15
  const interceptor_1 = require("./interceptor");
15
16
  /**
16
- * The Durable service provides a Temporal-compatible workflow framework backed
17
- * by Postgres. It offers entity-based memory management and composable,
18
- * fault-tolerant workflows.
17
+ * The Durable service provides a workflow framework backed by Postgres.
18
+ * It offers entity-based memory management and composable, fault-tolerant
19
+ * workflows authored in a familiar procedural style.
19
20
  *
20
21
  * ## Core Features
21
22
  *
@@ -381,12 +382,12 @@ class DurableClass {
381
382
  exports.Durable = DurableClass;
382
383
  /**
383
384
  * The Durable `Client` service is functionally
384
- * equivalent to the Temporal `Client` service.
385
+ * provides methods for starting, signaling, and querying workflows.
385
386
  */
386
387
  DurableClass.Client = client_1.ClientService;
387
388
  /**
388
- * The Durable `Connection` service is functionally
389
- * equivalent to the Temporal `Connection` service.
389
+ * The Durable `Connection` service
390
+ * manages database connections for the durable workflow engine.
390
391
  */
391
392
  DurableClass.Connection = connection_1.ConnectionService;
392
393
  /**
@@ -405,8 +406,8 @@ DurableClass.Entity = entity_1.Entity;
  */
  DurableClass.Handle = handle_1.WorkflowHandleService;
  /**
- * The Durable `Worker` service is functionally
- * equivalent to the Temporal `Worker` service.
+ * The Durable `Worker` service
+ * registers workflow and activity workers and connects them to the mesh.
  */
  DurableClass.Worker = worker_1.WorkerService;
  /**
@@ -453,10 +454,16 @@ DurableClass.Worker = worker_1.WorkerService;
  */
  DurableClass.registerActivityWorker = worker_1.WorkerService.registerActivityWorker;
  /**
- * The Durable `workflow` service is functionally
- * equivalent to the Temporal `Workflow` service
- * with additional methods for managing workflows,
- * including: `execChild`, `waitFor`, `sleep`, etc
+ * The Durable `activity` service provides context to
+ * executing activity functions. Call `Durable.activity.getContext()`
+ * inside an activity to access metadata, workflow ID, and other
+ * context passed from the parent workflow.
+ */
+ DurableClass.activity = activity_1.ActivityService;
+ /**
+ * The Durable `workflow` service
+ * provides the workflow-internal API surface with methods for
+ * managing workflows, including: `execChild`, `waitFor`, `sleep`, etc
  */
  DurableClass.workflow = workflow_1.WorkflowService;
  /**
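The new `Durable.activity.getContext()` surface can be sketched with a minimal stand-in. The names below mirror the package (`activityAsyncLocalStorage`, `getContext`) but this is a mock of the pattern, not the real implementation; the `sendEmail` activity is illustrative:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Stand-in for the package's activityAsyncLocalStorage (see worker diff below).
const activityAsyncLocalStorage = new AsyncLocalStorage<Map<string, any>>();

// Stand-in for Durable.activity: reads the store bound by the worker.
const activity = {
  getContext(): Record<string, any> {
    const store = activityAsyncLocalStorage.getStore();
    if (!store) throw new Error('getContext() called outside an activity');
    return Object.fromEntries(store);
  },
};

// An illustrative activity that reads its dispatch context instead of
// requiring the workflow to thread metadata through its arguments.
async function sendEmail(to: string): Promise<string> {
  const ctx = activity.getContext();
  return `sent:${to}:by:${ctx.workflowId}`;
}

// Emulate what WorkerService does before invoking the activity.
const context = new Map<string, any>([
  ['activityName', 'sendEmail'],
  ['arguments', ['a@b.co']],
  ['argumentMetadata', { tenant: 'acme' }],
  ['workflowId', 'wf-123'],
  ['workflowTopic', 'email.send'],
]);

activityAsyncLocalStorage
  .run(context, () => sendEmail('a@b.co'))
  .then((r) => console.log(r)); // logs: sent:a@b.co:by:wf-123
```

In the real package, the `run()` wrapping happens inside `WorkerService` (see the worker hunks later in this diff), so activity authors only ever call `getContext()`.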
@@ -2,8 +2,7 @@
  *********** HOTMESH 'DURABLE' MODULE APPLICATION GRAPH **********
  *
  * This HotMesh application spec uses 50 activities and 25 transitions
- * to model and emulate the Temporal Application & Query servers using
- * a pluggable backend.
+ * to model a durable workflow engine using a pluggable backend.
  *
  * This YAML file can also serve as a useful starting point for building
  * Integration/BPM/Workflow servers in general (MuleSoft, etc) without the need
@@ -17,7 +16,7 @@
  * * Service Meshes
  * * Master Data Management systems
  */
- declare const APP_VERSION = "8";
+ declare const APP_VERSION = "9";
  declare const APP_ID = "durable";
  /**
  * returns a new durable workflow schema
@@ -3,8 +3,7 @@
  *********** HOTMESH 'DURABLE' MODULE APPLICATION GRAPH **********
  *
  * This HotMesh application spec uses 50 activities and 25 transitions
- * to model and emulate the Temporal Application & Query servers using
- * a pluggable backend.
+ * to model a durable workflow engine using a pluggable backend.
  *
  * This YAML file can also serve as a useful starting point for building
  * Integration/BPM/Workflow servers in general (MuleSoft, etc) without the need
@@ -20,7 +19,7 @@
  */
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.APP_ID = exports.APP_VERSION = exports.getWorkflowYAML = void 0;
- const APP_VERSION = '8';
+ const APP_VERSION = '9';
  exports.APP_VERSION = APP_VERSION;
  const APP_ID = 'durable';
  exports.APP_ID = APP_ID;
@@ -558,6 +557,9 @@ const getWorkflowYAML = (app, version) => {
  description: the arguments to pass to the activity
  items:
  type: string
+ argumentMetadata:
+ type: object
+ description: optional metadata to pass alongside activity arguments
  expire:
  type: number
  backoffCoefficient:
@@ -569,6 +571,7 @@ const getWorkflowYAML = (app, version) => {
  maps:
  activityName: '{worker.output.data.activityName}'
  arguments: '{worker.output.data.arguments}'
+ argumentMetadata: '{worker.output.data.argumentMetadata}'
  workflowDimension: '{worker.output.data.workflowDimension}'
  index: '{worker.output.data.index}'
  originJobId: '{worker.output.data.originJobId}'
@@ -1311,6 +1314,9 @@ const getWorkflowYAML = (app, version) => {
  description: the arguments to pass to the activity
  items:
  type: string
+ argumentMetadata:
+ type: object
+ description: optional metadata to pass alongside activity arguments
  expire:
  type: number
  backoffCoefficient:
@@ -1322,6 +1328,7 @@ const getWorkflowYAML = (app, version) => {
  maps:
  activityName: '{signaler_worker.output.data.activityName}'
  arguments: '{signaler_worker.output.data.arguments}'
+ argumentMetadata: '{signaler_worker.output.data.argumentMetadata}'
  workflowDimension: '{signaler_worker.output.data.workflowDimension}'
  index: '{signaler_worker.output.data.index}'
  originJobId: '{signaler_worker.output.data.originJobId}'
@@ -2123,6 +2130,9 @@ const getWorkflowYAML = (app, version) => {
  description: the arguments to pass to the activity
  items:
  type: string
+ argumentMetadata:
+ type: object
+ description: optional metadata to pass alongside activity arguments
  expire:
  type: number
  backoffCoefficient:
@@ -2142,6 +2152,11 @@ const getWorkflowYAML = (app, version) => {
  - ['{collator_trigger.output.data.items}', '{collator_cycle_hook.output.data.cur_index}']
  - ['{@array.get}', arguments]
  - ['{@object.get}']
+ argumentMetadata:
+ '@pipe':
+ - ['{collator_trigger.output.data.items}', '{collator_cycle_hook.output.data.cur_index}']
+ - ['{@array.get}', argumentMetadata]
+ - ['{@object.get}']
  workflowDimension:
  '@pipe':
  - ['{collator_trigger.output.data.items}', '{collator_cycle_hook.output.data.cur_index}']
@@ -2366,6 +2381,8 @@ const getWorkflowYAML = (app, version) => {
  type: string
  arguments:
  type: array
+ argumentMetadata:
+ type: object
  backoffCoefficient:
  type: number
  maximumAttempts:
@@ -2433,12 +2450,15 @@ const getWorkflowYAML = (app, version) => {
  type: string
  arguments:
  type: array
+ argumentMetadata:
+ type: object
  maps:
  parentWorkflowId: '{activity_trigger.output.data.parentWorkflowId}'
  workflowId: '{activity_trigger.output.data.workflowId}'
  workflowTopic: '{activity_trigger.output.data.workflowTopic}'
  activityName: '{activity_trigger.output.data.activityName}'
  arguments: '{activity_trigger.output.data.arguments}'
+ argumentMetadata: '{activity_trigger.output.data.argumentMetadata}'
  output:
  schema:
  type: object
@@ -240,7 +240,13 @@ class WorkerService {
  if (!activityFunction) {
  throw new Error(`Activity '${activityName}' not found in registry`);
  }
- const pojoResponse = await activityFunction.apply(null, activityInput.arguments);
+ const activityContext = new Map();
+ activityContext.set('activityName', activityName);
+ activityContext.set('arguments', activityInput.arguments);
+ activityContext.set('argumentMetadata', activityInput.argumentMetadata ?? {});
+ activityContext.set('workflowId', activityInput.workflowId);
+ activityContext.set('workflowTopic', activityInput.workflowTopic);
+ const pojoResponse = await storage_1.activityAsyncLocalStorage.run(activityContext, () => activityFunction.apply(null, activityInput.arguments));
  return {
  status: stream_1.StreamStatus.SUCCESS,
  metadata: { ...data.metadata },
@@ -419,7 +425,13 @@ class WorkerService {
  const activityInput = data.data;
  const activityName = activityInput.activityName;
  const activityFunction = WorkerService.activityRegistry[activityName];
- const pojoResponse = await activityFunction.apply(this, activityInput.arguments);
+ const activityContext = new Map();
+ activityContext.set('activityName', activityName);
+ activityContext.set('arguments', activityInput.arguments);
+ activityContext.set('argumentMetadata', activityInput.argumentMetadata ?? {});
+ activityContext.set('workflowId', activityInput.workflowId);
+ activityContext.set('workflowTopic', activityInput.workflowTopic);
+ const pojoResponse = await storage_1.activityAsyncLocalStorage.run(activityContext, () => activityFunction.apply(this, activityInput.arguments));
  return {
  status: stream_1.StreamStatus.SUCCESS,
  metadata: { ...data.metadata },
@@ -653,6 +665,7 @@ class WorkerService {
  dimension: err.workflowDimension,
  }),
  arguments: err.arguments,
+ argumentMetadata: err.argumentMetadata,
  workflowDimension: err.workflowDimension,
  index: err.index,
  originJobId: err.originJobId,
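The worker hunks above wrap each activity invocation in `activityAsyncLocalStorage.run(...)`. The reason this is safe under concurrency can be shown with a plain Node `AsyncLocalStorage`: each `.run()` binds its own store, and `getStore()` keeps returning that store even after `await` boundaries, so two activities in the same process never see each other's context. A minimal demonstration (the `activity` function and `workflowId` key are illustrative):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

const als = new AsyncLocalStorage<Map<string, string>>();

// Simulated activity: yields to the event loop, then reads its context.
async function activity(label: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, Math.random() * 10));
  // Even after the await, getStore() returns the Map bound by .run().
  return `${label}:${als.getStore()!.get('workflowId')}`;
}

async function main() {
  const [a, b] = await Promise.all([
    als.run(new Map([['workflowId', 'wf-A']]), () => activity('first')),
    als.run(new Map([['workflowId', 'wf-B']]), () => activity('second')),
  ]);
  console.log(a, b); // first:wf-A second:wf-B
}
main();
```

This is why the diff can pass `workflowId`, `argumentMetadata`, and friends to the activity without changing the activity's argument list.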
@@ -7,7 +7,7 @@
  * If that `sessionId` already exists in the `replay` hash (loaded from
  * the job's stored state), the previously persisted result is
  * deserialized and returned — skipping re-execution entirely. This is
- * analogous to Temporal's event history replay.
+ * analogous to event history replay in durable workflow engines.
  *
  * ## Session ID Format
  *
@@ -12,7 +12,7 @@ const context_1 = require("./context");
  * If that `sessionId` already exists in the `replay` hash (loaded from
  * the job's stored state), the previously persisted result is
  * deserialized and returned — skipping re-execution entirely. This is
- * analogous to event history replay in durable workflow engines.
+ * analogous to event history replay in durable workflow engines.
  *
  * ## Session ID Format
  *
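The replay mechanism described in these doc comments can be reduced to a simple memoization-by-session-id pattern. This sketch is a hedged approximation of the idea only: the session id format, hash shape, and `runOnce` helper are illustrative, not the package's internals:

```typescript
// A replay hash maps deterministic session ids to serialized results.
type Replay = Record<string, string>;

let sideEffectRuns = 0;

// Stands in for a proxied activity: expensive and not safe to repeat.
async function expensiveSideEffect(x: number): Promise<number> {
  sideEffectRuns++;
  return x * 2;
}

// On first execution, run and persist; on replay, deserialize and return.
async function runOnce(replay: Replay, sessionId: string, x: number): Promise<number> {
  if (sessionId in replay) {
    return JSON.parse(replay[sessionId]); // replayed from stored state
  }
  const result = await expensiveSideEffect(x);
  replay[sessionId] = JSON.stringify(result); // persisted with the job
  return result;
}

async function demo() {
  const replay: Replay = {};
  await runOnce(replay, '-proxy-1', 21);            // executes
  const v = await runOnce(replay, '-proxy-1', 21);  // replayed, no re-run
  console.log(v, sideEffectRuns); // 42 1
}
demo();
```

The key property, as in the doc comment, is that a crashed workflow re-executing its function body resumes past completed steps without repeating their side effects.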
@@ -21,6 +21,7 @@ function getProxyInterruptPayload(context, activityName, execIndex, args, option
  }
  return {
  arguments: args,
+ argumentMetadata: options?.argumentMetadata ?? undefined,
  workflowDimension,
  index: execIndex,
  originJobId: originJobId || workflowId,
@@ -53,7 +54,7 @@ function wrapActivity(activityName, options) {
  const coreFunction = async () => {
  if (didRunAlready) {
  if (result?.$error) {
- if (options?.retryPolicy?.throwOnError !== false) {
+ if (activityCtx.options?.retryPolicy?.throwOnError !== false) {
  const code = result.$error.code;
  const message = result.$error.message;
  const stack = result.$error.stack;
@@ -74,7 +75,7 @@ function wrapActivity(activityName, options) {
  }
  return result.data;
  }
- const interruptionMessage = getProxyInterruptPayload(context, activityName, execIndex, activityCtx.args, options);
+ const interruptionMessage = getProxyInterruptPayload(context, activityName, execIndex, activityCtx.args, activityCtx.options);
  interruptionRegistry.push({
  code: common_1.HMSH_CODE_DURABLE_PROXY,
  type: 'DurableProxyError',
@@ -160,7 +160,7 @@ import { StreamCode, StreamData, StreamDataResponse, StreamStatus } from '../../
  * ## Higher-Level Modules
  *
  * For most use cases, prefer the higher-level wrappers:
- * - **Durable** — Temporal-style durable workflow functions.
+ * - **Durable** — Durable workflow functions with replay and retry.
  * - **Virtual** — Virtual network functions and idempotent RPC.
  *
  * @see {@link https://hotmeshio.github.io/sdk-typescript/} - API reference
@@ -161,7 +161,7 @@ const enums_1 = require("../../modules/enums");
  * ## Higher-Level Modules
  *
  * For most use cases, prefer the higher-level wrappers:
- * - **Durable** — Temporal-style durable workflow functions.
+ * - **Durable** — Durable workflow functions with replay and retry.
  * - **Virtual** — Virtual network functions and idempotent RPC.
  *
  * @see {@link https://hotmeshio.github.io/sdk-typescript/} - API reference
@@ -37,15 +37,47 @@ export interface PruneOptions {
  jobs?: boolean;
  /**
  * If true, hard-deletes expired stream messages older than the
- * retention window.
+ * retention window from both `engine_streams` and `worker_streams`.
+ * Use `engineStreams` / `workerStreams` for independent control.
  * @default true
  */
  streams?: boolean;
+ /**
+ * Override for `engine_streams` cleanup. When set, takes precedence
+ * over `streams` for the engine table. Engine streams contain internal
+ * routing messages and can be pruned aggressively.
+ * @default undefined (falls back to `streams`)
+ */
+ engineStreams?: boolean;
+ /**
+ * Override for `worker_streams` cleanup. When set, takes precedence
+ * over `streams` for the worker table. Worker streams contain workflow
+ * input arguments and activity payloads needed by the exporter — use
+ * a longer retention to preserve export fidelity.
+ * @default undefined (falls back to `streams`)
+ */
+ workerStreams?: boolean;
+ /**
+ * Retention override for `engine_streams`. When set, uses this interval
+ * instead of the global `expire` for engine stream cleanup.
+ * @default undefined (falls back to `expire`)
+ *
+ * @example '24 hours'
+ */
+ engineStreamsExpire?: string;
+ /**
+ * Retention override for `worker_streams`. When set, uses this interval
+ * instead of the global `expire` for worker stream cleanup.
+ * @default undefined (falls back to `expire`)
+ *
+ * @example '90 days'
+ */
+ workerStreamsExpire?: string;
  /**
  * If true, strips execution-artifact attributes from completed,
  * un-pruned jobs. Preserves `jdata` (return data), `udata`
  * (searchable data), and `jmark` (timeline/event history for
- * Temporal-compatible export). See `keepHmark` for `hmark`.
+ * workflow execution export). See `keepHmark` for `hmark`.
  * @default false
  */
  attributes?: boolean;
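The split retention knobs in `PruneOptions` might be combined like this. Only the option shape comes from the diff; the values and the suggestion to prune engine streams faster than worker streams are illustrative, and whatever function consumes this object is not shown here:

```typescript
// Hypothetical options object matching the PruneOptions interface above.
const pruneOptions = {
  jobs: true,
  streams: true,               // default behavior for both stream tables
  engineStreams: true,         // internal routing messages: prune aggressively
  engineStreamsExpire: '24 hours',
  workerStreams: true,         // exporter reads these: keep much longer
  workerStreamsExpire: '90 days',
  attributes: true,            // strip artifacts, keep jdata/udata/jmark
};
```

The asymmetric intervals follow the doc comments above: engine streams are disposable routing traffic, while worker streams back the exporter's stream history.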
@@ -79,8 +111,12 @@ export interface PruneOptions {
  export interface PruneResult {
  /** Number of expired job rows hard-deleted */
  jobs: number;
- /** Number of expired stream message rows hard-deleted */
+ /** Number of expired stream message rows hard-deleted (engine + worker) */
  streams: number;
+ /** Number of expired engine_streams rows hard-deleted */
+ engineStreams: number;
+ /** Number of expired worker_streams rows hard-deleted */
+ workerStreams: number;
  /** Number of execution-artifact attribute rows stripped from completed jobs */
  attributes: number;
  /** Number of transient (entity IS NULL) job rows hard-deleted */
@@ -104,6 +104,23 @@ type WorkflowContext = {
  */
  expire?: number;
  };
+ /**
+ * Context available inside an executing activity function via
+ * `Durable.activity.getContext()`. Populated by the activity worker
+ * using `activityAsyncLocalStorage`.
+ */
+ type DurableActivityContext = {
+ /** The name of the activity function being executed */
+ activityName: string;
+ /** The arguments passed to the activity */
+ arguments: any[];
+ /** Optional metadata provided via `proxyActivities({ argumentMetadata })` */
+ argumentMetadata: Record<string, any>;
+ /** The workflow ID of the parent workflow that dispatched this activity */
+ workflowId: string;
+ /** The workflow topic of the parent workflow */
+ workflowTopic: string;
+ };
  /**
  * The schema for the full-text-search
  * @deprecated
@@ -354,6 +371,7 @@ type SignalOptions = {
  type ActivityWorkflowDataType = {
  activityName: string;
  arguments: any[];
+ argumentMetadata?: Record<string, any>;
  workflowId: string;
  workflowTopic: string;
  };
@@ -494,6 +512,10 @@ type ActivityConfig = {
  * ```
  */
  taskQueue?: string;
+ /** Optional metadata to pass alongside activity arguments. This metadata
+ * is transported as a dedicated schema field (not inside args) and made
+ * available to the activity function via `Durable.activity.getContext()`. */
+ argumentMetadata?: Record<string, any>;
  /** Retry policy configuration for activities */
  retryPolicy?: {
  /** Maximum number of retry attempts, default is 5 (HMSH_DURABLE_MAX_ATTEMPTS) */
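The "dedicated schema field, not inside args" transport described in the `argumentMetadata` doc comment can be sketched self-contained. The payload shape mirrors `ActivityWorkflowDataType` from this diff; the `buildDispatch` helper and the `sendEmail`/`tenant` values are illustrative, not package code:

```typescript
// Mirrors ActivityWorkflowDataType: metadata rides in its own field.
type ActivityDispatch = {
  activityName: string;
  arguments: any[];
  argumentMetadata?: Record<string, any>;
  workflowId: string;
  workflowTopic: string;
};

// Hypothetical helper showing what a proxied activity call transports:
// the user's argument array is untouched; metadata is a side channel.
function buildDispatch(
  activityName: string,
  args: any[],
  config: { argumentMetadata?: Record<string, any> },
): ActivityDispatch {
  return {
    activityName,
    arguments: args,                           // untouched user arguments
    argumentMetadata: config.argumentMetadata, // dedicated side channel
    workflowId: 'wf-onboard-1',
    workflowTopic: 'user.onboard',
  };
}

const msg = buildDispatch('sendEmail', ['a@b.co'], {
  argumentMetadata: { tenant: 'acme' },
});
console.log(msg.arguments.length, msg.argumentMetadata?.tenant); // 1 acme
```

Because the metadata never enters `arguments`, activity signatures stay clean and the metadata is only observable through `Durable.activity.getContext()`.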
@@ -685,4 +707,4 @@ export interface ActivityInterceptor {
  */
  execute(activityCtx: ActivityInterceptorContext, workflowCtx: Map<string, any>, next: () => Promise<any>): Promise<any>;
  }
- export { ActivityConfig, ActivityWorkflowDataType, ChildResponseType, ClientConfig, ClientWorkflow, ContextType, Connection, FunctionSignature, ProxyResponseType, ProxyType, Registry, SignalOptions, FindJobsOptions, FindOptions, FindWhereOptions, FindWhereQuery, HookOptions, SearchResults, WorkerConfig, WorkflowConfig, WorkerOptions, WorkflowSearchOptions, WorkflowDataType, WorkflowOptions, WorkflowContext, };
+ export { ActivityConfig, DurableActivityContext, ActivityWorkflowDataType, ChildResponseType, ClientConfig, ClientWorkflow, ContextType, Connection, FunctionSignature, ProxyResponseType, ProxyType, Registry, SignalOptions, FindJobsOptions, FindOptions, FindWhereOptions, FindWhereQuery, HookOptions, SearchResults, WorkerConfig, WorkflowConfig, WorkerOptions, WorkflowSearchOptions, WorkflowDataType, WorkflowOptions, WorkflowContext, };
@@ -29,6 +29,7 @@ export type DurableWaitForAllErrorType = {
  };
  export type DurableProxyErrorType = {
  arguments: string[];
+ argumentMetadata?: Record<string, any>;
  activityName: string;
  backoffCoefficient?: number;
  index: number;
@@ -251,7 +251,7 @@ export interface ExecutionExportOptions {
  * When true, fetches the full stream message history for this workflow
  * from the worker_streams table and attaches it as `stream_history`.
  * This provides raw activity input/output data from the original stream
- * messages, enabling Temporal-grade export fidelity.
+ * messages, enabling full export fidelity.
  *
  * @default false
  */
@@ -3,7 +3,7 @@ export { App, AppVID, AppTransitions, AppSubscriptions } from './app';
  export { AsyncSignal } from './async';
  export { CacheMode } from './cache';
  export { CollationFaultType, CollationStage } from './collator';
- export { ActivityConfig, ActivityInterceptor, ActivityInterceptorContext, ActivityWorkflowDataType, ChildResponseType, ClientConfig, ClientWorkflow, ContextType, Connection, ProxyResponseType, ProxyType, Registry, SignalOptions, FindJobsOptions, FindOptions, FindWhereOptions, FindWhereQuery, HookOptions, SearchResults, WorkflowConfig, WorkerConfig, WorkerOptions, WorkflowContext, WorkflowSearchOptions, WorkflowSearchSchema, WorkflowDataType, WorkflowOptions, WorkflowInterceptor, InterceptorRegistry, } from './durable';
+ export { ActivityConfig, DurableActivityContext, ActivityInterceptor, ActivityInterceptorContext, ActivityWorkflowDataType, ChildResponseType, ClientConfig, ClientWorkflow, ContextType, Connection, ProxyResponseType, ProxyType, Registry, SignalOptions, FindJobsOptions, FindOptions, FindWhereOptions, FindWhereQuery, HookOptions, SearchResults, WorkflowConfig, WorkerConfig, WorkerOptions, WorkflowContext, WorkflowSearchOptions, WorkflowSearchSchema, WorkflowDataType, WorkflowOptions, WorkflowInterceptor, InterceptorRegistry, } from './durable';
  export { PruneOptions, PruneResult, } from './dba';
  export { DurableChildErrorType, DurableProxyErrorType, DurableSleepErrorType, DurableWaitForAllErrorType, DurableWaitForErrorType, } from './error';
  export { ActivityAction, ActivityDetail, ActivityInputMap, ActivityTaskCompletedAttributes, ActivityTaskFailedAttributes, ActivityTaskScheduledAttributes, ChildWorkflowExecutionCompletedAttributes, ChildWorkflowExecutionFailedAttributes, ChildWorkflowExecutionStartedAttributes, DependencyExport, DurableJobExport, ExecutionExportOptions, ExportCycles, ExportFields, ExportItem, ExportMode, ExportOptions, ExportTransitions, JobAction, JobActionExport, JobAttributesRow, JobExport, JobRow, JobTimeline, StreamHistoryEntry, TimelineType, TimerFiredAttributes, TimerStartedAttributes, TransitionType, WorkflowEventAttributes, WorkflowEventCategory, WorkflowEventType, WorkflowExecution, WorkflowExecutionCompletedAttributes, WorkflowExecutionEvent, WorkflowExecutionFailedAttributes, WorkflowExecutionSignaledAttributes, WorkflowExecutionStartedAttributes, WorkflowExecutionStatus, WorkflowExecutionSummary, } from './exporter';
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@hotmeshio/hotmesh",
- "version": "0.12.0",
- "description": "Permanent-Memory Workflows & AI Agents",
+ "version": "0.13.0",
+ "description": "Durable Workflow",
  "main": "./build/index.js",
  "types": "./build/index.d.ts",
  "homepage": "https://github.com/hotmeshio/sdk-typescript/",
@@ -33,6 +33,7 @@
  "test:durable:fatal": "vitest run tests/durable/fatal",
  "test:durable:goodbye": "HMSH_LOGLEVEL=debug vitest run tests/durable/goodbye/postgres.test.ts",
  "test:durable:interceptor": "HMSH_LOGLEVEL=info vitest run tests/durable/interceptor/postgres.test.ts",
+ "test:durable:metadata": "HMSH_LOGLEVEL=info vitest run tests/durable/interceptor/postgres.test.ts -t 'argumentMetadata'",
  "test:durable:entity": "HMSH_LOGLEVEL=debug vitest run tests/durable/entity/postgres.test.ts",
  "test:durable:agent": "HMSH_LOGLEVEL=debug vitest run tests/durable/agent/postgres.test.ts",
  "test:durable:hello": "HMSH_TELEMETRY=debug HMSH_LOGLEVEL=info vitest run tests/durable/helloworld/postgres.test.ts",