arvo-event-handler 3.0.18 → 3.0.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/.nvmrc ADDED
@@ -0,0 +1 @@
+ v20.17.0
package/Dockerfile ADDED
@@ -0,0 +1,7 @@
+ ARG NODE_VERSION=18
+ FROM node:${NODE_VERSION}-alpine
+ WORKDIR /app
+ COPY node_modules ./node_modules
+ COPY package*.json ./
+ COPY . .
+ RUN npm run build
package/Dockerfile.install ADDED
@@ -0,0 +1,23 @@
+ ARG NODE_VERSION=18
+ FROM node:${NODE_VERSION}-alpine
+
+ RUN npm install -g @aikidosec/safe-chain
+ RUN safe-chain setup-ci
+
+ WORKDIR /install
+ COPY package*.json ./
+ COPY .npmrc ./
+
+ # Build arguments for optional package installation
+ ARG PACKAGES=""
+ ARG DEV=""
+
+ # Install dependencies in isolation
+ # Lifecycle scripts can run but have no access to host secrets
+ # If PACKAGES specified: install those specific packages
+ # Otherwise: install all dependencies from package.json
+ RUN if [ -n "$PACKAGES" ]; then \
+       [ "$DEV" = "true" ] && npm install -D $PACKAGES || npm install $PACKAGES; \
+     else \
+       npm install; \
+     fi
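The conditional `RUN` step above picks between a dev and a regular install based on the `PACKAGES` and `DEV` build arguments. The same shell pattern can be exercised in isolation; in this sketch `echo` stands in for `npm`, and the variable values are illustrative (`left-pad` is just a placeholder package name):

```shell
# Sketch of Dockerfile.install's selection logic.
# echo stands in for npm; PACKAGES/DEV mimic the Docker build args.
PACKAGES="left-pad"
DEV="true"
if [ -n "$PACKAGES" ]; then
  # dev install when DEV=true, otherwise a regular install
  [ "$DEV" = "true" ] && echo "npm install -D $PACKAGES" || echo "npm install $PACKAGES"
else
  # no specific packages requested: install everything from package.json
  echo "npm install"
fi
# prints: npm install -D left-pad
```

Note that the `&&`/`||` chain is not a true if/else: if the dev-install command itself failed, the `||` branch would also run. With `echo` that never happens, but in the real Dockerfile a failing `npm install -D` would fall through to a plain `npm install`.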
package/Dockerfile.test ADDED
@@ -0,0 +1,7 @@
+ ARG NODE_VERSION=18
+ FROM node:${NODE_VERSION}-alpine
+ WORKDIR /app
+ COPY node_modules ./node_modules
+ COPY package*.json ./
+ COPY . .
+ CMD ["npm", "test"]
package/README.md CHANGED
@@ -1,102 +1,142 @@
- # Arvo Event Handler
-
- The `arvo-event-handler` package serves as the comprehensive orchestration and event processing foundation for building sophisticated, reliable event-driven systems within the Arvo architecture. This package provides a complete toolkit of components that work seamlessly together to handle everything from simple event processing to complex distributed workflow orchestration, all while maintaining strict type safety, comprehensive observability, and robust error handling.
-
  [![SonarCloud](https://sonarcloud.io/images/project_badges/sonarcloud-white.svg)](https://sonarcloud.io/summary/new_code?id=SaadAhmad123_arvo-event-handler)
  [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=SaadAhmad123_arvo-event-handler&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=SaadAhmad123_arvo-event-handler)

- ## Installation
-
- Install the package along with its core dependency:
-
- ```bash
- npm install arvo-event-handler arvo-core
- ```
-
- ```bash
- yarn add arvo-event-handler arvo-core
- ```
-
- ## The Event Handlers
-
- The Arvo event handling architecture is based on three handler patterns.
-
- ### 1. Simple Event Handler
-
- This kind of event handling is provided by [`ArvoEventHandler`](src/ArvoEventHandler/README.md). This approach transforms ArvoContract definitions into stateless, pure function handlers that process individual events in isolation. Each handler binds to a specific contract, validates incoming events against schema definitions, executes business logic, and returns response events. It supports multiple contract versions for backward compatibility and enables multi-domain event broadcasting for parallel processing pipelines. This pattern is ideal for microservices, API endpoints, and any scenario where you need reliable, contract-enforced event processing without complex state management or workflow coordination.
-
- ### 2. State-machine based workflow orchestration
+ # Arvo - A toolkit for event driven applications (arvo-event-handler)

- This kind of event handling is provided by [`ArvoMachine`](src/ArvoMachine/README.md) which defines the state machine and [`ArvoOrchestrator`](src/ArvoOrchestrator/README.md) which executes it. This approach uses declarative state machine definitions to model complex business processes with multiple states, transitions, and conditional logic. ArvoMachine creates XState-compatible machines with Arvo-specific constraints and contract bindings, while ArvoOrchestrator provides the runtime environment for executing these machines with distributed state persistence, resource locking, and comprehensive lifecycle management. This pattern excels at complex workflows with parallel states, timing requirements, conditional branching, and scenarios where visual workflow modeling and deterministic state transitions are crucial for business process management.
+ The orchestration and event processing foundation for [Arvo](https://www.arvo.land/), providing everything from simple event handlers to complex workflow orchestration with state machines and imperative resumables.

- ### 3. Dynamic stateful event handling and orchestration
+ This package provides three core handler patterns and essential infrastructure for building reliable event-driven systems:

- This kind of event handling is provided by [`ArvoResumable`](src/ArvoResumable/README.md). The event handling is a different approach to workflow processing and complements the state machine pattern by offering an imperative programming model where developers write handler functions that explicitly manage workflow state through context objects. Instead of defining states and transitions declaratively, you write code that examines incoming events, updates workflow context, and decides what actions to take next. This approach provides direct control over workflow logic, making it easier to debug and understand for teams familiar with traditional programming patterns, while still offering the same reliability, observability, and distributed coordination features as state machine orchestration.
+ **ArvoEventHandler** - Stateless event processors that transform contract-defined events. Perfect for microservices, API endpoints, and simple request-response patterns.

- ## Core Infrastructure Components
+ **ArvoOrchestrator** - Declarative state machine-based workflow orchestration using XState. Ideal for complex business processes with clear states, transitions, and parallel execution.

- Beyond the three main handler patterns, the package includes essential infrastructure components that enable robust distributed system operation.
+ **ArvoResumable** - Imperative workflow handlers with explicit state management. Best for dynamic workflows, AI-driven decision logic, and teams preferring traditional programming patterns.

- ### Memory - State Persistance
-
- The [`IMachineMemory`](src/MachineMemory/README.md) interface defines how workflow state gets persisted and coordinated across distributed instances. It implements an optimistic locking strategy with "fail fast on acquire, be tolerant on release" semantics, ensuring data consistency while enabling system recovery from transient failures.
-
- This package includes `SimpleMachineMemory` for development/ prototyping scenarios and provides example for implementing cloud-based production-ready distributed storage solutions.
-
- ### Error Handling
-
- The Arvo event handling system uses a layered error handling approach that provides clear boundaries between different types of failures, enabling appropriate responses at each level.
-
- **Business Logic Failures** are expected outcomes in your business processes and should be modeled as explicit events in your `ArvoContract` definitions. For example, when a user already exists during registration or a payment is declined, these represent normal business scenarios rather than system errors. By defining these as emittable events, downstream consumers can distinguish between business logic outcomes and actual system problems, enabling appropriate handling logic for each scenario.
-
- **Transient System Errors** occur when underlying infrastructure or external services fail temporarily. Database connection timeouts, API unavailability, or network issues fall into this category. The system automatically converts uncaught exceptions into standardized system error events with the type pattern `sys.{contract.type}.error`. These events carry error details and can trigger retry mechanisms, circuit breakers, or alternative processing paths while maintaining the event-driven flow of your system.
-
- **Violations** represent critical failures that require immediate attention and cannot be handled through normal event processing patterns. The system defines four distinct violation types to help you identify and respond to different categories of critical issues:
-
- - `ContractViolation` occurs when event data fails contract validation, indicating schema mismatches between services. This typically signals version incompatibilities or data corruption that requires developer intervention to resolve.
+ ## Installation
+ ```bash
+ npm install arvo-event-handler arvo-core xstate@5 zod@3
+ ```

- - `ConfigViolation` happens when events are routed to handlers that cannot process them, revealing system topology or configuration problems that need infrastructure-level fixes.
+ ## Quick Start
+
+ ### Simple Event Handler
+ ```typescript
+ import { createArvoEventHandler } from 'arvo-event-handler';
+ import { createArvoContract } from 'arvo-core';
+ import { z } from 'zod';
+
+ const contract = createArvoContract({
+   uri: '#/contracts/user',
+   type: 'user.validate',
+   versions: {
+     '1.0.0': {
+       accepts: z.object({ email: z.string().email() }),
+       emits: {
+         'evt.user.validate.success': z.object({ valid: z.boolean() })
+       }
+     }
+   }
+ });
+
+ const handler = createArvoEventHandler({
+   contract,
+   executionunits: 0,
+   handler: {
+     '1.0.0': async ({ event }) => ({
+       type: 'evt.user.validate.success',
+       data: { valid: true }
+     })
+   }
+ });
+ ```

- - `ExecutionViolation` provides a mechanism for custom error handling when your business logic encounters scenarios that cannot be resolved through normal event patterns and require special intervention.
+ ### State Machine Orchestrator
+ ```typescript
+ import { createArvoOrchestrator, setupArvoMachine } from 'arvo-event-handler';
+ import { createArvoOrchestratorContract } from 'arvo-core';
+
+ const orchestratorContract = createArvoOrchestratorContract({
+   uri: '#/orchestrator/workflow',
+   name: 'workflow',
+   versions: {
+     '1.0.0': {
+       init: z.object({ userId: z.string() }),
+       complete: z.object({ result: z.string() })
+     }
+   }
+ });
+
+ const machine = setupArvoMachine({
+   contracts: {
+     self: orchestratorContract.version('1.0.0'),
+     services: { /* service contracts */ }
+   }
+ }).createMachine({
+   // XState machine definition
+ });
+
+ const orchestrator = createArvoOrchestrator({
+   machines: [machine],
+   memory, // IMachineMemory implementation
+   executionunits: 0
+ });
+ ```

- - `TransactionViolation` is raised specifically by `ArvoOrchestrator` and `ArvoResumable` when state persistence operations fail. The accompanying `TransactionViolationCause` provides detailed information about what went wrong, allowing you to implement appropriate recovery strategies for distributed transaction failures.
+ ### Imperative Resumable
+ ```typescript
+ import { createArvoResumable } from 'arvo-event-handler';
+
+ const resumable = createArvoResumable({
+   contracts: {
+     self: orchestratorContract,
+     services: { /* service contracts */ }
+   },
+   memory,
+   executionunits: 0,
+   handler: {
+     '1.0.0': async ({ input, service, context }) => {
+       if (input) {
+         return {
+           context: { userId: input.data.userId },
+           services: [{ type: 'user.validate', data: { /* ... */ } }]
+         };
+       }
+       // Handle service responses and return output
+     }
+   }
+ });
+ ```

+ ## Additional Core Components

- ### Local Developement & Testing
+ **IMachineMemory** - State persistence interface with optimistic locking for distributed workflow coordination. Includes `SimpleMachineMemory` for local development.

- The package provides `createSimpleEventBroker` utility which creates local event buses perfect for testing, development, and single-function workflow coordination. It enables comprehensive integration testing without external message brokers while supporting the same event patterns used in production distributed systems.
+ **Error Handling** - Three-tier system: business logic failures as contract events, transient errors as system events, and violations for critical failures requiring immediate intervention.

+ **SimpleEventBroker** - Local in-memory FIFO queue-based event broker for testing and development without external message infrastructure. Also suitable for production deployments with limited scale (≤1000 users).

- ## Architecture Principles
+ **SimpleMachineMemory** - Local in-memory hash map based storage for testing and development without external database infrastructure.

- The entire system follows consistent architectural principles that promote reliability and maintainability. All handlers implement the signature `ArvoEvent => Promise<{ events: ArvoEvent[] }>`, creating predictable event flow patterns throughout the system. Contract-first development ensures all service interactions are explicitly defined and validated, eliminating common integration issues while providing compile-time type safety.
+ All handlers implement the same interface `IArvoEventHandler` regardless of complexity, enabling consistent patterns across your entire system. Contract-first development ensures type safety and validation at every boundary. Built-in OpenTelemetry integration provides complete observability. State management through pluggable interfaces supports any storage backend from memory to distributed databases.

- Multi-domain event broadcasting allows single handlers to create events for different processing contexts simultaneously, supporting patterns like audit trails, analytics processing, and external system integration. The comprehensive observability integration provides operational visibility through OpenTelemetry spans, structured logging, and performance metrics collection.
+ The same handler code works locally with in-memory brokers during development and in production with distributed message systems and persistent state stores.

- The functional architecture enables natural horizontal scaling since handlers operate as pure functions with consistent behavior regardless of deployment location. State management through pluggable persistence interfaces supports various scaling strategies from single-instance deployments to sophisticated distributed configurations.
+ ## What is `arvo-event-handler`?

- ## Documentation and Resources
+ The `arvo-event-handler` is one of the two foundational packages in the Arvo ecosystem, alongside `arvo-core`. Together, they provide the complete foundation for building event-driven applications that are distributed system-compliant. Explore additional tools and integrations in the `@arvo-tools` namespace.

- | Component | Documentation | When to Use |
- |-----------|---------------|-------------|
- | **ArvoEventHandler** | [Simple Event Processing](src/ArvoEventHandler/README.md) | Stateless services, API endpoints, microservices, simple request-response processing |
- | **ArvoMachine** | [State Machine Workflows](src/ArvoMachine/README.md) | Complex business processes with multiple states, conditional branching, parallel execution, visual workflow modeling |
- | **ArvoOrchestrator** | [Workflow Orchestration](src/ArvoOrchestrator/README.md) | Running state machines in production, distributed workflow coordination, comprehensive lifecycle management |
- | **ArvoResumable** | [Handler-Based Workflows](src/ArvoResumable/README.md) | Dynamic workflows, imperative programming preference, rapid prototyping, teams familiar with traditional programming patterns |
- | **MachineMemory** | [State Persistence Interface](src/MachineMemory/README.md) | Custom state storage requirements, distributed locking strategies, production persistence implementations |
+ Learn more at the official Arvo website: [https://www.arvo.land/](https://www.arvo.land/)

- ## Package Information
+ ## Documentation

- | Resource | Link |
- |----------|------|
- | Package | [npm package](https://www.npmjs.com/package/arvo-event-handler) |
- | Repository | [GitHub repository](https://github.com/SaadAhmad123/arvo-event-handler) |
- | Documentation | [Complete documentation](https://saadahmad123.github.io/arvo-event-handler/index.html) |
- | Core Package | [arvo-core documentation](https://saadahmad123.github.io/arvo-core/index.html) |
+ Complete guides, API reference, and tutorials at [https://www.arvo.land/](https://www.arvo.land/)

  ## License

- This package is available under the MIT License. For more details, refer to the [LICENSE.md](LICENSE.md) file in the project repository.
+ MIT - See [LICENSE.md](LICENSE.md)
+
+ ---

  ### SonarCloud Metrics

@@ -132,6 +132,7 @@ export declare function setupArvoMachine<TContext extends MachineContext, TSelfC
  enqueueArvoEvent: EnqueueArvoEventActionParam;
  }>, ToParameterizedObject<TGuards>, never, TTag, InferVersionedArvoContract<TSelfContract>["accepts"], z.input<TSelfContract["emits"][ReturnType<typeof ArvoOrchestratorEventTypeGen.complete<ExtractOrchestratorType<TSelfContract["accepts"]["type"]>>>]> & {
  __id?: CreateArvoEvent<Record<string, unknown>, string>["id"];
+ __executionunits?: CreateArvoEvent<Record<string, unknown>, string>["executionunits"];
  }, InferServiceContract<TServiceContracts>["emitted"], TMeta>>(config: TConfig & {
  id: string;
  version?: TSelfContract["version"];
@@ -146,10 +147,12 @@ export declare function setupArvoMachine<TContext extends MachineContext, TSelfC
  type: K_2;
  params: TGuards[K_2];
  }; }>, never, {} | {
- [x: string]: {} | any | {
- [x: string]: {} | any | any;
+ [x: string]: {} | /*elided*/ any | {
+ [x: string]: {} | /*elided*/ any | /*elided*/ any;
  };
  } | {
- [x: string]: {} | any | any;
+ [x: string]: {} | {
+ [x: string]: {} | /*elided*/ any | /*elided*/ any;
+ } | /*elided*/ any;
  }, TTag, TSelfContract["accepts"]["schema"]["_output"], { [K_3 in string & keyof TSelfContract["emits"]]: import("arvo-core").InferArvoEvent<import("arvo-core").ArvoEvent<TSelfContract["emits"][K_3]["_output"], Record<string, any>, K_3>>; }[`arvo.orc.${ExtractOrchestratorType<TSelfContract["accepts"]["type"]>}.done`]["data"], { [K_4 in keyof TServiceContracts]: EnqueueArvoEventActionParam<z.input<TServiceContracts[K_4]["accepts"]["schema"]>, TServiceContracts[K_4]["accepts"]["type"], Record<string, string | number | boolean | null>>; }[keyof TServiceContracts], TMeta, any>>;
  };
@@ -182,6 +182,7 @@ var ArvoOrchestrator = /** @class */ (function () {
  domain: orchestrationParentSubject
  ? [arvo_core_1.ArvoOrchestrationSubject.parse(orchestrationParentSubject).execution.domain]
  : [null],
+ executionunits: executionResult.finalOutput.__executionunits,
  });
  }
  emittables = (0, createEmitableEvent_1.processRawEventsIntoEmittables)({
@@ -241,6 +241,7 @@ var ArvoResumable = /** @class */ (function () {
  domain: orchestrationParentSubject
  ? [arvo_core_1.ArvoOrchestrationSubject.parse(orchestrationParentSubject).execution.domain]
  : [null],
+ executionunits: executionResult.output.__executionunits,
  });
  }
  emittables = (0, createEmitableEvent_1.processRawEventsIntoEmittables)({
@@ -97,6 +97,7 @@ type Handler<TState extends ArvoResumableState<Record<string, any>>, TSelfContra
  [L in keyof InferVersionedArvoContract<TSelfContract>['emits']]: EnqueueArvoEventActionParam<InferVersionedArvoContract<TSelfContract>['emits'][L]['data'], InferVersionedArvoContract<TSelfContract>['emits'][L]['type']>['data'];
  }[keyof InferVersionedArvoContract<TSelfContract>['emits']] & {
  __id?: CreateArvoEvent<Record<string, unknown>, string>['id'];
+ __executionunits?: CreateArvoEvent<Record<string, unknown>, string>['executionunits'];
  };
  /**
  * Service call events to emit.
package/justfile ADDED
@@ -0,0 +1,126 @@
+ # Docker-Isolated NPM Development Environment
+ #
+ # This justfile provides Docker-based sandbox isolation for npm operations to protect against
+ # supply chain attacks during local development. All npm operations
+ # run in ephemeral containers with no access to your host filesystem, environment
+ # variables, or secrets.
+ #
+ # WHAT THIS PROTECTS AGAINST:
+ # - Malicious install scripts stealing SSH keys, AWS credentials, or other secrets
+ # - Package typosquatting attacks that exfiltrate local environment variables
+ # - Compromised packages accessing your home directory during installation
+ # - Supply chain attacks that attempt to modify files outside node_modules
+ # - Malicious code execution during build and test phases (runs in isolated containers)
+ #
+ # WHAT THIS DOESN'T PROTECT AGAINST:
+ # - Malicious code in package runtime logic when you actually run your application
+ # - Sophisticated obfuscated malware that bypasses basic pattern detection
+ # - Attacks that only activate in production environments
+ #
+ # **Disclaimer:** This does not gate against malware in node_modules or in your code
+ # (update Dockerfile.install to add such a gate if your requirements call for one).
+ # Rather, its scope is **strictly limited** to protecting the host device from
+ # exposure if the malware gets executed.
+ #
+ # HOW IT WORKS:
+ # INSTALL PHASE:
+ # 1. npm install runs inside a clean Docker container with no volume mounts
+ # 2. Basic placeholder malware-detection scans run after installation completes (extend with more thorough methods if you need them)
+ # 3. Only node_modules and package files are extracted back to your host
+ # 4. Container is destroyed, leaving no trace of potentially malicious install scripts
+ #
+ # BUILD PHASE:
+ # 1. Source code and dependencies are copied into a fresh container
+ # 2. Build process (TypeScript compilation, bundling, etc.) runs isolated
+ # 3. Only the compiled output (dist/) is extracted back to host
+ # 4. Any malicious code that tries to run during build is contained
+ #
+ # TEST PHASE:
+ # 1. Tests run in an isolated container, with an optional .env file passed at runtime
+ # 2. Test dependencies can't access your host system during execution
+ # 3. Container is destroyed after tests complete
+ # 4. Secrets in .env are passed at runtime, never baked into image layers
+ #
+ # USAGE:
+ #   just install                 # Install all dependencies from package.json
+ #   just install <package>       # Install specific package(s)
+ #   just install-dev <package>   # Install as dev dependency
+ #   just test                    # Run tests in isolated container
+ #   just build                   # Build project in isolated container
+ #   just clean                   # Remove node_modules
+
+ node_version := `cat .nvmrc | tr -d 'v\n\r'`
+
+ install *PACKAGES:
+     #!/usr/bin/env bash
+     set -euo pipefail
+     NODE_VERSION={{node_version}}
+     echo "Installing dependencies with Node $NODE_VERSION..."
+     docker build --progress=plain -f Dockerfile.install --build-arg NODE_VERSION=$NODE_VERSION --build-arg PACKAGES="{{PACKAGES}}" -t npm-installer .
+     CONTAINER_ID=$(docker create --name npm-temp npm-installer)
+     docker logs $CONTAINER_ID
+     echo "Extracting node_modules..."
+     docker cp npm-temp:/install/node_modules ./node_modules
+     docker cp npm-temp:/install/package.json ./package.json
+     docker cp npm-temp:/install/package-lock.json ./package-lock.json 2>/dev/null || true
+     echo "Cleaning up..."
+     docker rm npm-temp
+     docker rmi npm-installer
+     echo "Done."
+
+ install-dev *PACKAGES:
+     #!/usr/bin/env bash
+     set -euo pipefail
+     NODE_VERSION={{node_version}}
+     echo "Installing dev dependencies with Node $NODE_VERSION..."
+     docker build --progress=plain -f Dockerfile.install --build-arg NODE_VERSION=$NODE_VERSION --build-arg PACKAGES="{{PACKAGES}}" --build-arg DEV=true -t npm-installer .
+     CONTAINER_ID=$(docker create --name npm-temp npm-installer)
+     docker logs $CONTAINER_ID
+     echo "Extracting node_modules..."
+     docker cp npm-temp:/install/node_modules ./node_modules
+     docker cp npm-temp:/install/package.json ./package.json
+     docker cp npm-temp:/install/package-lock.json ./package-lock.json 2>/dev/null || true
+     echo "Cleaning up..."
+     docker rm npm-temp
+     docker rmi npm-installer
+     echo "Done."
+
+ build:
+     #!/usr/bin/env bash
+     set -euo pipefail
+     NODE_VERSION=$(cat .nvmrc | tr -d 'v\n\r')
+     echo "Building with Node $NODE_VERSION..."
+     # The build does not need network access, so deny it entirely
+     docker build --network none --progress=plain -f Dockerfile --build-arg NODE_VERSION=$NODE_VERSION -t npm-build .
+     CONTAINER_ID=$(docker create npm-build)
+     echo "Extracting build artifacts..."
+     docker cp $CONTAINER_ID:/app/dist ./dist
+     echo "Cleaning up..."
+     docker rm $CONTAINER_ID
+     docker rmi npm-build
+     echo "Build complete. Output in ./dist"
+
+ test:
+     #!/usr/bin/env bash
+     set -euo pipefail
+     NODE_VERSION=$(cat .nvmrc | tr -d 'v\n\r')
+     echo "Running tests with Node $NODE_VERSION..."
+     docker build --progress=plain -f Dockerfile.test --build-arg NODE_VERSION=$NODE_VERSION -t npm-test .
+
+     # Run tests with the .env file passed in if it exists
+     if [ -f .env ]; then
+         echo "Found .env file, passing it to the container..."
+         docker run --rm --env-file .env npm-test
+     else
+         echo "No .env file found, running without environment variables..."
+         ## If there is no .env file, we can safely assume the tests
+         ## make no network calls
+         docker run --rm --network none npm-test
+     fi
+     echo "Tests complete."
+
+ clean:
+     rm -rf node_modules
+
+ install-biome:
+     npm i -D @biomejs/biome@1.9.4
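The `node_version` variable above derives the Node version by stripping the `v` prefix and line endings from `.nvmrc` (added in this release with the content `v20.17.0`). The extraction step can be reproduced in isolation; the temp-file path here is illustrative:

```shell
# Reproduce the justfile's node_version extraction from .nvmrc.
# /tmp/nvmrc.demo is a stand-in for the repo's .nvmrc file.
printf 'v20.17.0\n' > /tmp/nvmrc.demo
NODE_VERSION=$(cat /tmp/nvmrc.demo | tr -d 'v\n\r')
echo "$NODE_VERSION"   # prints: 20.17.0
rm -f /tmp/nvmrc.demo
```

Note that `tr -d 'v\n\r'` deletes every `v` (and CR/LF) in the file, which is fine for purely numeric versions like `20.17.0`; it also explains why `{{node_version}}` interpolates cleanly into the `docker build --build-arg NODE_VERSION=...` calls.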
package/package.json CHANGED
@@ -1,8 +1,12 @@
  {
  "name": "arvo-event-handler",
- "version": "3.0.18",
- "description": "Type-safe event handler system with versioning, telemetry, and contract validation for distributed Arvo event-driven architectures, featuring routing and multi-handler support.",
+ "version": "3.0.20",
+ "description": "A complete set of orthogonal event handler and orchestration primitives for Arvo based applications, featuring declarative state machines (XState), imperative resumables for agentic workflows, contract-based routing, OpenTelemetry observability, and in-memory event broker for building composable event-driven architectures.",
  "main": "dist/index.js",
+ "repository": {
+   "type": "git",
+   "url": "https://github.com/SaadAhmad123/arvo-event-handler"
+ },
  "scripts": {
  "build": "tsc",
  "start": "node ./dist/index.js",
@@ -13,44 +17,59 @@
  "doc": "npx typedoc",
  "otel": "docker run --rm -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -p 16686:16686 -p 4317:4317 -p 4318:4318 -p 9411:9411 jaegertracing/all-in-one:latest"
  },
- "keywords": ["arvo", "event-driven architecture", "xorca", "core", "cloudevent", "opentelemetry", "orchestrator"],
+ "keywords": [
+   "arvo",
+   "event-driven",
+   "event-handler",
+   "orchestration",
+   "state-machine",
+   "xstate",
+   "workflow",
+   "resumable",
+   "virtual orchestration",
+   "cloudevents",
+   "opentelemetry",
+   "distributed-systems",
+   "microservices",
+   "agentic-ai",
+   "event-broker",
+   "observability"
+ ],
  "author": "Saad Ahmad <saadkwi12@hotmail.com>",
  "license": "MIT",
  "devDependencies": {
- "@biomejs/biome": "^1.9.4",
- "@jest/globals": "^29.7.0",
- "@opentelemetry/auto-instrumentations-node": "^0.49.1",
- "@opentelemetry/exporter-metrics-otlp-proto": "^0.52.1",
- "@opentelemetry/exporter-trace-otlp-grpc": "^0.53.0",
- "@opentelemetry/exporter-trace-otlp-proto": "^0.52.1",
- "@opentelemetry/resources": "^1.25.1",
- "@opentelemetry/sdk-metrics": "^1.25.1",
- "@opentelemetry/sdk-node": "^0.52.1",
- "@opentelemetry/sdk-trace-node": "^1.25.1",
- "@opentelemetry/semantic-conventions": "^1.25.1",
- "@types/jest": "^29.5.12",
- "@types/node": "^22.5.0",
- "@types/uuid": "^10.0.0",
- "dotenv": "^16.4.5",
- "jest": "^29.7.0",
- "prettier": "^3.3.3",
- "ts-jest": "^29.2.5",
- "ts-node": "^10.9.2",
- "typedoc": "^0.26.6",
- "typedoc-github-theme": "^0.1.2",
- "typedoc-plugin-coverage": "^3.4.0",
- "typedoc-plugin-mermaid": "^1.12.0",
- "typedoc-plugin-zod": "^1.2.1",
- "typescript": "^5.5.4"
+ "@biomejs/biome": "1.9.4",
+ "@jest/globals": "29.7.0",
+ "@opentelemetry/auto-instrumentations-node": "0.49.1",
+ "@opentelemetry/exporter-metrics-otlp-proto": "0.52.1",
+ "@opentelemetry/exporter-trace-otlp-grpc": "0.53.0",
+ "@opentelemetry/exporter-trace-otlp-proto": "0.52.1",
+ "@opentelemetry/resources": "1.25.1",
+ "@opentelemetry/sdk-metrics": "1.25.1",
+ "@opentelemetry/sdk-node": "0.52.1",
+ "@opentelemetry/sdk-trace-node": "1.25.1",
+ "@opentelemetry/semantic-conventions": "1.38.0",
+ "@types/jest": "29.5.12",
+ "@types/node": "22.19.1",
+ "dotenv": "16.6.1",
+ "jest": "29.7.0",
+ "ts-jest": "29.4.5",
+ "ts-node": "10.9.2",
+ "typedoc": "0.28.15",
+ "typedoc-github-theme": "0.3.1",
+ "typedoc-plugin-coverage": "4.0.2",
+ "typedoc-plugin-mermaid": "1.12.0",
+ "typedoc-plugin-zod": "1.4.3",
+ "typescript": "5.9.3"
  },
  "dependencies": {
- "@opentelemetry/api": "^1.9.0",
- "@opentelemetry/core": "^1.30.1",
- "arvo-core": "^3.0.18",
- "uuid": "^11.1.0",
- "xstate": "^5.23.0",
- "zod": "^3.25.74",
- "zod-to-json-schema": "^3.24.6"
+ "@opentelemetry/api": "1.9.0",
+ "@opentelemetry/core": "1.30.1",
+ "arvo-core": "3.0.22",
+ "uuid": "11.1.0",
+ "xstate": "5.24.0",
+ "zod": "3.25.74",
+ "zod-to-json-schema": "3.25.0"
  },
  "engines": {
  "node": ">=18.0.0"