builderman 1.5.3 → 1.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,6 +18,7 @@ It is designed for monorepos, long-running development processes, and CI/CD pipe
  > - [Commands & Modes](#commands--modes)
  > - [Environment Variables](#environment-variables)
  > - [Dependencies](#dependencies)
+ > - [Command-Level Dependencies](#command-level-dependencies)
  > - [Pipelines](#pipelines)
  > - [Concurrency Control](#concurrency-control)
  > - [Pipeline Composition](#pipeline-composition)
@@ -30,9 +31,15 @@ It is designed for monorepos, long-running development processes, and CI/CD pipe
  > - [Skipping Tasks](#skipping-tasks)
  > - [Strict Mode](#strict-mode)
  > - [Task-Level Skip Override](#task-level-skip-override)
+ > - [Caching](#caching)
+ > - [Cache Inputs](#cache-inputs)
+ > - [File Paths](#file-paths)
+ > - [Artifacts](#artifacts)
+ > - [Input Resolvers](#input-resolvers)
  > - [Execution Statistics](#execution-statistics)
  > - [Pipeline Statistics](#pipeline-statistics)
  > - [Task Statistics](#task-statistics)
+ > - [Advanced Examples](#advanced-examples)
  > - [When Should I Use builderman?](#when-should-i-use-builderman)

  ## Key Features
@@ -45,6 +52,9 @@ It is designed for monorepos, long-running development processes, and CI/CD pipe
  - 📊 **Rich execution statistics** — always available, even on failure
  - ❌ **Never throws** — failures are returned as structured results
  - 🧱 **Composable pipelines** — pipelines can be converted into tasks
+ - 💾 **Task-level caching** — skip tasks when inputs and outputs haven't changed
+ - 🎯 **Artifact dependencies** — reference outputs from other tasks in cache inputs
+ - 🔌 **Input resolvers** — track package dependencies and other dynamic inputs

  ---

@@ -63,27 +73,35 @@ import { task, pipeline } from "builderman"

  const build = task({
    name: "build",
-   commands: { build: "tsc" },
-   cwd: "packages/my-package", // Optional: defaults to "."
+   commands: {
+     build: "tsc",
+     dev: "tsc --watch",
+   },
  })

  const test = task({
    name: "test",
-   commands: { build: "npm test" },
+   commands: {
+     build: "npm test",
+   },
    dependencies: [build],
-   cwd: "packages/my-package",
  })

- const result = await pipeline([build, test]).run({
-   command: "build",
+ const deploy = task({
+   name: "deploy",
+   commands: {
+     build: "npm run deploy",
+   },
+   dependencies: [test],
  })

- if (!result.ok) {
-   console.error("Pipeline failed:", result.error.message)
- }
+ const result = await pipeline([build, test, deploy]).run({
+   command: "build",
+ })
+ console.log(result)
  ```

- This defines a simple dependency graph where `test` runs only after `build` completes successfully.
+ This defines a simple CI pipeline where `test` runs only after `build` completes, and `deploy` runs only after `test` completes. The result is a structured object with detailed execution statistics.

  ---

@@ -102,8 +120,8 @@ A **task** represents a unit of work. Each task:
  ```ts
  import { task } from "builderman"

- const libTask = task({
-   name: "lib:build",
+ const myPackage = task({
+   name: "myPackage",
    commands: {
      build: "tsc",
      dev: {
@@ -111,7 +129,7 @@ const libTask = task({
        readyWhen: (stdout) => stdout.includes("Watching for file changes."),
      },
    },
-   cwd: "packages/lib",
+   cwd: "packages/myPackage",
  })
  ```

@@ -133,9 +151,11 @@ Commands may be:
  - A string (executed directly), or
  - An object with:
    - `run`: the command to execute
-   - `readyWhen`: a predicate that marks the task as ready
+   - `dependencies`: optional array of tasks that this command depends on (see [Command-Level Dependencies](#command-level-dependencies))
+   - `readyWhen`: a predicate that marks the task as ready; useful for long-running processes that let dependents start before they exit (e.g. a "watch" process)
    - `teardown`: cleanup logic to run after completion
    - `env`: environment variables specific to this command
+   - `cache`: configuration for task-level caching (see [Caching](#caching))

  ---

@@ -153,11 +173,11 @@ Environment variables can be provided at multiple levels, with more specific lev
  #### Command-Level Environment Variables

  ```ts
- const apiTask = task({
-   name: "api",
+ const server = task({
+   name: "server",
    commands: {
      dev: {
-       run: "npm run dev",
+       run: "node server.js",
        env: {
          PORT: "3000",
          NODE_ENV: "development",
@@ -170,15 +190,27 @@ const apiTask = task({
  #### Task-Level Environment Variables

  ```ts
- const apiTask = task({
-   name: "api",
-   commands: {
-     dev: "npm run dev",
-     build: "npm run build",
-   },
+ const server = task({
+   name: "server",
    env: {
-     API_URL: "http://localhost:3000",
-     LOG_LEVEL: "debug",
+     // in both dev and build, the PORT environment variable will be set to "3000"
+     PORT: "3000",
+   },
+   commands: {
+     dev: {
+       run: "node server.js",
+       env: {
+         LOG_LEVEL: "debug",
+         // overrides the task-level PORT environment variable
+         PORT: "4200",
+       },
+     },
+     build: {
+       run: "node server.js",
+       env: {
+         LOG_LEVEL: "info",
+       },
+     },
    },
  })
  ```
@@ -186,7 +218,7 @@ const apiTask = task({
  #### Pipeline-Level Environment Variables

  ```ts
- const result = await pipeline([apiTask]).run({
+ const result = await pipeline([server]).run({
    env: {
      DATABASE_URL: "postgres://localhost/mydb",
      REDIS_URL: "redis://localhost:6379",
@@ -225,15 +257,57 @@ In this example, tasks in `innerPipeline` will receive both `INNER_VAR` and `OUT

  Tasks may depend on other tasks. A task will not start until all its dependencies have completed (or been skipped).

+ When a task has task-level dependencies, each command in the task automatically depends on the command with the same name in the dependency task (if it exists). For example, if a task and its dependency both define `dev` and `build` commands, the task's `dev` command will depend on the dependency's `dev` command, and its `build` command on the dependency's `build` command.
+
  ```ts
- const consumerTask = task({
-   name: "consumer:dev",
+ const server = task({
+   name: "server",
    commands: {
-     build: "npm run build",
-     dev: "npm run dev",
+     dev: "node server.js",
+     build: "node server.js",
+   },
+   dependencies: [shared], // Both "build" and "dev" commands will depend on shared's matching commands, if they exist
+ })
+ ```
+
+ #### Command-Level Dependencies
+
+ You can also specify dependencies at the command level for more granular control. This is useful when different commands have different dependency requirements.
+
+ ```ts
+ const database = task({
+   name: "database",
+   commands: {
+     dev: {
+       run: "docker compose up",
+       readyWhen: (output) => output.includes("ready"),
+       teardown: "docker compose down",
+     },
+   },
+ })
+
+ const migrations = task({
+   name: "migrations",
+   commands: {
+     build: "npm run migrate",
+   },
+ })
+
+ const api = task({
+   name: "api",
+   commands: {
+     // Build only needs migrations
+     build: {
+       run: "npm run build",
+       dependencies: [migrations],
+     },
+
+     // Dev needs both the database and migrations
+     dev: {
+       run: "npm run dev",
+       dependencies: [database, migrations],
+     },
    },
-   cwd: "packages/consumer",
-   dependencies: [libTask],
  })
  ```

@@ -246,7 +320,7 @@ A **pipeline** executes a set of tasks according to their dependency graph.
  ```ts
  import { pipeline } from "builderman"

- const result = await pipeline([libTask, consumerTask]).run({
+ const result = await pipeline([backend, frontend]).run({
    command: "dev",
    onTaskBegin: (name) => {
      console.log(`[${name}] starting`)
@@ -289,29 +363,39 @@ If `maxConcurrency` is not specified, there is no limit (tasks run concurrently
  Pipelines can be converted into tasks and composed like any other unit of work.

  ```ts
- const build = pipeline([
-   /* ... */
- ])
- const test = pipeline([
-   /* ... */
- ])
- const deploy = pipeline([
-   /* ... */
- ])
+ const backend = task({
+   name: "backend",
+   cwd: "packages/backend",
+   commands: { build: "npm run build" },
+ })

- const buildTask = build.toTask({ name: "build" })
- const testTask = test.toTask({
-   name: "test",
-   dependencies: [buildTask],
-   env: { TEST_ENV: "test-value" }, // Optional: env for nested pipeline
+ const frontend = task({
+   name: "frontend",
+   cwd: "packages/frontend",
+   commands: { build: "npm run build" },
+ })
+
+ const productionMonitoring = task({
+   name: "production-monitoring",
+   cwd: "packages/production-monitoring",
+   commands: { build: "npm run build" },
  })
- const deployTask = deploy.toTask({ name: "deploy", dependencies: [testTask] })

- const ci = pipeline([buildTask, testTask, deployTask])
- const result = await ci.run()
+ // Convert a pipeline into a task
+ const app = pipeline([backend, frontend]).toTask({
+   name: "app",
+   dependencies: [productionMonitoring], // The app task depends on productionMonitoring
+ })
+
+ const result = await pipeline([app, productionMonitoring]).run()
  ```

- When a pipeline is converted to a task, it becomes a **single node** in the dependency graph. The nested pipeline must fully complete before dependents can start.
+ When a pipeline is converted to a task:
+
+ - It becomes a **single node** in the dependency graph, with the tasks in the pipeline as subtasks
+ - The tasks in the pipeline must all either complete or be flagged as 'ready' or 'skipped' before dependents can start
+ - You can specify dependencies and environment variables for the pipeline task
+ - The tasks in the pipeline are tracked as subtasks in execution statistics and are included in the summary object

  ---

@@ -324,7 +408,7 @@ All failures — including task errors, invalid configuration, cancellation, and
  ```ts
  import { pipeline, PipelineError } from "builderman"

- const result = await pipeline([libTask, consumerTask]).run()
+ const result = await pipeline([backend, frontend]).run()

  if (!result.ok) {
    switch (result.error.code) {
@@ -361,7 +445,7 @@ You can cancel a running pipeline using an `AbortSignal`.
  ```ts
  const controller = new AbortController()

- const runPromise = pipeline([libTask, consumerTask]).run({
+ const runPromise = pipeline([backend, frontend]).run({
    signal: controller.signal,
  })

@@ -440,7 +524,10 @@ Teardowns do **not** run when:

  ## Skipping Tasks

- If a task does not define a command for the current mode, it is **skipped** by default.
+ Tasks can be skipped in two scenarios:
+
+ 1. **Missing command**: If a task does not define a command for the current mode, it is **skipped** by default
+ 2. **Cache hit**: If a task has cache configuration and the cache matches, the task is **skipped** (see [Caching](#caching))

  Skipped tasks:

@@ -468,8 +555,12 @@ const apiTask = task({

  const result = await pipeline([dbTask, apiTask]).run({
    command: "build",
-   onTaskSkipped: (taskName, mode) => {
-     console.log(`[${taskName}] skipped (no "${mode}" command)`)
+   onTaskSkipped: (taskName, taskId, mode, reason) => {
+     if (reason === "command-not-found") {
+       console.log(`[${taskName}] skipped (no "${mode}" command)`)
+     } else if (reason === "cache-hit") {
+       console.log(`[${taskName}] skipped (cache hit)`)
+     }
    },
  })
  ```
@@ -514,6 +605,193 @@ const result = await pipeline([dbTask]).run({

  ---

+ ## Caching
+
+ **builderman** supports task-level caching to skip expensive work when inputs and outputs haven't changed. This is best suited to build-style tasks with well-defined file inputs and outputs.
+
+ ### Basic Usage
+
+ Enable caching by providing `cache` configuration in your command:
+
+ ```ts
+ const buildTask = task({
+   name: "build",
+   commands: {
+     build: {
+       run: "tsc",
+       cache: {
+         inputs: ["src"],
+         // outputs is optional; if omitted, only inputs are tracked
+         outputs: ["dist"],
+       },
+     },
+   },
+ })
+ ```
+
+ When caching is enabled:
+
+ 1. **First run**: The task executes normally and creates a snapshot of the input and output files
+ 2. **Subsequent runs**: The task compares the current state with the cached snapshot
+ 3. **Cache hit**: If inputs and outputs are unchanged, the task is **skipped** (no command execution)
+ 4. **Cache miss**: If anything changed, the task runs and updates the cache
+
+ ### Cache Inputs
+
+ Cache inputs can include:
+
+ - **File paths** (strings): Directories or files to track
+ - **Artifacts**: References to outputs from other tasks using `task.artifact("command")`
+ - **Input resolvers**: Special functions that resolve to cacheable inputs (e.g., package dependencies)
+
+ #### File Paths
+
+ ```ts
+ const buildTask = task({
+   name: "build",
+   commands: {
+     build: {
+       run: "tsc",
+       cache: {
+         inputs: ["src", "package.json"],
+         outputs: ["dist"],
+       },
+     },
+   },
+ })
+ ```
+
+ #### Artifacts
+
+ You can reference outputs from other tasks as cache inputs using `task.artifact("command")`. This creates an artifact dependency that tracks changes to the producing task's outputs.
+
+ ```ts
+ const shared = task({
+   name: "shared",
+   cwd: "packages/shared",
+   commands: {
+     build: {
+       run: "npm run build",
+       cache: {
+         inputs: ["src"],
+         outputs: ["dist"],
+       },
+     },
+   },
+ })
+
+ const backend = task({
+   name: "backend",
+   cwd: "packages/backend",
+   commands: {
+     build: {
+       run: "npm run build",
+       cache: {
+         inputs: [
+           "src",
+           shared.artifact("build"), // Track changes to shared's build outputs
+         ],
+         outputs: ["dist"],
+       },
+     },
+   },
+ })
+ ```
+
+ When using artifacts:
+
+ - The artifact-producing task must have `cache.outputs` defined
+ - The artifact is included in the cache key, so changes to the artifact invalidate the cache
+ - The consuming task automatically depends on the producing task (execution dependency)
+
+ #### Input Resolvers
+
+ Input resolvers are functions that resolve to cacheable inputs. They're useful for tracking package dependencies and other dynamic inputs.
+
+ For example, the `@builderman/resolvers-pnpm` package provides a resolver for pnpm package dependencies:
+
+ ```ts
+ import { task } from "builderman"
+ import { pnpm } from "@builderman/resolvers-pnpm"
+
+ const server = task({
+   name: "server",
+   cwd: "packages/server",
+   commands: {
+     build: {
+       run: "pnpm build",
+       cache: {
+         inputs: [
+           "src",
+           pnpm.package(), // Automatically tracks pnpm dependencies
+         ],
+         outputs: ["dist"],
+       },
+     },
+   },
+ })
+ ```
+
+ The resolver automatically detects whether you're in a workspace or local package and tracks the appropriate `pnpm-lock.yaml` and package dependencies.
+
+ ### How It Works
+
+ The cache system:
+
+ - Creates a snapshot of file metadata (modification time and size) for all files in the configured input and output paths
+ - For artifacts, tracks the artifact identifier from the producing task's cache
+ - For resolvers, includes the resolved input in the cache key
+ - Stores snapshots in `.builderman/cache/<version>/` relative to the main process's working directory
+ - Compares snapshots before running the task
+ - Writes the snapshot **after** successful task completion (ensuring outputs are captured)
+
+ ### Path Resolution
+
+ - Paths may be **absolute** or **relative to the task's `cwd`**
+ - Directories are recursively scanned for all files
+ - Non-existent paths are treated as empty (no files)
+
+ ### Cache Information in Statistics
+
+ When a task has cache configuration, its statistics include cache information:
+
+ ```ts
+ const result = await pipeline([buildTask]).run()
+
+ const taskStats = result.stats.tasks[0]
+
+ if (taskStats.cache) {
+   console.log("Cache checked:", taskStats.cache.checked)
+   console.log("Cache hit:", taskStats.cache.hit)
+   console.log("Cache file:", taskStats.cache.cacheFile)
+   console.log("Inputs:", taskStats.cache.inputs)
+   console.log("Outputs:", taskStats.cache.outputs)
+ }
+ ```
+
+ ### Cache Behavior
+
+ - **Cache failures never break execution** — if cache checking fails, the task runs normally
+ - **Cache is written after completion** — ensures outputs are captured correctly
+ - **Cache is per task and command** — each task-command combination has its own cache file
+ - **Cache directory is versioned** — stored under `v1/` to allow future cache format changes
+
+ ### When to Use Caching
+
+ Caching is ideal for:
+
+ - Build tasks (TypeScript compilation, bundling, etc.)
+ - Code generation tasks
+ - Any expensive operation where inputs/outputs can be reliably tracked
+
+ Caching is **not** suitable for:
+
+ - Tasks that have side effects beyond file outputs
+ - Tasks that depend on external state (APIs, databases, etc.)
+ - Tasks where outputs are non-deterministic
+
+ ---
+
  ## Execution Statistics

  Every pipeline run returns detailed execution statistics.
@@ -551,6 +829,14 @@ for (const task of result.stats.tasks) {
      console.log("Teardown:", task.teardown.status)
    }

+   // Cache information is available when the task has cache configuration
+   if (task.cache) {
+     console.log("Cache checked:", task.cache.checked)
+     if (task.cache.hit !== undefined) {
+       console.log("Cache hit:", task.cache.hit)
+     }
+   }
+
    // when using pipeline.toTask() to convert a pipeline into a task, the task will have subtasks
    if (task.subtasks) {
      for (const subtask of task.subtasks) {
@@ -562,16 +848,201 @@ for (const task of result.stats.tasks) {

  ---

+ ## Advanced Examples
+
+ ### Monorepo Build Pipeline
+
+ Here's a comprehensive example showing how to build a complex monorepo pipeline with caching, artifacts, and pipeline composition:
+
+ ```ts
+ import { task, pipeline } from "builderman"
+ import { pnpm } from "@builderman/resolvers-pnpm"
+
+ /**
+  * Shared core module used by multiple packages
+  */
+ const core = task({
+   name: "core",
+   cwd: "packages/core",
+   commands: {
+     build: {
+       run: "pnpm build",
+       cache: {
+         inputs: ["src", pnpm.package()],
+         outputs: ["dist"],
+       },
+     },
+     dev: {
+       run: "pnpm dev",
+       readyWhen: (output) => output.includes("Watching for file changes"),
+     },
+     test: {
+       run: "pnpm test",
+       env: {
+         NODE_ENV: "development",
+       },
+     },
+   },
+ })
+
+ /**
+  * Factory for related feature packages
+  */
+ const createFeatureTask = (name: string) =>
+   task({
+     name,
+     cwd: `packages/${name}`,
+     commands: {
+       build: {
+         run: "pnpm build",
+         cache: {
+           inputs: ["src", core.artifact("build"), pnpm.package()],
+           outputs: ["dist"],
+         },
+       },
+       dev: {
+         run: "pnpm dev",
+         readyWhen: (output) => output.includes("Build complete"),
+       },
+     },
+   })
+
+ const featureA = createFeatureTask("feature-a")
+ const featureB = createFeatureTask("feature-b")
+
+ /**
+  * Compose related features into a single pipeline task
+  */
+ const features = pipeline([featureA, featureB]).toTask({
+   name: "features",
+   dependencies: [core],
+ })
+
+ /**
+  * Consumer package with command-level dependencies
+  */
+ const integration = task({
+   name: "integration",
+   cwd: "packages/integration",
+   commands: {
+     build: {
+       run: "pnpm build",
+       cache: {
+         inputs: [
+           "src",
+           core.artifact("build"),
+           featureA.artifact("build"),
+           featureB.artifact("build"),
+           pnpm.package(),
+         ],
+         outputs: ["dist"],
+       },
+     },
+     dev: {
+       run: "pnpm dev",
+       dependencies: [core, features],
+     },
+   },
+ })
+
+ /**
+  * End-to-end test suites
+  */
+ const smokeTests = task({
+   name: "e2e:smoke",
+   cwd: "tests/smoke",
+   commands: {
+     build: {
+       run: "pnpm build",
+       cache: {
+         inputs: [
+           "src",
+           core.artifact("build"),
+           integration.artifact("build"),
+           pnpm.package(),
+         ],
+         outputs: ["dist"],
+       },
+     },
+     test: "pnpm test",
+   },
+   dependencies: [core],
+   env: {
+     NODE_ENV: "development",
+   },
+ })
+
+ const fullTests = task({
+   name: "e2e:full",
+   cwd: "tests/full",
+   commands: {
+     build: {
+       run: "pnpm build",
+       cache: {
+         inputs: [
+           "src",
+           core.artifact("build"),
+           integration.artifact("build"),
+           pnpm.package(),
+         ],
+         outputs: ["dist"],
+       },
+     },
+     test: "pnpm test",
+   },
+   // Conditional dependency based on environment
+   dependencies: (process.env.CI ? [smokeTests] : []).concat(core),
+   env: {
+     NODE_ENV: "development",
+   },
+ })
+
+ /**
+  * Pipeline execution
+  */
+ const command = process.argv[2]
+
+ const result = await pipeline([
+   core,
+   features,
+   integration,
+   smokeTests,
+   fullTests,
+ ]).run({
+   command,
+   onTaskBegin: (name) => console.log(`[start] ${name}`),
+   onTaskSkipped: (name, _, __, reason) =>
+     console.log(`[skip] ${name} (${reason})`),
+   onTaskComplete: (name) => console.log(`[done] ${name}`),
+ })
+
+ console.log(result)
+ ```
+
+ This example demonstrates:
+
+ - **Caching with artifacts**: Tasks reference outputs from other tasks using `task.artifact("command")`
+ - **Input resolvers**: Using `pnpm.package()` to track package dependencies
+ - **Pipeline composition**: Converting pipelines to tasks with `pipeline.toTask()`
+ - **Command-level dependencies**: Different commands can have different dependencies
+ - **Conditional dependencies**: Adjusting dependencies based on runtime conditions
+ - **Observability**: Using callbacks to track pipeline execution
+
+ ---
+
  ## When Should I Use builderman?

  **builderman** is a good fit when:

- - You have dependent tasks that must run in a strict order
- - You run long-lived dev processes that need readiness detection
- - Cleanup matters (databases, containers, servers)
- - You want structured results instead of log-scraping
+ - You have interdependent tasks that must run in a well-defined order
+ - You run long-lived processes that need readiness detection (not just exit codes)
+ - Cleanup and teardown matter (containers, databases, servers, watchers)
+ - You want deterministic execution with structured results instead of log-scraping
+ - You need observable pipelines that behave the same locally and in CI
+ - You want to compose and reuse workflows, not just run scripts

  It may be overkill if:

- - You only need a few linear npm scripts
- - You do not need dependency graphs or teardown guarantees
+ - Your workflow is a handful of linear npm scripts
+ - Tasks are fire-and-forget and don't require cleanup
+ - You don't need dependency graphs, cancellation, or failure propagation