@hotmeshio/long-tail 0.1.7 → 0.1.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (245)
  1. package/README.md +3 -1
  2. package/build/api/auth.d.ts +11 -0
  3. package/build/api/auth.js +11 -0
  4. package/build/api/bot-accounts.d.ts +87 -0
  5. package/build/api/bot-accounts.js +87 -0
  6. package/build/api/controlplane.d.ts +48 -0
  7. package/build/api/controlplane.js +48 -0
  8. package/build/api/dba.d.ts +27 -0
  9. package/build/api/dba.js +27 -0
  10. package/build/api/escalations.d.ts +201 -0
  11. package/build/api/escalations.js +236 -0
  12. package/build/api/exports.d.ts +58 -0
  13. package/build/api/exports.js +58 -0
  14. package/build/api/insight.d.ts +57 -0
  15. package/build/api/insight.js +57 -0
  16. package/build/api/maintenance.d.ts +15 -0
  17. package/build/api/maintenance.js +15 -0
  18. package/build/api/mcp-runs.d.ts +27 -0
  19. package/build/api/mcp-runs.js +27 -0
  20. package/build/api/mcp.d.ts +108 -0
  21. package/build/api/mcp.js +108 -0
  22. package/build/api/namespaces.d.ts +13 -0
  23. package/build/api/namespaces.js +13 -0
  24. package/build/api/roles.d.ts +63 -0
  25. package/build/api/roles.js +63 -5
  26. package/build/api/settings.d.ts +8 -0
  27. package/build/api/settings.js +8 -0
  28. package/build/api/tasks.d.ts +43 -0
  29. package/build/api/tasks.js +43 -0
  30. package/build/api/users.d.ts +71 -0
  31. package/build/api/users.js +71 -5
  32. package/build/api/workflow-sets.d.ts +51 -0
  33. package/build/api/workflow-sets.js +51 -0
  34. package/build/api/workflows.d.ts +110 -0
  35. package/build/api/workflows.js +109 -0
  36. package/build/api/yaml-workflows.d.ts +195 -0
  37. package/build/api/yaml-workflows.js +195 -0
  38. package/build/examples/types/envelopes.d.ts +27 -0
  39. package/build/examples/types/index.d.ts +1 -1
  40. package/build/examples/workers.js +8 -0
  41. package/build/examples/workflows/assembly-line/activities.d.ts +28 -0
  42. package/build/examples/workflows/assembly-line/activities.js +53 -0
  43. package/build/examples/workflows/assembly-line/index.d.ts +17 -0
  44. package/build/examples/workflows/assembly-line/index.js +60 -0
  45. package/build/examples/workflows/assembly-line/iterator.d.ts +12 -0
  46. package/build/examples/workflows/assembly-line/iterator.js +54 -0
  47. package/build/examples/workflows/assembly-line/reverter.d.ts +18 -0
  48. package/build/examples/workflows/assembly-line/reverter.js +89 -0
  49. package/build/examples/workflows/assembly-line/types.d.ts +25 -0
  50. package/build/examples/workflows/assembly-line/types.js +8 -0
  51. package/build/examples/workflows/assembly-line/worker.d.ts +13 -0
  52. package/build/examples/workflows/assembly-line/worker.js +81 -0
  53. package/build/routes/escalations/list.js +22 -0
  54. package/build/routes/workflows/invocation.js +15 -9
  55. package/build/sdk/index.d.ts +11 -0
  56. package/build/sdk/index.js +1 -0
  57. package/build/services/workflow-invocation.d.ts +15 -2
  58. package/build/services/workflow-invocation.js +11 -6
  59. package/build/tsconfig.tsbuildinfo +1 -1
  60. package/dashboard/dist/assets/{AdminDashboard-DRjkRSjJ.js → AdminDashboard-B15jSEV2.js} +2 -2
  61. package/dashboard/dist/assets/{AdminDashboard-DRjkRSjJ.js.map → AdminDashboard-B15jSEV2.js.map} +1 -1
  62. package/dashboard/dist/assets/{AvailableEscalationsPage-CnivX4Tz.js → AvailableEscalationsPage-0V2yvKak.js} +2 -2
  63. package/dashboard/dist/assets/{AvailableEscalationsPage-CnivX4Tz.js.map → AvailableEscalationsPage-0V2yvKak.js.map} +1 -1
  64. package/dashboard/dist/assets/{BotPicker-DwwaBhTH.js → BotPicker-B4UxHcek.js} +2 -2
  65. package/dashboard/dist/assets/{BotPicker-DwwaBhTH.js.map → BotPicker-B4UxHcek.js.map} +1 -1
  66. package/dashboard/dist/assets/{CollapsibleSection-DQpaVA0M.js → CollapsibleSection-BBexNWVd.js} +2 -2
  67. package/dashboard/dist/assets/{CollapsibleSection-DQpaVA0M.js.map → CollapsibleSection-BBexNWVd.js.map} +1 -1
  68. package/dashboard/dist/assets/{ConfirmDeleteModal-B7JoDNvt.js → ConfirmDeleteModal-DlPDJSq_.js} +2 -2
  69. package/dashboard/dist/assets/{ConfirmDeleteModal-B7JoDNvt.js.map → ConfirmDeleteModal-DlPDJSq_.js.map} +1 -1
  70. package/dashboard/dist/assets/{CopyableId-AqoZayBG.js → CopyableId-BxHW1ahb.js} +2 -2
  71. package/dashboard/dist/assets/{CopyableId-AqoZayBG.js.map → CopyableId-BxHW1ahb.js.map} +1 -1
  72. package/dashboard/dist/assets/{CredentialsPage-qGw1kQzi.js → CredentialsPage-Bp_Y1Szk.js} +2 -2
  73. package/dashboard/dist/assets/{CredentialsPage-qGw1kQzi.js.map → CredentialsPage-Bp_Y1Szk.js.map} +1 -1
  74. package/dashboard/dist/assets/{CustomDurationPicker-D1HUQcd0.js → CustomDurationPicker-ByBFqXSO.js} +2 -2
  75. package/dashboard/dist/assets/{CustomDurationPicker-D1HUQcd0.js.map → CustomDurationPicker-ByBFqXSO.js.map} +1 -1
  76. package/dashboard/dist/assets/{DataTable-DKvSKoVG.js → DataTable-DyIXg-tQ.js} +2 -2
  77. package/dashboard/dist/assets/{DataTable-DKvSKoVG.js.map → DataTable-DyIXg-tQ.js.map} +1 -1
  78. package/dashboard/dist/assets/{ElapsedCell-B0yrReGQ.js → ElapsedCell-BgnA0qpS.js} +2 -2
  79. package/dashboard/dist/assets/{ElapsedCell-B0yrReGQ.js.map → ElapsedCell-BgnA0qpS.js.map} +1 -1
  80. package/dashboard/dist/assets/{EmptyState-X0fIzYID.js → EmptyState-DlMImvgm.js} +2 -2
  81. package/dashboard/dist/assets/{EmptyState-X0fIzYID.js.map → EmptyState-DlMImvgm.js.map} +1 -1
  82. package/dashboard/dist/assets/{EscalationsOverview-BQAT9W7r.js → EscalationsOverview-D90kdfw1.js} +2 -2
  83. package/dashboard/dist/assets/{EscalationsOverview-BQAT9W7r.js.map → EscalationsOverview-D90kdfw1.js.map} +1 -1
  84. package/dashboard/dist/assets/{EventTable-CX1KNLhZ.js → EventTable-BNxtlgNz.js} +2 -2
  85. package/dashboard/dist/assets/{EventTable-CX1KNLhZ.js.map → EventTable-BNxtlgNz.js.map} +1 -1
  86. package/dashboard/dist/assets/{FilterBar-DMTvuQy-.js → FilterBar-BTiaAhCx.js} +2 -2
  87. package/dashboard/dist/assets/{FilterBar-DMTvuQy-.js.map → FilterBar-BTiaAhCx.js.map} +1 -1
  88. package/dashboard/dist/assets/{ListToolbar-DTOSxoEy.js → ListToolbar-BUcagSCn.js} +2 -2
  89. package/dashboard/dist/assets/{ListToolbar-DTOSxoEy.js.map → ListToolbar-BUcagSCn.js.map} +1 -1
  90. package/dashboard/dist/assets/{McpOverview-BaKTIWrG.js → McpOverview-B-tCvz8C.js} +2 -2
  91. package/dashboard/dist/assets/{McpOverview-BaKTIWrG.js.map → McpOverview-B-tCvz8C.js.map} +1 -1
  92. package/dashboard/dist/assets/{McpQueryDetailPage-CC08T5k8.js → McpQueryDetailPage-DPlF1wYb.js} +2 -2
  93. package/dashboard/dist/assets/{McpQueryDetailPage-CC08T5k8.js.map → McpQueryDetailPage-DPlF1wYb.js.map} +1 -1
  94. package/dashboard/dist/assets/{McpQueryPage-CVfF9dYg.js → McpQueryPage-Bz7AdcfR.js} +2 -2
  95. package/dashboard/dist/assets/{McpQueryPage-CVfF9dYg.js.map → McpQueryPage-Bz7AdcfR.js.map} +1 -1
  96. package/dashboard/dist/assets/{McpRunDetailPage-CKs1RWeV.js → McpRunDetailPage-Di_qpL2V.js} +2 -2
  97. package/dashboard/dist/assets/{McpRunDetailPage-CKs1RWeV.js.map → McpRunDetailPage-Di_qpL2V.js.map} +1 -1
  98. package/dashboard/dist/assets/{McpRunsPage-CcPD_tY1.js → McpRunsPage-BBgybBEa.js} +2 -2
  99. package/dashboard/dist/assets/{McpRunsPage-CcPD_tY1.js.map → McpRunsPage-BBgybBEa.js.map} +1 -1
  100. package/dashboard/dist/assets/{Modal-_2AbWxJT.js → Modal-CaJ0gTEa.js} +2 -2
  101. package/dashboard/dist/assets/{Modal-_2AbWxJT.js.map → Modal-CaJ0gTEa.js.map} +1 -1
  102. package/dashboard/dist/assets/{OperatorDashboard-BGiRaRDr.js → OperatorDashboard-DDfMmrmR.js} +2 -2
  103. package/dashboard/dist/assets/{OperatorDashboard-BGiRaRDr.js.map → OperatorDashboard-DDfMmrmR.js.map} +1 -1
  104. package/dashboard/dist/assets/{PageHeader-DVr5Qyzm.js → PageHeader-C5D-G5rp.js} +2 -2
  105. package/dashboard/dist/assets/{PageHeader-DVr5Qyzm.js.map → PageHeader-C5D-G5rp.js.map} +1 -1
  106. package/dashboard/dist/assets/{PageHeaderWithStats-D0KRASML.js → PageHeaderWithStats-DCa2eZh2.js} +2 -2
  107. package/dashboard/dist/assets/{PageHeaderWithStats-D0KRASML.js.map → PageHeaderWithStats-DCa2eZh2.js.map} +1 -1
  108. package/dashboard/dist/assets/{PriorityBadge-Bx2559OU.js → PriorityBadge-DTHq6OUZ.js} +2 -2
  109. package/dashboard/dist/assets/{PriorityBadge-Bx2559OU.js.map → PriorityBadge-DTHq6OUZ.js.map} +1 -1
  110. package/dashboard/dist/assets/{ProcessDetailPage-69I--sry.js → ProcessDetailPage-fC4dhrd0.js} +2 -2
  111. package/dashboard/dist/assets/{ProcessDetailPage-69I--sry.js.map → ProcessDetailPage-fC4dhrd0.js.map} +1 -1
  112. package/dashboard/dist/assets/{ProcessesListPage-BDpUbua2.js → ProcessesListPage-CL2MY8uD.js} +2 -2
  113. package/dashboard/dist/assets/{ProcessesListPage-BDpUbua2.js.map → ProcessesListPage-CL2MY8uD.js.map} +1 -1
  114. package/dashboard/dist/assets/{RolePill-CcAqEaSt.js → RolePill-kgKPANly.js} +2 -2
  115. package/dashboard/dist/assets/{RolePill-CcAqEaSt.js.map → RolePill-kgKPANly.js.map} +1 -1
  116. package/dashboard/dist/assets/{RolesPage-Cl23Hjet.js → RolesPage-Be2lXTHD.js} +2 -2
  117. package/dashboard/dist/assets/{RolesPage-Cl23Hjet.js.map → RolesPage-Be2lXTHD.js.map} +1 -1
  118. package/dashboard/dist/assets/{RowActions-B4mqIt3Z.js → RowActions-DIzJCwqR.js} +2 -2
  119. package/dashboard/dist/assets/{RowActions-B4mqIt3Z.js.map → RowActions-DIzJCwqR.js.map} +1 -1
  120. package/dashboard/dist/assets/{StatCard-Cz_2OjAZ.js → StatCard-CRi2Jy6t.js} +2 -2
  121. package/dashboard/dist/assets/{StatCard-Cz_2OjAZ.js.map → StatCard-CRi2Jy6t.js.map} +1 -1
  122. package/dashboard/dist/assets/{StatusBadge-Wi2FJZsn.js → StatusBadge-BETI_8Mr.js} +2 -2
  123. package/dashboard/dist/assets/{StatusBadge-Wi2FJZsn.js.map → StatusBadge-BETI_8Mr.js.map} +1 -1
  124. package/dashboard/dist/assets/{StepIndicator-PW5NRDMb.js → StepIndicator-DjpMqCjz.js} +2 -2
  125. package/dashboard/dist/assets/{StepIndicator-PW5NRDMb.js.map → StepIndicator-DjpMqCjz.js.map} +1 -1
  126. package/dashboard/dist/assets/{StickyPagination-Bl2Uzz65.js → StickyPagination-BZbExQ9t.js} +2 -2
  127. package/dashboard/dist/assets/{StickyPagination-Bl2Uzz65.js.map → StickyPagination-BZbExQ9t.js.map} +1 -1
  128. package/dashboard/dist/assets/{SwimlaneTimeline-CUPqMd0z.js → SwimlaneTimeline-DZthQyhR.js} +2 -2
  129. package/dashboard/dist/assets/{SwimlaneTimeline-CUPqMd0z.js.map → SwimlaneTimeline-DZthQyhR.js.map} +1 -1
  130. package/dashboard/dist/assets/{TagInput-BLtf86Ly.js → TagInput-CukbOfYn.js} +2 -2
  131. package/dashboard/dist/assets/{TagInput-BLtf86Ly.js.map → TagInput-CukbOfYn.js.map} +1 -1
  132. package/dashboard/dist/assets/{TaskDetailPage-BXJFX74D.js → TaskDetailPage-CIZHIKo9.js} +2 -2
  133. package/dashboard/dist/assets/{TaskDetailPage-BXJFX74D.js.map → TaskDetailPage-CIZHIKo9.js.map} +1 -1
  134. package/dashboard/dist/assets/{TaskQueuePill-CWYj3xKe.js → TaskQueuePill-Q7DGoysj.js} +2 -2
  135. package/dashboard/dist/assets/{TaskQueuePill-CWYj3xKe.js.map → TaskQueuePill-Q7DGoysj.js.map} +1 -1
  136. package/dashboard/dist/assets/{TasksListPage-C3cX94Mw.js → TasksListPage-CsC9wjb0.js} +2 -2
  137. package/dashboard/dist/assets/{TasksListPage-C3cX94Mw.js.map → TasksListPage-CsC9wjb0.js.map} +1 -1
  138. package/dashboard/dist/assets/{TimeAgo-B_5yDDHV.js → TimeAgo-BcOOnJeH.js} +2 -2
  139. package/dashboard/dist/assets/{TimeAgo-B_5yDDHV.js.map → TimeAgo-BcOOnJeH.js.map} +1 -1
  140. package/dashboard/dist/assets/{TimestampCell-DRX724uU.js → TimestampCell-CEMapYDQ.js} +2 -2
  141. package/dashboard/dist/assets/{TimestampCell-DRX724uU.js.map → TimestampCell-CEMapYDQ.js.map} +1 -1
  142. package/dashboard/dist/assets/{UserName-Ca8FA469.js → UserName-BHoN7iRL.js} +2 -2
  143. package/dashboard/dist/assets/{UserName-Ca8FA469.js.map → UserName-BHoN7iRL.js.map} +1 -1
  144. package/dashboard/dist/assets/{WorkflowExecutionPage-BBYWEV2P.js → WorkflowExecutionPage-DawKl1LT.js} +2 -2
  145. package/dashboard/dist/assets/{WorkflowExecutionPage-BBYWEV2P.js.map → WorkflowExecutionPage-DawKl1LT.js.map} +1 -1
  146. package/dashboard/dist/assets/{WorkflowPill-BXifAuLi.js → WorkflowPill-CP84Vqeg.js} +2 -2
  147. package/dashboard/dist/assets/{WorkflowPill-BXifAuLi.js.map → WorkflowPill-CP84Vqeg.js.map} +1 -1
  148. package/dashboard/dist/assets/{WorkflowsDashboard-Drl3juz9.js → WorkflowsDashboard-QrvVFxtQ.js} +2 -2
  149. package/dashboard/dist/assets/{WorkflowsDashboard-Drl3juz9.js.map → WorkflowsDashboard-QrvVFxtQ.js.map} +1 -1
  150. package/dashboard/dist/assets/{WorkflowsOverview-03IRrDLg.js → WorkflowsOverview-DuhAi_OY.js} +2 -2
  151. package/dashboard/dist/assets/{WorkflowsOverview-03IRrDLg.js.map → WorkflowsOverview-DuhAi_OY.js.map} +1 -1
  152. package/dashboard/dist/assets/{YamlWorkflowsPage-DC2cLxVi.js → YamlWorkflowsPage-Dc9xw82a.js} +2 -2
  153. package/dashboard/dist/assets/{YamlWorkflowsPage-DC2cLxVi.js.map → YamlWorkflowsPage-Dc9xw82a.js.map} +1 -1
  154. package/dashboard/dist/assets/{bots-DZEXcgiJ.js → bots-Dny-rmmI.js} +2 -2
  155. package/dashboard/dist/assets/{bots-DZEXcgiJ.js.map → bots-Dny-rmmI.js.map} +1 -1
  156. package/dashboard/dist/assets/{escalation-Cw48lNaF.js → escalation-CV0sKNH5.js} +2 -2
  157. package/dashboard/dist/assets/{escalation-Cw48lNaF.js.map → escalation-CV0sKNH5.js.map} +1 -1
  158. package/dashboard/dist/assets/{escalation-columns-NINpo3qf.js → escalation-columns-30CKyoWI.js} +2 -2
  159. package/dashboard/dist/assets/{escalation-columns-NINpo3qf.js.map → escalation-columns-30CKyoWI.js.map} +1 -1
  160. package/dashboard/dist/assets/{helpers-fk_qr729.js → helpers-C1-30CzH.js} +2 -2
  161. package/dashboard/dist/assets/{helpers-fk_qr729.js.map → helpers-C1-30CzH.js.map} +1 -1
  162. package/dashboard/dist/assets/{helpers-Cuu3xKfr.js → helpers-C1jaRD-d.js} +2 -2
  163. package/dashboard/dist/assets/{helpers-Cuu3xKfr.js.map → helpers-C1jaRD-d.js.map} +1 -1
  164. package/dashboard/dist/assets/{index-CNI7k7oB.js → index-BUVQ6wmy.js} +2 -2
  165. package/dashboard/dist/assets/{index-CNI7k7oB.js.map → index-BUVQ6wmy.js.map} +1 -1
  166. package/dashboard/dist/assets/{index-aRvL-dXp.js → index-CxVB7F4X.js} +2 -2
  167. package/dashboard/dist/assets/{index-aRvL-dXp.js.map → index-CxVB7F4X.js.map} +1 -1
  168. package/dashboard/dist/assets/{index-BIG3KooI.js → index-D-oCWCAS.js} +2 -2
  169. package/dashboard/dist/assets/{index-BIG3KooI.js.map → index-D-oCWCAS.js.map} +1 -1
  170. package/dashboard/dist/assets/{index-b03HlbnH.js → index-D0wPM3Ck.js} +2 -2
  171. package/dashboard/dist/assets/{index-b03HlbnH.js.map → index-D0wPM3Ck.js.map} +1 -1
  172. package/dashboard/dist/assets/{index-DwRytW9O.js → index-DHgnkykj.js} +2 -2
  173. package/dashboard/dist/assets/{index-DwRytW9O.js.map → index-DHgnkykj.js.map} +1 -1
  174. package/dashboard/dist/assets/{index-Dj-z-x8M.js → index-D_aJBEAG.js} +2 -2
  175. package/dashboard/dist/assets/{index-Dj-z-x8M.js.map → index-D_aJBEAG.js.map} +1 -1
  176. package/dashboard/dist/assets/{index-CORHB0WC.js → index-Dd_U4mLm.js} +2 -2
  177. package/dashboard/dist/assets/{index-CORHB0WC.js.map → index-Dd_U4mLm.js.map} +1 -1
  178. package/dashboard/dist/assets/{index-BwN3KP_L.js → index-DnmZbNxk.js} +7 -7
  179. package/dashboard/dist/assets/index-DnmZbNxk.js.map +1 -0
  180. package/dashboard/dist/assets/{index-B98ipWxE.js → index-_RBvi7s6.js} +2 -2
  181. package/dashboard/dist/assets/{index-B98ipWxE.js.map → index-_RBvi7s6.js.map} +1 -1
  182. package/dashboard/dist/assets/{index-Bxe8h1x4.js → index-n7td8zgX.js} +2 -2
  183. package/dashboard/dist/assets/{index-Bxe8h1x4.js.map → index-n7td8zgX.js.map} +1 -1
  184. package/dashboard/dist/assets/{mcp-BZoFryNc.js → mcp-CRFr4L9W.js} +2 -2
  185. package/dashboard/dist/assets/{mcp-BZoFryNc.js.map → mcp-CRFr4L9W.js.map} +1 -1
  186. package/dashboard/dist/assets/{mcp-query-wiw1kwm8.js → mcp-query-DHY2mZBQ.js} +2 -2
  187. package/dashboard/dist/assets/{mcp-query-wiw1kwm8.js.map → mcp-query-DHY2mZBQ.js.map} +1 -1
  188. package/dashboard/dist/assets/{mcp-runs-BaEKnf5v.js → mcp-runs-BqPHqwAO.js} +2 -2
  189. package/dashboard/dist/assets/{mcp-runs-BaEKnf5v.js.map → mcp-runs-BqPHqwAO.js.map} +1 -1
  190. package/dashboard/dist/assets/{namespaces-BwnZI4_A.js → namespaces-TG1aIpo_.js} +2 -2
  191. package/dashboard/dist/assets/{namespaces-BwnZI4_A.js.map → namespaces-TG1aIpo_.js.map} +1 -1
  192. package/dashboard/dist/assets/{roles-Bgn1K8zU.js → roles-DhhLTvXg.js} +2 -2
  193. package/dashboard/dist/assets/{roles-Bgn1K8zU.js.map → roles-DhhLTvXg.js.map} +1 -1
  194. package/dashboard/dist/assets/{settings-CizYiutL.js → settings-D9MBzEeB.js} +2 -2
  195. package/dashboard/dist/assets/{settings-CizYiutL.js.map → settings-D9MBzEeB.js.map} +1 -1
  196. package/dashboard/dist/assets/{tasks-Bmte_hc4.js → tasks-BxmcZoev.js} +2 -2
  197. package/dashboard/dist/assets/{tasks-Bmte_hc4.js.map → tasks-BxmcZoev.js.map} +1 -1
  198. package/dashboard/dist/assets/{useEventHooks-CUCxpiI2.js → useEventHooks-BylecvvI.js} +2 -2
  199. package/dashboard/dist/assets/{useEventHooks-CUCxpiI2.js.map → useEventHooks-BylecvvI.js.map} +1 -1
  200. package/dashboard/dist/assets/{useYamlActivityEvents-Cum02Ej9.js → useYamlActivityEvents-ocmj11e_.js} +2 -2
  201. package/dashboard/dist/assets/{useYamlActivityEvents-Cum02Ej9.js.map → useYamlActivityEvents-ocmj11e_.js.map} +1 -1
  202. package/dashboard/dist/assets/{users-NSDgTt-z.js → users-Ce5r-JAv.js} +2 -2
  203. package/dashboard/dist/assets/{users-NSDgTt-z.js.map → users-Ce5r-JAv.js.map} +1 -1
  204. package/dashboard/dist/assets/{workflows-k0XRdGXx.js → workflows-ykIeVbRJ.js} +2 -2
  205. package/dashboard/dist/assets/{workflows-k0XRdGXx.js.map → workflows-ykIeVbRJ.js.map} +1 -1
  206. package/dashboard/dist/assets/{yaml-workflows-DAre8I78.js → yaml-workflows-WypmKYht.js} +2 -2
  207. package/dashboard/dist/assets/{yaml-workflows-DAre8I78.js.map → yaml-workflows-WypmKYht.js.map} +1 -1
  208. package/dashboard/dist/index.html +1 -1
  209. package/docs/api/sdk/auth.md +27 -0
  210. package/docs/api/sdk/bot-accounts.md +243 -0
  211. package/docs/api/sdk/controlplane.md +103 -0
  212. package/docs/api/sdk/dba.md +52 -0
  213. package/docs/api/sdk/escalations.md +374 -0
  214. package/docs/api/sdk/events.md +68 -0
  215. package/docs/api/sdk/exports.md +124 -0
  216. package/docs/api/sdk/insight.md +112 -0
  217. package/docs/api/sdk/maintenance.md +54 -0
  218. package/docs/api/sdk/mcp-runs.md +71 -0
  219. package/docs/api/sdk/mcp.md +250 -0
  220. package/docs/api/sdk/namespaces.md +43 -0
  221. package/docs/api/sdk/roles.md +183 -0
  222. package/docs/api/sdk/settings.md +24 -0
  223. package/docs/api/sdk/tasks.md +120 -0
  224. package/docs/api/sdk/users.md +196 -0
  225. package/docs/api/sdk/workflow-sets.md +135 -0
  226. package/docs/api/sdk/workflows.md +271 -0
  227. package/docs/api/sdk/yaml-workflows.md +408 -0
  228. package/package.json +1 -1
  229. package/dashboard/dist/assets/index-BwN3KP_L.js.map +0 -1
  230. package/docs/epic-integration.md +0 -224
  231. package/docs/workflow-builder.md +0 -371
  232. /package/docs/api/{dba.md → http/dba.md} +0 -0
  233. /package/docs/api/{escalations.md → http/escalations.md} +0 -0
  234. /package/docs/api/{exports.md → http/exports.md} +0 -0
  235. /package/docs/api/{maintenance.md → http/maintenance.md} +0 -0
  236. /package/docs/api/{mcp-runs.md → http/mcp-runs.md} +0 -0
  237. /package/docs/api/{mcp-servers.md → http/mcp-servers.md} +0 -0
  238. /package/docs/api/{namespaces.md → http/namespaces.md} +0 -0
  239. /package/docs/api/{roles.md → http/roles.md} +0 -0
  240. /package/docs/api/{service-accounts.md → http/service-accounts.md} +0 -0
  241. /package/docs/api/{settings.md → http/settings.md} +0 -0
  242. /package/docs/api/{tasks.md → http/tasks.md} +0 -0
  243. /package/docs/api/{users.md → http/users.md} +0 -0
  244. /package/docs/api/{workflows.md → http/workflows.md} +0 -0
  245. /package/docs/api/{yaml-workflows.md → http/yaml-workflows.md} +0 -0
package/docs/epic-integration.md
@@ -1,224 +0,0 @@
# Capturing SOPs: Epic Referral Intake

## The Problem Isn't the Integration

An engineering team is building for medical back-office customers who process referral intake. The customers know how to do this work. They know that BlueCross referrals in Texas require a specific prior-auth form submitted through the payer portal before the clinical documents are attached. They know that Epic flags certain `ServiceRequest` resources as urgent but the actual urgency depends on the referring provider's specialty. They know that when a coverage check returns `active` but the plan code starts with `HMO-`, there's an additional step the system doesn't surface.

This knowledge is institutional. It lives in the heads of intake coordinators who've been doing this for years. Some of it is written down — in training binders, in shared docs that were last updated eighteen months ago, in Slack threads that nobody will ever find again. Most of it isn't written down at all. It's the "ask Linda" layer of operations.

The engineering team doesn't need another integration platform. They need a way to capture what Linda knows and make it executable. When Linda leaves or the team scales from five coordinators to fifty, the knowledge should still be there — not as documentation that someone has to read and interpret, but as compiled tools that run the process the way Linda would.

That's the actual problem. Epic's FHIR APIs are the data layer. The institutional knowledge is the value layer. Long Tail is the machine that converts one into the other.

---

## Why Not Traditional Tools

The engineering team has evaluated the usual options:

**Integration middleware** builds point-to-point connections between systems. It moves data. It doesn't encode decisions. The middleware can sync a `ServiceRequest` from Epic to an internal queue, but it can't encode the rule that BlueCross-Texas referrals need a payer portal submission before document attachment. That logic ends up in custom code bolted onto the middleware, maintained by engineers who don't understand the clinical workflow, informed by specs that were wrong the day they were written.

**RPA** records clicks and replays them. It captures *what* the coordinator does in the browser, not *why*. When Epic's UI changes — and it changes often — the recording breaks. When the underlying rule changes — a payer updates their prior-auth requirements — the recording doesn't know. RPA captures motion, not knowledge.

**Custom software** works if the requirements are static. They aren't. Payers change rules quarterly. Epic releases updates. New customers bring new Epic configurations. The engineering team would spend most of their time maintaining the gap between what the software does and what the process requires, mediated by Jira tickets that describe the problem in terms the coordinator understands but the engineer has to translate.

The team wants to bypass all of this. Connect directly to Epic's FHIR APIs. Let the people who know the process describe it. Compile what works into tools. When something changes, the tool fails, the right person fixes it, and the updated knowledge compiles back in. No translation layer between the domain expert and the executable process.

---

## The Starting Point: Epic as Data Layer

Epic exposes FHIR R4 APIs through open.epic.com. The engineering team registers a custom MCP server that wraps the FHIR endpoints their referral workflows need. Each tool maps to a FHIR operation:

- `search_service_requests` — find referrals by patient, date, status, referring provider
- `get_patient` — retrieve patient demographics by MRN or FHIR ID
- `search_coverage` — check insurance status and plan details
- `search_document_references` — find clinical documents linked to a referral
- `create_task` — post a follow-up action back to Epic
- `update_service_request` — transition referral status

The server handles FHIR serialization, pagination, and query syntax. The team tags it `['ehr', 'epic', 'fhir', 'referrals']`. They also register the browser automation server for payer portals that don't have APIs.
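The translation from a tool call to an R4 search is mostly mechanical. As a rough sketch, `search_service_requests` might map its arguments onto FHIR search parameters like this (the parameter mapping, base URL, and page size are illustrative assumptions, not the server's actual implementation):

```typescript
// Illustrative sketch: translate tool arguments into a FHIR R4 search URL.
// The tool name comes from the list above; the parameter mapping and base
// URL are assumptions, not the package's actual implementation.
interface ServiceRequestSearch {
  patient?: string;       // FHIR Patient id
  status?: string;        // e.g. "active" | "draft" | "on-hold"
  authoredAfter?: string; // ISO date, lower bound on authored-on
  requester?: string;     // referring Practitioner id
}

function buildServiceRequestQuery(
  base: string,
  args: ServiceRequestSearch,
): string {
  const params = new URLSearchParams();
  if (args.patient) params.set("patient", args.patient);
  if (args.status) params.set("status", args.status);
  if (args.authoredAfter) params.set("authored", `ge${args.authoredAfter}`);
  if (args.requester) params.set("requester", args.requester);
  params.set("_count", "50"); // page size; the server returns a searchset Bundle
  return `${base}/ServiceRequest?${params.toString()}`;
}
```

For example, `buildServiceRequestQuery("https://fhir.example.org/R4", { patient: "123", status: "active" })` produces `.../ServiceRequest?patient=123&status=active&_count=50`.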
This is plumbing. It takes a week. The interesting part is what happens next.

---

## Capturing the First SOP

### The Session with Linda

The engineering team sits with Linda, the senior intake coordinator. They open the Pipeline Designer — the development environment where AI-assisted exploration is allowed — and ask Linda to walk through a referral.

Linda describes it as she does it:

> *"First I pull up the referral in Epic and check the insurance. If it's BlueCross and the plan code starts with HMO, I have to go to the BlueCross portal and submit a prior-auth request before I can attach the clinical docs. If it's Aetna, I check whether the referring provider is in-network first, because out-of-network Aetna referrals need a different form. For everything else, I just verify the coverage is active and move to documents."*

The engineer translates this into a prompt. The dynamic path runs — the LLM discovers the FHIR tools, executes the queries Linda described, follows her branching logic, and produces the result. Every FHIR call is checkpointed. The execution trace captures not just the API calls but the decision structure: the payer-specific branches, the ordering constraints, the conditions that trigger each path.

Linda looks at the output. "That's right, except when the coverage check returns `active` but there's a `copay-exception` extension, you need to flag it for manual review." The engineer adds this to the prompt and reruns. The LLM adjusts.

### What Just Happened

Linda's institutional knowledge — the payer-specific rules, the ordering constraints, the exception handling — is now captured in an execution trace. Not in a spec document. Not in a Jira ticket. In a recorded sequence of API calls with decision points and branching logic that produced the correct result.

The AI is scaffolding here. It translated Linda's description into tool calls. It won't be in the production path. But it bridged the gap between "how Linda describes the process" and "a sequence of API operations that implements it." That gap is where traditional approaches lose fidelity.

### Compilation

The team opens the Compilation Wizard. The compiler examines the execution trace and separates what's fixed (the branching logic, the FHIR query patterns, the payer-specific rules) from what's dynamic (the patient ID, the referral ID, the specific plan code values). It produces a deterministic DAG with typed inputs and typed outputs.

The team names it `intake-referral-coverage-check`, tags it `['referrals', 'insurance', 'epic', 'intake']`, and deploys it. Linda's knowledge about payer-specific coverage rules is now a compiled, versioned, executable tool. No LLM in the execution path. It runs the process the way Linda described it, every time, without Linda.
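What the compiler fixes in place is ordinary deterministic branching. A minimal sketch of Linda's rules as compiled logic, assuming illustrative type and field names rather than the compiler's real output format:

```typescript
// Illustrative sketch of the captured decision structure: payer-specific
// branches, ordering constraints, and the copay-exception flag Linda added
// on the second pass. Names and types are assumptions for illustration.
type CoverageAction =
  | { step: "submit-prior-auth"; portal: "bluecross" }
  | { step: "check-network"; payer: "aetna" }
  | { step: "verify-active" }
  | { step: "manual-review"; reason: string };

interface CoverageInput {
  payer: string;              // normalized payer name
  planCode: string;           // e.g. "HMO-TX-0042"
  coverageStatus: string;     // FHIR Coverage.status
  hasCopayException: boolean; // presence of the copay-exception extension
}

function nextCoverageStep(input: CoverageInput): CoverageAction {
  // Exception handling comes first: active coverage with a copay
  // exception is flagged for a human, regardless of payer.
  if (input.coverageStatus === "active" && input.hasCopayException) {
    return { step: "manual-review", reason: "copay-exception extension present" };
  }
  // BlueCross HMO plans need the portal prior-auth before documents attach.
  if (input.payer === "bluecross" && input.planCode.startsWith("HMO-")) {
    return { step: "submit-prior-auth", portal: "bluecross" };
  }
  // Aetna: the in-network check decides which form applies.
  if (input.payer === "aetna") {
    return { step: "check-network", payer: "aetna" };
  }
  // Everyone else: just verify the coverage is active.
  return { step: "verify-active" };
}
```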
---

## Capturing the Next SOP, and the Next

The team runs more sessions. Each one captures a different piece of the intake process:

**Document verification.** Maria knows which clinical documents are required for each referral type. Orthopedic referrals need recent imaging. Cardiology needs an EKG report and recent labs. Behavioral health needs a clinical summary but not labs. Maria walks through the logic. The system captures it. The compiled tool checks `DocumentReference` resources against referral-type requirements and identifies what's missing.
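Maria's requirements are a natural fit for data plus one generic check. A hypothetical sketch (the referral types and document codes come from the description above; the table is not complete):

```typescript
// Illustrative sketch of Maria's rules as a requirements table plus one
// generic check. Referral types and document codes are examples from the
// text, not the compiled tool's actual data.
const REQUIRED_DOCS: Record<string, string[]> = {
  orthopedic: ["imaging"],
  cardiology: ["ekg-report", "recent-labs"],
  "behavioral-health": ["clinical-summary"],
};

function missingDocuments(
  referralType: string,
  attachedDocTypes: string[], // derived from DocumentReference.type
): string[] {
  const required = REQUIRED_DOCS[referralType] ?? [];
  return required.filter((doc) => !attachedDocTypes.includes(doc));
}
```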
**Provider validation.** James knows the receiving provider network. He knows which practices have stopped accepting certain insurance plans, which have capacity constraints, and which require specific referral formats. The compiled tool queries `Practitioner` and `Organization` resources and validates the referral against James's routing knowledge.

**Status transitions.** The team captures the full referral lifecycle: `draft` → `active` → `on-hold` (waiting for documents) → `active` (documents received) → `completed` (scheduled). Each transition has preconditions. Each precondition is a compiled check. The workflow won't advance a referral to `active` unless coverage is verified and required documents are attached — because that's how Linda's team actually does it.
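Those preconditions can be pictured as an explicit transition table. A sketch with assumed guard names; the status values mirror the lifecycle above:

```typescript
// Illustrative sketch: the referral lifecycle as a transition table with
// precondition guards. Status strings mirror the lifecycle above; the
// guard field names are assumptions for illustration.
interface ReferralState {
  coverageVerified: boolean;
  documentsComplete: boolean;
  appointmentScheduled: boolean;
}

type Transition = { to: string; guard: (s: ReferralState) => boolean };

const TRANSITIONS: Record<string, Transition[]> = {
  draft: [
    // Won't advance to active unless coverage and documents check out.
    { to: "active", guard: (s) => s.coverageVerified && s.documentsComplete },
  ],
  active: [
    { to: "on-hold", guard: (s) => !s.documentsComplete },   // waiting for documents
    { to: "completed", guard: (s) => s.appointmentScheduled }, // scheduled
  ],
  "on-hold": [
    { to: "active", guard: (s) => s.documentsComplete },     // documents received
  ],
};

function canTransition(from: string, to: string, state: ReferralState): boolean {
  return (TRANSITIONS[from] ?? []).some((t) => t.to === to && t.guard(state));
}
```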
Each compiled tool encodes one person's domain knowledge as an executable, testable, versionable artifact. The SOP isn't a document anymore. It's a tool.

---

## Connecting Directly

### No Middleware in the Middle

The compiled workflows call Epic's FHIR APIs directly. There's no integration platform transforming data between systems. The `ServiceRequest` resource in Epic is the referral. The `Coverage` resource is the insurance. The `Task` resource is the follow-up action. Epic is the source of truth, and the workflows read and write to it without translation.

This matters for a practical reason: when Epic changes something — a new required field, a modified search parameter, an updated extension — the impact is immediate and visible. The compiled tool fails. The system creates an escalation with the full FHIR `OperationOutcome` error. The team knows exactly what changed and where.

With middleware in between, the same change produces a vague data-mapping error three layers deep, and someone spends a week figuring out which system changed what.

### FHIR as the Shared Language

The tools the team builds speak FHIR. This means:

**Epic is the canonical record.** Workflows don't maintain shadow copies of referrals in a separate database. They query Epic for current state and write results back to Epic. If an intake coordinator opens Epic directly, they see the same status the workflow set.

**Tasks stay in Epic.** When a workflow identifies missing documents, it creates a FHIR `Task` resource assigned to the appropriate party. The referring provider's office sees the task in their Epic instance. No separate task-tracking system. No "check the portal for your action items" emails.

**Audit is in the workflow engine.** Every FHIR call, every decision branch, every escalation is recorded in the workflow execution history. The compliance team can trace a referral from intake to scheduling, including which credentials were used, which rules were applied, and which human decisions were made along the way.
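When a workflow posts a follow-up, the payload is a plain FHIR R4 `Task`. A hypothetical minimal example of what the missing-documents case might send (field values are illustrative; the target Epic instance's profiles govern what it actually accepts):

```typescript
// Illustrative sketch of a minimal FHIR R4 Task payload for the
// missing-documents case. Field values are examples only; the target
// Epic instance's profiles determine what it actually accepts.
function buildMissingDocsTask(
  referralId: string,
  patientId: string,
  missing: string[],
) {
  return {
    resourceType: "Task",
    status: "requested",
    intent: "order",
    description: `Attach missing documents: ${missing.join(", ")}`,
    focus: { reference: `ServiceRequest/${referralId}` }, // the referral itself
    for: { reference: `Patient/${patientId}` },           // the subject patient
    authoredOn: new Date().toISOString(),
  };
}
```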
108
- ---
109
-
110
- ## Identity: Who's Acting, On Whose Behalf
111
-
112
- The engineering team serves multiple back-office customers. Each customer connects to their own Epic instance with their own SMART on FHIR credentials (registered at open.epic.com). The same compiled workflows run against different Epic environments.
113
-
114
- **Service accounts per customer.** Each customer gets a bot account with stored Epic credentials — the SMART on FHIR private key, client ID, and token endpoint, encrypted at rest. When a workflow runs for Customer A, the credential cascade resolves Customer A's Epic token automatically.
115
-
116
- **Dual identity.** Every workflow execution records who initiated it (the scheduler, the webhook, the human operator) and who it runs as (the customer's service account). The audit trail answers both questions.
117
-
118
- **Scope enforcement.** The interceptor injects the principal's scopes into every activity. A customer scoped to `['epic:read', 'epic:referral:write']` cannot trigger a `Task` creation that requires `epic:task:write`. The FHIR token might allow it. The workflow layer doesn't.
119
-
120
- **Ephemeral credentials for payer portals.** When a workflow needs to submit a prior-auth form on a payer's web portal, it escalates to a human operator for login credentials. The credential is encrypted, stored with a TTL, exchanged atomically at dispatch time, and deleted after use. It never persists in workflow state.
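The TTL-plus-single-exchange lifecycle can be sketched as an in-memory store. A real implementation would encrypt values at rest and persist them outside process memory; every name here is illustrative.

```typescript
// Illustrative one-time credential escrow with a TTL.
// A production version would encrypt values and live outside process memory.
class EphemeralCredentialStore {
  private entries = new Map<string, { value: string; expiresAt: number }>();

  put(key: string, value: string, ttlMs: number, now = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + ttlMs });
  }

  // Atomic exchange: return the credential at most once, then delete it.
  exchange(key: string, now = Date.now()): string | undefined {
    const entry = this.entries.get(key);
    this.entries.delete(key); // deleted whether expired or not
    if (!entry || entry.expiresAt <= now) return undefined;
    return entry.value;
  }
}
```

Because `exchange` deletes before returning, the credential can never be read twice and never lingers in workflow state after dispatch.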
121
-
122
- This isn't a section the team's customers think about. It's plumbing the engineering team builds once. But it's the plumbing that makes "one codebase, many customers, many Epic instances" possible without credential leakage or cross-tenant contamination.
123
-
124
- ---
125
-
126
- ## The Living Part: When Knowledge Needs to Update
127
-
128
- ### Payer Rule Changes
129
-
130
- Aetna updates their prior-authorization requirements for orthopedic referrals. The compiled `intake-referral-coverage-check` tool doesn't know this — it encodes last quarter's rules. Referrals that should trigger prior-auth are sailing through without it.
131
-
132
- The failure surfaces when a downstream scheduling workflow can't complete because the payer rejects the referral. The escalation includes the full context: which referral, which payer, which step failed, and the payer's rejection reason.
133
-
134
- The team brings Linda back to the Pipeline Designer. "Oh, Aetna changed this in March. Now any ortho referral with a plan code starting with `PPO-` also needs prior auth, not just the HMO plans." The engineer reruns the dynamic path with the updated logic. It succeeds. They compile. The new version deploys. The old version is still there for rollback.
135
-
136
- Linda's updated knowledge is now in the system. The SOP evolved. No spec document was rewritten. No Jira ticket went through a sprint. The compiled tool *is* the current SOP.
137
-
138
- ### Epic API Changes
139
-
140
- Epic updates their FHIR implementation. A field that was optional on `ServiceRequest` is now required. The compiled tool starts returning 422 errors.
141
-
142
- The system doesn't swallow these. Non-2xx FHIR responses throw typed errors with the status code, the `OperationOutcome` body, and the request context. The interceptor catches the failure and escalates with full diagnostic information.
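A typed error of the kind described might look like the following sketch. The class and field names are assumptions, not the library's actual API; the point is that status, `OperationOutcome` body, and request context travel together to the escalation.

```typescript
// Sketch of a typed error for non-2xx FHIR responses.
// Class and field names are illustrative, not the actual API.
class FhirResponseError extends Error {
  constructor(
    public readonly status: number,
    public readonly operationOutcome: unknown, // the OperationOutcome body
    public readonly requestContext: { method: string; url: string },
  ) {
    super(`FHIR request failed with status ${status}`);
    this.name = "FhirResponseError";
  }
}

function checkFhirResponse(
  status: number,
  body: unknown,
  ctx: { method: string; url: string },
): void {
  if (status < 200 || status >= 300) {
    throw new FhirResponseError(status, body, ctx);
  }
}
```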
143
-
144
- The team uses the dynamic path in development — AI is allowed there — to explore the change against Epic's sandbox. The LLM reads the error, adjusts the query, succeeds. The team compiles the corrected pattern. Production resumes.
145
-
146
- ### New Customer, Different Epic Configuration
147
-
148
- A new customer's Epic instance has a custom extension on `Coverage` resources that indicates whether a plan requires a specialist referral versus an open-access referral. The existing compiled tools don't read this extension.
149
-
150
- The team doesn't rewrite the tools. They run a dynamic session against the new customer's sandbox, discover the extension, and compile a customer-specific variant of the coverage check. They tag it `['referrals', 'insurance', 'epic', 'customer-b']`. The routing layer uses tags to match the right tool to the right customer.
151
-
152
- The core intake workflow stays the same. The coverage-check step resolves to a different compiled tool depending on which customer's service account is running. The process is the same. The payer-specific and Epic-specific details differ. The tag system handles the routing.
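The routing behavior can be sketched as: require the base tags, prefer a customer-specific variant when one is registered, otherwise fall back to the generic tool. The function below is illustrative; it assumes customer variants are tagged with a `customer-` prefix, as in the `customer-b` example above.

```typescript
interface CompiledTool {
  name: string;
  tags: string[];
}

// Resolve a compiled tool by tags: require the base tags, prefer a
// customer-specific variant, otherwise fall back to the generic tool.
// Assumes customer variants carry a "customer-" prefixed tag.
function selectTool(
  tools: CompiledTool[],
  baseTags: string[],
  customerTag?: string,
): CompiledTool | undefined {
  const candidates = tools.filter((t) =>
    baseTags.every((tag) => t.tags.includes(tag)),
  );
  if (customerTag) {
    const specific = candidates.find((t) => t.tags.includes(customerTag));
    if (specific) return specific;
  }
  // Generic fallback: any candidate without a customer-specific tag.
  return candidates.find(
    (t) => !t.tags.some((tag) => tag.startsWith("customer-")),
  );
}
```

Under this scheme, Customer B's service account resolves the `customer-b` variant while every other customer resolves the default, with no branching in the workflow itself.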
153
-
154
- ---
155
-
156
- ## The Composition: SOPs Build on SOPs
157
-
158
- Individual compiled tools are building blocks. The full referral intake process composes them:
159
-
160
- 1. **Validate referral** — check required FHIR fields, verify referring provider, confirm receiving practice accepts the referral type
161
- 2. **Check coverage** — payer-specific insurance verification with prior-auth detection
162
- 3. **Verify documents** — referral-type-specific clinical document requirements
163
- 4. **Resolve gaps** — if documents are missing or prior-auth is needed, escalate to the appropriate party and wait
164
- 5. **Transition status** — advance the referral through Epic's status lifecycle with appropriate `Task` creation
165
-
166
- Each step is a compiled tool that encodes one person's domain knowledge. The composition is itself compiled — a deterministic DAG that sequences the steps with conditional branches and escalation points. No LLM anywhere in the production path.
167
-
168
- The escalation in step 4 is where institutional knowledge grows. When the workflow pauses for a missing document, the coordinator who resolves it might say: "This always happens with referrals from Dr. Chen's office — they never attach the imaging report." That pattern becomes a compiled pre-check that flags Dr. Chen's referrals for document follow-up *before* the gap-resolution step. The SOP gets tighter with use.
169
-
170
- ---
171
-
172
- ## The Browser Layer: Payer Portals Without APIs
173
-
174
- Some parts of referral intake require browser interaction. Payer authorization portals, legacy fax gateways, state Medicaid enrollment systems — these have web interfaces but no APIs.
175
-
176
- The team compiles browser workflows the same way they compile FHIR workflows. In development, the LLM navigates the portal, discovers the form fields, submits the request, and captures the confirmation. The engineer reviews the execution, corrects any brittle selectors, and compiles. In production, the compiled DAG replays the exact browser sequence — no LLM, deterministic, durable.
177
-
178
- When a portal redesigns its interface, the compiled script fails. The escalation includes a screenshot of the new layout. The team recompiles against the updated page. The browser backend is pluggable — Playwright locally, a managed browser service in production — but the compiled workflow doesn't change.
179
-
180
- This is where "bypassing traditional tools" is most literal. Instead of an RPA platform that records mouse movements and breaks on every UI change, the team has compiled browser workflows that encode the *intent* (submit prior-auth form with these fields) in a structure that can be updated when the surface changes.
181
-
182
- ---
183
-
184
- ## What the Engineering Team Built
185
-
186
- After three months, the engineering team has delivered something that none of the traditional tools could:
187
-
188
- **Executable SOPs.** Each compiled workflow encodes a specific piece of the intake process as described by the people who do it. The SOPs aren't documents — they're running code that produces the same result the domain expert would, deterministically, across every customer's Epic instance.
189
-
190
- **A knowledge inventory that grows.** Every session with a domain expert produces compiled tools. Every escalation resolution that reveals a repeatable pattern produces another. The system accumulates operational knowledge monotonically — new compilations add capabilities, version history preserves the evolution.
191
-
192
- **Self-maintaining integrations.** When Epic's API changes, the affected tool fails and escalates with precise diagnostic context. The fix is a recompilation, not a rewrite. When a payer changes rules, the domain expert describes the change, the dynamic path validates it, and the compiled tool updates. The engineering team isn't maintaining a gap between spec and implementation because there is no spec — the compiled tool is the specification.
193
-
194
- **Customer isolation without code branches.** Service accounts, credential scoping, and tag-based routing handle multi-tenancy. The same compiled workflows serve every customer. Customer-specific variations are separate compiled tools selected by the routing layer. No `if (customer === 'A')` branches in the codebase.
195
-
196
- **A compliance-ready audit trail.** Every referral processed has a complete execution record: which FHIR calls were made, which credentials were used, which human decisions were involved, which compiled tool version ran. The data is in Postgres — queryable, exportable, backupable.
197
-
198
- ### What Didn't Ship
199
-
200
- No middleware platform. No RPA licenses. No integration-platform-as-a-service subscription. No custom application with a six-month development timeline and a maintenance budget.
201
-
202
- The engineering team wrote one MCP server that wraps Epic's FHIR APIs. They compiled workflows from sessions with domain experts. They deployed it on their own infrastructure — Postgres and containers. The institutional knowledge that previously lived in Linda's head and Maria's training binder now lives in compiled tools that run without either of them present.
203
-
204
- ---
205
-
206
- ## From 0 to 1, and 1 to 1.5
207
-
208
- The system didn't arrive complete. It grew.
209
-
210
- **0 to 1** was the first compiled SOP: Linda's coverage-check workflow running against Epic's sandbox. A single tool that encoded a single person's knowledge about a single piece of the intake process. It proved the concept — institutional knowledge could be captured as a compiled tool and executed without AI, without the domain expert present, against a real FHIR API.
211
-
212
- **1 to 1.1** was multi-tenancy: the same tool running against multiple customers' Epic instances with proper credential isolation.
213
-
214
- **1.1 to 1.2** was breadth: capturing more SOPs from more domain experts. Document verification from Maria. Provider routing from James. Each session added to the tool inventory.
215
-
216
- **1.2 to 1.3** was composition: individual tools assembled into the full intake pipeline, with escalation points where human judgment is still required.
217
-
218
- **1.3 to 1.5** is the living part: the system updating itself when the landscape shifts. A payer changes rules, a compiled tool fails, the domain expert describes the change, the tool recompiles. The gap between "how the process works" and "what the system does" stays small because the feedback loop is tight — failure to escalation to resolution to recompilation.
219
-
220
- **1.5 to 2** is where it gets interesting. The escalation history reveals patterns. Which payers change rules most often. Which Epic configurations cause the most failures. Which referral types need the most human intervention. The engineering team uses this data to prioritize: build more specific tools where the escalation rate is highest. The system's own operational history guides its evolution.
221
-
222
- The long tail of referral edge cases gets shorter. Not because someone anticipated every case, but because the system compiled solutions as the team encountered them. Each solved case is a permanent capability. The SOP library grows. The escalation rate drops. The domain experts spend less time on routine intake and more time on the genuinely novel cases — the ones that actually need human judgment.
223
-
224
- That's what a living system looks like. Not software that was correct when it shipped. Software that gets more correct the longer it runs, because it absorbs what the people around it know.
@@ -1,371 +0,0 @@
1
- # The Workflow Builder: From Prompt to Plan
2
-
3
- ## The Gap Between a Prompt and a System
4
-
5
- The current workflow builder takes a prompt and produces a single DAG. Describe a task, the LLM constructs YAML, you deploy a tool. This works when the unit of work is one workflow — screenshot a page, check a patient's coverage, submit a form.
6
-
7
- But the engineering team sitting with Linda doesn't have one task. They have a PRD. A technical design document. Pages of specifications describing a referral intake system with branching paths, composed sub-processes, human review gates, and payer-specific variations. The system they need isn't one workflow. It's a set of interrelated workflows — some calling others, some waiting for human input, some branching based on data conditions — that together implement a complete operational process.
8
-
9
- Today, the engineer would decompose the PRD manually, build each workflow individually through the wizard, deploy them into the same namespace, and wire the composition by hand. That works, but it's the same translation problem the builder was designed to eliminate: the engineer becomes the bottleneck between the specification and the executable system.
10
-
11
- The evolution is plan mode. When the input exceeds what a single workflow can express — because it's long, because it describes composition, because it references multiple processes — the builder enters a planning phase. It decomposes the specification into workflows, identifies their relationships, builds them leaf-first, and deploys them as a coordinated set.
12
-
13
- ---
14
-
15
- ## Two Entry Points, Same Destination
16
-
17
- The builder already has two modes:
18
-
19
- **Discover & Compile.** Run a dynamic execution with AI, then compile the trace into a deterministic DAG. The execution is the specification — the compiler extracts the pattern from what actually happened.
20
-
21
- **Direct Build.** Describe the workflow, and the LLM constructs the YAML directly from tool schemas without executing anything. The engineer's description is the specification.
22
-
23
- Plan mode extends Direct Build. The input is the same textarea. The difference is what the input contains. A short prompt ("check referral coverage for a patient") produces a single workflow. A long specification with composition, branching, and multiple processes triggers plan mode.
24
-
25
- The detection is straightforward: length beyond a threshold, or structural signals in the content — references to multiple steps that are themselves processes, explicit composition language ("this flow calls that flow"), mentions of planning or phased execution. When plan mode activates, the builder wraps the existing single-workflow wizard in an outer loop that plans first, then builds each workflow through the same pipeline.
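The detection heuristic might be sketched as follows. The length threshold and the specific signal patterns are assumptions for illustration, not the actual values.

```typescript
// Heuristic plan-mode trigger: a length threshold plus structural signals.
// The threshold and signal phrases here are illustrative.
const LENGTH_THRESHOLD = 2000; // characters

const COMPOSITION_SIGNALS = [
  /calls? (the|that|another) (flow|workflow)/i, // explicit composition language
  /phase[sd]? /i,                               // phased execution
  /sub-?process/i,                              // nested processes
  /^#{1,3} /m,                                  // markdown sections suggest a full spec
];

function shouldEnterPlanMode(input: string): boolean {
  if (input.length > LENGTH_THRESHOLD) return true;
  const hits = COMPOSITION_SIGNALS.filter((re) => re.test(input)).length;
  return hits >= 2;
}
```

A short prompt trips neither condition and flows into the single-workflow wizard unchanged; a pasted PRD trips both.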
26
-
27
- ---
28
-
29
- ## What Plan Mode Produces
30
-
31
- A plan is a decomposition of the specification into workflows with defined relationships:
32
-
33
- **Leaf workflows** have no dependencies on other custom workflows. They call MCP tools directly — FHIR queries, browser actions, file operations. They're the atomic units of the system.
34
-
35
- **Composition workflows** call leaf workflows (or other composition workflows) as activities. They encode sequencing, branching, and human-in-the-loop gates. They're the orchestration layer.
36
-
37
- **The build order is leaf-first.** The planner identifies terminal workflows — the ones that don't invoke other custom workflows — and builds those first. Then it builds the workflows that compose them. Then the workflows that compose those. The tree builds from the leaves up, because each layer needs to reference the tools that already exist in the layer below.
38
-
39
- Each workflow in the plan belongs to a namespace. Workflows in the same namespace are deployed together as one HotMesh application. They can invoke each other using HotMesh's `await` activity — a direct, typed call within the same execution engine. This is the tightest composition: same deployment, same transaction boundary, same namespace.
40
-
41
- Workflows in different namespaces invoke each other as MCP tools. The calling workflow treats the target as any other tool — discovered by tag, invoked through the MCP protocol, results returned as structured data. This is the loosest composition: independent deployment, independent versioning, connected only through the tool interface.
42
-
43
- The planner decides which workflows belong together based on coupling. Workflows that share data context, that are always deployed together, that represent phases of the same process — same namespace. Workflows that represent independent capabilities, that might be reused across processes, that have different versioning cycles — separate namespaces.
44
-
45
- ---
46
-
47
- ## The Outer Loop
48
-
49
- The existing wizard has four steps in builder mode: Describe, Profile, Deploy, Test. Plan mode wraps this in an outer loop:
50
-
51
- **Plan step.** The specification goes in. The planner produces a structured decomposition: which workflows to build, in what order, with what relationships. The dashboard displays this as a list or graph — the set of workflows, their dependencies, their namespaces. The engineer reviews, adjusts, and confirms.
52
-
53
- **Build loop.** For each workflow in the plan (leaf-first), the system runs the existing builder pipeline: construct YAML from the specification slice, validate, present for review. The engineer sees the familiar Profile → Deploy → Test sequence for each workflow, but the context is richer — the plan shows where this workflow fits in the larger system, which workflows call it, which it calls.
54
-
55
- **Verification.** After all workflows are built and deployed, the system runs the composition end-to-end. The engineer provides inputs at the top level, and the execution cascades through the composed workflows. Escalation points pause and wait. Branch conditions route correctly. The full system operates as specified.
56
-
57
- The UX for the outer loop is familiar. The plan step is a new panel above the wizard — a progress view showing which workflows have been built, which are next, which are pending. When the engineer clicks into a specific workflow, they see the standard wizard steps. When they step back out, they see the plan.
58
-
59
- Real-time updates flow through the existing event system. As each workflow compiles, deploys, and activates, the plan view updates. Build progress, deployment status, test results — all surfaced through the same SSE/NATS topic subscriptions the wizard already uses.
60
-
61
- Plan view has both expanded, collapsed modes. These serve to infrorm what is going on at a global level, but also can be collapsed sufficiently to provide high-level progress while allowing the engineer to focus on the target workflow currently loaded in the wizard.
62
-
63
- ---
64
-
65
- ## Workflow Sets: The Data Model
66
-
67
- The current `lt_yaml_workflows` table stores individual workflows. Plan mode produces sets of related workflows. The relationship between them — which calls which, which namespace they share, which plan produced them — needs a home.
68
-
69
- A new entity: **workflow set**. A set is a named group of workflows produced by a single plan. It records:
70
-
71
- - The original specification (the PRD, the TDD, the markdown that was pasted in)
72
- - The plan (the decomposition into workflows with relationships)
73
- - The namespaces involved
74
- - The build order
75
- - The status of each workflow in the set (planned, building, deployed, active, failed)
76
-
77
- Each `lt_yaml_workflows` record gains an optional `set_id` foreign key. Workflows not produced by a plan have no set. Workflows produced by a plan reference their set, and through it, their siblings and their relationships.
78
-
79
- The set is a first-class entity in the dashboard. An engineer can view all workflows in a set, see their dependency graph, deploy or redeploy the set, and trace execution through composed workflows. When a leaf workflow is updated (because Epic changed an API, because a payer changed rules), the set view shows which composition workflows are affected.
80
-
81
- This also solves the grouping problem for workflows that aren't plan-produced. An engineer who builds three related workflows individually can group them into a set after the fact. The set is an organizational unit, not just a plan artifact.
82
-
83
- ---
84
-
85
- ## Composition Mechanics
86
-
87
- ### Same-Namespace Composition
88
-
89
- Workflows in the same namespace are deployed as a single HotMesh application. They share a deployment lifecycle — when any workflow in the namespace is updated, the entire namespace redeploys (merging all YAML graphs).
90
-
91
- Within the same namespace, a workflow invokes another using a worker activity whose `workflowName` matches the target's `subscribes` topic. The call is synchronous from the caller's perspective — the activity blocks until the child workflow completes — but durable underneath. If the worker process crashes, the child workflow's progress is preserved, and execution resumes from the last checkpoint.
92
-
93
- Data flows through typed input/output schemas. The caller maps its data to the child's input schema. The child's output maps back to the caller's next activity. The YAML wiring is explicit — every field is traced.
94
-
95
- ### Cross-Namespace Composition
96
-
97
- Workflows in different namespaces are independent MCP tools. The calling workflow invokes them through the standard MCP tool protocol — the same mechanism used for browser automation, FHIR queries, or any other tool.
98
-
99
- This is composition through the tool layer. The caller doesn't know or care whether the target is a compiled workflow, a built-in MCP server, or an external service. It discovers the tool by tag, passes arguments, and receives results. The target workflow runs in its own namespace, with its own deployment lifecycle, its own versioning.
100
-
101
- Cross-namespace composition is the right choice when:
102
-
103
- - The target workflow is reusable across processes (a coverage check used by both intake and scheduling)
104
- - The target has a different release cadence (payer-specific tools updated independently)
105
- - The target belongs to a different team or domain
106
-
107
- ### The Composition Spectrum
108
-
109
- Same-namespace `await` and cross-namespace MCP tool calls are two points on a spectrum. The planner chooses based on coupling signals in the specification. Tightly coupled processes that share data context and deploy together go in the same namespace. Independent capabilities that serve multiple consumers go in separate namespaces.
110
-
111
- An engineer can override the planner's choice. Move a workflow from one namespace to another, and the composition mechanism changes automatically — `await` becomes an MCP tool call, or vice versa. The workflow's logic doesn't change. Only the wiring.
112
-
113
- ---
114
-
115
- ## The Prompt Strategy
116
-
117
- The builder's LLM prompt is the critical path. It teaches the model how to construct valid HotMesh YAML — activity definitions, transitions, `@pipe` data flow, signal hooks, iteration patterns, collision-proofing suffixes. The existing prompt handles single workflows well.
118
-
119
- Plan mode extends the prompt in two dimensions:
120
-
121
- **Decomposition.** The planner prompt takes the full specification and produces a structured plan: workflow names, descriptions, input/output contracts, dependencies, namespace assignments, build order. This is a different LLM task than YAML construction — it's architectural reasoning about how to decompose a system into workflows.
122
-
123
- **Cross-references.** When the builder constructs a composition workflow, it needs to know the input/output schemas of the workflows it calls. The prompt includes these schemas as context — "workflow `check-coverage` accepts `{patient_id, referral_id}` and returns `{covered, prior_auth_required, plan_code}`." The builder wires the composition using these contracts.
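The `check-coverage` contract from the example above could be expressed as types that the planner registers and the prompt serializes as context. This is a sketch of the idea; the registry shape is an assumption.

```typescript
// Input/output contract for check-coverage, as the prompt might carry it.
// Field names come from the example above; the registry shape is assumed.
interface CheckCoverageInput {
  patient_id: string;
  referral_id: string;
}

interface CheckCoverageOutput {
  covered: boolean;
  prior_auth_required: boolean;
  plan_code: string;
}

// Contracts keyed by workflow name, so the builder can wire compositions
// against the schemas of the workflows they call.
const contracts: Record<string, { input: string[]; output: string[] }> = {
  "check-coverage": {
    input: ["patient_id", "referral_id"],
    output: ["covered", "prior_auth_required", "plan_code"],
  },
};
```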
124
-
125
- The prompt strategy for complex specifications is iterative. The planner makes a first pass. The engineer reviews. Adjustments feed back into the planner. Each workflow in the plan is built with the full plan as context — the builder knows where each piece fits. This is why plan mode is a loop, not a single shot.
126
-
127
- For very large specifications — a full PRD with edge cases, error handling, and operational requirements — the planner may produce nested plans. A top-level plan decomposes into sub-systems. Each sub-system has its own namespace and its own set of workflows. The sub-systems compose through MCP tool calls. This is a plan that produces plans — the outer loop runs recursively.
128
-
129
- ---
130
-
131
- ## Human-in-the-Loop in Compiled Workflows
132
-
133
- The existing system ships a human-queue MCP server with tools for durable pause points: `escalate_to_human`, `check_resolution`, `escalate_and_wait`. Compiled YAML workflows use these as regular activities.
134
-
135
- In the YAML, a human-in-the-loop step is two activities:
136
-
137
- 1. A **worker** activity that calls `escalate_to_human` (or `escalate_and_wait`) with the escalation payload — what needs review, what context to show, what role should handle it.
138
- 2. A **hook** activity that pauses the workflow and waits for a signal. The signal carries the human's response — approved/rejected, corrected data, additional instructions.
139
-
140
- The hook's `signal_schema` defines what the human provides. The dashboard renders a form from this schema. When the human submits, the signal fires, the hook receives the data, and the workflow resumes.
141
-
142
- Plan mode makes this explicit in the decomposition. When the specification says "a coordinator reviews the referral before scheduling," the planner creates a workflow with an escalation step between the validation activities and the scheduling activities. The escalation's role, payload, and expected response are derived from the specification.
143
-
144
- Conditional transitions after the hook encode the branching: if approved, proceed to scheduling; if rejected, escalate to the referring provider; if corrected, re-run validation with the corrected data. These branches are deterministic — the YAML encodes every path. The human's choice selects the path. No LLM involved.
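The branching described above reduces to a deterministic mapping from the signal payload to the next activity. A sketch, with the payload shape and activity names assumed for illustration:

```typescript
// The signal payload a human submits through the dashboard form.
type ReviewSignal =
  | { decision: "approved" }
  | { decision: "rejected"; reason: string }
  | { decision: "corrected"; correctedData: Record<string, unknown> };

// Deterministic routing: the human's choice selects the path, no LLM involved.
// Activity names here are illustrative.
function nextActivity(signal: ReviewSignal): string {
  switch (signal.decision) {
    case "approved":
      return "route-to-scheduling";
    case "rejected":
      return "escalate-to-referring-provider";
    case "corrected":
      return "rerun-validation";
  }
}
```

The exhaustive switch mirrors the YAML's conditional transitions: every path is encoded ahead of time, and the signal merely selects one.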
145
-
146
- ---
147
-
148
- ## What the Engineer Sees
149
-
150
- ### Pasting the PRD
151
-
152
- The engineer opens the Pipeline Designer in builder mode. The textarea is the same one that accepts a short prompt. This time, the engineer pastes three pages of markdown — a referral intake specification with sections for validation, coverage checking, document requirements, payer-specific rules, escalation chains, and scheduling handoff.
153
-
154
- The system detects the input's complexity and activates plan mode. The UI transitions: above the wizard steps, a new panel appears showing the plan as it forms.
155
-
156
- ### The Plan View
157
-
158
- The planner produces a decomposition. The dashboard renders it as a dependency graph:
159
-
160
- ```
161
- referral-intake (namespace: referral-ops)
162
- ├── validate-referral (leaf)
163
- ├── check-coverage (leaf, payer-branching)
164
- ├── verify-documents (leaf, referral-type-specific)
165
- ├── resolve-gaps (composition: escalate → wait → retry)
166
- ├── route-to-scheduling (leaf)
167
- └── process-referral (composition: orchestrates all above)
168
- ```
169
-
170
- Each node shows: name, type (leaf/composition), namespace, status (planned/building/deployed). The engineer can click into any node to see the planned input/output schema and which tools it will use.
171
-
172
- The engineer reviews. "The coverage check should be in its own namespace — we reuse it for eligibility verification in another process." They drag the node to a new namespace. The plan adjusts: the composition workflow that called it now references it as an MCP tool instead of an `await` activity.
173
-
174
- ### Building Leaf-First
175
-
176
- The engineer confirms the plan. The system begins building, starting with leaf workflows. The plan view updates in real time — each node's status changes from "planned" to "building" to "deployed" as the builder completes each workflow.
177
-
178
- For each workflow, the engineer can expand into the standard wizard: review the YAML, adjust the profile, deploy, test with sample inputs. Or they can let the builder proceed automatically and review the full set at the end.
179
-
180
- ### Testing the Composition
181
-
182
- After all workflows deploy, the engineer runs the top-level `process-referral` workflow with a test referral. The execution cascades: validation calls the validate-referral tool, coverage check invokes the check-coverage MCP tool (cross-namespace), document verification runs in-namespace. At the escalation step, the workflow pauses. The engineer resolves it through the dashboard. The workflow resumes and completes.
183
-
184
- The plan view shows the execution trace across all workflows — a unified timeline of which tools ran in which workflow, with durations and data flow visible.
185
-
186
- ---
187
-
188
- ## The Epic Story, Continued
189
-
190
- The engineering team building referral intake for back-office customers doesn't paste one prompt at a time. They have specifications. They've sat with Linda and Maria and James, captured the full intake process, and written it up as a PRD with sections, edge cases, and decision tables.
191
-
192
- They paste the PRD into the builder. Plan mode activates. The planner decomposes it:
193
-
194
- - **Namespace `referral-intake`**: `validate-referral`, `verify-documents`, `resolve-gaps`, `route-to-scheduling`, `process-referral` (composition)
195
- - **Namespace `payer-tools`**: `check-coverage-bluecross`, `check-coverage-aetna`, `check-coverage-default`, `check-coverage` (router composition)
196
- - **Namespace `epic-fhir`**: Low-level FHIR operations if not already registered as MCP tools
197
-
198
- The payer-specific coverage tools are in their own namespace because they version independently — when Aetna changes rules, only `check-coverage-aetna` updates. The intake workflows reference them as MCP tools.
199
-
200
- The engineer reviews the plan, adjusts namespace assignments, confirms. The builder produces the full set of workflows. The engineer tests the composition end-to-end against Epic's sandbox. Linda's knowledge, Maria's knowledge, James's knowledge — encoded in a coordinated set of deterministic tools that run without AI, without the domain experts, against any customer's Epic instance.
201
-
202
- When the PRD changes — and it will — the engineer pastes the updated section. Plan mode produces a delta: which workflows need rebuilding, which are unchanged, which have new dependencies. The system evolves incrementally, the same way the specification evolved.
203
-
204
- ---
205
-
206
- ## Implementation Surface
207
-
208
- ### Data Model
209
-
210
- **New table: `lt_workflow_sets`**
211
-
212
- | Column | Type | Purpose |
213
- |--------|------|---------|
214
- | `id` | UUID | Primary key |
215
- | `name` | TEXT | Human-readable set name |
216
- | `description` | TEXT | Set description |
217
- | `specification` | TEXT | Original input (PRD, TDD, markdown) |
218
- | `plan` | JSONB | Decomposition: workflows, relationships, namespaces, build order |
219
- | `namespaces` | TEXT[] | App IDs involved |
220
- | `status` | TEXT | `planning` / `building` / `deployed` / `active` / `failed` |
221
- | `source_workflow_id` | TEXT | Builder workflow that produced the plan |
222
- | `created_at` | TIMESTAMPTZ | |
223
- | `updated_at` | TIMESTAMPTZ | |
224
-
225
- **Addition to `lt_yaml_workflows`:**
226
-
227
- | Column | Type | Purpose |
228
- |--------|------|---------|
229
- | `set_id` | UUID (nullable, FK) | References `lt_workflow_sets.id` |
230
- | `set_role` | TEXT (nullable) | `leaf` / `composition` / `router` |
231
- | `set_build_order` | INTEGER (nullable) | Build sequence within the set |
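The two tables map to record types like these, a sketch that mirrors the columns above one-to-one:

```typescript
// TypeScript mirror of lt_workflow_sets and the lt_yaml_workflows additions.
type SetStatus = "planning" | "building" | "deployed" | "active" | "failed";
type SetRole = "leaf" | "composition" | "router";

interface WorkflowSet {
  id: string;                // UUID
  name: string;
  description: string;
  specification: string;     // original PRD/TDD markdown
  plan: unknown;             // JSONB decomposition
  namespaces: string[];      // app IDs involved
  status: SetStatus;
  source_workflow_id: string;
  created_at: string;
  updated_at: string;
}

interface YamlWorkflowSetFields {
  set_id: string | null;         // FK to WorkflowSet.id
  set_role: SetRole | null;
  set_build_order: number | null; // build sequence within the set
}
```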
232
-
233
- ### API Endpoints
234
-
235
- **Plan lifecycle:**
236
- - `POST /api/workflow-sets` — create set from specification, triggers planning
237
- - `GET /api/workflow-sets` — list sets with status filters
238
- - `GET /api/workflow-sets/:id` — get set with plan and workflow statuses
239
- - `PUT /api/workflow-sets/:id/plan` — update plan (engineer adjustments)
240
- - `POST /api/workflow-sets/:id/build` — start building (leaf-first)
241
- - `POST /api/workflow-sets/:id/deploy` — deploy all namespaces
242
- - `GET /api/workflow-sets/:id/workflows` — list workflows in set with relationships
243
-
244
- **Extended workflow endpoints:**
245
- - `GET /api/yaml-workflows?set_id=:id` — filter by set membership
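A thin client over these endpoints might start as a path builder like the sketch below; a real client would attach auth headers and call `fetch`. Only the routes listed above are assumed.

```typescript
// Path builder for the workflow-set endpoints listed above.
// A real client would add auth headers and issue fetch() calls.
const setRoutes = {
  create: () => `/api/workflow-sets`,
  list: () => `/api/workflow-sets`,
  get: (id: string) => `/api/workflow-sets/${id}`,
  updatePlan: (id: string) => `/api/workflow-sets/${id}/plan`,
  build: (id: string) => `/api/workflow-sets/${id}/build`,
  deploy: (id: string) => `/api/workflow-sets/${id}/deploy`,
  workflows: (id: string) => `/api/workflow-sets/${id}/workflows`,
  bySet: (id: string) => `/api/yaml-workflows?set_id=${id}`,
};
```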
246
-
247
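To make the plan-lifecycle contract concrete, here is a hypothetical request body for `POST /api/workflow-sets`, shown as YAML for readability. Field names mirror the `lt_workflow_sets` columns (`name`, `description`, `specification`, `namespaces`); the exact wire format is an assumption, not the package's documented API.

```yaml
# Hypothetical request body for POST /api/workflow-sets.
# Field names follow the lt_workflow_sets columns; the actual
# contract may differ.
name: referral-intake
description: Referral intake system decomposed from a PRD
specification: |
  ## Referral Intake
  When a referral arrives, verify payer eligibility,
  escalate low-confidence matches for human review,
  then schedule the intake appointment.
namespaces:
  - referrals
  - eligibility
```

Creation triggers planning; the resulting `plan` and `status` would then be read back via `GET /api/workflow-sets/:id`.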
### Workflow: `mcpWorkflowPlanner`

A new system workflow that decomposes specifications into plans. Activities:

- `analyzeSpecification` — detect complexity, extract structural signals
- `decomposeIntoWorkflows` — LLM-driven decomposition with namespace assignment
- `validatePlan` — check for circular dependencies, missing contracts, namespace conflicts
- `refinePlan` — incorporate engineer feedback

The planner workflow produces the plan. The existing `mcpWorkflowBuilder` workflow builds each individual workflow in the plan. The orchestration between planner and builder is itself a composition — the planner calls the builder for each workflow in the plan, leaf-first.
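The `plan` JSONB column promises only workflows, relationships, namespaces, and build order; a hypothetical shape (key names are assumptions) might look like:

```yaml
# Hypothetical decomposition produced by mcpWorkflowPlanner.
# Key names are illustrative; only the presence of workflows,
# relationships, namespaces, and build order is specified.
workflows:
  - name: verify-eligibility
    namespace: eligibility
    set_role: leaf
    set_build_order: 1
  - name: intake-referral
    namespace: referrals
    set_role: composition
    set_build_order: 2
relationships:
  - parent: intake-referral
    child: verify-eligibility
    kind: await            # composition invokes the leaf via await
namespaces:
  - eligibility
  - referrals
```

The leaf carries the lower build order, matching the leaf-first build sequence described above.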

### Dashboard Components

**PlanView** — the outer loop panel showing the workflow dependency graph, build status, and namespace groupings. Wraps the existing wizard steps.

**PlanNode** — individual workflow in the plan view. Shows name, type, namespace, status, and links to the standard wizard for that workflow.

**SetListPage** — list of workflow sets, filterable by status and namespace. Entry point for managing multi-workflow systems.

### Events

Plan mode publishes events on the existing NATS topic space:

- `lt.plan.created` — plan produced from specification
- `lt.plan.workflow.building` — individual workflow build started
- `lt.plan.workflow.deployed` — individual workflow deployed
- `lt.plan.completed` — all workflows in plan built and deployed
- `lt.plan.failed` — build failure with context

The dashboard subscribes to these for real-time plan view updates.
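Only the topic names above are specified; a hypothetical payload for one of them, with every field an assumption about what "build failure with context" might carry, could look like:

```yaml
# Hypothetical payload for an lt.plan.failed event. Field names are
# assumptions; only the topic name comes from the design.
topic: lt.plan.failed
set_id: 0b2f4c1e-9a77-4d0c-8c1d-5e2f6a3b9d10   # example UUID
workflow: verify-eligibility
namespace: eligibility
error: referenced child topic is not deployed
```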

---

## Prompt Strategy: Teaching the LLM to Build YAML

### What Exists

The current builder prompt (`prompts.ts`) teaches four activity types — trigger, worker, hook, cycle — with strong coverage of `@pipe` RPN semantics and collision-proofing rules. The RPN section ("operands THEN operator") is particularly effective; LLMs reliably produce correct pipe expressions when the stack-machine framing is clear.

The prompt includes an abbreviated operator list: the most-used methods from `@string`, `@date`, `@math`, `@number`, `@array`, `@object`, `@conditional`, and `@json`. This is sufficient for single workflows that transform strings, build paths, and iterate arrays.
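The stack-machine framing can be illustrated with a small mapping expression: each `@pipe` row pushes operands, and a following operator row consumes them. This sketch follows the row structure the prompt teaches; treat the exact data paths as illustrative.

```yaml
# "operands THEN operator": the first row pushes three string operands,
# the @string.concat row joins them, and @string.toLowerCase transforms
# the result of the previous row.
full_name:
  '@pipe':
    - ['{trigger.output.data.last}', ', ', '{trigger.output.data.first}']
    - ['{@string.concat}']
    - ['{@string.toLowerCase}']
```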

### What's Missing

Three activity types are absent from the prompt: **await**, **signal**, and **interrupt**. These are precisely the types needed for composition and cross-workflow communication — the core of plan mode.

The operator catalog is abbreviated. The full HotMesh pipe system exposes 13 handler categories with over 150 methods. Most are rarely needed, but gaps in the catalog produce workarounds: the LLM constructs multi-step pipes to accomplish what a single operator would do, or worse, hardcodes values that should be computed.

Conditional transitions — the mechanism for branching based on activity output — are not documented in the prompt. The LLM can produce linear flows but struggles with branches, error routing, and cycle exit conditions.
### The Reference System

Rather than inflating the prompt with everything, the builder uses a modular reference system. Reference files live alongside the prompt at `system/workflows/mcp-workflow-builder/reference/`:

**`activity-types.md`** — Complete documentation of all 8 activity types with YAML examples, properties, and usage guidance. Covers trigger, worker, hook (all 4 modes), cycle, await (sync and fire-and-forget), signal (one and all), and interrupt (self and remote).

**`pipe-functions.md`** — Full catalog of every `@pipe` operator across all 13 handler categories with signatures and return types.

### Injection Strategy

The prompt is assembled from layers. The base prompt always includes:

- Trigger, worker, hook, cycle — the types every workflow needs
- `@pipe` RPN rules with the "common mistake" examples
- Core operators: `@string` (concat, substring, toLowerCase), `@date` (now, yyyymmdd, toISOString), `@math` (add), `@array` (get, length), `@object` (get)
- Construction rules (collision-proofing, _scope threading, workflowName, job.maps)
- Clarification protocol

Additional context is injected based on what the specification requires:

**Composition detected** (specification references child workflows, sub-processes, or the plan contains multiple workflows) → inject the **await** activity reference. This teaches the LLM how to invoke child workflows within the same namespace, how to wire input/output contracts, and how to use fire-and-forget mode.
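The await reference would cover configurations like the following sketch. Only `type: await` and the fire-and-forget `await: false` flag are named in this document; the remaining property names are assumptions.

```yaml
# Hypothetical await activity: the parent invokes a child workflow in
# the same namespace and waits for its result. Adding `await: false`
# would make this fire-and-forget instead.
verify:
  type: await
  topic: eligibility.verify        # child workflow's subscribed topic (assumed)
  input:
    maps:
      member_id: '{trigger.output.data.member_id}'
```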

**HITL detected** (specification mentions human review, approval gates, escalation) → inject the **signal** activity reference and the full **hook** web-hook mode documentation. This teaches signal routing through the `hooks:` section, `code: 200` vs `code: 202`, and the escalation-then-wait pattern.
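The escalation-then-wait pattern pairs a paused hook activity with a routing entry in the `hooks:` section. A sketch, where everything except the `hooks:` section itself and the `code: 200`/`code: 202` distinction mentioned above should be treated as an assumption:

```yaml
# Hypothetical signal routing: a paused job resumes when a signal
# arrives on the approval topic and its payload matches this job.
hooks:
  review.approved:
    - to: wait_for_review          # the paused hook activity (assumed name)
      conditions:
        match:
          - expected: '{$self.output.data.referral_id}'
            actual: '{$job.data.referral_id}'
```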

**Cancellation detected** (specification mentions timeout, abort, cancel) → inject the **interrupt** activity reference. Self-interrupt for validation failures, remote interrupt for cancelling child workflows.

**Branching detected** (specification mentions conditional paths, error handling, routing) → inject **conditional transitions** documentation with examples of `conditions.code`, `conditions.match`, and multi-target transition arrays.

**Complex transforms detected** (specification involves date arithmetic, array manipulation, object construction beyond simple field references) → inject the extended **pipe function catalog** for the relevant categories.

### Why Not Include Everything

Prompt length directly affects YAML quality. The current prompt is approximately 270 lines — tight enough that the LLM reads and applies every rule. Adding the full activity type reference (~250 lines) and the full pipe catalog (~300 lines) would triple the prompt size. The LLM would skim, miss construction rules, and produce more errors.

The injection strategy keeps the prompt focused on what this specific workflow needs. A simple three-step linear workflow gets the base prompt. A multi-workflow composition with escalation gates gets the base prompt plus await, signal, hook web-hook mode, and conditional transitions. The prompt grows with complexity, not unconditionally.

### Refinement Loop

When a built workflow fails testing, the `REFINE_PROMPT` guides correction. The refinement prompt can also inject activity-type references that the initial build missed — if the failure is an incorrect await configuration, the await reference is added to the refinement context. The system learns which references each workflow type needs through the feedback loop.

### Enrichments to the Existing Prompt

Based on the full HotMesh documentation, these additions improve accuracy without significant length:

1. **Conditional transitions.** Add a section showing multi-target transitions with `conditions`:

   ```yaml
   transitions:
     check_status:
       - to: handle_error
         conditions:
           code: 500
       - to: proceed
   ```

   This is essential for branching workflows and currently undocumented.

2. **Extended pipe operators in the abbreviated list.** Add: `@string.endsWith`, `@string.startsWith`, `@string.padStart`, `@string.replace`, `@array.join`, `@array.sort`, `@array.slice`, `@object.create`, `@object.assign`, `@object.entries`, `@object.fromEntries`, `@conditional.nullish`, `@conditional.equality`, `@conditional.strict_equality`, `@number.isInteger`, `@number.parseFloat`, `@number.parseInt`, `@number.toFixed`. These appear frequently in real workflow specifications, and their absence forces workarounds.

3. **Hook sleep mode.** The existing prompt mentions hook as a "durable pause point" but doesn't show the sleep configuration. Add: `sleep: 5` for fixed delays, a `@pipe` expression for dynamic delays.

4. **`stats` on trigger.** Custom job IDs and key-based lookups are needed for idempotent workflows. Add the `stats.id` and `stats.key` pattern.

5. **`await: false` pattern.** Fire-and-forget child workflows are common in composition. A one-line addition to the hook/cycle section would cover it: "For fire-and-forget child workflows, use `type: await` with `await: false`."

These additions total roughly 30 lines — minimal prompt growth for significant capability gain.
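Items 3–5 above can each be sketched in a few lines. Only `sleep`, `stats.id`, `stats.key`, `type: await`, and `await: false` appear in this document; the surrounding property names and topic are assumptions.

```yaml
# Hypothetical activity fragments illustrating enrichments 3-5.
start:
  type: trigger
  stats:
    id: '{$self.input.data.referral_id}'   # custom job ID for idempotency
    key: '{$self.input.data.payer}'        # key-based lookup
pause:
  type: hook
  sleep: 5                                 # fixed delay; a @pipe expression makes it dynamic
notify:
  type: await
  topic: notifications.send                # assumed child topic
  await: false                             # fire-and-forget
```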

---

## What This Enables

An engineer with a specification — a PRD, a TDD, a markdown document describing a multi-step operational process — pastes it into the builder and gets back a coordinated set of deterministic workflows. Leaf workflows call MCP tools. Composition workflows orchestrate leaves. Human-in-the-loop gates pause for review. Payer-specific branches route correctly. The full system deploys as a set of tools that other workflows can compose further.

The specification is the input. The compiled DAGs are the output. Plan mode is the compiler that bridges the gap. The engineer reviews and adjusts at every step — this isn't autonomous generation. It's assisted decomposition, where the LLM handles the YAML construction and the engineer handles the architectural decisions.

The citizen developer illusion dissolves. What remains is an engineer who understands the domain, understands composition, and has a tool that translates specifications into executable workflows without writing YAML by hand. The tool handles the syntax. The engineer handles the semantics.

And because every compiled workflow is an MCP tool, the output of one plan can be the input of another. A set of referral intake workflows becomes a tool that a scheduling system calls. A set of payer verification workflows becomes a tool that an eligibility checker calls. Compositions compose. Plans reference the output of prior plans. The system grows in the same way the organization's operational knowledge grows — incrementally, accretively, each new capability building on the last.