flowengine 0.3.0 → 0.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.rubocop_todo.yml +1 -6
- data/README.md +76 -13
- data/lib/flowengine/clarification_result.rb +21 -0
- data/lib/flowengine/definition.rb +4 -2
- data/lib/flowengine/dsl/flow_builder.rb +2 -2
- data/lib/flowengine/dsl/step_builder.rb +10 -1
- data/lib/flowengine/dsl.rb +0 -4
- data/lib/flowengine/engine/state_serializer.rb +50 -0
- data/lib/flowengine/engine.rb +101 -57
- data/lib/flowengine/errors.rb +39 -16
- data/lib/flowengine/llm/adapter.rb +69 -7
- data/lib/flowengine/llm/adapters.rb +24 -0
- data/lib/flowengine/llm/auto_client.rb +19 -0
- data/lib/flowengine/llm/client.rb +52 -6
- data/lib/flowengine/llm/intake_prompt_builder.rb +131 -0
- data/lib/flowengine/llm/provider.rb +12 -0
- data/lib/flowengine/llm/sensitive_data_filter.rb +1 -1
- data/lib/flowengine/llm.rb +95 -36
- data/lib/flowengine/node.rb +13 -3
- data/lib/flowengine/version.rb +1 -1
- data/lib/flowengine.rb +24 -25
- data/resources/models.yml +25 -0
- metadata +22 -4
- data/lib/flowengine/llm/anthropic_adapter.rb +0 -40
- data/lib/flowengine/llm/gemini_adapter.rb +0 -40
- data/lib/flowengine/llm/openai_adapter.rb +0 -38
checksums.yaml
CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2cfa285d8ba7e6975cffead57c7a737c42ddb119be74e37edb617198c56d391c
+  data.tar.gz: 842c4b843cbdd83fceb49cb066bb49553d4d95a6d0cf2d1df891c8d553eb005b
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 742b19a6eb3c667612ce74d83dab6f0402b02bf5cc0f808a4cf9c2070dfb649617e126a6674c7059465cf6654f0898d9d367bd483c6da4bffe8ec785ec5ae6c1
+  data.tar.gz: 225aae48aa9c12121c7a466a263ff66a5735064472720776cf17f0404b9d3a9f581107df57a8efa2d4071f3a635568b3b7ed37ff8d58f4b876f6b29af498e2fd
```
data/.rubocop_todo.yml
CHANGED
```diff
@@ -1,6 +1,6 @@
 # This configuration was generated by
 # `rubocop --auto-gen-config`
-# on 2026-03-
+# on 2026-03-11 14:01:54 UTC using RuboCop version 1.85.1.
 # The point is for the user to remove these configuration records
 # one by one as the offenses are removed from the code base.
 # Note that changes in the inspected code, or installation of new
@@ -13,11 +13,6 @@ Gemspec/DevelopmentDependencies:
   Exclude:
     - 'flowengine.gemspec'
 
-# Offense count: 1
-# Configuration parameters: AllowedMethods, AllowedPatterns, CountRepeatedAttributes.
-Metrics/AbcSize:
-  Max: 18
-
 # Offense count: 2
 # Configuration parameters: AllowedMethods, AllowedPatterns.
 Metrics/CyclomaticComplexity:
```
data/README.md
CHANGED
````diff
@@ -98,6 +98,69 @@ engine.history
 
 ### Using the `flowengine-cli` gem to Generate the JSON Answers File
 
+---
+
+## LLM-Based DSL Capabilities & Environment Variables
+
+There are several environment variables that define which vendor and which model you talk to, should you choose to engage an LLM in your decision logic.
+
+A special YAML file provided with this gem locks in the list of supported vendors and three types of models per vendor:
+
+- best bang for the buck models
+- deep thinking and hard mode models
+- fastest models
+- *at some point we might also add the "cheapest".*
+
+The file [resources/models.yml](resources/models.yml) defines which models are available to each adapter. It is read at gem startup to load and initialize an LLM adapter for every vendor whose API key is defined in the environment. For each of those, at least three model names are loaded:
+
+- `top:` — best results, likely the most expensive.
+- `default:` — the default model, used when the caller does not specify one.
+- `fastest:` — the fastest model from this vendor.
+
+Here are the contents of `resources/models.yml` verbatim:
+
+```yaml
+models:
+  version: "1.0"
+  date: "Wed Mar 11 02:35:39 PDT 2026"
+  vendors:
+    anthropic:
+      adapter: "FlowEngine::LLM::Adapters::AnthropicAdapter"
+      var: "ANTHROPIC_API_KEY"
+      top: "claude-opus-4-6"
+      default: "claude-sonnet-4-6"
+      fastest: "claude-haiku-4-5-20251001"
+    openai:
+      adapter: "FlowEngine::LLM::Adapters::OpenAIAdapter"
+      var: "OPENAI_API_KEY"
+      top: "gpt-5.4"
+      default: "gpt-5-mini"
+      fastest: "gpt-5-nano"
+    gemini:
+      adapter: "FlowEngine::LLM::Adapters::GeminiAdapters"
+      var: "GEMINI_API_KEY"
+      top: "gemini-3.1-pro-preview"
+      default: "gemini-2.5-flash"
+      fastest: "gemini-2.5-flash-lite"
+```
+
+Notice how this file acts as a sort of glue for the gem: it explicitly names the environment variables that hold your API keys, the class names of the corresponding adapters, and the three models for each vendor:
+
+1. `:top`
+2. `:default`
+3. `:fastest`
+
+> [!IMPORTANT]
+>
+> The reason these models are extracted into a separate YAML file should be obvious: the contents of this list seem to change every week, while the gem can remain at the same version for years. For this reason, the gem honors the environment variable `${FLOWENGINE_LLM_MODELS_PATH}` and will read the models and vendors from the file that path points to. This is your door to better models, and to other LLM vendors that RubyLLM supports.
+
+When the gem is loading, one of the first things it does is load this YAML file and instantiate the hash of pre-initialized adapters.
+
+Need an adapter to use for your API call?
+
+```ruby
+FlowEngine::LLM['claude-opus-4-6', vendor: :anthropic]
+```
 ## LLM-parsed Introduction
 
 FlowEngine supports an optional **introduction step** that collects free-form text from the user before the structured flow begins. An LLM parses this text to pre-fill answers, automatically skipping steps the user already answered in their introduction.
@@ -180,11 +243,11 @@ Before any text reaches the LLM, `submit_introduction` scans for sensitive data
 - **EIN**: `12-3456789`
 - **Nine consecutive digits**: `123456789`
 
-If detected, a `FlowEngine::SensitiveDataError` is raised immediately. The introduction text is discarded and no LLM call is made.
+If detected, a `FlowEngine::Errors::SensitiveDataError` is raised immediately. The introduction text is discarded and no LLM call is made.
 
 ```ruby
 engine.submit_introduction("My SSN is 123-45-6789", llm_client: client)
-# => raises FlowEngine::SensitiveDataError
+# => raises FlowEngine::Errors::SensitiveDataError
 ```
 
 ### Custom LLM Adapters
@@ -467,11 +530,11 @@ engine = FlowEngine::Engine.new(definition)
 ```ruby
 # Answering after the flow is complete
 engine.answer("extra")
-# => raises FlowEngine::AlreadyFinishedError
+# => raises FlowEngine::Errors::AlreadyFinishedError
 
 # Referencing an unknown step in a definition
 definition.step(:nonexistent)
-# => raises FlowEngine::UnknownStepError
+# => raises FlowEngine::Errors::UnknownStepError
 
 # Invalid definition (start step doesn't exist)
 FlowEngine.define do
@@ -481,19 +544,19 @@ FlowEngine.define do
     question "Hello"
   end
 end
-# => raises FlowEngine::DefinitionError
+# => raises FlowEngine::Errors::DefinitionError
 
 # Sensitive data in introduction
 engine.submit_introduction("My SSN is 123-45-6789", llm_client: client)
-# => raises FlowEngine::SensitiveDataError
+# => raises FlowEngine::Errors::SensitiveDataError
 
 # Introduction exceeds maxlength
 engine.submit_introduction("A" * 3000, llm_client: client)
-# => raises FlowEngine::ValidationError
+# => raises FlowEngine::Errors::ValidationError
 
 # Missing API key or LLM response parsing failure
-FlowEngine::LLM::OpenAIAdapter.new # without OPENAI_API_KEY
-# => raises FlowEngine::LLMError
+FlowEngine::LLM::Adapters::OpenAIAdapter.new # without OPENAI_API_KEY
+# => raises FlowEngine::Errors::LLMError
 ```
 
 ## Validation
@@ -510,8 +573,8 @@ class FlowEngine::Validation::Adapter
 end
 
 # Result object
-FlowEngine::Validation::Result.new(valid: true, errors: [])
-FlowEngine::Validation::Result.new(valid: false, errors: ["must be a number"])
+FlowEngine::Errors::Validation::Result.new(valid: true, errors: [])
+FlowEngine::Errors::Validation::Result.new(valid: false, errors: ["must be a number"])
```
 
 ### Custom Validator Example
@@ -532,12 +595,12 @@ class MyValidator < FlowEngine::Validation::Adapter
       end
     end
 
-    FlowEngine::Validation::Result.new(valid: errors.empty?, errors: errors)
+    FlowEngine::Errors::Validation::Result.new(valid: errors.empty?, errors: errors)
   end
 end
 
 engine = FlowEngine::Engine.new(definition, validator: MyValidator.new)
-engine.answer("not_a_number") # => raises FlowEngine::ValidationError
+engine.answer("not_a_number") # => raises FlowEngine::Errors::ValidationError
```
 
 ## Mermaid Diagram Export
````
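The vendor-selection rule the README additions describe (an adapter is initialized only when the environment defines the vendor's API-key variable) can be sketched without the gem. Everything below is illustrative: the `available_vendors` helper and the abbreviated YAML are this sketch's own, not flowengine's API.

```ruby
require "yaml"

# Abbreviated copy of the resources/models.yml shape quoted above.
MODELS_YML = <<~YAML
  models:
    version: "1.0"
    vendors:
      anthropic:
        var: "ANTHROPIC_API_KEY"
        default: "claude-sonnet-4-6"
      openai:
        var: "OPENAI_API_KEY"
        default: "gpt-5-mini"
YAML

# Returns { vendor_name => default_model } for every vendor whose
# API-key variable is present in the given environment hash.
def available_vendors(yaml, env)
  vendors = YAML.safe_load(yaml).dig("models", "vendors")
  vendors.each_with_object({}) do |(name, spec), out|
    out[name] = spec["default"] if env.key?(spec["var"])
  end
end

available_vendors(MODELS_YML, { "OPENAI_API_KEY" => "sk-test" })
```

Passing the environment in explicitly (rather than reading `ENV` directly) keeps the rule easy to exercise with different key sets.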
data/lib/flowengine/clarification_result.rb
ADDED

```diff
@@ -0,0 +1,21 @@
+# frozen_string_literal: true
+
+module FlowEngine
+  # Immutable result from an AI intake or clarification round.
+  #
+  # @attr_reader answered [Hash<Symbol, Object>] step_id => value pairs filled this round
+  # @attr_reader pending_steps [Array<Symbol>] step ids still unanswered after this round
+  # @attr_reader follow_up [String, nil] clarifying question from the LLM, or nil if done
+  # @attr_reader round [Integer] current clarification round (1-based)
+  ClarificationResult = Data.define(:answered, :pending_steps, :follow_up, :round) do
+    def initialize(answered: {}, pending_steps: [], follow_up: nil, round: 1)
+      super
+      freeze
+    end
+
+    # @return [Boolean] true when the LLM has no more questions or max rounds reached
+    def done?
+      follow_up.nil?
+    end
+  end
+end
```
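The new value object above is plain Ruby (it needs Ruby 3.2+ for `Data.define`), so its keyword defaults and `done?` predicate can be exercised standalone; the class below is a copy of the one in the hunk, trimmed of YARD comments.

```ruby
# Copy of the new ClarificationResult (requires Ruby 3.2+ for Data.define).
module FlowEngine
  ClarificationResult = Data.define(:answered, :pending_steps, :follow_up, :round) do
    def initialize(answered: {}, pending_steps: [], follow_up: nil, round: 1)
      super
      freeze
    end

    # done? is true once the LLM has no follow-up question left.
    def done?
      follow_up.nil?
    end
  end
end

# All fields default, so a bare round result needs only what changed.
finished = FlowEngine::ClarificationResult.new(answered: { name: "Ada" })
ongoing  = FlowEngine::ClarificationResult.new(follow_up: "Which state?", round: 2)
```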
data/lib/flowengine/definition.rb
CHANGED

```diff
@@ -30,7 +30,7 @@ module FlowEngine
     # @return [Node] the node for that step
     # @raise [UnknownStepError] if id is not in steps
     def step(id)
-      steps.fetch(id) { raise UnknownStepError, "Unknown step: #{id.inspect}" }
+      steps.fetch(id) { raise Errors::UnknownStepError, "Unknown step: #{id.inspect}" }
     end
 
     # @return [Array<Symbol>] all step ids in the definition
@@ -41,7 +41,9 @@ module FlowEngine
     private
 
     def validate!
-
+      return if steps.key?(start_step_id)
+
+      raise Errors::DefinitionError, "Start step #{start_step_id.inspect} not found in nodes"
     end
   end
 end
```
data/lib/flowengine/dsl/flow_builder.rb
CHANGED

```diff
@@ -45,8 +45,8 @@ module FlowEngine
     # @return [Definition]
     # @raise [DefinitionError] if start was not set or no steps were defined
     def build
-      raise DefinitionError, "No start step defined" if @start_step_id.nil?
-      raise DefinitionError, "No steps defined" if @nodes.empty?
+      raise ::FlowEngine::Errors::DefinitionError, "No start step defined" if @start_step_id.nil?
+      raise ::FlowEngine::Errors::DefinitionError, "No steps defined" if @nodes.empty?
 
       Definition.new(start_step_id: @start_step_id, nodes: @nodes, introduction: @introduction)
     end
```
data/lib/flowengine/dsl/step_builder.rb
CHANGED

```diff
@@ -15,6 +15,7 @@ module FlowEngine
       @transitions = []
       @visibility_rule = nil
       @decorations = nil
+      @max_clarifications = 0
     end
 
     # Sets the step/input type (e.g. :multi_select, :number_matrix).
@@ -62,6 +63,13 @@ module FlowEngine
       @visibility_rule = rule
     end
 
+    # Sets the maximum number of clarification rounds for an :ai_intake step.
+    #
+    # @param count [Integer] max follow-up rounds (0 = one-shot, no clarifications)
+    def max_clarifications(count)
+      @max_clarifications = count
+    end
+
     # Builds the {Node} for the given step id from accumulated attributes.
     #
     # @param id [Symbol] step id
@@ -75,7 +83,8 @@ module FlowEngine
         fields: @fields,
         transitions: @transitions,
         visibility_rule: @visibility_rule,
-        decorations: @decorations
+        decorations: @decorations,
+        max_clarifications: @max_clarifications
       )
     end
   end
```
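A minimal sketch (not the gem's code) of the builder mechanics this file extends: the step block is evaluated against a builder instance, and the new `max_clarifications` value rides along with the other accumulated attributes into the built node. The `StepBuilderSketch` class and its attribute set are invented for illustration.

```ruby
# Illustrative stand-in for FlowEngine::DSL::StepBuilder.
class StepBuilderSketch
  attr_reader :attrs

  def initialize
    # Mirrors the new default: 0 = one-shot intake, no follow-up rounds.
    @attrs = { max_clarifications: 0 }
  end

  def type(value)
    @attrs[:type] = value
  end

  def question(text)
    @attrs[:question] = text
  end

  def max_clarifications(count)
    @attrs[:max_clarifications] = count
  end

  # Evaluates a step block and returns the accumulated attributes.
  def self.build(&block)
    builder = new
    builder.instance_eval(&block)
    builder.attrs
  end
end

node = StepBuilderSketch.build do
  type :ai_intake
  question "Tell us about your situation"
  max_clarifications 2
end
```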
data/lib/flowengine/dsl.rb
CHANGED
```diff
@@ -1,9 +1,5 @@
 # frozen_string_literal: true
 
-require_relative "dsl/rule_helpers"
-require_relative "dsl/step_builder"
-require_relative "dsl/flow_builder"
-
 module FlowEngine
   # Namespace for the declarative flow DSL: {FlowBuilder} builds a {Definition} from blocks,
   # {StepBuilder} builds individual {Node}s, and {RuleHelpers} provide rule factory methods.
```
data/lib/flowengine/engine/state_serializer.rb
ADDED

```diff
@@ -0,0 +1,50 @@
+# frozen_string_literal: true
+
+module FlowEngine
+  class Engine
+    # Handles state serialization and deserialization for Engine persistence.
+    # Normalizes string-keyed hashes (from JSON) to symbol-keyed hashes.
+    module StateSerializer
+      SYMBOLIZERS = {
+        current_step_id: ->(v) { v&.to_sym },
+        active_intake_step_id: ->(v) { v&.to_sym },
+        history: ->(v) { Array(v).map { |e| e&.to_sym } },
+        answers: ->(v) { symbolize_answers(v) },
+        conversation_history: ->(v) { symbolize_conversation_history(v) }
+      }.freeze
+
+      # Normalizes a state hash so step ids and history entries are symbols.
+      def self.symbolize_state(hash)
+        return hash unless hash.is_a?(Hash)
+
+        hash.each_with_object({}) do |(key, value), result|
+          sym_key = key.to_sym
+          result[sym_key] = SYMBOLIZERS.fetch(sym_key, ->(v) { v }).call(value)
+        end
+      end
+
+      # @param answers [Hash] answers map (keys may be strings)
+      # @return [Hash] same map with symbol keys
+      def self.symbolize_answers(answers)
+        return {} unless answers.is_a?(Hash)
+
+        answers.each_with_object({}) { |(k, v), h| h[k.to_sym] = v }
+      end
+
+      # @param history [Array<Hash>] conversation history entries
+      # @return [Array<Hash>] same entries with symbolized keys and role
+      def self.symbolize_conversation_history(history)
+        return [] unless history.is_a?(Array)
+
+        history.map do |entry|
+          next entry unless entry.is_a?(Hash)
+
+          entry.each_with_object({}) do |(k, v), h|
+            sym_key = k.to_sym
+            h[sym_key] = sym_key == :role ? v.to_sym : v
+          end
+        end
+      end
+    end
+  end
+end
```
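The new serializer has no dependencies beyond core Ruby, so its normalization behavior can be checked standalone. The module below is copied from the hunk above with the YARD comments trimmed, followed by a call on the kind of string-keyed hash `JSON.parse` would produce.

```ruby
module FlowEngine
  class Engine
    # Trimmed copy of the new StateSerializer from this release: normalizes
    # string-keyed state (e.g. parsed JSON) back to the symbol-keyed shape
    # the engine uses internally.
    module StateSerializer
      SYMBOLIZERS = {
        current_step_id: ->(v) { v&.to_sym },
        active_intake_step_id: ->(v) { v&.to_sym },
        history: ->(v) { Array(v).map { |e| e&.to_sym } },
        answers: ->(v) { symbolize_answers(v) },
        conversation_history: ->(v) { symbolize_conversation_history(v) }
      }.freeze

      def self.symbolize_state(hash)
        return hash unless hash.is_a?(Hash)

        hash.each_with_object({}) do |(key, value), result|
          sym_key = key.to_sym
          # Unknown keys pass through unchanged via the fallback lambda.
          result[sym_key] = SYMBOLIZERS.fetch(sym_key, ->(v) { v }).call(value)
        end
      end

      def self.symbolize_answers(answers)
        return {} unless answers.is_a?(Hash)

        answers.each_with_object({}) { |(k, v), h| h[k.to_sym] = v }
      end

      def self.symbolize_conversation_history(history)
        return [] unless history.is_a?(Array)

        history.map do |entry|
          next entry unless entry.is_a?(Hash)

          entry.each_with_object({}) do |(k, v), h|
            sym_key = k.to_sym
            # The :role value itself also becomes a symbol (:user/:assistant).
            h[sym_key] = sym_key == :role ? v.to_sym : v
          end
        end
      end
    end
  end
end

FlowEngine::Engine::StateSerializer.symbolize_state(
  "current_step_id" => "income",
  "history" => %w[intro income],
  "answers" => { "name" => "Ada" },
  "conversation_history" => [{ "role" => "user", "text" => "hi" }]
)
```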
data/lib/flowengine/engine.rb
CHANGED
```diff
@@ -3,14 +3,9 @@
 module FlowEngine
   # Runtime session that drives flow navigation: holds definition, answers, and current step.
   # Validates each answer via an optional {Validation::Adapter}, then advances using node transitions.
-  #
-
-
-  # @attr_reader history [Array<Symbol>] ordered list of step ids visited (including current)
-  # @attr_reader current_step_id [Symbol, nil] current step id, or nil when flow is finished
-  # @attr_reader introduction_text [String, nil] free-form text submitted before the flow began
-  class Engine
-    attr_reader :definition, :answers, :history, :current_step_id, :introduction_text
+  class Engine # rubocop:disable Metrics/ClassLength
+    attr_reader :definition, :answers, :history, :current_step_id, :introduction_text,
+                :clarification_round, :conversation_history
 
     # @param definition [Definition] the flow to run
     # @param validator [Validation::Adapter] validator for step answers (default: {Validation::NullAdapter})
@@ -21,6 +16,9 @@
       @current_step_id = definition.start_step_id
       @validator = validator
       @introduction_text = nil
+      @clarification_round = 0
+      @conversation_history = []
+      @active_intake_step_id = nil
       @history << @current_step_id
     end
 
@@ -42,10 +40,10 @@
     # @raise [AlreadyFinishedError] if the flow has already finished
     # @raise [ValidationError] if the validator rejects the value
     def answer(value)
-      raise AlreadyFinishedError, "Flow is already finished" if finished?
+      raise Errors::AlreadyFinishedError, "Flow is already finished" if finished?
 
       result = @validator.validate(current_step, value)
-      raise ValidationError, "Validation failed: #{result.errors.join(", ")}" unless result.valid?
+      raise Errors::ValidationError, "Validation failed: #{result.errors.join(", ")}" unless result.valid?
 
       answers[@current_step_id] = value
       advance_step
@@ -56,9 +54,6 @@
     #
     # @param text [String] user's free-form introduction
     # @param llm_client [LLM::Client] configured LLM client for parsing
-    # @raise [SensitiveDataError] if text contains SSN, ITIN, EIN, etc.
-    # @raise [ValidationError] if text exceeds the introduction maxlength
-    # @raise [LLMError] on LLM communication or parsing failures
     def submit_introduction(text, llm_client:)
       validate_introduction_length!(text)
       LLM::SensitiveDataFilter.check!(text)
@@ -68,65 +63,67 @@
       auto_advance_prefilled
     end
 
-    #
+    # Submits free-form text for the current AI intake step. Returns a ClarificationResult.
     #
-    # @
+    # @param text [String] user's free-form text
+    # @param llm_client [LLM::Client] configured LLM client
+    # @return [ClarificationResult]
+    def submit_ai_intake(text, llm_client:)
+      node = current_step
+      raise Errors::EngineError, "Current step is not an AI intake step" unless node&.ai_intake?
+
+      LLM::SensitiveDataFilter.check!(text)
+
+      @active_intake_step_id = @current_step_id
+      @clarification_round = 1
+      @conversation_history = [{ role: :user, text: text }]
+
+      perform_intake_round(text, llm_client, node)
+    end
+
+    # Submits a clarification response for an ongoing AI intake conversation.
+    #
+    # @param text [String] user's response to the follow-up question
+    # @param llm_client [LLM::Client] configured LLM client
+    # @return [ClarificationResult]
+    def submit_clarification(text, llm_client:)
+      raise Errors::EngineError, "No active AI intake conversation to clarify" unless @active_intake_step_id
+
+      LLM::SensitiveDataFilter.check!(text)
+
+      node = @definition.step(@active_intake_step_id)
+      @clarification_round += 1
+      @conversation_history << { role: :user, text: text }
+
+      perform_intake_round(text, llm_client, node)
+    end
+
+    # Serializable state for persistence or resumption.
     def to_state
       {
         current_step_id: @current_step_id,
         answers: @answers,
         history: @history,
-        introduction_text: @introduction_text
+        introduction_text: @introduction_text,
+        clarification_round: @clarification_round,
+        conversation_history: @conversation_history,
+        active_intake_step_id: @active_intake_step_id
       }
     end
 
-    # Rebuilds an engine from a previously saved state
+    # Rebuilds an engine from a previously saved state.
     #
     # @param definition [Definition] same definition used when state was captured
-    # @param state_hash [Hash] hash with
+    # @param state_hash [Hash] hash with state keys (may be strings from JSON)
     # @param validator [Validation::Adapter] validator to use (default: NullAdapter)
     # @return [Engine] restored engine instance
     def self.from_state(definition, state_hash, validator: Validation::NullAdapter.new)
-      state = symbolize_state(state_hash)
+      state = StateSerializer.symbolize_state(state_hash)
       engine = allocate
       engine.send(:restore_state, definition, state, validator)
       engine
     end
 
-    # Normalizes a state hash so step ids and history entries are symbols; answers keys are symbols.
-    #
-    # @param hash [Hash] raw state (e.g. from JSON)
-    # @return [Hash] symbolized state
-    def self.symbolize_state(hash)
-      return hash unless hash.is_a?(Hash)
-
-      hash.each_with_object({}) do |(key, value), result|
-        sym_key = key.to_sym
-        result[sym_key] = case sym_key
-                          when :current_step_id
-                            value&.to_sym
-                          when :history
-                            Array(value).map { |v| v&.to_sym }
-                          when :answers
-                            symbolize_answers(value)
-                          else
-                            value
-                          end
-      end
-    end
-
-    # @param answers [Hash] answers map (keys may be strings)
-    # @return [Hash] same map with symbol keys
-    def self.symbolize_answers(answers)
-      return {} unless answers.is_a?(Hash)
-
-      answers.each_with_object({}) do |(key, value), result|
-        result[key.to_sym] = value
-      end
-    end
-
-    private_class_method :symbolize_state, :symbolize_answers
-
     private
 
     def restore_state(definition, state, validator)
@@ -136,12 +133,14 @@
       @answers = state[:answers] || {}
       @history = state[:history] || []
       @introduction_text = state[:introduction_text]
+      @clarification_round = state[:clarification_round] || 0
+      @conversation_history = state[:conversation_history] || []
+      @active_intake_step_id = state[:active_intake_step_id]
     end
 
     def advance_step
       node = definition.step(@current_step_id)
       next_id = node.next_step_id(answers)
-
       @current_step_id = next_id
       @history << next_id if next_id
     end
@@ -151,13 +150,58 @@
       return unless maxlength
       return if text.length <= maxlength
 
-      raise ValidationError, "Introduction text exceeds maxlength (#{text.length}/#{maxlength})"
+      raise Errors::ValidationError, "Introduction text exceeds maxlength (#{text.length}/#{maxlength})"
     end
 
-    # Advances through consecutive steps that already have pre-filled answers.
-    # Stops at the first step without a pre-filled answer or when the flow ends.
     def auto_advance_prefilled
       advance_step while @current_step_id && @answers.key?(@current_step_id)
     end
+
+    def perform_intake_round(user_text, llm_client, node)
+      result = llm_client.parse_ai_intake(
+        definition: @definition, user_text: user_text,
+        answered: @answers, conversation_history: @conversation_history
+      )
+      @answers.merge!(result[:answers])
+      follow_up = resolve_follow_up(result[:follow_up], node)
+
+      build_clarification_result(result[:answers], follow_up)
+    end
+
+    def resolve_follow_up(follow_up, node)
+      if follow_up && @clarification_round <= node.max_clarifications
+        @conversation_history << { role: :assistant, text: follow_up }
+        follow_up
+      else
+        finalize_intake
+        nil
+      end
+    end
+
+    def build_clarification_result(round_answers, follow_up)
+      ClarificationResult.new(
+        answered: round_answers,
+        pending_steps: pending_non_intake_steps,
+        follow_up: follow_up,
+        round: @clarification_round
+      )
+    end
+
+    def finalize_intake
+      @answers[@active_intake_step_id] = conversation_summary
+      @active_intake_step_id = nil
+      advance_step
+      auto_advance_prefilled
+    end
+
+    def conversation_summary
+      @conversation_history.map { |e| "#{e[:role]}: #{e[:text]}" }.join("\n")
+    end
+
+    def pending_non_intake_steps
+      @definition.steps.each_with_object([]) do |(id, node), pending|
+        pending << id unless node.ai_intake? || @answers.key?(id)
+      end
+    end
   end
 end
```
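The heart of the new clarification loop is the round-capping check in `resolve_follow_up`: a follow-up question is surfaced only while the 1-based round counter is within the step's `max_clarifications` budget; otherwise the intake finalizes. A standalone predicate (the name `continue_clarifying?` is this sketch's own, not the gem's) makes the edge cases easy to see:

```ruby
# Mirrors the condition `follow_up && @clarification_round <= node.max_clarifications`
# from the diff above, with the operands passed in explicitly.
def continue_clarifying?(follow_up, round, max_clarifications)
  !follow_up.nil? && round <= max_clarifications
end

continue_clarifying?("Which state?", 1, 0) # one-shot step: round 1 already exceeds the budget
continue_clarifying?("Which state?", 1, 2) # within budget: surface the follow-up
continue_clarifying?(nil, 1, 2)            # LLM asked nothing: finalize immediately
```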
data/lib/flowengine/errors.rb
CHANGED
```diff
@@ -1,27 +1,50 @@
 # frozen_string_literal: true
 
 module FlowEngine
-
-
+  module Errors
+    # Base exception for all flowengine errors.
+    class Error < StandardError; end
 
-
-
+    # Raised when configuration is invalid (e.g. missing models.yml).
+    class ConfigurationError < Error; end
 
-
-
+    # Raised when a flow definition is invalid (e.g. missing start step, unknown step reference).
+    class DefinitionError < Error; end
 
-
-
+    # Raised when navigating to or requesting a step id that does not exist in the definition.
+    class UnknownStepError < Error; end
 
-
-
+    # Base exception for runtime engine errors (e.g. validation, already finished).
+    class EngineError < Error; end
 
-
-
+    # Raised when {Engine#answer} is called after the flow has already finished.
+    class AlreadyFinishedError < EngineError; end
 
-
-
+    # Raised when the validator rejects the user's answer for the current step.
+    class ValidationError < EngineError; end
 
-
-
+    # Raised when introduction text contains sensitive data (SSN, ITIN, EIN, etc.).
+    class SensitiveDataError < EngineError; end
+
+    # Base exception for LLM-related errors (missing API key, response parsing, etc.).
+    class LLMError < Error; end
+
+    # Raised when no API key is found for any provider.
+    class NoAPIKeyFoundError < LLMError; end
+
+    # Raised when a requested provider does not exist.
+    class NoSuchProviderExists < LLMError; end
+
+    # Raised when a provider is missing its API key.
+    class ProviderMissingApiKey < LLMError; end
+
+    # Raised when a requested model is not available.
+    class ModelNotAvailable < LLMError; end
+
+    # Raised when the LLM provider rejects the request due to rate limits or budget.
+    class OutOfBudgetError < LLMError; end
+
+    # Raised when the LLM provider rejects authentication credentials.
+    class AuthorizationError < LLMError; end
+  end
 end
```
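Since every class now nests under `FlowEngine::Errors` with a common `Error` base, callers can rescue coarsely or finely. An abridged standalone copy of the hierarchy demonstrates this (only a subset of the classes from the diff is reproduced here):

```ruby
# Abridged copy of the new error hierarchy from this release.
module FlowEngine
  module Errors
    class Error < StandardError; end
    class EngineError < Error; end
    class ValidationError < EngineError; end
    class LLMError < Error; end
    class OutOfBudgetError < LLMError; end
  end
end

# Rescuing the shared base catches any flowengine failure, while the
# concrete class is still available for finer-grained handling.
caught = begin
  raise FlowEngine::Errors::ValidationError, "bad answer"
rescue FlowEngine::Errors::Error => e
  e.class.name
end
```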