llm-fillin 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 1167e6d823abb66aaa8e4cfd2f0a5ba2cb969dfe2ebc88f5a9931e5cf641b846
-  data.tar.gz: 1419c96c1c60df093cf8175cb1fa0e4098460f5528e3ed43097effd7426f4b1a
+  metadata.gz: a9a7ff0f7be64aaca2b9a33a7f0b025efee102564c99ccad04fa461e73c9bc01
+  data.tar.gz: c2c4010ba0ce661098f5ce6295d910b7b50910428434c6e01898116bfc16c41a
 SHA512:
-  metadata.gz: cd888ae3ec1245f5201c5509b3d08ce234927c5b91e61a46b10fd52739d4bfb314657a289da336b4d01fedebdeb3166ec4c036e75fb67d48e7b40be21f336e92
-  data.tar.gz: 4cdb05933e2f97aae0bf296f44f11abf2ac77612a57d8784c959fe184e8038caae4348c009d4ca1979c2294c2bffba9c0909333a0e345518be955049ebc5e425
+  metadata.gz: ae37c8a7f116ae48b31fc657e7f043568c6c3a94e3dac92a245fccbd52f0045507604bc8ce0fb837bfe41c8ffb5d86ca2e9a1fcdb7614fa50e2d0bdb6dec8510
+  data.tar.gz: 5344056f92d113505b1d26847ddccf5d40ffb02a264540efbfe133659756150b75888662013310e3e5e8dc376a92a4a6f833dc6a140f93600cc7a6ec40bcb664
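checksums.yaml pins SHA256/SHA512 digests for the two archives packed inside the `.gem` file (`metadata.gz` and `data.tar.gz`). If you want to re-verify them yourself after unpacking the gem with `tar`, the standard library's `Digest` module is enough. A sketch (the file path is illustrative, not taken from this diff):

```ruby
require "digest"

# Recompute both digests for an unpacked archive so they can be compared
# against the values listed in checksums.yaml.
def digests(path)
  data = File.binread(path)
  {
    sha256: Digest::SHA256.hexdigest(data),
    sha512: Digest::SHA512.hexdigest(data)
  }
end

# Well-known digest of the empty string, shown only to demonstrate the call:
Digest::SHA256.hexdigest("")
# => "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
```

`digests("data.tar.gz")` would then be compared field by field with the `+` entries above.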
data/README.md CHANGED
@@ -3,6 +3,8 @@
 **LLM-powered slot filling + tool orchestration for Ruby.**
 Register JSON-schema tools, let an LLM ask for missing fields, then call your handlers safely.
 
+---
+
 ## Install
 ```bash
 bundle install
@@ -19,14 +21,40 @@ export OPENAI_API_KEY=sk-...
 ruby examples/run_create_toy.rb
 ```
 
-Try:
-```
-I want a red race car toy for $12
+---
+
+## Example Conversation
+
+```text
+AI: Got it! You want a red race car toy for $12. Could you please provide a name for this toy?
+YOU: Supra
+AI: You want a red race car toy named "Supra" for $12. Just to confirm, is the category of the toy "car"?
+YOU: what other category are there?
+AI: The toy categories available are plush, puzzle, doll, car, lego, and other. Would you like to proceed with the "car" category for the red race car toy named Supra?
+YOU: make it lego
+AI: You want a red Lego race car toy priced at $12. Just to confirm, is the toy name "Supra"?
+YOU: yes
+✅ Toy created: Supra (lego, red) - $12.0 | ID TOY-6A99ED
 ```
 
-The assistant will ask for any missing fields (like category) and then “create” the toy.
+---
+
+## How it works
+1. You type a natural request (`"I want a red race car toy for $12"`).
+2. The LLM recognizes intent (`create_toy_v1`) and extracts what it can:
+   - name = missing
+   - category = inferred, but asks for confirmation
+   - price = 1200 minor units
+   - color = red
+3. The assistant asks follow-up questions until all required fields are filled.
+4. When ready, the tool handler runs and returns a structured object with the toy’s details.
+
+---
 
 ## Use in your app
-- Register tools (schemas + handlers)
-- Call the Orchestrator with your message list
-- Validate server-side; enforce tenant/RBAC; generate idempotency for creates
+- Register your own tools in the `Registry` (e.g. `create_invoice`, `create_user`, `lookup_balance`).
+- Pass messages into the `Orchestrator`.
+- The orchestrator ensures:
+  - JSON schema validation
+  - tenant/actor context passed into handlers
+  - idempotency keys for safe “create” operations
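The README changes above describe a loop: extract fields, ask for what is still missing, then run a handler with tenant context and an idempotency key. That pattern can be sketched without the gem. This is an illustration only, with hypothetical names (`TOY_SCHEMA`, `CREATE_TOY`, `missing_fields`), not llm-fillin's real `Registry`/`Orchestrator` API:

```ruby
require "json"
require "digest"

# Stand-in "tool": a schema listing required fields plus a handler lambda.
TOY_SCHEMA = { "required" => %w[name category price_minor color] }.freeze

CREATE_TOY = lambda do |args, ctx|
  # Idempotency key for safe "create" retries: same tenant + args => same key.
  key = Digest::SHA256.hexdigest("#{ctx[:tenant_id]}:#{args.to_json}")[0, 12]
  { id: "TOY-#{key.upcase}", **args.transform_keys(&:to_sym) }
end

# Fields the assistant would still need to ask the user for.
def missing_fields(schema, args)
  schema["required"] - args.keys
end

args = { "category" => "lego", "price_minor" => 1200, "color" => "red" }
missing_fields(TOY_SCHEMA, args) # => ["name"], so the assistant asks for it

args["name"] = "Supra"
CREATE_TOY.call(args, { tenant_id: "t1" }) # returns { id: "TOY-...", name: "Supra", ... }
```

Because the key is derived from tenant plus arguments, retrying the same "create" yields the same ID instead of a duplicate record.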
@@ -6,27 +6,42 @@ module LLM
   module Fillin
     class OpenAIAdapter
       def initialize(api_key:, model:, temperature: 0)
-        @client = OpenAI::Client.new(access_token: api_key)
-        @model = model
+        # Official OpenAI SDK (openai ~> 0.21.x)
+        @client = OpenAI::Client.new(api_key: api_key)
+        @model = model.to_sym # e.g. :"gpt-4o-mini"
         @temperature = temperature
       end
 
       def step(system_prompt:, messages:, tools:, tool_results: [])
-        resp = @client.chat(parameters: {
-          model: @model,
+        response = @client.chat.completions.create(
+          model: @model, # e.g. :"gpt-4o-mini" or "gpt-4o-mini"
           temperature: @temperature,
-          tools: tools,
-          tool_choice: "auto",
-          messages: [{ role: "system", content: system_prompt }] +
-                    messages +
-                    tool_results
-        })
-        msg = resp.dig("choices", 0, "message")
-        { tool_calls: msg["tool_calls"], content: msg["content"] }
+          messages: [{ role: "system", content: system_prompt }] + messages + tool_results,
+          tools: tools, # [{ type: "function", function: {...} }, ...]
+          tool_choice: "auto"
+        )
+
+        # In openai ~> 0.21, response is an object:
+        #   OpenAI::Models::Chat::ChatCompletion
+        choice = response.choices.first
+        msg = choice.message # OpenAI::Models::Chat::ChatCompletionMessage
+
+        # Accessors vary by presence; guard with respond_to?
+        tool_calls = msg.respond_to?(:tool_calls) ? msg.tool_calls : nil
+        function_call = msg.respond_to?(:function_call) ? msg.function_call : nil
+        content = msg.respond_to?(:content) ? msg.content : nil
+
+        {
+          tool_calls: tool_calls,       # Array or nil
+          function_call: function_call, # Hash-like or nil
+          content: content              # String or nil
+        }
       end
 
+      # Feed tool results back using role "tool" (tool calls) OR "function" (legacy).
+      # We'll always emit the modern "tool" message; orchestrator can adapt.
       def tool_result_message(tool_call_id:, name:, content:)
-        { role: "tool", tool_call_id:, name:, content: content.to_json }
+        { role: "tool", tool_call_id: tool_call_id, name: name, content: content.to_json }
       end
     end
   end
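The adapter change above replaces hash access (`msg["tool_calls"]`) with `respond_to?` guards so the SDK's typed response objects can be normalized into a plain hash. The pattern in isolation, with a `Struct` standing in for the SDK message so no `openai` gem is required:

```ruby
# Stand-in for a typed SDK message object: it exposes #content but,
# in this fake, has no #tool_calls accessor at all.
FakeMessage = Struct.new(:content)

# Normalize any message-like object into a plain hash, defaulting to nil
# for accessors the object doesn't have.
def normalize(msg)
  {
    tool_calls: msg.respond_to?(:tool_calls) ? msg.tool_calls : nil,
    content:    msg.respond_to?(:content)    ? msg.content    : nil
  }
end

normalize(FakeMessage.new("Which category?"))
# => { tool_calls: nil, content: "Which category?" }
```

Downstream code then branches on plain hash keys and never needs to know which SDK version produced the message.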
@@ -17,6 +17,7 @@ module LLM
       # messages: [{role:"user", content:"..."}]
       def step(thread_id:, tenant_id:, actor_id:, messages:)
         prior_tool_msgs = @store.fetch_tool_messages(thread_id)
+
         res = @adapter.step(
           system_prompt: POLICY,
           messages: messages,
@@ -24,28 +25,60 @@ module LLM
           tool_results: prior_tool_msgs
         )
 
+        # --- New-style tools (preferred) ---
         if (calls = res[:tool_calls]).is_a?(Array) && calls.any?
           call = calls.first
-          name, version = call.dig("function", "name").split(/_v/i)
-          args = JSON.parse(call.dig("function", "arguments") || "{}")
+          # In openai 0.21.x these are typed objects:
+          #   call          => OpenAI::Models::Chat::ChatCompletionMessageFunctionToolCall
+          #   call.function => OpenAI::Models::Chat::ChatCompletionMessageFunctionCall
+
+          fn = call.respond_to?(:function) ? call.function : nil
+          name = fn&.respond_to?(:name) ? fn.name : nil # e.g., "create_toy_v1"
+          args_json = fn&.respond_to?(:arguments) ? fn.arguments.to_s : "{}"
+          args = args_json.empty? ? {} : JSON.parse(args_json)
 
-          tool = @registry.tool(name, version: "v1")
+          tool_name, version = (name || "").split(/_v/i)
+          version ||= "v1"
+
+          tool = @registry.tool(tool_name, version: "v1")
           Validators.validate!(tool.schema, args)
 
-          ctx = { tenant_id:, actor_id:, thread_id: }
+          ctx = { tenant_id: tenant_id, actor_id: actor_id, thread_id: thread_id }
           result = tool.handler.call(args, ctx)
 
           tool_msg = @adapter.tool_result_message(
-            tool_call_id: call["id"],
-            name: "#{name}_v1",
+            tool_call_id: call.respond_to?(:id) ? call.id : nil,
+            name: "#{tool_name}_v1",
             content: result
           )
           @store.push_tool_message(thread_id, tool_msg)
 
-          { type: :tool_ran, tool_name: name, result: result }
-        else
-          { type: :assistant, text: res[:content].to_s }
+          return { type: :tool_ran, tool_name: tool_name, result: result }
         end
+
+        # --- Legacy single function_call (fallback) ---
+        if (fc = res[:function_call])
+          name_with_version = fc["name"]
+          args_json = fc["arguments"].to_s
+          args = args_json.empty? ? {} : JSON.parse(args_json)
+
+          tool_name, version = name_with_version.split(/_v/i)
+          version ||= "v1"
+
+          tool = @registry.tool(tool_name, version: "v1")
+          Validators.validate!(tool.schema, args)
+
+          ctx = { tenant_id: tenant_id, actor_id: actor_id, thread_id: thread_id }
+          result = tool.handler.call(args, ctx)
+
+          # Legacy role is "function"; we already store tool messages in memory,
+          # not required for a single-step demo, but safe to omit or adapt if needed.
+
+          return { type: :tool_ran, tool_name: tool_name, result: result }
+        end
+
+        # No tool call -> just assistant text (likely a clarifying question)
+        { type: :assistant, text: res[:content].to_s }
      end
    end
  end
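One caveat worth flagging in the orchestrator's name parsing: `split(/_v/i)` splits at every `_v`, so a tool whose name itself contains that pair (say a hypothetical `lookup_value_v2`, not a name in this gem) would be mangled. An anchored regex only strips the trailing version suffix. A sketch of the safer parse:

```ruby
# Parse "name_vN" tool identifiers. split(/_v/i) breaks on names that
# contain "_v" internally; an anchored match strips only the final suffix.
def parse_tool_name(full_name)
  if (m = full_name.to_s.match(/\A(.+)_v(\d+)\z/i))
    [m[1], "v#{m[2]}"]
  else
    [full_name.to_s, "v1"] # no suffix: default to v1, as the orchestrator does
  end
end

parse_tool_name("create_toy_v1")   # => ["create_toy", "v1"]
parse_tool_name("lookup_value_v2") # => ["lookup_value", "v2"]
"lookup_value_v2".split(/_v/i)     # => ["lookup", "alue", "2"], the failure mode
```

Note also that both branches above pass the literal `"v1"` to `@registry.tool`, so the `version` they parse is currently unused; routing the parsed value through would make multi-version tools work.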
data/lib/llm/fillin/version.rb CHANGED
@@ -1,6 +1,6 @@
-# frozen_string_literal: true
+# lib/llm/fillin/version.rb
 module LLM
   module Fillin
-    VERSION = "0.1.0"
+    VERSION = "0.1.1"
   end
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: llm-fillin
 version: !ruby/object:Gem::Version
-  version: 0.1.0
+  version: 0.1.1
 platform: ruby
 authors:
 - Phia Vang