anima-core 0.2.0 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: fcb9e1d40357cd0eabdc5fffa01f8727b449a5b85e6f7b7dbe9033fae461bec9
-  data.tar.gz: d785a36f13e3a3e698b80dd123544ca6dbee535372b92776a5b7993e4125baa1
+  metadata.gz: 9517757f2c4fdb8b19d204a154d3badd3c3b8fb456dffaf03236b2d7c065378d
+  data.tar.gz: a0d26e0b27fd1d2df0c46c3a4bbad822ac3517d1d02b41a6bdd49195de64e71f
 SHA512:
-  metadata.gz: d84d91dc67fe56617f294b9a6982e830ac36c242fdb203bfa601c2e6fb009dd8a3d9d5243c4bfa501d7069e40bb08d15d920717374a79825ff1af7b4812713de
-  data.tar.gz: d61f7ed56737c02f4412bcdee5d15e9b4b8e3b63f45011aa9e8ae2c1a80725413841ad1ed95d7f02ab6a8e532e0a36f0fd9abc39b50adb5fed0dbb21ec599604
+  metadata.gz: d150e5f97a3a2055c66cfaa773c7bd1dd33354b869fe38d6fa25b2b47352c76cec0c222f44aa4eb7e7e65b61098b7531e61522a01e6dd1d722ee9224c7759aea
+  data.tar.gz: 0f9ef56323a4598ed8dc3dca79e6ec9cf1e63ea9dff8751612cd32691aa0968e629144b4f7c47150cd9c9861f3d74e323c852433e1aa066732bb10fc8db235fc
data/CHANGELOG.md CHANGED
@@ -1,5 +1,24 @@
 ## [Unreleased]
 
+## [0.2.1] - 2026-03-13
+
+### Added
+- TUI view mode switching via `Ctrl+a → v` — cycle between Basic, Verbose, and Debug (#75)
+- Draper EventDecorator hierarchy — structured data decorators for all event types (#74)
+- Decorators return structured hashes (not strings) for transport-layer filtering (#86)
+- Basic mode tool call counter — inline `🔧 Tools: X/Y ✓` aggregation (#73)
+- Verbose view mode rendering — timestamps, tool call previews, system messages (#76)
+- Tool call previews: bash `$ command`, web_get `GET url`, generic JSON fallback
+- Tool response display: truncated to 3 lines, `↩` success / `❌` failure indicators
+- Debug view mode — token counts per message, full tool args/responses, tool use IDs (#77)
+- Estimated token indicator (`~` prefix) for events not yet counted by background job
+- View mode persisted on Session model — survives TUI disconnect/reconnect
+- Mode changes broadcast to all connected clients with re-decorated viewport
+
+### Fixed
+- Newlines in LLM responses collapsed into single line in rendered view modes
+- Loading state stuck after view mode switch — input blocked with "Thinking..."
+
 ## [0.2.0] - 2026-03-10
 
 ### Added
data/README.md CHANGED
@@ -194,7 +194,19 @@ Events fire, subscribers react, state updates, the cortex (LLM) reads the result
 
 There is no linear chat history. There are only events attached to a session. The context window is a **viewport** — a sliding window over the event stream, assembled on demand for each LLM call within a configured token budget.
 
-Currently uses a simple sliding window (newest events first, walk backwards until budget exhausted). Future versions will add multi-resolution compression with Draper decorators and associative recall from Mneme.
+Currently uses a simple sliding window (newest events first, walk backwards until budget exhausted). Future versions will add associative recall from Mneme.
+
+### TUI View Modes
+
+Three switchable view modes let you control how much detail the TUI shows. Cycle with `Ctrl+a → v`:
+
+| Mode | What you see |
+|------|-------------|
+| **Basic** (default) | User + assistant messages. Tool calls are hidden but summarized as an inline counter: `🔧 Tools: 2/2 ✓` |
+| **Verbose** | Everything in Basic, plus timestamps `[HH:MM:SS]`, tool call previews (`🔧 bash` / `$ command` / `↩ response`), and system messages |
+| **Debug** | Full X-ray view — timestamps, token counts per message (`[14 tok]`), full tool call args, full tool responses, tool use IDs |
+
+View modes are implemented via Draper decorators that operate at the transport layer. Each event type has a dedicated decorator (`UserMessageDecorator`, `ToolCallDecorator`, etc.) that returns structured data — the TUI renders it. Mode is stored on the `Session` model server-side, so it persists across reconnections.
 
 ### Brain as Microservices on a Shared Event Bus
 
@@ -331,7 +343,7 @@ This single example demonstrates every core principle:
 
 ## Status
 
-**Core agent complete.** The conversational agent works end-to-end: event-driven architecture, LLM integration with tool calling (bash, web), sliding viewport context assembly, persistent sessions, and client-server architecture with WebSocket transport and graceful reconnection.
+**Core agent complete.** The conversational agent works end-to-end: event-driven architecture, LLM integration with tool calling (bash, web), sliding viewport context assembly, persistent sessions, client-server architecture with WebSocket transport, graceful reconnection, and three TUI view modes (Basic/Verbose/Debug) via Draper decorators.
 
 The hormonal system (Thymos, feelings, desires), semantic memory (Mneme), and soul matrix (Psyche) are designed but not yet implemented — they're the next layer on top of the working agent.
 
data/anima-core.gemspec CHANGED
@@ -28,6 +28,7 @@ Gem::Specification.new do |spec|
   spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
   spec.require_paths = ["lib"]
 
+  spec.add_dependency "draper", "~> 4.0"
   spec.add_dependency "foreman", "~> 0.88"
   spec.add_dependency "httparty", "~> 0.24"
   spec.add_dependency "puma", "~> 6.0"
@@ -15,13 +15,14 @@ class SessionChannel < ApplicationCable::Channel
 
   # Subscribes the client to the session-specific stream.
   # Rejects the subscription if no valid session_id is provided.
-  # Transmits chat history to the subscribing client after confirmation.
+  # Transmits the current view_mode and chat history to the subscribing client.
   #
   # @param params [Hash] must include :session_id (positive integer)
   def subscribed
    @current_session_id = params[:session_id].to_i
     if @current_session_id > 0
       stream_from stream_name
+      transmit_view_mode
       transmit_history
     else
       reject
@@ -85,6 +86,23 @@ class SessionChannel < ApplicationCable::Channel
     transmit_error("Session not found")
   end
 
+  # Changes the session's view mode and re-broadcasts the viewport.
+  # All clients on the session receive the mode change and fresh history.
+  #
+  # @param data [Hash] must include "view_mode" (one of Session::VIEW_MODES)
+  def change_view_mode(data)
+    mode = data["view_mode"].to_s
+    return transmit_error("Invalid view mode") unless Session::VIEW_MODES.include?(mode)
+
+    session = Session.find(@current_session_id)
+    session.update!(view_mode: mode)
+
+    ActionCable.server.broadcast(stream_name, {"action" => "view_mode_changed", "view_mode" => mode})
+    broadcast_viewport(session)
+  rescue ActiveRecord::RecordNotFound
+    transmit_error("Session not found")
+  end
+
   private
 
   def stream_name
@@ -102,24 +120,98 @@
     transmit({
       "action" => "session_changed",
       "session_id" => new_id,
-      "message_count" => session.events.llm_messages.count
+      "message_count" => session.events.llm_messages.count,
+      "view_mode" => session.view_mode
     })
     transmit_history
   end
 
-  # Sends displayable events from the LLM's viewport to the subscribing
-  # client. The TUI shows exactly what the agent can see — no more, no less.
+  # Transmits the current view_mode so the TUI initializes correctly.
+  # Sends `{action: "view_mode", view_mode: <mode>}` to the subscribing client.
+  # @return [void]
+  def transmit_view_mode
+    session = Session.find_by(id: @current_session_id)
+    return unless session
+
+    transmit({"action" => "view_mode", "view_mode" => session.view_mode})
+  end
+
+  # Sends decorated context events (messages + tool interactions) from
+  # the LLM's viewport to the subscribing client. Each event is wrapped
+  # in an {EventDecorator} and the pre-rendered output is included in
+  # the transmitted payload. Tool events are included so the TUI can
+  # reconstruct tool call counters on reconnect.
+  # In debug mode, prepends the assembled system prompt as a special block.
   def transmit_history
     session = Session.find_by(id: @current_session_id)
     return unless session
 
+    transmit_system_prompt(session) if session.view_mode == "debug"
+
     session.viewport_events.each do |event|
-      next unless event.llm_message?
+      transmit(decorate_event_payload(event, session.view_mode))
+    end
+  end
 
-      transmit(event.payload)
+  # Broadcasts the re-decorated viewport to all clients on the session stream.
+  # Used after a view mode change to refresh all connected clients.
+  # In debug mode, prepends the assembled system prompt as a special block.
+  # @param session [Session] the session whose viewport to broadcast
+  # @return [void]
+  def broadcast_viewport(session)
+    broadcast_system_prompt(session) if session.view_mode == "debug"
+
+    session.viewport_events.each do |event|
+      ActionCable.server.broadcast(stream_name, decorate_event_payload(event, session.view_mode))
     end
   end
 
+  def decorate_event_payload(event, mode = "basic")
+    payload = event.payload
+    decorator = EventDecorator.for(event)
+    return payload unless decorator
+
+    payload.merge("rendered" => {mode => decorator.render(mode)})
+  end
+
+  # Transmits the assembled system prompt to the subscribing client.
+  # Skipped when the session has no system prompt configured.
+  # @param session [Session]
+  # @return [void]
+  def transmit_system_prompt(session)
+    payload = system_prompt_payload(session)
+    return unless payload
+
+    transmit(payload)
+  end
+
+  # Broadcasts the assembled system prompt to all clients on the stream.
+  # Skipped when the session has no system prompt configured.
+  # @param session [Session]
+  # @return [void]
+  def broadcast_system_prompt(session)
+    payload = system_prompt_payload(session)
+    return unless payload
+
+    ActionCable.server.broadcast(stream_name, payload)
+  end
+
+  # Builds the system prompt payload for debug mode transmission.
+  # @param session [Session]
+  # @return [Hash, nil] the system prompt payload, or nil if no prompt
+  def system_prompt_payload(session)
+    prompt = session.system_prompt
+    return unless prompt
+
+    tokens = [(prompt.bytesize / Event::BYTES_PER_TOKEN.to_f).ceil, 1].max
+    {
+      "type" => "system_prompt",
+      "rendered" => {
+        "debug" => {role: :system_prompt, content: prompt, tokens: tokens, estimated: true}
+      }
+    }
+  end
+
   def transmit_error(message)
     transmit({"action" => "error", "message" => message})
   end
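The `decorate_event_payload` merge above can be exercised standalone. This is a minimal sketch (no ActionCable, no Draper) of the wire shape it produces: the original event payload plus a `"rendered"` key mapping the active view mode to the decorator's structured output. The method name `decorate_payload` and the literal values are illustrative.

```ruby
# Sketch of the payload shape decorate_event_payload builds: the raw event
# payload merged with a "rendered" hash keyed by view mode. A nil decorator
# result leaves the payload untouched, mirroring the early return above.
def decorate_payload(payload, mode, rendered)
  return payload unless rendered

  payload.merge("rendered" => {mode => rendered})
end

payload = {"type" => "user_message", "content" => "hello"}
decorate_payload(payload, "basic", {role: :user, content: "hello"})
# => {"type"=>"user_message", "content"=>"hello",
#     "rendered"=>{"basic"=>{role: :user, content: "hello"}}}
```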
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+# Decorates agent_message events for display in the TUI.
+# Basic mode returns role and content. Verbose mode adds a timestamp.
+# Debug mode adds token count (exact when counted, estimated when not).
+class AgentMessageDecorator < EventDecorator
+  # @return [Hash] structured agent message data
+  #   `{role: :assistant, content: String}`
+  def render_basic
+    {role: :assistant, content: content}
+  end
+
+  # @return [Hash] structured agent message with nanosecond timestamp
+  #   `{role: :assistant, content: String, timestamp: Integer|nil}`
+  def render_verbose
+    {role: :assistant, content: content, timestamp: timestamp}
+  end
+
+  # @return [Hash] verbose output plus token count for debugging
+  #   `{role: :assistant, content: String, timestamp: Integer|nil, tokens: Integer, estimated: Boolean}`
+  def render_debug
+    render_verbose.merge(token_info)
+  end
+end
@@ -0,0 +1,6 @@
+# frozen_string_literal: true
+
+# Base decorator for the application. All Draper decorators inherit from
+# this class to share common configuration and helpers.
+class ApplicationDecorator < Draper::Decorator
+end
@@ -0,0 +1,173 @@
+# frozen_string_literal: true
+
+# Base decorator for {Event} records, providing multi-resolution rendering
+# for the TUI. Each event type has a dedicated subclass that implements
+# rendering methods for each view mode (basic, verbose, debug).
+#
+# Decorators return structured hashes (not pre-formatted strings) so that
+# the TUI can style and lay out content based on semantic role, without
+# fragile regex parsing. The TUI receives structured data via ActionCable
+# and formats it for display.
+#
+# Subclasses must override {#render_basic}. Verbose and debug modes
+# delegate to basic until subclasses provide their own implementations.
+#
+# @example Decorate an Event AR model
+#   decorator = EventDecorator.for(event)
+#   decorator.render_basic #=> {role: :user, content: "hello"} or nil
+#
+# @example Render for a specific view mode
+#   decorator = EventDecorator.for(event)
+#   decorator.render("verbose") #=> {role: :user, content: "hello", timestamp: 1709312325000000000}
+#
+# @example Decorate a raw payload hash (from EventBus)
+#   decorator = EventDecorator.for(type: "user_message", content: "hello")
+#   decorator.render_basic #=> {role: :user, content: "hello"}
+class EventDecorator < ApplicationDecorator
+  delegate_all
+
+  TOOL_ICON = "\u{1F527}"
+  RETURN_ARROW = "\u21A9"
+  ERROR_ICON = "\u274C"
+
+  DECORATOR_MAP = {
+    "user_message" => "UserMessageDecorator",
+    "agent_message" => "AgentMessageDecorator",
+    "tool_call" => "ToolCallDecorator",
+    "tool_response" => "ToolResponseDecorator",
+    "system_message" => "SystemMessageDecorator"
+  }.freeze
+  private_constant :DECORATOR_MAP
+
+  # Normalizes hash payloads into an Event-like interface so decorators
+  # can use {#payload}, {#event_type}, etc. uniformly on both AR models
+  # and raw EventBus hashes.
+  #
+  # @!attribute event_type [r] the event's type (e.g. "user_message")
+  # @!attribute payload [r] string-keyed hash of event data
+  # @!attribute timestamp [r] nanosecond-precision timestamp
+  # @!attribute token_count [r] cumulative token count
+  EventPayload = Struct.new(:event_type, :payload, :timestamp, :token_count, keyword_init: true) do
+    # Heuristic token estimate matching {Event#estimate_tokens} so decorators
+    # can call it uniformly on both AR models and hash payloads.
+    # @return [Integer] at least 1
+    def estimate_tokens
+      text = if event_type.to_s.in?(%w[tool_call tool_response])
+        payload.to_json
+      else
+        payload&.dig("content").to_s
+      end
+      [(text.bytesize / Event::BYTES_PER_TOKEN.to_f).ceil, 1].max
+    end
+  end
+
+  # Factory returning the appropriate subclass decorator for the given event.
+  # Hashes are normalized via {EventPayload} to provide a uniform interface.
+  #
+  # @param event [Event, Hash] an Event AR model or a raw payload hash
+  # @return [EventDecorator, nil] decorated event, or nil for unknown types
+  def self.for(event)
+    source = wrap_source(event)
+    klass_name = DECORATOR_MAP[source.event_type]
+    return nil unless klass_name
+
+    klass_name.constantize.new(source)
+  end
+
+  RENDER_DISPATCH = {
+    "basic" => :render_basic,
+    "verbose" => :render_verbose,
+    "debug" => :render_debug
+  }.freeze
+  private_constant :RENDER_DISPATCH
+
+  # Dispatches to the render method for the given view mode.
+  #
+  # @param mode [String] one of "basic", "verbose", "debug"
+  # @return [Hash, nil] structured event data, or nil to hide the event
+  # @raise [ArgumentError] if the mode is not a valid view mode
+  def render(mode)
+    method = RENDER_DISPATCH[mode]
+    raise ArgumentError, "Invalid view mode: #{mode.inspect}" unless method
+
+    public_send(method)
+  end
+
+  # @abstract Subclasses must implement to render the event for basic view mode.
+  # @return [Hash, nil] structured event data, or nil to hide the event
+  def render_basic
+    raise NotImplementedError, "#{self.class} must implement #render_basic"
+  end
+
+  # Verbose view mode with timestamps and tool details.
+  # Delegates to {#render_basic} until subclasses provide their own implementations.
+  # @return [Hash, nil] structured event data, or nil to hide the event
+  def render_verbose
+    render_basic
+  end
+
+  # Debug view mode with token counts and system prompts.
+  # Delegates to {#render_basic} until subclasses provide their own implementations.
+  # @return [Hash, nil] structured event data, or nil to hide the event
+  def render_debug
+    render_basic
+  end
+
+  private
+
+  # Token count for display: exact count from {CountEventTokensJob} when
+  # available, heuristic estimate otherwise. Estimated counts are flagged
+  # so the TUI can prefix them with a tilde.
+  #
+  # @return [Hash] `{tokens: Integer, estimated: Boolean}`
+  def token_info
+    count = token_count.to_i
+    if count > 0
+      {tokens: count, estimated: false}
+    else
+      {tokens: estimate_token_count, estimated: true}
+    end
+  end
+
+  # Delegates to the underlying object's heuristic token estimator.
+  # Both {Event} AR models and {EventPayload} structs implement this.
+  #
+  # @return [Integer] at least 1
+  def estimate_token_count
+    object.estimate_tokens
+  end
+
+  # Extracts display content from the event payload.
+  # @return [String, nil]
+  def content
+    payload["content"]
+  end
+
+  # Truncates multi-line text, appending "..." when lines exceed the limit.
+  # @param text [String, nil] text to truncate (nil is coerced to empty string)
+  # @param max_lines [Integer] maximum number of lines to keep
+  # @return [String] truncated text
+  def truncate_lines(text, max_lines:)
+    str = text.to_s
+    lines = str.split("\n")
+    return str unless lines.size > max_lines
+
+    lines.first(max_lines).push("...").join("\n")
+  end
+
+  # Normalizes input to something Draper can wrap.
+  # Event AR models pass through; hashes become EventPayload structs
+  # with string-normalized keys.
+  def self.wrap_source(event)
+    return event unless event.is_a?(Hash)
+
+    normalized = event.transform_keys(&:to_s)
+    EventPayload.new(
+      event_type: normalized["type"].to_s,
+      payload: normalized,
+      timestamp: normalized["timestamp"],
+      token_count: normalized["token_count"]&.to_i || 0
+    )
+  end
+  private_class_method :wrap_source
+end
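The mode-dispatch pattern in `EventDecorator` can be sketched without Draper or Rails. This standalone sketch mirrors the frozen mode-to-method map and the verbose/debug fallback to basic; the `Sketch*` class names are illustrative, not part of the gem.

```ruby
# Draper-free sketch of EventDecorator's dispatch: a frozen map from mode
# string to render method, with verbose and debug delegating to basic
# until a subclass overrides them.
class SketchDecorator
  DISPATCH = {
    "basic" => :render_basic,
    "verbose" => :render_verbose,
    "debug" => :render_debug
  }.freeze

  def initialize(payload)
    @payload = payload
  end
  attr_reader :payload

  def render(mode)
    method = DISPATCH[mode]
    raise ArgumentError, "Invalid view mode: #{mode.inspect}" unless method

    public_send(method)
  end

  def render_basic
    raise NotImplementedError
  end

  def render_verbose
    render_basic
  end

  def render_debug
    render_basic
  end
end

class SketchUserDecorator < SketchDecorator
  def render_basic
    {role: :user, content: payload["content"]}
  end

  def render_verbose
    render_basic.merge(timestamp: payload["timestamp"])
  end
end
```

Because `SketchUserDecorator` overrides only `render_basic` and `render_verbose`, a `render("debug")` call falls back through the base class to `render_basic`, which is exactly how the real subclasses pick up new modes for free.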
@@ -0,0 +1,21 @@
+# frozen_string_literal: true
+
+# Decorates system_message events for display in the TUI.
+# Hidden in basic mode. Verbose and debug modes return timestamped system info.
+class SystemMessageDecorator < EventDecorator
+  # @return [nil] system messages are hidden in basic mode
+  def render_basic
+    nil
+  end
+
+  # @return [Hash] structured system message data
+  #   `{role: :system, content: String, timestamp: Integer|nil}`
+  def render_verbose
+    {role: :system, content: content, timestamp: timestamp}
+  end
+
+  # @return [Hash] same as verbose — system messages have no additional debug data
+  def render_debug
+    render_verbose
+  end
+end
@@ -0,0 +1,48 @@
+# frozen_string_literal: true
+
+# Decorates tool_call events for display in the TUI.
+# Hidden in basic mode — tool activity is represented by the
+# aggregated tool counter instead. Verbose mode returns tool name
+# and a formatted preview of the input arguments. Debug mode shows
+# full untruncated input as pretty-printed JSON with tool_use_id.
+class ToolCallDecorator < EventDecorator
+  # @return [nil] tool calls are hidden in basic mode
+  def render_basic
+    nil
+  end
+
+  # @return [Hash] structured tool call data
+  #   `{role: :tool_call, tool: String, input: String, timestamp: Integer|nil}`
+  def render_verbose
+    {role: :tool_call, tool: payload["tool_name"], input: format_input, timestamp: timestamp}
+  end
+
+  # @return [Hash] full tool call data with untruncated input and tool_use_id
+  #   `{role: :tool_call, tool: String, input: String, tool_use_id: String|nil, timestamp: Integer|nil}`
+  def render_debug
+    {
+      role: :tool_call,
+      tool: payload["tool_name"],
+      input: JSON.pretty_generate(payload["tool_input"] || {}),
+      tool_use_id: payload["tool_use_id"],
+      timestamp: timestamp
+    }
+  end
+
+  private
+
+  # Formats tool input for display, with tool-specific formatting for
+  # known tools and generic JSON fallback for others.
+  # @return [String] formatted input preview
+  def format_input
+    input = payload["tool_input"]
+    case payload["tool_name"]
+    when "bash"
+      "$ #{input&.dig("command")}"
+    when "web_get"
+      "GET #{input&.dig("url")}"
+    else
+      truncate_lines(input.to_json, max_lines: 2)
+    end
+  end
+end
@@ -0,0 +1,37 @@
+# frozen_string_literal: true
+
+# Decorates tool_response events for display in the TUI.
+# Hidden in basic mode — tool activity is represented by the
+# aggregated tool counter instead. Verbose mode returns truncated
+# output with a success/failure indicator. Debug mode shows full
+# untruncated output with tool_use_id and estimated token count.
+class ToolResponseDecorator < EventDecorator
+  # @return [nil] tool responses are hidden in basic mode
+  def render_basic
+    nil
+  end
+
+  # @return [Hash] structured tool response data
+  #   `{role: :tool_response, content: String, success: Boolean, timestamp: Integer|nil}`
+  def render_verbose
+    {
+      role: :tool_response,
+      content: truncate_lines(content, max_lines: 3),
+      success: payload["success"] != false,
+      timestamp: timestamp
+    }
+  end
+
+  # @return [Hash] full tool response data with untruncated content, tool_use_id, and token estimate
+  #   `{role: :tool_response, content: String, success: Boolean, tool_use_id: String|nil,
+  #   timestamp: Integer|nil, tokens: Integer, estimated: Boolean}`
+  def render_debug
+    {
+      role: :tool_response,
+      content: content,
+      success: payload["success"] != false,
+      tool_use_id: payload["tool_use_id"],
+      timestamp: timestamp
+    }.merge(token_info)
+  end
+end
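Both tool decorators lean on the `truncate_lines` helper defined in the `EventDecorator` base class. It is small enough to try on its own; this is the same logic copied out of the decorator for quick experimentation:

```ruby
# Standalone copy of EventDecorator#truncate_lines: keep at most max_lines
# lines and append "..." when anything was dropped. nil coerces to "".
def truncate_lines(text, max_lines:)
  str = text.to_s
  lines = str.split("\n")
  return str unless lines.size > max_lines

  lines.first(max_lines).push("...").join("\n")
end

truncate_lines("a\nb\nc\nd", max_lines: 3) # => "a\nb\nc\n..."
```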
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+# Decorates user_message events for display in the TUI.
+# Basic mode returns role and content. Verbose mode adds a timestamp.
+# Debug mode adds token count (exact when counted, estimated when not).
+class UserMessageDecorator < EventDecorator
+  # @return [Hash] structured user message data
+  #   `{role: :user, content: String}`
+  def render_basic
+    {role: :user, content: content}
+  end
+
+  # @return [Hash] structured user message with nanosecond timestamp
+  #   `{role: :user, content: String, timestamp: Integer|nil}`
+  def render_verbose
+    {role: :user, content: content, timestamp: timestamp}
+  end
+
+  # @return [Hash] verbose output plus token count for debugging
+  #   `{role: :user, content: String, timestamp: Integer|nil, tokens: Integer, estimated: Boolean}`
+  def render_debug
+    render_verbose.merge(token_info)
+  end
+end
data/app/models/event.rb CHANGED
@@ -22,6 +22,9 @@ class Event < ApplicationRecord
 
   ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze
 
+  # Heuristic: average bytes per token for English prose.
+  BYTES_PER_TOKEN = 4
+
   belongs_to :session
 
   validates :event_type, presence: true, inclusion: {in: TYPES}
@@ -56,6 +59,20 @@ class Event < ApplicationRecord
     event_type.in?(CONTEXT_TYPES)
   end
 
+  # Heuristic token estimate: ~4 bytes per token for English prose.
+  # Tool events are estimated from the full payload JSON since tool_input
+  # and tool metadata contribute to token count. Messages use content only.
+  #
+  # @return [Integer] estimated token count (at least 1)
+  def estimate_tokens
+    text = if event_type.in?(%w[tool_call tool_response])
+      payload.to_json
+    else
+      payload["content"].to_s
+    end
+    [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
+  end
+
   private
 
   def schedule_token_count
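The estimation heuristic moved onto `Event` is easy to check in isolation. This sketch extracts it as a plain function (no ActiveRecord), with the event type and payload passed in explicitly; the function name is illustrative:

```ruby
require "json"

# Standalone sketch of Event#estimate_tokens: ~4 bytes per token, tool
# events measured over their full JSON payload, messages over content
# only, with a floor of 1 token.
BYTES_PER_TOKEN = 4

def estimate_tokens(event_type, payload)
  text = if %w[tool_call tool_response].include?(event_type)
    payload.to_json
  else
    payload["content"].to_s
  end
  [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
end

estimate_tokens("user_message", "content" => "hello world") # 11 bytes => 3 tokens
```

The floor of 1 matters: an event with empty content would otherwise count as 0 tokens and never consume budget in the viewport walk.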
@@ -7,13 +7,22 @@ class Session < ApplicationRecord
   # Claude Sonnet 4 context window minus system prompt reserve.
   DEFAULT_TOKEN_BUDGET = 190_000
 
-  # Heuristic: average bytes per token for English prose.
-  BYTES_PER_TOKEN = 4
+  VIEW_MODES = %w[basic verbose debug].freeze
 
   has_many :events, -> { order(:id) }, dependent: :destroy
 
+  validates :view_mode, inclusion: {in: VIEW_MODES}
+
   scope :recent, ->(limit = 10) { order(updated_at: :desc).limit(limit) }
 
+  # Cycles to the next view mode: basic → verbose → debug → basic.
+  #
+  # @return [String] the next view mode in the cycle
+  def next_view_mode
+    current_index = VIEW_MODES.index(view_mode) || 0
+    VIEW_MODES[(current_index + 1) % VIEW_MODES.size]
+  end
+
   # Returns the events currently visible in the LLM context window.
   # Walks events newest-first and includes them until the token budget
   # is exhausted. Events are full-size or excluded entirely.
@@ -35,6 +44,15 @@ class Session < ApplicationRecord
     selected.reverse
   end
 
+  # Returns the assembled system prompt for this session.
+  # The system prompt includes system instructions, goals, and memories.
+  # Currently a placeholder — these subsystems are not yet implemented.
+  #
+  # @return [String, nil] the system prompt text, or nil if not configured
+  def system_prompt
+    nil
+  end
+
   # Builds the message array expected by the Anthropic Messages API.
   # Includes user/agent messages and tool call/response events in
   # Anthropic's wire format. Consecutive tool_call events are grouped
@@ -97,18 +115,12 @@ class Session < ApplicationRecord
     }
   end
 
-  # Rough estimate for events not yet counted by the background job.
-  # For tool events, estimates from the full payload since tool_input
-  # and tool metadata contribute to token count.
+  # Delegates to {Event#estimate_tokens} for events not yet counted
+  # by the background job.
   #
   # @param event [Event]
   # @return [Integer] at least 1
   def estimate_tokens(event)
-    text = if event.event_type.in?(%w[tool_call tool_response])
-      event.payload.to_json
-    else
-      event.payload["content"].to_s
-    end
-    [(text.bytesize / BYTES_PER_TOKEN.to_f).ceil, 1].max
+    event.estimate_tokens
  end
 end
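`Session#next_view_mode` is a pure cycle over `VIEW_MODES`, which can be demonstrated without the model. This sketch passes the current mode as an argument instead of reading the `view_mode` attribute:

```ruby
# Standalone sketch of Session#next_view_mode's cycle:
# basic -> verbose -> debug -> basic. An unknown mode is treated as
# basic (index nil falls back to 0), matching the `|| 0` above.
VIEW_MODES = %w[basic verbose debug].freeze

def next_view_mode(current)
  current_index = VIEW_MODES.index(current) || 0
  VIEW_MODES[(current_index + 1) % VIEW_MODES.size]
end

next_view_mode("debug") # => "basic"
```

The modulo wrap is what lets `Ctrl+a → v` keep cycling indefinitely in the TUI.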
@@ -8,6 +8,7 @@ require "active_record/railtie"
 require "active_job/railtie"
 require "action_cable/engine"
 require "rails/test_unit/railtie"
+require "draper"
 require "solid_cable"
 require "solid_queue"