openclacky 0.9.29 → 0.9.31

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: de10c61ed7d2c530c9bdacae8ddc625cfb969d98316f51f45e1833e87ed9b0cb
- data.tar.gz: 42787ba0901cfc544fde90b46e1a2b4ffe07ce6636bcccbf364141dce3fe0515
+ metadata.gz: eb05bb9cc5c24901331584bde84b85b264536fa838001bcf113f41df5a96dbce
+ data.tar.gz: a8bde52bacb92f46a894582ad9b00afa2a325323b464194fbbd744a4cbbddc77
  SHA512:
- metadata.gz: 7017db404d793f1cb9483a4a28119f252c8e2e0fd1f183bbe8edf6cf1d1fb5748be2225658dbb666e753499a6694be7c1027e4a448898f22298c5c9bc6d1e2e6
- data.tar.gz: 406a93802ab41cb68a8c098db3b0777583d888e19f84efa871549832216729ef59899f00512351727a2ceaf0ec209bfc2105061e9e7f050fcfbcca0cf6e50a64
+ metadata.gz: 3684277987068171be446db5ab6efa3fe33304714d963cd757cc2102477911a13dc2b68f7dec2952d020f1bfd6427d8510469214dcdc8c1e8666995c93ddb098
+ data.tar.gz: 7a84b5fc71f3beb237ed07475bd7a72dac6162c9b519e732e1aba2cafe0af05e3d8f1a4d90e3ec857a084a158113720213a873ea73c997ebb553eea39e61964a
data/CHANGELOG.md CHANGED
@@ -7,6 +7,43 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [0.9.31] - 2026-04-18
+
+ ### Added
+ - GLM (智谱) model provider support — select GLM models directly from the provider settings
+ - Claude Opus 4.7 model option in the built-in provider list
+ - Skill Creator UI — create and edit skills from the Web interface with a visual editor
+ - Interactive feedback cards — `request_user_feedback` now renders as a styled interactive card in all UIs (Web, UI2, plain), instead of plain text
+ - Brand deactivation — white-label brand can now be toggled on/off from the settings page
+ - Empty skill placeholder — shows a friendly message when no skills are installed yet
+
+ ### Improved
+ - Shell tool large output handling — when a shell command waits for input or times out with large output, the output is now properly truncated and saved to temp files so the agent can still read the full content
+ - Chinese UI translations expanded with new thinkverbose labels
+
+ ### Fixed
+ - Bedrock streaming truncation recovery — when a tool call's arguments are truncated by the API, the broken assistant message is now retracted from history and the agent retries cleanly instead of crashing
+ - First session scroll position in the Web UI sidebar
+ - Idle status indicator in UI2
+ - Channels page spacing and skill creator label alignment in Web UI
+
+ ## [0.9.30] - 2026-04-16
+
+ ### Added
+ - **OpenClacky provider support**: new built-in provider preset for OpenClacky API (https://api.openclacky.com) with Claude Opus 4.6, Sonnet 4.6/4.5, and Haiku 4.5 models
+ - **Session chunk index system**: compressed conversation chunks now include a searchable index with topics and turn counts — the agent can selectively load only relevant historical context instead of re-reading all compressed messages, dramatically reducing token usage in long sessions
+ - **Provider availability indicator**: Web UI now shows a real-time status badge (Available/Unavailable) next to each provider in the settings modal, helping users quickly identify which services are reachable
+
+ ### Improved
+ - **Progress streaming UX**: API call progress messages (e.g., "Agent is thinking...", compression updates) are now streamed incrementally to the Web UI with better visual feedback and reduced latency
+ - **Brand name localization**: brand skill metadata now includes configurable Chinese names (`name_zh`) for better display in localized UIs
+ - **Idle timer reliability**: fixed a race condition where old idle timers from previous CLI sessions could continue running after restarting, causing premature auto-saves
+
+ ### Fixed
+ - **Prompt caching in subagents**: subagent tool calls (e.g., skills invoked via `invoke_skill`) now correctly inherit and propagate prompt caching behavior from the parent agent, reducing redundant API costs
+ - **WeChat Work Ruby 3.1 compatibility**: fixed `Queue.empty?` crash on Ruby < 3.2 in WeCom channel WebSocket client (method was added in Ruby 3.2.0)
+ - **WeChat markdown stripping**: incoming messages from WeChat (Weixin) now preserve original text content when stripping markdown decorators, fixing message corruption where text was accidentally removed
+
  ## [0.9.29] - 2026-04-15
 
  ### Added
@@ -81,10 +81,16 @@ module Clacky
  # infrastructure blips — do NOT trigger fallback. Just retry on the current
  # model (primary or already-active fallback) up to max_retries.
  if retries <= max_retries
- @ui&.show_warning("Network failed: #{e.message}. Retry #{retries}/#{max_retries}...")
+ @ui&.show_progress(
+ "Network failed: #{e.message}",
+ progress_type: "retrying",
+ phase: "active",
+ metadata: { attempt: retries, total: max_retries }
+ )
  sleep retry_delay
  retry
  else
+ @ui&.show_progress(progress_type: "retrying", phase: "done")
  @ui&.show_error("Network failed after #{max_retries} retries: #{e.message}")
  raise AgentError, "Network connection failed after #{max_retries} retries: #{e.message}"
  end
@@ -112,10 +118,16 @@ module Clacky
  retry
  end
  end
- @ui&.show_warning("#{e.message} (#{retries}/#{current_max})")
+ @ui&.show_progress(
+ e.message,
+ progress_type: "retrying",
+ phase: "active",
+ metadata: { attempt: retries, total: current_max }
+ )
  sleep retry_delay
  retry
  else
+ @ui&.show_progress(progress_type: "retrying", phase: "done")
  @ui&.show_error("LLM service unavailable after #{current_max} retries. Please try again later.")
  raise AgentError, "LLM service unavailable after #{current_max} retries"
  end
@@ -38,16 +38,23 @@ module Clacky
  YOUR ONLY TASK: Create a comprehensive summary of the conversation above.
 
  REQUIRED RESPONSE FORMAT:
- Your response MUST start with <analysis> or <summary> tags. No other format is acceptable.
+ First output a <topics> line listing 3-6 key topic phrases (comma-separated, concise).
+ Then output the full summary wrapped in <summary> tags.
 
- Follow the detailed compression prompt structure provided earlier. Focus on:
+ Example format:
+ <topics>Rails setup, database config, deploy pipeline, Tailwind CSS</topics>
+ <summary>
+ ...full summary text...
+ </summary>
+
+ Focus on:
  - User's explicit requests and intents
  - Key technical concepts and code changes
  - Files examined and modified
  - Errors encountered and fixes applied
  - Current work status and pending tasks
 
- Begin your summary NOW. Remember: PURE TEXT response only, starting with <analysis> or <summary> tags.
+ Begin your response NOW. Remember: PURE TEXT only, starting with <topics> then <summary>.
  PROMPT
 
  def initialize(client, model: nil)
  def initialize(client, model: nil)
@@ -109,22 +116,33 @@ module Clacky
  end
 
 
+ # Parse topics tag from compressed content.
+ # Returns the topics string if found, nil otherwise.
+ # e.g. "<topics>Rails setup, database config</topics>" → "Rails setup, database config"
+ def parse_topics(content)
+ m = content.match(/<topics>(.*?)<\/topics>/m)
+ m ? m[1].strip : nil
+ end
+
  def parse_compressed_result(result, chunk_path: nil)
  # Return the compressed result as a single assistant message
- # Keep the <analysis> or <summary> tags as they provide semantic context
+ # Keep the <summary> tags as they provide semantic context
  content = result.to_s.strip
 
  if content.empty?
  []
  else
+ # Strip out the <topics> block — it's metadata for the chunk file, not for AI context
+ content_without_topics = content.gsub(/<topics>.*?<\/topics>\n*/m, "").strip
+
  # Inject chunk anchor so AI knows where to find original conversation
  if chunk_path
  anchor = "\n\n---\n📁 **Original conversation archived at:** `#{chunk_path}`\n" \
  "_Use `file_reader` tool to recall details from this chunk._"
- content = content + anchor
+ content_without_topics = content_without_topics + anchor
  end
 
- [{ role: "assistant", content: content, compressed_summary: true, chunk_path: chunk_path }]
+ [{ role: "assistant", content: content_without_topics, compressed_summary: true, chunk_path: chunk_path }]
  end
  end
  end
@@ -17,9 +17,9 @@ module Clacky
  def trigger_idle_compression
  # Check if we should compress (force mode)
  compression_context = compress_messages_if_needed(force: true)
- @ui&.show_idle_status(phase: :start, message: "Idle detected. Compressing conversation to optimize costs...")
+ @ui&.show_progress("Idle detected. Compressing conversation to optimize costs...", progress_type: "idle_compress", phase: "active")
  if compression_context.nil?
- @ui&.show_idle_status(phase: :end, message: "Idle skipped.")
+ @ui&.show_progress("Idle skipped.", progress_type: "idle_compress", phase: "done")
  Clacky::Logger.info(
  "Idle compression skipped",
  enable_compression: @config.enable_compression,
@@ -137,7 +137,8 @@ module Clacky
  original_messages,
  compression_context[:recent_messages],
  chunk_index: chunk_index,
- compression_level: compression_context[:compression_level]
+ compression_level: compression_context[:compression_level],
+ topics: @message_compressor.parse_topics(compressed_content)
  )
 
  @history.replace_all(@message_compressor.rebuild_with_compression(
@@ -167,7 +168,7 @@ module Clacky
  # Show compression info (use estimated tokens from rebuilt history)
  compression_summary = "History compressed (~#{compression_context[:original_token_count]} -> ~#{@history.estimate_tokens} tokens, " \
  "level #{compression_context[:compression_level]})"
- @ui&.show_idle_status(phase: :end, message: compression_summary)
+ @ui&.show_progress(compression_summary, progress_type: "idle_compress", phase: "done")
  end
 
  # Get recent messages while preserving tool_calls/tool_results pairs.
@@ -303,16 +304,21 @@ module Clacky
  # @param recent_messages [Array<Hash>] Recent messages being kept (to exclude from chunk)
  # @param chunk_index [Integer] Sequential chunk number
  # @param compression_level [Integer] Compression level
+ # @param topics [String, nil] Short topic description for chunk index card
  # @return [String, nil] Path to saved chunk file, or nil if save failed
- def save_compressed_chunk(original_messages, recent_messages, chunk_index:, compression_level:)
+ def save_compressed_chunk(original_messages, recent_messages, chunk_index:, compression_level:, topics: nil)
  return nil unless @session_id && @created_at
 
  # Messages being compressed = original minus system message minus recent messages
  # Also exclude system-injected scaffolding (session context, memory prompts, etc.)
  # — these are internal CLI metadata and must not appear in chunk MD or WebUI history.
+ # Also exclude previous compressed_summary messages: they are index cards pointing
+ # to older chunk files and must NOT be embedded inside a new chunk, otherwise
+ # parse_chunk_md_to_rounds would follow the nested reference and create circular
+ # chunk chains (chunk-2 → chunk-1 → ... → chunk-2).
  recent_set = recent_messages.to_a
  messages_to_archive = original_messages.reject do |m|
- m[:role] == "system" || m[:system_injected] || recent_set.include?(m)
+ m[:role] == "system" || m[:system_injected] || m[:compressed_summary] || recent_set.include?(m)
  end
 
  return nil if messages_to_archive.empty?
@@ -324,7 +330,7 @@ module Clacky
  chunk_filename = "#{base_name}-chunk-#{chunk_index}.md"
  chunk_path = File.join(sessions_dir, chunk_filename)
 
- md_content = build_chunk_md(messages_to_archive, chunk_index: chunk_index, compression_level: compression_level)
+ md_content = build_chunk_md(messages_to_archive, chunk_index: chunk_index, compression_level: compression_level, topics: topics)
 
  File.write(chunk_path, md_content)
  FileUtils.chmod(0o600, chunk_path)
@@ -339,8 +345,9 @@ module Clacky
  # @param messages [Array<Hash>] Messages to render
  # @param chunk_index [Integer] Chunk number for metadata
  # @param compression_level [Integer] Compression level
+ # @param topics [String, nil] Short topic description extracted from LLM summary
  # @return [String] Markdown content
- def build_chunk_md(messages, chunk_index:, compression_level:)
+ def build_chunk_md(messages, chunk_index:, compression_level:, topics: nil)
  lines = []
 
  # Front matter
@@ -350,6 +357,7 @@ module Clacky
  lines << "compression_level: #{compression_level}"
  lines << "archived_at: #{Time.now.iso8601}"
  lines << "message_count: #{messages.size}"
+ lines << "topics: #{topics}" if topics
  lines << "---"
  lines << ""
  lines << "# Session Chunk #{chunk_index}"
@@ -414,7 +414,8 @@ module Clacky
  parts = []
  parts << "**Context:** #{context.strip}" << "" unless context.strip.empty?
  parts << "**Question:** #{question.strip}"
- if options && !options.empty?
+ # Guard: options must be an Array to iterate with each_with_index
+ if options.is_a?(Array) && !options.empty?
  parts << "" << "**Options:**"
  options.each_with_index { |opt, i| parts << " #{i + 1}. #{opt}" }
  end
@@ -529,6 +530,75 @@ module Clacky
  { name: "image_#{idx + 1}.#{ext}", mime_type: mime_type, data_url: url, path: path }
  end
  end
+
+ # Inject a chunk index card into the conversation when archived chunks exist.
+ # Lists all chunk files (path + topics + turn count) so the AI knows where to
+ # look if it needs details from past conversations. The AI can load any chunk
+ # on demand using the existing file_reader tool — no new tools required.
+ #
+ # Only re-injects when a new chunk has been added since the last injection,
+ # keeping the message list clean across multiple compressions.
+ #
+ # Cache-safe: injected as a system_injected user message in the conversation
+ # turns, never touching the system prompt.
+ def inject_chunk_index_if_needed
+ # Collect all compressed_summary messages that carry a chunk_path
+ chunk_msgs = @history.to_a.select { |m| m[:compressed_summary] && m[:chunk_path] }
+ return if chunk_msgs.empty?
+
+ # Skip if we already injected an index for this exact chunk count
+ return if @history.last_injected_chunk_count == chunk_msgs.size
+
+ # Remove any previously injected chunk index (stale — chunk count changed)
+ @history.delete_where { |m| m[:chunk_index] }
+
+ # Build index card lines
+ lines = ["## Previous Session Archives (#{chunk_msgs.size} chunk#{"s" if chunk_msgs.size > 1} available)\n"]
+ chunk_msgs.each_with_index do |msg, i|
+ path = msg[:chunk_path].to_s
+ topics = read_chunk_topics(path)
+ turns = read_chunk_message_count(path)
+ lines << "[CHUNK-#{i + 1}] #{path}"
+ lines << " Topics: #{topics}" if topics
+ lines << " Turns: #{turns}" if turns
+ lines << ""
+ end
+ lines << "Use file_reader to load a chunk file when you need original conversation details."
+
+ @history.append({
+ role: "user",
+ content: lines.join("\n"),
+ system_injected: true,
+ chunk_index: true,
+ chunk_count: chunk_msgs.size
+ })
+ end
+
+ # Read the `topics` field from a chunk MD file's YAML front matter.
+ # Returns nil if the file is missing or has no topics field.
+ private def read_chunk_topics(chunk_path)
+ return nil unless chunk_path && File.exist?(chunk_path)
+ File.foreach(chunk_path) do |line|
+ return line.sub(/^topics:\s*/, "").strip if line.start_with?("topics:")
+ break if line.strip == "---" && $. > 1 # end of front matter
+ end
+ nil
+ rescue
+ nil
+ end
+
+ # Read the `message_count` field from a chunk MD file's YAML front matter.
+ # Returns nil if the file is missing or has no message_count field.
+ private def read_chunk_message_count(chunk_path)
+ return nil unless chunk_path && File.exist?(chunk_path)
+ File.foreach(chunk_path) do |line|
+ return line.sub(/^message_count:\s*/, "").strip.to_i if line.start_with?("message_count:")
+ break if line.strip == "---" && $. > 1
+ end
+ nil
+ rescue
+ nil
+ end
  end
  end
  end
@@ -255,7 +255,7 @@ module Clacky
  transient: transient
  })
 
- @ui&.show_info("Injected skill content for /#{skill.identifier}#{skill.name_zh ? " (#{skill.name_zh})" : ""}")
+ @ui&.show_info("Injected skill content for /#{skill.identifier}#{skill.name_zh.to_s.empty? ? "" : " (#{skill.name_zh})"}")
  end
 
 
@@ -405,7 +405,7 @@ module Clacky
 
  # Log which model the subagent is actually using (may differ from requested
  # when "lite" falls back to default due to no lite model configured)
- @ui&.show_info("Subagent start: #{skill.identifier}#{skill.name_zh ? " (#{skill.name_zh})" : ""} [#{subagent.current_model_info[:model]}]")
+ @ui&.show_info("Subagent start: #{skill.identifier}#{skill.name_zh.to_s.empty? ? "" : " (#{skill.name_zh})"} [#{subagent.current_model_info[:model]}]")
 
  # Run subagent with the actual task as the sole user turn.
  # If the user typed the skill command with no arguments (e.g. "/jade-appraisal"),
data/lib/clacky/agent.rb CHANGED
@@ -212,6 +212,9 @@ module Clacky
  # Inject session context (date + model) if not yet present or date has changed
  inject_session_context_if_needed
 
+ # Inject chunk index card if archived chunks exist and index is stale
+ inject_chunk_index_if_needed
+
  # Split files into vision images and disk files; downgrade oversized images to disk
  image_files, disk_files = partition_files(Array(files))
  vision_images, downgraded = resolve_vision_images(image_files)
@@ -695,11 +698,13 @@ module Clacky
  @ui&.update_todos(@todos.dup)
  end
 
- # Special handling for request_user_feedback: show directly as message
+ # Special handling for request_user_feedback: emit as interactive feedback card
  if call[:name] == "request_user_feedback"
- if result.is_a?(Hash) && result[:message]
- @ui&.show_assistant_message(result[:message], files: [])
- end
+ # Pass the raw call arguments to show_tool_call so the WebUI controller
+ # can extract question/context/options and emit a "request_feedback" event
+ # (renders as a clickable card in the browser).
+ # Fallback UIs (terminal, IM channels) receive the formatted text message.
+ @ui&.show_tool_call(call[:name], call[:arguments])
 
  if @config.permission_mode == :auto_approve
  # auto_approve means no human is watching (unattended/scheduled tasks).
@@ -731,6 +736,17 @@ module Clacky
  }
  Clacky::Logger.error("tool_execution_error", tool: call[:name], error: e)
 
+ # If arguments were malformed/truncated (e.g. Bedrock streaming truncation),
+ # retract the bad assistant message from history so the next LLM call gets a
+ # fresh context rather than re-reading a cached broken tool call.
+ # Also skip adding a tool_result — without the assistant message there is no
+ # tool_call to pair with, and sending an orphan tool_result breaks the API.
+ if e.is_a?(Utils::BadArgumentsError)
+ size_before = @history.size
+ @history.pop_while { |m| m[:role] == "assistant" && m[:tool_calls]&.any? { |tc| tc[:id] == call[:id] } }
+ next if @history.size < size_before # message was retracted, skip tool_result
+ end
+
  @hooks.trigger(:on_tool_error, call, e)
  @ui&.show_tool_error(e)
  # Use build_denied_result with system_injected=true so LLM knows it can retry
@@ -135,6 +135,29 @@ module Clacky
  FileUtils.chmod(0o600, BRAND_FILE)
  end
 
+ # Remove the local license binding and wipe all brand-related fields from disk.
+ # Brand skills installed from this license are also cleared.
+ # Returns { success: true }.
+ def deactivate!
+ clear_brand_skills!
+ FileUtils.rm_f(BRAND_FILE)
+ # Reset all in-memory state so this instance is clean after the call.
+ @product_name = nil
+ @package_name = nil
+ @logo_url = nil
+ @support_contact = nil
+ @support_qr_url = nil
+ @theme_color = nil
+ @homepage_url = nil
+ @license_key = nil
+ @license_activated_at = nil
+ @license_expires_at = nil
+ @license_last_heartbeat = nil
+ @license_user_id = nil
+ @device_id = nil
+ { success: true }
+ end
+
  # Activate the license against the OpenClacky Cloud API using HMAC proof.
  # Returns a result hash: { success: bool, message: String, data: Hash }
  def activate!(license_key)
@@ -664,11 +687,36 @@ module Clacky
  # installed and that have a newer version available.
  # New skills are never auto-installed — the user must click Install/Update
  # explicitly from the Brand Skills panel.
+ installed = installed_brand_skills
  skills_needing_update = result[:skills].select { |s| s["needs_update"] }
  results = skills_needing_update.map do |skill_info|
  install_brand_skill!(skill_info)
  end
 
+ # Even when the version hasn't changed, display metadata (name_zh,
+ # description_zh, description) may have been updated on the platform.
+ # Patch brand_skills.json in-place without re-downloading the ZIP.
+ result[:skills].each do |skill_info|
+ name = skill_info["name"]
+ next unless installed.key?(name)
+ next if skill_info["needs_update"] # already being reinstalled above
+
+ local = installed[name]
+ next if local["name_zh"] == skill_info["name_zh"].to_s &&
+ local["description_zh"] == skill_info["description_zh"].to_s &&
+ local["description"] == skill_info["description"].to_s
+
+ # Metadata changed — update brand_skills.json without reinstalling.
+ record_installed_skill(
+ name,
+ local["version"],
+ skill_info["description"].to_s,
+ encrypted: local["encrypted"] != false,
+ description_zh: skill_info["description_zh"].to_s,
+ name_zh: skill_info["name_zh"].to_s
+ )
+ end
+
  on_complete&.call(results)
  rescue StandardError
  # Background sync failures are intentionally swallowed — the agent
data/lib/clacky/cli.rb CHANGED
@@ -709,8 +709,21 @@ module Clacky
  sleep 0.1
  # Clear output area
  ui_controller.layout.clear_output
+ # Cancel old idle timer before replacing agent to avoid stale-agent compression
+ idle_timer.cancel
  # Clear session by creating a new agent
  agent = Clacky::Agent.new(client, agent_config, working_dir: working_dir, ui: ui_controller, profile: agent.agent_profile.name, session_id: Clacky::SessionManager.generate_id, source: :manual)
+ # Rebuild idle timer bound to the new agent
+ idle_timer = Clacky::IdleCompressionTimer.new(
+ agent: agent,
+ session_manager: session_manager,
+ logger: ->(msg, level:) { ui_controller.log(msg, level: level) }
+ ) do |success|
+ if success
+ ui_controller.update_sessionbar(tasks: agent.total_tasks, cost: agent.total_cost)
+ end
+ ui_controller.set_idle_status
+ end
  ui_controller.show_info("Session cleared. Starting fresh.")
  # Update session bar with reset values
  ui_controller.update_sessionbar(tasks: agent.total_tasks, cost: agent.total_cost)
data/lib/clacky/client.rb CHANGED
@@ -114,12 +114,25 @@ module Clacky
 
  # ── Prompt-caching support ────────────────────────────────────────────────
 
- # Returns true for Claude 3.5+ models that support prompt caching.
+ # Returns true for Claude models that support prompt caching (gen 3.5+ or gen 4+).
+ #
+ # Handles both direct model names (e.g. "claude-haiku-4-5") and
+ # Clacky AI Bedrock proxy names with "abs-" prefix (e.g. "abs-claude-haiku-4-5").
+ #
+ # Why only Claude models:
+ # - MiniMax uses automatic server-side caching (no cache_control needed from client)
+ # - Kimi uses a proprietary prompt_cache_key param, not cache_control
+ # - MiMo has no documented caching API
+ # - Only Claude (direct, OpenRouter, or ClackyAI Bedrock proxy) consumes our
+ # cache_control / cachePoint markers
  def supports_prompt_caching?(model)
- model_str = model.to_s.downcase
+ # Strip ClackyAI Bedrock proxy prefix before matching
+ model_str = model.to_s.downcase.sub(/^abs-/, "")
  return false unless model_str.include?("claude")
 
- model_str.match?(/claude(?:-3[-.]?[5-9]|-[4-9]|-sonnet-[34])/)
+ # Match Claude gen 3.5+ (3.5/3.6/3.7…) or gen 4+ in any name format:
+ # claude-3.5-sonnet-... claude-3-7-sonnet claude-haiku-4-5 claude-sonnet-4-6
+ model_str.match?(/claude(?:-3[-.]?[5-9]|.*-[4-9][-.]|.*-[4-9]$|-[4-9][-.]|-[4-9]$|-sonnet-[34])/)
  end
 
 
@@ -41,21 +41,21 @@ Example (Chinese):
  ### 2. Ask the user to name the AI (card)
 
  Call `request_user_feedback` to let the user pick or type a name for their AI assistant.
- Offer a few fun suggestions as options, plus a free-text fallback.
+ Offer a few fun suggestions as options. The user can also ignore the options and type any name directly.
 
  If `lang == "zh"`, use:
  ```json
  {
- "question": "先来点有意思的 —— 你想叫我什么名字?可以选一个,也可以直接输入你喜欢的:",
- "options": ["🐟 摸鱼王", "📚 卷王", "🌟 小天才", "🐱 本喵", "🌅 拾光", "自己输入名字…"]
+ "question": "先来点有意思的 —— 你想叫我什么名字?",
+ "options": ["摸鱼王", "老六", "夜猫子", "话唠", "包打听", "碎碎念", "掌柜的"]
  }
  ```
 
  Otherwise (English):
  ```json
  {
- "question": "Let's start with something fun — what would you like to call me? Pick one or type your own:",
- "options": ["✨ Aria", "🤖 Max", "🌙 Luna", "⚡ Zap", "🎯 Ace", "Type your own name…"]
+ "question": "Let's start with something fun — what would you like to call me?",
+ "options": ["Nox", "Sable", "Remy", "Vex", "Pip", "Zola", "Bex"]
  }
  ```
 
@@ -108,7 +108,11 @@ Call `request_user_feedback` again. This is where we learn about the user themse
  If `lang == "zh"`, use:
  ```json
  {
- "question": "那你呢?随便聊聊自己吧 —— 全部可选,填多少都行:\n• 你的名字(我该怎么称呼你?)\n• 职业\n• 最希望用 AI 做什么\n• 社交 / 作品链接(GitHub、微博、个人网站等)—— 我会读取公开信息来更了解你",
+ "question": "那你呢?随便聊聊自己吧 —— 全部可选,填多少都行:
+ - 你的名字(我该怎么称呼你?)
+ - 职业
+ - 最希望用 AI 做什么
+ - 社交 / 作品链接(GitHub、微博、个人网站等)—— 我会读取公开信息来更了解你",
  "options": []
  }
  ```
@@ -116,7 +120,11 @@ If `lang == "zh"`, use:
  Otherwise (English):
  ```json
  {
- "question": "Now a bit about you — all optional, skip anything you like.\n• Your name (what should I call you?)\n• Occupation\n• What you want to use AI for most\n• Social / portfolio links (GitHub, Twitter/X, personal site…) — I'll read them to learn about you",
+ "question": "Now a bit about you — all optional, skip anything you like.
+ - Your name (what should I call you?)
+ - Occupation
+ - What you want to use AI for most
+ - Social / portfolio links (GitHub, Twitter/X, personal site…) — I'll read them to learn about you",
  "options": []
  }
  ```
@@ -97,10 +97,6 @@ module Clacky
  emit("info", message: message)
  end
 
- def show_idle_status(phase:, message:)
- emit("idle_status", phase: phase.to_s, message: message)
- end
-
  def show_warning(message)
  emit("warning", message: message)
  end
@@ -119,15 +115,25 @@ module Clacky
 
  # === Progress ===
 
- def show_progress(message = nil, prefix_newline: true, output_buffer: nil)
- @progress_start_time = Time.now
- emit("progress", message: message, status: "start")
+ def show_progress(message = nil, prefix_newline: true, progress_type: "thinking", phase: "active", metadata: {})
+ @progress_start_time = Time.now if phase == "active"
+
+ data = {
+ message: message,
+ progress_type: progress_type,
+ phase: phase,
+ status: phase == "active" ? "start" : "stop" # backward compat
+ }
+ data[:metadata] = metadata unless metadata.empty?
+ data[:elapsed] = (Time.now - @progress_start_time).round(1) if phase == "done" && @progress_start_time
+
+ emit("progress", **data)
+
+ @progress_start_time = nil if phase == "done"
  end
 
  def clear_progress
- elapsed = @progress_start_time ? (Time.now - @progress_start_time).round(1) : 0
- @progress_start_time = nil
- emit("progress", status: "stop", elapsed: elapsed)
+ show_progress(progress_type: "thinking", phase: "done")
  end
 
  # === State updates ===
@@ -12,6 +12,7 @@ module Clacky
  task_id created_at system_injected session_context memory_update
  subagent_instructions subagent_result token_usage
  compressed_summary chunk_path truncated transient
+ chunk_index chunk_count
  ].freeze
 
  def initialize(messages = [])
@@ -123,6 +124,13 @@ module Clacky
  msg&.dig(:session_date)
  end
 
+ # Return the chunk_count from the most recently injected chunk index message.
+ # Used by inject_chunk_index_if_needed to avoid re-injecting when nothing changed.
+ def last_injected_chunk_count
+ msg = @messages.reverse.find { |m| m[:chunk_index] }
+ msg&.dig(:chunk_count) || 0
+ end
+
  # Return only real (non-system-injected) user messages.
  def real_user_messages
  @messages.select { |m| m[:role] == "user" && !m[:system_injected] }
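The dedup check added above is purely data driven: the newest injected index card's `chunk_count` decides whether a re-injection is needed. A standalone sketch over a plain array (the free function here is hypothetical; the real method lives on the history object):

```ruby
# Newest message flagged chunk_index wins; 0 means no index was ever injected.
def last_injected_chunk_count(messages)
  msg = messages.reverse.find { |m| m[:chunk_index] }
  msg&.dig(:chunk_count) || 0
end

history = [
  { role: "user", content: "hi" },
  { role: "user", content: "index v1", chunk_index: true, chunk_count: 1 },
  { role: "assistant", content: "..." },
  { role: "user", content: "index v2", chunk_index: true, chunk_count: 2 }
]

last_injected_chunk_count(history) # => 2 (matches the current chunk count, so injection is skipped)
last_injected_chunk_count([])      # => 0 (forces a first injection once chunks exist)
```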