netra-zen 1.0.7__py3-none-any.whl → 1.0.8__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
netra_zen-1.0.8.dist-info/METADATA CHANGED
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: netra-zen
- Version: 1.0.7
+ Version: 1.0.8
  Summary: Multi-instance Claude orchestrator for parallel task execution
  Home-page: https://github.com/netra-systems/zen
  Author: Systems
@@ -30,6 +30,11 @@ Description-Content-Type: text/markdown
  License-File: LICENSE.md
  Requires-Dist: PyYAML>=6.0
  Requires-Dist: python-dateutil>=2.8.2
+ Requires-Dist: aiohttp>=3.8.0
+ Requires-Dist: websockets>=11.0
+ Requires-Dist: rich>=13.0.0
+ Requires-Dist: PyJWT>=2.8.0
+ Requires-Dist: psutil>=5.9.0
  Requires-Dist: opentelemetry-sdk>=1.20.0
  Requires-Dist: opentelemetry-exporter-gcp-trace>=1.6.0
  Requires-Dist: google-cloud-trace>=1.11.0
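1.0.8 declares five new runtime dependencies: rich backs the new thinking spinner shown below, while aiohttp and websockets presumably serve the WebSocket client, and PyJWT and psutil the auth and process plumbing. A minimal post-install sanity check (a sketch; note PyJWT's import name is `jwt`, so querying distribution metadata is the simpler test):

```python
# Sketch: confirm the new 1.0.8 dependencies resolved after `pip install netra-zen`.
import importlib.metadata as md

for dist in ("aiohttp", "websockets", "rich", "PyJWT", "psutil"):
    print(f"{dist}=={md.version(dist)}")  # raises PackageNotFoundError if absent
```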
@@ -55,17 +60,42 @@ It works by analyzing your usage logs for metadata optimizations. It is focused

  This is a micro startup effort, aiming to provide real value for individual devs in exchange for feedback. Our intent is to charge businesses for larger scale optimizations.

- The process is simple. One time install, then one command. It auto grabs the last 5 logs and provides actionable items to update going forward to get the value of the optimizations.
+ The process is simple. One time install, then one command. It auto grabs the last 3 log files and provides actionable items to update going forward to get the value of the optimizations.

  ## Quick start

  1. `pip install netra-zen`
- 2. `zen --apex --send-logs --message "claude code"`
+ 2. `zen --apex --send-logs`
  3. Read the results and update claude settings, prompts, commands, etc. as needed to benefit

- By default it will optimize based on logs no thought on the message is needed. Just copy and paste #2!
  See detailed install below if needed.

+ ### Log Collection Options
+
+ The optimizer analyzes your Claude Code usage logs to identify optimization opportunities. You can customize what logs are sent:
+
+ ```bash
+ # Send logs from the 3 most recent files (default)
+ zen --apex --send-logs
+
+ # Send logs from more files for deeper analysis
+ zen --apex --send-logs --logs-count 5
+
+ # Send logs from a specific project
+ zen --apex --send-logs --logs-project "my-project-name"
+
+ # Send logs from a custom location
+ zen --apex --send-logs --logs-path "/path/to/.claude/Projects"
+
+ # Combine options for targeted analysis
+ zen --apex --send-logs --logs-count 3 --logs-project "production-app"
+ ```
+
+ **Important:**
+ - `--logs-count` specifies the number of **files** to read, not entries
+ - Each file may contain many log entries
+ - The tool will display exactly how many entries from how many files are being sent
+
  ## Example output
  ![example](https://github.com/user-attachments/assets/94ed0180-9fed-4d76-ab69-657b7d3ab1b2)

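The files-versus-entries distinction the Important note draws is easy to check locally; a minimal sketch, assuming the default log location and a hypothetical project name:

```python
# Count entries per file for the 3 most recent session logs, mirroring
# what `--logs-count 3` would collect. Paths are illustrative.
from pathlib import Path

project = Path.home() / ".claude" / "projects" / "my-project-name"  # hypothetical
recent = sorted(project.glob("*.jsonl"), key=lambda p: p.stat().st_mtime, reverse=True)[:3]
for f in recent:
    with f.open(encoding="utf-8") as fh:
        entries = sum(1 for line in fh if line.strip())
    print(f"{f.name}: {entries} entries")
```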
@@ -75,12 +105,147 @@ See detailed install below if needed.
  This was just changing a few small lines on a 400 line command.
  ![savings](https://github.com/user-attachments/assets/9298e7cc-4f15-4dc0-97e3-1f126757dde6)

+ ## Notes
+ - Have an optimization idea or area you want it to focus on? Create a git issue and we can add that to our evals.
+
+ ## Example output from single file
+ ```
+ zen --apex --send-logs --logs-path /Users/user/.claude/projects/-Users-Desktop-netra-apex/7ac6d7ac-abc3-4903-a482-......-1.jsonl
+
+ SUCCESS: WebSocket connected successfully!
+
+ ============================================================
+ 📤 SENDING LOGS TO OPTIMIZER
+ ============================================================
+ Total Entries: 781
+ Files Read: 1
+ Payload Size: 5.52 MB
+
+ Files:
+ • 7ac6d7ac-abc3-4903-a482-.....jsonl (hash: 908dbc51, 781 entries)
+
+ Payload Confirmation:
+ ✓ 'jsonl_logs' key added to payload
+ ✓ First log entry timestamp: 2025-10-03T18:26:02.089Z
+ ✓ Last log entry timestamp: 2025-10-03T19:31:21.876Z
+ ============================================================
+
+ [11:32:55.190] [DEBUG] GOLDEN PATH TRACE: Prepared WebSocket payload for run_id=cli_20251008_113255_25048,
+ thread_id=cli_thread_f887c58e7759
+ [11:32:55.191] [DEBUG] ✓ TRANSMISSION PROOF: Payload contains 781 JSONL log entries in 'jsonl_logs' key
+ SUCCESS: Message sent with run_id: cli_20251008_113255_25048
+ ⏳ Waiting 120 seconds for events...
+ Receiving events...
+ [11:32:55.284] [DEBUG] Listening for WebSocket events...
+ [11:32:55.284] [DEBUG] GOLDEN PATH TRACE: Event listener started after successful connection
+ [11:32:56.655] [DEBUG] WebSocket Event #1: raw_message_received
+ [11:32:56.657] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=connection_established
+ [11:32:56.658] [DEBUG] WebSocket Event #2: connection_established
+ [11:32:56] [CONN] Connected as: e2e-staging-2d677771
+ [11:33:01.364] [DEBUG] WebSocket Event #3: raw_message_received
+ [11:33:01.366] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=thread_created
+ [11:33:01.367] [DEBUG] WebSocket Event #4: thread_created
+ [11:33:01] [EVENT] thread_created: {"type": "thread_created", "payload": {"thread_id":
+ "thread_session_969_44184cce", "timestamp": 1759...
+ [11:33:02.901] [DEBUG] WebSocket Event #5: raw_message_received
+ [11:33:02.903] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=agent_started
+ [11:33:02.904] [DEBUG] WebSocket Event #6: agent_started
+ [11:33:02] 🧠 Agent: netra-assistant started (run: run_sess...)
+ [11:33:04.744] [DEBUG] WebSocket Event #7: raw_message_received
+ [11:33:04.746] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=agent_started
+ [11:33:04.747] [DEBUG] WebSocket Event #8: agent_started
+ [11:33:04] 🧠 Agent: netra-assistant started (run: run_sess...)
+ [11:33:06.366] [DEBUG] WebSocket Event #9: raw_message_received
+ [11:33:06.368] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=agent_started
+ [11:33:06.369] [DEBUG] WebSocket Event #10: agent_started
+ [11:33:06] 🧠 Agent: MessageHandler started (run: run_sess...)
+ [11:33:14.781] [DEBUG] WebSocket Event #11: raw_message_received
+ [11:33:14.783] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=agent_started
+ [11:33:14.784] [DEBUG] WebSocket Event #12: agent_started
+ [11:33:14] 🧠 Agent: claude_code_optimizer started (run: run_sess...)
+ [11:33:23.241] [DEBUG] WebSocket Event #13: raw_message_received
+ [11:33:23.243] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=agent_thinking
+ [11:33:23.244] [DEBUG] WebSocket Event #14: agent_thinking
+ [11:33:23] 💭 Thinking: Preparing optimization prompt
+ ⠹ 💭 Preparing optimization prompt[11:34:27.586] [DEBUG] WebSocket Event #15: raw_message_received
+ [11:34:27.588] [DEBUG] GOLDEN PATH TRACE: Parsed WebSocket event type=agent_completed
+ [11:34:27.589] [DEBUG] WebSocket Event #16: agent_completed
+ ⠹ 💭 Preparing optimization prompt
+ [11:34:27] 🧠 Agent Completed: claude_code_optimizer (run: run_sess...) - {"status": "done", "result":
+ {"optimizations": [{"issue": "Repeated Full File Read", "evidence": "Th...
+ ╭────────────────────────────────── Final Agent Result - Optimization Pointers ──────────────────────────────────╮
+ │ { │
+ │ "status": "done", │
+ │ "result": { │
+ │ "optimizations": [ │
+ │ { │
+ │ "issue": "Repeated Full File Read", │
+ │ "evidence": "The file `api/src/routes/user.js` was read in its entirety using `cat` twice. The model │
+ │ read it once to understand the code, but then read the entire file again later to re-confirm a detail it had │
+ │ forgotten.", │
+ │ "token_waste": "High (~2.5k tokens). The entire content of the 250-line file was added to the context │
+ │ a second time, providing no new information.", │
+ │ "fix": "The model should retain the context of files it has already read within the same task. If it │
+ │ needs to re-check a specific detail, it should use a targeted tool like `grep` or `read_lines` (e.g., `grep -C │
+ │ 5 'findUser' api/src/routes/user.js`) instead of re-reading the entire file.", │
+ │ "ideal prompt": "The user profile page isn't loading the user's name. The API endpoint is in │
+ │ `api/src/routes/user.js` and it calls the `findUser` function from `api/src/db/utils.js`. Please investigate │
+ │ the data flow between these two files and fix the issue.", │
+ │ "priority": "high" │
+ │ }, │
+ │ { │
+ │ "issue": "Excessive Context Gathering", │
+ │ "evidence": "The `cat` command was used on two large files (`user.js` and `utils.js`), ingesting a │
+ │ total of 400 lines of code into the context. The actual bug was confined to a small 5-line function within │
+ │ `utils.js`.", │
+ │ "token_waste": "High (~4k tokens). Most of the file content was irrelevant to the specific task of │
+ │ fixing the `findUser` function's return value.", │
+ │ "fix": "Instead of `cat`, the model should use more precise tools to gather context. After identifying │
+ │ the relevant function with `grep`, it could have used a command like `read_lines('api/src/db/utils.js', │
+ │ start_line, end_line)` or `grep -A 10 'const findUser' api/src/db/utils.js` to read only the function's │
+ │ definition and its immediate surroundings.", │
+ │ "ideal prompt": "The `findUser` function in `api/src/db/utils.js` is not returning the user's name │
+ │ field. Please add it to the return object.", │
+ │ "priority": "high" │
+ │ }, │
+ │ { │
+ │ "issue": "Inefficient Project-Wide Search", │
+ │ "evidence": "A recursive grep (`grep -r \"findUser\" .`) was used to find the definition of │
+ │ `findUser`. While effective, this can be slow and return a lot of irrelevant matches (like comments, logs, │
+ │ etc.) in a large codebase, consuming tokens in the tool output.", │
+ │ "token_waste": "Medium (~500 tokens). The `grep` returned multiple matches, including the call site │
+ │ which was already known. In a larger project, this could return dozens of matches.", │
+ │ "fix": "If the project structure is conventional, a more targeted search would be better. For example, │
+ │ knowing `db` utilities are likely in a `db` or `utils` directory, a command like `grep 'findUser' │
+ │ api/src/db/*.js` would be more direct and produce less noise.", │
+ │ "ideal prompt": "The `findUser` function, defined in the `api/src/db/` directory, seems to be causing │
+ │ a bug. Can you find its definition and check what it returns?", │
+ │ "priority": "low" │
+ │ } │
+ │ ], │
+ │ "summary": { │
+ │ "total_issues": 3, │
+ │ "estimated_savings": "~7k tokens", │
+ │ "top_priority": "Avoid repeated full file reads. The model should trust its context or use targeted │
+ │ tools like `grep` to refresh specific details instead of re-ingesting entire files." │
+ │ } │
+ │ }, │
+ │ "message": "Claude Code optimization analysis complete" │
+ │ } │
+ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
+
+ 📊 Received 8 events
+ ```
+
+ # Advanced features & detailed install guide
+
+ In addition to optimizing your costs and latency,
+ you can control budgets and other advanced features.

- # Other features & detailed install guide
  ### Orchestrator

- Zen allows you to:
- - Zen Orchestrator runs multiple Code CLI instances for peaceful parallel task execution.
+ Orchestrator allows you to:
+ - Orchestrator runs multiple Code CLI instances for peaceful parallel task execution.
  - Run multiple headless Claude Code CLI instances simultaneously.
  - Calm unified results (status, time, token usage)
  - Relax **"5-hour limit reached"** lockout fears with easy token budget limits
netra_zen-1.0.8.dist-info/RECORD CHANGED
@@ -1,14 +1,15 @@
  zen_orchestrator.py,sha256=JAxmSaXsF9xF7sVGHmjtEfWBMgomsO-vuJ2RsZ0Paiw,151118
  agent_interface/__init__.py,sha256=OsbOKzElHsxhVgak87oOx_u46QNgKmz-Reis-plAMwk,525
  agent_interface/base_agent.py,sha256=GNskG9VaZgno7X24lQTpFdxUoQE0yJHLh0UPFJvOPn4,11098
- netra_zen-1.0.7.dist-info/licenses/LICENSE.md,sha256=t6LtOzAE2hgIIv5WbaN0wOcU3QCnGtAkMGNclHrKTOs,79
+ netra_zen-1.0.8.dist-info/licenses/LICENSE.md,sha256=t6LtOzAE2hgIIv5WbaN0wOcU3QCnGtAkMGNclHrKTOs,79
  scripts/__init__.py,sha256=FxMRmQuf7CAoQFpNqJcugEqDoi-hSpq9IwjxCmC6Ays,51
  scripts/__main__.py,sha256=41cdZ5GkvQ7ndWYUVJ6BnBi6haaa6SRQmBaYjUzOW3g,155
- scripts/agent_cli.py,sha256=22Ocv-A-HLwj63H0_LBryRcAYWYck5vsGdr_Gqpq2GE,287459
- scripts/agent_logs.py,sha256=_63DXULccmsw9Js4UOsBSr9JwNzlCYUb1ea1TEymjFo,7483
+ scripts/agent_cli.py,sha256=c9qmAx1Fk2XNidveD2zhpF7ndj5hwgVwUcAEGY5kjJM,295129
+ scripts/agent_logs.py,sha256=AzSPA9nCvh2toC6pa5mzv4l3F-jTOHUg-G8NQRbguAo,10830
  scripts/bump_version.py,sha256=fjABzzRVXJ00CbYMpUIUMwcOHwafLYtFL6NvUga-i6M,4183
  scripts/demo_log_collection.py,sha256=8T7qfgiYc33apBtu2ATN2DDYZtzt_HM7D2PCNf8TcfI,5274
  scripts/embed_release_credentials.py,sha256=lqlvByGZ2L8Fl4Hd1pHxcd1dmEXbHU1PwZHLQP7Syso,2028
+ scripts/verify_log_transmission.py,sha256=0Pyv6_9Sk24iR2QhKMMReJGc9v54ygziaCnvSkPdIUc,4353
  token_budget/__init__.py,sha256=_2tmi72DGNtbYcZ-rGIxVKMytdkHFjzJaWz8bDhYACc,33
  token_budget/budget_manager.py,sha256=VRWxKcGDtgJfIRh-ztYQ4-wuhBvddVJJnyoGfxCBlv0,9567
  token_budget/models.py,sha256=14xFTk2-R1Ql0F9WLDof7vADrKC_5Fj7EE7UmZdoG00,2384
@@ -20,8 +21,8 @@ zen/__main__.py,sha256=zoXi9DiNt_WznQvnJ249ZvF-OcEoAnHmxeoKRFiPNo8,170
  zen/telemetry/__init__.py,sha256=QiW8p9TBDwPxtmYTszMyccblLHKrlVTsKLFIBvMHKx8,305
  zen/telemetry/embedded_credentials.py,sha256=K6i9LOGnBx6DXpaVBKmZYpvJ9LIYlAjQVXyjiKIF9x0,1657
  zen/telemetry/manager.py,sha256=TtrIPOvRvq1OOhNAO_Tp-dz7EiS2xXfqnpKQduLwYoI,9731
- netra_zen-1.0.7.dist-info/METADATA,sha256=Z3pk9x6-GtUcLw4InGmO-rsqkL09WbyavpwuFh8FuGc,30997
- netra_zen-1.0.7.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- netra_zen-1.0.7.dist-info/entry_points.txt,sha256=oDehCnPGZezG0m9ZWspxjHLHyQ3eERX87eojR4ljaRo,45
- netra_zen-1.0.7.dist-info/top_level.txt,sha256=OhiyXmoXftBijCF6ck-RS1dN2NBJv9wdd7kBG1Es7zA,77
- netra_zen-1.0.7.dist-info/RECORD,,
+ netra_zen-1.0.8.dist-info/METADATA,sha256=qO5AJ79GelX3ZmHMIS1sxy833cCKzIWAvGnVCZGoWn8,43383
+ netra_zen-1.0.8.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ netra_zen-1.0.8.dist-info/entry_points.txt,sha256=oDehCnPGZezG0m9ZWspxjHLHyQ3eERX87eojR4ljaRo,45
+ netra_zen-1.0.8.dist-info/top_level.txt,sha256=OhiyXmoXftBijCF6ck-RS1dN2NBJv9wdd7kBG1Es7zA,77
+ netra_zen-1.0.8.dist-info/RECORD,,
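Each RECORD line pairs a file with an unpadded urlsafe-base64 SHA-256 and a byte size. A sketch for spot-checking one file from an unpacked wheel against its RECORD entry, assuming the standard PEP 376/427 encoding:

```python
# Recompute a RECORD-style hash for a file from an unpacked wheel.
import base64
import hashlib
from pathlib import Path

def record_hash(path: Path) -> str:
    digest = hashlib.sha256(path.read_bytes()).digest()
    return "sha256=" + base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# record_hash(Path("scripts/agent_logs.py")) should print
# sha256=AzSPA9nCvh2toC6pa5mzv4l3F-jTOHUg-G8NQRbguAo for the 1.0.8 wheel.
```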
scripts/agent_cli.py CHANGED
@@ -159,12 +159,10 @@ class SimpleConfigReader:
  try:
      from agent_output_validator import AgentOutputValidator, ValidationReport, ValidationResult
  except ImportError:
-     # Try relative import from scripts directory
-     import sys
-     import os
-     script_dir = os.path.dirname(os.path.abspath(__file__))
-     sys.path.insert(0, script_dir)
-     from agent_output_validator import AgentOutputValidator, ValidationReport, ValidationResult
+     # Module not available - create stub classes
+     AgentOutputValidator = None
+     ValidationReport = None
+     ValidationResult = None

  # Import WebSocket event validation framework for Issue #2177
  try:
@@ -172,14 +170,10 @@ try:
          WebSocketEventValidationFramework, EventValidationReport, ValidationResult as EventValidationResult
      )
  except ImportError:
-     # Try relative import from scripts directory
-     import sys
-     import os
-     script_dir = os.path.dirname(os.path.abspath(__file__))
-     sys.path.insert(0, script_dir)
-     from websocket_event_validation_framework import (
-         WebSocketEventValidationFramework, EventValidationReport, ValidationResult as EventValidationResult
-     )
+     # Module not available - create stub classes
+     WebSocketEventValidationFramework = None
+     EventValidationReport = None
+     EventValidationResult = None

  # Import business value validator for revenue protection
  # ISSUE #2414: Delay imports that trigger configuration validation
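Because the fallback now binds these names to None instead of retrying a sys.path-hacked import, every use site has to treat the validators as optional — the `AgentOutputValidator is not None` guard in a later hunk below does exactly that. The pattern in miniature, as a sketch:

```python
# Optional-dependency pattern: degrade to a no-op instead of crashing
# when the validator module is missing from the environment.
try:
    from agent_output_validator import AgentOutputValidator
except ImportError:
    AgentOutputValidator = None

# Hypothetical use site: validation is quietly disabled when absent.
output_validator = AgentOutputValidator(debug=True) if AgentOutputValidator else None
```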
@@ -2998,18 +2992,79 @@ class WebSocketClient:
  try:
      from scripts.agent_logs import collect_recent_logs

-     logs = collect_recent_logs(
+     result = collect_recent_logs(
          limit=self.logs_count,
          project_name=self.logs_project,
          base_path=self.logs_path,
          username=self.logs_user
      )

-     if logs:
+     if result:
+         logs, files_read, file_info = result
          payload["payload"]["jsonl_logs"] = logs
+
+         # Calculate payload size for transmission proof
+         import logging
+         import sys
+
+         # Get size of logs in payload
+         logs_json = json.dumps(logs)
+         logs_size_bytes = len(logs_json.encode('utf-8'))
+         logs_size_kb = logs_size_bytes / 1024
+         logs_size_mb = logs_size_kb / 1024
+
+         # Format size appropriately
+         if logs_size_mb >= 1:
+             size_str = f"{logs_size_mb:.2f} MB"
+         elif logs_size_kb >= 1:
+             size_str = f"{logs_size_kb:.2f} KB"
+         else:
+             size_str = f"{logs_size_bytes} bytes"
+
+         # Create prominent, formatted log message
+         separator = "=" * 60
+         log_msg_parts = [
+             "",
+             separator,
+             f"📤 SENDING LOGS TO OPTIMIZER",
+             separator,
+             f" Total Entries: {len(logs)}",
+             f" Files Read: {files_read}",
+             f" Payload Size: {size_str}",
+         ]
+
+         if self.logs_project:
+             log_msg_parts.append(f" Project: {self.logs_project}")
+
+         log_msg_parts.append("")
+         log_msg_parts.append(" Files:")
+
+         # Add file details with hashes
+         for info in file_info:
+             log_msg_parts.append(
+                 f" • {info['name']} (hash: {info['hash']}, {info['entries']} entries)"
+             )
+
+         # Add payload proof
+         log_msg_parts.append("")
+         log_msg_parts.append(" Payload Confirmation:")
+         log_msg_parts.append(f" ✓ 'jsonl_logs' key added to payload")
+         log_msg_parts.append(f" ✓ First log entry timestamp: {logs[0].get('timestamp', 'N/A') if logs else 'N/A'}")
+         log_msg_parts.append(f" ✓ Last log entry timestamp: {logs[-1].get('timestamp', 'N/A') if logs else 'N/A'}")
+
+         log_msg_parts.append(separator)
+         log_msg_parts.append("")
+
+         log_msg = "\n".join(log_msg_parts)
+
+         # Log at INFO level
+         logging.info(log_msg)
+
+         # Also print via debug system for consistency
          self.debug.debug_print(
-             f"Attached {len(logs)} log entries to message payload",
-             DebugLevel.BASIC
+             log_msg,
+             DebugLevel.BASIC,
+             style="cyan"
          )
      else:
          self.debug.debug_print(
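The size computation above is inlined in the client; as a standalone helper the same logic reads (a sketch, not code from the wheel):

```python
def format_size(num_bytes: int) -> str:
    """Render a byte count the way the transmission banner does."""
    kb = num_bytes / 1024
    mb = kb / 1024
    if mb >= 1:
        return f"{mb:.2f} MB"
    if kb >= 1:
        return f"{kb:.2f} KB"
    return f"{num_bytes} bytes"

# format_size(5_789_000) -> '5.52 MB', matching the example output above
```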
@@ -3031,6 +3086,51 @@ class WebSocketClient:
              style="yellow"
          )

+     # Proof of logs in transmission
+     if "jsonl_logs" in payload["payload"]:
+         log_count = len(payload["payload"]["jsonl_logs"])
+         self.debug.debug_print(
+             f"✓ TRANSMISSION PROOF: Payload contains {log_count} JSONL log entries in 'jsonl_logs' key",
+             DebugLevel.BASIC,
+             style="green"
+         )
+
+         # Optional: Save payload proof to file for verification
+         if os.environ.get('ZEN_SAVE_PAYLOAD_PROOF'):
+             try:
+                 import tempfile
+                 proof_file = tempfile.NamedTemporaryFile(
+                     mode='w',
+                     prefix='zen_payload_proof_',
+                     suffix='.json',
+                     delete=False
+                 )
+
+                 # Save payload structure (with truncated logs for readability)
+                 proof_payload = {
+                     "run_id": payload.get("run_id"),
+                     "payload": {
+                         "message": payload["payload"].get("message"),
+                         "jsonl_logs": {
+                             "count": len(payload["payload"]["jsonl_logs"]),
+                             "sample_first": payload["payload"]["jsonl_logs"][0] if payload["payload"]["jsonl_logs"] else None,
+                             "sample_last": payload["payload"]["jsonl_logs"][-1] if payload["payload"]["jsonl_logs"] else None,
+                         }
+                     }
+                 }
+
+                 json.dump(proof_payload, proof_file, indent=2)
+                 proof_file.close()
+
+                 self.debug.debug_print(
+                     f"📝 Payload proof saved to: {proof_file.name}",
+                     DebugLevel.BASIC,
+                     style="cyan"
+                 )
+             except Exception as e:
+                 # Don't fail transmission if proof saving fails
+                 pass
+
  # ISSUE #1603 FIX: Add critical logging for message sending (only in diagnostic mode)
  if self.debug.debug_level >= DebugLevel.DIAGNOSTIC:
      self.debug.debug_print(f"SENDING WEBSOCKET MESSAGE: {json.dumps(payload, indent=2)}", DebugLevel.DIAGNOSTIC)
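Opting in is just an environment variable; a usage sketch (the file lands in the platform temp directory, since it comes from `tempfile.NamedTemporaryFile`):

```bash
# Save a payload proof alongside a normal run, then inspect the JSON summary.
ZEN_SAVE_PAYLOAD_PROOF=1 zen --apex --send-logs
ls "${TMPDIR:-/tmp}"/zen_payload_proof_*.json
```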
@@ -3762,7 +3862,7 @@ class AgentCLI:
      # Issue #1822: Agent output validation
      self.validate_outputs = validate_outputs
      self.output_validator: Optional[AgentOutputValidator] = None
-     if self.validate_outputs:
+     if self.validate_outputs and AgentOutputValidator is not None:
          self.output_validator = AgentOutputValidator(debug=config.debug_level.value >= 3)

      # Business value validation
@@ -3918,11 +4018,37 @@

      async def _receive_events(self):
          """Background task to receive and display events"""
+         thinking_spinner = None
+         thinking_live = None
+
          async def handle_event(event: WebSocketEvent):
+             nonlocal thinking_spinner, thinking_live
+
+             # Stop spinner if it's running and we get any non-thinking event
+             if thinking_live and event.type != "agent_thinking":
+                 thinking_live.stop()
+                 thinking_live = None
+                 thinking_spinner = None
+
              # Display event with enhanced formatting
              formatted_event = event.format_for_display(self.debug)
              safe_console_print(f"[{event.timestamp.strftime('%H:%M:%S')}] {formatted_event}")

+             # Start spinner for agent_thinking events (20-60 second wait indicator)
+             if event.type == "agent_thinking" and not thinking_live:
+                 thought = event.data.get('thought', event.data.get('reasoning', ''))
+                 spinner_text = truncate_with_ellipsis(thought, 60) if thought else "Processing..."
+
+                 thinking_spinner = Progress(
+                     SpinnerColumn(spinner_name="dots"),
+                     TextColumn("[cyan]{task.description}"),
+                     console=Console(file=sys.stderr),
+                     transient=True
+                 )
+                 thinking_live = Live(thinking_spinner, console=Console(file=sys.stderr), refresh_per_second=10)
+                 thinking_live.start()
+                 thinking_spinner.add_task(f"💭 {spinner_text}", total=None)
+
              # Display raw data in verbose mode
              if self.debug.debug_level >= DebugLevel.DIAGNOSTIC:
                  safe_console_print(Panel(
@@ -3931,16 +4057,47 @@
                      border_style="dim"
                  ))

-         await self.ws_client.receive_events(callback=handle_event)
+         try:
+             await self.ws_client.receive_events(callback=handle_event)
+         finally:
+             # Clean up spinner if it's still running
+             if thinking_live:
+                 thinking_live.stop()

      async def _receive_events_with_display(self):
          """ISSUE #1603 FIX: Enhanced event receiver with better display for single message mode"""
+         thinking_spinner = None
+         thinking_live = None
+
          async def handle_event_with_display(event: WebSocketEvent):
+             nonlocal thinking_spinner, thinking_live
+
+             # Stop spinner if it's running and we get any non-thinking event
+             if thinking_live and event.type != "agent_thinking":
+                 thinking_live.stop()
+                 thinking_live = None
+                 thinking_spinner = None
+
              # Display event with enhanced formatting and emojis
              formatted_event = event.format_for_display(self.debug)
              timestamp = event.timestamp.strftime('%H:%M:%S')
              safe_console_print(f"[{timestamp}] {formatted_event}")

+             # Start spinner for agent_thinking events (20-60 second wait indicator)
+             if event.type == "agent_thinking" and not thinking_live:
+                 thought = event.data.get('thought', event.data.get('reasoning', ''))
+                 spinner_text = truncate_with_ellipsis(thought, 60) if thought else "Processing..."
+
+                 thinking_spinner = Progress(
+                     SpinnerColumn(spinner_name="dots"),
+                     TextColumn("[cyan]{task.description}"),
+                     console=Console(file=sys.stderr),
+                     transient=True
+                 )
+                 thinking_live = Live(thinking_spinner, console=Console(file=sys.stderr), refresh_per_second=10)
+                 thinking_live.start()
+                 thinking_spinner.add_task(f"💭 {spinner_text}", total=None)
+
              # Issue #2177: WebSocket event validation
              if self.validate_events and self.event_validator:
                  try:
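Both receivers share the same spinner lifecycle: start on the first agent_thinking event, stop on any other event or in the cleanup path. The rich machinery in isolation, as a runnable sketch (rich>=13 is now a declared dependency):

```python
# Standalone sketch of the "thinking" indicator: a transient spinner on
# stderr that erases itself when stopped, so captured stdout stays clean.
import sys
import time

from rich.console import Console
from rich.live import Live
from rich.progress import Progress, SpinnerColumn, TextColumn

stderr_console = Console(file=sys.stderr)
spinner = Progress(
    SpinnerColumn(spinner_name="dots"),
    TextColumn("[cyan]{task.description}"),
    console=stderr_console,
    transient=True,
)
with Live(spinner, console=stderr_console, refresh_per_second=10):
    spinner.add_task("💭 Preparing optimization prompt", total=None)
    time.sleep(3)  # stand-in for waiting on the next WebSocket event
```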
@@ -4016,7 +4173,12 @@
                      border_style="dim"
                  ))

-         await self.ws_client.receive_events(callback=handle_event_with_display)
+         try:
+             await self.ws_client.receive_events(callback=handle_event_with_display)
+         finally:
+             # Clean up spinner if it's still running
+             if thinking_live:
+                 thinking_live.stop()

      def _get_event_summary(self, event: WebSocketEvent) -> str:
          """ISSUE #1603 FIX: Get a concise summary of an event for display"""
@@ -5497,9 +5659,9 @@
      parser.add_argument(
          "--logs-count",
          type=int,
-         default=5,
+         default=3,
          metavar="N",
-         help="Number of recent log files to collect (default: 5, must be positive)"
+         help="Number of recent log files to collect (default: 3, must be positive)"
      )

      parser.add_argument(
@@ -6121,6 +6283,16 @@
          elif args.validate_outputs and result is False:
              # Validation failed, exit with code 1 (fallback)
              sys.exit(1)
+         elif args.send_logs:
+             # Handle --send-logs without --message: use default message
+             default_message = "claude-code optimizer default message"
+             result = await cli.run_single_message(default_message, args.wait)
+             # ISSUE #2766: Use structured exit code from ExitCodeGenerator
+             if hasattr(cli, 'exit_code'):
+                 sys.exit(cli.exit_code)
+             elif args.validate_outputs and result is False:
+                 # Validation failed, exit with code 1 (fallback)
+                 sys.exit(1)
          else:
              await cli.run_interactive()
      except Exception as e:
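This is the branch that lets the README's quick start omit `--message`: when only `--send-logs` is passed, the CLI substitutes its built-in default prompt. In practice the two invocations below behave effectively the same, the second relying on the default message:

```bash
zen --apex --send-logs --message "claude code"
zen --apex --send-logs
```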
scripts/agent_logs.py CHANGED
@@ -4,6 +4,7 @@ Agent Logs Collection Helper
  Collects recent JSONL logs from .claude/Projects for agent CLI integration
  """

+ import hashlib
  import json
  import logging
  import os
@@ -127,7 +128,7 @@ def _find_most_recent_project(projects_root: Path) -> Optional[Path]:
      return None


- def _collect_jsonl_files(project_path: Path, limit: int) -> List[Dict[str, Any]]:
+ def _collect_jsonl_files(project_path: Path, limit: int) -> tuple[List[Dict[str, Any]], int, List[Dict[str, str]]]:
      """
      Collect and parse JSONL files from project directory.

@@ -136,11 +137,11 @@ def _collect_jsonl_files(project_path: Path, limit: int) -> List[Dict[str, Any]]
          limit: Maximum number of log files to read

      Returns:
-         List of parsed log entries (dicts)
+         Tuple of (list of parsed log entries, number of files read, list of file info dicts)
      """
      if not project_path.exists() or not project_path.is_dir():
          logger.warning(f"Project path does not exist: {project_path}")
-         return []
+         return [], 0, []

      try:
          # Find all .jsonl files
@@ -148,18 +149,32 @@ def _collect_jsonl_files(project_path: Path, limit: int) -> List[Dict[str, Any]]

          if not jsonl_files:
              logger.info(f"No .jsonl files found in {project_path}")
-             return []
+             return [], 0, []

          # Sort by modification time, most recent first
          jsonl_files.sort(key=lambda p: p.stat().st_mtime, reverse=True)

          # Limit number of files to read
          jsonl_files = jsonl_files[:limit]
+         files_read = len(jsonl_files)

          all_logs = []
+         file_info = []

          for jsonl_file in jsonl_files:
              try:
+                 # Calculate file hash for tracking
+                 hasher = hashlib.sha256()
+                 entry_count = 0
+
+                 with open(jsonl_file, 'rb') as f:
+                     # Read in chunks for efficient hashing
+                     for chunk in iter(lambda: f.read(4096), b''):
+                         hasher.update(chunk)
+
+                 file_hash = hasher.hexdigest()[:8]  # First 8 chars of hash
+
+                 # Now read and parse the file
                  with open(jsonl_file, 'r', encoding='utf-8') as f:
                      for line_num, line in enumerate(f, 1):
                          line = line.strip()
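The chunked read above keeps memory flat even for multi-megabyte session logs. The same idea as a reusable helper (a sketch; the 8-character truncation matches the display format in the transmission banner):

```python
import hashlib
from pathlib import Path

def short_hash(path: Path, chunk_size: int = 4096) -> str:
    """Stream a file through SHA-256 and return the first 8 hex chars."""
    hasher = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            hasher.update(chunk)
    return hasher.hexdigest()[:8]
```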
@@ -169,43 +184,50 @@ def _collect_jsonl_files(project_path: Path, limit: int) -> List[Dict[str, Any]]
                          try:
                              log_entry = json.loads(line)
                              all_logs.append(log_entry)
+                             entry_count += 1
                          except json.JSONDecodeError as e:
                              logger.debug(
                                  f"Skipping malformed JSON in {jsonl_file.name}:{line_num}: {e}"
                              )
                              continue

+                 file_info.append({
+                     'name': jsonl_file.name,
+                     'hash': file_hash,
+                     'entries': entry_count
+                 })
+
              except Exception as e:
                  logger.warning(f"Error reading {jsonl_file.name}: {e}")
                  continue

-         logger.info(f"Collected {len(all_logs)} log entries from {len(jsonl_files)} files")
-         return all_logs
+         logger.info(f"Collected {len(all_logs)} log entries from {files_read} files")
+         return all_logs, files_read, file_info

      except Exception as e:
          logger.error(f"Error collecting JSONL files: {e}")
-         return []
+         return [], 0, []


  def collect_recent_logs(
-     limit: int = 5,
+     limit: int = 3,
      project_name: Optional[str] = None,
      base_path: Optional[str] = None,
      username: Optional[str] = None,
      platform_name: Optional[str] = None
- ) -> Optional[List[Dict[str, Any]]]:
+ ) -> Optional[tuple[List[Dict[str, Any]], int, List[Dict[str, str]]]]:
      """
      Collect recent JSONL logs from .claude/Projects directory.

      Args:
-         limit: Maximum number of log files to read (default: 5)
+         limit: Maximum number of log files to read (default: 3)
          project_name: Specific project name or None for most recent
-         base_path: Direct path override to logs directory
+         base_path: Direct path override to logs directory OR a specific .jsonl file
          username: Windows username override
          platform_name: Platform override for testing ('Darwin', 'Windows', 'Linux')

      Returns:
-         List of log entry dicts or None if no logs found
+         Tuple of (list of log entry dicts, number of files read, list of file info) or None if no logs found

      Raises:
          ValueError: If limit is not positive or project_name is invalid
@@ -214,7 +236,63 @@ def collect_recent_logs(
          raise ValueError(f"Limit must be positive, got {limit}")

      try:
-         # Resolve projects root
+         # Check if base_path points to a specific .jsonl file
+         if base_path:
+             base_path_obj = Path(base_path)
+             if base_path_obj.is_file() and base_path_obj.suffix == '.jsonl':
+                 # Handle direct file path
+                 logger.info(f"Reading specific log file: {base_path_obj}")
+
+                 if not base_path_obj.exists():
+                     logger.warning(f"Specified log file does not exist: {base_path_obj}")
+                     return None
+
+                 # Read the single file
+                 all_logs = []
+                 file_info = []
+
+                 try:
+                     # Calculate file hash
+                     hasher = hashlib.sha256()
+                     entry_count = 0
+
+                     with open(base_path_obj, 'rb') as f:
+                         for chunk in iter(lambda: f.read(4096), b''):
+                             hasher.update(chunk)
+
+                     file_hash = hasher.hexdigest()[:8]
+
+                     # Read and parse the file
+                     with open(base_path_obj, 'r', encoding='utf-8') as f:
+                         for line_num, line in enumerate(f, 1):
+                             line = line.strip()
+                             if not line:
+                                 continue
+
+                             try:
+                                 log_entry = json.loads(line)
+                                 all_logs.append(log_entry)
+                                 entry_count += 1
+                             except json.JSONDecodeError as e:
+                                 logger.debug(
+                                     f"Skipping malformed JSON in {base_path_obj.name}:{line_num}: {e}"
+                                 )
+                                 continue
+
+                     file_info.append({
+                         'name': base_path_obj.name,
+                         'hash': file_hash,
+                         'entries': entry_count
+                     })
+
+                     logger.info(f"Collected {len(all_logs)} log entries from {base_path_obj.name}")
+                     return all_logs, 1, file_info
+
+                 except Exception as e:
+                     logger.error(f"Error reading log file {base_path_obj}: {e}")
+                     return None
+
+         # Original directory-based logic
          base = Path(base_path) if base_path else None
          projects_root = _resolve_projects_root(
              platform_name=platform_name,
@@ -237,12 +315,12 @@ def collect_recent_logs(
              return None

          # Collect logs
-         logs = _collect_jsonl_files(project_path, limit)
+         logs, files_read, file_info = _collect_jsonl_files(project_path, limit)

          if not logs:
              return None

-         return logs
+         return logs, files_read, file_info

      except Exception as e:
          logger.error(f"Failed to collect logs: {e}")
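The two changes to `collect_recent_logs` meet in a new calling convention: it may now be handed a single .jsonl file, and it returns a three-element tuple, exactly as the WebSocketClient hunk earlier unpacks it. A minimal call sketch (paths are hypothetical):

```python
from scripts.agent_logs import collect_recent_logs

# Directory mode: newest project, 3 most recent files (the new default).
result = collect_recent_logs(limit=3)

# Single-file mode, new in 1.0.8: point base_path at one .jsonl session log.
# result = collect_recent_logs(base_path="/path/to/.claude/projects/my-project/session.jsonl")

if result:
    logs, files_read, file_info = result
    print(f"{len(logs)} entries from {files_read} file(s)")
    for info in file_info:
        print(f"  {info['name']} (hash: {info['hash']}, {info['entries']} entries)")
```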
scripts/verify_log_transmission.py ADDED
@@ -0,0 +1,140 @@
+ #!/usr/bin/env python3
+ """
+ Verification script to prove JSONL logs are bundled in payload
+ """
+
+ import json
+ import sys
+ from pathlib import Path
+
+ # Add parent directory to path
+ sys.path.insert(0, str(Path(__file__).parent.parent))
+
+ from scripts.agent_logs import collect_recent_logs
+
+
+ def verify_log_bundling(log_path: str):
+     """
+     Verify that logs are properly collected and bundled
+
+     Args:
+         log_path: Path to JSONL file or directory
+     """
+     print("=" * 70)
+     print("JSONL LOG TRANSMISSION VERIFICATION")
+     print("=" * 70)
+     print()
+
+     # Step 1: Collect logs
+     print("Step 1: Collecting logs from file...")
+     result = collect_recent_logs(limit=1, base_path=log_path)
+
+     if not result:
+         print("❌ FAILED: No logs collected")
+         return False
+
+     logs, files_read, file_info = result
+     print(f"✓ Successfully collected {len(logs)} log entries from {files_read} file(s)")
+     print()
+
+     # Step 2: Show file details
+     print("Step 2: File details...")
+     for info in file_info:
+         print(f" File: {info['name']}")
+         print(f" Hash: {info['hash']}")
+         print(f" Entries: {info['entries']}")
+     print()
+
+     # Step 3: Simulate payload creation
+     print("Step 3: Simulating WebSocket payload creation...")
+     payload = {
+         "type": "message_create",
+         "run_id": "test-run-id",
+         "payload": {
+             "message": "Test message with logs",
+             "jsonl_logs": logs  # This is where logs are added
+         }
+     }
+
+     print(f"✓ Payload created with 'jsonl_logs' key")
+     print(f" Payload keys: {list(payload['payload'].keys())}")
+     print()
+
+     # Step 4: Verify payload size
+     print("Step 4: Calculating payload size...")
+     payload_json = json.dumps(payload)
+     payload_size_bytes = len(payload_json.encode('utf-8'))
+     payload_size_kb = payload_size_bytes / 1024
+     payload_size_mb = payload_size_kb / 1024
+
+     if payload_size_mb >= 1:
+         size_str = f"{payload_size_mb:.2f} MB"
+     elif payload_size_kb >= 1:
+         size_str = f"{payload_size_kb:.2f} KB"
+     else:
+         size_str = f"{payload_size_bytes} bytes"
+
+     print(f"✓ Total payload size: {size_str}")
+     print()
+
+     # Step 5: Show sample log entries
+     print("Step 5: Sample log entries in payload...")
+     if logs:
+         print(f" First entry keys: {list(logs[0].keys())}")
+         print(f" First entry timestamp: {logs[0].get('timestamp', 'N/A')}")
+         print(f" Last entry timestamp: {logs[-1].get('timestamp', 'N/A')}")
+     print()
+
+     # Step 6: Verify transmission-ready
+     print("Step 6: Transmission verification...")
+     print(f"✓ Payload is valid JSON: {payload_json is not None}")
+     print(f"✓ Payload contains 'jsonl_logs': {'jsonl_logs' in payload['payload']}")
+     print(f"✓ Log count in payload: {len(payload['payload']['jsonl_logs'])}")
+     print()
+
+     print("=" * 70)
+     print("✅ VERIFICATION COMPLETE")
+     print("=" * 70)
+     print()
+     print("PROOF OF TRANSMISSION:")
+     print(f" • {len(logs)} JSONL log entries are bundled in the payload")
+     print(f" • Payload size: {size_str}")
+     print(f" • Ready for WebSocket transmission to backend")
+     print()
+
+     # Optional: Save proof file
+     proof_file = Path("/tmp/zen_transmission_proof.json")
+     proof_payload = {
+         "verification_timestamp": "verification_run",
+         "log_count": len(logs),
+         "files_read": files_read,
+         "file_info": file_info,
+         "payload_size": size_str,
+         "sample_first_entry": logs[0] if logs else None,
+         "sample_last_entry": logs[-1] if logs else None,
+         "payload_structure": {
+             "type": payload["type"],
+             "run_id": payload["run_id"],
+             "payload_keys": list(payload["payload"].keys()),
+             "jsonl_logs_present": "jsonl_logs" in payload["payload"],
+             "jsonl_logs_count": len(payload["payload"]["jsonl_logs"])
+         }
+     }
+
+     with open(proof_file, 'w') as f:
+         json.dump(proof_payload, f, indent=2)
+
+     print(f"📝 Detailed proof saved to: {proof_file}")
+     print()
+
+     return True
+
+
+ if __name__ == "__main__":
+     if len(sys.argv) < 2:
+         print("Usage: python verify_log_transmission.py <path-to-jsonl-file>")
+         sys.exit(1)
+
+     log_path = sys.argv[1]
+     success = verify_log_bundling(log_path)
+     sys.exit(0 if success else 1)
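The new script doubles as a CLI; a usage sketch based on its own `__main__` guard (run it from the package root so the `scripts.agent_logs` import resolves; exit code 0 on success, 1 otherwise, and the path is hypothetical):

```bash
python scripts/verify_log_transmission.py ~/.claude/projects/my-project/session.jsonl
cat /tmp/zen_transmission_proof.json   # detailed proof written by the script
```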