connectonion 0.5.1__py3-none-any.whl → 0.5.3__py3-none-any.whl

This diff covers publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between versions as they appear in their respective public registries.
Files changed (33)
  1. connectonion/__init__.py +4 -1
  2. connectonion/agent.py +47 -98
  3. connectonion/announce.py +3 -3
  4. connectonion/asgi.py +225 -0
  5. connectonion/cli/commands/deploy_commands.py +6 -5
  6. connectonion/connect.py +1 -1
  7. connectonion/console.py +370 -71
  8. connectonion/host.py +559 -0
  9. connectonion/llm.py +0 -4
  10. connectonion/logger.py +15 -4
  11. connectonion/relay.py +3 -3
  12. connectonion/static/docs.html +591 -0
  13. connectonion/tool_registry.py +3 -1
  14. connectonion/useful_plugins/calendar_plugin.py +9 -0
  15. connectonion/useful_plugins/eval.py +9 -0
  16. connectonion/useful_plugins/gmail_plugin.py +9 -0
  17. connectonion/useful_plugins/image_result_formatter.py +9 -0
  18. connectonion/useful_plugins/re_act.py +9 -0
  19. connectonion/useful_plugins/shell_approval.py +9 -0
  20. connectonion/useful_tools/diff_writer.py +9 -0
  21. connectonion/useful_tools/gmail.py +9 -0
  22. connectonion/useful_tools/google_calendar.py +9 -0
  23. connectonion/useful_tools/microsoft_calendar.py +9 -0
  24. connectonion/useful_tools/outlook.py +9 -0
  25. connectonion/useful_tools/shell.py +9 -0
  26. connectonion/useful_tools/slash_command.py +11 -1
  27. connectonion/useful_tools/terminal.py +9 -0
  28. connectonion/useful_tools/todo_list.py +11 -1
  29. connectonion/useful_tools/web_fetch.py +9 -0
  30. {connectonion-0.5.1.dist-info → connectonion-0.5.3.dist-info}/METADATA +4 -2
  31. {connectonion-0.5.1.dist-info → connectonion-0.5.3.dist-info}/RECORD +33 -30
  32. {connectonion-0.5.1.dist-info → connectonion-0.5.3.dist-info}/WHEEL +0 -0
  33. {connectonion-0.5.1.dist-info → connectonion-0.5.3.dist-info}/entry_points.txt +0 -0
connectonion/__init__.py CHANGED
@@ -1,6 +1,6 @@
  """ConnectOnion - A simple agent framework with behavior tracking."""
 
- __version__ = "0.5.1"
+ __version__ = "0.5.3"
 
  # Auto-load .env files for the entire framework
  from dotenv import load_dotenv
@@ -21,6 +21,7 @@ from .decorators import replay, xray_replay
  from .useful_tools import send_email, get_emails, mark_read, mark_unread, Memory, Gmail, GoogleCalendar, Outlook, MicrosoftCalendar, WebFetch, Shell, DiffWriter, pick, yes_no, autocomplete, TodoList, SlashCommand
  from .auto_debug_exception import auto_debug_exception
  from .connect import connect, RemoteAgent
+ from .host import host, create_app
  from .events import (
      after_user_input,
      before_llm,
@@ -63,6 +64,8 @@ __all__ = [
      "auto_debug_exception",
      "connect",
      "RemoteAgent",
+     "host",
+     "create_app",
      "after_user_input",
      "before_llm",
      "after_llm",
connectonion/agent.py CHANGED
@@ -1,12 +1,12 @@
  """
  Purpose: Orchestrate AI agent execution with LLM calls, tool execution, and automatic logging
  LLM-Note:
-   Dependencies: imports from [llm.py, tool_factory.py, prompts.py, decorators.py, logger.py, tool_executor.py, trust.py, tool_registry.py] | imported by [__init__.py, trust.py, debug_agent/__init__.py] | tested by [tests/test_agent.py, tests/test_agent_prompts.py, tests/test_agent_workflows.py]
+   Dependencies: imports from [llm.py, tool_factory.py, prompts.py, decorators.py, logger.py, tool_executor.py, tool_registry.py] | imported by [__init__.py, debug_agent/__init__.py] | tested by [tests/test_agent.py, tests/test_agent_prompts.py, tests/test_agent_workflows.py]
    Data flow: receives user prompt: str from Agent.input() → creates/extends current_session with messages → calls llm.complete() with tool schemas → receives LLMResponse with tool_calls → executes tools via tool_executor.execute_and_record_tools() → appends tool results to messages → repeats loop until no tool_calls or max_iterations → logger logs to .co/logs/{name}.log and .co/sessions/{name}_{timestamp}.yaml → returns final response: str
-   State/Effects: modifies self.current_session['messages', 'trace', 'turn', 'iteration'] | writes to .co/logs/{name}.log and .co/sessions/ via logger.py | initializes trust agent if trust parameter provided
-   Integration: exposes Agent(name, tools, system_prompt, model, trust, log, quiet), .input(prompt), .execute_tool(name, args), .add_tool(func), .remove_tool(name), .list_tools(), .reset_conversation() | tools stored in ToolRegistry with attribute access (agent.tools.tool_name) and instance storage (agent.tools.gmail) | tool execution delegates to tool_executor module | trust system via trust.create_trust_agent() | log defaults to .co/logs/ (None), can be True (current dir), False (disabled), or custom path | quiet=True suppresses console but keeps session logging
+   State/Effects: modifies self.current_session['messages', 'trace', 'turn', 'iteration'] | writes to .co/logs/{name}.log and .co/sessions/ via logger.py
+   Integration: exposes Agent(name, tools, system_prompt, model, log, quiet), .input(prompt), .execute_tool(name, args), .add_tool(func), .remove_tool(name), .list_tools(), .reset_conversation() | tools stored in ToolRegistry with attribute access (agent.tools.tool_name) and instance storage (agent.tools.gmail) | tool execution delegates to tool_executor module | log defaults to .co/logs/ (None), can be True (current dir), False (disabled), or custom path | quiet=True suppresses console but keeps session logging | trust enforcement moved to host() for network access control
    Performance: max_iterations=10 default (configurable per-input) | session state persists across turns for multi-turn conversations | ToolRegistry provides O(1) tool lookup via .get() or attribute access
-   Errors: LLM errors bubble up | tool execution errors captured in trace and returned to LLM for retry | trust agent creation can fail if invalid trust parameter
+   Errors: LLM errors bubble up | tool execution errors captured in trace and returned to LLM for retry
  """
 
  import os
@@ -26,8 +26,7 @@ from .logger import Logger
  from .tool_executor import execute_and_record_tools, execute_single_tool
  from .events import EventHandler
 
- # Handle trust parameter - convert to trust agent
- from .trust import create_trust_agent, get_default_trust_level
+
  class Agent:
      """Agent that can use tools to complete tasks."""
 
@@ -40,7 +39,6 @@ class Agent:
          api_key: Optional[str] = None,
          model: str = "co/gemini-2.5-pro",
          max_iterations: int = 10,
-         trust: Optional[Union[str, Path, 'Agent']] = None,
          log: Optional[Union[bool, str, Path]] = None,
          quiet: bool = False,
          plugins: Optional[List[List[EventHandler]]] = None,
@@ -64,20 +62,6 @@ class Agent:
              effective_log = Path(os.getenv('CONNECTONION_LOG'))
 
          self.logger = Logger(agent_name=name, quiet=quiet, log=effective_log)
-
-
-
-         # If trust is None, check for environment default
-         if trust is None:
-             trust = get_default_trust_level()
-
-         # Only create trust agent if we're not already a trust agent
-         # (to prevent infinite recursion when creating trust agents)
-         if name and name.startswith('trust_agent_'):
-             self.trust = None  # Trust agents don't need their own trust agents
-         else:
-             # Store the trust agent directly (or None)
-             self.trust = create_trust_agent(trust, api_key=api_key, model=model)
 
          # Initialize event registry
          # Note: before_each_tool/after_each_tool fire for EACH tool
@@ -145,6 +129,18 @@ class Agent:
          # - co/ models check OPENONION_API_KEY
          self.llm = create_llm(model=model, api_key=api_key)
 
+         # Print banner (if console enabled)
+         if self.logger.console:
+             # Determine log_dir if logging is enabled
+             log_dir = ".co/" if self.logger.enable_sessions else None
+             self.logger.console.print_banner(
+                 agent_name=self.name,
+                 model=self.llm.model,
+                 tools=len(self.tools),
+                 log_dir=log_dir,
+                 llm=self.llm
+             )
+
      def _invoke_events(self, event_type: str):
          """Invoke all event handlers for given type. Exceptions propagate (fail fast)."""
          for handler in self.events.get(event_type, []):
@@ -180,21 +176,39 @@ class Agent:
 
          self.events[event_type].append(event_func)
 
-     def input(self, prompt: str, max_iterations: Optional[int] = None) -> str:
+     def input(self, prompt: str, max_iterations: Optional[int] = None,
+               session: Optional[Dict] = None) -> str:
          """Provide input to the agent and get response.
 
          Args:
              prompt: The input prompt or data to process
              max_iterations: Override agent's max_iterations for this request
+             session: Optional session to continue a conversation. Pass the session
+                 from a previous response to maintain context. Contains:
+                 - session_id: Conversation identifier
+                 - messages: Conversation history
+                 - trace: Execution trace for debugging
+                 - turn: Turn counter
 
          Returns:
              The agent's response after processing the input
          """
          start_time = time.time()
-         self.logger.print(f"[bold]INPUT:[/bold] {prompt[:100]}...")
+         if self.logger.console:
+             self.logger.console.print_task(prompt)
 
-         # Initialize session on first input, or continue existing conversation
-         if self.current_session is None:
+         # Session restoration: if session passed, restore it (stateless API continuation)
+         if session is not None:
+             self.current_session = {
+                 'session_id': session.get('session_id'),
+                 'messages': list(session.get('messages', [])),
+                 'trace': list(session.get('trace', [])),
+                 'turn': session.get('turn', 0)
+             }
+             # Start YAML session logging with session_id for thread safety
+             self.logger.start_session(self.system_prompt, session_id=session.get('session_id'))
+         elif self.current_session is None:
+             # Initialize new session
              self.current_session = {
                  'messages': [{"role": "system", "content": self.system_prompt}],
                  'trace': [],
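
This hunk is the server-side half of stateless conversation continuation: instead of the server holding state between requests, callers round-trip the session dict from one turn into the next. The same mechanism works locally; a minimal sketch (prompts are illustrative, the session keys come from the docstring above):

    from connectonion import Agent

    agent = Agent("assistant", quiet=True)
    agent.input("My name is Ada.")
    saved = dict(agent.current_session)   # messages, trace, turn (+ session_id when present)

    # Later, even on a fresh instance or process, resume from the saved session:
    agent2 = Agent("assistant", quiet=True)
    reply = agent2.input("What is my name?", session=saved)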
@@ -236,7 +250,11 @@
 
          self.current_session['result'] = result
 
-         self.logger.print(f"[green]✓ Complete[/green] ({duration:.1f}s)")
+         # Print completion summary
+         if self.logger.console:
+             session_path = f".co/sessions/{self.name}.yaml" if self.logger.enable_sessions else None
+             self.logger.console.print_completion(duration, self.current_session, session_path)
+
          self._invoke_events('on_complete')
 
          # Log turn to YAML session (after on_complete so handlers can modify state)
@@ -313,9 +331,6 @@
          """Run the main LLM/tool iteration loop until complete or max iterations."""
          while self.current_session['iteration'] < max_iterations:
              self.current_session['iteration'] += 1
-             iteration = self.current_session['iteration']
-
-             self.logger.print(f"[dim]Iteration {iteration}/{max_iterations}[/dim]")
 
              # Get LLM response
              response = self._get_llm_decision()
@@ -339,9 +354,8 @@
          tool_schemas = [tool.to_function_schema() for tool in self.tools] if self.tools else None
 
          # Show request info
-         msg_count = len(self.current_session['messages'])
-         tool_count = len(self.tools) if self.tools else 0
-         self.logger.print(f"[yellow]→[/yellow] LLM Request ({self.llm.model}) • {msg_count} msgs • {tool_count} tools")
+         if self.logger.console:
+             self.logger.console.print_llm_request(self.llm.model, self.current_session, self.max_iterations)
 
          # Invoke before_llm events
          self._invoke_events('before_llm')
@@ -369,7 +383,7 @@
          # Invoke after_llm events (after trace entry is added)
          self._invoke_events('after_llm')
 
-         self.logger.log_llm_response(duration, len(response.tool_calls), response.usage)
+         self.logger.log_llm_response(self.llm.model, duration, len(response.tool_calls), response.usage)
 
          return response
 
@@ -434,68 +448,3 @@
          debugger = InteractiveDebugger(self)
          debugger.start_debug_session(prompt)
 
-     def serve(self, relay_url: str = "wss://oo.openonion.ai/ws/announce"):
-         """
-         Start serving this agent on the relay network.
-
-         This makes the agent discoverable and connectable by other agents.
-         The agent will:
-         1. Load/generate Ed25519 keys for identity
-         2. Connect to relay server
-         3. Send ANNOUNCE message with agent summary
-         4. Wait for incoming TASK messages
-         5. Process tasks and send responses
-
-         Args:
-             relay_url: WebSocket URL for relay (default: production relay)
-
-         Example:
-             >>> agent = Agent("translator", tools=[translate])
-             >>> agent.serve()  # Runs forever, processing tasks
-             ✓ Announced to relay: 0x3d4017c3...
-             Debug: https://oo.openonion.ai/agent/0x3d4017c3e843895a...
-             ♥ Sent heartbeat
-             → Received task: abc12345...
-             ✓ Sent response: abc12345...
-
-         Note:
-             This is a blocking call. The agent will run until interrupted (Ctrl+C).
-         """
-         import asyncio
-         from . import address, announce, relay
-
-         # Load or generate keys
-         co_dir = Path.cwd() / '.co'
-         addr_data = address.load(co_dir)
-
-         if addr_data is None:
-             self.logger.print("[yellow]No keys found, generating new identity...[/yellow]")
-             addr_data = address.generate()
-             address.save(addr_data, co_dir)
-             self.logger.print(f"[green]✓ Keys saved to {co_dir / 'keys'}[/green]")
-
-         # Create ANNOUNCE message
-         # Use system_prompt as summary (first 1000 chars)
-         summary = self.system_prompt[:1000] if self.system_prompt else f"{self.name} agent"
-         announce_msg = announce.create_announce_message(
-             addr_data,
-             summary,
-             endpoints=[]  # MVP: No direct endpoints yet
-         )
-
-         self.logger.print(f"\n[bold]Starting agent: {self.name}[/bold]")
-         self.logger.print(f"Address: {addr_data['address']}")
-         self.logger.print(f"Debug: https://oo.openonion.ai/agent/{addr_data['address']}\n")
-
-         # Define async task handler
-         async def task_handler(prompt: str) -> str:
-             """Handle incoming task by running through agent.input()"""
-             return self.input(prompt)
-
-         # Run serve loop
-         async def run():
-             ws = await relay.connect(relay_url)
-             await relay.serve_loop(ws, announce_msg, task_handler)
-
-         # Run the async loop
-         asyncio.run(run())
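
Agent.serve() is gone; hosting is now a separate concern handled by the new module-level host() (host.py, +559 lines, not diffed line-by-line here), which also took over trust enforcement. host()'s exact signature does not appear in this diff, so the migration sketch below is an assumption beyond the host(agent, ...) pattern implied by the new exports (translate is the illustrative tool from the removed docstring):

    from connectonion import Agent, host

    agent = Agent("translator", tools=[translate])

    # 0.5.1: agent.serve(relay_url="wss://oo.openonion.ai/ws/announce")  (removed)
    # 0.5.3 (hypothetical signature; trust levels per asgi.py are open/careful/strict):
    host(agent, trust="careful")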
connectonion/announce.py CHANGED
@@ -1,10 +1,10 @@
  """
  Purpose: Build and sign ANNOUNCE messages for agent relay network registration
  LLM-Note:
-   Dependencies: imports from [json, time, typing, address.py] | imported by [agent.py] | tested by [tests/test_announce.py]
-   Data flow: receives from Agent.serve() → create_announce_message(address_data, summary, endpoints) → builds message dict without signature → serializes to deterministic JSON (sort_keys=True) → calls address.sign() to create Ed25519 signature → returns signed message ready for relay
+   Dependencies: imports from [json, time, typing, address.py] | imported by [host.py] | tested by [tests/test_announce.py]
+   Data flow: receives from host() → create_announce_message(address_data, summary, endpoints) → builds message dict without signature → serializes to deterministic JSON (sort_keys=True) → calls address.sign() to create Ed25519 signature → returns signed message ready for relay
    State/Effects: no side effects | pure function | deterministic JSON serialization (matches server verification) | signature is hex string without 0x prefix
-   Integration: exposes create_announce_message(address_data, summary, endpoints) | used by Agent.serve() to announce agent presence to relay network | relay server verifies signature using address (public key) | heartbeat re-sends with updated timestamp
+   Integration: exposes create_announce_message(address_data, summary, endpoints) | used by host() to announce agent presence to relay network | relay server verifies signature using address (public key) | heartbeat re-sends with updated timestamp
    Performance: Ed25519 signing is fast (sub-millisecond) | JSON serialization minimal overhead | no I/O or network calls
    Errors: raises KeyError if address_data missing required keys | address.sign() errors bubble up | no validation of summary length or endpoint format
connectonion/asgi.py ADDED
@@ -0,0 +1,225 @@
+ """Raw ASGI utilities for HTTP/WebSocket handling.
+
+ This module contains the protocol-level code for handling HTTP and WebSocket
+ requests. Separated from host.py for better testing and smaller file size.
+
+ Design decision: Raw ASGI instead of Starlette/FastAPI for full protocol control.
+ See: docs/design-decisions/022-raw-asgi-implementation.md
+ """
+ import json
+ from pathlib import Path
+
+
+ async def read_body(receive) -> bytes:
+     """Read complete request body from ASGI receive."""
+     body = b""
+     while True:
+         m = await receive()
+         body += m.get("body", b"")
+         if not m.get("more_body"):
+             break
+     return body
+
+
+ async def send_json(send, data: dict, status: int = 200):
+     """Send JSON response via ASGI send."""
+     body = json.dumps(data).encode()
+     await send({"type": "http.response.start", "status": status,
+                 "headers": [[b"content-type", b"application/json"]]})
+     await send({"type": "http.response.body", "body": body})
+
+
+ async def send_html(send, html: bytes, status: int = 200):
+     """Send HTML response via ASGI send."""
+     await send({
+         "type": "http.response.start",
+         "status": status,
+         "headers": [[b"content-type", b"text/html; charset=utf-8"]],
+     })
+     await send({"type": "http.response.body", "body": html})
+
+
+ async def handle_http(
+     scope,
+     receive,
+     send,
+     *,
+     handlers: dict,
+     storage,
+     trust: str,
+     result_ttl: int,
+     start_time: float,
+     blacklist: list | None = None,
+     whitelist: list | None = None,
+ ):
+     """Route HTTP requests to handlers.
+
+     Args:
+         scope: ASGI scope dict (method, path, headers, etc.)
+         receive: ASGI receive callable
+         send: ASGI send callable
+         handlers: Dict of handler functions (input, session, sessions, health, info, auth)
+         storage: SessionStorage instance
+         trust: Trust level (open/careful/strict)
+         result_ttl: How long to keep results in seconds
+         start_time: Server start time
+         blacklist: Blocked identities
+         whitelist: Allowed identities
+     """
+     method, path = scope["method"], scope["path"]
+
+     if method == "POST" and path == "/input":
+         body = await read_body(receive)
+         try:
+             data = json.loads(body) if body else {}
+         except json.JSONDecodeError:
+             await send_json(send, {"error": "Invalid JSON"}, 400)
+             return
+
+         prompt, identity, sig_valid, err = handlers["auth"](
+             data, trust, blacklist=blacklist, whitelist=whitelist
+         )
+         if err:
+             status = 401 if err.startswith("unauthorized") else 403 if err.startswith("forbidden") else 400
+             await send_json(send, {"error": err}, status)
+             return
+
+         # Extract session for conversation continuation
+         session = data.get("session")
+         result = handlers["input"](storage, prompt, result_ttl, session)
+         await send_json(send, result)
+
+     elif method == "GET" and path.startswith("/sessions/"):
+         result = handlers["session"](storage, path[10:])
+         await send_json(send, result or {"error": "not found"}, 404 if not result else 200)
+
+     elif method == "GET" and path == "/sessions":
+         await send_json(send, handlers["sessions"](storage))
+
+     elif method == "GET" and path == "/health":
+         await send_json(send, handlers["health"](start_time))
+
+     elif method == "GET" and path == "/info":
+         await send_json(send, handlers["info"](trust))
+
+     elif method == "GET" and path == "/docs":
+         # Serve static docs page
+         try:
+             base = Path(__file__).resolve().parent
+             html_path = base / "static" / "docs.html"
+             html = html_path.read_bytes()
+         except Exception:
+             html = b"<html><body><h1>ConnectOnion Docs</h1><p>Docs not found.</p></body></html>"
+         await send_html(send, html)
+
+     else:
+         await send_json(send, {"error": "not found"}, 404)
+
+
+ async def handle_websocket(
+     scope,
+     receive,
+     send,
+     *,
+     handlers: dict,
+     trust: str,
+     blacklist: list | None = None,
+     whitelist: list | None = None,
+ ):
+     """Handle WebSocket connections at /ws.
+
+     Args:
+         scope: ASGI scope dict
+         receive: ASGI receive callable
+         send: ASGI send callable
+         handlers: Dict with 'ws_input' and 'auth' handlers
+         trust: Trust level
+         blacklist: Blocked identities
+         whitelist: Allowed identities
+     """
+     if scope["path"] != "/ws":
+         await send({"type": "websocket.close", "code": 4004})
+         return
+
+     await send({"type": "websocket.accept"})
+
+     while True:
+         msg = await receive()
+         if msg["type"] == "websocket.disconnect":
+             break
+         if msg["type"] == "websocket.receive":
+             try:
+                 data = json.loads(msg.get("text", "{}"))
+             except json.JSONDecodeError:
+                 await send({"type": "websocket.send",
+                             "text": json.dumps({"type": "ERROR", "message": "Invalid JSON"})})
+                 continue
+
+             if data.get("type") == "INPUT":
+                 prompt, identity, sig_valid, err = handlers["auth"](
+                     data, trust, blacklist=blacklist, whitelist=whitelist
+                 )
+                 if err:
+                     await send({"type": "websocket.send",
+                                 "text": json.dumps({"type": "ERROR", "message": err})})
+                     continue
+                 if not prompt:
+                     await send({"type": "websocket.send",
+                                 "text": json.dumps({"type": "ERROR", "message": "prompt required"})})
+                     continue
+                 result = handlers["ws_input"](prompt)
+                 await send({"type": "websocket.send",
+                             "text": json.dumps({"type": "OUTPUT", "result": result})})
+
+
+ def create_app(
+     *,
+     handlers: dict,
+     storage,
+     trust: str = "careful",
+     result_ttl: int = 86400,
+     blacklist: list | None = None,
+     whitelist: list | None = None,
+ ):
+     """Create ASGI application.
+
+     Args:
+         handlers: Dict of handler functions
+         storage: SessionStorage instance
+         trust: Trust level (open/careful/strict)
+         result_ttl: How long to keep results in seconds
+         blacklist: Blocked identities
+         whitelist: Allowed identities
+
+     Returns:
+         ASGI application callable
+     """
+     import time
+     start_time = time.time()
+
+     async def app(scope, receive, send):
+         if scope["type"] == "http":
+             await handle_http(
+                 scope,
+                 receive,
+                 send,
+                 handlers=handlers,
+                 storage=storage,
+                 trust=trust,
+                 result_ttl=result_ttl,
+                 start_time=start_time,
+                 blacklist=blacklist,
+                 whitelist=whitelist,
+             )
+         elif scope["type"] == "websocket":
+             await handle_websocket(
+                 scope,
+                 receive,
+                 send,
+                 handlers=handlers,
+                 trust=trust,
+                 blacklist=blacklist,
+                 whitelist=whitelist,
+             )
+
+     return app
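
In normal use host() supplies the handlers dict, but create_app() can be driven directly. A minimal sketch with stub handlers — only the call signatures are taken from handle_http/handle_websocket above; every handler body, and the choice of uvicorn as the ASGI server, is illustrative:

    import time
    import uvicorn  # any ASGI server works; uvicorn is just an example
    from connectonion.asgi import create_app

    def auth(data, trust, *, blacklist=None, whitelist=None):
        # Contract: returns (prompt, identity, sig_valid, err); no-op auth for the sketch
        prompt = data.get("prompt")
        return prompt, None, True, (None if prompt else "prompt required")

    handlers = {
        "auth": auth,
        "input": lambda storage, prompt, result_ttl, session: {"result": prompt.upper()},
        "ws_input": lambda prompt: prompt.upper(),
        "session": lambda storage, session_id: None,        # nothing stored in this sketch
        "sessions": lambda storage: {"sessions": []},
        "health": lambda start_time: {"status": "ok", "uptime": time.time() - start_time},
        "info": lambda trust: {"trust": trust},
    }

    app = create_app(handlers=handlers, storage=None, trust="open")
    uvicorn.run(app, host="127.0.0.1", port=8000)
    # POST {"prompt": "hi"} to http://127.0.0.1:8000/input → {"result": "HI"}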
connectonion/cli/commands/deploy_commands.py CHANGED
@@ -2,7 +2,7 @@
  Purpose: Deploy agent projects to ConnectOnion Cloud with git archive packaging and secrets management
  LLM-Note:
    Dependencies: imports from [os, subprocess, tempfile, time, toml, requests, pathlib, rich.console, dotenv] | imported by [cli/main.py via handle_deploy()] | calls backend at [https://oo.openonion.ai/api/v1/deploy]
-   Data flow: handle_deploy() → validates git repo and .co/config.toml → _get_api_key() loads OPENONION_API_KEY → reads config.toml for project name and secrets path → dotenv_values() loads secrets from .env → git archive creates tarball of HEAD → POST to /api/v1/deploy with tarball + secrets → polls /api/v1/deploy/status/{id} until running/error → displays agent URL
+   Data flow: handle_deploy() → validates git repo and .co/config.toml → _get_api_key() loads OPENONION_API_KEY → reads config.toml for project name and secrets path → dotenv_values() loads secrets from .env → git archive creates tarball of HEAD → POST to /api/v1/deploy with tarball + project_name + secrets → polls /api/v1/deploy/{id}/status until running/error → displays agent URL
    State/Effects: creates temporary tarball file in tempdir | reads .co/config.toml, .env files | makes network POST request | prints progress to stdout via rich.Console | does not modify project files
    Integration: exposes handle_deploy() for CLI | expects git repo with .co/config.toml containing project.name, project.secrets, deploy.entrypoint | uses Bearer token auth | returns void (prints results)
    Performance: git archive is fast | network timeout 60s for upload, 10s for status checks | polls every 3s up to 100 times (~5 min)
@@ -87,6 +87,7 @@ def handle_deploy():
              f"{API_BASE}/api/v1/deploy",
              files={"package": ("agent.tar.gz", f, "application/gzip")},
              data={
+                 "project_name": project_name,
                  "secrets": str(secrets),
                  "entrypoint": entrypoint,
              },
@@ -98,13 +99,13 @@ def handle_deploy():
          console.print(f"[red]Deploy failed: {response.text}[/red]")
          return
 
-     deployment_id = response.json().get("deployment_id")
+     deployment_id = response.json().get("id")
 
      # Wait for deployment
      console.print("Building...")
      for _ in range(100):
          status_resp = requests.get(
-             f"{API_BASE}/api/v1/deploy/status/{deployment_id}",
+             f"{API_BASE}/api/v1/deploy/{deployment_id}/status",
              headers={"Authorization": f"Bearer {api_key}"},
              timeout=10,
          )
@@ -118,8 +119,8 @@ def handle_deploy():
              return
          time.sleep(3)
 
-     agent_url = response.json().get("agent_url", "")
+     url = response.json().get("url", "")
      console.print()
      console.print("[bold green]Deployed![/bold green]")
-     console.print(f"Agent URL: {agent_url}")
+     console.print(f"Agent URL: {url}")
      console.print()
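
Net effect on the backend contract: the upload now carries project_name, the create response returns id instead of deployment_id, the poll URL becomes /api/v1/deploy/{id}/status, and the final payload field is url. A hedged client sketch of that flow (field names are taken from this diff; the status-field name and any response shape beyond these fields are assumptions):

    import os
    import time
    import requests

    API_BASE = "https://oo.openonion.ai"
    headers = {"Authorization": f"Bearer {os.environ['OPENONION_API_KEY']}"}

    with open("agent.tar.gz", "rb") as f:
        resp = requests.post(
            f"{API_BASE}/api/v1/deploy",
            files={"package": ("agent.tar.gz", f, "application/gzip")},
            data={"project_name": "my-agent", "secrets": "{}", "entrypoint": "agent.py"},
            headers=headers,
            timeout=60,
        )
    deployment_id = resp.json().get("id")              # was "deployment_id" in 0.5.1

    for _ in range(100):                               # ~5 min at 3s per poll
        status = requests.get(
            f"{API_BASE}/api/v1/deploy/{deployment_id}/status",  # was /deploy/status/{id}
            headers=headers,
            timeout=10,
        ).json()
        if status.get("status") in ("running", "error"):         # field name assumed
            break
        time.sleep(3)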
connectonion/connect.py CHANGED
@@ -4,7 +4,7 @@ LLM-Note:
    Dependencies: imports from [asyncio, json, uuid, websockets] | imported by [__init__.py, tests/test_connect.py, examples/] | tested by [tests/test_connect.py, tests/integration/manual/network_connect_manual.py]
    Data flow: user calls connect(address, relay_url) → creates RemoteAgent instance → user calls .input(prompt) → _send_task() creates WebSocket to relay /ws/input → sends INPUT message with {type, input_id, to, prompt} → waits for OUTPUT response from relay → returns result string OR raises ConnectionError
    State/Effects: establishes temporary WebSocket connection per task (no persistent connection) | sends INPUT messages to relay | receives OUTPUT/ERROR messages | no file I/O or global state | asyncio.run() blocks on .input(), await on .input_async()
-   Integration: exposes connect(address, relay_url), RemoteAgent class with .input(prompt, timeout), .input_async(prompt, timeout) | default relay_url="wss://oo.openonion.ai/ws/announce" | address format: 0x + 64 hex chars (Ed25519 public key) | complements Agent.serve() which listens for INPUT on relay | Protocol: INPUT type with to/prompt fields → OUTPUT type with input_id/result fields
+   Integration: exposes connect(address, relay_url), RemoteAgent class with .input(prompt, timeout), .input_async(prompt, timeout) | default relay_url="wss://oo.openonion.ai/ws/announce" | address format: 0x + 64 hex chars (Ed25519 public key) | complements host() with relay_url which listens for INPUT on relay | Protocol: INPUT type with to/prompt fields → OUTPUT type with input_id/result fields
    Performance: creates new WebSocket connection per input() call (no connection pooling) | default timeout=30s | async under the hood (asyncio.run wraps for sync API) | no caching or retry logic
    Errors: raises ImportError if websockets not installed | raises ConnectionError for ERROR responses from relay | raises ConnectionError for unexpected response types | asyncio.TimeoutError if no response within timeout | WebSocket connection errors bubble up
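
The caller-side API is unchanged by this release; only the peer it complements moved from Agent.serve() to host(). A minimal sketch built from the documented surface above (the address is a placeholder):

    from connectonion import connect

    remote = connect("0x" + "ab" * 32)  # 0x + 64 hex chars (Ed25519 public key)
    result = remote.input("Translate 'hello' to French", timeout=30)
    print(result)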