livellm 1.4.5__tar.gz → 1.5.1__tar.gz

This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: livellm
- Version: 1.4.5
+ Version: 1.5.1
  Summary: Python client for the LiveLLM Server
  Project-URL: Homepage, https://github.com/qalby-tech/livellm-client-py
  Project-URL: Repository, https://github.com/qalby-tech/livellm-client-py
@@ -257,6 +257,51 @@ response = await client.agent_run(
  )
  ```
 
+ #### Agent with Conversation History
+
+ You can request the full conversation history (including tool calls and returns) by setting `include_history=True`:
+
+ ```python
+ from livellm.models import TextMessage, ToolCallMessage, ToolReturnMessage
+
+ # Request with history enabled
+ response = await client.agent_run(
+     provider_uid="openai",
+     model="gpt-4",
+     messages=[TextMessage(role="user", content="Search for latest AI news")],
+     tools=[WebSearchInput(kind=ToolKind.WEB_SEARCH)],
+     include_history=True  # Enable history in response
+ )
+
+ print(f"Output: {response.output}")
+
+ # Access full conversation history including tool interactions
+ if response.history:
+     for msg in response.history:
+         if isinstance(msg, TextMessage):
+             print(f"{msg.role}: {msg.content}")
+         elif isinstance(msg, ToolCallMessage):
+             print(f"Tool Call: {msg.tool_name}({msg.args})")
+         elif isinstance(msg, ToolReturnMessage):
+             print(f"Tool Return from {msg.tool_name}: {msg.content}")
+ ```
+
+ **History Message Types:**
+ - `TextMessage` - Regular text messages (user, model, system)
+ - `BinaryMessage` - Images or other binary content
+ - `ToolCallMessage` - Tool invocations made by the agent
+   - `tool_name` - Name of the tool called
+   - `args` - Arguments passed to the tool
+ - `ToolReturnMessage` - Results returned from tool calls
+   - `tool_name` - Name of the tool that was called
+   - `content` - The return value from the tool
+
+ **Use cases:**
+ - Debugging tool interactions
+ - Maintaining conversation state across multiple requests
+ - Auditing and logging complete conversations
+ - Building conversational UIs with full context visibility
+
  ### Audio Services
 
  #### Text-to-Speech
@@ -557,10 +602,12 @@ response = await client.ping()
  **Messages**
  - `TextMessage(role, content)` - Text message
  - `BinaryMessage(role, content, mime_type, caption?)` - Image/audio message
- - `MessageRole` - `USER` | `MODEL` | `SYSTEM` (or use strings: `"user"`, `"model"`, `"system"`)
+ - `ToolCallMessage(role, tool_name, args)` - Tool invocation by agent
+ - `ToolReturnMessage(role, tool_name, content)` - Tool execution result
+ - `MessageRole` - `USER` | `MODEL` | `SYSTEM` | `TOOL_CALL` | `TOOL_RETURN` (or use strings)
 
  **Requests**
- - `AgentRequest(provider_uid, model, messages, tools?, gen_config?)`
+ - `AgentRequest(provider_uid, model, messages, tools?, gen_config?, include_history?)` - Set `include_history=True` to get full conversation
  - `SpeakRequest(provider_uid, model, text, voice, mime_type, sample_rate, gen_config?)`
  - `TranscribeRequest(provider_uid, file, model, language?, gen_config?)`
  - `TranscriptionInitWsRequest(provider_uid, model, language?, input_sample_rate?, input_audio_format?, gen_config?)`
@@ -576,7 +623,7 @@ response = await client.ping()
  - `FallbackStrategy` - `SEQUENTIAL` | `PARALLEL`
 
  **Responses**
- - `AgentResponse(output, usage{input_tokens, output_tokens}, ...)`
+ - `AgentResponse(output, usage{input_tokens, output_tokens}, history?)` - `history` included when `include_history=True`
  - `TranscribeResponse(text, language)`
  - `TranscriptionWsResponse(transcription, is_end)` - Real-time transcription result
 
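The README's use-cases above mention maintaining conversation state across multiple requests. A minimal sketch of that pattern, using hypothetical dataclass stand-ins for the livellm message models (the real classes are Pydantic models imported from `livellm.models`):

```python
from dataclasses import dataclass

# Hypothetical stand-ins, for illustration only; the real classes
# are Pydantic models from livellm.models.
@dataclass
class TextMessage:
    role: str
    content: str

@dataclass
class ToolCallMessage:
    role: str
    tool_name: str
    args: dict

@dataclass
class ToolReturnMessage:
    role: str
    tool_name: str
    content: str

def text_turns(history):
    """Drop tool-call/tool-return entries, keeping only the text turns
    that can be replayed as `messages` in a follow-up agent_run call."""
    return [m for m in history if isinstance(m, TextMessage)]

# A history as agent_run might return it with include_history=True
history = [
    TextMessage(role="user", content="Search for latest AI news"),
    ToolCallMessage(role="tool_call", tool_name="web_search", args={"query": "latest AI news"}),
    ToolReturnMessage(role="tool_return", tool_name="web_search", content="..."),
    TextMessage(role="model", content="Here are the latest AI headlines."),
]

# Replay the text turns plus a new user turn as the next request's messages
followup = text_turns(history) + [TextMessage(role="user", content="Summarize the top story")]
```

Whether the server also accepts tool-call/tool-return messages as request input is not stated in the diff, so this sketch conservatively replays only the text turns.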
@@ -230,6 +230,51 @@ response = await client.agent_run(
  )
  ```
 
+ #### Agent with Conversation History
+
+ You can request the full conversation history (including tool calls and returns) by setting `include_history=True`:
+
+ ```python
+ from livellm.models import TextMessage, ToolCallMessage, ToolReturnMessage
+
+ # Request with history enabled
+ response = await client.agent_run(
+     provider_uid="openai",
+     model="gpt-4",
+     messages=[TextMessage(role="user", content="Search for latest AI news")],
+     tools=[WebSearchInput(kind=ToolKind.WEB_SEARCH)],
+     include_history=True  # Enable history in response
+ )
+
+ print(f"Output: {response.output}")
+
+ # Access full conversation history including tool interactions
+ if response.history:
+     for msg in response.history:
+         if isinstance(msg, TextMessage):
+             print(f"{msg.role}: {msg.content}")
+         elif isinstance(msg, ToolCallMessage):
+             print(f"Tool Call: {msg.tool_name}({msg.args})")
+         elif isinstance(msg, ToolReturnMessage):
+             print(f"Tool Return from {msg.tool_name}: {msg.content}")
+ ```
+
+ **History Message Types:**
+ - `TextMessage` - Regular text messages (user, model, system)
+ - `BinaryMessage` - Images or other binary content
+ - `ToolCallMessage` - Tool invocations made by the agent
+   - `tool_name` - Name of the tool called
+   - `args` - Arguments passed to the tool
+ - `ToolReturnMessage` - Results returned from tool calls
+   - `tool_name` - Name of the tool that was called
+   - `content` - The return value from the tool
+
+ **Use cases:**
+ - Debugging tool interactions
+ - Maintaining conversation state across multiple requests
+ - Auditing and logging complete conversations
+ - Building conversational UIs with full context visibility
+
  ### Audio Services
 
  #### Text-to-Speech
@@ -530,10 +575,12 @@ response = await client.ping()
  **Messages**
  - `TextMessage(role, content)` - Text message
  - `BinaryMessage(role, content, mime_type, caption?)` - Image/audio message
- - `MessageRole` - `USER` | `MODEL` | `SYSTEM` (or use strings: `"user"`, `"model"`, `"system"`)
+ - `ToolCallMessage(role, tool_name, args)` - Tool invocation by agent
+ - `ToolReturnMessage(role, tool_name, content)` - Tool execution result
+ - `MessageRole` - `USER` | `MODEL` | `SYSTEM` | `TOOL_CALL` | `TOOL_RETURN` (or use strings)
 
  **Requests**
- - `AgentRequest(provider_uid, model, messages, tools?, gen_config?)`
+ - `AgentRequest(provider_uid, model, messages, tools?, gen_config?, include_history?)` - Set `include_history=True` to get full conversation
  - `SpeakRequest(provider_uid, model, text, voice, mime_type, sample_rate, gen_config?)`
  - `TranscribeRequest(provider_uid, file, model, language?, gen_config?)`
  - `TranscriptionInitWsRequest(provider_uid, model, language?, input_sample_rate?, input_audio_format?, gen_config?)`
@@ -549,7 +596,7 @@ response = await client.ping()
  - `FallbackStrategy` - `SEQUENTIAL` | `PARALLEL`
 
  **Responses**
- - `AgentResponse(output, usage{input_tokens, output_tokens}, ...)`
+ - `AgentResponse(output, usage{input_tokens, output_tokens}, history?)` - `history` included when `include_history=True`
  - `TranscribeResponse(text, language)`
  - `TranscriptionWsResponse(transcription, is_end)` - Real-time transcription result
 
@@ -1,7 +1,7 @@
  from .common import BaseRequest, ProviderKind, Settings, SuccessResponse
  from .fallback import AgentFallbackRequest, AudioFallbackRequest, TranscribeFallbackRequest, FallbackStrategy
  from .agent.agent import AgentRequest, AgentResponse, AgentResponseUsage
- from .agent.chat import Message, MessageRole, TextMessage, BinaryMessage
+ from .agent.chat import Message, MessageRole, TextMessage, BinaryMessage, ToolCallMessage, ToolReturnMessage
  from .agent.tools import Tool, ToolInput, ToolKind, WebSearchInput, MCPStreamableServerInput
  from .audio.speak import SpeakMimeType, SpeakRequest, SpeakStreamResponse
  from .audio.transcribe import TranscribeRequest, TranscribeResponse, File
@@ -27,6 +27,8 @@ __all__ = [
      "MessageRole",
      "TextMessage",
      "BinaryMessage",
+     "ToolCallMessage",
+     "ToolReturnMessage",
      "Tool",
      "ToolInput",
      "ToolKind",
@@ -1,5 +1,5 @@
  from .agent import AgentRequest, AgentResponse, AgentResponseUsage
- from .chat import Message, MessageRole, TextMessage, BinaryMessage
+ from .chat import Message, MessageRole, TextMessage, BinaryMessage, ToolCallMessage, ToolReturnMessage
  from .tools import Tool, ToolInput, ToolKind, WebSearchInput, MCPStreamableServerInput
 
 
@@ -11,6 +11,8 @@ __all__ = [
      "MessageRole",
      "TextMessage",
      "BinaryMessage",
+     "ToolCallMessage",
+     "ToolReturnMessage",
      "Tool",
      "ToolInput",
      "ToolKind",
@@ -2,7 +2,7 @@
 
  from pydantic import BaseModel, Field, field_validator
  from typing import Optional, List, Union
- from .chat import TextMessage, BinaryMessage
+ from .chat import TextMessage, BinaryMessage, ToolCallMessage, ToolReturnMessage
  from .tools import WebSearchInput, MCPStreamableServerInput
  from ..common import BaseRequest
 
@@ -12,6 +12,7 @@ class AgentRequest(BaseRequest):
      messages: List[Union[TextMessage, BinaryMessage]] = Field(..., description="The messages to use")
      tools: List[Union[WebSearchInput, MCPStreamableServerInput]] = Field(default_factory=list, description="The tools to use")
      gen_config: Optional[dict] = Field(default=None, description="The configuration for the generation")
+     include_history: bool = Field(default=False, description="Whether to include full conversation history in the response")
 
  class AgentResponseUsage(BaseModel):
      input_tokens: int = Field(..., description="The number of input tokens used")
@@ -19,4 +20,5 @@ class AgentResponseUsage(BaseModel):
 
  class AgentResponse(BaseModel):
      output: str = Field(..., description="The output of the response")
-     usage: AgentResponseUsage = Field(..., description="The usage of the response")
+     usage: AgentResponseUsage = Field(..., description="The usage of the response")
+     history: Optional[List[Union[TextMessage, BinaryMessage, ToolCallMessage, ToolReturnMessage]]] = Field(default=None, description="Full conversation history including tool calls and returns (only included when include_history=true)")
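The new `history` field is optional with a `None` default, which keeps `AgentResponse` backward compatible: callers written against 1.4.x that never read it are unaffected. A tiny sketch of that shape, using a hypothetical dataclass stand-in for the Pydantic model:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical stand-in for the updated AgentResponse (the real class is a
# Pydantic BaseModel); it shows the backward-compatible shape: `history`
# defaults to None unless the request set include_history=True.
@dataclass
class AgentResponse:
    output: str
    history: Optional[List[object]] = None

resp = AgentResponse(output="done")
assert resp.history is None  # absent unless include_history=True was requested
```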
@@ -7,6 +7,8 @@ class MessageRole(str, Enum):
      USER = "user"
      MODEL = "model"
      SYSTEM = "system"
+     TOOL_CALL = "tool_call"
+     TOOL_RETURN = "tool_return"
 
 
  class Message(BaseModel):
@@ -27,3 +29,13 @@ class BinaryMessage(Message):
              raise ValueError("MIME type are meant for user messages only")
          return self
 
+ class ToolCallMessage(Message):
+     """Message representing a tool call made by the agent"""
+     tool_name: str = Field(..., description="The name of the tool being called")
+     args: dict = Field(..., description="The arguments passed to the tool")
+
+ class ToolReturnMessage(Message):
+     """Message representing the return value from a tool call"""
+     tool_name: str = Field(..., description="The name of the tool that was called")
+     content: str = Field(..., description="The return value from the tool")
+
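Because `MessageRole` subclasses `str`, plain strings such as `"tool_call"` compare equal to the enum members, which is what lets the client accept either form in the `role` field. A small sketch using a stand-in copy of the enum values from this hunk:

```python
from enum import Enum

# Stand-in mirroring MessageRole after this diff (the real enum lives in
# livellm.models); values copied from the hunk above.
class MessageRole(str, Enum):
    USER = "user"
    MODEL = "model"
    SYSTEM = "system"
    TOOL_CALL = "tool_call"
    TOOL_RETURN = "tool_return"

# str subclassing makes members interchangeable with plain strings
print(MessageRole.TOOL_CALL == "tool_call")                    # True
print(MessageRole("tool_return") is MessageRole.TOOL_RETURN)   # True
```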
@@ -1,6 +1,6 @@
  [project]
  name = "livellm"
- version = "1.4.5"
+ version = "1.5.1"
  description = "Python client for the LiveLLM Server"
  readme = "README.md"
  requires-python = ">=3.10"
File without changes