literun 0.1.0__py3-none-any.whl → 0.1.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,242 +0,0 @@
Metadata-Version: 2.4
Name: literun
Version: 0.1.0
Summary: A Minimal agent runtime built on OpenAI Responses API
Author-email: Kaustubh <trivedikaustubh01@gmail.com>
License: MIT
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: openai>=2.11.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: flake8; extra == "dev"
Dynamic: license-file

# LiteRun 🚀

A lightweight, flexible Python framework for building custom OpenAI agents (Responses API) with tool support and structured prompt management.

## Features

- **Custom Agent Execution**: Complete control over the agent execution loop, supporting both synchronous and streaming responses.
- **Tool Support**: Easy registration and execution of Python functions as tools.
- **Type Safety**: Strong typing for tool arguments with automatic coercion and validation.
- **Prompt Templates**: Structured way to build system, user, and assistant messages.
- **Constants**: Pre-defined constants for OpenAI roles and message types.
- **Streaming Support**: Built-in support for real-time streaming of agent thoughts, tool calls, and responses.
- **Tool Management**: Easy-to-define tools with automatic JSON schema generation (`ArgsSchema`).
- **Event-Driven**: Structured event system for granular control over the agent's execution lifecycle.
- **OpenAI Compatible**: Seamlessly integrates with the `openai-python` client.

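As a rough, stdlib-only sketch of what "automatic JSON schema generation" means here (hypothetical code, not LiteRun's actual implementation): argument specs like `ArgsSchema(name, type, description, enum)` map onto the JSON-schema `parameters` object that OpenAI-style function tools expect.

```python
import json

# Hypothetical sketch (not LiteRun's actual code): mapping simple
# (name, type, description, enum) argument specs to a JSON-schema
# "parameters" object for an OpenAI-style function tool.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_parameters(args):
    properties, required = {}, []
    for arg in args:
        prop = {"type": PY_TO_JSON[arg["type"]], "description": arg["description"]}
        if arg.get("enum"):
            prop["enum"] = arg["enum"]
        properties[arg["name"]] = prop
        if arg.get("required", True):
            required.append(arg["name"])
    return {"type": "object", "properties": properties, "required": required}

schema = build_parameters([
    {"name": "location", "type": str,
     "description": "The city and state, e.g. San Francisco, CA"},
    {"name": "unit", "type": str, "description": "The unit of temperature",
     "enum": ["celsius", "fahrenheit"], "required": False},
])
print(json.dumps(schema, indent=2))
```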
## Requirements

- Python 3.10+
- [OpenAI Python API library](https://pypi.org/project/openai/)

## Installation

### Production

```bash
pip install literun
```

### Development

```bash
git clone https://github.com/kaustubh-tr/literun.git
cd literun
pip install -e .[dev]
```

## Quick Start

### Basic Agent

Here is a simple example of how to create an agent with a custom tool:

```python
from literun import Agent, ChatOpenAI, Tool, ArgsSchema

# 1. Define a tool function
def get_weather(location: str, unit: str = "celsius") -> str:
    return f"The weather in {location} is 25 degrees {unit}."

# 2. Wrap it with a Tool schema
weather_tool = Tool(
    func=get_weather,
    name="get_weather",
    description="Get the weather for a location",
    args_schema=[
        ArgsSchema(
            name="location",
            type=str,
            description="The city and state, e.g. San Francisco, CA",
        ),
        ArgsSchema(
            name="unit",
            type=str,
            description="The unit of temperature",
            enum=["celsius", "fahrenheit"],
        ),
    ],
)

# 3. Initialize the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0.7)

# 4. Initialize the Agent
agent = Agent(
    llm=llm,
    system_prompt="You are a helpful assistant.",
    tools=[weather_tool],
)

# 5. Run the Agent
result = agent.invoke(user_input="What is the weather in Tokyo?")
print(f"Final Answer: {result.final_output}")
```

### Streaming Agent

You can also stream the agent's execution to handle events in real-time:

```python
# ... (set up the tool and agent as above)

print("Agent: ", end="", flush=True)
for result in agent.stream(user_input="What is the weather in Tokyo?"):
    event = result.event
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type == "response.function_call_arguments.done":
        print(f"\n[Tool Call: {event.name}]")

print()
```

### Runtime Configuration (Context Injection)

The framework allows passing a runtime context to tools using explicit context injection.

Rules:
1. Define a tool function with a parameter annotated with `ToolRuntime`.
2. The framework will automatically inject the `runtime_context` (wrapped in `ToolRuntime`) into that parameter.
3. Inside the tool, access context values as attributes, e.g. `ctx.user_id`.

```python
from literun import Agent, ChatOpenAI, Tool, ArgsSchema, ToolRuntime

# 1. Define a tool with context
def get_weather(location: str, ctx: ToolRuntime) -> str:
    """
    Returns weather info for a location.
    The runtime context can include sensitive info like user_id or API keys.
    """
    user_id = getattr(ctx, "user_id", "unknown_user")
    api_key = getattr(ctx, "weather_api_key", None)

    # Simulate fetching weather
    return f"Weather for {location} fetched using API key '{api_key}' for user '{user_id}'."

# 2. Register the tool
tool = Tool(
    name="get_weather",
    description="Get the weather for a given location",
    func=get_weather,
    args_schema=[
        ArgsSchema(
            name="location",
            type=str,
            description="Location for which to get the weather",
        )
    ],
)

# 3. Set up the agent
agent = Agent(
    llm=ChatOpenAI(api_key="fake"),
    tools=[tool],
)

# 4. Pass config at runtime
# The whole dict is injected into the 'ctx' parameter
agent.invoke(
    user_input="What's the weather in London?",
    runtime_context={
        "user_id": "user_123",
        "weather_api_key": "SECRET_API_KEY_456",
    },
)
```

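How this kind of injection can work under the hood: a stdlib-only sketch (hypothetical, not LiteRun's implementation) that inspects the tool's signature for a `ToolRuntime`-annotated parameter and injects the wrapped context there.

```python
# Hypothetical sketch (not LiteRun's actual code): inject a runtime
# context into the parameter annotated with a marker type.
import inspect
from types import SimpleNamespace

class ToolRuntime(SimpleNamespace):
    """Stand-in for literun.ToolRuntime: attribute access over a context dict."""

def call_with_context(func, args, runtime_context):
    # Scan the signature for a ToolRuntime-annotated parameter and
    # inject the wrapped context there; other arguments pass through.
    for name, param in inspect.signature(func).parameters.items():
        if param.annotation is ToolRuntime:
            args = {**args, name: ToolRuntime(**runtime_context)}
    return func(**args)

def get_weather(location: str, ctx: ToolRuntime) -> str:
    return f"Weather for {location}, fetched for user '{ctx.user_id}'."

print(call_with_context(
    get_weather,
    {"location": "London"},
    {"user_id": "user_123"},
))  # Weather for London, fetched for user 'user_123'.
```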
### Using ChatOpenAI Directly

You can also use the `ChatOpenAI` class directly if you don't need the agent loop (e.g., for simple, one-off LLM calls).

```python
from literun import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a joke."},
]

# Synchronous call
# Returns the raw OpenAI Responses API response object
response = llm.invoke(messages=messages)
print(response.output_text)

# Or streaming call
# Returns a generator of raw OpenAI response stream events
stream = llm.stream(messages=messages)
for event in stream:
    print(event)
```

See [examples](examples/) for complete runnable examples.

## Project Structure

The project is organized as follows:

```
literun/
├── src/
│   └── literun/            # Main package source
│       ├── agent.py        # Agent runtime logic
│       ├── llm.py          # LLM client wrapper
│       ├── tool.py         # Tool definition and execution
│       ├── events.py       # Stream event types
│       └── ...
├── tests/                  # Unit tests
├── examples/               # Usage examples
└── pyproject.toml          # Project configuration
```

## Testing

Run the test suite using `unittest`:

```bash
python -m unittest discover tests
```

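As an illustration of that `unittest` style, a minimal (hypothetical) test for the toy `get_weather` function from the Quick Start:

```python
import unittest

def get_weather(location: str, unit: str = "celsius") -> str:
    # The toy tool function from the Quick Start example.
    return f"The weather in {location} is 25 degrees {unit}."

class TestGetWeather(unittest.TestCase):
    def test_default_unit(self):
        self.assertIn("celsius", get_weather("Tokyo"))

    def test_explicit_unit(self):
        self.assertEqual(
            "The weather in Tokyo is 25 degrees fahrenheit.",
            get_weather("Tokyo", unit="fahrenheit"),
        )
```

Saved under `tests/`, a file like this is picked up by `python -m unittest discover tests`.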
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests: `python -m unittest discover tests`
5. Update the example usage if needed
6. Submit a pull request

## License

MIT
@@ -1,17 +0,0 @@
literun/__init__.py,sha256=lnkWaQ_BYMVjyuPiRxmELOEMKzzqGw8n0DPEKIuNrgk,706
literun/agent.py,sha256=CrMrvKHDlwQ4shqR1GkJv0pKq64mdIAy5VnVZoa-1To,15105
literun/args_schema.py,sha256=bmLTybD4zPhjTwHkZrjQVEkiARHbje1SXTkH0hWyp0s,2618
literun/constants.py,sha256=gSyuHnUdPuQBH3GKjp7FwZhk_C-F7ecF1IF-36H23_Q,453
literun/events.py,sha256=SARUr4XWG9qpuy34hpZHHyJRX5btmJmBhBfvU8KbYyc,3615
literun/items.py,sha256=SPINMwy8vOuK6iIqAMC5Av-BIgsCiuVKpYK_4vk70fk,2975
literun/llm.py,sha256=7RU990Lw5XMY5RrlkepBaKFmP9cyZUu85SmUccrVIbY,5030
literun/prompt_message.py,sha256=wizT6T8A4rsswcCyQVQeKyOdMPPFmSKtQoVUliMgV0M,4982
literun/prompt_template.py,sha256=IFNZH80e2AO4u1p5L59Q49vvumU6nvToVHVXwkM_7mI,5128
literun/results.py,sha256=7uRPdfzRzdI0HXhWhFiya4yhH-NmjGfdGZAgXMpfkpQ,1324
literun/tool.py,sha256=iKQg1Xgh_Bzn1V0lVllmH1-G0XiHuDLkXOe0rCHRfc4,5049
literun/utils.py,sha256=4r9P7u46KzuG3eNZq8kfuEWJxpc4b8T26nrzjW7_Hec,2261
literun-0.1.0.dist-info/licenses/LICENSE,sha256=sJlY4ztFUqGGojhTNtL2UbhQeSZF3B4V1dAtzGfMHOE,1073
literun-0.1.0.dist-info/METADATA,sha256=jTSF_nuY1_6T3ImwNAgAQcxkZ3TdVzHx8Ofmy8CmK1U,6649
literun-0.1.0.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
literun-0.1.0.dist-info/top_level.txt,sha256=YFnS29wBQf5eX9UEtBYA1ZegxjIs_7n691L8qIR_QW0,8
literun-0.1.0.dist-info/RECORD,,
@@ -1 +0,0 @@
literun