llmops-observability 8.0.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,263 @@
+ Metadata-Version: 2.4
+ Name: llmops-observability
+ Version: 8.0.0
+ Summary: LLMOps Observability SDK with direct Langfuse integration (no SQS/batching)
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ Requires-Dist: langfuse>=2.0.0
+ Requires-Dist: httpx
+ Requires-Dist: python-dotenv
+
+ # LLMOps Observability SDK
+
+ A lightweight Python SDK for LLM observability with **direct Langfuse integration**. No SQS queues, no batching, no worker threads: just instant tracing to Langfuse.
+
+ ## Key Features
+
+ - ⚡ **Instant Tracing**: Sends traces directly to Langfuse in real time
+ - 🎯 **Simple API**: Same decorators as veriskGO (`@track_function`, `@track_llm_call`)
+ - 🚫 **No Complexity**: No SQS queues, no batching, no background workers
+ - 🔄 **Sync & Async**: Supports both synchronous and asynchronous functions
+ - 🎨 **Provider Agnostic**: Works with any LLM provider (Bedrock, OpenAI, Anthropic, etc.)
+
+ ## Comparison with veriskGO
+
+ | Feature | veriskGO | llmops-observability |
+ |---------|----------|---------------------|
+ | Trace Destination | SQS → Lambda → Langfuse | Direct to Langfuse |
+ | Batching | Yes | No |
+ | Worker Threads | Yes | No |
+ | Latency | Queued/Batched | Instant |
+ | Complexity | High | Low |
+ | Use Case | Production at scale | Development/Testing |
+
+ ## Installation
+
+ ```bash
+ cd llmops-observability
+ pip install -e .
+ ```
+
+ ## Quick Start
+
+ ### 1. Configure Environment Variables
+
+ Create a `.env` file:
+
+ ```bash
+ LANGFUSE_PUBLIC_KEY=pk-lf-...
+ LANGFUSE_SECRET_KEY=sk-lf-...
+ LANGFUSE_BASE_URL=https://your-langfuse-instance.com
+ LANGFUSE_VERIFY_SSL=false # Optional, default is true
+ ```
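+
+ The SDK depends on `python-dotenv`, but whether it loads your `.env` automatically is not documented here. A minimal sketch, assuming you load the file yourself before importing the SDK:
+
+ ```python
+ from dotenv import load_dotenv
+
+ # Load the LANGFUSE_* variables from .env into the process environment
+ # before the SDK reads its configuration.
+ load_dotenv()
+
+ from llmops_observability import TraceManager
+ ```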
+
+ ### 2. Use in Your Code
+
+ ```python
+ from llmops_observability import TraceManager, track_function, track_llm_call
+
+ # Start a trace
+ TraceManager.start_trace(
+     name="my_workflow",
+     user_id="user_123",
+     session_id="session_456",
+     metadata={"environment": "development"}
+ )
+
+ # Track regular functions
+ @track_function()
+ def process_data(input_data):
+     # Your code here
+     return {"processed": input_data}
+
+ # Track LLM calls
+ @track_llm_call()
+ def call_bedrock(prompt):
+     # Call your LLM (Converse API expects content as a list of blocks)
+     response = bedrock_client.converse(
+         modelId="anthropic.claude-3-sonnet-20240229-v1:0",
+         messages=[{"role": "user", "content": [{"text": prompt}]}]
+     )
+     return response
+
+ # Use the functions
+ result = process_data("some data")
+ llm_response = call_bedrock("Hello, world!")
+
+ # End the trace (flushes to Langfuse)
+ TraceManager.end_trace()
+ ```
+
+ ### 3. Async Support
+
+ ```python
+ @track_function()
+ async def async_process(data):
+     return await some_async_operation(data)
+
+ @track_llm_call(name="summarize")
+ async def async_llm_call(text):
+     return await chain.ainvoke({"text": text})
+ ```
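+
+ A usage sketch for the coroutines above; this assumes decorated async functions participate in the current trace exactly like sync ones, and `main` is just an illustrative entry point:
+
+ ```python
+ import asyncio
+
+ async def main():
+     TraceManager.start_trace(name="async_workflow")
+     # Await the tracked coroutines inside the active trace
+     processed = await async_process({"text": "raw input"})
+     summary = await async_llm_call("Long document text...")
+     TraceManager.end_trace()
+
+ asyncio.run(main())
+ ```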
+
+ ## Configuration
+
+ ### Environment Variables
+
+ - `LANGFUSE_PUBLIC_KEY`: Your Langfuse public key (required)
+ - `LANGFUSE_SECRET_KEY`: Your Langfuse secret key (required)
+ - `LANGFUSE_BASE_URL`: Your Langfuse instance URL (required)
+ - `LANGFUSE_VERIFY_SSL`: Whether to verify SSL certificates (optional, default: `true`)
+
+ ### Programmatic Configuration
+
+ ```python
+ from llmops_observability import configure
+
+ configure(
+     public_key="pk-lf-...",
+     secret_key="sk-lf-...",
+     base_url="https://your-langfuse.com",
+     verify_ssl=False
+ )
+ ```
+
+ ## API Reference
+
+ ### TraceManager
+
+ #### `start_trace(name, user_id=None, session_id=None, metadata=None, tags=None)`
+ Start a new trace.
+
+ ```python
+ TraceManager.start_trace(
+     name="my_workflow",
+     user_id="user_123",
+     session_id="session_456",
+     metadata={"env": "dev"},
+     tags=["experiment"]
+ )
+ ```
+
+ #### `end_trace()`
+ End the current trace and flush to Langfuse.
+
+ ```python
+ TraceManager.end_trace()
+ ```
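+
+ To make sure traces are flushed even when the workflow raises, a common pattern is to end the trace in a `finally` block. A sketch, reusing `process_data` from the Quick Start and assuming `end_trace()` is safe to call after an exception:
+
+ ```python
+ TraceManager.start_trace(name="my_workflow")
+ try:
+     result = process_data("some data")
+ finally:
+     # Always flush the trace, even if processing failed.
+     TraceManager.end_trace()
+ ```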
+
+ ### Decorators
+
+ #### `@track_function(name=None, tags=None)`
+ Track regular function execution.
+
+ ```python
+ @track_function()
+ def my_function(x, y):
+     return x + y
+
+ @track_function(name="custom_name", tags={"version": "1.0"})
+ def another_function():
+     pass
+ ```
+
+ #### `@track_llm_call(name=None, tags=None, extract_output=True)`
+ Track LLM generation calls.
+
+ ```python
+ @track_llm_call()
+ def call_llm(prompt):
+     return llm.invoke(prompt)
+
+ @track_llm_call(name="summarize", tags={"model": "claude-3"})
+ def summarize(text):
+     return llm.summarize(text)
+ ```
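+
+ The `extract_output` parameter is not described above; judging by the signature and the response-format handling below, it presumably toggles automatic text extraction. A hedged example mirroring the snippets above, assuming `False` records the raw response object instead:
+
+ ```python
+ @track_llm_call(extract_output=False)
+ def call_llm_raw(prompt):
+     # Assumed behavior: the full response object is logged as-is,
+     # with no text extraction applied.
+     return llm.invoke(prompt)
+ ```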
+
+ ## Response Format Support
+
+ The `@track_llm_call` decorator automatically extracts text from various LLM response formats (a rough extraction sketch follows the list):
+
+ - AWS Bedrock Converse API
+ - Anthropic Messages API
+ - Amazon Titan
+ - Cohere
+ - AI21
+ - OpenAI
+ - Generic text responses
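+
+ The actual extraction logic lives in `llm.py`; as a rough sketch of the kind of shape detection involved, with field names taken from the public response formats of these APIs rather than from the SDK's code:
+
+ ```python
+ def extract_text(response):
+     """Best-effort text extraction across common LLM response shapes."""
+     if isinstance(response, str):
+         return response  # Generic text response
+     if isinstance(response, dict):
+         for getter in (
+             # AWS Bedrock Converse API
+             lambda r: r["output"]["message"]["content"][0]["text"],
+             # Anthropic Messages API
+             lambda r: r["content"][0]["text"],
+             # OpenAI Chat Completions
+             lambda r: r["choices"][0]["message"]["content"],
+             # Amazon Titan
+             lambda r: r["results"][0]["outputText"],
+         ):
+             try:
+                 return getter(response)
+             except (KeyError, IndexError, TypeError):
+                 continue
+     return str(response)
+ ```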
+
+ ## Project Structure
+
+ ```
+ llmops-observability/
+ ├── src/
+ │   └── llmops_observability/
+ │       ├── __init__.py        # Public API
+ │       ├── config.py          # Langfuse client configuration
+ │       ├── trace_manager.py   # TraceManager & @track_function
+ │       ├── llm.py             # @track_llm_call decorator
+ │       ├── models.py          # SpanContext model
+ │       ├── asgi_middleware.py # ASGI middleware
+ │       └── pricing.py         # Model pricing data
+ ├── pyproject.toml
+ └── README.md
+ ```
+
+ ## Example: Complete Workflow
+
+ ```python
+ from llmops_observability import TraceManager, track_function, track_llm_call
+ import boto3
+
+ # Initialize Bedrock client
+ bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
+
+ @track_function()
+ def retrieve_context(query):
+     # Simulate RAG retrieval
+     return {"documents": ["Context doc 1", "Context doc 2"]}
+
+ @track_llm_call()
+ def generate_answer(prompt, context):
+     # Converse API expects content as a list of blocks
+     response = bedrock.converse(
+         modelId="anthropic.claude-3-sonnet-20240229-v1:0",
+         messages=[{
+             "role": "user",
+             "content": [{"text": f"Context: {context}\n\nQuestion: {prompt}"}]
+         }]
+     )
+     return response
+
+ # Start trace
+ TraceManager.start_trace(
+     name="rag_pipeline",
+     user_id="user_123",
+     metadata={"pipeline": "v1"}
+ )
+
+ # Execute workflow
+ context = retrieve_context("What is Python?")
+ answer = generate_answer("What is Python?", context)
+
+ # End trace
+ TraceManager.end_trace()
+ ```
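+
+ The Converse API nests the generated text inside the response; to inspect it after the run (standard Converse response shape, not anything SDK-specific):
+
+ ```python
+ # Print the model's answer from the Converse response dict
+ print(answer["output"]["message"]["content"][0]["text"])
+ ```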
+
+ ## When to Use This SDK
+
+ ✅ **Use llmops-observability when:**
+ - You are developing and testing LLM applications
+ - You want instant trace visibility in Langfuse
+ - You need simple, straightforward tracing without extra infrastructure
+
+ ✅ **Use veriskGO when:**
+ - You are running in production at scale
+ - You need decoupled, async tracing via SQS
+ - You want batching and optimized throughput
+ - You need complex observability pipelines
+
+ ## License
+
+ TODO: Add license
+
+ ## Contributing
+
+ TODO: Add contribution guidelines
@@ -0,0 +1,11 @@
+ llmops_observability/__init__.py,sha256=3ss1rhrGvx8rsBTqGn7IPGA5NmreVD9prcuY_OlyW7c,777
+ llmops_observability/asgi_middleware.py,sha256=hZ6Eg3BupBs3MjDRhHbFRXKU_OHArK9oh7iB-Pj3nRo,4693
+ llmops_observability/config.py,sha256=2wY-RkrCQBaYEOzh09lWJJ-2jBxNmZMMwoOUJ2H3LZQ,2677
+ llmops_observability/llm.py,sha256=V-XtnED5XIvQzUu7BX9cUzKJmyxJN0u6NvcPKUrdWd0,26536
+ llmops_observability/models.py,sha256=mIwFcG5H1jCTnOBB9VpgA1VG_R0-yeYp9_8H-T05DEk,922
+ llmops_observability/pricing.py,sha256=iqGBAcY8iFjWG2JhKfokv42DyRBuDu28Cp91y388ipA,4044
+ llmops_observability/trace_manager.py,sha256=-aHSvU6zWAN_LoXuJoSf7oCrmYfrBhourkzQBJm6hgU,24573
+ llmops_observability-8.0.0.dist-info/METADATA,sha256=ra3m0P0xXlqqUouXiDJ5bjGvTgcNYPh-ux_XZ0unDPE,6632
+ llmops_observability-8.0.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ llmops_observability-8.0.0.dist-info/top_level.txt,sha256=aben9etPKobjtjZ8J2MQBDgrw457BcwiP9VsBbyyHUY,21
+ llmops_observability-8.0.0.dist-info/RECORD,,
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (80.9.0)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
@@ -0,0 +1 @@
+ llmops_observability