langtrace-python-sdk 3.3.4__py3-none-any.whl → 3.3.7__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,371 +0,0 @@
- Metadata-Version: 2.3
- Name: langtrace-python-sdk
- Version: 3.3.4
- Summary: Python SDK for LangTrace
- Project-URL: Homepage, https://github.com/Scale3-Labs/langtrace-python-sdk
- Author-email: Scale3 Labs <engineering@scale3labs.com>
- License: Apache-2.0
- Classifier: License :: OSI Approved :: Apache Software License
- Classifier: Operating System :: OS Independent
- Classifier: Programming Language :: Python :: 3
- Requires-Python: >=3.9
- Requires-Dist: colorama>=0.4.6
- Requires-Dist: fsspec>=2024.6.0
- Requires-Dist: opentelemetry-api>=1.25.0
- Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.25.0
- Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.25.0
- Requires-Dist: opentelemetry-instrumentation-sqlalchemy>=0.46b0
- Requires-Dist: opentelemetry-instrumentation>=0.47b0
- Requires-Dist: opentelemetry-sdk>=1.25.0
- Requires-Dist: sentry-sdk>=2.14.0
- Requires-Dist: sqlalchemy
- Requires-Dist: tiktoken>=0.1.1
- Requires-Dist: trace-attributes==7.1.0
- Requires-Dist: transformers>=4.11.3
- Requires-Dist: ujson>=5.10.0
- Provides-Extra: dev
- Requires-Dist: anthropic; extra == 'dev'
- Requires-Dist: boto3; extra == 'dev'
- Requires-Dist: chromadb; extra == 'dev'
- Requires-Dist: cohere; extra == 'dev'
- Requires-Dist: embedchain; extra == 'dev'
- Requires-Dist: google-cloud-aiplatform; extra == 'dev'
- Requires-Dist: google-generativeai; extra == 'dev'
- Requires-Dist: groq; extra == 'dev'
- Requires-Dist: langchain; extra == 'dev'
- Requires-Dist: langchain-community; extra == 'dev'
- Requires-Dist: langchain-openai; extra == 'dev'
- Requires-Dist: litellm==1.48.7; extra == 'dev'
- Requires-Dist: mistralai; extra == 'dev'
- Requires-Dist: ollama; extra == 'dev'
- Requires-Dist: openai==1.45.0; extra == 'dev'
- Requires-Dist: pinecone-client; extra == 'dev'
- Requires-Dist: python-dotenv; extra == 'dev'
- Requires-Dist: qdrant-client; extra == 'dev'
- Requires-Dist: setuptools; extra == 'dev'
- Requires-Dist: weaviate-client; extra == 'dev'
- Provides-Extra: test
- Requires-Dist: pytest; extra == 'test'
- Requires-Dist: pytest-asyncio; extra == 'test'
- Requires-Dist: pytest-vcr; extra == 'test'
- Description-Content-Type: text/markdown
-
- # [Langtrace](https://www.langtrace.ai)
-
- ## Open Source & OpenTelemetry (OTEL) Observability for LLM Applications
-
- ![Static Badge](https://img.shields.io/badge/License-Apache--2.0-blue) ![Static Badge](https://img.shields.io/badge/npm_@langtrase/typescript--sdk-1.2.9-green) ![Static Badge](https://img.shields.io/badge/pip_langtrace--python--sdk-1.2.8-green) ![Static Badge](https://img.shields.io/badge/Development_status-Active-green)
-
- ---
-
- Langtrace is open-source observability software that lets you capture, debug, and analyze traces and metrics from all of your applications that leverage LLM APIs, vector databases, and LLM-based frameworks.
-
- ## OpenTelemetry Support
-
- The traces generated by Langtrace adhere to the [OpenTelemetry standards (OTEL)](https://opentelemetry.io/docs/concepts/signals/traces/). We are developing [semantic conventions](https://opentelemetry.io/docs/concepts/semantic-conventions/) for the traces generated by this project. You can check out the current definitions in [this repository](https://github.com/Scale3-Labs/langtrace-trace-attributes/tree/main/schemas). Note: this is under active development, and we encourage you to get involved and welcome your feedback.
-
- ---
-
- ## Langtrace Cloud ☁️
-
- To use the managed SaaS version of Langtrace, follow the steps below:
-
- 1. Sign up by going to [this link](https://langtrace.ai).
- 2. Create a new project after signing up. Projects are containers for storing the traces and metrics generated by your application. If you have only one application, one project will do.
- 3. Generate an API key from within the project.
- 4. In your application, install the Langtrace SDK and initialize it with the API key you generated in step 3.
- 5. The code for installing and setting up the SDK is shown below.
-
- ## Getting Started
-
- Get started by adding just three lines to your code!
-
- ```bash
- pip install langtrace-python-sdk
- ```
-
- ```python
- from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
- langtrace.init(api_key=<your_api_key>)
- ```
-
- OR
-
- ```python
- from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
- langtrace.init()  # Reads LANGTRACE_API_KEY from the environment
- ```
-
- ## FastAPI Quick Start
-
- Initialize a FastAPI project and add this to the `main.py` file:
-
- ```python
- from fastapi import FastAPI
- from langtrace_python_sdk import langtrace
- from openai import OpenAI
-
- langtrace.init()
- app = FastAPI()
- client = OpenAI()
-
- @app.get("/")
- def root():
-     client.chat.completions.create(
-         model="gpt-4",
-         messages=[{"role": "user", "content": "Say this is a test three times"}],
-         stream=False,
-     )
-     return {"Hello": "World"}
- ```
-
- ## Django Quick Start
-
- Initialize a Django project and add this to the `__init__.py` file:
-
- ```python
- from langtrace_python_sdk import langtrace
- from openai import OpenAI
-
- langtrace.init()
- client = OpenAI()
-
- client.chat.completions.create(
-     model="gpt-4",
-     messages=[{"role": "user", "content": "Say this is a test three times"}],
-     stream=False,
- )
- ```
-
- ## Flask Quick Start
-
- Initialize a Flask project and add this to the `app.py` file:
-
- ```python
- from flask import Flask
- from langtrace_python_sdk import langtrace
- from openai import OpenAI
-
- langtrace.init()
- client = OpenAI()
- app = Flask(__name__)
-
-
- @app.route("/")
- def main():
-     client.chat.completions.create(
-         model="gpt-4",
-         messages=[{"role": "user", "content": "Say this is a test three times"}],
-         stream=False,
-     )
-     return "Hello, World!"
- ```
-
- ## Langtrace Self Hosted
-
- Get started by adding just two lines to your code and see traces being logged to the console!
-
- ```bash
- pip install langtrace-python-sdk
- ```
-
- ```python
- from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
- langtrace.init(write_spans_to_console=True)
- ```
-
- ## Langtrace Self Hosted Custom Exporter
-
- Get started by adding just three lines to your code and see traces being exported to your remote location!
-
- ```bash
- pip install langtrace-python-sdk
- ```
-
- ```python
- from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
- langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)
- ```
-
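- As one example, the OTLP HTTP span exporter (already a dependency of this SDK) can serve as the custom exporter. This is a minimal sketch, not the only supported setup; the endpoint below is a hypothetical placeholder for your own collector:
-
- ```python
- from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
- from langtrace_python_sdk import langtrace
-
- # Hypothetical OTLP HTTP endpoint -- replace with your collector's URL.
- exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
- langtrace.init(custom_remote_exporter=exporter, batch=True)
- ```
-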
- ### Configure Langtrace
-
- | Parameter                  | Type                                | Default Value                 | Description                                                                                                                         |
- | -------------------------- | ----------------------------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
- | `api_key`                  | `str`                               | `LANGTRACE_API_KEY` or `None` | The API key for authentication.                                                                                                     |
- | `batch`                    | `bool`                              | `True`                        | Whether to batch spans before sending them.                                                                                         |
- | `write_spans_to_console`   | `bool`                              | `False`                       | Whether to write spans to the console.                                                                                              |
- | `custom_remote_exporter`   | `Optional[Exporter]`                | `None`                        | Custom remote exporter. If `None`, a default `LangTraceExporter` is used.                                                           |
- | `api_host`                 | `Optional[str]`                     | `https://langtrace.ai/`       | The API host for the remote exporter.                                                                                               |
- | `disable_instrumentations` | `Optional[DisableInstrumentations]` | `None`                        | Pass an object to disable instrumentation for specific vendors, e.g. `{'only': ['openai']}` or `{'all_except': ['openai']}`.        |
-
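- A sketch combining the options above; the `api_host` value is a hypothetical self-hosted URL, and `disable_instrumentations` follows the object format shown in the table:
-
- ```python
- from langtrace_python_sdk import langtrace
-
- langtrace.init(
-     api_key="<your_api_key>",
-     api_host="https://my-langtrace.example.com/",  # hypothetical self-hosted URL
-     batch=True,
-     write_spans_to_console=False,
-     disable_instrumentations={"all_except": ["openai"]},  # only OpenAI stays instrumented
- )
- ```
-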
- ### Error Reporting to Langtrace
-
- By default, all SDK errors are reported to Langtrace via Sentry. This can be disabled by setting the following environment variable to `False`, like so: `LANGTRACE_ERROR_REPORTING=False`.
-
- ### Additional Customization
-
- - `@with_langtrace_root_span` - this decorator organizes related spans into a hierarchy. When you perform multiple operations that you want to monitor together as a unit, it establishes a "parent" span (`LangtraceRootSpan`, or whatever is passed to `name`); any calls to the LLM APIs made within the decorated function are then recorded as "children" of this parent span. This is especially useful for tracking the performance or behavior of a group of operations collectively, rather than individually.
-
- ```python
- from langtrace_python_sdk import with_langtrace_root_span
- from openai import OpenAI
-
- client = OpenAI()
-
- @with_langtrace_root_span()
- def example():
-     response = client.chat.completions.create(
-         model="gpt-4",
-         messages=[{"role": "user", "content": "Say this is a test three times"}],
-         stream=False,
-     )
-     return response
- ```
-
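- A named root span might look like the following sketch, assuming the `name` parameter described above:
-
- ```python
- # Both calls below are grouped under one "summarize-workflow" parent span.
- @with_langtrace_root_span(name="summarize-workflow")
- def summarize_workflow():
-     draft = example()   # spans from this call nest under the root span
-     review = example()
-     return draft, review
- ```
-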
- - `inject_additional_attributes` - this function enhances traces by adding custom attributes to the current context. These custom attributes provide extra details about the operations being performed, making it easier to analyze and understand their behavior.
-
- ```python
- from langtrace_python_sdk import inject_additional_attributes
-
-
- def do_llm_stuff(name=""):
-     response = client.chat.completions.create(
-         model="gpt-4",
-         messages=[{"role": "user", "content": "Say this is a test three times"}],
-         stream=False,
-     )
-     return response
-
-
- def main():
-     response = inject_additional_attributes(lambda: do_llm_stuff(name="llm"), {'user.id': 'userId'})
-
-     # If the function does not take arguments, this syntax also works
-     response = inject_additional_attributes(do_llm_stuff, {'user.id': 'userId'})
- ```
-
- - `with_additional_attributes` - behaves the same as `inject_additional_attributes`, but as a decorator; it will be deprecated soon.
-
- ```python
- from langtrace_python_sdk import with_langtrace_root_span, with_additional_attributes
-
-
- @with_additional_attributes({"user.id": "1234"})
- def api_call1():
-     response = client.chat.completions.create(
-         model="gpt-4",
-         messages=[{"role": "user", "content": "Say this is a test three times"}],
-         stream=False,
-     )
-     return response
-
-
- @with_additional_attributes({"user.id": "5678"})
- def api_call2():
-     response = client.chat.completions.create(
-         model="gpt-4",
-         messages=[{"role": "user", "content": "Say this is a test three times"}],
-         stream=False,
-     )
-     return response
-
-
- @with_langtrace_root_span()
- def chat_completion():
-     api_call1()
-     api_call2()
- ```
-
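- Since `with_additional_attributes` is slated for deprecation, the same grouping can be written with `inject_additional_attributes`. A sketch, assuming `api_call1` and `api_call2` are plain, undecorated functions:
-
- ```python
- from langtrace_python_sdk import inject_additional_attributes, with_langtrace_root_span
-
-
- @with_langtrace_root_span()
- def chat_completion():
-     # Equivalent to the decorated versions above, without the deprecated decorator.
-     inject_additional_attributes(api_call1, {"user.id": "1234"})
-     inject_additional_attributes(api_call2, {"user.id": "5678"})
- ```
-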
- - `get_prompt_from_registry` - this function fetches the desired prompt from the `Prompt Registry`. You can pass two options for filtering: `prompt_version` and `variables`.
-
- ```python
- from langtrace_python_sdk import get_prompt_from_registry
-
- prompt = get_prompt_from_registry(<Registry ID>, options={"prompt_version": 1, "variables": {"foo": "bar"} })
- ```
-
- ### Opt out of tracing prompt and completion data
-
- By default, prompt and completion data are captured. If you would like to opt out, set the following environment variable:
-
- `TRACE_PROMPT_COMPLETION_DATA=false`
-
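- A sketch of setting this from Python, assuming the variable is read when the SDK initializes (it can equally be set in your shell):
-
- ```python
- import os
-
- # Disable prompt/completion capture before initializing the SDK.
- os.environ["TRACE_PROMPT_COMPLETION_DATA"] = "false"
-
- from langtrace_python_sdk import langtrace
-
- langtrace.init()
- ```
-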
- ### Enable/Disable checkpoint tracing for DSPy
-
- By default, checkpoints are traced for DSPy pipelines. If you would like to disable this, set the following environment variable:
-
- `TRACE_DSPY_CHECKPOINT=false`
-
- Note: checkpoint tracing increases the latency of executions, as the state is serialized. Please disable it in production.
-
- ## Supported integrations
-
- Langtrace automatically captures traces from the following vendors:
-
- | Vendor        | Type            | Typescript SDK     | Python SDK                      |
- | ------------- | --------------- | ------------------ | ------------------------------- |
- | OpenAI        | LLM             | :white_check_mark: | :white_check_mark:              |
- | Anthropic     | LLM             | :white_check_mark: | :white_check_mark:              |
- | Azure OpenAI  | LLM             | :white_check_mark: | :white_check_mark:              |
- | Cohere        | LLM             | :white_check_mark: | :white_check_mark:              |
- | Groq          | LLM             | :x:                | :white_check_mark:              |
- | Perplexity    | LLM             | :white_check_mark: | :white_check_mark:              |
- | Gemini        | LLM             | :x:                | :white_check_mark:              |
- | Mistral       | LLM             | :x:                | :white_check_mark:              |
- | Langchain     | Framework       | :x:                | :white_check_mark:              |
- | Langgraph     | Framework       | :x:                | :white_check_mark:              |
- | LlamaIndex    | Framework       | :white_check_mark: | :white_check_mark:              |
- | AWS Bedrock   | Framework       | :white_check_mark: | :white_check_mark:              |
- | LiteLLM       | Framework       | :x:                | :white_check_mark:              |
- | DSPy          | Framework       | :x:                | :white_check_mark:              |
- | CrewAI        | Framework       | :x:                | :white_check_mark:              |
- | Ollama        | Framework       | :x:                | :white_check_mark:              |
- | VertexAI      | Framework       | :x:                | :white_check_mark:              |
- | Vercel AI SDK | Framework       | :white_check_mark: | :x:                             |
- | EmbedChain    | Framework       | :x:                | :white_check_mark:              |
- | Autogen       | Framework       | :x:                | :white_check_mark:              |
- | Pinecone      | Vector Database | :white_check_mark: | :white_check_mark:              |
- | ChromaDB      | Vector Database | :white_check_mark: | :white_check_mark:              |
- | QDrant        | Vector Database | :white_check_mark: | :white_check_mark:              |
- | Weaviate      | Vector Database | :white_check_mark: | :white_check_mark:              |
- | PGVector      | Vector Database | :white_check_mark: | :white_check_mark: (SQLAlchemy) |
-
- ---
-
- ## Feature Requests and Issues
-
- - To request features, head over [here to start a discussion](https://github.com/Scale3-Labs/langtrace/discussions/categories/feature-requests).
- - To raise an issue, head over [here and create an issue](https://github.com/Scale3-Labs/langtrace/issues).
-
- ---
-
- ## Contributions
-
- We welcome contributions to this project. To get started, fork this repository and start developing. To get involved, join our [Discord](https://discord.langtrace.ai) workspace.
-
- - If you want to run any of the examples, go to the `run_example.py` file, where you will find `ENABLED_EXAMPLES`. Toggle the flag for the example you want to run to `True`, then run the file using `python src/run_example.py`.
-
- - If you want to run tests, make sure to install the dev and test dependencies:
-
- ```bash
- pip install '.[test]' && pip install '.[dev]'
- ```
-
- then run `pytest` using:
-
- ```bash
- pytest -v
- ```
-
- ---
-
- ## Security
-
- To report security vulnerabilities, email us at <security@scale3labs.com>. You can read more on security [here](https://github.com/Scale3-Labs/langtrace/blob/development/SECURITY.md).
-
- ---
-
- ## License
-
- - The Langtrace application is [licensed](https://github.com/Scale3-Labs/langtrace/blob/development/LICENSE) under the AGPL 3.0 License. You can read about this license [here](https://www.gnu.org/licenses/agpl-3.0.en.html).
- - The Langtrace SDKs are licensed under the Apache 2.0 License. You can read about this license [here](https://www.apache.org/licenses/LICENSE-2.0).