rebrandly-otel 0.3.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,1926 @@
1
+ Metadata-Version: 2.4
2
+ Name: rebrandly_otel
3
+ Version: 0.3.1
4
+ Summary: Python OTEL wrapper by Rebrandly
5
+ Home-page: https://gitlab.rebrandly.com/rebrandly/instrumentation/rebrandly-otel-python
6
+ Author: Antonio Romano
7
+ Author-email: antonio@rebrandly.com
8
+ Classifier: Programming Language :: Python :: 3
9
+ Classifier: License :: OSI Approved :: MIT License
10
+ Classifier: Operating System :: OS Independent
11
+ Description-Content-Type: text/markdown
12
+ License-File: LICENSE
13
+ Requires-Dist: opentelemetry-api>=1.34.0
14
+ Requires-Dist: opentelemetry-sdk>=1.34.0
15
+ Requires-Dist: opentelemetry-exporter-otlp>=1.34.0
16
+ Requires-Dist: opentelemetry-semantic-conventions>=0.47b0
17
+ Requires-Dist: psutil>=5.0.0
18
+ Requires-Dist: fastapi>=0.118.0
19
+ Dynamic: author
20
+ Dynamic: author-email
21
+ Dynamic: classifier
22
+ Dynamic: description
23
+ Dynamic: description-content-type
24
+ Dynamic: home-page
25
+ Dynamic: license-file
26
+ Dynamic: requires-dist
27
+ Dynamic: summary
28
+
29
+ # Rebrandly OpenTelemetry SDK for Python
30
+
31
+ A comprehensive OpenTelemetry instrumentation SDK designed specifically for Rebrandly services, with built-in support for AWS Lambda functions and message processing.
32
+
33
+ ## Overview
34
+
35
+ The Rebrandly OpenTelemetry SDK provides a unified interface for distributed tracing, metrics collection, and structured logging across Python applications. It offers automatic instrumentation for AWS Lambda functions, simplified span management, and seamless integration with OTLP-compatible backends.
36
+
37
+ ## Table of Contents
38
+
39
+ - [Installation](#installation)
40
+ - [Quick Start](#quick-start)
41
+ - [Getting Started](#getting-started)
42
+ - [Step 1: Install the Package](#step-1-install-the-package)
43
+ - [Step 2: Configure Environment Variables](#step-2-configure-environment-variables)
44
+ - [Step 3: Choose Your Integration Pattern](#step-3-choose-your-integration-pattern)
45
+ - [Step 4: Verify It's Working](#step-4-verify-its-working)
46
+ - [Step 5: Add Custom Instrumentation](#step-5-add-custom-instrumentation-optional)
47
+ - [Configuration](#configuration)
48
+ - [Core Components](#core-components)
49
+ - [Built-in Metrics](#built-in-metrics)
50
+ - [Tracing Features](#tracing-features)
51
+ - [Automatic Span Attributes](#automatic-span-attributes)
52
+ - [Logging Integration](#logging-integration)
53
+ - [AWS Lambda Support](#aws-lambda-support)
54
+ - [Performance Considerations](#performance-considerations)
55
+ - [Export Formats](#export-formats)
56
+ - [Thread Safety](#thread-safety)
57
+ - [Resource Attributes](#resource-attributes)
58
+ - [Error Handling](#error-handling)
59
+ - [Compatibility](#compatibility)
60
+ - [Best Practices](#best-practices)
61
+ - [Span Management](#span-management)
62
+ - [Error Handling](#error-handling-1)
63
+ - [Metric Cardinality](#metric-cardinality)
64
+ - [Lambda Functions](#lambda-functions)
65
+ - [Context Propagation](#context-propagation)
66
+ - [Logging](#logging)
67
+ - [Configuration](#configuration-1)
68
+ - [Performance](#performance)
69
+ - [Security](#security)
70
+ - [Testing](#testing)
71
+ - [Async/Await Support](#asyncawait-support)
72
+ - [Database Instrumentation](#database-instrumentation)
73
+ - [Examples](#examples)
74
+ - [Lambda - Send SNS / SQS](#lambda---send-sns--sqs-message)
75
+ - [Lambda - Receive SQS](#lambda-receive-sqs-message)
76
+ - [Lambda - Receive SNS](#lambda-receive-sns-message-record-specific-event)
77
+ - [Flask](#flask)
78
+ - [FastAPI](#fastapi)
79
+ - [PyMySQL Database Instrumentation](#pymysql-database-instrumentation)
80
+ - [Troubleshooting](#troubleshooting)
81
+ - [Testing](#testing-1)
82
+ - [License](#license)
83
+ - [Build and Deploy](#build-and-deploy)
84
+
85
+ ## Installation
86
+
87
+ ```bash
88
+ pip install rebrandly-otel
89
+ ```
90
+
91
+ > **Note**: The SDK automatically initializes when you import it. No manual setup or initialization calls are needed - just `from rebrandly_otel import otel` and start using it!
92
+
93
+ ### Dependencies
94
+
95
+ - `opentelemetry-api`
96
+ - `opentelemetry-sdk`
97
+ - `opentelemetry-exporter-otlp`
98
+ - `opentelemetry-semantic-conventions`
99
+ - `psutil` (for system metrics)
+ - `fastapi` (for the FastAPI integration)
100
+
101
+ ## Quick Start
102
+
103
+ Get started with the Rebrandly OpenTelemetry SDK in under 5 minutes:
104
+
105
+ ```python
106
+ from rebrandly_otel import otel, logger
107
+ from opentelemetry.trace import SpanKind, Status, StatusCode
108
+
109
+ # 1. SDK auto-initializes when imported
110
+ # Optional: Configure via environment variables (see Configuration section)
111
+
112
+ # 2. Create a traced operation using context manager
113
+ def process_order(order_id):
114
+ with otel.span("process-order", attributes={"order.id": order_id}) as span:
115
+ logger.info(f"Processing order {order_id}")
116
+
117
+ # Your business logic here
118
+ save_to_database(order_id)
119
+
120
+ # Exceptions are automatically recorded
121
+ # No need to manually call span.end()
122
+
123
+ # 3. For AWS Lambda functions
124
+ from rebrandly_otel import lambda_handler
125
+
126
+ @lambda_handler(name="order-processor")
127
+ def handler(event, context):
128
+ logger.info("Lambda invoked", extra={"event": event})
129
+ order_id = event.get('orderId')
130
+ process_order(order_id)
131
+ return {'statusCode': 200, 'body': 'Success'}
132
+
133
+ # 4. For Flask applications
134
+ from flask import Flask
135
+ from rebrandly_otel import app_before_request, app_after_request, flask_error_handler
136
+
137
+ app = Flask(__name__)
138
+ app.before_request(app_before_request)
139
+ app.after_request(app_after_request)
140
+ app.register_error_handler(Exception, flask_error_handler)
141
+
142
+ @app.route('/orders/<order_id>')
143
+ def get_order(order_id):
144
+ with otel.span("fetch-order"):
145
+ logger.info(f"Fetching order {order_id}")
146
+ # Your logic here
147
+ return {"order_id": order_id, "status": "shipped"}
148
+ ```
149
+
150
+ **Next Steps:**
151
+ - See [Getting Started](#getting-started) for detailed integration guide
152
+ - Check [Configuration](#configuration) to set up environment variables
153
+ - Explore [Examples](#examples) for framework-specific patterns (Flask, FastAPI, Lambda)
154
+
155
+ ## Getting Started
156
+
157
+ ### Step 1: Install the Package
158
+
159
+ ```bash
160
+ pip install rebrandly-otel
161
+ ```
162
+
163
+ ### Step 2: Configure Environment Variables
164
+
165
+ Create a `.env` file or set environment variables:
166
+
167
+ ```bash
168
+ # Required
169
+ export OTEL_SERVICE_NAME=my-service
170
+ export OTEL_SERVICE_VERSION=1.0.0
171
+
172
+ # Optional - for sending data to an OTLP collector
173
+ export OTEL_EXPORTER_OTLP_ENDPOINT=https://your-collector:4317
174
+
175
+ # Optional - for debugging locally
176
+ export OTEL_DEBUG=true
177
+ ```
178
+
179
+ ### Step 3: Choose Your Integration Pattern
180
+
181
+ #### For Flask Applications
182
+
183
+ ```python
184
+ from flask import Flask
185
+ from rebrandly_otel import otel, logger, app_before_request, app_after_request, flask_error_handler
186
+
187
+ app = Flask(__name__)
188
+
189
+ # Register OTEL handlers - handles ALL telemetry automatically
190
+ app.before_request(app_before_request)
191
+ app.after_request(app_after_request)
192
+ app.register_error_handler(Exception, flask_error_handler)
193
+
194
+ # Your routes - no telemetry code needed!
195
+ @app.route('/api/users')
196
+ def get_users():
197
+ logger.info("Fetching users")
198
+ # Business logic only
199
+ return {"users": []}
200
+
201
+ @app.route('/api/users/<user_id>')
202
+ def get_user(user_id):
203
+ # Add custom spans when needed
204
+ with otel.span("fetch-user-details"):
205
+ logger.info(f"Fetching user {user_id}")
206
+ # Your business logic
207
+ return {"user_id": user_id, "name": "John Doe"}
208
+
209
+ if __name__ == '__main__':
210
+ app.run(debug=True)
211
+ ```
212
+
213
+ #### For FastAPI Applications
214
+
215
+ ```python
216
+ from fastapi import FastAPI, Depends
217
+ from contextlib import asynccontextmanager
218
+ from rebrandly_otel import otel, logger, force_flush
219
+ from rebrandly_otel.fastapi_support import setup_fastapi, get_current_span
220
+
221
+ @asynccontextmanager
222
+ async def lifespan(app: FastAPI):
223
+ logger.info("Application starting up")
224
+ yield
225
+ logger.info("Application shutting down")
226
+ force_flush()
227
+
228
+ app = FastAPI(lifespan=lifespan)
229
+
230
+ # Setup OTEL integration - handles ALL telemetry automatically
231
+ setup_fastapi(otel, app)
232
+
233
+ # Your routes
234
+ @app.get("/api/users")
235
+ async def get_users():
236
+ logger.info("Fetching users")
237
+ return {"users": []}
238
+
239
+ @app.get("/api/users/{user_id}")
240
+ async def get_user(user_id: int, span=Depends(get_current_span)):
241
+ # Add custom spans when needed
242
+ with otel.span("fetch-user-details", attributes={"user.id": user_id}):
243
+ logger.info(f"Fetching user {user_id}")
244
+ # Your business logic
245
+ return {"user_id": user_id, "name": "John Doe"}
246
+
247
+ if __name__ == "__main__":
248
+ import uvicorn
249
+ uvicorn.run(app, host="0.0.0.0", port=8000)
250
+ ```
251
+
252
+ #### For AWS Lambda Functions
253
+
254
+ ```python
255
+ from rebrandly_otel import lambda_handler, logger, otel
256
+
257
+ @lambda_handler(name="user-processor")
258
+ def handler(event, context):
259
+ logger.info("Processing event", extra={"event_type": event.get("eventType")})
260
+
261
+ # Your business logic
262
+ user_id = event.get('userId')
263
+
264
+ # Add custom spans if needed
265
+ with otel.span("process-user", attributes={"user.id": user_id}):
266
+ result = process_user(user_id)
267
+
268
+ return {
269
+ 'statusCode': 200,
270
+ 'body': result
271
+ }
272
+
273
+ def process_user(user_id):
274
+ logger.info(f"Processing user {user_id}")
275
+ # Your business logic here
276
+ return {"processed": True}
277
+ ```
278
+
279
+ #### For Standalone Scripts
280
+
281
+ ```python
282
+ from rebrandly_otel import otel, logger, force_flush, shutdown
283
+
284
+ def main():
285
+ # Create traced operation
286
+ with otel.span("main-operation"):
287
+ logger.info("Starting operation")
288
+
289
+ # Your business logic
290
+ process_data()
291
+
292
+ logger.info("Operation completed")
293
+
294
+ def process_data():
295
+ with otel.span("process-data"):
296
+ # Nested spans are automatically linked
297
+ logger.info("Processing data")
298
+ # Your logic here
299
+
300
+ if __name__ == "__main__":
301
+ try:
302
+ main()
303
+ except Exception as e:
304
+ logger.error(f"Operation failed: {e}", exc_info=True)
305
+ finally:
306
+ # Ensure telemetry is flushed before exit
307
+ force_flush(timeout_millis=5000)
308
+ shutdown()
309
+ ```
310
+
311
+ ### Step 4: Verify It's Working
312
+
313
+ #### Local Debugging
314
+
315
+ Set `OTEL_DEBUG=true` to see telemetry output in your console:
316
+
317
+ ```bash
318
+ OTEL_DEBUG=true python app.py
319
+ ```
320
+
321
+ You should see trace and metric data logged to the console.
322
+
323
+ #### Production Setup
324
+
325
+ Configure `OTEL_EXPORTER_OTLP_ENDPOINT` to point to your OpenTelemetry Collector or backend:
326
+
327
+ ```bash
328
+ export OTEL_EXPORTER_OTLP_ENDPOINT=https://your-collector.example.com:4317
329
+ export OTEL_EXPORTER_OTLP_HEADERS="x-api-key=your-api-key"
330
+ ```
331
+
332
+ Common backends:
333
+ - **Honeycomb**: `https://api.honeycomb.io:443`
334
+ - **Lightstep**: `https://ingest.lightstep.com:443`
335
+ - **Jaeger**: `http://jaeger-collector:4317`
336
+ - **Self-hosted Collector**: `http://localhost:4317`
337
+
338
+ ### Step 5: Add Custom Instrumentation (Optional)
339
+
340
+ Add custom spans, metrics, and logs as needed:
341
+
342
+ ```python
343
+ from rebrandly_otel import otel, logger, meter
344
+ from opentelemetry.trace import Status, StatusCode
+ from datetime import datetime
345
+
346
+ # Custom span with attributes
347
+ with otel.span("custom-operation", attributes={"user.id": user_id}) as span:
348
+ # Add events to the span
349
+ span.add_event("processing_started", {"timestamp": datetime.now().isoformat()})
350
+
351
+ # Your code
352
+ result = do_work()
353
+
354
+ # Add more attributes dynamically
355
+ span.set_attribute("result.count", len(result))
356
+
357
+ # Custom metric
358
+ order_counter = meter.meter.create_counter(
359
+ name="orders.created",
360
+ description="Number of orders created",
361
+ unit="1"
362
+ )
363
+ order_counter.add(1, {"order.type": "standard", "region": "us-east-1"})
364
+
365
+ # Custom histogram for measuring durations
366
+ duration_histogram = meter.meter.create_histogram(
367
+ name="order.processing.duration",
368
+ description="Order processing duration",
369
+ unit="ms"
370
+ )
371
+ duration_histogram.record(123.45, {"order.type": "standard"})
372
+
373
+ # Structured logging with trace correlation
374
+ logger.info("Order processed", extra={
375
+ "order_id": 12345,
376
+ "user_id": 67890,
377
+ "amount": 99.99
378
+ })
379
+ ```
380
+
381
+ ## Configuration
382
+
383
+ The SDK is configured through environment variables:
384
+
385
+ | Variable | Description | Default |
386
+ |------------------------------------|-------------|---------------------------------|
387
+ | `OTEL_SERVICE_NAME` | Service identifier | `default-service-python` |
388
+ | `OTEL_SERVICE_VERSION` | Service version | `1.0.0` |
389
+ | `OTEL_SERVICE_APPLICATION` | Application namespace (groups multiple services under one application) | Fallback to `OTEL_SERVICE_NAME` |
390
+ | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint | `None` |
391
+ | `OTEL_DEBUG` | Enable console debugging | `false` |
392
+ | `OTEL_CAPTURE_REQUEST_BODY` | Enable HTTP request body capture for Flask and FastAPI. Only captures JSON content, with automatic sensitive-data redaction. Set to `false` to disable. | `true` |
393
+ | `OTEL_SPAN_ATTRIBUTES` | Attributes automatically added to all spans (format: `key1=value1,key2=value2`) | `None` |
394
+ | `BATCH_EXPORT_TIME_MILLIS` | Batch export interval | `100` |
395
+ | `ENV` or `ENVIRONMENT` or `NODE_ENV` | Deployment environment | `local` |
396
+
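+ A typical setup might export several of these together (values are illustrative):
+
+ ```bash
+ export OTEL_SERVICE_NAME=orders-api
+ export OTEL_SERVICE_VERSION=2.1.0
+ export OTEL_SERVICE_APPLICATION=orders
+ export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
+ export OTEL_SPAN_ATTRIBUTES="team=backend,environment=production"
+ export BATCH_EXPORT_TIME_MILLIS=100
+ ```
+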
397
+ ## Core Components
398
+
399
+ ### RebrandlyOTEL Class
400
+
401
+ The main entry point for all telemetry operations. Implements a singleton pattern to ensure consistent instrumentation across your application.
402
+
403
+ #### Properties
404
+
405
+ - **`tracer`**: Returns the `RebrandlyTracer` instance for distributed tracing
406
+ - **`meter`**: Returns the `RebrandlyMeter` instance for metrics collection
407
+ - **`logger`**: Returns the configured Python logger with OpenTelemetry integration
408
+
409
+ #### Initialization
410
+
411
+ The SDK auto-initializes as soon as you import it; no explicit setup call is required.
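+
+ For example, the properties above are available directly on the auto-initialized singleton (a minimal sketch):
+
+ ```python
+ from rebrandly_otel import otel
+
+ tracer = otel.tracer  # RebrandlyTracer for distributed tracing
+ meter = otel.meter    # RebrandlyMeter for metrics collection
+ logger = otel.logger  # Python logger with OpenTelemetry integration
+ ```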
412
+
413
+ ### Key Methods
414
+
415
+ #### `span(name, attributes=None, kind=SpanKind.INTERNAL, message=None)`
416
+
417
+ Context manager for creating traced spans with automatic error handling and status management.
418
+
419
+ #### `lambda_handler(name=None, attributes=None, kind=SpanKind.CONSUMER, auto_flush=True, skip_aws_link=True)`
420
+
421
+ Decorator for AWS Lambda functions with automatic instrumentation, metrics collection, and telemetry flushing.
422
+
423
+ #### `aws_message_handler(name=None, attributes=None, kind=SpanKind.CONSUMER, auto_flush=True)`
424
+
425
+ Decorator for processing individual AWS messages (SQS/SNS) with context propagation.
426
+
427
+ #### `aws_message_span(name, message=None, attributes=None, kind=SpanKind.CONSUMER)`
428
+
429
+ Context manager for creating spans from AWS messages with automatic context extraction.
430
+
431
+ #### `force_flush(start_datetime=None, timeout_millis=1000)`
432
+
433
+ Forces all pending telemetry data to be exported. Critical for serverless environments.
434
+
435
+ #### `shutdown()`
436
+
437
+ Gracefully shuts down all OpenTelemetry components.
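+
+ A minimal sketch tying these methods together in a standalone worker (`load_batch` and `process_item` are hypothetical placeholders):
+
+ ```python
+ from rebrandly_otel import otel, force_flush, shutdown
+
+ def run(batch):
+     with otel.span("run-batch", attributes={"batch.size": len(batch)}):
+         for item in batch:
+             with otel.span("process-item"):
+                 process_item(item)  # placeholder for real work
+
+ try:
+     run(load_batch())  # placeholder data source
+ finally:
+     force_flush(timeout_millis=1000)  # the documented default timeout
+     shutdown()
+ ```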
438
+
439
+ ## Built-in Metrics
440
+
441
+ The SDK automatically registers and tracks the following metrics:
442
+
443
+ ### Standard Metrics
444
+
445
+ - **`cpu_usage_percentage`** (Gauge): CPU utilization percentage
446
+ - **`memory_usage_bytes`** (Gauge): Memory usage in bytes
447
+
448
+
449
+ ### Custom Metrics
450
+
451
+ You can create any custom metrics you need using the standard OpenTelemetry metrics API:
452
+
453
+ ```python
454
+ from rebrandly_otel import meter
455
+
456
+ sqs_counter = meter.meter.create_counter(
457
+ name="sqs_sender_counter",
458
+ description="Number of messages sent",
459
+ unit="1"
460
+ )
461
+ sqs_counter.add(1)
462
+ ```
463
+
464
+ ## Tracing Features
465
+
466
+ ### Automatic Context Propagation
467
+
468
+ The SDK automatically extracts and propagates trace context from:
469
+ - AWS SQS message attributes
470
+ - AWS SNS message attributes
471
+ - HTTP headers
472
+ - Custom carriers
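+
+ For a custom carrier, you can fall back to the standard OpenTelemetry propagation API (a sketch; `carrier` is any dict holding W3C trace headers):
+
+ ```python
+ from opentelemetry import trace
+ from opentelemetry.propagate import extract
+
+ def handle(carrier: dict):
+     # e.g. carrier = {"traceparent": "00-<trace-id>-<span-id>-01"}
+     ctx = extract(carrier)
+     tracer = trace.get_tracer(__name__)
+     with tracer.start_as_current_span("handle-message", context=ctx):
+         pass  # processing here continues the upstream trace
+ ```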
473
+
474
+ ### Span Attributes
475
+
476
+ Lambda spans automatically include:
477
+ - `faas.trigger`: Detected trigger type (sqs, sns, api_gateway, etc.)
478
+ - `faas.execution`: AWS request ID
479
+ - `faas.id`: Function ARN
480
+ - `cloud.provider`: Always "aws" for Lambda
481
+ - `cloud.platform`: Always "aws_lambda" for Lambda
482
+
483
+ ## Automatic Span Attributes
484
+
485
+ The SDK supports automatically adding custom attributes to all spans via the `OTEL_SPAN_ATTRIBUTES` environment variable. This is useful for adding metadata that applies to all telemetry in a service, such as team ownership, deployment environment, or version information.
486
+
487
+ ### Configuration
488
+
489
+ Set the `OTEL_SPAN_ATTRIBUTES` environment variable with a comma-separated list of key-value pairs:
490
+
491
+ ```bash
492
+ export OTEL_SPAN_ATTRIBUTES="team=backend,environment=production,version=1.2.3"
493
+ ```
494
+
495
+ ### Behavior
496
+
497
+ - **Universal Application**: Attributes are added to ALL spans, including:
498
+ - Manually created spans (`tracer.start_span()`, `tracer.start_as_current_span()`)
499
+ - Lambda handler spans (`@lambda_handler`)
500
+ - AWS message handler spans (`@aws_message_handler`)
501
+ - Flask/FastAPI middleware spans
502
+ - Auto-instrumented spans (database queries, HTTP requests, etc.)
503
+
504
+ - **Format**: Same as `OTEL_RESOURCE_ATTRIBUTES` - comma-separated `key=value` pairs
505
+ - **Value Handling**: Supports values containing `=` characters (e.g., URLs)
506
+ - **Whitespace**: Leading/trailing whitespace is automatically trimmed
507
+
508
+ ### Example
509
+
510
+ ```python
511
+ import os
512
+
513
+ # Set environment variable
514
+ os.environ['OTEL_SPAN_ATTRIBUTES'] = "team=backend,service.owner=platform-team,deployment.region=us-east-1"
515
+
516
+ # Initialize SDK
517
+ from rebrandly_otel import otel, logger
518
+
519
+ # Create any span - attributes are added automatically
520
+ with otel.span('my-operation'):
521
+ logger.info('Processing request')
522
+ # The span will include:
523
+ # - team: "backend"
524
+ # - service.owner: "platform-team"
525
+ # - deployment.region: "us-east-1"
526
+ # ... plus any other attributes you set manually
527
+ ```
528
+
529
+ ### Use Cases
530
+
531
+ - **Team/Ownership Tagging**: `team=backend,owner=john@example.com`
532
+ - **Environment Metadata**: `environment=production,region=us-east-1,availability_zone=us-east-1a`
533
+ - **Version Tracking**: `version=1.2.3,build=12345,commit=abc123def`
534
+ - **Cost Attribution**: `cost_center=engineering,project=customer-api`
535
+ - **Multi-Tenancy**: `tenant=acme-corp,customer_tier=enterprise`
536
+
537
+ ### Difference from OTEL_RESOURCE_ATTRIBUTES
538
+
539
+ - **OTEL_RESOURCE_ATTRIBUTES**: Service-level metadata (set once, applies to the entire service instance)
540
+ - **OTEL_SPAN_ATTRIBUTES**: Span-level metadata (added to each individual span at creation time)
541
+
542
+ Both use the same format but serve different purposes in the OpenTelemetry data model.
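+
+ For instance, the two can be set side by side (illustrative values):
+
+ ```bash
+ export OTEL_RESOURCE_ATTRIBUTES="service.namespace=platform"        # once per service instance
+ export OTEL_SPAN_ATTRIBUTES="team=backend,environment=production"   # stamped on every span
+ ```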
543
+
544
+ ## Span Filtering for Cost Optimization
545
+
546
+ For high-volume services that generate massive amounts of telemetry data, the SDK supports span filtering to reduce ingestion costs while maintaining visibility into critical errors. This feature works with the OpenTelemetry Collector's tail sampling policies.
547
+
548
+ ### Use Case: Errors-Only Filtering
549
+
550
+ High-traffic services (APIs, event processors, background workers) often generate thousands of successful spans that provide little debugging value. By enabling errors-only filtering, you can:
551
+
552
+ - **Reduce ingestion costs** by 90-99% (only error spans are stored)
553
+ - **Maintain error visibility** - all errors are still captured and traced
554
+ - **Preserve spanmetrics** - metrics are generated at the agent level before filtering
555
+
556
+ ### Configuration
557
+
558
+ Set the `span.filter=errors-only` attribute using `OTEL_SPAN_ATTRIBUTES`:
559
+
560
+ ```bash
561
+ export OTEL_SPAN_ATTRIBUTES="span.filter=errors-only"
562
+ ```
563
+
564
+ **In serverless.yml (for Lambda functions):**
565
+ ```yaml
566
+ provider:
567
+ environment:
568
+ OTEL_SERVICE_NAME: high-volume-api
569
+ OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
570
+ OTEL_SPAN_ATTRIBUTES: "span.filter=errors-only"
571
+ ```
572
+
573
+ ### How It Works
574
+
575
+ 1. **Application Level**: The `OTEL_SPAN_ATTRIBUTES` environment variable adds `span.filter=errors-only` to **all spans** created by your service
576
+
577
+ 2. **Agent Level**: The OTEL Agent receives all traces and generates spanmetrics (request counts, latencies, etc.) from 100% of traces
578
+
579
+ 3. **Gateway Level - Two-Stage Filtering**:
580
+
581
+ **Stage 1 - Filter Processor** (runs first):
582
+ - Drops spans with `span.filter=errors-only` AND status is NOT ERROR
583
+ - Passes all other spans to the next stage
584
+
585
+ **Stage 2 - Tail Sampling** (runs second):
586
+ - Applies policy-based sampling to remaining spans
587
+ - Errors-only-filter policy: Samples error spans from errors-only services
588
+ - General errors-policy: Samples all other error spans
589
+ - Randomized-policy: Samples 30% of remaining successful spans (from services WITHOUT errors-only filter)
590
+
591
+ 4. **Backend**: Only error traces from errors-only services are stored. Services without the filter get 30% of successful spans + all errors sampled.
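+
+ For reference, the stage-1 drop rule could be expressed with the collector's filter processor roughly as follows (an illustrative sketch using OTTL; actual pipeline layout and collector versions will differ):
+
+ ```yaml
+ processors:
+   filter/errors-only:
+     error_mode: ignore
+     traces:
+       span:
+         # Drop spans that opted into errors-only filtering and did not error
+         - attributes["span.filter"] == "errors-only" and status.code != STATUS_CODE_ERROR
+ ```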
592
+
593
+ ### Example
594
+
595
+ ```python
596
+ import os
597
+ from rebrandly_otel import otel, logger
598
+
599
+ # Enable errors-only filtering
600
+ os.environ['OTEL_SPAN_ATTRIBUTES'] = "span.filter=errors-only"
601
+
602
+ from rebrandly_otel import lambda_handler
603
+
604
+ @lambda_handler(name="high-volume-processor")
605
+ def handler(event, context):
606
+ logger.info("Processing event")
607
+
608
+ # Successful execution - span will be dropped at gateway
609
+ if process_successfully(event):
610
+ return {'statusCode': 200, 'body': 'Success'}
611
+
612
+ # Error case - span will be sampled and stored
613
+ raise Exception("Processing failed") # This trace will be captured!
614
+ ```
615
+
616
+ ### When to Use Errors-Only Filtering
617
+
618
+ ✅ **Good candidates for errors-only filtering:**
619
+ - High-traffic APIs (> 1000 requests/second)
620
+ - Background job processors with high success rates
621
+ - Event streaming consumers (SQS, Kinesis, Kafka)
622
+ - Scheduled tasks that run frequently
623
+ - Health check endpoints
624
+
625
+ ❌ **Not recommended for:**
626
+ - Low-traffic services (< 100 requests/minute)
627
+ - Services in active development or debugging
628
+ - Critical payment/transaction processing (you may want to sample successful traces too)
629
+ - Services where latency analysis of successful requests is important
630
+
631
+ ### Combining with Other Filters
632
+
633
+ You can combine `span.filter` with other span attributes:
634
+
635
+ ```bash
636
+ export OTEL_SPAN_ATTRIBUTES="span.filter=errors-only,team=backend,environment=production"
637
+ ```
638
+
639
+ All attributes are added to every span, enabling rich filtering and querying capabilities.
640
+
641
+ ### Monitoring and Validation
642
+
643
+ To verify errors-only filtering is working:
644
+
645
+ 1. **Check spanmetrics in your metrics backend** - should show 100% of requests
646
+ 2. **Check traces in your tracing backend** - should only show error traces
647
+ 3. **Calculate sampling rate**: `trace_count / metric_request_count` should be very low (< 1%)
648
+
649
+ ```python
650
+ # Add custom metric to track sampling
651
+ from rebrandly_otel import meter
652
+
653
+ sampling_counter = meter.meter.create_counter(
654
+ name="spans.filtered",
655
+ description="Spans filtered by errors-only policy",
656
+ unit="1"
657
+ )
658
+
659
+ # Increment on successful operations (these will be dropped)
660
+ sampling_counter.add(1, {"filter": "errors-only", "outcome": "success"})
661
+ ```
662
+
663
+ ### Cost Savings Example
664
+
665
+ **Before (all traces sampled):**
666
+ - Service: 10,000 requests/minute
667
+ - Error rate: 0.5%
668
+ - Traces stored: 14.4 million/day
669
+ - Cost: ~$500/month (at $0.10 per 100K spans)
670
+
671
+ **After (errors-only filtering):**
672
+ - Service: 10,000 requests/minute
673
+ - Error rate: 0.5%
674
+ - Traces stored: 72,000/day (only errors)
675
+ - Cost: ~$2.50/month (at $0.10 per 100K spans)
676
+
677
+ **Savings: 99.5% reduction, ~$497/month saved** 💰
678
+
679
+ ### Troubleshooting
680
+
681
+ **Issue**: No traces appearing in backend after enabling errors-only filtering
682
+
683
+ **Solution**: Verify errors are actually occurring. Test by intentionally raising an exception:
684
+ ```python
685
+ # Temporary test endpoint
686
+ @app.route('/test-error')
687
+ def test_error():
688
+ raise Exception("Test error for trace validation")
689
+ ```
690
+
691
+ **Issue**: Successful traces still appearing in backend
692
+
693
+ **Solution**:
694
+ 1. Verify `OTEL_SPAN_ATTRIBUTES` is set correctly
695
+ 2. Check OTEL Collector gateway configuration has the errors-only policy
696
+ 3. Ensure your collector version supports the tail sampling `and` policy type
697
+
698
+ ### Exception Handling
699
+
700
+ Spans automatically capture exceptions with:
701
+ - Full exception details and stack traces
702
+ - Automatic status code setting
703
+ - Exception events in the span timeline
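+
+ A minimal demonstration of this behavior:
+
+ ```python
+ from rebrandly_otel import otel
+
+ try:
+     with otel.span("may-fail"):
+         raise ValueError("boom")  # recorded as an exception event; span status set to ERROR
+ except ValueError:
+     pass  # the exception still propagates to the caller
+ ```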
704
+
705
+ ## Logging Integration
706
+
707
+ The SDK integrates with Python's standard logging module:
708
+
709
+ ```python
710
+ from rebrandly_otel import logger
711
+
712
+ # Use as a standard Python logger
713
+ logger.info("Processing started", extra={"request_id": "123"})
714
+ logger.error("Processing failed", exc_info=True)
715
+ ```
716
+
717
+ Features:
718
+ - Automatic trace context injection
719
+ - Structured logging support
720
+ - Console and OTLP export
721
+ - Log level configuration via environment
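+
+ For example, using the `LOG_LEVEL` variable shown in the environment examples later in this README:
+
+ ```bash
+ export LOG_LEVEL=DEBUG   # e.g. DEBUG, INFO, WARNING, ERROR
+ ```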
722
+
723
+ ## AWS Lambda Support
724
+
725
+ ### Trigger Detection
726
+
727
+ Automatically detects and labels Lambda triggers:
728
+ - API Gateway (v1 and v2)
729
+ - SQS
730
+ - SNS
731
+ - S3
732
+ - Kinesis
733
+ - DynamoDB
734
+ - EventBridge
735
+ - Batch
736
+
737
+ ### Automatic Metrics
738
+
739
+ For Lambda functions, the SDK automatically captures:
740
+ - Memory usage
741
+ - CPU utilization
742
+
743
+ ### Context Extraction
744
+
745
+ Automatically extracts trace context from:
746
+ - SQS MessageAttributes
747
+ - SNS MessageAttributes (including nested format)
748
+ - Custom message attributes
749
+
750
+ ## Performance Considerations
751
+
752
+ ### Batch Processing
753
+
754
+ The SDK uses batch processing to optimize network usage and reduce overhead:
755
+
756
+ ```python
757
+ import os
758
+
759
+ # Configure batch export interval (milliseconds)
760
+ os.environ['BATCH_EXPORT_TIME_MILLIS'] = '100' # Default: 100ms
761
+
762
+ # Faster flushing for Lambda (reduce cold start impact)
763
+ os.environ['BATCH_EXPORT_TIME_MILLIS'] = '50' # Flush every 50ms
764
+
765
+ # Slower flushing for high-throughput apps (better batching)
766
+ os.environ['BATCH_EXPORT_TIME_MILLIS'] = '200' # Flush every 200ms
767
+ ```
768
+
769
+ **Trade-offs:**
770
+ - **Lower values (50ms)**: Faster data delivery, higher network overhead, better for serverless
771
+ - **Higher values (200ms)**: Better batching, lower overhead, risk of data loss on crashes
772
+
773
+ ### Lambda Optimization
774
+
775
+ For AWS Lambda functions, the SDK is specifically optimized:
776
+
777
+ **Cold Start Impact**: < 50ms
778
+ - Lazy initialization of exporters
779
+ - Minimal import overhead
780
+ - Efficient resource allocation
781
+
782
+ **Memory Usage**: ~20-30 MB additional
783
+ - Efficient span buffering
784
+ - Automatic cleanup on function freeze
785
+ - No memory leaks in long-running containers
786
+
787
+ **Best Practices for Lambda**:
788
+ ```python
789
+ from rebrandly_otel import lambda_handler, force_flush
790
+
791
+ @lambda_handler(name="my-function", auto_flush=True)
792
+ def handler(event, context):
793
+ # auto_flush=True ensures telemetry is exported
794
+ # Add 2-3 seconds to timeout for flush buffer
795
+ return process_event(event)
796
+
797
+ # For manual control
798
+ @lambda_handler(name="my-function", auto_flush=False)
799
+ def handler(event, context):
800
+ result = process_event(event)
801
+ force_flush(timeout_millis=2000) # Explicit flush with 2s timeout
802
+ return result
803
+ ```
804
+
805
+ ### Sampling Strategies
806
+
807
+ For high-traffic applications, implement sampling to reduce overhead:
808
+
809
+ ```python
810
+ from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio, ALWAYS_ON
811
+
812
+ # Sample 10% of traces
813
+ sampler = ParentBasedTraceIdRatio(0.1)
814
+
815
+ # Use ALWAYS_ON for low-traffic or critical services
816
+ sampler = ALWAYS_ON
817
+ ```
818
+
819
+ ### Metric Cardinality Management
820
+
821
+ Avoid high-cardinality attributes that create too many metric series:
822
+
823
+ ```python
824
+ from rebrandly_otel import meter
825
+
826
+ # ❌ Bad: Creates millions of unique metric series
827
+ order_counter = meter.meter.create_counter("orders.processed")
828
+ order_counter.add(1, {"user_id": "12345", "order_id": "98765"}) # Too many combinations!
829
+
830
+ # ✅ Good: Limited cardinality
831
+ order_counter = meter.meter.create_counter("orders.processed")
832
+ order_counter.add(1, {
833
+ "order.type": "standard", # Only a few types
834
+ "region": "us-east-1", # Limited regions
835
+ "tier": "premium" # Few tier values
836
+ })
837
+ ```
838
+
839
+ **Cardinality Guidelines:**
840
+ - Keep attribute combinations under 1000 per metric
841
+ - Use aggregations in your application layer for high-cardinality data
842
+ - Monitor metric series count in your backend
843
+
844
+ ### Span Attributes Best Practices
845
+
846
+ Optimize span attributes for performance and cost:
847
+
848
+ ```python
849
+ from rebrandly_otel import otel
850
+
851
+ # ✅ Good: Reasonable attribute size
852
+ with otel.span("process-order", attributes={
853
+ "order.id": "12345",
854
+ "user.id": "67890",
855
+ "order.total": 99.99
856
+ }) as span:
857
+ process_order()
858
+
859
+ # ❌ Bad: Large payloads in attributes
860
+ with otel.span("process-order", attributes={
861
+ "order.full_details": json.dumps(order), # Could be huge!
862
+ "request.body": request_body # Potentially large
863
+ }) as span:
864
+ process_order()
865
+ ```
866
+
867
+ **Guidelines:**
868
+ - Keep individual attributes under 1KB
869
+ - Avoid storing full payloads in attributes
870
+ - Use events for detailed debugging data
871
+ - Leverage `OTEL_CAPTURE_REQUEST_BODY=false` to disable body capture
872
+
873
+ ### Database Query Optimization
874
+
875
+ Minimize overhead from database instrumentation:
876
+
877
+ ```python
878
+ from rebrandly_otel import instrument_pymysql
879
+
880
+ # Configure slow query threshold to reduce span volume
881
+ connection = instrument_pymysql(otel, connection, options={
882
+ 'slow_query_threshold_ms': 1000, # Only flag queries > 1s
883
+ 'capture_bindings': False # Disable parameter capture (faster)
884
+ })
885
+ ```
886
+
887
+ ### Thread Pool and Async Considerations
888
+
889
+ The SDK is thread-safe and works with async code:
890
+
891
+ ```python
892
+ import asyncio
893
+ from rebrandly_otel import otel, logger
894
+
895
+ async def async_operation():
896
+ # Context is automatically propagated in async functions
897
+ with otel.span("async-work"):
898
+ await asyncio.sleep(0.1)
899
+ logger.info("Async work completed")
900
+
901
+ # Multiple concurrent operations
902
+ async def main():
903
+ await asyncio.gather(
904
+ async_operation(),
905
+ async_operation(),
906
+ async_operation()
907
+ )
908
+ ```
909
+
910
+ ### Memory Management
911
+
912
+ Monitor and optimize memory usage:
913
+
914
+ ```python
915
+ import psutil
916
+ from rebrandly_otel import otel, meter, logger
917
+
918
+ # Observable gauge callbacks receive CallbackOptions and yield Observations
+ from opentelemetry.metrics import CallbackOptions, Observation
+
+ def memory_usage_callback(options: CallbackOptions):
+     yield Observation(psutil.Process().memory_info().rss)
+
+ # Create memory gauge for monitoring
+ memory_gauge = meter.meter.create_observable_gauge(
+     name="process.memory.used",
+     callbacks=[memory_usage_callback],
922
+ description="Process memory usage",
923
+ unit="bytes"
924
+ )
925
+
926
+ # Check span buffer size
927
+ def check_telemetry_overhead():
928
+ process = psutil.Process()
929
+ mem_before = process.memory_info().rss
930
+
931
+ # Create 1000 spans
932
+ for i in range(1000):
933
+ with otel.span(f"test-{i}"):
934
+ pass
935
+
936
+ mem_after = process.memory_info().rss
937
+ overhead = (mem_after - mem_before) / 1000
938
+ logger.info(f"Per-span memory overhead: {overhead} bytes")
939
+ ```
940
+
941
+ ### Production Optimization Checklist
942
+
943
+ - [ ] Configure appropriate `BATCH_EXPORT_TIME_MILLIS` for your workload
944
+ - [ ] Implement sampling for high-traffic services (> 100 req/s)
945
+ - [ ] Keep metric cardinality under 1000 combinations per metric
946
+ - [ ] Monitor memory usage and adjust batch settings if needed
947
+ - [ ] Use `OTEL_DEBUG=false` in production (significant performance impact)
948
+ - [ ] Set appropriate timeout buffers for Lambda functions (add 2-3s)
949
+ - [ ] Review and limit span attribute sizes (< 1KB per attribute)
950
+ - [ ] Disable request body capture if not needed (`OTEL_CAPTURE_REQUEST_BODY=false`)
951
+ - [ ] Use connection pooling for database instrumentation
952
+ - [ ] Monitor OTLP exporter queue depth and adjust batch settings
953
+
954
+ ## Export Formats
955
+
956
+ ### Supported Exporters
957
+
958
+ - **OTLP/gRPC**: Primary export format for production
959
+ - **Console**: Available for local development and debugging
960
+
961
+ ## Thread Safety
962
+
963
+ All components are thread-safe and can be used in multi-threaded applications:
964
+ - Singleton pattern ensures single initialization
965
+ - Thread-safe metric recording
966
+ - Concurrent span creation support
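+
+ For example, spans can be created from multiple worker threads (a minimal sketch; spans started in threads begin their own traces unless you propagate context explicitly):
+
+ ```python
+ from concurrent.futures import ThreadPoolExecutor
+ from rebrandly_otel import otel, logger
+
+ def worker(task_id: int):
+     # Concurrent span creation is safe across threads
+     with otel.span("worker-task", attributes={"task.id": task_id}):
+         logger.info(f"Task {task_id} running")
+
+ with ThreadPoolExecutor(max_workers=4) as pool:
+     list(pool.map(worker, range(8)))
+ ```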
967
+
968
+ ## Resource Attributes
969
+
970
+ Automatically includes:
971
+ - Service name and version
972
+ - Python runtime version
973
+ - Deployment environment
974
+ - Custom resource attributes via environment
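+
+ Custom resource attributes use the standard `OTEL_RESOURCE_ATTRIBUTES` variable (illustrative values):
+
+ ```bash
+ export OTEL_RESOURCE_ATTRIBUTES="host.type=container,cloud.region=us-east-1"
+ ```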
975
+
976
+ ## Error Handling
977
+
978
+ - Graceful degradation when OTLP endpoint unavailable
979
+ - Non-blocking telemetry operations
980
+ - Automatic retry with exponential backoff
981
+ - Comprehensive error logging
982
+
983
+ ## Compatibility
984
+
985
+ - Python 3.7+
986
+ - AWS Lambda runtime support
987
+ - Compatible with OpenTelemetry Collector
988
+ - Works with any OTLP-compatible backend
989
+
990
+ ## Best Practices
991
+
992
+ ### Span Management
993
+
994
+ **1. Use Context Managers for Automatic Cleanup**
995
+ ```python
996
+ from rebrandly_otel import otel, logger
997
+
998
+ # ✅ Good: Context manager automatically ends span
999
+ with otel.span("process-order", attributes={"order.id": order_id}):
1000
+ process_order(order_id)
1001
+ # Span automatically ended, even if exception occurs
1002
+
1003
+ # ⚠️ Manual span management (less preferred)
1004
+ span = otel.tracer.start_span("process-order")
1005
+ try:
1006
+ process_order(order_id)
1007
+ finally:
1008
+ span.end() # Must remember to end
1009
+ ```
1010
+
1011
+ **2. Use Meaningful Span Names**
1012
+ ```python
1013
+ # ✅ Good: Descriptive, operation-focused names
1014
+ with otel.span("fetch-user-profile"):
1015
+ pass
1016
+ with otel.span("validate-payment"):
1017
+ pass
1018
+ with otel.span("send-email-notification"):
1019
+ pass
1020
+
1021
+ # ❌ Bad: Vague or implementation-focused names
1022
+ with otel.span("function1"):
1023
+ pass
1024
+ with otel.span("handler"):
1025
+ pass
1026
+ with otel.span("process"):
1027
+ pass
1028
+ ```
1029
+
1030
+ **3. Add Contextual Attributes**
1031
+ ```python
1032
+ with otel.span("create-order", attributes={
1033
+ # Business context
1034
+ "order.id": order_id,
1035
+ "user.id": user_id,
1036
+ "order.total": total_amount,
1037
+ "payment.method": payment_method,
1038
+ # Technical context
1039
+ "db.system": "postgresql",
1040
+ "http.method": "POST"
1041
+ }) as span:
1042
+ # Can add more attributes dynamically
1043
+ span.set_attribute("order.items_count", len(items))
1044
+ ```
1045
+
1046
+ **4. Record Exceptions Properly**
1047
+ ```python
1048
+ from opentelemetry.trace import Status, StatusCode
1049
+
1050
+ with otel.span("risky-operation") as span:
1051
+ try:
1052
+ risky_operation()
1053
+ except Exception as e:
1054
+ span.record_exception(e)
1055
+ span.set_status(Status(StatusCode.ERROR, str(e)))
1056
+ raise # Re-raise after recording
1057
+ ```
1058
+
1059
+ ### Error Handling
1060
+
1061
+ **1. Distinguish Error Types**
1062
+ ```python
1063
+ from rebrandly_otel import otel, logger
1064
+ from opentelemetry.trace import Status, StatusCode
1065
+
1066
+ with otel.span("process-payment") as span:
1067
+ try:
1068
+ process_payment(amount)
1069
+ span.set_status(Status(StatusCode.OK))
1070
+ except ValidationError as e:
1071
+ # Client errors (4xx) - not span errors
1072
+ span.set_attribute("error.validation", str(e))
1073
+ span.set_status(Status(StatusCode.OK)) # Business logic, not system error
1074
+ raise
1075
+ except Exception as e:
1076
+ # Server errors (5xx) - mark span as error
1077
+ span.record_exception(e)
1078
+ span.set_status(Status(StatusCode.ERROR, str(e)))
1079
+ logger.error(f"Payment processing failed: {e}")
1080
+ raise
1081
+ ```
1082
+
1083
+ ### Metric Cardinality
1084
+
1085
+ **1. Limit Attribute Values**
1086
+ ```python
1087
+ from rebrandly_otel import meter
1088
+
1089
+ # ❌ Bad: Unbounded cardinality
1090
+ request_counter.add(1, {
1091
+ "user.id": user_id, # Millions of users!
1092
+ "request.id": request_id, # Every request unique!
1093
+ "timestamp": time.time() # Always unique!
1094
+ })
1095
+
1096
+ # ✅ Good: Bounded cardinality
1097
+ request_counter.add(1, {
1098
+ "http.method": "GET", # ~10 values
1099
+ "http.route": "/api/users", # Hundreds of routes
1100
+ "http.response.status_code": 200 # ~50 status codes
1101
+ })
1102
+ ```
1103
+
1104
+ **2. Aggregate High-Cardinality Data**
1105
+ ```python
1106
+ # Store detailed data in spans, aggregate in metrics
1107
+ with otel.span("process-order", attributes={
1108
+ "user.id": user_id, # Detailed (high cardinality OK in spans)
1109
+ "order.id": order_id
1110
+ }) as span:
1111
+ # Aggregated attributes in metrics (low cardinality)
1112
+ order_counter.add(1, {
1113
+ "user.tier": get_user_tier(user_id), # bronze/silver/gold
1114
+ "order.category": get_category(order) # electronics/clothing/food
1115
+ })
1116
+ ```
1117
+
1118
+ ### Lambda Functions
1119
+
1120
+ **1. Always Flush Before Exit**
1121
+ ```python
1122
+ from rebrandly_otel import lambda_handler, force_flush
1123
+
1124
+ # Using decorator (auto-flush enabled by default)
1125
+ @lambda_handler(name="my-function")
1126
+ def handler(event, context):
1127
+ # Your code
1128
+ return response
1129
+
1130
+ # Manual flush if needed
1131
+ @lambda_handler(name="my-function", auto_flush=False)
1132
+ def handler(event, context):
1133
+ result = process(event)
1134
+ force_flush(timeout_millis=5000) # Flush with 5s timeout
1135
+ return result
1136
+ ```
1137
+
1138
+ **2. Add Buffer to Timeout**
1139
+ ```python
1140
+ # If Lambda timeout is 30s, set function timeout to 27s
1141
+ # Reserve 3s for telemetry flush
1142
+
1143
+ import time
1144
+
1145
+ LAMBDA_TIMEOUT_SEC = 30
1146
+ FLUSH_BUFFER_SEC = 3
1147
+ FUNCTION_TIMEOUT_SEC = LAMBDA_TIMEOUT_SEC - FLUSH_BUFFER_SEC
1148
+
1149
+ @lambda_handler(name="my-function")
1150
+ def handler(event, context):
1151
+ deadline = time.time() + FUNCTION_TIMEOUT_SEC
1152
+
1153
+ # Check timeout during processing
1154
+ if time.time() > deadline:
1155
+ raise TimeoutError("Function timeout approaching")
1156
+
1157
+ return process_with_timeout(event, deadline)
1158
+ ```
1159
+
1160
+ ### Context Propagation
1161
+
1162
+ **1. Propagate Context in HTTP Calls**
1163
+ ```python
1164
+ import requests
1165
+ from opentelemetry.propagate import inject
1166
+
1167
+ def call_downstream(url, data):
1168
+ # Extract current context and inject into headers
1169
+ headers = {}
1170
+ inject(headers) # Automatically adds traceparent header
1171
+
1172
+ # Make request with trace headers
1173
+ response = requests.post(url, json=data, headers=headers)
1174
+ return response.json()
1175
+ ```
1176
+
1177
+ **2. Propagate Context in AWS Messages**
1178
+ ```python
1179
+ import boto3
1180
+ import json
1181
+ from rebrandly_otel import otel
1182
+
1183
+ sqs = boto3.client('sqs')
1184
+
1185
+ # Get trace context for message attributes
1186
+ trace_attrs = otel.tracer.get_attributes_for_aws_from_context()
1187
+
1188
+ # Send message with trace context
1189
+ sqs.send_message(
1190
+ QueueUrl=queue_url,
1191
+ MessageBody=json.dumps(data),
1192
+ MessageAttributes=trace_attrs # Automatic context injection
1193
+ )
1194
+ ```
1195
+
1196
+ ### Logging
1197
+
1198
+ **1. Use Structured Logging**
1199
+ ```python
1200
+ from rebrandly_otel import logger
1201
+
1202
+ # ✅ Good: Structured with context
1203
+ logger.info("Order processed", extra={
1204
+ "order_id": order.id,
1205
+ "user_id": user.id,
1206
+ "amount": order.total,
1207
+ "duration": processing_time
1208
+ })
1209
+
1210
+ # ❌ Bad: String formatting
1211
+ logger.info(f"Order {order.id} processed for user {user.id} with amount {order.total}")
1212
+ ```
1213
+
1214
+ **2. Log at Appropriate Levels**
1215
+ ```python
1216
+ logger.debug("Entering function", extra={"function": "process_order"}) # Development only
1217
+ logger.info("Order created", extra={"order_id": 123}) # Normal operations
1218
+ logger.warning("Slow query detected", extra={"duration": 2.0}) # Performance issues
1219
+ logger.error("Payment failed", extra={"error": str(e)}, exc_info=True) # Errors with traceback
1220
+ ```
1221
+
1222
+ ### Configuration
1223
+
1224
+ **1. Use Environment-Specific Settings**
1225
+ ```bash
1226
+ # .env.development
1227
+ export OTEL_DEBUG=true
1228
+ export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
1229
+ export BATCH_EXPORT_TIME_MILLIS=50
1230
+ export LOG_LEVEL=DEBUG
1231
+
1232
+ # .env.production
1233
+ export OTEL_DEBUG=false
1234
+ export OTEL_EXPORTER_OTLP_ENDPOINT=https://collector.prod.example.com:4317
1235
+ export BATCH_EXPORT_TIME_MILLIS=100
1236
+ export LOG_LEVEL=INFO
1237
+ export OTEL_SPAN_ATTRIBUTES=environment=production,team=backend
1238
+ ```
1239
+
1240
+ **2. Don't Hardcode Service Names**
1241
+ ```python
1242
+ # ❌ Bad: Hardcoded in code
1243
+ os.environ['OTEL_SERVICE_NAME'] = 'my-service'
1244
+
1245
+ # ✅ Good: Set via environment
1246
+ # In Dockerfile or deployment config:
1247
+ # ENV OTEL_SERVICE_NAME=my-service
1248
+ # ENV OTEL_SERVICE_VERSION=1.2.3
1249
+ ```
1250
+
1251
+ ### Performance
1252
+
1253
+ **1. Implement Sampling for High-Traffic Services**
1254
+ ```python
1255
+ from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio, ALWAYS_ON
1256
+
1257
+ # Sample 10% of traces for high-traffic services (> 1000 req/s)
1258
+ sampler = ParentBasedTraceIdRatio(0.1)
1259
+
1260
+ # Use ALWAYS_ON for low-traffic or critical services
1261
+ sampler = ALWAYS_ON
1262
+ ```
1263
+
1264
+ **2. Disable Debug Mode in Production**
1265
+ ```bash
1266
+ # Significant performance impact!
1267
+ export OTEL_DEBUG=false # In production
1268
+
1269
+ # Only enable for troubleshooting specific issues
1270
+ ```
1271
+
1272
+ **3. Monitor Telemetry Overhead**
1273
+ ```python
1274
+ from rebrandly_otel import meter
1275
+ import time
1276
+
1277
+ # Track telemetry overhead
1278
+ telemetry_duration = meter.meter.create_histogram(
1279
+ name="telemetry.overhead",
1280
+ description="Time spent on telemetry operations",
1281
+ unit="ms"
1282
+ )
1283
+
1284
+ start = time.time()
1285
+ # Your telemetry operation
1286
+ overhead = (time.time() - start) * 1000
1287
+ telemetry_duration.record(overhead)
1288
+ ```
1289
+
1290
+ ### Security
1291
+
1292
+ **1. Sanitize Sensitive Data**
1293
+ ```python
1294
+ from rebrandly_otel import logger
1295
+
1296
+ # ❌ Bad: Logging sensitive data
1297
+ logger.info("User login", extra={"username": username, "password": password})
1298
+
1299
+ # ✅ Good: Exclude sensitive data
1300
+ logger.info("User login", extra={
1301
+ "username": username,
1302
+ "password_provided": bool(password)
1303
+ })
1304
+
1305
+ # SDK automatically redacts sensitive fields when OTEL_CAPTURE_REQUEST_BODY=true
1306
+ ```
1307
+
1308
+ **2. Use Secure Connections**
1309
+ ```bash
1310
+ # ✅ Good: TLS endpoint
1311
+ export OTEL_EXPORTER_OTLP_ENDPOINT=https://collector.example.com:4317
1312
+
1313
+ # ⚠️ Caution: Only use HTTP for local development
1314
+ export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
1315
+ ```
1316
+
1317
+ ### Testing
1318
+
1319
+ **1. Disable Telemetry in Tests**
1320
+ ```python
1321
+ # conftest.py or test setup
1322
+ import os
1323
+ import pytest
1324
+
1325
+ @pytest.fixture(autouse=True)
1326
+ def disable_telemetry():
1327
+ os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = ''
1328
+ os.environ['OTEL_DEBUG'] = 'false'
1329
+
1330
+ # Or mock the SDK
1331
+ from unittest.mock import Mock, patch
1332
+
1333
+ @patch('rebrandly_otel.otel')
1334
+ @patch('rebrandly_otel.logger')
1335
+ def test_my_function(mock_logger, mock_otel):
1336
+ mock_otel.span.return_value.__enter__ = Mock()
1337
+ mock_otel.span.return_value.__exit__ = Mock()
1338
+ # Your test
1339
+ ```
1340
+
1341
+ **2. Test Telemetry Integration Separately**
1342
+ ```python
1343
+ # test_telemetry_integration.py
1344
+ from rebrandly_otel import otel
1345
+ import os
1346
+
1347
+ def test_span_creation():
1348
+ os.environ['OTEL_DEBUG'] = 'true'
1349
+ with otel.span("test-span"):
1350
+ pass
1351
+ # Verify span was created
1352
+ ```
1353
+
1354
+ ### Async/Await Support
1355
+
1356
+ **1. Use Spans with Async Functions**
1357
+ ```python
1358
+ import asyncio
1359
+ from rebrandly_otel import otel, logger
1360
+
1361
+ async def async_operation():
1362
+ # Context is automatically propagated in async functions
1363
+ with otel.span("async-work"):
1364
+ await asyncio.sleep(0.1)
1365
+ logger.info("Async work completed")
1366
+
1367
+ # Multiple concurrent operations
1368
+ async def main():
1369
+ await asyncio.gather(
1370
+ async_operation(),
1371
+ async_operation(),
1372
+ async_operation()
1373
+ )
1374
+
1375
+ asyncio.run(main())
1376
+ ```
1377
+
1378
+ ### Database Instrumentation
1379
+
1380
+ **1. Always Instrument Connections**
1381
+ ```python
1382
+ import pymysql
1383
+ from rebrandly_otel import otel, instrument_pymysql
1384
+
1385
+ # Create connection
1386
+ connection = pymysql.connect(
1387
+ host='localhost',
1388
+ user='user',
1389
+ password='password',
1390
+ database='mydb'
1391
+ )
1392
+
1393
+ # Instrument the connection
1394
+ connection = instrument_pymysql(otel, connection, options={
1395
+ 'slow_query_threshold_ms': 1000, # Flag slow queries
1396
+ 'capture_bindings': False # Disable for performance/security
1397
+ })
1398
+
1399
+ # All queries now automatically traced
1400
+ with connection.cursor() as cursor:
1401
+ cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
1402
+ ```
1403
+
1404
+ ## Examples
1405
+
1406
+ ### Lambda - Send SNS / SQS message
1407
+ ```python
1408
+ import os
1409
+ import json
1410
+ import boto3
1411
+ from rebrandly_otel import otel, lambda_handler, logger
1412
+
1413
+ sqs = boto3.client('sqs')
1414
+ QUEUE_URL = os.environ.get('SQS_URL')
1415
+
1416
+ @lambda_handler("sqs_sender")
1417
+ def handler(event, context):
1418
+ logger.info("Starting SQS message send")
1419
+
1420
+ # Get trace context for propagation
1421
+ trace_attrs = otel.tracer.get_attributes_for_aws_from_context()
1422
+
1423
+ # Send message with trace context
1424
+ response = sqs.send_message(
1425
+ QueueUrl=QUEUE_URL,
1426
+ MessageBody=json.dumps({"data": "test message"}),
1427
+ MessageAttributes=trace_attrs
1428
+ )
1429
+
1430
+ logger.info(f"Sent SQS message: {response['MessageId']}")
1431
+
1432
+ return {
1433
+ 'statusCode': 200,
1434
+ 'body': json.dumps({'messageId': response['MessageId']})
1435
+ }
1436
+ ```
1437
+
1438
+ ### Lambda Receive SQS message
1439
+ ```python
1440
+ import json
1441
+ from rebrandly_otel import lambda_handler, logger, aws_message_span
1442
+
1443
+ @lambda_handler(name="sqs_receiver")
1444
+ def handler(event, context):
1445
+ for record in event['Records']:
1446
+ # Process each message with trace context
1447
+ process_message(record)
1448
+
1449
+ def process_message(record):
1450
+ with aws_message_span("process_message_sqs_receiver", message=record) as s:
1451
+ logger.info(f"Processing message: {record['messageId']}")
1452
+
1453
+ # Parse message body
1454
+ body = json.loads(record['body'])
1455
+ logger.info(f"Message data: {body}")
1456
+ ```
1457
+
1458
+ ### Lambda Receive SNS message (record specific event)
1459
+ ```python
1460
+ import json
1461
+ from rebrandly_otel import lambda_handler, logger, aws_message_span
1462
+
1463
+ @lambda_handler(name="sns_receiver")
1464
+ def handler(event, context):
1465
+ for record in event['Records']:
1466
+ # Process each message with trace context
1467
+ process_message(record)
1468
+
1469
+ def process_message(record):
1470
+ message = json.loads(record['Sns']['Message'])
1471
+ if message['event'] == 'whitelisted-event':
1472
+ with aws_message_span("process_message_sns_receiver", message=record) as s:
1473
+ logger.info(f"Processing message: {record['messageId']}")
1474
+
1475
+             # The payload was already parsed above
+             logger.info(f"Message data: {message}")
1478
+ ```
1479
+
1480
+ ### Flask
1482
+
1483
+ ```python
1484
+
1485
+ from flask import Flask, jsonify
1486
+ from rebrandly_otel import otel, logger, app_before_request, app_after_request, flask_error_handler
1487
+ from datetime import datetime
1488
+
1489
+ app = Flask(__name__)
1490
+
1491
+ # Register the centralized OTEL handlers
1492
+ app.before_request(app_before_request)
1493
+ app.after_request(app_after_request)
1494
+ app.register_error_handler(Exception, flask_error_handler)
1495
+
1496
+ @app.route('/health')
1497
+ def health():
1498
+ logger.info("Health check requested")
1499
+ return jsonify({"status": "healthy"}), 200
1500
+
1501
+ @app.route('/process', methods=['POST', 'GET'])
1502
+ def process():
1503
+ with otel.span("process_request"):
1504
+ logger.info("Processing POST request")
1505
+
1506
+ # Simulate processing
1507
+ result = {"processed": True, "timestamp": datetime.now().isoformat()}
1508
+
1509
+ logger.info(f"Returning result: {result}")
1510
+ return jsonify(result), 200
1511
+
1512
+ @app.route('/error')
1513
+ def error():
1514
+ logger.error("Error endpoint called")
1515
+ raise Exception("Simulated error")
1516
+
1517
+ if __name__ == '__main__':
1518
+ app.run(debug=True)
1519
+ ```
1520
+
1521
+ ### FastAPI
1523
+
1524
+ ```python
1525
+
1526
+ # main_fastapi.py
1527
+ from fastapi import FastAPI, HTTPException, Depends
1528
+ from contextlib import asynccontextmanager
1529
+ from rebrandly_otel import otel, logger, force_flush
1530
+ from rebrandly_otel.fastapi_support import setup_fastapi, get_current_span
1531
+ from datetime import datetime
1532
+ from typing import Optional
1533
+ import uvicorn
1534
+
1535
+ @asynccontextmanager
1536
+ async def lifespan(app: FastAPI):
1537
+ # Startup
1538
+ logger.info("FastAPI application starting up")
1539
+ yield
1540
+ # Shutdown
1541
+ logger.info("FastAPI application shutting down")
1542
+ force_flush()
1543
+
1544
+ app = FastAPI(title="FastAPI OTEL Example", lifespan=lifespan)
1545
+
1546
+ # Setup FastAPI with OTEL
1547
+ setup_fastapi(otel, app)
1548
+
1549
+ @app.get("/health")
1550
+ async def health():
1551
+ """Health check endpoint."""
1552
+ logger.info("Health check requested")
1553
+ return {"status": "healthy"}
1554
+
1555
+ @app.post("/process")
1556
+ @app.get("/process")
1557
+ async def process(span = Depends(get_current_span)):
1558
+ """Process endpoint with custom span."""
1559
+ with otel.span("process_request"):
1560
+ logger.info("Processing request")
1561
+
1562
+ # You can also use the injected span directly
1563
+ if span:
1564
+ span.add_event("custom_processing_event", {
1565
+ "timestamp": datetime.now().isoformat()
1566
+ })
1567
+
1568
+ # Simulate some processing
1569
+ result = {
1570
+ "processed": True,
1571
+ "timestamp": datetime.now().isoformat()
1572
+ }
1573
+
1574
+ logger.info(f"Returning result: {result}")
1575
+ return result
1576
+
1577
+ @app.get("/error")
1578
+ async def error():
1579
+ """Endpoint that raises an error."""
1580
+ logger.error("Error endpoint called")
1581
+ raise HTTPException(status_code=400, detail="Simulated error")
1582
+
1583
+ @app.get("/exception")
1584
+ async def exception():
1585
+ """Endpoint that raises an unhandled exception."""
1586
+ logger.error("Exception endpoint called")
1587
+ raise ValueError("Simulated unhandled exception")
1588
+
1589
+ @app.get("/items/{item_id}")
1590
+ async def get_item(item_id: int, q: Optional[str] = None):
1591
+ """Example endpoint with path and query parameters."""
1592
+ with otel.span("fetch_item", attributes={"item_id": item_id, "query": q}):
1593
+ logger.info(f"Fetching item {item_id} with query: {q}")
1594
+
1595
+ if item_id == 999:
1596
+ raise HTTPException(status_code=404, detail="Item not found")
1597
+
1598
+ return {
1599
+ "item_id": item_id,
1600
+ "name": f"Item {item_id}",
1601
+ "query": q
1602
+ }
1603
+
1604
+ if __name__ == "__main__":
1605
+ uvicorn.run(app, host="0.0.0.0", port=8000)
1606
+ ```
1607
+
1608
+ ### PyMySQL Database Instrumentation
1609
+
1610
+ The SDK provides connection-level instrumentation for PyMySQL that automatically traces all queries without requiring you to instrument each query individually.
1611
+
1612
+ ```python
1613
+ import pymysql
1614
+ from rebrandly_otel import otel, logger, instrument_pymysql
1615
+
1616
+ # SDK auto-initializes on import
1617
+
1618
+ # Create and instrument your connection
1619
+ connection = pymysql.connect(
1620
+ host='localhost',
1621
+ user='your_user',
1622
+ password='your_password',
1623
+ database='your_database'
1624
+ )
1625
+
1626
+ # Instrument the connection - all queries are now automatically traced
1627
+ connection = instrument_pymysql(otel, connection, options={
1628
+ 'slow_query_threshold_ms': 1000, # Queries over 1s flagged as slow
1629
+ 'capture_bindings': False # Set True to capture query parameters
1630
+ })
1631
+
1632
+ # Use normally - all queries automatically traced
1633
+ with connection.cursor() as cursor:
1634
+ cursor.execute("SELECT * FROM users WHERE id = %s", (123,))
1635
+ result = cursor.fetchone()
1636
+ logger.info(f"Found user: {result}")
1637
+
1638
+ connection.close()
1639
+ otel.force_flush()
1640
+ ```
1641
+
1642
+ Features:
1643
+ - Automatic span creation for all queries
1644
+ - Query operation detection (SELECT, INSERT, UPDATE, etc.)
1645
+ - Slow query detection and flagging
1646
+ - Duration tracking
1647
+ - Error recording with exception details
1648
+ - Optional query parameter capture (disabled by default for security)
1649
+
1650
+ Environment configuration:
1651
+ - `PYMYSQL_SLOW_QUERY_THRESHOLD_MS`: Threshold for slow query detection (default: 1500ms)
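+
+ For example:
+
+ ```bash
+ export PYMYSQL_SLOW_QUERY_THRESHOLD_MS=1000
+ ```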
1652
+
1653
+ ### More examples
1654
+ You can find more examples [here](examples).
1655

## Troubleshooting

### Common Issues

#### No Data Exported

**Symptoms**: Telemetry data not appearing in your observability backend.
1663

**Solutions**:
1. Verify `OTEL_EXPORTER_OTLP_ENDPOINT` is correctly set:
   ```bash
   echo $OTEL_EXPORTER_OTLP_ENDPOINT
   ```
2. Check network connectivity to the collector:
   ```bash
   curl -v $OTEL_EXPORTER_OTLP_ENDPOINT
   ```
3. Enable debug mode to see console output:
   ```bash
   export OTEL_DEBUG=true
   python app.py
   ```
4. Verify the collector is running and accepting connections (a quick probe is sketched below)
5. Check for firewall rules blocking outbound gRPC traffic (port 4317)
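
If the endpoint looks right but nothing arrives, a quick reachability probe against the collector's port can rule out networking. A minimal sketch; `localhost:4317` is the standard OTLP/gRPC default and may differ in your setup:
```bash
# Probe the collector's OTLP/gRPC port (4317 is the conventional default;
# adjust host and port to match your deployment)
nc -vz localhost 4317
```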
1680

#### Missing Traces in Lambda

**Symptoms**: Lambda function executes but no traces appear in the backend.
1684

**Solutions**:
1. Ensure `force_flush()` is called before the handler returns:
   ```python
   from rebrandly_otel import lambda_handler, force_flush

   @lambda_handler(name="my-function")
   def handler(event, context):
       # Your code
       force_flush(timeout_millis=5000)  # Explicitly flush
       return response
   ```
2. Verify the Lambda timeout allows enough time for the flush (add a 2-3 second buffer; see the CLI sketch below)
3. Check the Lambda execution role has network access to the OTLP endpoint
4. Verify environment variables are set in the Lambda configuration
5. Check CloudWatch Logs for error messages
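
To confirm the timeout leaves headroom for the flush (step 2), you can inspect and adjust it with the AWS CLI. A sketch, with `my-function` as a placeholder name:
```bash
# Inspect the configured timeout, in seconds (function name is a placeholder)
aws lambda get-function-configuration --function-name my-function --query 'Timeout'

# Raise it to leave a 2-3 second buffer for force_flush()
aws lambda update-function-configuration --function-name my-function --timeout 30
```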
1700

#### Trace Context Not Propagating

**Symptoms**: Distributed traces appear as disconnected spans instead of a unified trace.
1704

**Solutions**:
1. Verify message attributes are being sent:
   ```python
   # For SQS
   trace_attrs = otel.tracer.get_attributes_for_aws_from_context()
   response = sqs.send_message(
       QueueUrl=queue_url,
       MessageBody=json.dumps(data),
       MessageAttributes=trace_attrs  # Don't forget this!
   )
   ```
2. Check that the receiving end uses `aws_message_span()` or `@aws_message_handler`:
   ```python
   with aws_message_span("process-message", message=record):
       # Processing logic
       pass
   ```
3. For HTTP services, verify headers are being propagated:
   ```python
   import requests
   from opentelemetry.propagate import inject

   headers = {}
   inject(headers)  # Injects the traceparent header
   response = requests.get(url, headers=headers)
   ```
1731

#### High Memory Usage

**Symptoms**: Application memory usage grows over time.
1735

**Solutions**:
1. Reduce the batch export interval:
   ```bash
   export BATCH_EXPORT_TIME_MILLIS=50  # Flush more frequently (default: 100)
   ```
2. Implement sampling for high-traffic applications:
   ```python
   from opentelemetry.sdk.trace.sampling import ParentBasedTraceIdRatio

   # Sample 10% of traces
   sampler = ParentBasedTraceIdRatio(0.1)
   ```
3. Monitor metric cardinality - avoid high-cardinality attributes:
   ```python
   # Bad: user_id has millions of possible values
   counter.add(1, {"user_id": user_id})

   # Good: user_tier has limited values
   counter.add(1, {"user_tier": "premium"})
   ```
4. Check for span leaks - ensure all spans are properly closed (see the sketch below)
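
For step 4, the context-manager form guarantees spans end even when the body raises; if you start spans manually through the OpenTelemetry API, end them in a `finally` block. A minimal sketch (`do_work` is a placeholder):
```python
from opentelemetry import trace

from rebrandly_otel import otel

def do_work():
    pass  # placeholder for real work

# Preferred: the context manager ends the span even if do_work() raises
with otel.span("do-work"):
    do_work()

# Manually started spans must be ended explicitly, or they leak
tracer = trace.get_tracer(__name__)
span = tracer.start_span("do-work")
try:
    do_work()
finally:
    span.end()
```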
1757

#### Import Errors

**Symptoms**: `ModuleNotFoundError` or `ImportError` when importing rebrandly_otel.
1761

**Solutions**:
1. Verify installation:
   ```bash
   pip show rebrandly-otel
   ```
2. Reinstall if necessary:
   ```bash
   pip uninstall rebrandly-otel
   pip install rebrandly-otel
   ```
3. Check Python version compatibility (requires Python 3.7+):
   ```bash
   python --version
   ```
4. For Lambda, ensure the package is included in the deployment package or a layer (see the packaging sketch below)
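
For step 4, one common approach is to vendor the package into the deployment zip; a sketch with placeholder paths and function name:
```bash
# Install dependencies next to your handler code
pip install rebrandly-otel -t build/
cp app.py build/

# Zip and upload (function name is a placeholder)
cd build && zip -r ../function.zip . && cd -
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip
```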
1777

#### Database Instrumentation Not Working

**Symptoms**: Database queries not appearing as spans.
1781

**Solutions**:
1. Verify the connection is instrumented:
   ```python
   from rebrandly_otel import instrument_pymysql

   connection = pymysql.connect(...)
   connection = instrument_pymysql(otel, connection)  # Don't forget this!
   ```
2. Check that you're using the instrumented connection object (see the sketch below)
3. Verify the slow query threshold settings:
   ```bash
   export PYMYSQL_SLOW_QUERY_THRESHOLD_MS=1000
   ```
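
The usual mistake behind step 2 is to keep querying the original connection instead of the value returned by `instrument_pymysql`; a minimal sketch (connection parameters are placeholders):
```python
import pymysql
from rebrandly_otel import otel, instrument_pymysql

raw = pymysql.connect(host='localhost', user='u', password='p', database='d')
conn = instrument_pymysql(otel, raw)  # use this object from here on

with conn.cursor() as cursor:  # queries on `conn` are traced
    cursor.execute("SELECT 1")
```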
1795

#### Flask/FastAPI Routes Not Traced

**Symptoms**: HTTP requests not creating spans.
1799

**Solutions**:
1. Verify middleware is registered **before** route definitions:
   ```python
   # Flask
   app.before_request(app_before_request)
   app.after_request(app_after_request)
   app.register_error_handler(Exception, flask_error_handler)

   # Then define routes
   @app.route('/api/users')
   def get_users():
       ...
   ```
2. For FastAPI, ensure `setup_fastapi()` is called:
   ```python
   from rebrandly_otel.fastapi_support import setup_fastapi
   setup_fastapi(otel, app)
   ```
3. Check for middleware conflicts with other libraries
1819

### Debugging Tips

#### Enable Debug Mode

See all telemetry data in the console:
```bash
export OTEL_DEBUG=true
python app.py
```
1829

#### Check Span Attributes

Print span identifiers during development:
```python
with otel.span("test-span") as span:
    ctx = span.get_span_context()
    # IDs are ints; format as zero-padded hex, as backends display them
    print(f"Trace ID: {format(ctx.trace_id, '032x')}")
    print(f"Span ID: {format(ctx.span_id, '016x')}")
    # Your code
```
1839

#### Verify Environment Variables

Check all OTEL configuration:
```python
import os
print("OTEL_SERVICE_NAME:", os.getenv("OTEL_SERVICE_NAME"))
print("OTEL_EXPORTER_OTLP_ENDPOINT:", os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT"))
print("OTEL_DEBUG:", os.getenv("OTEL_DEBUG"))
```
1849

#### Test Trace Context Propagation

Manually test context extraction:
```python
from rebrandly_otel import otel

# Simulate an SQS message (the traceparent value is a placeholder)
test_message = {
    'messageAttributes': {
        'traceparent': {'stringValue': '00-trace-id-span-id-01'}
    }
}

context = otel.tracer.extract_context_from_aws_message(test_message)
print(f"Extracted context: {context}")
```
1866

## Testing

### Running Tests

The test suite uses [pytest](https://docs.pytest.org/).

Run all tests:
```bash
pytest
```
1877

Run a specific test file:
```bash
pytest tests/test_flask_support.py -v
pytest tests/test_fastapi_support.py -v
pytest tests/test_usage.py -v
pytest tests/test_pymysql_instrumentation.py -v
pytest tests/test_metrics_and_logs.py -v
pytest tests/test_decorators.py -v
pytest tests/test_span_attributes_processor.py -v
```

Run with coverage:
```bash
pytest --cov=src --cov-report=html
```
1893

### Test Coverage

The test suite includes:
- **Integration tests** (`test_usage.py`): Core OTEL functionality, Lambda handlers, message processing
- **Flask integration tests** (`test_flask_support.py`): Flask setup and hooks
- **FastAPI integration tests** (`test_fastapi_support.py`): FastAPI setup and middleware
- **PyMySQL instrumentation tests** (`test_pymysql_instrumentation.py`): Database connection instrumentation, query tracing, helper functions
- **Metrics and logs tests** (`test_metrics_and_logs.py`): Custom metrics creation (counter, histogram, gauge) and logging levels (info, warning, debug, error)
- **Decorators tests** (`test_decorators.py`): Lambda handler decorator, AWS message handler decorator, traces decorator, `aws_message_span` context manager
- **Span attributes processor tests** (`test_span_attributes_processor.py`): Automatic span attributes from `OTEL_SPAN_ATTRIBUTES` (31 tests)
1904

## License

The Rebrandly Python SDK is released under the MIT License.
1908

## Build and Deploy

```bash
brew install pipx
pipx ensurepath
pipx install build
pipx install twine
```

Build and upload:
```bash
build
twine upload dist/*
```

If `build` gives you an error, try:
```bash
pyproject-build
twine upload dist/*
```