tracehub-logger 1.0.0__tar.gz

Files changed (31)
  1. tracehub_logger-1.0.0/PKG-INFO +441 -0
  2. tracehub_logger-1.0.0/README.md +405 -0
  3. tracehub_logger-1.0.0/pyproject.toml +54 -0
  4. tracehub_logger-1.0.0/setup.cfg +4 -0
  5. tracehub_logger-1.0.0/tests/test_batcher.py +71 -0
  6. tracehub_logger-1.0.0/tests/test_buffer.py +64 -0
  7. tracehub_logger-1.0.0/tests/test_client.py +126 -0
  8. tracehub_logger-1.0.0/tests/test_enricher.py +69 -0
  9. tracehub_logger-1.0.0/tests/test_exceptions.py +18 -0
  10. tracehub_logger-1.0.0/tests/test_integration_rabbitmq.py +98 -0
  11. tracehub_logger-1.0.0/tests/test_models.py +76 -0
  12. tracehub_logger-1.0.0/tests/test_serializer.py +49 -0
  13. tracehub_logger-1.0.0/tests/test_transport.py +132 -0
  14. tracehub_logger-1.0.0/tracehub/__init__.py +21 -0
  15. tracehub_logger-1.0.0/tracehub/batcher.py +68 -0
  16. tracehub_logger-1.0.0/tracehub/buffer.py +44 -0
  17. tracehub_logger-1.0.0/tracehub/client.py +144 -0
  18. tracehub_logger-1.0.0/tracehub/enricher.py +73 -0
  19. tracehub_logger-1.0.0/tracehub/exceptions.py +13 -0
  20. tracehub_logger-1.0.0/tracehub/integrations/__init__.py +1 -0
  21. tracehub_logger-1.0.0/tracehub/integrations/django.py +37 -0
  22. tracehub_logger-1.0.0/tracehub/integrations/fastapi.py +36 -0
  23. tracehub_logger-1.0.0/tracehub/integrations/flask.py +34 -0
  24. tracehub_logger-1.0.0/tracehub/models.py +64 -0
  25. tracehub_logger-1.0.0/tracehub/serializer.py +17 -0
  26. tracehub_logger-1.0.0/tracehub/transport.py +135 -0
  27. tracehub_logger-1.0.0/tracehub_logger.egg-info/PKG-INFO +441 -0
  28. tracehub_logger-1.0.0/tracehub_logger.egg-info/SOURCES.txt +29 -0
  29. tracehub_logger-1.0.0/tracehub_logger.egg-info/dependency_links.txt +1 -0
  30. tracehub_logger-1.0.0/tracehub_logger.egg-info/requires.txt +17 -0
  31. tracehub_logger-1.0.0/tracehub_logger.egg-info/top_level.txt +1 -0
@@ -0,0 +1,441 @@
Metadata-Version: 2.4
Name: tracehub-logger
Version: 1.0.0
Summary: Lightweight, non-blocking Python SDK for TraceHub log ingestion and error tracking
License-Expression: MIT
Project-URL: Homepage, http://103.127.146.14
Project-URL: Documentation, http://103.127.146.14/docs
Project-URL: Repository, https://github.com/tracehub/sdk
Project-URL: Bug Tracker, https://github.com/tracehub/sdk/issues
Keywords: logging,monitoring,error-tracking,tracehub,observability,tracing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: System :: Logging
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: httpx>=0.25
Provides-Extra: django
Provides-Extra: fastapi
Requires-Dist: starlette; extra == "fastapi"
Provides-Extra: flask
Requires-Dist: flask; extra == "flask"
Provides-Extra: rabbitmq
Requires-Dist: pika>=1.3; extra == "rabbitmq"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-httpx>=0.30; extra == "dev"
Requires-Dist: pika>=1.3; extra == "dev"
# TraceHub

Lightweight, non-blocking Python SDK for **TraceHub**, a log ingestion and error-tracking platform.

The SDK collects logs from your application, batches them in a background thread, and ships them to the TraceHub API via gzip-compressed HTTP. On the backend, logs flow through a RabbitMQ queue for reliable, asynchronous processing.

```
Your App ──▶ SDK (batch + gzip) ──▶ TraceHub API ──▶ RabbitMQ ──▶ Worker ──▶ PostgreSQL
```

---
## Table of Contents

- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Logging Methods](#logging-methods)
- [Error Tracking](#error-tracking)
- [Extra Metadata](#extra-metadata)
- [Framework Integrations](#framework-integrations)
  - [FastAPI](#fastapi)
  - [Flask](#flask)
  - [Django](#django)
- [Architecture](#architecture)
- [API Reference](#api-reference)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)

---
## Installation

```bash
pip install tracehub-logger
```

With framework integrations:

```bash
pip install "tracehub-logger[fastapi]"   # FastAPI / Starlette
pip install "tracehub-logger[flask]"     # Flask
pip install "tracehub-logger[django]"    # Django
```

---
## Quick Start

```python
from tracehub import TraceHubLogger

# Initialize (endpoint defaults to http://103.127.146.14)
logger = TraceHubLogger(
    api_key="th_your_api_key",   # from TraceHub dashboard
    service="my-app",            # your service name
    environment="production",    # production / staging / dev
)

# Log messages at any severity level
logger.info("Application started", module="main")
logger.warn("Disk usage above 80%", module="monitoring")

# Capture errors with full stack traces
try:
    process_payment(order_id=123)
except Exception:
    logger.error("Payment failed", exc_info=True, module="billing",
                 extra={"order_id": 123})

# Attach arbitrary metadata
logger.info("User logged in", module="auth",
            extra={"user_id": "usr_42", "ip": "10.0.0.1"})

# Ensure all logs are sent before shutdown
logger.close()
```

---
## Configuration

| Parameter        | Type    | Default                 | Description                                              |
|------------------|---------|-------------------------|----------------------------------------------------------|
| `api_key`        | `str`   | **required**            | Project API key (starts with `th_`)                      |
| `service`        | `str`   | **required**            | Name of your service / application                       |
| `environment`    | `str`   | **required**            | Deployment environment (`production`, `staging`, `dev`)  |
| `endpoint`       | `str`   | `http://103.127.146.14` | TraceHub API base URL                                    |
| `batch_size`     | `int`   | `50`                    | Flush when the buffer reaches this many entries          |
| `flush_interval` | `float` | `5.0`                   | Maximum seconds between flushes                          |
| `max_buffer`     | `int`   | `10000`                 | Ring buffer capacity (oldest entries dropped when full)  |
| `max_retries`    | `int`   | `3`                     | Retry count on 5xx / network errors                      |
| `timeout`        | `float` | `10.0`                  | HTTP request timeout in seconds                          |
| `compress`       | `bool`  | `True`                  | Gzip-compress payloads before sending                    |
| `dlq_path`       | `str`   | `~/.tracehub/dlq`       | Dead-letter queue directory for failed batches           |

### Example: full configuration

```python
logger = TraceHubLogger(
    api_key="th_abc123",
    service="order-service",
    environment="production",
    endpoint="http://103.127.146.14",
    batch_size=100,
    flush_interval=3.0,
    max_buffer=50_000,
    max_retries=5,
    timeout=15.0,
    compress=True,
    dlq_path="/var/log/tracehub/dlq",
)
```

---
## Logging Methods

Five severity levels matching the backend's `log_level` enum:

```python
logger.debug("Verbose diagnostic info", module="db")
logger.info("Normal operational message", module="auth")
logger.warn("Something looks unusual", module="cache")
logger.error("Operation failed", module="api", exc_info=True)
logger.fatal("Critical system failure", module="core", exc_info=True)
```

All methods accept these keyword arguments:

| Argument   | Type             | Description                                         |
|------------|------------------|-----------------------------------------------------|
| `module`   | `str`            | Logical module name (e.g. `"auth"`, `"payments"`)   |
| `extra`    | `dict[str, Any]` | Arbitrary key-value metadata                        |
| `exc_info` | `bool`           | Capture current exception stack trace (error/fatal) |

---
## Error Tracking

When you log at `ERROR` or `FATAL` with `exc_info=True`, the SDK captures the full stack trace. The backend's RabbitMQ worker then:

1. Normalizes the error message (strips UUIDs, timestamps, hex addresses)
2. Extracts the top 5 stack frames
3. Generates a SHA-256 fingerprint
4. Creates or updates an **Issue** in the dashboard

This means repeated occurrences of the same error are grouped into a single issue with an incrementing event count.

```python
try:
    db.execute("SELECT * FROM users WHERE id = ?", user_id)
except DatabaseError:
    logger.error("Query failed", exc_info=True, module="db",
                 extra={"query": "get_user", "user_id": user_id})
```

---
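The fingerprinting steps above can be sketched roughly with the stdlib. This is an illustration of the technique, not the actual worker code; the regexes and `<uuid>`/`<ts>`/`<hex>` placeholders are assumptions.

```python
import hashlib
import re

# Illustrative normalization: strip volatile tokens so repeated
# occurrences of the same error produce identical text.
UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I
)
HEX_RE = re.compile(r"0x[0-9a-f]+", re.I)
TS_RE = re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*")

def normalize(message: str) -> str:
    message = UUID_RE.sub("<uuid>", message)
    message = TS_RE.sub("<ts>", message)
    message = HEX_RE.sub("<hex>", message)
    return message

def fingerprint(message: str, frames: list[str], top: int = 5) -> str:
    """SHA-256 over the normalized message plus the top N stack frames."""
    basis = "\n".join([normalize(message), *frames[:top]])
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()
```

Because the volatile parts are replaced before hashing, two errors that differ only in an address or timestamp collapse to the same fingerprint and therefore the same issue.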
## Extra Metadata

The `extra` parameter accepts any JSON-serializable dictionary. This data is stored in a JSONB column and is fully searchable in the dashboard.

```python
logger.info("Order placed", module="orders", extra={
    "order_id": "ord_789",
    "total": 49.99,
    "items": 3,
    "customer_tier": "premium",
})
```

---
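Since `extra` must survive JSON encoding, a quick stdlib check can catch unserializable values (for example `datetime` or `Decimal` objects) before they reach the logger. This is a generic pattern, not part of the TraceHub API:

```python
import json

def is_json_safe(extra: dict) -> bool:
    """Return True if `extra` can be serialized to JSON as-is."""
    try:
        json.dumps(extra)
        return True
    except (TypeError, ValueError):
        return False
```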
## Framework Integrations

### FastAPI

```python
from fastapi import FastAPI
from tracehub import TraceHubLogger
from tracehub.integrations.fastapi import TraceHubMiddleware

app = FastAPI()
app.add_middleware(TraceHubMiddleware)

logger = TraceHubLogger(
    api_key="th_your_key",
    service="my-fastapi-app",
    environment="production",
)

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    logger.info("Fetching user", module="api", extra={"user_id": user_id})
    return {"id": user_id}
```

The middleware automatically:

- Reads `X-Trace-ID` from the request header (or generates a UUID)
- Attaches the trace ID to all logs emitted during that request
- Returns `X-Trace-ID` in the response header
241
+
242
+ ```python
243
+ from flask import Flask
244
+ from tracehub import TraceHubLogger
245
+ from tracehub.integrations.flask import init_tracehub
246
+
247
+ app = Flask(__name__)
248
+ init_tracehub(app)
249
+
250
+ logger = TraceHubLogger(
251
+ api_key="th_your_key",
252
+ service="my-flask-app",
253
+ environment="production",
254
+ )
255
+
256
+ @app.route("/health")
257
+ def health():
258
+ logger.info("Health check", module="api")
259
+ return {"status": "ok"}
260
+ ```
261
+
262
+ ### Django
263
+
264
+ Add the middleware to your `settings.py`:
265
+
266
+ ```python
267
+ MIDDLEWARE = [
268
+ "tracehub.integrations.django.TraceHubMiddleware",
269
+ # ... other middleware
270
+ ]
271
+ ```
272
+
273
+ Then use the logger anywhere:
274
+
275
+ ```python
276
+ from tracehub import TraceHubLogger
277
+
278
+ logger = TraceHubLogger(
279
+ api_key="th_your_key",
280
+ service="my-django-app",
281
+ environment="production",
282
+ )
283
+
284
+ def my_view(request):
285
+ logger.info("Processing request", module="views")
286
+ # trace_id is automatically attached
287
+ ```
288
+
289
+ ---
290
+
## Architecture

```
┌─────────────────────────────────────────────────────────┐
│                   Your Application                      │
│                                                         │
│  logger.info("msg")                                     │
│        │                                                │
│        ▼                                                │
│  ┌──────────┐    ┌────────────┐    ┌────────────────┐   │
│  │ Enricher │───▶│ RingBuffer │───▶│  BatchWorker   │   │
│  │  (1ms)   │    │ (10k cap)  │    │ (daemon thread)│   │
│  └──────────┘    └────────────┘    └───────┬────────┘   │
│                                            │            │
│  Enricher adds:         Flushes on:        │            │
│  - timestamp (UTC ISO)  - batch_size reached            │
│  - hostname             - flush_interval                │
│  - PID / thread_id      - shutdown                      │
│  - sdk_version                                          │
│  - trace_id                                             │
└─────────────────────────────────────────────────────────┘

                  │  POST /api/v1/ingest
                  │  X-API-Key: th_xxx
                  │  Content-Encoding: gzip

┌─────────────────────────────────────────────────────────┐
│                    TraceHub Backend                     │
│                                                         │
│  ┌───────────┐    ┌──────────┐    ┌─────────────────┐   │
│  │  FastAPI  │───▶│ RabbitMQ │───▶│ Worker Process  │   │
│  │ Ingestion │    │  Queue   │    │ (log_ingestion) │   │
│  └───────────┘    └──────────┘    └────────┬────────┘   │
│                                            │            │
│                                   ┌────────▼────────┐   │
│                                   │   PostgreSQL    │   │
│                                   │  (partitioned)  │   │
│                                   └─────────────────┘   │
└─────────────────────────────────────────────────────────┘
```
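The ingest POST in the diagram can be approximated with the stdlib alone. The `/api/v1/ingest` path and the `X-API-Key` / `Content-Encoding: gzip` headers come from the diagram; the exact body schema (`{"logs": [...]}`) is an assumption for illustration.

```python
import gzip
import json

def build_ingest_request(batch: list[dict], api_key: str) -> tuple[dict, bytes]:
    """Build (headers, body) for a gzip-compressed ingest POST.

    The body wraps the batch in a hypothetical {"logs": ...} envelope.
    """
    body = gzip.compress(json.dumps({"logs": batch}).encode("utf-8"))
    headers = {
        "X-API-Key": api_key,
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",
    }
    return headers, body
```

Any HTTP client (the SDK depends on `httpx`) can then POST `body` with these headers to `endpoint + "/api/v1/ingest"`.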
### Components

| Component           | Description                                                        |
|---------------------|--------------------------------------------------------------------|
| **Enricher**        | Adds host, PID, timestamp, trace_id (~1 ms, runs on caller thread) |
| **RingBuffer**      | Thread-safe circular buffer; drops oldest entries when full.       |
| **BatchWorker**     | Daemon thread that flushes the buffer on size/time thresholds.     |
| **HttpTransport**   | Sends gzip-compressed batches; retries with exponential backoff.   |
| **DeadLetterQueue** | Persists failed batches to disk for later replay.                  |
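The RingBuffer behavior described above (fixed capacity, drop-oldest when full) maps naturally onto `collections.deque` with a `maxlen`. A minimal thread-safe sketch, not the shipped implementation:

```python
import threading
from collections import deque

class RingBuffer:
    """Fixed-capacity buffer that silently drops the oldest entry when full."""

    def __init__(self, capacity: int = 10_000):
        self._items = deque(maxlen=capacity)  # deque handles drop-oldest
        self._lock = threading.Lock()

    def push(self, entry: dict) -> None:
        with self._lock:
            self._items.append(entry)

    def drain(self, n: int) -> list[dict]:
        """Remove and return up to n oldest entries for the batch worker."""
        with self._lock:
            count = min(n, len(self._items))
            return [self._items.popleft() for _ in range(count)]
```

Because appends and drains take a lock only briefly and never block on I/O, logging calls stay fast while memory stays bounded at `capacity` entries.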
### Reliability Features

- **Non-blocking**: Logging calls return immediately (~1 ms)
- **Background batching**: Reduces HTTP overhead by grouping logs
- **Gzip compression**: Minimizes bandwidth usage
- **Exponential backoff**: Retries on 5xx / timeout / network errors
- **Dead-letter queue**: Failed batches saved to disk, replayed on next startup
- **Ring buffer**: Fixed memory footprint, no OOM risk
- **Graceful shutdown**: `atexit` hook flushes remaining logs

---
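Exponential backoff of the kind listed above is typically `base * 2**attempt`, capped and jittered. The `base` and `cap` values here are assumed defaults for illustration, not the SDK's actual constants:

```python
import random

def backoff_delays(max_retries: int = 3, base: float = 0.5,
                   cap: float = 30.0) -> list[float]:
    """Delay before each retry: base * 2**attempt, capped, with light jitter."""
    delays = []
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        # Up to 10% jitter spreads out retries from many clients.
        delays.append(delay + random.uniform(0, delay * 0.1))
    return delays
```

With the defaults this yields roughly 0.5 s, 1 s, 2 s between attempts before the batch is handed to the dead-letter queue.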
## API Reference

### `TraceHubLogger`

```python
class TraceHubLogger:
    def __init__(self, api_key, service, environment, endpoint="", *, ...)
    def debug(self, message, *, module="", extra=None) -> None
    def info(self, message, *, module="", extra=None) -> None
    def warn(self, message, *, module="", extra=None) -> None
    def error(self, message, *, exc_info=False, module="", extra=None) -> None
    def fatal(self, message, *, exc_info=False, module="", extra=None) -> None
    def flush(self) -> None
    def close(self) -> None
```

### `tracehub.enricher`

```python
def set_trace_id(trace_id: str) -> None    # Set trace ID for current thread
def get_trace_id() -> str                  # Get current thread's trace ID
def clear_trace_id() -> None               # Clear current thread's trace ID
```
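These helpers are documented as per-thread, which is the semantics `threading.local` provides. A stand-in sketch mirroring the documented signatures (not the package source; the empty-string fallback when no ID is set is an assumption):

```python
import threading

_state = threading.local()

def set_trace_id(trace_id: str) -> None:
    _state.trace_id = trace_id

def get_trace_id() -> str:
    # Assumed fallback: empty string when no trace ID is set on this thread.
    return getattr(_state, "trace_id", "")

def clear_trace_id() -> None:
    if hasattr(_state, "trace_id"):
        del _state.trace_id
```

Because the state is thread-local, a trace ID set while handling one request never leaks into logs emitted by other threads.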
### Exceptions

```python
TraceHubError           # Base exception
TraceHubConfigError     # Invalid configuration (missing api_key, etc.)
TraceHubTransportError  # HTTP transport failure
```

---
## Testing

### Run unit tests

```bash
cd SDK
pip install -e ".[dev]"
pytest tests/ -v
```

### Run integration tests (requires live backend + RabbitMQ)

```bash
TRACEHUB_TEST_API_KEY=th_your_key pytest tests/test_integration_rabbitmq.py -v -s
```

### What the integration tests verify

| Test                            | Validates                                            |
|---------------------------------|------------------------------------------------------|
| `test_single_log_ingestion`     | SDK -> API accepts a single log                      |
| `test_batch_ingestion`          | All five severity levels are accepted                |
| `test_error_with_stack_trace`   | Stack traces flow through RabbitMQ to issue creation |
| `test_high_volume_batch`        | 50 logs batched and flushed correctly                |
| `test_extra_metadata`           | Arbitrary JSON metadata is transmitted               |

---
## Troubleshooting

### Logs not appearing in dashboard

1. **Check API key**: Ensure it starts with `th_` and is active in the project settings
2. **Check endpoint**: Default is `http://103.127.146.14`; verify it is reachable
3. **Check DLQ**: Look in `~/.tracehub/dlq/` for failed batches
4. **Check RabbitMQ**: Verify the worker is running on the backend

### High memory usage

Reduce `max_buffer` (default 10,000 entries). The ring buffer drops the oldest entries when full.

### Slow application startup

The SDK replays any dead-letter queue files on startup. If `~/.tracehub/dlq/` has many files, clear them or increase the timeout.
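The startup replay can be pictured as a loop over persisted batch files, deleting each one only after a successful send. A hypothetical sketch (the `*.json` file layout and the `send` callable are assumptions, not the SDK's actual on-disk format):

```python
import json
from pathlib import Path

def replay_dlq(dlq_dir: str, send) -> int:
    """Replay persisted batches; delete each file once `send` succeeds.

    `send` is any callable taking a list of log dicts (illustrative only).
    Returns the number of batches successfully replayed.
    """
    replayed = 0
    for path in sorted(Path(dlq_dir).glob("*.json")):
        batch = json.loads(path.read_text())
        try:
            send(batch)
        except Exception:
            continue  # keep the file for the next startup
        path.unlink()  # only remove after a confirmed send
        replayed += 1
    return replayed
```

This is why a large backlog in the DLQ directory slows startup: every file is read and resent before normal operation resumes.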
### `TraceHubConfigError: api_key is required`

Pass a non-empty `api_key` parameter. Generate one from the TraceHub dashboard under Project Settings > API Keys.

---
## License

MIT