structlog-config 0.8.0__tar.gz → 0.9.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.3
2
2
  Name: structlog-config
3
- Version: 0.8.0
3
+ Version: 0.9.0
4
4
  Summary: A comprehensive structlog configuration with sensible defaults for development and production environments, featuring context management, exception formatting, and path prettification.
5
5
  Keywords: logging,structlog,json-logging,structured-logging
6
6
  Author: Michael Bianco
@@ -26,19 +26,16 @@ Here are the main goals:
26
26
  * High performance JSON logging in production
27
27
  * All loggers, even plugin or system loggers, should route through the same formatter
28
28
  * Structured logging everywhere
29
+ * Pytest plugin to easily capture logs and dump them to a directory on failure. This is really important for LLMs so they can
30
+ easily consume logs and context for each test and handle them sequentially.
29
31
  * Ability to easily set thread-local log context
30
32
  * Nice log formatters for stack traces, ORM ([ActiveModel/SQLModel](https://github.com/iloveitaly/activemodel)), etc
31
33
  * Ability to log level and output (i.e. file path) *by logger* for easy development debugging
32
34
  * If you are using fastapi, structured logging for access logs
35
+ * [Improved exception logging with beautiful-traceback](https://github.com/iloveitaly/beautiful-traceback)
33
36
 
34
37
  ## Installation
35
38
 
36
- ```bash
37
- pip install structlog-config
38
- ```
39
-
40
- Or with [uv](https://docs.astral.sh/uv/):
41
-
42
39
  ```bash
43
40
  uv add structlog-config
44
41
  ```
@@ -51,9 +48,12 @@ from structlog_config import configure_logger
51
48
  log = configure_logger()
52
49
 
53
50
  log.info("the log", key="value")
51
+
52
+ # a named logger, just like stdlib, but with different syntax
53
+ custom_named_logger = structlog.get_logger(logger_name="test")
54
54
  ```
55
55
 
56
- ## JSON Logging for Production
56
+ ## JSON Logging in Production
57
57
 
58
58
  JSON logging is automatically enabled in production and staging environments (`PYTHON_ENV=production` or `PYTHON_ENV=staging`):
59
59
 
@@ -72,11 +72,13 @@ log = configure_logger(json_logger=True)
72
72
  log = configure_logger(json_logger=False)
73
73
  ```
74
74
 
75
- JSON logs use [orjson](https://github.com/ijl/orjson) for performance, include sorted keys and ISO timestamps, and serialize exceptions cleanly. Note that `PYTHON_LOG_PATH` is ignored with JSON logging (stdout only).
75
+ JSON logs use [orjson](https://github.com/ijl/orjson) for performance, include sorted keys and ISO timestamps, and serialize exceptions cleanly.
76
+
77
+ Note that `PYTHON_LOG_PATH` is ignored with JSON logging (stdout only).
76
78
 
77
79
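The orjson-backed renderer itself isn't shown in this hunk. As a rough sketch of what "sorted keys and ISO timestamps" means in practice, here is a minimal structlog-style renderer using stdlib `json` as a stand-in for orjson (the `json_renderer` name and details are illustrative, not the package's actual processor chain):

```python
import json
from datetime import datetime, timezone

def json_renderer(logger, method_name, event_dict):
    """A structlog-style renderer: a callable over (logger, method_name,
    event_dict) that returns the final log line. Stdlib json stands in
    for orjson here; the real implementation differs."""
    # ISO 8601 timestamp and log level, then a deterministic sorted-key dump
    event_dict.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
    event_dict.setdefault("level", method_name)
    return json.dumps(event_dict, sort_keys=True, default=str)

line = json_renderer(None, "info", {"event": "the log", "key": "value"})
print(line)
```

Sorted keys make log lines diff-friendly and cheap to compare across runs.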
  ## TRACE Logging Level
78
80
 
79
- This package adds support for a custom `TRACE` logging level (level 5) that's even more verbose than `DEBUG`. This is useful for extremely detailed debugging scenarios.
81
+ This package adds support for a custom `TRACE` logging level (level 5) that's even more verbose than `DEBUG`.
80
82
 
81
83
  The `TRACE` level is automatically set up when you call `configure_logger()`. You can use it like any other logging level:
82
84
 
@@ -88,7 +90,7 @@ log = configure_logger()
88
90
 
89
91
  # Using structlog
90
92
  log.info("This is info")
91
- log.debug("This is debug")
93
+ log.debug("This is debug")
92
94
  log.trace("This is trace") # Most verbose
93
95
 
94
96
  # Using stdlib logging
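A custom level like this can be reproduced with stdlib `logging` alone. A minimal sketch of registering `TRACE` at level 5, assuming `configure_logger()` does something similar internally (this is not the package's actual code):

```python
import logging

TRACE = 5  # numerically below DEBUG (10), so it is the most verbose level

def setup_trace():
    """Register the TRACE level name and a Logger.trace convenience method.
    A sketch of the assumed internals only."""
    logging.addLevelName(TRACE, "TRACE")

    def trace(self, message, *args, **kwargs):
        # mirror how Logger.debug guards on the effective level
        if self.isEnabledFor(TRACE):
            self._log(TRACE, message, args, **kwargs)

    logging.Logger.trace = trace

setup_trace()
logger = logging.getLogger("demo")
logger.setLevel(TRACE)
logger.trace("very detailed message")
```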
@@ -136,8 +138,6 @@ log.info("Processing file", file_path=Path.cwd() / "data" / "users.csv")
136
138
 
137
139
  ### Whenever Datetime Formatter
138
140
 
139
- **Note:** Requires `pip install whenever` to be installed.
140
-
141
141
  Formats [whenever](https://github.com/ariebovenberg/whenever) datetime objects without their class wrappers for cleaner output:
142
142
 
143
143
  ```python
@@ -152,8 +152,6 @@ Supports all whenever datetime types: `ZonedDateTime`, `Instant`, `LocalDateTime
152
152
 
153
153
  ### ActiveModel Object Formatter
154
154
 
155
- **Note:** Requires `pip install activemodel` and `pip install typeid-python` to be installed.
156
-
157
155
  Automatically converts [ActiveModel](https://github.com/iloveitaly/activemodel) BaseModel instances to their ID representation and TypeID objects to strings:
158
156
 
159
157
  ```python
@@ -166,8 +164,6 @@ log.info("User action", user=user)
166
164
 
167
165
  ### FastAPI Context
168
166
 
169
- **Note:** Requires `pip install starlette-context` to be installed.
170
-
171
167
  Automatically includes all context data from [starlette-context](https://github.com/tomwojcik/starlette-context) in your logs, useful for request tracing:
172
168
 
173
169
  ```python
@@ -193,63 +189,102 @@ Here's how to use it:
193
189
  1. [Disable fastapi's default logging.](https://github.com/iloveitaly/python-starter-template/blob/f54cb47d8d104987f2e4a668f9045a62e0d6818a/main.py#L55-L56)
194
190
  2. [Add the middleware to your FastAPI app.](https://github.com/iloveitaly/python-starter-template/blob/f54cb47d8d104987f2e4a668f9045a62e0d6818a/app/routes/middleware/__init__.py#L63-L65)
195
191
 
196
- ## Pytest Plugin: Capture Logs on Failure
192
+ ## Pytest Plugin: Capture Output on Failure
197
193
 
198
- A pytest plugin that captures logs per-test and displays them only when tests fail. This keeps your test output clean while ensuring you have all the debugging information you need when something goes wrong.
194
+ A pytest plugin that captures stdout, stderr, and exceptions from failing tests and writes them to organized output files. This is useful for debugging test failures, especially in CI/CD environments where you need to inspect output after the fact.
199
195
 
200
196
  ### Features
201
197
 
202
- - Only shows logs for failing tests (keeps output clean)
203
- - Captures logs from all test phases (setup, call, teardown)
204
- - Unique log file per test
205
- - Optional persistent log storage for debugging
206
- - Automatically handles `PYTHON_LOG_PATH` environment variable
198
+ - Captures stdout, stderr, and exception tracebacks for failing tests
199
+ - Only creates output for failing tests (keeps directories clean)
200
+ - Separate files for each output type (stdout.txt, stderr.txt, exception.txt)
201
+ - Captures all test phases (setup, call, teardown)
202
+ - Optional fd-level capture for subprocess output
207
203
 
208
204
  ### Usage
209
205
 
210
- Enable the plugin with the `--capture-logs-on-fail` flag:
206
+ Enable the plugin with the `--structlog-output` flag and `-s` (to disable pytest's built-in capture):
211
207
 
212
208
  ```bash
213
- pytest --capture-logs-on-fail
209
+ pytest --structlog-output=./test-output -s
214
210
  ```
215
211
 
216
- Or enable it permanently in `pytest.ini` or `pyproject.toml`:
212
+ The `--structlog-output` flag both enables the plugin and specifies where output files should be written.
213
+
214
+ **Recommended:** Also disable pytest's logging plugin with `-p no:logging` to avoid duplicate/interfering capture:
217
215
 
218
- ```toml
219
- [tool.pytest.ini_options]
220
- addopts = ["--capture-logs-on-fail"]
216
+ ```bash
217
+ pytest --structlog-output=./test-output -s -p no:logging
221
218
  ```
222
219
 
223
- ### Persist Logs to Directory
220
+ While the plugin works without this flag, disabling pytest's logging capture ensures cleaner output and avoids any potential conflicts between the two capture mechanisms.
224
221
 
225
- To keep all test logs for later inspection (useful for CI/CD debugging):
222
+ ### Output Structure
226
223
 
227
- ```bash
228
- pytest --capture-logs-dir=./test-logs
224
+ Each failing test gets its own directory with separate files:
225
+
226
+ ```
227
+ test-output/
228
+ test_module__test_name/
229
+ stdout.txt # stdout from test (includes setup, call, and teardown phases)
230
+ stderr.txt # stderr from test (includes setup, call, and teardown phases)
231
+ exception.txt # exception traceback
232
+ ```
233
+
234
+ ### Advanced: fd-level Capture
235
+
236
+ For tests that spawn subprocesses or write directly to file descriptors, you can enable fd-level capture. This is useful for integration tests that run external processes (such as a server that replicates a production environment).
237
+
238
+ #### Add fixture to function signature
239
+
240
+ Great for a single test:
241
+
242
+ ```python
243
+ def test_with_subprocess(file_descriptor_output_capture):
244
+ # subprocess.run() output will be captured
245
+ subprocess.run(["echo", "hello from subprocess"])
246
+
247
+ # multiprocessing.Process output will be captured
248
+ from multiprocessing import Process
249
+ proc = Process(target=lambda: print("hello from process"))
250
+ proc.start()
251
+ proc.join()
252
+
253
+ assert False # Trigger failure to write output files
229
254
  ```
230
255
 
231
- This creates a log file for each test and disables automatic cleanup.
256
+ Alternatively, you can use `@pytest.mark.usefixtures("file_descriptor_output_capture")`.
232
257
 
233
- ### How It Works
234
258
 
235
- 1. Sets `PYTHON_LOG_PATH` environment variable to a unique temp file for each test
236
- 2. Your application logs (via `configure_logger()`) write to this file
237
- 3. On test failure, the plugin prints the captured logs to stdout
238
- 4. Log files are cleaned up after the test session (unless `--capture-logs-dir` is used)
259
+ #### All tests in directory
239
260
 
240
- ### Example Output
261
+ Add to `conftest.py`:
241
262
 
242
- When a test fails, you'll see:
263
+ ```python
264
+ import pytest
243
265
 
266
+ pytestmark = pytest.mark.usefixtures("file_descriptor_output_capture")
244
267
  ```
245
- FAILED tests/test_user.py::test_user_login
246
268
 
247
- --- Captured logs for failed test (call): tests/test_user.py::test_user_login ---
248
- 2025-11-01 18:30:00 [info] User login started user_id=123
249
- 2025-11-01 18:30:01 [error] Database connection failed timeout=5.0
269
+ ### Example
270
+
271
+ When a test fails:
272
+
273
+ ```python
274
+ def test_user_login():
275
+ print("Starting login process")
276
+ print("ERROR: Connection failed", file=sys.stderr)
277
+ assert False, "Login failed"
250
278
  ```
251
279
 
252
- For passing tests, no log output is shown, keeping your test output clean and focused.
280
+ You'll get:
281
+
282
+ ```
283
+ test-output/test_user__test_user_login/
284
+ stdout.txt: "Starting login process"
285
+ stderr.txt: "ERROR: Connection failed"
286
+ exception.txt: Full traceback with "AssertionError: Login failed"
287
+ ```
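The `test_user__test_user_login` directory name above can be derived from the pytest nodeid. A plausible sketch of that mapping (the helper name and sanitization rules are assumptions, not the plugin's actual code):

```python
import re

def nodeid_to_dirname(nodeid: str) -> str:
    """Turn a pytest nodeid like 'tests/test_user.py::test_user_login'
    into a filesystem-friendly directory name like
    'test_user__test_user_login'. Hypothetical helper for illustration."""
    path, _, test_name = nodeid.partition("::")
    # keep only the module name, without directories or the .py suffix
    module = path.rsplit("/", 1)[-1].removesuffix(".py")
    raw = f"{module}__{test_name}"
    # replace characters that are awkward in paths (parametrized ids, etc.)
    return re.sub(r"[^A-Za-z0-9_.-]+", "_", raw)

print(nodeid_to_dirname("tests/test_user.py::test_user_login"))
```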
253
288
 
254
289
  ## Beautiful Traceback Support
255
290
 
@@ -9,19 +9,16 @@ Here are the main goals:
9
9
  * High performance JSON logging in production
10
10
  * All loggers, even plugin or system loggers, should route through the same formatter
11
11
  * Structured logging everywhere
12
+ * Pytest plugin to easily capture logs and dump them to a directory on failure. This is really important for LLMs so they can
13
+ easily consume logs and context for each test and handle them sequentially.
12
14
  * Ability to easily set thread-local log context
13
15
  * Nice log formatters for stack traces, ORM ([ActiveModel/SQLModel](https://github.com/iloveitaly/activemodel)), etc
14
16
  * Ability to log level and output (i.e. file path) *by logger* for easy development debugging
15
17
  * If you are using fastapi, structured logging for access logs
18
+ * [Improved exception logging with beautiful-traceback](https://github.com/iloveitaly/beautiful-traceback)
16
19
 
17
20
  ## Installation
18
21
 
19
- ```bash
20
- pip install structlog-config
21
- ```
22
-
23
- Or with [uv](https://docs.astral.sh/uv/):
24
-
25
22
  ```bash
26
23
  uv add structlog-config
27
24
  ```
@@ -34,9 +31,12 @@ from structlog_config import configure_logger
34
31
  log = configure_logger()
35
32
 
36
33
  log.info("the log", key="value")
34
+
35
+ # a named logger, just like stdlib, but with different syntax
36
+ custom_named_logger = structlog.get_logger(logger_name="test")
37
37
  ```
38
38
 
39
- ## JSON Logging for Production
39
+ ## JSON Logging in Production
40
40
 
41
41
  JSON logging is automatically enabled in production and staging environments (`PYTHON_ENV=production` or `PYTHON_ENV=staging`):
42
42
 
@@ -55,11 +55,13 @@ log = configure_logger(json_logger=True)
55
55
  log = configure_logger(json_logger=False)
56
56
  ```
57
57
 
58
- JSON logs use [orjson](https://github.com/ijl/orjson) for performance, include sorted keys and ISO timestamps, and serialize exceptions cleanly. Note that `PYTHON_LOG_PATH` is ignored with JSON logging (stdout only).
58
+ JSON logs use [orjson](https://github.com/ijl/orjson) for performance, include sorted keys and ISO timestamps, and serialize exceptions cleanly.
59
+
60
+ Note that `PYTHON_LOG_PATH` is ignored with JSON logging (stdout only).
59
61
 
60
62
  ## TRACE Logging Level
61
63
 
62
- This package adds support for a custom `TRACE` logging level (level 5) that's even more verbose than `DEBUG`. This is useful for extremely detailed debugging scenarios.
64
+ This package adds support for a custom `TRACE` logging level (level 5) that's even more verbose than `DEBUG`.
63
65
 
64
66
  The `TRACE` level is automatically set up when you call `configure_logger()`. You can use it like any other logging level:
65
67
 
@@ -71,7 +73,7 @@ log = configure_logger()
71
73
 
72
74
  # Using structlog
73
75
  log.info("This is info")
74
- log.debug("This is debug")
76
+ log.debug("This is debug")
75
77
  log.trace("This is trace") # Most verbose
76
78
 
77
79
  # Using stdlib logging
@@ -119,8 +121,6 @@ log.info("Processing file", file_path=Path.cwd() / "data" / "users.csv")
119
121
 
120
122
  ### Whenever Datetime Formatter
121
123
 
122
- **Note:** Requires `pip install whenever` to be installed.
123
-
124
124
  Formats [whenever](https://github.com/ariebovenberg/whenever) datetime objects without their class wrappers for cleaner output:
125
125
 
126
126
  ```python
@@ -135,8 +135,6 @@ Supports all whenever datetime types: `ZonedDateTime`, `Instant`, `LocalDateTime
135
135
 
136
136
  ### ActiveModel Object Formatter
137
137
 
138
- **Note:** Requires `pip install activemodel` and `pip install typeid-python` to be installed.
139
-
140
138
  Automatically converts [ActiveModel](https://github.com/iloveitaly/activemodel) BaseModel instances to their ID representation and TypeID objects to strings:
141
139
 
142
140
  ```python
@@ -149,8 +147,6 @@ log.info("User action", user=user)
149
147
 
150
148
  ### FastAPI Context
151
149
 
152
- **Note:** Requires `pip install starlette-context` to be installed.
153
-
154
150
  Automatically includes all context data from [starlette-context](https://github.com/tomwojcik/starlette-context) in your logs, useful for request tracing:
155
151
 
156
152
  ```python
@@ -176,63 +172,102 @@ Here's how to use it:
176
172
  1. [Disable fastapi's default logging.](https://github.com/iloveitaly/python-starter-template/blob/f54cb47d8d104987f2e4a668f9045a62e0d6818a/main.py#L55-L56)
177
173
  2. [Add the middleware to your FastAPI app.](https://github.com/iloveitaly/python-starter-template/blob/f54cb47d8d104987f2e4a668f9045a62e0d6818a/app/routes/middleware/__init__.py#L63-L65)
178
174
 
179
- ## Pytest Plugin: Capture Logs on Failure
175
+ ## Pytest Plugin: Capture Output on Failure
180
176
 
181
- A pytest plugin that captures logs per-test and displays them only when tests fail. This keeps your test output clean while ensuring you have all the debugging information you need when something goes wrong.
177
+ A pytest plugin that captures stdout, stderr, and exceptions from failing tests and writes them to organized output files. This is useful for debugging test failures, especially in CI/CD environments where you need to inspect output after the fact.
182
178
 
183
179
  ### Features
184
180
 
185
- - Only shows logs for failing tests (keeps output clean)
186
- - Captures logs from all test phases (setup, call, teardown)
187
- - Unique log file per test
188
- - Optional persistent log storage for debugging
189
- - Automatically handles `PYTHON_LOG_PATH` environment variable
181
+ - Captures stdout, stderr, and exception tracebacks for failing tests
182
+ - Only creates output for failing tests (keeps directories clean)
183
+ - Separate files for each output type (stdout.txt, stderr.txt, exception.txt)
184
+ - Captures all test phases (setup, call, teardown)
185
+ - Optional fd-level capture for subprocess output
190
186
 
191
187
  ### Usage
192
188
 
193
- Enable the plugin with the `--capture-logs-on-fail` flag:
189
+ Enable the plugin with the `--structlog-output` flag and `-s` (to disable pytest's built-in capture):
194
190
 
195
191
  ```bash
196
- pytest --capture-logs-on-fail
192
+ pytest --structlog-output=./test-output -s
197
193
  ```
198
194
 
199
- Or enable it permanently in `pytest.ini` or `pyproject.toml`:
195
+ The `--structlog-output` flag both enables the plugin and specifies where output files should be written.
196
+
197
+ **Recommended:** Also disable pytest's logging plugin with `-p no:logging` to avoid duplicate/interfering capture:
200
198
 
201
- ```toml
202
- [tool.pytest.ini_options]
203
- addopts = ["--capture-logs-on-fail"]
199
+ ```bash
200
+ pytest --structlog-output=./test-output -s -p no:logging
204
201
  ```
205
202
 
206
- ### Persist Logs to Directory
203
+ While the plugin works without this flag, disabling pytest's logging capture ensures cleaner output and avoids any potential conflicts between the two capture mechanisms.
207
204
 
208
- To keep all test logs for later inspection (useful for CI/CD debugging):
205
+ ### Output Structure
209
206
 
210
- ```bash
211
- pytest --capture-logs-dir=./test-logs
207
+ Each failing test gets its own directory with separate files:
208
+
209
+ ```
210
+ test-output/
211
+ test_module__test_name/
212
+ stdout.txt # stdout from test (includes setup, call, and teardown phases)
213
+ stderr.txt # stderr from test (includes setup, call, and teardown phases)
214
+ exception.txt # exception traceback
215
+ ```
216
+
217
+ ### Advanced: fd-level Capture
218
+
219
+ For tests that spawn subprocesses or write directly to file descriptors, you can enable fd-level capture. This is useful for integration tests that run external processes (such as a server that replicates a production environment).
220
+
221
+ #### Add fixture to function signature
222
+
223
+ Great for a single test:
224
+
225
+ ```python
226
+ def test_with_subprocess(file_descriptor_output_capture):
227
+ # subprocess.run() output will be captured
228
+ subprocess.run(["echo", "hello from subprocess"])
229
+
230
+ # multiprocessing.Process output will be captured
231
+ from multiprocessing import Process
232
+ proc = Process(target=lambda: print("hello from process"))
233
+ proc.start()
234
+ proc.join()
235
+
236
+ assert False # Trigger failure to write output files
212
237
  ```
213
238
 
214
- This creates a log file for each test and disables automatic cleanup.
239
+ Alternatively, you can use `@pytest.mark.usefixtures("file_descriptor_output_capture")`.
215
240
 
216
- ### How It Works
217
241
 
218
- 1. Sets `PYTHON_LOG_PATH` environment variable to a unique temp file for each test
219
- 2. Your application logs (via `configure_logger()`) write to this file
220
- 3. On test failure, the plugin prints the captured logs to stdout
221
- 4. Log files are cleaned up after the test session (unless `--capture-logs-dir` is used)
242
+ #### All tests in directory
222
243
 
223
- ### Example Output
244
+ Add to `conftest.py`:
224
245
 
225
- When a test fails, you'll see:
246
+ ```python
247
+ import pytest
226
248
 
249
+ pytestmark = pytest.mark.usefixtures("file_descriptor_output_capture")
227
250
  ```
228
- FAILED tests/test_user.py::test_user_login
229
251
 
230
- --- Captured logs for failed test (call): tests/test_user.py::test_user_login ---
231
- 2025-11-01 18:30:00 [info] User login started user_id=123
232
- 2025-11-01 18:30:01 [error] Database connection failed timeout=5.0
252
+ ### Example
253
+
254
+ When a test fails:
255
+
256
+ ```python
257
+ def test_user_login():
258
+ print("Starting login process")
259
+ print("ERROR: Connection failed", file=sys.stderr)
260
+ assert False, "Login failed"
233
261
  ```
234
262
 
235
- For passing tests, no log output is shown, keeping your test output clean and focused.
263
+ You'll get:
264
+
265
+ ```
266
+ test-output/test_user__test_user_login/
267
+ stdout.txt: "Starting login process"
268
+ stderr.txt: "ERROR: Connection failed"
269
+ exception.txt: Full traceback with "AssertionError: Login failed"
270
+ ```
236
271
 
237
272
  ## Beautiful Traceback Support
238
273
 
@@ -1,6 +1,6 @@
1
1
  [project]
2
2
  name = "structlog-config"
3
- version = "0.8.0"
3
+ version = "0.9.0"
4
4
  description = "A comprehensive structlog configuration with sensible defaults for development and production environments, featuring context management, exception formatting, and path prettification."
5
5
  keywords = ["logging", "structlog", "json-logging", "structured-logging"]
6
6
  readme = "README.md"
@@ -17,8 +17,11 @@ authors = [{ name = "Michael Bianco", email = "mike@mikebian.co" }]
17
17
  [project.optional-dependencies]
18
18
  fastapi = ["fastapi-ipware>=0.1.0"]
19
19
 
20
+ [project.entry-points.pytest11]
21
+ structlog_config = "structlog_config.pytest_plugin"
22
+
20
23
  [build-system]
21
- requires = ["uv_build>=0.8.11,<0.9.0"]
24
+ requires = ["uv_build>=0.9.0,<0.10.0"]
22
25
  build-backend = "uv_build"
23
26
 
24
27
  [tool.uv.build-backend]
@@ -37,7 +40,7 @@ dev = [
37
40
  ]
38
41
 
39
42
  [tool.pyright]
40
- exclude = ["playground/", "tmp/", ".venv/", "tests/"]
43
+ exclude = ["examples/", "playground/", "tmp/", ".venv/", "tests/"]
41
44
 
42
45
  [tool.ruff]
43
46
  extend-exclude = ["playground.py", "playground/"]
@@ -1,5 +1,5 @@
1
1
  from contextlib import _GeneratorContextManager
2
- from typing import Generator, Protocol
2
+ from typing import Protocol
3
3
 
4
4
  import orjson
5
5
  import structlog
@@ -20,11 +20,12 @@ from structlog_config.formatters import (
20
20
 
21
21
  from . import (
22
22
  packages,
23
- trace, # noqa: F401
23
+ trace, # noqa: F401 (import has side effects for trace level setup)
24
24
  )
25
25
  from .constants import NO_COLOR, package_logger
26
- from .environments import is_production, is_pytest, is_staging
26
+ from .environments import is_pytest
27
27
  from .levels import get_environment_log_level_as_string
28
+ from .hook import install_exception_hook
28
29
  from .stdlib_logging import (
29
30
  redirect_stdlib_loggers,
30
31
  )
@@ -146,6 +147,10 @@ class LoggerWithContext(FilteringBoundLogger, Protocol):
146
147
  "clear thread-local context"
147
148
  ...
148
149
 
150
+ def trace(self, *args, **kwargs) -> None: # noqa: F811
151
+ "trace level logging"
152
+ ...
153
+
149
154
 
150
155
  # TODO this may be a bad idea, but I really don't like how the `bound` stuff looks and how to access it, way too ugly
151
156
  def add_simple_context_aliases(log) -> LoggerWithContext:
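The `bind`/`clear` aliases added here wrap structlog's context-variable support. Conceptually, thread- and task-local log context behaves like this `contextvars` sketch (names are illustrative, not the package's API):

```python
from contextvars import ContextVar

# one context variable holding the currently bound key/value pairs;
# each thread and asyncio task sees its own copy
_log_context: ContextVar[dict] = ContextVar("log_context", default={})

def bind(**kwargs) -> None:
    """Merge values into the current context without mutating shared state."""
    _log_context.set({**_log_context.get(), **kwargs})

def clear() -> None:
    _log_context.set({})

def current_context() -> dict:
    return dict(_log_context.get())

bind(user_id=123)
bind(request_id="abc")
print(current_context())
clear()
print(current_context())
```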
@@ -157,7 +162,10 @@ def add_simple_context_aliases(log) -> LoggerWithContext:
157
162
 
158
163
 
159
164
  def configure_logger(
160
- *, logger_factory=None, json_logger: bool | None = None
165
+ *,
166
+ json_logger: bool = False,
167
+ logger_factory=None,
168
+ install_exception_hook: bool = False,
161
169
  ) -> LoggerWithContext:
162
170
  """
163
171
  Create a struct logger with some special additions:
@@ -170,9 +178,10 @@ def configure_logger(
170
178
  >>> log.clear()
171
179
 
172
180
  Args:
181
+ json_logger: Flag to use JSON logging. Defaults to False.
173
182
  logger_factory: Optional logger factory to override the default
174
- json_logger: Optional flag to use JSON logging. If None, defaults to
175
- production or staging environment sourced from PYTHON_ENV.
183
+ install_exception_hook: Optional flag to install a global exception hook
184
+ that logs uncaught exceptions using structlog. Defaults to False.
176
185
  """
177
186
  setup_trace()
178
187
 
@@ -180,8 +189,10 @@ def configure_logger(
180
189
  # This is important for tests where configure_logger might be called multiple times
181
190
  structlog.reset_defaults()
182
191
 
183
- if json_logger is None:
184
- json_logger = is_production() or is_staging()
192
+ if install_exception_hook:
193
+ from .hook import install_exception_hook as _install_hook
194
+
195
+ _install_hook(json_logger)
185
196
 
186
197
  redirect_stdlib_loggers(json_logger)
187
198
  redirect_showwarnings()
@@ -0,0 +1,8 @@
1
+ import os
2
+
3
+
4
+ def is_pytest():
5
+ """
6
+ PYTEST_CURRENT_TEST is set by pytest to indicate the current test being run
7
+ """
8
+ return "PYTEST_CURRENT_TEST" in os.environ
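The new helper leans on the fact that pytest exposes the running test through an environment variable, so its behavior can be exercised directly:

```python
import os

def is_pytest() -> bool:
    """PYTEST_CURRENT_TEST is set by pytest for the duration of each test."""
    return "PYTEST_CURRENT_TEST" in os.environ

# simulate both states explicitly rather than relying on the host process
os.environ.pop("PYTEST_CURRENT_TEST", None)
outside = is_pytest()
os.environ["PYTEST_CURRENT_TEST"] = "tests/test_x.py::test_y (call)"
inside = is_pytest()
print(outside, inside)
```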
@@ -61,7 +61,7 @@ def client_ip_from_request(request: Request | WebSocket) -> str | None:
61
61
  Uses fastapi-ipware library to properly extract client IP from various proxy headers.
62
62
  Fallback to direct client connection if no proxy headers found.
63
63
  """
64
- ip, trusted_route = ipware.get_client_ip_from_request(request)
64
+ ip, trusted_route = ipware.get_client_ip_from_request(request) # type: ignore
65
65
  if ip:
66
66
  log.debug(
67
67
  "extracted client IP from headers", ip=ip, trusted_route=trusted_route
@@ -21,9 +21,9 @@ def simplify_activemodel_objects(
21
21
  What's tricky about this method, and other structlog processors, is they are run *after* a response
22
22
  is returned to the user. So, they don't error out in tests and it doesn't impact users. They do show up in Sentry.
23
23
  """
24
- from activemodel import BaseModel
25
- from sqlalchemy.orm.base import object_state
26
- from typeid import TypeID
24
+ from activemodel import BaseModel # type: ignore
25
+ from sqlalchemy.orm.base import object_state # type: ignore
26
+ from typeid import TypeID # type: ignore
27
27
 
28
28
  for key, value in list(event_dict.items()):
29
29
  if isinstance(value, BaseModel):
@@ -181,7 +181,7 @@ def add_fastapi_context(
181
181
 
182
182
  https://github.com/tomwojcik/starlette-context/blob/master/example/setup_logging.py
183
183
  """
184
- from starlette_context import context
184
+ from starlette_context import context # type: ignore
185
185
 
186
186
  if context.exists():
187
187
  event_dict.update(context.data)
@@ -0,0 +1,20 @@
1
+ import sys
2
+
3
+ import structlog
4
+
5
+
6
+ def install_exception_hook(json_logger: bool = False):
7
+ def structlog_excepthook(exc_type, exc_value, exc_traceback):
8
+ if issubclass(exc_type, KeyboardInterrupt):
9
+ sys.__excepthook__(exc_type, exc_value, exc_traceback)
10
+ return
11
+
12
+ logger = structlog.get_logger()
13
+
14
+ # We rely on structlog's configuration (configured in __init__.py)
15
+ # to handle the exception formatting based on whether it's JSON or Console mode.
16
+ logger.exception(
17
+ "uncaught_exception", exc_info=(exc_type, exc_value, exc_traceback)
18
+ )
19
+
20
+ sys.excepthook = structlog_excepthook
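The same `sys.excepthook` pattern can be sketched with stdlib logging standing in for structlog (the structure mirrors the new `hook.py` above, but this is an illustrative stand-in, not the package's code):

```python
import logging
import sys

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def install_exception_hook():
    """Route uncaught exceptions through the logger; KeyboardInterrupt
    keeps Python's default behavior so Ctrl-C still prints normally."""
    def hook(exc_type, exc_value, exc_traceback):
        if issubclass(exc_type, KeyboardInterrupt):
            sys.__excepthook__(exc_type, exc_value, exc_traceback)
            return
        log.error(
            "uncaught_exception", exc_info=(exc_type, exc_value, exc_traceback)
        )
    sys.excepthook = hook

install_exception_hook()

# the hook can also be invoked directly, e.g. from a test:
try:
    raise ValueError("boom")
except ValueError:
    sys.excepthook(*sys.exc_info())
```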
@@ -3,36 +3,36 @@ Determine if certain packages are installed to conditionally enable processors
3
3
  """
4
4
 
5
5
  try:
6
- import orjson
6
+ import orjson # type: ignore
7
7
  except ImportError:
8
8
  orjson = None
9
9
 
10
10
  try:
11
- import sqlalchemy
11
+ import sqlalchemy # type: ignore
12
12
  except ImportError:
13
13
  sqlalchemy = None
14
14
 
15
15
  try:
16
- import activemodel
16
+ import activemodel # type: ignore
17
17
  except ImportError:
18
18
  activemodel = None
19
19
 
20
20
  try:
21
- import typeid
21
+ import typeid # type: ignore
22
22
  except ImportError:
23
23
  typeid = None
24
24
 
25
25
  try:
26
- import beautiful_traceback
26
+ import beautiful_traceback # type: ignore
27
27
  except ImportError:
28
28
  beautiful_traceback = None
29
29
 
30
30
  try:
31
- import starlette_context
31
+ import starlette_context # type: ignore
32
32
  except ImportError:
33
33
  starlette_context = None
34
34
 
35
35
  try:
36
- import whenever
36
+ import whenever # type: ignore
37
37
  except ImportError:
38
38
  whenever = None
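`packages.py` probes each optional dependency with a `try`/`except ImportError` block. The same conditional-dependency pattern can be written once with `importlib` (a generic sketch, not the module's code):

```python
import importlib
from importlib import util

def optional_module(name: str):
    """Return the imported module when installed, else None, mirroring the
    per-package try/except ImportError probes above."""
    # find_spec returns None for a missing top-level module without importing it
    if util.find_spec(name) is None:
        return None
    return importlib.import_module(name)

# stdlib json is always importable; a made-up name is not
print(optional_module("json") is not None)
print(optional_module("definitely_not_a_real_module_xyz") is None)
```

Formatters can then be enabled with a plain `if module is not None:` check, exactly as this file's `None` sentinels are used elsewhere in the package.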