pytest-codeblock 0.1.2__tar.gz → 0.1.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: pytest-codeblock
3
- Version: 0.1.2
3
+ Version: 0.1.4
4
4
  Summary: Pytest plugin to collect and test code blocks in reStructuredText and Markdown files.
5
5
  Author-email: Artur Barseghyan <artur.barseghyan@gmail.com>
6
6
  Maintainer-email: Artur Barseghyan <artur.barseghyan@gmail.com>
@@ -68,6 +68,11 @@ pytest-codeblock
68
68
  .. _Django: https://www.djangoproject.com
69
69
  .. _pip: https://pypi.org/project/pip/
70
70
  .. _uv: https://pypi.org/project/uv/
71
+ .. _fake.py: https://github.com/barseghyanartur/fake.py
72
+ .. _boto3: https://github.com/boto/boto3
73
+ .. _moto: https://github.com/getmoto/moto
74
+ .. _openai: https://github.com/openai/openai-python
75
+ .. _Ollama: https://github.com/ollama/ollama
71
76
 
72
77
  .. Internal references
73
78
 
@@ -112,6 +117,7 @@ Features
112
117
  - **Markdown and reST support**: Automatically finds fenced code blocks
113
118
  in `.md`/`.markdown` files and `.. code-block:: python` or literal blocks
114
119
  in `.rst` files.
120
+ - **Support for literalinclude blocks** in `.rst` files.
115
121
  - **Grouping by name**: Split a single example across multiple code blocks;
116
122
  the plugin concatenates them into one test.
117
123
  - **Minimal dependencies**: Only requires `pytest`_.
@@ -147,9 +153,7 @@ Configuration
147
153
 
148
154
  [tool.pytest.ini_options]
149
155
  testpaths = [
150
- "*.rst",
151
156
  "**/*.rst",
152
- "*.md",
153
157
  "**/*.md",
154
158
  ]
155
159
 
@@ -184,6 +188,8 @@ You can also use a literal block with a preceding name comment:
184
188
  y = 5
185
189
  print(y * 2)
186
190
 
191
+ ----
192
+
187
193
  **Grouping example**
188
194
 
189
195
  It's possible to split one logical test into multiple blocks.
@@ -216,8 +222,12 @@ Note the ``.. continue::`` directive.
216
222
 
217
223
  The above mentioned three snippets will run as a single test.
218
224
 
225
+ ----
226
+
219
227
  **pytest marks**
220
228
 
229
+ In the example below, the `django_db` marker is added to the code block.
230
+
221
231
  .. code-block:: rst
222
232
 
223
233
  .. pytestmark: django_db
@@ -228,6 +238,16 @@ The above mentioned three snippets will run as a single test.
228
238
 
229
239
  user = User.objects.first()
230
240
 
241
+ ----
242
+
243
+ **literalinclude**
244
+
245
+ .. code-block:: rst
246
+
247
+ .. pytestmark: fakepy
248
+ .. literalinclude:: examples/python/create_pdf_file_example.py
249
+ :name: test_li_create_pdf_file
250
+
231
251
  Markdown usage
232
252
  --------------
233
253
 
@@ -246,6 +266,8 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
246
266
  assert result == 9
247
267
  ```
248
268
 
269
+ ----
270
+
249
271
  **Grouping example**
250
272
 
251
273
  .. code-block:: markdown
@@ -260,6 +282,8 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
260
282
  print(x + 1) # Uses x from the first snippet
261
283
  ```
262
284
 
285
+ ----
286
+
263
287
  **pytest marks**
264
288
 
265
289
  .. code-block:: markdown
@@ -273,9 +297,23 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
273
297
 
274
298
  Customisation/hooks
275
299
  ===================
276
- If you want to add additional things into your specific tests, do as follows:
300
+ Tests can be extended and fine-tuned using `pytest`_'s standard hook system.
301
+
302
+ Below is an example workflow:
303
+
304
+ 1. **Add custom markers** to the code blocks (``fakepy``, ``aws``, ``openai``).
305
+ 2. **Implement pytest hooks** in ``conftest.py`` to react to those markers.
277
306
 
278
- **Add a couple of custom pytest marks**
307
+
308
+ Add custom markers in reStructuredText
309
+ --------------------------------------
310
+
311
+ ``fakepy`` reStructuredText marker
312
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
313
+
314
+ Sample `fake.py`_ code to generate a PDF file with random text.
315
+
316
+ *Filename: README.rst*
279
317
 
280
318
  .. code-block:: rst
281
319
 
@@ -287,6 +325,15 @@ If you want to add additional things into your specific tests, do as follows:
287
325
 
288
326
  FAKER.pdf_file()
289
327
 
328
+ ``aws`` reStructuredText marker
329
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
330
+
331
+ Sample `boto3`_ code to create a bucket on AWS S3.
332
+
333
+ *Filename: README.rst*
334
+
335
+ .. code-block:: rst
336
+
290
337
  .. pytestmark: aws
291
338
  .. code-block:: python
292
339
  :name: test_create_bucket
@@ -297,6 +344,17 @@ If you want to add additional things into your specific tests, do as follows:
297
344
  s3.create_bucket(Bucket="my-bucket")
298
345
  assert "my-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]
299
346
 
347
+ ``openai`` reStructuredText marker
348
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
349
+
350
+ Sample `openai`_ code to ask an LLM to tell a joke. Note that, next to the
351
+ custom ``openai`` marker, the ``xfail`` marker is used, which allows the
352
+ underlying code to fail without marking the entire test suite as failed.
353
+
354
+ *Filename: README.rst*
355
+
356
+ .. code-block:: rst
357
+
300
358
  .. pytestmark: xfail
301
359
  .. pytestmark: openai
302
360
  .. code-block:: python
@@ -315,12 +373,78 @@ If you want to add additional things into your specific tests, do as follows:
315
373
 
316
374
  assert isinstance(completion.choices[0].message.content, str)
317
375
 
318
- **Hook into it `conftest.py`**
376
+ Add custom markers in Markdown
377
+ ------------------------------
378
+
379
+ ``fakepy`` Markdown marker
380
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
381
+
382
+ *Filename: README.md*
383
+
384
+ .. code-block:: markdown
385
+
386
+ <!-- pytestmark: fakepy -->
387
+ ```python name=test_create_pdf_file
388
+ from fake import FAKER
389
+
390
+ FAKER.pdf_file()
391
+ ```
392
+
393
+ ``aws`` Markdown marker
394
+ ~~~~~~~~~~~~~~~~~~~~~~~
395
+
396
+ *Filename: README.md*
397
+
398
+ .. code-block:: markdown
399
+
400
+ <!-- pytestmark: aws -->
401
+ ```python name=test_create_bucket
402
+ import boto3
403
+
404
+ s3 = boto3.client("s3", region_name="us-east-1")
405
+ s3.create_bucket(Bucket="my-bucket")
406
+ assert "my-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]
407
+ ```
408
+
409
+ ``openai`` Markdown marker
410
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
411
+
412
+ *Filename: README.md*
413
+
414
+ .. code-block:: markdown
415
+
416
+ <!-- pytestmark: xfail -->
417
+ <!-- pytestmark: openai -->
418
+ ```python name=test_tell_me_a_joke
419
+ from openai import OpenAI
420
+
421
+ client = OpenAI()
422
+ completion = client.chat.completions.create(
423
+ model="gpt-4o",
424
+ messages=[
425
+ {"role": "developer", "content": "You are a famous comedian."},
426
+ {"role": "user", "content": "Tell me a joke."},
427
+ ],
428
+ )
429
+
430
+ assert isinstance(completion.choices[0].message.content, str)
431
+ ```
432
+
433
+ Implement pytest hooks
434
+ ----------------------
435
+
436
+ In the example below:
437
+
438
+ - `moto`_ is used to mock the AWS S3 service for all tests marked as ``aws``.
439
+ - The environment variable ``OPENAI_BASE_URL`` is set
440
+ to ``http://localhost:11434/v1`` (assuming you have `Ollama`_ running) for
441
+ all tests marked as ``openai``.
442
+ - ``FILE_REGISTRY.clean_up()`` is executed at the end of each test marked
443
+ as ``fakepy``.
319
444
 
320
445
  *Filename: conftest.py*
321
446
 
322
447
  .. code-block:: python
323
- :name: test_conftest
324
448
 
325
449
  import os
326
450
  from contextlib import suppress
@@ -335,7 +459,10 @@ If you want to add additional things into your specific tests, do as follows:
335
459
  def pytest_collection_modifyitems(config, items):
336
460
  for item in items:
337
461
  if item.get_closest_marker(CODEBLOCK_MARK):
338
- # Add `documentation` marker to `pytest-codeblock` tests
462
+ # All `pytest-codeblock` tests are automatically assigned
463
+ # a `codeblock` marker, which can be used for customisation.
464
+ # In the example below we add an additional `documentation`
465
+ # marker to `pytest-codeblock` tests.
339
466
  item.add_marker(pytest.mark.documentation)
340
467
  if item.get_closest_marker("aws"):
341
468
  # Apply `mock_aws` to all tests marked as `aws`
@@ -345,7 +472,11 @@ If you want to add additional things into your specific tests, do as follows:
345
472
  # Setup before test runs
346
473
  def pytest_runtest_setup(item):
347
474
  if item.get_closest_marker("openai"):
348
- # Send all OpenAI requests to locally running Ollama
475
+ # Send all OpenAI requests to locally running Ollama for all
476
+ # tests marked as `openai`. The tests would x-pass on environments
477
+ # where Ollama is up and running (assuming you have created an
478
+ # alias for gpt-4o using one of the available models) and would
479
+ # x-fail on environments where Ollama isn't running.
349
480
  os.environ.setdefault("OPENAI_API_KEY", "ollama")
350
481
  os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:11434/v1")
351
482
 
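The hook excerpts above cover collection and setup; the teardown that runs ``FILE_REGISTRY.clean_up()`` after each ``fakepy`` test is not shown in the hunks. Below is a minimal, hedged sketch of what such a teardown could look like. ``FileRegistryStub`` and ``ItemStub`` are local stand-ins (the real objects would be fake.py's ``FILE_REGISTRY`` and pytest's item), so this is an illustration, not the package's actual ``conftest.py``:

```python
from contextlib import suppress


class FileRegistryStub:
    """Local stand-in for fake.py's FILE_REGISTRY."""

    def __init__(self):
        self.cleaned = 0

    def clean_up(self):
        self.cleaned += 1


FILE_REGISTRY = FileRegistryStub()


def pytest_runtest_teardown(item, nextitem):
    # Remove files generated by fake.py after each `fakepy`-marked test.
    if item.get_closest_marker("fakepy"):
        with suppress(Exception):
            FILE_REGISTRY.clean_up()


class ItemStub:
    """Simplified pytest item exposing only get_closest_marker()."""

    def __init__(self, marks):
        self._marks = set(marks)

    def get_closest_marker(self, name):
        return name if name in self._marks else None


pytest_runtest_teardown(ItemStub({"fakepy"}), None)
pytest_runtest_teardown(ItemStub({"aws"}), None)
print(FILE_REGISTRY.cleaned)  # → 1: only the `fakepy` item triggers clean-up
```

In a real ``conftest.py`` the stubs disappear: pytest supplies the item, and ``FILE_REGISTRY`` is imported from ``fake``.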
@@ -9,6 +9,11 @@ pytest-codeblock
9
9
  .. _Django: https://www.djangoproject.com
10
10
  .. _pip: https://pypi.org/project/pip/
11
11
  .. _uv: https://pypi.org/project/uv/
12
+ .. _fake.py: https://github.com/barseghyanartur/fake.py
13
+ .. _boto3: https://github.com/boto/boto3
14
+ .. _moto: https://github.com/getmoto/moto
15
+ .. _openai: https://github.com/openai/openai-python
16
+ .. _Ollama: https://github.com/ollama/ollama
12
17
 
13
18
  .. Internal references
14
19
 
@@ -53,6 +58,7 @@ Features
53
58
  - **Markdown and reST support**: Automatically finds fenced code blocks
54
59
  in `.md`/`.markdown` files and `.. code-block:: python` or literal blocks
55
60
  in `.rst` files.
61
+ - **Support for literalinclude blocks** in `.rst` files.
56
62
  - **Grouping by name**: Split a single example across multiple code blocks;
57
63
  the plugin concatenates them into one test.
58
64
  - **Minimal dependencies**: Only requires `pytest`_.
@@ -88,9 +94,7 @@ Configuration
88
94
 
89
95
  [tool.pytest.ini_options]
90
96
  testpaths = [
91
- "*.rst",
92
97
  "**/*.rst",
93
- "*.md",
94
98
  "**/*.md",
95
99
  ]
96
100
 
@@ -125,6 +129,8 @@ You can also use a literal block with a preceding name comment:
125
129
  y = 5
126
130
  print(y * 2)
127
131
 
132
+ ----
133
+
128
134
  **Grouping example**
129
135
 
130
136
  It's possible to split one logical test into multiple blocks.
@@ -157,8 +163,12 @@ Note the ``.. continue::`` directive.
157
163
 
158
164
  The above mentioned three snippets will run as a single test.
159
165
 
166
+ ----
167
+
160
168
  **pytest marks**
161
169
 
170
+ In the example below, the `django_db` marker is added to the code block.
171
+
162
172
  .. code-block:: rst
163
173
 
164
174
  .. pytestmark: django_db
@@ -169,6 +179,16 @@ The above mentioned three snippets will run as a single test.
169
179
 
170
180
  user = User.objects.first()
171
181
 
182
+ ----
183
+
184
+ **literalinclude**
185
+
186
+ .. code-block:: rst
187
+
188
+ .. pytestmark: fakepy
189
+ .. literalinclude:: examples/python/create_pdf_file_example.py
190
+ :name: test_li_create_pdf_file
191
+
172
192
  Markdown usage
173
193
  --------------
174
194
 
@@ -187,6 +207,8 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
187
207
  assert result == 9
188
208
  ```
189
209
 
210
+ ----
211
+
190
212
  **Grouping example**
191
213
 
192
214
  .. code-block:: markdown
@@ -201,6 +223,8 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
201
223
  print(x + 1) # Uses x from the first snippet
202
224
  ```
203
225
 
226
+ ----
227
+
204
228
  **pytest marks**
205
229
 
206
230
  .. code-block:: markdown
@@ -214,9 +238,23 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
214
238
 
215
239
  Customisation/hooks
216
240
  ===================
217
- If you want to add additional things into your specific tests, do as follows:
241
+ Tests can be extended and fine-tuned using `pytest`_'s standard hook system.
242
+
243
+ Below is an example workflow:
244
+
245
+ 1. **Add custom markers** to the code blocks (``fakepy``, ``aws``, ``openai``).
246
+ 2. **Implement pytest hooks** in ``conftest.py`` to react to those markers.
218
247
 
219
- **Add a couple of custom pytest marks**
248
+
249
+ Add custom markers in reStructuredText
250
+ --------------------------------------
251
+
252
+ ``fakepy`` reStructuredText marker
253
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
254
+
255
+ Sample `fake.py`_ code to generate a PDF file with random text.
256
+
257
+ *Filename: README.rst*
220
258
 
221
259
  .. code-block:: rst
222
260
 
@@ -228,6 +266,15 @@ If you want to add additional things into your specific tests, do as follows:
228
266
 
229
267
  FAKER.pdf_file()
230
268
 
269
+ ``aws`` reStructuredText marker
270
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
271
+
272
+ Sample `boto3`_ code to create a bucket on AWS S3.
273
+
274
+ *Filename: README.rst*
275
+
276
+ .. code-block:: rst
277
+
231
278
  .. pytestmark: aws
232
279
  .. code-block:: python
233
280
  :name: test_create_bucket
@@ -238,6 +285,17 @@ If you want to add additional things into your specific tests, do as follows:
238
285
  s3.create_bucket(Bucket="my-bucket")
239
286
  assert "my-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]
240
287
 
288
+ ``openai`` reStructuredText marker
289
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
290
+
291
+ Sample `openai`_ code to ask an LLM to tell a joke. Note that, next to the
292
+ custom ``openai`` marker, the ``xfail`` marker is used, which allows the
293
+ underlying code to fail without marking the entire test suite as failed.
294
+
295
+ *Filename: README.rst*
296
+
297
+ .. code-block:: rst
298
+
241
299
  .. pytestmark: xfail
242
300
  .. pytestmark: openai
243
301
  .. code-block:: python
@@ -256,12 +314,78 @@ If you want to add additional things into your specific tests, do as follows:
256
314
 
257
315
  assert isinstance(completion.choices[0].message.content, str)
258
316
 
259
- **Hook into it `conftest.py`**
317
+ Add custom markers in Markdown
318
+ ------------------------------
319
+
320
+ ``fakepy`` Markdown marker
321
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
322
+
323
+ *Filename: README.md*
324
+
325
+ .. code-block:: markdown
326
+
327
+ <!-- pytestmark: fakepy -->
328
+ ```python name=test_create_pdf_file
329
+ from fake import FAKER
330
+
331
+ FAKER.pdf_file()
332
+ ```
333
+
334
+ ``aws`` Markdown marker
335
+ ~~~~~~~~~~~~~~~~~~~~~~~
336
+
337
+ *Filename: README.md*
338
+
339
+ .. code-block:: markdown
340
+
341
+ <!-- pytestmark: aws -->
342
+ ```python name=test_create_bucket
343
+ import boto3
344
+
345
+ s3 = boto3.client("s3", region_name="us-east-1")
346
+ s3.create_bucket(Bucket="my-bucket")
347
+ assert "my-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]
348
+ ```
349
+
350
+ ``openai`` Markdown marker
351
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
352
+
353
+ *Filename: README.md*
354
+
355
+ .. code-block:: markdown
356
+
357
+ <!-- pytestmark: xfail -->
358
+ <!-- pytestmark: openai -->
359
+ ```python name=test_tell_me_a_joke
360
+ from openai import OpenAI
361
+
362
+ client = OpenAI()
363
+ completion = client.chat.completions.create(
364
+ model="gpt-4o",
365
+ messages=[
366
+ {"role": "developer", "content": "You are a famous comedian."},
367
+ {"role": "user", "content": "Tell me a joke."},
368
+ ],
369
+ )
370
+
371
+ assert isinstance(completion.choices[0].message.content, str)
372
+ ```
373
+
374
+ Implement pytest hooks
375
+ ----------------------
376
+
377
+ In the example below:
378
+
379
+ - `moto`_ is used to mock the AWS S3 service for all tests marked as ``aws``.
380
+ - The environment variable ``OPENAI_BASE_URL`` is set
381
+ to ``http://localhost:11434/v1`` (assuming you have `Ollama`_ running) for
382
+ all tests marked as ``openai``.
383
+ - ``FILE_REGISTRY.clean_up()`` is executed at the end of each test marked
384
+ as ``fakepy``.
260
385
 
261
386
  *Filename: conftest.py*
262
387
 
263
388
  .. code-block:: python
264
- :name: test_conftest
265
389
 
266
390
  import os
267
391
  from contextlib import suppress
@@ -276,7 +400,10 @@ If you want to add additional things into your specific tests, do as follows:
276
400
  def pytest_collection_modifyitems(config, items):
277
401
  for item in items:
278
402
  if item.get_closest_marker(CODEBLOCK_MARK):
279
- # Add `documentation` marker to `pytest-codeblock` tests
403
+ # All `pytest-codeblock` tests are automatically assigned
404
+ # a `codeblock` marker, which can be used for customisation.
405
+ # In the example below we add an additional `documentation`
406
+ # marker to `pytest-codeblock` tests.
280
407
  item.add_marker(pytest.mark.documentation)
281
408
  if item.get_closest_marker("aws"):
282
409
  # Apply `mock_aws` to all tests marked as `aws`
@@ -286,7 +413,11 @@ If you want to add additional things into your specific tests, do as follows:
286
413
  # Setup before test runs
287
414
  def pytest_runtest_setup(item):
288
415
  if item.get_closest_marker("openai"):
289
- # Send all OpenAI requests to locally running Ollama
416
+ # Send all OpenAI requests to locally running Ollama for all
417
+ # tests marked as `openai`. The tests would x-pass on environments
418
+ # where Ollama is up and running (assuming you have created an
419
+ # alias for gpt-4o using one of the available models) and would
420
+ # x-fail on environments where Ollama isn't running.
290
421
  os.environ.setdefault("OPENAI_API_KEY", "ollama")
291
422
  os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:11434/v1")
292
423
 
@@ -1,6 +1,6 @@
1
1
  [project]
2
2
  name = "pytest-codeblock"
3
- version = "0.1.2"
3
+ version = "0.1.4"
4
4
  description = "Pytest plugin to collect and test code blocks in reStructuredText and Markdown files."
5
5
  readme = "README.rst"
6
6
  requires-python = ">=3.9"
@@ -190,6 +190,15 @@ pythonpath = [
190
190
  norecursedirs = [".git", "examples"]
191
191
  DJANGO_SETTINGS_MODULE = "django_settings"
192
192
 
193
+ markers = [
194
+ "slow: mark a test that takes a long time to run.",
195
+ "codeblock: pytest-codeblock markers",
196
+ "aws: mark test as an AWS test",
197
+ "documentation: mark test as a documentation test",
198
+ "fakepy: mark test as a fake.py test",
199
+ "openai: mark test as an openai test",
200
+ ]
201
+
193
202
  [tool.coverage.run]
194
203
  relative_files = true
195
204
  omit = [".tox/*"]
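The ``markers`` entries added above register the custom marks so pytest does not emit ``PytestUnknownMarkWarning`` for them. The same registration can be done programmatically in ``conftest.py`` via ``config.addinivalue_line``; the sketch below shows that equivalent, with ``ConfigStub`` as a hypothetical stand-in for pytest's ``Config`` object so it can run standalone:

```python
def pytest_configure(config):
    # Programmatic equivalent of the `markers` list in pyproject.toml.
    for line in (
        "codeblock: pytest-codeblock markers",
        "aws: mark test as an AWS test",
        "fakepy: mark test as a fake.py test",
        "openai: mark test as an openai test",
    ):
        config.addinivalue_line("markers", line)


class ConfigStub:
    """Minimal stand-in for pytest's Config object."""

    def __init__(self):
        self.ini = {}

    def addinivalue_line(self, name, line):
        self.ini.setdefault(name, []).append(line)


cfg = ConfigStub()
pytest_configure(cfg)
print(len(cfg.ini["markers"]))  # → 4
```

Under pytest, only ``pytest_configure`` is needed; the stub exists purely to exercise the hook outside a test session.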
@@ -2,7 +2,7 @@ from .md import MarkdownFile
2
2
  from .rst import RSTFile
3
3
 
4
4
  __title__ = "pytest-codeblock"
5
- __version__ = "0.1.2"
5
+ __version__ = "0.1.4"
6
6
  __author__ = "Artur Barseghyan <artur.barseghyan@gmail.com>"
7
7
  __copyright__ = "2025 Artur Barseghyan"
8
8
  __license__ = "MIT"
@@ -1,4 +1,5 @@
1
1
  import re
2
+ from pathlib import Path
2
3
  from typing import Optional
3
4
 
4
5
  import pytest
@@ -14,8 +15,40 @@ __all__ = (
14
15
  "parse_rst",
15
16
  )
16
17
 
18
+ # Highlight: Added helper function for literalinclude path resolution
19
+ def resolve_literalinclude_path(
20
+ base_dir: Path,
21
+ include_path: str,
22
+ ) -> Optional[str]:
23
+ """
24
+ Resolve the full path for a literalinclude directive.
25
+ Returns None if the file doesn't exist.
26
+ """
27
+ _include_path = Path(include_path)
28
+ if _include_path.exists():
29
+ return str(_include_path.resolve())
30
+
31
+ _base_dir = Path(base_dir.dirname) if base_dir.isfile() else base_dir
32
+ try:
33
+ full_path = _base_dir / include_path
34
+ if full_path.exists():
35
+ return str(full_path.resolve())
36
+ except Exception:
37
+ pass
38
+ return None
39
+
17
40
 
18
- def parse_rst(text: str) -> list[CodeSnippet]:
41
+ def get_literalinclude_content(path):
42
+ try:
43
+ with open(path) as f:
44
+ return f.read()
45
+ except Exception as e:
46
+ raise RuntimeError(
47
+ f"Failed to read literalinclude file {path}: {e}"
48
+ ) from e
49
+
50
+
51
+ def parse_rst(text: str, base_dir: Path) -> list[CodeSnippet]:
19
52
  """
20
53
  Parse an RST document into CodeSnippet objects, capturing:
21
54
  - .. pytestmark: <mark>
@@ -35,28 +68,65 @@ def parse_rst(text: str) -> list[CodeSnippet]:
35
68
  while i < n:
36
69
  line = lines[i]
37
70
 
38
- # Collect ".. pytestmark: xyz"
71
+ # --------------------------------------------------------------------
72
+ # Collect `.. pytestmark: xyz`
73
+ # --------------------------------------------------------------------
39
74
  m = re.match(r"^\s*\.\.\s*pytestmark:\s*(\w+)\s*$", line)
40
75
  if m:
41
76
  pending_marks.append(m.group(1))
42
77
  i += 1
43
78
  continue
44
79
 
45
- # Collect ".. continue: foo"
80
+ # --------------------------------------------------------------------
81
+ # The `.. literalinclude` directive
82
+ # --------------------------------------------------------------------
83
+ if line.strip().startswith(".. literalinclude::"):
84
+ path = line.split(".. literalinclude::", 1)[1].strip()
85
+ name = None
86
+
87
+ # Look ahead for name
88
+ j = i + 1
89
+ while j < len(lines) and lines[j].strip():
90
+ if ":name:" in lines[j]:
91
+ name = lines[j].split(":name:", 1)[1].strip()
92
+ break
93
+ j += 1
94
+
95
+ if name and name.startswith("test_"):
96
+ full_path = resolve_literalinclude_path(base_dir, path)
97
+ if full_path:
98
+ snippet = CodeSnippet(
99
+ code=get_literalinclude_content(full_path),
100
+ line=i + 1,
101
+ name=name,
102
+ marks=pending_marks.copy(),
103
+ )
104
+ snippets.append(snippet)
105
+
106
+ i = j + 1
107
+ continue
108
+
109
+ # --------------------------------------------------------------------
110
+ # Collect `.. continue: foo`
111
+ # --------------------------------------------------------------------
46
112
  m = re.match(r"^\s*\.\.\s*continue:\s*(\S+)\s*$", line)
47
113
  if m:
48
114
  pending_continue = m.group(1)
49
115
  i += 1
50
116
  continue
51
117
 
52
- # Collect ".. codeblock-name: foo"
118
+ # --------------------------------------------------------------------
119
+ # Collect `.. codeblock-name: foo`
120
+ # --------------------------------------------------------------------
53
121
  m = re.match(r"^\s*\.\.\s*codeblock-name:\s*(\S+)\s*$", line)
54
122
  if m:
55
123
  pending_name = m.group(1)
56
124
  i += 1
57
125
  continue
58
126
 
59
- # The code-block directive
127
+ # --------------------------------------------------------------------
128
+ # The `.. code-block` directive
129
+ # --------------------------------------------------------------------
60
130
  m = re.match(r"^(\s*)\.\. (?:code-block|code)::\s*(\w+)", line)
61
131
  if m:
62
132
  base_indent = len(m.group(1))
@@ -125,7 +195,9 @@ def parse_rst(text: str) -> list[CodeSnippet]:
125
195
  i += 1
126
196
  continue
127
197
 
198
+ # --------------------------------------------------------------------
128
199
  # The literal-block via "::"
200
+ # --------------------------------------------------------------------
129
201
  if line.rstrip().endswith("::") and pending_name:
130
202
  # Similar override logic
131
203
  if pending_continue:
@@ -176,7 +248,7 @@ class RSTFile(pytest.File):
176
248
  """Collect RST code-block tests as real test functions."""
177
249
  def collect(self):
178
250
  text = self.fspath.read_text(encoding="utf-8")
179
- raw = parse_rst(text)
251
+ raw = parse_rst(text, self.fspath)
180
252
 
181
253
  # Only keep test_* snippets
182
254
  tests = [
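The new ``literalinclude`` support resolves the include path first against the current working directory, then against the directory containing the RST file. Below is a pathlib-only sketch of that lookup order; the name ``resolve_include`` is illustrative, and the released helper instead accepts a py.path.local-style ``base_dir``:

```python
from pathlib import Path
from typing import Optional


def resolve_include(base: Path, include_path: str) -> Optional[str]:
    """Resolve a ``literalinclude`` target.

    Tries the path as given (relative to the CWD) first, then relative
    to the directory containing the RST file. Returns None if neither
    candidate exists.
    """
    candidate = Path(include_path)
    if candidate.exists():
        return str(candidate.resolve())
    base_dir = base.parent if base.is_file() else base
    candidate = base_dir / include_path
    if candidate.exists():
        return str(candidate.resolve())
    return None


if __name__ == "__main__":
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        rst = Path(tmp) / "README.rst"
        rst.write_text("")
        example = Path(tmp) / "example.py"
        example.write_text("x = 1\n")
        # Found relative to the RST file's directory.
        assert resolve_include(rst, "example.py") == str(example.resolve())
        # Missing files resolve to None.
        assert resolve_include(rst, "missing.py") is None
```

This mirrors why the plugin threads ``self.fspath`` into ``parse_rst``: the include path in the directive is usually relative to the document, not to wherever pytest was invoked.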
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: pytest-codeblock
3
- Version: 0.1.2
3
+ Version: 0.1.4
4
4
  Summary: Pytest plugin to collect and test code blocks in reStructuredText and Markdown files.
5
5
  Author-email: Artur Barseghyan <artur.barseghyan@gmail.com>
6
6
  Maintainer-email: Artur Barseghyan <artur.barseghyan@gmail.com>
@@ -68,6 +68,11 @@ pytest-codeblock
68
68
  .. _Django: https://www.djangoproject.com
69
69
  .. _pip: https://pypi.org/project/pip/
70
70
  .. _uv: https://pypi.org/project/uv/
71
+ .. _fake.py: https://github.com/barseghyanartur/fake.py
72
+ .. _boto3: https://github.com/boto/boto3
73
+ .. _moto: https://github.com/getmoto/moto
74
+ .. _openai: https://github.com/openai/openai-python
75
+ .. _Ollama: https://github.com/ollama/ollama
71
76
 
72
77
  .. Internal references
73
78
 
@@ -112,6 +117,7 @@ Features
112
117
  - **Markdown and reST support**: Automatically finds fenced code blocks
113
118
  in `.md`/`.markdown` files and `.. code-block:: python` or literal blocks
114
119
  in `.rst` files.
120
+ - **Support for literalinclude blocks** in `.rst` files.
115
121
  - **Grouping by name**: Split a single example across multiple code blocks;
116
122
  the plugin concatenates them into one test.
117
123
  - **Minimal dependencies**: Only requires `pytest`_.
@@ -147,9 +153,7 @@ Configuration
147
153
 
148
154
  [tool.pytest.ini_options]
149
155
  testpaths = [
150
- "*.rst",
151
156
  "**/*.rst",
152
- "*.md",
153
157
  "**/*.md",
154
158
  ]
155
159
 
@@ -184,6 +188,8 @@ You can also use a literal block with a preceding name comment:
184
188
  y = 5
185
189
  print(y * 2)
186
190
 
191
+ ----
192
+
187
193
  **Grouping example**
188
194
 
189
195
  It's possible to split one logical test into multiple blocks.
@@ -216,8 +222,12 @@ Note the ``.. continue::`` directive.
216
222
 
217
223
  The above mentioned three snippets will run as a single test.
218
224
 
225
+ ----
226
+
219
227
  **pytest marks**
220
228
 
229
+ In the example below, the `django_db` marker is added to the code block.
230
+
221
231
  .. code-block:: rst
222
232
 
223
233
  .. pytestmark: django_db
@@ -228,6 +238,16 @@ The above mentioned three snippets will run as a single test.
228
238
 
229
239
  user = User.objects.first()
230
240
 
241
+ ----
242
+
243
+ **literalinclude**
244
+
245
+ .. code-block:: rst
246
+
247
+ .. pytestmark: fakepy
248
+ .. literalinclude:: examples/python/create_pdf_file_example.py
249
+ :name: test_li_create_pdf_file
250
+
231
251
  Markdown usage
232
252
  --------------
233
253
 
@@ -246,6 +266,8 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
246
266
  assert result == 9
247
267
  ```
248
268
 
269
+ ----
270
+
249
271
  **Grouping example**
250
272
 
251
273
  .. code-block:: markdown
@@ -260,6 +282,8 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
260
282
  print(x + 1) # Uses x from the first snippet
261
283
  ```
262
284
 
285
+ ----
286
+
263
287
  **pytest marks**
264
288
 
265
289
  .. code-block:: markdown
@@ -273,9 +297,23 @@ Any fenced code block with a recognized Python language tag (e.g., ``python``,
273
297
 
274
298
  Customisation/hooks
275
299
  ===================
276
- If you want to add additional things into your specific tests, do as follows:
300
+ Tests can be extended and fine-tuned using `pytest`_'s standard hook system.
301
+
302
+ Below is an example workflow:
303
+
304
+ 1. **Add custom markers** to the code blocks (``fakepy``, ``aws``, ``openai``).
305
+ 2. **Implement pytest hooks** in ``conftest.py`` to react to those markers.
277
306
 
278
- **Add a couple of custom pytest marks**
307
+
308
+ Add custom markers in reStructuredText
309
+ --------------------------------------
310
+
311
+ ``fakepy`` reStructuredText marker
312
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
313
+
314
+ Sample `fake.py`_ code to generate a PDF file with random text.
315
+
316
+ *Filename: README.rst*
279
317
 
280
318
  .. code-block:: rst
281
319
 
@@ -287,6 +325,15 @@ If you want to add additional things into your specific tests, do as follows:
287
325
 
288
326
  FAKER.pdf_file()
289
327
 
328
+ ``aws`` reStructuredText marker
329
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
330
+
331
+ Sample `boto3`_ code to create a bucket on AWS S3.
332
+
333
+ *Filename: README.rst*
334
+
335
+ .. code-block:: rst
336
+
290
337
  .. pytestmark: aws
291
338
  .. code-block:: python
292
339
  :name: test_create_bucket
@@ -297,6 +344,17 @@ If you want to add additional things into your specific tests, do as follows:
297
344
  s3.create_bucket(Bucket="my-bucket")
298
345
  assert "my-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]
299
346
 
347
+ ``openai`` reStructuredText marker
348
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
349
+
350
+ Sample `openai`_ code to ask an LLM to tell a joke. Note that, next to the
351
+ custom ``openai`` marker, the ``xfail`` marker is used, which allows the
352
+ underlying code to fail without marking the entire test suite as failed.
353
+
354
+ *Filename: README.rst*
355
+
356
+ .. code-block:: rst
357
+
300
358
  .. pytestmark: xfail
301
359
  .. pytestmark: openai
302
360
  .. code-block:: python
@@ -315,12 +373,78 @@ If you want to add additional things into your specific tests, do as follows:
315
373
 
316
374
  assert isinstance(completion.choices[0].message.content, str)
317
375
 
318
- **Hook into it `conftest.py`**
376
+ Add custom markers in Markdown
377
+ ------------------------------
378
+
379
+ ``fakepy`` Markdown marker
380
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
381
+
382
+ *Filename: README.md*
383
+
384
+ .. code-block:: markdown
385
+
386
+ <!-- pytestmark: fakepy -->
387
+ ```python name=test_create_pdf_file
388
+ from fake import FAKER
389
+
390
+ FAKER.pdf_file()
391
+ ```
392
+
393
+ ``aws`` Markdown marker
394
+ ~~~~~~~~~~~~~~~~~~~~~~~
395
+
396
+ *Filename: README.md*
397
+
398
+ .. code-block:: markdown
399
+
400
+ <!-- pytestmark: aws -->
401
+ ```python name=test_create_bucket
402
+ import boto3
403
+
404
+ s3 = boto3.client("s3", region_name="us-east-1")
405
+ s3.create_bucket(Bucket="my-bucket")
406
+ assert "my-bucket" in [b["Name"] for b in s3.list_buckets()["Buckets"]]
407
+ ```
408
+
409
+ ``openai`` Markdown marker
410
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
411
+
412
+ *Filename: README.md*
413
+
414
+ .. code-block:: markdown
415
+
416
+ <!-- pytestmark: xfail -->
417
+ <!-- pytestmark: openai -->
418
+ ```python name=test_tell_me_a_joke
419
+ from openai import OpenAI
420
+
421
+ client = OpenAI()
422
+ completion = client.chat.completions.create(
423
+ model="gpt-4o",
424
+ messages=[
425
+ {"role": "developer", "content": "You are a famous comedian."},
426
+ {"role": "user", "content": "Tell me a joke."},
427
+ ],
428
+ )
429
+
430
+ assert isinstance(completion.choices[0].message.content, str)
431
+ ```
432
+
433
+ Implement pytest hooks
434
+ ----------------------
435
+
436
+ In the example below:
437
+
438
+ - `moto`_ is used to mock the AWS S3 service for all tests marked as ``aws``.
439
+ - The environment variable ``OPENAI_BASE_URL`` is set
440
+ to ``http://localhost:11434/v1`` (assuming you have `Ollama`_ running) for
441
+ all tests marked as ``openai``.
442
+ - ``FILE_REGISTRY.clean_up()`` is executed at the end of each test marked
443
+ as ``fakepy``.
319
444
 
320
445
  *Filename: conftest.py*
321
446
 
322
447
  .. code-block:: python
323
- :name: test_conftest
324
448
 
325
449
  import os
326
450
  from contextlib import suppress
@@ -335,7 +459,10 @@ If you want to add additional things into your specific tests, do as follows:
335
459
  def pytest_collection_modifyitems(config, items):
336
460
  for item in items:
337
461
  if item.get_closest_marker(CODEBLOCK_MARK):
338
- # Add `documentation` marker to `pytest-codeblock` tests
462
+ # All `pytest-codeblock` tests are automatically assigned
463
+ # a `codeblock` marker, which can be used for customisation.
464
+ # In the example below we add an additional `documentation`
465
+ # marker to `pytest-codeblock` tests.
339
466
  item.add_marker(pytest.mark.documentation)
340
467
  if item.get_closest_marker("aws"):
341
468
  # Apply `mock_aws` to all tests marked as `aws`
@@ -345,7 +472,11 @@ If you want to add additional things into your specific tests, do as follows:
345
472
  # Setup before test runs
346
473
  def pytest_runtest_setup(item):
347
474
  if item.get_closest_marker("openai"):
348
- # Send all OpenAI requests to locally running Ollama
475
+ # Send all OpenAI requests to locally running Ollama for all
476
+ # tests marked as `openai`. The tests would x-pass on environments
477
+ # where Ollama is up and running (assuming, you have created an
478
+ # alias for gpt-4o using one of the available models) and would
479
+ # x-fail on environments, where Ollama isn't runnig.
349
480
  os.environ.setdefault("OPENAI_API_KEY", "ollama")
350
481
  os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:11434/v1")
351
482