pistributer-0.2.0-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- pistributer-0.2.0.dist-info/METADATA +333 -0
- pistributer-0.2.0.dist-info/RECORD +8 -0
- pistributer-0.2.0.dist-info/WHEEL +5 -0
- pistributer-0.2.0.dist-info/licenses/LICENSE +160 -0
- pistributer-0.2.0.dist-info/top_level.txt +3 -0
- pistributer.py +348 -0
- pistributer_sqlite.py +210 -0
- pistributer_txt.py +281 -0
pistributer-0.2.0.dist-info/METADATA
ADDED
@@ -0,0 +1,333 @@
Metadata-Version: 2.4
Name: pistributer
Version: 0.2.0
Summary: A local-first file-backed FIFO queue with jsonl, txt, and sqlite drivers.
Author: Yeong
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/yeongdev/pistributer
Project-URL: Repository, https://github.com/yeongdev/pistributer
Project-URL: Issues, https://github.com/yeongdev/pistributer/issues
Project-URL: Changelog, https://github.com/yeongdev/pistributer/blob/main/CHANGELOG.md
Keywords: queue,file,fifo,jsonl,streaming
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Dynamic: license-file

# pistributer

`pistributer` is a local-first FIFO queue toolkit for file-based workflows.

**Hard position:** `pistributer` exists for developers who want a usable queue across servers or local jobs without standing up Redis, Kafka, or another heavy service.

**Performance contract:** this project prefers faster write/read throughput over a larger or more polished interface. The core value is still the same: write, read, high concurrency, multi-file workloads, and as little extra overhead as possible.

It started as a high-throughput local queue used on large datasets, and it now ships three practical drivers instead of pretending one storage format is ideal for every workload:

- `txt` for the shortest raw file path
- `jsonl` for structured, inspectable records
- `sqlite` for stronger integrity under contention

## When to use each driver

| Driver | Use it when | Avoid it when |
| --- | --- | --- |
| `txt` | You want the shortest plain-text file path and staged or single-writer-friendly queueing | You need structured records or stronger overlap correctness |
| `jsonl` | You want readable structured payloads and the default publishable workflow | You need the rawest append path or heavy overlapping write/read integrity |
| `sqlite` | You want stronger local correctness under contention | You want the lightest append-heavy file path |

## Project background

The earliest version of this tool was not created as a polished open-source package. It was created by a developer who wanted a queue but did not want to install or operate a heavier system such as Redis or Kafka.

The original approach was simple: use the filesystem itself as the message layer and use plain `.txt` files for data exchange. That early code was not cleaned up for publication, but the underlying queue logic was already useful in practice and was used on large amounts of real data.

Later, after the developer started building AI-oriented workflows with `nanobot`, the need for a cleaner installable package became more obvious. That is what pushed the repackaging effort: not a rewrite for the sake of novelty, but a practical need for `pip` installability, clearer docs, and a structured default format.

That is why the project now has two historical truths at the same time:

- the `0.1.x` line represents long-used file-queue logic that existed before the packaging cleanup
- the `0.2.0` line makes `.jsonl` the default public driver and adds wider tests, benchmarks, and release tooling

The current repository keeps that history visible instead of hiding it. The old behavior is preserved for comparison in `benchmarks/pistributer_bak.py`, while the main package is documented and tested as the public installable tool.

## `0.1.x` and `0.2.0` in practice

The simplest way to understand the version difference is this:

- `0.1.x` means the long-used plain-text file-queue era
- `0.2.0` means the public packaged era with `.jsonl` as the default driver

What changed:

- `0.1.x` was centered on the shortest `.txt` path and real-world usage before packaging cleanup
- `0.2.0` keeps the same basic file-queue idea but makes `.jsonl` the default public format
- `0.2.0` adds tests, benchmark documentation, packaging, and a clearer boundary for when to use `txt`, `jsonl`, and `sqlite`

What did **not** change:

- the project still values write/read throughput over interface growth
- the core file-queue idea is still append, rotate into `.in_use`, and consume with a small API
- the hot path is still expected to stay small and fast

Measured conclusion from the current benchmark work:

- the visible slowdown from `0.1.x` to `0.2.0` is real, but it is not mostly caused by the `.jsonl` extension itself
- most of the remaining overhead comes from the modern hot path doing a little more work around path validation and file-operation safeguards
- JSON serialization adds a smaller extra cost when callers pass Python objects instead of already-serialized strings

In short: `0.2.0` is slower than the historical plain-text path, but the gap is now mostly the cost of a cleaner public default and structured serialization, not a dramatic penalty from JSONL as a format.

## Design priorities

The author's preference is very explicit:

1. keep the queue fast under heavy write/read workloads
2. keep the public surface small
3. avoid changes that reduce throughput unless the trade-off is clearly worth it

That means this project does **not** try to win by offering a large interface. It tries to win by doing a very small number of things well:

- append records fast
- read records fast
- handle many files
- stay useful under high write pressure

The current `.jsonl` default is accepted even though it is slower than the historical `.txt` path, because the measured overhead is still within an acceptable range for the intended workflows. Outside of that specific trade-off, performance regressions are treated as unacceptable.

## Project position

`pistributer` is best positioned as a lightweight local queue for scripts, batch jobs, and single-host pipelines.

It is a good fit when:

- your workload is file-centric
- you want simple deployment with no external service
- you care about readable queue state on disk
- you want to choose between throughput-first and integrity-first local drivers
- your file-driver usage is staged or single-writer-friendly

It is not the best fit when:

- you need a multi-node distributed queue
- you need strict correctness under heavy overlapping readers and writers with the file drivers
- you want managed persistence, replication, or cross-host coordination

## Why use it

Use `pistributer` when you want a small queue abstraction for a local or single-host pipeline without introducing Redis, Kafka, or a separate database service.

What it is good at:

- append-heavy local workloads
- high-concurrency write-focused workloads
- simple batch pipelines and scripts
- readable on-disk data formats
- lightweight deployment with minimal operational overhead

The file-based drivers rotate active data into `.in_use`, which helps reduce direct producer/consumer contention on the same file.
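As a concrete illustration, here is a minimal sketch of what rotation looks like on disk for the default driver, assuming a fresh working directory; the `events` channel name is just an example:

```python
import os
from pistributer import Pistributer

Pistributer.put("events.jsonl", {"value": "hello"})

# Opening the queue rotates events.jsonl into events.jsonl.in_use and
# writes the sidecar read index, so new appends land in a fresh file.
queue = Pistributer("events.jsonl")
Pistributer.put("events.jsonl", {"value": "world"})

print(sorted(p for p in os.listdir(".") if p.startswith("events.jsonl")))
# Expected: ['events.jsonl', 'events.jsonl.in_use', 'events.jsonl.index']
```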
The two file drivers, `Pistributer` and `PistributerTxt`, are best documented as staged or single-writer-friendly. They work well when appends and reads are mostly separated, but they are not the strongest option for heavy overlapping writer and reader contention.

For hot-path performance, `put()` in the two file drivers assumes the parent directory already exists. Directory preparation is intentionally kept outside the hot path.
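A minimal sketch of that split, assuming a nested `out/` directory that the caller prepares once before entering the append loop:

```python
import os
from pistributer import Pistributer

os.makedirs("out", exist_ok=True)  # one-time setup, outside the hot path

for i in range(1000):
    # put() only validates the suffix, serializes, and appends;
    # it raises FileNotFoundError if "out" were missing.
    Pistributer.put("out/channel.jsonl", {"seq": i})
```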
The intent is not to keep adding more actions and more surface area. The intent is to keep the important path small: write, read, and move a lot of data without unnecessary slowdown.
## Driver modes

### `jsonl` driver

```python
from pistributer import Pistributer
```

Use this as the default mode when you want structured records and a modern `.jsonl` workflow.

### `txt` driver

```python
from pistributer_txt import PistributerTxt
```

Use this when raw text throughput matters more than structure.

### `sqlite` driver

```python
from pistributer_sqlite import PistributerSqlite
```

Use this when queue correctness matters more than the shortest append path.
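For orientation, a short usage sketch of the SQLite driver, based on the methods shipped in `pistributer_sqlite.py` (`put_many`, `next`, `is_empty`, `close`); the `events.db` name is just an example:

```python
from pistributer_sqlite import PistributerSqlite

queue = PistributerSqlite("events.db")  # path must end with .db
queue.put_many(["start", "step", "finish"])  # inserted in one transaction

while not queue.is_empty():
    print(queue.next())  # marks each row consumed transactionally

queue.close()
```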
See `DRIVERS.md` for a full comparison.

See `EXAMPLES.md` for small copyable examples for all three drivers.

The Python docstrings are written to be `help()`-friendly, so `help(Pistributer)`, `help(Pistributer.put)`, and the equivalent driver methods should give usable inline guidance.

## Choose a driver

| Driver | Best for | Main trade-off |
| --- | --- | --- |
| `Pistributer` (`jsonl`) | Structured local queues and readable payloads | More overhead than raw text |
| `PistributerTxt` | Fast plain-text append-heavy workloads | Least structured format |
| `PistributerSqlite` | Stronger local correctness under contention | Higher transaction overhead |

## API naming note

Public API names stay stable on purpose.

- `Pistributer` and `PistributerTxt` keep the historical camelCase names such as `isEmpty()` and `getIndex()`.
- `PistributerSqlite` keeps its newer snake_case method `is_empty()`.

The naming is not perfectly uniform, but the project keeps the existing public contract instead of breaking working code.

## Install

```bash
pip install pistributer
```

## Quick start

```python
from pistributer import Pistributer

Pistributer.put("channel.jsonl", {"value": "hello"})
Pistributer.put("channel.jsonl", {"value": "world"})

queue = Pistributer("channel.jsonl")

print(queue.next())
print(queue.next())
print(queue.isEmpty())
```

## Core API

The practical core of the project is intentionally small: append with `put()` and consume with `next()`.

The other methods exist to support queue lifecycle and compatibility, not to turn the library into a broad interface framework.

- `Pistributer(path)`: open a queue backed by a `.jsonl` file
- `Pistributer.put(path, value)`: append one message; the parent directory must already exist
- `Pistributer.new(path, value, overwrite=False, sep="")`: create a file with initial content
- `Pistributer.next()`: return the next message, or raise `StopIteration` if empty
- `Pistributer.isEmpty()`: return `True` when no unread messages remain
- `Pistributer.size()`: count queued messages across active files
- `Pistributer.remaining()`: count unread messages
- `Pistributer.getIndex()`: return the persisted read index as `{"index": consumed_count}`
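A minimal lifecycle sketch tying these together; the `jobs.jsonl` channel name and the expected values in the comments are illustrative, assuming a fresh directory:

```python
from pistributer import Pistributer

# new() writes the payload plus sep; "\n" keeps the next append on its own line.
Pistributer.new("jobs.jsonl", {"job": 0}, overwrite=True, sep="\n")
Pistributer.put("jobs.jsonl", {"job": 1})

queue = Pistributer("jobs.jsonl")
print(queue.size())       # 2 records across active files
print(queue.remaining())  # 2 unread

first = queue.next()
print(queue.getIndex())   # {'index': 1}

while not queue.isEmpty():
    queue.next()
```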
## Validation

The repository includes both correctness tests and direct benchmarks against the preserved backup implementation at `benchmarks/pistributer_bak.py`.

### Functional tests

The main regression test writes `300` `.jsonl` files, each with `30` distinct JSON rows, and then consumes everything through `next()`.

That covers `9000` records and checks:

- FIFO ordering
- empty-queue behavior
- remaining-count behavior
- reopen/index persistence
- rotated `.in_use` files plus newly appended data

Run the test suite with:

```bash
python -m unittest discover -s tests -v
```

### Staged benchmark

The staged benchmark compares:

- `benchmarks/pistributer_bak.py` with `300` `.txt` files × `30` rows
- `pistributer.py` with `300` `.jsonl` files × `30` rows

Latest measured result in this workspace:

- backup total: about `0.956s`
- current total: about `1.020s`
- current write ratio: about `1.18x` of backup
- current read ratio: about `1.02x` of backup
- current total ratio: about `1.07x` of backup

In this staged workload, the current `jsonl` rewrite is slightly slower than the backup `txt` path, which is expected because the structured driver pays extra serialization and validation overhead.

That overhead is currently accepted because the repository intentionally chose `.jsonl` as the public default. Beyond that known trade-off, the project should resist changes that make the hot path slower.

The important version-difference detail is that this benchmark compares the historical backup implementation against the current public package. It is not a pure `jsonl` vs `txt` format test in isolation.

After moving parent-directory preparation out of the file-driver `put()` hot path, a focused write-only microbenchmark in this workspace produced these averages over five runs of `20000` appends:

- backup `txt` string append: about `0.617s`
- current `txt` string append: about `0.586s`
- current `jsonl` string append: about `0.608s`
- current `jsonl` dictionary append: about `0.652s`

That means the current `jsonl` hot path is now much closer to the historical reference point when the caller passes pre-serialized strings, and the remaining extra cost mainly shows up when the caller asks the driver to serialize Python objects.
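Callers who want the cheaper string path can pre-serialize once and pass strings, since the driver writes string payloads as-is; a minimal sketch, with `metrics.jsonl` as an example channel:

```python
import json
from pistributer import Pistributer

record = {"event": "tick", "value": 42}

# Dictionary append: the driver calls json.dumps internally.
Pistributer.put("metrics.jsonl", record)

# String append: pre-serialized payloads skip driver-side serialization.
Pistributer.put("metrics.jsonl", json.dumps(record, separators=(",", ":")))
```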
Run it with:

```bash
python benchmarks/compare_versions.py
```

Detailed notes are in `BENCHMARKS.md`.

### Interleaved write/read stress benchmark

The repository also includes a heavier benchmark where writes and reads overlap:

- `300` writer threads
- `64` consumer threads
- `30` logical file assignments per writer
- `10` shared files per writer
- `300` rows per file assignment
- `2,700,000` total rows attempted
- `6010` physical files after shared-path collapse

Latest measured result in this workspace:

- backup `.txt` consumed `2,699,852 / 2,700,000`
- current `.jsonl` consumed `2,699,351 / 2,700,000`
- both modes produced `0` malformed JSON rows
- both file drivers failed full integrity under simultaneous write/read pressure

That is the practical reason the `sqlite` driver exists: once the workload shifts from staged local queueing to stronger overlapping write/read correctness, a transactional driver becomes the better fit.

Run it with:

```bash
python benchmarks/threaded_interleaved_compare.py
```

## Release

Release steps and commands are documented in `RELEASE.md`.

Project history is tracked in `CHANGELOG.md`.

## Contributing

Small, focused contributions are welcome. Start with `CONTRIBUTING.md` for development and review expectations.

## Notes

- Messages are stored as one JSON object per line in the default driver.
- Files rotate from `data` to `.in_use` so new appends stay separate from active consumption.
- `txt` and `jsonl` are strongest in local staged workloads where writes and reads are mostly separated.
- `sqlite` targets a different optimization goal: stronger correctness under concurrency.
- The project is intentionally lightweight and local-first.
pistributer-0.2.0.dist-info/RECORD
ADDED
@@ -0,0 +1,8 @@
pistributer.py,sha256=skDir-SAA9ZpoJCC69wrpZNSydLFCGkTDOEP0B08pUY,10969
pistributer_sqlite.py,sha256=DJ2KdM5rsLGvzad1-6pnTAiGGv0i-1miBCZIcH5youA,6641
pistributer_txt.py,sha256=icpLYhFxCjNiNwyWUMb0BSmTiUBpOCfYDW-51k6XxgQ,9081
pistributer-0.2.0.dist-info/licenses/LICENSE,sha256=p361mXIV7Nqk4udtvxN66X4YwsMRE29fBUWrQxH5DlY,9334
pistributer-0.2.0.dist-info/METADATA,sha256=fmtxsUyNlKO7GB9NMh_qnVIyPG8RIIZXrYlBFfWzZ4c,14481
pistributer-0.2.0.dist-info/WHEEL,sha256=aeYiig01lYGDzBgS8HxWXOg3uV61G9ijOsup-k9o1sk,91
pistributer-0.2.0.dist-info/top_level.txt,sha256=oF_7r2SHma_zOSFaEXtwgDMkF3kWDxn6kSBEN1lwBCM,47
pistributer-0.2.0.dist-info/RECORD,,
pistributer-0.2.0.dist-info/licenses/LICENSE
ADDED
@@ -0,0 +1,160 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and
configuration files.

"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object
code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form,
made available under the License, as indicated by a copyright notice that is
included in or attached to the work (an example is provided in the Appendix
below).

"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this
License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable copyright license to
reproduce, prepare Derivative Works of, publicly display, publicly perform,
sublicense, and distribute the Work and such Derivative Works in Source or
Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License,
each Contributor hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section) patent
license to make, have made, use, offer to sell, sell, import, and otherwise
transfer the Work, where such license applies only to those patent claims
licensable by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s) with the Work
to which such Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a lawsuit)
alleging that the Work or a Contribution incorporated within the Work
constitutes direct or contributory patent infringement, then any patent
licenses granted to You under this License for that Work shall terminate as of
the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or
Derivative Works thereof in any medium, with or without modifications, and in
Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy
of this License; and

(b) You must cause any modified files to carry prominent notices stating that
You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You
distribute, all copyright, patent, trademark, and attribution notices from
the Source form of the Work, excluding those notices that do not pertain to
any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then
any Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those
notices that do not pertain to any part of the Derivative Works, in at least
one of the following places: within a NOTICE text file distributed as part
of the Derivative Works; within the Source form or documentation, if
provided along with the Derivative Works; or, within a display generated by
the Derivative Works, if and wherever such third-party notices normally
appear. The contents of the NOTICE file are for informational purposes only
and do not modify the License. You may add Your own attribution notices
within Derivative Works that You distribute, alongside or as an addendum to
the NOTICE text from the Work, provided that such additional attribution
notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a
whole, provided Your use, reproduction, and distribution of the Work otherwise
complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any
Contribution intentionally submitted for inclusion in the Work by You to the
Licensor shall be under the terms and conditions of this License, without any
additional terms or conditions. Notwithstanding the above, nothing herein shall
supersede or modify the terms of any separate license agreement you may have
executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names,
trademarks, service marks, or product names of the Licensor, except as required
for reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in
writing, Licensor provides the Work (and each Contributor provides its
Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied, including, without limitation, any warranties
or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any risks
associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in
tort (including negligence), contract, or otherwise, unless required by
applicable law (such as deliberate and grossly negligent acts) or agreed to in
writing, shall any Contributor be liable to You for damages, including any
direct, indirect, special, incidental, or consequential damages of any
character arising as a result of this License or out of the use or inability to
use the Work (including but not limited to damages for loss of goodwill, work
stoppage, computer failure or malfunction, or any and all other commercial
damages or losses), even if such Contributor has been advised of the
possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or
Derivative Works thereof, You may choose to offer, and charge a fee for,
acceptance of support, warranty, indemnity, or other liability obligations
and/or rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if You agree
to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS
pistributer.py
ADDED
@@ -0,0 +1,348 @@
"""JSONL-first `pistributer` driver.

This module is the default public delivery of `pistributer`.

It keeps the original local file-queue model: append records to a data file,
rotate that file into `.in_use`, and track read progress in a sidecar index.

Use this driver when you want structured payloads and a small local queue API.
The file driver is best for staged workflows or single-writer-friendly usage.
It is not the strongest choice for heavy overlapping write and read contention.
The hot write path assumes the parent directory already exists.

Example:
    >>> from pistributer import Pistributer
    >>> Pistributer.put("events.jsonl", {"event": "start"})
    True
    >>> queue = Pistributer("events.jsonl")
    >>> isinstance(queue.next(), str)
    True
"""

from __future__ import annotations

import json
import os
from pathlib import Path

__all__ = ["Pistributer"]
__version__ = "0.2.0"


class Pistributer:
    """Queue interface for newline-delimited JSON files.

    The public method names intentionally preserve the historical API, including
    `isEmpty()` and `getIndex()`, so existing code keeps working.

    Example:
        >>> queue = Pistributer("events.jsonl")
        >>> queue.isEmpty()
        True
    """

    def __init__(self, path: str | Path):
        """Open a queue backed by a `.jsonl` channel file.

        Args:
            path: Path to the queue file. The path must end with `.jsonl`.

        Returns:
            None.

        Raises:
            ValueError: If `path` does not end with `.jsonl`.

        Example:
            >>> Pistributer("events.jsonl")
        """
        channel_path = self._normalize_channel_name(path)
        self.abspath = "" if os.sep not in channel_path else os.path.abspath(".")
        self.path = {
            "data": os.path.abspath(os.path.join(self.abspath, channel_path)),
            "index": os.path.abspath(os.path.join(self.abspath, f"{channel_path}.index")),
            "in_use": os.path.abspath(os.path.join(self.abspath, f"{channel_path}.in_use")),
        }
        self.q: list[str] = []
        self.__index = {"index": 0}
        self.__initialQueue()

    @staticmethod
    def new(target_path: str | Path, string, overwrite: bool = False, sep: str = "") -> bool:
        """Create a new queue file with one serialized record.

        Args:
            target_path: Destination `.jsonl` file.
            string: Record to write. Strings are written as-is; other values are
                serialized to compact JSON.
            overwrite: When `True`, allow replacing an existing file.
            sep: Optional trailing separator to append after the payload.

        Returns:
            True when the file is created successfully.

        Raises:
            ValueError: If `target_path` does not end with `.jsonl`.
            IsADirectoryError: If `target_path` points to a directory.
            FileExistsError: If the file already exists and `overwrite` is `False`.

        Example:
            >>> Pistributer.new("events.jsonl", {"event": "start"}, overwrite=True)
            True
        """
        file_path = Pistributer._normalize_channel_name(target_path)
        if os.path.isdir(file_path):
            raise IsADirectoryError(f'This is a folder, require a file to write "{file_path}"')
        if os.path.isfile(file_path) and overwrite is not True:
            raise FileExistsError(f'File exists "{file_path}"')

        payload = Pistributer._serialize_record(string)
        parent_dir = os.path.dirname(file_path)
        if parent_dir and not os.path.isdir(parent_dir):
            os.makedirs(parent_dir, exist_ok=True)

        with open(file_path, "w+", encoding="utf-8") as handle:
            handle.write(f"{payload}{sep}")
            handle.flush()

        return True

    @staticmethod
    def put(target_path: str | Path, string) -> bool:
        """Append one record to a `.jsonl` queue file.

        Args:
            target_path: Destination `.jsonl` file.
            string: Record to append. Strings are written as-is; other values are
                serialized to compact JSON.

        Returns:
            True when the record is appended successfully.

        Raises:
            ValueError: If `target_path` does not end with `.jsonl`.
            FileNotFoundError: If the parent directory does not already exist.

        Example:
            >>> Pistributer.put("events.jsonl", {"event": "finish"})
            True
        """
        file_path = Pistributer._normalize_channel_name(target_path)
        payload = Pistributer._serialize_record(string)

        with open(file_path, "a+", encoding="utf-8") as handle:
            handle.write(f"{payload}\n")
            handle.flush()

        return True

    def next(self) -> str:
        """Return the next unread record.

        Returns:
            The next stored line as a string.

        Raises:
            StopIteration: If the queue is empty.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> queue.next()
        """
        if self.isEmpty():
            raise StopIteration("Pistributer queue is empty")

        data = self.q[self.__index["index"]]
        self.increaseIndex()
        return data

    def isEmpty(self) -> bool:
        """Report whether the queue has unread records.

        This historical camelCase name is kept for compatibility.

        Returns:
            True when no unread records remain, otherwise False.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> queue.isEmpty()
            True
        """
        if self.__index["index"] >= len(self.q):
            if os.path.isfile(self.path["index"]):
                os.remove(self.path["index"])
            if os.path.isfile(self.path["in_use"]):
                os.remove(self.path["in_use"])

            if os.path.isfile(self.path["data"]):
                self.__initialQueue()
                return False

            return True

        return False

    def size(self) -> int:
        """Count records across the active data and `.in_use` files.

        Returns:
            The total number of stored records for the channel.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> isinstance(queue.size(), int)
            True
        """
        paths = self.path["data"], self.path["in_use"]
        size = 0
        for file_path in paths:
            if os.path.isfile(file_path):
                with open(file_path, encoding="utf-8") as handle:
                    for _ in handle:
                        size += 1
        return size

    def remaining(self) -> int:
        """Count unread records.

        Returns:
            The number of records that have not been consumed yet.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> isinstance(queue.remaining(), int)
            True
        """
        return max(0, self.size() - self.getIndex()["index"])

    def getIndex(self) -> dict[str, int]:
        """Return the persisted read index.

        Returns:
            A dictionary with one key, `index`, storing the consumed count.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> "index" in queue.getIndex()
            True
        """
        if os.path.isfile(self.path["index"]):
            with open(self.path["index"], "r+", encoding="utf-8") as json_file:
                try:
                    output = json.load(json_file)
                except Exception:
                    return {"index": 0}
                return output

        with open(self.path["index"], "w+", encoding="utf-8") as handle:
            json.dump({"index": 0}, handle)

        return {"index": 0}

    def updateIndex(self, count: int) -> None:
        """Persist a new read index value.

        Args:
            count: Desired consumed count. Negative values are clamped to zero.

        Returns:
            None.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> queue.updateIndex(0)
        """
        self.__index["index"] = max(0, count)
        with open(self.path["index"], "w+", encoding="utf-8") as handle:
            json.dump(self.__index, handle)

    def increaseIndex(self, count: int = 1) -> None:
        """Advance the persisted read index.

        Args:
            count: Number of consumed records to add.

        Returns:
            None.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> queue.increaseIndex()
        """
        self.updateIndex(self.__index["index"] + count)

    def decreaseIndex(self, count: int = 1) -> None:
        """Move the persisted read index backward.

        Args:
            count: Number of consumed records to subtract.

        Returns:
            None.

        Example:
            >>> queue = Pistributer("events.jsonl")
            >>> queue.decreaseIndex()
        """
        self.updateIndex(self.__index["index"] - count)

    def __initialQueue(self) -> None:
        self.__index = self.getIndex()

        if os.path.isfile(self.path["in_use"]):
            self.q = list(self.__readInUse())
            return

        if os.path.isfile(self.path["data"]):
            os.rename(self.path["data"], self.path["in_use"])
            self.q = list(self.__readInUse())
            self.updateIndex(0)
            return

        self.q = []
        if os.path.isfile(self.path["index"]):
            os.remove(self.path["index"])

    def __readInUse(self):
        if os.path.isfile(self.path["in_use"]):
            return [item for item in self.__read_file_by_line(self.path["in_use"]) if item != ""]
        return []

    def __read_file_by_line(self, file_path: str):
        if os.path.isfile(file_path):
            with open(file_path, "r", encoding="utf-8") as handle:
                return handle.read().split("\n")
        return []

    @staticmethod
    def _normalize_channel_name(path: str | Path) -> str:
        """Validate a `.jsonl` channel path.

        Args:
            path: Candidate file path.

        Returns:
            The normalized filesystem path string.

        Raises:
            ValueError: If the file name does not end with `.jsonl`.
        """
        file_path = os.fspath(path)
        if not file_path.endswith(".jsonl"):
            raise ValueError(f'Path must end with ".jsonl" > "{file_path}"')
        return file_path

    @staticmethod
    def _serialize_record(record) -> str:
        """Convert a record to the line format stored by this driver.

        Args:
            record: String or JSON-serializable value.

        Returns:
            A single-line string ready to append to the queue file.
        """
        if isinstance(record, str):
            return record
        return json.dumps(record, ensure_ascii=False, separators=(",", ":"))
pistributer_sqlite.py
ADDED
@@ -0,0 +1,210 @@
"""SQLite-backed `pistributer` driver.

This module is the integrity-first local queue in the project.

Use it when correctness matters more than the shortest append path.
It has a different public naming style for `is_empty()` because it was added as
the newer SQLite-specific driver, but the existing name is kept stable.

Example:
    >>> from pistributer_sqlite import PistributerSqlite
    >>> queue = PistributerSqlite("events.db")
    >>> queue.put("start")
    True
    >>> queue.close()
"""

from __future__ import annotations

import os
import sqlite3
from pathlib import Path

__all__ = ["PistributerSqlite"]


class PistributerSqlite:
    """Queue interface backed by a local SQLite database."""

    def __init__(self, path: str | Path):
        """Open or create a `.db` queue file.

        Args:
            path: Path to the SQLite database file. The path must end with `.db`.

        Returns:
            None.

        Raises:
            ValueError: If `path` does not end with `.db`.

        Example:
            >>> PistributerSqlite("events.db")
        """
        self.path = self._normalize_db_path(path)
        parent_dir = os.path.dirname(self.path)
        if parent_dir and not os.path.isdir(parent_dir):
            os.makedirs(parent_dir, exist_ok=True)
        self.connection = sqlite3.connect(self.path, timeout=30, isolation_level=None)
        self.connection.execute("PRAGMA journal_mode=WAL")
        self.connection.execute("PRAGMA synchronous=FULL")
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS pistributer_queue (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT NOT NULL, consumed INTEGER NOT NULL DEFAULT 0)"
        )
        self.connection.execute(
            "CREATE INDEX IF NOT EXISTS idx_pistributer_queue_consumed_id ON pistributer_queue(consumed, id)"
        )

    @staticmethod
    def new(target_path: str | Path, rows, overwrite: bool = False) -> bool:
        """Create a new SQLite queue and preload rows.

        Args:
            target_path: Destination `.db` file.
            rows: Iterable of payloads to insert.
            overwrite: When `True`, allow replacing an existing database file.

        Returns:
            True when the database is created and populated successfully.

        Raises:
            ValueError: If `target_path` does not end with `.db`.
            FileExistsError: If the file already exists and `overwrite` is `False`.

        Example:
            >>> PistributerSqlite.new("events.db", ["start"], overwrite=True)
            True
        """
        db_path = PistributerSqlite._normalize_db_path(target_path)
        if os.path.exists(db_path) and not overwrite:
            raise FileExistsError(f'File exists "{db_path}"')
        if os.path.exists(db_path) and overwrite:
            os.remove(db_path)
        queue = PistributerSqlite(db_path)
        queue.put_many(rows)
        queue.close()
        return True

    def put(self, payload: str) -> bool:
        """Append one payload to the queue.

        Args:
            payload: Value to store. It is converted to string before insert.

        Returns:
            True when the insert succeeds.

        Example:
            >>> queue = PistributerSqlite("events.db")
            >>> queue.put("finish")
            True
        """
        self.connection.execute("INSERT INTO pistributer_queue (payload) VALUES (?)", (str(payload),))
        return True

    def put_many(self, payloads) -> bool:
        """Append many payloads in one transaction.

        Args:
            payloads: Iterable of values to store.

        Returns:
            True when the batch insert succeeds.

        Example:
            >>> queue = PistributerSqlite("events.db")
            >>> queue.put_many(["a", "b"])
            True
        """
        with self.connection:
            self.connection.executemany(
                "INSERT INTO pistributer_queue (payload) VALUES (?)",
                [(str(payload),) for payload in payloads],
            )
        return True

    def next(self) -> str:
        """Return the next unread payload.

        Returns:
            The next stored payload as a string.

        Raises:
            StopIteration: If the queue is empty.

        Example:
            >>> queue = PistributerSqlite("events.db")
            >>> queue.next()
        """
        with self.connection:
            row = self.connection.execute(
                "SELECT id, payload FROM pistributer_queue WHERE consumed = 0 ORDER BY id LIMIT 1"
            ).fetchone()
            if row is None:
                raise StopIteration("PistributerSqlite queue is empty")
            self.connection.execute("UPDATE pistributer_queue SET consumed = 1 WHERE id = ?", (row[0],))
            return row[1]

    def is_empty(self) -> bool:
        """Report whether unread payloads remain.

        Returns:
            True when no unread rows remain, otherwise False.

        Example:
            >>> queue = PistributerSqlite("events.db")
            >>> queue.is_empty()
            True
        """
        row = self.connection.execute(
            "SELECT 1 FROM pistributer_queue WHERE consumed = 0 LIMIT 1"
        ).fetchone()
        return row is None

    def size(self) -> int:
        """Count all rows in the queue table.

        Returns:
            The total number of stored rows.
        """
        row = self.connection.execute("SELECT COUNT(*) FROM pistributer_queue").fetchone()
        return int(row[0]) if row else 0

    def remaining(self) -> int:
        """Count unread rows in the queue table.

        Returns:
            The number of rows that have not been consumed yet.
        """
        row = self.connection.execute("SELECT COUNT(*) FROM pistributer_queue WHERE consumed = 0").fetchone()
        return int(row[0]) if row else 0

    def close(self) -> None:
        """Close the SQLite connection.

        Returns:
            None.

        Example:
            >>> queue = PistributerSqlite("events.db")
            >>> queue.close()
        """
        self.connection.close()

    @staticmethod
    def _normalize_db_path(path: str | Path) -> str:
        """Validate a `.db` queue path.

        Args:
            path: Candidate database file path.

        Returns:
            The normalized filesystem path string.

        Raises:
            ValueError: If the file name does not end with `.db`.
        """
        file_path = os.fspath(path)
        if not file_path.endswith(".db"):
            raise ValueError(f'Path must end with ".db" > "{file_path}"')
        return file_path
pistributer_txt.py
ADDED
@@ -0,0 +1,281 @@
"""Text-first `pistributer` driver.

This module preserves the shortest file-backed queue path in the project.

Use it when you want raw text throughput and simple append-heavy local queueing.
Like the JSONL file driver, it is best for staged workflows or
single-writer-friendly usage. It is not designed for strong overlapping
writer and reader integrity.
The hot write path assumes the parent directory already exists.

Example:
    >>> from pistributer_txt import PistributerTxt
    >>> PistributerTxt.put("events.txt", "start")
    True
"""

from __future__ import annotations

import json
import os
from pathlib import Path

__all__ = ["PistributerTxt"]


class PistributerTxt:
    """Queue interface for plain-text channel files.

    The public method names intentionally preserve the historical file-driver
    API, including `isEmpty()` and `getIndex()`.
    """

    def __init__(self, path: str | Path):
        """Open a queue backed by a `.txt` file.

        Args:
            path: Path to the queue file. The path must end with `.txt`.

        Returns:
            None.

        Raises:
            ValueError: If `path` does not end with `.txt`.

        Example:
            >>> PistributerTxt("events.txt")
        """
        channel_path = self._normalize_channel_name(path)
        self.abspath = "" if os.sep not in channel_path else os.path.abspath(".")
        self.path = {
            "data": os.path.abspath(os.path.join(self.abspath, channel_path)),
            "index": os.path.abspath(os.path.join(self.abspath, f"{channel_path}.index")),
            "in_use": os.path.abspath(os.path.join(self.abspath, f"{channel_path}.in_use")),
        }
        self.q: list[str] = []
        self.__index = {"index": 0}
        self.__initial_queue()

    @staticmethod
    def new(target_path: str | Path, string, overwrite: bool = False, sep: str = "") -> bool:
        """Create a new `.txt` queue file with one initial line.

        Args:
            target_path: Destination `.txt` file.
            string: Text payload to write.
            overwrite: When `True`, allow replacing an existing file.
            sep: Optional trailing separator to append.

        Returns:
            True when the file is created successfully.

        Raises:
            ValueError: If `target_path` does not end with `.txt`.
            IsADirectoryError: If `target_path` points to a directory.
            FileExistsError: If the file already exists and `overwrite` is `False`.

        Example:
            >>> PistributerTxt.new("events.txt", "start", overwrite=True)
            True
        """
        file_path = PistributerTxt._normalize_channel_name(target_path)
        if os.path.isdir(file_path):
            raise IsADirectoryError(f'This is a folder, require a file to write "{file_path}"')
        if os.path.isfile(file_path) and overwrite is not True:
            raise FileExistsError(f'File exists "{file_path}"')

        parent_dir = os.path.dirname(file_path)
        if parent_dir and not os.path.isdir(parent_dir):
            os.makedirs(parent_dir, exist_ok=True)

        with open(file_path, "w+", encoding="utf-8") as handle:
            handle.write(f"{string}{sep}")
            handle.flush()
        return True

    @staticmethod
    def put(target_path: str | Path, string) -> bool:
        """Append one text line to a `.txt` queue file.

        Args:
            target_path: Destination `.txt` file.
            string: Text payload to append.

        Returns:
            True when the line is appended successfully.

        Raises:
            ValueError: If `target_path` does not end with `.txt`.
            FileNotFoundError: If the parent directory does not already exist.

        Example:
            >>> PistributerTxt.put("events.txt", "finish")
            True
        """
        file_path = PistributerTxt._normalize_channel_name(target_path)

        with open(file_path, "a+", encoding="utf-8") as handle:
            handle.write(f"{string}\n")
            handle.flush()
        return True

    def next(self) -> str:
        """Return the next unread text line.

        Returns:
            The next stored line.

        Raises:
            StopIteration: If the queue is empty.

        Example:
            >>> queue = PistributerTxt("events.txt")
            >>> queue.next()
        """
        if self.isEmpty():
            raise StopIteration("PistributerTxt queue is empty")
        data = self.q[self.__index["index"]]
        self.increaseIndex()
        return data

    def isEmpty(self) -> bool:
        """Report whether the queue has unread text lines.

        This historical camelCase name is kept for compatibility.

        Returns:
            True when no unread records remain, otherwise False.

        Example:
            >>> queue = PistributerTxt("events.txt")
            >>> queue.isEmpty()
            True
        """
        if self.__index["index"] >= len(self.q):
            if os.path.isfile(self.path["index"]):
                os.remove(self.path["index"])
            if os.path.isfile(self.path["in_use"]):
                os.remove(self.path["in_use"])
            if os.path.isfile(self.path["data"]):
                self.__initial_queue()
                return False
            return True
        return False

    def size(self) -> int:
        """Count records across the active data and `.in_use` files.

        Returns:
            The total number of stored text lines for the channel.
        """
        size = 0
        for file_path in (self.path["data"], self.path["in_use"]):
            if os.path.isfile(file_path):
                with open(file_path, encoding="utf-8") as handle:
                    for _ in handle:
                        size += 1
        return size

    def remaining(self) -> int:
        """Count unread text lines.

        Returns:
            The number of records that have not been consumed yet.
        """
        return max(0, self.size() - self.getIndex()["index"])

    def getIndex(self) -> dict[str, int]:
        """Return the persisted read index.

        Returns:
            A dictionary with one key, `index`, storing the consumed count.
        """
        if os.path.isfile(self.path["index"]):
            with open(self.path["index"], "r+", encoding="utf-8") as json_file:
                try:
                    output = json.load(json_file)
                except Exception:
                    return {"index": 0}
                return output

        with open(self.path["index"], "w+", encoding="utf-8") as handle:
            json.dump({"index": 0}, handle)
        return {"index": 0}

    def updateIndex(self, count: int) -> None:
        """Persist a new read index value.

        Args:
            count: Desired consumed count. Negative values are clamped to zero.

        Returns:
            None.
        """
        self.__index["index"] = max(0, count)
        with open(self.path["index"], "w+", encoding="utf-8") as handle:
            json.dump(self.__index, handle)

    def increaseIndex(self, count: int = 1) -> None:
        """Advance the persisted read index.

        Args:
            count: Number of consumed records to add.

        Returns:
            None.
        """
        self.updateIndex(self.__index["index"] + count)

    def decreaseIndex(self, count: int = 1) -> None:
        """Move the persisted read index backward.

        Args:
            count: Number of consumed records to subtract.

        Returns:
            None.
        """
        self.updateIndex(self.__index["index"] - count)

    def __initial_queue(self) -> None:
        self.__index = self.getIndex()
        if os.path.isfile(self.path["in_use"]):
            self.q = list(self.__read_in_use())
            return
        if os.path.isfile(self.path["data"]):
            os.rename(self.path["data"], self.path["in_use"])
            self.q = list(self.__read_in_use())
            self.updateIndex(0)
            return
        self.q = []
        if os.path.isfile(self.path["index"]):
            os.remove(self.path["index"])

    def __read_in_use(self):
        if os.path.isfile(self.path["in_use"]):
            return [item for item in self.__read_file_by_line(self.path["in_use"]) if item != ""]
        return []

    def __read_file_by_line(self, file_path: str):
        if os.path.isfile(file_path):
            with open(file_path, "r", encoding="utf-8") as handle:
                return handle.read().split("\n")
        return []

    @staticmethod
    def _normalize_channel_name(path: str | Path) -> str:
        """Validate a `.txt` channel path.

        Args:
            path: Candidate file path.

        Returns:
            The normalized filesystem path string.

        Raises:
            ValueError: If the file name does not end with `.txt`.
        """
        file_path = os.fspath(path)
        if not file_path.endswith(".txt"):
            raise ValueError(f'Path must end with ".txt" > "{file_path}"')
        return file_path