pywrkr 0.9.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
pywrkr-0.9.2/LICENSE ADDED
MIT License

Copyright (c) 2024 pywrkr contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
pywrkr-0.9.2/PKG-INFO ADDED
Metadata-Version: 2.4
Name: pywrkr
Version: 0.9.2
Summary: Python HTTP benchmarking tool with extended statistics, inspired by wrk and Apache ab
Author: pywrkr contributors
License-Expression: MIT
Project-URL: Homepage, https://github.com/kurok/pywrkr
Classifier: Development Status :: 4 - Beta
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Internet :: WWW/HTTP
Classifier: Topic :: System :: Benchmark
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiohttp>=3.8
Provides-Extra: tui
Requires-Dist: rich>=13.0; extra == "tui"
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20; extra == "otel"
Requires-Dist: opentelemetry-sdk>=1.20; extra == "otel"
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20; extra == "otel"
Provides-Extra: all
Requires-Dist: rich>=13.0; extra == "all"
Requires-Dist: opentelemetry-api>=1.20; extra == "all"
Requires-Dist: opentelemetry-sdk>=1.20; extra == "all"
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: aiohttp>=3.8; extra == "dev"
Dynamic: license-file

# pywrkr

A Python HTTP benchmarking tool inspired by [wrk](https://github.com/wg/wrk) and [Apache ab](https://httpd.apache.org/docs/current/programs/ab.html), with extended statistics and virtual user simulation.

## Features

- **Five benchmarking modes:**
  - **Duration mode** (`-d`): wrk-style, run for N seconds
  - **Request-count mode** (`-n`): ab-style, send exactly N requests
  - **User simulation mode** (`-u`): simulate virtual users with ramp-up and think time
  - **Rate limiting mode** (`--rate`): send requests at a controlled, constant rate (with optional ramp)
  - **Autofind mode** (`--autofind`): automatically ramp load to find maximum sustainable capacity
- **Detailed latency statistics:** min/max/mean/median/stdev, percentiles (p50-p99.99), histogram, and an ab-style "percentage served within" table
- **Throughput timeline:** requests/sec over time as an ASCII bar chart
- **Multiple output formats:** terminal, CSV (`-e`), JSON (`--json`), HTML (`-w`)
- **HTTP features:** keep-alive toggle, Basic auth (`-A`), cookies (`-C`), custom headers (`-H`), POST body (`-b`/`-p`), content-length verification (`-l`)
- **Cache-busting** (`-R`): append a unique random query parameter to each request URL
- **Graceful shutdown:** handles SIGINT/SIGTERM cleanly
- **Live progress display** with requests/sec, error count, and active user count
- **SLO-aware thresholds** (`--threshold`): pass/fail criteria like `p95 < 300ms` or `error_rate < 1%`, with a non-zero exit code on breach — CI-ready
- **Native observability export:** OpenTelemetry (`--otel-endpoint`) and Prometheus remote write (`--prom-remote-write`)
- **Test metadata tags** (`--tag`): attach environment, build, and region labels to metrics and JSON output

## Requirements

- Python 3.10+
- aiohttp

```bash
pip install aiohttp
```

## Quick Start

```bash
# Basic 10-second benchmark with 10 connections
python pywrkr.py http://localhost:8080/

# 30 seconds, 200 concurrent connections
python pywrkr.py -c 200 -d 30 http://localhost:8080/api

# Send exactly 1000 requests with 50 connections (ab-style)
python pywrkr.py -n 1000 -c 50 http://localhost:8080/

# Simulate 1500 users for 5 minutes with 30s ramp-up and 1s think time
python pywrkr.py -u 1500 -d 300 --ramp-up 30 --think-time 1.0 http://localhost:8080/

# Cache-busting mode (bypass HTTP caches with a random query param)
python pywrkr.py -R -c 100 -d 10 http://localhost:8080/

# Constant rate: 500 requests/sec for 30 seconds
python pywrkr.py --rate 500 -d 30 http://localhost:8080/

# Rate ramp: linearly increase from 100 to 1000 req/s over 60 seconds
python pywrkr.py --rate 100 --rate-ramp 1000 -d 60 http://localhost:8080/

# Autofind: automatically find the max sustainable load
python pywrkr.py --autofind --max-error-rate 1 --max-p95 5.0 http://localhost:8080/

# SLO thresholds: exit code 2 if any threshold is breached (CI-friendly)
python pywrkr.py --threshold "p95 < 300ms" --threshold "error_rate < 1%" \
    -c 100 -d 30 http://localhost:8080/

# Export metrics to an OpenTelemetry collector
python pywrkr.py --otel-endpoint http://localhost:4318 \
    --tag environment=staging --tag build=v1.2.3 \
    -c 100 -d 30 http://localhost:8080/

# Push metrics to a Prometheus Pushgateway
python pywrkr.py --prom-remote-write http://pushgateway:9091 \
    --tag region=us-east-1 --tag service=api \
    -c 100 -d 30 http://localhost:8080/

# POST with auth, cookies, and JSON output
python pywrkr.py -n 500 -c 20 -m POST -b '{"key":"val"}' \
    -H "Content-Type: application/json" \
    -A user:pass -C "session=abc123" \
    --json results.json http://localhost:8080/api
```

## Usage

```
usage: pywrkr.py [-h] [-c CONNECTIONS] [-d DURATION] [-n NUM_REQUESTS]
                 [-t THREADS] [-m METHOD] [-H NAME:VALUE] [-b BODY]
                 [-p POST_FILE] [-A user:pass] [-C COOKIE] [-k]
                 [--no-keepalive] [-l] [-v VERBOSITY] [--timeout TIMEOUT]
                 [-e FILE] [-w] [--json FILE] [-u USERS] [--ramp-up RAMP_UP]
                 [--think-time THINK_TIME] [--think-jitter THINK_JITTER]
                 [-R] [--rate RATE] [--rate-ramp RATE_RAMP]
                 [--scenario FILE] [--latency-breakdown]
                 [--threshold EXPR] [--tag KEY=VALUE]
                 [--otel-endpoint URL] [--prom-remote-write URL]
                 [--autofind] [--max-error-rate PCT] [--max-p95 SECONDS]
                 [--step-duration SECONDS] [--start-users N] [--max-users N]
                 [--step-multiplier X]
                 url
```

### Options

| Flag | Long | Description |
|------|------|-------------|
| `url` | | Target URL to benchmark (required) |
| `-c` | `--connections` | Number of concurrent connections (default: 10) |
| `-d` | `--duration` | Test duration in seconds (default: 10) |
| `-n` | `--num-requests` | Total number of requests (ab-style, overrides `-d`) |
| `-t` | `--threads` | Number of worker groups (default: 4) |
| `-m` | `--method` | HTTP method: GET, POST, PUT, DELETE, etc. (default: GET) |
| `-H` | `--header` | Custom header, e.g. `-H "Content-Type: application/json"` (repeatable) |
| `-b` | `--body` | Request body string |
| `-p` | `--post-file` | File containing POST body data |
| `-A` | `--basic-auth` | Basic HTTP auth as `user:pass` |
| `-C` | `--cookie` | Cookie as `name=value` (repeatable) |
| `-k` | `--keepalive` | Enable keep-alive (default: on) |
| | `--no-keepalive` | Disable keep-alive |
| `-l` | `--verify-length` | Verify response Content-Length consistency |
| `-v` | `--verbosity` | 0=quiet, 2=warnings, 3=status codes, 4=full detail |
| | `--timeout` | Request timeout in seconds (default: 30) |
| `-e` | `--csv` | Write CSV percentile table to file |
| `-w` | `--html` | Print results as an HTML table |
| | `--json` | Write JSON results to file |
| `-R` | `--random-param` | Append a unique `_cb=<uuid>` query param per request (cache-buster) |
| | `--rate` | Target requests per second (constant rate mode) |
| | `--rate-ramp` | Linearly ramp rate from `--rate` to this value over the duration |
| | `--scenario` | Path to a JSON/YAML scenario file for scripted multi-step requests |
| | `--latency-breakdown` | Show detailed per-phase latency breakdown (DNS, TCP, TLS, TTFB, transfer) |
| | `--threshold` / `--th` | SLO threshold (repeatable), e.g. `--threshold "p95 < 300ms"`. Exit code 2 on breach |
| | `--tag` | Metadata tag as `key=value` (repeatable), e.g. `--tag environment=staging` |
| | `--otel-endpoint` | Export metrics to an OpenTelemetry collector (OTLP/HTTP) |
| | `--prom-remote-write` | Push metrics to a Prometheus Pushgateway endpoint |

### User Simulation Options

| Flag | Long | Description |
|------|------|-------------|
| `-u` | `--users` | Number of virtual users (enables simulation mode) |
| | `--ramp-up` | Seconds over which to gradually start all users (default: 0) |
| | `--think-time` | Mean pause between requests per user, in seconds (default: 1.0) |
| | `--think-jitter` | Think-time jitter factor, 0-1 (default: 0.5, i.e. +/-50%) |

## Output

### Terminal Output

```
======================================================================
BENCHMARK RESULTS
======================================================================
Mode:            300 virtual users, 120.0s
Duration:        124.15s
Virtual Users:   300
Ramp-up:         10.00s
Think Time:      1.00s (+/-50%)
Avg Reqs/User:   50.8
Keep-Alive:      yes
Total Requests:  15,229
Total Errors:    1
Requests/sec:    122.66
Transfer/sec:    119.34MB/s
Total Transfer:  14.46GB

======================================================================
LATENCY STATISTICS
======================================================================
Min:     449.00ms
Max:     4.85s
Mean:    961.00ms
Median:  870.00ms
Stdev:   520.00ms

Latency Percentiles:
  p50   870.00ms
  p75   1.10s
  p90   1.56s
  p95   2.98s
  p99   4.85s
```

### JSON Output

Use `--json results.json` to save structured results:

```json
{
  "duration_sec": 124.15,
  "connections": 300,
  "total_requests": 15229,
  "total_errors": 1,
  "requests_per_sec": 122.66,
  "transfer_per_sec_bytes": 125120000.0,
  "total_bytes": 15533200000,
  "latency": {
    "min": 0.449,
    "max": 4.85,
    "mean": 0.961,
    "median": 0.87,
    "stdev": 0.52
  },
  "percentiles": {
    "p50": 0.87,
    "p75": 1.1,
    "p90": 1.56,
    "p95": 2.98,
    "p99": 4.85
  }
}
```
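
For post-processing, the schema above is all you need. Here is a small sketch (the function name and budget value are illustrative, not part of pywrkr) that consumes a parsed results dict and enforces a latency budget:

```python
def check_results(results: dict, p95_budget: float = 5.0) -> bool:
    """Return True if the run stayed under the p95 budget (latencies are in seconds)."""
    error_rate = 100 * results["total_errors"] / max(results["total_requests"], 1)
    p95 = results["percentiles"]["p95"]
    print(f"rps={results['requests_per_sec']:.1f} "
          f"p95={p95 * 1000:.0f}ms errors={error_rate:.2f}%")
    return p95 < p95_budget

# In practice: import json; results = json.load(open("results.json"))
sample = {
    "total_requests": 15229, "total_errors": 1,
    "requests_per_sec": 122.66, "percentiles": {"p95": 2.98},
}
ok = check_results(sample)
```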

## Benchmarking Modes

### Duration Mode (wrk-style)

Runs for a fixed duration with a pool of persistent connections:

```bash
python pywrkr.py -c 100 -d 30 http://localhost:8080/
```

### Request-Count Mode (ab-style)

Sends exactly N requests, then stops:

```bash
python pywrkr.py -n 10000 -c 50 http://localhost:8080/
```

### User Simulation Mode

Simulates realistic user behavior with configurable think time and gradual ramp-up:

```bash
python pywrkr.py -u 500 -d 300 --ramp-up 30 --think-time 1.0 http://localhost:8080/
```

Each virtual user:

1. Sends a request
2. Waits for the response
3. Pauses for the think time (with jitter)
4. Repeats until the duration expires

The ramp-up period introduces users gradually to avoid a thundering herd at startup.
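
The loop above can be sketched with asyncio. This is an illustration of the described behavior, not pywrkr's implementation; the even stagger across the ramp window and the uniform +/- jitter formula are assumptions:

```python
import asyncio
import random
import time

async def virtual_user(uid, send, *, users, duration, ramp_up, think, jitter):
    # Ramp-up: stagger user starts evenly across the ramp window (assumed policy)
    await asyncio.sleep(ramp_up * uid / max(users, 1))
    deadline = time.monotonic() + duration
    done = 0
    while time.monotonic() < deadline:
        await send()                                           # steps 1-2: request + response
        done += 1
        pause = think * (1 + random.uniform(-jitter, jitter))  # step 3: jittered think time
        await asyncio.sleep(max(pause, 0.0))                   # step 4: loop until deadline
    return done

async def run(users=5):
    async def fake_request():  # stand-in for the real HTTP call
        await asyncio.sleep(0.01)
    counts = await asyncio.gather(*(
        virtual_user(i, fake_request, users=users, duration=0.3,
                     ramp_up=0.1, think=0.05, jitter=0.5)
        for i in range(users)))
    return sum(counts)

total = asyncio.run(run())
print(total, "requests sent")
```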

### Cache-Busting Mode

Add `-R` in any mode to bypass HTTP caches by appending a unique query parameter to each request:

```bash
python pywrkr.py -R -u 300 -d 120 https://example.com/
# Each request hits: https://example.com/?_cb=<unique-uuid>
```

This is useful for testing origin-server performance without CDN/proxy cache interference.
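
The same trick is easy to reproduce in other tooling. A sketch (the exact UUID formatting pywrkr uses is not specified beyond `_cb=<uuid>`, so the hex form here is an assumption):

```python
import uuid
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def cache_bust(url: str) -> str:
    """Append a unique _cb=<uuid> query parameter, as -R does per request."""
    parts = urlparse(url)
    # Keep any existing query parameters and add the cache-buster at the end
    query = parse_qsl(parts.query) + [("_cb", uuid.uuid4().hex)]
    return urlunparse(parts._replace(query=urlencode(query)))

print(cache_bust("https://example.com/search?q=pywrkr"))
```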

### Rate Limiting Mode

Instead of sending requests as fast as possible, `--rate` sends them at a controlled, constant rate. This is essential for SLA testing and for pinpointing a server's breaking point.

```bash
# Constant 500 req/s for 30 seconds
python pywrkr.py --rate 500 -d 30 http://localhost:8080/

# Rate with a request count: 50 req/s, stop after 200 requests
python pywrkr.py --rate 50 -n 200 http://localhost:8080/

# Rate limiting with multiple connections (the rate is global, shared across all workers)
python pywrkr.py --rate 100 -c 10 -d 60 http://localhost:8080/

# Combine with user simulation (applies when the think time is 0)
python pywrkr.py --rate 200 -u 50 -d 120 --think-time 0 http://localhost:8080/
```

**Rate ramp** (`--rate-ramp`): linearly increase the rate over the test duration, which is useful for finding the breaking point automatically:

```bash
# Start at 100 req/s, linearly increase to 1000 req/s over 60 seconds
python pywrkr.py --rate 100 --rate-ramp 1000 -d 60 http://localhost:8080/
```

At `--rate 500`, the tool sends one request every 2ms. If the server cannot keep up (latency exceeds the interval), requests queue up -- this is expected, and useful for identifying saturation points.
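
That pacing can be modeled with a simple interval-based limiter. The sketch below is illustrative only: pywrkr's limiter is asynchronous and shared globally across workers, which this blocking single-threaded version glosses over:

```python
import time

class RateLimiter:
    """Interval-based pacer: one slot every 1/rate seconds (illustrative sketch)."""

    def __init__(self, rate: float):
        self.interval = 1.0 / rate
        self.next_at = time.monotonic()
        self.waits = 0  # times a caller had to slow down ("Rate Limit Waits")

    def acquire(self) -> None:
        now = time.monotonic()
        if now < self.next_at:
            self.waits += 1
            time.sleep(self.next_at - now)
        # Advance the schedule by one interval, never moving it backwards
        self.next_at = max(self.next_at, now) + self.interval

limiter = RateLimiter(rate=500)  # 500 req/s -> one slot every 2ms
start = time.monotonic()
for _ in range(50):
    limiter.acquire()
elapsed = time.monotonic() - start
print(f"50 slots in {elapsed * 1000:.0f}ms")  # roughly 49 intervals, ~98ms
```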

**Comparison with the default "max throughput" mode:**

| Mode | Use Case |
|------|----------|
| Default (no `--rate`) | Find maximum throughput; stress test |
| `--rate N` | SLA validation; controlled load; latency-under-load testing |
| `--rate N --rate-ramp M` | Find breaking point; gradual load increase |

Results include "Target RPS" vs "Actual RPS" and a "Rate Limit Waits" count (how many times the limiter had to slow down a worker).

### Latency Breakdown

Use `--latency-breakdown` to see where each request spends its time. It splits latency into individual phases using aiohttp's tracing infrastructure:

```bash
# Show the latency breakdown for each phase
python pywrkr.py --latency-breakdown -n 1000 -c 50 https://example.com/

# Combine with JSON output
python pywrkr.py --latency-breakdown --json results.json -d 30 https://example.com/
```

Output includes averages with min/max/p50/p95 for each phase:

```
======================================================================
LATENCY BREAKDOWN (averages)
======================================================================
DNS Lookup:      2.15ms (min=1.20ms, max=5.30ms, p50=2.00ms, p95=4.10ms)
TCP Connect:    12.34ms (min=10.00ms, max=18.50ms, p50=12.00ms, p95=16.20ms)
TLS Handshake:  45.67ms (min=40.00ms, max=55.00ms, p50=45.00ms, p95=52.00ms)
TTFB:           89.12ms (min=60.00ms, max=150.00ms, p50=85.00ms, p95=130.00ms)
Transfer:       34.56ms (min=20.00ms, max=80.00ms, p50=30.00ms, p95=65.00ms)
Total:         183.84ms (min=131.20ms, max=308.80ms, p50=174.00ms, p95=267.30ms)

New Connections:    50
Reused Connections: 950
```

**Phases:**

- **DNS Lookup** -- time to resolve the hostname via DNS
- **TCP Connect** -- time to establish the TCP connection
- **TLS Handshake** -- time for TLS negotiation (HTTPS only)
- **TTFB** -- time to first byte, from sending the request to receiving the first response byte
- **Transfer** -- time to read the full response body

**Connection reuse:** when keep-alive is enabled (the default), most requests reuse existing connections. For reused connections, the DNS/Connect/TLS phases are zero. The breakdown reports how many connections were new vs. reused.

When `--json` is used, the breakdown data is included in the JSON output under the `latency_breakdown` key.

### Auto-Ramping / Step Load (Autofind)

Automatically increases load until the server's capacity is found. The `--autofind` flag starts with a small number of users, runs short tests at increasing load levels, and then uses binary search to pinpoint the maximum sustainable load.

```bash
# Find max capacity with the default thresholds (1% error rate, 5s p95)
python pywrkr.py --autofind https://example.com/

# Custom thresholds: 0.5% error rate, 2s p95, 15s steps
python pywrkr.py --autofind --max-error-rate 0.5 --max-p95 2.0 \
    --step-duration 15 https://example.com/

# Start from 50 users, up to 5000, multiplying by 1.5x each step
python pywrkr.py --autofind --start-users 50 --max-users 5000 \
    --step-multiplier 1.5 https://example.com/

# Save detailed results to JSON
python pywrkr.py --autofind --json autofind_results.json https://example.com/

# With cache-busting and a custom think time
python pywrkr.py --autofind -R --think-time 0.5 https://example.com/
```

**How it works:**

1. Start with `--start-users` (default: 10) virtual users
2. Run a short test (`--step-duration`, default: 30s) at that load
3. Check whether the error rate exceeds `--max-error-rate` or the p95 latency exceeds `--max-p95`
4. If OK, multiply the user count by `--step-multiplier` (default: 2x) and repeat
5. If a threshold is exceeded, binary search between the last good and first bad user counts
6. Report the maximum sustainable load with a summary table
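
Condensed into code, steps 1-6 look roughly like this. An illustrative sketch: the binary-search stopping tolerance is an assumption, and pywrkr's real step runner also records RPS and latency per step:

```python
def autofind(run_step, start_users=10, max_users=10_000, multiplier=2.0):
    """Sketch of the ramp-then-binary-search strategy described above.

    run_step(users) -> True when the error rate and p95 stayed under thresholds.
    """
    last_good, first_bad = 0, None
    users = start_users
    while users <= max_users:              # phase 1: multiplicative ramp
        if run_step(users):
            last_good = users
            users = int(users * multiplier)
        else:
            first_bad = users
            break
    if first_bad is None:
        return last_good                   # never failed up to max_users
    lo, hi = last_good, first_bad          # phase 2: binary search
    while hi - lo > max(1, lo // 20):      # stop within ~5% (assumed tolerance)
        mid = (lo + hi) // 2
        if run_step(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy server that degrades above 280 concurrent users
print(autofind(lambda users: users <= 280))  # -> 280
```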

**Example output:**

```
============================================================
AUTOFIND RESULTS
============================================================
Maximum sustainable load: 280 users

Step Results:
 Users |   RPS |   p50 |   p95 |   p99 | Errors | Status
    10 |   9.8 | 120ms | 180ms | 200ms |   0.0% | OK
    20 |  19.5 | 125ms | 190ms | 220ms |   0.0% | OK
    40 |  38.2 | 130ms | 250ms | 300ms |   0.0% | OK
    80 |  75.1 | 180ms | 400ms | 600ms |   0.0% | OK
   160 | 140.2 | 350ms |  1.2s |  2.1s |   0.0% | OK
   320 | 135.5 |  2.1s |  8.5s | 15.2s |   5.2% | FAIL
   240 | 138.1 | 800ms |  3.2s |  5.1s |   0.8% | OK
   280 | 136.8 |  1.1s |  4.8s |  7.2s |   0.9% | OK
   300 | 135.2 |  1.5s |  5.5s |  9.1s |   1.2% | FAIL
============================================================
```

**Autofind options:**

| Flag | Description |
|------|-------------|
| `--autofind` | Enable auto-ramping mode |
| `--max-error-rate` | Stop when the error rate exceeds this percentage (default: 1.0) |
| `--max-p95` | Stop when the p95 latency exceeds this many seconds (default: 5.0) |
| `--step-duration` | Duration of each step test in seconds (default: 30) |
| `--start-users` | Starting number of users (default: 10) |
| `--max-users` | Maximum number of users to try (default: 10000) |
| `--step-multiplier` | Multiply the user count by this each step (default: 2.0) |

### SLO-Aware Thresholds

Define pass/fail criteria for your benchmarks. If any threshold is breached, pywrkr exits with code 2 — making it usable in CI/CD pipelines.

```bash
# Single threshold
python pywrkr.py --threshold "p95 < 300ms" -c 100 -d 30 http://localhost:8080/

# Multiple thresholds
python pywrkr.py \
    --th "p95 < 300ms" \
    --th "p99 < 1s" \
    --th "error_rate < 1%" \
    --th "rps > 100" \
    -c 100 -d 30 http://localhost:8080/
```

**Supported metrics:**

- `p50`, `p75`, `p90`, `p95`, `p99` — latency percentiles
- `avg_latency`, `max_latency`, `min_latency` — latency aggregates
- `error_rate` — error percentage (e.g., `error_rate < 1%` or `error_rate < 1`)
- `rps` — requests per second

**Operators:** `<`, `>`, `<=`, `>=`

**Time units:** `ms` (milliseconds), `s` (seconds), `us` (microseconds). Seconds are assumed if no unit is given.
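
For reference, expressions in this grammar can be parsed with one short regular expression. This is an illustrative parser, not pywrkr's actual code:

```python
import re

# metric, operator, numeric value, optional unit (e.g. "p95 < 300ms")
UNITS = {"us": 1e-6, "ms": 1e-3, "s": 1.0, "%": 1.0, "": 1.0}
PATTERN = re.compile(r"^\s*(\w+)\s*(<=|>=|<|>)\s*([\d.]+)\s*(us|ms|s|%)?\s*$")

def parse_threshold(expr: str) -> tuple[str, str, float]:
    match = PATTERN.match(expr)
    if match is None:
        raise ValueError(f"bad threshold: {expr!r}")
    metric, op, value, unit = match.groups()
    # Normalize time values to seconds; bare numbers pass through unchanged
    return metric, op, float(value) * UNITS[unit or ""]

print(parse_threshold("p95 < 300ms"))
print(parse_threshold("error_rate < 1%"))
print(parse_threshold("rps > 100"))
```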

**Example output:**
```
======================================================================
SLO THRESHOLDS
======================================================================
p95 < 300ms        Actual: 245.00ms   PASS
p99 < 1s           Actual: 820.00ms   PASS
error_rate < 1%    Actual: 0.00%      PASS
rps > 100          Actual: 523.45     PASS

Result: ALL THRESHOLDS PASSED
```

**CI usage:**
```bash
python pywrkr.py --th "p95 < 500ms" --th "error_rate < 0.1%" \
    -c 50 -d 60 http://api.staging/health || echo "Performance regression detected!"
```

### Observability Export

Export benchmark metrics directly to your observability stack.

#### OpenTelemetry

```bash
pip install pywrkr[otel]
python pywrkr.py --otel-endpoint http://localhost:4318 \
    --tag environment=staging --tag build=$(git rev-parse --short HEAD) \
    -c 100 -d 30 http://localhost:8080/
```

Exports gauges and counters: `pywrkr.requests.total`, `pywrkr.errors.total`, `pywrkr.requests_per_sec`, `pywrkr.latency.p50/p95/p99/mean/max`, `pywrkr.transfer_bytes_per_sec`, and `pywrkr.duration_sec`.

#### Prometheus Remote Write (Pushgateway)

```bash
python pywrkr.py --prom-remote-write http://pushgateway:9091 \
    --tag region=us-east-1 --tag service=api \
    -c 100 -d 30 http://localhost:8080/
```

Uses only the stdlib `urllib` — no extra dependencies. Metrics are pushed in the Prometheus text format to `{endpoint}/metrics/job/pywrkr`.
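
The push itself is just an HTTP request carrying plain text. A sketch of the idea (the metric names and label rendering here are assumptions, not pywrkr's exact output):

```python
import urllib.request

def prom_payload(metrics: dict, tags: dict) -> bytes:
    """Render metrics in the Prometheus text exposition format (sketch)."""
    labels = ",".join(f'{k}="{v}"' for k, v in sorted(tags.items()))
    lines = [f"pywrkr_{name}{{{labels}}} {value}" if labels
             else f"pywrkr_{name} {value}"
             for name, value in metrics.items()]
    return ("\n".join(lines) + "\n").encode()

payload = prom_payload(
    {"requests_total": 15229, "requests_per_sec": 122.66},
    {"region": "us-east-1", "service": "api"},
)
print(payload.decode())

# The push is a plain PUT to the job URL (not executed here,
# since it needs a live Pushgateway):
req = urllib.request.Request(
    "http://pushgateway:9091/metrics/job/pywrkr", data=payload, method="PUT")
```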

#### Test Metadata Tags

Tags are attached to all exported metrics and included in the JSON output:

```bash
python pywrkr.py --tag environment=production --tag build=v2.1.0 \
    --tag region=eu-west-1 --tag test_name=api_stress \
    --json results.json -c 100 -d 30 http://localhost:8080/
```
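
Repeated `--tag` flags reduce to simple `key=value` pairs. A hypothetical helper showing the expected shape:

```python
def parse_tags(args: list[str]) -> dict[str, str]:
    """Parse repeated --tag key=value arguments (illustrative helper)."""
    tags = {}
    for item in args:
        # Split on the first "=", so values may themselves contain "="
        key, sep, value = item.partition("=")
        if not sep or not key:
            raise ValueError(f"tag must be key=value, got {item!r}")
        tags[key] = value
    return tags

print(parse_tags(["environment=production", "build=v2.1.0", "region=eu-west-1"]))
```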

## Installation

```bash
# Basic (aiohttp only)
pip install pywrkr

# With the live TUI dashboard
pip install pywrkr[tui]

# With OpenTelemetry export
pip install pywrkr[otel]

# Everything
pip install pywrkr[all]
```

## Testing

```bash
# Run all tests
python -m pytest test_pywrkr.py -v

# Run a specific test class
python -m pytest test_pywrkr.py::TestMakeUrl -v
python -m pytest test_pywrkr.py::TestBenchmarkIntegration -v
python -m pytest test_pywrkr.py::TestAutofindIntegration -v
```

The test suite includes unit and integration tests covering:

- Formatting helpers, percentiles, histogram, timeline, and CSV/JSON/HTML output
- Integration tests against a real aiohttp test server (duration mode, request-count mode, POST, auth, cookies, content-length verification, keep-alive, cache-buster)
- User simulation integration tests (think time, ramp-up, jitter, error handling, output formats)
- Autofind integration tests (healthy server, error endpoint, threshold enforcement, binary search, JSON output, summary table)

## License

MIT