aiolocust 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,76 @@
1
+ Metadata-Version: 2.3
2
+ Name: aiolocust
3
+ Version: 0.1.0
4
+ Summary: A 2026 reimagining of Locust, built on asyncio and free-threaded Python
5
+ Author: Lars Holmberg
6
+ Author-email: Lars Holmberg <lars.holmberg@redshirt.se>
7
+ Requires-Dist: aiohttp>=3.13.3
8
+ Requires-Dist: rich>=14.2.0
9
+ Requires-Dist: uvloop>=0.22.1 ; sys_platform == 'linux'
10
+ Requires-Dist: uvloop>=0.22.1 ; sys_platform == 'darwin'
11
+ Requires-Python: >=3.14
12
+ Description-Content-Type: text/markdown
13
+
14
+ # AIOLocust
15
+
16
+ This is a 2026 reimagining of [Locust](https://github.com/locustio/locust/). We may merge the projects at some point, but for now it is a separate library.
17
+
18
+ ## Simpler and more consistent syntax than Locust, leveraging asyncio
19
+
20
+ ```python
21
+ import asyncio
22
+ from aiolocust import LocustClientSession
23
+
24
+ async def user(client: LocustClientSession):
25
+     async with client.get("http://example.com/") as resp:
26
+         assert resp.status == 200
27
+     await asyncio.sleep(0.1)
28
+ ```
29
+
30
+ Locust was created in 2011, and while it has gone through several major overhauls, it still has a fair amount of legacy-style code and has accumulated a lot of non-core functionality that makes it very hard to maintain and improve. It is 10,000+ lines of code using a mix of procedural, object-oriented and functional programming, with several confusing abstractions.
31
+
32
+ AIOLocust is built to be smaller in scope while capturing the lessons learned from Locust. It uses modern, explicitly asynchronous Python code (instead of gevent/monkey patching).
33
+
34
+ It also further emphasizes the "It's just Python" approach. If you, for example, want to take precise control of the ramp-up and ramp-down of a test, you shouldn't need to read the documentation; you should only need to know how to write code. We'll still provide the option of using prebuilt features too, of course, but we'll try not to "build our users into a box".
35
+
36
+ ## High performance
37
+
38
+ aiolocust is more performant than "regular" Locust partly because it has a smaller footprint and less complexity, but its two main gains come from:
39
+
40
+ ### Using [asyncio](https://docs.python.org/3/library/asyncio.html) together with [aiohttp](https://docs.aiohttp.org/en/stable/)
41
+
42
+ aiolocust's performance is *much* better than HttpUser (based on Requests), and even slightly better than FastHttpUser (based on geventhttpclient). Because it uses async programming instead of monkey patching, it works better on modern Python and is more future-proof. Specifically, it allows your locustfile to easily use asyncio libraries (like Playwright), which are becoming more and more common.
43
+
44
+ ### Leveraging Python in its [freethreading/no-GIL](https://docs.python.org/3/howto/free-threading-python.html) form
45
+
46
+ This means that you don't need to launch one process per core! Even if your load tests are doing some heavy computations, they are unlikely to impact each other, as one thread will not block Python from running another one.
47
+
48
+ In fact, you can still use synchronous libraries (without gevent monkey patching); just increase the number of threads.
49
+
50
+ Users/threads can also communicate easily with each other, as they share memory, unlike the old Locust implementation, where you were forced to use ZeroMQ messaging between master and worker processes.
51
+
52
+ ### Some actual numbers
53
+
54
+ For example, aiolocust can do almost 70k requests/s on a MacBook Pro M3. It is also much faster to start than regular Locust, and has no issues spawning a lot of new users in a short interval.
55
+
56
+ ```text
57
+ Name ┃ Count ┃ Failures ┃ Avg ┃ Max ┃ Rate
58
+ ━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━
59
+ http://localhost:8080/ │ 2032741 │ 0 (0.0%) │ 2.8ms │ 35.8ms │ 67659.22/s
60
+ ────────────────────────┼─────────┼──────────┼─────────┼────────────┼────────────
61
+ Total │ 2032741 │ 0 (0.0%) │ 2.8ms │ 35.8ms │ 67659.22/s
62
+ ```
63
+
64
+ ## Things this doesn't have compared to Locust (at least not yet)
65
+
66
+ * A WebUI
67
+ * Support for distributed tests
68
+ * Polish. This is not ready for production use yet.
69
+
70
+ ## How to run the example
71
+
72
+ ```text
73
+ git clone https://github.com/cyberw/aiolocust.git
74
+ cd aiolocust
75
+ uv run python locustfile.py
76
+ ```
@@ -0,0 +1,63 @@
1
+ # AIOLocust
2
+
3
+ This is a 2026 reimagining of [Locust](https://github.com/locustio/locust/). We may merge the projects at some point, but for now it is a separate library.
4
+
5
+ ## Simpler and more consistent syntax than Locust, leveraging asyncio
6
+
7
+ ```python
8
+ import asyncio
9
+ from aiolocust import LocustClientSession
10
+
11
+ async def user(client: LocustClientSession):
12
+     async with client.get("http://example.com/") as resp:
13
+         assert resp.status == 200
14
+     await asyncio.sleep(0.1)
15
+ ```
16
+
17
+ Locust was created in 2011, and while it has gone through several major overhauls, it still has a fair amount of legacy-style code and has accumulated a lot of non-core functionality that makes it very hard to maintain and improve. It is 10,000+ lines of code using a mix of procedural, object-oriented and functional programming, with several confusing abstractions.
18
+
19
+ AIOLocust is built to be smaller in scope while capturing the lessons learned from Locust. It uses modern, explicitly asynchronous Python code (instead of gevent/monkey patching).
20
+
21
+ It also further emphasizes the "It's just Python" approach. If you, for example, want to take precise control of the ramp-up and ramp-down of a test, you shouldn't need to read the documentation; you should only need to know how to write code. We'll still provide the option of using prebuilt features too, of course, but we'll try not to "build our users into a box".
22
+
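+ For example, a custom ramp-up can be written directly in asyncio. The sketch below is illustrative only: it assumes nothing beyond the `LocustClientSession` import shown above, and the `spawn_users` helper and its pacing values are not part of the aiolocust API.
+
+ ```python
+ # Hypothetical ramp-up sketch: start `per_second` users every second,
+ # hold the full load, then signal all users to stop.
+ import asyncio
+
+ from aiolocust import LocustClientSession
+
+
+ async def user(client: LocustClientSession):
+     async with client.get("http://example.com/") as resp:
+         assert resp.status == 200
+
+
+ async def run_user(stop: asyncio.Event):
+     async with LocustClientSession() as client:
+         while not stop.is_set():
+             await user(client)
+
+
+ async def spawn_users(total: int, per_second: int, hold_seconds: float):
+     stop = asyncio.Event()
+     async with asyncio.TaskGroup() as tg:
+         for started in range(total):
+             tg.create_task(run_user(stop))
+             if (started + 1) % per_second == 0:
+                 await asyncio.sleep(1)  # ramp up: add per_second users each second
+         await asyncio.sleep(hold_seconds)  # hold the full load
+         stop.set()  # ramp down: each user exits after its current iteration
+
+
+ # asyncio.run(spawn_users(total=100, per_second=10, hold_seconds=30))
+ ```
+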
23
+ ## High performance
24
+
25
+ aiolocust is more performant than "regular" Locust partly because it has a smaller footprint and less complexity, but its two main gains come from:
26
+
27
+ ### Using [asyncio](https://docs.python.org/3/library/asyncio.html) together with [aiohttp](https://docs.aiohttp.org/en/stable/)
28
+
29
+ aiolocust's performance is *much* better than HttpUser (based on Requests), and even slightly better than FastHttpUser (based on geventhttpclient). Because it uses async programming instead of monkey patching, it works better on modern Python and is more future-proof. Specifically, it allows your locustfile to easily use asyncio libraries (like Playwright), which are becoming more and more common.
30
+
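+ As a rough sketch of what that looks like (Playwright and its browsers are assumed to be installed separately; the URLs and page flow are made up), a single user coroutine can mix aiolocust HTTP calls with any other asyncio library:
+
+ ```python
+ from aiolocust import LocustClientSession
+ from playwright.async_api import async_playwright
+
+
+ async def user(client: LocustClientSession):
+     # plain HTTP traffic, measured by aiolocust
+     async with client.get("http://localhost:8080/") as resp:
+         assert resp.status == 200
+     # browser traffic through an asyncio library, awaited from the same coroutine
+     async with async_playwright() as p:
+         browser = await p.chromium.launch()
+         page = await browser.new_page()
+         await page.goto("http://localhost:8080/")
+         await browser.close()
+ ```
+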
31
+ ### Leveraging Python in its [freethreading/no-GIL](https://docs.python.org/3/howto/free-threading-python.html) form
32
+
33
+ This means that you don't need to launch one process per core! Even if your load tests are doing some heavy computations, they are unlikely to impact each other, as one thread will not block Python from running another one.
34
+
35
+ In fact, you can still use synchronous libraries (without gevent monkey patching); just increase the number of threads.
36
+
37
+ Users/threads can also communicate easily with each other, as they share memory, unlike the old Locust implementation, where you were forced to use ZeroMQ messaging between master and worker processes.
38
+
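+ As a minimal sketch (the counter, the lock and the endpoint are illustrative, not part of aiolocust), users can coordinate through ordinary module-level state and plain threading primitives:
+
+ ```python
+ import threading
+
+ from aiolocust import LocustClientSession
+
+ signed_up = 0  # shared by every user, across all threads and event loops
+ signup_lock = threading.Lock()
+
+
+ async def user(client: LocustClientSession):
+     global signed_up
+     with signup_lock:  # hold the lock only around the shared update, never across an await
+         signed_up += 1
+         my_id = signed_up
+     async with client.post("http://localhost:8080/signup", json={"user": my_id}) as resp:
+         assert resp.status == 200
+ ```
+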
39
+ ### Some actual numbers
40
+
41
+ For example, aiolocust can do almost 70k requests/s on a MacBook Pro M3. It is also much faster to start than regular Locust, and has no issues spawning a lot of new users in a short interval.
42
+
43
+ ```text
44
+ Name ┃ Count ┃ Failures ┃ Avg ┃ Max ┃ Rate
45
+ ━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━
46
+ http://localhost:8080/ │ 2032741 │ 0 (0.0%) │ 2.8ms │ 35.8ms │ 67659.22/s
47
+ ────────────────────────┼─────────┼──────────┼─────────┼────────────┼────────────
48
+ Total │ 2032741 │ 0 (0.0%) │ 2.8ms │ 35.8ms │ 67659.22/s
49
+ ```
50
+
51
+ ## Things this doesn't have compared to Locust (at least not yet)
52
+
53
+ * A WebUI
54
+ * Support for distributed tests
55
+ * Polish. This is not ready for production use yet.
56
+
57
+ ## How to run the example
58
+
59
+ ```text
60
+ git clone https://github.com/cyberw/aiolocust.git
61
+ cd aiolocust
62
+ uv run python locustfile.py
63
+ ```
@@ -0,0 +1,39 @@
1
+ [project]
2
+ name = "aiolocust"
3
+ version = "0.1.0"
4
+ description = "A 2026 reimagining of Locust, built on asyncio and free-threaded Python"
5
+ readme = "README.md"
6
+ authors = [
7
+ { name = "Lars Holmberg", email = "lars.holmberg@redshirt.se" }
8
+ ]
9
+ requires-python = ">=3.14"
10
+ dependencies = [
11
+ "aiohttp>=3.13.3",
12
+ "rich>=14.2.0",
13
+ "uvloop>=0.22.1 ; sys_platform == 'linux'",
14
+ "uvloop>=0.22.1 ; sys_platform == 'darwin'",
15
+ ]
16
+
17
+ [tool.uv]
18
+ package = true
19
+
20
+ [build-system]
21
+ requires = ["uv_build>=0.9.7,<0.10.0"]
22
+ build-backend = "uv_build"
23
+
24
+ [tool.ruff]
25
+ target-version = "py314"
26
+ line-length = 110
27
+ # lint.ignore = ["E402", "E501", "E713", "E731"]
28
+ lint.select = ["E", "F", "W", "UP", "I", "ARG001"]
29
+ lint.dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?)|resp)$"
30
+
31
+ [dependency-groups]
32
+ dev = [
33
+ "pyright>=1.1.408",
34
+ "pytest>=9.0.2",
35
+ "pytest-asyncio>=1.3.0",
36
+ "pytest-httpserver>=1.1.3",
37
+ "ruff>=0.14.10",
38
+ ]
39
+
@@ -0,0 +1,239 @@
1
+ import asyncio
2
+ import os
3
+ import signal
4
+ import sys
5
+ import threading
6
+ import time
7
+ import warnings
8
+ from collections.abc import Callable
9
+ from typing import cast
10
+
11
+ from aiohttp import ClientConnectorError, ClientResponse, ClientResponseError, ClientSession
12
+ from aiohttp.client import _RequestContextManager
13
+ from rich.console import Console
14
+ from rich.table import Table
15
+
16
+ from . import event_handlers
17
+ from .datatypes import Request, RequestEntry
18
+
19
+ # uvloop is faster than the default pure-python asyncio event loop
20
+ # so if it is installed, we're going to be using that one
21
+ try:
22
+ import uvloop
23
+
24
+ new_event_loop = uvloop.new_event_loop
25
+ except ImportError:
26
+ new_event_loop = None
27
+
28
+ # We're going to inherit from ClientSession, even though that is discouraged,
29
+ # because we don't want the performance hit and typing issues of wrapping every method
30
+ warnings.filterwarnings(
31
+ action="ignore",
32
+ message=".*Inheritance .* from ClientSession is discouraged.*",
33
+ category=DeprecationWarning,
34
+ module="aiolocust",
35
+ )
36
+
37
+ if sys._is_gil_enabled():
38
+ raise RuntimeError("aiolocust requires a freethreading Python build")
39
+
40
+
41
+ running = True
42
+ console = Console()
43
+
44
+
45
+ original_sigint_handler = signal.getsignal(signal.SIGINT)
46
+
47
+
48
+ def signal_handler(_sig, _frame):
49
+ print("Stopping...")
50
+ global running
51
+ running = False
52
+ # restore the original handler so a second Ctrl-C stops everything immediately
53
+ signal.signal(signal.SIGINT, original_sigint_handler)
54
+
55
+
56
+ signal.signal(signal.SIGINT, signal_handler)
57
+
58
+
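+ # Print an aggregated stats table every two seconds; also stops the whole run once 30 seconds have elapsed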
59
+ async def stats_printer():
60
+ global running
61
+ start_time = time.perf_counter()
62
+ while running:
63
+ await asyncio.sleep(2)
64
+ requests_copy: dict[str, RequestEntry] = event_handlers.requests.copy() # avoid mutation during print
65
+ elapsed = time.perf_counter() - start_time
66
+ total_ttlb = 0
67
+ total_max_ttlb = 0
68
+ total_count = 0
69
+ total_errorcount = 0
70
+ table = Table(show_edge=False)
71
+ table.add_column("Name", max_width=30)
72
+ table.add_column("Count", justify="right")
73
+ table.add_column("Failures", justify="right")
74
+ table.add_column("Avg", justify="right")
75
+ table.add_column("Max", justify="right")
76
+ table.add_column("Rate", justify="right")
77
+
78
+ for url, re in requests_copy.items():
79
+ table.add_row(
80
+ url,
81
+ str(re.count),
82
+ f"{re.errorcount} ({100 * re.errorcount / re.count:2.1f}%)",
83
+ f"{1000 * re.sum_ttlb / re.count:4.1f}ms",
84
+ f"{1000 * re.max_ttlb:4.1f}ms",
85
+ f"{re.count / elapsed:.2f}/s",
86
+ )
87
+ total_ttlb += re.sum_ttlb
88
+ total_max_ttlb = max(total_max_ttlb, re.max_ttlb)
89
+ total_count += re.count
90
+ total_errorcount += re.errorcount
91
+ table.add_section()
92
+ table.add_row(
93
+ "Total",
94
+ str(total_count),
95
+ f"{total_errorcount} ({100 * total_errorcount / total_count:2.1f}%)",
96
+ f"{1000 * total_ttlb / total_count:4.1f}ms",
97
+ f"{1000 * total_max_ttlb:4.1f}ms",
98
+ f"{total_count / elapsed:.2f}/s",
99
+ )
100
+ print()
101
+ console.print(table)
102
+
103
+ if elapsed > 30:
104
+ running = False
105
+
106
+
107
+ class LocustResponse(ClientResponse):
108
+ error: Exception | bool | None = None
109
+
110
+ def __init__(self, *args, **kwargs):
111
+ raise NotImplementedError() # use wrap_response
112
+
113
+ @classmethod
114
+ def wrap_response(cls, resp: ClientResponse) -> LocustResponse:
115
+ new = cast(LocustResponse, resp)
116
+ new.error = None
117
+ return new
118
+
119
+
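+ # Times each request (time to first/last byte) and reports a Request to the request handler, even when the request raises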
120
+ class LocustRequestContextManager(_RequestContextManager):
121
+ def __init__(self, request_handler: Callable, *args, **kwargs):
122
+ super().__init__(*args, **kwargs)
123
+ # slightly hacky way to get the URL, but passing it explicitly would be a mess
124
+ # and it is only used for connection errors, where the exception doesn't contain the URL
125
+ self.str_or_url = args[0]._coro.cr_frame.f_locals["str_or_url"]
126
+ self.request_handler = request_handler
127
+ self._resp: LocustResponse # type: ignore
128
+
129
+ async def __aenter__(self) -> LocustResponse:
130
+ self.start_time = time.perf_counter()
131
+ try:
132
+ await super().__aenter__()
133
+ except ClientConnectorError as e:
134
+ elapsed = self.ttlb = time.perf_counter() - self.start_time
135
+ if request_info := getattr(e, "request_info", None):
136
+ url = request_info.url
137
+ else:
138
+ url = self.str_or_url
139
+ self.request_handler(Request(url, elapsed, elapsed, e))
140
+ raise
141
+ except ClientResponseError as e:
142
+ elapsed = self.ttlb = time.perf_counter() - self.start_time
143
+ self.request_handler(Request(str(e.request_info.url), elapsed, elapsed, e))
144
+ raise
145
+ else:
146
+ self.url = super()._resp.url
147
+ self.ttfb = time.perf_counter() - self.start_time  # time to first byte (response headers received)
148
+ await self._resp.read()  # read the body now so time to last byte can be measured
149
+ self.ttlb = time.perf_counter() - self.start_time
150
+ return LocustResponse.wrap_response(self._resp)
151
+
152
+ async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
153
+ await super().__aexit__(exc_type, exc_val, exc_tb)
154
+ if self._resp.error is None: # no explicit value set in with-block
155
+ self._resp.error = exc_val or self._resp.status >= 400
156
+ self.request_handler(
157
+ Request(
158
+ str(self.url),
159
+ self.ttfb,
160
+ self.ttlb,
161
+ self._resp.error,
162
+ )
163
+ )
164
+
165
+
166
+ class LocustClientSession(ClientSession):
167
+ iteration = 0
168
+
169
+ def __init__(self, base_url=None, request_handler: Callable | None = None, **kwargs):
170
+ super().__init__(base_url=base_url, **kwargs)
171
+ self.request_handler = request_handler or event_handlers.request
172
+
173
+ # explicitly declare this to get the correct return type and enter session
174
+ async def __aenter__(self) -> LocustClientSession:
175
+ return self
176
+
177
+ def get(self, url, **kwargs) -> LocustRequestContextManager:
178
+ return LocustRequestContextManager(self.request_handler, super().get(url, **kwargs))
179
+
180
+ def post(self, url, **kwargs) -> LocustRequestContextManager:
181
+ return LocustRequestContextManager(self.request_handler, super().post(url, **kwargs))
182
+
183
+
184
+ async def user_loop(user):
185
+ async with LocustClientSession() as client:
186
+ while running:
187
+ try:
188
+ await user(client)
189
+ except (ClientResponseError, AssertionError):
190
+ pass
191
+ client.iteration += 1
192
+
193
+
194
+ async def user_runner(user, count, printer):
195
+ event_handlers.requests = {}
196
+ async with asyncio.TaskGroup() as tg:
197
+ if printer:
198
+ tg.create_task(stats_printer())
199
+ for _ in range(count):
200
+ tg.create_task(user_loop(user))
201
+
202
+
203
+ def thread_worker(user, count, printer):
204
+ return asyncio.run(user_runner(user, count, printer), loop_factory=new_event_loop)
205
+
206
+
207
+ def distribute_evenly(total, num_buckets):
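+ # Split total into num_buckets near-equal parts, e.g. distribute_evenly(10, 4) -> [3, 3, 2, 2]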
208
+ # Calculate the base amount for every bucket
209
+ base = total // num_buckets
210
+ # Calculate how many buckets need an extra +1
211
+ remainder = total % num_buckets
212
+ # Create the list: add 1 to the first 'remainder' buckets
213
+ return [base + 1 if i < remainder else base for i in range(num_buckets)]
214
+
215
+
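+ # Spread the users over a number of threads, each thread running its own asyncio event loop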
216
+ async def main(user: Callable, user_count: int, event_loops: int | None = None):
217
+ if event_loops is None:
218
+ if cpu_count := os.cpu_count():
219
+ # for heavy calculations this may need to be increased,
220
+ # but for I/O bound tasks 1/2 of CPU cores seems to be the most efficient
221
+ event_loops = max(cpu_count // 2, 1)
222
+ else:
223
+ event_loops = 1
224
+
225
+ threads = []
226
+ for i in distribute_evenly(user_count, event_loops):
227
+ t = threading.Thread(
228
+ target=thread_worker,
229
+ args=(
230
+ user,
231
+ i,
232
+ not threads, # first thread prints stats
233
+ ),
234
+ )
235
+ threads.append(t)
236
+ t.start()
237
+
238
+ for t in threads:
239
+ t.join()
@@ -0,0 +1,18 @@
1
+ from dataclasses import dataclass
2
+
3
+
4
+ @dataclass(slots=True)
5
+ class Request:
6
+ url: str
7
+ ttfb: float  # time to first byte, in seconds
8
+ ttlb: float  # time to last byte, in seconds
9
+ error: Exception | bool | None
10
+
11
+
12
+ @dataclass(slots=True)
13
+ class RequestEntry:
14
+ count: int
15
+ errorcount: int
16
+ sum_ttfb: float
17
+ sum_ttlb: float
18
+ max_ttlb: float
@@ -0,0 +1,16 @@
1
+ from .datatypes import Request, RequestEntry
2
+
3
+ requests: dict[str, RequestEntry] = {}  # keyed by URL; shared by all user threads
4
+
5
+
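+ # Default request handler: aggregates per-URL statistics that stats_printer() reads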
6
+ def request(req: Request):
7
+ if req.url not in requests:
8
+ requests[req.url] = RequestEntry(1, 1 if req.error else 0, req.ttfb, req.ttlb, req.ttlb)
9
+ else:
10
+ re = requests[req.url]
11
+ re.count += 1
12
+ if req.error:
13
+ re.errorcount += 1
14
+ re.sum_ttfb += req.ttfb
15
+ re.sum_ttlb += req.ttlb
16
+ re.max_ttlb = max(re.max_ttlb, req.ttlb)