uringmachine 0.23.1 → 0.24.0
This diff shows the content of publicly available package versions as released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their public registry.
- checksums.yaml +4 -4
- data/.github/workflows/test.yml +1 -1
- data/CHANGELOG.md +8 -0
- data/Gemfile +1 -1
- data/TODO.md +52 -12
- data/benchmark/bm_io_pipe.rb +43 -1
- data/benchmark/bm_io_socketpair.rb +32 -2
- data/benchmark/bm_mutex_io.rb +47 -5
- data/benchmark/chart_bm_io_pipe_x.png +0 -0
- data/benchmark/common.rb +161 -17
- data/benchmark/http_parse.rb +9 -9
- data/benchmark/http_server_accept_queue.rb +104 -0
- data/benchmark/http_server_multi_accept.rb +93 -0
- data/benchmark/http_server_multi_ractor.rb +99 -0
- data/benchmark/http_server_single_thread.rb +80 -0
- data/benchmark/ips_io_pipe.rb +146 -0
- data/docs/design/buffer_pool.md +183 -0
- data/docs/um_api.md +91 -0
- data/examples/fiber_scheduler_file_io.rb +34 -0
- data/examples/fiber_scheduler_file_io_async.rb +33 -0
- data/ext/um/um.c +65 -48
- data/ext/um/um.h +11 -1
- data/ext/um/um_class.c +54 -11
- data/ext/um/um_sidecar.c +106 -0
- data/ext/um/um_stream.c +31 -0
- data/ext/um/um_stream_class.c +14 -0
- data/grant-2025/interim-report.md +130 -0
- data/grant-2025/journal.md +166 -2
- data/grant-2025/tasks.md +27 -17
- data/lib/uringmachine/fiber_scheduler.rb +35 -27
- data/lib/uringmachine/version.rb +1 -1
- data/lib/uringmachine.rb +4 -6
- data/test/helper.rb +8 -3
- data/test/test_fiber.rb +16 -0
- data/test/test_fiber_scheduler.rb +184 -72
- data/test/test_stream.rb +16 -0
- data/test/test_um.rb +94 -24
- metadata +14 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 258d087be861df468fe7b4cf1628fe2669f095c0f347db65a5b5d63b31d34421
+  data.tar.gz: 12d340eb8c71147557af11dbbeb2b961ec163ecdc4f28c8688d19a3b20e232ec
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e22dd92400e845b2b6931f10d96b5e926b413d72cc7e2c80df355f188b1989774fc963cbea32b9ca3373da403efc685ad3806e7a9965459295973db38f922d78
+  data.tar.gz: e7fa4a7728f64306d2219225d41ab697e17bdd76274015c13ad16d9b2f14d1d6e961896e62c03ded86607f8b59e908ccac708b470a07be6f4aadac11349db034
data/.github/workflows/test.yml
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,3 +1,11 @@
+# 0.24.0 2026-01-30
+
+- Add `Stream.resp_encode_cmd`
+- Add sidecar mode
+- Add test mode, remove special handling of OP_SCHEDULE in um_switch, do it only
+  in test mode
+- Improve fiber scheduler error handling, add tests for I/O errors
+
 # 0.23.1 2025-12-16
 
 - Add `MSG_NOSIGNAL` to default flags for `#sendv` and `#send_bundle`
data/Gemfile
CHANGED
data/TODO.md
CHANGED
@@ -1,6 +1,20 @@
 ## immediate
 
-
+- Fix all futex values (Queue, Mutex) to be aligned
+
+## Sidecar thread
+
+The sidecar thread is an auxiliary thread that is used to wait for CQEs. It
+calls `io_uring_wait_cqe` (or an equivalent lower-level interface) in a loop, and
+each time a CQE is available, it signals this to the primary UringMachine
+thread (using a futex).
+
+The primary UringMachine thread runs fibers from the runqueue. When the runqueue
+is exhausted, it performs an `io_uring_submit` for unsubmitted ops. It then waits
+for the futex to become signalled (non-zero), and then processes all available
+completions.
+
+## Buffer rings - automatic management
 
 ```ruby
 # completely hands off
@@ -10,35 +24,61 @@ machine.read_each(fd) { |str| ... }
 machine.read_each(fd, io_buffer: true) { |iobuff, len| ... }
 ```
 
-##
+## Balancing I/O with the runqueue
 
-
+- In some cases where there are many entries in the runqueue, this can
+  negatively affect latency. In some cases, this can also lead to I/O
+  starvation: if the runqueue is never empty, then SQEs are not submitted and
+  CQEs are not processed.
+- So we want to limit the number of consecutive fiber switches before processing
+  I/O.
+- Some possible approaches:
 
-
-
-
-
-
+  1. limit consecutive switches with a parameter
+  2. limit consecutive switches relative to the runqueue size and/or the amount
+     of pending SQEs
+  3. an adaptive algorithm that occasionally measures the time between I/O
+     processing iterations, and adjusts the consecutive switches limit?
 
-
-
-```
+- We also want to devise some benchmark that measures throughput / latency with
+  different settings, in a situation with very high concurrency.
 
 ## useful concurrency tools
 
 - debounce
 
 ```ruby
-debouncer =
+debouncer = machine.debounce { }
 ```
 
+- read multiple files
+
+```ruby
+# with a block
+machine.read_files(*fns) { |fn, data| ... }
 
+# without a block
+machine.read_files(*fns) #=> { fn1:, fn2:, fn3:, ...}
+```
 
 ## polyvalent select
 
 - select on multiple queues (ala Go)
 - select on mixture of queues and fds
 
+(see also simplified op management below)
+
+## simplified op management
+
+Op lifecycle management can be much, much simpler:
+
+- make all ops heap-allocated
+- clear up state transitions:
+
+  - kernel-side state: unsubmitted, submitted, completed, done (for multishot ops)
+  - app-side state: unsubmitted, submitted, ...
+
+
 ## ops
 
 - [ ] multishot timeout
data/benchmark/bm_io_pipe.rb
CHANGED
@@ -2,7 +2,7 @@
 
 require_relative './common'
 
-GROUPS =
+GROUPS = 48
 ITERATIONS = 10000
 
 SIZE = 1024
@@ -52,6 +52,18 @@ class UMBenchmark
     end
   end
 
+  def do_baseline_um(machine)
+    GROUPS.times do
+      r, w = UM.pipe
+      ITERATIONS.times {
+        machine.write(w, DATA)
+        machine.read(r, +'', SIZE)
+      }
+      machine.close(w)
+      machine.close(r)
+    end
+  end
+
   def do_scheduler(scheduler, ios)
     GROUPS.times do
       r, w = IO.pipe
@@ -68,6 +80,22 @@ class UMBenchmark
     end
   end
 
+  def do_scheduler_x(div, scheduler, ios)
+    (GROUPS/div).times do
+      r, w = IO.pipe
+      r.sync = true
+      w.sync = true
+      Fiber.schedule do
+        ITERATIONS.times { w.write(DATA) }
+        w.close
+      end
+      Fiber.schedule do
+        ITERATIONS.times { r.readpartial(SIZE) }
+        r.close
+      end
+    end
+  end
+
   def do_um(machine, fibers, fds)
     GROUPS.times do
       r, w = UM.pipe
@@ -81,4 +109,18 @@ class UMBenchmark
       end
     end
   end
+
+  def do_um_x(div, machine, fibers, fds)
+    (GROUPS/div).times do
+      r, w = UM.pipe
+      fibers << machine.spin do
+        ITERATIONS.times { machine.write(w, DATA) }
+        machine.close_async(w)
+      end
+      fibers << machine.spin do
+        ITERATIONS.times { machine.read(r, +'', SIZE) }
+        machine.close_async(r)
+      end
+    end
+  end
 end
data/benchmark/bm_io_socketpair.rb
CHANGED
@@ -3,10 +3,10 @@
 require_relative './common'
 require 'socket'
 
-GROUPS =
+GROUPS = 48
 ITERATIONS = 10000
 
-SIZE =
+SIZE = 1 << 14
 DATA = '*' * SIZE
 
 class UMBenchmark
@@ -55,6 +55,22 @@ class UMBenchmark
     end
   end
 
+  def do_scheduler_x(div, scheduler, ios)
+    (GROUPS/div).times do
+      r, w = Socket.socketpair(:AF_UNIX, :SOCK_STREAM, 0)
+      r.sync = true
+      w.sync = true
+      Fiber.schedule do
+        ITERATIONS.times { w.send(DATA, 0) }
+        w.close
+      end
+      Fiber.schedule do
+        ITERATIONS.times { r.recv(SIZE) }
+        r.close
+      end
+    end
+  end
+
   def do_um(machine, fibers, fds)
     GROUPS.times do
       r, w = UM.socketpair(UM::AF_UNIX, UM::SOCK_STREAM, 0)
@@ -68,4 +84,18 @@ class UMBenchmark
       end
     end
   end
+
+  def do_um_x(div, machine, fibers, fds)
+    (GROUPS/div).times do
+      r, w = UM.socketpair(UM::AF_UNIX, UM::SOCK_STREAM, 0)
+      fibers << machine.spin do
+        ITERATIONS.times { machine.send(w, DATA, SIZE, UM::MSG_WAITALL) }
+        machine.close_async(w)
+      end
+      fibers << machine.spin do
+        ITERATIONS.times { machine.recv(r, +'', SIZE, 0) }
+        machine.close_async(r)
+      end
+    end
+  end
 end
data/benchmark/bm_mutex_io.rb
CHANGED
@@ -4,9 +4,9 @@ require_relative './common'
 require 'securerandom'
 require 'fileutils'
 
-GROUPS = ENV['N']&.to_i ||
+GROUPS = ENV['N']&.to_i || 48
 WORKERS = 10
-ITERATIONS =
+ITERATIONS = 10000
 
 puts "N=#{GROUPS}"
 
@@ -14,10 +14,15 @@ SIZE = 1024
 DATA = "*" * SIZE
 
 class UMBenchmark
+  def cleanup
+    # `rm /tmp/mutex*` rescue nil
+  end
+
   def do_threads(threads, ios)
     GROUPS.times do
       mutex = Mutex.new
-      ios << (f = File.open("/tmp/mutex_io_threads_#{SecureRandom.hex}", 'w'))
+      # ios << (f = File.open("/tmp/mutex_io_threads_#{SecureRandom.hex}", 'w'))
+      ios << (f = File.open("/dev/null", 'w'))
       f.sync = true
       WORKERS.times do
         threads << Thread.new do
@@ -34,7 +39,24 @@ class UMBenchmark
   def do_scheduler(scheduler, ios)
     GROUPS.times do
       mutex = Mutex.new
-      ios << (f = File.open("/tmp/mutex_io_fiber_scheduler_#{SecureRandom.hex}", 'w'))
+      # ios << (f = File.open("/tmp/mutex_io_fiber_scheduler_#{SecureRandom.hex}", 'w'))
+      ios << (f = File.open("/dev/null", 'w'))
+      f.sync = true
+      WORKERS.times do
+        Fiber.schedule do
+          ITERATIONS.times do
+            mutex.synchronize { f.write(DATA) }
+          end
+        end
+      end
+    end
+  end
+
+  def do_scheduler_x(div, scheduler, ios)
+    (GROUPS/div).times do
+      mutex = Mutex.new
+      # ios << (f = File.open("/tmp/mutex_io_fiber_scheduler_#{SecureRandom.hex}", 'w'))
+      ios << (f = File.open("/dev/null", 'w'))
       f.sync = true
       WORKERS.times do
         Fiber.schedule do
@@ -49,7 +71,27 @@ class UMBenchmark
   def do_um(machine, fibers, fds)
     GROUPS.times do
       mutex = UM::Mutex.new
-      fds << (fd = machine.open("/tmp/mutex_io_um_#{SecureRandom.hex}", UM::O_CREAT | UM::O_WRONLY))
+      # fds << (fd = machine.open("/tmp/mutex_io_um_#{SecureRandom.hex}", UM::O_CREAT | UM::O_WRONLY))
+      fds << (fd = machine.open("/dev/null", UM::O_WRONLY))
+      WORKERS.times do
+        fibers << machine.spin do
+          ITERATIONS.times do
+            machine.synchronize(mutex) do
+              machine.write(fd, DATA)
+            end
+          end
+        rescue => e
+          p e
+        end
+      end
+    end
+  end
+
+  def do_um_x(div, machine, fibers, fds)
+    (GROUPS/div).times do
+      mutex = UM::Mutex.new
+      # fds << (fd = machine.open("/tmp/mutex_io_um_#{SecureRandom.hex}", UM::O_CREAT | UM::O_WRONLY))
+      fds << (fd = machine.open("/dev/null", UM::O_WRONLY))
       WORKERS.times do
         fibers << machine.spin do
           ITERATIONS.times do
data/benchmark/chart_bm_io_pipe_x.png
CHANGED
Binary file
data/benchmark/common.rb
CHANGED
@@ -9,6 +9,7 @@ gemfile do
   gem 'io-event'
   gem 'async'
   gem 'pg'
+  gem 'gvltools'
 end
 
 require 'uringmachine/fiber_scheduler'
@@ -54,26 +55,56 @@ class UMBenchmark
   end
 
   @@benchmarks = {
-    baseline: [:baseline, "No Concurrency"],
-
-    thread_pool: [:thread_pool, "ThreadPool"],
-
-
-
-
+    # baseline: [:baseline, "No Concurrency"],
+    # baseline_um: [:baseline_um, "UM no concurrency"],
+    # thread_pool: [:thread_pool, "ThreadPool"],
+
+    threads: [:threads, "Threads"],
+
+    async_uring: [:scheduler, "Async uring"],
+    async_uring_x2: [:scheduler_x, "Async uring x2"],
+
+    # async_epoll: [:scheduler, "Async epoll"],
+    # async_epoll_x2: [:scheduler_x, "Async epoll x2"],
+
+    um_fs: [:scheduler, "UM FS"],
+    um_fs_x2: [:scheduler_x, "UM FS x2"],
+
+    um: [:um, "UM"],
+    um_sidecar: [:um, "UM sidecar"],
+    # um_sqpoll: [:um, "UM sqpoll"],
+    um_x2: [:um_x, "UM x2"],
+    um_x4: [:um_x, "UM x4"],
+    um_x8: [:um_x, "UM x8"],
   }
 
   def run_benchmarks(b)
+    STDOUT.sync = true
     @@benchmarks.each do |sym, (doer, name)|
-
+      if respond_to?(:"do_#{doer}")
+        STDOUT << "Running #{name}... "
+        ts = nil
+        b.report(name) {
+          ts = measure_time { send(:"run_#{sym}") }
+        }
+        p ts
+        cleanup
+      end
     end
   end
 
+  def cleanup
+  end
+
   def run_baseline
     do_baseline
   end
 
+  def run_baseline_um
+    machine = UM.new(4096)
+    do_baseline_um(machine)
+  end
+
   def run_threads
     threads = []
     ios = []
@@ -117,27 +148,140 @@ class UMBenchmark
     ios.each { it.close rescue nil }
   end
 
+  def run_async_uring_x2
+    threads = 2.times.map do
+      Thread.new do
+        selector ||= IO::Event::Selector::URing.new(Fiber.current)
+        worker_pool = Async::Scheduler::WorkerPool.new
+        scheduler = Async::Scheduler.new(selector:, worker_pool:)
+        Fiber.set_scheduler(scheduler)
+        ios = []
+        scheduler.run { do_scheduler_x(2, scheduler, ios) }
+        ios.each { it.close rescue nil }
+      end
+    end
+    threads.each(&:join)
+  end
+
+  def run_async_epoll_x2
+    threads = 2.times.map do
+      Thread.new do
+        selector ||= IO::Event::Selector::EPoll.new(Fiber.current)
+        scheduler = Async::Scheduler.new(selector:)
+        Fiber.set_scheduler(scheduler)
+        ios = []
+        scheduler.run { do_scheduler_x(2, scheduler, ios) }
+        ios.each { it.close rescue nil }
+      end
+    end
+    threads.each(&:join)
+  end
+
+  def run_um_fs_x2
+    threads = 2.times.map do
+      Thread.new do
+        machine = UM.new
+        thread_pool = UM::BlockingOperationThreadPool.new(2)
+        scheduler = UM::FiberScheduler.new(machine, thread_pool)
+        Fiber.set_scheduler(scheduler)
+        ios = []
+        do_scheduler_x(2, scheduler, ios)
+        scheduler.join
+        ios.each { it.close rescue nil }
+      end
+    end
+    threads.each(&:join)
+  end
+
   def run_um
-    machine = UM.new
+    machine = UM.new
+    fibers = []
+    fds = []
+    do_um(machine, fibers, fds)
+    machine.await_fibers(fibers)
+    fds.each { machine.close(it) }
+  end
+
+  def run_um_sidecar
+    machine = UM.new(sidecar: true)
     fibers = []
     fds = []
     do_um(machine, fibers, fds)
     machine.await_fibers(fibers)
-    puts "UM:"
-    p machine.metrics
     fds.each { machine.close(it) }
   end
 
   def run_um_sqpoll
-    machine = UM.new(
+    machine = UM.new(sqpoll: true)
     fibers = []
     fds = []
    do_um(machine, fibers, fds)
     machine.await_fibers(fibers)
-    fds.each { machine.
-
-
-
+    fds.each { machine.close(it) }
+  end
+
+  def run_um_x2
+    threads = 2.times.map do
+      Thread.new do
+        machine = UM.new
+        fibers = []
+        fds = []
+        do_um_x(2, machine, fibers, fds)
+        machine.await_fibers(fibers)
+        fds.each { machine.close(it) }
+      end
+    end
+    threads.each(&:join)
+  end
+
+  def run_um_x4
+    threads = 4.times.map do
+      Thread.new do
+        machine = UM.new
+        fibers = []
+        fds = []
+        do_um_x(4, machine, fibers, fds)
+        machine.await_fibers(fibers)
+        fds.each { machine.close(it) }
+      end
+    end
+    threads.each(&:join)
+  end
+
+  def run_um_x8
+    threads = 8.times.map do
+      Thread.new do
+        machine = UM.new
+        fibers = []
+        fds = []
+        do_um_x(8, machine, fibers, fds)
+        machine.await_fibers(fibers)
+        fds.each { machine.close(it) }
+      end
+    end
+    threads.each(&:join)
+  end
+
+  def measure_time
+    GVLTools::GlobalTimer.enable
+    t0s = [
+      Process.clock_gettime(Process::CLOCK_MONOTONIC),
+      Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID),
+      GVLTools::GlobalTimer.monotonic_time / 1_000_000_000.0
+    ]
+    yield
+    t1s = [
+      Process.clock_gettime(Process::CLOCK_MONOTONIC),
+      Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID),
+      GVLTools::GlobalTimer.monotonic_time / 1_000_000_000.0
+    ]
+    {
+      monotonic: t1s[0] - t0s[0],
+      cpu: t1s[1] - t0s[1],
+      gvl: t1s[2] - t0s[2]
+    }
+  ensure
+    GVLTools::GlobalTimer.disable
  end
 end
 
data/benchmark/http_parse.rb
CHANGED
@@ -1,15 +1,15 @@
 # frozen_string_literal: true
 
-
-
-
-
-
-
-
-
+require 'bundler/inline'
+
+gemfile do
+  source 'https://rubygems.org'
+  gem 'uringmachine', path: '..'
+  gem 'benchmark'
+  gem 'benchmark-ips'
+  gem 'http_parser.rb'
+end
 
-require 'bundler/setup'
 require 'uringmachine'
 require 'benchmark/ips'
 require 'http/parser'
data/benchmark/http_server_accept_queue.rb
ADDED
@@ -0,0 +1,104 @@
+# frozen_string_literal: true
+
+require 'bundler/inline'
+
+gemfile do
+  source 'https://rubygems.org'
+  gem 'uringmachine', path: '..'
+end
+
+require 'uringmachine'
+
+RE_REQUEST_LINE = /^([a-z]+)\s+([^\s]+)\s+(http\/[0-9\.]{1,3})/i
+RE_HEADER_LINE = /^([a-z0-9\-]+)\:\s+(.+)/i
+
+def stream_get_request_line(stream, buf)
+  line = stream.get_line(buf, 0)
+  m = line&.match(RE_REQUEST_LINE)
+  return nil if !m
+
+  {
+    'method' => m[1].downcase,
+    'path' => m[2],
+    'protocol' => m[3].downcase
+  }
+end
+
+class InvalidHeadersError < StandardError; end
+
+def get_headers(stream, buf)
+  headers = stream_get_request_line(stream, buf)
+  return nil if !headers
+
+  while true
+    line = stream.get_line(buf, 0)
+    break if line.empty?
+
+    m = line.match(RE_HEADER_LINE)
+    raise InvalidHeadersError, "Invalid header" if !m
+
+    headers[m[1]] = m[2]
+  end
+
+  headers
+end
+
+BODY = "Hello, world!" * 1000
+
+def send_response(machine, fd)
+  headers = "HTTP/1.1 200\r\nContent-Length: #{BODY.bytesize}\r\n\r\n"
+  machine.sendv(fd, headers, BODY)
+end
+
+def handle_connection(machine, fd)
+  stream = UM::Stream.new(machine, fd)
+  buf = String.new(capacity: 65536)
+
+  while true
+    headers = get_headers(stream, buf)
+    break if !headers
+
+    send_response(machine, fd)
+  end
+rescue InvalidHeadersError, SystemCallError => e
+  # ignore
+ensure
+  machine.close_async(fd)
+end
+
+N = ENV['N']&.to_i || 1
+PORT = ENV['PORT']&.to_i || 1234
+
+accept_queue = UM::Queue.new
+
+acceptor = Thread.new do
+  machine = UM.new
+  fd = machine.socket(UM::AF_INET, UM::SOCK_STREAM, 0, 0)
+  machine.setsockopt(fd, UM::SOL_SOCKET, UM::SO_REUSEADDR, true)
+  machine.setsockopt(fd, UM::SOL_SOCKET, UM::SO_REUSEPORT, true)
+  machine.bind(fd, '127.0.0.1', PORT)
+  machine.listen(fd, 128)
+  machine.accept_into_queue(fd, accept_queue)
+rescue Exception => e
+  p e
+  p e.backtrace
+  exit!
+end
+
+workers = N.times.map do |idx|
+  Thread.new do
+    machine = UM.new
+
+    loop do
+      fd = machine.shift(accept_queue)
+      machine.spin { handle_connection(machine, fd) }
+    end
+  rescue Exception => e
+    p e
+    p e.backtrace
+    exit!
+  end
+end
+
+puts "Listening on localhost:#{PORT}, #{N} worker thread(s)"
+acceptor.join