uringmachine 0.23.1 → 0.24.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,130 @@
1
+ # Interim Report for Ruby Association Grant Program 2025
2
+
3
+ ## Project Summary
4
+
5
+ io_uring is a relatively new Linux API that permits invoking Linux
6
+ system calls asynchronously. UringMachine is a gem that brings low-level access
7
+ to the io_uring interface to Ruby programs, and permits not only asynchronous
8
+ I/O on files and sockets, but also timeouts, futex wait/wake, statx, etc., with
9
+ support for fiber-based concurrency. This project will work to enhance
10
+ UringMachine to include a fiber scheduler implementation for use with the
11
+ standard Ruby I/O classes, to have built-in support for SSL, and to support more
12
+ io_uring ops such as writev, splice, fsync, mkdir, fadvise, etc.
13
+
14
+ ## Progress Report
15
+
16
+ As of the present date, I have worked on the following:
17
+
18
+ ### Improvements to the Ruby `Fiber::Scheduler` interface
19
+
20
+ - [PR](https://github.com/ruby/ruby/pull/15213) to expose
21
+ `rb_process_status_new` internal Ruby C API
22
+ (https://bugs.ruby-lang.org/issues/21704). This is needed in order to allow
23
+ FiberScheduler implementations to instantiate `Process::Status` objects in the
24
+ `#process_wait` hook. This PR is still pending a decision by the Ruby core team.
25
+
26
+ - [PR](https://github.com/ruby/ruby/pull/15385) to clean up FiberScheduler and
27
+ fiber state in a forked process (https://bugs.ruby-lang.org/issues/21717).
28
+ This was merged into Ruby 4.0.
29
+
30
+ - [PR](https://github.com/ruby/ruby/pull/15609) to invoke FiberScheduler
31
+ `io_write` hook on IO flush (https://bugs.ruby-lang.org/issues/21789). This
32
+ was merged into Ruby 4.0.
33
+
34
+ - Found an issue while implementing the `#io_pwrite` hook, which resulted in a
35
+ [PR](https://github.com/ruby/ruby/pull/15428) submitted by Samuel Williams,
36
+ and merged into Ruby 4.0.
37
+
38
+ - Worked with Samuel Williams on how to implement the `#io_close` hook, which
39
+ resulted in a [PR](https://github.com/ruby/ruby/pull/15434) submitted by
40
+ Samuel and merged into Ruby 4.0.
41
+
42
+ - [PR](https://github.com/ruby/ruby/pull/15865) to add socket I/O hooks to the
43
+ FiberScheduler interface (https://bugs.ruby-lang.org/issues/21837). This PR is
44
+ currently in draft phase.
45
+
46
+ ### UringMachine `Fiber::Scheduler` Implementation
47
+
48
+ - I developed a [full
49
+ implementation](https://github.com/digital-fabric/uringmachine/blob/main/lib/uringmachine/fiber_scheduler.rb)
50
+ of the `Fiber::Scheduler` interface using UringMachine, with methods for *all*
51
+ hooks (a minimal usage sketch follows the list):
52
+
53
+ - `#scheduler_close`
54
+ - `#fiber`, `#yield`
55
+ - `#blocking_operation_wait`, `#block`, `#unblock`, `#fiber_interrupt`
56
+ - `#kernel_sleep`, `#timeout_after`
57
+ - `#io_read`, `#io_write`, `#io_pread`, `#io_pwrite`, `#io_close`
58
+ - `#io_wait`, `#io_select`
59
+ - `#process_wait` (relies on the `rb_process_status_new` PR)
60
+ - `#address_resolve`
61
+
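+ As an illustration, here is a minimal sketch of how the scheduler is meant to
+ be installed; the exact require path is an assumption based on the file
+ location linked above:
+
+ ```ruby
+ require 'uringmachine'
+ require 'uringmachine/fiber_scheduler'
+
+ # Install the UringMachine-backed scheduler for the current thread.
+ Fiber.set_scheduler(UringMachine::FiberScheduler.new)
+
+ Fiber.schedule do
+   # Plain Ruby I/O and sleep now go through the scheduler hooks
+   # (#io_read, #io_write, #kernel_sleep, etc.).
+   sleep 0.1
+   puts File.read('/etc/hostname')
+ end
+ ```
+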
62
+ - Wrote [extensive
63
+ tests](https://github.com/digital-fabric/uringmachine/blob/main/test/test_fiber_scheduler.rb)
64
+ for the UringMachine fiber scheduler.
65
+
66
+ ### Improvements to UringMachine
67
+
68
+ - Improved various internal aspects of the C-extension: performance and
69
+ correctness of mutex and queue implementations.
70
+
71
+ - Added support for accepting instances of `IO::Buffer` as buffers for the
72
+ various I/O operations, in order to facilitate the `Fiber::Scheduler`
73
+ implementation.
74
+
75
+ - Added various methods for working with processes (see the sketch after this list):
76
+
77
+ - `UringMachine#waitid`
78
+ - `UringMachine.pidfd_open`
79
+ - `UringMachine.pidfd_send_signal`
80
+
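+ To make these concrete, here is a hedged sketch of how the pidfd helpers might
+ be used; the argument forms below mirror the underlying Linux system calls and
+ are an assumption, not documented API (`#waitid` is omitted for the same
+ reason):
+
+ ```ruby
+ pid = Process.spawn('sleep 60')
+
+ # Assumed form: UringMachine.pidfd_open(pid) => pidfd (Integer)
+ pidfd = UringMachine.pidfd_open(pid)
+
+ # Assumed form: UringMachine.pidfd_send_signal(pidfd, signo)
+ UringMachine.pidfd_send_signal(pidfd, Signal.list['TERM'])
+
+ Process.wait(pid)
+ ```
+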
81
+ - Added detailed internal metrics.
82
+
83
+ - Added support for vectorized write/send using io_uring: `UringMachine#writev`
84
+ and `UringMachine#sendv`.
85
+
86
+ - Added support for `SQPOLL` mode - this io_uring mode lets us avoid entering
87
+ the kernel when submitting I/O operations, since a kernel thread busy-polls the SQ
88
+ ring.
89
+
90
+ - Added support for sidecar mode: an auxiliary thread is used to enter the
91
+ kernel and wait for CQEs (I/O operation completion entries), letting the Ruby
92
+ thread avoid entering the kernel in order to wait for CQEs.
93
+
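+ For reference, a sketch of how these modes are selected when creating a
+ machine; this assumes the keyword-argument form of `UM.new` (`:size`,
+ `:sqpoll`, `:sidecar`) described in the task list, and treats the mode flags
+ as booleans, which is an assumption:
+
+ ```ruby
+ # Default mode: the Ruby thread enters the kernel to submit and wait.
+ machine = UM.new(size: 4096)
+
+ # SQPOLL mode: a kernel thread busy-polls the SQ ring, so submissions
+ # normally do not require entering the kernel.
+ sqpoll_machine = UM.new(size: 4096, sqpoll: true)
+
+ # Sidecar mode: an auxiliary thread waits for CQEs and signals the Ruby
+ # thread through a futex, so the Ruby thread avoids blocking in the kernel.
+ sidecar_machine = UM.new(size: 4096, sidecar: true)
+ ```
+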
94
+ ### Benchmarking
95
+
96
+ - I did extensive benchmarking comparing different solutions for performing
97
+ concurrent I/O in Ruby:
98
+
99
+ - Using normal Ruby threads
100
+ - Using Samuel's [Async](https://github.com/socketry/async/) gem, which
101
+ implements a `Fiber::Scheduler`
102
+ - Using the UringMachine `Fiber::Scheduler`
103
+ - Using the UringMachine low-level API
104
+
105
+ - The benchmarks simulate different kinds of workloads:
106
+
107
+ - Writing and reading from pipes
108
+ - Writing and reading from sockets
109
+ - Doing CPU-bound work synchronized by mutex
110
+ - Doing I/O-bound work synchronized by mutex
111
+ - Pushing and pulling items from queues
112
+ - Running queries on a PostgreSQL database
113
+
114
+ - The results are here: https://github.com/digital-fabric/uringmachine/blob/main/benchmark/README.md
115
+
116
+ ### Pending Work
117
+
118
+ Before the end of the grant work period I intend to do the following:
119
+
120
+ - I have already started work on SSL integration. I intend to contribute changes to
121
+ `ruby/openssl` to add support for a custom BIO that will use the underlying
122
+ socket for performing I/O (currently the Ruby openssl implementation
123
+ completely bypasses the Ruby I/O layer in order to send/recv to sockets). This
124
+ will allow integration with the `Fiber::Scheduler` interface.
125
+
126
+ - Add support for automatic buffer management for performing multishot read/recv
127
+ using io_uring's registered buffers feature.
128
+
129
+ - Add some more low-level methods for performing I/O operations supported by
130
+ io_uring: splice, fsync, mkdir, fadvise, etc.
@@ -455,16 +455,180 @@ Ruby I/O layer. Some interesting warts in the Ruby `IO` implementation:
455
455
  [Extralite](https://github.com/digital-fabric/extralite/)): normally, using an
456
456
  actor interface, or protected by a mutex. I'll try to follow up with a
457
457
  benchmark measuring concurrent access to SQLite DBs, similar to the PG one.
458
-
458
+
459
459
  Another interesting benchmark I found was one for resolving DNS addresses
460
460
  using Ruby's builtin `Addrinfo` API, the bundled `resolv` gem, and a basic DNS
461
461
  resolver included in UringMachine (I totally forgot I made one). Here too, I'd
462
462
  like to add a benchmark to measure how these different solutions do in a
463
463
  highly concurrent scenario.
464
-
464
+
465
465
  - Thanks to one of these old benchmarks I made a change that more than doubled
466
466
  the performance of `UM#snooze`. What this method does is it adds the current
467
467
  fiber to the end of the runqueue, and yields control to the next fiber in the
468
468
  runqueue, or to process available CQE's. This method is useful for testing,
469
469
  but also for yielding control periodically when performing CPU-bound work, in
470
470
  order to keep the application responsive and improve latency.
471
+
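+ For example, a long CPU-bound fiber can call `#snooze` every so often so that
+ other fibers (and pending CQEs) get a chance to run; `do_some_work` below is
+ just a hypothetical stand-in for the CPU-bound step:
+
+ ```ruby
+ machine = UM.new
+
+ machine.spin do
+   1_000_000.times do |i|
+     do_some_work(i)                     # hypothetical CPU-bound step
+     machine.snooze if (i % 1024) == 0   # yield to other fibers periodically
+   end
+ end
+ ```
+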
472
+ # 2025-12-14
473
+
474
+ - Changed how `struct um_op`s are allocated. This struct is used to represent
475
+ any io_uring operation. It is also used to represent runqueue entries. Now,
476
+ for most I/O operations, this struct is stack-allocated. But when a new fiber
477
+ is scheduled, or when using the `#timeout` or any of the `#xxx_async` methods,
478
+ like `#close_async` or `#write_async`, we need to use a heap-allocated
479
+ `um_op`, because we don't control its lifetime. In order to minimize
480
+ allocations, once a `um_op` is done with (it's been pulled out of the
481
+ runqueue, or its corresponding CQE has been processed), it is put on a
482
+ freelist in order to be reused when needed. Previously, when the freelist was
483
+ empty, UringMachine would just allocate a new one using `malloc`. Now
484
+ UringMachine allocates a array of 256 structs at once and puts all of them on
485
+ the freelist.
486
+ - Implemented the vectorized versions of `#write` and `#send`, so now one can
487
+ use `#writev` and `#sendv` to send multiple buffers at once. This could be
488
+ very useful for situations like sending an HTTP response, which is made of a
489
+ headers part and a body part. Also, `#writev` and `#sendv` are guaranteed to
490
+ write/send the given buffers in their entirety, unlike `#write` and `#send`,
491
+ which may do a partial write/send (though for `#send` you can pass the
492
+ `UM::MSG_WAITALL` flag to guarantee a complete send); see the sketch at the end of this entry.
493
+ - With the new built-in `Set` class and its new [C
494
+ API](https://github.com/ruby/ruby/pull/13735), I've switched the internal
495
+ `pending_fibers` collection, which holds fibers waiting for an operation to complete, from a
496
+ hash to a set.
497
+
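+ As a quick illustration of the vectorized write described above, a sketch of
+ sending an HTTP response as two buffers; the exact argument form (separate
+ buffers versus an array) is an assumption here:
+
+ ```ruby
+ # Assuming `machine` is a UM instance and `fd` an open socket/file descriptor.
+ headers = "HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\n"
+ body    = "Hello, world!"
+
+ # Both buffers are written in full; no manual concatenation or
+ # partial-write handling is needed.
+ machine.writev(fd, headers, body)
+
+ # For sockets, #sendv plays the same role:
+ # machine.sendv(sock_fd, headers, body)
+ ```
+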
498
+ # 2025-12-15
499
+
500
+ - Working more with the benchmarks, it occurred to me that with the current
501
+ design of UringMachine, whenever we check for I/O completions (which is also
502
+ the moment when we make I/O submissions to the kernel), we leave some
503
+ performance on the table. This is because when we call `io_uring_submit` or
504
+ `io_uring_wait_cqes`, we make a blocking system call (namely,
505
+ `io_uring_enter`), and correspondingly we release the GVL.
506
+
507
+ What this means is that while we're waiting for the system call to return, the
508
+ GVL is available for another Ruby thread to do CPU-bound work. Normally when
509
+ there's a discussion about concurrency in Ruby, there's this dichotomy: it's
510
+ either threads or fibers. But as described above, even when using fibers and
511
+ io_uring for concurrent I/O, we still need to enter the kernel periodically in
512
+ order to submit operations and process completions. So this is an opportunity
513
+ to yield the GVL to a different thread, which can run some Ruby code while the
514
+ first thread is waiting for the system call to return.
515
+
516
+ With that in mind, I modified the benchmark code to see what would happen if
517
+ we run two UringMachine instances on two separate threads. The results are
518
+ quite interesting: when splitting the workload between two UringMachine instances
519
+ running on separate threads, we get a marked improvement in performance.
520
+ Depending on the benchmark, we get even better performance if we increase the
521
+ thread count to 4.
522
+
523
+ But, as we increase the thread count, we eventually hit diminishing returns
524
+ and risk actually having worse performance than with just a single thread. So,
525
+ at least for the workloads I tested (including a very primitive HTTP/1.1
526
+ server), the sweet spot is between 2 and 4 threads.
527
+
528
+ One thing I have noticed, though, is that while the pure UM version (i.e. using
529
+ the UM low-level API) gets a boost from running on multiple threads, the UM
530
+ fiber scheduler can actually perform worse. This is also the case for the
531
+ Async fiber scheduler, so this might have to do with the fact that the Ruby IO
532
+ class does a lot of work behind the scenes, including locking write mutexes
533
+ and other stuff that's done when the IO is closed. This is still to be
534
+ investigated...
535
+
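+ A rough sketch of the setup used in these benchmarks, assuming each thread
+ simply runs its own independent machine and its own share of the work
+ (`run_workload` is a hypothetical placeholder):
+
+ ```ruby
+ THREAD_COUNT = 2
+
+ threads = THREAD_COUNT.times.map do |i|
+   Thread.new do
+     machine = UM.new   # each thread gets its own io_uring instance
+     fibers = 100.times.map do
+       # While one thread blocks in io_uring_enter, the GVL is free
+       # for the other thread's fibers to run Ruby code.
+       machine.spin { run_workload(machine, i) }
+     end
+     machine.join(*fibers)
+   end
+ end
+ threads.each(&:join)
+ ```
+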
536
+ # 2025-12-16
537
+
538
+ - Added `UM#accept_into_queue`, which accepts incoming socket connections in a
539
+ loop and pushes them to the given queue.
540
+
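+ A sketch of the intended usage; the `(fd, queue)` argument order for
+ `#accept_into_queue`, the `server_fd` listening socket, and the
+ `handle_connection` helper are assumptions for illustration:
+
+ ```ruby
+ machine = UM.new
+ queue = UM::Queue.new
+
+ # Acceptor fiber: accept connections in a loop, pushing each fd to the queue.
+ machine.spin { machine.accept_into_queue(server_fd, queue) }
+
+ # Worker fibers: shift connection fds off the queue and handle them.
+ 4.times do
+   machine.spin do
+     while (conn_fd = machine.shift(queue))
+       handle_connection(machine, conn_fd)
+     end
+   end
+ end
+ ```
+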
541
+ - Improved error handling in the fiber scheduler, and added more tests. There
542
+ are now about 4.2KLoC of test code, with 255 test cases and 780 assertions. And
543
+ that's without all the tests that depend on the
544
+ [`rb_process_status_new`](https://github.com/ruby/ruby/pull/15213) API, the PR for
545
+ which is currently still not merged.
546
+
547
+ - Added a test mode to UringMachine that affects runqueue processing, without
548
+ impacting performance under normal conditions.
549
+
550
+ # 2025-12-17
551
+
552
+ - I noticed that the fiber scheduler's `#io_write` hook was not being called on
553
+ `IO#flush` or when closing an IO with buffered writes. So any time the IO
554
+ write buffer needs to be flushed, instead of calling the `#io_write` hook, the
555
+ Ruby I/O layer would just run the write on a worker thread by calling the
556
+ `#blocking_operation_wait` hook. I've made a
557
+ [PR](https://github.com/ruby/ruby/pull/15609) to fix this.
558
+
559
+ # 2025-12-18
560
+
561
+ - Added a [PR](https://github.com/ruby/ruby/pull/15629) to update Ruby NEWS with
562
+ changes to the FiberScheduler interface.
563
+
564
+ - I did some more verification work on the fiber scheduler implementation. I
565
+ added more tests and improved error handling in read/write hooks.
566
+
567
+ - Made some small changes to fiber scheduling. I added a test mode which peeks
568
+ at CQEs on each snooze, in order to facilitate testing.
569
+
570
+ # 2025-12-20
571
+
572
+ - Did some more work on benchmarks, and added provisional GVL time measurement.
573
+
574
+ - Implemented sidecar mode - the basic idea is that UringMachine starts an
575
+ auxiliary thread that repeatedly enters the kernel with a call to
576
+ `io_uring_enter` in order to make CQEs available. On return from the system
577
+ call, it signals through a futex that ready CQEs can be processed.
578
+
579
+ On fiber switch, the next fiber to run is shifted from the runqueue. If the
580
+ runqueue is empty, the UringMachine will wait for the signal, and then process
581
+ all CQEs. The idea is that in a single threaded environment, under high enough
582
+ I/O load, we don't need to release the GVL in order to process ready CQEs,
583
+ and thus we can better saturate the CPU.
584
+
585
+ # 2025-12-26
586
+
587
+ - Finished up the sidecar mode implementation. I did some preliminary benchmarks
588
+ and this mode does provide a small performance benefit, depending on the
589
+ context. But for the moment, I consider this mode experimental.
590
+
591
+ # 2026-01-07
592
+
593
+ - In the last week I've been working on implementing a buffer pool
594
+ with automatic buffer management. I've been contemplating the design for a
595
+ few weeks now, and after the vacation I decided the idea is solid enough
596
+ for me to start writing some code. But let me back up and explain what I'm
597
+ trying to achieve.
598
+
599
+ The io_uring interface includes a facility for setting up buffer rings. The
600
+ idea is that the application provides buffers to the kernel, which uses those
601
+ buffers for reading or receiving repeatedly from an fd, letting the
602
+ application know with each CQE which buffer was used and how much data it holds.
603
+ This is particularly useful when dealing with bursts of incoming data.
604
+
605
+ The application initiates multishot read/recv operations on each connection,
606
+ and the kernel has at its disposal a pool of application-provided buffers
607
+ it can use whenever a chunk of data is read / received. So the kernel consumes
608
+ those buffers as needed, and fills them with data when it becomes available.
609
+ That data will be processed by the application at some later time, when it's
610
+ ready to process CQEs. The application will then add the consumed buffers back
611
+ to the buffer ring, making them available to the kernel again.
612
+
613
+ Multiple buffer rings may be registered by the application, each with a set
614
+ maximum number of buffers and with a buffer group id (`bgid`). The buffers
615
+ added to a buffer ring may be of any size. Each buffer in a buffer ring also
616
+ has an id (`bid`). So buffers are identified by the tuple `[bgid, bid]`. When
617
+ submitting a multishot read/recv operation, we indicate the buffer group id
618
+ (`bgid`), letting the kernel know which buffer ring to use. The kernel then
619
+ generates CQEs (completion queue entries) which contain the id of the buffer
620
+ that contains the data (`bid`). Crucially, a single buffer ring may be used in
621
+ multiple concurrent multishot read/recv operations on different fd's.
622
+
623
+ In addition, on recent kernels io_uring is capable of partially consuming
624
+ buffers, which prevents wasting buffer space. When a buffer ring is set up for
625
+ [partial buffer
626
+ consumption](https://www.man7.org/linux/man-pages/man3/io_uring_setup_buf_ring.3.html),
627
+ each CQE relating to a multishot read/recv operation will also have a flag
628
+ telling the application [whether the buffer will be further
629
+ used](https://www.man7.org/linux/man-pages/man3/io_uring_prep_recv.3.html)
630
+ beyond the amount of data readily available. Each completion of a given buffer
631
+ ID will continue where the previous one left off. So it's great that buffer
632
+ space can be used fully by the kernel, but the application is required to keep
633
+ track of a "cursor" for each buffer.
634
+
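+ To make the bookkeeping concrete, here is a plain-Ruby illustration (not
+ UringMachine API) of tracking buffers by the `[bgid, bid]` tuple and keeping a
+ per-buffer cursor for partial consumption:
+
+ ```ruby
+ # Illustration only: per-buffer cursors for partial buffer consumption.
+ class BufferCursors
+   def initialize
+     @cursors = Hash.new(0)   # keyed by [bgid, bid]
+   end
+
+   # Called for each multishot read/recv CQE: returns the newly received
+   # slice of the buffer and advances (or resets) the cursor.
+   def consume(bgid, bid, buffer, bytes, more_coming)
+     key = [bgid, bid]
+     offset = @cursors[key]
+     data = buffer.byteslice(offset, bytes)
+     if more_coming
+       @cursors[key] = offset + bytes   # next CQE continues where this one left off
+     else
+       @cursors.delete(key)             # buffer can be returned to the ring
+     end
+     data
+   end
+ end
+ ```
+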
data/grant-2025/tasks.md CHANGED
@@ -12,29 +12,39 @@
12
12
  https://unixism.net/loti/tutorial/sq_poll.html
13
13
  - [v] Add `UM.socketpair`
14
14
 
15
- - [ ] Add more metrics
15
+ - [v] Add more metrics
16
16
  - [v] runqueue depth
17
17
  - [v] number of pending fibers
18
18
  - [v] ops: transient count, free count
19
19
  - [v] total fiber switches, total waiting for CQEs
20
- - [ ] watermark: ops_pending, ops_unsubmitted, ops_runqueue, ops_free, ops_transient
21
- (only in profile mode)
22
- - [ ] Performance tuning parameters
23
- - [ ] max fiber switches before processing CQEs
24
- - [ ] max fiber switches before submitting unsubmitted SQEs
25
- - [ ] measure switches since last submitting / last CQE processing
26
-
27
- - [ ] Better buffer management buffer rings
20
+
21
+ - [v] Make writev automatically complete partial writes
22
+
23
+ - [ ] Add inotify API
24
+
25
+ https://www.man7.org/linux/man-pages/man7/inotify.7.html
26
+
27
+ - [ ] Better buffer management
28
28
  - [v] Add `UM#sendv` method (see below)
29
29
  - [v] Benchmark `#sendv` vs `#send_bundle` (in concurrent situation)
30
+ - [v] Support for `IO::Buffer`?
30
31
  - [ ] Benchmark `#read_each` vs `#read` (in concurrent situation)
31
- - [ ] Support for `IO::Buffer`? How's the API gonna look like?
32
- - [ ] Some higher-level abstraction for managing a *pool* of buffer rings
33
-
34
- - [ ] Add some way to measure fiber CPU time.
35
- https://github.com/socketry/async/issues/428
36
-
37
- - [ ] UringMachine Fiber::Scheduler implementation
32
+ - [ ] Implement automatic buffer pool:
33
+ - [ ] Automatic buffer allocation, registration and management.
34
+ - [ ] Support for partial buffer consumption.
35
+ - [ ] Data processing through a rewritten stream implementation.
36
+
37
+ - [v] Sidecar mode
38
+ - [v] Convert `UM#initialize` to take kwargs
39
+ - [v] `:size` - SQ entries
40
+ - [v] `:sqpoll` - sqpoll mode
41
+ - [v] `:sidecar` - sidecar mode
42
+ - [v] Sidecar implementation
43
+ - [v] sidecar thread
44
+ - [v] futex handling
45
+ - [v] submission logic
46
+
47
+ - [v] UringMachine Fiber::Scheduler implementation
38
48
  - [v] Check how scheduler interacts with `fork`.
39
49
  - [v] Implement `process_wait` (with `rb_process_status_new`)
40
50
  - [v] Implement `fiber_interrupt` hook
@@ -97,7 +107,7 @@
97
107
  - [v] pipes: multiple pairs of fibers - reader / writer
98
108
  - [v] sockets: echo server + many clients
99
109
 
100
- - [ ] Benchmarks
110
+ - [v] Benchmarks
101
111
  - [v] UM queue / Ruby queue (threads) / Ruby queue with UM fiber scheduler
102
112
 
103
113
  N groups where each group has M producers and O consumers accessing the same queue.
@@ -13,10 +13,11 @@ class UringMachine
13
13
  # Initializes a new worker pool.
14
14
  #
15
15
  # @return [void]
16
- def initialize
16
+ def initialize(max_workers = Etc.nprocessors)
17
+ @max_workers = max_workers
17
18
  @pending_count = 0
18
19
  @worker_count = 0
19
- @max_workers = Etc.nprocessors
20
+
20
21
  @worker_mutex = UM::Mutex.new
21
22
  @job_queue = UM::Queue.new
22
23
  @workers = []
@@ -52,7 +53,7 @@ class UringMachine
52
53
 
53
54
  # @return [void]
54
55
  def run_worker_thread
55
- machine = UM.new(4)
56
+ machine = UM.new(size: 4)
56
57
  loop do
57
58
  q, op = machine.shift(@job_queue)
58
59
  @pending_count += 1
@@ -76,7 +77,7 @@ class UringMachine
76
77
  class FiberScheduler
77
78
 
78
79
  # The blocking operation thread pool is shared by all fiber schedulers.
79
- @@blocking_operation_thread_pool = BlockingOperationThreadPool.new
80
+ DEFAULT_THREAD_POOL = BlockingOperationThreadPool.new
80
81
 
81
82
  # UringMachine instance associated with scheduler.
82
83
  attr_reader :machine
@@ -92,8 +93,9 @@ class UringMachine
92
93
  #
93
94
  # @param machine [UringMachine, nil] UringMachine instance
94
95
  # @return [void]
95
- def initialize(machine = nil)
96
+ def initialize(machine = nil, thread_pool = DEFAULT_THREAD_POOL)
96
97
  @machine = machine || UM.new
98
+ @thread_pool = thread_pool
97
99
  @fiber_map = ObjectSpace::WeakMap.new
98
100
  @thread = Thread.current
99
101
  end
@@ -107,7 +109,8 @@ class UringMachine
107
109
  # the fiber map, scheduled on the scheduler machine, and started before this
108
110
  # method returns (by calling snooze).
109
111
  #
110
- # @param block [Proc] fiber block @return [Fiber]
112
+ # @param block [Proc] fiber block
113
+ # @return [Fiber]
111
114
  def fiber(&block)
112
115
  fiber = Fiber.new(blocking: false) { @machine.run(fiber, &block) }
113
116
 
@@ -145,7 +148,7 @@ class UringMachine
145
148
  # @param op [callable] blocking operation
146
149
  # @return [void]
147
150
  def blocking_operation_wait(op)
148
- @@blocking_operation_thread_pool.process(@machine, op)
151
+ @thread_pool.process(@machine, op)
149
152
  end
150
153
 
151
154
  # Blocks the current fiber by yielding to the machine. This hook is called
@@ -188,7 +191,6 @@ class UringMachine
188
191
  # Yields to the next runnable fiber.
189
192
  def yield
190
193
  @machine.snooze
191
- # @machine.yield
192
194
  end
193
195
 
194
196
  # Waits for the given io to become ready.
@@ -198,7 +200,6 @@ class UringMachine
198
200
  # @param timeout [Number, nil] optional timeout
199
201
  # @param return
200
202
  def io_wait(io, events, timeout = nil)
201
- # p(io_wait: io, events:)
202
203
  timeout ||= io.timeout
203
204
  if timeout
204
205
  @machine.timeout(timeout, Timeout::Error) {
@@ -243,7 +244,7 @@ class UringMachine
243
244
  length = buffer.size if length == 0
244
245
 
245
246
  if (timeout = io.timeout)
246
- @machine.timeout(timeout, Timeout::Error) do
247
+ @machine.timeout(timeout, Timeout::Error) do
247
248
  @machine.read(io.fileno, buffer, length, offset)
248
249
  rescue Errno::EINTR
249
250
  retry
@@ -253,6 +254,8 @@ class UringMachine
253
254
  end
254
255
  rescue Errno::EINTR
255
256
  retry
257
+ rescue Errno => e
258
+ -e.errno
256
259
  end
257
260
 
258
261
  # Reads from the given IO at the given file offset
@@ -267,7 +270,7 @@ class UringMachine
267
270
  length = buffer.size if length == 0
268
271
 
269
272
  if (timeout = io.timeout)
270
- @machine.timeout(timeout, Timeout::Error) do
273
+ @machine.timeout(timeout, Timeout::Error) do
271
274
  @machine.read(io.fileno, buffer, length, offset, from)
272
275
  rescue Errno::EINTR
273
276
  retry
@@ -277,6 +280,8 @@ class UringMachine
277
280
  end
278
281
  rescue Errno::EINTR
279
282
  retry
283
+ rescue Errno => e
284
+ -e.errno
280
285
  end
281
286
 
282
287
  # Writes to the given IO.
@@ -287,12 +292,11 @@ class UringMachine
287
292
  # @param offset [Integer] write offset
288
293
  # @return [Integer] bytes written
289
294
  def io_write(io, buffer, length, offset)
290
- # p(io_write: io, length:, offset:, timeout: io.timeout)
291
295
  length = buffer.size if length == 0
292
296
  buffer = buffer.slice(offset) if offset > 0
293
297
 
294
298
  if (timeout = io.timeout)
295
- @machine.timeout(timeout, Timeout::Error) do
299
+ @machine.timeout(timeout, Timeout::Error) do
296
300
  @machine.write(io.fileno, buffer, length)
297
301
  rescue Errno::EINTR
298
302
  retry
@@ -302,6 +306,8 @@ class UringMachine
302
306
  end
303
307
  rescue Errno::EINTR
304
308
  retry
309
+ rescue Errno => e
310
+ -e.errno
305
311
  end
306
312
 
307
313
  # Writes to the given IO at the given file offset.
@@ -313,12 +319,11 @@ class UringMachine
313
319
  # @param offset [Integer] buffer offset
314
320
  # @return [Integer] bytes written
315
321
  def io_pwrite(io, buffer, from, length, offset)
316
- # p(io_pwrite: io, from:, length:, offset:, timeout: io.timeout)
317
322
  length = buffer.size if length == 0
318
323
  buffer = buffer.slice(offset) if offset > 0
319
324
 
320
325
  if (timeout = io.timeout)
321
- @machine.timeout(timeout, Timeout::Error) do
326
+ @machine.timeout(timeout, Timeout::Error) do
322
327
  @machine.write(io.fileno, buffer, length, from)
323
328
  rescue Errno::EINTR
324
329
  retry
@@ -328,6 +333,8 @@ class UringMachine
328
333
  end
329
334
  rescue Errno::EINTR
330
335
  retry
336
+ rescue Errno => e
337
+ -e.errno
331
338
  end
332
339
 
333
340
  # Closes the given fd.
@@ -335,8 +342,9 @@ class UringMachine
335
342
  # @param fd [Integer] file descriptor
336
343
  # @return [Integer] file descriptor
337
344
  def io_close(fd)
338
- # p(io_close: fd)
339
345
  @machine.close_async(fd)
346
+ rescue Errno => e
347
+ -e.errno
340
348
  end
341
349
 
342
350
  if UM.method_defined?(:waitid_status)
@@ -366,17 +374,17 @@ class UringMachine
366
374
  #
367
375
  # @param hostname [String] hostname to resolve
368
376
  # @return [Array<Addrinfo>] array of resolved addresses
369
- def address_resolve(hostname)
370
- Resolv.getaddresses(hostname)
371
- end
372
-
373
- # Run the given block with a timeout.
374
- #
375
- # @param duration [Number] timeout duration
376
- # @param exception [Class] exception Class
377
- # @param message [String] exception message
378
- # @param block [Proc] block to run
379
- # @return [any] block return value
377
+ def address_resolve(hostname)
378
+ Resolv.getaddresses(hostname)
379
+ end
380
+
381
+ # Run the given block with a timeout.
382
+ #
383
+ # @param duration [Number] timeout duration
384
+ # @param exception [Class] exception Class
385
+ # @param message [String] exception message
386
+ # @param block [Proc] block to run
387
+ # @return [any] block return value
380
388
  def timeout_after(duration, exception, message, &block)
381
389
  @machine.timeout(duration, exception, &block)
382
390
  end
@@ -1,5 +1,5 @@
1
1
  # frozen_string_literal: true
2
2
 
3
3
  class UringMachine
4
- VERSION = '0.23.1'
4
+ VERSION = '0.24.0'
5
5
  end
data/lib/uringmachine.rb CHANGED
@@ -7,10 +7,8 @@ require 'uringmachine/dns_resolver'
7
7
  UM = UringMachine
8
8
 
9
9
  class UringMachine
10
- @@fiber_map = {}
11
-
12
10
  def fiber_map
13
- @@fiber_map
11
+ @fiber_map ||= {}
14
12
  end
15
13
 
16
14
  class Terminate < Exception
@@ -20,13 +18,13 @@ class UringMachine
20
18
  fiber = klass.new { |v| run_block_in_fiber(block, fiber, v) }
21
19
  self.schedule(fiber, value)
22
20
 
23
- @@fiber_map[fiber] = fiber
21
+ fiber_map[fiber] = fiber
24
22
  end
25
23
 
26
24
  def run(fiber, &block)
27
25
  run_block_in_fiber(block, fiber, nil)
28
26
  self.schedule(fiber, nil)
29
- @@fiber_map[fiber] = fiber
27
+ fiber_map[fiber] = fiber
30
28
  end
31
29
 
32
30
  def join(*fibers)
@@ -97,7 +95,7 @@ class UringMachine
97
95
  ensure
98
96
  fiber.mark_as_done
99
97
  # cleanup
100
- @@fiber_map.delete(fiber)
98
+ fiber_map.delete(fiber)
101
99
  self.notify_done_listeners(fiber)
102
100
 
103
101
  # switch away to a different fiber
data/test/helper.rb CHANGED
@@ -62,17 +62,22 @@ class UMBaseTest < Minitest::Test
62
62
 
63
63
  def setup
64
64
  @machine = UM.new
65
+ @machine.test_mode = true
65
66
  end
66
67
 
67
68
  def teardown
68
69
  return if !@machine
69
70
 
70
- pending_fibers = @machine.pending_fibers
71
- raise "leaked fibers: #{pending_fibers}" if pending_fibers.size > 0
72
-
71
+ # pending_fibers = @machine.pending_fibers
72
+ # raise "leaked fibers: #{pending_fibers}" if pending_fibers.size > 0
73
+
73
74
  GC.start
74
75
  end
75
76
 
77
+ def scheduler_calls_tally
78
+ @scheduler.calls.map { it[:sym] }.tally
79
+ end
80
+
76
81
  def assign_port
77
82
  @@port_assign_mutex ||= Mutex.new
78
83
  @@port_assign_mutex.synchronize do
data/test/test_fiber.rb CHANGED
@@ -222,6 +222,22 @@ class WaitFibersTest < UMBaseTest
222
222
  res = machine.await_fibers(f)
223
223
  assert_equal 1, res
224
224
  end
225
+
226
+ def test_await_fibers_terminate
227
+ f1 = machine.spin { machine.sleep(1) }
228
+ f2 = machine.spin { machine.sleep(1) }
229
+ done = false
230
+ a = machine.spin do
231
+ machine.await_fibers([f1, f2])
232
+ rescue UM::Terminate
233
+ done = true
234
+ end
235
+
236
+ machine.snooze
237
+ machine.schedule(a, UM::Terminate.new)
238
+ machine.join(a)
239
+ assert_equal true, done
240
+ end
225
241
  end
226
242
 
227
243
  class ScopeTest < UMBaseTest