evt 0.1.1 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 4c45ef9b7d8ed527d6280ca836777fb2c54822e23c9414c0cdeb234561115f47
- data.tar.gz: 9543f98474500b00ae1a68009a261db1538d986e71bb7d9437bc9f190aff3c38
+ metadata.gz: 9466e3fd37da9807dc59525feebfe9252489a460a83111febf95973ff77fee56
+ data.tar.gz: 531808277a49049dc156144294346730a457d717699e091348125086dca529de
  SHA512:
- metadata.gz: d9fe5b68df1d9e72d10314744ac173bc4f99a3d420034d861b4d1685c1f36bfbe24c993c9686c5862d45926dd8ddb901759d913dfb0ab39d730c380bfa27f3d4
- data.tar.gz: cfbe9a5c112c71d96a737f29d9b4c5e1fd078b060331d999b7007a846b8e2a71f4de2eeebfc53e22dae8c33bf89afb15df5b36205b17f43f6f7e00096d3950f7
+ metadata.gz: c12a241295309fd31bf21bc4957374a636ff046b6aa1c5f670113f0cdeedb3df74f974145ba6f19f9e442d5f8d9ae373d70bfee6ae417dc09e9b620d57bcc78f
+ data.tar.gz: b3a369c7640d4770a89e7d37166930a757ca25266132fbd0544f2f0a065670de1c13578ea4cf7fc43ad1fb2c2020757b20cc972e85c23cab720bb229103fb070
@@ -0,0 +1,51 @@
+ name: CI Tests
+ on:
+   pull_request:
+   push:
+     branches:
+       - master
+   schedule:
+     - cron: '0 7 * * SUN'
+ jobs:
+   test:
+     strategy:
+       fail-fast: false
+       matrix:
+         include:
+           - { os: ubuntu-20.04, ruby: '3.0' }
+           - { os: ubuntu-20.04, ruby: ruby-head }
+           - { os: macos-11.0, ruby: '3.0' }
+           - { os: macos-11.0, ruby: ruby-head }
+           - { os: windows-2019, ruby: mingw }
+           - { os: windows-2019, ruby: mswin }
+     name: ${{ matrix.os }} ${{ matrix.ruby }}
+     runs-on: ${{ matrix.os }}
+     timeout-minutes: 5
+     steps:
+       - uses: actions/checkout@v2
+       - uses: ruby/setup-ruby@master
+         with:
+           ruby-version: ${{ matrix.ruby }}
+           bundler-cache: false
+       - name: Install Dependencies
+         run: |
+           gem install bundler
+           bundle install --jobs 4 --retry 3
+       - name: Compile
+         run: rake compile
+       - name: Test
+         run: rake
+   build:
+     runs-on: ubuntu-20.04
+     steps:
+       - uses: actions/checkout@v2
+       - uses: ruby/setup-ruby@master
+         with:
+           ruby-version: '3.0'
+           bundler-cache: false
+       - name: Install Dependencies
+         run: |
+           gem install bundler
+           bundle install --jobs 4 --retry 3
+       - name: Build
+         run: gem build evt.gemspec
data/.gitignore CHANGED
@@ -9,3 +9,4 @@
  /*.gem
  /lib/*.bundle
  /lib/*.so
+ Gemfile.lock
data/Gemfile CHANGED
@@ -3,5 +3,5 @@ source "https://rubygems.org"
  # Specify your gem's dependencies in evt.gemspec
  gemspec
 
- gem "rake", "~> 12.0"
+ gem "rake", "~> 13.0"
  gem "minitest", "~> 5.0"
data/README.md CHANGED
@@ -1,2 +1,73 @@
- # evt
- A low-level Event Handler designed for Ruby 3 Scheduler
+ # Evt
+
+ The event library designed for Ruby 3.0.
+
+ **This gem is still under development; APIs and features are not stable. Advice and PRs are highly welcome.**
+
+ [![CI Tests](https://github.com/dsh0416/evt/workflows/CI%20Tests/badge.svg)](https://github.com/dsh0416/evt/actions?query=workflow%3A%22CI+Tests%22)
+ [![Gem Version](https://badge.fury.io/rb/evt.svg)](https://rubygems.org/gems/evt)
+ [![Downloads](https://ruby-gem-downloads-badge.herokuapp.com/evt?type=total)](https://rubygems.org/gems/evt)
+
+ ## Features
+
+
+
+ ### IO Backend Support
+
+ |                 | Linux       | Windows      | macOS        | FreeBSD     |
+ | --------------- | ----------- | ------------ | ------------ | ----------- |
+ | io_uring        | ✅ (See 1)  | ❌           | ❌           | ❌          |
+ | epoll           | ✅ (See 2)  | ❌           | ❌           | ❌          |
+ | kqueue          | ❌          | ❌           | ✅ (⚠️ See 5) | ✅          |
+ | IOCP            | ❌          | ❌ (⚠️ See 3) | ❌           | ❌          |
+ | Ruby (`select`) | ✅ Fallback | ✅ (⚠️ See 4) | ✅ Fallback  | ✅ Fallback |
+
+ 1. When liburing is installed.
+ 2. When the kernel version is >= 2.6.8.
+ 3. WOULD NOT WORK until `FILE_FLAG_OVERLAPPED` is included in the I/O initialization process.
+ 4. Some I/Os cannot be made nonblocking under Windows. See [Scheduler Docs](https://docs.ruby-lang.org/en/master/doc/scheduler_md.html#label-IO).
+ 5. `kqueue` performance on Darwin is very poor. **MAY BE DISABLED IN THE FUTURE.**
+
+ ## Install
+
+ ```bash
+ gem install evt
+ ```
+
+ ## Usage
+
+ ```ruby
+ require 'evt'
+
+ rd, wr = IO.pipe
+ scheduler = Evt::Scheduler.new
+
+ Fiber.set_scheduler scheduler
+
+ Fiber.schedule do
+   message = rd.read(20)
+   puts message
+   rd.close
+ end
+
+ Fiber.schedule do
+   wr.write("Hello World")
+   wr.close
+ end
+
+ scheduler.run
+
+ # "Hello World"
+ ```
+
+ ## Roadmap
+
+ - [x] Support epoll/kqueue/select
+ - [x] Upgrade to the latest Scheduler API
+ - [x] Support io_uring
+ - [x] Support iov features of io_uring
+ - [x] Support IOCP (**NOT ENABLED YET**)
+ - [x] Set up tests with Ruby 3
+ - [ ] Support IOCP with iov features
+ - [ ] Set up more tests for production use
+ - [ ] Documentation for usage
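Editorial note: the README's usage example relies on Ruby 3's `Fiber.set_scheduler` hooks, which `Evt::Scheduler` implements natively. The same readiness-driven flow can be sketched in plain Ruby without the gem, which shows what `scheduler.run` is automating (the hash layout and loop here are illustrative, not the gem's actual internals):

```ruby
# A minimal sketch of what Evt::Scheduler#run automates: park a fiber that
# wants to read, and resume it once IO.select reports its IO as readable.
require 'fiber'

rd, wr = IO.pipe
readable = {} # io => fiber waiting on readability

reader = Fiber.new do
  readable[rd] = Fiber.current
  Fiber.yield # park until the event loop resumes us
  message = rd.read_nonblock(20)
  rd.close
  message
end

result = nil
reader.resume # runs until the fiber parks itself
wr.write('Hello World')
wr.close

until readable.empty?
  ready, = IO.select(readable.keys, [], [], 1)
  ready&.each { |io| result = readable.delete(io).resume }
end

puts result # => Hello World
```

The gem replaces the `IO.select` call above with whichever native backend (io_uring, epoll, kqueue) is available.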
@@ -10,7 +10,7 @@ Gem::Specification.new do |spec|
  spec.description = "A low-level Event Handler designed for Ruby 3 Scheduler for better performance"
  spec.homepage = "https://github.com/dsh0416/evt"
  spec.license = 'BSD-3-Clause'
- spec.required_ruby_version = '>= 2.7.1'
+ spec.required_ruby_version = '>= 2.8.0.dev'
 
  spec.metadata["homepage_uri"] = spec.homepage
  spec.metadata["source_code_uri"] = "https://github.com/dsh0416/evt"
@@ -18,7 +18,7 @@ Gem::Specification.new do |spec|
  # Specify which files should be added to the gem when it is released.
  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
  spec.files = Dir.chdir(File.expand_path('..', __FILE__)) do
-   `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
+   `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features|.vscode)/}) }
  end
  spec.require_paths = ["lib"]
  spec.extensions = ['ext/evt/extconf.rb']
@@ -0,0 +1,91 @@
+ #ifndef EPOLL_H
+ #define EPOLL_H
+ #include "evt.h"
+
+ #if HAVE_SYS_EPOLL_H
+ VALUE method_scheduler_init(VALUE self) {
+     rb_iv_set(self, "@epfd", INT2NUM(epoll_create(1))); // Size of epoll is ignored after Linux 2.6.8.
+     return Qnil;
+ }
+
+ VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
+     struct epoll_event event;
+     ID id_fileno = rb_intern("fileno");
+     int epfd = NUM2INT(rb_iv_get(self, "@epfd"));
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+     int ruby_interest = NUM2INT(interest);
+     int readable = NUM2INT(rb_const_get(rb_cIO, rb_intern("READABLE")));
+     int writable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WRITABLE")));
+
+     if (ruby_interest & readable) {
+         event.events |= EPOLLIN;
+     }
+
+     if (ruby_interest & writable) {
+         event.events |= EPOLLOUT;
+     }
+
+     event.data.ptr = (void*) io;
+
+     epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &event);
+     return Qnil;
+ }
+
+ VALUE method_scheduler_deregister(VALUE self, VALUE io) {
+     ID id_fileno = rb_intern("fileno");
+     int epfd = NUM2INT(rb_iv_get(self, "@epfd"));
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+     epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL); // Require Linux 2.6.9 for NULL event.
+     return Qnil;
+ }
+
+ VALUE method_scheduler_wait(VALUE self) {
+     int n, epfd, i, event_flag, timeout;
+     VALUE next_timeout, obj_io, readables, writables, result;
+     ID id_next_timeout = rb_intern("next_timeout");
+     ID id_push = rb_intern("push");
+
+     epfd = NUM2INT(rb_iv_get(self, "@epfd"));
+     next_timeout = rb_funcall(self, id_next_timeout, 0);
+     readables = rb_ary_new();
+     writables = rb_ary_new();
+
+     if (next_timeout == Qnil) {
+         timeout = -1;
+     } else {
+         timeout = NUM2INT(next_timeout);
+     }
+
+     struct epoll_event* events = (struct epoll_event*) xmalloc(sizeof(struct epoll_event) * EPOLL_MAX_EVENTS);
+
+     n = epoll_wait(epfd, events, EPOLL_MAX_EVENTS, timeout);
+     if (n < 0) {
+         rb_raise(rb_eIOError, "unable to call epoll_wait");
+     }
+
+     for (i = 0; i < n; i++) {
+         event_flag = events[i].events;
+         if (event_flag & EPOLLIN) {
+             obj_io = (VALUE) events[i].data.ptr;
+             rb_funcall(readables, id_push, 1, obj_io);
+         }
+
+         if (event_flag & EPOLLOUT) {
+             obj_io = (VALUE) events[i].data.ptr;
+             rb_funcall(writables, id_push, 1, obj_io);
+         }
+     }
+
+     result = rb_ary_new2(2);
+     rb_ary_store(result, 0, readables);
+     rb_ary_store(result, 1, writables);
+
+     xfree(events);
+     return result;
+ }
+
+ VALUE method_scheduler_backend(VALUE klass) {
+     return rb_str_new_cstr("epoll");
+ }
+ #endif
+ #endif
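Editorial note: the registration function above is a straight bitmask translation from Ruby's interest constants to `EPOLLIN`/`EPOLLOUT`. A hedged pure-Ruby rendering of that mapping (the constant values below are placeholders for illustration, not Ruby's or Linux's actual ones):

```ruby
# Sketch of method_scheduler_register's interest translation.
# All numeric values here are illustrative stand-ins.
IO_READABLE = 1   # stands in for IO::READABLE
IO_WRITABLE = 2   # stands in for IO::WRITABLE
EPOLLIN     = 0x001
EPOLLOUT    = 0x004

def epoll_events_for(interest)
  events = 0 # start from a zeroed mask before OR-ing flags in
  events |= EPOLLIN  if (interest & IO_READABLE) != 0
  events |= EPOLLOUT if (interest & IO_WRITABLE) != 0
  events
end

p epoll_events_for(IO_READABLE)               # => 1
p epoll_events_for(IO_READABLE | IO_WRITABLE) # => 5
```

Note the explicit zeroing: the C code declares `struct epoll_event event;` on the stack and then OR-s into `event.events`, so zero-initializing it first would be the safer pattern.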
@@ -1,122 +1,34 @@
- #include <ruby.h>
+ #ifndef EVT_C
+ #define EVT_C
 
- VALUE Scheduler = Qnil;
-
- void Init_evt_ext();
- VALUE method_scheduler_init(VALUE self);
- VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest);
- VALUE method_scheduler_deregister(VALUE self, VALUE io);
- VALUE method_scheduler_wait(VALUE self);
+ #include "evt.h"
 
  void Init_evt_ext()
  {
-     Scheduler = rb_define_class("Scheduler", rb_cObject);
-     rb_define_method(Scheduler, "init_selector", method_scheduler_init, 0);
-     rb_define_method(Scheduler, "register", method_scheduler_register, 2);
-     rb_define_method(Scheduler, "deregister", method_scheduler_deregister, 1);
-     rb_define_method(Scheduler, "wait", method_scheduler_wait, 0);
- }
-
-
- #if defined(__linux__) // TODO: Do more checks for using epoll
- #include <sys/epoll.h>
- #define EPOLL_MAX_EVENTS 65535
-
- VALUE method_scheduler_init(VALUE self) {
-     rb_iv_set(self, "@epfd", INT2NUM(epoll_create(1))); // Size of epoll is ignored after Linux 2.6.8.
-     return Qnil;
- }
-
- VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
-     struct epoll_event event;
-     ID id_fileno = rb_intern("fileno");
-     int epfd = NUM2INT(rb_iv_get(self, "@epfd"));
-     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
-     int ruby_interest = NUM2INT(interest);
-     int readable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WAIT_READABLE")));
-     int writable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WAIT_WRITABLE")));
-
-     if (ruby_interest & readable) {
-         event.events |= EPOLLIN;
-     } else if (ruby_interest & writable) {
-         event.events |= EPOLLOUT;
-     }
-     event.data.ptr = (void*) io;
-
-     epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &event);
-     return Qnil;
- }
-
- VALUE method_scheduler_deregister(VALUE self, VALUE io) {
-     ID id_fileno = rb_intern("fileno");
-     int epfd = NUM2INT(rb_iv_get(self, "@epfd"));
-     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
-     epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL); // Require Linux 2.6.9 for NULL event.
-     return Qnil;
- }
-
- VALUE method_scheduler_wait(VALUE self) {
-     int n, epfd, i, event_flag;
-     VALUE next_timeout, obj_io, readables, writables, result;
-     ID id_next_timeout = rb_intern("next_timeout");
-     ID id_push = rb_intern("push");
-
-     epfd = NUM2INT(rb_iv_get(self, "@epfd"));
-     next_timeout = rb_funcall(self, id_next_timeout, 0);
-     readables = rb_ary_new();
-     writables = rb_ary_new();
-
-     struct epoll_event* events = (struct epoll_event*) xmalloc(sizeof(struct epoll_event) * EPOLL_MAX_EVENTS);
-
-     n = epoll_wait(epfd, events, EPOLL_MAX_EVENTS, next_timeout);
-     // Check if n > 0
-
-     for (i = 0; i < n; i++) {
-         event_flag = events[i].events;
-         if (event_flag & EPOLLIN) {
-             obj_io = (VALUE) events[i].data.ptr;
-             rb_funcall(readables, id_push, 1, obj_io);
-         } else if (event_flag & EPOLLOUT) {
-             obj_io = (VALUE) events[i].data.ptr;
-             rb_funcall(writables, id_push, 1, obj_io);
-         }
-     }
-
-     result = rb_ary_new2(2);
-     rb_ary_store(result, 0, readables);
-     rb_ary_store(result, 1, writables);
-
-     xfree(events);
-     return result;
- }
- #else
- // Fallback to IO.select
- VALUE method_scheduler_init(VALUE self) {
-     return Qnil;
- }
-
- VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
-     return Qnil;
- }
-
- VALUE method_scheduler_deregister(VALUE self, VALUE io) {
-     return Qnil;
- }
-
- VALUE method_scheduler_wait(VALUE self) {
-     // return IO.select(@readable.keys, @writable.keys, [], next_timeout)
-     VALUE readable, writable, readable_keys, writable_keys, next_timeout;
-     ID id_select = rb_intern("select");
-     ID id_keys = rb_intern("keys");
-     ID id_next_timeout = rb_intern("next_timeout");
-
-     readable = rb_iv_get(self, "@readable");
-     writable = rb_iv_get(self, "@writable");
-
-     readable_keys = rb_funcall(readable, id_keys, 0);
-     writable_keys = rb_funcall(writable, id_keys, 0);
-     next_timeout = rb_funcall(self, id_next_timeout, 0);
-
-     return rb_funcall(rb_cIO, id_select, 4, readable_keys, writable_keys, rb_ary_new(), next_timeout);
- }
- #endif
+     Evt = rb_define_module("Evt");
+     Scheduler = rb_define_class_under(Evt, "Scheduler", rb_cObject);
+     Payload = rb_define_class_under(Scheduler, "Payload", rb_cObject);
+     Fiber = rb_define_class("Fiber", rb_cObject);
+     rb_define_singleton_method(Scheduler, "backend", method_scheduler_backend, 0);
+     rb_define_method(Scheduler, "init_selector", method_scheduler_init, 0);
+     rb_define_method(Scheduler, "register", method_scheduler_register, 2);
+     rb_define_method(Scheduler, "deregister", method_scheduler_deregister, 1);
+     rb_define_method(Scheduler, "wait", method_scheduler_wait, 0);
+
+ #if HAVE_LIBURING_H
+     rb_define_method(Scheduler, "io_read", method_scheduler_io_read, 4);
+     rb_define_method(Scheduler, "io_write", method_scheduler_io_write, 4);
+ #endif
+ }
+
+ #if HAVE_LIBURING_H
+ #include "uring.h"
+ #elif HAVE_SYS_EPOLL_H
+ #include "epoll.h"
+ #elif HAVE_SYS_EVENT_H
+ #include "kqueue.h"
+ #elif HAVE_WINDOWS_H
+ #include "select.h"
+ // #include "iocp.h"
+ #endif
+ #endif
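Editorial note: the include chain at the end of `evt.c` selects the first available backend in priority order (io_uring, then epoll, then kqueue, then the `select` fallback). That selection logic can be sketched in Ruby (the hash below is illustrative; the real decision is made at compile time by the `HAVE_*` macros from extconf.rb):

```ruby
# Sketch of the compile-time backend priority in evt.c's include chain.
# Maps probed header -> the string the backend's method_scheduler_backend returns.
BACKEND_BY_HEADER = {
  'liburing.h'  => 'liburing',
  'sys/epoll.h' => 'epoll',
  'sys/event.h' => 'kqueue',
}.freeze

def backend_for(available_headers)
  BACKEND_BY_HEADER.each do |header, name|
    return name if available_headers.include?(header)
  end
  'ruby' # the IO.select fallback (iocp is present but commented out)
end

p backend_for(['sys/epoll.h'])               # => "epoll"
p backend_for(['liburing.h', 'sys/epoll.h']) # => "liburing"
p backend_for([])                            # => "ruby"
```

Because the chain is `#if`/`#elif`, only one backend is ever compiled in, even when several headers are present.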
@@ -0,0 +1,82 @@
+ #ifndef EVT_H
+ #define EVT_H
+
+ #include <ruby.h>
+
+ VALUE Evt = Qnil;
+ VALUE Scheduler = Qnil;
+ VALUE Payload = Qnil;
+ VALUE Fiber = Qnil;
+
+ void Init_evt_ext();
+ VALUE method_scheduler_init(VALUE self);
+ VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest);
+ VALUE method_scheduler_deregister(VALUE self, VALUE io);
+ VALUE method_scheduler_wait(VALUE self);
+ VALUE method_scheduler_backend(VALUE klass);
+ #if HAVE_LIBURING_H
+ VALUE method_scheduler_io_read(VALUE self, VALUE io, VALUE buffer, VALUE offset, VALUE length);
+ VALUE method_scheduler_io_write(VALUE self, VALUE io, VALUE buffer, VALUE offset, VALUE length);
+ #endif
+
+ #if HAVE_WINDOWS_H
+ VALUE method_scheduler_io_read(VALUE io, VALUE buffer, VALUE offset, VALUE length);
+ VALUE method_scheduler_io_write(VALUE io, VALUE buffer, VALUE offset, VALUE length);
+ #endif
+
+ #if HAVE_LIBURING_H
+ #include <liburing.h>
+
+ #define URING_ENTRIES 64
+ #define URING_MAX_EVENTS 64
+
+ struct uring_data {
+     bool is_poll;
+     short poll_mask;
+     VALUE io;
+ };
+
+ void uring_payload_free(void* data);
+ size_t uring_payload_size(const void* data);
+
+ static const rb_data_type_t type_uring_payload = {
+     .wrap_struct_name = "uring_payload",
+     .function = {
+         .dmark = NULL,
+         .dfree = uring_payload_free,
+         .dsize = uring_payload_size,
+     },
+     .data = NULL,
+     .flags = RUBY_TYPED_FREE_IMMEDIATELY,
+ };
+ #elif HAVE_SYS_EPOLL_H
+ #include <sys/epoll.h>
+ #define EPOLL_MAX_EVENTS 64
+ #elif HAVE_SYS_EVENT_H
+ #include <sys/event.h>
+ #define KQUEUE_MAX_EVENTS 64
+ #elif HAVE_WINDOWS_H
+ // #include <Windows.h>
+ // #define IOCP_MAX_EVENTS 64
+
+ // struct iocp_data {
+ //     VALUE io;
+ //     bool is_poll;
+ //     int interest;
+ // };
+
+ // void iocp_payload_free(void* data);
+ // size_t iocp_payload_size(const void* data);
+
+ // static const rb_data_type_t type_iocp_payload = {
+ //     .wrap_struct_name = "iocp_payload",
+ //     .function = {
+ //         .dmark = NULL,
+ //         .dfree = iocp_payload_free,
+ //         .dsize = iocp_payload_size,
+ //     },
+ //     .data = NULL,
+ //     .flags = RUBY_TYPED_FREE_IMMEDIATELY,
+ // };
+ #endif
+ #endif
@@ -1,5 +1,12 @@
  require 'mkmf'
  extension_name = 'evt_ext'
- create_header
  dir_config(extension_name)
- create_makefile(extension_name)
+
+ have_library('uring')
+ have_header('liburing.h')
+ have_header('sys/epoll.h')
+ have_header('sys/event.h')
+ have_header('Windows.h')
+
+ create_header
+ create_makefile(extension_name)
@@ -0,0 +1,126 @@
+ #ifndef IOCP_H
+ #define IOCP_H
+ #include "evt.h"
+
+ #if HAVE_WINDOWS_H
+ void iocp_payload_free(void* data) {
+     CloseHandle((HANDLE) data);
+ }
+
+ size_t iocp_payload_size(const void* data) {
+     return sizeof(HANDLE);
+ }
+
+ VALUE method_scheduler_init(VALUE self) {
+     HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
+     rb_iv_set(self, "@iocp", TypedData_Wrap_Struct(Payload, &type_iocp_payload, iocp));
+     return Qnil;
+ }
+
+ VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
+     HANDLE iocp;
+     VALUE iocp_obj = rb_iv_get(self, "@iocp");
+     struct iocp_data* data;
+     TypedData_Get_Struct(iocp_obj, HANDLE, &type_iocp_payload, iocp);
+     int fd = NUM2INT(rb_funcallv(io, rb_intern("fileno"), 0, 0));
+     HANDLE io_handler = (HANDLE)rb_w32_get_osfhandle(fd);
+
+     int ruby_interest = NUM2INT(interest);
+     int readable = NUM2INT(rb_const_get(rb_cIO, rb_intern("READABLE")));
+     int writable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WRITABLE")));
+     data = (struct iocp_data*) xmalloc(sizeof(struct iocp_data));
+     data->io = io;
+     data->is_poll = true;
+     data->interest = 0;
+
+     if (ruby_interest & readable) {
+         interest |= readable;
+     }
+
+     if (ruby_interest & writable) {
+         interest |= writable;
+     }
+
+     HANDLE res = CreateIoCompletionPort(io_handler, iocp, (ULONG_PTR) data, 0);
+     printf("IO at address: 0x%08x\n", (void *)data);
+
+     return Qnil;
+ }
+
+ VALUE method_scheduler_deregister(VALUE self, VALUE io) {
+     return Qnil;
+ }
+
+ VALUE method_scheduler_wait(VALUE self) {
+     ID id_next_timeout = rb_intern("next_timeout");
+     ID id_push = rb_intern("push");
+     VALUE iocp_obj = rb_iv_get(self, "@iocp");
+     VALUE next_timeout = rb_funcall(self, id_next_timeout, 0);
+
+     int readable = NUM2INT(rb_const_get(rb_cIO, rb_intern("READABLE")));
+     int writable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WRITABLE")));
+
+     HANDLE iocp;
+     OVERLAPPED_ENTRY lpCompletionPortEntries[IOCP_MAX_EVENTS];
+     ULONG ulNumEntriesRemoved;
+     TypedData_Get_Struct(iocp_obj, HANDLE, &type_iocp_payload, iocp);
+
+     DWORD timeout;
+     if (next_timeout == Qnil) {
+         timeout = 0x5000;
+     } else {
+         timeout = NUM2INT(next_timeout) * 1000; // seconds to milliseconds
+     }
+
+     DWORD NumberOfBytesTransferred;
+     LPOVERLAPPED pOverlapped;
+     ULONG_PTR CompletionKey;
+
+     BOOL res = GetQueuedCompletionStatus(iocp, &NumberOfBytesTransferred, &CompletionKey, &pOverlapped, timeout);
+     // BOOL res = GetQueuedCompletionStatusEx(
+     //     iocp, lpCompletionPortEntries, IOCP_MAX_EVENTS, &ulNumEntriesRemoved, timeout, TRUE);
+
+     VALUE result = rb_ary_new2(2);
+
+     VALUE readables = rb_ary_new();
+     VALUE writables = rb_ary_new();
+
+     rb_ary_store(result, 0, readables);
+     rb_ary_store(result, 1, writables);
+
+     if (!res) {
+         return result;
+     }
+
+     printf("--------- Received! ---------\n");
+     printf("Received IO at address: 0x%08x\n", (void *)CompletionKey);
+     printf("dwNumberOfBytesTransferred: %lu\n", NumberOfBytesTransferred);
+
+     // if (ulNumEntriesRemoved > 0) {
+     //     printf("Entries: %ld\n", ulNumEntriesRemoved);
+     // }
+
+     // for (ULONG i = 0; i < ulNumEntriesRemoved; i++) {
+     //     OVERLAPPED_ENTRY entry = lpCompletionPortEntries[i];
+
+     //     struct iocp_data *data = (struct iocp_data*) entry.lpCompletionKey;
+
+     //     int interest = data->interest;
+     //     VALUE obj_io = data->io;
+     //     if (interest & readable) {
+     //         rb_funcall(readables, id_push, 1, obj_io);
+     //     } else if (interest & writable) {
+     //         rb_funcall(writables, id_push, 1, obj_io);
+     //     }
+
+     //     xfree(data);
+     // }
+
+     return result;
+ }
+
+ VALUE method_scheduler_backend(VALUE klass) {
+     return rb_str_new_cstr("iocp");
+ }
+ #endif
+ #endif
@@ -0,0 +1,96 @@
+ #ifndef KQUEUE_H
+ #define KQUEUE_H
+ #include "evt.h"
+
+ #if HAVE_SYS_EVENT_H
+
+ VALUE method_scheduler_init(VALUE self) {
+     rb_iv_set(self, "@kq", INT2NUM(kqueue()));
+     return Qnil;
+ }
+
+ VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
+     struct kevent event;
+     u_short event_flags = 0;
+     ID id_fileno = rb_intern("fileno");
+     int kq = NUM2INT(rb_iv_get(self, "@kq"));
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+     int ruby_interest = NUM2INT(interest);
+     int readable = NUM2INT(rb_const_get(rb_cIO, rb_intern("READABLE")));
+     int writable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WRITABLE")));
+
+     if (ruby_interest & readable) {
+         event_flags |= EVFILT_READ;
+     }
+
+     if (ruby_interest & writable) {
+         event_flags |= EVFILT_WRITE;
+     }
+
+     EV_SET(&event, fd, event_flags, EV_ADD|EV_ENABLE, 0, 0, (void*) io);
+     kevent(kq, &event, 1, NULL, 0, NULL); // TODO: Check the return value
+     return Qnil;
+ }
+
+ VALUE method_scheduler_deregister(VALUE self, VALUE io) {
+     struct kevent event;
+     ID id_fileno = rb_intern("fileno");
+     int kq = NUM2INT(rb_iv_get(self, "@kq"));
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+     EV_SET(&event, fd, 0, EV_DELETE, 0, 0, (void*) io);
+     kevent(kq, &event, 1, NULL, 0, NULL); // TODO: Check the return value
+     return Qnil;
+ }
+
+ VALUE method_scheduler_wait(VALUE self) {
+     int n, kq, i;
+     u_short event_flags = 0;
+
+     struct kevent* events; // Event Triggered
+     struct timespec timeout;
+     VALUE next_timeout, obj_io, readables, writables, result;
+     ID id_next_timeout = rb_intern("next_timeout");
+     ID id_push = rb_intern("push");
+
+     kq = NUM2INT(rb_iv_get(self, "@kq"));
+     next_timeout = rb_funcall(self, id_next_timeout, 0);
+     readables = rb_ary_new();
+     writables = rb_ary_new();
+
+     events = (struct kevent*) xmalloc(sizeof(struct kevent) * KQUEUE_MAX_EVENTS);
+
+     if (next_timeout == Qnil || NUM2INT(next_timeout) == -1) {
+         n = kevent(kq, NULL, 0, events, KQUEUE_MAX_EVENTS, NULL);
+     } else {
+         timeout.tv_sec = NUM2INT(next_timeout) / 1000;
+         timeout.tv_nsec = NUM2INT(next_timeout) % 1000 * 1000 * 1000;
+         n = kevent(kq, NULL, 0, events, KQUEUE_MAX_EVENTS, &timeout);
+     }
+
+     // TODO: Check if n >= 0
+     for (i = 0; i < n; i++) {
+         event_flags = events[i].filter;
+         if (event_flags & EVFILT_READ) {
+             obj_io = (VALUE) events[i].udata;
+             rb_funcall(readables, id_push, 1, obj_io);
+         }
+
+         if (event_flags & EVFILT_WRITE) {
+             obj_io = (VALUE) events[i].udata;
+             rb_funcall(writables, id_push, 1, obj_io);
+         }
+     }
+
+     result = rb_ary_new2(2);
+     rb_ary_store(result, 0, readables);
+     rb_ary_store(result, 1, writables);
+
+     xfree(events);
+     return result;
+ }
+
+ VALUE method_scheduler_backend(VALUE klass) {
+     return rb_str_new_cstr("kqueue");
+ }
+ #endif
+ #endif
@@ -0,0 +1,36 @@
+ #ifndef SELECT_H
+ #define SELECT_H
+ #include "evt.h"
+
+ VALUE method_scheduler_init(VALUE self) {
+     return Qnil;
+ }
+
+ VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
+     return Qnil;
+ }
+
+ VALUE method_scheduler_deregister(VALUE self, VALUE io) {
+     return Qnil;
+ }
+
+ VALUE method_scheduler_wait(VALUE self) {
+     // return IO.select(@readable.keys, @writable.keys, [], next_timeout)
+     VALUE readable, writable, readable_keys, writable_keys, next_timeout;
+     ID id_select = rb_intern("select");
+     ID id_next_timeout = rb_intern("next_timeout");
+
+     readable = rb_iv_get(self, "@readable");
+     writable = rb_iv_get(self, "@writable");
+
+     readable_keys = rb_funcall(readable, rb_intern("keys"), 0);
+     writable_keys = rb_funcall(writable, rb_intern("keys"), 0);
+     next_timeout = rb_funcall(self, id_next_timeout, 0);
+
+     return rb_funcall(rb_cIO, id_select, 4, readable_keys, writable_keys, rb_ary_new(), next_timeout);
+ }
+
+ VALUE method_scheduler_backend(VALUE klass) {
+     return rb_str_new_cstr("ruby");
+ }
+ #endif
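Editorial note: the select backend above is a thin C wrapper over exactly the call in its own comment, `IO.select(@readable.keys, @writable.keys, [], next_timeout)`. The equivalent call in plain Ruby (hash layout inferred from the scheduler code; the fiber placeholders are illustrative):

```ruby
# The scheduler keeps io => fiber hashes; the select backend only needs
# the IO keys and hands them straight to IO.select.
rd, wr = IO.pipe
readable = { rd => :reader_fiber }
writable = { wr => :writer_fiber }

wr.write('x') # makes rd readable; the pipe buffer keeps wr writable

ready_read, ready_write, = IO.select(readable.keys, writable.keys, [], 1)
p ready_read.include?(rd)  # => true
p ready_write.include?(wr) # => true
```

This is also why the README flags the fallback's limits: `select` scales poorly with many descriptors, which is what the native backends avoid.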
@@ -0,0 +1,199 @@
+ #ifndef URING_H
+ #define URING_H
+ #include "evt.h"
+ #if HAVE_LIBURING_H
+ void uring_payload_free(void* data) {
+     // TODO: free the uring_data structs if the payload is freed before all IO responds
+     io_uring_queue_exit((struct io_uring*) data);
+     xfree(data);
+ }
+
+ size_t uring_payload_size(const void* data) {
+     return sizeof(struct io_uring);
+ }
+
+ VALUE method_scheduler_init(VALUE self) {
+     int ret;
+     struct io_uring* ring;
+     ring = xmalloc(sizeof(struct io_uring));
+     ret = io_uring_queue_init(URING_ENTRIES, ring, 0);
+     if (ret < 0) {
+         rb_raise(rb_eIOError, "unable to initialize io_uring");
+     }
+     rb_iv_set(self, "@ring", TypedData_Wrap_Struct(Payload, &type_uring_payload, ring));
+     return Qnil;
+ }
+
+ VALUE method_scheduler_register(VALUE self, VALUE io, VALUE interest) {
+     VALUE ring_obj;
+     struct io_uring* ring;
+     struct io_uring_sqe *sqe;
+     struct uring_data *data;
+     short poll_mask = 0;
+     ID id_fileno = rb_intern("fileno");
+
+     ring_obj = rb_iv_get(self, "@ring");
+     TypedData_Get_Struct(ring_obj, struct io_uring, &type_uring_payload, ring);
+     sqe = io_uring_get_sqe(ring);
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+
+     int ruby_interest = NUM2INT(interest);
+     int readable = NUM2INT(rb_const_get(rb_cIO, rb_intern("READABLE")));
+     int writable = NUM2INT(rb_const_get(rb_cIO, rb_intern("WRITABLE")));
+
+     if (ruby_interest & readable) {
+         poll_mask |= POLL_IN;
+     }
+
+     if (ruby_interest & writable) {
+         poll_mask |= POLL_OUT;
+     }
+
+     data = (struct uring_data*) xmalloc(sizeof(struct uring_data));
+     data->is_poll = true;
+     data->io = io;
+     data->poll_mask = poll_mask;
+
+     io_uring_prep_poll_add(sqe, fd, poll_mask);
+     io_uring_sqe_set_data(sqe, data);
+     io_uring_submit(ring);
+     return Qnil;
+ }
+
+ VALUE method_scheduler_deregister(VALUE self, VALUE io) {
+     // io_uring runs under oneshot mode. No need to deregister.
+     return Qnil;
+ }
+
+ VALUE method_scheduler_wait(VALUE self) {
+     struct io_uring* ring;
+     struct io_uring_cqe *cqes[URING_MAX_EVENTS];
+     struct uring_data *data;
+     VALUE next_timeout, obj_io, readables, writables, iovs, result;
+     unsigned ret, i;
+     double time = 0.0;
+     short poll_events;
+
+     ID id_next_timeout = rb_intern("next_timeout");
+     ID id_push = rb_intern("push");
+     ID id_sleep = rb_intern("sleep");
+
+     next_timeout = rb_funcall(self, id_next_timeout, 0);
+     readables = rb_ary_new();
+     writables = rb_ary_new();
+     iovs = rb_ary_new();
+
+     TypedData_Get_Struct(rb_iv_get(self, "@ring"), struct io_uring, &type_uring_payload, ring);
+     ret = io_uring_peek_batch_cqe(ring, cqes, URING_MAX_EVENTS);
+
+     for (i = 0; i < ret; i++) {
+         data = (struct uring_data*) io_uring_cqe_get_data(cqes[i]);
+         poll_events = data->poll_mask;
+         obj_io = data->io;
+         if (data->is_poll) {
+             if (poll_events & POLL_IN) {
+                 rb_funcall(readables, id_push, 1, obj_io);
+             }
+
+             if (poll_events & POLL_OUT) {
+                 rb_funcall(writables, id_push, 1, obj_io);
+             }
+         } else {
+             rb_funcall(iovs, id_push, 1, obj_io);
+         }
+     }
+
+     if (ret == 0) {
+         if (next_timeout != Qnil && NUM2INT(next_timeout) != -1) {
+             // sleep for the scheduler's timeout, converted from milliseconds
+             time = NUM2INT(next_timeout) / 1000.0;
+             rb_funcall(rb_mKernel, id_sleep, 1, rb_float_new(time));
+         } else {
+             rb_funcall(rb_mKernel, id_sleep, 1, rb_float_new(0.001)); // To avoid infinite loop
+         }
+     }
+
+     result = rb_ary_new2(3);
+     rb_ary_store(result, 0, readables);
+     rb_ary_store(result, 1, writables);
+     rb_ary_store(result, 2, iovs);
+
+     return result;
+ }
+
+ VALUE method_scheduler_io_read(VALUE self, VALUE io, VALUE buffer, VALUE offset, VALUE length) {
+     struct io_uring* ring;
+     struct uring_data *data;
+     char* read_buffer;
+     ID id_fileno = rb_intern("fileno");
+     // @iov[io] = Fiber.current
+     VALUE iovs = rb_iv_get(self, "@iovs");
+     rb_hash_aset(iovs, io, rb_funcall(Fiber, rb_intern("current"), 0));
+     // register
+     VALUE ring_obj = rb_iv_get(self, "@ring");
+     TypedData_Get_Struct(ring_obj, struct io_uring, &type_uring_payload, ring);
+     struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+
+     read_buffer = (char*) xmalloc(NUM2SIZET(length));
+     struct iovec iov = {
+         .iov_base = read_buffer,
+         .iov_len = NUM2SIZET(length),
+     };
+
+     data = (struct uring_data*) xmalloc(sizeof(struct uring_data));
+     data->is_poll = false;
+     data->io = io;
+     data->poll_mask = 0;
+
+     io_uring_prep_readv(sqe, fd, &iov, 1, NUM2SIZET(offset));
+     io_uring_sqe_set_data(sqe, data);
+     io_uring_submit(ring);
+
+     VALUE result = rb_str_new(read_buffer, strlen(read_buffer));
+     if (buffer != Qnil) {
+         rb_str_append(buffer, result);
+     }
+
+     rb_funcall(Fiber, rb_intern("yield"), 0); // Fiber.yield
+     return result;
+ }
+
+ VALUE method_scheduler_io_write(VALUE self, VALUE io, VALUE buffer, VALUE offset, VALUE length) {
+     struct io_uring* ring;
+     struct uring_data *data;
+     char* write_buffer;
+     ID id_fileno = rb_intern("fileno");
+     // @iov[io] = Fiber.current
+     VALUE iovs = rb_iv_get(self, "@iovs");
+     rb_hash_aset(iovs, io, rb_funcall(Fiber, rb_intern("current"), 0));
+     // register
+     VALUE ring_obj = rb_iv_get(self, "@ring");
+     TypedData_Get_Struct(ring_obj, struct io_uring, &type_uring_payload, ring);
+     struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
+     int fd = NUM2INT(rb_funcall(io, id_fileno, 0));
+
+     write_buffer = StringValueCStr(buffer);
+     struct iovec iov = {
+         .iov_base = write_buffer,
+         .iov_len = NUM2SIZET(length),
+     };
+
+     data = (struct uring_data*) xmalloc(sizeof(struct uring_data));
+     data->is_poll = false;
+     data->io = io;
+     data->poll_mask = 0;
+
+     io_uring_prep_writev(sqe, fd, &iov, 1, NUM2SIZET(offset));
+     io_uring_sqe_set_data(sqe, data);
+     io_uring_submit(ring);
+     rb_funcall(Fiber, rb_intern("yield"), 0); // Fiber.yield
+     return length;
+ }
+
+ VALUE method_scheduler_backend(VALUE klass) {
+     return rb_str_new_cstr("liburing");
+ }
+
+ #endif
+ #endif
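Editorial note: the uring wait path sleeps in Ruby when `io_uring_peek_batch_cqe` returns no completions, converting the scheduler's millisecond timeout to seconds (the original used the raw `VALUE` in the division; the intended arithmetic, sketched here, is a plain ms-to-s conversion with a small floor to avoid a busy loop):

```ruby
# Sketch of the intended sleep-duration computation in the uring backend.
# next_timeout_ms mirrors Evt::Scheduler#next_timeout: nil or -1 means
# "no pending timer", in which case a minimal 1 ms sleep avoids spinning.
def sleep_seconds(next_timeout_ms)
  return 0.001 if next_timeout_ms.nil? || next_timeout_ms == -1
  next_timeout_ms / 1000.0
end

p sleep_seconds(nil)  # => 0.001
p sleep_seconds(1500) # => 1.5
```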
data/lib/evt.rb CHANGED
@@ -1,8 +1,8 @@
+ # frozen_string_literal: true
+
  require_relative 'evt/version'
- require 'evt_ext'
  require_relative 'evt/scheduler'
+ require_relative 'evt_ext'
 
  module Evt
-   class Error < StandardError; end
-   # Your code goes here...
  end
data/lib/evt/scheduler.rb CHANGED
@@ -2,62 +2,55 @@
 
  require 'fiber'
  require 'socket'
+ require 'io/nonblock'
 
- begin
-   require 'io/nonblock'
- rescue LoadError
-   # Ignore.
- end
-
- class IO
-   WAIT_READABLE = 1
-   WAIT_WRITABLE = 3
- end
-
- class Scheduler
+ class Evt::Scheduler
    def initialize
      @readable = {}
      @writable = {}
+     @iovs = {}
      @waiting = {}
-     @blocking = []
+
+     @lock = Mutex.new
+     @locking = 0
+     @ready = []
 
      @ios = ObjectSpace::WeakMap.new
      init_selector
    end
 
-   attr :readable
-   attr :writable
-   attr :waiting
-   attr :blocking
+   attr_reader :readable
+   attr_reader :writable
+   attr_reader :waiting
 
    def next_timeout
      _fiber, timeout = @waiting.min_by{|key, value| value}
 
      if timeout
        offset = timeout - current_time
-
-       if offset < 0
-         return 0
-       else
-         return offset
-       end
+       offset < 0 ? 0 : offset
      end
    end
 
    def run
-     while @readable.any? or @writable.any? or @waiting.any?
-       # Can only handle file descriptors up to 1024...
-       readable, writable = self.wait
-
-       # puts "readable: #{readable}" if readable&.any?
-       # puts "writable: #{writable}" if writable&.any?
+     while @readable.any? or @writable.any? or @waiting.any? or @iovs.any? or @locking.positive?
+       readable, writable, iovs = self.wait
 
        readable&.each do |io|
-         @readable[io]&.resume
+         fiber = @readable.delete(io)
+         fiber&.resume
        end
 
        writable&.each do |io|
-         @writable[io]&.resume
+         fiber = @writable.delete(io)
+         fiber&.resume
+       end
+
+       unless iovs.nil?
+         iovs&.each do |io|
+           fiber = @iovs.delete(io)
+           fiber&.resume
+         end
        end
 
        if @waiting.any?
@@ -73,98 +66,56 @@ class Scheduler
            end
          end
        end
-     end
-   end
-
-   def for_fd(fd)
-     @ios[fd] ||= ::IO.for_fd(fd, autoclose: false)
-   end
 
-   def wait_readable(io)
-     @readable[io] = Fiber.current
-     self.register(io, IO::WAIT_READABLE)
-     Fiber.yield
-     @readable.delete(io)
-     self.deregister(io)
-     return true
-   end
+       if @ready.any?
+         ready = nil
 
-   def wait_readable_fd(fd)
-     wait_readable(
-       for_fd(fd)
-     )
-   end
-
-   def wait_writable(io)
-     @writable[io] = Fiber.current
-     self.register(io, IO::WAIT_READABLE)
-     Fiber.yield
-     @writable.delete(io)
-     self.deregister(io)
-     return true
-   end
+         @lock.synchronize do
+           ready, @ready = @ready, []
+         end
 
-   def wait_writable_fd(fd)
-     wait_writable(
-       for_fd(fd)
-     )
+         ready.each do |fiber|
+           fiber.resume
+         end
+       end
+     end
    end
 
    def current_time
      Process.clock_gettime(Process::CLOCK_MONOTONIC)
    end
 
-   def wait_sleep(duration = nil)
-     @waiting[Fiber.current] = current_time + duration
-
-     Fiber.yield
-
-     return true
-   end
-
-   def wait_any(io, events, duration)
-     unless (events & IO::WAIT_READABLE).zero?
-       @readable[io] = Fiber.current
-     end
-
-     unless (events & IO::WAIT_WRITABLE).zero?
-       @writable[io] = Fiber.current
-     end
-
+   def io_wait(io, events, duration)
+     @readable[io] = Fiber.current unless (events & IO::READABLE).zero?
+     @writable[io] = Fiber.current unless (events & IO::WRITABLE).zero?
      self.register(io, events)
-
      Fiber.yield
-
-     @readable.delete(io)
-     @writable.delete(io)
      self.deregister(io)
-
-     return true
+     true
    end
 
-
-   def wait_for_single_fd(fd, events, duration)
-     wait_any(
-       for_fd(fd),
-       events,
-       duration
-     )
+   def kernel_sleep(duration = nil)
+     @waiting[Fiber.current] = current_time + duration if duration.nil?
+     Fiber.yield
+     true
    end
 
-   def enter_blocking_region
-     # puts "Enter blocking region: #{caller.first}"
+   def mutex_lock(mutex)
+     @locking += 1
+     Fiber.yield
+   ensure
+     @locking -= 1
    end
 
-   def exit_blocking_region
-     # puts "Exit blocking region: #{caller.first}"
-     @blocking << caller.first
+   def mutex_unlock(mutex, fiber)
+     @lock.synchronize do
+       @ready << fiber
+     end
    end
 
    def fiber(&block)
      fiber = Fiber.new(blocking: false, &block)
-
      fiber.resume
-
-     return fiber
+     fiber
    end
  end
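Not part of the released diff: the rewritten `run` loop above is a readiness dispatcher — each waiting fiber is parked in a hash keyed by its IO, and resumed when the selector reports that IO ready. The same pattern can be sketched in pure Ruby with `IO.select` standing in for the gem's C selector (`ToySelector` and all names below are hypothetical, for illustration only):

```ruby
require 'fiber'

# A toy dispatcher mirroring Evt::Scheduler#run: park the current fiber
# until its IO is readable, then resume it from the select loop.
class ToySelector
  def initialize
    @readable = {} # io => fiber, as in the scheduler above
  end

  def wait_readable(io)
    @readable[io] = Fiber.current
    Fiber.yield    # park until the run loop resumes us
    true
  end

  def run
    until @readable.empty?
      ready, = IO.select(@readable.keys)
      ready.each do |io|
        fiber = @readable.delete(io) # delete-then-resume, as 0.2.1 now does
        fiber&.resume
      end
    end
  end
end

selector = ToySelector.new
r, w = IO.pipe
result = nil

reader = Fiber.new do
  selector.wait_readable(r)
  result = r.read_nonblock(5)
end

reader.resume      # parks inside wait_readable
w.write('hello')
selector.run       # resumes the fiber once r is readable
# result == "hello"
```

Deleting the hash entry before resuming (the change this release makes in `run`) avoids resuming the same fiber twice when an IO is still ready on the next pass of the loop.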
data/lib/evt/version.rb CHANGED
@@ -1,3 +1,5 @@
+ # frozen_string_literal: true
+
  module Evt
-   VERSION = "0.1.1"
+   VERSION = "0.2.1"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: evt
  version: !ruby/object:Gem::Version
-   version: 0.1.1
+   version: 0.2.1
  platform: ruby
  authors:
  - Delton Ding
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-08-17 00:00:00.000000000 Z
+ date: 2020-12-21 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: rake-compiler
@@ -32,17 +32,22 @@ extensions:
  - ext/evt/extconf.rb
  extra_rdoc_files: []
  files:
+ - ".github/workflows/test.yml"
  - ".gitignore"
- - ".travis.yml"
  - CODE_OF_CONDUCT.md
  - Gemfile
- - Gemfile.lock
  - LICENSE
  - README.md
  - Rakefile
  - evt.gemspec
+ - ext/evt/epoll.h
  - ext/evt/evt.c
+ - ext/evt/evt.h
  - ext/evt/extconf.rb
+ - ext/evt/iocp.h
+ - ext/evt/kqueue.h
+ - ext/evt/select.h
+ - ext/evt/uring.h
  - lib/evt.rb
  - lib/evt/scheduler.rb
  - lib/evt/version.rb
@@ -60,14 +65,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
-       version: 2.7.1
+       version: 2.8.0.dev
  required_rubygems_version: !ruby/object:Gem::Requirement
    requirements:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
  requirements: []
- rubygems_version: 3.1.2
+ rubygems_version: 3.2.2
  signing_key:
  specification_version: 4
  summary: A low-level Event Handler designed for Ruby 3 Scheduler
data/.travis.yml DELETED
@@ -1,6 +0,0 @@
- ---
- language: ruby
- cache: bundler
- rvm:
- - ruby-head
- before_install: gem install bundler -v 2.1.4
data/Gemfile.lock DELETED
@@ -1,24 +0,0 @@
- PATH
-   remote: .
-   specs:
-     evt (0.1.1)
-
- GEM
-   remote: https://rubygems.org/
-   specs:
-     minitest (5.14.1)
-     rake (12.3.3)
-     rake-compiler (1.1.1)
-       rake
-
- PLATFORMS
-   ruby
-
- DEPENDENCIES
-   evt!
-   minitest (~> 5.0)
-   rake (~> 12.0)
-   rake-compiler (~> 1.0)
-
- BUNDLED WITH
-    2.2.0.dev