nio4r 2.5.4 → 2.5.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 3353e688cab0a1d45f509edcb0a5bc5fa3beb6faca9e6e5703fe75ecbb220ed9
- data.tar.gz: d5186d282adfa316128165ca4d2ccae5322a0605b15508d47170f9f7517e3346
+ metadata.gz: 83e0080af4a88a697820ce40d34e6adbd886c6ebbcf9c75829c04e2febea5058
+ data.tar.gz: 7942b6d643f2dc26d0bb7e2c96deb51b697c13df2c3ccc4f972c09df856e3694
  SHA512:
- metadata.gz: 05c2472803019de225a4d813f302bad6810fecf193adad4291c4b58e0e5aaa34d02b60945bc6641be55103324b82eee001988641d2b9937f63d18e7d00a03f8d
- data.tar.gz: fcbb1f1aa622df5e82df481e40567c153d15ca2e59ca21fbd3aff00006326e1f8ec44ff108c59fc8e8124bcc3b8e99f23d6291fe7ca27092d7af6ac22be71eb5
+ metadata.gz: 359600dd9ee73684a9b511df14fd5367e92dc0c0845c4195e436cc27fb82474bfd49aee4698e617bb1e0b5d7a9061efa3a10bdd9ec30e2455988376a09e2e49d
+ data.tar.gz: 6ad1095616c63f9763d787ae1bcd724cd08dbf834c4c9feb3a1abb2af75322be1fe3a875481eadd155404d0e34cb3c758ea94fdc2c50ae0136a24a541e68df83
@@ -14,11 +14,17 @@ jobs:
  strategy:
  fail-fast: false
  matrix:
- os: [ubuntu-16.04, ubuntu-18.04, macos-latest, windows-latest]
- ruby: [2.4, 2.5, 2.6, 2.7, jruby, truffleruby-head]
+ os: [ ubuntu-20.04, ubuntu-18.04, macos-10.15, windows-2019 ]
+ ruby: [ head, 3.0, 2.7, 2.6, 2.5, 2.4, jruby, truffleruby-head ]
+ include:
+ - { os: ubuntu-16.04, ruby: 3.0 }
+ - { os: ubuntu-16.04, ruby: 2.4 }
+ - { os: macos-11.0 , ruby: 3.0 }
+ - { os: macos-11.0 , ruby: 2.4 }
  exclude:
- - { os: windows-latest, ruby: jruby }
- - { os: windows-latest, ruby: truffleruby-head }
+ - { os: windows-2019, ruby: head }
+ - { os: windows-2019, ruby: jruby }
+ - { os: windows-2019, ruby: truffleruby-head }

  steps:
  - name: repo checkout
data/README.md CHANGED
@@ -40,6 +40,7 @@ writing.
  * Ruby 2.5
  * Ruby 2.6
  * Ruby 2.7
+ * Ruby 3.0
  * [JRuby](https://github.com/jruby/jruby)
  * [TruffleRuby](https://github.com/oracle/truffleruby)

data/ext/libev/Changes CHANGED
@@ -1,8 +1,77 @@
  Revision history for libev, a high-performance and full-featured event loop.

+ TODO: for next ABI/API change, consider moving EV__IOFDSSET into io->fd instead and provide a getter.
+ TODO: document EV_TSTAMP_T
+
+ 4.33 Wed Mar 18 13:22:29 CET 2020
+ - no changes w.r.t. 4.32.
+
+ 4.32 (EV only)
+ - the 4.31 timerfd code wrongly changed the priority of the signal
+ fd watcher, which is usually harmless unless signal fds are
+ also used (found via cpan tester service).
+ - the documentation wrongly claimed that user may modify fd and events
+ members in io watchers when the watcher was stopped
+ (found by b_jonas).
+ - new ev_io_modify mutator which changes only the events member,
+ which can be faster. also added ev::io::set (int events) method
+ to ev++.h.
+ - officially allow a zero events mask for io watchers. this should
+ work with older libev versions as well but was not officially
+ allowed before.
+ - do not wake up every minute when timerfd is used to detect timejumps.
+ - do not wake up every minute when periodics are disabled and we have
+ a monotonic clock.
+ - support a lot more "uncommon" compile time configurations,
+ such as ev_embed enabled but ev_timer disabled.
+ - use a start/stop wrapper class to reduce code duplication in
+ ev++.h and make it needlessly more c++-y.
+ - the linux aio backend is no longer compiled in by default.
+ - update to libecb version 0x00010008.
+
+ 4.31 Fri Dec 20 21:58:29 CET 2019
+ - handle backends with minimum wait time a bit better by not
+ waiting in the presence of already-expired timers
+ (behaviour reported by Felipe Gasper).
+ - new feature: use timerfd to detect timejumps quickly,
+ can be disabled with the new EVFLAG_NOTIMERFD loop flag.
+ - document EV_USE_SIGNALFD feature macro.
+
+ 4.30 (EV only)
+ - change non-autoconf test for __kernel_rwf_t by testing
+ LINUX_VERSION_CODE, the most direct test I could find.
+ - fix a bug in the io_uring backend that polled the wrong
+ backend fd, causing it to not work in many cases.
+
+ 4.29 (EV only)
+ - add io uring autoconf and non-autoconf detection.
+ - disable io_uring when some header files are too old.
+
+ 4.28 (EV only)
+ - linuxaio backend resulted in random memory corruption
+ when loop is forked.
+ - linuxaio backend might have tried to cancel an iocb
+ multiple times (was unable to trigger this).
+ - linuxaio backend now employs a generation counter to
+ avoid handling spurious events from cancelled requests.
+ - io_cancel can return EINTR, deal with it. also, assume
+ io_submit also returns EINTR.
+ - fix some other minor bugs in linuxaio backend.
+ - ev_tstamp type can now be overriden by defining EV_TSTAMP_T.
+ - cleanup: replace expect_true/false and noinline by their
+ libecb counterparts.
+ - move syscall infrastructure from ev_linuxaio.c to ev.c.
+ - prepare io_uring integration.
+ - tweak ev_floor.
+ - epoll, poll, win32 Sleep and other places that use millisecond
+ reslution now all try to round up times.
+ - solaris port backend didn't compile.
+ - abstract time constants into their macros, for more flexibility.
+
  4.27 Thu Jun 27 22:43:44 CEST 2019
- - linux aio backend almost complete rewritten to work around its
+ - linux aio backend almost completely rewritten to work around its
  limitations.
+ - linux aio backend now requires linux 4.19+.
  - epoll backend now mandatory for linux aio backend.
  - fail assertions more aggressively on invalid fd's detected
  in the event loop, do not just silently fd_kill in case of
@@ -22,7 +91,7 @@ Revision history for libev, a high-performance and full-featured event loop.
  4.25 Fri Dec 21 07:49:20 CET 2018
  - INCOMPATIBLE CHANGE: EV_THROW was renamed to EV_NOEXCEPT
  (EV_THROW still provided) and now uses noexcept on C++11 or newer.
- - move the darwin select workaround highe rin ev.c, as newer versions of
+ - move the darwin select workaround higher in ev.c, as newer versions of
  darwin managed to break their broken select even more.
  - ANDROID => __ANDROID__ (reported by enh@google.com).
  - disable epoll_create1 on android because it has broken header files
data/ext/libev/ev.c CHANGED
@@ -116,7 +116,7 @@
  # undef EV_USE_POLL
  # define EV_USE_POLL 0
  # endif
-
+
  # if HAVE_EPOLL_CTL && HAVE_SYS_EPOLL_H
  # ifndef EV_USE_EPOLL
  # define EV_USE_EPOLL EV_FEATURE_BACKENDS
@@ -125,16 +125,25 @@
  # undef EV_USE_EPOLL
  # define EV_USE_EPOLL 0
  # endif
-
+
  # if HAVE_LINUX_AIO_ABI_H
  # ifndef EV_USE_LINUXAIO
- # define EV_USE_LINUXAIO EV_FEATURE_BACKENDS
+ # define EV_USE_LINUXAIO 0 /* was: EV_FEATURE_BACKENDS, always off by default */
  # endif
  # else
  # undef EV_USE_LINUXAIO
  # define EV_USE_LINUXAIO 0
  # endif
-
+
+ # if HAVE_LINUX_FS_H && HAVE_SYS_TIMERFD_H && HAVE_KERNEL_RWF_T
+ # ifndef EV_USE_IOURING
+ # define EV_USE_IOURING EV_FEATURE_BACKENDS
+ # endif
+ # else
+ # undef EV_USE_IOURING
+ # define EV_USE_IOURING 0
+ # endif
+
  # if HAVE_KQUEUE && HAVE_SYS_EVENT_H
  # ifndef EV_USE_KQUEUE
  # define EV_USE_KQUEUE EV_FEATURE_BACKENDS
@@ -143,7 +152,7 @@
  # undef EV_USE_KQUEUE
  # define EV_USE_KQUEUE 0
  # endif
-
+
  # if HAVE_PORT_H && HAVE_PORT_CREATE
  # ifndef EV_USE_PORT
  # define EV_USE_PORT EV_FEATURE_BACKENDS
@@ -179,7 +188,16 @@
  # undef EV_USE_EVENTFD
  # define EV_USE_EVENTFD 0
  # endif
-
+
+ # if HAVE_SYS_TIMERFD_H
+ # ifndef EV_USE_TIMERFD
+ # define EV_USE_TIMERFD EV_FEATURE_OS
+ # endif
+ # else
+ # undef EV_USE_TIMERFD
+ # define EV_USE_TIMERFD 0
+ # endif
+
  #endif

  /* OS X, in its infinite idiocy, actually HARDCODES
@@ -335,6 +353,22 @@
  # define EV_USE_PORT 0
  #endif

+ #ifndef EV_USE_LINUXAIO
+ # if __linux /* libev currently assumes linux/aio_abi.h is always available on linux */
+ # define EV_USE_LINUXAIO 0 /* was: 1, always off by default */
+ # else
+ # define EV_USE_LINUXAIO 0
+ # endif
+ #endif
+
+ #ifndef EV_USE_IOURING
+ # if __linux /* later checks might disable again */
+ # define EV_USE_IOURING 1
+ # else
+ # define EV_USE_IOURING 0
+ # endif
+ #endif
+
  #ifndef EV_USE_INOTIFY
  # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 4))
  # define EV_USE_INOTIFY EV_FEATURE_OS
@@ -367,6 +401,14 @@
  # endif
  #endif

+ #ifndef EV_USE_TIMERFD
+ # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 8))
+ # define EV_USE_TIMERFD EV_FEATURE_OS
+ # else
+ # define EV_USE_TIMERFD 0
+ # endif
+ #endif
+
  #if 0 /* debugging */
  # define EV_VERIFY 3
  # define EV_USE_4HEAP 1
@@ -409,6 +451,7 @@
  # define clock_gettime(id, ts) syscall (SYS_clock_gettime, (id), (ts))
  # undef EV_USE_MONOTONIC
  # define EV_USE_MONOTONIC 1
+ # define EV_NEED_SYSCALL 1
  # else
  # undef EV_USE_CLOCK_SYSCALL
  # define EV_USE_CLOCK_SYSCALL 0
@@ -432,6 +475,14 @@
  # define EV_USE_INOTIFY 0
  #endif

+ #if __linux && EV_USE_IOURING
+ # include <linux/version.h>
+ # if LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0)
+ # undef EV_USE_IOURING
+ # define EV_USE_IOURING 0
+ # endif
+ #endif
+
  #if !EV_USE_NANOSLEEP
  /* hp-ux has it in sys/time.h, which we unconditionally include above */
  # if !defined _WIN32 && !defined __hpux
@@ -441,12 +492,29 @@

  #if EV_USE_LINUXAIO
  # include <sys/syscall.h>
- # if !SYS_io_getevents || !EV_USE_EPOLL /* ev_linxaio uses ev_poll.c:ev_epoll_create */
+ # if SYS_io_getevents && EV_USE_EPOLL /* linuxaio backend requires epoll backend */
+ # define EV_NEED_SYSCALL 1
+ # else
  # undef EV_USE_LINUXAIO
  # define EV_USE_LINUXAIO 0
  # endif
  #endif

+ #if EV_USE_IOURING
+ # include <sys/syscall.h>
+ # if !SYS_io_uring_setup && __linux && !__alpha
+ # define SYS_io_uring_setup 425
+ # define SYS_io_uring_enter 426
+ # define SYS_io_uring_wregister 427
+ # endif
+ # if SYS_io_uring_setup && EV_USE_EPOLL /* iouring backend requires epoll backend */
+ # define EV_NEED_SYSCALL 1
+ # else
+ # undef EV_USE_IOURING
+ # define EV_USE_IOURING 0
+ # endif
+ #endif
+
  #if EV_USE_INOTIFY
  # include <sys/statfs.h>
  # include <sys/inotify.h>
@@ -458,7 +526,7 @@
  #endif

  #if EV_USE_EVENTFD
- /* our minimum requirement is glibc 2.7 which has the stub, but not the header */
+ /* our minimum requirement is glibc 2.7 which has the stub, but not the full header */
  # include <stdint.h>
  # ifndef EFD_NONBLOCK
  # define EFD_NONBLOCK O_NONBLOCK
@@ -474,7 +542,7 @@ EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags);
  #endif

  #if EV_USE_SIGNALFD
- /* our minimum requirement is glibc 2.7 which has the stub, but not the header */
+ /* our minimum requirement is glibc 2.7 which has the stub, but not the full header */
  # include <stdint.h>
  # ifndef SFD_NONBLOCK
  # define SFD_NONBLOCK O_NONBLOCK
@@ -486,7 +554,7 @@ EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags);
  # define SFD_CLOEXEC 02000000
  # endif
  # endif
- EV_CPP (extern "C") int signalfd (int fd, const sigset_t *mask, int flags);
+ EV_CPP (extern "C") int (signalfd) (int fd, const sigset_t *mask, int flags);

  struct signalfd_siginfo
  {
@@ -495,7 +563,17 @@ struct signalfd_siginfo
  };
  #endif

- /**/
+ /* for timerfd, libev core requires TFD_TIMER_CANCEL_ON_SET &c */
+ #if EV_USE_TIMERFD
+ # include <sys/timerfd.h>
+ /* timerfd is only used for periodics */
+ # if !(defined (TFD_TIMER_CANCEL_ON_SET) && defined (TFD_CLOEXEC) && defined (TFD_NONBLOCK)) || !EV_PERIODIC_ENABLE
+ # undef EV_USE_TIMERFD
+ # define EV_USE_TIMERFD 0
+ # endif
+ #endif
+
+ /*****************************************************************************/

  #if EV_VERIFY >= 3
  # define EV_FREQUENT_CHECK ev_verify (EV_A)
@@ -510,18 +588,34 @@ struct signalfd_siginfo
  #define MIN_INTERVAL 0.0001220703125 /* 1/2**13, good till 4000 */
  /*#define MIN_INTERVAL 0.00000095367431640625 /* 1/2**20, good till 2200 */

- #define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */
- #define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */
+ #define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */
+ #define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */
+ #define MAX_BLOCKTIME2 1500001.07 /* same, but when timerfd is used to detect jumps, also safe delay to not overflow */
+
+ /* find a portable timestamp that is "always" in the future but fits into time_t.
+ * this is quite hard, and we are mostly guessing - we handle 32 bit signed/unsigned time_t,
+ * and sizes larger than 32 bit, and maybe the unlikely floating point time_t */
+ #define EV_TSTAMP_HUGE \
+ (sizeof (time_t) >= 8 ? 10000000000000. \
+ : 0 < (time_t)4294967295 ? 4294967295. \
+ : 2147483647.) \

- #define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)
- #define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)
+ #ifndef EV_TS_CONST
+ # define EV_TS_CONST(nv) nv
+ # define EV_TS_TO_MSEC(a) a * 1e3 + 0.9999
+ # define EV_TS_FROM_USEC(us) us * 1e-6
+ # define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)
+ # define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)
+ # define EV_TV_GET(tv) ((tv).tv_sec + (tv).tv_usec * 1e-6)
+ # define EV_TS_GET(ts) ((ts).tv_sec + (ts).tv_nsec * 1e-9)
+ #endif

  /* the following is ecb.h embedded into libev - use update_ev_c to update from an external copy */
  /* ECB.H BEGIN */
  /*
  * libecb - http://software.schmorp.de/pkg/libecb
  *
- * Copyright (©) 2009-2015 Marc Alexander Lehmann <libecb@schmorp.de>
+ * Copyright (©) 2009-2015,2018-2020 Marc Alexander Lehmann <libecb@schmorp.de>
  * Copyright (©) 2011 Emanuele Giaquinta
  * All rights reserved.
  *
@@ -562,15 +656,23 @@ struct signalfd_siginfo
  #define ECB_H

  /* 16 bits major, 16 bits minor */
- #define ECB_VERSION 0x00010006
+ #define ECB_VERSION 0x00010008

- #ifdef _WIN32
+ #include <string.h> /* for memcpy */
+
+ #if defined (_WIN32) && !defined (__MINGW32__)
  typedef signed char int8_t;
  typedef unsigned char uint8_t;
+ typedef signed char int_fast8_t;
+ typedef unsigned char uint_fast8_t;
  typedef signed short int16_t;
  typedef unsigned short uint16_t;
+ typedef signed int int_fast16_t;
+ typedef unsigned int uint_fast16_t;
  typedef signed int int32_t;
  typedef unsigned int uint32_t;
+ typedef signed int int_fast32_t;
+ typedef unsigned int uint_fast32_t;
  #if __GNUC__
  typedef signed long long int64_t;
  typedef unsigned long long uint64_t;
@@ -578,6 +680,8 @@ struct signalfd_siginfo
  typedef signed __int64 int64_t;
  typedef unsigned __int64 uint64_t;
  #endif
+ typedef int64_t int_fast64_t;
+ typedef uint64_t uint_fast64_t;
  #ifdef _WIN64
  #define ECB_PTRSIZE 8
  typedef uint64_t uintptr_t;
@@ -599,6 +703,14 @@ struct signalfd_siginfo
  #define ECB_GCC_AMD64 (__amd64 || __amd64__ || __x86_64 || __x86_64__)
  #define ECB_MSVC_AMD64 (_M_AMD64 || _M_X64)

+ #ifndef ECB_OPTIMIZE_SIZE
+ #if __OPTIMIZE_SIZE__
+ #define ECB_OPTIMIZE_SIZE 1
+ #else
+ #define ECB_OPTIMIZE_SIZE 0
+ #endif
+ #endif
+
  /* work around x32 idiocy by defining proper macros */
  #if ECB_GCC_AMD64 || ECB_MSVC_AMD64
  #if _ILP32
@@ -1114,6 +1226,44 @@ ecb_inline ecb_const uint32_t ecb_rotr32 (uint32_t x, unsigned int count) { retu
  ecb_inline ecb_const uint64_t ecb_rotl64 (uint64_t x, unsigned int count) { return (x >> (64 - count)) | (x << count); }
  ecb_inline ecb_const uint64_t ecb_rotr64 (uint64_t x, unsigned int count) { return (x << (64 - count)) | (x >> count); }

+ #if ECB_CPP
+
+ inline uint8_t ecb_ctz (uint8_t v) { return ecb_ctz32 (v); }
+ inline uint16_t ecb_ctz (uint16_t v) { return ecb_ctz32 (v); }
+ inline uint32_t ecb_ctz (uint32_t v) { return ecb_ctz32 (v); }
+ inline uint64_t ecb_ctz (uint64_t v) { return ecb_ctz64 (v); }
+
+ inline bool ecb_is_pot (uint8_t v) { return ecb_is_pot32 (v); }
+ inline bool ecb_is_pot (uint16_t v) { return ecb_is_pot32 (v); }
+ inline bool ecb_is_pot (uint32_t v) { return ecb_is_pot32 (v); }
+ inline bool ecb_is_pot (uint64_t v) { return ecb_is_pot64 (v); }
+
+ inline int ecb_ld (uint8_t v) { return ecb_ld32 (v); }
+ inline int ecb_ld (uint16_t v) { return ecb_ld32 (v); }
+ inline int ecb_ld (uint32_t v) { return ecb_ld32 (v); }
+ inline int ecb_ld (uint64_t v) { return ecb_ld64 (v); }
+
+ inline int ecb_popcount (uint8_t v) { return ecb_popcount32 (v); }
+ inline int ecb_popcount (uint16_t v) { return ecb_popcount32 (v); }
+ inline int ecb_popcount (uint32_t v) { return ecb_popcount32 (v); }
+ inline int ecb_popcount (uint64_t v) { return ecb_popcount64 (v); }
+
+ inline uint8_t ecb_bitrev (uint8_t v) { return ecb_bitrev8 (v); }
+ inline uint16_t ecb_bitrev (uint16_t v) { return ecb_bitrev16 (v); }
+ inline uint32_t ecb_bitrev (uint32_t v) { return ecb_bitrev32 (v); }
+
+ inline uint8_t ecb_rotl (uint8_t v, unsigned int count) { return ecb_rotl8 (v, count); }
+ inline uint16_t ecb_rotl (uint16_t v, unsigned int count) { return ecb_rotl16 (v, count); }
+ inline uint32_t ecb_rotl (uint32_t v, unsigned int count) { return ecb_rotl32 (v, count); }
+ inline uint64_t ecb_rotl (uint64_t v, unsigned int count) { return ecb_rotl64 (v, count); }
+
+ inline uint8_t ecb_rotr (uint8_t v, unsigned int count) { return ecb_rotr8 (v, count); }
+ inline uint16_t ecb_rotr (uint16_t v, unsigned int count) { return ecb_rotr16 (v, count); }
+ inline uint32_t ecb_rotr (uint32_t v, unsigned int count) { return ecb_rotr32 (v, count); }
+ inline uint64_t ecb_rotr (uint64_t v, unsigned int count) { return ecb_rotr64 (v, count); }
+
+ #endif
+
  #if ECB_GCC_VERSION(4,3) || (ECB_CLANG_BUILTIN(__builtin_bswap32) && ECB_CLANG_BUILTIN(__builtin_bswap64))
  #if ECB_GCC_VERSION(4,8) || ECB_CLANG_BUILTIN(__builtin_bswap16)
  #define ecb_bswap16(x) __builtin_bswap16 (x)
@@ -1194,6 +1344,78 @@ ecb_inline ecb_const ecb_bool ecb_big_endian (void) { return ecb_byteorder_he
  ecb_inline ecb_const ecb_bool ecb_little_endian (void);
  ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_helper () == 0x44332211; }

+ /*****************************************************************************/
+ /* unaligned load/store */
+
+ ecb_inline uint_fast16_t ecb_be_u16_to_host (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; }
+ ecb_inline uint_fast32_t ecb_be_u32_to_host (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; }
+ ecb_inline uint_fast64_t ecb_be_u64_to_host (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; }
+
+ ecb_inline uint_fast16_t ecb_le_u16_to_host (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; }
+ ecb_inline uint_fast32_t ecb_le_u32_to_host (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; }
+ ecb_inline uint_fast64_t ecb_le_u64_to_host (uint_fast64_t v) { return ecb_big_endian () ? ecb_bswap64 (v) : v; }
+
+ ecb_inline uint_fast16_t ecb_peek_u16_u (const void *ptr) { uint16_t v; memcpy (&v, ptr, sizeof (v)); return v; }
+ ecb_inline uint_fast32_t ecb_peek_u32_u (const void *ptr) { uint32_t v; memcpy (&v, ptr, sizeof (v)); return v; }
+ ecb_inline uint_fast64_t ecb_peek_u64_u (const void *ptr) { uint64_t v; memcpy (&v, ptr, sizeof (v)); return v; }
+
+ ecb_inline uint_fast16_t ecb_peek_be_u16_u (const void *ptr) { return ecb_be_u16_to_host (ecb_peek_u16_u (ptr)); }
+ ecb_inline uint_fast32_t ecb_peek_be_u32_u (const void *ptr) { return ecb_be_u32_to_host (ecb_peek_u32_u (ptr)); }
+ ecb_inline uint_fast64_t ecb_peek_be_u64_u (const void *ptr) { return ecb_be_u64_to_host (ecb_peek_u64_u (ptr)); }
+
+ ecb_inline uint_fast16_t ecb_peek_le_u16_u (const void *ptr) { return ecb_le_u16_to_host (ecb_peek_u16_u (ptr)); }
+ ecb_inline uint_fast32_t ecb_peek_le_u32_u (const void *ptr) { return ecb_le_u32_to_host (ecb_peek_u32_u (ptr)); }
+ ecb_inline uint_fast64_t ecb_peek_le_u64_u (const void *ptr) { return ecb_le_u64_to_host (ecb_peek_u64_u (ptr)); }
+
+ ecb_inline uint_fast16_t ecb_host_to_be_u16 (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; }
+ ecb_inline uint_fast32_t ecb_host_to_be_u32 (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; }
+ ecb_inline uint_fast64_t ecb_host_to_be_u64 (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; }
+
+ ecb_inline uint_fast16_t ecb_host_to_le_u16 (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; }
+ ecb_inline uint_fast32_t ecb_host_to_le_u32 (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; }
+ ecb_inline uint_fast64_t ecb_host_to_le_u64 (uint_fast64_t v) { return ecb_big_endian () ? ecb_bswap64 (v) : v; }
+
+ ecb_inline void ecb_poke_u16_u (void *ptr, uint16_t v) { memcpy (ptr, &v, sizeof (v)); }
+ ecb_inline void ecb_poke_u32_u (void *ptr, uint32_t v) { memcpy (ptr, &v, sizeof (v)); }
+ ecb_inline void ecb_poke_u64_u (void *ptr, uint64_t v) { memcpy (ptr, &v, sizeof (v)); }
+
+ ecb_inline void ecb_poke_be_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_be_u16 (v)); }
+ ecb_inline void ecb_poke_be_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_be_u32 (v)); }
+ ecb_inline void ecb_poke_be_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_be_u64 (v)); }
+
+ ecb_inline void ecb_poke_le_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_le_u16 (v)); }
+ ecb_inline void ecb_poke_le_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_le_u32 (v)); }
+ ecb_inline void ecb_poke_le_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_le_u64 (v)); }
+
+ #if ECB_CPP
+
+ inline uint8_t ecb_bswap (uint8_t v) { return v; }
+ inline uint16_t ecb_bswap (uint16_t v) { return ecb_bswap16 (v); }
+ inline uint32_t ecb_bswap (uint32_t v) { return ecb_bswap32 (v); }
+ inline uint64_t ecb_bswap (uint64_t v) { return ecb_bswap64 (v); }
+
+ template<typename T> inline T ecb_be_to_host (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; }
+ template<typename T> inline T ecb_le_to_host (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; }
+ template<typename T> inline T ecb_peek (const void *ptr) { return *(const T *)ptr; }
+ template<typename T> inline T ecb_peek_be (const void *ptr) { return ecb_be_to_host (ecb_peek <T> (ptr)); }
+ template<typename T> inline T ecb_peek_le (const void *ptr) { return ecb_le_to_host (ecb_peek <T> (ptr)); }
+ template<typename T> inline T ecb_peek_u (const void *ptr) { T v; memcpy (&v, ptr, sizeof (v)); return v; }
+ template<typename T> inline T ecb_peek_be_u (const void *ptr) { return ecb_be_to_host (ecb_peek_u<T> (ptr)); }
+ template<typename T> inline T ecb_peek_le_u (const void *ptr) { return ecb_le_to_host (ecb_peek_u<T> (ptr)); }
+
+ template<typename T> inline T ecb_host_to_be (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; }
+ template<typename T> inline T ecb_host_to_le (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; }
+ template<typename T> inline void ecb_poke (void *ptr, T v) { *(T *)ptr = v; }
+ template<typename T> inline void ecb_poke_be (void *ptr, T v) { return ecb_poke <T> (ptr, ecb_host_to_be (v)); }
+ template<typename T> inline void ecb_poke_le (void *ptr, T v) { return ecb_poke <T> (ptr, ecb_host_to_le (v)); }
+ template<typename T> inline void ecb_poke_u (void *ptr, T v) { memcpy (ptr, &v, sizeof (v)); }
+ template<typename T> inline void ecb_poke_be_u (void *ptr, T v) { return ecb_poke_u<T> (ptr, ecb_host_to_be (v)); }
+ template<typename T> inline void ecb_poke_le_u (void *ptr, T v) { return ecb_poke_u<T> (ptr, ecb_host_to_le (v)); }
+
+ #endif
+
+ /*****************************************************************************/
+
  #if ECB_GCC_VERSION(3,0) || ECB_C99
  #define ecb_mod(m,n) ((m) % (n) + ((m) % (n) < 0 ? (n) : 0))
  #else
@@ -1227,6 +1449,8 @@ ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_he
  #define ecb_array_length(name) (sizeof (name) / sizeof (name [0]))
  #endif

+ /*****************************************************************************/
+
  ecb_function_ ecb_const uint32_t ecb_binary16_to_binary32 (uint32_t x);
  ecb_function_ ecb_const uint32_t
  ecb_binary16_to_binary32 (uint32_t x)
@@ -1344,7 +1568,6 @@ ecb_binary32_to_binary16 (uint32_t x)
  || (defined __arm__ && (defined __ARM_EABI__ || defined __EABI__ || defined __VFP_FP__ || defined _WIN32_WCE || defined __ANDROID__)) \
  || defined __aarch64__
  #define ECB_STDFP 1
- #include <string.h> /* for memcpy */
  #else
  #define ECB_STDFP 0
  #endif
@@ -1539,7 +1762,7 @@ ecb_binary32_to_binary16 (uint32_t x)
  #if ECB_MEMORY_FENCE_NEEDS_PTHREADS
  /* if your architecture doesn't need memory fences, e.g. because it is
  * single-cpu/core, or if you use libev in a project that doesn't use libev
- * from multiple threads, then you can define ECB_AVOID_PTHREADS when compiling
+ * from multiple threads, then you can define ECB_NO_THREADS when compiling
  * libev, in which cases the memory fences become nops.
  * alternatively, you can remove this #error and link against libpthread,
  * which will then provide the memory fences.
@@ -1553,18 +1776,80 @@ ecb_binary32_to_binary16 (uint32_t x)
  # define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE
  #endif

- #define expect_false(cond) ecb_expect_false (cond)
- #define expect_true(cond) ecb_expect_true (cond)
- #define noinline ecb_noinline
-
  #define inline_size ecb_inline

  #if EV_FEATURE_CODE
  # define inline_speed ecb_inline
  #else
- # define inline_speed noinline static
+ # define inline_speed ecb_noinline static
+ #endif
+
+ /*****************************************************************************/
+ /* raw syscall wrappers */
+
+ #if EV_NEED_SYSCALL
+
+ #include <sys/syscall.h>
+
+ /*
+ * define some syscall wrappers for common architectures
+ * this is mostly for nice looks during debugging, not performance.
+ * our syscalls return < 0, not == -1, on error. which is good
+ * enough for linux aio.
+ * TODO: arm is also common nowadays, maybe even mips and x86
+ * TODO: after implementing this, it suddenly looks like overkill, but its hard to remove...
+ */
+ #if __GNUC__ && __linux && ECB_AMD64 && !EV_FEATURE_CODE
+ /* the costly errno access probably kills this for size optimisation */
+
+ #define ev_syscall(nr,narg,arg1,arg2,arg3,arg4,arg5,arg6) \
+ ({ \
+ long res; \
+ register unsigned long r6 __asm__ ("r9" ); \
+ register unsigned long r5 __asm__ ("r8" ); \
+ register unsigned long r4 __asm__ ("r10"); \
+ register unsigned long r3 __asm__ ("rdx"); \
+ register unsigned long r2 __asm__ ("rsi"); \
+ register unsigned long r1 __asm__ ("rdi"); \
+ if (narg >= 6) r6 = (unsigned long)(arg6); \
+ if (narg >= 5) r5 = (unsigned long)(arg5); \
+ if (narg >= 4) r4 = (unsigned long)(arg4); \
+ if (narg >= 3) r3 = (unsigned long)(arg3); \
+ if (narg >= 2) r2 = (unsigned long)(arg2); \
+ if (narg >= 1) r1 = (unsigned long)(arg1); \
+ __asm__ __volatile__ ( \
+ "syscall\n\t" \
+ : "=a" (res) \
+ : "0" (nr), "r" (r1), "r" (r2), "r" (r3), "r" (r4), "r" (r5) \
+ : "cc", "r11", "cx", "memory"); \
+ errno = -res; \
+ res; \
+ })
+
+ #endif
+
+ #ifdef ev_syscall
+ #define ev_syscall0(nr) ev_syscall (nr, 0, 0, 0, 0, 0, 0, 0)
+ #define ev_syscall1(nr,arg1) ev_syscall (nr, 1, arg1, 0, 0, 0, 0, 0)
+ #define ev_syscall2(nr,arg1,arg2) ev_syscall (nr, 2, arg1, arg2, 0, 0, 0, 0)
+ #define ev_syscall3(nr,arg1,arg2,arg3) ev_syscall (nr, 3, arg1, arg2, arg3, 0, 0, 0)
+ #define ev_syscall4(nr,arg1,arg2,arg3,arg4) ev_syscall (nr, 3, arg1, arg2, arg3, arg4, 0, 0)
+ #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) ev_syscall (nr, 5, arg1, arg2, arg3, arg4, arg5, 0)
+ #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) ev_syscall (nr, 6, arg1, arg2, arg3, arg4, arg5,arg6)
+ #else
+ #define ev_syscall0(nr) syscall (nr)
+ #define ev_syscall1(nr,arg1) syscall (nr, arg1)
+ #define ev_syscall2(nr,arg1,arg2) syscall (nr, arg1, arg2)
+ #define ev_syscall3(nr,arg1,arg2,arg3) syscall (nr, arg1, arg2, arg3)
+ #define ev_syscall4(nr,arg1,arg2,arg3,arg4) syscall (nr, arg1, arg2, arg3, arg4)
+ #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) syscall (nr, arg1, arg2, arg3, arg4, arg5)
+ #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) syscall (nr, arg1, arg2, arg3, arg4, arg5,arg6)
+ #endif
+
  #endif

+ /*****************************************************************************/
+
  #define NUMPRI (EV_MAXPRI - EV_MINPRI + 1)

  #if EV_MINPRI == EV_MAXPRI
@@ -1622,7 +1907,7 @@ static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work?
 #include <float.h>

 /* a floor() replacement function, should be independent of ev_tstamp type */
- noinline
+ ecb_noinline
 static ev_tstamp
 ev_floor (ev_tstamp v)
 {
@@ -1633,26 +1918,26 @@ ev_floor (ev_tstamp v)
 const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 18446744073709551616. : 4294967296.;
 #endif

- /* argument too large for an unsigned long? */
- if (expect_false (v >= shift))
+ /* special treatment for negative arguments */
+ if (ecb_expect_false (v < 0.))
+ {
+ ev_tstamp f = -ev_floor (-v);
+
+ return f - (f == v ? 0 : 1);
+ }
+
+ /* argument too large for an unsigned long? then reduce it */
+ if (ecb_expect_false (v >= shift))
 {
 ev_tstamp f;

 if (v == v - 1.)
- return v; /* very large number */
+ return v; /* very large numbers are assumed to be integer */

 f = shift * ev_floor (v * (1. / shift));
 return f + ev_floor (v - f);
 }

- /* special treatment for negative args? */
- if (expect_false (v < 0.))
- {
- ev_tstamp f = -ev_floor (-v);
-
- return f - (f == v ? 0 : 1);
- }
-
 /* fits into an unsigned long */
 return (unsigned long)v;
 }
@@ -1665,7 +1950,7 @@ ev_floor (ev_tstamp v)
 # include <sys/utsname.h>
 #endif

- noinline ecb_cold
+ ecb_noinline ecb_cold
 static unsigned int
 ev_linux_version (void)
 {
@@ -1705,7 +1990,7 @@ ev_linux_version (void)
 /*****************************************************************************/

 #if EV_AVOID_STDIO
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 ev_printerr (const char *msg)
 {
@@ -1722,7 +2007,7 @@ ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT
 syserr_cb = cb;
 }

- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 ev_syserr (const char *msg)
 {
@@ -1804,7 +2089,7 @@ typedef struct
 unsigned char events; /* the events watched for */
 unsigned char reify; /* flag set when this ANFD needs reification (EV_ANFD_REIFY, EV__IOFDSET) */
 unsigned char emask; /* some backends store the actual kernel mask in here */
- unsigned char unused;
+ unsigned char eflags; /* flags field for use by backends */
 #if EV_USE_EPOLL
 unsigned int egen; /* generation counter to counter epoll bugs */
 #endif
@@ -1868,7 +2153,7 @@ typedef struct

 #else

- EV_API_DECL ev_tstamp ev_rt_now = 0; /* needs to be initialised to make it a definition despite extern */
+ EV_API_DECL ev_tstamp ev_rt_now = EV_TS_CONST (0.); /* needs to be initialised to make it a definition despite extern */
 #define VAR(name,decl) static decl;
 #include "ev_vars.h"
 #undef VAR
@@ -1878,8 +2163,8 @@ typedef struct
 #endif

 #if EV_FEATURE_API
- # define EV_RELEASE_CB if (expect_false (release_cb)) release_cb (EV_A)
- # define EV_ACQUIRE_CB if (expect_false (acquire_cb)) acquire_cb (EV_A)
+ # define EV_RELEASE_CB if (ecb_expect_false (release_cb)) release_cb (EV_A)
+ # define EV_ACQUIRE_CB if (ecb_expect_false (acquire_cb)) acquire_cb (EV_A)
 # define EV_INVOKE_PENDING invoke_cb (EV_A)
 #else
 # define EV_RELEASE_CB (void)0
@@ -1896,17 +2181,19 @@ ev_tstamp
 ev_time (void) EV_NOEXCEPT
 {
 #if EV_USE_REALTIME
- if (expect_true (have_realtime))
+ if (ecb_expect_true (have_realtime))
 {
 struct timespec ts;
 clock_gettime (CLOCK_REALTIME, &ts);
- return ts.tv_sec + ts.tv_nsec * 1e-9;
+ return EV_TS_GET (ts);
 }
 #endif

- struct timeval tv;
- gettimeofday (&tv, 0);
- return tv.tv_sec + tv.tv_usec * 1e-6;
+ {
+ struct timeval tv;
+ gettimeofday (&tv, 0);
+ return EV_TV_GET (tv);
+ }
 }
 #endif

@@ -1914,11 +2201,11 @@ inline_size ev_tstamp
 get_clock (void)
 {
 #if EV_USE_MONOTONIC
- if (expect_true (have_monotonic))
+ if (ecb_expect_true (have_monotonic))
 {
 struct timespec ts;
 clock_gettime (CLOCK_MONOTONIC, &ts);
- return ts.tv_sec + ts.tv_nsec * 1e-9;
+ return EV_TS_GET (ts);
 }
 #endif

@@ -1936,7 +2223,7 @@ ev_now (EV_P) EV_NOEXCEPT
 void
 ev_sleep (ev_tstamp delay) EV_NOEXCEPT
 {
- if (delay > 0.)
+ if (delay > EV_TS_CONST (0.))
 {
 #if EV_USE_NANOSLEEP
 struct timespec ts;
@@ -1946,7 +2233,7 @@ ev_sleep (ev_tstamp delay) EV_NOEXCEPT
 #elif defined _WIN32
 /* maybe this should round up, as ms is very low resolution */
 /* compared to select (µs) or nanosleep (ns) */
- Sleep ((unsigned long)(delay * 1e3));
+ Sleep ((unsigned long)(EV_TS_TO_MSEC (delay)));
 #else
 struct timeval tv;
 
@@ -1986,7 +2273,7 @@ array_nextsize (int elem, int cur, int cnt)
 return ncur;
 }

- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void *
 array_realloc (int elem, void *base, int *cur, int cnt)
 {
@@ -2000,7 +2287,7 @@ array_realloc (int elem, void *base, int *cur, int cnt)
 memset ((void *)(base + offset), 0, sizeof (*(base)) * (count))

 #define array_needsize(type,base,cur,cnt,init) \
- if (expect_false ((cnt) > (cur))) \
+ if (ecb_expect_false ((cnt) > (cur))) \
 { \
 ecb_unused int ocur_ = (cur); \
 (base) = (type *)array_realloc \
@@ -2024,20 +2311,20 @@ array_realloc (int elem, void *base, int *cur, int cnt)
 /*****************************************************************************/

 /* dummy callback for pending events */
- noinline
+ ecb_noinline
 static void
 pendingcb (EV_P_ ev_prepare *w, int revents)
 {
 }

- noinline
+ ecb_noinline
 void
 ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT
 {
 W w_ = (W)w;
 int pri = ABSPRI (w_);

- if (expect_false (w_->pending))
+ if (ecb_expect_false (w_->pending))
 pendings [pri][w_->pending - 1].events |= revents;
 else
 {
@@ -2098,7 +2385,7 @@ fd_event (EV_P_ int fd, int revents)
 {
 ANFD *anfd = anfds + fd;

- if (expect_true (!anfd->reify))
+ if (ecb_expect_true (!anfd->reify))
 fd_event_nocheck (EV_A_ fd, revents);
 }

@@ -2116,8 +2403,20 @@ fd_reify (EV_P)
 {
 int i;

+ /* most backends do not modify the fdchanges list in backend_modify.
+ * except io_uring, which has fixed-size buffers which might force us
+ * to handle events in backend_modify, causing fdchanges to be amended,
+ * which could result in an endless loop.
+ * to avoid this, we do not dynamically handle fds that were added
+ * during fd_reify. that means that for those backends, fdchangecnt
+ * might be non-zero during poll, which must cause them to not block.
+ * to not put too much of a burden on other backends, this detail
+ * needs to be handled in the backend.
+ */
+ int changecnt = fdchangecnt;
+
 #if EV_SELECT_IS_WINSOCKET || EV_USE_IOCP
- for (i = 0; i < fdchangecnt; ++i)
+ for (i = 0; i < changecnt; ++i)
 {
 int fd = fdchanges [i];
 ANFD *anfd = anfds + fd;
@@ -2141,7 +2440,7 @@ fd_reify (EV_P)
 }
 #endif

- for (i = 0; i < fdchangecnt; ++i)
+ for (i = 0; i < changecnt; ++i)
 {
 int fd = fdchanges [i];
 ANFD *anfd = anfds + fd;
@@ -2152,7 +2451,7 @@ fd_reify (EV_P)

 anfd->reify = 0;

- /*if (expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */
+ /*if (ecb_expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */
 {
 anfd->events = 0;

@@ -2167,7 +2466,14 @@ fd_reify (EV_P)
 backend_modify (EV_A_ fd, o_events, anfd->events);
 }

- fdchangecnt = 0;
+ /* normally, fdchangecnt hasn't changed. if it has, then new fds have been added.
+ * this is a rare case (see beginning comment in this function), so we copy them to the
+ * front and hope the backend handles this case.
+ */
+ if (ecb_expect_false (fdchangecnt != changecnt))
+ memmove (fdchanges, fdchanges + changecnt, (fdchangecnt - changecnt) * sizeof (*fdchanges));
+
+ fdchangecnt -= changecnt;
 }

  /* something about the given fd changed */
@@ -2176,9 +2482,9 @@ void
 fd_change (EV_P_ int fd, int flags)
 {
 unsigned char reify = anfds [fd].reify;
- anfds [fd].reify |= flags;
+ anfds [fd].reify = reify | flags;

- if (expect_true (!reify))
+ if (ecb_expect_true (!reify))
 {
 ++fdchangecnt;
 array_needsize (int, fdchanges, fdchangemax, fdchangecnt, array_needsize_noinit);
@@ -2211,7 +2517,7 @@ fd_valid (int fd)
 }

 /* called on EBADF to verify fds */
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 fd_ebadf (EV_P)
 {
@@ -2224,7 +2530,7 @@ fd_ebadf (EV_P)
 }

 /* called on ENOMEM in select/poll to kill some fds and retry */
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 fd_enomem (EV_P)
 {
@@ -2239,7 +2545,7 @@ fd_enomem (EV_P)
 }

 /* usually called after fork if backend needs to re-arm all fds from scratch */
- noinline
+ ecb_noinline
 static void
 fd_rearm_all (EV_P)
 {
@@ -2303,19 +2609,19 @@ downheap (ANHE *heap, int N, int k)
 ANHE *pos = heap + DHEAP * (k - HEAP0) + HEAP0 + 1;

 /* find minimum child */
- if (expect_true (pos + DHEAP - 1 < E))
+ if (ecb_expect_true (pos + DHEAP - 1 < E))
 {
 /* fast path */ (minpos = pos + 0), (minat = ANHE_at (*minpos));
- if ( ANHE_at (pos [1]) < minat) (minpos = pos + 1), (minat = ANHE_at (*minpos));
- if ( ANHE_at (pos [2]) < minat) (minpos = pos + 2), (minat = ANHE_at (*minpos));
- if ( ANHE_at (pos [3]) < minat) (minpos = pos + 3), (minat = ANHE_at (*minpos));
+ if ( minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos));
+ if ( minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos));
+ if ( minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos));
 }
 else if (pos < E)
 {
 /* slow path */ (minpos = pos + 0), (minat = ANHE_at (*minpos));
- if (pos + 1 < E && ANHE_at (pos [1]) < minat) (minpos = pos + 1), (minat = ANHE_at (*minpos));
- if (pos + 2 < E && ANHE_at (pos [2]) < minat) (minpos = pos + 2), (minat = ANHE_at (*minpos));
- if (pos + 3 < E && ANHE_at (pos [3]) < minat) (minpos = pos + 3), (minat = ANHE_at (*minpos));
+ if (pos + 1 < E && minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos));
+ if (pos + 2 < E && minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos));
+ if (pos + 3 < E && minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos));
 }
 else
 break;
@@ -2333,7 +2639,7 @@ downheap (ANHE *heap, int N, int k)
 ev_active (ANHE_w (he)) = k;
 }

- #else /* 4HEAP */
+ #else /* not 4HEAP */

 #define HEAP0 1
 #define HPARENT(k) ((k) >> 1)
@@ -2360,7 +2666,7 @@ downheap (ANHE *heap, int N, int k)

 heap [k] = heap [c];
 ev_active (ANHE_w (heap [k])) = k;
-
+
 k = c;
 }
 
@@ -2415,7 +2721,7 @@ reheap (ANHE *heap, int N)

 /*****************************************************************************/

- /* associate signal watchers to a signal signal */
+ /* associate signal watchers to a signal */
 typedef struct
 {
 EV_ATOMIC_T pending;
@@ -2431,7 +2737,7 @@ static ANSIG signals [EV_NSIG - 1];

 #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE

- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 evpipe_init (EV_P)
 {
@@ -2482,7 +2788,7 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)
 {
 ECB_MEMORY_FENCE; /* push out the write before this function was called, acquire flag */

- if (expect_true (*flag))
+ if (ecb_expect_true (*flag))
 return;

 *flag = 1;
@@ -2569,7 +2875,7 @@ pipecb (EV_P_ ev_io *iow, int revents)
 ECB_MEMORY_FENCE;

 for (i = EV_NSIG - 1; i--; )
- if (expect_false (signals [i].pending))
+ if (ecb_expect_false (signals [i].pending))
 ev_feed_signal_event (EV_A_ i + 1);
 }
 #endif
@@ -2620,13 +2926,13 @@ ev_sighandler (int signum)
 ev_feed_signal (signum);
 }

- noinline
+ ecb_noinline
 void
 ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT
 {
 WL w;

- if (expect_false (signum <= 0 || signum >= EV_NSIG))
+ if (ecb_expect_false (signum <= 0 || signum >= EV_NSIG))
 return;

 --signum;
@@ -2635,7 +2941,7 @@ ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT
 /* it is permissible to try to feed a signal to the wrong loop */
 /* or, likely more useful, feeding a signal nobody is waiting for */

- if (expect_false (signals [signum].loop != EV_A))
+ if (ecb_expect_false (signals [signum].loop != EV_A))
 return;
 #endif

@@ -2729,6 +3035,57 @@ childcb (EV_P_ ev_signal *sw, int revents)

 /*****************************************************************************/

+ #if EV_USE_TIMERFD
+
+ static void periodics_reschedule (EV_P);
+
+ static void
+ timerfdcb (EV_P_ ev_io *iow, int revents)
+ {
+ struct itimerspec its = { 0 };
+
+ its.it_value.tv_sec = ev_rt_now + (int)MAX_BLOCKTIME2;
+ timerfd_settime (timerfd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET, &its, 0);
+
+ ev_rt_now = ev_time ();
+ /* periodics_reschedule only needs ev_rt_now */
+ /* but maybe in the future we want the full treatment. */
+ /*
+ now_floor = EV_TS_CONST (0.);
+ time_update (EV_A_ EV_TSTAMP_HUGE);
+ */
+ #if EV_PERIODIC_ENABLE
+ periodics_reschedule (EV_A);
+ #endif
+ }
+
+ ecb_noinline ecb_cold
+ static void
+ evtimerfd_init (EV_P)
+ {
+ if (!ev_is_active (&timerfd_w))
+ {
+ timerfd = timerfd_create (CLOCK_REALTIME, TFD_NONBLOCK | TFD_CLOEXEC);
+
+ if (timerfd >= 0)
+ {
+ fd_intern (timerfd); /* just to be sure */
+
+ ev_io_init (&timerfd_w, timerfdcb, timerfd, EV_READ);
+ ev_set_priority (&timerfd_w, EV_MINPRI);
+ ev_io_start (EV_A_ &timerfd_w);
+ ev_unref (EV_A); /* watcher should not keep loop alive */
+
+ /* (re-) arm timer */
+ timerfdcb (EV_A_ 0, 0);
+ }
+ }
+ }
+
+ #endif
+
+ /*****************************************************************************/
+
 #if EV_USE_IOCP
 # include "ev_iocp.c"
 #endif
@@ -2744,6 +3101,9 @@ childcb (EV_P_ ev_signal *sw, int revents)
 #if EV_USE_LINUXAIO
 # include "ev_linuxaio.c"
 #endif
+ #if EV_USE_IOURING
+ # include "ev_iouring.c"
+ #endif
 #if EV_USE_POLL
 # include "ev_poll.c"
 #endif
@@ -2781,17 +3141,14 @@ ev_supported_backends (void) EV_NOEXCEPT
 {
 unsigned int flags = 0;

- if (EV_USE_PORT ) flags |= EVBACKEND_PORT;
- if (EV_USE_KQUEUE ) flags |= EVBACKEND_KQUEUE;
- if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL;
+ if (EV_USE_PORT ) flags |= EVBACKEND_PORT;
+ if (EV_USE_KQUEUE ) flags |= EVBACKEND_KQUEUE;
+ if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL;
+ if (EV_USE_LINUXAIO && ev_linux_version () >= 0x041300) flags |= EVBACKEND_LINUXAIO; /* 4.19+ */
+ if (EV_USE_IOURING && ev_linux_version () >= 0x050601 ) flags |= EVBACKEND_IOURING; /* 5.6.1+ */
+ if (EV_USE_POLL ) flags |= EVBACKEND_POLL;
+ if (EV_USE_SELECT ) flags |= EVBACKEND_SELECT;

- #ifdef EV_USE_LINUXAIO
- if (EV_USE_LINUXAIO) flags |= EVBACKEND_LINUXAIO;
- #endif
-
- if (EV_USE_POLL ) flags |= EVBACKEND_POLL;
- if (EV_USE_SELECT ) flags |= EVBACKEND_SELECT;
-
 return flags;
 }

@@ -2801,24 +3158,27 @@ ev_recommended_backends (void) EV_NOEXCEPT
 {
 unsigned int flags = ev_supported_backends ();

- #if defined(__APPLE__) && (MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_14)
- /* apple has a poor track record but post 10.12.2 it seems to work sufficiently well */
- #elif defined(__NetBSD__)
- /* kqueue is borked on everything but netbsd apparently */
- /* it usually doesn't work correctly on anything but sockets and pipes */
- #else
+ /* apple has a poor track record but post 10.12.2 it seems to work sufficiently well */
+ #if defined(__APPLE__) && (MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_14)
 /* only select works correctly on that "unix-certified" platform */
 flags &= ~EVBACKEND_KQUEUE; /* horribly broken, even for sockets */
 flags &= ~EVBACKEND_POLL; /* poll is based on kqueue from 10.5 onwards */
+ #elif !defined(__NetBSD__)
+ /* kqueue is borked on everything but netbsd apparently */
+ /* it usually doesn't work correctly on anything but sockets and pipes */
+ flags &= ~EVBACKEND_KQUEUE;
 #endif

 #ifdef __FreeBSD__
 flags &= ~EVBACKEND_POLL; /* poll return value is unusable (http://forums.freebsd.org/archive/index.php/t-10270.html) */
 #endif

- /* TODO: linuxaio is very experimental */
- #if !EV_RECOMMEND_LINUXAIO
+ #ifdef __linux__
+ /* NOTE: linuxaio is very experimental, never recommend */
 flags &= ~EVBACKEND_LINUXAIO;
+
+ /* NOTE: io_uring is super experimental, never recommend */
+ flags &= ~EVBACKEND_IOURING;
 #endif

 return flags;
@@ -2828,12 +3188,14 @@ ecb_cold
 unsigned int
 ev_embeddable_backends (void) EV_NOEXCEPT
 {
- int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT;
+ int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT | EVBACKEND_IOURING;

 /* epoll embeddability broken on all linux versions up to at least 2.6.23 */
 if (ev_linux_version () < 0x020620) /* disable it on linux < 2.6.32 */
 flags &= ~EVBACKEND_EPOLL;

+ /* EVBACKEND_LINUXAIO is theoretically embeddable, but suffers from a performance overhead */
+
 return flags;
 }
 
@@ -2895,7 +3257,7 @@ ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)
 #endif

 /* initialise a loop structure, must be zero-initialised */
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 loop_init (EV_P_ unsigned int flags) EV_NOEXCEPT
 {
@@ -2960,6 +3322,9 @@ loop_init (EV_P_ unsigned int flags) EV_NOEXCEPT
 #if EV_USE_SIGNALFD
 sigfd = flags & EVFLAG_SIGNALFD ? -2 : -1;
 #endif
+ #if EV_USE_TIMERFD
+ timerfd = flags & EVFLAG_NOTIMERFD ? -1 : -2;
+ #endif

 if (!(flags & EVBACKEND_MASK))
 flags |= ev_recommended_backends ();
@@ -2973,6 +3338,9 @@ loop_init (EV_P_ unsigned int flags) EV_NOEXCEPT
 #if EV_USE_KQUEUE
 if (!backend && (flags & EVBACKEND_KQUEUE )) backend = kqueue_init (EV_A_ flags);
 #endif
+ #if EV_USE_IOURING
+ if (!backend && (flags & EVBACKEND_IOURING )) backend = iouring_init (EV_A_ flags);
+ #endif
 #if EV_USE_LINUXAIO
 if (!backend && (flags & EVBACKEND_LINUXAIO)) backend = linuxaio_init (EV_A_ flags);
 #endif
@@ -3010,7 +3378,7 @@ ev_loop_destroy (EV_P)

 #if EV_CLEANUP_ENABLE
 /* queue cleanup watchers (and execute them) */
- if (expect_false (cleanupcnt))
+ if (ecb_expect_false (cleanupcnt))
 {
 queue_events (EV_A_ (W *)cleanups, cleanupcnt, EV_CLEANUP);
 EV_INVOKE_PENDING;
@@ -3039,6 +3407,11 @@ ev_loop_destroy (EV_P)
 close (sigfd);
 #endif

+ #if EV_USE_TIMERFD
+ if (ev_is_active (&timerfd_w))
+ close (timerfd);
+ #endif
+
 #if EV_USE_INOTIFY
 if (fs_fd >= 0)
 close (fs_fd);
@@ -3056,6 +3429,9 @@ ev_loop_destroy (EV_P)
 #if EV_USE_KQUEUE
 if (backend == EVBACKEND_KQUEUE ) kqueue_destroy (EV_A);
 #endif
+ #if EV_USE_IOURING
+ if (backend == EVBACKEND_IOURING ) iouring_destroy (EV_A);
+ #endif
 #if EV_USE_LINUXAIO
 if (backend == EVBACKEND_LINUXAIO) linuxaio_destroy (EV_A);
 #endif
@@ -3123,6 +3499,9 @@ loop_fork (EV_P)
 #if EV_USE_KQUEUE
 if (backend == EVBACKEND_KQUEUE ) kqueue_fork (EV_A);
 #endif
+ #if EV_USE_IOURING
+ if (backend == EVBACKEND_IOURING ) iouring_fork (EV_A);
+ #endif
 #if EV_USE_LINUXAIO
 if (backend == EVBACKEND_LINUXAIO) linuxaio_fork (EV_A);
 #endif
@@ -3133,22 +3512,44 @@ loop_fork (EV_P)
 infy_fork (EV_A);
 #endif

- #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE
- if (ev_is_active (&pipe_w) && postfork != 2)
+ if (postfork != 2)
 {
- /* pipe_write_wanted must be false now, so modifying fd vars should be safe */
+ #if EV_USE_SIGNALFD
+ /* surprisingly, nothing needs to be done for signalfd, according to docs, it does the right thing on fork */
+ #endif

- ev_ref (EV_A);
- ev_io_stop (EV_A_ &pipe_w);
+ #if EV_USE_TIMERFD
+ if (ev_is_active (&timerfd_w))
+ {
+ ev_ref (EV_A);
+ ev_io_stop (EV_A_ &timerfd_w);
+
+ close (timerfd);
+ timerfd = -2;

- if (evpipe [0] >= 0)
- EV_WIN32_CLOSE_FD (evpipe [0]);
+ evtimerfd_init (EV_A);
+ /* reschedule periodics, in case we missed something */
+ ev_feed_event (EV_A_ &timerfd_w, EV_CUSTOM);
+ }
+ #endif
+
+ #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE
+ if (ev_is_active (&pipe_w))
+ {
+ /* pipe_write_wanted must be false now, so modifying fd vars should be safe */
+
+ ev_ref (EV_A);
+ ev_io_stop (EV_A_ &pipe_w);
+
+ if (evpipe [0] >= 0)
+ EV_WIN32_CLOSE_FD (evpipe [0]);

- evpipe_init (EV_A);
- /* iterate over everything, in case we missed something before */
- ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
+ evpipe_init (EV_A);
+ /* iterate over everything, in case we missed something before */
+ ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
+ }
+ #endif
 }
- #endif

 postfork = 0;
 }
@@ -3174,7 +3575,7 @@ ev_loop_new (unsigned int flags) EV_NOEXCEPT
 #endif /* multiplicity */

 #if EV_VERIFY
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 verify_watcher (EV_P_ W w)
 {
@@ -3184,7 +3585,7 @@ verify_watcher (EV_P_ W w)
 assert (("libev: pending watcher not on pending queue", pendings [ABSPRI (w)][w->pending - 1].w == w));
 }

- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 verify_heap (EV_P_ ANHE *heap, int N)
 {
@@ -3200,7 +3601,7 @@ verify_heap (EV_P_ ANHE *heap, int N)
 }
 }

- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 array_verify (EV_P_ W *ws, int cnt)
 {
@@ -3359,7 +3760,7 @@ ev_pending_count (EV_P) EV_NOEXCEPT
 return count;
 }

- noinline
+ ecb_noinline
 void
 ev_invoke_pending (EV_P)
 {
@@ -3388,7 +3789,7 @@ ev_invoke_pending (EV_P)
 inline_size void
 idle_reify (EV_P)
 {
- if (expect_false (idleall))
+ if (ecb_expect_false (idleall))
 {
 int pri;

@@ -3428,7 +3829,7 @@ timers_reify (EV_P)
 if (ev_at (w) < mn_now)
 ev_at (w) = mn_now;

- assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > 0.));
+ assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > EV_TS_CONST (0.)));

 ANHE_at_cache (timers [HEAP0]);
 downheap (timers, timercnt, HEAP0);
@@ -3447,7 +3848,7 @@ timers_reify (EV_P)

 #if EV_PERIODIC_ENABLE

- noinline
+ ecb_noinline
 static void
 periodic_recalc (EV_P_ ev_periodic *w)
 {
@@ -3460,7 +3861,7 @@ periodic_recalc (EV_P_ ev_periodic *w)
 ev_tstamp nat = at + w->interval;

 /* when resolution fails us, we use ev_rt_now */
- if (expect_false (nat == at))
+ if (ecb_expect_false (nat == at))
 {
 at = ev_rt_now;
 break;
@@ -3516,7 +3917,7 @@ periodics_reify (EV_P)

 /* simply recalculate all periodics */
 /* TODO: maybe ensure that at least one event happens when jumping forward? */
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 periodics_reschedule (EV_P)
 {
@@ -3540,7 +3941,7 @@ periodics_reschedule (EV_P)
 #endif

 /* adjust all timers by a given offset */
- noinline ecb_cold
+ ecb_noinline ecb_cold
 static void
 timers_reschedule (EV_P_ ev_tstamp adjust)
 {
@@ -3560,7 +3961,7 @@ inline_speed void
 time_update (EV_P_ ev_tstamp max_block)
 {
 #if EV_USE_MONOTONIC
- if (expect_true (have_monotonic))
+ if (ecb_expect_true (have_monotonic))
 {
 int i;
 ev_tstamp odiff = rtmn_diff;
@@ -3569,7 +3970,7 @@ time_update (EV_P_ ev_tstamp max_block)

 /* only fetch the realtime clock every 0.5*MIN_TIMEJUMP seconds */
 /* interpolate in the meantime */
- if (expect_true (mn_now - now_floor < MIN_TIMEJUMP * .5))
+ if (ecb_expect_true (mn_now - now_floor < EV_TS_CONST (MIN_TIMEJUMP * .5)))
 {
 ev_rt_now = rtmn_diff + mn_now;
 return;
@@ -3593,7 +3994,7 @@ time_update (EV_P_ ev_tstamp max_block)

 diff = odiff - rtmn_diff;

- if (expect_true ((diff < 0. ? -diff : diff) < MIN_TIMEJUMP))
+ if (ecb_expect_true ((diff < EV_TS_CONST (0.) ? -diff : diff) < EV_TS_CONST (MIN_TIMEJUMP)))
 return; /* all is well */

 ev_rt_now = ev_time ();
@@ -3612,7 +4013,7 @@ time_update (EV_P_ ev_tstamp max_block)
 {
 ev_rt_now = ev_time ();

- if (expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + MIN_TIMEJUMP))
+ if (ecb_expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + EV_TS_CONST (MIN_TIMEJUMP)))
 {
 /* adjust timers. this is easy, as the offset is the same for all of them */
 timers_reschedule (EV_A_ ev_rt_now - mn_now);
@@ -3666,8 +4067,8 @@ ev_run (EV_P_ int flags)
 #endif

 #ifndef _WIN32
- if (expect_false (curpid)) /* penalise the forking check even more */
- if (expect_false (getpid () != curpid))
+ if (ecb_expect_false (curpid)) /* penalise the forking check even more */
+ if (ecb_expect_false (getpid () != curpid))
 {
 curpid = getpid ();
 postfork = 1;
@@ -3676,7 +4077,7 @@ ev_run (EV_P_ int flags)

 #if EV_FORK_ENABLE
 /* we might have forked, so queue fork handlers */
- if (expect_false (postfork))
+ if (ecb_expect_false (postfork))
 if (forkcnt)
 {
 queue_events (EV_A_ (W *)forks, forkcnt, EV_FORK);
@@ -3686,18 +4087,18 @@ ev_run (EV_P_ int flags)

 #if EV_PREPARE_ENABLE
 /* queue prepare watchers (and execute them) */
- if (expect_false (preparecnt))
+ if (ecb_expect_false (preparecnt))
 {
 queue_events (EV_A_ (W *)prepares, preparecnt, EV_PREPARE);
 EV_INVOKE_PENDING;
 }
 #endif

- if (expect_false (loop_done))
+ if (ecb_expect_false (loop_done))
 break;

 /* we might have forked, so reify kernel state if necessary */
- if (expect_false (postfork))
+ if (ecb_expect_false (postfork))
 loop_fork (EV_A);

 /* update fd-related kernel structures */
@@ -3712,16 +4113,28 @@ ev_run (EV_P_ int flags)
 ev_tstamp prev_mn_now = mn_now;

 /* update time to cancel out callback processing overhead */
- time_update (EV_A_ 1e100);
+ time_update (EV_A_ EV_TS_CONST (EV_TSTAMP_HUGE));

 /* from now on, we want a pipe-wake-up */
 pipe_write_wanted = 1;

 ECB_MEMORY_FENCE; /* make sure pipe_write_wanted is visible before we check for potential skips */

- if (expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped)))
+ if (ecb_expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped)))
 {
- waittime = MAX_BLOCKTIME;
+ waittime = EV_TS_CONST (MAX_BLOCKTIME);
+
+ #if EV_USE_TIMERFD
+ /* sleep a lot longer when we can reliably detect timejumps */
+ if (ecb_expect_true (timerfd >= 0))
+ waittime = EV_TS_CONST (MAX_BLOCKTIME2);
+ #endif
+ #if !EV_PERIODIC_ENABLE
+ /* without periodics but with monotonic clock there is no need */
+ /* for any time jump detection, so sleep longer */
+ if (ecb_expect_true (have_monotonic))
+ waittime = EV_TS_CONST (MAX_BLOCKTIME2);
+ #endif

 if (timercnt)
 {
@@ -3738,23 +4151,28 @@ ev_run (EV_P_ int flags)
3738
4151
  #endif
3739
4152
 
3740
4153
  /* don't let timeouts decrease the waittime below timeout_blocktime */
3741
- if (expect_false (waittime < timeout_blocktime))
4154
+ if (ecb_expect_false (waittime < timeout_blocktime))
3742
4155
  waittime = timeout_blocktime;
3743
4156
 
3744
- /* at this point, we NEED to wait, so we have to ensure */
3745
- /* to pass a minimum nonzero value to the backend */
3746
- if (expect_false (waittime < backend_mintime))
3747
- waittime = backend_mintime;
4157
+ /* now there are two more special cases left, either we have
4158
+ * already-expired timers, so we should not sleep, or we have timers
4159
+ * that expire very soon, in which case we need to wait for a minimum
4160
+ * amount of time for some event loop backends.
4161
+ */
4162
+ if (ecb_expect_false (waittime < backend_mintime))
4163
+ waittime = waittime <= EV_TS_CONST (0.)
4164
+ ? EV_TS_CONST (0.)
4165
+ : backend_mintime;
3748
4166
 
3749
4167
  /* extra check because io_blocktime is commonly 0 */
3750
- if (expect_false (io_blocktime))
4168
+ if (ecb_expect_false (io_blocktime))
3751
4169
  {
3752
4170
  sleeptime = io_blocktime - (mn_now - prev_mn_now);
3753
4171
 
3754
4172
  if (sleeptime > waittime - backend_mintime)
3755
4173
  sleeptime = waittime - backend_mintime;
3756
4174
 
3757
- if (expect_true (sleeptime > 0.))
4175
+ if (ecb_expect_true (sleeptime > EV_TS_CONST (0.)))
3758
4176
  {
3759
4177
  ev_sleep (sleeptime);
3760
4178
  waittime -= sleeptime;
@@ -3825,7 +4243,6 @@ rb_thread_unsafe_dangerous_crazy_blocking_region_end(...);
3825
4243
  ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
3826
4244
  }
3827
4245
 
3828
-
3829
4246
  /* update ev_rt_now, do magic */
3830
4247
  time_update (EV_A_ waittime + sleeptime);
3831
4248
  }
@@ -3843,13 +4260,13 @@ rb_thread_unsafe_dangerous_crazy_blocking_region_end(...);
3843
4260
 
3844
4261
  #if EV_CHECK_ENABLE
3845
4262
  /* queue check watchers, to be executed first */
3846
- if (expect_false (checkcnt))
4263
+ if (ecb_expect_false (checkcnt))
3847
4264
  queue_events (EV_A_ (W *)checks, checkcnt, EV_CHECK);
3848
4265
  #endif
3849
4266
 
3850
4267
  EV_INVOKE_PENDING;
3851
4268
  }
3852
- while (expect_true (
4269
+ while (ecb_expect_true (
3853
4270
  activecnt
3854
4271
  && !loop_done
3855
4272
  && !(flags & (EVRUN_ONCE | EVRUN_NOWAIT))
@@ -3886,7 +4303,7 @@ ev_unref (EV_P) EV_NOEXCEPT
3886
4303
  void
3887
4304
  ev_now_update (EV_P) EV_NOEXCEPT
3888
4305
  {
3889
- time_update (EV_A_ 1e100);
4306
+ time_update (EV_A_ EV_TSTAMP_HUGE);
3890
4307
  }
3891
4308
 
3892
4309
  void
@@ -3923,7 +4340,7 @@ wlist_del (WL *head, WL elem)
3923
4340
  {
3924
4341
  while (*head)
3925
4342
  {
3926
- if (expect_true (*head == elem))
4343
+ if (ecb_expect_true (*head == elem))
3927
4344
  {
3928
4345
  *head = elem->next;
3929
4346
  break;
@@ -3950,7 +4367,7 @@ ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT
3950
4367
  W w_ = (W)w;
3951
4368
  int pending = w_->pending;
3952
4369
 
3953
- if (expect_true (pending))
4370
+ if (ecb_expect_true (pending))
3954
4371
  {
3955
4372
  ANPENDING *p = pendings [ABSPRI (w_)] + pending - 1;
3956
4373
  p->w = (W)&pending_w;
@@ -3987,13 +4404,13 @@ ev_stop (EV_P_ W w)
3987
4404
 
3988
4405
  /*****************************************************************************/
3989
4406
 
3990
- noinline
4407
+ ecb_noinline
3991
4408
  void
3992
4409
  ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT
3993
4410
  {
3994
4411
  int fd = w->fd;
3995
4412
 
3996
- if (expect_false (ev_is_active (w)))
4413
+ if (ecb_expect_false (ev_is_active (w)))
3997
4414
  return;
3998
4415
 
3999
4416
  assert (("libev: ev_io_start called with negative fd", fd >= 0));
@@ -4017,12 +4434,12 @@ ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT
4017
4434
  EV_FREQUENT_CHECK;
4018
4435
  }
4019
4436
 
4020
- noinline
4437
+ ecb_noinline
4021
4438
  void
4022
4439
  ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT
4023
4440
  {
4024
4441
  clear_pending (EV_A_ (W)w);
4025
- if (expect_false (!ev_is_active (w)))
4442
+ if (ecb_expect_false (!ev_is_active (w)))
4026
4443
  return;
4027
4444
 
4028
4445
  assert (("libev: ev_io_stop called with illegal fd (must stay constant after start!)", w->fd >= 0 && w->fd < anfdmax));
@@ -4040,11 +4457,11 @@ ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT
4040
4457
  EV_FREQUENT_CHECK;
4041
4458
  }
4042
4459
 
4043
- noinline
4460
+ ecb_noinline
4044
4461
  void
4045
4462
  ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT
4046
4463
  {
4047
- if (expect_false (ev_is_active (w)))
4464
+ if (ecb_expect_false (ev_is_active (w)))
4048
4465
  return;
4049
4466
 
4050
4467
  ev_at (w) += mn_now;
@@ -4065,12 +4482,12 @@ ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT
4065
4482
  /*assert (("libev: internal timer heap corruption", timers [ev_active (w)] == (WT)w));*/
4066
4483
  }
4067
4484
 
4068
- noinline
4485
+ ecb_noinline
4069
4486
  void
4070
4487
  ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT
4071
4488
  {
4072
4489
  clear_pending (EV_A_ (W)w);
4073
- if (expect_false (!ev_is_active (w)))
4490
+ if (ecb_expect_false (!ev_is_active (w)))
4074
4491
  return;
4075
4492
 
4076
4493
  EV_FREQUENT_CHECK;
@@ -4082,7 +4499,7 @@ ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT
4082
4499
 
4083
4500
  --timercnt;
4084
4501
 
4085
- if (expect_true (active < timercnt + HEAP0))
4502
+ if (ecb_expect_true (active < timercnt + HEAP0))
4086
4503
  {
4087
4504
  timers [active] = timers [timercnt + HEAP0];
4088
4505
  adjustheap (timers, timercnt, active);
@@ -4096,7 +4513,7 @@ ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT
4096
4513
  EV_FREQUENT_CHECK;
4097
4514
  }
4098
4515
 
4099
- noinline
4516
+ ecb_noinline
4100
4517
  void
4101
4518
  ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT
4102
4519
  {
@@ -4127,17 +4544,22 @@ ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT
4127
4544
  ev_tstamp
4128
4545
  ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT
4129
4546
  {
4130
- return ev_at (w) - (ev_is_active (w) ? mn_now : 0.);
4547
+ return ev_at (w) - (ev_is_active (w) ? mn_now : EV_TS_CONST (0.));
4131
4548
  }
4132
4549
 
4133
4550
  #if EV_PERIODIC_ENABLE
4134
- noinline
4551
+ ecb_noinline
4135
4552
  void
4136
4553
  ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT
4137
4554
  {
4138
- if (expect_false (ev_is_active (w)))
4555
+ if (ecb_expect_false (ev_is_active (w)))
4139
4556
  return;
4140
4557
 
4558
+ #if EV_USE_TIMERFD
4559
+ if (timerfd == -2)
4560
+ evtimerfd_init (EV_A);
4561
+ #endif
4562
+
4141
4563
  if (w->reschedule_cb)
4142
4564
  ev_at (w) = w->reschedule_cb (w, ev_rt_now);
4143
4565
  else if (w->interval)
@@ -4162,12 +4584,12 @@ ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT
4162
4584
  /*assert (("libev: internal periodic heap corruption", ANHE_w (periodics [ev_active (w)]) == (WT)w));*/
4163
4585
  }
4164
4586
 
4165
- noinline
4587
+ ecb_noinline
4166
4588
  void
4167
4589
  ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT
4168
4590
  {
4169
4591
  clear_pending (EV_A_ (W)w);
4170
- if (expect_false (!ev_is_active (w)))
4592
+ if (ecb_expect_false (!ev_is_active (w)))
4171
4593
  return;
4172
4594
 
4173
4595
  EV_FREQUENT_CHECK;
@@ -4179,7 +4601,7 @@ ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT
4179
4601
 
4180
4602
  --periodiccnt;
4181
4603
 
4182
- if (expect_true (active < periodiccnt + HEAP0))
4604
+ if (ecb_expect_true (active < periodiccnt + HEAP0))
4183
4605
  {
4184
4606
  periodics [active] = periodics [periodiccnt + HEAP0];
4185
4607
  adjustheap (periodics, periodiccnt, active);
@@ -4191,7 +4613,7 @@ ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT
4191
4613
  EV_FREQUENT_CHECK;
4192
4614
  }
4193
4615
 
4194
- noinline
4616
+ ecb_noinline
4195
4617
  void
4196
4618
  ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT
4197
4619
  {
@@ -4207,11 +4629,11 @@ ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT
4207
4629
 
4208
4630
  #if EV_SIGNAL_ENABLE
4209
4631
 
4210
- noinline
4632
+ ecb_noinline
4211
4633
  void
4212
4634
  ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT
4213
4635
  {
4214
- if (expect_false (ev_is_active (w)))
4636
+ if (ecb_expect_false (ev_is_active (w)))
4215
4637
  return;
4216
4638
 
4217
4639
  assert (("libev: ev_signal_start called with illegal signal number", w->signum > 0 && w->signum < EV_NSIG));
@@ -4290,12 +4712,12 @@ ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT
4290
4712
  EV_FREQUENT_CHECK;
4291
4713
  }
4292
4714
 
4293
- noinline
4715
+ ecb_noinline
4294
4716
  void
4295
4717
  ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT
4296
4718
  {
4297
4719
  clear_pending (EV_A_ (W)w);
4298
- if (expect_false (!ev_is_active (w)))
4720
+ if (ecb_expect_false (!ev_is_active (w)))
4299
4721
  return;
4300
4722
 
4301
4723
  EV_FREQUENT_CHECK;
@@ -4338,7 +4760,7 @@ ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT
4338
4760
  #if EV_MULTIPLICITY
4339
4761
  assert (("libev: child watchers are only supported in the default loop", loop == ev_default_loop_ptr));
4340
4762
  #endif
4341
- if (expect_false (ev_is_active (w)))
4763
+ if (ecb_expect_false (ev_is_active (w)))
4342
4764
  return;
4343
4765
 
4344
4766
  EV_FREQUENT_CHECK;
@@ -4353,7 +4775,7 @@ void
4353
4775
  ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT
4354
4776
  {
4355
4777
  clear_pending (EV_A_ (W)w);
4356
- if (expect_false (!ev_is_active (w)))
4778
+ if (ecb_expect_false (!ev_is_active (w)))
4357
4779
  return;
4358
4780
 
4359
4781
  EV_FREQUENT_CHECK;
@@ -4377,14 +4799,14 @@ ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT
4377
4799
  #define NFS_STAT_INTERVAL 30.1074891 /* for filesystems potentially failing inotify */
4378
4800
  #define MIN_STAT_INTERVAL 0.1074891
4379
4801
 
4380
- noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents);
4802
+ ecb_noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents);
4381
4803
 
4382
4804
  #if EV_USE_INOTIFY
4383
4805
 
4384
4806
  /* the * 2 is to allow for alignment padding, which for some reason is >> 8 */
4385
4807
  # define EV_INOTIFY_BUFSIZE (sizeof (struct inotify_event) * 2 + NAME_MAX)
4386
4808
 
4387
- noinline
4809
+ ecb_noinline
4388
4810
  static void
4389
4811
  infy_add (EV_P_ ev_stat *w)
4390
4812
  {
@@ -4459,7 +4881,7 @@ infy_add (EV_P_ ev_stat *w)
4459
4881
  if (ev_is_active (&w->timer)) ev_unref (EV_A);
4460
4882
  }
4461
4883
 
4462
- noinline
4884
+ ecb_noinline
4463
4885
  static void
4464
4886
  infy_del (EV_P_ ev_stat *w)
4465
4887
  {
@@ -4477,7 +4899,7 @@ infy_del (EV_P_ ev_stat *w)
4477
4899
  inotify_rm_watch (fs_fd, wd);
4478
4900
  }
4479
4901
 
4480
- noinline
4902
+ ecb_noinline
4481
4903
  static void
4482
4904
  infy_wd (EV_P_ int slot, int wd, struct inotify_event *ev)
4483
4905
  {
@@ -4633,7 +5055,7 @@ ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT
4633
5055
  w->attr.st_nlink = 1;
4634
5056
  }
4635
5057
 
4636
- noinline
5058
+ ecb_noinline
4637
5059
  static void
4638
5060
  stat_timer_cb (EV_P_ ev_timer *w_, int revents)
4639
5061
  {
@@ -4677,7 +5099,7 @@ stat_timer_cb (EV_P_ ev_timer *w_, int revents)
4677
5099
  void
4678
5100
  ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT
4679
5101
  {
4680
- if (expect_false (ev_is_active (w)))
5102
+ if (ecb_expect_false (ev_is_active (w)))
4681
5103
  return;
4682
5104
 
4683
5105
  ev_stat_stat (EV_A_ w);
@@ -4709,7 +5131,7 @@ void
4709
5131
  ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT
4710
5132
  {
4711
5133
  clear_pending (EV_A_ (W)w);
4712
- if (expect_false (!ev_is_active (w)))
5134
+ if (ecb_expect_false (!ev_is_active (w)))
4713
5135
  return;
4714
5136
 
4715
5137
  EV_FREQUENT_CHECK;
@@ -4734,7 +5156,7 @@ ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT
4734
5156
  void
4735
5157
  ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT
4736
5158
  {
4737
- if (expect_false (ev_is_active (w)))
5159
+ if (ecb_expect_false (ev_is_active (w)))
4738
5160
  return;
4739
5161
 
4740
5162
  pri_adjust (EV_A_ (W)w);
@@ -4758,7 +5180,7 @@ void
4758
5180
  ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT
4759
5181
  {
4760
5182
  clear_pending (EV_A_ (W)w);
4761
- if (expect_false (!ev_is_active (w)))
5183
+ if (ecb_expect_false (!ev_is_active (w)))
4762
5184
  return;
4763
5185
 
4764
5186
  EV_FREQUENT_CHECK;
@@ -4781,7 +5203,7 @@ ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT
4781
5203
  void
4782
5204
  ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT
4783
5205
  {
4784
- if (expect_false (ev_is_active (w)))
5206
+ if (ecb_expect_false (ev_is_active (w)))
4785
5207
  return;
4786
5208
 
4787
5209
  EV_FREQUENT_CHECK;
@@ -4797,7 +5219,7 @@ void
4797
5219
  ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT
4798
5220
  {
4799
5221
  clear_pending (EV_A_ (W)w);
4800
- if (expect_false (!ev_is_active (w)))
5222
+ if (ecb_expect_false (!ev_is_active (w)))
4801
5223
  return;
4802
5224
 
4803
5225
  EV_FREQUENT_CHECK;
@@ -4819,7 +5241,7 @@ ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT
4819
5241
  void
4820
5242
  ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT
4821
5243
  {
4822
- if (expect_false (ev_is_active (w)))
5244
+ if (ecb_expect_false (ev_is_active (w)))
4823
5245
  return;
4824
5246
 
4825
5247
  EV_FREQUENT_CHECK;
@@ -4835,7 +5257,7 @@ void
4835
5257
  ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT
4836
5258
  {
4837
5259
  clear_pending (EV_A_ (W)w);
4838
- if (expect_false (!ev_is_active (w)))
5260
+ if (ecb_expect_false (!ev_is_active (w)))
4839
5261
  return;
4840
5262
 
4841
5263
  EV_FREQUENT_CHECK;
@@ -4854,7 +5276,7 @@ ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT
4854
5276
  #endif
4855
5277
 
4856
5278
  #if EV_EMBED_ENABLE
4857
- noinline
5279
+ ecb_noinline
4858
5280
  void
4859
5281
  ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT
4860
5282
  {
@@ -4888,6 +5310,7 @@ embed_prepare_cb (EV_P_ ev_prepare *prepare, int revents)
4888
5310
  }
4889
5311
  }
4890
5312
 
5313
+ #if EV_FORK_ENABLE
4891
5314
  static void
4892
5315
  embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)
4893
5316
  {
@@ -4904,6 +5327,7 @@ embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)
4904
5327
 
4905
5328
  ev_embed_start (EV_A_ w);
4906
5329
  }
5330
+ #endif
4907
5331
 
4908
5332
  #if 0
4909
5333
  static void
@@ -4916,7 +5340,7 @@ embed_idle_cb (EV_P_ ev_idle *idle, int revents)
4916
5340
  void
4917
5341
  ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT
4918
5342
  {
4919
- if (expect_false (ev_is_active (w)))
5343
+ if (ecb_expect_false (ev_is_active (w)))
4920
5344
  return;
4921
5345
 
4922
5346
  {
@@ -4934,8 +5358,10 @@ ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT
4934
5358
  ev_set_priority (&w->prepare, EV_MINPRI);
4935
5359
  ev_prepare_start (EV_A_ &w->prepare);
4936
5360
 
5361
+ #if EV_FORK_ENABLE
4937
5362
  ev_fork_init (&w->fork, embed_fork_cb);
4938
5363
  ev_fork_start (EV_A_ &w->fork);
5364
+ #endif
4939
5365
 
4940
5366
  /*ev_idle_init (&w->idle, e,bed_idle_cb);*/
4941
5367
 
@@ -4948,14 +5374,16 @@ void
4948
5374
  ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT
4949
5375
  {
4950
5376
  clear_pending (EV_A_ (W)w);
4951
- if (expect_false (!ev_is_active (w)))
5377
+ if (ecb_expect_false (!ev_is_active (w)))
4952
5378
  return;
4953
5379
 
4954
5380
  EV_FREQUENT_CHECK;
4955
5381
 
4956
5382
  ev_io_stop (EV_A_ &w->io);
4957
5383
  ev_prepare_stop (EV_A_ &w->prepare);
5384
+ #if EV_FORK_ENABLE
4958
5385
  ev_fork_stop (EV_A_ &w->fork);
5386
+ #endif
4959
5387
 
4960
5388
  ev_stop (EV_A_ (W)w);
4961
5389
 
@@ -4967,7 +5395,7 @@ ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT
4967
5395
  void
4968
5396
  ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT
4969
5397
  {
4970
- if (expect_false (ev_is_active (w)))
5398
+ if (ecb_expect_false (ev_is_active (w)))
4971
5399
  return;
4972
5400
 
4973
5401
  EV_FREQUENT_CHECK;
@@ -4983,7 +5411,7 @@ void
4983
5411
  ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT
4984
5412
  {
4985
5413
  clear_pending (EV_A_ (W)w);
4986
- if (expect_false (!ev_is_active (w)))
5414
+ if (ecb_expect_false (!ev_is_active (w)))
4987
5415
  return;
4988
5416
 
4989
5417
  EV_FREQUENT_CHECK;
@@ -5005,7 +5433,7 @@ ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT
5005
5433
  void
5006
5434
  ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT
5007
5435
  {
5008
- if (expect_false (ev_is_active (w)))
5436
+ if (ecb_expect_false (ev_is_active (w)))
5009
5437
  return;
5010
5438
 
5011
5439
  EV_FREQUENT_CHECK;
@@ -5023,7 +5451,7 @@ void
5023
5451
  ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT
5024
5452
  {
5025
5453
  clear_pending (EV_A_ (W)w);
5026
- if (expect_false (!ev_is_active (w)))
5454
+ if (ecb_expect_false (!ev_is_active (w)))
5027
5455
  return;
5028
5456
 
5029
5457
  EV_FREQUENT_CHECK;
@@ -5046,7 +5474,7 @@ ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT
5046
5474
  void
5047
5475
  ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT
5048
5476
  {
5049
- if (expect_false (ev_is_active (w)))
5477
+ if (ecb_expect_false (ev_is_active (w)))
5050
5478
  return;
5051
5479
 
5052
5480
  w->sent = 0;
@@ -5066,7 +5494,7 @@ void
5066
5494
  ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT
5067
5495
  {
5068
5496
  clear_pending (EV_A_ (W)w);
5069
- if (expect_false (!ev_is_active (w)))
5497
+ if (ecb_expect_false (!ev_is_active (w)))
5070
5498
  return;
5071
5499
 
5072
5500
  EV_FREQUENT_CHECK;
@@ -5273,4 +5701,3 @@ ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT
5273
5701
  #if EV_MULTIPLICITY
5274
5702
  #include "ev_wrap.h"
5275
5703
  #endif
5276
-