nio4r 0.2.2-java → 0.3.0-java

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/.travis.yml CHANGED
@@ -2,6 +2,10 @@ rvm:
  - 1.8.7
  - 1.9.2
  - 1.9.3
+ - ree
  - ruby-head
- - rbx
- - jruby
+ - jruby-18mode
+ - jruby-19mode
+ - jruby-head
+ - rbx-18mode
+ - rbx-19mode
data/CHANGES.md CHANGED
@@ -1,3 +1,13 @@
+ 0.3.0
+ -----
+ * NIO::Selector#select now takes a block and behaves like select_each
+ * NIO::Selector#select_each is now deprecated and will be removed
+ * Closing monitors detaches them from their selector
+ * Java extension for JRuby
+ * Upgrade to libev 4.11
+ * Bugfixes for zero/negative select timeouts
+ * Handle OP_CONNECT properly on JRuby
+
  0.2.2
  -----
  * Raise IOError if asked to wake up a closed selector
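To make the first two entries concrete, here is a minimal sketch of the 0.3.0 block form of `NIO::Selector#select` (illustrative setup, not code from the gem; storing a callback in `monitor.value` follows the README's convention):

```ruby
require 'nio'

selector = NIO::Selector.new
reader, writer = IO.pipe

monitor = selector.register(reader, :r)
monitor.value = proc { puts "Got some data: #{reader.read_nonblock(4096)}" }

writer << "Hi there!"

# 0.2.x style, now deprecated:
#   selector.select_each { |m| m.value.call }
# 0.3.0 block form; returns the number of ready monitors:
selector.select { |m| m.value.call }

# Also new in 0.3.0: closing a monitor detaches it from its selector
monitor.close
```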
data/LICENSE.txt CHANGED
@@ -1,4 +1,4 @@
- Copyright (c) 2011 Tony Arcieri
+ Copyright (c) 2011-12 Tony Arcieri
 
  Permission is hereby granted, free of charge, to any person obtaining
  a copy of this software and associated documentation files (the
data/README.md CHANGED
@@ -24,6 +24,7 @@ Supported Platforms
  nio4r is known to work on the following Ruby implementations:
 
  * MRI/YARV 1.8.7, 1.9.2, 1.9.3
+ * REE (2011.12)
  * JRuby 1.6.x (and likely earlier versions too)
  * Rubinius 1.x/2.0
  * A pure Ruby implementation based on Kernel.select is also provided
@@ -32,7 +33,7 @@ Platform notes:
 
  * MRI/YARV and Rubinius implement nio4r with a C extension based on libev,
  which provides a high performance binding to native IO APIs
- * JRuby uses a special backend based on the high performance Java NIO subsystem
+ * JRuby uses a Java extension based on the high performance Java NIO subsystem
  * A pure Ruby implementation is also provided for Ruby implementations which
  don't implement the MRI C extension API
 
@@ -99,11 +100,12 @@ ready = selector.select(15) # Wait 15 seconds
  If a timeout occurs, ready will be nil.
 
  You can avoid allocating an array each time you call NIO::Selector#select by
- using NIO::Selector#select_each instead. This method successively yields ready
- NIO::Monitor objects:
+ passing a block to select. The block will be called for each ready monitor
+ object, with that object passed as an argument. The number of ready monitors
+ is returned as a Fixnum:
 
  ```ruby
- >> selector.select_each { |m| m.value.call }
+ >> selector.select { |m| m.value.call }
  Got some data: Hi there!
  => 1
  ```
@@ -119,9 +121,9 @@ selector.deregister(reader)
 
  Monitors provide methods which let you introspect on why a particular IO
  object was selected. These methods are not thread safe unless you are holding
- the selector lock (i.e. if you're in a #select_each callback). Only use them
- if you aren't concerned with thread safety, or you're within a #select_each
- callback:
+ the selector lock (i.e. if you're in a block passed to #select). Only use them
+ if you aren't concerned with thread safety, or you're within a #select
+ block:
 
  - ***#interests***: what this monitor is interested in (:r, :w, or :rw)
  - ***#readiness***: what the monitored IO object is ready for according to the last select operation
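For illustration, a short sketch of using these introspection methods inside a block passed to #select, where the selector lock is held (the registrations and responses here are assumptions, not an excerpt from the README):

```ruby
selector.select do |monitor|
  # Safe here: this block runs while the selector lock is held
  case monitor.readiness
  when :r  then monitor.io.read_nonblock(4096)
  when :w  then monitor.io.write_nonblock("ping")
  when :rw then monitor.io.write_nonblock(monitor.io.read_nonblock(4096))
  end
end
```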
@@ -152,9 +154,12 @@ nio4r is not a full-featured event framework like EventMachine or Cool.io.
  Instead, nio4r is the sort of thing you might write a library like that on
  top of. nio4r provides a minimal API such that individual Ruby implementers
  may choose to produce optimized versions for their platform, without having
- to maintain a large codebase. As of the time of writing, the current
- implementation is a little over 100 lines of code for both the pure Ruby and
- JRuby backends. The native extension uses approximately 500 lines of C code.
+ to maintain a large codebase.
+
+ As of the time of writing, the current implementation is
+ * ~200 lines of Ruby code
+ * ~700 lines of "custom" C code (not counting libev)
+ * ~400 lines of Java code
 
  nio4r is also not a replacement for Kinder Gentler IO (KGIO), a set of
  advanced Ruby IO APIs. At some point in the future nio4r might provide a
@@ -164,7 +169,7 @@ however this is not the case today.
  License
  -------
 
- Copyright (c) 2011 Tony Arcieri. Distributed under the MIT License. See
+ Copyright (c) 2011-12 Tony Arcieri. Distributed under the MIT License. See
  LICENSE.txt for further details.
 
  Includes libev. Copyright (C)2007-09 Marc Alexander Lehmann. Distributed under
data/Rakefile CHANGED
@@ -2,8 +2,8 @@
  require "bundler/gem_tasks"
  require "rake/clean"
 
- Dir["tasks/**/*.rake"].each { |task| load task }
+ Dir[File.expand_path("../tasks/**/*.rake", __FILE__)].each { |task| load task }
 
  task :default => %w(compile spec)
 
- CLEAN.include "**/*.o", "**/*.so", "**/*.bundle", "pkg"
+ CLEAN.include "**/*.o", "**/*.so", "**/*.bundle", "**/*.jar", "pkg", "tmp"
@@ -23,6 +23,9 @@ class EchoServer
 
  def accept
  socket = @server.accept
+ _, port, host = socket.peeraddr
+ puts "*** #{host}:#{port} connected"
+
  monitor = @selector.register(socket, :r)
  monitor.value = proc { read(socket) }
  end
@@ -30,6 +33,12 @@ class EchoServer
  def read(socket)
  data = socket.read_nonblock(4096)
  socket.write_nonblock(data)
+ rescue EOFError
+ _, port, host = socket.peeraddr
+ puts "*** #{host}:#{port} disconnected"
+
+ @selector.deregister(socket)
+ socket.close
  end
  end
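For context, the two patched methods above belong to the gem's echo server example; a self-contained sketch of the whole class looks roughly like this (the parts outside the hunks are reconstructed assumptions):

```ruby
require 'nio'
require 'socket'

class EchoServer
  def initialize(host, port)
    @selector = NIO::Selector.new
    @server = TCPServer.new(host, port)

    monitor = @selector.register(@server, :r)
    monitor.value = proc { accept }
  end

  def run
    # 0.3.0 block form: dispatch each ready monitor's stored callback
    loop { @selector.select { |monitor| monitor.value.call } }
  end

  def accept
    socket = @server.accept
    _, port, host = socket.peeraddr
    puts "*** #{host}:#{port} connected"

    monitor = @selector.register(socket, :r)
    monitor.value = proc { read(socket) }
  end

  def read(socket)
    data = socket.read_nonblock(4096)
    socket.write_nonblock(data)
  rescue EOFError
    _, port, host = socket.peeraddr
    puts "*** #{host}:#{port} disconnected"

    @selector.deregister(socket)
    socket.close
  end
end

EchoServer.new("localhost", 1234).run if $0 == __FILE__
```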
 
data/ext/libev/Changes CHANGED
@@ -1,5 +1,37 @@
  Revision history for libev, a high-performance and full-featured event loop.
 
+ TODO: ev_loop_wakeup
+ TODO: EV_STANDALONE == NO_HASSEL (do not use clock_gettime in ev_standalone)
+
+ 4.11 Sat Feb 4 19:52:39 CET 2012
+ - INCOMPATIBLE CHANGE: ev_timer_again now clears the pending status, as
+ was documented already, but not implemented in the repeating case.
+ - new compiletime symbols: EV_NO_SMP and EV_NO_THREADS.
+ - fix a race where the workaround against the epoll fork bugs
+ caused signals to not be handled anymore.
+ - correct backend_fudge for most backends, and implement a windows
+ specific workaround to avoid looping because we call both
+ select and Sleep, both with different time resolutions.
+ - document range and guarantees of ev_sleep.
+ - document reasonable ranges for periodics interval and offset.
+ - rename backend_fudge to backend_mintime to avoid future confusion :)
+ - change the default periodic reschedule function to hopefully be more
+ exact and correct even in corner cases or in the far future.
+ - do not rely on -lm anymore: use it when available but use our
+ own floor () if it is missing. This should make it easier to embed,
+ as no external libraries are required.
+ - strategically import macros from libecb and mark rarely-used functions
+ as cache-cold (saving almost 2k code size on typical amd64 setups).
+ - add Symbols.ev and Symbols.event files, that were missing.
+ - fix backend_mintime value for epoll (was 1/1024, is 1/1000 now).
+ - fix #3 "be smart about timeouts" to not "deadlock" when
+ timeout == now, also improve the section overall.
+ - avoid "AVOIDING FINISHING BEFORE RETURNING" idiom.
+ - support new EV_API_STATIC mode to make all libev symbols
+ static.
+ - supply default CFLAGS of -g -O3 with gcc when original CFLAGS
+ were empty.
+
  4.04 Wed Feb 16 09:01:51 CET 2011
  - fix two problems in the native win32 backend, where reuse of fd's
  with different underlying handles caused handles not to be removed
@@ -94,7 +126,7 @@ Revision history for libev, a high-performance and full-featured event loop.
  that this is a race condition regardless of EV_SIGNALFD.
  - backport inotify code to C89.
  - inotify file descriptors could leak into child processes.
- - ev_stat watchers could keep an errornous extra ref on the loop,
+ - ev_stat watchers could keep an erroneous extra ref on the loop,
  preventing exit when unregistering all watchers (testcases
  provided by ry@tinyclouds.org).
  - implement EV_WIN32_HANDLE_TO_FD and EV_WIN32_CLOSE_FD configuration
@@ -162,7 +194,7 @@ Revision history for libev, a high-performance and full-featured event loop.
  Malek Hadj-Ali).
  - implement ev_suspend and ev_resume.
  - new EV_CUSTOM revents flag for use by applications.
- - add documentation section about priorites.
+ - add documentation section about priorities.
  - add a glossary to the dcoumentation.
  - extend the ev_fork description slightly.
  - optimize a jump out of call_pending.
data/ext/libev/ev.c CHANGED
@@ -45,6 +45,12 @@
  # include "config.h"
  # endif
 
+ #if HAVE_FLOOR
+ # ifndef EV_USE_FLOOR
+ # define EV_USE_FLOOR 1
+ # endif
+ #endif
+
  # if HAVE_CLOCK_SYSCALL
  # ifndef EV_USE_CLOCK_SYSCALL
  # define EV_USE_CLOCK_SYSCALL 1
@@ -158,7 +164,6 @@
 
  #endif
 
- #include <math.h>
  #include <stdlib.h>
  #include <string.h>
  #include <fcntl.h>
@@ -180,7 +185,16 @@
  # include "ev.h"
  #endif
 
- EV_CPP(extern "C" {)
+ #if EV_NO_THREADS
+ # undef EV_NO_SMP
+ # define EV_NO_SMP 1
+ # undef ECB_NO_THREADS
+ # define ECB_NO_THREADS 1
+ #endif
+ #if EV_NO_SMP
+ # undef EV_NO_SMP
+ # define ECB_NO_SMP 1
+ #endif
 
  #ifndef _WIN32
  # include <sys/time.h>
@@ -234,6 +248,10 @@ EV_CPP(extern "C" {)
  # define EV_NSIG 65
  #endif
 
+ #ifndef EV_USE_FLOOR
+ # define EV_USE_FLOOR 0
+ #endif
+
  #ifndef EV_USE_CLOCK_SYSCALL
  # if __linux && __GLIBC__ >= 2
  # define EV_USE_CLOCK_SYSCALL EV_FEATURE_OS
@@ -445,14 +463,11 @@ struct signalfd_siginfo
  #endif
 
  /*
- * This is used to avoid floating point rounding problems.
- * It is added to ev_rt_now when scheduling periodics
- * to ensure progress, time-wise, even when rounding
- * errors are against us.
+ * This is used to work around floating point rounding problems.
  * This value is good at least till the year 4000.
- * Better solutions welcome.
  */
- #define TIME_EPSILON 0.0001220703125 /* 1/8192 */
+ #define MIN_INTERVAL 0.0001220703125 /* 1/2**13, good till 4000 */
+ /*#define MIN_INTERVAL 0.00000095367431640625 /* 1/2**20, good till 2200 */
 
  #define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */
  #define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */
@@ -460,23 +475,486 @@ struct signalfd_siginfo
  #define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0)
  #define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0)
 
- #if __GNUC__ >= 4
- # define expect(expr,value) __builtin_expect ((expr),(value))
- # define noinline __attribute__ ((noinline))
+ /* the following is ecb.h embedded into libev - use update_ev_c to update from an external copy */
+ /* ECB.H BEGIN */
+ /*
+ * libecb - http://software.schmorp.de/pkg/libecb
+ *
+ * Copyright (©) 2009-2012 Marc Alexander Lehmann <libecb@schmorp.de>
+ * Copyright (©) 2011 Emanuele Giaquinta
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without modifica-
+ * tion, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-
+ * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
+ * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-
+ * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+ * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-
+ * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+ #ifndef ECB_H
+ #define ECB_H
+
+ #ifdef _WIN32
+ typedef signed char int8_t;
+ typedef unsigned char uint8_t;
+ typedef signed short int16_t;
+ typedef unsigned short uint16_t;
+ typedef signed int int32_t;
+ typedef unsigned int uint32_t;
+ #if __GNUC__
+ typedef signed long long int64_t;
+ typedef unsigned long long uint64_t;
+ #else /* _MSC_VER || __BORLANDC__ */
+ typedef signed __int64 int64_t;
+ typedef unsigned __int64 uint64_t;
+ #endif
  #else
- # define expect(expr,value) (expr)
- # define noinline
- # if __STDC_VERSION__ < 199901L && __GNUC__ < 2
- # define inline
- # endif
+ #include <inttypes.h>
+ #endif
+
+ /* many compilers define _GNUC_ to some versions but then only implement
+ * what their idiot authors think are the "more important" extensions,
+ * causing enormous grief in return for some better fake benchmark numbers.
+ * or so.
+ * we try to detect these and simply assume they are not gcc - if they have
+ * an issue with that they should have done it right in the first place.
+ */
+ #ifndef ECB_GCC_VERSION
+ #if !defined(__GNUC_MINOR__) || defined(__INTEL_COMPILER) || defined(__SUNPRO_C) || defined(__SUNPRO_CC) || defined(__llvm__) || defined(__clang__)
+ #define ECB_GCC_VERSION(major,minor) 0
+ #else
+ #define ECB_GCC_VERSION(major,minor) (__GNUC__ > (major) || (__GNUC__ == (major) && __GNUC_MINOR__ >= (minor)))
+ #endif
+ #endif
+
+ /*****************************************************************************/
+
+ /* ECB_NO_THREADS - ecb is not used by multiple threads, ever */
+ /* ECB_NO_SMP - ecb might be used in multiple threads, but only on a single cpu */
+
+ #if ECB_NO_THREADS
+ # define ECB_NO_SMP 1
+ #endif
+
+ #if ECB_NO_THREADS || ECB_NO_SMP
+ #define ECB_MEMORY_FENCE do { } while (0)
+ #endif
+
+ #ifndef ECB_MEMORY_FENCE
+ #if ECB_GCC_VERSION(2,5) || defined(__INTEL_COMPILER) || (__llvm__ && __GNUC__) || __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110
+ #if __i386 || __i386__
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("lock; orb $0, -1(%%esp)" : : : "memory")
+ #define ECB_MEMORY_FENCE_ACQUIRE ECB_MEMORY_FENCE /* non-lock xchg might be enough */
+ #define ECB_MEMORY_FENCE_RELEASE do { } while (0) /* unlikely to change in future cpus */
+ #elif __amd64 || __amd64__ || __x86_64 || __x86_64__
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mfence" : : : "memory")
+ #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("lfence" : : : "memory")
+ #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("sfence") /* play safe - not needed in any current cpu */
+ #elif __powerpc__ || __ppc__ || __powerpc64__ || __ppc64__
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("sync" : : : "memory")
+ #elif defined(__ARM_ARCH_6__ ) || defined(__ARM_ARCH_6J__ ) \
+ || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6ZK__)
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mcr p15,0,%0,c7,c10,5" : : "r" (0) : "memory")
+ #elif defined(__ARM_ARCH_7__ ) || defined(__ARM_ARCH_7A__ ) \
+ || defined(__ARM_ARCH_7M__) || defined(__ARM_ARCH_7R__ )
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("dmb" : : : "memory")
+ #elif __sparc || __sparc__
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("membar #LoadStore | #LoadLoad | #StoreStore | #StoreLoad | " : : : "memory")
+ #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("membar #LoadStore | #LoadLoad" : : : "memory")
+ #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("membar #LoadStore | #StoreStore")
+ #elif defined(__s390__) || defined(__s390x__)
+ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("bcr 15,0" : : : "memory")
+ #endif
+ #endif
+ #endif
+
+ #ifndef ECB_MEMORY_FENCE
+ #if ECB_GCC_VERSION(4,4) || defined(__INTEL_COMPILER) || defined(__clang__)
+ #define ECB_MEMORY_FENCE __sync_synchronize ()
+ /*#define ECB_MEMORY_FENCE_ACQUIRE ({ char dummy = 0; __sync_lock_test_and_set (&dummy, 1); }) */
+ /*#define ECB_MEMORY_FENCE_RELEASE ({ char dummy = 1; __sync_lock_release (&dummy ); }) */
+ #elif _MSC_VER >= 1400 /* VC++ 2005 */
+ #pragma intrinsic(_ReadBarrier,_WriteBarrier,_ReadWriteBarrier)
+ #define ECB_MEMORY_FENCE _ReadWriteBarrier ()
+ #define ECB_MEMORY_FENCE_ACQUIRE _ReadWriteBarrier () /* according to msdn, _ReadBarrier is not a load fence */
+ #define ECB_MEMORY_FENCE_RELEASE _WriteBarrier ()
+ #elif defined(_WIN32)
+ #include <WinNT.h>
+ #define ECB_MEMORY_FENCE MemoryBarrier () /* actually just xchg on x86... scary */
+ #elif __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110
+ #include <mbarrier.h>
+ #define ECB_MEMORY_FENCE __machine_rw_barrier ()
+ #define ECB_MEMORY_FENCE_ACQUIRE __machine_r_barrier ()
+ #define ECB_MEMORY_FENCE_RELEASE __machine_w_barrier ()
+ #endif
+ #endif
+
+ #ifndef ECB_MEMORY_FENCE
+ #if !ECB_AVOID_PTHREADS
+ /*
+ * if you get undefined symbol references to pthread_mutex_lock,
+ * or failure to find pthread.h, then you should implement
+ * the ECB_MEMORY_FENCE operations for your cpu/compiler
+ * OR provide pthread.h and link against the posix thread library
+ * of your system.
+ */
+ #include <pthread.h>
+ #define ECB_NEEDS_PTHREADS 1
+ #define ECB_MEMORY_FENCE_NEEDS_PTHREADS 1
+
+ static pthread_mutex_t ecb_mf_lock = PTHREAD_MUTEX_INITIALIZER;
+ #define ECB_MEMORY_FENCE do { pthread_mutex_lock (&ecb_mf_lock); pthread_mutex_unlock (&ecb_mf_lock); } while (0)
+ #endif
+ #endif
+
+ #if !defined(ECB_MEMORY_FENCE_ACQUIRE) && defined(ECB_MEMORY_FENCE)
+ #define ECB_MEMORY_FENCE_ACQUIRE ECB_MEMORY_FENCE
+ #endif
+
+ #if !defined(ECB_MEMORY_FENCE_RELEASE) && defined(ECB_MEMORY_FENCE)
+ #define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE
+ #endif
+
+ /*****************************************************************************/
+
+ #define ECB_C99 (__STDC_VERSION__ >= 199901L)
+
+ #if __cplusplus
+ #define ecb_inline static inline
+ #elif ECB_GCC_VERSION(2,5)
+ #define ecb_inline static __inline__
+ #elif ECB_C99
+ #define ecb_inline static inline
+ #else
+ #define ecb_inline static
+ #endif
+
+ #if ECB_GCC_VERSION(3,3)
+ #define ecb_restrict __restrict__
+ #elif ECB_C99
+ #define ecb_restrict restrict
+ #else
+ #define ecb_restrict
+ #endif
+
+ typedef int ecb_bool;
+
+ #define ECB_CONCAT_(a, b) a ## b
+ #define ECB_CONCAT(a, b) ECB_CONCAT_(a, b)
+ #define ECB_STRINGIFY_(a) # a
+ #define ECB_STRINGIFY(a) ECB_STRINGIFY_(a)
+
+ #define ecb_function_ ecb_inline
+
+ #if ECB_GCC_VERSION(3,1)
+ #define ecb_attribute(attrlist) __attribute__(attrlist)
+ #define ecb_is_constant(expr) __builtin_constant_p (expr)
+ #define ecb_expect(expr,value) __builtin_expect ((expr),(value))
+ #define ecb_prefetch(addr,rw,locality) __builtin_prefetch (addr, rw, locality)
+ #else
+ #define ecb_attribute(attrlist)
+ #define ecb_is_constant(expr) 0
+ #define ecb_expect(expr,value) (expr)
+ #define ecb_prefetch(addr,rw,locality)
+ #endif
+
+ /* no emulation for ecb_decltype */
+ #if ECB_GCC_VERSION(4,5)
+ #define ecb_decltype(x) __decltype(x)
+ #elif ECB_GCC_VERSION(3,0)
+ #define ecb_decltype(x) __typeof(x)
+ #endif
+
+ #define ecb_noinline ecb_attribute ((__noinline__))
+ #define ecb_noreturn ecb_attribute ((__noreturn__))
+ #define ecb_unused ecb_attribute ((__unused__))
+ #define ecb_const ecb_attribute ((__const__))
+ #define ecb_pure ecb_attribute ((__pure__))
+
+ #if ECB_GCC_VERSION(4,3)
+ #define ecb_artificial ecb_attribute ((__artificial__))
+ #define ecb_hot ecb_attribute ((__hot__))
+ #define ecb_cold ecb_attribute ((__cold__))
+ #else
+ #define ecb_artificial
+ #define ecb_hot
+ #define ecb_cold
+ #endif
+
+ /* put around conditional expressions if you are very sure that the */
+ /* expression is mostly true or mostly false. note that these return */
+ /* booleans, not the expression. */
+ #define ecb_expect_false(expr) ecb_expect (!!(expr), 0)
+ #define ecb_expect_true(expr) ecb_expect (!!(expr), 1)
+ /* for compatibility to the rest of the world */
+ #define ecb_likely(expr) ecb_expect_true (expr)
+ #define ecb_unlikely(expr) ecb_expect_false (expr)
+
+ /* count trailing zero bits and count # of one bits */
+ #if ECB_GCC_VERSION(3,4)
+ /* we assume int == 32 bit, long == 32 or 64 bit and long long == 64 bit */
+ #define ecb_ld32(x) (__builtin_clz (x) ^ 31)
+ #define ecb_ld64(x) (__builtin_clzll (x) ^ 63)
+ #define ecb_ctz32(x) __builtin_ctz (x)
+ #define ecb_ctz64(x) __builtin_ctzll (x)
+ #define ecb_popcount32(x) __builtin_popcount (x)
+ /* no popcountll */
+ #else
+ ecb_function_ int ecb_ctz32 (uint32_t x) ecb_const;
+ ecb_function_ int
+ ecb_ctz32 (uint32_t x)
+ {
+ int r = 0;
+
+ x &= ~x + 1; /* this isolates the lowest bit */
+
+ #if ECB_branchless_on_i386
+ r += !!(x & 0xaaaaaaaa) << 0;
+ r += !!(x & 0xcccccccc) << 1;
+ r += !!(x & 0xf0f0f0f0) << 2;
+ r += !!(x & 0xff00ff00) << 3;
+ r += !!(x & 0xffff0000) << 4;
+ #else
+ if (x & 0xaaaaaaaa) r += 1;
+ if (x & 0xcccccccc) r += 2;
+ if (x & 0xf0f0f0f0) r += 4;
+ if (x & 0xff00ff00) r += 8;
+ if (x & 0xffff0000) r += 16;
+ #endif
+
+ return r;
+ }
+
+ ecb_function_ int ecb_ctz64 (uint64_t x) ecb_const;
+ ecb_function_ int
+ ecb_ctz64 (uint64_t x)
+ {
+ int shift = x & 0xffffffffU ? 0 : 32;
+ return ecb_ctz32 (x >> shift) + shift;
+ }
+
+ ecb_function_ int ecb_popcount32 (uint32_t x) ecb_const;
+ ecb_function_ int
+ ecb_popcount32 (uint32_t x)
+ {
+ x -= (x >> 1) & 0x55555555;
+ x = ((x >> 2) & 0x33333333) + (x & 0x33333333);
+ x = ((x >> 4) + x) & 0x0f0f0f0f;
+ x *= 0x01010101;
+
+ return x >> 24;
+ }
+
+ ecb_function_ int ecb_ld32 (uint32_t x) ecb_const;
+ ecb_function_ int ecb_ld32 (uint32_t x)
+ {
+ int r = 0;
+
+ if (x >> 16) { x >>= 16; r += 16; }
+ if (x >> 8) { x >>= 8; r += 8; }
+ if (x >> 4) { x >>= 4; r += 4; }
+ if (x >> 2) { x >>= 2; r += 2; }
+ if (x >> 1) { r += 1; }
+
+ return r;
+ }
+
+ ecb_function_ int ecb_ld64 (uint64_t x) ecb_const;
+ ecb_function_ int ecb_ld64 (uint64_t x)
+ {
+ int r = 0;
+
+ if (x >> 32) { x >>= 32; r += 32; }
+
+ return r + ecb_ld32 (x);
+ }
+ #endif
+
+ ecb_function_ uint8_t ecb_bitrev8 (uint8_t x) ecb_const;
+ ecb_function_ uint8_t ecb_bitrev8 (uint8_t x)
+ {
+ return ( (x * 0x0802U & 0x22110U)
+ | (x * 0x8020U & 0x88440U)) * 0x10101U >> 16;
+ }
+
+ ecb_function_ uint16_t ecb_bitrev16 (uint16_t x) ecb_const;
+ ecb_function_ uint16_t ecb_bitrev16 (uint16_t x)
+ {
+ x = ((x >> 1) & 0x5555) | ((x & 0x5555) << 1);
+ x = ((x >> 2) & 0x3333) | ((x & 0x3333) << 2);
+ x = ((x >> 4) & 0x0f0f) | ((x & 0x0f0f) << 4);
+ x = ( x >> 8 ) | ( x << 8);
+
+ return x;
+ }
+
+ ecb_function_ uint32_t ecb_bitrev32 (uint32_t x) ecb_const;
+ ecb_function_ uint32_t ecb_bitrev32 (uint32_t x)
+ {
+ x = ((x >> 1) & 0x55555555) | ((x & 0x55555555) << 1);
+ x = ((x >> 2) & 0x33333333) | ((x & 0x33333333) << 2);
+ x = ((x >> 4) & 0x0f0f0f0f) | ((x & 0x0f0f0f0f) << 4);
+ x = ((x >> 8) & 0x00ff00ff) | ((x & 0x00ff00ff) << 8);
+ x = ( x >> 16 ) | ( x << 16);
+
+ return x;
+ }
+
+ /* popcount64 is only available on 64 bit cpus as gcc builtin */
+ /* so for this version we are lazy */
+ ecb_function_ int ecb_popcount64 (uint64_t x) ecb_const;
+ ecb_function_ int
+ ecb_popcount64 (uint64_t x)
+ {
+ return ecb_popcount32 (x) + ecb_popcount32 (x >> 32);
+ }
+
+ ecb_inline uint8_t ecb_rotl8 (uint8_t x, unsigned int count) ecb_const;
+ ecb_inline uint8_t ecb_rotr8 (uint8_t x, unsigned int count) ecb_const;
+ ecb_inline uint16_t ecb_rotl16 (uint16_t x, unsigned int count) ecb_const;
+ ecb_inline uint16_t ecb_rotr16 (uint16_t x, unsigned int count) ecb_const;
+ ecb_inline uint32_t ecb_rotl32 (uint32_t x, unsigned int count) ecb_const;
+ ecb_inline uint32_t ecb_rotr32 (uint32_t x, unsigned int count) ecb_const;
+ ecb_inline uint64_t ecb_rotl64 (uint64_t x, unsigned int count) ecb_const;
+ ecb_inline uint64_t ecb_rotr64 (uint64_t x, unsigned int count) ecb_const;
+
+ ecb_inline uint8_t ecb_rotl8 (uint8_t x, unsigned int count) { return (x >> ( 8 - count)) | (x << count); }
+ ecb_inline uint8_t ecb_rotr8 (uint8_t x, unsigned int count) { return (x << ( 8 - count)) | (x >> count); }
+ ecb_inline uint16_t ecb_rotl16 (uint16_t x, unsigned int count) { return (x >> (16 - count)) | (x << count); }
+ ecb_inline uint16_t ecb_rotr16 (uint16_t x, unsigned int count) { return (x << (16 - count)) | (x >> count); }
+ ecb_inline uint32_t ecb_rotl32 (uint32_t x, unsigned int count) { return (x >> (32 - count)) | (x << count); }
+ ecb_inline uint32_t ecb_rotr32 (uint32_t x, unsigned int count) { return (x << (32 - count)) | (x >> count); }
+ ecb_inline uint64_t ecb_rotl64 (uint64_t x, unsigned int count) { return (x >> (64 - count)) | (x << count); }
+ ecb_inline uint64_t ecb_rotr64 (uint64_t x, unsigned int count) { return (x << (64 - count)) | (x >> count); }
+
+ #if ECB_GCC_VERSION(4,3)
+ #define ecb_bswap16(x) (__builtin_bswap32 (x) >> 16)
+ #define ecb_bswap32(x) __builtin_bswap32 (x)
+ #define ecb_bswap64(x) __builtin_bswap64 (x)
+ #else
+ ecb_function_ uint16_t ecb_bswap16 (uint16_t x) ecb_const;
+ ecb_function_ uint16_t
+ ecb_bswap16 (uint16_t x)
+ {
+ return ecb_rotl16 (x, 8);
+ }
+
+ ecb_function_ uint32_t ecb_bswap32 (uint32_t x) ecb_const;
+ ecb_function_ uint32_t
+ ecb_bswap32 (uint32_t x)
+ {
+ return (((uint32_t)ecb_bswap16 (x)) << 16) | ecb_bswap16 (x >> 16);
+ }
+
+ ecb_function_ uint64_t ecb_bswap64 (uint64_t x) ecb_const;
+ ecb_function_ uint64_t
+ ecb_bswap64 (uint64_t x)
+ {
+ return (((uint64_t)ecb_bswap32 (x)) << 32) | ecb_bswap32 (x >> 32);
+ }
+ #endif
+
+ #if ECB_GCC_VERSION(4,5)
+ #define ecb_unreachable() __builtin_unreachable ()
+ #else
+ /* this seems to work fine, but gcc always emits a warning for it :/ */
+ ecb_inline void ecb_unreachable (void) ecb_noreturn;
+ ecb_inline void ecb_unreachable (void) { }
+ #endif
+
+ /* try to tell the compiler that some condition is definitely true */
+ #define ecb_assume(cond) do { if (!(cond)) ecb_unreachable (); } while (0)
+
+ ecb_inline unsigned char ecb_byteorder_helper (void) ecb_const;
+ ecb_inline unsigned char
+ ecb_byteorder_helper (void)
+ {
+ const uint32_t u = 0x11223344;
+ return *(unsigned char *)&u;
+ }
+
+ ecb_inline ecb_bool ecb_big_endian (void) ecb_const;
+ ecb_inline ecb_bool ecb_big_endian (void) { return ecb_byteorder_helper () == 0x11; }
+ ecb_inline ecb_bool ecb_little_endian (void) ecb_const;
+ ecb_inline ecb_bool ecb_little_endian (void) { return ecb_byteorder_helper () == 0x44; }
+
+ #if ECB_GCC_VERSION(3,0) || ECB_C99
+ #define ecb_mod(m,n) ((m) % (n) + ((m) % (n) < 0 ? (n) : 0))
+ #else
+ #define ecb_mod(m,n) ((m) < 0 ? ((n) - 1 - ((-1 - (m)) % (n))) : ((m) % (n)))
+ #endif
+
+ #if __cplusplus
+ template<typename T>
+ static inline T ecb_div_rd (T val, T div)
+ {
+ return val < 0 ? - ((-val + div - 1) / div) : (val ) / div;
+ }
+ template<typename T>
+ static inline T ecb_div_ru (T val, T div)
+ {
+ return val < 0 ? - ((-val ) / div) : (val + div - 1) / div;
+ }
+ #else
+ #define ecb_div_rd(val,div) ((val) < 0 ? - ((-(val) + (div) - 1) / (div)) : ((val) ) / (div))
+ #define ecb_div_ru(val,div) ((val) < 0 ? - ((-(val) ) / (div)) : ((val) + (div) - 1) / (div))
+ #endif
+
+ #if ecb_cplusplus_does_not_suck
+ /* does not work for local types (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2657.htm) */
+ template<typename T, int N>
+ static inline int ecb_array_length (const T (&arr)[N])
+ {
+ return N;
+ }
+ #else
+ #define ecb_array_length(name) (sizeof (name) / sizeof (name [0]))
+ #endif
+
+ #endif
+
+ /* ECB.H END */
+
+ #if ECB_MEMORY_FENCE_NEEDS_PTHREADS
+ /* if your architecture doesn't need memory fences, e.g. because it is
+ * single-cpu/core, or if you use libev in a project that doesn't use libev
+ * from multiple threads, then you can define ECB_AVOID_PTHREADS when compiling
+ * libev, in which cases the memory fences become nops.
+ * alternatively, you can remove this #error and link against libpthread,
+ * which will then provide the memory fences.
+ */
+ # error "memory fences not defined for your architecture, please report"
+ #endif
+
+ #ifndef ECB_MEMORY_FENCE
+ # define ECB_MEMORY_FENCE do { } while (0)
+ # define ECB_MEMORY_FENCE_ACQUIRE ECB_MEMORY_FENCE
+ # define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE
  #endif
 
- #define expect_false(expr) expect ((expr) != 0, 0)
- #define expect_true(expr) expect ((expr) != 0, 1)
- #define inline_size static inline
+ #define expect_false(cond) ecb_expect_false (cond)
+ #define expect_true(cond) ecb_expect_true (cond)
+ #define noinline ecb_noinline
+
+ #define inline_size ecb_inline
 
  #if EV_FEATURE_CODE
- # define inline_speed static inline
+ # define inline_speed ecb_inline
  #else
  # define inline_speed static noinline
  #endif
@@ -525,11 +1003,59 @@ static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work?
 
  /*****************************************************************************/
 
+ /* define a suitable floor function (only used by periodics atm) */
+
+ #if EV_USE_FLOOR
+ # include <math.h>
+ # define ev_floor(v) floor (v)
+ #else
+
+ #include <float.h>
+
+ /* a floor() replacement function, should be independent of ev_tstamp type */
+ static ev_tstamp noinline
+ ev_floor (ev_tstamp v)
+ {
+ /* the choice of shift factor is not terribly important */
+ #if FLT_RADIX != 2 /* assume FLT_RADIX == 10 */
+ const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 10000000000000000000. : 1000000000.;
+ #else
+ const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 18446744073709551616. : 4294967296.;
+ #endif
+
+ /* argument too large for an unsigned long? */
+ if (expect_false (v >= shift))
+ {
+ ev_tstamp f;
+
+ if (v == v - 1.)
+ return v; /* very large number */
+
+ f = shift * ev_floor (v * (1. / shift));
+ return f + ev_floor (v - f);
+ }
+
+ /* special treatment for negative args? */
+ if (expect_false (v < 0.))
+ {
+ ev_tstamp f = -ev_floor (-v);
+
+ return f - (f == v ? 0 : 1);
+ }
+
+ /* fits into an unsigned long */
+ return (unsigned long)v;
+ }
+
+ #endif
+
+ /*****************************************************************************/
+
  #ifdef __linux
  # include <sys/utsname.h>
  #endif
 
- static unsigned int noinline
+ static unsigned int noinline ecb_cold
  ev_linux_version (void)
  {
  #ifdef __linux
@@ -568,7 +1094,7 @@ ev_linux_version (void)
  /*****************************************************************************/
 
  #if EV_AVOID_STDIO
- static void noinline
+ static void noinline ecb_cold
  ev_printerr (const char *msg)
  {
  write (STDERR_FILENO, msg, strlen (msg));
@@ -577,13 +1103,13 @@ ev_printerr (const char *msg)
 
  static void (*syserr_cb)(const char *msg);
 
- void
+ void ecb_cold
  ev_set_syserr_cb (void (*cb)(const char *msg))
  {
  syserr_cb = cb;
  }
 
- static void noinline
+ static void noinline ecb_cold
  ev_syserr (const char *msg)
  {
  if (!msg)
@@ -626,7 +1152,7 @@ ev_realloc_emul (void *ptr, long size)
 
  static void *(*alloc)(void *ptr, long size) = ev_realloc_emul;
 
- void
+ void ecb_cold
  ev_set_allocator (void *(*cb)(void *ptr, long size))
  {
  alloc = cb;
@@ -725,11 +1251,11 @@ typedef struct
  #include "ev_wrap.h"
 
  static struct ev_loop default_loop_struct;
- struct ev_loop *ev_default_loop_ptr;
+ EV_API_DECL struct ev_loop *ev_default_loop_ptr = 0; /* needs to be initialised to make it a definition despite extern */
 
  #else
 
- ev_tstamp ev_rt_now;
+ EV_API_DECL ev_tstamp ev_rt_now = 0; /* needs to be initialised to make it a definition despite extern */
  #define VAR(name,decl) static decl;
  #include "ev_vars.h"
  #undef VAR
@@ -818,14 +1344,6 @@ ev_sleep (ev_tstamp delay)
  }
  }
 
- inline_speed int
- ev_timeout_to_ms (ev_tstamp timeout)
- {
- int ms = timeout * 1000. + .999999;
-
- return expect_true (ms) ? ms : timeout < 1e-6 ? 0 : 1;
- }
-
  /*****************************************************************************/
 
  #define MALLOC_ROUND 4096 /* prefer to allocate in chunks of this size, must be 2**n and >> 4 longs */
@@ -841,7 +1359,7 @@ array_nextsize (int elem, int cur, int cnt)
  ncur <<= 1;
  while (cnt > ncur);
 
- /* if size is large, round to MALLOC_ROUND - 4 * longs to accomodate malloc overhead */
+ /* if size is large, round to MALLOC_ROUND - 4 * longs to accommodate malloc overhead */
  if (elem * ncur > MALLOC_ROUND - sizeof (void *) * 4)
  {
  ncur *= elem;
@@ -853,7 +1371,7 @@ array_nextsize (int elem, int cur, int cnt)
  return ncur;
  }
 
- static noinline void *
+ static void * noinline ecb_cold
  array_realloc (int elem, void *base, int *cur, int cnt)
  {
  *cur = array_nextsize (elem, *cur, cnt);
@@ -866,7 +1384,7 @@ array_realloc (int elem, void *base, int *cur, int cnt)
  #define array_needsize(type,base,cur,cnt,init) \
  if (expect_false ((cnt) > (cur))) \
  { \
- int ocur_ = (cur); \
+ int ecb_unused ocur_ = (cur); \
  (base) = (type *)array_realloc \
  (sizeof (type), (base), &(cur), (cnt)); \
  init ((base) + (ocur_), (cur) - ocur_); \
@@ -982,7 +1500,7 @@ fd_reify (EV_P)
  int fd = fdchanges [i];
  ANFD *anfd = anfds + fd;
 
- if (anfd->reify & EV__IOFDSET)
+ if (anfd->reify & EV__IOFDSET && anfd->head)
  {
  SOCKET handle = EV_FD_TO_WIN32_HANDLE (fd);
 
@@ -1046,7 +1564,7 @@ fd_change (EV_P_ int fd, int flags)
  }
 
  /* the given fd is invalid/unusable, so make sure it doesn't hurt us anymore */
- inline_speed void
+ inline_speed void ecb_cold
  fd_kill (EV_P_ int fd)
  {
  ev_io *w;
@@ -1059,7 +1577,7 @@ fd_kill (EV_P_ int fd)
  }
 
  /* check whether the given fd is actually valid, for error recovery */
- inline_size int
+ inline_size int ecb_cold
  fd_valid (int fd)
  {
  #ifdef _WIN32
@@ -1070,7 +1588,7 @@ fd_valid (int fd)
  }
 
  /* called on EBADF to verify fds */
- static void noinline
+ static void noinline ecb_cold
  fd_ebadf (EV_P)
  {
  int fd;
@@ -1082,7 +1600,7 @@ fd_ebadf (EV_P)
  }
 
  /* called on ENOMEM in select/poll to kill some fds and retry */
- static void noinline
+ static void noinline ecb_cold
  fd_enomem (EV_P)
  {
  int fd;
@@ -1287,7 +1805,7 @@ static ANSIG signals [EV_NSIG - 1];
 
  #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE
 
- static void noinline
+ static void noinline ecb_cold
  evpipe_init (EV_P)
  {
  if (!ev_is_active (&pipe_w))
@@ -1319,15 +1837,27 @@ evpipe_init (EV_P)
  }
  }
 
- inline_size void
+ inline_speed void
  evpipe_write (EV_P_ EV_ATOMIC_T *flag)
  {
- if (!*flag)
+ if (expect_true (*flag))
+ return;
+
+ *flag = 1;
+
+ ECB_MEMORY_FENCE_RELEASE; /* make sure flag is visible before the wakeup */
+
+ pipe_write_skipped = 1;
+
+ ECB_MEMORY_FENCE; /* make sure pipe_write_skipped is visible before we check pipe_write_wanted */
+
+ if (pipe_write_wanted)
  {
- int old_errno = errno; /* save errno because write might clobber it */
- char dummy;
+ int old_errno;
 
- *flag = 1;
+ pipe_write_skipped = 0; /* just an optimisation, no fence needed */
+
+ old_errno = errno; /* save errno because write will clobber it */
 
  #if EV_USE_EVENTFD
  if (evfd >= 0)
@@ -1337,12 +1867,14 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)
  }
  else
  #endif
- /* win32 people keep sending patches that change this write() to send() */
- /* and then run away. but send() is wrong, it wants a socket handle on win32 */
- /* so when you think this write should be a send instead, please find out */
- /* where your send() is from - it's definitely not the microsoft send, and */
- /* tell me. thank you. */
- write (evpipe [1], &dummy, 1);
+ {
+ /* win32 people keep sending patches that change this write() to send() */
+ /* and then run away. but send() is wrong, it wants a socket handle on win32 */
+ /* so when you think this write should be a send instead, please find out */
+ /* where your send() is from - it's definitely not the microsoft send, and */
+ /* tell me. thank you. */
+ write (evpipe [1], &(evpipe [1]), 1);
+ }
 
  errno = old_errno;
  }
@@ -1355,20 +1887,25 @@ pipecb (EV_P_ ev_io *iow, int revents)
  {
  int i;
 
- #if EV_USE_EVENTFD
- if (evfd >= 0)
+ if (revents & EV_READ)
  {
- uint64_t counter;
- read (evfd, &counter, sizeof (uint64_t));
- }
- else
+ #if EV_USE_EVENTFD
+ if (evfd >= 0)
+ {
+ uint64_t counter;
+ read (evfd, &counter, sizeof (uint64_t));
+ }
+ else
  #endif
- {
- char dummy;
- /* see discussion in evpipe_write when you think this read should be recv in win32 */
- read (evpipe [0], &dummy, 1);
+ {
+ char dummy;
+ /* see discussion in evpipe_write when you think this read should be recv in win32 */
+ read (evpipe [0], &dummy, 1);
+ }
  }
 
+ pipe_write_skipped = 0;
+
  #if EV_SIGNAL_ENABLE
  if (sig_pending)
  {
@@ -1407,6 +1944,9 @@ ev_feed_signal (int signum)
  return;
  #endif
 
+ if (!ev_active (&pipe_w))
+ return;
+
  signals [signum - 1].pending = 1;
  evpipe_write (EV_A_ &sig_pending);
  }
@@ -1547,20 +2087,20 @@ childcb (EV_P_ ev_signal *sw, int revents)
  # include "ev_select.c"
  #endif
 
- int
+ int ecb_cold
  ev_version_major (void)
  {
  return EV_VERSION_MAJOR;
  }
 
- int
+ int ecb_cold
  ev_version_minor (void)
  {
  return EV_VERSION_MINOR;
  }
 
  /* return true if we are running with elevated privileges and should ignore env variables */
- int inline_size
+ int inline_size ecb_cold
  enable_secure (void)
  {
  #ifdef _WIN32
@@ -1571,7 +2111,7 @@ enable_secure (void)
  #endif
  }
 
- unsigned int
+ unsigned int ecb_cold
  ev_supported_backends (void)
  {
  unsigned int flags = 0;
@@ -1585,7 +2125,7 @@ ev_supported_backends (void)
  return flags;
  }
 
- unsigned int
+ unsigned int ecb_cold
  ev_recommended_backends (void)
  {
  unsigned int flags = ev_supported_backends ();
@@ -1607,7 +2147,7 @@ ev_recommended_backends (void)
  return flags;
  }
 
- unsigned int
+ unsigned int ecb_cold
  ev_embeddable_backends (void)
  {
  int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT;
@@ -1662,12 +2202,14 @@ ev_userdata (EV_P)
  return userdata;
  }
 
- void ev_set_invoke_pending_cb (EV_P_ void (*invoke_pending_cb)(EV_P))
+ void
+ ev_set_invoke_pending_cb (EV_P_ void (*invoke_pending_cb)(EV_P))
  {
  invoke_cb = invoke_pending_cb;
  }
 
- void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P), void (*acquire)(EV_P))
+ void
+ ev_set_loop_release_cb (EV_P_ void (*release)(EV_P), void (*acquire)(EV_P))
  {
  release_cb = release;
  acquire_cb = acquire;
@@ -1675,7 +2217,7 @@ void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P), void (*acquire)(EV_P))
  #endif
 
  /* initialise a loop structure, must be zero-initialised */
- static void noinline
+ static void noinline ecb_cold
  loop_init (EV_P_ unsigned int flags)
  {
  if (!backend)
@@ -1713,27 +2255,29 @@ loop_init (EV_P_ unsigned int flags)
  && getenv ("LIBEV_FLAGS"))
  flags = atoi (getenv ("LIBEV_FLAGS"));
 
- ev_rt_now = ev_time ();
- mn_now = get_clock ();
- now_floor = mn_now;
- rtmn_diff = ev_rt_now - mn_now;
+ ev_rt_now = ev_time ();
+ mn_now = get_clock ();
+ now_floor = mn_now;
+ rtmn_diff = ev_rt_now - mn_now;
  #if EV_FEATURE_API
- invoke_cb = ev_invoke_pending;
+ invoke_cb = ev_invoke_pending;
  #endif
 
- io_blocktime = 0.;
- timeout_blocktime = 0.;
- backend = 0;
- backend_fd = -1;
- sig_pending = 0;
+ io_blocktime = 0.;
+ timeout_blocktime = 0.;
+ backend = 0;
+ backend_fd = -1;
+ sig_pending = 0;
  #if EV_ASYNC_ENABLE
- async_pending = 0;
+ async_pending = 0;
  #endif
+ pipe_write_skipped = 0;
+ pipe_write_wanted = 0;
  #if EV_USE_INOTIFY
- fs_fd = flags & EVFLAG_NOINOTIFY ? -1 : -2;
+ fs_fd = flags & EVFLAG_NOINOTIFY ? -1 : -2;
  #endif
  #if EV_USE_SIGNALFD
- sigfd = flags & EVFLAG_SIGNALFD ? -2 : -1;
+ sigfd = flags & EVFLAG_SIGNALFD ? -2 : -1;
  #endif
 
  if (!(flags & EVBACKEND_MASK))
@@ -1768,7 +2312,7 @@ loop_init (EV_P_ unsigned int flags)
  }
 
  /* free up a loop structure */
- void
+ void ecb_cold
  ev_loop_destroy (EV_P)
  {
  int i;
@@ -1908,12 +2452,7 @@ loop_fork (EV_P)
 
  if (ev_is_active (&pipe_w))
  {
- /* this "locks" the handlers against writing to the pipe */
- /* while we modify the fd vars */
- sig_pending = 1;
- #if EV_ASYNC_ENABLE
- async_pending = 1;
- #endif
+ /* pipe_write_wanted must be false now, so modifying fd vars should be safe */
 
  ev_ref (EV_A);
  ev_io_stop (EV_A_ &pipe_w);
@@ -1941,7 +2480,7 @@ loop_fork (EV_P)
 
  #if EV_MULTIPLICITY
 
- struct ev_loop *
+ struct ev_loop * ecb_cold
  ev_loop_new (unsigned int flags)
  {
  EV_P = (struct ev_loop *)ev_malloc (sizeof (struct ev_loop));
@@ -1959,7 +2498,7 @@ ev_loop_new (unsigned int flags)
  #endif /* multiplicity */
 
  #if EV_VERIFY
- static void noinline
+ static void noinline ecb_cold
  verify_watcher (EV_P_ W w)
  {
  assert (("libev: watcher has invalid priority", ABSPRI (w) >= 0 && ABSPRI (w) < NUMPRI));
@@ -1968,7 +2507,7 @@ verify_watcher (EV_P_ W w)
  assert (("libev: pending watcher not on pending queue", pendings [ABSPRI (w)][w->pending - 1].w == w));
  }
 
- static void noinline
+ static void noinline ecb_cold
  verify_heap (EV_P_ ANHE *heap, int N)
  {
  int i;
@@ -1983,7 +2522,7 @@ verify_heap (EV_P_ ANHE *heap, int N)
  }
  }
 
- static void noinline
+ static void noinline ecb_cold
  array_verify (EV_P_ W *ws, int cnt)
  {
  while (cnt--)
@@ -1995,7 +2534,7 @@ array_verify (EV_P_ W *ws, int cnt)
  #endif
 
  #if EV_FEATURE_API
- void
+ void ecb_cold
  ev_verify (EV_P)
  {
  #if EV_VERIFY
@@ -2071,7 +2610,7 @@ ev_verify (EV_P)
  #endif
 
  #if EV_MULTIPLICITY
- struct ev_loop *
+ struct ev_loop * ecb_cold
  #else
  int
  #endif
@@ -2210,12 +2749,28 @@ timers_reify (EV_P)
 
  #if EV_PERIODIC_ENABLE
 
- inline_speed void
+ static void noinline
  periodic_recalc (EV_P_ ev_periodic *w)
  {
- /* TODO: use slow but potentially more correct incremental algo, */
- /* also do not rely on ceil */
- ev_at (w) = w->offset + ceil ((ev_rt_now - w->offset) / w->interval) * w->interval;
+ ev_tstamp interval = w->interval > MIN_INTERVAL ? w->interval : MIN_INTERVAL;
+ ev_tstamp at = w->offset + interval * ev_floor ((ev_rt_now - w->offset) / interval);
+
+ /* the above almost always errs on the low side */
+ while (at <= ev_rt_now)
+ {
+ ev_tstamp nat = at + w->interval;
+
+ /* when resolution fails us, we use ev_rt_now */
+ if (expect_false (nat == at))
+ {
+ at = ev_rt_now;
+ break;
+ }
+
+ at = nat;
+ }
+
+ ev_at (w) = at;
  }
 
  /* make periodics pending */
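In Ruby terms, the rescheduling arithmetic that replaces the old ceil-based line works roughly like this (an illustrative sketch of the C logic above, not code from the package):

```ruby
# Mirrors libev 4.11's MIN_INTERVAL (1/2**13)
MIN_INTERVAL = 0.0001220703125

# Find the first multiple of `interval` past `offset` that lies in the future.
def next_periodic_at(offset, interval, now)
  step = [interval, MIN_INTERVAL].max
  at = offset + step * ((now - offset) / step).floor

  # floor almost always errs on the low side; walk forward until we pass
  # `now`, falling back to `now` itself if float resolution stalls
  while at <= now
    nat = at + interval
    return now if nat == at
    at = nat
  end

  at
end
```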
@@ -2247,20 +2802,6 @@ periodics_reify (EV_P)
  else if (w->interval)
  {
  periodic_recalc (EV_A_ w);
-
- /* if next trigger time is not sufficiently in the future, put it there */
- /* this might happen because of floating point inexactness */
- if (ev_at (w) - ev_rt_now < TIME_EPSILON)
- {
- ev_at (w) += w->interval;
-
- /* if interval is unreasonably low we might still have a time in the past */
- /* so correct this. this will make the periodic very inexact, but the user */
- /* has effectively asked to get triggered more often than possible */
- if (ev_at (w) < ev_rt_now)
- ev_at (w) = ev_rt_now;
- }
-
  ANHE_at_cache (periodics [HEAP0]);
  downheap (periodics, periodiccnt, HEAP0);
  }
@@ -2278,7 +2819,7 @@ periodics_reify (EV_P)
 
  /* simply recalculate all periodics */
  /* TODO: maybe ensure that at least one event happens when jumping forward? */
- static void noinline
+ static void noinline ecb_cold
  periodics_reschedule (EV_P)
  {
  int i;
@@ -2301,7 +2842,7 @@ periodics_reschedule (EV_P)
  #endif
 
  /* adjust all timers by a given offset */
- static void noinline
+ static void noinline ecb_cold
  timers_reschedule (EV_P_ ev_tstamp adjust)
  {
  int i;
@@ -2348,9 +2889,12 @@ time_update (EV_P_ ev_tstamp max_block)
  */
  for (i = 4; --i; )
  {
+ ev_tstamp diff;
  rtmn_diff = ev_rt_now - mn_now;
 
- if (expect_true (fabs (odiff - rtmn_diff) < MIN_TIMEJUMP))
+ diff = odiff - rtmn_diff;
+
+ if (expect_true ((diff < 0. ? -diff : diff) < MIN_TIMEJUMP))
  return; /* all is well */
 
  ev_rt_now = ev_time ();
@@ -2450,20 +2994,25 @@ ev_run (EV_P_ int flags)
  /* update time to cancel out callback processing overhead */
  time_update (EV_A_ 1e100);
 
- if (expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt)))
+ /* from now on, we want a pipe-wake-up */
+ pipe_write_wanted = 1;
+
+ ECB_MEMORY_FENCE; /* make sure pipe_write_wanted is visible before we check for potential skips */
+
+ if (expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped)))
  {
  waittime = MAX_BLOCKTIME;
 
  if (timercnt)
  {
- ev_tstamp to = ANHE_at (timers [HEAP0]) - mn_now + backend_fudge;
+ ev_tstamp to = ANHE_at (timers [HEAP0]) - mn_now;
  if (waittime > to) waittime = to;
  }
 
  #if EV_PERIODIC_ENABLE
  if (periodiccnt)
  {
- ev_tstamp to = ANHE_at (periodics [HEAP0]) - ev_rt_now + backend_fudge;
+ ev_tstamp to = ANHE_at (periodics [HEAP0]) - ev_rt_now;
  if (waittime > to) waittime = to;
  }
  #endif
@@ -2472,13 +3021,18 @@ ev_run (EV_P_ int flags)
  if (expect_false (waittime < timeout_blocktime))
  waittime = timeout_blocktime;
 
+ /* at this point, we NEED to wait, so we have to ensure */
+ /* to pass a minimum nonzero value to the backend */
+ if (expect_false (waittime < backend_mintime))
+ waittime = backend_mintime;
+
  /* extra check because io_blocktime is commonly 0 */
  if (expect_false (io_blocktime))
  {
  sleeptime = io_blocktime - (mn_now - prev_mn_now);
 
- if (sleeptime > waittime - backend_fudge)
- sleeptime = waittime - backend_fudge;
+ if (sleeptime > waittime - backend_mintime)
+ sleeptime = waittime - backend_mintime;
 
  if (expect_true (sleeptime > 0.))
  {
@@ -2495,6 +3049,15 @@ ev_run (EV_P_ int flags)
  backend_poll (EV_A_ waittime);
  assert ((loop_done = EVBREAK_CANCEL, 1)); /* assert for side effect */
 
+ pipe_write_wanted = 0; /* just an optimisation, no fence needed */
+
+ if (pipe_write_skipped)
+ {
+ assert (("libev: pipe_w not active, but pipe not written", ev_is_active (&pipe_w)));
+ ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM);
+ }
+
+
  /* update ev_rt_now, do magic */
  time_update (EV_A_ waittime + sleeptime);
  }
@@ -2755,6 +3318,8 @@ ev_timer_again (EV_P_ ev_timer *w)
  {
  EV_FREQUENT_CHECK;
 
+ clear_pending (EV_A_ (W)w);
+
  if (ev_is_active (w))
  {
  if (w->repeat)
@@ -3157,7 +3722,7 @@ infy_cb (EV_P_ ev_io *w, int revents)
  }
  }
 
- inline_size void
+ inline_size void ecb_cold
  ev_check_2625 (EV_P)
  {
  /* kernels < 2.6.25 are borked
@@ -3792,7 +4357,7 @@ ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, vo
  /*****************************************************************************/
 
  #if EV_WALK_ENABLE
- void
+ void ecb_cold
  ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w))
  {
  int i, j;
@@ -3846,7 +4411,7 @@ ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w))
 
  #if EV_IDLE_ENABLE
  if (types & EV_IDLE)
- for (j = NUMPRI; i--; )
+ for (j = NUMPRI; j--; )
  for (i = idlecnt [j]; i--; )
  cb (EV_A_ EV_IDLE, idles [j][i]);
  #endif
@@ -3909,5 +4474,3 @@ ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w))
  #include "ev_wrap.h"
  #endif
 
- EV_CPP(})
-