rev 0.2.4 → 0.3.0

data/CHANGES CHANGED
@@ -1,3 +1,23 @@
+ 0.3.0:
+
+ * Add Rev::StatWatcher to monitor filesystem changes
+
+ * Add Rev::Listener#fileno for accessing the underlying file descriptor
+
+ * Support for creating Rev::Listeners from existing TCPServers/UNIXServers
+
+ * Upgrade to libev 3.8
+
+ * Simplified code loading
+
+ * Pull in iobuffer gem and change outstanding uses of Rev::Buffer to IO::Buffer
+
+ * Fix memory leaks resulting from strange semantics of Ruby's xrealloc
+
+ * Rev::UNIXServer: use path instead of the first argument
+
+ * Rev::Server-based classes can build off ::*Server objects
+
  0.2.4:

  * Ugh, botched my first release from the git repo. Oh well. Try, try again.
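The Rev::StatWatcher entry above reports filesystem changes through callbacks. As a conceptual illustration only, the stdlib-only sketch below detects a file change by comparing successive File.stat snapshots; the real watcher is callback-driven through libev's stat machinery rather than explicit stat calls, and nothing here uses the rev gem's API.

```ruby
require 'tempfile'

# Conceptual stand-in for what Rev::StatWatcher observes: the same path,
# stat'ed before and after a modification. (Illustration only; the real
# class delivers this as an asynchronous callback via libev.)
file = Tempfile.new('watched')
size_before = File.stat(file.path).size

file.write('some new content')
file.flush
size_after = File.stat(file.path).size

changed = size_before != size_after   # true: the file grew
file.close!
```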
@@ -1,4 +1,4 @@
- = Rev
+ h1. Rev

  Rev is an event library for Ruby, built on the libev event library which
  provides a cross-platform interface to high performance system calls. This
@@ -11,8 +11,10 @@ applications.

  You can include Rev in your programs with:

- require 'rubygems'
- require 'rev'
+ <pre>
+ require 'rubygems'
+ require 'rev'
+ </pre>

  For more information, consult the RubyForge page:

@@ -28,7 +30,7 @@ The latest development code is available via github at:

  git://github.com/tarcieri/rev.git

- == Anatomy
+ h2. Anatomy

  Rev builds on two core classes which bind to the libev API:

@@ -39,7 +41,7 @@ Rev builds on two core classes which bind to the libev API:
  an event observer to a loop and start running it, you will begin receiving
  callbacks to particular methods when events occur.

- == Watchers
+ h2. Watchers

  There are presently two types of watchers:

@@ -49,7 +51,7 @@ There are presently two types of watchers:
  * Rev::TimerWatcher - This class waits for a specified duration then fires
  an event. You can also configure it to fire an event at specified intervals.

- == Using Watchers
+ h2. Using Watchers

  Watchers have five important methods:

@@ -70,7 +72,7 @@ Watchers have five important methods:
  * evloop - This returns the Rev::Loop object which the watcher is currently
  bound to.

- == Asynchronous Wrappers
+ h2. Asynchronous Wrappers

  Several classes which provide asynchronous event-driven wrappers for Ruby's
  core socket classes are also provided. Among these are:
@@ -90,33 +92,35 @@ core socket classes are also provided. Among these are:
  * Rev::HttpClient - An HTTP/1.1 client with support for chunked encoding
  and streaming response processing through asynchronous callbacks.

- == Example Program
+ h2. Example Program

  Below is an example of how to write an echo server:

- require 'rev'
- HOST = 'localhost'
- PORT = 4321
+ <pre>
+ require 'rev'
+ HOST = 'localhost'
+ PORT = 4321

- class EchoServerConnection < Rev::TCPSocket
- def on_connect
- puts "#{remote_addr}:#{remote_port} connected"
- end
+ class EchoServerConnection < Rev::TCPSocket
+ def on_connect
+ puts "#{remote_addr}:#{remote_port} connected"
+ end

- def on_close
- puts "#{remote_addr}:#{remote_port} disconnected"
- end
+ def on_close
+ puts "#{remote_addr}:#{remote_port} disconnected"
+ end

- def on_read(data)
- write data
- end
+ def on_read(data)
+ write data
  end
+ end

- server = Rev::TCPServer.new(HOST, PORT, EchoServerConnection)
- server.attach(Rev::Loop.default)
+ server = Rev::TCPServer.new(HOST, PORT, EchoServerConnection)
+ server.attach(Rev::Loop.default)

- puts "Echo server listening on #{HOST}:#{PORT}"
- Rev::Loop.default.run
+ puts "Echo server listening on #{HOST}:#{PORT}"
+ Rev::Loop.default.run
+ </pre>

  Here a new observer type (EchoServerConnection) is made by subclassing an
  existing one and adding new implementations to existing event handlers.
@@ -126,41 +130,4 @@ Rev::Watcher) is created and attached to the event loop.

  Once this is done, the event loop is started with event_loop.run. This method
  will block until there are no active watchers for the loop or the loop is
- stopped explicitly with event_loop.stop.
-
- == Defining Callbacks at Runtime
-
- It's often tedious to subclass in order to just change one callback. Rev
- gives you the ability to change event callbacks on the fly (provided you
- haven't overridden them in a subclass). This is especially useful for small
- one off programs or just experimenting with the API.
-
- Any callback (methods prefixed with on_*) can be set on the fly by passing it
- a block. (NOTE: Ruby 1.9/1.8.7 only)
-
- Below is an example of using this syntax. It implements an echo server
- identical to the one above:
-
- HOST = '127.0.0.1'
- PORT = 4321
-
- server = Rev::TCPServer.new(HOST, PORT) do |c|
- c.on_connect { puts "#{remote_addr}:#{remote_port} connected" }
- c.on_close { puts "#{remote_addr}:#{remote_port} disconnected" }
- c.on_read { |data| write data }
- end
-
- server.attach(Rev::Loop.default)
-
- puts "Echo server listening on #{HOST}:#{PORT}"
- Rev::Loop.default.run
-
- As you can see, it provides a more concise (albeit slightly slower)
- expression of the same server as above, without the need to subclass.
-
- Rev::TCPServer will automatically yield new connections if a block is
- given. In this case the "c" variable being passed to the block is
- a new instance of Rev::TCPSocket representing the newly created connection.
-
- The above example sets the on_connect, on_close, and on_read callbacks each
- time a new connection is created.
+ stopped explicitly with event_loop.stop.
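The echo protocol in the README example above is easy to exercise end to end. The sketch below stands a stdlib TCPServer thread in for the Rev server, so it runs without the rev gem installed; the OS picks a free port (port 0), and only one line is echoed.

```ruby
require 'socket'

# Stand-in echo server built on the stdlib TCPServer (used here in place
# of Rev::TCPServer so the snippet needs no gems). Port 0 asks the OS
# for any free port.
server = TCPServer.new('127.0.0.1', 0)
port   = server.addr[1]

echo_thread = Thread.new do
  client = server.accept
  client.write(client.readline)  # echo a single line back
  client.close
end

# Client side: one write, one read, same bytes back.
sock = TCPSocket.new('127.0.0.1', port)
sock.write("hello\n")
reply = sock.readline
sock.close
echo_thread.join
server.close
```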
data/Rakefile CHANGED
@@ -9,15 +9,15 @@ include FileUtils
  load 'rev.gemspec'

  # Default Rake task is compile
- task :default => :compile
+ task :default => %w(compile spec)

  # RDoc
  Rake::RDocTask.new(:rdoc) do |task|
  task.rdoc_dir = 'doc'
  task.title = 'Rev'
- task.options = %w(--title Revactor --main README --line-numbers)
+ task.options = %w(--title Revactor --main README.textile --line-numbers)
  task.rdoc_files.include(['ext/rev/*.c', 'lib/**/*.rb'])
- task.rdoc_files.include(['README', 'LICENSE'])
+ task.rdoc_files.include(['README.textile', 'LICENSE'])
  end

  # Rebuild parser Ragel
@@ -1,5 +1,68 @@
  Revision history for libev, a high-performance and full-featured event loop.

+ 3.8 Sun Aug  9 14:30:45 CEST 2009
+ - incompatible change: do not necessarily reset signal handler
+ to SIG_DFL when a sighandler is stopped.
+ - ev_default_destroy did not properly free or zero some members,
+ potentially causing crashes and memory corruption on repeated
+ ev_default_destroy/ev_default_loop calls.
+ - take advantage of signalfd on GNU/Linux systems.
+ - document that the signal mask might be in an unspecified
+ state when using libev's signal handling.
+ - take advantage of some GNU/Linux calls to set cloexec/nonblock
+ on fd creation, to avoid race conditions.
+
+ 3.7 Fri Jul 17 16:36:32 CEST 2009
+ - ev_unloop and ev_loop wrongly used a global variable to exit loops,
+ instead of using a per-loop variable (bug caught by accident...).
+ - the ev_set_io_collect_interval interpretation has changed.
+ - add new functionality: ev_set_userdata, ev_userdata,
+ ev_set_invoke_pending_cb, ev_set_loop_release_cb,
+ ev_invoke_pending, ev_pending_count, together with a long example
+ about thread locking.
+ - add ev_timer_remaining (as requested by Denis F. Latypoff).
+ - add ev_loop_depth.
+ - calling ev_unloop in fork/prepare watchers will no longer poll
+ for new events.
+ - Denis F. Latypoff corrected many typos in example code snippets.
+ - honor autoconf detection of EV_USE_CLOCK_SYSCALL, also double-
+ check that the syscall number is available before trying to
+ use it (reported by ry@tinyclouds).
+ - use GetSystemTimeAsFileTime instead of _timeb on windows, for
+ slightly higher accuracy.
+ - properly declare ev_loop_verify and ev_now_update even when
+ !EV_MULTIPLICITY.
+ - do not compile in any priority code when EV_MAXPRI == EV_MINPRI.
+ - support EV_MINIMAL==2 for a reduced API.
+ - actually 0-initialise struct sigaction when installing signals.
+ - add section on hibernate and stopped processes to ev_timer docs.
+
+ 3.6 Tue Apr 28 02:49:30 CEST 2009
+ - multiple timers becoming ready within an event loop iteration
+ will be invoked in the "correct" order now.
+ - do not leave the event loop early just because we have no active
+ watchers, fixing a problem when embedding a kqueue loop
+ that has active kernel events but no registered watchers
+ (reported by blacksand blacksand).
+ - correctly zero the idx values for arrays, so destroying and
+ reinitialising the default loop actually works (patch by
+ Malek Hadj-Ali).
+ - implement ev_suspend and ev_resume.
+ - new EV_CUSTOM revents flag for use by applications.
+ - add documentation section about priorities.
+ - add a glossary to the documentation.
+ - extend the ev_fork description slightly.
+ - optimize a jump out of call_pending.
+
+ 3.53 Sun Feb 15 02:38:20 CET 2009
+ - fix a bug in event pipe creation on win32 that would cause a
+ failed assertion on event loop creation (patch by Malek Hadj-Ali).
+ - probe for CLOCK_REALTIME support at runtime as well and fall
+ back to gettimeofday if there is an error, to support older
+ operating systems with newer header files/libraries.
+ - prefer gettimeofday over clock_gettime with USE_CLOCK_SYSCALL
+ (default most everywhere), otherwise not.
+
  3.52 Wed Jan  7 21:43:02 CET 2009
  - fix compilation of select backend in fd_set mode when NFDBITS is
  missing (to get it to compile on QNX, reported by Rodrigo Campos).
@@ -13,7 +76,7 @@ Revision history for libev, a high-performance and full-featured event loop.
  attacks harder (but not impossible - it's windows). Make sure
  it even works under vista, which thinks that getpeer/sockname
  should return fantasy port numbers.
- - include "libev" all assertion messages for potentially
+ - include "libev" in all assertion messages for potentially
  clearer diagnostics.
  - event_get_version (libevent compatibility) returned
  a useless string instead of the expected version string
@@ -1,4 +1,4 @@
- All files in libev are Copyright (C)2007,2008 Marc Alexander Lehmann.
+ All files in libev are Copyright (C)2007,2008,2009 Marc Alexander Lehmann.

  Redistribution and use in source and binary forms, with or without
  modification, are permitted provided that the following conditions are
@@ -59,6 +59,8 @@ extern "C" {
  # define EV_USE_MONOTONIC 1
  # endif
  # endif
+ # elif !defined(EV_USE_CLOCK_SYSCALL)
+ # define EV_USE_CLOCK_SYSCALL 0
  # endif

  # if HAVE_CLOCK_GETTIME
@@ -66,7 +68,7 @@ extern "C" {
  # define EV_USE_MONOTONIC 1
  # endif
  # ifndef EV_USE_REALTIME
- # define EV_USE_REALTIME 1
+ # define EV_USE_REALTIME 0
  # endif
  # else
  # ifndef EV_USE_MONOTONIC
@@ -133,6 +135,14 @@ extern "C" {
  # endif
  # endif

+ # ifndef EV_USE_SIGNALFD
+ # if HAVE_SIGNALFD && HAVE_SYS_SIGNALFD_H
+ # define EV_USE_SIGNALFD 1
+ # else
+ # define EV_USE_SIGNALFD 0
+ # endif
+ # endif
+
  # ifndef EV_USE_EVENTFD
  # if HAVE_EVENTFD
  # define EV_USE_EVENTFD 1
@@ -178,6 +188,33 @@ extern "C" {

  /* this block tries to deduce configuration from header-defined symbols and defaults */

+ /* try to deduce the maximum number of signals on this platform */
+ #if defined (EV_NSIG)
+ /* use what's provided */
+ #elif defined (NSIG)
+ # define EV_NSIG (NSIG)
+ #elif defined(_NSIG)
+ # define EV_NSIG (_NSIG)
+ #elif defined (SIGMAX)
+ # define EV_NSIG (SIGMAX+1)
+ #elif defined (SIG_MAX)
+ # define EV_NSIG (SIG_MAX+1)
+ #elif defined (_SIG_MAX)
+ # define EV_NSIG (_SIG_MAX+1)
+ #elif defined (MAXSIG)
+ # define EV_NSIG (MAXSIG+1)
+ #elif defined (MAX_SIG)
+ # define EV_NSIG (MAX_SIG+1)
+ #elif defined (SIGARRAYSIZE)
+ # define EV_NSIG SIGARRAYSIZE /* Assume ary[SIGARRAYSIZE] */
+ #elif defined (_sys_nsig)
+ # define EV_NSIG (_sys_nsig) /* Solaris 2.5 */
+ #else
+ # error "unable to find value for NSIG, please report"
+ /* to make it compile regardless, just remove the above line */
+ # define EV_NSIG 65
+ #endif
+
  #ifndef EV_USE_CLOCK_SYSCALL
  # if __linux && __GLIBC__ >= 2
  # define EV_USE_CLOCK_SYSCALL 1
@@ -195,7 +232,7 @@ extern "C" {
  #endif

  #ifndef EV_USE_REALTIME
- # define EV_USE_REALTIME 0
+ # define EV_USE_REALTIME !EV_USE_CLOCK_SYSCALL
  #endif

  #ifndef EV_USE_NANOSLEEP
@@ -266,6 +303,14 @@ extern "C" {
  # endif
  #endif

+ #ifndef EV_USE_SIGNALFD
+ # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 9))
+ # define EV_USE_SIGNALFD 1
+ # else
+ # define EV_USE_SIGNALFD 0
+ # endif
+ #endif
+
  #if 0 /* debugging */
  # define EV_VERIFY 3
  # define EV_USE_4HEAP 1
@@ -284,6 +329,20 @@ extern "C" {
  # define EV_HEAP_CACHE_AT !EV_MINIMAL
  #endif

+ /* on linux, we can use a (slow) syscall to avoid a dependency on pthread, */
+ /* which makes programs even slower. might work on other unices, too. */
+ #if EV_USE_CLOCK_SYSCALL
+ # include <syscall.h>
+ # ifdef SYS_clock_gettime
+ # define clock_gettime(id, ts) syscall (SYS_clock_gettime, (id), (ts))
+ # undef EV_USE_MONOTONIC
+ # define EV_USE_MONOTONIC 1
+ # else
+ # undef EV_USE_CLOCK_SYSCALL
+ # define EV_USE_CLOCK_SYSCALL 0
+ # endif
+ #endif
+
  /* this block fixes any misconfiguration where we know we run into trouble otherwise */

  #ifndef CLOCK_MONOTONIC
@@ -322,18 +381,19 @@ extern "C" {
  # include <winsock.h>
  #endif

- /* on linux, we can use a (slow) syscall to avoid a dependency on pthread, */
- /* which makes programs even slower. might work on other unices, too. */
- #if EV_USE_CLOCK_SYSCALL
- # include <syscall.h>
- # define clock_gettime(id, ts) syscall (SYS_clock_gettime, (id), (ts))
- # undef EV_USE_MONOTONIC
- # define EV_USE_MONOTONIC 1
- #endif
-
  #if EV_USE_EVENTFD
  /* our minimum requirement is glibc 2.7 which has the stub, but not the header */
  # include <stdint.h>
+ # ifndef EFD_NONBLOCK
+ # define EFD_NONBLOCK O_NONBLOCK
+ # endif
+ # ifndef EFD_CLOEXEC
+ # ifdef O_CLOEXEC
+ # define EFD_CLOEXEC O_CLOEXEC
+ # else
+ # define EFD_CLOEXEC 02000000
+ # endif
+ # endif
  # ifdef __cplusplus
  extern "C" {
  # endif
@@ -343,6 +403,10 @@ int eventfd (unsigned int initval, int flags);
  # endif
  #endif

+ #if EV_USE_SIGNALFD
+ # include <sys/signalfd.h>
+ #endif
+
  /**/

  #if EV_VERIFY >= 3
@@ -386,8 +450,13 @@ int eventfd (unsigned int initval, int flags);
  # define inline_speed static inline
  #endif

- #define NUMPRI (EV_MAXPRI - EV_MINPRI + 1)
- #define ABSPRI(w) (((W)w)->priority - EV_MINPRI)
+ #define NUMPRI (EV_MAXPRI - EV_MINPRI + 1)
+
+ #if EV_MINPRI == EV_MAXPRI
+ # define ABSPRI(w) (((W)w), 0)
+ #else
+ # define ABSPRI(w) (((W)w)->priority - EV_MINPRI)
+ #endif

  #define EMPTY /* required for microsofts broken pseudo-c compiler */
  #define EMPTY2(a,b) /* used to suppress some warnings */
@@ -399,9 +468,13 @@ typedef ev_watcher_time *WT;
  #define ev_active(w) ((W)(w))->active
  #define ev_at(w) ((WT)(w))->at

- #if EV_USE_MONOTONIC
+ #if EV_USE_REALTIME
  /* sig_atomic_t is used to avoid per-thread variables or locking but still */
  /* giving it a reasonably high chance of working on typical architectures */
+ static EV_ATOMIC_T have_realtime; /* did clock_gettime (CLOCK_REALTIME) work? */
+ #endif
+
+ #if EV_USE_MONOTONIC
  static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work? */
  #endif

@@ -476,25 +549,30 @@ ev_realloc (void *ptr, long size)

  /*****************************************************************************/

+ /* set in reify when reification needed */
+ #define EV_ANFD_REIFY 1
+
+ /* file descriptor info structure */
  typedef struct
  {
  WL head;
- unsigned char events;
- unsigned char reify;
- unsigned char emask; /* the epoll backend stores the actual kernel mask in here */
+ unsigned char events; /* the events watched for */
+ unsigned char reify; /* flag set when this ANFD needs reification (EV_ANFD_REIFY, EV__IOFDSET) */
+ unsigned char emask; /* the epoll backend stores the actual kernel mask in here */
  unsigned char unused;
  #if EV_USE_EPOLL
- unsigned int egen; /* generation counter to counter epoll bugs */
+ unsigned int egen; /* generation counter to counter epoll bugs */
  #endif
  #if EV_SELECT_IS_WINSOCKET
  SOCKET handle;
  #endif
  } ANFD;

+ /* stores the pending event set for a given watcher */
  typedef struct
  {
  W w;
- int events;
+ int events; /* the pending event set for the given watcher */
  } ANPENDING;

  #if EV_USE_INOTIFY
@@ -507,6 +585,7 @@ typedef struct

  /* Heap Entry */
  #if EV_HEAP_CACHE_AT
+ /* a heap element */
  typedef struct {
  ev_tstamp at;
  WT w;
@@ -516,6 +595,7 @@ typedef struct
  #define ANHE_at(he) (he).at /* access cached at, read-only */
  #define ANHE_at_cache(he) (he).at = (he).w->at /* update at from watcher */
  #else
+ /* a heap element */
  typedef WT ANHE;

  #define ANHE_w(he) (he)
@@ -549,23 +629,40 @@ typedef struct

  #endif

+ #if EV_MINIMAL < 2
+ # define EV_RELEASE_CB if (expect_false (release_cb)) release_cb (EV_A)
+ # define EV_ACQUIRE_CB if (expect_false (acquire_cb)) acquire_cb (EV_A)
+ # define EV_INVOKE_PENDING invoke_cb (EV_A)
+ #else
+ # define EV_RELEASE_CB (void)0
+ # define EV_ACQUIRE_CB (void)0
+ # define EV_INVOKE_PENDING ev_invoke_pending (EV_A)
+ #endif
+
+ #define EVUNLOOP_RECURSE 0x80
+
  /*****************************************************************************/

+ #ifndef EV_HAVE_EV_TIME
  ev_tstamp
  ev_time (void)
  {
  #if EV_USE_REALTIME
- struct timespec ts;
- clock_gettime (CLOCK_REALTIME, &ts);
- return ts.tv_sec + ts.tv_nsec * 1e-9;
- #else
+ if (expect_true (have_realtime))
+ {
+ struct timespec ts;
+ clock_gettime (CLOCK_REALTIME, &ts);
+ return ts.tv_sec + ts.tv_nsec * 1e-9;
+ }
+ #endif
+
  struct timeval tv;
  gettimeofday (&tv, 0);
  return tv.tv_sec + tv.tv_usec * 1e-6;
- #endif
  }
+ #endif

- ev_tstamp inline_size
+ inline_size ev_tstamp
  get_clock (void)
  {
  #if EV_USE_MONOTONIC
@@ -609,7 +706,7 @@ ev_sleep (ev_tstamp delay)
  tv.tv_usec = (long)((delay - (ev_tstamp)(tv.tv_sec)) * 1e6);

  /* here we rely on sys/time.h + sys/types.h + unistd.h providing select */
- /* somehting nto guaranteed by newer posix versions, but guaranteed */
+ /* something not guaranteed by newer posix versions, but guaranteed */
  /* by older ones */
  select (0, 0, 0, 0, &tv);
  #endif
@@ -620,7 +717,9 @@ ev_sleep (ev_tstamp delay)

  #define MALLOC_ROUND 4096 /* prefer to allocate in chunks of this size, must be 2**n and >> 4 longs */

- int inline_size
+ /* find a suitable new size for the given array, */
+ /* hopefully by rounding to a nice-to-malloc size */
+ inline_size int
  array_nextsize (int elem, int cur, int cnt)
  {
  int ncur = cur + 1;
@@ -671,10 +770,16 @@ array_realloc (int elem, void *base, int *cur, int cnt)
  #endif

  #define array_free(stem, idx) \
- ev_free (stem ## s idx); stem ## cnt idx = stem ## max idx = 0;
+ ev_free (stem ## s idx); stem ## cnt idx = stem ## max idx = 0; stem ## s idx = 0

  /*****************************************************************************/

+ /* dummy callback for pending events */
+ static void noinline
+ pendingcb (EV_P_ ev_prepare *w, int revents)
+ {
+ }
+
  void noinline
  ev_feed_event (EV_P_ void *w, int revents)
  {
@@ -692,7 +797,22 @@ ev_feed_event (EV_P_ void *w, int revents)
  }
  }

- void inline_speed
+ inline_speed void
+ feed_reverse (EV_P_ W w)
+ {
+ array_needsize (W, rfeeds, rfeedmax, rfeedcnt + 1, EMPTY2);
+ rfeeds [rfeedcnt++] = w;
+ }
+
+ inline_size void
+ feed_reverse_done (EV_P_ int revents)
+ {
+ do
+ ev_feed_event (EV_A_ rfeeds [--rfeedcnt], revents);
+ while (rfeedcnt);
+ }
+
+ inline_speed void
  queue_events (EV_P_ W *events, int eventcnt, int type)
  {
  int i;
@@ -703,8 +823,8 @@ queue_events (EV_P_ W *events, int eventcnt, int type)

  /*****************************************************************************/

- void inline_speed
- fd_event (EV_P_ int fd, int revents)
+ inline_speed void
+ fd_event_nc (EV_P_ int fd, int revents)
  {
  ANFD *anfd = anfds + fd;
  ev_io *w;
@@ -718,14 +838,27 @@ fd_event (EV_P_ int fd, int revents)
  }
  }

+ /* do not submit kernel events for fds that have reify set */
+ /* because that means they changed while we were polling for new events */
+ inline_speed void
+ fd_event (EV_P_ int fd, int revents)
+ {
+ ANFD *anfd = anfds + fd;
+
+ if (expect_true (!anfd->reify))
+ fd_event_nc (EV_A_ fd, revents);
+ }
+
  void
  ev_feed_fd_event (EV_P_ int fd, int revents)
  {
  if (fd >= 0 && fd < anfdmax)
- fd_event (EV_A_ fd, revents);
+ fd_event_nc (EV_A_ fd, revents);
  }

- void inline_size
+ /* make sure the external fd watch events are in-sync */
+ /* with the kernel/libev internal state */
+ inline_size void
  fd_reify (EV_P)
  {
  int i;
@@ -761,7 +894,7 @@ fd_reify (EV_P)
  anfd->reify = 0;
  anfd->events = events;

- if (o_events != events || o_reify & EV_IOFDSET)
+ if (o_events != events || o_reify & EV__IOFDSET)
  backend_modify (EV_A_ fd, o_events, events);
  }
  }
@@ -769,7 +902,8 @@ fd_reify (EV_P)
  fdchangecnt = 0;
  }

- void inline_size
+ /* something about the given fd changed */
+ inline_size void
  fd_change (EV_P_ int fd, int flags)
  {
  unsigned char reify = anfds [fd].reify;
@@ -783,7 +917,8 @@ fd_change (EV_P_ int fd, int flags)
  }
  }

- void inline_speed
+ /* the given fd is invalid/unusable, so make sure it doesn't hurt us anymore */
+ inline_speed void
  fd_kill (EV_P_ int fd)
  {
  ev_io *w;
@@ -795,7 +930,8 @@ fd_kill (EV_P_ int fd)
  }
  }

- int inline_size
+ /* check whether the given fd is actually valid, for error recovery */
+ inline_size int
  fd_valid (int fd)
  {
  #ifdef _WIN32
@@ -827,7 +963,7 @@ fd_enomem (EV_P)
  if (anfds [fd].events)
  {
  fd_kill (EV_A_ fd);
- return;
+ break;
  }
  }

@@ -842,7 +978,7 @@ fd_rearm_all (EV_P)
  {
  anfds [fd].events = 0;
  anfds [fd].emask = 0;
- fd_change (EV_A_ fd, EV_IOFDSET | 1);
+ fd_change (EV_A_ fd, EV__IOFDSET | EV_ANFD_REIFY);
  }
  }

@@ -868,7 +1004,7 @@ fd_rearm_all (EV_P)
  #define UPHEAP_DONE(p,k) ((p) == (k))

  /* away from the root */
- void inline_speed
+ inline_speed void
  downheap (ANHE *heap, int N, int k)
  {
  ANHE he = heap [k];
@@ -918,7 +1054,7 @@ downheap (ANHE *heap, int N, int k)
  #define UPHEAP_DONE(p,k) (!(p))

  /* away from the root */
- void inline_speed
+ inline_speed void
  downheap (ANHE *heap, int N, int k)
  {
  ANHE he = heap [k];
@@ -927,7 +1063,7 @@ downheap (ANHE *heap, int N, int k)
  {
  int c = k << 1;

- if (c > N + HEAP0 - 1)
+ if (c >= N + HEAP0)
  break;

  c += c + 1 < N + HEAP0 && ANHE_at (heap [c]) > ANHE_at (heap [c + 1])
@@ -948,7 +1084,7 @@ downheap (ANHE *heap, int N, int k)
  #endif

  /* towards the root */
- void inline_speed
+ inline_speed void
  upheap (ANHE *heap, int k)
  {
  ANHE he = heap [k];
@@ -969,17 +1105,18 @@ upheap (ANHE *heap, int k)
  ev_active (ANHE_w (he)) = k;
  }

- void inline_size
+ /* move an element suitably so it is in a correct place */
+ inline_size void
  adjustheap (ANHE *heap, int N, int k)
  {
- if (k > HEAP0 && ANHE_at (heap [HPARENT (k)]) >= ANHE_at (heap [k]))
+ if (k > HEAP0 && ANHE_at (heap [k]) <= ANHE_at (heap [HPARENT (k)]))
  upheap (heap, k);
  else
  downheap (heap, N, k);
  }

  /* rebuild the heap: this function is used only once and executed rarely */
- void inline_size
+ inline_size void
  reheap (ANHE *heap, int N)
  {
  int i;
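The downheap boundary fix above (`c > N + HEAP0 - 1` becoming `c >= N + HEAP0`) is the usual "child index past the end of the heap" check. As a simplified illustration, here is the same sift-down logic in Ruby on a plain 0-based array (not libev's HEAP0-offset or 4-heap layouts):

```ruby
# Sift-down on a 0-based binary min-heap: stop as soon as the left
# child index falls outside the array (the c >= N boundary check).
def downheap(heap, k)
  n = heap.size
  loop do
    c = 2 * k + 1                                  # left child
    break if c >= n                                # no children: done
    c += 1 if c + 1 < n && heap[c + 1] < heap[c]   # pick the smaller child
    break if heap[k] <= heap[c]                    # heap property holds
    heap[k], heap[c] = heap[c], heap[k]
    k = c
  end
  heap
end

downheap([9, 3, 5, 7, 4], 0)  # => [3, 4, 5, 7, 9]
```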
@@ -992,20 +1129,23 @@ reheap (ANHE *heap, int N)

  /*****************************************************************************/

+ /* associate signal watchers to a signal */
  typedef struct
  {
+ EV_ATOMIC_T pending;
+ #if EV_MULTIPLICITY
+ EV_P;
+ #endif
  WL head;
- EV_ATOMIC_T gotsig;
  } ANSIG;

- static ANSIG *signals;
- static int signalmax;
-
- static EV_ATOMIC_T gotsig;
+ static ANSIG signals [EV_NSIG - 1];

  /*****************************************************************************/

- void inline_speed
+ /* used to prepare libev internal fd's */
+ /* this is not fork-safe */
+ inline_speed void
  fd_intern (int fd)
  {
  #ifdef _WIN32
@@ -1020,14 +1160,18 @@ fd_intern (int fd)
  static void noinline
  evpipe_init (EV_P)
  {
- if (!ev_is_active (&pipeev))
+ if (!ev_is_active (&pipe_w))
  {
  #if EV_USE_EVENTFD
- if ((evfd = eventfd (0, 0)) >= 0)
+ evfd = eventfd (0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (evfd < 0 && errno == EINVAL)
+ evfd = eventfd (0, 0);
+
+ if (evfd >= 0)
  {
  evpipe [0] = -1;
- fd_intern (evfd);
- ev_io_set (&pipeev, evfd, EV_READ);
+ fd_intern (evfd); /* doing it twice doesn't hurt */
+ ev_io_set (&pipe_w, evfd, EV_READ);
  }
  else
  #endif
@@ -1037,15 +1181,15 @@ evpipe_init (EV_P)

  fd_intern (evpipe [0]);
  fd_intern (evpipe [1]);
- ev_io_set (&pipeev, evpipe [0], EV_READ);
+ ev_io_set (&pipe_w, evpipe [0], EV_READ);
  }

- ev_io_start (EV_A_ &pipeev);
+ ev_io_start (EV_A_ &pipe_w);
  ev_unref (EV_A); /* watcher should not keep loop alive */
  }
  }

- void inline_size
+ inline_size void
  evpipe_write (EV_P_ EV_ATOMIC_T *flag)
  {
  if (!*flag)
@@ -1068,9 +1212,13 @@ evpipe_write (EV_P_ EV_ATOMIC_T *flag)
  }
  }

+ /* called whenever the libev signal pipe */
+ /* got some events (signal, async) */
  static void
  pipecb (EV_P_ ev_io *iow, int revents)
  {
+ int i;
+
  #if EV_USE_EVENTFD
  if (evfd >= 0)
  {
@@ -1084,21 +1232,19 @@ pipecb (EV_P_ ev_io *iow, int revents)
  read (evpipe [0], &dummy, 1);
  }

- if (gotsig && ev_is_default_loop (EV_A))
+ if (sig_pending)
  {
- int signum;
- gotsig = 0;
+ sig_pending = 0;

- for (signum = signalmax; signum--; )
- if (signals [signum].gotsig)
- ev_feed_signal_event (EV_A_ signum + 1);
+ for (i = EV_NSIG - 1; i--; )
+ if (expect_false (signals [i].pending))
+ ev_feed_signal_event (EV_A_ i + 1);
  }

  #if EV_ASYNC_ENABLE
- if (gotasync)
+ if (async_pending)
  {
- int i;
- gotasync = 0;
+ async_pending = 0;

  for (i = asynccnt; i--; )
  if (asyncs [i]->sent)
@@ -1116,15 +1262,15 @@ static void
  ev_sighandler (int signum)
  {
  #if EV_MULTIPLICITY
- struct ev_loop *loop = &default_loop_struct;
+ EV_P = signals [signum - 1].loop;
  #endif

  #if _WIN32
  signal (signum, ev_sighandler);
  #endif

- signals [signum - 1].gotsig = 1;
- evpipe_write (EV_A_ &gotsig);
+ signals [signum - 1].pending = 1;
+ evpipe_write (EV_A_ &sig_pending);
  }

  void noinline
@@ -1132,21 +1278,45 @@ ev_feed_signal_event (EV_P_ int signum)
1132
1278
  {
1133
1279
  WL w;
1134
1280
 
1135
- #if EV_MULTIPLICITY
1136
- assert (("libev: feeding signal events is only supported in the default loop", loop == ev_default_loop_ptr));
1137
- #endif
1281
+ if (expect_false (signum <= 0 || signum > EV_NSIG))
1282
+ return;
1138
1283
 
1139
1284
  --signum;
1140
1285
 
1141
- if (signum < 0 || signum >= signalmax)
1286
+ #if EV_MULTIPLICITY
1287
+ /* it is permissible to try to feed a signal to the wrong loop */
1288
+ /* or, likely more useful, feeding a signal nobody is waiting for */
1289
+
1290
+ if (expect_false (signals [signum].loop != EV_A))
1142
1291
  return;
1292
+ #endif
1143
1293
 
1144
- signals [signum].gotsig = 0;
1294
+ signals [signum].pending = 0;
1145
1295
 
1146
1296
  for (w = signals [signum].head; w; w = w->next)
1147
1297
  ev_feed_event (EV_A_ (W)w, EV_SIGNAL);
1148
1298
  }
1149
1299
 
1300
+ #if EV_USE_SIGNALFD
1301
+ static void
1302
+ sigfdcb (EV_P_ ev_io *iow, int revents)
1303
+ {
1304
+ struct signalfd_siginfo si[2], *sip; /* these structs are big */
1305
+
1306
+ for (;;)
1307
+ {
1308
+ ssize_t res = read (sigfd, si, sizeof (si));
1309
+
1310
+ /* not ISO-C, as res might be -1, but works with SuS */
1311
+ for (sip = si; (char *)sip < (char *)si + res; ++sip)
1312
+ ev_feed_signal_event (EV_A_ sip->ssi_signo);
1313
+
1314
+ if (res < (ssize_t)sizeof (si))
1315
+ break;
1316
+ }
1317
+ }
1318
+ #endif
1319
+
1150
1320
  /*****************************************************************************/
1151
1321
 
1152
1322
  static WL childs [EV_PID_HASHSIZE];
@@ -1159,7 +1329,8 @@ static ev_signal childev;
 # define WIFCONTINUED(status) 0
 #endif
 
-void inline_speed
+/* handle a single child status event */
+inline_speed void
 child_reap (EV_P_ int chain, int pid, int status)
 {
   ev_child *w;
@@ -1182,6 +1353,7 @@ child_reap (EV_P_ int chain, int pid, int status)
 # define WCONTINUED 0
 #endif
 
+/* called on sigchld etc., calls waitpid */
 static void
 childcb (EV_P_ ev_signal *sw, int revents)
 {
@@ -1298,12 +1470,19 @@ ev_backend (EV_P)
   return backend;
 }
 
+#if EV_MINIMAL < 2
 unsigned int
 ev_loop_count (EV_P)
 {
   return loop_count;
 }
 
+unsigned int
+ev_loop_depth (EV_P)
+{
+  return loop_depth;
+}
+
 void
 ev_set_io_collect_interval (EV_P_ ev_tstamp interval)
 {
@@ -1316,31 +1495,54 @@ ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval)
   timeout_blocktime = interval;
 }
 
+void
+ev_set_userdata (EV_P_ void *data)
+{
+  userdata = data;
+}
+
+void *
+ev_userdata (EV_P)
+{
+  return userdata;
+}
+
+void ev_set_invoke_pending_cb (EV_P_ void (*invoke_pending_cb)(EV_P))
+{
+  invoke_cb = invoke_pending_cb;
+}
+
+void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P), void (*acquire)(EV_P))
+{
+  release_cb = release;
+  acquire_cb = acquire;
+}
+#endif
+
+/* initialise a loop structure, must be zero-initialised */
 static void noinline
 loop_init (EV_P_ unsigned int flags)
 {
   if (!backend)
     {
-#if EV_USE_MONOTONIC
-      {
-        struct timespec ts;
-        if (!clock_gettime (CLOCK_MONOTONIC, &ts))
-          have_monotonic = 1;
-      }
+#if EV_USE_REALTIME
+      if (!have_realtime)
+        {
+          struct timespec ts;
+
+          if (!clock_gettime (CLOCK_REALTIME, &ts))
+            have_realtime = 1;
+        }
 #endif
 
-      ev_rt_now = ev_time ();
-      mn_now = get_clock ();
-      now_floor = mn_now;
-      rtmn_diff = ev_rt_now - mn_now;
+#if EV_USE_MONOTONIC
+      if (!have_monotonic)
+        {
+          struct timespec ts;
 
-      io_blocktime = 0.;
-      timeout_blocktime = 0.;
-      backend = 0;
-      backend_fd = -1;
-      gotasync = 0;
-#if EV_USE_INOTIFY
-      fs_fd = -2;
+          if (!clock_gettime (CLOCK_MONOTONIC, &ts))
+            have_monotonic = 1;
+        }
 #endif
 
       /* pid check not overridable via env */
@@ -1354,6 +1556,29 @@ loop_init (EV_P_ unsigned int flags)
           && getenv ("LIBEV_FLAGS"))
         flags = atoi (getenv ("LIBEV_FLAGS"));
 
+      ev_rt_now = ev_time ();
+      mn_now = get_clock ();
+      now_floor = mn_now;
+      rtmn_diff = ev_rt_now - mn_now;
+#if EV_MINIMAL < 2
+      invoke_cb = ev_invoke_pending;
+#endif
+
+      io_blocktime = 0.;
+      timeout_blocktime = 0.;
+      backend = 0;
+      backend_fd = -1;
+      sig_pending = 0;
+#if EV_ASYNC_ENABLE
+      async_pending = 0;
+#endif
+#if EV_USE_INOTIFY
+      fs_fd = flags & EVFLAG_NOINOTIFY ? -1 : -2;
+#endif
+#if EV_USE_SIGNALFD
+      sigfd = flags & EVFLAG_NOSIGFD ? -1 : -2;
+#endif
+
       if (!(flags & 0x0000ffffU))
         flags |= ev_recommended_backends ();
 
@@ -1373,20 +1598,23 @@ loop_init (EV_P_ unsigned int flags)
       if (!backend && (flags & EVBACKEND_SELECT)) backend = select_init (EV_A_ flags);
 #endif
 
-      ev_init (&pipeev, pipecb);
-      ev_set_priority (&pipeev, EV_MAXPRI);
+      ev_prepare_init (&pending_w, pendingcb);
+
+      ev_init (&pipe_w, pipecb);
+      ev_set_priority (&pipe_w, EV_MAXPRI);
     }
 }
 
+/* free up a loop structure */
 static void noinline
 loop_destroy (EV_P)
 {
   int i;
 
-  if (ev_is_active (&pipeev))
+  if (ev_is_active (&pipe_w))
     {
-      ev_ref (EV_A); /* signal watcher */
-      ev_io_stop (EV_A_ &pipeev);
+      /*ev_ref (EV_A);*/
+      /*ev_io_stop (EV_A_ &pipe_w);*/
 
 #if EV_USE_EVENTFD
       if (evfd >= 0)
@@ -1400,6 +1628,16 @@ loop_destroy (EV_P)
         }
     }
 
+#if EV_USE_SIGNALFD
+  if (ev_is_active (&sigfd_w))
+    {
+      /*ev_ref (EV_A);*/
+      /*ev_io_stop (EV_A_ &sigfd_w);*/
+
+      close (sigfd);
+    }
+#endif
+
 #if EV_USE_INOTIFY
   if (fs_fd >= 0)
     close (fs_fd);
@@ -1432,9 +1670,10 @@ loop_destroy (EV_P)
 #endif
     }
 
-  ev_free (anfds); anfdmax = 0;
+  ev_free (anfds); anfds = 0; anfdmax = 0;
 
   /* have to use the microsoft-never-gets-it-right macro */
+  array_free (rfeed, EMPTY);
   array_free (fdchange, EMPTY);
   array_free (timer, EMPTY);
 #if EV_PERIODIC_ENABLE
@@ -1453,10 +1692,10 @@ loop_destroy (EV_P)
 }
 
 #if EV_USE_INOTIFY
-void inline_size infy_fork (EV_P);
+inline_size void infy_fork (EV_P);
 #endif
 
-void inline_size
+inline_size void
 loop_fork (EV_P)
 {
 #if EV_USE_PORT
@@ -1472,17 +1711,17 @@ loop_fork (EV_P)
   infy_fork (EV_A);
 #endif
 
-  if (ev_is_active (&pipeev))
+  if (ev_is_active (&pipe_w))
     {
       /* this "locks" the handlers against writing to the pipe */
       /* while we modify the fd vars */
-      gotsig = 1;
+      sig_pending = 1;
 #if EV_ASYNC_ENABLE
-      gotasync = 1;
+      async_pending = 1;
 #endif
 
       ev_ref (EV_A);
-      ev_io_stop (EV_A_ &pipeev);
+      ev_io_stop (EV_A_ &pipe_w);
 
 #if EV_USE_EVENTFD
       if (evfd >= 0)
@@ -1497,7 +1736,7 @@ loop_fork (EV_P)
 
       evpipe_init (EV_A);
       /* now iterate over everything, in case we missed something */
-      pipecb (EV_A_ &pipeev, EV_READ);
+      pipecb (EV_A_ &pipe_w, EV_READ);
     }
 
   postfork = 0;
@@ -1508,14 +1747,13 @@ loop_fork (EV_P)
 struct ev_loop *
 ev_loop_new (unsigned int flags)
 {
-  struct ev_loop *loop = (struct ev_loop *)ev_malloc (sizeof (struct ev_loop));
-
-  memset (loop, 0, sizeof (struct ev_loop));
+  EV_P = (struct ev_loop *)ev_malloc (sizeof (struct ev_loop));
 
+  memset (EV_A, 0, sizeof (struct ev_loop));
   loop_init (EV_A_ flags);
 
   if (ev_backend (EV_A))
-    return loop;
+    return EV_A;
 
   return 0;
 }
@@ -1532,6 +1770,7 @@ ev_loop_fork (EV_P)
 {
   postfork = 1; /* must be in line with ev_default_fork */
 }
+#endif /* multiplicity */
 
 #if EV_VERIFY
 static void noinline
@@ -1569,6 +1808,7 @@ array_verify (EV_P_ W *ws, int cnt)
 }
 #endif
 
+#if EV_MINIMAL < 2
 void
 ev_loop_verify (EV_P)
 {
@@ -1627,12 +1867,11 @@ ev_loop_verify (EV_P)
 
 # if 0
   for (w = (ev_child *)childs [chain & (EV_PID_HASHSIZE - 1)]; w; w = (ev_child *)((WL)w)->next)
-  for (signum = signalmax; signum--; ) if (signals [signum].gotsig)
+  for (signum = EV_NSIG; signum--; ) if (signals [signum].pending)
 # endif
 #endif
 }
-
-#endif /* multiplicity */
+#endif
 
 #if EV_MULTIPLICITY
 struct ev_loop *
@@ -1645,7 +1884,7 @@ ev_default_loop (unsigned int flags)
   if (!ev_default_loop_ptr)
     {
 #if EV_MULTIPLICITY
-      struct ev_loop *loop = ev_default_loop_ptr = &default_loop_struct;
+      EV_P = ev_default_loop_ptr = &default_loop_struct;
 #else
       ev_default_loop_ptr = 1;
 #endif
@@ -1672,7 +1911,7 @@ void
 ev_default_destroy (void)
 {
 #if EV_MULTIPLICITY
-  struct ev_loop *loop = ev_default_loop_ptr;
+  EV_P = ev_default_loop_ptr;
 #endif
 
   ev_default_loop_ptr = 0;
@@ -1689,7 +1928,7 @@ void
 ev_default_fork (void)
 {
 #if EV_MULTIPLICITY
-  struct ev_loop *loop = ev_default_loop_ptr;
+  EV_P = ev_default_loop_ptr;
 #endif
 
   postfork = 1; /* must be in line with ev_loop_fork */
@@ -1703,8 +1942,20 @@ ev_invoke (EV_P_ void *w, int revents)
   EV_CB_INVOKE ((W)w, revents);
 }
 
-void inline_speed
-call_pending (EV_P)
+unsigned int
+ev_pending_count (EV_P)
+{
+  int pri;
+  unsigned int count = 0;
+
+  for (pri = NUMPRI; pri--; )
+    count += pendingcnt [pri];
+
+  return count;
+}
+
+void noinline
+ev_invoke_pending (EV_P)
 {
   int pri;
 
@@ -1713,19 +1964,19 @@ call_pending (EV_P)
       {
         ANPENDING *p = pendings [pri] + --pendingcnt [pri];
 
-        if (expect_true (p->w))
-          {
-            /*assert (("libev: non-pending watcher on pending list", p->w->pending));*/
+        /*assert (("libev: non-pending watcher on pending list", p->w->pending));*/
+        /* ^ this is no longer true, as pending_w could be here */
 
-            p->w->pending = 0;
-            EV_CB_INVOKE (p->w, p->events);
-            EV_FREQUENT_CHECK;
-          }
+        p->w->pending = 0;
+        EV_CB_INVOKE (p->w, p->events);
+        EV_FREQUENT_CHECK;
       }
 }
 
 #if EV_IDLE_ENABLE
-void inline_size
+/* make idle watchers pending. this handles the "call-idle */
+/* only when higher priorities are idle" logic */
+inline_size void
 idle_reify (EV_P)
 {
   if (expect_false (idleall))
@@ -1747,86 +1998,104 @@ idle_reify (EV_P)
 }
 #endif
 
-void inline_size
+/* make timers pending */
+inline_size void
 timers_reify (EV_P)
 {
   EV_FREQUENT_CHECK;
 
-  while (timercnt && ANHE_at (timers [HEAP0]) < mn_now)
+  if (timercnt && ANHE_at (timers [HEAP0]) < mn_now)
     {
-      ev_timer *w = (ev_timer *)ANHE_w (timers [HEAP0]);
+      do
+        {
+          ev_timer *w = (ev_timer *)ANHE_w (timers [HEAP0]);
 
-      /*assert (("libev: inactive timer on timer heap detected", ev_is_active (w)));*/
+          /*assert (("libev: inactive timer on timer heap detected", ev_is_active (w)));*/
 
-      /* first reschedule or stop timer */
-      if (w->repeat)
-        {
-          ev_at (w) += w->repeat;
-          if (ev_at (w) < mn_now)
-            ev_at (w) = mn_now;
+          /* first reschedule or stop timer */
+          if (w->repeat)
+            {
+              ev_at (w) += w->repeat;
+              if (ev_at (w) < mn_now)
+                ev_at (w) = mn_now;
+
+              assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > 0.));
 
-          assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > 0.));
+              ANHE_at_cache (timers [HEAP0]);
+              downheap (timers, timercnt, HEAP0);
+            }
+          else
+            ev_timer_stop (EV_A_ w); /* nonrepeating: stop timer */
 
-          ANHE_at_cache (timers [HEAP0]);
-          downheap (timers, timercnt, HEAP0);
+          EV_FREQUENT_CHECK;
+          feed_reverse (EV_A_ (W)w);
         }
-      else
-        ev_timer_stop (EV_A_ w); /* nonrepeating: stop timer */
+      while (timercnt && ANHE_at (timers [HEAP0]) < mn_now);
 
-      EV_FREQUENT_CHECK;
-      ev_feed_event (EV_A_ (W)w, EV_TIMEOUT);
+      feed_reverse_done (EV_A_ EV_TIMEOUT);
     }
 }
 
 #if EV_PERIODIC_ENABLE
-void inline_size
+/* make periodics pending */
+inline_size void
 periodics_reify (EV_P)
 {
   EV_FREQUENT_CHECK;
 
   while (periodiccnt && ANHE_at (periodics [HEAP0]) < ev_rt_now)
     {
-      ev_periodic *w = (ev_periodic *)ANHE_w (periodics [HEAP0]);
-
-      /*assert (("libev: inactive timer on periodic heap detected", ev_is_active (w)));*/
+      int feed_count = 0;
 
-      /* first reschedule or stop timer */
-      if (w->reschedule_cb)
+      do
         {
-          ev_at (w) = w->reschedule_cb (w, ev_rt_now);
+          ev_periodic *w = (ev_periodic *)ANHE_w (periodics [HEAP0]);
 
-          assert (("libev: ev_periodic reschedule callback returned time in the past", ev_at (w) >= ev_rt_now));
+          /*assert (("libev: inactive timer on periodic heap detected", ev_is_active (w)));*/
 
-          ANHE_at_cache (periodics [HEAP0]);
-          downheap (periodics, periodiccnt, HEAP0);
-        }
-      else if (w->interval)
-        {
-          ev_at (w) = w->offset + ceil ((ev_rt_now - w->offset) / w->interval) * w->interval;
-          /* if next trigger time is not sufficiently in the future, put it there */
-          /* this might happen because of floating point inexactness */
-          if (ev_at (w) - ev_rt_now < TIME_EPSILON)
+          /* first reschedule or stop timer */
+          if (w->reschedule_cb)
            {
-              ev_at (w) += w->interval;
+              ev_at (w) = w->reschedule_cb (w, ev_rt_now);
+
+              assert (("libev: ev_periodic reschedule callback returned time in the past", ev_at (w) >= ev_rt_now));
 
-              /* if interval is unreasonably low we might still have a time in the past */
-              /* so correct this. this will make the periodic very inexact, but the user */
-              /* has effectively asked to get triggered more often than possible */
-              if (ev_at (w) < ev_rt_now)
-                ev_at (w) = ev_rt_now;
+              ANHE_at_cache (periodics [HEAP0]);
+              downheap (periodics, periodiccnt, HEAP0);
            }
+          else if (w->interval)
+            {
+              ev_at (w) = w->offset + ceil ((ev_rt_now - w->offset) / w->interval) * w->interval;
+              /* if next trigger time is not sufficiently in the future, put it there */
+              /* this might happen because of floating point inexactness */
+              if (ev_at (w) - ev_rt_now < TIME_EPSILON)
+                {
+                  ev_at (w) += w->interval;
+
+                  /* if interval is unreasonably low we might still have a time in the past */
+                  /* so correct this. this will make the periodic very inexact, but the user */
+                  /* has effectively asked to get triggered more often than possible */
+                  if (ev_at (w) < ev_rt_now)
+                    ev_at (w) = ev_rt_now;
+                }
+
+              ANHE_at_cache (periodics [HEAP0]);
+              downheap (periodics, periodiccnt, HEAP0);
+            }
+          else
+            ev_periodic_stop (EV_A_ w); /* nonrepeating: stop timer */
 
-          ANHE_at_cache (periodics [HEAP0]);
-          downheap (periodics, periodiccnt, HEAP0);
+          EV_FREQUENT_CHECK;
+          feed_reverse (EV_A_ (W)w);
         }
-      else
-        ev_periodic_stop (EV_A_ w); /* nonrepeating: stop timer */
+      while (periodiccnt && ANHE_at (periodics [HEAP0]) < ev_rt_now);
 
-      EV_FREQUENT_CHECK;
-      ev_feed_event (EV_A_ (W)w, EV_PERIODIC);
+      feed_reverse_done (EV_A_ EV_PERIODIC);
     }
 }
 
+/* simply recalculate all periodics */
+/* TODO: maybe ensure that at least one event happens when jumping forward? */
 static void noinline
 periodics_reschedule (EV_P)
 {
@@ -1849,14 +2118,29 @@ periodics_reschedule (EV_P)
 }
 #endif
 
-void inline_speed
-time_update (EV_P_ ev_tstamp max_block)
+/* adjust all timers by a given offset */
+static void noinline
+timers_reschedule (EV_P_ ev_tstamp adjust)
 {
   int i;
 
+  for (i = 0; i < timercnt; ++i)
+    {
+      ANHE *he = timers + i + HEAP0;
+      ANHE_w (*he)->at += adjust;
+      ANHE_at_cache (*he);
+    }
+}
+
+/* fetch new monotonic and realtime times from the kernel */
+/* also detect if there was a timejump, and act accordingly */
+inline_speed void
+time_update (EV_P_ ev_tstamp max_block)
+{
 #if EV_USE_MONOTONIC
   if (expect_true (have_monotonic))
     {
+      int i;
       ev_tstamp odiff = rtmn_diff;
 
       mn_now = get_clock ();
@@ -1892,11 +2176,11 @@ time_update (EV_P_ ev_tstamp max_block)
           now_floor = mn_now;
         }
 
+      /* no timer adjustment, as the monotonic clock doesn't jump */
+      /* timers_reschedule (EV_A_ rtmn_diff - odiff) */
 # if EV_PERIODIC_ENABLE
       periodics_reschedule (EV_A);
 # endif
-      /* no timer adjustment, as the monotonic clock doesn't jump */
-      /* timers_reschedule (EV_A_ rtmn_diff - odiff) */
     }
   else
 #endif
@@ -1905,16 +2189,11 @@ time_update (EV_P_ ev_tstamp max_block)
 
   if (expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + MIN_TIMEJUMP))
     {
+      /* adjust timers. this is easy, as the offset is the same for all of them */
+      timers_reschedule (EV_A_ ev_rt_now - mn_now);
 #if EV_PERIODIC_ENABLE
       periodics_reschedule (EV_A);
 #endif
-      /* adjust timers. this is easy, as the offset is the same for all of them */
-      for (i = 0; i < timercnt; ++i)
-        {
-          ANHE *he = timers + i + HEAP0;
-          ANHE_w (*he)->at += ev_rt_now - mn_now;
-          ANHE_at_cache (*he);
-        }
     }
 
   mn_now = ev_rt_now;
@@ -1922,31 +2201,17 @@ time_update (EV_P_ ev_tstamp max_block)
 }
 
 void
-ev_ref (EV_P)
-{
-  ++activecnt;
-}
-
-void
-ev_unref (EV_P)
-{
-  --activecnt;
-}
-
-void
-ev_now_update (EV_P)
+ev_loop (EV_P_ int flags)
 {
-  time_update (EV_A_ 1e100);
-}
+#if EV_MINIMAL < 2
+  ++loop_depth;
+#endif
 
-static int loop_done;
+  assert (("libev: ev_loop recursion during release detected", loop_done != EVUNLOOP_RECURSE));
 
-void
-ev_loop (EV_P_ int flags)
-{
   loop_done = EVUNLOOP_CANCEL;
 
-  call_pending (EV_A); /* in case we recurse, ensure ordering stays nice and clean */
+  EV_INVOKE_PENDING; /* in case we recurse, ensure ordering stays nice and clean */
 
   do
     {
@@ -1969,7 +2234,7 @@ ev_loop (EV_P_ int flags)
       if (forkcnt)
         {
           queue_events (EV_A_ (W *)forks, forkcnt, EV_FORK);
-          call_pending (EV_A);
+          EV_INVOKE_PENDING;
         }
 #endif
 
@@ -1977,10 +2242,10 @@ ev_loop (EV_P_ int flags)
       if (expect_false (preparecnt))
         {
           queue_events (EV_A_ (W *)prepares, preparecnt, EV_PREPARE);
-          call_pending (EV_A);
+          EV_INVOKE_PENDING;
         }
 
-      if (expect_false (!activecnt))
+      if (expect_false (loop_done))
         break;
 
       /* we might have forked, so reify kernel state if necessary */
@@ -1997,6 +2262,9 @@ ev_loop (EV_P_ int flags)
 
       if (expect_true (!(flags & EVLOOP_NONBLOCK || idleall || !activecnt)))
         {
+          /* remember old timestamp for io_blocktime calculation */
+          ev_tstamp prev_mn_now = mn_now;
+
           /* update time to cancel out callback processing overhead */
           time_update (EV_A_ 1e100);
 
@@ -2016,23 +2284,32 @@ ev_loop (EV_P_ int flags)
             }
 #endif
 
+          /* don't let timeouts decrease the waittime below timeout_blocktime */
           if (expect_false (waittime < timeout_blocktime))
            waittime = timeout_blocktime;
 
-          sleeptime = waittime - backend_fudge;
+          /* extra check because io_blocktime is commonly 0 */
+          if (expect_false (io_blocktime))
+            {
+              sleeptime = io_blocktime - (mn_now - prev_mn_now);
 
-          if (expect_true (sleeptime > io_blocktime))
-            sleeptime = io_blocktime;
+              if (sleeptime > waittime - backend_fudge)
+                sleeptime = waittime - backend_fudge;
 
-          if (sleeptime)
-            {
-              ev_sleep (sleeptime);
-              waittime -= sleeptime;
+              if (expect_true (sleeptime > 0.))
+                {
+                  ev_sleep (sleeptime);
+                  waittime -= sleeptime;
+                }
            }
         }
 
+#if EV_MINIMAL < 2
       ++loop_count;
+#endif
+      assert ((loop_done = EVUNLOOP_RECURSE, 1)); /* assert for side effect */
       backend_poll (EV_A_ waittime);
+      assert ((loop_done = EVUNLOOP_CANCEL, 1)); /* assert for side effect */
 
       /* update ev_rt_now, do magic */
       time_update (EV_A_ waittime + sleeptime);
@@ -2053,7 +2330,7 @@ ev_loop (EV_P_ int flags)
       if (expect_false (checkcnt))
         queue_events (EV_A_ (W *)checks, checkcnt, EV_CHECK);
 
-      call_pending (EV_A);
+      EV_INVOKE_PENDING;
     }
   while (expect_true (
     activecnt
@@ -2063,6 +2340,10 @@ ev_loop (EV_P_ int flags)
 
   if (loop_done == EVUNLOOP_ONE)
     loop_done = EVUNLOOP_CANCEL;
+
+#if EV_MINIMAL < 2
+  --loop_depth;
+#endif
 }
 
 void
@@ -2071,36 +2352,75 @@ ev_unloop (EV_P_ int how)
   loop_done = how;
 }
 
+void
+ev_ref (EV_P)
+{
+  ++activecnt;
+}
+
+void
+ev_unref (EV_P)
+{
+  --activecnt;
+}
+
+void
+ev_now_update (EV_P)
+{
+  time_update (EV_A_ 1e100);
+}
+
+void
+ev_suspend (EV_P)
+{
+  ev_now_update (EV_A);
+}
+
+void
+ev_resume (EV_P)
+{
+  ev_tstamp mn_prev = mn_now;
+
+  ev_now_update (EV_A);
+  timers_reschedule (EV_A_ mn_now - mn_prev);
+#if EV_PERIODIC_ENABLE
+  /* TODO: really do this? */
+  periodics_reschedule (EV_A);
+#endif
+}
+
 /*****************************************************************************/
+/* singly-linked list management, used when the expected list length is short */
 
-void inline_size
+inline_size void
 wlist_add (WL *head, WL elem)
 {
   elem->next = *head;
   *head = elem;
 }
 
-void inline_size
+inline_size void
 wlist_del (WL *head, WL elem)
 {
   while (*head)
     {
-      if (*head == elem)
+      if (expect_true (*head == elem))
        {
          *head = elem->next;
-          return;
+          break;
        }
 
      head = &(*head)->next;
    }
 }
 
-void inline_speed
+/* internal, faster, version of ev_clear_pending */
+inline_speed void
 clear_pending (EV_P_ W w)
 {
   if (w->pending)
    {
-      pendings [ABSPRI (w)][w->pending - 1].w = 0;
+      pendings [ABSPRI (w)][w->pending - 1].w = (W)&pending_w;
      w->pending = 0;
    }
 }
@@ -2114,24 +2434,24 @@ ev_clear_pending (EV_P_ void *w)
   if (expect_true (pending))
    {
      ANPENDING *p = pendings [ABSPRI (w_)] + pending - 1;
+      p->w = (W)&pending_w;
      w_->pending = 0;
-      p->w = 0;
      return p->events;
    }
   else
    return 0;
 }
 
-void inline_size
+inline_size void
 pri_adjust (EV_P_ W w)
 {
-  int pri = w->priority;
+  int pri = ev_priority (w);
   pri = pri < EV_MINPRI ? EV_MINPRI : pri;
   pri = pri > EV_MAXPRI ? EV_MAXPRI : pri;
-  w->priority = pri;
+  ev_set_priority (w, pri);
 }
 
-void inline_speed
+inline_speed void
 ev_start (EV_P_ W w, int active)
 {
   pri_adjust (EV_A_ w);
@@ -2139,7 +2459,7 @@ ev_start (EV_P_ W w, int active)
   ev_ref (EV_A);
 }
 
-void inline_size
+inline_size void
 ev_stop (EV_P_ W w)
 {
   ev_unref (EV_A);
@@ -2157,7 +2477,7 @@ ev_io_start (EV_P_ ev_io *w)
     return;
 
   assert (("libev: ev_io_start called with negative fd", fd >= 0));
-  assert (("libev: ev_io start called with illegal event mask", !(w->events & ~(EV_IOFDSET | EV_READ | EV_WRITE))));
+  assert (("libev: ev_io start called with illegal event mask", !(w->events & ~(EV__IOFDSET | EV_READ | EV_WRITE))));
 
   EV_FREQUENT_CHECK;
 
@@ -2165,8 +2485,8 @@ ev_io_start (EV_P_ ev_io *w)
   array_needsize (ANFD, anfds, anfdmax, fd + 1, array_init_zero);
   wlist_add (&anfds[fd].head, (WL)w);
 
-  fd_change (EV_A_ fd, w->events & EV_IOFDSET | 1);
-  w->events &= ~EV_IOFDSET;
+  fd_change (EV_A_ fd, w->events & EV__IOFDSET | EV_ANFD_REIFY);
+  w->events &= ~EV__IOFDSET;
 
   EV_FREQUENT_CHECK;
 }
@@ -2269,6 +2589,12 @@ ev_timer_again (EV_P_ ev_timer *w)
   EV_FREQUENT_CHECK;
 }
 
+ev_tstamp
+ev_timer_remaining (EV_P_ ev_timer *w)
+{
+  return ev_at (w) - (ev_is_active (w) ? mn_now : 0.);
+}
+
 #if EV_PERIODIC_ENABLE
 void noinline
 ev_periodic_start (EV_P_ ev_periodic *w)
@@ -2345,47 +2671,75 @@ ev_periodic_again (EV_P_ ev_periodic *w)
 void noinline
 ev_signal_start (EV_P_ ev_signal *w)
 {
-#if EV_MULTIPLICITY
-  assert (("libev: signal watchers are only supported in the default loop", loop == ev_default_loop_ptr));
-#endif
   if (expect_false (ev_is_active (w)))
     return;
 
-  assert (("libev: ev_signal_start called with illegal signal number", w->signum > 0));
+  assert (("libev: ev_signal_start called with illegal signal number", w->signum > 0 && w->signum < EV_NSIG));
 
-  evpipe_init (EV_A);
+#if EV_MULTIPLICITY
+  assert (("libev: a signal must not be attached to two different loops",
+           !signals [w->signum - 1].loop || signals [w->signum - 1].loop == loop));
+
+  signals [w->signum - 1].loop = EV_A;
+#endif
 
   EV_FREQUENT_CHECK;
 
-  {
-#ifndef _WIN32
-    sigset_t full, prev;
-    sigfillset (&full);
-    sigprocmask (SIG_SETMASK, &full, &prev);
-#endif
+#if EV_USE_SIGNALFD
+  if (sigfd == -2)
+    {
+      sigfd = signalfd (-1, &sigfd_set, SFD_NONBLOCK | SFD_CLOEXEC);
+      if (sigfd < 0 && errno == EINVAL)
+        sigfd = signalfd (-1, &sigfd_set, 0); /* retry without flags */
+
+      if (sigfd >= 0)
+        {
+          fd_intern (sigfd); /* doing it twice will not hurt */
 
-    array_needsize (ANSIG, signals, signalmax, w->signum, array_init_zero);
+          sigemptyset (&sigfd_set);
 
-#ifndef _WIN32
-    sigprocmask (SIG_SETMASK, &prev, 0);
+          ev_io_init (&sigfd_w, sigfdcb, sigfd, EV_READ);
+          ev_set_priority (&sigfd_w, EV_MAXPRI);
+          ev_io_start (EV_A_ &sigfd_w);
+          ev_unref (EV_A); /* signalfd watcher should not keep loop alive */
+        }
+    }
+
+  if (sigfd >= 0)
+    {
+      /* TODO: check .head */
+      sigaddset (&sigfd_set, w->signum);
+      sigprocmask (SIG_BLOCK, &sigfd_set, 0);
+
+      signalfd (sigfd, &sigfd_set, 0);
+    }
 #endif
-  }
 
   ev_start (EV_A_ (W)w, 1);
   wlist_add (&signals [w->signum - 1].head, (WL)w);
 
   if (!((WL)w)->next)
-    {
-#if _WIN32
-      signal (w->signum, ev_sighandler);
-#else
-      struct sigaction sa;
-      sa.sa_handler = ev_sighandler;
-      sigfillset (&sa.sa_mask);
-      sa.sa_flags = SA_RESTART; /* if restarting works we save one iteration */
-      sigaction (w->signum, &sa, 0);
+# if EV_USE_SIGNALFD
+    if (sigfd < 0) /*TODO*/
+# endif
+      {
+# if _WIN32
+        signal (w->signum, ev_sighandler);
+# else
+        struct sigaction sa;
+
+        evpipe_init (EV_A);
+
+        sa.sa_handler = ev_sighandler;
+        sigfillset (&sa.sa_mask);
+        sa.sa_flags = SA_RESTART; /* if restarting works we save one iteration */
+        sigaction (w->signum, &sa, 0);
+
+        sigemptyset (&sa.sa_mask);
+        sigaddset (&sa.sa_mask, w->signum);
+        sigprocmask (SIG_UNBLOCK, &sa.sa_mask, 0);
 #endif
-    }
+      }
 
   EV_FREQUENT_CHECK;
 }
@@ -2403,7 +2757,23 @@ ev_signal_stop (EV_P_ ev_signal *w)
   ev_stop (EV_A_ (W)w);
 
   if (!signals [w->signum - 1].head)
-    signal (w->signum, SIG_DFL);
+    {
+#if EV_MULTIPLICITY
+      signals [w->signum - 1].loop = 0; /* unattach from signal */
+#endif
+#if EV_USE_SIGNALFD
+      if (sigfd >= 0)
+        {
+          sigprocmask (SIG_UNBLOCK, &sigfd_set, 0);//D
+          sigdelset (&sigfd_set, w->signum);
+          signalfd (sigfd, &sigfd_set, 0);
+          sigprocmask (SIG_BLOCK, &sigfd_set, 0);//D
+          /*TODO: maybe unblock signal? */
+        }
+      else
+#endif
+        signal (w->signum, SIG_DFL);
+    }
 
   EV_FREQUENT_CHECK;
 }
@@ -2574,7 +2944,7 @@ infy_cb (EV_P_ ev_io *w, int revents)
         infy_wd (EV_A_ ev->wd, ev->wd, ev);
     }
 
-void inline_size
+inline_size void
 check_2625 (EV_P)
 {
   /* kernels < 2.6.25 are borked
@@ -2597,7 +2967,7 @@ check_2625 (EV_P)
   fs_2625 = 1;
 }
 
-void inline_size
+inline_size void
 infy_init (EV_P)
 {
   if (fs_fd != -2)
@@ -2617,7 +2987,7 @@ infy_init (EV_P)
     }
 }
 
-void inline_size
+inline_size void
 infy_fork (EV_P)
 {
   int slot;
@@ -2893,7 +3263,7 @@ embed_prepare_cb (EV_P_ ev_prepare *prepare, int revents)
   ev_embed *w = (ev_embed *)(((char *)prepare) - offsetof (ev_embed, prepare));
 
   {
-    struct ev_loop *loop = w->other;
+    EV_P = w->other;
 
     while (fdchangecnt)
       {
@@ -2911,7 +3281,7 @@ embed_fork_cb (EV_P_ ev_fork *fork_w, int revents)
   ev_embed_stop (EV_A_ w);
 
   {
-    struct ev_loop *loop = w->other;
+    EV_P = w->other;
 
     ev_loop_fork (EV_A);
     ev_loop (EV_A_ EVLOOP_NONBLOCK);
@@ -2935,7 +3305,7 @@ ev_embed_start (EV_P_ ev_embed *w)
     return;
 
   {
-    struct ev_loop *loop = w->other;
+    EV_P = w->other;
     assert (("libev: loop to be embedded is not embeddable", backend & ev_embeddable_backends ()));
     ev_io_init (&w->io, embed_io_cb, backend_fd, EV_READ);
   }
@@ -3057,7 +3427,7 @@ void
 ev_async_send (EV_P_ ev_async *w)
 {
   w->sent = 1;
-  evpipe_write (EV_A_ &gotasync);
+  evpipe_write (EV_A_ &async_pending);
 }
 #endif
 
@@ -3129,6 +3499,114 @@ ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, vo
     }
 }
 
+/*****************************************************************************/
+
+#if EV_WALK_ENABLE
+void
+ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w))
+{
+  int i, j;
+  ev_watcher_list *wl, *wn;
+
+  if (types & (EV_IO | EV_EMBED))
+    for (i = 0; i < anfdmax; ++i)
+      for (wl = anfds [i].head; wl; )
+        {
+          wn = wl->next;
+
+#if EV_EMBED_ENABLE
+          if (ev_cb ((ev_io *)wl) == embed_io_cb)
+            {
+              if (types & EV_EMBED)
+                cb (EV_A_ EV_EMBED, ((char *)wl) - offsetof (struct ev_embed, io));
+            }
+          else
+#endif
+#if EV_USE_INOTIFY
+          if (ev_cb ((ev_io *)wl) == infy_cb)
+            ;
+          else
+#endif
+          if ((ev_io *)wl != &pipe_w)
+            if (types & EV_IO)
+              cb (EV_A_ EV_IO, wl);
+
+          wl = wn;
+        }
+
+  if (types & (EV_TIMER | EV_STAT))
3538
+ for (i = timercnt + HEAP0; i-- > HEAP0; )
3539
+ #if EV_STAT_ENABLE
3540
+ /*TODO: timer is not always active*/
3541
+ if (ev_cb ((ev_timer *)ANHE_w (timers [i])) == stat_timer_cb)
3542
+ {
3543
+ if (types & EV_STAT)
3544
+ cb (EV_A_ EV_STAT, ((char *)ANHE_w (timers [i])) - offsetof (struct ev_stat, timer));
3545
+ }
3546
+ else
3547
+ #endif
3548
+ if (types & EV_TIMER)
3549
+ cb (EV_A_ EV_TIMER, ANHE_w (timers [i]));
3550
+
3551
+ #if EV_PERIODIC_ENABLE
3552
+ if (types & EV_PERIODIC)
3553
+ for (i = periodiccnt + HEAP0; i-- > HEAP0; )
3554
+ cb (EV_A_ EV_PERIODIC, ANHE_w (periodics [i]));
3555
+ #endif
3556
+
3557
+ #if EV_IDLE_ENABLE
3558
+ if (types & EV_IDLE)
3559
+ for (j = NUMPRI; i--; )
3560
+ for (i = idlecnt [j]; i--; )
3561
+ cb (EV_A_ EV_IDLE, idles [j][i]);
3562
+ #endif
3563
+
3564
+ #if EV_FORK_ENABLE
3565
+ if (types & EV_FORK)
3566
+ for (i = forkcnt; i--; )
3567
+ if (ev_cb (forks [i]) != embed_fork_cb)
3568
+ cb (EV_A_ EV_FORK, forks [i]);
3569
+ #endif
3570
+
3571
+ #if EV_ASYNC_ENABLE
3572
+ if (types & EV_ASYNC)
3573
+ for (i = asynccnt; i--; )
3574
+ cb (EV_A_ EV_ASYNC, asyncs [i]);
3575
+ #endif
3576
+
3577
+ if (types & EV_PREPARE)
3578
+ for (i = preparecnt; i--; )
3579
+ #if EV_EMBED_ENABLE
3580
+ if (ev_cb (prepares [i]) != embed_prepare_cb)
3581
+ #endif
3582
+ cb (EV_A_ EV_PREPARE, prepares [i]);
3583
+
3584
+ if (types & EV_CHECK)
3585
+ for (i = checkcnt; i--; )
3586
+ cb (EV_A_ EV_CHECK, checks [i]);
3587
+
3588
+ if (types & EV_SIGNAL)
3589
+ for (i = 0; i < EV_NSIG - 1; ++i)
3590
+ for (wl = signals [i].head; wl; )
3591
+ {
3592
+ wn = wl->next;
3593
+ cb (EV_A_ EV_SIGNAL, wl);
3594
+ wl = wn;
3595
+ }
3596
+
3597
+ if (types & EV_CHILD)
3598
+ for (i = EV_PID_HASHSIZE; i--; )
3599
+ for (wl = childs [i]; wl; )
3600
+ {
3601
+ wn = wl->next;
3602
+ cb (EV_A_ EV_CHILD, wl);
3603
+ wl = wn;
3604
+ }
3605
+ /* EV_STAT 0x00001000 /* stat data changed */
3606
+ /* EV_EMBED 0x00010000 /* embedded event loop needs sweep */
3607
+ }
3608
+ #endif
3609
+
3132
3610
  #if EV_MULTIPLICITY
3133
3611
  #include "ev_wrap.h"
3134
3612
  #endif