tupelo 0.11 → 0.12

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: be780826d017e0ea442fa49d2a6403cc44cfe263
- data.tar.gz: 133fa6c3218dd1fc4b06a8ea407a35ca3bdea0ac
+ metadata.gz: dc4bce707ea05097c67e685d20435968d326daaf
+ data.tar.gz: 02a62f5ed3513d411b0e3bbd4b6d21af08b06c58
  SHA512:
- metadata.gz: 3327df2fc6fd2587dd244b99a4ed64827aeafd3314f7d3f34360e69f5598f294cd1c31010f8729c7ba913f1d1f489ce624735090074976de5aca19f88db5a646
- data.tar.gz: 9b8c852daa50910f9c0823601a85618000404259893de1bc187393b8f0faeb99321ed06cbf78fb7ad488e9725326efa43202f201ae81c5a6d1e832cb9660cd3b
+ metadata.gz: 843a155e6e411c6ff9da110b83cfae4c69fa53b06abd0b34b0aa4f469bd97447e58e08091a3009b49d914615547dee9c5f3f3a84e1e0c89481b6184c8f429581
+ data.tar.gz: 74fd680cd05b27977afc4958eeea19311dcf11120e1d3c2232371e8abda0f275ffc3f9204cb3197412cbb08664c9fc37a789b43c7341a716b00a1f0fc8261fe3
data/README.md CHANGED
@@ -1,3 +1,5 @@
+ **NEWS**: Come hear a talk on Tupelo on December 11 in San Francisco at the [SF Distributed Computing meetup](http://www.meetup.com/San-Francisco-Distributed-Computing). Location is TBD. Abstract: [doc/sfdc.txt](doc/sfdc.txt).
+
  tupelo
  ==

@@ -308,7 +310,14 @@ Transactions have a significant disadvantage compared to using take/write to loc

  Transactions do have an advantage over using take/write to lock/unlock tuples: there is no possibility of deadlock. See [example/deadlock.rb](example/deadlock.rb) and [example/parallel.rb](example/parallel.rb).

- Another advantage of tranactions is that it is possible to guarantee continuous existence of a time-series of tuples. For example, suppose that tuples matching `{step: Numeric}` indicate the progress of some activity. With transactions, you can guarantee that there is exactly one matching tuple at any time. So any client which reads this template will find a match without blocking.
+ Another advantage of transactions is that it is possible to guarantee continuous existence of a time-series of tuples. For example, suppose that tuples matching `{step: Numeric}` indicate the progress of some activity. With transactions, you can guarantee that there is exactly one matching tuple at any time, and that no client ever sees an intermediate or inconsistent state of the counter:
+
+     transaction do
+       step = take(step: nil)["step"]
+       write step: step + 1
+     end
+
+ Any client which reads this template will find a (unique) match without blocking.

  Another use of transactions: forcing a retry when something changes:

@@ -479,6 +488,14 @@ General application clients:

  * client threads construct transactions and wait for results (communicating with the worker thread over queues); they may also use asynchronous transactions

+ Some design principles:
+
+ * Once a transaction has been sent from a client to the message sequencer, it references only tuples, not templates. This makes it faster and simpler for each receiving client to apply or reject the transaction. Also, clients that do not support local template searching (such as archivers) can store tuples using especially efficient data structures that only support tuple-insert, tuple-delete, and iterate/export operations.
+
+ * Use non-blocking protocols. For example, transactions can be evaluated in one client without waiting for information from other clients. Even at the level of reading messages over sockets, tupelo uses (via funl and object-stream) non-blocking constructs. At the application level, you can use transactions to optimistically modify shared state (but applications are free to use locking if high contention demands it).
+
+ * Do the hard work on the client side. For example, all pattern matching happens in the client that requested an operation that has a template argument, not on the server or other clients.
+
  Protocol
  --------

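The counter invariant described in the README change above (exactly one `{step: Numeric}` tuple at all times, incremented atomically) can be illustrated outside tupelo. A minimal sketch in plain Ruby, where `TinySpace` and its `transaction`/`take`/`write` methods are hypothetical stand-ins (a Mutex plays the role of tupelo's transaction atomicity — this is not tupelo's API):

```ruby
# Hypothetical in-memory stand-in for a tuplespace; a Mutex provides the
# atomicity that a tupelo transaction would.
class TinySpace
  def initialize
    @tuples = []
    @lock = Mutex.new
  end

  # Atomically run a take-then-write sequence, like a tupelo transaction.
  def transaction
    @lock.synchronize { yield }
  end

  def take key
    i = @tuples.index {|t| t.key?(key)} or raise "no match for #{key}"
    @tuples.delete_at(i)
  end

  def write tuple
    @tuples << tuple
  end

  def count_matching key
    @tuples.count {|t| t.key?(key)}
  end

  def read_matching key
    @tuples.find {|t| t.key?(key)}
  end
end

space = TinySpace.new
space.write step: 0

10.times.map {
  Thread.new do
    space.transaction do
      step = space.take(:step)[:step]
      space.write step: step + 1
    end
  end
}.each(&:join)

# Invariant holds: exactly one {step: ...} tuple, and no increment was lost.
p space.count_matching(:step)        # => 1
p space.read_matching(:step)[:step]  # => 10
```

Without the transaction (take in one step, write in another), a reader could observe a moment with zero matching tuples.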
data/bin/tspy CHANGED
@@ -28,18 +28,21 @@ if ARGV.delete("-h") or ARGV.delete("--help")
  end

  require 'tupelo/app'
+ require 'tupelo/archiver/tuplespace'

- Tupelo.application do |app|
-   app.local do |client|
+ Tupelo.application do
+   # Use hash-and-count-based storage, for efficiency (this client never
+   # does take or read).
+   local tuplespace: [Tupelo::Archiver::Tuplespace, zero_tolerance: 1000] do
      trap :INT do
        exit!
      end

-     note = client.notifier
-     client.log "%10s %10s %10s %s" % %w{ tick client status operation }
+     note = notifier
+     log "%10s %10s %10s %s" % %w{ tick client status operation }
      loop do
        status, tick, cid, op = note.wait
-       client.log "%10d %10d %10s %p" % [tick, cid, status, op]
+       log "%10d %10d %10s %p" % [tick, cid, status, op]
      end
    end
  end
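The tspy change above switches to "hash-and-count-based storage" for a client that never takes or reads. The idea behind such a store — a multiset that only needs insert, delete, and iterate can be a Hash from tuple to count — can be sketched as follows. `CountingStore` is a hypothetical illustration, not the actual `Tupelo::Archiver::Tuplespace`:

```ruby
# Hypothetical count-based tuple store: supports only insert, delete, and
# iterate, so N duplicate tuples cost one hash slot plus a counter rather
# than N entries. No template matching is possible or needed.
class CountingStore
  include Enumerable

  def initialize
    @counts = Hash.new(0)
  end

  def insert tuple
    @counts[tuple] += 1
  end

  # Returns true if a copy was present and removed, false otherwise.
  def delete tuple
    return false if @counts[tuple] == 0
    @counts[tuple] -= 1
    @counts.delete(tuple) if @counts[tuple] == 0
    true
  end

  # Yields each tuple as many times as it was inserted.
  def each
    @counts.each {|tuple, n| n.times { yield tuple }}
  end
end

store = CountingStore.new
3.times { store.insert ["message", "hi"] }
store.insert ["message", "bye"]
store.delete ["message", "hi"]

p store.count(["message", "hi"])  # => 2
p store.to_a.size                 # => 3
```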
data/example/chat/chat.rb ADDED
@@ -0,0 +1,62 @@
+ # Accepts usual tupelo switches (such as --trace, --debug), plus one argument: a
+ # user name to be shared with other chat clients. New clients see a brief
+ # history of the chat, as well as new messages from other clients.
+ #
+ # You can run several instances of chat.rb. The first will set up all needed
+ # services. The rest will connect by referring to a yaml file in the same dir.
+ # Copy that file to remote hosts (and modify hostnames as needed) for remote
+ # access. If the first instance is run with "--persist-dir <dir>", messages
+ # will persist across service shutdown.
+ #
+ # Compare: https://github.com/bloom-lang/bud/blob/master/examples/chat.
+ #
+ # To do: use a subspace with a sorted data structure, like rbtree or in-memory
+ # sqlite, for the messages.
+
+ require 'tupelo/app'
+
+ svr = "chat.yaml"
+ history_period = 60 # seconds -- discard _my_ messages older than this
+
+ Thread.abort_on_exception = true
+
+ def display_message msg
+   from, time, line = msg.values_at(*%w{from time line})
+   time_str = Time.at(time).strftime("%I:%M.%S")
+   puts "#{from}@#{time_str}> #{line}"
+ end
+
+ Tupelo.tcp_application servers_file: svr do
+   me = argv.shift
+
+   local do
+     require 'readline'
+
+     Thread.new do
+       seen_at_start = {}
+       read_all(from: nil, line: nil, time: nil).
+         sort_by {|msg| msg["time"]}.
+         each {|msg| display_message msg; seen_at_start[msg] = true}
+
+       read from: nil, line: nil, time: nil do |msg|
+         next if msg["from"] == me or seen_at_start[msg]
+         print "\r"; display_message msg
+         Readline.redisplay ### why not u work?
+       end
+     end
+
+     Thread.new do
+       loop do
+         begin
+           t = Time.now.to_f - history_period
+           take({from: me, line: nil, time: 0..t}, timeout: 10)
+         rescue TimeoutError
+         end
+       end
+     end
+
+     while line = Readline.readline("#{me}> ", true)
+       write from: me, line: line, time: Time.now.to_f
+     end
+   end
+ end
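The pruning thread in the chat example above repeatedly takes the author's own messages whose `time` falls in `0..t`, where `t` is `history_period` seconds in the past. The selection logic that template expresses can be sketched in plain Ruby (hedged: an in-memory array stands in for the tuplespace, and a fixed `now` replaces `Time.now.to_f` so the example is deterministic):

```ruby
# Sketch of the chat history-pruning selection: keep only messages from
# "me" that are newer than the cutoff; times are floats, as in chat.rb.
history_period = 60
now = 1000.0  # fixed stand-in for Time.now.to_f

messages = [
  {"from" => "me",  "line" => "old",   "time" => now - 120},
  {"from" => "me",  "line" => "fresh", "time" => now - 5},
  {"from" => "you", "line" => "old",   "time" => now - 120},
]

# The template {from: me, line: nil, time: 0..t} matches only my messages
# older than the cutoff; taking them removes them from the space.
cutoff = now - history_period
expired = messages.select {|m| m["from"] == "me" && (0..cutoff).cover?(m["time"])}
messages -= expired

p expired.map {|m| m["line"]}   # => ["old"]
p messages.map {|m| m["line"]}  # => ["fresh", "old"]
```

Note that other users' old messages are left alone — each client prunes only its own history.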
data/example/fish.rb ADDED
@@ -0,0 +1,44 @@
+ require 'tupelo/app'
+
+ Tupelo.application do
+   2.times do
+     child passive: true do # these could be threads in the next child
+       loop do
+         transaction do
+           fish, = take [String]
+           write [1, fish]
+         end
+       end
+     end
+   end
+
+   2.times do
+     child passive: true do
+       loop do
+         transaction do
+           n1, fish = take([Integer, String]) ## need to iterate on this search
+           n2, _ = take([Integer, fish]) ## if this take blocks
+           write [n1 + n2, fish]
+         end
+       end
+     end
+   end
+
+   local do
+     seed = 3
+     srand seed
+     log "seed = #{seed}"
+
+     fishes = %w{ trout marlin char salmon }
+     a = fishes * 10
+     a.shuffle!
+     a.each do |fish|
+       write [fish]
+       sleep rand % 0.1
+     end
+
+     fishes.each do |fish|
+       log take [10, fish]
+     end
+   end
+ end
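In fish.rb above, the first group of workers converts each `[fish]` into a `[1, fish]` count, and the second group repeatedly merges two counts for the same fish into their sum, so ten `[1, fish]` tuples eventually collapse into one `[10, fish]`. The convergence of that pairwise merge can be sketched sequentially (a plain Ruby reduction; the concurrency and tuple matching of the real example are abstracted away):

```ruby
# Sketch of the merge step in fish.rb: repeatedly replace two count
# tuples [n1, fish] and [n2, fish] with [n1 + n2, fish] until one remains.
def merge_counts tuples
  until tuples.size == 1
    a = tuples.shift
    b = tuples.shift
    tuples << [a[0] + b[0], a[1]]   # like: write [n1 + n2, fish]
  end
  tuples.first
end

counts = Array.new(10) { [1, "trout"] }  # ten [1, "trout"] tuples
p merge_counts(counts)  # => [10, "trout"]
```

The order of merges doesn't matter — addition is associative — which is why the concurrent workers in fish.rb can race freely.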
data/example/fish0.rb ADDED
@@ -0,0 +1,39 @@
+ # This works, but requires pre-initialization of all counters.
+
+ require 'tupelo/app'
+
+ Tupelo.application do
+   2.times do
+     child passive: true do
+       loop do
+         transaction do
+           fish, _ = take([String])
+           n, _ = take([Integer, fish])
+           write [n + 1, fish]
+         end
+       end
+     end
+   end
+
+   local do
+     seed = 3
+     srand seed
+     log "seed = #{seed}"
+
+     fishes = %w{ trout marlin char salmon }
+     fishes.each do |fish|
+       write [0, fish]
+     end
+
+     a = fishes * 10
+     a.shuffle!
+     a.each do |fish|
+       write [fish]
+       sleep rand % 0.1
+     end
+
+     fishes.each do |fish|
+       log take [10, fish]
+     end
+   end
+ end
data/example/multi-tier/multi-sinatras.rb ADDED
@@ -0,0 +1,85 @@
+ # A tupelo cluster may expose its services using some other protocol so that
+ # a process need not be tupelo-aware to use them. This example expands on the
+ # http.rb example and shows how to use tupelo to coordinate multiple sinatra
+ # instances.
+ #
+ # Depends on the sinatra and http gems.
+
+ PORTS = [9001, 9002, 9003]
+
+ fork do
+   require 'tupelo/app'
+
+   Tupelo.application do
+     PORTS.each do |port|
+       child do |client|
+         require 'sinatra/base'
+
+         Class.new(Sinatra::Base).class_eval do
+           get '/' do
+             "hello, world\n"
+           end
+
+           post '/send' do
+             text = params["text"]
+             dest = params["dest"]
+             client.write ["message", dest, text]
+             ## should use subspaces and a data structure that keeps
+             ## messages in order
+           end
+
+           get '/recv' do
+             dest = params["dest"]
+             _, _, text = client.take ["message", dest, String]
+             text
+           end
+
+           get '/exit' do
+             Thread.new {sleep 1; exit}
+             "bye\n"
+           end
+
+           set :port, port
+           run!
+         end
+       end
+     end
+   end
+ end
+
+ fork do # No tupelo in this process.
+   require 'http'
+
+   # For simplicity, one http client per http server
+   http_clients = PORTS.map.with_index do |port, i|
+     {
+       server_url: "http://localhost:#{port}",
+       id: i
+     }
+   end
+
+   http_clients.each do |http_client|
+     fork do
+       url = http_client[:server_url]
+       print "trying server at #{url}"
+       begin
+         print "."
+         HTTP.get url
+       rescue Errno::ECONNREFUSED
+         sleep 0.2
+         retry
+       end
+
+       other = (http_client[:id] + 1) % http_clients.size
+       me = http_client[:id]
+
+       puts
+       HTTP.post "#{url}/send?dest=#{other}&text=hello_from_#{me}"
+       text = HTTP.get "#{url}/recv?dest=#{me}"
+       puts "http client #{me} got: #{text}\n"
+       HTTP.get "#{url}/exit"
+     end
+   end
+ end
+
+ Process.waitall
data/example/pregel/distributed.rb CHANGED
@@ -1,4 +1,5 @@
- # Distributed version of pagerank.rb.
+ # Distributed version of pagerank.rb. This uses only a single host. To use
+ # many hosts, see remote.rb.

  # TODO
  #
@@ -7,42 +8,16 @@
  # And the subspaces could be defined by consistent hashing and smarter
  # partitioning.
  # Also, need to handle crashed process and lost tuple (as lease.rb maybe).
- # Would be nice to have remote option as well.
  # Abstract out domain-specific code from generic framework code.
  # Option to compare result with that of pagerank.rb using same seed.

  require 'tupelo/app'
+ require_relative 'update'

  NUM_WORKERS = 4
  NUM_VERTICES = 10
  PRNG_SEED = 1234

- def update vertex, incoming_messages, vs_dst
-   vertex = vertex.dup
-   incoming_messages ||= []
-   outgoing_messages = []
-   v_me = vertex["id"]
-   rank = vertex["rank"]
-   step = vertex["step"]
-   active = true
-
-   if step < 50
-     rank = 0.15 / NUM_VERTICES + 0.85 * incoming_messages.inject(0.0) {|sum, m|
-       sum + m["rank"]}
-     outgoing_rank = rank / vs_dst.size
-     outgoing_messages = vs_dst.map {|v_dst|
-       {src: v_me, dst: v_dst, step: step + 1, rank: outgoing_rank}}
-   else
-     active = false
-   end
-
-   vertex["rank"] = rank
-   vertex["active"] = active
-   vertex["step"] += 1
-
-   [vertex, outgoing_messages]
- end
-
  Tupelo.application do

    NUM_WORKERS.times do |i|
@@ -2,61 +2,29 @@

  # TODO
  #
- # Improvements noted in the article.
- # Scale better with subspaces and sqlite or other data structure.
- # And the subspaces could be defined by consistent hashing and smarter
- # partitioning.
- # Also, need to handle crashed process and lost tuple (as lease.rb maybe).
- # Would be nice to have remote option as well.
- # Abstract out domain-specific code from generic framework code.
- # Option to compare result with that of pagerank.rb using same seed.
+ # Improvements listed in pagerank.rb.

  require 'tupelo/app'
+ require 'tupelo/app/remote'
+ require_relative 'update'

- NUM_WORKERS = 4
+ NUM_WORKERS = 8
  NUM_VERTICES = 10
  PRNG_SEED = 1234

- def update vertex, incoming_messages, vs_dst
-   vertex = vertex.dup
-   incoming_messages ||= []
-   outgoing_messages = []
-   v_me = vertex["id"]
-   rank = vertex["rank"]
-   step = vertex["step"]
-   active = true
-
-   if step < 50
-     rank = 0.15 / NUM_VERTICES + 0.85 * incoming_messages.inject(0.0) {|sum, m|
-       sum + m["rank"]}
-     outgoing_rank = rank / vs_dst.size
-     outgoing_messages = vs_dst.map {|v_dst|
-       {src: v_me, dst: v_dst, step: step + 1, rank: outgoing_rank}}
-   else
-     active = false
-   end
-
-   vertex["rank"] = rank
-   vertex["active"] = active
-   vertex["step"] += 1
-
-   [vertex, outgoing_messages]
- end
-
- require 'tupelo/app/remote'
  def host i
-   case i % 2
-   when 0; "od1"
-   when 1; "od2"
+   if ARGV.empty?
+     abort "Usage: #{$0} [usual opts] host1 host2 host3 ..."
    end
+   ARGV[i % ARGV.size]
  end

  Tupelo.tcp_application do
-
    NUM_WORKERS.times do |i|
-     # child passive: true do
+     ## need better support from EasyServe:
+     ## remote: ..., load: 'update.rb' OR rsync option and then require
      remote host: host(i), log: true, passive: true, eval: %{
-       log "hello"
+       log "starting worker #{i} on host #{host(i)}"
        def update vertex, incoming_messages, vs_dst
          vertex = vertex.dup
          incoming_messages ||= []
1
+ def update vertex, incoming_messages, vs_dst
2
+ vertex = vertex.dup
3
+ incoming_messages ||= []
4
+ outgoing_messages = []
5
+ v_me = vertex["id"]
6
+ rank = vertex["rank"]
7
+ step = vertex["step"]
8
+ active = true
9
+
10
+ if step < 50
11
+ rank = 0.15 / NUM_VERTICES + 0.85 * incoming_messages.inject(0.0) {|sum, m|
12
+ sum + m["rank"]}
13
+ outgoing_rank = rank / vs_dst.size
14
+ outgoing_messages = vs_dst.map {|v_dst|
15
+ {src: v_me, dst: v_dst, step: step + 1, rank: outgoing_rank}}
16
+ else
17
+ active = false
18
+ end
19
+
20
+ vertex["rank"] = rank
21
+ vertex["active"] = active
22
+ vertex["step"] += 1
23
+
24
+ [vertex, outgoing_messages]
25
+ end
data/example/socket-broker.rb ADDED
@@ -0,0 +1,42 @@
+ # Tuples are not well suited to streaming data or very large data, but they
+ # can be used to coordinate access to such data.
+
+ require 'tupelo/app'
+
+ Tupelo.application do
+   child passive: true do
+     loop do
+       _, key = take ["session-request", nil]
+
+       serv = TCPServer.new 0
+       host = serv.addr[2]
+       port = serv.addr[1]
+
+       fork do
+         sock = serv.accept
+         sock.send "lots of data at #{Time.now}", 0
+         sleep 1
+         sock.send "lots more data at #{Time.now}", 0
+       end
+
+       write_wait ["session-response", key, host, port]
+     end
+   end
+
+   2.times do
+     child do
+       key = client_id
+       # use client_id here just because we know it is unique to this client
+
+       write ["session-request", key]
+       _, _, host, port = take ["session-response", key, nil, nil]
+
+       sock = TCPSocket.new host, port
+       loop do
+         msg = sock.recv(1000)
+         break if msg.size == 0
+         log msg
+       end
+     end
+   end
+ end
data/example/subspaces/shop/shop-v1.rb ADDED
@@ -0,0 +1,53 @@
+ # The shop has products, customers, and shopping carts. Customers move
+ # products to their carts, and the app has to prevent two customers getting
+ # the same instance of a product.
+ #
+ # It works, but every process has to handle every transaction. Not good. See
+ # shop-v2.rb.
+
+ require 'tupelo/app'
+
+ PRODUCT_IDS = 1..10
+ CUSTOMER_IDS = 1..10
+
+ Tupelo.application do
+   local do
+     PRODUCT_IDS.each do |product_id|
+       count = 10
+       write ["product", product_id, count]
+     end
+   end
+
+   CUSTOMER_IDS.each do |customer_id|
+     child passive: true do
+       loop do
+         sleep rand % 0.1
+         transaction do
+           # buy the first product we see:
+           _, product_id, count = take ["product", nil, 1..Float::INFINITY]
+           write ["product", product_id, count-1]
+           write ["cart", customer_id, product_id]
+         end
+       end
+     end
+   end
+
+   local do
+     PRODUCT_IDS.each do |product_id|
+       read ["product", product_id, 0] # wait until sold out
+     end
+
+     CUSTOMER_IDS.each do |customer_id|
+       h = Hash.new(0)
+       transaction do
+         while t=take_nowait(["cart", customer_id, nil])
+           h[t[2]] += 1
+         end
+       end
+       puts "Customer #{customer_id} bought:"
+       h.keys.sort.each do |product_id|
+         printf "%10d of product %3d.\n", h[product_id], product_id
+       end
+     end
+   end
+ end
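The shop transaction above couples the inventory decrement with the cart write, which preserves a conservation invariant: units in inventory plus units in carts stays constant, and no count ever goes negative (no product oversold). A minimal sketch of that invariant in plain Ruby, where a Mutex stands in for the tupelo transaction's atomicity and hashes stand in for the tuples (hypothetical code, not tupelo's API):

```ruby
# Sketch of the shop invariant: (inventory remaining) + (items in carts)
# is constant, and inventory never goes negative.
inventory = {}
(1..10).each {|product_id| inventory[product_id] = 10}  # 10 units each, as in shop-v1.rb
carts = Hash.new {|h, k| h[k] = []}
lock = Mutex.new  # stand-in for a tupelo transaction's atomicity

buy = lambda do |customer_id|
  lock.synchronize do
    product_id, _ = inventory.find {|_, count| count > 0}  # like take ["product", nil, 1..INFINITY]
    return false unless product_id
    inventory[product_id] -= 1                             # like write ["product", id, count-1]
    carts[customer_id] << product_id                       # like write ["cart", customer_id, id]
    true
  end
end

(1..10).map {|customer_id|
  Thread.new { 10.times { buy.call(customer_id) } }
}.each(&:join)

p inventory.values.sum          # => 0   (sold out: 100 units, 100 buys)
p carts.values.map(&:size).sum  # => 100 (every unit landed in exactly one cart)
p inventory.values.min          # => 0   (never oversold)
```

If the take and the two writes were separate operations rather than one atomic step, two customers could read the same count and both "buy" the last unit.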
data/example/subspaces/shop/shop-v2.rb ADDED
@@ -0,0 +1,66 @@
+ # Makes the shop a bit more efficient using subspaces. Each customer needs to
+ # subscribe only to the inventory tuples, not to all the tuples in customer
+ # carts. Run with --trace to see tuples assigned to subspaces and also note how
+ # much contention there is.
+
+ require 'tupelo/app'
+
+ PRODUCT_IDS = 1..10
+ CUSTOMER_IDS = 1..10
+
+ Tupelo.application do
+   local do
+     use_subspaces!
+
+     define_subspace(
+       tag: "inventory",
+       template: [
+         {value: "product"},
+         nil,              # product_id
+         {type: "number"}  # count
+       ]
+     )
+
+     PRODUCT_IDS.each do |product_id|
+       count = 10
+       write ["product", product_id, count]
+     end
+   end
+
+   CUSTOMER_IDS.each do |customer_id|
+     child subscribe: "inventory", passive: true do
+       loop do
+         sleep rand % 0.1
+         transaction do
+           # buy the first product we see:
+           _, product_id, count = take ["product", nil, 1..Float::INFINITY]
+           write ["product", product_id, count-1]
+           write ["cart", customer_id, product_id]
+           # Note that the transaction *takes* from inventory and *writes*
+           # outside inventory. To support this, the client must subscribe
+           # to inventory. It doesn't matter whether it subscribes to the
+           # rest of the tuplespace. See the [subspace doc](doc/subspace.md).
+         end
+       end
+     end
+   end
+
+   local subscribe: :all do
+     PRODUCT_IDS.each do |product_id|
+       read ["product", product_id, 0] # wait until sold out
+     end
+
+     CUSTOMER_IDS.each do |customer_id|
+       h = Hash.new(0)
+       transaction do
+         while t=take_nowait(["cart", customer_id, nil])
+           h[t[2]] += 1
+         end
+       end
+       puts "Customer #{customer_id} bought:"
+       h.keys.sort.each do |product_id|
+         printf "%10d of product %3d.\n", h[product_id], product_id
+       end
+     end
+   end
+ end
@@ -1,3 +1,6 @@
+ # Minimal example of using subspaces to limit which tuples each client sees.
+ # Run with --trace to see assignment to subspaces.
+
  require 'tupelo/app'

  Tupelo.application do
data/example/take-nowait-caution.rb ADDED
@@ -0,0 +1,36 @@
+ # The #take and #take_nowait methods behave the same if a match is found in
+ # the local tuple store: they both send a transaction that takes the tuple.
+ # If no match is found, #take blocks, but #take_nowait returns nil.
+ #
+ # In a transaction, #take_nowait has the same behavior. But keep in mind that
+ # things may change by the time the transaction commit is successful.
+ # Some other process may write a matching tuple. So, the return value of
+ # nil is not a guarantee that, when the transaction finishes, there is no match.
+ # This example demonstrates this point.
+ #
+ # See these examples for interesting uses of #take_nowait in a transaction:
+ #
+ #   broker-optimistic.rb
+ #   broker-optimistic-v2.rb
+ #   lease.rb
+ #   pregel/distributed.rb
+
+ require 'tupelo/app'
+
+ Tupelo.application do
+   child do
+     results = transaction do
+       r1 = take_nowait [1]
+       sleep 1
+       r2 = take_nowait [2]
+       [r1, r2]
+     end
+
+     log results
+   end
+
+   child do
+     sleep 0.5
+     write [1], [2]
+   end
+ end
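The contract that take-nowait-caution.rb relies on — return and remove a local match, or return nil without blocking — can be sketched against a trivial in-memory store. `MockSpace` below is a hypothetical illustration of the local-match semantics only; it does not model the commit-time races the example's comments warn about:

```ruby
# Sketch of #take_nowait's local contract: remove and return a matching
# tuple if one is present, else return nil immediately (never block).
class MockSpace
  def initialize(*tuples)
    @tuples = tuples
  end

  def take_nowait template
    i = @tuples.index {|t| t == template}
    i && @tuples.delete_at(i)
  end
end

space = MockSpace.new([1])
p space.take_nowait([1])  # => [1]  (match found and removed)
p space.take_nowait([2])  # => nil  (no match; does not block)
p space.take_nowait([1])  # => nil  (already taken)
```

As the example's header comments stress, a nil here only means "no match *now*, locally" — another client may write a match before the enclosing transaction commits.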
@@ -10,7 +10,7 @@ Tupelo.application do
    child do
      read ["ready"]
      r = take_nowait [1]
-     log "winner! result = #{r.inspect}" if r
+     log "winner! result = #{r}" if r
    end
  end

data/lib/tupelo/app/builder.rb ADDED
@@ -0,0 +1,73 @@
+ module Tupelo
+   # Not an essential part of the library, but used to build up groups of
+   # processes for use in examples, tests, benchmarks, etc.
+   class AppBuilder
+     attr_reader :ez
+
+     # Does this app own (as child processes) the seq, cseq, and arc servers?
+     attr_reader :owns_servers
+
+     # Arguments available to application after tupelo has parsed out switches
+     # and args that it recognizes.
+     attr_reader :argv
+
+     def initialize ez, owns_servers: nil, argv: argv
+       @ez = ez
+       @owns_servers = owns_servers
+       @argv = argv
+     end
+
+     def log
+       ez.log
+     end
+
+     # Yields a client that runs in this process.
+     def local client_class = Client, **opts, &block
+       ez.local :seqd, :cseqd, :arcd do |seqd, cseqd, arcd|
+         opts = {seq: seqd, cseq: cseqd, arc: arcd, log: log}.merge(opts)
+         run_client client_class, **opts do |client|
+           if block
+             if block.arity == 0
+               client.instance_eval &block
+             else
+               yield client
+             end
+           end
+         end
+       end
+     end
+
+     # Yields a client that runs in a subprocess.
+     #
+     # A passive client will be forced to stop after all active clients exit. Use
+     # the passive flag for processes that wait for tuples and respond in some
+     # way. Then you do not have to manually interrupt the whole application when
+     # the active processes are done. See examples.
+     def child client_class = Client, passive: false, **opts, &block
+       ez.child :seqd, :cseqd, :arcd, passive: passive do |seqd, cseqd, arcd|
+         opts = {seq: seqd, cseq: cseqd, arc: arcd, log: log}.merge(opts)
+         run_client client_class, **opts do |client|
+           if block
+             if block.arity == 0
+               client.instance_eval &block
+             else
+               yield client
+             end
+           end
+         end
+       end
+     end
+
+     def run_client client_class, opts
+       log = opts[:log]
+       log.progname = "client <starting in #{log.progname}>"
+       client = client_class.new opts
+       client.start do
+         log.progname = "client #{client.client_id}"
+       end
+       yield client
+     ensure
+       client.stop if client # gracefully exit the tuplespace management thread
+     end
+   end
+ end
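The `local` and `child` helpers in builder.rb above dispatch on block arity: a zero-arity block is `instance_eval`'d in the client (so bare `take`/`write` calls work, as in the updated tspy), while a one-parameter block receives the client explicitly (the older `|client|` style). That dispatch pattern can be shown standalone (`FakeClient` is a hypothetical stand-in, not tupelo's Client):

```ruby
# Standalone sketch of the arity-based block dispatch used by
# AppBuilder#local and #child.
class FakeClient
  def greeting
    "hello from client"
  end

  def run(&block)
    if block.arity == 0
      instance_eval(&block)  # block sees client methods directly
    else
      yield self             # block names the client explicitly
    end
  end
end

a = FakeClient.new.run { greeting }                    # bare method call
b = FakeClient.new.run {|client| client.greeting }     # explicit receiver
p a  # => "hello from client"
p b  # => "hello from client"
```

The tradeoff: `instance_eval` gives the terse DSL style, at the cost of rebinding `self` inside the block, which is why both forms are supported.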
@@ -12,10 +12,14 @@ class Tupelo::AppBuilder
        note = client.notifier
        log << ( "%6s %6s %6s %s\n" % %w{ tick cid status operation } )
        loop do
-         status, tick, cid, op = note.wait
+         status, tick, cid, op, tags = note.wait
          unless status == :attempt
            s = status == :failure ? "FAIL" : ""
-           log << ( "%6d %6d %6s %p\n" % [tick, cid, s, op] )
+           if tags and not tags.empty?
+             log << ( "%6d %6d %6s %p to %p\n" % [tick, cid, s, op, tags] )
+           else
+             log << ( "%6d %6d %6s %p\n" % [tick, cid, s, op] )
+           end
          end
        end
      end
data/lib/tupelo/app.rb CHANGED
@@ -1,74 +1,8 @@
  require 'easy-serve'
  require 'tupelo/client'
+ require 'tupelo/app/builder'

  module Tupelo
-   # Not an essential part of the library, but used to build up groups of
-   # processes for use in examples, tests, benchmarks, etc.
-   class AppBuilder
-     attr_reader :ez
-
-     # Does this app own (as child processes) the seq, cseq, and arc servers?
-     attr_reader :owns_servers
-
-     def initialize ez, owns_servers: nil
-       @ez = ez
-       @owns_servers = owns_servers
-     end
-
-     def log
-       ez.log
-     end
-
-     # Yields a client that runs in this process.
-     def local client_class = Client, **opts, &block
-       ez.local :seqd, :cseqd, :arcd do |seqd, cseqd, arcd|
-         opts = {seq: seqd, cseq: cseqd, arc: arcd, log: log}.merge(opts)
-         run_client client_class, **opts do |client|
-           if block
-             if block.arity == 0
-               client.instance_eval &block
-             else
-               yield client
-             end
-           end
-         end
-       end
-     end
-
-     # Yields a client that runs in a subprocess.
-     #
-     # A passive client will be forced to stop after all active clients exit. Use
-     # the passive flag for processes that wait for tuples and respond in some
-     # way. Then you do not have to manually interrupt the whole application when
-     # the active processes are done. See examples.
-     def child client_class = Client, passive: false, **opts, &block
-       ez.child :seqd, :cseqd, :arcd, passive: passive do |seqd, cseqd, arcd|
-         opts = {seq: seqd, cseq: cseqd, arc: arcd, log: log}.merge(opts)
-         run_client client_class, **opts do |client|
-           if block
-             if block.arity == 0
-               client.instance_eval &block
-             else
-               yield client
-             end
-           end
-         end
-       end
-     end
-
-     def run_client client_class, opts
-       log = opts[:log]
-       log.progname = "client <starting in #{log.progname}>"
-       client = client_class.new opts
-       client.start do
-         log.progname = "client #{client.client_id}"
-       end
-       yield client
-     ensure
-       client.stop if client # gracefully exit the tuplespace management thread
-     end
-   end
-
    # Returns [argv, opts], leaving orig_argv unmodified. The opts hash contains
    # switches (and their arguments, if any) recognized by tupelo. The argv array
    # contains all unrecognized arguments.
@@ -181,7 +115,7 @@ module Tupelo
      end
    end

-   app = AppBuilder.new(ez, owns_servers: owns_servers)
+   app = AppBuilder.new(ez, owns_servers: owns_servers, argv: argv.dup)

    if enable_trace
      require 'tupelo/app/trace'
@@ -1,14 +1,23 @@
  require 'tupelo/client/common'

  class Tupelo::Client
-   # include into class that defines #worker and #log
+   # Include into class that defines #worker and #log.
    module Api
+     # If block given, yield matching tuple to the block if one is found
+     # locally and then yield each new tuple as it arrives.
+     # Otherwise, return one matching tuple, blocking if necessary.
      def read_wait template
-       waiter = Waiter.new(worker.make_template(template), self)
+       waiter = Waiter.new(worker.make_template(template), self, !block_given?)
        worker << waiter
-       result = waiter.wait
-       waiter = nil
-       result
+       if block_given?
+         loop do
+           yield waiter.wait
+         end
+       else
+         result = waiter.wait
+         waiter = nil
+         result
+       end
      ensure
        worker << Unwaiter.new(waiter) if waiter
      end
@@ -40,17 +49,18 @@ class Tupelo::Client
    class WaiterBase
      attr_reader :template
      attr_reader :queue
+     attr_reader :once

-     def initialize template, client
+     def initialize template, client, once = true
        @template = template
        @queue = client.make_queue
        @client = client
+       @once = once
      end

      def gloms tuple
        if template === tuple
          peek tuple
-         true
        else
          false
        end
@@ -58,6 +68,7 @@ class Tupelo::Client

      def peek tuple
        queue << tuple
+       once
      end

      def wait
@@ -328,7 +328,7 @@ class Tupelo::Client
        log.debug {"applying #{op} from client #{msg.client_id}"}

        notify_waiters.each do |waiter|
-         waiter << [:attempt, msg.global_tick, msg.client_id, op]
+         waiter << [:attempt, msg.global_tick, msg.client_id, op, msg.tags]
        end

        granted_tuples = tuplespace.find_distinct_matches_for(op.takes)
@@ -370,7 +370,7 @@ class Tupelo::Client
        notify_waiters.each do |waiter|
          waiter << [
            succeeded ? :success : :failure,
-           msg.global_tick, msg.client_id, op]
+           msg.global_tick, msg.client_id, op, msg.tags]
        end

        trans = nil
@@ -509,7 +509,10 @@ class Tupelo::Client
      def handle_waiter waiter
        tuple = tuplespace.find_match_for waiter.template
        if tuple
-         waiter.peek tuple
+         once = waiter.peek tuple
+         unless once
+           read_waiters << waiter
+         end
        else
          read_waiters << waiter
        end
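The worker change above makes `#peek` return the waiter's `once` flag, so `#handle_waiter` can keep a streaming (block-form `read_wait`) waiter registered even after delivering a local match, while a one-shot waiter is dropped as before. The control flow can be sketched with mock objects (hypothetical `Waiter` struct; not the actual worker classes):

```ruby
# Sketch of the new #handle_waiter logic: one-shot waiters are satisfied
# and discarded; streaming waiters (once == false) are re-registered so
# they also see future tuples.
Waiter = Struct.new(:once, :seen) do
  def peek tuple
    seen << tuple
    once   # the changed #peek returns the once flag
  end
end

def handle_waiter waiter, tuple, read_waiters
  if tuple
    once = waiter.peek tuple
    read_waiters << waiter unless once   # streaming waiter stays registered
  else
    read_waiters << waiter               # no local match: everyone waits
  end
end

read_waiters = []
one_shot  = Waiter.new(true, [])
streaming = Waiter.new(false, [])

handle_waiter one_shot,  [1], read_waiters
handle_waiter streaming, [1], read_waiters

p one_shot.seen                      # => [[1]]
p read_waiters.include?(one_shot)    # => false
p read_waiters.include?(streaming)   # => true
```

This matches the new block form of `read_wait`: the block is yielded the initial local match and then each arriving tuple, which is only possible if the waiter outlives its first delivery.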
@@ -1,3 +1,3 @@
  module Tupelo
-   VERSION = "0.11"
+   VERSION = "0.12"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: tupelo
  version: !ruby/object:Gem::Version
-   version: '0.11'
+   version: '0.12'
  platform: ruby
  authors:
  - Joel VanderWerf
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2013-11-22 00:00:00.000000000 Z
+ date: 2013-12-03 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: atdo
@@ -82,6 +82,7 @@ files:
  - lib/tupelo/app/remote.rb
  - lib/tupelo/app/irb-shell.rb
  - lib/tupelo/app/trace.rb
+ - lib/tupelo/app/builder.rb
  - lib/tupelo/client.rb
  - lib/tupelo/tuplets/persistent-archiver.rb
  - lib/tupelo/tuplets/persistent-archiver/worker.rb
@@ -118,6 +119,7 @@ files:
  - example/pregel/pagerank.rb
  - example/pregel/pregel.rb
  - example/pregel/distributed.rb
+ - example/pregel/update.rb
  - example/take-nowait.rb
  - example/boolean-match.rb
  - example/lease.rb
@@ -127,10 +129,12 @@ files:
  - example/small-simplified.rb
  - example/small.rb
  - example/lock-mgr.rb
+ - example/take-nowait-caution.rb
  - example/concurrent-transactions.rb
  - example/tcp.rb
  - example/notify.rb
  - example/pulse.rb
+ - example/chat/chat.rb
  - example/hash-tuples.rb
  - example/balance-xfer-locking.rb
  - example/balance-xfer-retry.rb
@@ -138,17 +142,22 @@ files:
  - example/app-and-tup.rb
  - example/multi-tier/memo2.rb
  - example/multi-tier/http.rb
+ - example/multi-tier/multi-sinatras.rb
  - example/multi-tier/kvspace.rb
  - example/multi-tier/memo.rb
  - example/multi-tier/drb.rb
  - example/take-many.rb
+ - example/subspaces/simple.rb
+ - example/subspaces/shop/shop-v2.rb
+ - example/subspaces/shop/shop-v1.rb
  - example/subspaces/pubsub.rb
  - example/dphil-optimistic.rb
  - example/async-transaction.rb
  - example/wait-interrupt.rb
+ - example/fish0.rb
  - example/zk/lock.rb
- - example/subspace.rb
  - example/deadlock.rb
+ - example/fish.rb
  - example/add.rb
  - example/dphil-optimistic-v2.rb
  - example/parallel.rb
@@ -159,6 +168,7 @@ files:
  - example/lock-mgr-with-queue.rb
  - example/balance-xfer.rb
  - example/cancel.rb
+ - example/socket-broker.rb
  - example/timeout-trans.rb
  - example/optimist.rb
  - example/tiny-server.rb