conveyor 0.1.0

data/History.txt ADDED
@@ -0,0 +1,5 @@
+ == 0.1.0 / 2008-01-29
+
+ * Basic Channel implementation
+ * Basic HTTP interface
+
data/LICENSE ADDED
@@ -0,0 +1,21 @@
+ The MIT License
+
+ Copyright (c) 2008 Ryan King
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/Manifest.txt ADDED
@@ -0,0 +1,19 @@
+ History.txt
+ LICENSE
+ Manifest.txt
+ README.txt
+ Rakefile
+ bin/conveyor
+ docs/file-formats.mkd
+ docs/protocol.mkd
+ lib/conveyor.rb
+ lib/conveyor/base_channel.rb
+ lib/conveyor/channel.rb
+ lib/conveyor/server.rb
+ lib/priority_queue.rb
+ test/rand.rb
+ test/test_channel.rb
+ test/test_feeder-ng.rb
+ test/test_priority_queue.rb
+ test/test_replicated_channel.rb
+ test/test_server.rb
data/README.txt ADDED
@@ -0,0 +1,61 @@
+ = Conveyor
+
+ by Ryan King (http://theryanking.com)
+
+ == Description
+
+ * Like TiVo for your data
+ * A distributed rewindable multi-queue
+
+ == Overview
+
+ A Conveyor server provides an HTTP interface that allows for POSTing and GETing items in streams called Channels.
+
+ POSTing is simple: you add an item to the channel and it gets persisted and assigned a sequence number.
+
+ Consuming items from a Channel is more flexible; you can:
+
+ * consume by id number ("GET /channels/foo/1337")
+ * consume from the global queue ("GET /channels/foo?next")
+ * consume from a queue group ("GET /channels/foo?next&group=bar")
+   * this allows multiple groups of consumers to each have what appears to them to be a queue.
+
+ The payload for all of these is a stream of bytes. Conveyor will stream it back exactly as it was given.
+
+ OK, the rewinding functionality is actually still TODO, but it should be done soon.
+
+ == REQUIREMENTS:
+
+ * Ruby
+ * Mongrel
+ * active_support
+
+ == INSTALL:
+
+ * gem install conveyor
+
+
+ == LICENSE:
+
+ (The MIT License)
+
+ Copyright (c) 2008 Ryan King
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ 'Software'), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
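
The interface described in the Overview can be exercised end-to-end with nothing but Ruby's standard library. The following is a minimal client sketch, not part of the gem: it assumes a Conveyor server is already running on localhost:8888 (the host, port, and channel name are arbitrary choices for the example) and mirrors the requests made in test/test_server.rb.

    require 'net/http'

    Net::HTTP.start('localhost', 8888) do |h|
      # create a channel (PUT), then add one item (POST);
      # the server requires a Date header on every POST
      h.put('/channels/foo', '', 'Content-Type' => 'application/octet-stream')
      h.post('/channels/foo', 'hello world',
             'Content-Type' => 'application/octet-stream',
             'Date'         => Time.now.to_s)

      # read it back by id, then via the queue-style interface
      puts h.get('/channels/foo/1').body
      puts h.get('/channels/foo?next').body
    end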
data/Rakefile ADDED
@@ -0,0 +1,19 @@
+ # -*- ruby -*-
+
+ require 'rubygems'
+ require 'hoe'
+ require './lib/conveyor'
+
+ Hoe.new('conveyor', Conveyor::VERSION) do |p|
+   p.rubyforge_name = 'conveyor'
+   p.author = 'Ryan King'
+   p.email = 'ryan@theryanking.com'
+   p.summary = 'Like TiVo for your data.'
+   p.description = p.paragraphs_of('README.txt', 2..5).join("\n\n")
+   p.url = p.paragraphs_of('README.txt', 0).first.split(/\n/)[1..-1]
+   p.changes = p.paragraphs_of('History.txt', 0..1).join("\n\n")
+   p.extra_deps << ['mongrel']
+   p.extra_deps << ['activesupport']
+ end
+
+ # vim: syntax=Ruby
data/bin/conveyor ADDED
@@ -0,0 +1,11 @@
+ #!/usr/bin/env ruby -KU
+
+ unless ARGV.length == 2
+   puts "usage: #{$0} <port> <data directory>"
+   exit
+ end
+
+ $: << 'lib'
+
+ require 'conveyor/server'
+ Conveyor::Server.new('localhost', ARGV[0], ARGV[1]).run.join
data/docs/file-formats.mkd ADDED
@@ -0,0 +1,29 @@
+ = FILE FORMATS
+ == DATA FILES
+
+     id time offset length hash
+     content
+     ...
+
+ contrived example:
+
+     1213124 2008-01-05T13:35:32 1234 11 asdfasdfasdfasdfasdfasdfasdfa
+     foo bar bam
+
+ * a space-separated line of metadata followed by the content
+ * a delimiter might be useful for sanity checking, but the hash could probably suffice for ensuring that the offset was calculated and persisted properly. We should look at what ARC does here.
+ * offset is to the beginning of the metadata line
+ * length doesn't include the trailing \n that separates the content from the next metadata line (this might not be necessary)
+
+ == INDEX FILES
+
+     id time offset length hash file
+
+ contrived example:
+
+     1213124 2008-01-05T13:35:32 1234 11 asdfasdfasdfasdfasdfasdfasdfa 1
+
+ notes:
+ * the trailing 1 is the name of the data file the entry lives in
+ * assuming a Lucene-style directory of data files + ToC/index
+ * given that the files are written sequentially, we can avoid writing every entry to the index file (as long as we write the first and last entries). At most this means reading n entries, where n is the gap between index entries. Given that most clients will have persistent connections and read sequentially, we can do some clever things on the server side to make this really efficient (basically, you only pay that penalty on the first item you read).
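
For illustration, here is a sketch of how a single entry could be read back given the layout described above. It is not part of the gem: read_entry and HEADER are hypothetical names, and the regular expression is a simplified version of the one used by Conveyor::BaseChannel#parse_headers.

    require 'digest/md5'

    # Matches "id time offset length hash" on a data-file metadata line.
    HEADER = /\A(\d+) (\S+) (\d+) (\d+) ([a-f0-9]+)\Z/

    def read_entry(io, offset)
      io.seek(offset)
      id, time, off, length, hash = io.readline.strip.match(HEADER).captures
      content = io.read(length.to_i)
      raise 'corrupt entry' unless Digest::MD5.hexdigest(content) == hash
      [{:id => id.to_i, :time => time, :offset => off.to_i,
        :length => length.to_i, :hash => hash}, content]
    end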
data/docs/protocol.mkd ADDED
@@ -0,0 +1,63 @@
+ # Conveyor protocol #
+
+ ## Create a channel ##
+
+ Request
+ : PUT /channels/{channel name}
+
+ Response
+ : success: 201, failure: ?
+
+ The simple explanation is that to create a channel you do a PUT on the URL you want for the channel, which must be of the form /channels/{channel name}, where the name matches \A[a-zA-Z0-9\-]+\Z. In other words, the channel name may only contain letters, numbers, and dashes.
+
+ ## Post to a channel ##
+ Request
+ : POST /channels/{channel name}
+ : body is the message
+
+ Response
+ : success: 202, failure: ?
+
+ POST to the channel URL with the message in the body.
+
+ ## Get from channel ##
+
+ ### Get by id ###
+
+ Request
+ : GET /channels/{channel name}/{id}
+
+ Response
+ : success: 200, failure: 404
+
+ ### Get by datetime ###
+
+ NOT IMPLEMENTED YET!
+
+ Request
+ : GET /channels/{channel name}?at={ISO datetime like 2008-01-11T17:53:59}
+
+ Response
+ : success: 200
+
+ Will return the first entry *after* that datetime.
+
+ ### Get Next (Queue-like semantics) ###
+
+ Request
+ : GET /channels/{channel name}?next
+
+ Response
+ : success: 200
+
+ If this is called for the first time, it will return the first item in the channel. Otherwise it will return the next item.
+
+ ### Get Next by Group (Multi-consumer queue) ###
+
+ Request
+ : GET /channels/{channel name}?next&group=foo
+
+ Response
+ : success: 200
+
+ If this is called for the first time for a given group, it will return the first item in the channel. Otherwise it will return that group's next item.
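
To make the group semantics concrete, here is a sketch (not part of the gem) of two consumer groups reading the same channel independently. It assumes a local server on port 8888 and an existing channel named events that already holds a few items; the group and channel names are arbitrary.

    require 'net/http'

    Net::HTTP.start('localhost', 8888) do |h|
      # each group keeps its own cursor, so both start from item 1
      puts h.get('/channels/events?next&group=billing').body  # item 1
      puts h.get('/channels/events?next&group=search').body   # also item 1
      puts h.get('/channels/events?next&group=billing').body  # item 2
    end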
data/lib/conveyor.rb ADDED
@@ -0,0 +1,4 @@
+ module Conveyor
+   VERSION = '0.1.0'
+   QUALITY = 'alpha'
+ end
data/lib/conveyor/base_channel.rb ADDED
@@ -0,0 +1,118 @@
+ require 'digest/md5'
+ require 'rubygems'
+ require 'active_support/core_ext/date/conversions'
+ require 'priority_queue'
+
+ module Conveyor
+   # BaseChannel
+   #
+   # Base implementation for channels. Not useful to instantiate directly.
+   class BaseChannel
+
+     NAME_PATTERN = %r{\A[a-zA-Z\-0-9]+\Z}
+     BUCKET_SIZE = 100_000
+
+     def initialize directory
+       @directory = directory
+       @data_files = []
+       @index = []
+       @iterator = 0
+
+       if File.exists?(@directory)
+         if !File.directory?(@directory)
+           raise "#{@directory} is not a directory"
+         end
+       else
+         Dir.mkdir(@directory)
+       end
+
+       index_path = File.join(@directory, 'index')
+
+       if File.exists?(index_path) && File.size(index_path) > 0
+         @index_file = File.open(index_path, 'r+')
+
+         @index_file.each_line do |line|
+           @index << parse_headers(line.strip, true)
+           @last_id = @index.last[:id]
+         end
+         @index_file.seek(0, IO::SEEK_END)
+       else
+         @index_file = File.open(index_path, 'a')
+         @last_id = 0
+       end
+       @index_file.sync = true
+
+     end
+
+     def inspect
+       "<#{self.class} dir:'#{@directory.to_s}' last_id:#{@last_id}>"
+     end
+
+     def pick_bucket i
+       (i / BUCKET_SIZE).to_i
+     end
+
+     def bucket_file i
+       unless @data_files[i]
+         @data_files[i] = File.open(File.join(@directory, i.to_s), 'a+')
+         @data_files[i].sync = true
+       end
+       yield @data_files[i]
+     end
+
+     def commit data, time = nil
+       Thread.exclusive do
+         i = @last_id + 1
+         t = time || Time.now
+         l = data.length
+         h = Digest::MD5.hexdigest(data)
+         b = pick_bucket(i)
+         header, o = nil
+         bucket_file(b) do |f|
+           f.seek(0, IO::SEEK_END)
+           o = f.pos
+           header = "#{i} #{t.xmlschema} #{o} #{l} #{h}"
+           f.write("#{header}\n" + data + "\n")
+         end
+
+         @last_id = i
+         @index_file.write "#{header} #{b}\n"
+         @index << {:id => i, :time => t, :offset => o, :length => l, :hash => h, :file => b}
+         i
+       end
+     end
+
+     def get id
+       return nil unless id <= @last_id
+       i = @index.find{|e| e[:id] == id}
+       header, content = nil
+       Thread.exclusive do
+         bucket_file(i[:file]) do |f|
+           f.seek i[:offset]
+           header = f.readline.strip
+           content = f.read(i[:length])
+         end
+       end
+       [parse_headers(header), content]
+     end
+
+     def parse_headers str, index_file=false
+       pattern = '\A(\d+) (\d{4}\-\d{2}\-\d{2}T\d{2}\:\d{2}\:\d{2}[+\-]\d{2}\:\d{2}) (\d+) (\d+) ([a-f0-9]+)'
+       pattern += ' (\d+)' if index_file
+       pattern += '\Z'
+       m = str.match(Regexp.new(pattern))
+       {
+         :id     => m.captures[0].to_i,
+         :time   => m.captures[1],
+         :offset => m.captures[2].to_i,
+         :length => m.captures[3].to_i,
+         :hash   => m.captures[4],
+         :file   => (index_file ? m.captures[5].to_i : nil)
+       }.reject {|k,v| v == nil}
+     end
+
+     def self.valid_channel_name? name
+       !!name.match(NAME_PATTERN)
+     end
+   end
+ end
data/lib/conveyor/channel.rb ADDED
@@ -0,0 +1,81 @@
+ require 'conveyor/base_channel'
+
+ module Conveyor
+   # Channel
+   #
+   # A basic channel.
+   class Channel < BaseChannel
+
+     # If +directory+ doesn't already exist, it will be created during initialization.
+     def initialize directory
+       @group_iterators = {}
+       @group_iterators_files = {}
+
+       super(directory)
+
+       iterator_path = File.join(@directory, 'iterator')
+
+       if File.exists?(iterator_path) && File.size(iterator_path) > 0
+         @iterator_file = File.open(iterator_path, 'r+')
+         @iterator_file.each_line do |line|
+           @iterator = line.to_i
+         end
+         @iterator_file.seek(0, IO::SEEK_END)
+       else
+         @iterator_file = File.open(iterator_path, 'a')
+       end
+       @iterator_file.sync = true
+
+       Dir.glob(File.join(@directory, 'iterator-*')) do |i|
+         g = i.split(%r{/}).last.match(%r{iterator-(.*)}).captures[0]
+         @group_iterators_files[g] = File.open(i, 'r+')
+         @group_iterators[g] = 0
+         @group_iterators_files[g].each_line do |line|
+           @group_iterators[g] = line.to_i
+         end
+         @group_iterators_files[g].seek(0, IO::SEEK_END)
+       end
+     end
+
+     # Add data to the channel.
+     def post data
+       commit data
+     end
+
+     # Returns the next item from the global (non-group) iterator.
+     def get_next
+       r = nil
+       Thread.exclusive do
+         @iterator += 1 # TODO make sure this is lower than @last_id
+         r = get(@iterator)
+         @iterator_file.write("#{@iterator}\n")
+       end
+       r
+     end
+
+     # Returns the next item for +group+. If +group+ hasn't been seen before, the first item is returned.
+     def get_next_by_group group
+       r = nil
+       Thread.exclusive do
+         @group_iterators[group] = 0 unless @group_iterators.key?(group)
+         @group_iterators[group] += 1
+         r = get(@group_iterators[group])
+         group_iterators_file(group) do |f|
+           f.write("#{@group_iterators[group]}\n")
+         end
+       end
+       r
+     end
+
+     private
+
+     def group_iterators_file group
+       unless @group_iterators_files[group]
+         @group_iterators_files[group] = File.open(File.join(@directory, 'iterator-' + group), 'a+')
+         @group_iterators_files[group].sync = true
+       end
+       yield @group_iterators_files[group]
+     end
+
+   end
+ end
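
Channel can also be used directly, without the HTTP server. A minimal sketch, assuming it is run from the gem root and that /tmp/example is a scratch directory chosen for this example:

    $: << 'lib'
    require 'conveyor/channel'

    c = Conveyor::Channel.new('/tmp/example')
    c.post 'first'
    c.post 'second'

    headers, content = c.get(1)            # fetch by id; cursors are untouched
    puts content                           # => "first"

    puts c.get_next[1]                     # global cursor        => "first"
    puts c.get_next_by_group('audit')[1]   # per-group cursor     => "first"
    puts c.get_next[1]                     # global cursor again  => "second"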
data/lib/conveyor/server.rb ADDED
@@ -0,0 +1,117 @@
+ require 'rubygems'
+ require 'mongrel'
+ require 'conveyor/channel'
+ require 'fileutils'
+
+ class Mongrel::HttpRequest
+   def put?
+     params["REQUEST_METHOD"] == "PUT"
+   end
+
+   def post?
+     params["REQUEST_METHOD"] == "POST"
+   end
+
+   def get?
+     params["REQUEST_METHOD"] == "GET"
+   end
+
+   def path_match pattern
+     params["REQUEST_PATH"].match(pattern)
+   end
+ end
+
+ module Conveyor
+
+   # An HTTP server for Conveyor.
+   class Server < Mongrel::HttpServer
+
+     # A Mongrel handler for multiple Conveyor Channels.
+     class ChannelsHandler < Mongrel::HttpHandler
+
+       def initialize data_directory
+         @data_directory = data_directory
+         @channels = {}
+         Dir.entries(@data_directory).each do |e|
+           if !['.', '..'].include?(e) && File.directory?(File.join(@data_directory, e))
+             @channels[e] = Channel.new(File.join(@data_directory, e))
+           end
+         end
+       end
+
+       def create_new_channel channel_name
+         @channels[channel_name] = Conveyor::Channel.new(File.join(@data_directory, channel_name))
+       end
+
+       def process request, response
+         if request.put? && m = request.path_match(%r{/channels/(.*)})
+           if Channel.valid_channel_name?(m.captures[0])
+             create_new_channel m.captures[0]
+             response.start(201) do |head, out|
+               out.write("created channel #{m.captures[0]}")
+             end
+           else
+             response.start(406) do |head, out|
+               out.write("invalid channel name. must match #{Channel::NAME_PATTERN}")
+             end
+           end
+         elsif request.post? && m = request.path_match(%r{/channels/(.*)})
+           if @channels.key?(m.captures[0])
+             if request.params.include?('HTTP_DATE') && d = Time.parse(request.params['HTTP_DATE'])
+               id = @channels[m.captures[0]].post(request.body.read)
+               response.start(202) do |head, out|
+                 head["Location"] = "/channels/#{m.captures[0]}/#{id}"
+               end
+             else
+               response.start(400) do |head, out|
+                 out.write "A valid Date header is required for all POSTs."
+               end
+             end
+           end
+
+         elsif request.get?
+           headers = content = nil
+           if m = request.path_match(%r{/channels/(.*)/(\d+)})
+             if @channels.key?(m.captures[0])
+               headers, content = @channels[m.captures[0]].get(m.captures[1].to_i)
+             end
+           elsif m = request.path_match(%r{/channels/(.*)})
+             if @channels.key?(m.captures[0])
+               params = Mongrel::HttpRequest.query_parse(request.params['QUERY_STRING'])
+               if params.key? 'next'
+                 if params.key? 'group'
+                   headers, content = @channels[m.captures[0]].get_next_by_group(params['group'])
+                 else
+                   headers, content = @channels[m.captures[0]].get_next
+                 end
+               end
+             end
+           else
+             response.start(200) do |head, out|
+               out.write("fake!")
+             end
+           end
+
+           if headers && content
+             response.start(200) do |head, out|
+               head['Content-Location'] = "/channels/#{m.captures[0]}/#{headers[:id]}"
+               head['Content-MD5'] = headers[:hash]
+               head['Content-Type'] = 'application/octet-stream'
+               head['Last-Modified'] = Time.parse(headers[:time]).gmtime.to_s
+               out.write content
+             end
+           end
+
+         end
+       end
+     end
+
+     # +host+ and +port+ are passed along to Mongrel::HttpServer for TCP binding. +data_directory+ is used to store
+     # all channel data and should be created before initializing a Server.
+     def initialize(host, port, data_directory)
+       super(host, port)
+       ch = ChannelsHandler.new(data_directory)
+       register("/channels", ch)
+     end
+   end
+ end
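
The server can be embedded in another process the same way test/test_server.rb does it. A minimal sketch (the port and data directory are arbitrary; the data directory must exist before the server is constructed):

    $: << 'lib'
    require 'fileutils'
    require 'conveyor/server'

    FileUtils.mkdir_p('/tmp/conveyor-data')
    server = Conveyor::Server.new('127.0.0.1', 8888, '/tmp/conveyor-data')
    server.run.join   # Mongrel's run returns the acceptor thread; join blocks here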
data/lib/priority_queue.rb ADDED
@@ -0,0 +1,33 @@
+ # a naive (performance-wise) priority queue implementation
+ #
+ # modified from http://www.rubyquiz.com/quiz98.html
+ class PriorityQueue #:nodoc: this isn't ready for primetime yet
+   def initialize sort_proc = nil
+     @list = []
+     @sort_proc = sort_proc || proc{|x,y| x <=> y}
+   end
+
+   def add(item)
+     @list << item
+     @list.sort!(&@sort_proc)
+     self
+   end
+
+   alias << add
+
+   def front
+     @list.first
+   end
+
+   def pop
+     @list.shift
+   end
+
+   def empty?
+     @list.empty?
+   end
+
+   def length
+     @list.length
+   end
+ end
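
A quick sketch of how the queue behaves: with the default comparison the smallest item is popped first, and a custom sort proc redefines the order. This is illustrative only and assumes it is run from the gem root.

    $: << 'lib'
    require 'priority_queue'

    pq = PriorityQueue.new
    pq << 3 << 1 << 2
    pq.pop                                   # => 1 (smallest first)

    max_first = PriorityQueue.new(proc { |x, y| y <=> x })
    max_first << 3 << 1 << 2
    max_first.pop                            # => 3 (largest first)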
data/test/rand.rb ADDED
@@ -0,0 +1,34 @@
+ $: << 'lib'
+ require 'conveyor/channel'
+
+ def rand_str(len, domain)
+   (1..len).inject("") { |s, x| s << domain[rand(domain.length)] }
+ end
+
+ alphanum = [ ('a'..'z').to_a, ('A'..'Z').to_a, ('0'..'9').to_a ].flatten
+ data = []
+
+ channel = Conveyor::Channel.new '/tmp/bar'
+
+ puts "writing random data"
+ 1000.times{channel.post(d=rand_str(rand(10000), alphanum)); data << d}
+
+ puts "reading data back"
+
+ 1000.times do |i|
+   headers, content = channel.get(i + 1)
+
+   unless content == data[i]
+     puts "unmatched content"
+     puts "ORIGINAL"
+     puts "----"
+     puts data[i]
+     puts "----"
+
+     puts "RETURNED"
+     puts "----"
+     puts content
+     puts "----"
+   end
+
+ end
data/test/test_channel.rb ADDED
@@ -0,0 +1,126 @@
+ require "test/unit"
+ require 'fileutils'
+ require "conveyor/channel"
+
+ class TestConveyorChannel < Test::Unit::TestCase
+   include Conveyor
+   def test_init
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     assert_nothing_raised do
+       Channel.new '/tmp/bar'
+     end
+   end
+
+   def test_post
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     assert_nothing_raised do
+       w = Channel.new '/tmp/bar'
+       w.post 'foo'
+       assert_equal 1, w.instance_variable_get(:@last_id)
+     end
+   end
+
+   def test_parse_headers
+     i = Channel.new '/tmp/foo'
+     [
+       ["1 2008-01-08T13:04:40-08:00 0 3 acbd18db4cc2f85cedef654fccc4a4d8\n",
+         {:id => 1, :time => "2008-01-08T13:04:40-08:00", :offset => 0, :length => 3, :hash => "acbd18db4cc2f85cedef654fccc4a4d8"}
+       ]
+     ].each do |(str, ret)|
+       assert_equal ret, i.parse_headers(str)
+     end
+
+     [
+       ["2 2008-01-08T13:04:40-08:00 0 3 acbd18db4cc2f85cedef654fccc4a4d8 1\n",
+         {:id => 2, :time => "2008-01-08T13:04:40-08:00", :offset => 0, :length => 3, :hash => "acbd18db4cc2f85cedef654fccc4a4d8", :file => 1}
+       ]
+     ].each do |(str, ret)|
+       assert_equal ret, i.parse_headers(str, true)
+     end
+   end
+
+   def test_init_existing
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     c = Channel.new('/tmp/bar')
+     c.post 'foo'
+     c.post 'bar'
+     c = nil
+     GC.start
+
+     d = Channel.new('/tmp/bar')
+     assert_equal 'foo', d.get(1)[1]
+     assert_equal 'bar', d.get(2)[1]
+
+     d.post('bam')
+     assert_equal 'bam', d.get(3)[1]
+   end
+
+   def test_get_next
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     c = Channel.new('/tmp/bar')
+     c.post 'foo'
+     c.post 'bar'
+     c.post 'bam'
+
+     assert_equal 'foo', c.get_next[1]
+     assert_equal 'bar', c.get_next[1]
+     assert_equal 'bam', c.get_next[1]
+     assert_equal nil, c.get_next
+   end
+
+   def test_get_next_interupted
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     c = Channel.new('/tmp/bar')
+     c.post 'foo'
+     c.post 'bar'
+     c.post 'bam'
+
+     assert_equal 'foo', c.get_next[1]
+     assert_equal 'bar', c.get_next[1]
+
+     d = Channel.new('/tmp/bar')
+     assert_not_equal c, d
+     assert_equal 'bam', d.get_next[1]
+     assert_equal nil, d.get_next
+   end
+
+   def test_get_next_by_group
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     c = Channel.new('/tmp/bar')
+     c.post 'foo'
+     c.post 'bar'
+     c.post 'bam'
+
+     assert_equal 'foo', c.get_next_by_group('foo')[1]
+     assert_equal 'bar', c.get_next_by_group('foo')[1]
+     assert_equal 'bam', c.get_next_by_group('foo')[1]
+     assert_equal nil, c.get_next_by_group('foo')
+
+     assert_equal 'foo', c.get_next_by_group('bar')[1]
+     assert_equal 'bar', c.get_next_by_group('bar')[1]
+     assert_equal 'bam', c.get_next_by_group('bar')[1]
+     assert_equal nil, c.get_next_by_group('bar')
+   end
+
+   def test_get_next_by_group_interupted
+     FileUtils.rm_r('/tmp/bar') rescue nil
+     c = Channel.new('/tmp/bar')
+     c.post 'foo'
+     c.post 'bar'
+     c.post 'bam'
+
+     assert_equal 'foo', c.get_next_by_group('foo')[1]
+     assert_equal 'bar', c.get_next_by_group('foo')[1]
+     assert_equal 'foo', c.get_next_by_group('bar')[1]
+     assert_equal 'bar', c.get_next_by_group('bar')[1]
+
+     c = nil
+     GC.start
+     c = Channel.new('/tmp/bar')
+
+     assert_equal 'bam', c.get_next_by_group('foo')[1]
+     assert_equal nil, c.get_next_by_group('foo')
+     assert_equal 'bam', c.get_next_by_group('bar')[1]
+     assert_equal nil, c.get_next_by_group('bar')
+   end
+ end
data/test/test_feeder-ng.rb ADDED
File without changes
data/test/test_priority_queue.rb ADDED
@@ -0,0 +1,16 @@
+ require "test/unit"
+
+ require "priority_queue"
+
+ class TestPriorityQueue < Test::Unit::TestCase
+   def test_simple
+
+     pq = PriorityQueue.new
+
+     pq << 1 << 2 << -1
+
+     assert_equal(-1, pq.pop)
+     assert_equal 1, pq.pop
+     assert_equal 2, pq.pop
+   end
+ end
data/test/test_replicated_channel.rb ADDED
@@ -0,0 +1,60 @@
+ require "test/unit"
+ require 'fileutils'
+ require "conveyor/replicated_channel"
+
+ class TestReplicatedChannel < Test::Unit::TestCase
+   def test_basic
+     FileUtils.rm_r '/tmp/foo' rescue nil
+     FileUtils.rm_r '/tmp/bar' rescue nil
+
+     c1 = Conveyor::ReplicatedChannel.new '/tmp/foo'
+     c2 = Conveyor::ReplicatedChannel.new '/tmp/bar'
+
+     c1.peers << c2
+     c2.peers << c1
+
+     c1.post('foo', Time.now)
+     c2.post('bar', Time.now)
+
+     c1.post('not', Time.now)
+     c2.post('not', Time.now)
+
+     c1.commit_thread.run
+     c2.commit_thread.run
+
+     assert_equal 'foo', c1.get(1)[1]
+     assert_equal 'foo', c2.get(1)[1]
+
+     assert_equal 'bar', c1.get(2)[1]
+     assert_equal 'bar', c2.get(2)[1]
+
+     assert_equal c1.get(1), c2.get(1)
+     assert_equal c1.get(2), c2.get(2)
+   end
+
+   def test_more
+     FileUtils.rm_r '/tmp/foo' rescue nil
+     FileUtils.rm_r '/tmp/bar' rescue nil
+
+     c1 = Conveyor::ReplicatedChannel.new '/tmp/foo'
+     c2 = Conveyor::ReplicatedChannel.new '/tmp/bar'
+
+     c1.peers << c2
+     c2.peers << c1
+
+     channels = [c1, c2]
+     data = %w[1 2 3 4 5 6 7 8 9 10]
+
+     data.each_with_index do |d, i|
+       channels[i % 2].post(d, Time.now)
+     end
+
+     c1.commit_thread.run
+     c2.commit_thread.run
+
+     c1d = (1..10).collect{|i| c1.get(i)}
+     c2d = (1..10).collect{|i| c2.get(i)}
+
+     assert_equal c1d, c2d
+   end
+ end
data/test/test_server.rb ADDED
@@ -0,0 +1,93 @@
+ require "test/unit"
+ require "conveyor/server"
+ require 'net/http'
+
+ class TestConveyorServer < Test::Unit::TestCase
+   def setup
+     FileUtils.rm_r('/tmp/asdf') rescue nil
+     FileUtils.mkdir('/tmp/asdf')
+     @server = Conveyor::Server.new("127.0.0.1", 8888, '/tmp/asdf')
+     @server.run
+   end
+
+   def teardown
+     @server.stop
+   end
+
+   def test_channels
+     Net::HTTP.start("localhost", 8888) do |h|
+       req = h.get('/channels')
+       assert_equal Net::HTTPOK, req.class
+     end
+   end
+
+   def test_create_channel
+     Net::HTTP.start('localhost', 8888) do |h|
+       req = h.put('/channels/foo', '', {'Content-Type' => 'application/octet-stream'})
+       assert_equal Net::HTTPCreated, req.class
+
+       req = h.post('/channels/foo', 'foo', {'Content-Type' => 'application/octet-stream', 'Date' => Time.now.to_s})
+       assert_equal Net::HTTPAccepted, req.class
+     end
+   end
+
+   def test_post
+     Net::HTTP.start('localhost', 8888) do |h|
+       req = h.put('/channels/bar', '', {'Content-Type' => 'application/octet-stream'})
+       assert_equal Net::HTTPCreated, req.class
+
+       data =
+         ["ZqZyDN2SouQCYEHYS0LuM1XeqsF0MKIbFEBE6xQ972VqEcjs21wJSosvZMWEH1lq5ukTq4Ze",
+          "5sgCbNpumntlHC2jl6uXcW3Wz1RGTc5lqeGhpH2ZaCtAOc61TLBmLPUzPWVeJzkfr6qEQqTkuPK5kCE54u6iiTVFArMPSUy3oo",
+          "6qPDKL09AquFvgj9Zv9CFY2dn0x", "84ReeNklrWJeIu2USbBtwoUnkDwUgU8lNjJ",
+          "WUSYY2dCBdDdZEiGWtyfC5yGKVMgDhzBhyNLwcefxa49fED1Sf05f8MlgXOBx6n5I6Ae2Wy3Mds",
+          "uAlUDvngWqDl3PaRVl1i9RcwDIvJlNp6yMy9RQgVsucwNvKaSOQlJMarWItKy8zT2ON08ElKkZ2aQJlb45Z8FwfE0xh8sA",
+          "NxWmEBmJp0uiNRhyxa26frQjfFaNERmZbConrytNQKnHfilFsZWAo0Qy8eVKgq", "ajq3i5ksiBovQYfvj",
+          "yY3vhjeq", "2IDeF0ccG8tRZIZSekz6fUii29"]
+
+       data.each do |d|
+         req = h.post('/channels/bar', d, {'Content-Type' => 'application/octet-stream', 'Date' => Time.now.to_s})
+         assert_equal Net::HTTPAccepted, req.class
+       end
+
+       data.each_with_index do |d, i|
+         req = h.get("/channels/bar/#{i+1}")
+         assert_equal d, req.body
+       end
+     end
+   end
+
+   def test_invalid_channel
+     Net::HTTP.start('localhost', 8888) do |h|
+       req = h.put('/channels/|', '', {'Content-Type' => 'application/octet-stream'})
+       assert_equal Net::HTTPNotAcceptable, req.class
+     end
+
+   end
+
+   def test_get_next
+     Net::HTTP.start('localhost', 8888) do |h|
+       req = h.put('/channels/bar', '', {'Content-Type' => 'application/octet-stream'})
+       assert_equal Net::HTTPCreated, req.class
+
+       data =
+         ["ZqZyDN2SouQCYEHYS0LuM1XeqsF0MKIbFEBE6xQ972VqEcjs21wJSosvZMWEH1lq5ukTq4Ze",
+          "5sgCbNpumntlHC2jl6uXcW3Wz1RGTc5lqeGhpH2ZaCtAOc61TLBmLPUzPWVeJzkfr6qEQqTkuPK5kCE54u6iiTVFArMPSUy3oo",
+          "6qPDKL09AquFvgj9Zv9CFY2dn0x", "84ReeNklrWJeIu2USbBtwoUnkDwUgU8lNjJ",
+          "WUSYY2dCBdDdZEiGWtyfC5yGKVMgDhzBhyNLwcefxa49fED1Sf05f8MlgXOBx6n5I6Ae2Wy3Mds",
+          "uAlUDvngWqDl3PaRVl1i9RcwDIvJlNp6yMy9RQgVsucwNvKaSOQlJMarWItKy8zT2ON08ElKkZ2aQJlb45Z8FwfE0xh8sA",
+          "NxWmEBmJp0uiNRhyxa26frQjfFaNERmZbConrytNQKnHfilFsZWAo0Qy8eVKgq", "ajq3i5ksiBovQYfvj",
+          "yY3vhjeq", "2IDeF0ccG8tRZIZSekz6fUii29"]
+
+       data.each do |d|
+         req = h.post('/channels/bar', d, {'Content-Type' => 'application/octet-stream', 'Date' => Time.now.to_s})
+         assert_equal Net::HTTPAccepted, req.class
+       end
+
+       data.each_with_index do |d, i|
+         req = h.get("/channels/bar?next")
+         assert_equal d, req.body
+       end
+     end
+   end
+ end
metadata ADDED
@@ -0,0 +1,125 @@
+ --- !ruby/object:Gem::Specification
+ name: conveyor
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Ryan King
+ autorequire:
+ bindir: bin
+ cert_chain:
+ - |
+   -----BEGIN CERTIFICATE-----
+   MIIDNjCCAh6gAwIBAgIBADANBgkqhkiG9w0BAQUFADBBMQ0wCwYDVQQDDARyeWFu
+   MRswGQYKCZImiZPyLGQBGRYLdGhlcnlhbmtpbmcxEzARBgoJkiaJk/IsZAEZFgNj
+   b20wHhcNMDgwMTMwMjM0MTMzWhcNMDkwMTI5MjM0MTMzWjBBMQ0wCwYDVQQDDARy
+   eWFuMRswGQYKCZImiZPyLGQBGRYLdGhlcnlhbmtpbmcxEzARBgoJkiaJk/IsZAEZ
+   FgNjb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDDTUJAZENpuhnu
+   P+R7DZlKe8mOuSkpyL6ZXTfCY2dgtwM7/uKy9Z31oOJshAJLuhscbuJqhHIt47nE
+   Adov3Ewp/q7jE7yodMLirsFod15sYPXij7vvNKOri2qDqvRCB/jQ0L65DqoHXckV
+   WOpZRBLzOp/q/XmJs/cq4KSZ0LFh8la4jIZsrYEG/dJ8BBXzQc14x6VkSxTGhlFz
+   LDz8IUIvefXUAGD5NnY+P8KbmPtf4RiAoWxLRwKIZiUHRQzFXwUdJyrDC+qYJ6CX
+   KlcU3Y5b259KD2eEmJsgpPexWRsaeLUlxG73Qsg2Dffs0GiyoZ4fhg+C0gcN1Yza
+   tLPQH79fAgMBAAGjOTA3MAkGA1UdEwQCMAAwCwYDVR0PBAQDAgSwMB0GA1UdDgQW
+   BBTDywO/cXwzSXuI7wyMnsboiRFTbzANBgkqhkiG9w0BAQUFAAOCAQEAmUm1El7H
+   4c423UFHQ1cYTxEdicBdSCDqCBJcYyVMoOwOSjDxw3/G2iRafr4OpY+WQaTYIx05
+   FpV4tarYrP6u3w+3hynmgdHRBJJLGtoDhB8ln5QwdcsSN7nK+ERoZvkM6LMpH5G3
+   P5PNDhq+Snq26SzFrr8fYTtBqccixD4Njepo3r3KVZaK8VlrbDrDwiliPtTL/s9R
+   MpCn2/2deOie29ZLFh41VWqBzDba05+j9e7ztisxttrzTO6IA525y60eQ5AUylzc
+   1grJrsSQhj0OXe1kLH0fPXHewEVQzaHhvd5JGSTMiGntqbfFKV9raP2xidO9jMMO
+   Zls3y84CmyAEGg==
+   -----END CERTIFICATE-----
+
+ date: 2008-01-30 00:00:00 -08:00
+ default_executable:
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: mongrel
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: "0"
+     version:
+ - !ruby/object:Gem::Dependency
+   name: activesupport
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: "0"
+     version:
+ - !ruby/object:Gem::Dependency
+   name: hoe
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.4.0
+     version:
+ description: == Description * Like TiVo for your data * A distributed rewindable multi-queue == Overview A Conveyor server provides an HTTP interface that allows for POSTing and GETing items in streams called Channels.
+ email: ryan@theryanking.com
+ executables:
+ - conveyor
+ extensions: []
+
+ extra_rdoc_files:
+ - History.txt
+ - Manifest.txt
+ - README.txt
+ files:
+ - History.txt
+ - LICENSE
+ - Manifest.txt
+ - README.txt
+ - Rakefile
+ - bin/conveyor
+ - docs/file-formats.mkd
+ - docs/protocol.mkd
+ - lib/conveyor.rb
+ - lib/conveyor/base_channel.rb
+ - lib/conveyor/channel.rb
+ - lib/conveyor/server.rb
+ - lib/priority_queue.rb
+ - test/rand.rb
+ - test/test_channel.rb
+ - test/test_feeder-ng.rb
+ - test/test_priority_queue.rb
+ - test/test_replicated_channel.rb
+ - test/test_server.rb
+ has_rdoc: true
+ homepage:
+ post_install_message:
+ rdoc_options:
+ - --main
+ - README.txt
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: "0"
+   version:
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: "0"
+   version:
+ requirements: []
+
+ rubyforge_project: conveyor
+ rubygems_version: 1.0.1
+ signing_key:
+ specification_version: 2
+ summary: Like TiVo for your data.
+ test_files:
+ - test/test_channel.rb
+ - test/test_feeder-ng.rb
+ - test/test_priority_queue.rb
+ - test/test_replicated_channel.rb
+ - test/test_server.rb