fsdb 0.5 → 0.6.0
- data/{RELEASE-NOTES → History.txt} +6 -0
- data/{README → README.txt} +26 -17
- data/examples/flat.rb +146 -0
- data/examples/fsdb-example.rb +28 -0
- data/examples/rbformat.rb +17 -0
- data/examples/yaml2.rb +29 -0
- data/junk/OLDRakefile +98 -0
- data/junk/OLDRakefile2 +55 -0
- data/junk/check-cache.rb +18 -0
- data/junk/create-lock.rb +25 -0
- data/junk/doc/old-api/classes/FSDB.html +139 -0
- data/junk/doc/old-api/classes/FSDB/Database.html +953 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000029.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000030.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000031.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000032.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000033.html +33 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000034.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000035.html +22 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000036.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000037.html +22 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000038.html +43 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000039.html +25 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000040.html +43 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000041.html +23 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000042.html +22 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000043.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000044.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000045.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000046.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000047.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000048.html +16 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000049.html +71 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000050.html +43 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000051.html +53 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000052.html +44 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000053.html +39 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000054.html +72 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000055.html +39 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000056.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000057.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000058.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000059.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000060.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000061.html +23 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000062.html +23 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000063.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database.src/M000064.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Database/AbortedTransaction.html +118 -0
- data/junk/doc/old-api/classes/FSDB/Database/CreateFileError.html +120 -0
- data/junk/doc/old-api/classes/FSDB/Database/DirIsImmutableError.html +117 -0
- data/junk/doc/old-api/classes/FSDB/Database/DirNotEmptyError.html +117 -0
- data/junk/doc/old-api/classes/FSDB/Database/FormatError.html +117 -0
- data/junk/doc/old-api/classes/FSDB/Database/MissingFileError.html +117 -0
- data/junk/doc/old-api/classes/FSDB/Database/MissingObjectError.html +117 -0
- data/junk/doc/old-api/classes/FSDB/Database/NotDirError.html +118 -0
- data/junk/doc/old-api/classes/FSDB/Database/PathComponentError.html +120 -0
- data/junk/doc/old-api/classes/FSDB/DatabaseDebuggable.html +148 -0
- data/junk/doc/old-api/classes/FSDB/DatabaseDebuggable.src/M000005.html +21 -0
- data/junk/doc/old-api/classes/FSDB/DatabaseDebuggable.src/M000007.html +21 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.html +210 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000006.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000007.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000008.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000009.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000010.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000011.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000012.html +22 -0
- data/junk/doc/old-api/classes/FSDB/DirectoryIterators.src/M000013.html +22 -0
- data/junk/doc/old-api/classes/FSDB/ForkSafely.html +126 -0
- data/junk/doc/old-api/classes/FSDB/Modex.html +237 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000024.html +21 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000025.html +30 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000026.html +21 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000027.html +30 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000028.html +44 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000029.html +26 -0
- data/junk/doc/old-api/classes/FSDB/Modex.src/M000030.html +48 -0
- data/junk/doc/old-api/classes/FSDB/Modex/ForkSafely.html +105 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.html +244 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000018.html +19 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000019.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000020.html +19 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000021.html +18 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000022.html +23 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000023.html +30 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000024.html +26 -0
- data/junk/doc/old-api/classes/FSDB/Mutex.src/M000025.html +21 -0
- data/junk/doc/old-api/classes/FSDB/Mutex/ForkSafely.html +105 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.html +257 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000012.html +23 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000013.html +18 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000014.html +23 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000015.html +18 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000016.html +18 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000017.html +22 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000018.html +23 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities.src/M000019.html +18 -0
- data/junk/doc/old-api/classes/FSDB/PathUtilities/InvalidPathError.html +111 -0
- data/junk/doc/old-api/classes/File.html +272 -0
- data/junk/doc/old-api/classes/File.src/M000001.html +17 -0
- data/junk/doc/old-api/classes/File.src/M000002.html +17 -0
- data/junk/doc/old-api/classes/File.src/M000003.html +20 -0
- data/junk/doc/old-api/classes/File.src/M000004.html +20 -0
- data/junk/doc/old-api/classes/File.src/M000005.html +32 -0
- data/junk/doc/old-api/classes/File.src/M000006.html +32 -0
- data/junk/doc/old-api/created.rid +1 -0
- data/junk/doc/old-api/files/README.html +112 -0
- data/junk/doc/old-api/files/RELEASE-NOTES.html +233 -0
- data/junk/doc/old-api/files/fsdb_txt.html +888 -0
- data/junk/doc/old-api/files/lib/fsdb/database_rb.html +115 -0
- data/junk/doc/old-api/files/lib/fsdb/file-lock_rb.html +109 -0
- data/junk/doc/old-api/files/lib/fsdb/modex_rb.html +121 -0
- data/junk/doc/old-api/files/lib/fsdb/mutex_rb.html +108 -0
- data/junk/doc/old-api/files/lib/fsdb/util_rb.html +108 -0
- data/junk/doc/old-api/fr_class_index.html +47 -0
- data/junk/doc/old-api/fr_file_index.html +34 -0
- data/junk/doc/old-api/fr_method_index.html +90 -0
- data/junk/doc/old-api/index.html +24 -0
- data/junk/doc/old-api/rdoc-style.css +208 -0
- data/junk/file-lock-nb.rb +15 -0
- data/junk/fl.rb +144 -0
- data/junk/flock-test.rb +39 -0
- data/junk/fsdb.kateproject +47 -0
- data/junk/fsdb.prj +196 -0
- data/junk/fsdb.sf +46 -0
- data/junk/insert-dir.rb +48 -0
- data/junk/lock-test-bug.rb +150 -0
- data/junk/lock-test-too-simple.rb +136 -0
- data/junk/lock-test.rb +151 -0
- data/{script → junk}/mkrdoc +0 -0
- data/junk/restore-fsdb.rb +37 -0
- data/junk/rf.txt +5 -0
- data/junk/solaris-bug-fixed.rb +184 -0
- data/junk/solaris-bug.rb +182 -0
- data/junk/solaris-bug.txt +43 -0
- data/junk/sync.rb +327 -0
- data/junk/test-file-lock.html +86 -0
- data/junk/test-file-lock.rb +84 -0
- data/junk/test-processes.rb +131 -0
- data/junk/test-threads.rb +113 -0
- data/junk/wiki-mutex.rb +108 -0
- data/lib/fsdb/database.rb +5 -3
- data/lib/fsdb/delegatable.rb +21 -0
- data/lib/fsdb/faster-modex.rb +223 -0
- data/lib/fsdb/faster-mutex.rb +138 -0
- data/lib/fsdb/mutex.rb +4 -1
- data/lib/fsdb/persistent.rb +91 -0
- data/lib/fsdb/read-write-object.rb +36 -0
- data/lib/fsdb/server.rb +44 -0
- data/misc/fsdb-blorubu.txt +47 -0
- data/misc/mtime-and-file-id.txt +23 -0
- data/misc/posixlock.txt +148 -0
- data/rakefile +39 -0
- data/tasks/ann.rake +80 -0
- data/tasks/bones.rake +20 -0
- data/tasks/gem.rake +201 -0
- data/tasks/git.rake +40 -0
- data/tasks/notes.rake +27 -0
- data/tasks/post_load.rake +34 -0
- data/tasks/rdoc.rake +51 -0
- data/tasks/rubyforge.rake +55 -0
- data/tasks/setup.rb +292 -0
- data/tasks/spec.rake +54 -0
- data/tasks/svn.rake +47 -0
- data/tasks/test.rake +40 -0
- data/tasks/zentest.rake +36 -0
- data/test/err.txt +31 -0
- data/test/runs.rb +8 -0
- data/test/test-file-lock.rb +78 -0
- data/test/test-util.rb +1 -0
- data/test/trap.rb +31 -0
- metadata +198 -35
- data/Manifest +0 -36
- data/Rakefile +0 -10
- data/fsdb.gemspec +0 -113
data/{script → junk}/mkrdoc
RENAMED
File without changes
data/junk/restore-fsdb.rb
ADDED
@@ -0,0 +1,37 @@
+class FSDB
+
+  class CacheEntry
+
+    def object(mtime)
+      @mutex.synchronize do
+        unless @object and mtime == @mtime
+          @mtime = mtime
+          @object = yield
+          # @should_restore = true
+        end
+        @object
+      end
+    end
+
+    # Returns true for the first thread that calls it (test and set).
+    # def should_restore?
+    #   @mutex.synchronize do
+    #     result = @should_restore
+    #     @should_restore = false
+    #     result
+    #   end
+    # end
+
+
+  def cache_object(f, cache_entry)
+    object = cache_entry.object(f.mtime) do
+      load(f)
+    end
+    # if cache_entry.should_restore? and object.respond_to?(:restore)
+    #   object.restore(self) ### this doesn't really do the trick
+    #   # we do this after updating the entry to allow for loading circular
+    #   # references (even if it is not possible to correctly lock them)
+    #   # for the non-concurrent version
+    # end
+    object
+  end
data/junk/solaris-bug-fixed.rb
ADDED
@@ -0,0 +1,184 @@
+require 'thread'
+require 'sync'
+
+# Extensions to the File class for exception-safe file locking in a
+# environment with multiple user threads.
+
+# This is here because closing a file on solaris unlocks any locks that
+# other threads might have. So we have to make sure that only the last
+# reader thread closes the file.
+#
+# The hash maps inode number to a count of reader threads
+$reader_count = Hash.new(0)
+
+class File
+
+  # Get an exclusive (i.e., write) lock on the file, and yield to the block.
+  # If the lock is not available, wait for it without blocking other ruby
+  # threads.
+  def lock_exclusive
+    if Thread.list.size == 1
+      flock(LOCK_EX)
+    else
+      # ugly hack because waiting for a lock in a Ruby thread blocks the process
+      period = 0.001
+      until flock(LOCK_EX|LOCK_NB)
+        sleep period
+        period *= 2 if period < 1
+      end
+    end
+
+    yield self
+
+  ensure
+    flush
+    flock(LOCK_UN)
+  end
+
+  # Get a shared (i.e., read) lock on the file, and yield to the block.
+  # If the lock is not available, wait for it without blocking other ruby
+  # threads.
+  def lock_shared
+    if Thread.exclusive {$reader_count[self.stat.ino] == 1}
+      if Thread.list.size == 1
+        flock(LOCK_SH)
+      else
+        # ugly hack because waiting for a lock in a Ruby thread blocks the process
+        period = 0.001
+        until flock(LOCK_SH|LOCK_NB)
+          sleep period
+          period *= 2 if period < 1
+        end
+      end
+
+      yield self
+    end
+
+  ensure
+    Thread.exclusive {flock(LOCK_UN) if $reader_count[self.stat.ino] == 1}
+    ## for solaris, no need to unlock here--closing does it
+    ## but this has no effect on the bug
+  end
+
+end
+
+# Provides instance methods to open files in mode "r" or "r+"
+
+module OpenLock
+
+  # Opens path for reading ("r") with a shared lock for the
+  # duration of the block.
+  def open_read_lock(path)
+    f = nil
+    f = File.open(path, "r")
+    Thread.exclusive {$reader_count[f] += 1}
+    f.lock_shared do
+      yield f
+    end
+  ensure
+    if f
+      Thread.exclusive do
+        cache = @openlock_file_cache ||= Hash.new {|h,v| h[v] = []}
+        # maps <inode number> => [<open File instance>, ...]
+        ino = f.stat.ino
+        if $reader_count[f] == 1
+          f.close
+          files = cache.delete(ino) and files.each {|g| g.close}
+          $reader_count.delete(f)
+        else
+          cache[ino] << f
+          $reader_count[f] -= 1
+        end
+      end
+    end
+  end
+
+  # Opens path for writing and reading ("r+") with an exclusive lock for
+  # the duration of the block.
+  def open_write_lock(path)
+    File.open(path, "r+") do |f|
+      f.lock_exclusive {yield f}
+    end
+  end
+
+end
+
+include OpenLock
+
+process_count = 2
+thread_count = 2
+rep_count = (ARGV.shift || 10000).to_i
+sync = Sync.new
+
+test_file = '/tmp/test-file-lock.dat'
+
+File.open(test_file, "w") {|f| Marshal.dump(0, f)}
+
+(0...process_count).each do
+  fork do
+
+    increments = 0
+
+    threads =
+      (0...thread_count).map do
+        Thread.new do
+          (0...rep_count).each do
+            if rand(100) < 50
+              sync.synchronize(Sync::SH) do
+                open_read_lock(test_file) do |f|
+                  str = f.read
+                  data = Marshal.load(str)
+                end
+              end
+            else
+              sync.synchronize(Sync::EX) do
+                open_write_lock(test_file) do |f|
+                  str = f.read
+                  data = Marshal.load(str)
+                  data += 1
+                  f.rewind; f.truncate(0)
+                  Marshal.dump(data, f)
+                  f.flush
+                  Thread.exclusive {increments += 1}
+                end
+              end
+            end
+          end
+        end
+      end
+
+    threads.each {|thread| thread.join}
+
+    File.open("#{test_file}#{Process.pid}", "w") do |f|
+      Marshal.dump(increments, f)
+    end
+
+  end
+end
+
+Thread.new do
+  count = 0
+  loop do
+    puts count
+    sleep 1
+    count += 1
+  end
+end
+
+increments = 0
+(0...process_count).each do
+  pid = Process.wait
+  File.open("#{test_file}#{pid}", "r") do |f|
+    increments += Marshal.load(f)
+  end
+end
+
+data = File.open(test_file, "r") {|f| Marshal.load(f)}
+
+if data == increments
+  puts "Equal counts: #{data}"
+else
+  puts "Not equal:"
+  puts "  increments: #{increments}"
+  puts "  data      : #{data}"
+end
data/junk/solaris-bug.rb
ADDED
@@ -0,0 +1,182 @@
+require 'thread'
+require 'sync'
+
+# Extensions to the File class for exception-safe file locking in a
+# environment with multiple user threads.
+
+# This is here because closing a file on solaris unlocks any locks that
+# other threads might have. So we have to make sure that only the last
+# reader thread closes the file.
+#
+# The hash maps inode number to a count of reader threads
+$reader_count = Hash.new(0)
+
+class File
+
+  # Get an exclusive (i.e., write) lock on the file, and yield to the block.
+  # If the lock is not available, wait for it without blocking other ruby
+  # threads.
+  def lock_exclusive
+    if Thread.list.size == 1
+      flock(LOCK_EX)
+    else
+      # ugly hack because waiting for a lock in a Ruby thread blocks the process
+      period = 0.001
+      until flock(LOCK_EX|LOCK_NB)
+        sleep period
+        period *= 2 if period < 1
+      end
+    end
+
+    yield self
+
+  ensure
+    flush
+    flock(LOCK_UN)
+  end
+
+  # Get a shared (i.e., read) lock on the file, and yield to the block.
+  # If the lock is not available, wait for it without blocking other ruby
+  # threads.
+  def lock_shared
+    if Thread.list.size == 1
+      flock(LOCK_SH)
+    else
+      # ugly hack because waiting for a lock in a Ruby thread blocks the process
+      period = 0.001
+      until flock(LOCK_SH|LOCK_NB)
+        sleep period
+        period *= 2 if period < 1
+      end
+    end
+
+    yield self
+
+  ensure
+    Thread.exclusive {flock(LOCK_UN) if $reader_count[self.stat.ino] == 1}
+    ## for solaris, no need to unlock here--closing does it
+    ## but this has no effect on the bug
+  end
+
+end
+
+# Provides instance methods to open files in mode "r" or "r+"
+
+module OpenLock
+
+  # Opens path for reading ("r") with a shared lock for the
+  # duration of the block.
+  def open_read_lock(path)
+    f = nil
+    f = File.open(path, "r")
+    Thread.exclusive {$reader_count[f] += 1}
+    f.lock_shared do
+      yield f
+    end
+  ensure
+    if f
+      Thread.exclusive do
+        cache = @openlock_file_cache ||= Hash.new {|h,v| h[v] = []}
+        # maps <inode number> => [<open File instance>, ...]
+        ino = f.stat.ino
+        if $reader_count[f] == 1
+          f.close
+          files = cache.delete(ino) and files.each {|g| g.close}
+          $reader_count.delete(f)
+        else
+          cache[ino] << f
+          $reader_count[f] -= 1
+        end
+      end
+    end
+  end
+
+  # Opens path for writing and reading ("r+") with an exclusive lock for
+  # the duration of the block.
+  def open_write_lock(path)
+    File.open(path, "r+") do |f|
+      f.lock_exclusive {yield f}
+    end
+  end
+
+end
+
+include OpenLock
+
+process_count = 2
+thread_count = 2
+rep_count = (ARGV.shift || 10000).to_i
+sync = Sync.new
+
+test_file = '/tmp/test-file-lock.dat'
+
+File.open(test_file, "w") {|f| Marshal.dump(0, f)}
+
+(0...process_count).each do
+  fork do
+
+    increments = 0
+
+    threads =
+      (0...thread_count).map do
+        Thread.new do
+          (0...rep_count).each do
+            if rand(100) < 50
+              sync.synchronize(Sync::SH) do
+                open_read_lock(test_file) do |f|
+                  str = f.read
+                  data = Marshal.load(str)
+                end
+              end
+            else
+              sync.synchronize(Sync::EX) do
+                open_write_lock(test_file) do |f|
+                  str = f.read
+                  data = Marshal.load(str)
+                  data += 1
+                  f.rewind; f.truncate(0)
+                  Marshal.dump(data, f)
+                  f.flush
+                  Thread.exclusive {increments += 1}
+                end
+              end
+            end
+          end
+        end
+      end
+
+    threads.each {|thread| thread.join}
+
+    File.open("#{test_file}#{Process.pid}", "w") do |f|
+      Marshal.dump(increments, f)
+    end
+
+  end
+end
+
+Thread.new do
+  count = 0
+  loop do
+    puts count
+    sleep 1
+    count += 1
+  end
+end
+
+increments = 0
+(0...process_count).each do
+  pid = Process.wait
+  File.open("#{test_file}#{pid}", "r") do |f|
+    increments += Marshal.load(f)
+  end
+end
+
+data = File.open(test_file, "r") {|f| Marshal.load(f)}
+
+if data == increments
+  puts "Equal counts: #{data}"
+else
+  puts "Not equal:"
+  puts "  increments: #{increments}"
+  puts "  data      : #{data}"
+end
data/junk/solaris-bug.txt
ADDED
@@ -0,0 +1,43 @@
+Any solaris gurus out there?
+
+I'm having trouble porting some multi-thread, multi-process code from linux to solaris. I've already dealt with (or tried to deal with) some differences in flock (solaris flock is based on fcntl locks), like the fact that closing a file releases locks on the file held by other threads.
+
+I've managed to isolate the problem in a fairly simple test program. It's at
+
+http://path.berkeley.edu/~vjoel/ruby/solaris-bug.rb
+
+The program creates /tmp/test-file-lock.dat, which holds a marshalled fixnum starting at 0. Then it creates Np processes each with Nt threads which do a random sequence of reads and writes using some locking methods. The writes just increment the counter.
+
+When a process is done, it writes the number of times it incremented the counter to the file /tmp/test-file-lock.dat#{pid}. Then the main process adds these up and compares with the contents of the counter file. The point of this is to test for colliding writers.
+
+But the program fails before that final test--it seems to be having a collision between a reader and a writer that causes the reader to see a corrupt file.
+
+A typical run fails like this. The counter 0..3 is a seconds clock:
+
+$ ruby solaris-bug.rb
+0
+1
+2
+3
+solaris-bug.rb:128:in `load': marshal data too short (ArgumentError)
+
+It looks like there are a reader and a writer accessing the file at the same time, and the writer has just truncated the file (line 137) when the reader tries to read it.
+
+This happens:
+
+- on solaris, quad cpu
+- ruby 1.7.3 (2002-10-30) [sparc-solaris2.7]
+
+- *not* on single processor linux
+- ruby 1.7.3 (2002-12-12) [i686-linux]
+
+- *not* on dual SMP linux
+- ruby 1.6.7 (2002-03-01) [i686-linux]
+
+Also, the bug requires *both* of:
+
+- thread_count >= 2
+
+- process_count >= 2
+
+Also, the bug requires that there be both reader and writer operations (i.e., that the random number lead to each branch often enough, say 50/50).