monga 0.0.10 → 0.0.11

data/README.md CHANGED
@@ -1,11 +1,5 @@
  [![Build Status](https://travis-ci.org/fl00r/monga.png?branch=master)](https://travis-ci.org/fl00r/monga)
 
- This client is under development. You can try
-
- * [em-mongo](https://github.com/bcg/em-mongo) with Eventmachine inside
- * Official [mongo-ruby-driver](https://github.com/mongodb/mongo-ruby-driver) from 10gen
- * [Moped](http://mongoid.org/en/moped/) from Mongoid guys
-
  # Monga
 
  Yet another [MongoDB](http://www.mongodb.org/) Ruby Client.
@@ -16,6 +10,12 @@ It supports three kind of interfaces:
  * Synchronous (on Fibers)
  * Blocking (over TCPSocket, [kgio](http://bogomips.org/kgio/) actually)
 
+ You can also try:
+
+ * [em-mongo](https://github.com/bcg/em-mongo) with Eventmachine inside
+ * Official [mongo-ruby-driver](https://github.com/mongodb/mongo-ruby-driver) from 10gen
+ * [Moped](http://mongoid.org/en/moped/) from Mongoid guys
+
  ## Introduction
 
  Asynchronous API will be familiar to Node.js developers. Instead of Deferrable Object you will receive `err, response` into callback.
@@ -103,110 +103,148 @@ end
  puts "We have got #{docs.size} documents in this pretty array"
  ```
 
- ## To Do List
-
- * [ ] Write a Wiki
- * [ ] Write comments
- * [ ] Grammar improvement ;)
-
- ### Clients
- * [x] Client (Single instance connection)
- * [x] ReplicaSetClient
- * [ ] MasterSlaveClient
- * [x] ReadPref
- * [ ] Sharding Support
-
- ### Connection
- * [x] Connection
- * [x] Autoreconnect
- * [x] Connection Pool
-
- ### Protocol
- * [x] OP_QUERY
- * [x] OP_GET_MORE
- * [x] OP_KILL_CURSORS
- * [x] OP_INSERT
- * [x] OP_UPDATE
- * [x] OP_DELETE
- * [x] OP_REPLY
-
- ### Database
- * [x] create_collection
- * [x] drop_collection
- * [x] get_last_error
- * [x] drop_indexes
- * [x] get_indexes
- * Authentication
-   * [ ] login
-   * [ ] logout
-   * [ ] add_user
- * [ ] check maxBsonSize / validate
- * [x] cmd
- * [x] eval
- * [x] aggregate
- * [x] distinct
- * [x] group
- * [x] mapReduce
- * [x] text
- * [ ] gridfs?
-
- ### Collection
- * QUERY_OP
-   * [x] find
-   * [x] find_one (first)
-   * [x] sorting
- * INSERT_OP
-   * [x] insert (single)
-   * [x] insert (batch)
-   * [x] safe_insert
-   * FLAGS
-     * [x] continue_on_error
- * UPDATE_OP
-   * [x] update
-   * [x] safe_update
-   * FLAGS
-     * [x] upsert
-     * [x] multi_update
- * DELETE_OP
-   * [x] delete
-   * [x] safe_delete
-   * FLAGS
-     * [x] single_remove
- * INDEXES
-   * [x] ensure_index
-   * [x] drop_index
-   * [x] drop_indexes
-   * [x] get_indexes
- * [x] count
- * [x] all
- * [x] cursor
- * [ ] DBRef
-
- ### Cursor
- * [x] limit
- * [x] skip
- * [x] batch_size
- * [x] get_more
- * [x] next_document
- * [x] next_batch
- * [x] each_doc
- * [x] kill
- * [x] mark_to_kill
- * [x] batch_kill
- * [x] explain
- * [x] hint
- * Flags
-   * [x] tailable_cursor
-   * [x] slave_ok
-   * [x] no_cursor_timeout
-   * [x] await_data
-   * [x] exhaust
-   * [x] partial
-
- # ISSUES handled with
-
- Some commands, such as `db.getLastError`, `db.count` and other `db.commands` requires `numberToReturn` in OP_QUERY to be setted as `-1`. Also this commands should return a response. If nothing returned it should be interpreted as an Exception. Also, in contrast to the other queries it could return `errmsg` which should be interpreted as an Exception too. Other query methods could return `err` response.
-
- To create index you can't call any `db.ensureIndex` command but you should insert a document into `sytem.indexes` collection manually. To get list of indexes you should fetch all documents from this collection. But to drop index you should call specific `db.dropIndexes` command.
-
- `multi_update` flag works only with `$` commands (i.e. `$set: { title: "blahblah" }`)
+ ## Find
+
+ The `find` method always returns a Cursor.
+ You can chain the `skip`, `limit`, and `batch_size` methods.
+ `all` will return all matching documents.
+ `each_doc` will yield each document to the block.
+ `each_batch` will yield each batch to the block.
+ For big collections, iterating with small batches is a good choice.
+
+ ```ruby
+ # All docs
+ collection.find.all
+ # All matching docs
+ collection.find(name: "Peter").all
+ # Skip and limit
+ collection.find(moderated: true).skip(20).limit(10).all
+ # Iterating over a cursor
+ collection.find(country: "Japan").each_doc do |doc|
+   puts doc.inspect
+ end
+ # Iterating over a cursor with a predefined batch size
+ collection.find(country: "China").batch_size(10_000).skip(1_000_000).each_doc do |chinese|
+   puts chinese.name
+ end
+ ```
+
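The prose above mentions `each_batch`, but the added example only demonstrates `each_doc` and `all`. A minimal sketch, assuming `each_batch` yields each batch of documents as an array (behaviour inferred from the description above, not shown in this diff):

```ruby
# Hypothetical usage of each_batch; the batch is assumed to be a plain array of docs.
collection.find(country: "China").batch_size(10_000).each_batch do |batch|
  puts "fetched #{batch.size} documents"
end
```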
+ ## Insert
+
+ The `insert` method just writes data to the socket without waiting for a response.
+ The `safe_insert` method sends the current request and then a `getLastError` request, so it blocks until the MongoDB server returns a response. If the response contains an error, it is raised.
+
+ You can use the `continue_on_error` flag with a batch insert. In this case MongoDB will try to insert every item in the batch and then return an error if any insert failed. Otherwise MongoDB fails on the first bad insert and does not continue.
+
+ You can also pass the following flags to the `safe_insert` method:
+
+ * j
+ * fsync
+ * w
+ * wtimeout
+
+ More info about safe methods: http://docs.mongodb.org/manual/reference/command/getLastError/#dbcmd.getLastError
+
+ ```ruby
+ collection.insert(_id: 1, name: "Peter")
+ collection.safe_insert(_id: 1, name: "Peter")
+ #=> Duplicate key error
+ collection.safe_insert(_id: 2, name: "Ivan")
+
+ # Batch insert
+ batch = [
+   { _id: 3, name: "Nick" },
+   { _id: 2, name: "Mary" },
+   { _id: 4, name: "Kate" }
+ ]
+ collection.safe_insert(batch)
+ #=> Duplicate key error
+ collection.first(_id: 3)
+ #=> { _id: 3, name: "Nick" }
+ collection.first(_id: 4)
+ #=> nil
+
+ # Batch insert with the `continue_on_error` flag
+ collection.safe_insert(batch, continue_on_error: true)
+ #=> Duplicate key error
+ # but all docs that did not already exist are saved
+ collection.first(_id: 4)
+ #=> { _id: 4, name: "Kate" }
+ ```
+
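The `j`, `fsync`, `w`, `wtimeout` flags from the Insert section are not shown in its example. A minimal sketch of passing them to `safe_insert` (the option names come from the list above; the document and values are purely illustrative):

```ruby
# j, w and wtimeout map to getLastError options; the values here are made up.
collection.safe_insert({ _id: 5, name: "Olga" }, w: 1, j: true, wtimeout: 1000)
```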
+ ## Update
+
+ The `update` method likewise just writes data to the socket without waiting for any response.
+ `safe_update` will raise an error if MongoDB can't update the document.
+ You can use the `upsert` and `multi_update` flags. With `upsert`, the document is inserted if it is not already present in the database. With `multi_update`, all matching documents are updated; otherwise only the first one is.
+ The `j`, `fsync`, `w`, `wtimeout` flags are also available for the `safe_update` method.
+
+ ```ruby
+ collection.insert(_id: 1, name: "Peter", job: "Dancer")
+ collection.insert(_id: 2, name: "Peter", job: "Painter")
+
+ collection.update(
+   { name: "Peter" },
+   { "$set" => { job: "Driver" } }
+ )
+ collection.find(name: "Peter").all
+ #=> [
+ #=>   { _id: 1, name: "Peter", job: "Driver" },
+ #=>   { _id: 2, name: "Peter", job: "Painter" }
+ #=> ]
+ collection.update(
+   { name: "Peter" },
+   { "$set" => { job: "Singer" } },
+   { multi_update: true }
+ )
+ collection.find(name: "Peter").all
+ #=> [
+ #=>   { _id: 1, name: "Peter", job: "Singer" },
+ #=>   { _id: 2, name: "Peter", job: "Singer" }
+ #=> ]
+ collection.update(
+   { name: "Bjork" },
+   { "$set" => { job: "Artist" } }
+ )
+ collection.first(name: "Bjork")
+ #=> nil
+ collection.update(
+   { name: "Bjork" },
+   { "$set" => { job: "Artist" } },
+   { upsert: true }
+ )
+ collection.first(name: "Bjork")
+ #=> { _id: "Some id", name: "Bjork", job: "Artist" }
+ ```
+
+ ## Delete
+
+ As with `insert` and `update`, there is a `safe_delete` method and the `j`, `fsync`, `w`, `wtimeout` flags.
+ It also supports the `single_remove` flag if you want to delete only the first matching document.
+
+ ```ruby
+ batch = [
+   { _id: 1, name: "Antonio" },
+   { _id: 2, name: "Antonio" },
+   { _id: 3, name: "Antonio" }
+ ]
+ collection.safe_insert(batch)
+ collection.count
+ #=> 3
+ collection.safe_delete({ name: "Antonio" }, single_remove: true)
+ collection.count
+ #=> 2
+ collection.safe_delete(name: "Antonio")
+ collection.count
+ #=> 0
+ ```
+
+ ## Counting
+
+ ```ruby
+ # All items
+ collection.count
+ # Query
+ collection.count(query: { name: "Peter" })
+ # Limit, skip
+ collection.count(query: { name: "Peter" }, limit: 10, skip: 5)
+ ```
@@ -1,7 +1,8 @@
  require 'benchmark'
+ require 'em-synchrony'
 
- TOTAL_INSERTS = 10000
- TOTAL_READS = 50
+ TOTAL_INSERTS = 5000
+ TOTAL_READS = 40
 
  chars = ('a'..'z').to_a
  DOCS = [10, 100, 1000, 10000].map do |size|
@@ -53,14 +54,14 @@ fork do
 
  DOCS.each do |size, doc|
    GC.start
-   x.report("Monga: Inserting #{size}b document") do
+   x.report("Monga (blocking): Inserting #{size}b document") do
      TOTAL_INSERTS.times do
        collection.safe_insert(doc)
      end
    end
 
    GC.start
-   x.report("Monga: Reading #{size}b documents") do
+   x.report("Monga (blocking): Reading #{size}b documents") do
      TOTAL_READS.times do
        collection.find.all
      end
@@ -18,6 +18,7 @@ module Monga
    def initialize(opts = {})
      @opts = opts
      @opts[:type] ||= :block
+     @opts[:timeout] ||= 10
 
      sanitize_opts!
      create_client
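This hunk gives the client a default `:timeout` of 10. A minimal sketch of overriding it at construction time; the constructor chain is taken from the deleted `benchmarks/prof.rb` at the bottom of this diff, while the unit and exact handling of `:timeout` are assumptions:

```ruby
# :timeout is merely defaulted to 10 in the hunk above, so passing it explicitly
# should override that default. Value and unit (presumably seconds) are illustrative.
client     = Monga::Client.new(type: :block, timeout: 2)
collection = client.get_database("dbTest").get_collection("testCollection")
```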
@@ -18,7 +18,6 @@ module Monga::Protocol
        msg << doc.to_bson
      end
    when Hash
-     # msg << BSON::BSON_C.serialize(documents).to_s
      msg << documents.to_bson
    end
    msg
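The hunk drops the commented-out `BSON::BSON_C.serialize` call in favour of the `to_bson` core extensions from the bson gem, which the gemspec change below pins to 2.0.0.rc1. A minimal standalone check of that API, as I understand bson 2.0 (the return type being a raw binary string is an assumption):

```ruby
require 'bson'  # pinned to 2.0.0.rc1 in the gemspec change below

# bson 2.0 adds #to_bson to core types such as Hash, which is what the
# `documents.to_bson` line in this hunk relies on.
raw = { name: "Peter", _id: 1 }.to_bson
puts raw.bytesize  # raw should be a binary string containing the BSON document
```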
@@ -2,6 +2,7 @@ module Monga
    class Request
      attr_reader :request_id, :connection
 
+     FLAGS = {}
      OP_CODES = {
        reply: 1,
        msg: 1000,
@@ -20,6 +21,7 @@ module Monga
      @collection_name = collection_name
      @options = options
 
+     check_flags
      @request_id = self.class.request_id
    end
 
@@ -69,6 +71,15 @@ module Monga
 
      private
 
+     # Ouch!
+     def check_flags
+       return unless @options[:query]
+       self.class::FLAGS.each do |k, byte|
+         v = @options[:query].delete(k)
+         @options[k] = v if v
+       end
+     end
+
      def flags
        flags = 0
        self.class::FLAGS.each do |k, byte|
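A standalone sketch of what the new `check_flags` does to the options hash. The logic is copied from the hunk above; the contents of `FLAGS` for a query-type request are an assumption, since this diff only adds an empty default to the base `Request` class:

```ruby
# Assumed flag table for a query-type request; the real values live in the
# Monga request subclasses, not in this hunk.
FLAGS = { slave_ok: 4 }

# Mirror of check_flags: move any known flag out of the :query selector
# and promote it to a top-level request option.
def check_flags(options)
  return options unless options[:query]
  FLAGS.each_key do |k|
    v = options[:query].delete(k)
    options[k] = v if v
  end
  options
end

opts = { query: { artist: "Madonna", slave_ok: true }, limit: 5 }
p check_flags(opts)
# => {:query=>{:artist=>"Madonna"}, :limit=>5, :slave_ok=>true}
```

The new collection spec further down exercises this path by passing `slave_ok: true` to `count`.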
@@ -4,7 +4,7 @@ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
 
  Gem::Specification.new do |spec|
    spec.name = "monga"
-   spec.version = "0.0.10"
+   spec.version = "0.0.11"
    spec.authors = ["Petr Yanovich"]
    spec.email = ["fl00r@yandex.ru"]
    spec.description = %q{Yet another MongoDB Ruby Client}
@@ -22,7 +22,7 @@ Gem::Specification.new do |spec|
    spec.add_development_dependency "kgio"
    spec.add_development_dependency "em-synchrony"
 
-   spec.add_dependency "bson", ["~> 2.0.0.rc1"]
+   spec.add_dependency "bson", ["2.0.0.rc1"]
    # spec.add_dependency "bson"
    # spec.add_dependency "bson_ext"
    spec.add_dependency "bin_utils"
@@ -124,6 +124,10 @@ describe Monga::Collection do
    it "should count all docs with limit and skip" do
      @collection.count(query: { artist: "Madonna" }, limit: 5, skip: 6).must_equal 4
    end
+
+   it "should work with flags" do
+     @collection.count(query: { artist: "Madonna" }, limit: 5, skip: 6, slave_ok: true)
+   end
  end
 
  # ENSURE/DROP INDEX
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: monga
  version: !ruby/object:Gem::Version
-   version: 0.0.10
+   version: 0.0.11
  prerelease:
  platform: ruby
  authors:
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2013-07-09 00:00:00.000000000 Z
+ date: 2013-09-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
@@ -80,7 +80,7 @@ dependencies:
    requirement: !ruby/object:Gem::Requirement
      none: false
      requirements:
-     - - ~>
+     - - '='
      - !ruby/object:Gem::Version
        version: 2.0.0.rc1
    type: :runtime
@@ -88,7 +88,7 @@ dependencies:
    version_requirements: !ruby/object:Gem::Requirement
      none: false
      requirements:
-     - - ~>
+     - - '='
      - !ruby/object:Gem::Version
        version: 2.0.0.rc1
  - !ruby/object:Gem::Dependency
@@ -121,7 +121,6 @@ files:
  - README.md
  - Rakefile
  - benchmarks/inserts.rb
- - benchmarks/prof.rb
  - lib/monga.rb
  - lib/monga/client.rb
  - lib/monga/clients/master_slave_client.rb
@@ -181,7 +180,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
      version: '0'
    segments:
    - 0
-   hash: -4608881135826432488
+   hash: -3001858726288794131
  required_rubygems_version: !ruby/object:Gem::Requirement
    none: false
    requirements:
@@ -190,7 +189,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
    segments:
    - 0
-   hash: -4608881135826432488
+   hash: -3001858726288794131
  requirements: []
  rubyforge_project:
  rubygems_version: 1.8.25
@@ -1,26 +0,0 @@
- require 'ruby-prof'
- require File.expand_path('../../lib/monga', __FILE__)
- require 'mongo'
- include Mongo
-
- total = 100
- # monga_collection = Monga::Client.new(type: :block).get_database("dbTest").get_collection("testCollection")
- mongo_collection = MongoClient.new.db("dbTest").collection("testCollection")
-
- total.times do |i|
-   mongo_collection.insert(title: "Row #{i}")
- end
- RubyProf.start
- mongo_collection.find.to_a
-
- result = RubyProf.stop
- mongo_collection.drop
-
- # Print a flat profile to text
- printer = RubyProf::FlatPrinter.new(result)
- printer.print(STDOUT)
-
-
-
- mongo_collection = MongoClient.new.db("dbTest").collection("testCollection")
- mongo_collection.find.to_a