hbase-jruby 0.3.5-java → 0.4.0-java

data/CHANGELOG.md CHANGED
@@ -1,6 +1,19 @@
  Changelog
  =========

+ 0.4.0
+ -----
+ - Added support for append operation: `HBase::Table#append`
+ - Added support for atomic mutations on a single row: `HBase::Table#mutate`
+ - Added support for batch operations: `HBase::Table#batch`
+   - This method does not take an argument and requires a block
+   - Not to be confused with the shortcut method `HBase::Scoped#batch(batch_size)`
+ - Changed `HBase::Table#increment` to return the updated values as a Hash
+ - Fixed `HBase.resolve_dependency!(:local)` on CDH distributions
+ - An empty qualifier must be given as 'cf:', not 'cf'
+ - Added `HBase::Row#empty?` method
+ - Added `HBase::ByteArray#to_s` method
+
  0.3.5
  -----
  - Improved `Scoped#count` method
data/README.md CHANGED
@@ -11,6 +11,33 @@
 
  gem install hbase-jruby
 
+ ### Using hbase-jruby in HBase shell
+
+ You can use this gem in the HBase shell without a separate JRuby installation.
+
+ First, clone this repository,
+
+ ```sh
+ git clone https://github.com/junegunn/hbase-jruby.git
+ ```
+
+ then start up the shell (`hbase shell`) and type in the following lines:
+
+ ```ruby
+ $LOAD_PATH << 'hbase-jruby/lib'
+ require 'hbase-jruby'
+ ```
+
+ Now, you're all set.
+
+ ```ruby
+ # Start using it!
+ hbase = HBase.new
+
+ hbase.list
+ hbase[:my_table].create! :f
+ ```
+
  ## A quick example
 
  ```ruby
@@ -72,12 +99,18 @@ table.put 1,
    comment1: 'A must-have',
    comment2: 'Rewarding purchase'
 
- # GET
+ # GET (using schema)
  book = table.get(1)
  title = book[:title]
  comment2 = book[:comment2]
  as_hash = book.to_h
 
+ # GET (not using schema)
+ title = book.string('cf1:title')      # cf:cq notation
+ year = book.short('cf1:year')
+ reviews = book.fixnum('cf2:reviews')
+ stars = book.fixnum(['cf2', 'stars']) # Array notation of [cf, cq]
+
  # SCAN
  table.range(0..100)
       .project(:cf1, :reviews, :summary)
@@ -122,7 +155,7 @@ Call `HBase.resolve_dependency!` helper method passing one of the arguments list
  | cdh4.1[.*] | Cloudera CDH4.1 | cdh4.1.4 | mvn |
  | cdh3[u*] | Cloudera CDH3 | cdh3u6 | mvn |
  | 0.95[.*] | Apache HBase 0.95 | 0.95.0 | mvn |
- | 0.94[.*] | Apache HBase 0.94 | 0.94.7 | mvn |
+ | 0.94[.*] | Apache HBase 0.94 | 0.94.9 | mvn |
  | 0.92[.*] | Apache HBase 0.92 | 0.92.2 | mvn |
  | *POM PATH* | Custom Maven POM file | - | mvn |
  | `:local` | Local HBase installation | - | hbase |
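
A minimal sketch of what the call looks like, using arguments taken from the table above (pick the one matching your cluster):

```ruby
# Fetch and load the client JARs for Apache HBase 0.94.9 via Maven
HBase.resolve_dependency! '0.94.9'

# Or load the JARs of a local HBase installation (requires the `hbase` executable on PATH)
HBase.resolve_dependency! :local
```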
@@ -208,7 +241,19 @@ table.create! cf1: {},
                cf2: { compression: :snappy, bloomfilter: :row }
  ```
 
- ## Basic operations
+ ## List of operations
+
+ | Operation | Description |
+ | ------------------ | ----------------------------------------------------------------------------------------------- |
+ | PUT | Puts data into the table |
+ | GET | Retrieves data from the table by one or more rowkeys |
+ | SCAN | Scans the table for a given range of rowkeys |
+ | DELETE | Deletes data in the table |
+ | INCREMENT | Atomically increments one or more columns |
+ | APPEND | Appends values to one or more columns within a single row |
+ | Checked PUT/DELETE | Atomically checks if the pre-existing data matches the expected value and puts or deletes data |
+ | MUTATE | Performs multiple mutations (PUTs and DELETEs) atomically on a single row |
+ | Batch execution | Performs multiple actions (PUT, GET, DELETE, INCREMENT, APPEND, and MUTATE) at once |
 
  ### Defining table schema for easier data access
 
@@ -252,9 +297,9 @@ hbase.schema = {
  ```
 
  Columns that are not defined in the schema can be referenced
- using `FAMILY:QUALIFIER` notation or 2-element Array of column family name (as Symbol) and qualifier,
- however since there's no type information, they are returned as Java byte arrays,
- which have to be decoded manually.
+ using `FAMILY:QUALIFIER` notation or a 2-element Array of column family name
+ (as Symbol) and qualifier; however, since there's no type information, they are
+ returned as Java byte arrays, which have to be decoded manually.
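
For instance, a brief sketch using the `cf2:extra` column that appears later in this README (assumed to be undeclared in the schema):

```ruby
raw   = book['cf2:extra']                        # Java byte array
extra = HBase::Util.from_bytes(:bigdecimal, raw) # decode manually, or ...
extra = book.bigdecimal 'cf2:extra'              # ... use a typed accessor on the Row
```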
 
  ### PUT
 
@@ -304,7 +349,7 @@ extra = HBase::Util.from_bytes(:bigdecimal, book['cf2:extra'])
  extra = book.bigdecimal 'cf2:extra'
  ```
 
- ### Batch-GET
+ #### Batch-GET
 
  ```ruby
  # Pass an array of row keys as the parameter
@@ -316,8 +361,8 @@ books = table.get(['rowkey1', 'rowkey2', 'rowkey3'])
  `to_h` and `to_H` return the Hash representation of the row.
  (The latter returns all values with their timestamp)
 
- If a column is defined in the schema, it is referenced using its quailifier in Symbol type.
- If a column is not defined, it is represented as a 2-element Array
+ If a column is defined in the schema, it is referenced using its qualifier as a
+ Symbol. If a column is not defined, it is represented as a 2-element Array
  of column family in Symbol and column qualifier as ByteArray.
  Even so, to make it easier to reference those columns, an extended version of
  Hash is returned with which you can also reference them with `FAMILY:QUALIFIER`
@@ -431,6 +476,25 @@ table.delete_row 'rowkey1'
  table.delete_row 'rowkey1', 'rowkey2', 'rowkey3'
  ```
 
+ ### INCREMENT: Atomic increment of column values
+
+ ```ruby
+ # Atomically increase cf2:reviews by one
+ inc = table.increment('rowkey1', reviews: 1)
+ puts inc[:reviews]
+
+ # Atomically increase two columns by one and five respectively
+ inc = table.increment('rowkey1', reviews: 1, stars: 5)
+ puts inc[:stars]
+ ```
+
+ ### APPEND
+
+ ```ruby
+ ret = table.append 'rowkey1', title: ' (limited edition)', summary: ' ...'
+ puts ret[:title] # Updated title
+ ```
+
  ### Checked PUT and DELETE
 
  ```ruby
@@ -444,19 +508,59 @@ table.check(:rowkey, in_print: false)
  # https://github.com/junegunn/hbase-jruby#delete
  ```
 
- ### Atomic increment of column values
+ ### MUTATE: Atomic mutations on a single row (PUTs and DELETEs)
 
  ```ruby
- # Atomically increase cf2:reviews by one
- table.increment('rowkey1', reviews: 1)
+ # Currently Put and Delete are supported
+ # - Refer to the mutateRow method of org.apache.hadoop.hbase.client.HTable
+ table.mutate(rowkey) do |m|
+   m.put comment3: 'Nice', comment4: 'Great'
+   m.delete :comment1, :comment2
+ end
+ ```
 
- # Atomically increase two columns by one and five respectively
- table.increment('rowkey1', reviews: 1, stars: 5)
+ ### Batch execution
 
- # Increase column values of multiple rows.
- # - Atomicity is only guaranteed within each row.
- table.increment 'rowkey1' => { reviews: 1, stars: 5 },
-                 'rowkey2' => { reviews: 1, stars: 3 }
+ *Disclaimer*: The order in which the actions are executed is not defined.
+ Refer to the documentation of the batch method of the [HTable class](http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html).
+
+ ```ruby
+ ret = table.batch do |b|
+   b.put rowkey1, 'cf1:a' => 100, 'cf1:b' => 'hello'
+   b.get rowkey2
+   b.append rowkey3, 'cf1:b' => 'world'
+   b.delete rowkey3, 'cf2', 'cf3:z'
+   b.increment rowkey3, 'cf1:a' => 200, 'cf1:c' => 300
+   b.mutate(rowkey4) do |m|
+     m.put 'cf3:z' => 3.14
+     m.delete 'cf3:y', 'cf4'
+   end
+ end
+ ```
+
+ The `batch` method returns an Array of Hashes containing the results of the
+ actions in the order they were specified in the block. Each Hash has a `:type`
+ entry (:get, :put, :append, etc.) and a `:result` entry. If the type of an
+ action is :put, :delete, or :mutate, the `:result` is given as a boolean. If
+ it's an :increment or :append, a plain Hash is returned as the `:result`, just
+ like in the [increment](https://github.com/junegunn/hbase-jruby#increment-atomic-increment-of-column-values)
+ and [append](https://github.com/junegunn/hbase-jruby#append) methods.
+ For a :get action, an `HBase::Row` instance is returned, or nil if the row is not found.
+
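
A minimal sketch of inspecting the results (the rowkey variables are placeholders):

```ruby
ret = table.batch do |b|
  b.put rowkey1, 'cf1:a' => 100
  b.get rowkey2
end

ret[0][:type]    # => :put
ret[0][:result]  # => true (boolean for :put, :delete, and :mutate)
ret[1][:type]    # => :get
ret[1][:result]  # => HBase::Row instance, or nil if the row was not found
```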
+ If one or more actions have failed, `HBase::BatchException` will be raised.
+ Although you don't get to receive the return value of the batch method,
+ you can still access the partial results using the `results` method of
+ `HBase::BatchException`.
+
+ ```ruby
+ results =
+   begin
+     table.batch do |b|
+       # ...
+     end
+   rescue HBase::BatchException => e
+     e.results
+   end
  ```
 
  ### SCAN
@@ -579,8 +683,8 @@ scope.range(1, 100).
 
  ### *filter*
 
- You can configure server-side filtering of rows and columns with `HBase::Scoped#filter` calls.
- Multiple calls have conjunctive effects.
+ You can configure server-side filtering of rows and columns with
+ `HBase::Scoped#filter` calls. Multiple calls have conjunctive effects.
 
  ```ruby
  # Range scanning the table with filters
@@ -617,10 +721,10 @@ end
 
  ### *while*
 
- `HBase::Scoped#while` method takes the same parameters as `filter` method, the difference is that
- each filtering condition passed to `while` method is wrapped by `WhileMatchFilter`,
- which aborts scan immediately when the condition is not met at a certain row.
- See the following example.
+ The `HBase::Scoped#while` method takes the same parameters as the `filter`
+ method; the difference is that each filtering condition passed to `while` is
+ wrapped in a `WhileMatchFilter`, which aborts the scan as soon as the condition
+ is not met at a certain row. See the following example.
 
  ```ruby
  (0...30).each do |idx|
@@ -669,8 +773,8 @@ scoped.project(offset: 1000, limit: 10)
 
  When using column filters on *fat* rows with many columns,
  it's advised that you limit the batch size with `HBase::Scoped#batch` call
- to avoid fetching all columns at once.
- However setting batch size allows multiple rows with the same row key are returned during scan.
+ to avoid fetching all columns at once. However, setting a batch size allows
+ multiple rows with the same row key to be returned during a scan.
 
  ```ruby
  # Let's say that we have rows with more than 10 columns whose qualifiers start with `str`
@@ -818,8 +922,9 @@ table.raw_families
  # "BLOCKCACHE" => "true"}}
  ```
 
- These String key-value pairs are not really a part of the public API of HBase, and thus might change over time.
- However, they are most useful when you need to create a table with the same properties as the existing one.
+ These String key-value pairs are not really a part of the public API of HBase,
+ and thus might change over time. However, they are most useful when you need to
+ create a table with the same properties as an existing one.
 
  ```ruby
  hbase[:dupe_table].create!(table.raw_families, table.raw_properties)
@@ -988,8 +1093,8 @@ end
 
  ### Lexicographic scan order
 
- HBase stores rows in the lexicographic order of the rowkeys in their byte array representations.
- Thus the type of row key affects the scan order.
+ HBase stores rows in the lexicographic order of the rowkeys in their byte array
+ representations. Therefore, the type of the row key affects the scan order.
 
  ```ruby
  (1..15).each do |i|
@@ -0,0 +1,12 @@
+ class HBase
+   class BatchException < RuntimeError
+     attr_reader :java_exception, :results
+
+     def initialize x, results
+       super x.to_s
+       @java_exception = x
+       @results = results
+     end
+   end
+ end
+
@@ -37,6 +37,12 @@ class ByteArray
    initialize_(*values)
  end
 
+ # Returns the String representation of the underlying byte array
+ # @return [String]
+ def to_s
+   @java.to_s
+ end
+
  def each
    return enum_for(:each) unless block_given?
    @java.to_a.each { |byte| yield byte }
@@ -17,7 +17,7 @@ class HBase
    'cdh4.1' => 'cdh4.1.4',
    'cdh3' => 'cdh3u6',
    '0.95' => '0.95.0',
-   '0.94' => '0.94.7',
+   '0.94' => '0.94.9',
    '0.92' => '0.92.2',
  }
 
@@ -51,7 +51,7 @@ class HBase
    # Check for hbase executable
    hbase = `which hbase`
    raise RuntimeError, "Cannot find `hbase` executable" if hbase.empty?
-   `hbase classpath`.split(':')
+   `hbase classpath`.strip.split(':').map { |e| Dir[e] }.flatten
  else
    # Check for Maven executable
    mvn = `which mvn`
@@ -99,8 +99,8 @@ class HBase
 
  # Load jars
  jars_loaded = jars.select { |jar|
-   File.exists?(jar) &&
-   File.extname(jar) == '.jar' &&
+   File.file?(jar) &&
+   File.extname(jar).downcase == '.jar' &&
    require(jar)
  }
 
@@ -158,9 +158,11 @@ class HBase
  HBase::Table => %w[
    org.apache.hadoop.hbase.HColumnDescriptor
    org.apache.hadoop.hbase.HTableDescriptor
+   org.apache.hadoop.hbase.client.Append
    org.apache.hadoop.hbase.client.Delete
    org.apache.hadoop.hbase.client.Increment
    org.apache.hadoop.hbase.client.Put
+   org.apache.hadoop.hbase.client.RowMutations
    org.apache.hadoop.hbase.io.hfile.Compression
    org.apache.hadoop.hbase.regionserver.StoreFile
  ],
@@ -1,11 +1,17 @@
  require 'bigdecimal'
 
  class HBase
-   # Represents a row returned by HBase
+   # Represents a set of key-values returned by HBase
    # @author Junegunn Choi <junegunn.c@gmail.com>
    class Row
      include Enumerable
 
+     # Returns whether the returned row is empty
+     # @return [Boolean]
+     def empty?
+       @result.empty?
+     end
+
      # Returns the rowkey of the row
      # @param [Symbol] type The type of the rowkey
      #   Can be one of :string, :symbol, :fixnum, :float, :short, :int, :bigdecimal, :boolean and :raw.
@@ -269,7 +275,7 @@ class Row
 
  private
  def get_value col, with_versions = false
-   cf, cq, _ = @table.lookup_and_parse col
+   cf, cq, _ = @table.lookup_and_parse col, true
    if with_versions
      # Need to make it a Ruby hash:
      # Prevents implicit conversion from ruby type to java type when updating the Hash
@@ -86,9 +86,10 @@ class Schema
 
  # @private
  # @param [Symbol] table
- def lookup_and_parse table, col
+ def lookup_and_parse table, col, expect_cq
    cf, cq, type = lookup table, col
    cf, cq = Util.parse_column_name(cf ? [cf, cq] : col)
+   raise ArgumentError, "Invalid column key: #{col}" if expect_cq && cq.nil?
    return [cf, cq, type]
  end
 
@@ -13,7 +13,7 @@ class Scoped
  # A clean HBase::Scoped object for the same table
  # @return [HBase::Scope] A clean HBase::Scoped object for the same table
  def unscope
-   Scoped.send(:new, @table)
+   Scoped.send(:new, @table, @dcaching)
  end
 
  # Count the number of rows in the scope
@@ -253,7 +253,7 @@ class Scoped
 
  private
  # @param [HBase::Table] table
- def initialize table
+ def initialize table, default_caching
    @table = table
    @filters = []
    @project = []
@@ -261,6 +261,7 @@ private
    @range = nil
    @versions = nil
    @batch = nil
+   @dcaching = default_caching
    @caching = nil
    @limit = nil
    @mlimit = nil
@@ -533,7 +534,9 @@ private
      scan.setMaxResultSize(@limit)
    else
      @mlimit = @limit
-     scan.caching = [@mlimit, @caching].compact.min
+     if [@caching, @dcaching].compact.all? { |c| @mlimit < c }
+       scan.caching = @mlimit
+     end
    end
  end
 
@@ -615,7 +618,7 @@ private
  case f
  when Hash
    f.map { |col, val|
-     cf, cq, type = @table.lookup_and_parse col
+     cf, cq, type = @table.lookup_and_parse col, true
 
      case val
      when Array
@@ -0,0 +1,69 @@
+ class HBase
+   class Table
+     # Class used to register actions to perform in batch
+     class BatchAction
+       attr_reader :actions
+       attr_reader :types
+
+       class BatchGetScoped
+         # @see HBase::Scoped#get
+         def get rowkey
+           @callback.call @scoped.send(:getify, rowkey)
+         end
+
+         [:range, :project, :filter, :versions, :time_range, :at].each do |method|
+           define_method(method) do |*args|
+             BatchGetScoped.send(:new, @scoped.send(method, *args), @callback)
+           end
+         end
+
+         private
+         def initialize scoped, callback
+           @scoped = scoped
+           @callback = callback
+         end
+       end
+
+       # @see HBase::Table#put
+       def put *args
+         @actions << { :type => :put, :action => @mutation.put(*args) }
+       end
+
+       # @see HBase::Table#delete
+       def delete *args
+         @actions << { :type => :delete, :action => @mutation.delete(*args) }
+       end
+
+       # @see HBase::Table#append
+       def append *args
+         @actions << { :type => :append, :action => @mutation.append(*args) }
+       end
+
+       # @see HBase::Table#increment
+       def increment *args
+         @actions << { :type => :increment, :action => @mutation.increment(*args) }
+       end
+
+       # @see HBase::Table#mutate
+       def mutate *args, &blk
+         @actions << { :type => :mutate, :action => @mutation.mutate(*args, &blk) }
+       end
+
+       [:get, :range, :project, :filter, :versions, :time_range, :at].each do |method|
+         define_method(method) do |*args|
+           BatchGetScoped.send(:new, @table.scoped, proc { |get|
+             @actions << { :type => :get, :action => get }
+           }).send(method, *args)
+         end
+       end
+
+       private
+       def initialize table, mutation
+         @table = table
+         @mutation = mutation
+         @actions = []
+       end
+     end#BatchAction
+   end#Table
+ end#HBase
+
@@ -1,22 +1,25 @@
  class HBase
    class Table
      class CheckedOperation
-       def initialize table, rowkey, cf, cq, val
-         @table = table
-         @rowkey = rowkey
-         @cf = cf
-         @cq = cq
-         @val = val
+       def initialize table, mutation, rowkey, cf, cq, val
+         @table = table
+         @mutation = mutation
+         @rowkey = rowkey
+         @cf = cf
+         @cq = cq
+         @val = val
        end
 
        # @param [Hash] props
        def put props
-         @table.htable.checkAndPut @rowkey, @cf, @cq, @val, @table.send(:make_put, @rowkey, props)
+         @table.htable.checkAndPut(
+           @rowkey, @cf, @cq, @val, @mutation.put(@rowkey, props))
        end
 
        # @param [Object] *extra Optional delete specification. Column family, qualifier, and timestamps
        def delete *extra
-         @table.htable.checkAndDelete @rowkey, @cf, @cq, @val, @table.send(:make_delete, @rowkey, *extra)
+         @table.htable.checkAndDelete(
+           @rowkey, @cf, @cq, @val, @mutation.delete(@rowkey, *extra))
        end
      end
    end
@@ -0,0 +1,130 @@
+ class HBase
+   class Table
+     # Generate single-row mutation objects
+     class Mutation
+       include HBase::Util
+
+       def initialize table
+         @table = table
+       end
+
+       def put rowkey, props
+         Put.new(Util.to_bytes rowkey).tap { |put|
+           props.each do |col, val|
+             next if val.nil?
+
+             cf, cq, type = @table.lookup_and_parse col, true
+
+             case val
+             when Hash
+               val.each do |t, v|
+                 case t
+                 # Timestamp / Ruby Time
+                 when Time, Fixnum
+                   put.add cf, cq, time_to_long(t), Util.to_typed_bytes(type, v)
+                 # Types: :byte, :short, :int, ...
+                 else
+                   put.add cf, cq, Util.to_typed_bytes(t, v)
+                 end unless v.nil?
+               end
+             else
+               put.add cf, cq, Util.to_typed_bytes(type, val)
+             end
+           end
+           raise ArgumentError, "no column to put" if put.empty?
+         }
+       end
+
+       def delete rowkey, *extra
+         Delete.new(Util.to_bytes rowkey).tap { |del|
+           cf = cq = nil
+           prcd = false
+
+           prc = lambda do
+             unless prcd
+               if cq
+                 # Delete all versions
+                 del.deleteColumns cf, cq
+               elsif cf
+                 del.deleteFamily cf
+               end
+             end
+           end
+
+           extra.each do |x|
+             case x
+             when Fixnum, Time
+               if cq
+                 del.deleteColumn cf, cq, time_to_long(x)
+                 prcd = true
+               else
+                 raise ArgumentError, 'qualifier not given'
+               end
+             else
+               prc.call
+               cf, cq, _ = @table.lookup_and_parse x, false
+               prcd = false
+             end
+           end
+           prc.call
+         }
+       end
+
+       def increment rowkey, *spec
+         if spec.first.is_a?(Hash)
+           spec = spec.first
+         else
+           c, b = spec
+           spec = { c => (b || 1) }
+         end
+
+         Increment.new(Util.to_bytes rowkey).tap { |inc|
+           spec.each do |col, by|
+             cf, cq, _ = @table.lookup_and_parse col, true
+             inc.addColumn cf, cq, by
+           end
+         }
+       end
+
+       def append rowkey, spec
+         Append.new(Util.to_bytes rowkey).tap { |apnd|
+           spec.each do |col, val|
+             cf, cq, _ = @table.lookup_and_parse col, true
+             apnd.add(cf, cq, Util.to_bytes(val))
+           end
+         }
+       end
+
+       def mutate rowkey
+         rm = Mutator.new(self, rowkey)
+         yield rm
+         org.apache.hadoop.hbase.client.RowMutations.new(Util.to_bytes rowkey).tap { |m|
+           rm.mutations.each do |action|
+             m.add action
+           end
+         }
+       end
+
+       class Mutator
+         attr_reader :mutations
+
+         def initialize mutation, rowkey
+           @mutation = mutation
+           @rowkey = rowkey
+           @mutations = []
+         end
+
+         # @param [Hash] props Column values
+         def put props
+           @mutations << @mutation.put(@rowkey, props)
+           self
+         end
+
+         def delete *args
+           @mutations << @mutation.delete(@rowkey, *args)
+           self
+         end
+       end#Mutator
+     end#Mutation
+   end#Table
+ end#HBase
+