opener-tokenizer 1.1.2 → 2.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
- metadata.gz: f9af1f4a79564c201746a78973f5fc3cdfe272f1
- data.tar.gz: 41e00e0819e7a16aa5d9e509fb4f28751811b292
+ SHA256:
+ metadata.gz: 5d249d10dfdca563615ea4ee70e905965970e36bca57ba78075c333ead37fad5
+ data.tar.gz: feb7d682b0c96c765a6bfee38178223ccf9e4c4d05ef0ad40d1a66712a28ee2a
  SHA512:
- metadata.gz: 563674a5a7f855eff87bd6659a5688f35502c0790fbde1797dc1b27f8f45f03f4c42ca8da2dfbe501da6c5a8db7523bc4cd308adf765c0e4a754e3874f1563c8
- data.tar.gz: e716377afcbc496894a89b95aeec064c89402c60839fb1213dfce2990a2baaab083e0f46950a1d8791c54a1028ac4c7faefd00dd49ee36b463af90204b55602d
+ metadata.gz: 8e48843449d69c2acea361f285d76c0d4b73887a497fa88146e84456c21d76ce1518c5c1d435e67301c7443c9d36d2f3b8fecd96c004dad1f91d29e627092be4
+ data.tar.gz: 5af4a2af84e13237345fa69bfd8908b8df653abc53fcd3fc32a727efa5399a1c7499c2ad5061c981b5ed6069e0b3bca9fadd6b164c2190e219026b6abb7716c6
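The checksum block above switches the gem's short digests from SHA1 to SHA256 (the SHA512 entries change only because the archives themselves changed). As a quick illustration of how such hex digests are produced with Ruby's stdlib Digest API (the input here is a stand-in string, not the real metadata.gz archive):

```ruby
require 'digest'

# Hex digests as they would appear in checksums.yaml; "data" is a stand-in
# for the archive bytes.
data = "example archive contents"

sha1   = Digest::SHA1.hexdigest(data)
sha256 = Digest::SHA256.hexdigest(data)

puts "SHA1:   #{sha1}"    # 40 hex characters
puts "SHA256: #{sha256}"  # 64 hex characters
```

SHA256 was adopted by RubyGems in place of SHA1 for the short checksums, which is why both entries are replaced wholesale.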
LICENSE.txt ADDED
@@ -0,0 +1,13 @@
+ Copyright 2014 OpeNER Project Consortium
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
data/README.md CHANGED
@@ -1,11 +1,15 @@
  Introduction
  ------------

- The tokenizer tokenizes a text into sentences and words.
+ The tokenizer tokenizes a text into sentences and words.

  ### Confused by some terminology?

- This software is part of a larger collection of natural language processing tools known as "the OpeNER project". You can find more information about the project at [the OpeNER portal](http://opener-project.github.io). There you can also find references to terms like KAF (an XML standard to represent linguistic annotations in texts), component, cores, scenario's and pipelines.
+ This software is part of a larger collection of natural language processing
+ tools known as "the OpeNER project". You can find more information about the
+ project at [the OpeNER portal](http://opener-project.github.io). There you can
+ also find references to terms like KAF (an XML standard to represent linguistic
+ annotations in texts), component, cores, scenario's and pipelines.

  Quick Use Example
  -----------------
@@ -20,13 +24,14 @@ output KAF by default.

  ### Command line interface

- You should now be able to call the tokenizer as a regular shell
- command: by its name. Once installed the gem normally sits in your path so you can call it directly from anywhere.
+ You should now be able to call the tokenizer as a regular shell command: by its
+ name. Once installed the gem normally sits in your path so you can call it
+ directly from anywhere.

  Tokenizing some text:

  echo "This is English text" | tokenizer -l en --no-kaf
-
+
  Will result in

  <?xml version="1.0" encoding="UTF-8" standalone="no"?>
@@ -45,11 +50,13 @@ Will result in
  </text>
  </KAF>

- The available languages for tokenization are: English (en), German (de), Dutch (nl), French (fr), Spanish (es), Italian (it)
+ The available languages for tokenization are: English (en), German (de), Dutch
+ (nl), French (fr), Spanish (es), Italian (it)

  #### KAF input format

- The tokenizer is capable of taking KAF as input, and actually does so by default. You can do so like this:
+ The tokenizer is capable of taking KAF as input, and actually does so by
+ default. You can do so like this:

  echo "<?xml version='1.0' encoding='UTF-8' standalone='no'?><KAF version='v1.opener' xml:lang='en'><raw>This is what I call, a test!</raw></KAF>" | tokenizer

@@ -72,7 +79,8 @@ Will result in
  </text>
  </KAF>

- If the argument -k (--kaf) is passed, then the argument -l (--language) is ignored.
+ If the argument -k (--kaf) is passed, then the argument -l (--language) is
+ ignored.

  ### Webservices

@@ -80,7 +88,8 @@ You can launch a language identification webservice by executing:

  tokenizer-server

- This will launch a mini webserver with the webservice. It defaults to port 9292, so you can access it at <http://localhost:9292>.
+ This will launch a mini webserver with the webservice. It defaults to port 9292,
+ so you can access it at <http://localhost:9292>.

  To launch it on a different port provide the `-p [port-number]` option like this:

@@ -88,19 +97,25 @@ To launch it on a different port provide the `-p [port-number]` option like this

  It then launches at <http://localhost:1234>

- Documentation on the Webservice is provided by surfing to the urls provided above. For more information on how to launch a webservice run the command with the ```-h``` option.
+ Documentation on the Webservice is provided by surfing to the urls provided
+ above. For more information on how to launch a webservice run the command with
+ the `--help` option.


  ### Daemon

- Last but not least the tokenizer comes shipped with a daemon that can read jobs (and write) jobs to and from Amazon SQS queues. For more information type:
+ Last but not least the tokenizer comes shipped with a daemon that can read jobs
+ (and write) jobs to and from Amazon SQS queues. For more information type:

- tokenizer-daemon -h
+ tokenizer-daemon --help

  Description of dependencies
  ---------------------------

- This component runs best if you run it in an environment suited for OpeNER components. You can find an installation guide and helper tools in the [OpeNER installer](https://github.com/opener-project/opener-installer) and [an installation guide on the Opener Website](http://opener-project.github.io/getting-started/how-to/local-installation.html)
+ This component runs best if you run it in an environment suited for OpeNER
+ components. You can find an installation guide and helper tools in the
+ [OpeNER installer](https://github.com/opener-project/opener-installer) and
+ [an installation guide on the Opener Website](http://opener-project.github.io/getting-started/how-to/local-installation.html).

  At least you need the following system setup:

@@ -113,16 +128,20 @@ At least you need the following system setup:

  * Maven (for building the Gem)

-
  Language Extension
  ------------------

- The tokenizer module is a wrapping around a Perl script, which performs the actual tokenization based on rules (when to break a character sequence). The tokenizer already supports a lot of languages. Have a look to the core script to figure out how to extend to new languages.
+ The tokenizer module is a wrapping around a Perl script, which performs the
+ actual tokenization based on rules (when to break a character sequence). The
+ tokenizer already supports a lot of languages. Have a look to the core script to
+ figure out how to extend to new languages.

  The Core
  --------

- The component is a fat wrapper around the actual language technology core. The core is a rule based tokenizer implemented in Perl. You can find the core technologies in the following repositories:
+ The component is a fat wrapper around the actual language technology core. The
+ core is a rule based tokenizer implemented in Perl. You can find the core
+ technologies in the following repositories:

  * [tokenizer-base](http://github.com/opener-project/tokenizer-base)

@@ -135,9 +154,8 @@ Where to go from here
  Report problem/Get help
  -----------------------

- If you encounter problems, please email <support@opener-project.eu> or leave an issue in the
- [issue tracker](https://github.com/opener-project/tokenizer/issues).
-
+ If you encounter problems, please email <support@opener-project.eu> or leave an
+ issue in the [issue tracker](https://github.com/opener-project/tokenizer/issues).

  Contributing
  ------------
bin/tokenizer CHANGED
@@ -2,6 +2,6 @@

  require_relative '../lib/opener/tokenizer'

- cli = Opener::Tokenizer::CLI.new(:args => ARGV)
+ cli = Opener::Tokenizer::CLI.new

- cli.run(STDIN.tty? ? nil : STDIN.read)
+ cli.run
bin/tokenizer-daemon CHANGED
@@ -1,9 +1,10 @@
  #!/usr/bin/env ruby
- #
- require 'rubygems'
+
  require 'opener/daemons'

- exec_path = File.expand_path("../../exec/tokenizer.rb", __FILE__)
- Opener::Daemons::Controller.new(:name=>"tokenizer",
- :exec_path=>exec_path)
+ controller = Opener::Daemons::Controller.new(
+ :name => 'opener-tokenizer',
+ :exec_path => File.expand_path('../../exec/tokenizer.rb', __FILE__)
+ )

+ controller.run
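The daemon script above resolves its exec script with the `File.expand_path(..., __FILE__)` idiom. With two arguments, `File.expand_path` resolves the first path relative to the second as if the second were a directory entry, so `__FILE__` itself counts as one path component and an extra `..` is needed. A small sketch (the paths are illustrative, not taken from the gem):

```ruby
# Stand-in for __FILE__ as seen by bin/tokenizer-daemon.
file = '/app/bin/tokenizer-daemon'

# One '..' strips the script name, the second climbs out of bin/.
exec_path = File.expand_path('../../exec/tokenizer.rb', file)

puts exec_path  # => /app/exec/tokenizer.rb
```

This is why the pattern is always `'../..'` rather than `'..'` when climbing one directory from a `__FILE__`-relative path.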
bin/tokenizer-server CHANGED
@@ -1,8 +1,10 @@
  #!/usr/bin/env ruby

- require 'puma/cli'
+ require 'opener/webservice'

- rack_config = File.expand_path('../../config.ru', __FILE__)
+ parser = Opener::Webservice::OptionParser.new(
+ 'opener-tokenizer',
+ File.expand_path('../../config.ru', __FILE__)
+ )

- cli = Puma::CLI.new([rack_config] + ARGV)
- cli.run
+ parser.run
exec/tokenizer.rb CHANGED
@@ -1,8 +1,9 @@
  #!/usr/bin/env ruby
- #
+
  require 'opener/daemons'
- require 'opener/tokenizer'

- options = Opener::Daemons::OptParser.parse!(ARGV)
- daemon = Opener::Daemons::Daemon.new(Opener::Tokenizer, options)
+ require_relative '../lib/opener/tokenizer'
+
+ daemon = Opener::Daemons::Daemon.new(Opener::Tokenizer)
+
  daemon.start
lib/opener/tokenizer.rb CHANGED
@@ -1,8 +1,9 @@
- require 'opener/tokenizers/base'
- require 'nokogiri'
  require 'open3'
- require 'optparse'
+
  require 'opener/core'
+ require 'opener/tokenizers/base'
+ require 'nokogiri'
+ require 'slop'

  require_relative 'tokenizer/version'
  require_relative 'tokenizer/cli'
@@ -41,8 +42,10 @@ module Opener
  #
  # @option options [Array] :args Collection of arbitrary arguments to pass
  # to the individual tokenizer commands.
+ #
  # @option options [String] :language The language to use for the
  # tokenization process.
+ #
  # @option options [TrueClass|FalseClass] :kaf When set to `true` the input
  # is assumed to be KAF.
  #
@@ -51,39 +54,36 @@ module Opener
  end

  ##
- # Processes the input and returns an array containing the output of STDOUT,
- # STDERR and an object containing process information.
+ # Tokenizes the input and returns the results as a KAF document.
  #
  # @param [String] input
- # @return [Array]
+ # @return [String]
  #
- def run(input)
- begin
- if options[:kaf]
- language, input = kaf_elements(input)
- else
- language = options[:language]
- end
-
- unless valid_language?(language)
- raise ArgumentError, "The specified language (#{language}) is invalid"
- end
-
- kernel = language_constant(language).new(:args => options[:args])
-
- stdout, stderr, process = Open3.capture3(*kernel.command.split(" "), :stdin_data => input)
- raise stderr unless process.success?
- return stdout
-
- rescue Exception => error
- return Opener::Core::ErrorLayer.new(input, error.message, self.class).add
+ def run input, params = {}
+ if options[:kaf]
+ language, input = kaf_elements(input)
+ else
+ language = options[:language]
+ end
+
+ unless valid_language?(language)
+ raise Core::UnsupportedLanguageError, language
  end
+
+ kernel = language_constant(language).new(:args => options[:args])
+
+ stdout, stderr, process = Open3.capture3(
+ *kernel.command.split(" "),
+ :stdin_data => input
+ )
+
+ raise stderr unless process.success?
+
+ return stdout
  end

  alias tokenize run

- private
-
  ##
  # Returns an Array containing the language an input from a KAF document.
  #
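The rewritten `run` method above shells out through `Open3.capture3`, feeding the input text on stdin and raising the captured stderr if the child process fails. A minimal standalone sketch of that pattern, using a throwaway `ruby -e` child in place of the Perl tokenizer core:

```ruby
require 'open3'

# Feed input on stdin, capture stdout/stderr and the exit status, following
# the same shape as the tokenizer's run method. The child command here is a
# stand-in, not the real tokenizer kernel.
command = ['ruby', '-e', 'print STDIN.read.upcase']

stdout, stderr, status = Open3.capture3(*command, :stdin_data => 'some text')

raise stderr unless status.success?

puts stdout  # => SOME TEXT
```

Note the diff also drops the blanket `rescue Exception` and the `ErrorLayer` wrapping: errors now propagate to the caller (or surface as `Core::UnsupportedLanguageError` for bad language codes) instead of being serialized into the output.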
@@ -98,19 +98,25 @@ module Opener
  return language, text
  end

+ private
+
  ##
  # @param [String] language
  # @return [Class]
  #
  def language_constant(language)
- Opener::Tokenizers.const_get(language.upcase)
+ name = Core::LanguageCode.constant_name(language)
+
+ Tokenizers.const_get(name)
  end

  ##
  # @return [TrueClass|FalseClass]
  #
  def valid_language?(language)
- return Opener::Tokenizers.const_defined?(language.upcase)
+ name = Core::LanguageCode.constant_name(language)
+
+ return Tokenizers.const_defined?(name)
  end
  end # Tokenizer
  end # Opener
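`language_constant` and `valid_language?` above map a language code to a per-language tokenizer class via `const_defined?`/`const_get`. The lookup pattern in isolation, with a stand-in registry module; `Core::LanguageCode.constant_name` comes from opener-core and is approximated here by `upcase`:

```ruby
# Stand-in registry mimicking Opener::Tokenizers; EN/NL would be the real
# per-language tokenizer classes in the gem.
module Tokenizers
  class EN; end
  class NL; end
end

def language_constant(language)
  name = language.upcase  # the gem delegates this mapping to Core::LanguageCode

  Tokenizers.const_get(name) if Tokenizers.const_defined?(name)
end

puts language_constant('en')          # => Tokenizers::EN
puts language_constant('xx').inspect  # => nil
```

Delegating the code-to-constant mapping to opener-core (rather than a bare `upcase`) lets all OpeNER components share one notion of which language codes are valid.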
lib/opener/tokenizer/cli.rb CHANGED
@@ -1,110 +1,92 @@
  module Opener
  class Tokenizer
  ##
- # CLI wrapper around {Opener::Tokenizer} using OptionParser.
+ # CLI wrapper around {Opener::Tokenizer} using Slop.
  #
- # @!attribute [r] options
- # @return [Hash]
- # @!attribute [r] option_parser
- # @return [OptionParser]
+ # @!attribute [r] parser
+ # @return [Slop]
  #
  class CLI
- attr_reader :options, :option_parser
+ attr_reader :parser
+
+ def initialize
+ @parser = configure_slop
+ end

  ##
- # @param [Hash] options
+ # @param [Array] argv
  #
- def initialize(options = {})
- @options = DEFAULT_OPTIONS.merge(options)
-
- @option_parser = OptionParser.new do |opts|
- opts.program_name = 'tokenizer'
- opts.summary_indent = ' '
-
- opts.on('-h', '--help', 'Shows this help message') do
- show_help
- end
-
- opts.on('-v', '--version', 'Shows the current version') do
- show_version
- end
+ def run(argv = ARGV)
+ parser.parse(argv)
+ end

- opts.on(
- '-l',
- '--language [VALUE]',
- 'Uses this specific language'
- ) do |value|
- @options[:language] = value
- @options[:kaf] = false
- end
+ ##
+ # @return [Slop]
+ #
+ def configure_slop
+ return Slop.new(:strict => false, :indent => 2, :help => true) do
+ banner 'Usage: tokenizer [OPTIONS]'

- opts.on('-k', '--kaf', 'Treats the input as a KAF document') do
- @options[:kaf] = true
- end
+ separator <<-EOF.chomp

- opts.on('-p', '--plain', 'Treats the input as plain text') do
- @options[:kaf] = false
- end
+ About:

- opts.separator <<-EOF
+ Tokenizer for KAF/plain text documents with support for various languages
+ such as Dutch and English. This command reads input from STDIN.

  Examples:

- cat example.txt | #{opts.program_name} -l en # Manually specify the language
- cat example.kaf | #{opts.program_name} # Uses the xml:lang attribute
+ cat example.txt | tokenizer -l en # Manually specify the language
+ cat example.kaf | tokenizer # Uses the xml:lang attribute

  Languages:

- * Dutch (nl)
- * English (en)
- * French (fr)
- * German (de)
- * Italian (it)
- * Spanish (es)
+ * Dutch (nl)
+ * English (en)
+ * French (fr)
+ * German (de)
+ * Italian (it)
+ * Spanish (es)

  KAF Input:

- If you give a KAF file as an input (-k or --kaf) the language is taken from
- the xml:lang attribute inside the file. Else it expects that you give the
- language as an argument (-l or --language)
+ If you give a KAF file as an input (-k or --kaf) the language is taken from
+ the xml:lang attribute inside the file. Else it expects that you give the
+ language as an argument (-l or --language)

- Sample KAF syntax:
+ Example KAF:

- <?xml version="1.0" encoding="UTF-8" standalone="no"?>
- <KAF version="v1.opener" xml:lang="en">
- <raw>This is some text.</raw>
- </KAF>
+ <?xml version="1.0" encoding="UTF-8" standalone="no"?>
+ <KAF version="v1.opener" xml:lang="en">
+ <raw>This is some text.</raw>
+ </KAF>
  EOF
- end
- end

- ##
- # @param [String] input
- #
- def run(input)
- option_parser.parse!(options[:args])
+ separator "\nOptions:\n"

- tokenizer = Tokenizer.new(options)
+ on :v, :version, 'Shows the current version' do
+ abort "tokenizer v#{VERSION} on #{RUBY_DESCRIPTION}"
+ end

- stdout, stderr, process = tokenizer.run(input)
+ on :l=, :language=, 'A specific language to use',
+ :as => String,
+ :default => DEFAULT_LANGUAGE

- puts stdout
- end
+ on :k, :kaf, 'Treats the input as a KAF document'
+ on :p, :plain, 'Treats the input as plain text'

- private
+ run do |opts, args|
+ tokenizer = Tokenizer.new(
+ :args => args,
+ :kaf => opts[:plain] ? false : true,
+ :language => opts[:language]
+ )

- ##
- # Shows the help message and exits the program.
- #
- def show_help
- abort option_parser.to_s
- end
+ input = STDIN.tty? ? nil : STDIN.read

- ##
- # Shows the version and exits the program.
- #
- def show_version
- abort "#{option_parser.program_name} v#{VERSION} on #{RUBY_DESCRIPTION}"
+ puts tokenizer.run(input)
+ end
+ end
  end
  end # CLI
  end # Tokenizer
lib/opener/tokenizer/server.rb CHANGED
@@ -1,5 +1,3 @@
- require 'sinatra/base'
- require 'httpclient'
  require 'opener/webservice'

  module Opener
@@ -7,10 +5,11 @@ module Opener
  ##
  # Text tokenizer server powered by Sinatra.
  #
- class Server < Webservice
+ class Server < Webservice::Server
  set :views, File.expand_path('../views', __FILE__)
- text_processor Tokenizer
- accepted_params :input, :kaf, :language
+
+ self.text_processor = Tokenizer
+ self.accepted_params = [:input, :kaf, :language]
  end # Server
  end # Tokenizer
  end # Opener
lib/opener/tokenizer/version.rb CHANGED
@@ -1,5 +1,7 @@
  module Opener
  class Tokenizer
- VERSION = "1.1.2"
+
+ VERSION = '2.2.0'
+
  end
  end
opener-tokenizer.gemspec CHANGED
@@ -7,7 +7,8 @@ Gem::Specification.new do |gem|
  gem.summary = 'Gem that wraps up the the tokenizer cores'
  gem.description = gem.summary
  gem.homepage = 'http://opener-project.github.com/'
- gem.has_rdoc = "yard"
+
+ gem.license = 'Apache 2.0'

  gem.required_ruby_version = '>= 1.9.2'

@@ -16,23 +17,21 @@ Gem::Specification.new do |gem|
  'lib/**/*',
  'config.ru',
  '*.gemspec',
- 'README.md'
+ 'README.md',
+ 'LICENSE.txt'
  ]).select { |file| File.file?(file) }

  gem.executables = Dir.glob('bin/*').map { |file| File.basename(file) }

- gem.add_dependency 'opener-tokenizer-base', '>= 0.3.1'
- gem.add_dependency 'opener-webservice'
+ gem.add_dependency 'opener-tokenizer-base', '~> 1.0'
+ gem.add_dependency 'opener-webservice', '~> 2.1'
+ gem.add_dependency 'opener-daemons', '~> 2.1'
+ gem.add_dependency 'opener-core', '~> 2.4'

  gem.add_dependency 'nokogiri'
- gem.add_dependency 'sinatra', '~>1.4.2'
- gem.add_dependency 'httpclient'
- gem.add_dependency 'opener-daemons'
- gem.add_dependency 'opener-core', '>= 1.0.2'
- gem.add_dependency 'puma'
+ gem.add_dependency 'slop', '~> 3.5'

  gem.add_development_dependency 'rspec'
- gem.add_development_dependency 'cucumber'
  gem.add_development_dependency 'pry'
  gem.add_development_dependency 'rake'
  end
metadata CHANGED
@@ -1,183 +1,141 @@
  --- !ruby/object:Gem::Specification
  name: opener-tokenizer
  version: !ruby/object:Gem::Version
- version: 1.1.2
+ version: 2.2.0
  platform: ruby
  authors:
  - development@olery.com
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-06-19 00:00:00.000000000 Z
+ date: 2020-10-07 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
- name: opener-tokenizer-base
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - '>='
- - !ruby/object:Gem::Version
- version: 0.3.1
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: 0.3.1
+ version: '1.0'
+ name: opener-tokenizer-base
  prerelease: false
  type: :runtime
- - !ruby/object:Gem::Dependency
- name: opener-webservice
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '1.0'
+ - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '2.1'
+ name: opener-webservice
  prerelease: false
  type: :runtime
- - !ruby/object:Gem::Dependency
- name: nokogiri
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '2.1'
+ - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '2.1'
+ name: opener-daemons
  prerelease: false
  type: :runtime
- - !ruby/object:Gem::Dependency
- name: sinatra
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - ~>
+ - - "~>"
  - !ruby/object:Gem::Version
- version: 1.4.2
+ version: '2.1'
+ - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - ~>
+ - - "~>"
  - !ruby/object:Gem::Version
- version: 1.4.2
+ version: '2.4'
+ name: opener-core
  prerelease: false
  type: :runtime
- - !ruby/object:Gem::Dependency
- name: httpclient
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
+ version: '2.4'
+ - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ name: nokogiri
  prerelease: false
  type: :runtime
- - !ruby/object:Gem::Dependency
- name: opener-daemons
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
- - !ruby/object:Gem::Version
- version: '0'
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- prerelease: false
- type: :runtime
  - !ruby/object:Gem::Dependency
- name: opener-core
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - '>='
- - !ruby/object:Gem::Version
- version: 1.0.2
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: 1.0.2
+ version: '3.5'
+ name: slop
  prerelease: false
  type: :runtime
- - !ruby/object:Gem::Dependency
- name: puma
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
- - !ruby/object:Gem::Version
- version: '0'
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - '>='
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '0'
- prerelease: false
- type: :runtime
+ version: '3.5'
  - !ruby/object:Gem::Dependency
- name: rspec
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - '>='
- - !ruby/object:Gem::Version
- version: '0'
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ name: rspec
  prerelease: false
  type: :development
- - !ruby/object:Gem::Dependency
- name: cucumber
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ name: pry
  prerelease: false
  type: :development
- - !ruby/object:Gem::Dependency
- name: pry
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ name: rake
  prerelease: false
  type: :development
- - !ruby/object:Gem::Dependency
- name: rake
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
- - !ruby/object:Gem::Version
- version: '0'
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
- prerelease: false
- type: :development
  description: Gem that wraps up the the tokenizer cores
  email:
  executables:
@@ -187,6 +145,7 @@ executables:
  extensions: []
  extra_rdoc_files: []
  files:
+ - LICENSE.txt
  - README.md
  - bin/tokenizer
  - bin/tokenizer-daemon
@@ -202,7 +161,8 @@ files:
  - lib/opener/tokenizer/views/result.erb
  - opener-tokenizer.gemspec
  homepage: http://opener-project.github.com/
- licenses: []
+ licenses:
+ - Apache 2.0
  metadata: {}
  post_install_message:
  rdoc_options: []
@@ -210,17 +170,17 @@ require_paths:
  - lib
  required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: 1.9.2
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
- - - '>='
+ - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.2.2
+ rubygems_version: 2.7.9
  signing_key:
  specification_version: 4
  summary: Gem that wraps up the the tokenizer cores