delayed_job_es 0.1.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.gitignore +8 -0
- data/.travis.yml +7 -0
- data/CODE_OF_CONDUCT.md +74 -0
- data/Gemfile +4 -0
- data/Gemfile.lock +51 -0
- data/LICENSE.txt +21 -0
- data/README.md +43 -0
- data/Rakefile +10 -0
- data/bin/console +14 -0
- data/bin/setup +8 -0
- data/delayed_job_es.gemspec +41 -0
- data/lib/delayed/backend/es.rb +443 -0
- data/lib/delayed_job_es.rb +26 -0
- data/lib/delayed_job_es/version.rb +3 -0
- metadata +152 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA1:
+  metadata.gz: c88fb7a192a3120949ca0ed3632f4baaf9feb2f8
+  data.tar.gz: aa663a7f03dafebbf16b08639212c96cfe0915c1
+SHA512:
+  metadata.gz: 8cbadc3e99718b31ac0f761b9f08212af85db8711fbb31310633c2c5a530a504fb0bf67ce07ebdce630580d4adfc507674550f3f543df342b4a97924f8fd60a4
+  data.tar.gz: 5ade19e9c69b8937e4bb6264c60f4b382089c8ca9fac6fc7ae9d8a55e6119ab2d2060028f652982720f680f0238249dc92c4257922c52947b729122507c21aac
data/.gitignore
ADDED
data/.travis.yml
ADDED
data/CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,74 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, gender identity and expression, level of experience,
+nationality, personal appearance, race, religion, or sexual identity and
+orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+  advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic
+  address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+  professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at bhargav.r.raut@gmail.com. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at [http://contributor-covenant.org/version/1/4][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/4/
data/Gemfile
ADDED
data/Gemfile.lock
ADDED
@@ -0,0 +1,51 @@
+PATH
+  remote: .
+  specs:
+    delayed_job_es (0.1.3)
+      delayed_job
+      elasticsearch
+      json
+
+GEM
+  remote: https://rubygems.org/
+  specs:
+    activesupport (5.2.4.3)
+      concurrent-ruby (~> 1.0, >= 1.0.2)
+      i18n (>= 0.7, < 2)
+      minitest (~> 5.1)
+      tzinfo (~> 1.1)
+    concurrent-ruby (1.1.6)
+    delayed_job (4.1.8)
+      activesupport (>= 3.0, < 6.1)
+    elasticsearch (7.8.0)
+      elasticsearch-api (= 7.8.0)
+      elasticsearch-transport (= 7.8.0)
+    elasticsearch-api (7.8.0)
+      multi_json
+    elasticsearch-transport (7.8.0)
+      faraday (~> 1)
+      multi_json
+    faraday (1.0.1)
+      multipart-post (>= 1.2, < 3)
+    i18n (1.8.5)
+      concurrent-ruby (~> 1.0)
+    json (2.3.1)
+    minitest (5.14.1)
+    multi_json (1.15.0)
+    multipart-post (2.1.1)
+    rake (10.5.0)
+    thread_safe (0.3.6)
+    tzinfo (1.2.7)
+      thread_safe (~> 0.1)
+
+PLATFORMS
+  ruby
+
+DEPENDENCIES
+  bundler (~> 2.0)
+  delayed_job_es!
+  minitest (~> 5.0)
+  rake (~> 10.0)
+
+BUNDLED WITH
+   2.0.2
data/LICENSE.txt
ADDED
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2020 Bhargav Raut
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,43 @@
+# DelayedJobEs
+
+Welcome to your new gem! In this directory, you'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/delayed_job_es`. To experiment with that code, run `bin/console` for an interactive prompt.
+
+TODO: Delete this and the text above, and describe your gem
+
+## Installation
+
+Add this line to your application's Gemfile:
+
+```ruby
+gem 'delayed_job_es'
+```
+
+And then execute:
+
+    $ bundle
+
+Or install it yourself as:
+
+    $ gem install delayed_job_es
+
+## Usage
+
+TODO: Write usage instructions here
+
+## Development
+
+After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+
+To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+
+## Contributing
+
+Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/delayed_job_es. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
+
+## License
+
+The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+## Code of Conduct
+
+Everyone interacting in the DelayedJobEs project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/[USERNAME]/delayed_job_es/blob/master/CODE_OF_CONDUCT.md).
data/Rakefile
ADDED
data/bin/console
ADDED
@@ -0,0 +1,14 @@
+#!/usr/bin/env ruby
+
+require "bundler/setup"
+require "delayed_job_es"
+
+# You can add fixtures and/or initialization code here to make experimenting
+# with your gem easier. You can also use a different console, if you like.
+
+# (If you use this, don't forget to add pry to your Gemfile!)
+# require "pry"
+# Pry.start
+
+require "irb"
+IRB.start(__FILE__)
data/bin/setup
ADDED
data/delayed_job_es.gemspec
ADDED
@@ -0,0 +1,41 @@
+lib = File.expand_path("lib", __dir__)
+$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
+require "delayed_job_es/version"
+
+Gem::Specification.new do |spec|
+  spec.name          = "delayed_job_es"
+  spec.version       = DelayedJobEs::VERSION
+  spec.authors       = ["Great Manta"]
+  spec.email         = ["icantremember111@gmail.com"]
+  # just get the basic thing working.
+  # we can go for the detailed specs later.
+  spec.summary       = "Elasticsearch on ruby , has had many different gems being built and used. The official library itself has gone through some major revisions, and shifted from a simple persistence pattern to a repository pattern, before which there was a 'Tire' gem that has since been retired. Its clear that a delayed job backend using elasticsearch should be a zero assumption system. In this gem, I have only used two (stable) dependencies (ES-transport and Es-api). Morever the Job class does not include any modules/classes from either of them, and is a PORO that only includes the Delayed::Job::Backend module. This gem is in use in a high-volume production system and has had no major issues, under heavy load."
+  spec.description   = "Elastic Search backend for Delayed Job, only using the ES transport client and the Es-ruby api as dependencies."
+  spec.homepage      = "https://github.com/wordjelly/delayed_job_es"
+  spec.license       = "MIT"
+
+  #spec.metadata["allowed_push_host"] = "TODO: Set to 'http://mygemserver.com'"
+
+  spec.metadata["homepage_uri"] = spec.homepage
+  #spec.metadata["source_code_uri"] = "TODO: Put your gem's public repo URL here."
+  #spec.metadata["changelog_uri"] = "TODO: Put your gem's CHANGELOG.md URL here."
+
+  # Specify which files should be added to the gem when it is released.
+  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
+  spec.files = Dir.chdir(File.expand_path('..', __FILE__)) do
+    `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
+  end
+  spec.bindir        = "exe"
+  spec.executables   = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
+  spec.require_paths = ["lib"]
+
+  spec.add_development_dependency "bundler", "~> 2.0"
+  spec.add_development_dependency "rake", "~> 10.0"
+  spec.add_development_dependency "minitest", "~> 5.0"
+
+  spec.add_runtime_dependency "delayed_job"
+  spec.add_runtime_dependency "elasticsearch"
+  spec.add_runtime_dependency "json"
+
+
+end
data/lib/delayed/backend/es.rb
ADDED
@@ -0,0 +1,443 @@
+require 'delayed_job'
+require 'elasticsearch'
+require 'json'
+
+## Delayed::Backend::Es::Job.create_index!
+## so we need to write a spec here.
+module Delayed
+  module Backend
+    module Es
+      class Job
+        attr_accessor :id
+        attr_accessor :priority
+        attr_accessor :attempts
+        attr_accessor :handler
+        attr_accessor :last_error
+        attr_accessor :run_at
+        attr_accessor :locked_at
+        attr_accessor :locked_by
+        attr_accessor :failed_at
+        attr_accessor :queue
+
+        include Delayed::Backend::Base
+
+        INDEX_NAME = "delayed-jobs"
+        DOCUMENT_TYPE = "job"
+
+        ################################################
+        ##
+        ##
+        ## elasticsearch client
+        ##
+        ##
+        ################################################
+        cattr_accessor :client
+
+        def self.get_client
+          client ||= Elasticsearch::Client.new
+          client
+        end
+
+        def self.mappings
+          {
+            payload_object: {
+              type: 'object'
+            },
+            locked_at: {
+              type: 'date',
+              format: 'yyyy-MM-dd HH:mm:ss'
+            },
+            failed_at: {
+              type: 'date',
+              format: 'yyyy-MM-dd HH:mm:ss'
+            },
+            priority: {
+              type: 'integer'
+            },
+            attempts: {
+              type: 'integer'
+            },
+            queue: {
+              type: 'keyword'
+            },
+            handler: {
+              type: 'keyword'
+            },
+            locked_by: {
+              type: 'keyword'
+            },
+            last_error: {
+              type: 'keyword'
+            },
+            run_at: {
+              type: 'date',
+              format: 'yyyy-MM-dd HH:mm:ss'
+            }
+          }
+        end
+
+        def self.create_index
+          response = get_client.indices.create :index => INDEX_NAME, :body => {
+            mappings: {DOCUMENT_TYPE => { :properties => mappings}}
+          }
+        end
+
+        def self.delete_index
+          response = get_client.indices.delete :index => INDEX_NAME
+        end
+
+        def self.create_indexes
+          delete_index
+          create_index
+        end
+
+        def initialize(hash = {})
+          self.attempts = 0
+          self.priority = 0
+          self.id = SecureRandom.hex(5)
+          hash.each { |k, v| send(:"#{k}=", v) }
+        end
+
+        ## CALLING 'ALL' IS NEVER A GOOD IDEA
+        ## MEMORY LEAKS ALWAYS BEGIN LIKE THIS!!!
+        ## stub to call 10 jobs.
+        def self.all
+          search_response = get_client.search :index => INDEX_NAME, :type => DOCUMENT_TYPE, :body => {:size => 10, :query => {match_all: {}}}
+          search_response["hits"]["hits"].map{|c|
+            new(c["_source"].merge("id" => c["_id"]))
+          }
+        end
+
+        def self.count
+          get_client.count index: INDEX_NAME
+        end
+
+        def self.delete_all
+          create_indexes
+        end
+
+        def self.create(attrs = {})
+          new(attrs).tap do |o|
+            o.save
+          end
+        end
+
+        def self.create!(*args)
+          create(*args)
+        end
+
+        ##################################
+        ##
+        ##
+        ## USES ES SCROLL API
+        ##
+        ##
+        ##################################
+        def self.clear_locks!(worker_name)
+          scroll_id = nil
+          execution_count = 0
+          while true
+            begin
+              response = nil
+              # Display the initial results
+              #puts "--- BATCH 0 -------------------------------------------------"
+              #puts r['hits']['hits'].map { |d| d['_source']['title'] }.inspect
+              if scroll_id.blank?
+                response = get_client.search index: INDEX_NAME, scroll: '4m', body: {_source: false, query: {term: {locked_by: worker_name}}}
+              else
+                response = get_client.scroll scroll_id: scroll_id, scroll: '4m'
+              end
+
+              scroll_id = response['_scroll_id']
+
+              job_ids = response['hits']['hits'].map{|c| c['_id']}
+
+              break if job_ids.blank?
+
+              bulk_array = []
+
+              script =
+              {
+                :lang => "painless",
+                :params => {
+
+                },
+                :source => '''
+                  ctx._source.locked_at = null;
+                  ctx._source.locked_by = null;
+                '''
+              }
+
+              job_ids.each do |jid|
+
+                bulk_array << {
+                  _update: {
+                    _index: INDEX_NAME,
+                    _type: DOCUMENT_TYPE,
+                    _id: jid,
+                    data: {
+                      script: script,
+                      scripted_upsert: false,
+                      upsert: {}
+                    }
+                  }
+                }
+
+              end
+
+              bulk_response = get_client.bulk body: bulk_array
+
+              execution_count +=1
+
+              break if execution_count > 10
+            rescue => e
+              puts "error clearing locks--->"
+              puts e.to_s
+              break
+            end
+          end
+        end
+
+        # Find a few candidate jobs to run (in case some immediately get locked by others).
+        def self.find_available(worker_name, limit = 5, max_run_time = Worker.max_run_time)
+          #puts "max run time is:"
+          #puts Worker.max_run_time
+          right_now = Time.now
+          #####################################################
+          ##
+          ##
+          ## THE BASE QUERY
+          ## translated into human terms
+          ## any job where
+          ## 1. run_at is less than the current time
+          ## AND
+          ## 2. locked_by : current_worker OR locked_At : nil OR locked_at < (time_now - max_run_time)
+          ## AND
+          ## 3. failed_at : nil
+          ## AND
+          ## OPTIONAL ->
+          ## priority queries
+          ## OPTIONAL ->
+          ## queue name.
+          ##
+          ##
+          #####################################################
+
+          query = {
+            bool: {
+              must: [
+                {
+                  range: {
+                    run_at: {
+                      lte: right_now.strftime("%Y-%m-%d %H:%M:%S")
+                    }
+                  }
+                },
+                {
+                  bool: {
+                    should: [
+                      {
+                        term: {
+                          locked_by: Worker.name
+                        }
+                      },
+                      {
+                        bool: {
+                          must_not: [
+                            {
+                              exists: {
+                                field: "locked_at"
+                              }
+                            }
+                          ]
+                        }
+                      },
+                      {
+                        range: {
+                          locked_at: {
+                            lte: (right_now - max_run_time).strftime("%Y-%m-%d %H:%M:%S")
+                          }
+                        }
+                      }
+                    ]
+                  }
+                }
+              ],
+              must_not: [
+                {
+                  exists: {
+                    field: "failed_at"
+                  }
+                }
+              ]
+            }
+          }
+
+          ################################################
+          ##
+          ##
+          ## ADD PRIORITY CLAUSES
+          ##
+          ##
+          ################################################
+          if Worker.min_priority
+            query[:bool][:must] << {
+              range: {
+                priority: {
+                  gte: Worker.min_priority.to_i
+                }
+              }
+            }
+          end
+
+          if Worker.max_priority
+            query[:bool][:must] << {
+              range: {
+                priority: {
+                  lte: Worker.max_priority.to_i
+                }
+              }
+            }
+          end
+
+
+          ##############################################
+          ##
+          ##
+          ## QUEUES IF SPECIFIED.
+          ##
+          ##
+          ##############################################
+          if Worker.queues.any?
+            query[:bool][:must] << {
+              terms: {
+                queue: Worker.queues
+              }
+            }
+          end
+
+
+          #############################################
+          ##
+          ##
+          ## SORT
+          ##
+          ##
+          ############################################
+          sort = [
+            {"locked_by" => "desc"},
+            {"priority" => "asc"},
+            {"run_at" => "asc"}
+          ]
+
+          ##puts "find available jobs query is:"
+          ##puts JSON.pretty_generate(query)
+
+          search_response = get_client.search :index => INDEX_NAME, :type => DOCUMENT_TYPE,
+            :body => {
+              size: limit,
+              sort: sort,
+              query: query
+            }
+
+
+          #puts "search_response is"
+          #puts search_response["hits"]["hits"]
+          ## it would return the first hit.
+          search_response["hits"]["hits"].map{|c|
+            k = new(c["_source"])
+            k.id = c["_id"]
+            k
+          }
+
+        end
+
+        # Lock this job for this worker.
+        # Returns true if we have the lock, false otherwise.
+        def lock_exclusively!(_max_run_time, worker)
+          #puts "called lock exclusively ------------------------>"
+
+          script =
+          {
+            :lang => "painless",
+            :params => {
+              :locked_at => self.class.db_time_now.strftime("%Y-%m-%d %H:%M:%S"),
+              :locked_by => worker
+            },
+            :source => '''
+              ctx._source.locked_at = params.locked_at;
+              ctx._source.locked_by = params.locked_by;
+            '''
+          }
+          #begin
+          response = self.class.get_client.update(index: INDEX_NAME, type: DOCUMENT_TYPE, id: self.id.to_s, body: {
+            :script => script,
+            :scripted_upsert => false,
+            :upsert => {}
+          })
+
+          self
+        end
+
+        def self.db_time_now
+          Time.current
+        end
+
+        def update_attributes(attrs = {})
+          attrs.each { |k, v| send(:"#{k}=", v) }
+          save
+        end
+
+        def destroy
+          # gotta do this.
+          #puts "Calling destroy"
+          self.class.get_client.delete :index => INDEX_NAME, :type => DOCUMENT_TYPE, :id => self.id.to_s
+        end
+
+        def json_representation
+          if self.respond_to? "as_json"
+            as_json.except("payload_object").except(:payload_object)
+          else
+            puts "payload object is ----------->"
+            puts self.payload_object
+            attributes = {}
+            self.class.mappings.keys.each do |attr|
+              if attr.to_s == "payload_object"
+                ## this object has to be serialized.
+                ##
+              else
+                attributes[attr] = self.send(attr)
+              end
+            end
+            JSON.generate(attributes)
+          end
+        end
+
+        def save
+          #puts "Came to save --------------->"
+          self.run_at ||= Time.current.strftime("%Y-%m-%d %H:%M:%S")
+          ## so here you do the actual saving.
+          #Elasticsearch::Client.gateway.
+          #puts "object as json -------------->"
+          #puts json_representation
+          save_response = self.class.get_client.index :index => INDEX_NAME, :type => DOCUMENT_TYPE, :body => json_representation, :id => self.id.to_s
+          #puts "save response is: #{save_response}"
+          self.class.all << self unless self.class.all.include?(self)
+          true
+        end
+
+        def save!
+          save
+        end
+
+        def reload
+          #puts "called reload job---------------->"
+          object = self.class.get_client.get :id => self.id, :index => INDEX_NAME, :type => DOCUMENT_TYPE
+          k = self.class.new(object["_source"])
+          k.id = object["_id"]
+          k
+        end
+      end
+    end
+  end
+end
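For reference, the candidate-selection query that `find_available` assembles can be reproduced as a plain Ruby hash without a running Elasticsearch cluster. This is a minimal sketch: the worker name and `max_run_time` value below are placeholder assumptions, not values taken from the gem, and the `Worker` priority/queue clauses are omitted.

```ruby
require 'json'

# Hypothetical stand-ins for Delayed::Worker settings (not part of the gem).
worker_name  = 'host:pid:1'
max_run_time = 4 * 3600 # seconds; delayed_job's default is 4 hours
right_now    = Time.now

# Mirrors the base bool query: run_at <= now, AND (locked by me OR never
# locked OR lock expired), AND not failed.
query = {
  bool: {
    must: [
      { range: { run_at: { lte: right_now.strftime('%Y-%m-%d %H:%M:%S') } } },
      { bool: { should: [
        { term: { locked_by: worker_name } },
        { bool: { must_not: [{ exists: { field: 'locked_at' } }] } },
        { range: { locked_at: { lte: (right_now - max_run_time).strftime('%Y-%m-%d %H:%M:%S') } } }
      ] } }
    ],
    must_not: [{ exists: { field: 'failed_at' } }]
  }
}

puts JSON.pretty_generate(query)
```

Printing the hash with `JSON.pretty_generate` is a quick way to eyeball the clause nesting before it is sent as the `query` key of a search body.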
data/lib/delayed_job_es.rb
ADDED
@@ -0,0 +1,26 @@
+require "delayed_job_es/version"
+require 'delayed/backend/es'
+require 'delayed_job'
+require 'elasticsearch'
+require 'json'
+
+Delayed::Worker.backend = Delayed::Backend::Es::Job
+
+module DelayedJobEs
+  class Error < StandardError; end
+  # Your code goes here...
+  class DummyJob
+
+    attr_accessor :arguments
+
+    def initialize(args={})
+      @arguments = args[:arguments]
+    end
+
+    def perform
+      puts "Peforming dummyjob at #{Time.now}, with arguments : #{@arguments}"
+    end
+
+  end
+
+end
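Both `Delayed::Backend::Es::Job#initialize` and `DummyJob` above use the same hash-driven assignment idiom: set defaults, then let `hash.each { |k, v| send(:"#{k}=", v) }` override them. A standalone sketch of that pattern (the `SketchJob` class is illustrative, not part of the gem):

```ruby
require 'securerandom'

# A stripped-down PORO using the same initialization idiom as the Job class.
class SketchJob
  attr_accessor :id, :priority, :attempts, :queue

  def initialize(hash = {})
    self.attempts = 0
    self.priority = 0
    self.id = SecureRandom.hex(5)           # 10-character hex id, as in the gem
    hash.each { |k, v| send(:"#{k}=", v) }  # any supplied key overrides a default
  end
end

job = SketchJob.new(priority: 3, queue: 'mailers')
puts job.priority # 3
puts job.attempts # 0
```

Because assignment goes through the writer methods, the same mechanism rehydrates a job from an Elasticsearch `_source` hash in `reload` and `find_available`.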
metadata
ADDED
@@ -0,0 +1,152 @@
+--- !ruby/object:Gem::Specification
+name: delayed_job_es
+version: !ruby/object:Gem::Version
+  version: 0.1.5
+platform: ruby
+authors:
+- Great Manta
+autorequire:
+bindir: exe
+cert_chain: []
+date: 2020-08-17 00:00:00.000000000 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: bundler
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '2.0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '2.0'
+- !ruby/object:Gem::Dependency
+  name: rake
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '10.0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '10.0'
+- !ruby/object:Gem::Dependency
+  name: minitest
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '5.0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '5.0'
+- !ruby/object:Gem::Dependency
+  name: delayed_job
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: elasticsearch
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: json
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+description: Elastic Search backend for Delayed Job, only using the ES transport client
+  and the Es-ruby api as dependencies.
+email:
+- icantremember111@gmail.com
+executables: []
+extensions: []
+extra_rdoc_files: []
+files:
+- ".gitignore"
+- ".travis.yml"
+- CODE_OF_CONDUCT.md
+- Gemfile
+- Gemfile.lock
+- LICENSE.txt
+- README.md
+- Rakefile
+- bin/console
+- bin/setup
+- delayed_job_es.gemspec
+- lib/delayed/backend/es.rb
+- lib/delayed_job_es.rb
+- lib/delayed_job_es/version.rb
+homepage: https://github.com/wordjelly/delayed_job_es
+licenses:
+- MIT
+metadata:
+  homepage_uri: https://github.com/wordjelly/delayed_job_es
+post_install_message:
+rdoc_options: []
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+requirements: []
+rubyforge_project:
+rubygems_version: 2.6.14.4
+signing_key:
+specification_version: 4
+summary: Elasticsearch on ruby , has had many different gems being built and used.
+  The official library itself has gone through some major revisions, and shifted from
+  a simple persistence pattern to a repository pattern, before which there was a 'Tire'
+  gem that has since been retired. Its clear that a delayed job backend using elasticsearch
+  should be a zero assumption system. In this gem, I have only used two (stable) dependencies
+  (ES-transport and Es-api). Morever the Job class does not include any modules/classes
+  from either of them, and is a PORO that only includes the Delayed::Job::Backend
+  module. This gem is in use in a high-volume production system and has had no major
+  issues, under heavy load.
+test_files: []