eddieroger-rwikibot 2.0.2

data/CHANGELOG ADDED
@@ -0,0 +1,79 @@
+ RWikiBot ChangeLog
+
+ 2.0.1
+ - Changed the gemspec so that it will (hopefully) be included in GitHub gems.
+
+ 2.0 **BREAKING RELEASE** - this will break pretty much all 1.X bots.
+ - It's smart enough now to handle query-continues! As such, there's a new dependency - deep_merge. Get it - it's available as a gem.
+ - Made a Page class. Now, instead of bot.delete_page(title...), you do page.delete
+ - Got rid of the stupid Result class. That was a terrible decision.
+ - edit_page is now save, as part of the Page class concept
+ - Changed user-agent to bot/RWikiBot/2.0
+ - page_exists has been replaced with the page attribute "missing" - this aligns with MediaWiki's behavior
+ - Removed redirects since it doesn't matter anymore now that the API is solidified
+ - Removed image_embedded_in since it's defunct
+ - Removed revisions due to limited functionality
+ - Removed pageid_to_title because the API is smart enough to not require conversion
+ - Made the utilities all private, because utilities don't need to be public
+ - Shortened up the code considerably by returning make_request directly where possible
+ - Moved make_request and its new friend raw_call to utilities, because that's what they are
+ - Removed normalize, since I can get the "missing" attribute when creating a page, and that's good enough
+
+ 1.2
+ - Added edit functionality to complete the API
+ - Played catch-up with a whole lot of methods
+
+ 1.1
+ - Switched from YAML to XML for a handful of reasons, mostly poorly formatted YAML errors in the API
+ - Disabled cookies on login since they're hard to maintain, bad long-term practice, and the API supports going without (you're welcome)
+ - Struck the "name" and "prefix" fields from initialize. Those were antiquated anyway.
+ - Reordered the methods to align with documentation
+ - Changed User-Agent to bot-RWikiBot/1.1
+ - Refined error handling (sorta)
+
+ 1.0.10
+ - Changed a GET back to POST since the bug wasn't with me.
+
+ 1.0.9
+ - Disabled logging because of permission errors when writing log files as a MediaWiki extension
+
+ 1.0.8
+ - Switched BACK to initialize-time config since calling from a hook didn't play with the files right, I think
+
+ 1.0.7
+ - Added PAGEID_TO_TITLE
+ - Turned Logger level up to INFO - sorry for forgetting that :-)
+
+ 1.0.6
+ - Added VERSION
+ - Removed some _more_ debugging code
+ - Fixed login code to send back tokens and such, thereby fixing login altogether. Bots are now welcome!
+
+ 1.0.5
+ - Added parameters for queries to the rdoc - save MediaWiki some bandwidth
+ - Added BACKLINKS, PAGE_EXISTS?, EMBEDDED_IN, IMAGE_EMBEDDED_IN
+ - Fixed a bug in ALL_PAGES where it wasn't actually sending the options hash as part of the request. Oops.
+ - Removed some debug puts
+ - Premiered the RWikiBot logo. Not in the source, but it exists now.
+
+ 1.0.4
+ - Changed config.yaml to sample-config.yaml
+ - Had rake include the sample-config.yaml file
+ - Fixed broken cookie code to ask for wikicookieprefix, which is usually dbname+tableprefix
+
+ 1.0.3
+ - Bugfix in config file
+
+ 1.0.2
+ - Changed initialize to ask for a configuration to use, defaulting to 'default'
+ - Modified config.yaml to prompt for wikidbprefix, which is the cookie identifier
+ - PATCH: Changed bake to use wikidbprefix in the cookie name
+
+ 1.0.1
+ - Added very basic error handling around the MAKE_REQUEST method since MediaWiki returns invalid YAML
+ - Fixed login method to accept a domain value
+ - Check for domain being present and set in config.yaml
+ - Added redirect? method to return true/false if a title is a redirect page
+ - Changed from "framework" to "library"
+
+ 1.0.0 - Initial Version
data/README.textile ADDED
@@ -0,0 +1,31 @@
+ h1=(). This is "RWikiBot":rwb 2.0
+
+ h2. Welcome.
+
+ Thanks for checking out RWikiBot. For me, this is very much a labor of love, so I hope you find it as useful as I do. And if you have new or useful patches or requests, please send them my way! For the most up-to-date information on "RWikiBot":rwb, check out the "site":rwb.
+
+ h2. Installation
+
+ Want it? Come get it! RWikiBot is available as a gem from RubyForge or GitHub.
+
+ For the RubyForge gem, just type @gem install rwikibot@.
+
+ For the GitHub gem (which is probably more recent), do this:
+ # @gem sources -a http://gems.github.com@ (you only have to do this once)
+ # @sudo gem install eddieroger-rwikibot@
+
+ h2. What's New? (highlights - see CHANGELOG for more)
+
+ * It's smart enough now to handle query-continues! As such, there's a new dependency - deep_merge. Get it - it's available as a gem.
+ * Made a Page class. Now, instead of bot.delete_page(title...), you do page.delete
+ * Got rid of the stupid Result class. That was a terrible decision.
+ * edit_page is now save, as part of the Page class concept
+ * Changed user-agent to bot/RWikiBot/2.0
+
+ h2. The Boring Stuff.
+
+ Author: Eddie Roger ("eddieroger@gmail.com":mailto:eddieroger@gmail.com) "eddieroger.com":http://www.eddieroger.com <br/>
+ Copyright: Copyright (c) 2009 Eddie Roger <br/>
+ License: GNU/GPL 2.0
+
+ [rwb]http://www.rwikibot.net
data/Rakefile ADDED
@@ -0,0 +1,45 @@
+ require 'rubygems'
+
+ Gem::manage_gems
+
+ require 'rake/gempackagetask'
+ require 'rake/rdoctask'
+
+ PKG_NAME = 'rwikibot'
+ PKG_VERSION = '2.0.0'
+ PKG_FILE_NAME = "#{PKG_NAME}-#{PKG_VERSION}"
+ RUBY_FORGE_PROJECT = 'rwikibot'
+ RUBY_FORGE_USER = 'eddieroger'
+
+ spec = Gem::Specification.new do |s|
+   s.platform = Gem::Platform::RUBY
+   s.name = PKG_NAME
+   s.version = PKG_VERSION
+   s.author = "Eddie Roger"
+   s.email = "eddieroger @nospam@ gmail.com"
+   s.summary = "A library for creating MediaWiki bots."
+   s.homepage = "http://www.rwikibot.net"
+   s.rubyforge_project = 'RWikiBot'
+   s.files = FileList['lib/*.rb', 'test/*'].to_a
+   s.require_path = "lib"
+   s.test_files = Dir.glob('tests/*.rb')
+   s.has_rdoc = true
+   s.rdoc_options = ['--inline-source', '--force-update']
+   s.extra_rdoc_files = ["README.textile", "CHANGELOG"]
+ end
+
+ Rake::GemPackageTask.new(spec) do |pkg|
+   pkg.need_tar = true
+ end
+
+ Rake::RDocTask.new(:rdoc) do |rdoc|
+   rdoc.rdoc_dir = 'rdoc'
+   rdoc.title = "#{PKG_NAME} -- the best way to create MediaWiki bots in Ruby"
+   rdoc.options << "--inline-source"
+   rdoc.rdoc_files.include('README.textile', 'CHANGELOG')
+   rdoc.rdoc_files.include('lib/**/*.rb')
+ end
+
+ task :default => "pkg/#{spec.name}-#{spec.version}.gem" do
+   puts "generated latest version"
+ end
data/lib/errors.rb ADDED
@@ -0,0 +1,14 @@
+ module RWBErrors
+   class LoginError < StandardError
+   end
+
+   class RWikiBotError < StandardError
+   end
+
+   class VersionTooLowError < StandardError
+   end
+
+   class NotLoggedInError < StandardError
+   end
+ end
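These error classes are plain StandardError subclasses, meant to be rescued by bot scripts. A minimal standalone sketch of that pattern follows; `try_login` and its canned result hash are hypothetical, but the raise mirrors the failure path in RWikiBot#login.

```ruby
# Mirrors lib/errors.rb: a namespace of simple StandardError subclasses.
module RWBErrors
  class LoginError < StandardError; end
  class VersionTooLowError < StandardError; end
end

# Hypothetical login attempt; raises the way RWikiBot#login does on failure.
def try_login(result)
  unless result['result'] == 'Success'
    raise RWBErrors::LoginError, "#{result['result']}: #{result['details']}"
  end
  true
end

begin
  try_login('result' => 'WrongPass', 'details' => 'Incorrect password entered')
rescue RWBErrors::LoginError => e
  puts "Login failed: #{e.message}"
end
# prints: Login failed: WrongPass: Incorrect password entered
```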
data/lib/pages.rb ADDED
@@ -0,0 +1,190 @@
+ ## This class defines a MediaWiki page. Think of it like this: the bot handles
+ ## site-related stuff, but to take action on a page, you need a page object.
+ require 'rwikibot'
+ require 'errors'
+
+ module Pages
+   include RWBErrors
+
+   class Page
+     attr_reader :title, :namespace, :new, :length, :counter, :lastrevid, :missing
+
+     # Creates a new Page object.
+     def initialize(bot, title='')
+       @bot = bot
+
+       info = info(title)
+
+       @title     = info['title']
+       @namespace = info['ns']
+       @new       = info.has_key?('new')
+       @length    = info['length']
+       @counter   = info['counter']
+       @lastrevid = info['lastrevid']
+       @missing   = info.has_key?('missing')
+     end
+
+     # I used to have an exists method (page_exists), removed it in 2.0, and now I'm bringing it back.
+     def exists?
+       !@missing
+     end
+
+     # This will get only the content of the article. It is a modification of
+     # revisions, made to specifically pull the content. I thought it would be
+     # useful.
+     def content(options=nil)
+       post_me = {'prop' => 'revisions', 'titles' => @title, 'rvprop' => 'content'}
+       post_me.merge!(options) if options
+
+       revisions_result = @bot.make_request('query', post_me)
+       revisions_result.fetch('pages').fetch('page').fetch('revisions').fetch('rev')
+     end
+
+     # If you have to ask what this method does, don't use it. Seriously, use
+     # with caution - this method does not have a confirmation step, and
+     # deletions (while restorable) are immediate.
+     def delete(reason="Deleted by RWikiBot")
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,12)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       post_me = {
+         'title'  => @title,
+         'token'  => get_token('delete'),
+         'reason' => reason
+       }
+
+       @bot.make_request('delete', post_me)
+     end
+
+     # This method fetches any article that links to this one. Returned in
+     # alphabetical order.
+     def backlinks(options=nil)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,9)
+
+       post_me = {'list' => 'backlinks', 'bltitle' => @title}
+       post_me.merge!(options) if options
+
+       backlinks_result = @bot.make_request('query', post_me)
+       backlinks_result.fetch('backlinks').fetch('bl')
+     end
+
+     # This method pulls any page that includes the template requested. Please
+     # note - the template must be the full name, like "Template:Disputed" or
+     # "Template:Awesome".
+     def embedded_in(options=nil)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,9)
+
+       # This will get all pages. Limits vary based on user rights of the bot. Set to bot.
+       post_me = {'list' => 'embeddedin', 'eititle' => @title}
+       post_me.merge!(options) if options
+
+       embeddedin_result = @bot.make_request('query', post_me)
+       embeddedin_result.fetch('embeddedin').fetch('ei')
+     end
+
+     # I decided to split this up since I wanted to normalize the bot framework
+     # as much as possible - in other words, make it as easy to use as possible.
+     # I think the sacrifice of more methods is worth having more English-looking
+     # code. It's the Ruby way.
+     # Info will return information about the page: namespace, normalized title,
+     # last touched, etc.
+     def info(titles)
+       post_me = {'prop' => 'info', 'titles' => titles}
+       info_result = @bot.make_request('query', post_me)
+       info_result.fetch('pages').fetch('page')
+     end
+
+     # This method will let you move a page from one name to another. A move
+     # token is required for this to work. Keep that in mind. (get_token much?)
+     def move(to, reason, movetalk=true, noredirect=false)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,12)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       post_me = {
+         'from'   => @title,
+         'to'     => "#{to}",
+         'token'  => get_token('move'),
+         'reason' => "#{reason}",
+       }
+
+       # These flags should only be part of post_me when the corresponding
+       # arguments are true (movetalk is, by default).
+       post_me['movetalk']   = '' if movetalk
+       post_me['noredirect'] = '' if noredirect
+
+       @bot.make_request('move', post_me)
+     end
+
+     # Rollback does what it says - rolls back an article one version in the
+     # wiki. This is a function that requires not only a token, but a previous
+     # user.
+     def rollback(summary="", markbot=true)
+       temp_token = get_token("rollback") # special for rollback. Stupid rollback.
+       post_me = {
+         'title'   => @title,
+         'token'   => temp_token['token'],
+         'user'    => temp_token['user'],
+         'summary' => summary
+       }
+
+       post_me['markbot'] = '' if markbot
+       @bot.make_request('rollback', post_me)
+     end
+
+     # This method is used to edit pages. Not much more to say about it. Be
+     # sure you're logged in and have a token (get_token). Options is a hash
+     # of extra values allowed by the API.
+     def save(content, summary=nil, options=nil)
+       post_me = {
+         'text'     => "#{content}",
+         'token'    => get_token("edit"),
+         'title'    => @title,
+         'summary'  => "#{summary}",
+         'edittime' => Time.now.strftime("%Y%m%d%H%M%S"),
+       }
+
+       post_me.merge!(options) if options
+
+       @bot.make_request('edit', post_me).fetch('result')
+     end
+
+     private
+
+     # This method should universally return tokens; just give title and type.
+     # You will receive a token string (suitable for use in other methods), so
+     # plan accordingly. Use an edit token for both editing and creating
+     # articles (edit_article, create_article). For rollback, more than just a
+     # token is required. So, for token=rollback, you get a hash of token|user.
+     # Just the way it goes.
+     def get_token(intoken)
+       if intoken.downcase == 'rollback'
+         # specific to rollback
+         post_me = {
+           'prop'    => 'revisions',
+           'rvtoken' => intoken,
+           'titles'  => @title
+         }
+       else
+         post_me = {
+           'prop'    => 'info',
+           'intoken' => intoken,
+           'titles'  => @title
+         }
+       end
+       raw_token = @bot.make_request('query', post_me)
+
+       if intoken.downcase == 'rollback'
+         # Damn this decision to make rollback special! Wasn't mine, I just have to live by it.
+         token2 = raw_token.fetch('pages').fetch('page').fetch('revisions').fetch('rev')
+         {'token' => token2.fetch('rollbacktoken'), 'user' => token2.fetch('user')}
+       else
+         raw_token.fetch('pages').fetch('page').fetch("#{intoken}token")
+       end
+     end
+   end
+ end
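get_token digs the token out of the hash that XmlSimple produces from the API's XML response. A network-free sketch of that extraction follows; the canned `raw_token` hash is shaped to match get_token's fetch chain (`ForceArray => false`), and the token value itself is made up.

```ruby
# Canned, XmlSimple-style response for action=query&prop=info&intoken=edit,
# shaped to match the fetch chain in Page#get_token. The token value is fake.
raw_token = {
  'pages' => {
    'page' => {
      'title'     => 'Main Page',
      'ns'        => '0',
      'edittoken' => 'd41d8cd98f00b204e9800998ecf8427e+\\'
    }
  }
}

intoken = 'edit'
# The non-rollback branch: the token lives under "#{intoken}token".
token = raw_token.fetch('pages').fetch('page').fetch("#{intoken}token")
puts token
```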
data/lib/rwikibot.rb ADDED
@@ -0,0 +1,171 @@
+ # This is a framework upon which to create MediaWiki bots. It provides a set
+ # of methods to access MediaWiki's API and return information in various
+ # forms, depending on the type of information returned. By abstracting these
+ # methods into a Bot object, cleaner script code can be written later.
+ # Furthermore, it facilitates the updating of the API without breaking old
+ # bots. Last, but not least, it's good to abstract.
+ #
+ # Author::    Eddie Roger (mailto:eddieroger@gmail.com)
+ # Copyright:: Copyright (c) 2008 Eddie Roger
+ # License::   GNU/GPL 2.0
+
+ # ruby requires
+ require 'net/http'
+ require 'uri'
+ require 'cgi'
+
+ # my requires
+ require 'errors'
+ require 'utilities'
+ require 'pages'
+
+ # gem requires
+ require 'xmlsimple'
+ require 'deep_merge' # New in 2.0!
+
+ # This is the main bot object. The goal is to represent every API method in
+ # some form here, and then write separate, cleaner scripts in individual bot
+ # files utilizing this framework. Basically, this is an include at best.
+ class RWikiBot
+
+   include RWBErrors
+   include RWBUtilities
+   include Pages
+
+   attr_reader :config
+
+   def initialize(username='rwikibot', password='', api_path='http://www.rwikibot.net/wiki/api.php', domain='', login=false)
+     @config = {
+       'username'  => username,
+       'password'  => password,
+       'api_path'  => api_path,
+       'domain'    => domain,
+       'cookies'   => "",
+       'logged_in' => false,
+       'uri'       => URI.parse(api_path)
+     }
+
+     @config['api_version'] = version.to_f
+
+     # The parentheses are required: the login parameter shadows the login method.
+     login() if login
+   end
+
+   # This is the method that will allow the bot to log in to the wiki. It's not
+   # always necessary, but bots need to log in to save changes or retrieve
+   # watchlists.
+   def login
+     # raise VersionTooLowError unless meets_version_requirement(0,0)
+
+     post_me = {'lgname' => @config.fetch('username'), 'lgpassword' => @config.fetch('password')}
+     if @config.has_key?('domain') && (@config.fetch('domain') != nil)
+       post_me['lgdomain'] = @config.fetch('domain')
+     end
+
+     login_result = make_request('login', post_me)
+
+     # Now we need to change some @config values - specifically, that we're
+     # logged in, plus the variables that come with that. This also changes
+     # make_request's behavior, but I'll comment there.
+     if login_result['result'] == "Success"
+       # All lg variables come directly from the API and are stored in config that way
+       @config['logged_in']    = true
+       @config['lgusername']   = login_result.fetch('lgusername')
+       @config['lguserid']     = login_result.fetch('lguserid')
+       @config['lgtoken']      = login_result.fetch('lgtoken')
+       @config['_session']     = login_result.fetch('sessionid')
+       @config['cookieprefix'] = login_result.fetch('cookieprefix')
+
+       true
+     else
+       raise LoginError, "#{login_result['result']}: #{login_result['details']}"
+     end
+   end
+
+   # Use page to create a new Page object that you can then manipulate. You
+   # could create a Page on its own, but if you do, be _sure_ to pass your bot
+   # along with the title, otherwise you won't get access to the super-fun
+   # make_request method that is pretty much required.
+   def page(title='')
+     Page.new(self, title)
+   end
+
+   # This will return a list of all pages in a given namespace. It returns a
+   # list of pages with the normalized title and page ID, suitable for usage
+   # elsewhere. Accepts all parameters from the API in Hash form. Default is
+   # namespace => 0, which is just plain pages. Nothing 'special'.
+   def all_pages(options=nil)
+     # raise VersionTooLowError unless meets_version_requirement(1,9)
+     # This will get all pages. Limits vary based on user rights of the bot. Set to bot.
+     post_me = {'list' => 'allpages', 'apnamespace' => '0', 'aplimit' => '5000'}
+     post_me.merge!(options) if options
+     allpages_result = make_request('query', post_me)
+     allpages_result.fetch('allpages')['p']
+   end
+
+   # This method will get the watchlist for the bot's MediaWiki username. This
+   # is really only useful if you want the bot to watch a specific list of
+   # pages, and would require the bot maintainer to log in to the wiki as the
+   # bot to set the watchlist.
+   def watchlist(options=nil)
+     # raise VersionTooLowError unless meets_version_requirement(1,10)
+     raise NotLoggedInError unless logged_in?
+     post_me = {'list' => 'watchlist'}
+     post_me.merge!(options) if options
+     make_request('query', post_me).fetch('watchlist').fetch('item')
+   end
+
+   # This method will return wiki-wide recent changes, almost as if you were
+   # looking at the Special:RecentChanges page. But in this format, a bot can
+   # handle it. Also, we're using the API. And bots can't read.
+   def recent_changes(options=nil)
+     # raise VersionTooLowError unless meets_version_requirement(1,10)
+     post_me = {'list' => 'recentchanges', 'rclimit' => '5000'}
+     post_me.merge!(options) if options
+     make_request('query', post_me).fetch('recentchanges').fetch('rc')
+   end
+
+   # This will return a list of the most recent log events. Useful for bots
+   # that want to validate log events, or even just a notify bot that checks
+   # for events and sends them off.
+   def log_events(options=nil)
+     # raise VersionTooLowError unless meets_version_requirement(1,11)
+     post_me = {'list' => 'logevents'}
+     post_me.merge!(options) if options
+     make_request('query', post_me).fetch('logevents').fetch('item')
+   end
+
+   # This is the only meta method. It will return site information. I chose
+   # not to allow it to specify, and it will only return all known properties.
+   def site_info(siprop='general')
+     # raise VersionTooLowError unless meets_version_requirement(1,9)
+     post_me = {'meta' => 'siteinfo', 'siprop' => siprop}
+     siteinfo_result = make_request('query', post_me)
+     siprop == 'general' ?
+       siteinfo_result.fetch('general') :
+       siteinfo_result.fetch('namespaces').fetch('ns')
+   end
+
+   # Get information about the current user.
+   def user_info(uiprop=nil)
+     # raise VersionTooLowError unless meets_version_requirement(1,11)
+     post_me = {'meta' => 'userinfo'}
+     post_me['uiprop'] = uiprop unless uiprop.nil?
+
+     make_request('query', post_me).fetch('userinfo')
+   end
+ end
+
+ # I'm never happy with good enough, and when it comes to my hashes, I like to
+ # see their members. So I changed Hash#to_s. Overriding methods makes me happy.
+ class Hash
+   def to_s
+     "{" + self.map { |key, value| "#{key} => #{value}" }.join(", ") + "}"
+   end
+ end
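The Hash#to_s override is easy to exercise standalone. This self-contained version joins pairs with ", " and handles the empty hash (the chop-based version would reduce an empty hash to just "}"):

```ruby
# A tidied version of the Hash#to_s override: show every key => value pair.
class Hash
  def to_s
    "{" + self.map { |key, value| "#{key} => #{value}" }.join(", ") + "}"
  end
end

puts({'lgname' => 'rwikibot', 'lgpassword' => 'secret'}.to_s)
# => {lgname => rwikibot, lgpassword => secret}
puts({}.to_s)
# => {}
```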
data/lib/utilities.rb ADDED
@@ -0,0 +1,114 @@
+ module RWBUtilities
+
+   private
+
+   # is_redirect?
+   #
+   # Tests to see if a given page title is redirected to another page. Very Ruby.
+   def is_redirect?(title)
+     post_me = {'titles' => title, 'redirects' => '', 'prop' => 'info'}
+     result = make_request('query', post_me)
+     (result['result'] == "Success") && result.has_key?("redirects")
+   end
+
+   # logged_in?
+   #
+   # A quick method for checking whether or not we're logged in, since I don't
+   # want @config exposed.
+   def logged_in?
+     @config['logged_in']
+   end
+
+   # The point of this method is to iterate through an array of hashes, which
+   # most of the other methods return, and remove multiple instances of the
+   # same wiki page. We're most often only concerned with the most recent
+   # revision, so we'll delete old ones.
+   #
+   # Hashes don't respond to Array#uniq. So this is the same-ish.
+   def make_unique(array)
+     # Iterate over a copy so that deleting from the original doesn't skip items.
+     test_array = array.dup
+
+     array.reverse.each do |current_item|
+       test_array.each do |test_item|
+         if current_item.fetch('title') == test_item.fetch('title') && current_item.fetch('revid') > test_item.fetch('revid')
+           # current is the same article as test, and current is newer. Delete test.
+           array.delete(test_item)
+         end
+       end
+     end
+
+     array
+   end
+
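make_unique's intent can be shown without a wiki. This standalone sketch keeps only the highest-revid hash per title, iterating over snapshots (`dup`) on both levels so deletions don't disturb iteration - a small departure from the in-place loops above:

```ruby
# Keep only the newest revision per title, given API-style hashes.
def make_unique(array)
  array.dup.each do |current_item|
    array.dup.each do |test_item|
      if current_item.fetch('title') == test_item.fetch('title') &&
         current_item.fetch('revid') > test_item.fetch('revid')
        array.delete(test_item) # test_item is an older revision of the same page
      end
    end
  end
  array
end

changes = [
  {'title' => 'Main Page', 'revid' => 10},
  {'title' => 'Main Page', 'revid' => 12},
  {'title' => 'Sandbox',   'revid' => 7}
]
puts make_unique(changes).map { |h| "#{h['title']}@#{h['revid']}" }.join(', ')
# => Main Page@12, Sandbox@7
```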
+   # This method will return the version of the MediaWiki server. This is done
+   # by parsing the version number from the generator attribute of the
+   # site_info method. Useful? Yes - maybe your bot is only compatible with
+   # MediaWiki 1.9.0, depending on what methods you use. I like it, anyway.
+   def version
+     site_info.fetch('generator').split(' ')[1]
+   end
+
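site_info's generator string looks like "MediaWiki 1.12.0", and version takes the second word. The meets_version_requirement helper called throughout the library is not shown in this file; here is a plausible, network-free sketch of both, with the generator string canned rather than fetched:

```ruby
# Parse the version out of a canned generator string (normally from site_info).
def version(generator = 'MediaWiki 1.12.0')
  generator.split(' ')[1]
end

# Hypothetical helper matching the calls elsewhere in the library:
# true when the wiki is at least major.minor.
def meets_version_requirement(major, minor)
  have = version.split('.').take(2).map(&:to_i)
  (have <=> [major, minor]) >= 0  # Array#<=> compares element by element
end

puts version                          # => 1.12.0
puts meets_version_requirement(1, 9)  # => true
puts meets_version_requirement(1, 13) # => false
```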
+   # make_request is the method that actually handles making the request to
+   # the API. Since the API is fairly standardized, this method is able to
+   # accept an action and a hash of variables, and it handles all the fun
+   # things MediaWiki likes to be weird about, like cookies and limits and
+   # actions. It's very solid, but I didn't want it public because it also
+   # does some post-processing, and that's not very OO.
+   def make_request(action, post_this)
+     post_this['format'] = 'xml'
+     post_this['action'] = action
+
+     if @config['logged_in']
+       cookies = "#{@config['cookieprefix']}UserName=#{@config['lgusername']}; #{@config['cookieprefix']}UserID=#{@config['lguserid']}; #{@config['cookieprefix']}Token=#{@config['lgtoken']}; #{@config['cookieprefix']}_session=#{@config['_session']}"
+     else
+       cookies = ""
+     end
+
+     headers = {
+       'User-agent' => 'bot-RWikiBot/2.0-rc1',
+       'Cookie'     => cookies
+     }
+
+     r = Hash.new
+     until post_this.nil?
+       return_result, post_this = raw_call(headers, post_this)
+       r.deep_merge(return_result.fetch(action))
+     end
+
+     r
+   end
+
+   # raw_call handles actually, physically talking to the wiki. It is broken
+   # out to handle query-continues where applicable. So, all the methods call
+   # make_request, and it calls raw_call until raw_call returns a nil
+   # post_this.
+   def raw_call(headers, post_this)
+     request = Net::HTTP::Post.new(@config.fetch('uri').path, headers)
+     request.set_form_data(post_this)
+     response = Net::HTTP.new(@config.fetch('uri').host, @config.fetch('uri').port).start { |http|
+       http.request(request)
+     }
+
+     # Extra cookie handling, because editing is based on session IDs and the
+     # wiki generates a new one each time until you start sending it back. I
+     # doubt this will change.
+     if response.header['set-cookie'] != nil
+       @config['_session'] = response.header['set-cookie'].split("=")[1]
+     end
+
+     return_result = XmlSimple.xml_in(response.body, {'ForceArray' => false})
+
+     if return_result.has_key?('error')
+       raise RWikiBotError, "#{return_result.fetch('error').fetch('code').capitalize}: #{return_result.fetch('error').fetch('info')}"
+     end
+
+     if !post_this.keys.any? { |k| k.include?('limit') } && return_result.has_key?('query-continue')
+       # Copy the continue parameters into the next request.
+       return_result.fetch('query-continue').each do |key, value|
+         value.each do |x, y|
+           post_this[x] = y
+         end
+       end
+     else
+       post_this = nil
+     end
+
+     [return_result, post_this]
+   end
+ end
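The make_request/raw_call split can be sketched without a network. Here `fake_call` and its canned batches are invented stand-ins for raw_call: each call returns one "page" of results plus a continue marker, and the loop merges pages until the marker runs out. The real code uses the deep_merge gem; this sketch approximates the merge by concatenating the result arrays.

```ruby
# Stand-in for raw_call: returns [result, next_post], with next_post nil when
# there is nothing left to fetch (mirroring raw_call's contract).
def fake_call(post_this)
  batches = {
    nil => {'allpages' => {'p' => ['A', 'B']}, 'continue' => 'B'},
    'B' => {'allpages' => {'p' => ['C', 'D']}}
  }
  result = batches[post_this['apfrom']]
  # On a continue marker, update the request and go again; otherwise signal
  # the loop to stop by returning nil in place of post_this.
  next_post = result.key?('continue') ? post_this.merge('apfrom' => result['continue']) : nil
  [result, next_post]
end

# The make_request loop shape: keep calling until post_this comes back nil.
r = {'p' => []}
post_this = {'list' => 'allpages'}
until post_this.nil?
  result, post_this = fake_call(post_this)
  r['p'].concat(result['allpages']['p'])
end

puts r['p'].join(', ') # => A, B, C, D
```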
metadata ADDED
@@ -0,0 +1,79 @@
+ --- !ruby/object:Gem::Specification
+ name: eddieroger-rwikibot
+ version: !ruby/object:Gem::Version
+   version: 2.0.2
+ platform: ruby
+ authors:
+ - Eddie Roger
+ autorequire:
+ bindir: bin
+ cert_chain: []
+
+ date: 2009-01-02 00:00:00 -08:00
+ default_executable:
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: deep_merge
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">"
+       - !ruby/object:Gem::Version
+         version: 0.0.0
+     version:
+ - !ruby/object:Gem::Dependency
+   name: xml-simple
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">"
+       - !ruby/object:Gem::Version
+         version: 0.0.0
+     version:
+ description: A Ruby framework for creating MediaWiki robots.
+ email: eddieroger@gmail.com
+ executables: []
+
+ extensions: []
+
+ extra_rdoc_files:
+ - README.textile
+ files:
+ - CHANGELOG
+ - README.textile
+ - Rakefile
+ - lib/rwikibot.rb
+ - lib/pages.rb
+ - lib/errors.rb
+ - lib/utilities.rb
+ has_rdoc: true
+ homepage: http://rwikibot.net/wiki
+ post_install_message:
+ rdoc_options:
+ - --main
+ - --inline-source
+ - --force-update
+ - README.textile
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: "0"
+   version:
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: "0"
+   version:
+ requirements: []
+
+ rubyforge_project:
+ rubygems_version: 1.2.0
+ signing_key:
+ specification_version: 2
+ summary: Creates an abstraction layer between MediaWiki API and Ruby.
+ test_files: []
+