pangdudu-rwikibot 2.0.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/CHANGELOG ADDED
@@ -0,0 +1,93 @@
+ RWikiBot ChangeLog
+
+ 2.0.5
+ - Apparently meets_version_requirement was wrong. Not anymore.
+
+ 2.0.4
+ - I get it - I just suck at keeping versions consistent. And GitHub requires incremental gemspecs.
+
+ 2.0.3
+ - Removed an extra puts. Oops.
+ - Added Page::protect
+ - Reimplemented meets_version_requirement and references to it
+
+ 2.0.2
+ - Random little changes. Forgot to document. Sorry.
+
+ 2.0.1
+ - Changed the gemspec so MAYBE it will be included in GitHub gems.
+
+ 2.0 **BREAKING RELEASE** - this will break pretty much all 1.x bots.
+ - It's smart enough now to handle query-continues! As such, there's a new dependency - deep_merge. Get it - it's available as a gem.
+ - Made a Page class. Now, instead of bot.delete_page(title...), you do page.delete
+ - Got rid of the stupid Result class. That was a terrible decision.
+ - edit_page is now save, as part of the Page class concept
+ - Changed user-agent to bot/RWikiBot/2.0
+ - page_exists has been replaced with the page attribute "missing" - this aligns with MediaWiki's behavior
+ - Removed redirects since it doesn't matter anymore now that the API is solidified
+ - Removed image_embedded_in since it's defunct
+ - Removed revisions due to limited functionality
+ - Removed pageid_to_title because the API is smart enough not to require conversion
+ - Made the utilities all private because utilities don't need to be public
+ - Shortened up the code considerably by returning make_request directly where possible
+ - Moved make_request and its new friend raw_call to utilities, because that's what they are
+ - Removed normalize since I can get the "missing" attribute when creating a page, and that's good enough
+
+ 1.2
+ - Added edit functionality to complete the API
+ - Played catch-up with a whole lot of methods
+
+ 1.1
+ - Switched from YAML to XML for a handful of reasons, most of them being poorly formatted YAML errors in the API
+ - Disabled cookies for login since they're hard to maintain and bad long-term practice, and the API supports going without (you're welcome)
+ - Struck the "name" and "prefix" fields from initialize. Those were antiquated anyway.
+ - Reordered the methods to align with the documentation
+ - Changed User-Agent to bot-RWikiBot/1.1
+ - Refined error handling (sorta)
+
+ 1.0.10
+ - Changed a get back to a post since the bug wasn't with me.
+
+ 1.0.9
+ - Disabled logging because of permission errors when writing log files as a MediaWiki extension
+
+ 1.0.8
+ - Switched BACK to initialize-time config since calling from a hook didn't play with the files right, I think
+
+ 1.0.7
+ - Added PAGEID_TO_TITLE
+ - Turned the Logger level up to INFO - sorry for forgetting that :-)
+
+ 1.0.6
+ - Added VERSION
+ - Removed some _more_ debugging code
+ - Fixed login code to send back tokens and things, thereby fixing login altogether. Bots are now welcome!
+
+ 1.0.5
+ - Added parameters for queries to the rdoc - saves MediaWiki some bandwidth
+ - Added BACKLINKS, PAGE_EXISTS?, EMBEDDED_IN, IMAGE_EMBEDDED_IN
+ - Fixed a bug in ALL_PAGES where it wasn't actually sending the options hash as part of the request. Oops.
+ - Removed some debug puts
+ - Premiered the RWikiBot logo. Not in the source, but it exists now.
+
+ 1.0.4
+ - Changed config.yaml to sample-config.yaml
+ - Had rake include the sample-config.yaml file
+ - Fixed broken cookie code to ask for wikicookieprefix, which is usually dbname+tableprefix
+
+ 1.0.3
+ - Bugfix in the config file
+
+ 1.0.2
+ - Changed initialize to ask for a configuration to use, defaulting to 'default'
+ - Modified config.yaml to prompt for wikidbprefix, which is the cookie identifier
+ - PATCH: Changed bake to use wikidbprefix in the cookie name
+
+ 1.0.1
+ - Added very basic error handling around the MAKE_REQUEST method since MediaWiki returns invalid YAML
+ - Fixed the login method to accept a domain value
+ - Check for domain being present and set in config.yaml
+ - Added redirect? method to return true/false depending on whether a title is a redirect page
+ - Changed from "framework" to "library"
+
+ 1.0.0 - Initial version
data/README.textile ADDED
@@ -0,0 +1,31 @@
+ h1=(). This is "RWikiBot":rwb 2.0
+
+ h2. Welcome.
+
+ Thanks for checking out RWikiBot. For me, this is very much a labor of love, so I hope you find it as useful as I do. And if you come up with useful patches or requests, please send them my way! For the most up-to-date information on "RWikiBot":rwb, check out the "site":rwb.
+
+ h2. Installation
+
+ Want it? Come get it! RWikiBot is available as a gem from RubyForge or GitHub.
+
+ For the RubyForge gem, just type @gem install rwikibot@.
+
+ For the GitHub gem (which is probably more recent), do this:
+ # @gem sources -a http://gems.github.com@ (you only have to do this once)
+ # @sudo gem install pangdudu-rwikibot@
+
+ h2. What's New? (highlights - see CHANGELOG for more)
+
+ * It's smart enough now to handle query-continues! As such, there's a new dependency - deep_merge. Get it - it's available as a gem.
+ * Made a Page class. Now, instead of bot.delete_page(title...), you do page.delete - see the sketch below.
+ * Got rid of the stupid Result class. That was a terrible decision.
+ * edit_page is now save, as part of the Page class concept
+ * Changed user-agent to bot/RWikiBot/2.0
+
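+ Here's a minimal sketch of the 2.0 Page-based workflow. The username, password, API URL, and page title are placeholders - substitute your own wiki's values:
+
+ bc. require 'rwikibot'
+ bot = RWikiBot.new('ExampleBot', 'secret', 'http://example.org/w/api.php', '', true)
+ page = bot.page('Sandbox')
+ puts page.content if page.exists?
+ page.save('Hello from RWikiBot!', 'testing the save method')
+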
+ h2. The Boring Stuff.
+
+ Author: Eddie Roger ("eddieroger@gmail.com":mailto:eddieroger@gmail.com) "eddieroger.com":http://www.eddieroger.com <br/>
+ Copyright: Copyright (c) 2009 Eddie Roger <br/>
+ License: GNU/GPL 2.0
+
+ [rwb]http://www.rwikibot.net
data/Rakefile ADDED
@@ -0,0 +1,45 @@
+ require 'rubygems'
+
+ Gem::manage_gems
+
+ require 'rake/gempackagetask'
+ require 'rake/rdoctask'
+
+ PKG_NAME = 'rwikibot'
+ PKG_VERSION = '2.0.0'
+ PKG_FILE_NAME = "#{PKG_NAME}-#{PKG_VERSION}"
+ RUBY_FORGE_PROJECT = 'rwikibot'
+ RUBY_FORGE_USER = 'eddieroger'
+
+ spec = Gem::Specification.new do |s|
+   s.platform = Gem::Platform::RUBY
+   s.name = PKG_NAME
+   s.version = PKG_VERSION
+   s.author = "Eddie Roger"
+   s.email = "eddieroger @nospam@ gmail.com"
+   s.summary = "A library for creating MediaWiki bots."
+   s.homepage = "http://www.rwikibot.net"
+   s.rubyforge_project = 'RWikiBot'
+   s.files = FileList['lib/*.rb', 'test/*'].to_a
+   s.require_path = "lib"
+   s.test_files = Dir.glob('tests/*.rb')
+   s.has_rdoc = true
+   # Each rdoc option belongs in its own array element; a single
+   # space-separated string would be passed to rdoc as one bogus flag.
+   s.rdoc_options = ['--inline-source', '--force-update']
+   s.extra_rdoc_files = ["README", "CHANGELOG"]
+ end
+
+ Rake::GemPackageTask.new(spec) do |pkg|
+   pkg.need_tar = true
+ end
+
+ Rake::RDocTask.new(:rdoc) do |rdoc|
+   rdoc.rdoc_dir = 'rdoc'
+   rdoc.title = "#{PKG_NAME} -- the best way to create MediaWiki bots in Ruby"
+   rdoc.options << "--inline-source"
+   rdoc.rdoc_files.include('README', 'CHANGELOG')
+   rdoc.rdoc_files.include('lib/**/*.rb')
+ end
+
+ task :default => "pkg/#{spec.name}-#{spec.version}.gem" do
+   puts "generated latest version"
+ end
data/lib/errors.rb ADDED
@@ -0,0 +1,14 @@
+ module RWBErrors
+   class LoginError < StandardError
+   end
+
+   class RWikiBotError < StandardError
+   end
+
+   class VersionTooLowError < StandardError
+   end
+
+   class NotLoggedInError < StandardError
+   end
+
+ end
data/lib/pages.rb ADDED
@@ -0,0 +1,216 @@
+ ## This class defines a MediaWiki page. Think of it like this: the bot handles
+ ## site-related stuff, but to take action on a page, you need a Page object.
+ require 'rwikibot'
+ require 'errors'
+
+ module Pages
+   include RWBErrors
+
+   class Page
+     attr_reader :title, :namespace, :new, :length, :counter, :lastrevid, :missing
+
+     # Creates a new Page object.
+     def initialize(bot, title='')
+       @bot = bot
+
+       info = info(title)
+       @title     = info['title']
+       @namespace = info['ns']
+       @new       = info.has_key?('new')
+       @length    = info['length']
+       @counter   = info['counter']
+       @lastrevid = info['lastrevid']
+       @missing   = info.has_key?('missing')
+     end
+
+     # I used to have an exists method (page_exists), but I got rid of it in
+     # 2.0. I'm bringing it back.
+     def exists?
+       !@missing
+     end
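+
+     # A quick usage sketch (the bot setup and page title here are
+     # hypothetical, not part of the library):
+     #
+     #   bot  = RWikiBot.new('ExampleBot', 'secret', 'http://example.org/w/api.php')
+     #   page = bot.page('Sandbox')
+     #   page.exists?   #=> false if the wiki reported the page as missing
+     #   page.lastrevid #=> ID of the most recent revision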
+
+     # This will get only the content of the article. It is a modification of
+     # revisions, made to specifically pull the content. I thought it would be
+     # useful.
+     def content(options=nil)
+       post_me = {'prop' => 'revisions', 'titles' => @title, 'rvprop' => 'content'}
+       post_me.merge!(options) if options
+
+       revisions_result = @bot.make_request('query', post_me)
+       revisions_result.fetch('pages').fetch('page').fetch('revisions').fetch('rev')
+     end
+
+     # If you have to ask what this method does, don't use it. Seriously, use
+     # with caution - this method does not have a confirmation step, and
+     # deletions (while restorable) are immediate.
+     def delete(reason="Deleted by RWikiBot")
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,12)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       post_me = {
+         'title'  => @title,
+         'token'  => get_token('delete'),
+         'reason' => reason
+       }
+
+       @bot.make_request('delete', post_me)
+     end
+
+     # This method fetches any article that links to this one. Results are
+     # returned in alphabetical order.
+     def backlinks(options=nil)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,9)
+
+       # 'bltitle' is the backlinks module's title parameter.
+       post_me = {'list' => 'backlinks', 'bltitle' => @title}
+       post_me.merge!(options) if options
+
+       backlinks_result = @bot.make_request('query', post_me)
+       backlinks_result.fetch('backlinks').fetch('bl')
+     end
+
+     # This method pulls any page that includes the template requested. Please
+     # note - the template must be the full name, like "Template:Disputed" or
+     # "Template:Awesome".
+     def embedded_in(options=nil)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,9)
+
+       # This will get all pages. Limits vary based on the user rights of the bot.
+       post_me = {'list' => 'embeddedin', 'eititle' => @title}
+       post_me.merge!(options) if options
+
+       embeddedin_result = @bot.make_request('query', post_me)
+       embeddedin_result.fetch('embeddedin').fetch('ei')
+     end
+
+     # I decided to split this up, since I wanted to normalize the bot
+     # framework as much as possible - in other words, to make it as easy to
+     # use as possible. I think the sacrifice of more methods is worth having
+     # more English-looking code. It's the Ruby way.
+     # Info will return information about the page: namespace, normalized
+     # title, last touched, and so on.
+     def info(titles)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,9)
+
+       post_me = {'prop' => 'info', 'titles' => titles}
+       info_result = @bot.make_request('query', post_me)
+       info_result.fetch('pages').fetch('page')
+     end
+
+     # This method will let you move a page from one name to another. A move
+     # token is required for this to work. Keep that in mind. (get_token much?)
+     def move(to, reason, movetalk=true, noredirect=false)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,12)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       post_me = {
+         'from'   => @title,
+         'to'     => "#{to}",
+         'token'  => get_token('move'),
+         'reason' => "#{reason}",
+       }
+
+       # These ifs are necessary because the keys should only be part of
+       # post_me if the passed vars are true (which they are by default)
+       post_me['movetalk']   = '' if movetalk
+       post_me['noredirect'] = '' if noredirect
+
+       @bot.make_request('move', post_me)
+     end # move
+
+     # This method is used to protect (and unprotect!) pages. See the API for
+     # possible values. By default, it will lock a page to require sysop-level
+     # privilege and never expire.
+     def protect(protections='edit=sysop', expiry='infinite', reason='', cascade=true)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,12)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       post_me = {
+         'title'       => @title,
+         'token'       => get_token('protect'),
+         'protections' => protections,
+         'expiry'      => expiry,
+         'reason'      => reason,
+       }
+
+       post_me['cascade'] = '' if cascade
+       @bot.make_request('protect', post_me)
+     end # protect
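+
+     # For example (hypothetical expiry and reason values - a sketch, not
+     # part of the library): protections are "action=level" pairs, and
+     # multiple pairs are joined with "|":
+     #
+     #   page.protect('edit=sysop|move=sysop', '1 week', 'persistent vandalism')
+     #   page.protect('edit=autoconfirmed', 'infinite', 'semi-protect')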
+
+     # Rollback does what it says - rolls back an article one version in the
+     # wiki. This is a function that requires not only a token, but a previous
+     # user.
+     def rollback(summary="", markbot=true)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,12)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       temp_token = get_token("rollback") # special for rollback. Stupid rollback.
+       post_me = {
+         'title'   => @title,
+         'token'   => temp_token['token'],
+         'user'    => temp_token['user'],
+         'summary' => summary
+       }
+
+       post_me['markbot'] = '' if markbot
+       @bot.make_request('rollback', post_me)
+     end
+
+     # This method is used to edit pages. Not much more to say about it. Be
+     # sure you're logged in and have a token (get_token). Options is a hash
+     # of extra values allowed by the API.
+     def save(content, summary=nil, options=nil)
+       raise RWBErrors::VersionTooLowError unless @bot.meets_version_requirement(1,13)
+       raise RWBErrors::NotLoggedInError unless @bot.logged_in?
+
+       post_me = {
+         'text'     => "#{content}",
+         'token'    => get_token("edit"),
+         'title'    => @title,
+         'summary'  => "#{summary}",
+         'edittime' => Time.now.strftime("%Y%m%d%H%M%S"),
+       }
+
+       post_me.merge!(options) if options
+
+       @bot.make_request('edit', post_me).fetch('result')
+     end
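+
+     # A usage sketch (the content and the extra 'bot' flag shown here are
+     # illustrative; extra keys are passed straight through to the API):
+     #
+     #   page.save("== Heading ==\nSome new text", 'update section', {'bot' => ''})
+     #   #=> "Success" when the edit goes through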
+
+     private
+
+     # This method should universally return tokens - just give it the type.
+     # You will receive a token string (suitable for use in other methods), so
+     # plan accordingly. Use an edit token for both editing and creating
+     # articles. For rollback, more than just a token is required, so for
+     # intoken='rollback' you get a hash of token|user. Just the way it goes.
+     def get_token(intoken)
+       if intoken.downcase == 'rollback'
+         # specific to rollback
+         post_me = {
+           'prop'    => 'revisions',
+           'rvtoken' => intoken,
+           'titles'  => @title
+         }
+       else
+         post_me = {
+           'prop'    => 'info',
+           'intoken' => intoken,
+           'titles'  => @title
+         }
+       end
+       raw_token = @bot.make_request('query', post_me)
+
+       if intoken.downcase == 'rollback'
+         # Damn this decision to make rollback special! It wasn't mine, I just
+         # have to live by it.
+         token2 = raw_token.fetch('pages').fetch('page').fetch('revisions').fetch('rev')
+         {'token' => token2.fetch('rollbacktoken'), 'user' => token2.fetch('user')}
+       else
+         raw_token.fetch('pages').fetch('page').fetch("#{intoken}token")
+       end
+     end
+   end
+ end
data/lib/rwikibot.rb ADDED
@@ -0,0 +1,171 @@
+ # This is a framework upon which to create MediaWiki bots. It provides a set
+ # of methods to access MediaWiki's API and return information in various
+ # forms, depending on the type of information returned. By abstracting these
+ # methods into a Bot object, cleaner script code can be written later.
+ # Furthermore, it facilitates updating the API without breaking old bots.
+ # Last, but not least, it's good to abstract.
+ #
+ # Author:: Eddie Roger (mailto:eddieroger@gmail.com)
+ # Copyright:: Copyright (c) 2008 Eddie Roger
+ # License:: GNU/GPL 2.0
+
+ # ruby requires
+ require 'net/http'
+ require 'uri'
+ require 'cgi'
+
+ # my requires
+ require 'errors'
+ require 'utilities'
+ require 'pages'
+
+ # gem requires
+ require 'xmlsimple'
+ require 'deep_merge' # New in 2.0!
+
+ # This is the main bot object. The goal is to represent every API method in
+ # some form here, and then write separate, cleaner scripts in individual bot
+ # files utilizing this framework. Basically, this is an include at best.
+ class RWikiBot
+
+   include RWBErrors
+   include RWBUtilities
+   include Pages
+
+   attr_reader :config
+
+   def initialize(username='rwikibot', password='', api_path='http://www.rwikibot.net/wiki/api.php', domain='', login=false)
+     @config = {
+       'username'  => username,
+       'password'  => password,
+       'api_path'  => api_path,
+       'domain'    => domain,
+       'cookies'   => "",
+       'logged_in' => false,
+       'uri'       => URI.parse(api_path)
+     }
+
+     # Store the version as a string: converting "1.10.0" to a float would
+     # collapse it to 1.1 and break the minor-version comparison in
+     # meets_version_requirement.
+     @config['api_version'] = version
+
+     # The parameter shadows the login method, so call it via self - a bare
+     # `login` here would just read the local variable.
+     self.login if login
+   end
+
+   # This is the method that will allow the bot to log in to the wiki. It's
+   # not always necessary, but bots need to log in to save changes or
+   # retrieve watchlists.
+   def login
+     raise VersionTooLowError unless meets_version_requirement(0,0)
+
+     post_me = {'lgname' => @config.fetch('username'), 'lgpassword' => @config.fetch('password')}
+     if @config.has_key?('domain') && (@config.fetch('domain') != nil)
+       post_me['lgdomain'] = @config.fetch('domain')
+     end
+
+     login_result = make_request('login', post_me)
+
+     # Now we need to change some @config stuff, specifically that we're
+     # logged in, plus the variables of that. This also affects make_request,
+     # but I'll comment there.
+     if login_result['result'] == "Success"
+       # All lg variables come directly from the API and are stored in config that way
+       @config['logged_in']    = true
+       @config['lgusername']   = login_result.fetch('lgusername')
+       @config['lguserid']     = login_result.fetch('lguserid')
+       @config['lgtoken']      = login_result.fetch('lgtoken')
+       @config['_session']     = login_result.fetch('sessionid')
+       @config['cookieprefix'] = login_result.fetch('cookieprefix')
+
+       true
+     else
+       raise LoginError, "#{login_result['result']}: #{login_result['details']}"
+     end
+   end
+
+   # Use page to create a new Page object that you can then manipulate. You
+   # could create a Page on its own, but if you do, be _sure_ to pass your
+   # bot along with the title, otherwise you won't get access to the
+   # super-fun make_request method that is pretty much required.
+   def page(title='')
+     Page.new(self, title)
+   end
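+
+   # For example (hypothetical title - just a sketch):
+   #
+   #   page = bot.page('Main Page')              # preferred: the bot is passed for you
+   #   page = Pages::Page.new(bot, 'Main Page')  # equivalent, but easy to get wrong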
+
+   # This will return a list of all pages in a given namespace. It returns a
+   # list of pages with the normalized title and page ID, suitable for use
+   # elsewhere. Accepts all parameters from the API in Hash form. Default is
+   # namespace => 0, which is just plain pages. Nothing 'special'.
+   def all_pages(options=nil)
+     raise VersionTooLowError unless meets_version_requirement(1,9)
+     # This will get all pages. Limits vary based on the user rights of the bot.
+     post_me = {'list' => 'allpages', 'apnamespace' => '0', 'aplimit' => '5000'}
+     post_me.merge!(options) if options
+     allpages_result = make_request('query', post_me)
+     allpages_result.fetch('allpages')['p']
+   end
+
+   # This method will get the watchlist for the bot's MediaWiki username.
+   # This is really only useful if you want the bot to watch a specific list
+   # of pages, and it requires the bot maintainer to log in to the wiki as
+   # the bot to set the watchlist.
+   def watchlist(options=nil)
+     raise VersionTooLowError unless meets_version_requirement(1,10)
+     raise NotLoggedInError unless logged_in?
+     post_me = {'list' => 'watchlist'}
+     post_me.merge!(options) if options
+     make_request('query', post_me).fetch('watchlist').fetch('item')
+   end
+
+   # This method will return wiki-wide recent changes, almost as if you were
+   # looking at the Special:RecentChanges page. But in this format, a bot can
+   # handle it. Also, we're using the API. And bots can't read.
+   def recent_changes(options=nil)
+     raise VersionTooLowError unless meets_version_requirement(1,10)
+     post_me = {'list' => 'recentchanges', 'rclimit' => '5000'}
+     post_me.merge!(options) if options
+     make_request('query', post_me).fetch('recentchanges').fetch('rc')
+   end
+
+   # This will return a list of the most recent log events. Useful for bots
+   # that want to validate log events, or even just a notify bot that checks
+   # for events and sends them off.
+   def log_events(options=nil)
+     raise VersionTooLowError unless meets_version_requirement(1,11)
+     post_me = {'list' => 'logevents'}
+     post_me.merge!(options) if options
+     make_request('query', post_me).fetch('logevents').fetch('item')
+   end
+
+   # This is the only meta method. It will return site information. I chose
+   # not to allow it to be specific, and it will only return all known
+   # properties.
+   def site_info(siprop='general')
+     post_me = {'meta' => 'siteinfo', 'siprop' => siprop}
+     siteinfo_result = make_request('query', post_me)
+     siprop == 'general' ?
+       siteinfo_result.fetch('general') :
+       siteinfo_result.fetch('namespaces').fetch('ns')
+   end
+
+   # Get information about the current user.
+   def user_info(uiprop=nil)
+     raise VersionTooLowError unless meets_version_requirement(1,11)
+     post_me = {'meta' => 'userinfo'}
+     post_me['uiprop'] = uiprop unless uiprop.nil?
+
+     make_request('query', post_me).fetch('userinfo')
+   end
+ end
+
+ # I'm never happy with good enough, and when it comes to my hashes, I like
+ # to see their members. So I changed Hash#to_s. Overriding methods makes me
+ # happy.
+ class Hash
+   def to_s
+     out = "{"
+     self.each do |key, value|
+       out += "#{key} => #{value},"
+     end
+     out = out.chop unless self.empty? # drop the trailing comma
+     out += "}"
+   end
+ end
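+
+ # For example (illustrative values):
+ #
+ #   {'title' => 'Main Page', 'ns' => '0'}.to_s
+ #   #=> "{title => Main Page,ns => 0}"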
data/lib/utilities.rb ADDED
@@ -0,0 +1,119 @@
+ module RWBUtilities
+
+   # Compares the wiki's API version against a required major/minor pair.
+   def meets_version_requirement(maj, min)
+     major, minor = @config['api_version'].to_s.split('.')
+     (major.to_i > maj.to_i) || ((major.to_i == maj.to_i) && (minor.to_i >= min.to_i))
+   end # meets_version_requirement
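+
+   # For example, against a wiki running MediaWiki 1.13 (so api_version
+   # starts with "1.13"):
+   #
+   #   meets_version_requirement(1, 12) #=> true
+   #   meets_version_requirement(1, 14) #=> false
+   #   meets_version_requirement(2, 0)  #=> false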
+
+   # Tests whether a given page title is redirected to another page. Very Ruby.
+   def is_redirect?(title)
+     post_me = {'titles' => title, 'redirects' => '', 'prop' => 'info'}
+     result = make_request('query', post_me)
+     (result['result'] == "Success") && result.has_key?("redirects")
+   end
+
+   # A quick (and public) way of checking whether we're logged in, since I
+   # don't want @config exposed.
+   def logged_in?
+     @config['logged_in']
+   end
+
+   # The point of this method is to take an array of hashes - which most of
+   # the other methods return - and remove multiple instances of the same
+   # wiki page. We're usually only concerned with the most recent revision,
+   # so we drop the older ones.
+   #
+   # Arrays of hashes don't respond usefully to Array#uniq, so this is the
+   # same-ish.
+   def make_unique(array)
+     # Group the entries by title, then keep only the newest (highest revid)
+     # revision from each group.
+     array.group_by { |item| item.fetch('title') }.
+           map { |_title, items| items.max_by { |item| item.fetch('revid') } }
+   end
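+
+   # For example (illustrative revision data):
+   #
+   #   make_unique([
+   #     {'title' => 'Foo', 'revid' => 10},
+   #     {'title' => 'Foo', 'revid' => 12},
+   #     {'title' => 'Bar', 'revid' => 7}
+   #   ])
+   #   #=> [{'title' => 'Foo', 'revid' => 12}, {'title' => 'Bar', 'revid' => 7}]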
+
+   # This method returns the version of the MediaWiki server, parsed from the
+   # generator attribute of the site_info method. Useful? Yes - maybe your
+   # bot is only compatible with MediaWiki 1.9.0, depending on what methods
+   # you use. I like it, anyway.
+   def version
+     site_info.fetch('generator').split(' ')[1]
+   end
+
+   # make_request is the method that actually handles making requests to the
+   # API. Since the API is somewhat standardized, this method accepts an
+   # action and a hash of variables, and it handles all the fun things
+   # MediaWiki likes to be weird about, like cookies and limits and actions.
+   # It's very solid, but I didn't want it public because it also does some
+   # post-processing, and that's not very OO.
+   def make_request(action, post_this)
+     post_this['format'] = 'xml'
+     post_this['action'] = action
+
+     if @config['logged_in']
+       cookies = "#{@config['cookieprefix']}UserName=#{@config['lgusername']}; #{@config['cookieprefix']}UserID=#{@config['lguserid']}; #{@config['cookieprefix']}Token=#{@config['lgtoken']}; #{@config['cookieprefix']}_session=#{@config['_session']}"
+     else
+       cookies = ""
+     end
+
+     headers = {
+       'User-agent' => 'bot-RWikiBot/2.0-rc1',
+       'Cookie'     => cookies
+     }
+
+     # raw_call returns a nil post_this once there is no query-continue left,
+     # so this loop keeps re-posting and deep-merging until the result set is
+     # complete.
+     r = Hash.new
+     until post_this.nil?
+       return_result, post_this = raw_call(headers, post_this)
+       r.deep_merge(return_result.fetch(action))
+     end
+
+     r
+   end
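+
+   # A sketch of the contract between make_request and raw_call (the
+   # continue parameter shown is illustrative): for a request like
+   #
+   #   make_request('query', {'list' => 'allpages'})
+   #
+   # raw_call may come back with a query-continue such as
+   # {'apfrom' => 'Middle'}, which is folded into post_this and re-posted;
+   # once the server stops sending query-continue, raw_call returns a nil
+   # post_this, the loop ends, and the merged hash r is returned.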
+
+   # raw_call handles actually, physically talking to the wiki. It is broken
+   # out to handle query-continues where applicable. So all the methods call
+   # make_request, and it calls raw_call until raw_call returns a nil
+   # post_this.
+   def raw_call(headers, post_this)
+     request = Net::HTTP::Post.new(@config.fetch('uri').path, headers)
+     request.set_form_data(post_this)
+     response = Net::HTTP.new(@config.fetch('uri').host, @config.fetch('uri').port).start {|http|
+       http.request(request)
+     }
+
+     # Extra cookie handling, because editing is based on session IDs and the
+     # server generates a new one each time until you start sending it back.
+     # I doubt this will change.
+     if response.header['set-cookie'] != nil
+       @config['_session'] = response.header['set-cookie'].split("=")[1]
+     end
+
+     return_result = XmlSimple.xml_in(response.body, {'ForceArray' => false})
+
+     if return_result.has_key?('error')
+       raise RWikiBotError, "#{return_result.fetch('error').fetch('code').capitalize}: #{return_result.fetch('error').fetch('info')}"
+     end
+
+     if !post_this.keys.any? {|k| k.include?('limit')} && return_result.has_key?('query-continue')
+       return_result.fetch('query-continue').each do |key, value|
+         return_result.fetch('query-continue').fetch(key).each do |x, y|
+           post_this[x] = y
+         end
+       end
+     else
+       post_this = nil
+     end
+
+     [return_result, post_this]
+   end
+ end
metadata ADDED
@@ -0,0 +1,83 @@
+ --- !ruby/object:Gem::Specification
+ name: pangdudu-rwikibot
+ version: !ruby/object:Gem::Version
+   version: 2.0.6.1
+ platform: ruby
+ authors:
+ - Eddie Roger
+ - pangdudu
+ autorequire:
+ bindir: bin
+ cert_chain: []
+
+ date: 2009-09-03 00:00:00 -07:00
+ default_executable:
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: deep_merge
+   type: :runtime
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">"
+       - !ruby/object:Gem::Version
+         version: 0.0.0
+     version:
+ - !ruby/object:Gem::Dependency
+   name: xml-simple
+   type: :runtime
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">"
+       - !ruby/object:Gem::Version
+         version: 0.0.0
+     version:
+ description: A Ruby framework for creating MediaWiki robots.
+ email: pangdudu@github
+ executables: []
+
+ extensions: []
+
+ extra_rdoc_files:
+ - README.textile
+ files:
+ - CHANGELOG
+ - README.textile
+ - Rakefile
+ - lib/rwikibot.rb
+ - lib/pages.rb
+ - lib/errors.rb
+ - lib/utilities.rb
+ has_rdoc: true
+ homepage: http://rwikibot.net/wiki
+ licenses:
+ post_install_message:
+ rdoc_options:
+ - --main
+ - --inline-source
+ - --force-update
+ - README.textile
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: "0"
+   version:
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: "0"
+   version:
+ requirements: []
+
+ rubyforge_project:
+ rubygems_version: 1.3.5
+ signing_key:
+ specification_version: 2
+ summary: Creates an abstraction layer between MediaWiki API and Ruby.
+ test_files: []
+