apirunner 0.5.0 → 0.5.5
- data/README.rdoc +302 -33
- data/VERSION +1 -1
- data/apirunner.gemspec +2 -17
- data/examples/config/api_runner.yml +2 -2
- data/examples/test/api_runner/001_create_user.yml +199 -112
- data/lib/api_runner.rb +3 -3
- data/lib/apirunner/railtie.rb +6 -5
- data/lib/result.rb +2 -1
- data/lib/tasks/api.rake +7 -0
- data/spec/api_runner_spec.rb +1 -1
- metadata +4 -19
- data/examples/test/api_runner/002_update_resources.yml +0 -360
- data/examples/test/api_runner/003_update_ratings.yml +0 -88
- data/examples/test/api_runner/004_update_series_ratings.yml +0 -88
- data/examples/test/api_runner/005_rateables_and_pagination.yml +0 -34
- data/examples/test/api_runner/006_recommendations.yml +0 -78
- data/examples/test/api_runner/007_item_predictions.yml +0 -286
- data/examples/test/api_runner/008_discovery.yml +0 -299
- data/examples/test/api_runner/009_cacheable_operations.yml +0 -1
- data/examples/test/api_runner/010_fsk.yml +0 -168
- data/examples/test/api_runner/011_misc.yml +0 -116
- data/examples/test/api_runner/012_telekom_error_reports.yml +0 -1831
- data/examples/test/api_runner/013-extended-unpersonalized-discovery.yml +0 -711
- data/examples/test/api_runner/014-extended-personalized-discovery.yml +0 -764
- data/examples/test/api_runner/015_create_10000_users.yml +0 -43
- data/examples/test/api_runner/999_delete_user.yml +0 -78
data/README.rdoc
CHANGED
@@ -1,6 +1,6 @@
 = apirunner
 
-*apirunner* lets you test your _JSON_ _API_ from the outside. Sometimes model, controller and routing tests are not enough; you want to send requests to your application and validate the response in granular detail
+*apirunner* lets you test your _JSON_ _API_ from the outside. Sometimes model, controller and routing tests are not enough; you want to send requests to your application and validate the response in granular detail? Then apirunner will be your best friend.
 
 apirunner is no replacement for rspec or cucumber tests, nor does it replace webrat or comparable tools. It is an addition that lets you query your API, specify your queries in detail, parse the expected response code, message, header and body, compare all (or any) of them to your expectation, and check and document every testcase's performance.
 
@@ -13,14 +13,15 @@ apirunner was initially developed for testing of the mighty (m8ty) i18n recommen
 *apirunner* *can*:
 
 * be configured for as many environments as you wish (your local machine, your staging environment, your production boxes, your wife's handbag)
-* send GET, POST, PUT and DELETE requests via HTTP
-* wait arbitrary but well
+* send GET, POST, PUT and DELETE requests via HTTP and HTTPS
+* wait an arbitrary but well specified amount of time before sending a request
 * read as many testcases as you wish from YAML files and execute them in the order of file appearance
-* generate iterational testcases at runtime (for mass/performance
+* generate iterational testcases at runtime (for mass/performance tests)
 * read more than one testcase from a file
 * match the response codes of your application's responses
-* match the syntactical
+* match the syntactical correctness of the response format (as long as it is JSON)
 * prove the occurrence and match the content of your app's HTTP headers
+* inspect certain caching-related header values and validate max-age and sweep times (Varnish)
 * prove the occurrence and match the content of your app's body (as long as it responds with JSON)
 * optionally match only parts of header/body (you don't have to specify them in more detail than you are interested in)
 * exclude certain value tests from certain environments (by reading excludes from excludes.yml)
@@ -28,20 +29,31 @@ apirunner was initially developed for testing of the mighty (m8ty) i18n recommen
 * provide you with some nice feedback at the console .... yeah, sexy dots (".") and fancy F's ("F") ....
 * print out a nice error report (that you as an awesome ruby coder will never see)
 * print out a nice success report if you wish
+* print out equivalent curl commands for every request
 * measure the performance of your api from the outside (no concurrency provided today, sorry)
 * print out a nice performance report
 * substitute defined resource names of your testcases (resource namespacing) so that several testruns on the same box don't interfere (Hudson vs. developer)
 * be invoked from within rake to generate some example configuration and testcase files
+* be integrated into Hudson or any other CI system that accepts external tasks
+* be invoked with several environment keywords for more granular control over your testcases' execution
+* write continuous CSV logfiles with performance data of every request sent in every run of your testcase
 * be invoked also from within rake to run your tests
+* be extended by additional plugins that check certain behaviour of your api that apirunner doesn't check today
 * not travel to Ibiza
 
 == Installation
 
+Rails3:
+
   gem install apirunner
 
+Rails2:
+
+  script/plugin install git://github.com/janroesner/apirunner.git
+
 == Prerequisites
 
-Until today apirunner runs only in connection with a rails application itself. In the future it (hopefully) will be able to run even isolated without a Rails environment. Releases of Rails prior to 3.0.0.rc are untested and will likely fail. Please don't blame the author
+Until today apirunner runs only in connection with a Rails application itself. In the future it (hopefully) will be able to run even isolated, without a Rails environment. Releases of Rails prior to 3.0.0.rc are untested and will likely fail. Please don't blame the author but submit your patches.
 
 == Invocation
 
@@ -59,7 +71,7 @@ should result in:
   rake api:run:staging  # runs a series of necessary api calls and parses their responses in environment staging
   rake api:scaffold     # generates configuration and a skeleton for apirunner tests as well as excludes
 
-Tasks are
+Tasks are self-explaining so far ...
 
 == Configuration
 
@@ -71,23 +83,30 @@ The latter one generates a starter configuration file in your config directory:
 
 Additionally there will be some example testcases which can be found in:
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+  test/apirunner/001_create_user.yml
+  test/apirunner/002_update_resources.yml
+  test/apirunner/003_update_ratings.yml
+  test/apirunner/004_update_series_ratings.yml
+  test/apirunner/005_rateables_and_pagination.yml
+  test/apirunner/006_recommendations.yml
+  test/apirunner/007_item_predictions.yml
+  test/apirunner/008_discovery.yml
+  test/apirunner/010_fsk.yml
+  test/apirunner/011_misc.yml
+  test/apirunner/012_telekom_performance_tests.yml
+  test/apirunner/013_telekom_test_data_expectation.yml
+  test/apirunner/014-extended-unpersonalized-discovery.yml
+  test/apirunner/015-extended-personalized-discovery.yml
+  test/apirunner/016_create_10000_users.yml
+  test/apirunner/100_basic_varnish_tests.yml
+  test/apirunner/101_user_cache_update_and_delete_tests.yml
+  test/apirunner/102_user_cache_recommendations.yml
+  test/apirunner/103_user_chache_predictions.yml
+  test/apirunner/104_user_cache_discovery.yml
+  test/apirunner/105_test_discovery_caching.yml
+  test/apirunner/999_delete_user.yml
+  test/apirunner/excludes.yml
+
 
 These testcases are specific to recent requirements regarding the moviepilot API but can be helpful to understand how the YAML expectation files have to be created.
 
@@ -136,6 +155,9 @@ At first take some time and change config/api_runner.yml to your needs. You migh
   - none
 
 The configuration options above need some explanation (uuuuugh) but have to follow the YAML standard, so BE CAREFUL(!) about proper indentation (two spaces).
+
+== Environments
+
 So far you can define as many environments as you would like to query. The example above specifies 3 of them [:local, :staging, :production].
 
 You can specify a :protocol, :host and :port as well as a (URL) :namespace per environment. The namespace option is not mandatory, so you can omit it. We introduced it so we can support different versions of our api at the same time and query different versions on different boxes with one setup.
@@ -148,13 +170,109 @@ The option makes the expectation matcher build ressource URI's like so:
 
 The resource paths are simply appended before the request is sent.
 
-
+== Generals
+
+Every option that is not related to a certain environment has to be mentioned in the generals section. As can be seen in the example above there are few of them.
+
+*Verbosity* - apirunner has four verbosity modes. The first one that is found after the *verbosity* keyword is used. The others are there only for documentation purposes.
+
+  general:
+    verbosity:
+      - verbose_on_error
+      - verbose_on_success
+      - rspec
+      - performance
+
+*verbose_on_error* - prints out detailed testcase information in case of an error
+*verbose_on_success* - prints out detailed testcase information for every testcase, even if there was no error at all
+*rspec* - you will see only dots and F's as you know them from rspec tests
+*performance* - prints out a stripped-down performance report for every testcase that was run
+
+== Executing only selected test cases
+
+It is possible to execute only a selected subset of a test suite by setting the ONLY environment variable to a comma separated list of regular expressions that are matched against the file names of the tests.
+
+  ONLY=001,011,update rake api:run:production
+
+would run all test files in
+
+  test/apirunner/*
+
+whose file names match the regular expression:
+
+  /(001|011|update)/
+
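As an aside from the diff above, the ONLY-based selection could be sketched in Ruby like this. This is a minimal illustration, assuming a hypothetical helper name (`select_test_files`) rather than the gem's actual internals: the comma separated list becomes one alternation regex, which is matched against each test file name.

```ruby
# Build /(001|011|update)/ from ONLY=001,011,update and keep only the
# test file names that match it. Hypothetical helper, for illustration.
def select_test_files(file_names, only)
  return file_names if only.nil? || only.empty?
  pattern = Regexp.new("(#{only.split(',').join('|')})")
  file_names.select { |name| name =~ pattern }
end

files = %w[001_create_user.yml 005_rateables_and_pagination.yml 011_misc.yml]
select_test_files(files, "001,011,update")
# => ["001_create_user.yml", "011_misc.yml"]
```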
+== Curl Output
+
+When trying to debug errors, it can be very useful to replay a test step by step. If you run apirunner with
+
+  VERBOSE=1
+
+the output will contain equivalent curl commands for every test.
+
+== Priority
+
+Priority can be misunderstood, because it works exactly the other way around than you'd expect it to. Every testcase can (but does not have to) have a priority. If a testcase hasn't, it defaults to 0. The apirunner can be configured to run at a certain priority level like so:
+
+  general:
+    <other stuff>
+    priority: 0
+
+With that configuration it runs every testcase with priority 0, nothing more. If you do not configure a priority level in config/api_runner.yml it defaults to 0 too. If you set the priority to 1, for example, all testcases with priority level 1 and below (0) are executed. If you make apirunner run testcases at priority level 4, all testcases with priority level 4 and below (3, 2, 1, 0) are invoked.
+
+That way you can build different layers of your tests and run them just as you like.
+
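The priority semantics described above can be sketched in a few lines of Ruby. Names here (`runnable_testcases`, the `:priority` key) are illustrative assumptions, not the gem's real code: a testcase runs when its own priority is less than or equal to the configured level, and a missing priority defaults to 0.

```ruby
# A testcase is run when its priority is <= the configured level;
# testcases without a priority default to 0. Illustrative sketch only.
def runnable_testcases(testcases, configured_priority = 0)
  testcases.select { |tc| (tc[:priority] || 0) <= configured_priority }
end

cases = [
  { name: "smoke", priority: 0 },
  { name: "deep",  priority: 4 },
  { name: "no_prio" }              # defaults to priority 0
]
runnable_testcases(cases, 1).map { |tc| tc[:name] }
# => ["smoke", "no_prio"]
```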
+== Substitution
+
+We introduced substitution because we had to ... Imagine several developers starting the apirunner against the same environment. Apirunner A creates a resource via PUT in advance to fire some nice testcases, while another apirunner B started earlier and makes the resource die by issuing a DELETE request. Result: some angry developers. Another scenario: you set up Hudson or any other CI system to prove to your well-paying customer that your API is running. But as it always happens he - as an energetic salesman and perfectly educated computer science specialist - takes a look at your CI in the very moment your top dog dev runs the whole testsuite for a performance check against the live machines. Result: both runs fail, because they interfere.
+
+Substitution to the rescue.
+
+  general:
+    <stuff...>
+    substitution:
+      substitutes:
+        - daisyduck
+        - duffyduck
+        - roadrunner
+        - luckyluke
+        - wileecoyote
+      prefix: sweetest_
+
+You can substitute every "string" in your request, not only parts of the url, but all the stuff that is mentioned in your testcase's request section. In the above example every occurrence of the substitutes [daisyduck, duffyduck, roadrunner, luckyluke, wileecoyote] is substituted by a prefixed version of the very same string: [sweetest_daisyduck, sweetest_duffyduck, sweetest_roadrunner, sweetest_luckyluke, sweetest_wileecoyote].
+
+Every one of your sweet devs should simply set their own prefix and there should not be any interference anymore.
+
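The prefixing described above amounts to a plain string replacement over the request. A minimal sketch, assuming a hypothetical `substitute` helper (the gem's actual implementation may differ, e.g. it walks the whole request structure):

```ruby
# Replace every occurrence of a configured substitute with its
# prefixed version. Illustrative sketch, not the gem's real code.
SUBSTITUTES = %w[daisyduck duffyduck roadrunner luckyluke wileecoyote]

def substitute(text, prefix)
  SUBSTITUTES.reduce(text) { |acc, name| acc.gsub(name, "#{prefix}#{name}") }
end

substitute("/users/duffyduck", "sweetest_")
# => "/users/sweetest_duffyduck"
```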
+== CSV
+
+The most recent apirunner supports performance checks of your API. Wouldn't it be nice if you could record your API's performance in a CSV to make it a graph later? Here we go. Apirunner does so, if you tell it to.
+
+  general:
+    <stuff ...>
+    csv_mode:
+      - append
+      - create
+      - none
+
+The CSV file is created in your Rails app's tmp directory and automatically named after the environment you are running it against. The CSV is generated in an intelligent way so that the deletion of testcases as well as the addition of new ones does not destroy the existing CSV structure. There are three modes that the CSV writer can run in.
+
+*append* - probably the most used mode; every new testrun's data is appended to an already existing CSV file. If none exists yet, it's created on the fly
+*create* - you only want to record the most recent run? Go with "create" and only the last run's data is recorded in the CSV file
+*none* - no arms, no cookies; none disables the creation of CSV files at all
 
 
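The three csv_mode values map naturally onto Ruby file-open modes. This is purely an illustration of the semantics just described (a hypothetical `csv_file_mode` helper, not the gem's code): "append" keeps history, "create" truncates, "none" disables writing.

```ruby
# Map csv_mode onto a File.open mode string. Illustrative sketch only.
def csv_file_mode(csv_mode)
  case csv_mode
  when "append" then "a"   # keep history, add the new run's data
  when "create" then "w"   # truncate: record only the most recent run
  when "none"   then nil   # CSV writing disabled
  end
end
```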
 == Excludes
 
-You may also want to define some excludes for some of your
+You may also want to define some excludes for some of your environments. Imagine you run your production environment fully cached by Varnish and have some testcases that you can't "priorize out of scope". On the other hand you would like to run the same testsuite against your local development environment. You'll see a lot of errors, because there is no caching set up on your local box. Here excludes come in and become very handy.
 
-
+In the excludes file excludes.yml in the test/api_runner directory you can simply mention keys that shall not be evaluated in a certain environment. Example:
+
+  local:
+    excludes:
+      - "max-age"
+  staging:
+    excludes:
+      - "foo"
+      - "bar"
+
+This snippet makes apirunner drop testcases where "max-age" occurs as a key to be evaluated while checking the response's header, but only for your local environment. In the staging environment "foo" and "bar" are not checked. Excludes apply only to header and body checks, so they are implemented only in these plugins.
 
 
 == Testing
 
@@ -165,10 +283,11 @@ There are rspec model tests for all classes which can be invoked via:
 
 == Dependencies
 
-apirunner heavily depends on the following great
-
-
+apirunner heavily depends on the following great gems:
+
+* nokogiri
+* json
+* aaronh-chronic
 
 == Examples
 
@@ -176,11 +295,161 @@ After invoking:
 
   rake api:scaffold
 
-you will find some YAML example files for request and expectation generation in test/api_runner. You can create as many story files here as you like, they are executed in the order they are read from the filesystem, so you should name them like 000_create_some_ressource.yml, 001_read_some_ressource.yml and so on.
+you will find some YAML example files for request and expectation generation in test/api_runner. You can create as many story files here as you like; they are executed in the order they are read from the filesystem, so you should name them like 000_create_some_resource.yml, 001_read_some_resource.yml and so on.
+
+Alternatively you can place all your stories into one single file. Some examples:
+
+  - name: "001/2: Create new User"
+    request:
+      headers:
+        Content-Type: 'application/json'
+      path: '/users/duffyduck'
+      method: 'PUT'
+      body:
+        watchlist:
+          - m1035
+          - m2087
+        blacklist:
+          - m1554
+          - m2981
+        skiplist:
+          - m1590
+          - m1056
+        ratings:
+          m12493: 4.0
+          m1875: 2.5
+          m7258: 3.0
+          m7339: 4.0
+          m3642: 5.0
+        expires_at: 2011-09-09T22:41:50+00:00
+    response_expectation:
+      status_code: 201
+      headers:
+        Last-Modified: /.*/
+      body:
+        username: 'duffyduck'
+        watchlist:
+          - m1035
+          - m2087
+        blacklist:
+          - m1554
+          - m2981
+        skiplist:
+          - m1590
+          - m1056
+        ratings:
+          m12493: 4.0
+          m1875: 2.5
+          m7258: 3.0
+          m7339: 4.0
+          m3642: 5.0
+        fsk: "18"
+
+This testcase creates a PUT request for the resource /users/duffyduck. It creates a JSON body containing the user's watchlist, blacklist and skiplist arrays plus his ratings, including the values themselves.
+
+*name*
+
+The name of your testcase should be unique. Best practice is to give it a unique identifying number. Reason: the name of the testcase is used to generate an identifying hash for the CSV generation. If you do not need the CSV functionality, never mind.
+
+*request*
+
+In the request section you define everything that is needed to generate your HTTP(S) request to your api.
+
+*headers*
+
+In the headers section you can declare every header as a key-value pair; the value should be a string and as such quoted with " or '. If you query a Rails application you should not forget the Content-Type: 'application/json' and you could also mention Accept: 'application/json' as well as any other header key that may be important for your application to query.
+
+Cache-Control headers are accessible through a hash. When you need to test the value of s-maxage from a Cache-Control header like
+
+  Cache-Control: public, s-maxage=86400
+
+you can test for it with:
+
+  Cache-Control[s-maxage]: @in one day  # test that s-maxage from Cache-Control is set to now + one day
+
+Time tests in caching headers can be done with relative time values that can be understood by https://github.com/mojombo/chronic. Testing includes a tolerance of +/- 5 seconds, as the test runs on a real system and you will have some latency. Other possible values would be
+
+  @tomorrow 4:00am
+  @next_occurence_of 3:00am
+  @in 5 hours
+
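The +/- 5 second tolerance mentioned above can be checked with plain Time arithmetic. A minimal sketch, assuming a hypothetical `within_tolerance?` helper (apirunner itself resolves phrases like "@in one day" via the chronic-style parser before comparing):

```ruby
# Compare an expected and an actual timestamp with a 5 second slack,
# to absorb real-world latency. Illustrative sketch only.
TOLERANCE = 5 # seconds

def within_tolerance?(expected_time, actual_time)
  (expected_time - actual_time).abs <= TOLERANCE
end

expected = Time.now + 86_400        # "@in one day"
actual   = expected + 3             # 3 seconds of latency
within_tolerance?(expected, actual) # => true
```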
+*path*
+
+The path specifies the exact path of the resource to query. Keep in mind that this path is appended to protocol + domain + namespace, so that the above path for example evaluates to:
+
+  http://staging.moviepilot.de/api1v0/users/duffyduck
+
+*method*
+
+Here you have to mention the HTTP method that is used for your request. Today only the typical RESTful actions are supported, these are:
+
+* POST
+* GET
+* PUT
+* DELETE
+
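The URI assembly just described (environment config plus testcase path) can be sketched like this. The field names follow the config example earlier in the README; the `build_uri` helper itself is a hypothetical illustration, not the gem's actual code:

```ruby
# Assemble the final request URI from the environment configuration
# and the testcase's path. Illustrative sketch only.
def build_uri(env, path)
  "#{env[:protocol]}://#{env[:host]}:#{env[:port]}#{env[:namespace]}#{path}"
end

staging = { protocol: "http", host: "staging.moviepilot.de",
            port: 80, namespace: "/api1v0" }
build_uri(staging, "/users/duffyduck")
# => "http://staging.moviepilot.de:80/api1v0/users/duffyduck"
```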
+*body*
+
+In the body you specify the content you want to send to your API. You can create your nested data in the shape of hashes, arrays and single values according to the YAML standard. If you get stuck with it, have a look at yaml.org (http://www.yaml.org/spec/).
+
+*response_expectation*
+
+When it comes to the response expectation it gets interesting. Today's integrated plugins allow several checks. These include:
+
+* correctness of the response body format as JSON
+* the HTTP response code
+* the response header definition
+* the response body definition
+* some caching-related time checks
+
+Header and body definition checks are very interesting because they follow a special strategy. Response bodies can become very huge sometimes, and in most cases you are not interested in the whole body; you are only interested in some values matching your expectation. The same applies to the header. Apirunner provides you with exactly that. You can declare the structure of your expected body/header in YAML format and simply omit all the values you are not interested in. But KEEP IN MIND that you have to build at least as much structure as is needed to address the value you are checking.
+
+For example, if your response body consists of an array of hashes where only the second hash is of interest for you, and that hash contains an array of hashes itself where only the last hash is of interest, you only have to write something like this:
+
+  response_expectation:
+    outer_array:
+      inner_array:
+        key_to_be_checked: "expected value"
+
+The apirunner builds a tree structure from both the response body and your expectation. Then it builds relative paths for every leaf of your expectation tree and uses XPath to find the corresponding leaf in the response tree. Then it compares both and applies your matching rules.
+
+Again, have a look at the YAML specification at yaml.org (http://www.yaml.org/spec).
+
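The leaf-path comparison just described can be sketched without XPath. This is a rough illustration under stated assumptions (the real gem builds a tree with nokogiri and queries it via XPath; the helper names here are hypothetical): every leaf of the expectation must be found at the same relative path in the response, while extra response keys are simply ignored.

```ruby
# Flatten nested hashes into { [path, ...] => leaf_value } pairs, then
# require every expectation leaf to match the response leaf at the same
# path. Extra keys in the response are ignored. Illustrative sketch only.
def leaf_paths(node, prefix = [])
  return { prefix => node } unless node.is_a?(Hash)
  node.flat_map { |k, v| leaf_paths(v, prefix + [k]).to_a }.to_h
end

def expectation_met?(expectation, response)
  response_leaves = leaf_paths(response)
  leaf_paths(expectation).all? { |path, value| response_leaves[path] == value }
end

response    = { "user" => { "name" => "duffyduck", "fsk" => "18" } }
expectation = { "user" => { "name" => "duffyduck" } }
expectation_met?(expectation, response) # => true
```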
+There are three kinds of matching mechanisms:
+
+*structure match*
+
+Structure matches are written directly in YAML and look like so, for example:
+
+  response_expectation:
+    body:
+      watchlist:
+        - m1035
+        - m2087
+      blacklist:
+        - m1554
+        - m2981
+      skiplist:
+        - m1590
+        - m1056
+
+*string match*
+
+String matches give you the possibility to check a certain key like so:
+
+  response_expectation:
+    body:
+      username: 'duffyduck'
+
+Strings can be quoted either using ' or ".
+
+*regular expressions*
+
 *status_code*
 
+*headers*
 
+*body*
 
 == Authors
 
data/VERSION
CHANGED
@@ -1 +1 @@
-0.5.0
+0.5.5
data/apirunner.gemspec
CHANGED
@@ -5,11 +5,11 @@
 
 Gem::Specification.new do |s|
   s.name = %q{apirunner}
-  s.version = "0.5.
+  s.version = "0.5.5"
 
   s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
   s.authors = ["jan@moviepilot.com"]
-  s.date = %q{
+  s.date = %q{2011-03-30}
   s.description = %q{apirunner is a testsuite to query your RESTful JSON API and match response with your defined expectations}
   s.email = %q{developers@moviepilot.com}
   s.extra_rdoc_files = [
@@ -29,21 +29,6 @@ Gem::Specification.new do |s|
     "changelog.txt",
     "examples/config/api_runner.yml",
     "examples/test/api_runner/001_create_user.yml",
-    "examples/test/api_runner/002_update_resources.yml",
-    "examples/test/api_runner/003_update_ratings.yml",
-    "examples/test/api_runner/004_update_series_ratings.yml",
-    "examples/test/api_runner/005_rateables_and_pagination.yml",
-    "examples/test/api_runner/006_recommendations.yml",
-    "examples/test/api_runner/007_item_predictions.yml",
-    "examples/test/api_runner/008_discovery.yml",
-    "examples/test/api_runner/009_cacheable_operations.yml",
-    "examples/test/api_runner/010_fsk.yml",
-    "examples/test/api_runner/011_misc.yml",
-    "examples/test/api_runner/012_telekom_error_reports.yml",
-    "examples/test/api_runner/013-extended-unpersonalized-discovery.yml",
-    "examples/test/api_runner/014-extended-personalized-discovery.yml",
-    "examples/test/api_runner/015_create_10000_users.yml",
-    "examples/test/api_runner/999_delete_user.yml",
     "examples/test/api_runner/excludes.yml",
     "features/apirunner.feature",
     "features/step_definitions/apirunner_steps.rb",