retreval 0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
metadata ADDED
@@ -0,0 +1,390 @@
1
+ --- !ruby/object:Gem::Specification
2
+ name: retreval
3
+ version: !ruby/object:Gem::Version
4
+ prerelease:
5
+ version: "0.1"
6
+ platform: ruby
7
+ authors:
8
+ - Werner Robitza
9
+ autorequire:
10
+ bindir: bin
11
+ cert_chain: []
12
+
13
+ date: 2011-04-05 00:00:00 Z
14
+ dependencies: []
15
+
16
+ description: |-
17
+ README
18
+ ======
19
+
20
+ This is a simple API to evaluate information retrieval results. It allows you to load ranked and unranked query results and calculate various evaluation metrics (precision, recall, MAP, kappa) against a previously loaded gold standard.
21
+
22
+ Start this program from the command line with:
23
+
24
+ retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>
25
+
26
+ The options are outlined when you pass no arguments and just call
27
+
28
+ retreval
29
+
30
+ You will find further information in the RDoc documentation and the HOWTO section below.
31
+
32
+ If you want to see an example, use this command:
33
+
34
+ retreval -l example/gold_standard.yml -q example/query_results.yml -f yaml -v
35
+
36
+
37
+ INSTALLATION
38
+ ============
39
+
40
+ You can manually download the sources and build the Gem from there by `cd`ing to the folder where this README is saved and calling
41
+
42
+ gem build retreval.gemspec
43
+
44
+ This will create a file called `retreval-0.1.gem` which you just have to install:
45
+
46
+ gem install retreval-0.1.gem
47
+
48
+ And you're done.
49
+
50
+
51
+ HOWTO
52
+ =====
53
+
54
+ This API supports the following evaluation tasks:
55
+
56
+ - Loading a Gold Standard that takes a set of documents, queries and corresponding judgements of relevancy (i.e. "Is this document relevant for this query?")
57
+ - Calculation of the _kappa measure_ for the given gold standard
58
+
59
+ - Loading ranked or unranked query results for a certain query
60
+ - Calculation of _precision_ and _recall_ for each result
61
+ - Calculation of the _F-measure_ for weighing precision and recall
62
+ - Calculation of _mean average precision_ for multiple query results
63
+ - Calculation of the _11-point precision_ and _average precision_ for ranked query results
64
+
65
+ - Printing of summary tables and results
66
+
67
+ Typically, you will want to use this Gem either standalone or within another application's context.
68
+
69
+ Standalone Usage
70
+ ================
71
+
72
+ Call parameters
73
+ ---------------
74
+
75
+ After installing the Gem (see INSTALLATION), you can always call `retreval` from the command line. The typical call is:
76
+
77
+ retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>
78
+
79
+ Where you have to define the following options:
80
+
81
+ - `gold-standard-file` is a file in a specified format that includes all the judgements
82
+ - `query-results` is a file in a specified format that includes all the query results in a single file
83
+ - `format` is the format that the files will use (either "yaml" or "plain")
84
+ - `output-prefix` is the prefix of output files that will be created
85
+
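+ For example, using the bundled example files and a hypothetical output prefix `my_eval`, a complete call could look like this:
+ 
+ retreval -l example/gold_standard.yml -q example/query_results.yml -f yaml -o my_eval -v
+ 
+ This should write the results to files named after the prefix, such as `my_eval_avg_precision.yml` and `my_eval_statistics.yml` (see "Interpreting the output files" below).
+ 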
86
+ Formats
87
+ -------
88
+
89
+ Let's first look at the formats you can use to load data into the API. Currently, we support YAML files that must adhere to a specific syntax. So, in order to load a gold standard, we need a file in the following format:
90
+
91
+ * "query" denotes the query
92
+ * "documents" these are the documents judged for this query
93
+ * "id" the ID of the document (e.g. its filename, etc.)
94
+ * "judgements" an array of judgements, each one with:
95
+ * "relevant" a boolean value of the judgment (relevant or not)
96
+ * "user" an optional identifier of the user
97
+
98
+ Example file, with one query, two documents, and one judgement:
99
+
100
+ - query: 12th air force germany 1957
101
+ documents:
102
+ - id: g5701s.ict21311
103
+ judgements: []
104
+
105
+ - id: g5701s.ict21313
106
+ judgements:
107
+ - relevant: false
108
+ user: 2
109
+
110
+ So, when calling the program, specify the format as `yaml`.
111
+ For the query results, a similar format is used. Note that you must specify whether the result sets are ranked or not, as this heavily influences the calculations. You can also give each document a score, meaning the score that your retrieval algorithm assigned to it, but this is optional. The documents are always ranked in the order of their appearance, regardless of their score. Thus, in the following example, the document ending in "07" is ranked first and the one ending in "25" last, regardless of the scores.
112
+
113
+ ---
114
+ query: 12th air force germany 1957
115
+ ranked: true
116
+ documents:
117
+ - score: 0.44034874
118
+ document: g5701s.ict21307
119
+ - score: 0.44034874
120
+ document: g5701s.ict21309
121
+ - score: 0.44034874
122
+ document: g5701s.ict21311
123
+ - score: 0.44034874
124
+ document: g5701s.ict21313
125
+ - score: 0.44034874
126
+ document: g5701s.ict21315
127
+ - score: 0.44034874
128
+ document: g5701s.ict21317
129
+ - score: 0.44034874
130
+ document: g5701s.ict21319
131
+ - score: 0.44034874
132
+ document: g5701s.ict21321
133
+ - score: 0.44034874
134
+ document: g5701s.ict21323
135
+ - score: 0.44034874
136
+ document: g5701s.ict21325
137
+ ---
138
+ query: 1612
139
+ ranked: true
140
+ documents:
141
+ - score: 1.0174774
142
+ document: g3290.np000144
143
+ - score: 0.763108
144
+ document: g3201b.ct000726
145
+ - score: 0.763108
146
+ document: g3400.ct000886
147
+ - score: 0.6359234
148
+ document: g3201s.ct000130
149
+ ---
150
+
151
+ **Note**: You can also use the `plain` format, which loads the gold standard in a different way (it is not available for the query results):
152
+
153
+ my_query my_document_1 false
154
+ my_query my_document_2 true
155
+
156
+ Note that the query, document and relevancy fields are separated by tabs. You can also add the user's ID in a fourth column if necessary, as shown below.
157
+
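+ For instance, a line with an (illustrative) user ID in the fourth column would look like this:
+ 
+ my_query	my_document_1	false	user_1
+ 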
158
+ Running the evaluation
159
+ -----------------------
160
+
161
+ After you have specified the input files and the format, you can run the program. If needed, the `-v` switch turns on verbose messages, such as information on how many judgements, documents and users there are, but this is usually not necessary.
162
+
163
+ The program will first load the gold standard and then calculate the statistics for each result set. The output files are automatically created and contain a YAML representation of the results.
164
+
165
+ Calculations may take a while depending on the number of judgements and documents. With a thousand judgements, expect a few seconds for each result set.
166
+
167
+ Interpreting the output files
168
+ ------------------------------
169
+
170
+ Two output files will be created:
171
+
172
+ - `output_avg_precision.yml`
173
+ - `output_statistics.yml`
174
+
175
+ The first lists the average precision for each query in the query results file. The second lists all supported statistics for each query.
176
+
177
+ For example, for a ranked evaluation, the first two entries of the statistics for one query look like this:
178
+
179
+ ---
180
+ 12th air force germany 1957:
181
+ - :precision: 0.0
182
+ :recall: 0.0
183
+ :false_negatives: 1
184
+ :false_positives: 1
185
+ :true_negatives: 2516
186
+ :true_positives: 0
187
+ :document: g5701s.ict21313
188
+ :relevant: false
189
+ - :precision: 0.0
190
+ :recall: 0.0
191
+ :false_negatives: 1
192
+ :false_positives: 2
193
+ :true_negatives: 2515
194
+ :true_positives: 0
195
+ :document: g5701s.ict21317
196
+ :relevant: false
197
+
198
+ You can see the precision and recall at that specific rank, the counts for the contingency table (true/false positives/negatives), and the identifier of the document returned at that rank.
199
+
200
+ API Usage
201
+ =========
202
+
203
+ Using this API in another Ruby application is probably the more common use case. All you have to do is include the Gem in your Ruby or Ruby on Rails application. For details about available methods, please refer to the API documentation generated by RDoc.
204
+
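+ A minimal setup might look like this (the exact require paths are an assumption based on the file layout under `lib/retreval/`):
+ 
+ require 'retreval/gold_standard'
+ require 'retreval/query_result'
+ 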
205
+ **Important**: For this implementation, we use the document ID, the query and the user ID as the primary keys for matching objects. This means that your documents and queries are identified by a string and thus the strings should be sanitized first.
206
+
207
+ Loading the Gold Standard
208
+ -------------------------
209
+
210
+ Once you have loaded the Gem, you will probably start by creating a new gold standard.
211
+
212
+ gold_standard = GoldStandard.new
213
+
214
+ Then, you can load judgements into this standard, either from a file, or manually:
215
+
216
+ gold_standard.load_from_yaml_file "my-file.yml"
217
+ gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => "John"
218
+
219
+ There is a nice shortcut for the `add_judgement` method. Both lines are essentially the same:
220
+
221
+ gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => "John"
222
+ gold_standard << { :document => doc_id, :query => query_string, :relevant => boolean, :user => "John" }
223
+
224
+ Note the usage of typical Rails hashes for better readability (also, this Gem was developed to be used in a Rails webapp).
225
+
226
+ Now that you have loaded the gold standard, you can do things like:
227
+
228
+ gold_standard.contains_judgement? :document => "a document", :query => "the query"
229
+ gold_standard.relevant? :document => "a document", :query => "the query"
230
+
231
+
232
+ Loading the Query Results
233
+ -------------------------
234
+
235
+ Now we want to create a new `QueryResultSet`. A query result set can contain more than one result, which is what we normally want. It is important that you specify the gold standard it belongs to.
236
+
237
+ query_result_set = QueryResultSet.new :gold_standard => gold_standard
238
+
239
+ Just like the Gold Standard, you can read a query result set from a file:
240
+
241
+ query_result_set.load_from_yaml_file "my-results-file.yml"
242
+
243
+ Alternatively, you can load the query results one by one. To do this, you have to create the results (either ranked or unranked) and then add documents:
244
+
245
+ my_result = RankedQueryResult.new :query => "the query"
246
+ my_result.add_document :document => "test_document 1", :score => 13
247
+ my_result.add_document :document => "test_document 2", :score => 11
248
+ my_result.add_document :document => "test_document 3", :score => 3
249
+
250
+ This result would be ranked, obviously, and contain three documents. Documents can have a score, but this is optional. You can also create an Array of documents first and add them all at once:
251
+
252
+ documents = Array.new
253
+ documents << ResultDocument.new :id => "test_document 1", :score => 20
254
+ documents << ResultDocument.new :id => "test_document 2", :score => 21
255
+ my_result = RankedQueryResult.new :query => "the query", :documents => documents
256
+
257
+ The same applies to `UnrankedQueryResult`s, obviously. The order of ranked documents is the same as the order in which they were added to the result.
258
+
259
+ The `QueryResultSet` will now contain all the results. They are stored in an array called `query_results`, which you can access. So, to iterate over each result, you might want to use the following code:
260
+
261
+ query_result_set.query_results.each_with_index do |result, index|
262
+ # ...
263
+ end
264
+
265
+ Or, more simply:
266
+
267
+ for result in query_result_set.query_results
268
+ # ...
269
+ end
270
+
271
+ Calculating statistics
272
+ ----------------------
273
+
274
+ Now to the interesting part: Calculating statistics. As mentioned before, there is a conceptual difference between ranked and unranked results. Unranked results are much easier to calculate and thus take less CPU time.
275
+
276
+ Whether a result is ranked or unranked, you can get the most important statistics by simply calling the `statistics` method.
277
+
278
+ statistics = my_result.statistics
279
+
280
+ In the simple case of an unranked result, you will receive a hash with the following information:
281
+
282
+ * `precision` - the precision of the results
283
+ * `recall` - the recall of the results
284
+ * `false_negatives` - number of not retrieved but relevant items
285
+ * `false_positives` - number of retrieved but nonrelevant items
286
+ * `true_negatives` - number of not retrieved and nonrelevant items
287
+ * `true_positives` - number of retrieved and relevant items
288
+
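+ For an unranked result, you can then read these values directly from the hash, for example:
+ 
+ puts "precision: #{statistics[:precision]}, recall: #{statistics[:recall]}"
+ 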
289
+ In case of a ranked result, you will receive an Array that consists of _n_ such Hashes, depending on the number of documents. Each Hash will give you the information at a certain rank, e.g. the following two lines return the recall at the fourth rank.
290
+
291
+ statistics = my_ranked_result.statistics
292
+ statistics[3][:recall]
293
+
294
+ In addition to the information mentioned above, you can also get for each rank:
295
+
296
+ * `document` - the ID of the document that was returned at this rank
297
+ * `relevant` - whether the document was relevant or not
298
+
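+ So, to print a small per-rank summary for a ranked result, you could do something like this:
+ 
+ my_ranked_result.statistics.each_with_index do |stats, rank|
+   puts "#{rank + 1}. #{stats[:document]} relevant=#{stats[:relevant]} P=#{stats[:precision]} R=#{stats[:recall]}"
+ end
+ 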
299
+ Calculating statistics with missing judgements
300
+ ----------------------------------------------
301
+
302
+ Sometimes, you don't have judgements for all document/query pairs in the gold standard. If this happens, the results will be cleaned up first. This means that every document in the results that doesn't have a judgement will be removed temporarily.
303
+
304
+ As an example, take the following results:
305
+
306
+ * A
307
+ * B
308
+ * C
309
+ * D
310
+
311
+ Our gold standard only contains judgements for A and C. The results will be cleaned up first, thus leading to:
312
+
313
+ * A
314
+ * C
315
+
316
+ With this approach, we can still provide meaningful results (for precision and recall).
317
+
318
+ Other statistics
319
+ ----------------
320
+
321
+ There are several other statistics that can be calculated, for example the **F measure**. The F measure weighs precision and recall and has one parameter, either "alpha" or "beta". Get the F measure like so:
322
+
323
+ my_result.f_measure :beta => 1
324
+
325
+ If you don't specify either alpha or beta, we will assume that beta = 1.
326
+
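+ For reference, with beta = 1 the F measure reduces to the familiar harmonic mean of precision and recall, F1 = 2 * P * R / (P + R). As a rough sanity check (using the statistics hash of an unranked result), this sketch should give the same value:
+ 
+ p = statistics[:precision]
+ r = statistics[:recall]
+ f1 = 2 * p * r / (p + r) # should match my_result.f_measure :beta => 1
+ 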
327
+ Another interesting measure is **Cohen's Kappa**, which tells us about the agreement between assessors. Get the kappa statistic like this:
328
+
329
+ gold_standard.kappa
330
+
331
+ This will calculate kappa for each pairwise combination of users in the gold standard and return the average.
332
+
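+ For reference, Cohen's kappa for a pair of assessors is defined as kappa = (P_o - P_e) / (1 - P_e), where P_o is the observed agreement between the two assessors and P_e is the agreement expected by chance.
+ 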
333
+ For ranked results one might also want to calculate an **11-point precision**. Just call the following:
334
+
335
+ my_ranked_result.eleven_point_precision
336
+
337
+ This will return a Hash whose keys are the 11 recall levels from 0 to 1 (in steps of 0.1) and whose values are the corresponding precision at each recall level.
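+ 
+ Assuming the keys are the numeric recall levels themselves (e.g. 0.0, 0.1, ..., 1.0), you could print the precision-recall curve like this:
+ 
+ curve = my_ranked_result.eleven_point_precision
+ curve.sort.each { |recall, precision| puts "#{recall}: #{precision}" }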
338
+ email: werner.robitza@univie.ac.at
339
+ executables:
340
+ - retreval
341
+ extensions: []
342
+
343
+ extra_rdoc_files: []
344
+
345
+ files:
346
+ - bin/retreval
347
+ - CHANGELOG
348
+ - example/gold_standard.yml
349
+ - example/query_results.yml
350
+ - lib/retreval/gold_standard.rb
351
+ - lib/retreval/options.rb
352
+ - lib/retreval/query_result.rb
353
+ - lib/retreval/runner.rb
354
+ - output_avg_precision.yml
355
+ - output_statistics.yml
356
+ - README.md
357
+ - retreval.gemspec
358
+ - test/test_gold_standard.rb
359
+ - test/test_query_result.rb
360
+ - TODO
361
+ homepage: http://github.com/slhck/retreval
362
+ licenses: []
363
+
364
+ post_install_message:
365
+ rdoc_options: []
366
+
367
+ require_paths:
368
+ - lib
369
+ required_ruby_version: !ruby/object:Gem::Requirement
370
+ none: false
371
+ requirements:
372
+ - - ">="
373
+ - !ruby/object:Gem::Version
374
+ version: "1.9"
375
+ required_rubygems_version: !ruby/object:Gem::Requirement
376
+ none: false
377
+ requirements:
378
+ - - ">="
379
+ - !ruby/object:Gem::Version
380
+ version: "0"
381
+ requirements: []
382
+
383
+ rubyforge_project:
384
+ rubygems_version: 1.7.2
385
+ signing_key:
386
+ specification_version: 3
387
+ summary: A Ruby API for Evaluating Retrieval Results
388
+ test_files:
389
+ - test/test_gold_standard.rb
390
+ - test/test_query_result.rb