picguard 1.0.1 → 1.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 97758b22222ef4f51b940030b23d6bdc5f434aac
- data.tar.gz: 452acbd8bdc5bfbca225e79659eab662e1a9da48
+ metadata.gz: a98abe2c1e9ffc8a0a0619f1a29197e0aa43175f
+ data.tar.gz: 68409340239c5e44662d5694c0036ac726e1302e
  SHA512:
- metadata.gz: 6d9ffb3845ac4ae51f51bd688ca61387bde74a994d994cefcd9a521a5ca664fdbd1a79d881c87472775a041b72efec88a56532d725868dee0692bb7547ae473f
- data.tar.gz: d398879a6b644e0f80279d668b8b244545ae1f2f0da201cfe5e05a50c4a533c73b887ca6728fd8592ba86ca6b3c7bb580d5af8e332ce89b3ec4b1da4031a9a6f
+ metadata.gz: 36b340420006eb8367bc1168d7e4977792538789243481993ecef496365ea53006ea73cc4914baea88a8ad30c3ae5011836c91467501e23ccc32711d39f9e29e
+ data.tar.gz: 0a52d1234658f3224ae7255715e626aace926a8d4f93d9d764b10d2d97010acb9857c7c4fdbcb94abb60a3d9655d9d7b7ef5ed151cb220dbcd3ff91213cd0e1f
data/README.md CHANGED
@@ -6,9 +6,27 @@ Picguard guards your application by filtering out the pictures showing inappropr
  It uses the Google Vision API to process the pictures that are being uploaded via your application.
 
  # Why use Picguard?
- Imagine a business application that allows user to upload a photo when creating his profile. The user is free to upload any kind of picture (including pictures showing violent scenes or adult content). The most popular solution for that is implementing a feature that allows other users to report the inappropriate content - which means you rely on your users when it comes to the safety of your application - not so great. Another scenario would be: what if for a profile picture an application should only allow the picture showing a human face? The soultion would be often the same as in the first example.
+ Imagine a business application that allows a user to upload a photo when creating their profile. The user is free to upload any kind of picture (including pictures showing violent scenes or adult content). The most popular solution is implementing a feature that lets other users report inappropriate content - which means you rely on your users for the safety of your application - not so great. Another scenario: what if, for a profile picture, an application should only allow pictures showing a human face? The solution would often be the same as in the first example.
  Picguard lets you configure your preferences (globally or separately for each model) for image filtering and gives you a clean solution to validate the picture before it's saved.
 
+ ## Requirements
+
+ [ImageMagick](http://www.imagemagick.org) must be installed and Picguard must have access to it.
+
+ If you're on Mac OS X, you'll want to run the following with Homebrew:
+
+ ```bash
+ brew install imagemagick
+ ```
+
+ If you're on Ubuntu (or any Debian-based Linux distribution), you'll want to run the following with apt-get:
+
+ ```bash
+ sudo apt-get install imagemagick
+ ```
+
+ After the installation you should be able to run `which convert` in your terminal and see a path such as `/usr/local/bin/convert` in response.
+
  ## Installation
 
  Add Picguard to your Gemfile:
@@ -34,9 +52,9 @@ end
 
  ##### Let's go through all of the attributes you need to set up in the config:
 
- `google_api_key` is the secret key that you use to authenticate to the Google Vision API, you can generate one that is connected to your Google Project. More about generating the key can be found [here](https://cloud.google.com/vision/docs/getting-started#setting_up_an_api_key). If you are completely new to the Google Cloud Platform you should probably start [here](https://cloud.google.com/vision/docs/getting-started).
- `threshold_adult` and `threshold_violence` are the thresholds for the adult and violent content. This is the highest value that you consider acceptable, everything above this level will be categorised as unsafe. For all of the likelihood levels please check [this piece of documentation](https://cloud.google.com/vision/reference/rest/v1/images/annotate#Likelihood).
- `threshold_face` is the threshold for face recognition. Google responds with a float value from 0 to 1 that reflects how sure the Google API is when it comes to face recognition. Only the picture with values above your threshold will be categorised as the ones containing human face.
+ `google_api_key` is the secret key that you use to authenticate to the Google Vision API; you can generate one that is tied to your Google project. More about generating the key can be found [here](https://cloud.google.com/vision/docs/getting-started#setting_up_an_api_key). If you are completely new to the Google Cloud Platform, you should probably start [here](https://cloud.google.com/vision/docs/getting-started).
+ `threshold_adult` and `threshold_violence` are the thresholds for adult and violent content. Each is the highest likelihood that you consider acceptable; everything above this level will be categorised as unsafe. For all of the likelihood levels, please check [this piece of documentation](https://cloud.google.com/vision/reference/rest/v1/images/annotate#Likelihood).
+ `threshold_face` is the threshold for face detection. Google responds with a float value from 0 to 1 that reflects how confident the API is that the picture contains a face. Only pictures with values above your threshold will be categorised as showing a human face.
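Putting these together, here is a minimal sketch of an initializer, assuming the gem follows the common `configure`-block pattern; the attribute names come from the list above and the values are placeholders, not defaults:

```ruby
# config/initializers/picguard.rb - illustrative values only
Picguard.configure do |config|
  config.google_api_key     = ENV['GOOGLE_API_KEY'] # secret key for the Google Vision API
  config.threshold_adult    = 'POSSIBLE'            # likelihoods above this are flagged as unsafe
  config.threshold_violence = 'POSSIBLE'
  config.threshold_face     = 0.7                   # 0..1, minimum confidence to count as a face
end
```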
 
  ## Validations
  To validate the content of the picture simply add the following validation to your model:
@@ -50,14 +68,16 @@ To validate the content of the picture simply add the following validation to yo
  ```
  ###### where
 
- `image` is the name of the model's attribute that should be validated
- `guard` is the name of the picguard validator
- `safe_search` and `face_detection` are the flags reflecting what should be validated for given model
- `method_name` is the name of the `image`'s attribute method that returns image file path
+ `image` is the name of the model's attribute that should be validated
+ `guard` is the name of the Picguard validator
+ `safe_search` and `face_detection` are the flags reflecting what should be validated for a given model
+ `method_name` is the name of the method on the `image` attribute that returns the image file path
+
+ *NOTE: Sometimes, returning the file path requires chaining methods, such as `avatar.tempfile.path`. In such a case you need to pass an array of symbols to the `method_name` attribute (e.g. `[:tempfile, :path]`). For popular file-uploading gems, Picguard gives you a `tool` attribute that can be used instead of `method_name`; see the sketch and the examples below.*
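For instance, a minimal sketch using `method_name` with a chained lookup (the model and attribute names are illustrative):

```ruby
class User < ActiveRecord::Base
  validates :avatar, guard: {
    safe_search: true,
    face_detection: true,
    method_name: [:tempfile, :path] # resolved as record.avatar.tempfile.path
  }
end
```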
 
  ## Sample validations for popular file-uploading gems
 
- ### CarrierWave
+ ### [CarrierWave](https://github.com/carrierwaveuploader/carrierwave)
 
  ```ruby
  class User < ActiveRecord::Base
@@ -66,7 +86,40 @@ class User < ActiveRecord::Base
  validates :avatar, guard: {
  safe_search: true,
  face_detection: true,
- method_name: :path
+ tool: :carrierwave
+ }
+ end
+ ```
+
+ ### [Paperclip](https://github.com/thoughtbot/paperclip)
+
+ ```ruby
+ class User < ActiveRecord::Base
+ has_attached_file :avatar,
+ styles: { medium: '300x300>', thumb: '100x100>' },
+ default_url: "/images/:style/missing.jpg"
+
+ validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\Z/
+
+ validates :avatar, guard: {
+ safe_search: false,
+ face_detection: true,
+ threshold_face: 0.5,
+ tool: :paperclip
+ }
+ end
+ ```
+
+ ### [Dragonfly](https://github.com/markevans/dragonfly)
+
+ ```ruby
+ class User < ActiveRecord::Base
+ dragonfly_accessor :avatar
+
+ validates :avatar, presence: true, guard: {
+ safe_search: true,
+ threshold_adult: 'VERY_LIKELY',
+ tool: :dragonfly
  }
  end
  ```
@@ -2,6 +2,12 @@ require 'picguard'
  require 'active_model'
 
  class GuardValidator < ActiveModel::EachValidator
+ METHOD_NAMES = {
+ carrierwave: :path,
+ paperclip: :staged_path,
+ dragonfly: [:tempfile, :path]
+ }.freeze
+ private_constant :METHOD_NAMES
 
  def validate_each(record, attribute, value)
  image_path = fetch_image_path(record, attribute)
@@ -12,10 +18,14 @@ class GuardValidator < ActiveModel::EachValidator
  private
 
  def fetch_image_path(record, attribute)
- arr = [attribute].push(*Array(options[:method_name]))
+ arr = [attribute].push(*Array(fetch_method_names))
  arr.inject(record, :public_send)
  end
 
+ def fetch_method_names
+ options[:tool].present? ? METHOD_NAMES.fetch(options[:tool]) : options[:method_name]
+ end
+
  def valid?(image_path)
  return false unless path_exists?(image_path)
  result = Picguard.analyze(
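For context on `fetch_image_path` above: the array of symbols is sent in sequence, starting from the record. A standalone sketch of that chained-send technique, using stand-in structs rather than the real uploader classes:

```ruby
# With tool: :dragonfly, fetch_method_names returns [:tempfile, :path], so for
# the :avatar attribute the chain [:avatar, :tempfile, :path] resolves as
# record.public_send(:avatar).public_send(:tempfile).public_send(:path)
Upload  = Struct.new(:tempfile)
FileRef = Struct.new(:path)
record  = Struct.new(:avatar).new(Upload.new(FileRef.new('/tmp/photo.jpg')))

[:avatar, :tempfile, :path].inject(record, :public_send)
# => "/tmp/photo.jpg"
```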
@@ -1,3 +1,3 @@
  module Picguard
- VERSION = "1.0.1"
+ VERSION = "1.1.0"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: picguard
  version: !ruby/object:Gem::Version
- version: 1.0.1
+ version: 1.1.0
  platform: ruby
  authors:
  - Szymon Baranowski
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-03-02 00:00:00.000000000 Z
+ date: 2016-03-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: google-api-client