ddtrace 0.12.1 → 0.13.0.beta1

Files changed (82)
  1. checksums.yaml +4 -4
  2. data/.env +11 -21
  3. data/.rubocop.yml +1 -4
  4. data/Appraisals +75 -439
  5. data/CHANGELOG.md +16 -19
  6. data/Rakefile +89 -259
  7. data/circle.yml +69 -0
  8. data/ddtrace.gemspec +6 -6
  9. data/docker-compose.yml +37 -222
  10. data/docs/GettingStarted.md +260 -19
  11. data/gemfiles/contrib.gemfile +5 -0
  12. data/gemfiles/contrib_old.gemfile +4 -1
  13. data/gemfiles/rails30_postgres.gemfile +0 -1
  14. data/gemfiles/rails30_postgres_sidekiq.gemfile +0 -1
  15. data/gemfiles/rails32_mysql2.gemfile +0 -1
  16. data/gemfiles/rails32_postgres.gemfile +0 -1
  17. data/gemfiles/rails32_postgres_redis.gemfile +0 -1
  18. data/gemfiles/rails32_postgres_sidekiq.gemfile +0 -1
  19. data/gemfiles/rails5_mysql2.gemfile +1 -1
  20. data/gemfiles/rails5_postgres.gemfile +1 -1
  21. data/gemfiles/rails5_postgres_redis.gemfile +1 -1
  22. data/gemfiles/rails5_postgres_sidekiq.gemfile +1 -1
  23. data/lib/ddtrace.rb +6 -0
  24. data/lib/ddtrace/configuration.rb +2 -2
  25. data/lib/ddtrace/contrib/active_model_serializers/event.rb +57 -0
  26. data/lib/ddtrace/contrib/active_model_serializers/events.rb +30 -0
  27. data/lib/ddtrace/contrib/active_model_serializers/events/render.rb +32 -0
  28. data/lib/ddtrace/contrib/active_model_serializers/events/serialize.rb +35 -0
  29. data/lib/ddtrace/contrib/active_model_serializers/patcher.rb +62 -0
  30. data/lib/ddtrace/contrib/active_record/event.rb +30 -0
  31. data/lib/ddtrace/contrib/active_record/events.rb +30 -0
  32. data/lib/ddtrace/contrib/active_record/events/instantiation.rb +51 -0
  33. data/lib/ddtrace/contrib/active_record/events/sql.rb +48 -0
  34. data/lib/ddtrace/contrib/active_record/patcher.rb +3 -73
  35. data/lib/ddtrace/contrib/active_record/utils.rb +1 -15
  36. data/lib/ddtrace/contrib/active_support/notifications/event.rb +62 -0
  37. data/lib/ddtrace/contrib/aws/instrumentation.rb +2 -2
  38. data/lib/ddtrace/contrib/elasticsearch/patcher.rb +2 -2
  39. data/lib/ddtrace/contrib/elasticsearch/quantize.rb +8 -40
  40. data/lib/ddtrace/contrib/excon/middleware.rb +140 -0
  41. data/lib/ddtrace/contrib/excon/patcher.rb +50 -0
  42. data/lib/ddtrace/contrib/grpc/datadog_interceptor.rb +65 -0
  43. data/lib/ddtrace/contrib/grpc/datadog_interceptor/client.rb +49 -0
  44. data/lib/ddtrace/contrib/grpc/datadog_interceptor/server.rb +66 -0
  45. data/lib/ddtrace/contrib/grpc/intercept_with_datadog.rb +49 -0
  46. data/lib/ddtrace/contrib/grpc/patcher.rb +62 -0
  47. data/lib/ddtrace/contrib/http/patcher.rb +16 -18
  48. data/lib/ddtrace/contrib/racecar/event.rb +61 -0
  49. data/lib/ddtrace/contrib/racecar/events.rb +30 -0
  50. data/lib/ddtrace/contrib/racecar/events/batch.rb +27 -0
  51. data/lib/ddtrace/contrib/racecar/events/message.rb +27 -0
  52. data/lib/ddtrace/contrib/racecar/patcher.rb +6 -52
  53. data/lib/ddtrace/contrib/rack/middlewares.rb +65 -11
  54. data/lib/ddtrace/contrib/rack/patcher.rb +16 -0
  55. data/lib/ddtrace/contrib/rack/request_queue.rb +34 -0
  56. data/lib/ddtrace/contrib/rails/action_view.rb +65 -0
  57. data/lib/ddtrace/contrib/rails/active_support.rb +8 -9
  58. data/lib/ddtrace/contrib/rails/core_extensions.rb +115 -74
  59. data/lib/ddtrace/contrib/rake/instrumentation.rb +70 -0
  60. data/lib/ddtrace/contrib/rake/patcher.rb +53 -0
  61. data/lib/ddtrace/contrib/sequel/database.rb +58 -0
  62. data/lib/ddtrace/contrib/sequel/dataset.rb +59 -0
  63. data/lib/ddtrace/contrib/sequel/patcher.rb +56 -0
  64. data/lib/ddtrace/contrib/sequel/utils.rb +28 -0
  65. data/lib/ddtrace/ext/distributed.rb +5 -0
  66. data/lib/ddtrace/ext/grpc.rb +7 -0
  67. data/lib/ddtrace/ext/http.rb +35 -5
  68. data/lib/ddtrace/propagation/grpc_propagator.rb +54 -0
  69. data/lib/ddtrace/quantization/hash.rb +89 -0
  70. data/lib/ddtrace/tracer.rb +1 -4
  71. data/lib/ddtrace/utils.rb +4 -10
  72. data/lib/ddtrace/utils/database.rb +21 -0
  73. data/lib/ddtrace/version.rb +3 -3
  74. metadata +38 -13
  75. data/.circleci/config.yml +0 -456
  76. data/.circleci/images/primary/Dockerfile-1.9.3 +0 -69
  77. data/.circleci/images/primary/Dockerfile-2.0.0 +0 -69
  78. data/.circleci/images/primary/Dockerfile-2.1.10 +0 -69
  79. data/.circleci/images/primary/Dockerfile-2.2.10 +0 -69
  80. data/.circleci/images/primary/Dockerfile-2.3.7 +0 -73
  81. data/.circleci/images/primary/Dockerfile-2.4.4 +0 -73
  82. data/lib/ddtrace/contrib/rails/action_controller_patch.rb +0 -77
data/ddtrace.gemspec CHANGED
@@ -6,17 +6,17 @@ require 'ddtrace/version'
 
 Gem::Specification.new do |spec|
   spec.name = 'ddtrace'
-  spec.version = Datadog::VERSION::STRING
+  spec.version = "#{Datadog::VERSION::STRING}#{ENV['VERSION_SUFFIX']}"
   spec.required_ruby_version = '>= 1.9.1'
   spec.authors = ['Datadog, Inc.']
   spec.email = ['dev@datadoghq.com']
 
   spec.summary = 'Datadog tracing code for your Ruby applications'
-  spec.description = <<-EOS.gsub(/^[\s]+/, '')
-  ddtrace is Datadog’s tracing client for Ruby. It is used to trace requests
-  as they flow across web servers, databases and microservices so that developers
-  have great visiblity into bottlenecks and troublesome requests.
-  EOS
+  spec.description = <<-EOS
+  ddtrace is Datadog’s tracing client for Ruby. It is used to trace requests
+  as they flow across web servers, databases and microservices so that developers
+  have great visiblity into bottlenecks and troublesome requests.
+  EOS
 
   spec.homepage = 'https://github.com/DataDog/dd-trace-rb'
   spec.license = 'BSD-3-Clause'
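The functional change here is the version string: when a `VERSION_SUFFIX` environment variable is present at build time, it is appended to the base version, which is presumably how this `0.13.0.beta1` prerelease was cut. A minimal sketch of the effect, using stand-in values rather than the gem's real constants:

```ruby
# Sketch only: mirrors the gemspec expression above without loading ddtrace.
module Datadog
  module VERSION
    STRING = '0.13.0' # stand-in for the real Datadog::VERSION::STRING
  end
end

ENV['VERSION_SUFFIX'] = '.beta1' # hypothetical value exported by the release job

puts "#{Datadog::VERSION::STRING}#{ENV['VERSION_SUFFIX']}"
# => 0.13.0.beta1
```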
data/docker-compose.yml CHANGED
@@ -1,237 +1,52 @@
- version: '3.2'
- services:
- tracer-1.9:
- build:
- context: ./.circleci/images/primary
- dockerfile: Dockerfile-1.9.3
- command: /bin/bash
- depends_on:
- - ddagent
- - elasticsearch
- - memcached
- - mongodb
- - mysql
- - postgres
- - redis
- env_file: ./.env
- environment:
- - TEST_DATADOG_INTEGRATION=1
- - TEST_DDAGENT_HOST=ddagent
- - TEST_ELASTICSEARCH_HOST=elasticsearch
- - TEST_MEMCACHED_HOST=memcached
- - TEST_MONGODB_HOST=mongodb
- - TEST_MYSQL_HOST=mysql
- - TEST_POSTGRES_HOST=postgres
- - TEST_REDIS_HOST=redis
- stdin_open: true
- tty: true
- volumes:
- - .:/app
- - bundle-1.9:/usr/local/bundle
- tracer-2.0:
- build:
- context: ./.circleci/images/primary
- dockerfile: Dockerfile-2.0.0
- command: /bin/bash
- depends_on:
- - ddagent
- - elasticsearch
- - memcached
- - mongodb
- - mysql
- - postgres
- - redis
- env_file: ./.env
- environment:
- - TEST_DATADOG_INTEGRATION=1
- - TEST_DDAGENT_HOST=ddagent
- - TEST_ELASTICSEARCH_HOST=elasticsearch
- - TEST_MEMCACHED_HOST=memcached
- - TEST_MONGODB_HOST=mongodb
- - TEST_MYSQL_HOST=mysql
- - TEST_POSTGRES_HOST=postgres
- - TEST_REDIS_HOST=redis
- stdin_open: true
- tty: true
- volumes:
- - .:/app
- - bundle-2.0:/usr/local/bundle
- tracer-2.1:
- build:
- context: ./.circleci/images/primary
- dockerfile: Dockerfile-2.1.10
- command: /bin/bash
- depends_on:
- - ddagent
- - elasticsearch
- - memcached
- - mongodb
- - mysql
- - postgres
- - redis
- env_file: ./.env
- environment:
- - TEST_DATADOG_INTEGRATION=1
- - TEST_DDAGENT_HOST=ddagent
- - TEST_ELASTICSEARCH_HOST=elasticsearch
- - TEST_MEMCACHED_HOST=memcached
- - TEST_MONGODB_HOST=mongodb
- - TEST_MYSQL_HOST=mysql
- - TEST_POSTGRES_HOST=postgres
- - TEST_REDIS_HOST=redis
- stdin_open: true
- tty: true
- volumes:
- - .:/app
- - bundle-2.1:/usr/local/bundle
- tracer-2.2:
- build:
- context: ./.circleci/images/primary
- dockerfile: Dockerfile-2.2.10
- command: /bin/bash
- depends_on:
- - ddagent
- - elasticsearch
- - memcached
- - mongodb
- - mysql
- - postgres
- - redis
- env_file: ./.env
- environment:
- - TEST_DATADOG_INTEGRATION=1
- - TEST_DDAGENT_HOST=ddagent
- - TEST_ELASTICSEARCH_HOST=elasticsearch
- - TEST_MEMCACHED_HOST=memcached
- - TEST_MONGODB_HOST=mongodb
- - TEST_MYSQL_HOST=mysql
- - TEST_POSTGRES_HOST=postgres
- - TEST_REDIS_HOST=redis
- stdin_open: true
- tty: true
- volumes:
- - .:/app
- - bundle-2.2:/usr/local/bundle
- tracer-2.3:
- build:
- context: ./.circleci/images/primary
- dockerfile: Dockerfile-2.3.7
- command: /bin/bash
- depends_on:
- - ddagent
- - elasticsearch
- - memcached
- - mongodb
- - mysql
- - postgres
- - redis
- env_file: ./.env
- environment:
- - TEST_DATADOG_INTEGRATION=1
- - TEST_DDAGENT_HOST=ddagent
- - TEST_ELASTICSEARCH_HOST=elasticsearch
- - TEST_MEMCACHED_HOST=memcached
- - TEST_MONGODB_HOST=mongodb
- - TEST_MYSQL_HOST=mysql
- - TEST_POSTGRES_HOST=postgres
- - TEST_REDIS_HOST=redis
- stdin_open: true
- tty: true
- volumes:
- - .:/app
- - bundle-2.3:/usr/local/bundle
- tracer-2.4:
- build:
- context: ./.circleci/images/primary
- dockerfile: Dockerfile-2.4.4
- command: /bin/bash
- depends_on:
- - ddagent
- - elasticsearch
- - memcached
- - mongodb
- - mysql
- - postgres
- - redis
- env_file: ./.env
+ # remember to use this compose file __ONLY__ for development/testing purposes
+ postgres:
+ image: postgres:9.6
  environment:
- - TEST_DATADOG_INTEGRATION=1
- - TEST_DDAGENT_HOST=ddagent
- - TEST_ELASTICSEARCH_HOST=elasticsearch
- - TEST_MEMCACHED_HOST=memcached
- - TEST_MONGODB_HOST=mongodb
- - TEST_MYSQL_HOST=mysql
- - TEST_POSTGRES_HOST=postgres
- - TEST_REDIS_HOST=redis
- stdin_open: true
- tty: true
- volumes:
- - .:/app
- - bundle-2.4:/usr/local/bundle
- ddagent:
- image: datadog/docker-dd-agent
+ - POSTGRES_PASSWORD=$TEST_POSTGRES_PASSWORD
+ - POSTGRES_USER=$TEST_POSTGRES_USER
+ - POSTGRES_DB=$TEST_POSTGRES_DB
+ ports:
+ - "127.0.0.1:${TEST_POSTGRES_PORT}:5432"
+
+ mysql:
+ image: mysql:5.6
  environment:
- - DD_APM_ENABLED=true
- - DD_BIND_HOST=0.0.0.0
- - DD_API_KEY=invalid_key_but_this_is_fine
- expose:
- - "8126"
+ - MYSQL_ROOT_PASSWORD=$TEST_MYSQL_ROOT_PASSWORD
+ - MYSQL_PASSWORD=$TEST_MYSQL_PASSWORD
+ - MYSQL_USER=$TEST_MYSQL_USER
  ports:
- - "${TEST_DDAGENT_PORT}:8126"
- elasticsearch:
+ - "127.0.0.1:${TEST_MYSQL_PORT}:3306"
+
+ elasticsearch:
  # Note: ES 5.0 dies with error:
  # max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
  # see https://github.com/docker-library/elasticsearch/issues/98 for details
  # For now, just rely on a 2.X server.
  image: elasticsearch:2.4
- expose:
- - "9200"
- - "9300"
  ports:
- - "${TEST_ELASTICSEARCH_REST_PORT}:9200"
- - "${TEST_ELASTICSEARCH_NATIVE_PORT}:9300"
- memcached:
- image: memcached:1.5-alpine
- expose:
- - "11211"
+ - "127.0.0.1:${TEST_ELASTICSEARCH_REST_PORT}:9200"
+ - "127.0.0.1:${TEST_ELASTICSEARCH_NATIVE_PORT}:9300"
+
+ redis:
+ image: redis:3.0
  ports:
- - "${TEST_MEMCACHED_PORT}:11211"
- mongodb:
+ - "127.0.0.1:${TEST_REDIS_PORT}:6379"
+
+ mongodb:
  image: mongo:3.5
- expose:
- - "27017"
  ports:
- - "${TEST_MONGODB_PORT}:27017"
- mysql:
- image: mysql:5.6
- environment:
- - MYSQL_ROOT_PASSWORD=$TEST_MYSQL_ROOT_PASSWORD
- - MYSQL_PASSWORD=$TEST_MYSQL_PASSWORD
- - MYSQL_USER=$TEST_MYSQL_USER
- expose:
- - "3306"
+ - "127.0.0.1:${TEST_MONGODB_PORT}:27017"
+
+ memcached:
+ image: memcached:1.5-alpine
  ports:
- - "${TEST_MYSQL_PORT}:3306"
- postgres:
- image: postgres:9.6
+ - "127.0.0.1:${TEST_MEMCACHED_PORT}:11211"
+
+ ddagent:
+ image: datadog/docker-dd-agent
  environment:
- - POSTGRES_PASSWORD=$TEST_POSTGRES_PASSWORD
- - POSTGRES_USER=$TEST_POSTGRES_USER
- - POSTGRES_DB=$TEST_POSTGRES_DB
- expose:
- - "5432"
- ports:
- - "${TEST_POSTGRES_PORT}:5432"
- redis:
- image: redis:3.0
- expose:
- - "6379"
+ - DD_APM_ENABLED=true
+ - DD_BIND_HOST=0.0.0.0
+ - DD_API_KEY=invalid_key_but_this_is_fine
  ports:
- - "${TEST_REDIS_PORT}:6379"
- volumes:
- bundle-1.9:
- bundle-2.0:
- bundle-2.1:
- bundle-2.2:
- bundle-2.3:
- bundle-2.4:
+ - "127.0.0.1:8126:8126"
data/docs/GettingStarted.md CHANGED
@@ -27,7 +27,9 @@ For descriptions of terminology used in APM, take a look at the [official docume
  - [AWS](#aws)
  - [Dalli](#dalli)
  - [Elastic Search](#elastic-search)
+ - [Excon](#excon)
  - [Faraday](#faraday)
+ - [gRPC](#grpc)
  - [Grape](#grape)
  - [GraphQL](#graphql)
  - [MongoDB](#mongodb)
@@ -35,8 +37,10 @@ For descriptions of terminology used in APM, take a look at the [official docume
  - [Racecar](#racecar)
  - [Rack](#rack)
  - [Rails](#rails)
+ - [Rake](#rake)
  - [Redis](#redis)
  - [Resque](#resque)
+ - [Sequel](#sequel)
  - [Sidekiq](#sidekiq)
  - [Sinatra](#sinatra)
  - [Sucker Punch](#sucker-punch)
@@ -47,6 +51,7 @@ For descriptions of terminology used in APM, take a look at the [official docume
  - [Sampling](#sampling)
  - [Priority sampling](#priority-sampling)
  - [Distributed tracing](#distributed-tracing)
+ - [HTTP request queuing](#http-request-queuing)
  - [Processing pipeline](#processing-pipeline)
  - [Filtering](#filtering)
  - [Processing](#processing)
@@ -221,16 +226,6 @@ def finish(name, id, payload)
   end
 end
 ```
- #####Enriching traces from nested methods
-
- You can tag additional information to current active span from any method. Note however that if the method is called and there is no span currently active `active_span` will be nil.
-
- ```ruby
- # e.g. adding tag to active span
-
- current_span = Datadog.tracer.active_span
- current_span.set_tag('my_tag', 'my_value') unless current_span.nil?
- ```
 
 ## Integration instrumentation
 
@@ -253,7 +248,9 @@ For a list of available integrations, and their configuration options, please re
 | AWS | `aws` | `>= 2.0` | *[Link](#aws)* | *[Link](https://github.com/aws/aws-sdk-ruby)* |
 | Dalli | `dalli` | `>= 2.7` | *[Link](#dalli)* | *[Link](https://github.com/petergoldstein/dalli)* |
 | Elastic Search | `elasticsearch` | `>= 6.0` | *[Link](#elastic-search)* | *[Link](https://github.com/elastic/elasticsearch-ruby)* |
+ | Excon | `excon` | `>= 0.62` | *[Link](#excon)* | *[Link](https://github.com/excon/excon)* |
 | Faraday | `faraday` | `>= 0.14` | *[Link](#faraday)* | *[Link](https://github.com/lostisland/faraday)* |
+ | gRPC | `grpc` | `>= 1.10` | *[Link](#grpc)* | *[Link](https://github.com/grpc/grpc/tree/master/src/rubyc)* |
 | Grape | `grape` | `>= 1.0` | *[Link](#grape)* | *[Link](https://github.com/ruby-grape/grape)* |
 | GraphQL | `graphql` | `>= 1.7.9` | *[Link](#graphql)* | *[Link](https://github.com/rmosolgo/graphql-ruby)* |
 | MongoDB | `mongo` | `>= 2.0, < 2.5` | *[Link](#mongodb)* | *[Link](https://github.com/mongodb/mongo-ruby-driver)* |
@@ -261,8 +258,10 @@ For a list of available integrations, and their configuration options, please re
 | Racecar | `racecar` | `>= 0.3.5` | *[Link](#racecar)* | *[Link](https://github.com/zendesk/racecar)* |
 | Rack | `rack` | `>= 1.4.7` | *[Link](#rack)* | *[Link](https://github.com/rack/rack)* |
 | Rails | `rails` | `>= 3.2, < 5.2` | *[Link](#rails)* | *[Link](https://github.com/rails/rails)* |
+ | Rake | `rake` | `>= 12.0` | *[Link](#rake)* | *[Link](https://github.com/ruby/rake)* |
 | Redis | `redis` | `>= 3.2, < 4.0` | *[Link](#redis)* | *[Link](https://github.com/redis/redis-rb)* |
 | Resque | `resque` | `>= 1.0, < 2.0` | *[Link](#resque)* | *[Link](https://github.com/resque/resque)* |
+ | Sequel | `sequel` | `>= 3.41` | *[Link](#sequel)* | *[Link](https://github.com/jeremyevans/sequel)* |
 | Sidekiq | `sidekiq` | `>= 4.0` | *[Link](#sidekiq)* | *[Link](https://github.com/mperham/sidekiq)* |
 | Sinatra | `sinatra` | `>= 1.4.5` | *[Link](#sinatra)* | *[Link](https://github.com/sinatra/sinatra)* |
 | Sucker Punch | `sucker_punch` | `>= 2.0` | *[Link](#sucker-punch)* | *[Link](https://github.com/brandonhilkert/sucker_punch)* |
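The second column above is the key passed to `Datadog.configure`; as a quick illustration, the four integrations added in this release could presumably be enabled together like this (options omitted, and the corresponding gems are assumed to be loaded already):

```ruby
require 'ddtrace'

Datadog.configure do |c|
  c.use :excon   # HTTP client
  c.use :grpc    # gRPC client/server interceptors
  c.use :rake    # Rake task tracing
  c.use :sequel  # Sequel query tracing
end
```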
@@ -362,6 +361,58 @@ Where `options` is an optional `Hash` that accepts the following parameters:
 | ``service_name`` | Service name used for `elasticsearch` instrumentation | elasticsearch |
 | ``quantize`` | Hash containing options for quantization. May include `:show` with an Array of keys to not quantize (or `:all` to skip quantization), or `:exclude` with Array of keys to exclude entirely. | {} |
 
+ ### Excon
+
+ The `excon` integration is available through the `ddtrace` middleware:
+
+ ```ruby
+ require 'excon'
+ require 'ddtrace'
+
+ # Configure default Excon tracing behavior
+ Datadog.configure do |c|
+   c.use :excon, service_name: 'excon'
+ end
+
+ connection = Excon.new('https://example.com')
+ connection.get
+ ```
+
+ Where `options` is an optional `Hash` that accepts the following parameters:
+
+ | Key | Description | Default |
+ | --- | --- | --- |
+ | `service_name` | Service name for Excon instrumentation. When provided to middleware for a specific connection, it applies only to that connection object. | `'excon'` |
+ | `split_by_domain` | Uses the request domain as the service name when set to `true`. | `false` |
+ | `distributed_tracing` | Enables [distributed tracing](#distributed-tracing) | `false` |
+ | `error_handler` | A `Proc` that accepts a `response` parameter. If it evaluates to a *truthy* value, the trace span is marked as an error. By default only sets 5XX responses as errors. | `nil` |
+ | `tracer` | A `Datadog::Tracer` instance used to instrument the application. Usually you don't need to set that. | `Datadog.tracer` |
+
+ **Configuring connections to use different settings**
+
+ If you use multiple connections with Excon, you can give each of them different settings by configuring their constructors with middleware:
+
+ ```ruby
+ # Wrap the Datadog tracing middleware around the default middleware stack
+ Excon.new(
+   'http://example.com',
+   middlewares: Datadog::Contrib::Excon::Middleware.with(options).around_default_stack
+ )
+
+ # Insert the middleware into a custom middleware stack.
+ # NOTE: Trace middleware must be inserted after ResponseParser!
+ Excon.new(
+   'http://example.com',
+   middlewares: [
+     Excon::Middleware::ResponseParser,
+     Datadog::Contrib::Excon::Middleware.with(options),
+     Excon::Middleware::Idempotent
+   ]
+ )
+ ```
+
+ Where `options` is a Hash that contains any of the parameters listed in the table above.
+
 ### Faraday
 
 The `faraday` integration is available through the `ddtrace` middleware:
@@ -384,12 +435,63 @@ connection.get('/foo')
 
 Where `options` is an optional `Hash` that accepts the following parameters:
 
- | Key | Default | Description |
+ | Key | Description | Default |
 | --- | --- | --- |
- | `service_name` | Global service name (default: `faraday`) | Service name for this specific connection object. |
- | `split_by_domain` | `false` | Uses the request domain as the service name when set to `true`. |
- | `distributed_tracing` | `false` | Propagates tracing context along the HTTP request when set to `true`. |
- | `error_handler` | ``5xx`` evaluated as errors | A callable object that receives a single argument – the request environment. If it evaluates to a *truthy* value, the trace span is marked as an error. |
+ | `service_name` | Service name for Faraday instrumentation. When provided to middleware for a specific connection, it applies only to that connection object. | `'faraday'` |
+ | `split_by_domain` | Uses the request domain as the service name when set to `true`. | `false` |
+ | `distributed_tracing` | Enables [distributed tracing](#distributed-tracing) | `false` |
+ | `error_handler` | A `Proc` that accepts a `response` parameter. If it evaluates to a *truthy* value, the trace span is marked as an error. By default only sets 5XX responses as errors. | ``5xx`` evaluated as errors |
+ | `tracer` | A `Datadog::Tracer` instance used to instrument the application. Usually you don't need to set that. | `Datadog.tracer` |
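Since `error_handler` is only described in prose above, here is a hedged sketch of widening the default 5XX rule, assuming the callable is handed the Faraday response environment (as the previous wording of this table described):

```ruby
Datadog.configure do |c|
  c.use :faraday, error_handler: lambda { |env|
    # Treat 404s as errors in addition to the default 5XX range
    env[:status] == 404 || (500..599).cover?(env[:status])
  }
end
```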
+
+ ### gRPC
+
+ The `grpc` integration adds both client and server interceptors, which run as middleware prior to executing the service's remote procedure call. As gRPC applications are often distributed, the integration shares trace information between client and server.
+
+ To setup your integration, use the ``Datadog.configure`` method like so:
+
+ ```ruby
+ require 'grpc'
+ require 'ddtrace'
+
+ Datadog.configure do |c|
+   c.use :grpc, options
+ end
+
+ # run your application normally
+
+ # server side
+ server = GRPC::RpcServer.new
+ server.add_http2_port('localhost:50051', :this_port_is_insecure)
+ server.handle(Demo)
+ server.run_till_terminated
+
+ # client side
+ client = Demo.rpc_stub_class.new('localhost:50051', :this_channel_is_insecure)
+ client.my_endpoint(DemoMessage.new(contents: 'hello!'))
+ ```
+
+ In situations where you have multiple clients calling multiple distinct services, you may pass the Datadog interceptor directly, like so
+
+ ```ruby
+ configured_interceptor = Datadog::Contrib::GRPC::DatadogInterceptor::Client.new do |c|
+   c.service_name = "Alternate"
+ end
+
+ alternate_client = Demo::Echo::Service.rpc_stub_class.new(
+   'localhost:50052',
+   :this_channel_is_insecure,
+   :interceptors => [configured_interceptor]
+ )
+ ```
+
+ The integration will ensure that the ``configured_interceptor`` establishes a unique tracing setup for that client instance.
+
+ The following configuration options are supported:
+
+ | Key | Description | Default |
+ | --- | --- | --- |
+ | ``service_name`` | Service name used for `grpc` instrumentation | grpc |
+ | ``tracer`` | Datadog tracer used for `grpc` instrumentation | Datadog.tracer |
 
 ### Grape
 
@@ -524,10 +626,8 @@ Where `options` is an optional `Hash` that accepts the following parameters:
 
 | Key | Description | Default |
 | --- | --- | --- |
- | ``service_name`` | Service name used for `http` instrumentation | net/http |
- | ``distributed_tracing`` | Enables distributed tracing | ``false`` |
- | ``tracer`` | A ``Datadog::Tracer`` instance used to instrument the application. Usually you don't need to set that. | ``Datadog.tracer`` |
-
+ | ``service_name`` | Service name used for `http` instrumentation | http |
+ | ``distributed_tracing`` | Enables [distributed tracing](#distributed-tracing) | ``false`` |
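For reference, enabling the integration with these options would presumably look like the other integrations in this guide; the target URL is just an example:

```ruby
require 'net/http'
require 'ddtrace'

Datadog.configure do |c|
  # service_name defaults to 'http'; distributed tracing stays off unless enabled
  c.use :http, service_name: 'http', distributed_tracing: true
end

Net::HTTP.get(URI('http://example.com/'))
```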
 
 If you wish to configure each connection object individually, you may use the ``Datadog.configure`` as it follows:
 
@@ -594,6 +694,9 @@ Where `options` is an optional `Hash` that accepts the following parameters:
 | ``quantize.fragment`` | Defines behavior for URL fragments. Removes fragments by default. May be `:show` to show URL fragments. Option must be nested inside the `quantize` option. | ``nil`` |
 | ``application`` | Your Rack application. Necessary for enabling middleware resource names. | ``nil`` |
 | ``tracer`` | A ``Datadog::Tracer`` instance used to instrument the application. Usually you don't need to set that. | ``Datadog.tracer`` |
+ | ``request_queuing`` | Track HTTP request time spent in the queue of the frontend server. See [HTTP request queuing](#http-request-queuing) for setup details. Set to `true` to enable. | ``false`` |
+ | ``web_service_name`` | Service name for frontend server request queuing spans. (e.g. `'nginx'`) | ``'web-server'`` |
+ | ``headers`` | Hash of HTTP request or response headers to add as tags to the `rack.request`. Accepts `request` and `response` keys with Array values e.g. `['Last-Modified']`. Adds `http.request.headers.*` and `http.response.headers.*` tags respectively. | ``{ response: ['Content-Type', 'X-Request-ID'] }`` |
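Tying the three new options together, a hedged sketch of a `config.ru` that enables request queuing and header tagging; the option keys come from the table above, while the service and header names are merely examples:

```ruby
# config.ru
require 'ddtrace'

Datadog.configure do |c|
  c.use :rack,
        request_queuing: true,      # include time spent queued in the frontend server
        web_service_name: 'nginx',  # name for the frontend queuing spans
        headers: {
          request: ['Content-Type'],
          response: ['Content-Type', 'X-Request-ID']
        }
end

use Datadog::Contrib::Rack::TraceMiddleware
run ->(_env) { [200, { 'Content-Type' => 'text/plain' }, ['OK']] }
```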
 
 **Configuring URL quantization behavior**
 
@@ -653,6 +756,71 @@ Where `options` is an optional `Hash` that accepts the following parameters:
 | ``template_base_path`` | Used when the template name is parsed. If you don't store your templates in the ``views/`` folder, you may need to change this value | ``views/`` |
 | ``tracer`` | A ``Datadog::Tracer`` instance used to instrument the application. Usually you don't need to set that. | ``Datadog.tracer`` |
 
+ ### Rake
+
+ You can add instrumentation around your Rake tasks by activating the `rake` integration. Each task and its subsequent subtasks will be traced.
+
+ To activate Rake task tracing, add the following to your `Rakefile`:
+
+ ```ruby
+ # At the top of your Rakefile:
+ require 'rake'
+ require 'ddtrace'
+
+ Datadog.configure do |c|
+   c.use :rake, options
+ end
+
+ task :my_task do
+   # Do something task work here...
+ end
+
+ Rake::Task['my_task'].invoke
+ ```
+
+ Where `options` is an optional `Hash` that accepts the following parameters:
+
+ | Key | Description | Default |
+ | --- | --- | --- |
+ | ``enabled`` | Defines whether Rake tasks should be traced. Useful for temporarily disabling tracing. `true` or `false` | ``true`` |
+ | ``quantize`` | Hash containing options for quantization of task arguments. See below for more details and examples. | ``{}`` |
+ | ``service_name`` | Service name which the Rake task traces should be grouped under. | ``rake`` |
+ | ``tracer`` | A ``Datadog::Tracer`` instance used to instrument the application. Usually you don't need to set that. | ``Datadog.tracer`` |
+
+ **Configuring task quantization behavior**
+
+ ```ruby
+ Datadog.configure do |c|
+   # Given a task that accepts :one, :two, :three...
+   # Invoked with 'foo', 'bar', 'baz'.
+
+   # Default behavior: all arguments are quantized.
+   # `rake.invoke.args` tag --> ['?']
+   # `rake.execute.args` tag --> { one: '?', two: '?', three: '?' }
+   c.use :rake
+
+   # Show values for any argument matching :two exactly
+   # `rake.invoke.args` tag --> ['?']
+   # `rake.execute.args` tag --> { one: '?', two: 'bar', three: '?' }
+   c.use :rake, quantize: { args: { show: [:two] } }
+
+   # Show all values for all arguments.
+   # `rake.invoke.args` tag --> ['foo', 'bar', 'baz']
+   # `rake.execute.args` tag --> { one: 'foo', two: 'bar', three: 'baz' }
+   c.use :rake, quantize: { args: { show: :all } }
+
+   # Totally exclude any argument matching :three exactly
+   # `rake.invoke.args` tag --> ['?']
+   # `rake.execute.args` tag --> { one: '?', two: '?' }
+   c.use :rake, quantize: { args: { exclude: [:three] } }
+
+   # Remove the arguments entirely
+   # `rake.invoke.args` tag --> ['?']
+   # `rake.execute.args` tag --> {}
+   c.use :rake, quantize: { args: { exclude: :all } }
+ end
+ ```
+
 ### Redis
 
 The Redis integration will trace simple calls as well as pipelines.
@@ -715,6 +883,54 @@ Where `options` is an optional `Hash` that accepts the following parameters:
 | ``service_name`` | Service name used for `resque` instrumentation | resque |
 | ``workers`` | An array including all worker classes you want to trace (eg ``[MyJob]``) | ``[]`` |
 
+ ### Sequel
+
+ The Sequel integration traces queries made to your database.
+
+ ```ruby
+ require 'sequel'
+ require 'ddtrace'
+
+ # Connect to database
+ database = Sequel.sqlite
+
+ # Create a table
+ database.create_table :articles do
+   primary_key :id
+   String :name
+ end
+
+ Datadog.configure do |c|
+   c.use :sequel, options
+ end
+
+ # Perform a query
+ articles = database[:articles]
+ articles.all
+ ```
+
+ Where `options` is an optional `Hash` that accepts the following parameters:
+
+ | Key | Description | Default |
+ | --- | --- | --- |
+ | ``service_name`` | Service name used for `sequel.query` spans. | Name of database adapter (e.g. `mysql2`) |
+ | ``tracer`` | A ``Datadog::Tracer`` instance used to instrument the application. Usually you don't need to set that. | ``Datadog.tracer`` |
+
+ Only Ruby 2.0+ is supported.
+
+ **Configuring databases to use different settings**
+
+ If you use multiple databases with Sequel, you can give each of them different settings by configuring their respective `Sequel::Database` objects:
+
+ ```ruby
+ sqlite_database = Sequel.sqlite
+ postgres_database = Sequel.connect('postgres://user:password@host:port/database_name')
+
+ # Configure each database with different service names
+ Datadog.configure(sqlite_database, service_name: 'my-sqlite-db')
+ Datadog.configure(postgres_database, service_name: 'my-postgres-db')
+ ```
+
 
 ### Sidekiq
 The Sidekiq integration is a server-side middleware which will trace job executions.
@@ -1007,6 +1223,7 @@ Many integrations included in `ddtrace` support distributed tracing. Distributed
 
 For more details on how to activate distributed tracing for integrations, see their documentation:
 
+ - [Excon](#excon)
 - [Faraday](#faraday)
 - [Net/HTTP](#nethttp)
 - [Rack](#rack)
@@ -1036,6 +1253,30 @@ Datadog.tracer.trace('web.work') do |span|
 end
 ```
 
+ ### HTTP request queuing
+
+ Traces that originate from HTTP requests can be configured to include the time spent in a frontend web server or load balancer queue, before the request reaches the Ruby application.
+
+ This functionality is **experimental** and deactivated by default.
+
+ To activate this feature, you must add a ``X-Request-Start`` or ``X-Queue-Start`` header from your web server (i.e. Nginx). The following is an Nginx configuration example:
+
+ ```
+ # /etc/nginx/conf.d/ruby_service.conf
+ server {
+   listen 8080;
+
+   location / {
+     proxy_set_header X-Request-Start "t=${msec}";
+     proxy_pass http://web:3000;
+   }
+ }
+ ```
+
+ Then you must enable the request queuing feature in the integration handling the request.
+
+ For Rack based applications, see the [documentation](#rack) for details for enabling this feature.
+
 
 ### Processing Pipeline
 Some applications might require that traces be altered or filtered out before they are sent upstream. The processing pipeline allows users to create *processors* to define such behavior.