apollo-studio-tracing 1.0.0.beta.1

checksums.yaml.gz ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 5e5e5b59a3310a3863053df2f9a0534ad3b0ce2faa60b6ce69803be0be1c9a24
+   data.tar.gz: 604ad5829a5fc0034d2afed4ce72aff5679648fd8a3976683efc9a2bd36b3e7e
+ SHA512:
+   metadata.gz: d4439961889fdd4f5bbd6810b9293e2fd4eeb0e9056faa096e406ef32994efde30ae0b85d1b7930336d5b25c0195b24f386e40c228c53d569f05c0db3cff74fc
+   data.tar.gz: 82036596d4287c2a10b82b00ef79e602430efb0b9d95682ba87d1a109d62498fc662a3a0d5d5ddd6522307276858ebb4e11d9e9f3d48ff5de6c98d0cae8008db
data/CHANGELOG.md ADDED
@@ -0,0 +1,27 @@
+ # [1.0.0-beta.1](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/compare/v0.1.0...v1.0.0-beta.1) (2020-10-23)
+
+
+ ### Bug Fixes
+
+ * revert Rails.logger reference, doesn't work correctly in non-Rails environment ([eeea691](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/eeea6913be0171db0b45c58ff6b34dddbbea764b))
+ * **debug:** add debug queueing ([074bb79](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/074bb79aef78a7b5f65744dd5a6ea4a913de4338))
+ * **tracing:** ensure thread started when queueing traces ([4bb2238](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/4bb22387fc5230909b1201a58b7be922a153dcb7))
+ * properly capture errors and record them on traces ([3ba25ff](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/3ba25fff60efab9a98f6212192b7543de9a19057))
+ * use Rails.logger if it exists ([b0a7103](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/b0a7103882cda74dbcf39cf6f84339e655e3506b))
+ * **tracing:** fix multiplexed tracing ([43b6c86](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/43b6c86006be2f300211b2c2bccdf5b8d0ffc658))
+ * remove NotInstalledError, remove unused src file ([dc8e31d](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/dc8e31da901c998ef1bafc6a5b28aae51f3ee0c6))
+
+
+ ### chore
+
+ * remove debug statements, initial release! ([5f634c0](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/commit/5f634c05f5560fa6cf9f68cfb5837c715828214c))
+
+
+ ### BREAKING CHANGES
+
+ * Initial release. Substantially divergent from `apollo-federation-ruby`, so marking
+ this as a breaking change.
+
+ ## 1.0.0 (2020-10-23)
+
+ - First release, based on https://github.com/Gusto/apollo-federation-ruby
data/LICENSE ADDED
@@ -0,0 +1,23 @@
+ MIT License
+
+ Copyright (c) 2017 Reginald
+ Copyright (c) 2019 Gusto
+ Copyright (c) 2020 Enjoy Technology Inc.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,71 @@
+ # apollo-studio-tracing
+
+ [![CircleCI](https://circleci.com/gh/EnjoyTech/apollo-studio-tracing-ruby/tree/master.svg?style=svg)](https://circleci.com/gh/EnjoyTech/apollo-studio-tracing-ruby/tree/master)
+
+ This gem extends the [GraphQL Ruby](http://graphql-ruby.org/) gem to add support for sending trace data to [Apollo Studio](https://www.apollographql.com/docs/studio/). It is intended to be a full-featured replacement for the unmaintained [apollo-tracing-ruby](https://github.com/uniiverse/apollo-tracing-ruby) gem, and it draws heavily on the work done in the Gusto [apollo-federation-ruby](https://github.com/Gusto/apollo-federation-ruby) gem.
+
+ ## DISCLAIMER
+
+ This gem is still in a beta stage and may have some bugs or incompatibilities. See the [Known Issues and Limitations](#known-issues-and-limitations) below. If you run into any problems, please [file an issue](https://github.com/EnjoyTech/apollo-studio-tracing-ruby/issues).
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'apollo-studio-tracing'
+ ```
+
+ And then execute:
+
+ ```
+ $ bundle install
+ ```
+
+ Or install it yourself as:
+
+ ```
+ $ gem install apollo-studio-tracing
+ ```
+
+ ## Getting Started
+
+ 1. Add `use ApolloStudioTracing` to your schema class (see the sketch below).
+ 2. Change your controller to add `apollo_tracing_enabled: true` to the execution context. Ensure that `apollo_client_name` and `apollo_client_version` are set as well so that client information is reported correctly in Studio:
+
+ ```ruby
+ def execute
+   # ...
+   context = {
+     apollo_client_name: request.headers["apollographql-client-name"],
+     apollo_client_version: request.headers["apollographql-client-version"],
+     apollo_tracing_enabled: Rails.env.production?,
+   }
+   # ...
+ end
+ ```
+
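For step 1, a minimal schema class might look like the following sketch (it is not part of the released README); `MySchema` and `QueryType` are placeholder names, and only the `use ApolloStudioTracing` call comes from this gem:

```ruby
# Hypothetical schema wiring for step 1; class names are placeholders.
class MySchema < GraphQL::Schema
  query QueryType

  # Registers an ApolloStudioTracing::Tracer with the schema and starts its
  # background trace channel (see lib/apollo-studio-tracing.rb further down).
  use ApolloStudioTracing
end
```
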
+ ### Updating the Apollo .proto definition
+
+ Install [Google Protocol Buffers](https://github.com/protocolbuffers/protobuf) via Homebrew
+
+ ```
+ $ brew install protobuf
+ ```
+
+ Regenerate the Ruby protos with the included script:
+
+ ```
+ $ bin/generate-proto.sh
+ Removing old client
+ Downloading latest Apollo Protobuf IDL
+ Generating Ruby client stubs
+ ```
+
+ ## Known Issues and Limitations
+
+ - Only works with class-based schemas; the legacy `.define` API will not be supported
+
+ ## Maintainers
+
+ - [Luke Sanwick](https://github.com/lsanwick)
data/bin/generate-proto.sh ADDED
@@ -0,0 +1,15 @@
+ #!/usr/bin/env bash
+
+ set -eo pipefail
+
+ DIR=`dirname "$0"`
+ OUTPUT_DIR=$DIR/../lib/apollo-studio-tracing/proto
+
+ echo "Removing old client"
+ rm -f $OUTPUT_DIR/apollo.proto $OUTPUT_DIR/apollo_pb.rb
+
+ echo "Downloading latest Apollo Protobuf IDL"
+ curl --silent --output $OUTPUT_DIR/apollo.proto https://raw.githubusercontent.com/apollographql/apollo-server/main/packages/apollo-reporting-protobuf/src/reports.proto
+
+ echo "Generating Ruby client stubs"
+ protoc -I $OUTPUT_DIR --ruby_out $OUTPUT_DIR $OUTPUT_DIR/apollo.proto
data/bin/rspec ADDED
@@ -0,0 +1,29 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ #
+ # This file was generated by Bundler.
+ #
+ # The application 'rspec' is installed as part of a gem, and
+ # this file is here to facilitate running it.
+ #
+
+ require 'pathname'
+ ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../../Gemfile',
+   Pathname.new(__FILE__).realpath,)
+
+ bundle_binstub = File.expand_path('bundle', __dir__)
+
+ if File.file?(bundle_binstub)
+   if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/
+     load(bundle_binstub)
+   else
+     abort("Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.
+ Replace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.")
+   end
+ end
+
+ require 'rubygems'
+ require 'bundler/setup'
+
+ load Gem.bin_path('rspec-core', 'rspec')
data/lib/apollo-studio-tracing.rb ADDED
@@ -0,0 +1,44 @@
+ # frozen_string_literal: true
+
+ require 'apollo-studio-tracing/proto'
+ require 'apollo-studio-tracing/node_map'
+ require 'apollo-studio-tracing/tracer'
+
+ module ApolloStudioTracing
+   extend self
+
+   KEY = :apollo_trace
+   DEBUG_KEY = "#{KEY}_debug".to_sym
+
+   attr_accessor :logger
+
+   # TODO: Initialize this to Rails.logger in a Railtie
+   self.logger = Logger.new(STDOUT)
+
+   def use(schema, **options)
+     tracer = ApolloStudioTracing::Tracer.new(**options)
+     # TODO: Shutdown tracers when reloading code in Rails
+     # (although it's unlikely you'll have Apollo Tracing enabled in development)
+     tracers << tracer
+     schema.tracer(tracer)
+     tracer.start_trace_channel
+   end
+
+   def flush
+     tracers.each(&:flush_trace_channel)
+   end
+
+   def shutdown
+     tracers.each(&:shutdown_trace_channel)
+   end
+
+   trap('SIGINT') do
+     Thread.new { shutdown }
+   end
+
+   private
+
+   def tracers
+     @tracers ||= []
+   end
+ end
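A brief usage sketch (not a file shipped in the gem), assuming some schema has already called `use ApolloStudioTracing` as shown in the README; the `at_exit` hook is only one possible place to trigger the shutdown:

```ruby
# Drive the public entry points defined above.
ApolloStudioTracing.flush    # calls flush_trace_channel on every registered tracer
ApolloStudioTracing.shutdown # calls shutdown_trace_channel on every registered tracer

# e.g. make sure queued traces are reported when a non-Rails process exits.
at_exit { ApolloStudioTracing.shutdown }
```
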
data/lib/apollo-studio-tracing/api.rb ADDED
@@ -0,0 +1,63 @@
+ # frozen_string_literal: true
+
+ require 'net/http'
+ require 'openssl'
+ require 'uri'
+ require 'zlib'
+
+ module ApolloStudioTracing
+   module API
+     extend self
+
+     APOLLO_URL = 'https://engine-report.apollodata.com/api/ingress/traces'
+     APOLLO_URI = ::URI.parse(APOLLO_URL)
+     UploadAttemptError = Class.new(StandardError)
+     RetryableUploadAttemptError = Class.new(UploadAttemptError)
+
+     def upload(report_data, max_attempts:, min_retry_delay_secs:, **options)
+       attempt ||= 0
+       attempt_upload(report_data, **options)
+     rescue UploadAttemptError => e
+       attempt += 1
+       if e.is_a?(RetryableUploadAttemptError) && attempt < max_attempts
+         retry_delay = min_retry_delay_secs * 2**attempt
+         ApolloStudioTracing.logger.warn(
+           "Attempt to send Apollo trace report failed and will be retried in #{retry_delay} " \
+           "secs: #{e.message}",
+         )
+         sleep(retry_delay)
+         retry
+       else
+         ApolloStudioTracing.logger.warn("Failed to send Apollo trace report: #{e.message}")
+       end
+     end
+
+     private
+
+     def attempt_upload(report_data, compress:, api_key:)
+       body = compress ? gzip(report_data) : report_data
+       headers = { 'X-Api-Key' => api_key }
+       headers['Content-Encoding'] = 'gzip' if compress
+       result = Net::HTTP.post(APOLLO_URI, body, headers)
+
+       if result.is_a?(Net::HTTPServerError)
+         raise RetryableUploadAttemptError, "#{result.code} #{result.message} - #{result.body}"
+       end
+
+       if !result.is_a?(Net::HTTPSuccess)
+         raise UploadAttemptError, "#{result.message} (#{result.code}) - #{result.body}"
+       end
+     rescue IOError, SocketError, SystemCallError, OpenSSL::OpenSSLError => e
+       raise RetryableUploadAttemptError, "#{e.class} - #{e.message}"
+     end
+
+     def gzip(data)
+       output = StringIO.new
+       output.set_encoding('BINARY')
+       gz = Zlib::GzipWriter.new(output)
+       gz.write(data)
+       gz.close
+       output.string
+     end
+   end
+ end
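To illustrate the retry behaviour above (a sketch, not part of the gem): `report_bytes` stands in for a serialized `Report` protobuf, the option values are arbitrary, and reading the key from `ENV['APOLLO_KEY']` is an assumption of this example, not something the file prescribes:

```ruby
# Hypothetical call into the uploader above.
ApolloStudioTracing::API.upload(
  report_bytes,                # assumed: a serialized ApolloStudioTracing::Report
  max_attempts: 5,             # retryable failures retry while the attempt count stays below this
  min_retry_delay_secs: 0.1,   # delay doubles per attempt: 0.2s, 0.4s, 0.8s, ...
  compress: true,              # gzip the body and send Content-Encoding: gzip
  api_key: ENV['APOLLO_KEY'],  # sent as the X-Api-Key header (env var name is an assumption)
)
```
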
data/lib/apollo-studio-tracing/node_map.rb ADDED
@@ -0,0 +1,79 @@
+ # frozen_string_literal: true
+
+ require 'apollo-studio-tracing/proto'
+
+ module ApolloStudioTracing
+   # NodeMap stores a flat map of trace nodes by stringified paths
+   # (i.e. "_entities.0.id") for fast lookup when we need to alter
+   # nodes (to add end times or errors.)
+   #
+   # When adding a node to the NodeMap, it will create any missing
+   # parent nodes and ensure the tree is consistent.
+   #
+   # Only the "root" node is attached to the trace extension.
+   class NodeMap
+     ROOT_KEY = ''
+
+     attr_reader :nodes
+     def initialize
+       @nodes = {
+         ROOT_KEY => ApolloStudioTracing::Node.new,
+       }
+     end
+
+     def root
+       nodes[ROOT_KEY]
+     end
+
+     def node_for_path(path)
+       nodes[array_wrap(path).join('.')]
+     end
+
+     def add(path)
+       node = ApolloStudioTracing::Node.new
+       node_key = path.join('.')
+       key = path.last
+
+       case key
+       when String # field
+         node.response_name = key
+       when Integer # index of an array
+         node.index = key
+       end
+
+       nodes[node_key] = node
+
+       # find or create a parent node and add this node to its children
+       parent_path = path[0..-2]
+       parent_node = nodes[parent_path.join('.')] || add(parent_path)
+       parent_node.child << node
+
+       node
+     end
+
+     def add_error(error)
+       path = array_wrap(error['path']).join('.')
+       node = nodes[path] || root
+
+       locations = array_wrap(error['locations']).map do |location|
+         ApolloStudioTracing::Location.new(location)
+       end
+
+       node.error << ApolloStudioTracing::Error.new(
+         message: error['message'],
+         location: locations,
+         json: JSON.dump(error),
+       )
+     end
+
+     def array_wrap(object)
+       if object.nil?
+         []
+       elsif object.respond_to?(:to_ary)
+         object.to_ary || [object]
+       else
+         [object]
+       end
+     end
+   end
+ end
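A quick illustration of the path keys described in the class comment above (a sketch with made-up field names, not part of the gem):

```ruby
# Paths are joined with "." to form the lookup keys.
map = ApolloStudioTracing::NodeMap.new
map.add(['posts'])             # stored under "posts", parented to the root node
map.add(['posts', 0])          # stored under "posts.0" with index = 0
map.add(['posts', 0, 'title']) # stored under "posts.0.title" with response_name = "title"

map.node_for_path(['posts', 0, 'title']) # => the Trace::Node created above
```
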
data/lib/apollo-studio-tracing/proto.rb ADDED
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ require_relative 'proto/apollo_pb'
+
+ module ApolloStudioTracing
+   Trace = ::Mdg::Engine::Proto::Trace
+   TracesAndStats = ::Mdg::Engine::Proto::TracesAndStats
+   Node = ::Mdg::Engine::Proto::Trace::Node
+   Location = ::Mdg::Engine::Proto::Trace::Location
+   Error = ::Mdg::Engine::Proto::Trace::Error
+   ReportHeader = ::Mdg::Engine::Proto::ReportHeader
+   Report = ::Mdg::Engine::Proto::Report
+ end
data/lib/apollo-studio-tracing/proto/apollo.proto ADDED
@@ -0,0 +1,381 @@
+ syntax = "proto3";
+
+ package mdg.engine.proto;
+
+ import "google/protobuf/timestamp.proto";
+
+ message Trace {
+   message CachePolicy {
+     enum Scope {
+       UNKNOWN = 0;
+       PUBLIC = 1;
+       PRIVATE = 2;
+     }
+
+     Scope scope = 1;
+     int64 max_age_ns = 2; // use 0 for absent, -1 for 0
+   }
+
+   message Details {
+     // The variables associated with this query (unless the reporting agent is
+     // configured to keep them all private). Values are JSON: ie, strings are
+     // enclosed in double quotes, etc. The value of a private variable is
+     // the empty string.
+     map<string, string> variables_json = 4;
+     // Deprecated. Engineproxy did not encode variable values as JSON, so you
+     // couldn't tell numbers from numeric strings. Send variables_json instead.
+     map<string, bytes> deprecated_variables = 1;
+     // This is deprecated and only used for legacy applications
+     // don't include this in traces inside a FullTracesReport; the operation
+     // name for these traces comes from the key of the traces_per_query map.
+     string operation_name = 3;
+   }
+
+   message Error {
+     string message = 1; // required
+     repeated Location location = 2;
+     uint64 time_ns = 3;
+     string json = 4;
+   }
+
+   message HTTP {
+     message Values {
+       repeated string value = 1;
+     }
+
+     enum Method {
+       UNKNOWN = 0;
+       OPTIONS = 1;
+       GET = 2;
+       HEAD = 3;
+       POST = 4;
+       PUT = 5;
+       DELETE = 6;
+       TRACE = 7;
+       CONNECT = 8;
+       PATCH = 9;
+     }
+     Method method = 1;
+     string host = 2;
+     string path = 3;
+
+     // Should exclude manual blacklist ("Auth" by default)
+     map<string, Values> request_headers = 4;
+     map<string, Values> response_headers = 5;
+
+     uint32 status_code = 6;
+
+     bool secure = 8; // TLS was used
+     string protocol = 9; // by convention "HTTP/1.0", "HTTP/1.1", "HTTP/2" or "h2"
+   }
+
+   message Location {
+     uint32 line = 1;
+     uint32 column = 2;
+   }
+
+   // We store information on each resolver execution as a Node on a tree.
+   // The structure of the tree corresponds to the structure of the GraphQL
+   // response; it does not indicate the order in which resolvers were
+   // invoked. Note that nodes representing indexes (and the root node)
+   // don't contain all Node fields (eg types and times).
+   message Node {
+     // The name of the field (for Nodes representing a resolver call) or the
+     // index in a list (for intermediate Nodes representing elements of a list).
+     // field_name is the name of the field as it appears in the GraphQL
+     // response: ie, it may be an alias. (In that case, the original_field_name
+     // field holds the actual field name from the schema.) In any context where
+     // we're building up a path, we use the response_name rather than the
+     // original_field_name.
+     oneof id {
+       string response_name = 1;
+       uint32 index = 2;
+     }
+
+     string original_field_name = 14;
+
+     // The field's return type; e.g. "String!" for User.email:String!
+     string type = 3;
+
+     // The field's parent type; e.g. "User" for User.email:String!
+     string parent_type = 13;
+
+     CachePolicy cache_policy = 5;
+
+     // relative to the trace's start_time, in ns
+     uint64 start_time = 8;
+     // relative to the trace's start_time, in ns
+     uint64 end_time = 9;
+
+     repeated Error error = 11;
+     repeated Node child = 12;
+
+     reserved 4;
+   }
+
+   // represents a node in the query plan, under which there is a trace tree for that service fetch.
+   // In particular, each fetch node represents a call to an implementing service, and calls to implementing
+   // services may not be unique. See https://github.com/apollographql/apollo-server/blob/main/packages/apollo-gateway/src/QueryPlan.ts
+   // for more information and details.
+   message QueryPlanNode {
+     // This represents a set of nodes to be executed sequentially by the Gateway executor
+     message SequenceNode {
+       repeated QueryPlanNode nodes = 1;
+     }
+     // This represents a set of nodes to be executed in parallel by the Gateway executor
+     message ParallelNode {
+       repeated QueryPlanNode nodes = 1;
+     }
+     // This represents a node to send an operation to an implementing service
+     message FetchNode {
+       // XXX When we want to include more details about the sub-operation that was
+       // executed against this service, we should include that here in each fetch node.
+       // This might include an operation signature, requires directive, reference resolutions, etc.
+       string service_name = 1;
+
+       bool trace_parsing_failed = 2;
+
+       // This Trace only contains start_time, end_time, duration_ns, and root;
+       // all timings were calculated **on the federated service**, and clock skew
+       // will be handled by the ingress server.
+       Trace trace = 3;
+
+       // relative to the outer trace's start_time, in ns, measured in the gateway.
+       uint64 sent_time_offset = 4;
+
+       // Wallclock times measured in the gateway for when this operation was
+       // sent and received.
+       google.protobuf.Timestamp sent_time = 5;
+       google.protobuf.Timestamp received_time = 6;
+     }
+
+     // This node represents a way to reach into the response path and attach related entities.
+     // XXX Flatten is really not the right name and this node may be renamed in the query planner.
+     message FlattenNode {
+       repeated ResponsePathElement response_path = 1;
+       QueryPlanNode node = 2;
+     }
+     message ResponsePathElement {
+       oneof id {
+         string field_name = 1;
+         uint32 index = 2;
+       }
+     }
+     oneof node {
+       SequenceNode sequence = 1;
+       ParallelNode parallel = 2;
+       FetchNode fetch = 3;
+       FlattenNode flatten = 4;
+     }
+   }
+
+   // Wallclock time when the trace began.
+   google.protobuf.Timestamp start_time = 4; // required
+   // Wallclock time when the trace ended.
+   google.protobuf.Timestamp end_time = 3; // required
+   // High precision duration of the trace; may not equal end_time-start_time
+   // (eg, if your machine's clock changed during the trace).
+   uint64 duration_ns = 11; // required
+   // A tree containing information about all resolvers run directly by this
+   // service, including errors.
+   Node root = 14;
+
+   // -------------------------------------------------------------------------
+   // Fields below this line are *not* included in federated traces (the traces
+   // sent from federated services to the gateway).
+
+   // In addition to details.raw_query, we include a "signature" of the query,
+   // which can be normalized: for example, you may want to discard aliases, drop
+   // unused operations and fragments, sort fields, etc. The most important thing
+   // here is that the signature match the signature in StatsReports. In
+   // StatsReports signatures show up as the key in the per_query map (with the
+   // operation name prepended). The signature should be a valid GraphQL query.
+   // All traces must have a signature; if this Trace is in a FullTracesReport
+   // that signature is in the key of traces_per_query rather than in this field.
+   // Engineproxy provides the signature in legacy_signature_needs_resigning
+   // instead.
+   string signature = 19;
+
+   Details details = 6;
+
+   // Note: engineproxy always sets client_name, client_version, and client_address to "none".
+   // apollo-engine-reporting allows for them to be set by the user.
+   string client_name = 7;
+   string client_version = 8;
+   string client_address = 9;
+   string client_reference_id = 23;
+
+   HTTP http = 10;
+
+   CachePolicy cache_policy = 18;
+
+   // If this Trace was created by a gateway, this is the query plan, including
+   // sub-Traces for federated services. Note that the 'root' tree on the
+   // top-level Trace won't contain any resolvers (though it could contain errors
+   // that occurred in the gateway itself).
+   QueryPlanNode query_plan = 26;
+
+   // Was this response served from a full query response cache? (In that case
+   // the node tree will have no resolvers.)
+   bool full_query_cache_hit = 20;
+
+   // Was this query specified successfully as a persisted query hash?
+   bool persisted_query_hit = 21;
+   // Did this query contain both a full query string and a persisted query hash?
+   // (This typically means that a previous request was rejected as an unknown
+   // persisted query.)
+   bool persisted_query_register = 22;
+
+   // Was this operation registered and a part of the safelist?
+   bool registered_operation = 24;
+
+   // Was this operation forbidden due to lack of safelisting?
+   bool forbidden_operation = 25;
+
+   // --------------------------------------------------------------
+   // Fields below this line are only set by the old Go engineproxy.
+
+   // Older agents (eg the Go engineproxy) relied to some degree on the Engine
+   // backend to run their own semi-compatible implementation of a specific
+   // variant of query signatures. The backend does not do this for new agents (which
+   // set the above 'signature' field). It used to still "re-sign" signatures
+   // from engineproxy, but we've now simplified the backend to no longer do this.
+   // Deprecated and ignored in FullTracesReports.
+   string legacy_signature_needs_resigning = 5;
+
+   // removed: Node parse = 12; Node validate = 13;
+   // Id128 server_id = 1; Id128 client_id = 2;
+   reserved 12, 13, 1, 2;
+ }
+
+ // The `service` value embedded within the header key is not guaranteed to contain an actual service,
+ // and, in most cases, the service information is trusted to come from upstream processing. If the
+ // service _is_ specified in this header, then it is checked to match the context that is reporting it.
+ // Otherwise, the service information is deduced from the token context of the reporter and then sent
+ // along via other mechanisms (in Kafka, the `ReportKafkaKey). The other information (hostname,
+ // agent_version, etc.) is sent by the Apollo Engine Reporting agent, but we do not currently save that
+ // information to any of our persistent storage.
+ message ReportHeader {
+   // eg "host-01.example.com"
+   string hostname = 5;
+
+   // eg "engineproxy 0.1.0"
+   string agent_version = 6; // required
+   // eg "prod-4279-20160804T065423Z-5-g3cf0aa8" (taken from `git describe --tags`)
+   string service_version = 7;
+   // eg "node v4.6.0"
+   string runtime_version = 8;
+   // eg "Linux box 4.6.5-1-ec2 #1 SMP Mon Aug 1 02:31:38 PDT 2016 x86_64 GNU/Linux"
+   string uname = 9;
+   // eg "current", "prod"
+   string schema_tag = 10;
+   // An id that is used to represent the schema to Apollo Graph Manager
+   // Using this in place of what used to be schema_hash, since that is no longer
+   // attached to a schema in the backend.
+   string executable_schema_id = 11;
+
+   reserved 3; // removed string service = 3;
+ }
+
+ message PathErrorStats {
+   map<string, PathErrorStats> children = 1;
+   uint64 errors_count = 4;
+   uint64 requests_with_errors_count = 5;
+ }
+
+ message QueryLatencyStats {
+   repeated int64 latency_count = 1;
+   uint64 request_count = 2;
+   uint64 cache_hits = 3;
+   uint64 persisted_query_hits = 4;
+   uint64 persisted_query_misses = 5;
+   repeated int64 cache_latency_count = 6;
+   PathErrorStats root_error_stats = 7;
+   uint64 requests_with_errors_count = 8;
+   repeated int64 public_cache_ttl_count = 9;
+   repeated int64 private_cache_ttl_count = 10;
+   uint64 registered_operation_count = 11;
+   uint64 forbidden_operation_count = 12;
+ }
+
+ message StatsContext {
+   string client_reference_id = 1;
+   string client_name = 2;
+   string client_version = 3;
+ }
+
+ message ContextualizedQueryLatencyStats {
+   QueryLatencyStats query_latency_stats = 1;
+   StatsContext context = 2;
+ }
+
+ message ContextualizedTypeStats {
+   StatsContext context = 1;
+   map<string, TypeStat> per_type_stat = 2;
+ }
+
+ message FieldStat {
+   string return_type = 3; // required; eg "String!" for User.email:String!
+   uint64 errors_count = 4;
+   uint64 count = 5;
+   uint64 requests_with_errors_count = 6;
+   repeated int64 latency_count = 8; // Duration histogram; see docs/histograms.md
+   reserved 1, 2, 7;
+ }
+
+ message TypeStat {
+   // Key is (eg) "email" for User.email:String!
+   map<string, FieldStat> per_field_stat = 3;
+   reserved 1, 2;
+ }
+
+ message Field {
+   string name = 2; // required; eg "email" for User.email:String!
+   string return_type = 3; // required; eg "String!" for User.email:String!
+ }
+
+ message Type {
+   string name = 1; // required; eg "User" for User.email:String!
+   repeated Field field = 2;
+ }
+
+ // This is the top-level message used by the new traces ingress. This
+ // is designed for the apollo-engine-reporting TypeScript agent and will
+ // eventually be documented as a public ingress API. This message consists
+ // solely of traces; the equivalent of the StatsReport is automatically
+ // generated server-side from this message. Agent should either send a trace or include it in the stats
+ // for every request in this report. Generally, buffering up until a large
+ // size has been reached (say, 4MB) or 5-10 seconds has passed is appropriate.
+ // This message used to be known as FullTracesReport, but got renamed since it isn't just for traces anymore
+ message Report {
+   ReportHeader header = 1;
+
+   // key is statsReportKey (# operationName\nsignature) Note that the nested
+   // traces will *not* have a signature or details.operationName (because the
+   // key is adequate).
+   //
+   // We also assume that traces don't have
+   // legacy_per_query_implicit_operation_name, and we don't require them to have
+   // details.raw_query (which would consume a lot of space and has privacy/data
+   // access issues, and isn't currently exposed by our app anyway).
+   map<string, TracesAndStats> traces_per_query = 5;
+
+   // This is the time that the requests in this trace are considered to have taken place
+   // If this field is not present the max of the end_time of each trace will be used instead.
+   // If there are no traces and no end_time present the report will not be able to be processed.
+   // Note: This will override the end_time from traces.
+   google.protobuf.Timestamp end_time = 2; // required if no traces in this message
+ }
+
+ message ContextualizedStats {
+   StatsContext context = 1;
+   QueryLatencyStats query_latency_stats = 2;
+   // Key is type name.
+   map<string, TypeStat> per_type_stat = 3;
+ }
+
+ // A sequence of traces and stats. An individual trace should either be counted as a stat or trace
+ message TracesAndStats {
+   repeated Trace trace = 1;
+   repeated ContextualizedStats stats_with_context = 2;
+ }