autotel-plugins 0.19.2 → 0.19.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,8 +6,8 @@ OpenTelemetry instrumentation for libraries **without official support** OR wher
 
 **autotel-plugins only includes instrumentation that:**
 
- 1. **Has NO official OpenTelemetry package** (e.g., Drizzle ORM)
- 2. **Has BROKEN official instrumentation** (e.g., Mongoose in ESM+tsx)
+ 1. **Has NO official OpenTelemetry package** (e.g., BigQuery)
+ 2. **Has BROKEN official instrumentation**
 3. **Adds significant value** beyond official packages
 
 **We do NOT include:**
@@ -57,10 +57,6 @@ For databases/ORMs with **working** official instrumentation, **use those direct
 
 [Browse all official instrumentations →](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node)
 
- ### ⚠️ Mongoose ESM Exception
-
- **Note:** [`@opentelemetry/instrumentation-mongoose`](https://www.npmjs.com/package/@opentelemetry/instrumentation-mongoose) is fundamentally broken in ESM+tsx environments due to module loading hook issues. It works in CommonJS, but if you're using ESM with tsx/ts-node, use our custom plugin below.
-
 ## Installation
 
 Install the package and **autotel** (required for all plugins):
@@ -75,20 +71,12 @@ Each plugin needs the core packages above plus the library (and optional OTel in
 
 | Plugin | Install |
 | ------------ | -------------------------------------------------------------------------------------------------------------------------- |
- | **Mongoose** | `autotel` + `autotel-plugins` + `mongoose` |
- | **Drizzle** | `autotel` + `autotel-plugins` + `drizzle-orm` + your driver (e.g. `postgres`, `mysql2`, `better-sqlite3`) |
 | **BigQuery** | `autotel` + `autotel-plugins` + `@google-cloud/bigquery` |
 | **Kafka** | `autotel` + `autotel-plugins` + `kafkajs`. Optional: `@opentelemetry/instrumentation-kafkajs` for producer/consumer spans. |
 
 Examples:
 
 ```bash
- # Mongoose
- npm install autotel autotel-plugins mongoose
-
- # Drizzle (e.g. Postgres)
- npm install autotel autotel-plugins drizzle-orm postgres
-
 # BigQuery
 npm install autotel autotel-plugins @google-cloud/bigquery
 
@@ -98,209 +86,6 @@ npm install autotel autotel-plugins kafkajs @opentelemetry/instrumentation-kafka
 
 ## Currently Supported
 
- ### Mongoose
-
- Instrument Mongoose database operations with OpenTelemetry tracing using runtime patching. Works in ESM+tsx unlike the official package. **✨ NEW: Automatic hook instrumentation - no manual trace() calls needed!**
-
- **Why we provide this:**
-
- The official [`@opentelemetry/instrumentation-mongoose`](https://www.npmjs.com/package/@opentelemetry/instrumentation-mongoose) package is fundamentally broken in ESM+tsx environments:
-
- - Uses module loading hooks (`import-in-the-middle`) that fail with ESM import hoisting
- - Mongoose package lacks proper dual-mode exports (CJS only)
- - Works in CommonJS, but fails in modern ESM projects
- - No timeline for ESM support
-
- Our implementation uses **runtime patching** instead of module loading hooks, so it works everywhere.
-
- #### Basic Usage (with Automatic Hook Tracing)
-
- ```typescript
- import mongoose from 'mongoose';
- import { init } from 'autotel';
- import { instrumentMongoose } from 'autotel-plugins/mongoose';
-
- // Initialize Autotel
- init({ service: 'my-app' });
-
- // IMPORTANT: Instrument BEFORE defining schemas to enable automatic hook tracing
- instrumentMongoose(mongoose, {
-   dbName: 'myapp',
-   peerName: 'localhost',
-   peerPort: 27017,
- });
-
- // NOW define schemas - hooks are automatically traced!
- const userSchema = new mongoose.Schema({ name: String, email: String });
-
- userSchema.pre('save', async function () {
-   // ✨ This hook is AUTOMATICALLY traced - no manual trace() needed!
-   this.email = this.email.toLowerCase();
- });
-
- const User = mongoose.model('User', userSchema);
-
- // Connect to MongoDB
- await mongoose.connect('mongodb://localhost:27017/myapp');
-
- // All operations AND hooks are automatically traced
- await User.create({ name: 'Alice', email: 'ALICE@EXAMPLE.COM' });
- // Creates spans: mongoose.users.create + mongoose.users.pre.save
- ```
-
- #### What Gets Automatically Traced
-
- **1. Model Operations** (all automatic):
-
- - `create`, `insertMany`, `find`, `findOne`, `findById`
- - `findOneAndUpdate`, `findByIdAndUpdate`, `updateOne`, `updateMany`
- - `deleteOne`, `deleteMany`, `countDocuments`, `aggregate`
- - Instance methods: `save`, `remove`, `deleteOne`
-
- **2. Schema Hooks** (automatic - no manual code needed!):
-
- - Pre hooks: `pre('save')`, `pre('findOneAndUpdate')`, etc.
- - Post hooks: `post('save')`, `post('remove')`, etc.
- - Built-in hooks: `post('init')` (document hydration)
-
- #### Hook Instrumentation Setup
-
- For automatic hook tracing, call `instrumentMongoose()` **before** defining schemas. ESM import hoisting means you need a separate init file:
-
- **Pattern for ESM+tsx projects:**
-
- Create `init-mongoose.ts`:
-
- ```typescript
- import mongoose from 'mongoose';
- import { instrumentMongoose } from 'autotel-plugins/mongoose';
-
- instrumentMongoose(mongoose, { dbName: 'myapp' });
- ```
-
- Import before schemas in `index.ts`:
-
- ```typescript
- import './init-mongoose'; // Import first!
- import { User, Post } from './schema'; // Hooks auto-instrumented
- ```
-
- #### Configuration
-
- ```typescript
- {
-   dbName?: string // Database name
-   captureCollectionName?: boolean // Include collection in spans (default: true)
-   peerName?: string // MongoDB host
-   peerPort?: number // MongoDB port (default: 27017)
-   tracerName?: string // Custom tracer name
- }
- ```
-
- #### Span Attributes
-
- **Operation Spans (SpanKind.CLIENT):**
-
- - `db.system` - "mongoose"
- - `db.operation` - create, find, update, etc.
- - `db.mongodb.collection` - Collection name
- - `db.name` - Database name
- - `net.peer.name` / `net.peer.port` - MongoDB server
-
- **Hook Spans (SpanKind.INTERNAL):**
-
- - `hook.type` - "pre" or "post"
- - `hook.operation` - save, findOneAndUpdate, etc.
- - `hook.model` - Model name (User, Post, etc.)
- - `db.mongodb.collection` - Collection name
- - `db.system` - "mongoose"
- - `db.name` - Database name
-
- #### Before vs After (70% Less Code!)
-
- **Before (Manual instrumentation):**
-
- ```typescript
- import { trace } from 'autotel';
-
- userSchema.pre('save', async function () {
-   await trace((ctx) => async () => {
-     ctx.setAttribute('hook.type', 'pre');
-     ctx.setAttribute('hook.operation', 'save');
-     // ... lots of boilerplate
-     this.email = this.email.toLowerCase();
-   })();
- });
- ```
-
- **After (Automatic instrumentation):**
-
- ```typescript
- // NO trace() imports needed!
- userSchema.pre('save', async function () {
-   // Automatically traced with all attributes!
-   this.email = this.email.toLowerCase();
- });
- ```
-
- ### Drizzle ORM
-
- Instrument Drizzle database operations with OpenTelemetry tracing. Drizzle doesn't have official instrumentation, so we provide it here.
-
- ```typescript
- import { drizzle } from 'drizzle-orm/postgres-js';
- import postgres from 'postgres';
- import { instrumentDrizzleClient } from 'autotel-plugins/drizzle';
-
- const queryClient = postgres(process.env.DATABASE_URL!);
- const db = drizzle({ client: queryClient });
-
- // Instrument the database instance
- instrumentDrizzleClient(db, {
-   dbSystem: 'postgresql',
-   dbName: 'myapp',
-   peerName: 'db.example.com',
-   peerPort: 5432,
-   captureQueryText: true,
- });
-
- // All queries are now traced
- await db.select().from(users).where(eq(users.id, 123));
- ```
-
- **Supported databases:**
-
- - PostgreSQL (node-postgres, postgres.js)
- - MySQL (mysql2)
- - SQLite (better-sqlite3, LibSQL/Turso)
-
- **Functions:**
-
- - `instrumentDrizzle(client, config)` - Instrument a database client/pool
- - `instrumentDrizzleClient(db, config)` - Instrument a Drizzle database instance
-
- **Configuration:**
-
- ```typescript
- {
-   dbSystem?: string // Database type (postgresql, mysql, sqlite)
-   dbName?: string // Database name
-   captureQueryText?: boolean // Capture SQL in spans (default: true)
-   maxQueryTextLength?: number // Max SQL length (default: 1000)
-   peerName?: string // Database host
-   peerPort?: number // Database port
- }
- ```
-
- **Span Attributes:**
-
- - `db.system` - Database type (postgresql, mysql, sqlite)
- - `db.operation` - Operation name (SELECT, INSERT, UPDATE, DELETE)
- - `db.name` - Database name
- - `db.statement` - SQL query text (if `captureQueryText: true`)
- - `net.peer.name` - Database host
- - `net.peer.port` - Database port
-
 ### Kafka
 
 Composition layer for KafkaJS: processing span wrapper, producer span wrapper, batch lineage for fan-in trace correlation, and batch consumer wrapper. Works alongside optional `@opentelemetry/instrumentation-kafkajs` for producer/consumer spans.
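The batch-lineage idea named above (and reflected in the package's `SEMATTRS_LINKED_TRACE_ID_COUNT` / `SEMATTRS_LINKED_TRACE_ID_HASH` exports) can be sketched roughly as follows; the helper names and the `traceparent`-header parsing are illustrative assumptions, not autotel-plugins' actual API:

```typescript
// Hedged sketch: summarize the distinct upstream trace ids found in a Kafka
// batch so one fan-in processing span can be correlated with the producer
// traces that fed it. All names here are hypothetical.
import { createHash } from 'node:crypto';

interface BatchMessage {
  headers?: Record<string, string | undefined>;
}

// W3C traceparent format: version-traceid-spanid-flags
function traceIdOf(msg: BatchMessage): string | undefined {
  const tp = msg.headers?.['traceparent'];
  return tp?.split('-')[1];
}

export function summarizeBatchLineage(batch: BatchMessage[]): {
  count: number;
  hash: string;
} {
  const ids = new Set<string>();
  for (const msg of batch) {
    const id = traceIdOf(msg);
    if (id) ids.add(id);
  }
  // A stable digest over the sorted id set keeps the attribute small and
  // identical for any two spans that saw the same set of upstream traces.
  const hash = createHash('sha256')
    .update([...ids].sort().join(','))
    .digest('hex')
    .slice(0, 16);
  return { count: ids.size, hash };
}
```

A batch processing span would then carry `count` and `hash` as attributes, letting a tracing backend link the single consumer span back to the set of producer traces.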
@@ -334,38 +119,12 @@ await consumer.run({
 
 Optional: install `@opentelemetry/instrumentation-kafkajs` for producer/consumer spans.
 
- ## Usage with Autotel
-
- Drizzle instrumentation works seamlessly with [Autotel](../autotel):
-
- ```typescript
- import { init } from 'autotel';
- import { instrumentDrizzleClient } from 'autotel-plugins/drizzle';
- import { drizzle } from 'drizzle-orm/postgres-js';
- import postgres from 'postgres';
-
- // Initialize Autotel
- init({
-   service: 'my-service',
-   endpoint: 'http://localhost:4318',
- });
-
- // Instrument your database
- const client = postgres(process.env.DATABASE_URL!);
- const db = drizzle({ client });
- instrumentDrizzleClient(db, { dbSystem: 'postgresql' });
-
- // Traces will be sent to your OTLP endpoint
- await db.select().from(users);
- ```
-
  ## Combining with Official Packages
 
 Mix autotel-plugins with official OpenTelemetry instrumentations:
 
 ```typescript
 import { init } from 'autotel';
- import { instrumentDrizzleClient } from 'autotel-plugins/drizzle';
 import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
 import { ExpressInstrumentation } from '@opentelemetry/instrumentation-express';
 import { PgInstrumentation } from '@opentelemetry/instrumentation-pg';
@@ -378,47 +137,8 @@ init({
     new PgInstrumentation(), // Official packages
   ],
 });
-
- // Drizzle (no official package available)
- const db = drizzle({ client: postgres(process.env.DATABASE_URL!) });
- instrumentDrizzleClient(db, { dbSystem: 'postgresql' });
 ```
 
- ## Security Considerations
-
- ### Query Text Capture
-
- By default, Drizzle instrumentation captures SQL text which may contain sensitive data:
-
- ```typescript
- // Disable SQL capture to prevent PII leakage
- instrumentDrizzleClient(db, {
-   captureQueryText: false,
- });
- ```
-
- ## Examples
-
- See the [example-drizzle](../../apps/example-drizzle) directory for a complete working example.
-
- ## TypeScript
-
- Full type safety with TypeScript:
-
- ```typescript
- import type { InstrumentDrizzleConfig } from 'autotel-plugins';
-
- const config: InstrumentDrizzleConfig = {
-   dbSystem: 'postgresql',
-   captureQueryText: true,
-   maxQueryTextLength: 1000,
- };
- ```
-
- ## Future
-
- When official OpenTelemetry instrumentation becomes available for Drizzle ORM, we will announce deprecation and provide a migration guide.
-
 ## Creating Your Own Instrumentation
 
 Don't see your library here? Autotel makes it easy to create custom instrumentation for any library using simple, well-tested utilities.
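The runtime-patching style these plugins use (wrapping a method on an already-loaded object rather than relying on module-loading hooks) can be sketched as below; `patchMethod` and `SpanRecorder` are hypothetical names for illustration, not part of autotel's API:

```typescript
// Hedged sketch of runtime patching: replace a method on a live object with a
// wrapper that records the operation name and duration before delegating.
// A real plugin would start/end OpenTelemetry spans instead of `recorder`.
type AnyFn = (...args: unknown[]) => unknown;

interface SpanRecorder {
  record(operation: string, durationMs: number): void;
}

export function patchMethod<T extends object>(
  target: T,
  method: keyof T & string,
  recorder: SpanRecorder,
): void {
  const original = target[method] as unknown as AnyFn;
  const wrapped: AnyFn = function (this: unknown, ...args: unknown[]) {
    const start = Date.now();
    try {
      // Delegate to the original implementation, preserving `this` and args.
      return original.apply(this, args);
    } finally {
      // For simplicity this times the synchronous call; a real plugin would
      // also await returned promises before ending the span.
      recorder.record(method, Date.now() - start);
    }
  };
  (target as Record<string, unknown>)[method] = wrapped;
}
```

Because the patch is applied at runtime to an object that is already loaded, it sidesteps `import-in-the-middle`-style loader hooks entirely, which is why this approach keeps working under ESM+tsx.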
@@ -1,4 +1,4 @@
- import { BigQuery, BigQueryOptions } from '@google-cloud/bigquery';
+ import { BigQueryOptions, BigQuery } from '@google-cloud/bigquery';
 
  /**
  * Plugin-only options for BigQuery instrumentation (not part of official BigQueryOptions).
@@ -1,4 +1,4 @@
- import { BigQuery, BigQueryOptions } from '@google-cloud/bigquery';
+ import { BigQueryOptions, BigQuery } from '@google-cloud/bigquery';
 
  /**
  * Plugin-only options for BigQuery instrumentation (not part of official BigQueryOptions).
@@ -49,4 +49,4 @@ declare const SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_PROCESSED: "messaging.kafk
  declare const SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_FAILED: "messaging.kafka.batch.messages_failed";
  declare const SEMATTRS_MESSAGING_KAFKA_BATCH_PROCESSING_TIME_MS: "messaging.kafka.batch.processing_time_ms";
 
- export { SEMATTRS_MESSAGING_KAFKA_OFFSET as A, SEMATTRS_MESSAGING_KAFKA_MESSAGE_KEY as B, SEMATTRS_LINKED_TRACE_ID_COUNT as C, SEMATTRS_LINKED_TRACE_ID_HASH as D, CORRELATION_ID_HEADER as E, SEMATTRS_MESSAGING_BATCH_MESSAGE_COUNT as F, SEMATTRS_MESSAGING_KAFKA_BATCH_FIRST_OFFSET as G, SEMATTRS_MESSAGING_KAFKA_BATCH_LAST_OFFSET as H, SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_PROCESSED as I, SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_FAILED as J, SEMATTRS_MESSAGING_KAFKA_BATCH_PROCESSING_TIME_MS as K, SEMATTRS_MESSAGING_OPERATION_NAME as L, SEMATTRS_MESSAGING_MESSAGE_ID as M, SEMATTRS_MESSAGING_MESSAGE_CONVERSATION_ID as N, SEMATTRS_MESSAGING_CONSUMER_ID as O, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_ROUTING_KEY as P, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_EXCHANGE as Q, SEMATTRS_MESSAGING_RABBITMQ_ACK_RESULT as R, SEMATTRS_DB_SYSTEM as S, SEMATTRS_MESSAGING_RABBITMQ_REQUEUE as T, SEMATTRS_DB_SYSTEM_NAME as a, SEMATTRS_DB_OPERATION as b, SEMATTRS_DB_OPERATION_NAME as c, SEMATTRS_DB_STATEMENT as d, SEMATTRS_DB_NAME as e, SEMATTRS_DB_NAMESPACE as f, SEMATTRS_DB_COLLECTION_NAME as g, SEMATTRS_DB_QUERY_TEXT as h, SEMATTRS_DB_QUERY_SUMMARY as i, SEMATTRS_NET_PEER_NAME as j, SEMATTRS_NET_PEER_PORT as k, SEMATTRS_GCP_BIGQUERY_JOB_ID as l, SEMATTRS_GCP_BIGQUERY_JOB_LOCATION as m, SEMATTRS_GCP_BIGQUERY_PROJECT_ID as n, SEMATTRS_GCP_BIGQUERY_DESTINATION_TABLE as o, SEMATTRS_GCP_BIGQUERY_SOURCE_TABLES as p, SEMATTRS_GCP_BIGQUERY_STATEMENT_TYPE as q, SEMATTRS_GCP_BIGQUERY_QUERY_HASH as r, SEMATTRS_GCP_BIGQUERY_ROWS_AFFECTED as s, SEMATTRS_GCP_BIGQUERY_ROWS_RETURNED as t, SEMATTRS_GCP_BIGQUERY_SCHEMA_FIELDS as u, SEMATTRS_MESSAGING_SYSTEM as v, SEMATTRS_MESSAGING_DESTINATION_NAME as w, SEMATTRS_MESSAGING_OPERATION as x, SEMATTRS_MESSAGING_KAFKA_CONSUMER_GROUP as y, SEMATTRS_MESSAGING_KAFKA_PARTITION as z };
+ export { SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_FAILED as A, SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_PROCESSED as B, CORRELATION_ID_HEADER as C, SEMATTRS_MESSAGING_KAFKA_BATCH_PROCESSING_TIME_MS as D, SEMATTRS_MESSAGING_KAFKA_CONSUMER_GROUP as E, SEMATTRS_MESSAGING_KAFKA_MESSAGE_KEY as F, SEMATTRS_MESSAGING_KAFKA_OFFSET as G, SEMATTRS_MESSAGING_KAFKA_PARTITION as H, SEMATTRS_MESSAGING_MESSAGE_CONVERSATION_ID as I, SEMATTRS_MESSAGING_MESSAGE_ID as J, SEMATTRS_MESSAGING_OPERATION as K, SEMATTRS_MESSAGING_OPERATION_NAME as L, SEMATTRS_MESSAGING_RABBITMQ_ACK_RESULT as M, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_EXCHANGE as N, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_ROUTING_KEY as O, SEMATTRS_MESSAGING_RABBITMQ_REQUEUE as P, SEMATTRS_MESSAGING_SYSTEM as Q, SEMATTRS_NET_PEER_NAME as R, SEMATTRS_DB_COLLECTION_NAME as S, SEMATTRS_NET_PEER_PORT as T, SEMATTRS_DB_NAME as a, SEMATTRS_DB_NAMESPACE as b, SEMATTRS_DB_OPERATION as c, SEMATTRS_DB_OPERATION_NAME as d, SEMATTRS_DB_QUERY_SUMMARY as e, SEMATTRS_DB_QUERY_TEXT as f, SEMATTRS_DB_STATEMENT as g, SEMATTRS_DB_SYSTEM as h, SEMATTRS_DB_SYSTEM_NAME as i, SEMATTRS_GCP_BIGQUERY_DESTINATION_TABLE as j, SEMATTRS_GCP_BIGQUERY_JOB_ID as k, SEMATTRS_GCP_BIGQUERY_JOB_LOCATION as l, SEMATTRS_GCP_BIGQUERY_PROJECT_ID as m, SEMATTRS_GCP_BIGQUERY_QUERY_HASH as n, SEMATTRS_GCP_BIGQUERY_ROWS_AFFECTED as o, SEMATTRS_GCP_BIGQUERY_ROWS_RETURNED as p, SEMATTRS_GCP_BIGQUERY_SCHEMA_FIELDS as q, SEMATTRS_GCP_BIGQUERY_SOURCE_TABLES as r, SEMATTRS_GCP_BIGQUERY_STATEMENT_TYPE as s, SEMATTRS_LINKED_TRACE_ID_COUNT as t, SEMATTRS_LINKED_TRACE_ID_HASH as u, SEMATTRS_MESSAGING_BATCH_MESSAGE_COUNT as v, SEMATTRS_MESSAGING_CONSUMER_ID as w, SEMATTRS_MESSAGING_DESTINATION_NAME as x, SEMATTRS_MESSAGING_KAFKA_BATCH_FIRST_OFFSET as y, SEMATTRS_MESSAGING_KAFKA_BATCH_LAST_OFFSET as z };
@@ -49,4 +49,4 @@ declare const SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_PROCESSED: "messaging.kafk
  declare const SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_FAILED: "messaging.kafka.batch.messages_failed";
  declare const SEMATTRS_MESSAGING_KAFKA_BATCH_PROCESSING_TIME_MS: "messaging.kafka.batch.processing_time_ms";
 
- export { SEMATTRS_MESSAGING_KAFKA_OFFSET as A, SEMATTRS_MESSAGING_KAFKA_MESSAGE_KEY as B, SEMATTRS_LINKED_TRACE_ID_COUNT as C, SEMATTRS_LINKED_TRACE_ID_HASH as D, CORRELATION_ID_HEADER as E, SEMATTRS_MESSAGING_BATCH_MESSAGE_COUNT as F, SEMATTRS_MESSAGING_KAFKA_BATCH_FIRST_OFFSET as G, SEMATTRS_MESSAGING_KAFKA_BATCH_LAST_OFFSET as H, SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_PROCESSED as I, SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_FAILED as J, SEMATTRS_MESSAGING_KAFKA_BATCH_PROCESSING_TIME_MS as K, SEMATTRS_MESSAGING_OPERATION_NAME as L, SEMATTRS_MESSAGING_MESSAGE_ID as M, SEMATTRS_MESSAGING_MESSAGE_CONVERSATION_ID as N, SEMATTRS_MESSAGING_CONSUMER_ID as O, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_ROUTING_KEY as P, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_EXCHANGE as Q, SEMATTRS_MESSAGING_RABBITMQ_ACK_RESULT as R, SEMATTRS_DB_SYSTEM as S, SEMATTRS_MESSAGING_RABBITMQ_REQUEUE as T, SEMATTRS_DB_SYSTEM_NAME as a, SEMATTRS_DB_OPERATION as b, SEMATTRS_DB_OPERATION_NAME as c, SEMATTRS_DB_STATEMENT as d, SEMATTRS_DB_NAME as e, SEMATTRS_DB_NAMESPACE as f, SEMATTRS_DB_COLLECTION_NAME as g, SEMATTRS_DB_QUERY_TEXT as h, SEMATTRS_DB_QUERY_SUMMARY as i, SEMATTRS_NET_PEER_NAME as j, SEMATTRS_NET_PEER_PORT as k, SEMATTRS_GCP_BIGQUERY_JOB_ID as l, SEMATTRS_GCP_BIGQUERY_JOB_LOCATION as m, SEMATTRS_GCP_BIGQUERY_PROJECT_ID as n, SEMATTRS_GCP_BIGQUERY_DESTINATION_TABLE as o, SEMATTRS_GCP_BIGQUERY_SOURCE_TABLES as p, SEMATTRS_GCP_BIGQUERY_STATEMENT_TYPE as q, SEMATTRS_GCP_BIGQUERY_QUERY_HASH as r, SEMATTRS_GCP_BIGQUERY_ROWS_AFFECTED as s, SEMATTRS_GCP_BIGQUERY_ROWS_RETURNED as t, SEMATTRS_GCP_BIGQUERY_SCHEMA_FIELDS as u, SEMATTRS_MESSAGING_SYSTEM as v, SEMATTRS_MESSAGING_DESTINATION_NAME as w, SEMATTRS_MESSAGING_OPERATION as x, SEMATTRS_MESSAGING_KAFKA_CONSUMER_GROUP as y, SEMATTRS_MESSAGING_KAFKA_PARTITION as z };
+ export { SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_FAILED as A, SEMATTRS_MESSAGING_KAFKA_BATCH_MESSAGES_PROCESSED as B, CORRELATION_ID_HEADER as C, SEMATTRS_MESSAGING_KAFKA_BATCH_PROCESSING_TIME_MS as D, SEMATTRS_MESSAGING_KAFKA_CONSUMER_GROUP as E, SEMATTRS_MESSAGING_KAFKA_MESSAGE_KEY as F, SEMATTRS_MESSAGING_KAFKA_OFFSET as G, SEMATTRS_MESSAGING_KAFKA_PARTITION as H, SEMATTRS_MESSAGING_MESSAGE_CONVERSATION_ID as I, SEMATTRS_MESSAGING_MESSAGE_ID as J, SEMATTRS_MESSAGING_OPERATION as K, SEMATTRS_MESSAGING_OPERATION_NAME as L, SEMATTRS_MESSAGING_RABBITMQ_ACK_RESULT as M, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_EXCHANGE as N, SEMATTRS_MESSAGING_RABBITMQ_DESTINATION_ROUTING_KEY as O, SEMATTRS_MESSAGING_RABBITMQ_REQUEUE as P, SEMATTRS_MESSAGING_SYSTEM as Q, SEMATTRS_NET_PEER_NAME as R, SEMATTRS_DB_COLLECTION_NAME as S, SEMATTRS_NET_PEER_PORT as T, SEMATTRS_DB_NAME as a, SEMATTRS_DB_NAMESPACE as b, SEMATTRS_DB_OPERATION as c, SEMATTRS_DB_OPERATION_NAME as d, SEMATTRS_DB_QUERY_SUMMARY as e, SEMATTRS_DB_QUERY_TEXT as f, SEMATTRS_DB_STATEMENT as g, SEMATTRS_DB_SYSTEM as h, SEMATTRS_DB_SYSTEM_NAME as i, SEMATTRS_GCP_BIGQUERY_DESTINATION_TABLE as j, SEMATTRS_GCP_BIGQUERY_JOB_ID as k, SEMATTRS_GCP_BIGQUERY_JOB_LOCATION as l, SEMATTRS_GCP_BIGQUERY_PROJECT_ID as m, SEMATTRS_GCP_BIGQUERY_QUERY_HASH as n, SEMATTRS_GCP_BIGQUERY_ROWS_AFFECTED as o, SEMATTRS_GCP_BIGQUERY_ROWS_RETURNED as p, SEMATTRS_GCP_BIGQUERY_SCHEMA_FIELDS as q, SEMATTRS_GCP_BIGQUERY_SOURCE_TABLES as r, SEMATTRS_GCP_BIGQUERY_STATEMENT_TYPE as s, SEMATTRS_LINKED_TRACE_ID_COUNT as t, SEMATTRS_LINKED_TRACE_ID_HASH as u, SEMATTRS_MESSAGING_BATCH_MESSAGE_COUNT as v, SEMATTRS_MESSAGING_CONSUMER_ID as w, SEMATTRS_MESSAGING_DESTINATION_NAME as x, SEMATTRS_MESSAGING_KAFKA_BATCH_FIRST_OFFSET as y, SEMATTRS_MESSAGING_KAFKA_BATCH_LAST_OFFSET as z };