dd-trace 5.4.0 → 5.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CONTRIBUTING.md CHANGED
@@ -70,4 +70,102 @@ We follow an all-green policy which means that for any PR to be merged _all_ tes
 
  Eventually we plan to look into putting these permission-required tests behind a label which team members can add to their PRs at creation to run the full CI, and can add to outside contributor PRs to trigger the CI from their own user credentials. If the label is not present, a separate action will check for it and fail. Rather than showing a bunch of confusing failures to new contributors, it would show a single job failure indicating that an additional label is required, and we can name the label in a way that makes it clear it's not the responsibility of the outside contributor to add it. Something like `approve-full-ci` is one possible choice there.
 
+ ## Development Requirements
+
+ Since this project supports multiple Node versions, using a version
+ manager such as [nvm](https://github.com/creationix/nvm) is recommended.
+
+ We use [yarn](https://yarnpkg.com/) for its workspace functionality, so make sure to install that as well.
+
+ To install dependencies once you have Node and yarn installed, run:
+
+ ```sh
+ $ yarn
+ ```
+
+
+ ## Testing
+
+ Before running _plugin_ tests, the data stores need to be running.
+ The easiest way to start all of them is to use the provided
+ docker-compose configuration:
+
+ ```sh
+ $ docker-compose up -d -V --remove-orphans --force-recreate
+ $ yarn services
+ ```
+
+ > **Note**
+ > The `couchbase`, `grpc` and `oracledb` instrumentations rely on native modules
+ > that do not compile on ARM64 devices (for example M1/M2 Macs), so their tests
+ > cannot be run locally on these devices.
+
+ ### Unit Tests
+
+ There are several types of unit tests, for various types of components. The
+ following commands may be useful:
+
+ ```sh
+ # Tracer core tests (i.e. testing `packages/dd-trace`)
+ $ yarn test:trace:core
+ # "Core" library tests (i.e. testing `packages/datadog-core`)
+ $ yarn test:core
+ # Instrumentations tests (i.e. testing `packages/datadog-instrumentations`)
+ $ yarn test:instrumentations
+ ```
+
+ Several other components have test commands as well. See `package.json` for
+ details.
+
+ To test _plugins_ (i.e. components in `packages/datadog-plugin-XXXX`
+ directories), set the `PLUGINS` environment variable to the plugin you're
+ interested in, and use `yarn test:plugins`. If you need to test multiple
+ plugins you may separate them with a pipe (`|`) delimiter. Here's an
+ example testing the `express` and `bluebird` plugins:
+
+ ```sh
+ PLUGINS="express|bluebird" yarn test:plugins
+ ```
+
+
+ ### Memory Leaks
+
+ To run the memory leak tests, use:
+
+ ```sh
+ $ yarn leak:core
+
+ # or
+
+ $ yarn leak:plugins
+ ```
+
+
+ ### Linting
+
+ We use [ESLint](https://eslint.org) to make sure that new code
+ conforms to our coding standards.
+
+ To run the linter, use:
+
+ ```sh
+ $ yarn lint
+ ```
+
+
+ ### Benchmarks
+
+ Our microbenchmarks live in `benchmark/sirun`. Each directory in there
+ corresponds to a specific benchmark test and its variants, which are used to
+ track regressions and improvements over time.
+
+ In addition to those, when two or more approaches must be compared, please write
+ a benchmark in the `benchmark/index.js` module so that we can keep track of the
+ most efficient algorithm. To run your benchmark, use:
+
+ ```sh
+ $ yarn bench
+ ```
+
+
  [1]: https://docs.datadoghq.com/help
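The new Testing section above describes selecting multiple plugins through a pipe-delimited `PLUGINS` variable. As an illustrative sketch only (not the repository's actual harness code), such a value could be interpreted like this:

```javascript
// Hypothetical helper: turn a pipe-delimited PLUGINS value into a list of
// plugin names. The function name is made up for illustration.
function selectPlugins (pluginsEnv) {
  if (!pluginsEnv) return []
  // Split on "|", trim whitespace, and drop empty entries.
  return pluginsEnv.split('|').map(name => name.trim()).filter(Boolean)
}

console.log(selectPlugins('express|bluebird')) // [ 'express', 'bluebird' ]
```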
package/README.md CHANGED
@@ -62,95 +62,12 @@ For more information about library versioning and compatibility, see the [NodeJS
  Changes associated with each individual release are documented on the [GitHub Releases](https://github.com/DataDog/dd-trace-js/releases) screen.
 
 
- ## Development
+ ## Development and Contribution
 
- Before contributing to this open source project, read our [CONTRIBUTING.md](https://github.com/DataDog/dd-trace-js/blob/master/CONTRIBUTING.md).
+ Please read the [CONTRIBUTING.md](https://github.com/DataDog/dd-trace-js/blob/master/CONTRIBUTING.md) document before contributing to this open source project.
 
 
- ## Requirements
-
- Since this project supports multiple Node versions, using a version
- manager such as [nvm](https://github.com/creationix/nvm) is recommended.
-
- We use [yarn](https://yarnpkg.com/) for its workspace functionality, so make sure to install that as well.
-
- To install dependencies once you have Node and yarn installed, run:
-
- ```sh
- $ yarn
- ```
-
-
- ## Testing
-
- Before running _plugin_ tests, the data stores need to be running.
- The easiest way to start all of them is to use the provided
- docker-compose configuration:
-
- ```sh
- $ docker-compose up -d -V --remove-orphans --force-recreate
- $ yarn services
- ```
-
- > **Note**
- > The `couchbase`, `grpc` and `oracledb` instrumentations rely on native modules
- > that do not compile on ARM64 devices (for example M1/M2 Mac) - their tests
- > cannot be run locally on these devices.
-
- ### Unit Tests
-
- There are several types of unit tests, for various types of components. The
- following commands may be useful:
-
- ```sh
- # Tracer core tests (i.e. testing `packages/dd-trace`)
- $ yarn test:trace:core
- # "Core" library tests (i.e. testing `packages/datadog-core`
- $ yarn test:core
- # Instrumentations tests (i.e. testing `packages/datadog-instrumentations`
- $ yarn test:instrumentations
- ```
-
- Several other components have test commands as well. See `package.json` for
- details.
-
- To test _plugins_ (i.e. components in `packages/datadog-plugin-XXXX`
- directories, set the `PLUGINS` environment variable to the plugin you're
- interested in, and use `yarn test:plugins`. If you need to test multiple
- plugins you may separate then with a pipe (`|`) delimiter. Here's an
- example testing the `express` and `bluebird` plugins:
-
- ```sh
- PLUGINS="express|bluebird" yarn test:plugins
- ```
-
-
- ### Memory Leaks
-
- To run the memory leak tests, use:
-
- ```sh
- $ yarn leak:core
-
- # or
-
- $ yarn leak:plugins
- ```
-
-
- ### Linting
-
- We use [ESLint](https://eslint.org) to make sure that new code
- conforms to our coding standards.
-
- To run the linter, use:
-
- ```sh
- $ yarn lint
- ```
-
-
- ### Experimental ESM Support
+ ## Experimental ESM Support
 
  > **Warning**
  >
@@ -168,21 +85,6 @@ node --loader dd-trace/loader-hook.mjs entrypoint.js
  ```
 
 
- ### Benchmarks
-
- Our microbenchmarks live in `benchmark/sirun`. Each directory in there
- corresponds to a specific benchmark test and its variants, which are used to
- track regressions and improvements over time.
-
- In addition to those, when two or more approaches must be compared, please write
- a benchmark in the `benchmark/index.js` module so that we can keep track of the
- most efficient algorithm. To run your benchmark, use:
-
- ```sh
- $ yarn bench
- ```
-
-
  ## Serverless / Lambda
 
  Note that there is a separate Lambda project, [datadog-lambda-js](https://github.com/DataDog/datadog-lambda-js), that is responsible for enabling metrics and distributed tracing when your application runs on Lambda.
@@ -199,4 +101,4 @@ If you would like to trace your bundled application then please read this page o
 
  ## Security Vulnerabilities
 
- If you have found a security issue, please contact the security team directly at [security@datadoghq.com](mailto:security@datadoghq.com).
+ Please refer to the [SECURITY.md](https://github.com/DataDog/dd-trace-js/blob/master/SECURITY.md) document if you have found a security issue.
@@ -0,0 +1 @@
+ module.exports = require('../../packages/datadog-plugin-cypress/src/after-run')
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "dd-trace",
- "version": "5.4.0",
+ "version": "5.5.0",
  "description": "Datadog APM tracing client for JavaScript",
  "main": "index.js",
  "typings": "index.d.ts",
@@ -17,6 +17,7 @@ const testSuiteFinishCh = channel('ci:cucumber:test-suite:finish')
  const testSuiteCodeCoverageCh = channel('ci:cucumber:test-suite:code-coverage')
 
  const libraryConfigurationCh = channel('ci:cucumber:library-configuration')
+ const knownTestsCh = channel('ci:cucumber:known-tests')
  const skippableSuitesCh = channel('ci:cucumber:test-suite:skippable')
  const sessionStartCh = channel('ci:cucumber:session:start')
  const sessionFinishCh = channel('ci:cucumber:session:finish')
@@ -41,12 +42,18 @@ const originalCoverageMap = createCoverageMap()
  // TODO: remove in a later major version
  const patched = new WeakSet()
 
+ const lastStatusByPickleId = new Map()
+ const numRetriesByPickleId = new Map()
+
  let pickleByFile = {}
  const pickleResultByFile = {}
  let skippableSuites = []
  let itrCorrelationId = ''
  let isForcedToRun = false
  let isUnskippable = false
+ let isEarlyFlakeDetectionEnabled = false
+ let earlyFlakeDetectionNumRetries = 0
+ let knownTests = []
 
  function getSuiteStatusFromTestStatuses (testStatuses) {
  if (testStatuses.some(status => status === 'fail')) {
@@ -84,6 +91,20 @@ function getStatusFromResultLatest (result) {
  return { status: 'fail', errorMessage: result.message }
  }
 
+ function isNewTest (testSuite, testName) {
+ return !knownTests.includes(`cucumber.${testSuite}.${testName}`)
+ }
+
+ function getTestStatusFromRetries (testStatuses) {
+ if (testStatuses.every(status => status === 'fail')) {
+ return 'fail'
+ }
+ if (testStatuses.some(status => status === 'pass')) {
+ return 'pass'
+ }
+ return 'pass'
+ }
+
  function wrapRun (pl, isLatestVersion) {
  if (patched.has(pl)) return
 
@@ -98,18 +119,7 @@ function wrapRun (pl, isLatestVersion) {
  return asyncResource.runInAsyncScope(() => {
  const testFileAbsolutePath = this.pickle.uri
 
- if (!pickleResultByFile[testFileAbsolutePath]) { // first test in suite
- isUnskippable = isMarkedAsUnskippable(this.pickle)
- const testSuitePath = getTestSuitePath(testFileAbsolutePath, process.cwd())
- isForcedToRun = isUnskippable && skippableSuites.includes(testSuitePath)
-
- testSuiteStartCh.publish({ testSuitePath, isUnskippable, isForcedToRun, itrCorrelationId })
- }
-
- const testSourceLine = this.gherkinDocument &&
- this.gherkinDocument.feature &&
- this.gherkinDocument.feature.location &&
- this.gherkinDocument.feature.location.line
+ const testSourceLine = this.gherkinDocument?.feature?.location?.line
 
  testStartCh.publish({
  testName: this.pickle.name,
@@ -123,30 +133,20 @@ function wrapRun (pl, isLatestVersion) {
  const { status, skipReason, errorMessage } = isLatestVersion
  ? getStatusFromResultLatest(result) : getStatusFromResult(result)
 
- if (!pickleResultByFile[testFileAbsolutePath]) {
- pickleResultByFile[testFileAbsolutePath] = [status]
+ if (lastStatusByPickleId.has(this.pickle.id)) {
+ lastStatusByPickleId.get(this.pickle.id).push(status)
  } else {
- pickleResultByFile[testFileAbsolutePath].push(status)
+ lastStatusByPickleId.set(this.pickle.id, [status])
  }
- testFinishCh.publish({ status, skipReason, errorMessage })
- // last test in suite
- if (pickleResultByFile[testFileAbsolutePath].length === pickleByFile[testFileAbsolutePath].length) {
- const testSuiteStatus = getSuiteStatusFromTestStatuses(pickleResultByFile[testFileAbsolutePath])
- if (global.__coverage__) {
- const coverageFiles = getCoveredFilenamesFromCoverage(global.__coverage__)
-
- testSuiteCodeCoverageCh.publish({
- coverageFiles,
- suiteFile: testFileAbsolutePath
- })
- // We need to reset coverage to get a code coverage per suite
- // Before that, we preserve the original coverage
- mergeCoverage(global.__coverage__, originalCoverageMap)
- resetCoverage(global.__coverage__)
- }
-
- testSuiteFinishCh.publish(testSuiteStatus)
+ let isNew = false
+ let isEfdRetry = false
+ if (isEarlyFlakeDetectionEnabled && status !== 'skip') {
+ const numRetries = numRetriesByPickleId.get(this.pickle.id)
+
+ isNew = numRetries !== undefined
+ isEfdRetry = numRetries > 0
  }
+ testFinishCh.publish({ status, skipReason, errorMessage, isNew, isEfdRetry })
  })
  return promise
  } catch (err) {
@@ -258,12 +258,11 @@ function getPickleByFile (runtime) {
  }, {})
  }
 
- addHook({
- name: '@cucumber/cucumber',
- versions: ['>=7.0.0'],
- file: 'lib/runtime/index.js'
- }, (runtimePackage, frameworkVersion) => {
- shimmer.wrap(runtimePackage.default.prototype, 'start', start => async function () {
+ function getWrappedStart (start, frameworkVersion) {
+ return async function () {
+ if (!libraryConfigurationCh.hasSubscribers) {
+ return start.apply(this, arguments)
+ }
  const asyncResource = new AsyncResource('bound-anonymous-fn')
  let onDone
 
@@ -275,7 +274,23 @@ addHook({
  libraryConfigurationCh.publish({ onDone })
  })
 
- await configPromise
+ const configurationResponse = await configPromise
+
+ isEarlyFlakeDetectionEnabled = configurationResponse.libraryConfig?.isEarlyFlakeDetectionEnabled
+ earlyFlakeDetectionNumRetries = configurationResponse.libraryConfig?.earlyFlakeDetectionNumRetries
+
+ if (isEarlyFlakeDetectionEnabled) {
+ const knownTestsPromise = new Promise(resolve => {
+ onDone = resolve
+ })
+ asyncResource.runInAsyncScope(() => {
+ knownTestsCh.publish({ onDone })
+ })
+ const knownTestsResponse = await knownTestsPromise
+ if (!knownTestsResponse.err) {
+ knownTests = knownTestsResponse.knownTests
+ }
+ }
 
  const skippableSuitesPromise = new Promise(resolve => {
  onDone = resolve
@@ -342,11 +357,110 @@ addHook({
  testCodeCoverageLinesTotal,
  numSkippedSuites: skippedSuites.length,
  hasUnskippableSuites: isUnskippable,
- hasForcedToRunSuites: isForcedToRun
+ hasForcedToRunSuites: isForcedToRun,
+ isEarlyFlakeDetectionEnabled
  })
  })
  return success
- })
+ }
+ }
+
+ function getWrappedRunTest (runTestFunction) {
+ return async function (pickleId) {
+ const test = this.eventDataCollector.getPickle(pickleId)
+
+ const testFileAbsolutePath = test.uri
+ const testSuitePath = getTestSuitePath(testFileAbsolutePath, process.cwd())
+
+ if (!pickleResultByFile[testFileAbsolutePath]) { // first test in suite
+ isUnskippable = isMarkedAsUnskippable(test)
+ isForcedToRun = isUnskippable && skippableSuites.includes(testSuitePath)
+
+ testSuiteStartCh.publish({ testSuitePath, isUnskippable, isForcedToRun, itrCorrelationId })
+ }
+
+ let isNew = false
+
+ if (isEarlyFlakeDetectionEnabled) {
+ isNew = isNewTest(testSuitePath, test.name)
+ if (isNew) {
+ numRetriesByPickleId.set(pickleId, 0)
+ }
+ }
+ const runTestCaseResult = await runTestFunction.apply(this, arguments)
+
+ const testStatuses = lastStatusByPickleId.get(pickleId)
+ const lastTestStatus = testStatuses[testStatuses.length - 1]
+ // If it's a new test and it hasn't been skipped, we run it again
+ if (isEarlyFlakeDetectionEnabled && lastTestStatus !== 'skip' && isNew) {
+ for (let retryIndex = 0; retryIndex < earlyFlakeDetectionNumRetries; retryIndex++) {
+ numRetriesByPickleId.set(pickleId, retryIndex + 1)
+ await runTestFunction.apply(this, arguments)
+ }
+ }
+ let testStatus = lastTestStatus
+ if (isEarlyFlakeDetectionEnabled) {
+ /**
+ * If Early Flake Detection (EFD) is enabled the logic is as follows:
+ * - If all attempts for a test are failing, the test has failed and we will let the test process fail.
+ * - If just a single attempt passes, we will prevent the test process from failing.
+ * The rationale behind is the following: you may still be able to block your CI pipeline by gating
+ * on flakiness (the test will be considered flaky), but you may choose to unblock the pipeline too.
+ */
+ testStatus = getTestStatusFromRetries(testStatuses)
+ if (testStatus === 'pass') {
+ this.success = true
+ }
+ }
+
+ if (!pickleResultByFile[testFileAbsolutePath]) {
+ pickleResultByFile[testFileAbsolutePath] = [testStatus]
+ } else {
+ pickleResultByFile[testFileAbsolutePath].push(testStatus)
+ }
+
+ // last test in suite
+ if (pickleResultByFile[testFileAbsolutePath].length === pickleByFile[testFileAbsolutePath].length) {
+ const testSuiteStatus = getSuiteStatusFromTestStatuses(pickleResultByFile[testFileAbsolutePath])
+ if (global.__coverage__) {
+ const coverageFiles = getCoveredFilenamesFromCoverage(global.__coverage__)
+
+ testSuiteCodeCoverageCh.publish({
+ coverageFiles,
+ suiteFile: testFileAbsolutePath
+ })
+ // We need to reset coverage to get a code coverage per suite
+ // Before that, we preserve the original coverage
+ mergeCoverage(global.__coverage__, originalCoverageMap)
+ resetCoverage(global.__coverage__)
+ }
+
+ testSuiteFinishCh.publish(testSuiteStatus)
+ }
+
+ return runTestCaseResult
+ }
+ }
+
+ // From 7.3.0 onwards, runPickle becomes runTestCase
+ addHook({
+ name: '@cucumber/cucumber',
+ versions: ['>=7.3.0'],
+ file: 'lib/runtime/index.js'
+ }, (runtimePackage, frameworkVersion) => {
+ shimmer.wrap(runtimePackage.default.prototype, 'runTestCase', runTestCase => getWrappedRunTest(runTestCase))
+ shimmer.wrap(runtimePackage.default.prototype, 'start', start => getWrappedStart(start, frameworkVersion))
+
+ return runtimePackage
+ })
+
+ addHook({
+ name: '@cucumber/cucumber',
+ versions: ['>=7.0.0 <7.3.0'],
+ file: 'lib/runtime/index.js'
+ }, (runtimePackage, frameworkVersion) => {
+ shimmer.wrap(runtimePackage.default.prototype, 'runPickle', runPickle => getWrappedRunTest(runPickle))
+ shimmer.wrap(runtimePackage.default.prototype, 'start', start => getWrappedStart(start, frameworkVersion))
 
  return runtimePackage
  })
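The `getTestStatusFromRetries` helper added in the hunks above encodes the EFD rule spelled out in the block comment: a retried test only fails the run when every attempt failed; any passing attempt keeps the run green. A condensed, behavior-equivalent restatement:

```javascript
// Condensed restatement of the EFD status aggregation (not the package's
// exact function body): all attempts failing => 'fail', otherwise 'pass'.
function getTestStatusFromRetries (testStatuses) {
  if (testStatuses.every(status => status === 'fail')) return 'fail'
  return 'pass'
}

console.log(getTestStatusFromRetries(['fail', 'pass', 'fail'])) // 'pass' (flaky, run not failed)
console.log(getTestStatusFromRetries(['fail', 'fail'])) // 'fail'
```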
@@ -58,6 +58,7 @@ let hasUnskippableSuites = false
  let hasForcedToRunSuites = false
  let isEarlyFlakeDetectionEnabled = false
  let earlyFlakeDetectionNumRetries = 0
+ let hasFilteredSkippableSuites = false
 
  const sessionAsyncResource = new AsyncResource('bound-anonymous-fn')
 
@@ -270,6 +271,23 @@ function getTestEnvironment (pkg, jestVersion) {
  return getWrappedEnvironment(pkg, jestVersion)
  }
 
+ function applySuiteSkipping (originalTests, rootDir, frameworkVersion) {
+ const jestSuitesToRun = getJestSuitesToRun(skippableSuites, originalTests, rootDir || process.cwd())
+ hasFilteredSkippableSuites = true
+ log.debug(
+ () => `${jestSuitesToRun.suitesToRun.length} out of ${originalTests.length} suites are going to run.`
+ )
+ hasUnskippableSuites = jestSuitesToRun.hasUnskippableSuites
+ hasForcedToRunSuites = jestSuitesToRun.hasForcedToRunSuites
+
+ isSuitesSkipped = jestSuitesToRun.suitesToRun.length !== originalTests.length
+ numSkippedSuites = jestSuitesToRun.skippedSuites.length
+
+ itrSkippedSuitesCh.publish({ skippedSuites: jestSuitesToRun.skippedSuites, frameworkVersion })
+ skippableSuites = []
+ return jestSuitesToRun.suitesToRun
+ }
+
  addHook({
  name: 'jest-environment-node',
  versions: ['>=24.8.0']
@@ -280,6 +298,51 @@ addHook({
  versions: ['>=24.8.0']
  }, getTestEnvironment)
 
+ function getWrappedScheduleTests (scheduleTests, frameworkVersion) {
+ return async function (tests) {
+ if (!isSuitesSkippingEnabled || hasFilteredSkippableSuites) {
+ return scheduleTests.apply(this, arguments)
+ }
+ const [test] = tests
+ const rootDir = test?.context?.config?.rootDir
+
+ arguments[0] = applySuiteSkipping(tests, rootDir, frameworkVersion)
+
+ return scheduleTests.apply(this, arguments)
+ }
+ }
+
+ addHook({
+ name: '@jest/core',
+ file: 'build/TestScheduler.js',
+ versions: ['>=27.0.0']
+ }, (testSchedulerPackage, frameworkVersion) => {
+ const oldCreateTestScheduler = testSchedulerPackage.createTestScheduler
+ const newCreateTestScheduler = async function () {
+ if (!isSuitesSkippingEnabled || hasFilteredSkippableSuites) {
+ return oldCreateTestScheduler.apply(this, arguments)
+ }
+ // If suite skipping is enabled and has not filtered skippable suites yet, we'll attempt to do it
+ const scheduler = await oldCreateTestScheduler.apply(this, arguments)
+ shimmer.wrap(scheduler, 'scheduleTests', scheduleTests => getWrappedScheduleTests(scheduleTests, frameworkVersion))
+ return scheduler
+ }
+ testSchedulerPackage.createTestScheduler = newCreateTestScheduler
+ return testSchedulerPackage
+ })
+
+ addHook({
+ name: '@jest/core',
+ file: 'build/TestScheduler.js',
+ versions: ['>=24.8.0 <27.0.0']
+ }, (testSchedulerPackage, frameworkVersion) => {
+ shimmer.wrap(
+ testSchedulerPackage.default.prototype,
+ 'scheduleTests', scheduleTests => getWrappedScheduleTests(scheduleTests, frameworkVersion)
+ )
+ return testSchedulerPackage
+ })
+
  addHook({
  name: '@jest/test-sequencer',
  versions: ['>=24.8.0']
@@ -287,29 +350,13 @@ addHook({
  shimmer.wrap(sequencerPackage.default.prototype, 'shard', shard => function () {
  const shardedTests = shard.apply(this, arguments)
 
- if (!shardedTests.length) {
+ if (!shardedTests.length || !isSuitesSkippingEnabled || !skippableSuites.length) {
  return shardedTests
  }
- // TODO: could we get the rootDir from each test?
  const [test] = shardedTests
  const rootDir = test?.context?.config?.rootDir
 
- const jestSuitesToRun = getJestSuitesToRun(skippableSuites, shardedTests, rootDir || process.cwd())
-
- log.debug(
- () => `${jestSuitesToRun.suitesToRun.length} out of ${shardedTests.length} suites are going to run.`
- )
-
- hasUnskippableSuites = jestSuitesToRun.hasUnskippableSuites
- hasForcedToRunSuites = jestSuitesToRun.hasForcedToRunSuites
-
- isSuitesSkipped = jestSuitesToRun.suitesToRun.length !== shardedTests.length
- numSkippedSuites = jestSuitesToRun.skippedSuites.length
-
- itrSkippedSuitesCh.publish({ skippedSuites: jestSuitesToRun.skippedSuites, frameworkVersion })
-
- skippableSuites = []
- return jestSuitesToRun.suitesToRun
+ return applySuiteSkipping(shardedTests, rootDir, frameworkVersion)
  })
  return sequencerPackage
  })
@@ -660,13 +707,13 @@ addHook({
  const SearchSource = searchSourcePackage.default ? searchSourcePackage.default : searchSourcePackage
 
  shimmer.wrap(SearchSource.prototype, 'getTestPaths', getTestPaths => async function () {
- if (!skippableSuites.length) {
+ if (!isSuitesSkippingEnabled || !skippableSuites.length) {
  return getTestPaths.apply(this, arguments)
  }
 
  const [{ rootDir, shard }] = arguments
 
- if (shard && shard.shardIndex) {
+ if (shard?.shardCount > 1) {
  // If the user is using jest sharding, we want to apply the filtering of tests in the shard process.
  // The reason for this is the following:
  // The tests for different shards are likely being run in different CI jobs so
@@ -680,21 +727,8 @@ addHook({
  const testPaths = await getTestPaths.apply(this, arguments)
  const { tests } = testPaths
 
- const jestSuitesToRun = getJestSuitesToRun(skippableSuites, tests, rootDir)
-
- log.debug(() => `${jestSuitesToRun.suitesToRun.length} out of ${tests.length} suites are going to run.`)
-
- hasUnskippableSuites = jestSuitesToRun.hasUnskippableSuites
- hasForcedToRunSuites = jestSuitesToRun.hasForcedToRunSuites
-
- isSuitesSkipped = jestSuitesToRun.suitesToRun.length !== tests.length
- numSkippedSuites = jestSuitesToRun.skippedSuites.length
-
- itrSkippedSuitesCh.publish({ skippedSuites: jestSuitesToRun.skippedSuites, frameworkVersion })
-
- skippableSuites = []
-
- return { ...testPaths, tests: jestSuitesToRun.suitesToRun }
+ const suitesToRun = applySuiteSkipping(tests, rootDir, frameworkVersion)
+ return { ...testPaths, tests: suitesToRun }
  })
 
  return searchSourcePackage
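The cucumber and jest hooks above all follow the same shimmer-style pattern: replace a method with a wrapper that runs extra logic and then delegates to the original. A hand-rolled stand-in (not shimmer itself, and simplified to synchronous code for brevity) showing the shape:

```javascript
// Minimal stand-in for shimmer.wrap: swap target[name] for wrapper(original).
function wrap (target, name, wrapper) {
  const original = target[name]
  target[name] = wrapper(original)
}

const scheduler = {
  scheduleTests (tests) { return tests.length }
}

// Mirrors getWrappedScheduleTests: filter the input, then delegate.
wrap(scheduler, 'scheduleTests', scheduleTests => function (tests) {
  const filtered = tests.filter(t => !t.skip) // stand-in for applySuiteSkipping
  return scheduleTests.call(this, filtered)
})

console.log(scheduler.scheduleTests([{ skip: false }, { skip: true }])) // 1
```

Delegating with `call(this, ...)` preserves the receiver, which matters when the wrapped method lives on a prototype, as `scheduleTests` and `runTestCase` do.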
@@ -28,9 +28,12 @@ class AmqplibConsumerPlugin extends ConsumerPlugin {
  }
  })
 
- if (this.config.dsmEnabled && message) {
+ if (
+ this.config.dsmEnabled &&
+ message?.properties?.headers?.[CONTEXT_PROPAGATION_KEY]
+ ) {
  const payloadSize = getAmqpMessageSize({ headers: message.properties.headers, content: message.content })
- const queue = fields.queue ? fields.queue : fields.routingKey
+ const queue = fields.queue ? fields.queue : fields.routingKey
  this.tracer.decodeDataStreamsContext(message.properties.headers[CONTEXT_PROPAGATION_KEY])
  this.tracer
  .setCheckpoint(['direction:in', `topic:${queue}`, 'type:rabbitmq'], span, payloadSize)
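The amqplib change above tightens the Data Streams Monitoring guard: a checkpoint is only set when the propagation header is actually present on the message, and optional chaining makes the lookup safe for messages missing `properties` or `headers`. A standalone sketch of that predicate (the header key value here is an assumption for illustration, not taken from this diff):

```javascript
// Illustrative header key; the actual constant lives elsewhere in dd-trace.
const CONTEXT_PROPAGATION_KEY = 'dd-pathway-ctx'

function shouldSetCheckpoint (config, message) {
  // Optional chaining keeps the guard safe when message, properties,
  // or headers are absent, matching the rewritten condition above.
  return Boolean(config.dsmEnabled && message?.properties?.headers?.[CONTEXT_PROPAGATION_KEY])
}

console.log(shouldSetCheckpoint({ dsmEnabled: true }, null)) // false
console.log(shouldSetCheckpoint(
  { dsmEnabled: true },
  { properties: { headers: { [CONTEXT_PROPAGATION_KEY]: Buffer.from('ctx') } } }
)) // true
```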