loadtest 6.2.1 → 6.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -90,9 +90,13 @@ so that you can abort deployment e.g. if 99% of the requests don't finish in 10
 ### Usage Don'ts
 
 `loadtest` saturates a single CPU pretty quickly.
-Do not use `loadtest` if the Node.js process is above 100% usage in `top`, which happens approx. when your load is above 1000~4000 rps.
+Do not use `loadtest` in this mode
+if the Node.js process is above 100% usage in `top`, which happens approx. when your load is above 1000~4000 rps.
 
 (You can measure the practical limits of `loadtest` on your specific test machines by running it against a simple
-Apache or nginx process and seeing when it reaches 100% CPU.)
+[test server](#test-server)
+and seeing when it reaches 100% CPU.)
+In this case try using in multi-process mode using the `--cores` parameter,
+see below.
 
 There are better tools for that use case:
 
@@ -260,8 +264,9 @@ The following parameters are _not_ compatible with Apache ab.
 #### `--rps requestsPerSecond`
 
 Controls the number of requests per second that are sent.
-Can be fractional, e.g. `--rps 0.5` sends one request every two seconds.
-Not used by default: each request is sent as soon as the previous one is responded.
+Cannot be fractional, e.g. `--rps 0.5`.
+In this mode each request is not sent as soon as the previous one is responded,
+but periodically even if previous requests have not been responded yet.
 
 Note: Concurrency doesn't affect the final number of requests per second,
 since rps will be shared by all the clients. E.g.:
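The fixed-rate semantics the new text describes can be sketched outside the diff: requests go out on a timer derived purely from the rps value, regardless of outstanding responses. The helper names below are illustrative only, not part of the loadtest API.

```javascript
// Fixed-rate pacing: the gap between sends depends only on the target rps.
function intervalMs(rps) {
	return 1000 / rps
}

// Timestamps (ms) at which requests would be dispatched over `seconds`,
// even if earlier responses are still pending.
function sendTimesMs(rps, seconds) {
	const times = []
	for (let t = 0; t < seconds * 1000; t += intervalMs(rps)) {
		times.push(t)
	}
	return times
}
```

With `--rps 200` this fires one request every 5 ms; slow responses simply overlap in flight.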
@@ -276,6 +281,19 @@ to send all of the rps, adjust it with `-c` if needed.
 
 Note: --rps is not supported for websockets.
 
+#### `--cores number`
+
+Start `loadtest` in multi-process mode on a number of cores simultaneously.
+Useful when a single CPU is saturated.
+Forks the requested number of processes using the
+[Node.js cluster module](https://nodejs.org/api/cluster.html).
+
+In this mode the total number of requests and the rps rate are shared among all processes.
+The result returned is the aggregation of results from all cores.
+
+Note: this option is not available in the API,
+where it runs just in the provided process.
+
 #### `--timeout milliseconds`
 
 Timeout for each generated request in milliseconds.
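An editor's sketch of how those totals get split (assumption: this mirrors the `shareOption` helper that this release adds to `bin/loadtest.js`, where the last worker absorbs the rounding remainder):

```javascript
// Split a total (max requests or rps) across cores; the last core takes
// the rounding remainder so the grand total is preserved.
function splitAcrossCores(total, cores) {
	const shared = Math.round(total / cores)
	const shares = new Array(cores - 1).fill(shared)
	shares.push(total - shared * (cores - 1))
	return shares
}
```

`splitAcrossCores(1000, 3)` yields `[333, 333, 334]`, so `-n 1000 --cores 3` still sends exactly 1000 requests in total.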
@@ -337,11 +355,11 @@ Sets the certificate for the http client to use. Must be used with `--key`.
 
 Sets the key for the http client to use. Must be used with `--cert`.
 
-### Server
+### Test Server
 
 loadtest bundles a test server. To run it:
 
-    $ testserver-loadtest [--delay ms] [error 5xx] [percent yy] [port]
+    $ testserver-loadtest [options] [port]
 
 This command will show the number of requests received per second,
 the latency in answering requests and the headers for selected requests.
@@ -354,6 +372,27 @@ The optional delay instructs the server to wait for the given number of millisec
 before answering each request, to simulate a busy server.
 You can also simulate errors on a given percent of requests.
 
+The following optional parameters are available.
+
+#### `--delay ms`
+
+Wait the specified number of milliseconds before answering each request.
+
+#### `--error 5xx`
+
+Return the given error for every request.
+
+#### `--percent yy`
+
+Return an error (default 500) only for the specified % of requests.
+
+#### `--cores number`
+
+Number of cores to use. If not 1, will start in multi-process mode.
+
+Note: since version v6.3.0 the test server uses half the available cores by default;
+use `--cores 1` to use in single-process mode.
+
 ### Complete Example
 
 Let us now see how to measure the performance of the test server.
@@ -364,8 +403,9 @@ First we install `loadtest` globally:
 
 Now we start the test server:
 
-    $ testserver-loadtest
-    Listening on port 7357
+    $ testserver-loadtest --cores 2
+    Listening on http://localhost:7357/
+    Listening on http://localhost:7357/
 
 On a different console window we run a load test against it for 20 seconds
 with concurrency 10 (only relevant results are shown):
@@ -458,7 +498,7 @@ The result (with the same test server) is impressive:
      99% 10 ms
     100% 25 ms (longest request)
 
-Now you're talking! The steady rate also goes up to 2 krps:
+Now we're talking! The steady rate also goes up to 2 krps:
     $ loadtest http://localhost:7357/ -t 20 -c 10 --keepalive --rps 2000
     ...
     ...
@@ -528,7 +568,7 @@ and will not call the callback.
 
 The latency result returned at the end of the load test contains a full set of data, including:
 mean latency, number of errors and percentiles.
-An example follows:
+A simplified example follows:
 
 ```javascript
 {
@@ -545,8 +585,8 @@ An example follows:
     '95': 11,
     '99': 15
   },
-  rps: 2824,
-  totalTimeSeconds: 0.354108,
+  effectiveRps: 2824,
+  elapsedSeconds: 0.354108,
   meanLatencyMs: 7.72,
   maxLatencyMs: 20,
   totalErrors: 3,
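The renamed fields are derived values: per `lib/result.js` in this release, `effectiveRps` is completed requests over elapsed seconds, rounded (the old `rps` name is kept as an alias). Checking the sample numbers, assuming roughly 1000 completed requests:

```javascript
// effectiveRps as computed in lib/result.js: requests / elapsed seconds, rounded.
function effectiveRps(totalRequests, elapsedSeconds) {
	return Math.round(totalRequests / elapsedSeconds)
}
```

`effectiveRps(1000, 0.354108)` gives 2824, consistent with the sample output above.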
package/bin/loadtest.js CHANGED
@@ -3,6 +3,8 @@
 import {readFile} from 'fs/promises'
 import * as stdio from 'stdio'
 import {loadTest} from '../lib/loadtest.js'
+import {runTask} from '../lib/cluster.js'
+import {Result} from '../lib/result.js'
 
 
 const options = stdio.getopt({
@@ -32,8 +34,9 @@ const options = stdio.getopt({
 	key: {args: 1, description: 'The client key to use'},
 	cert: {args: 1, description: 'The client certificate to use'},
 	quiet: {description: 'Do not log any messages'},
+	cores: {args: 1, description: 'Number of cores to use', default: 1},
 	agent: {description: 'Use a keep-alive http agent (deprecated)'},
-	debug: {description: 'Show debug messages (deprecated)'}
+	debug: {description: 'Show debug messages (deprecated)'},
 });
 
 async function processAndRun(options) {
@@ -51,21 +54,62 @@ async function processAndRun(options) {
 		help();
 	}
 	options.url = options.args[0];
-	try {
-		const result = await loadTest(options)
-		result.show()
-	} catch(error) {
-		console.error(error.message)
-		help()
+	options.cores = parseInt(options.cores) || 1
+	const results = await runTask(options.cores, async workerId => await startTest(options, workerId))
+	if (!results) {
+		process.exit(0)
+		return
+	}
+	showResults(results)
+}
+
+function showResults(results) {
+	if (results.length == 1) {
+		results[0].show()
+		return
+	}
+	const combined = new Result()
+	for (const result of results) {
+		combined.combine(result)
+	}
+	combined.show()
+}
+
+async function startTest(options, workerId) {
+	if (!workerId) {
+		// standalone; controlled errors
+		try {
+			return await loadTest(options)
+		} catch(error) {
+			console.error(error.message)
+			return help()
+		}
+	}
+	shareWorker(options, workerId)
+	return await loadTest(options)
+}
+
+function shareWorker(options, workerId) {
+	options.maxRequests = shareOption(options.maxRequests, workerId, options.cores)
+	options.rps = shareOption(options.rps, workerId, options.cores)
+}
+
+function shareOption(option, workerId, cores) {
+	if (!option) return null
+	const total = parseInt(option)
+	const shared = Math.round(total / cores)
+	if (workerId == cores) {
+		// last worker gets remainder
+		return total - shared * (cores - 1)
+	} else {
+		return shared
 	}
 }
 
 await processAndRun(options)
 
-/**
- * Show online help.
- */
 function help() {
 	options.printHelp();
 	process.exit(1);
 }
+
package/bin/testserver.js CHANGED
@@ -3,40 +3,51 @@
 import * as stdio from 'stdio'
 import {startServer} from '../lib/testserver.js'
 import {loadConfig} from '../lib/config.js'
+import {getHalfCores, runTask} from '../lib/cluster.js'
 
+const options = readOptions()
+start(options)
 
-const options = stdio.getopt({
-	delay: {key: 'd', args: 1, description: 'Delay the response for the given milliseconds'},
-	error: {key: 'e', args: 1, description: 'Return an HTTP error code'},
-	percent: {key: 'p', args: 1, description: 'Return an error (default 500) only for some % of requests'},
-});
-const configuration = loadConfig()
-if (options.args && options.args.length == 1) {
-	options.port = parseInt(options.args[0], 10);
-	if (!options.port) {
-		console.error('Invalid port');
-		options.printHelp();
-		process.exit(1);
+
+function readOptions() {
+	const options = stdio.getopt({
+		delay: {key: 'd', args: 1, description: 'Delay the response for the given milliseconds'},
+		error: {key: 'e', args: 1, description: 'Return an HTTP error code'},
+		percent: {key: 'p', args: 1, description: 'Return an error (default 500) only for some % of requests'},
+		cores: {key: 'c', args: 1, description: 'Number of cores to use, default is half the total', default: getHalfCores()}
+	});
+	const configuration = loadConfig()
+	if (options.args && options.args.length == 1) {
+		options.port = parseInt(options.args[0], 10);
+		if (!options.port) {
+			console.error('Invalid port');
+			options.printHelp();
+			process.exit(1);
+		}
 	}
-}
-if(options.delay) {
-	if(isNaN(options.delay)) {
-		console.error('Invalid delay');
-		options.printHelp();
-		process.exit(1);
+	if(options.delay) {
+		if(isNaN(options.delay)) {
+			console.error('Invalid delay');
+			options.printHelp();
+			process.exit(1);
+		}
+		options.delay = parseInt(options.delay, 10);
 	}
-	options.delay = parseInt(options.delay, 10);
-}
 
-if(!options.delay) {
-	options.delay = configuration.delay
-}
-if(!options.error) {
-	options.error = configuration.error
+	if(!options.delay) {
+		options.delay = configuration.delay
+	}
+	if(!options.error) {
+		options.error = configuration.error
+	}
+	if(!options.percent) {
+		options.percent = configuration.percent
+	}
+	return options
 }
-if(!options.percent) {
-	options.percent = configuration.percent
+
+function start(options) {
+	runTask(options.cores, async () => await startServer(options))
 }
 
-startServer(options);
 
@@ -0,0 +1,17 @@
+process.env.NODE_CLUSTER_SCHED_POLICY = 'none'
+
+const cluster = await import('cluster')
+console.log(cluster)
+//import * as cluster from 'cluster'
+
+if (cluster.isPrimary) {
+	console.log(`process.env.NODE_CLUSTER_SCHED_POLICY: ${process.env.NODE_CLUSTER_SCHED_POLICY}`)
+	for (let index = 0; index < 2; index++) {
+		cluster.fork()
+		setTimeout(() => console.log(`cluster.schedulingPolicy: ${cluster.schedulingPolicy}`), 1000)
+	}
+} else {
+	setTimeout(() => null, 1000)
+}
+
+
@@ -0,0 +1,17 @@
+process.env.NODE_CLUSTER_SCHED_POLICY = 'none'
+
+const cluster = require('cluster')
+console.log(cluster)
+//import * as cluster from 'cluster'
+
+if (cluster.isPrimary) {
+	console.log(`process.env.NODE_CLUSTER_SCHED_POLICY: ${process.env.NODE_CLUSTER_SCHED_POLICY}`)
+	for (let index = 0; index < 2; index++) {
+		cluster.fork()
+		setTimeout(() => console.log(`cluster.schedulingPolicy: ${cluster.schedulingPolicy}`), 1000)
+	}
+} else {
+	setTimeout(() => null, 1000)
+}
+
+
package/lib/cluster.js ADDED
@@ -0,0 +1,42 @@
+process.env.NODE_CLUSTER_SCHED_POLICY = 'none'
+
+import {cpus} from 'os'
+// dynamic import as workaround: https://github.com/nodejs/node/issues/49240
+const cluster = await import('cluster')
+
+
+export function getHalfCores() {
+	const totalCores = cpus().length
+	return Math.round(totalCores / 2) || 1
+}
+
+export async function runTask(cores, task) {
+	if (cores == 1) {
+		return [await task()]
+	}
+	if (cluster.isPrimary) {
+		return await runWorkers(cores)
+	} else {
+		const result = await task(cluster.worker.id)
+		process.send(result)
+	}
+}
+
+function runWorkers(cores) {
+	return new Promise((resolve, reject) => {
+		const results = []
+		for (let index = 0; index < cores; index++) {
+			const worker = cluster.fork()
+			worker.on('message', message => {
+				results.push(message)
+				if (results.length === cores) {
+					return resolve(results)
+				}
+			})
+			worker.on('error', error => {
+				return reject(error)
+			})
+		}
+	})
+}
+
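The default-core computation in `getHalfCores` above is simple enough to check in isolation; note that `Math.round` rounds halves up, so odd counts round upward and a single-core machine still gets 1. A parameterized version for illustration:

```javascript
// Same arithmetic as getHalfCores in lib/cluster.js, but taking the core
// count as a parameter instead of reading it from os.cpus().
function halfCores(totalCores) {
	return Math.round(totalCores / 2) || 1
}
```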
package/lib/latency.js CHANGED
@@ -17,8 +17,8 @@ export class Latency {
 		this.partialRequests = 0;
 		this.partialTime = 0;
 		this.partialErrors = 0;
-		this.lastShown = this.getTime();
-		this.initialTime = this.getTime();
+		this.lastShownNs = this.getTimeNs();
+		this.startTimeNs = this.getTimeNs();
 		this.totalRequests = 0;
 		this.totalTime = 0;
 		this.totalErrors = 0;
@@ -45,7 +45,7 @@
 	 */
 	start(requestId) {
 		requestId = requestId || createId();
-		this.requests[requestId] = this.getTime();
+		this.requests[requestId] = this.getTimeNs();
 		this.requestIdToIndex[requestId] = this.requestIndex++;
 		return requestId;
 	}
@@ -61,7 +61,7 @@
 		if (!this.running) {
 			return -1;
 		}
-		const elapsed = this.getElapsed(this.requests[requestId]);
+		const elapsed = this.getElapsedMs(this.requests[requestId]);
 		this.add(elapsed, errorCode);
 		delete this.requests[requestId];
 		return elapsed;
@@ -105,7 +105,7 @@
 	 * Show latency for partial requests.
 	 */
 	showPartial() {
-		const elapsedSeconds = this.getElapsed(this.lastShown) / 1000;
+		const elapsedSeconds = this.getElapsedMs(this.lastShownNs) / 1000;
 		const meanTime = this.partialTime / this.partialRequests || 0.0;
 		const result = {
 			meanLatencyMs: Math.round(meanTime * 10) / 10,
@@ -125,25 +125,26 @@
 		this.partialTime = 0;
 		this.partialRequests = 0;
 		this.partialErrors = 0;
-		this.lastShown = this.getTime();
+		this.lastShownNs = this.getTimeNs();
 	}
 
 	/**
-	 * Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array
+	 * Returns the current high-resolution real time in nanoseconds as a big int.
 	 * @return {*}
 	 */
-	getTime() {
-		return process.hrtime();
+	getTimeNs() {
+		return process.hrtime.bigint();
 	}
 
 	/**
-	 * calculates the elapsed time between the assigned startTime and now
-	 * @param startTime
+	 * Calculates the elapsed time between the assigned start time and now in ms.
+	 * @param startTimeNs time in nanoseconds (bigint)
 	 * @return {Number} the elapsed time in milliseconds
 	 */
-	getElapsed(startTime) {
-		const elapsed = process.hrtime(startTime);
-		return elapsed[0] * 1000 + elapsed[1] / 1000000;
+	getElapsedMs(startTimeNs) {
+		const endTimeNs = this.getTimeNs()
+		const elapsedNs = endTimeNs - startTimeNs
+		return Number(elapsedNs / 1000000n)
 	}
 
 	/**
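One behavioural consequence of the hunk above worth noting: BigInt division truncates toward zero, so the new `getElapsedMs` returns whole milliseconds, whereas the old tuple-based version returned fractional ms. A minimal reproduction of the conversion:

```javascript
// BigInt division truncates toward zero, so ns -> ms drops the fraction.
function nsToMs(elapsedNs) {
	return Number(elapsedNs / 1000000n)
}
```

For example 2999999 ns comes back as 2 ms, not 3.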
@@ -153,7 +154,7 @@
 		if (this.options.maxRequests && this.totalRequests >= this.options.maxRequests) {
 			return true;
 		}
-		const elapsedSeconds = this.getElapsed(this.initialTime) / 1000;
+		const elapsedSeconds = this.getElapsedMs(this.startTimeNs) / 1000;
 		if (this.options.maxSeconds && elapsedSeconds >= this.options.maxSeconds) {
 			return true;
 		}
@@ -165,6 +166,7 @@
 	 */
 	finish() {
 		this.running = false;
+		this.endTimeNs = this.getTimeNs()
 		if (this.callback) {
 			return this.callback(null, this.getResult());
 		}
@@ -174,38 +176,11 @@
 	 * Get final result.
 	 */
 	getResult() {
-		const result = new Result(this.options, this)
+		const result = new Result()
+		result.compute(this.options, this)
 		return result
 	}
 
-	/**
-	 * Compute the percentiles.
-	 */
-	computePercentiles() {
-		const percentiles = {
-			50: false,
-			90: false,
-			95: false,
-			99: false
-		};
-		let counted = 0;
-
-		for (let ms = 0; ms <= this.maxLatencyMs; ms++) {
-			if (!this.histogramMs[ms]) {
-				continue;
-			}
-			counted += this.histogramMs[ms];
-			const percent = counted / this.totalRequests * 100;
-
-			Object.keys(percentiles).forEach(percentile => {
-				if (!percentiles[percentile] && percent > percentile) {
-					percentiles[percentile] = ms;
-				}
-			});
-		}
-		return percentiles;
-	}
-
 	/**
 	 * Show final result.
 	 */
package/lib/loadtest.js CHANGED
@@ -160,7 +160,7 @@ class Operation {
 	 */
 	stop() {
 		this.running = false;
-		this.latency.running = false;
+		this.latency.finish()
 		if (this.showTimer) {
 			this.showTimer.stop();
 		}
package/lib/result.js CHANGED
@@ -4,26 +4,114 @@
  * Result of a load test.
  */
 export class Result {
-	constructor(options, latency) {
-		// options
+	constructor() {
+		this.url = null
+		this.cores = 0
+		this.maxRequests = 0
+		this.maxSeconds = 0
+		this.concurrency = 0
+		this.agent = null
+		this.requestsPerSecond = 0
+		this.startTimeMs = Number.MAX_SAFE_INTEGER
+		this.endTimeMs = 0
+		this.elapsedSeconds = 0
+		this.totalRequests = 0
+		this.totalErrors = 0
+		this.totalTimeSeconds = 0
+		this.accumulatedMs = 0
+		this.maxLatencyMs = 0
+		this.minLatencyMs = Number.MAX_SAFE_INTEGER
+		this.errorCodes = {}
+		this.histogramMs = {}
+	}
+
+	compute(options, latency) {
+		// configuration
 		this.url = options.url
-		this.maxRequests = options.maxRequests
-		this.maxSeconds = options.maxSeconds
-		this.concurrency = options.concurrency
+		this.cores = options.cores
+		this.maxRequests = parseInt(options.maxRequests)
+		this.maxSeconds = parseInt(options.maxSeconds)
+		this.concurrency = parseInt(options.concurrency)
 		this.agent = options.agentKeepAlive ? 'keepalive' : 'none';
-		this.requestsPerSecond = options.requestsPerSecond
-		// results
-		this.elapsedSeconds = latency.getElapsed(latency.initialTime) / 1000
-		const meanTime = latency.totalTime / latency.totalRequests
+		this.requestsPerSecond = parseInt(options.requestsPerSecond)
+		// result
+		this.startTimeMs = Number(latency.startTimeNs / 1000000n)
+		this.endTimeMs = Number(latency.endTimeNs / 1000000n)
 		this.totalRequests = latency.totalRequests
 		this.totalErrors = latency.totalErrors
-		this.totalTimeSeconds = this.elapsedSeconds
-		this.rps = Math.round(latency.totalRequests / this.elapsedSeconds)
-		this.meanLatencyMs = Math.round(meanTime * 10) / 10
+		this.accumulatedMs = latency.totalTime
 		this.maxLatencyMs = latency.maxLatencyMs
 		this.minLatencyMs = latency.minLatencyMs
-		this.percentiles = latency.computePercentiles()
 		this.errorCodes = latency.errorCodes
+		this.histogramMs = latency.histogramMs
+		this.computeDerived()
+	}
+
+	computeDerived() {
+		this.elapsedSeconds = (this.endTimeMs - this.startTimeMs) / 1000
+		this.totalTimeSeconds = this.elapsedSeconds // backwards compatibility
+		const meanTime = this.accumulatedMs / this.totalRequests
+		this.meanLatencyMs = Math.round(meanTime * 10) / 10
+		this.effectiveRps = Math.round(this.totalRequests / this.elapsedSeconds)
+		this.rps = this.effectiveRps // backwards compatibility
+		this.computePercentiles()
+	}
+
+	computePercentiles() {
+		this.percentiles = {
+			50: false,
+			90: false,
+			95: false,
+			99: false
+		};
+		let counted = 0;
+
+		for (let ms = 0; ms <= this.maxLatencyMs; ms++) {
+			if (!this.histogramMs[ms]) {
+				continue;
+			}
+			counted += this.histogramMs[ms];
+			const percent = counted / this.totalRequests * 100;
+
+			Object.keys(this.percentiles).forEach(percentile => {
+				if (!this.percentiles[percentile] && percent > percentile) {
+					this.percentiles[percentile] = ms;
+				}
+			});
+		}
+	}
+
+	combine(result) {
+		// configuration
+		this.url = this.url || result.url
+		this.cores += 1
+		this.maxRequests += result.maxRequests
+		this.maxSeconds = this.maxSeconds || result.maxSeconds
+		this.concurrency = this.concurrency || result.concurrency
+		this.agent = this.agent || result.agent
+		this.requestsPerSecond += result.requestsPerSecond || 0
+		// result
+		this.startTimeMs = Math.min(this.startTimeMs, result.startTimeMs)
+		this.endTimeMs = Math.max(this.endTimeMs, result.endTimeMs)
+		this.totalRequests += result.totalRequests
+		this.totalErrors += result.totalErrors
+		this.accumulatedMs += result.accumulatedMs
+		this.maxLatencyMs = Math.max(this.maxLatencyMs, result.maxLatencyMs)
+		this.minLatencyMs = Math.min(this.minLatencyMs, result.minLatencyMs)
+		this.combineMap(this.errorCodes, result.errorCodes)
+		this.combineMap(this.histogramMs, result.histogramMs)
+		this.computeDerived()
+	}
+
+	combineMap(originalMap, addedMap) {
+		for (const key in {...originalMap, ...addedMap}) {
+			if (!originalMap[key]) {
+				originalMap[key] = 0
+			}
+			if (addedMap[key]) {
+				originalMap[key] += addedMap[key]
+			}
+		}
 	}
 
 	/**
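`combineMap` above merges per-worker error-code and histogram maps by summing counts per key. The same merge as a standalone function (extracted from the method purely for illustration):

```javascript
// Merge addedMap into originalMap, summing counts per key,
// as Result.combineMap does for errorCodes and histogramMs.
function combineMap(originalMap, addedMap) {
	for (const key in {...originalMap, ...addedMap}) {
		if (!originalMap[key]) {
			originalMap[key] = 0
		}
		if (addedMap[key]) {
			originalMap[key] += addedMap[key]
		}
	}
	return originalMap
}
```

Merging the histogram `{2: 1, 100: 300}` with `{2: 4, 55: 7}` yields `{2: 5, 55: 7, 100: 300}`.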
@@ -38,16 +126,19 @@ export class Result {
 			console.info('Max time (s): %s', this.maxSeconds);
 		}
 		console.info('Concurrency level: %s', this.concurrency);
+		if (this.cores) {
+			console.info('Running on cores: %s', this.cores);
+		}
 		console.info('Agent: %s', this.agent);
 		if (this.requestsPerSecond) {
-			console.info('Requests per second: %s', this.requestsPerSecond);
+			console.info('Target rps: %s', this.requestsPerSecond);
 		}
 		console.info('');
 		console.info('Completed requests: %s', this.totalRequests);
 		console.info('Total errors: %s', this.totalErrors);
-		console.info('Total time: %s s', this.totalTimeSeconds);
-		console.info('Requests per second: %s', this.rps);
+		console.info('Total time: %s s', this.elapsedSeconds);
 		console.info('Mean latency: %s ms', this.meanLatencyMs);
+		console.info('Effective rps: %s', this.effectiveRps);
 		console.info('');
 		console.info('Percentage of the requests served within a certain time');
 
package/lib/testserver.js CHANGED
@@ -5,9 +5,33 @@ import * as net from 'net'
 import {Latency} from './latency.js'
 
 const PORT = 7357;
-const LOG_HEADERS_INTERVAL_SECONDS = 1;
+const LOG_HEADERS_INTERVAL_MS = 5000;
 
 
+/**
+ * Start a test server. Parameters:
+ * - `options`, can contain:
+ *   - port: the port to use, default 7357.
+ *   - delay: wait the given milliseconds before answering.
+ *   - quiet: do not log any messages.
+ *   - percent: give an error (default 500) on some % of requests.
+ *   - error: set an HTTP error code, default is 500.
+ * - `callback`: optional callback, called after the server has started.
+ * If not present will return a promise.
+ */
+export function startServer(options, callback) {
+	const server = new TestServer(options);
+	if (callback) {
+		return server.start(callback)
+	}
+	return new Promise((resolve, reject) => {
+		server.start((error, result) => {
+			if (error) return reject(error)
+			return resolve(result)
+		})
+	})
+}
+
 /**
  * A test server, with the given options (see below on startServer()).
  */
@@ -18,6 +42,7 @@ class TestServer {
 		this.server = null;
 		this.wsServer = null;
 		this.latency = new Latency({});
+		this.totalRequests = 0
 		this.debuggedTime = Date.now();
 	}
 
@@ -82,10 +107,10 @@
 			request.body += data.toString();
 		});
 		request.on('end', () => {
-			const now = Date.now();
-			if (now - this.debuggedTime > LOG_HEADERS_INTERVAL_SECONDS * 1000) {
-				this.debug(request);
-				this.debuggedTime = now;
+			this.totalRequests += 1
+			const elapsedMs = Date.now() - this.debuggedTime
+			if (elapsedMs > LOG_HEADERS_INTERVAL_MS) {
+				this.debug(request, elapsedMs);
 			}
 			if (!this.options.delay) {
 				return this.end(response, id);
@@ -118,10 +143,20 @@
 	 * Debug headers and other interesting information: POST body.
 	 */
 	debug(request) {
-		if (!this.options.quiet) console.info('Headers for %s to %s: %s', request.method, request.url, util.inspect(request.headers));
+		if (this.options.quiet) return
+		const headers = util.inspect(request.headers)
+		const now = Date.now()
+		const elapsedMs = now - this.debuggedTime
+		const rps = (this.totalRequests / elapsedMs) * 1000
+		if (rps > 1) {
+			console.info(`Requests per second: ${rps.toFixed(0)}`)
+		}
+		console.info(`Headers for ${request.method} to ${request.url}: ${headers}`)
 		if (request.body) {
-			if (!this.options.quiet) console.info('Body: %s', request.body);
+			console.info(`Body: ${request.body}`);
 		}
+		this.debuggedTime = now;
+		this.totalRequests = 0
 	}
 
 	/**
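The requests-per-second figure the test server now logs is simply the requests counted in the window over the elapsed milliseconds, scaled to one second; both counters reset after each log line. The computation in isolation:

```javascript
// rps as logged by TestServer.debug(): requests in the window, per second.
function windowRps(totalRequests, elapsedMs) {
	return (totalRequests / elapsedMs) * 1000
}
```

For example, 500 requests over a 5000 ms window is 100 rps.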
@@ -154,27 +189,3 @@
 	}
 }
 
-/**
- * Start a test server. Parameters:
- * - `options`, can contain:
- *   - port: the port to use, default 7357.
- *   - delay: wait the given milliseconds before answering.
- *   - quiet: do not log any messages.
- *   - percent: give an error (default 500) on some % of requests.
- *   - error: set an HTTP error code, default is 500.
- * - `callback`: optional callback, called after the server has started.
- * If not present will return a promise.
- */
-export function startServer(options, callback) {
-	const server = new TestServer(options);
-	if (callback) {
-		return server.start(callback)
-	}
-	return new Promise((resolve, reject) => {
-		server.start((error, result) => {
-			if (error) return reject(error)
-			return resolve(result)
-		})
-	})
-}
-
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "loadtest",
-  "version": "6.2.1",
+  "version": "6.3.0",
   "type": "module",
   "description": "Run load tests for your web application. Mostly ab-compatible interface, with an option to force requests per second. Includes an API for automated load testing.",
   "homepage": "https://github.com/alexfernandez/loadtest",
package/test/all.js CHANGED
@@ -14,6 +14,7 @@ import {test as testBodyGenerator} from './body-generator.js'
 import {test as testLoadtest} from './loadtest.js'
 import {test as testWebsocket} from './websocket.js'
 import {test as integrationTest} from './integration.js'
+import {test as testResult} from './result.js'
 
 
 /**
@@ -23,7 +24,7 @@ function test() {
 	const tests = [
 		testHrtimer, testHeaders, testLatency, testHttpClient,
 		testServer, integrationTest, testLoadtest, testWebsocket,
-		testRequestGenerator, testBodyGenerator,
+		testRequestGenerator, testBodyGenerator, testResult,
 	];
 	testing.run(tests, 4200);
 }
package/test/result.js ADDED
@@ -0,0 +1,48 @@
+import testing from 'testing'
+import {Result} from '../lib/result.js'
+
+
+function testCombineEmptyResults(callback) {
+	const result = new Result()
+	result.combine(new Result())
+	testing.assert(!result.url, callback)
+	testing.success(callback)
+}
+
+function testCombineResults(callback) {
+	const combined = new Result()
+	const url = 'https://pinchito.es/'
+	for (let index = 0; index < 3; index++) {
+		const result = {
+			url,
+			cores: 7,
+			maxRequests: 1000,
+			concurrency: 10,
+			agent: 'none',
+			requestsPerSecond: 100,
+			totalRequests: 330,
+			totalErrors: 10,
+			startTimeMs: 1000 + index * 1000,
+			endTimeMs: 1000 + index * 2000,
+			accumulatedMs: 5000,
+			maxLatencyMs: 350 + index,
+			minLatencyMs: 2 + index,
+			errorCodes: {200: 100, 100: 200},
+			histogramMs: {2: 1, 3: 4, 100: 300},
+		}
+		combined.combine(result)
+	}
+	testing.assertEquals(combined.url, url, callback)
+	testing.assertEquals(combined.cores, 3, callback)
+	testing.assertEquals(combined.totalErrors, 30, callback)
+	testing.assertEquals(combined.elapsedSeconds, 4, callback)
+	testing.success(callback)
+}
+
+export function test(callback) {
+	const tests = [
+		testCombineEmptyResults, testCombineResults,
+	];
+	testing.run(tests, callback);
+}
+