loadtest 6.2.2 → 6.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,4 +1,3 @@
- [![Build Status](https://secure.travis-ci.org/alexfernandez/loadtest.svg)](http://travis-ci.org/alexfernandez/loadtest)
  [![run on repl.it](http://repl.it/badge/github/alexfernandez/loadtest)](https://repl.it/github/alexfernandez/loadtest)

  [![NPM](https://nodei.co/npm/loadtest.png?downloads=true)](https://nodei.co/npm/loadtest/)
@@ -90,9 +89,13 @@ so that you can abort deployment e.g. if 99% of the requests don't finish in 10
  ### Usage Don'ts

  `loadtest` saturates a single CPU pretty quickly.
- Do not use `loadtest` if the Node.js process is above 100% usage in `top`, which happens approx. when your load is above 1000~4000 rps.
+ Do not use `loadtest` in this mode
+ if the Node.js process is above 100% usage in `top`, which happens approx. when your load is above 1000~4000 rps.
  (You can measure the practical limits of `loadtest` on your specific test machines by running it against a simple
- Apache or nginx process and seeing when it reaches 100% CPU.)
+ [test server](#test-server)
+ and seeing when it reaches 100% CPU.)
+ In this case try using in multi-process mode using the `--cores` parameter,
+ see below.

  There are better tools for that use case:

@@ -260,8 +263,9 @@ The following parameters are _not_ compatible with Apache ab.
  #### `--rps requestsPerSecond`

  Controls the number of requests per second that are sent.
- Can be fractional, e.g. `--rps 0.5` sends one request every two seconds.
- Not used by default: each request is sent as soon as the previous one is responded.
+ Cannot be fractional, e.g. `--rps 0.5`.
+ In this mode each request is not sent as soon as the previous one is responded,
+ but periodically even if previous requests have not been responded yet.

  Note: Concurrency doesn't affect the final number of requests per second,
  since rps will be shared by all the clients. E.g.:
@@ -276,6 +280,19 @@ to send all of the rps, adjust it with `-c` if needed.

  Note: --rps is not supported for websockets.

+ #### `--cores number`
+
+ Start `loadtest` in multi-process mode on a number of cores simultaneously.
+ Useful when a single CPU is saturated.
+ Forks the requested number of processes using the
+ [Node.js cluster module](https://nodejs.org/api/cluster.html).
+
+ In this mode the total number of requests and the rps rate are shared among all processes.
+ The result returned is the aggregation of results from all cores.
+
+ Note: this option is not available in the API,
+ where it runs just in the provided process.
+
  #### `--timeout milliseconds`

  Timeout for each generated request in milliseconds.
@@ -337,11 +354,11 @@ Sets the certificate for the http client to use. Must be used with `--key`.

  Sets the key for the http client to use. Must be used with `--cert`.

- ### Server
+ ### Test Server

  loadtest bundles a test server. To run it:

- $ testserver-loadtest [--delay ms] [error 5xx] [percent yy] [port]
+ $ testserver-loadtest [options] [port]

  This command will show the number of requests received per second,
  the latency in answering requests and the headers for selected requests.
@@ -354,6 +371,27 @@ The optional delay instructs the server to wait for the given number of millisec
  before answering each request, to simulate a busy server.
  You can also simulate errors on a given percent of requests.

+ The following optional parameters are available.
+
+ #### `--delay ms`
+
+ Wait the specified number of milliseconds before answering each request.
+
+ #### `--error 5xx`
+
+ Return the given error for every request.
+
+ #### `--percent yy`
+
+ Return an error (default 500) only for the specified % of requests.
+
+ #### `--cores number`
+
+ Number of cores to use. If not 1, will start in multi-process mode.
+
+ Note: since version v6.3.0 the test server uses half the available cores by default;
+ use `--cores 1` to use in single-process mode.
+
  ### Complete Example

  Let us now see how to measure the performance of the test server.
@@ -364,8 +402,9 @@ First we install `loadtest` globally:

  Now we start the test server:

- $ testserver-loadtest
- Listening on port 7357
+ $ testserver-loadtest --cores 2
+ Listening on http://localhost:7357/
+ Listening on http://localhost:7357/

  On a different console window we run a load test against it for 20 seconds
  with concurrency 10 (only relevant results are shown):
@@ -458,7 +497,7 @@ The result (with the same test server) is impressive:
  99% 10 ms
  100% 25 ms (longest request)

- Now you're talking! The steady rate also goes up to 2 krps:
+ Now we're talking! The steady rate also goes up to 2 krps:

  $ loadtest http://localhost:7357/ -t 20 -c 10 --keepalive --rps 2000
  ...
@@ -528,7 +567,7 @@ and will not call the callback.

  The latency result returned at the end of the load test contains a full set of data, including:
  mean latency, number of errors and percentiles.
- An example follows:
+ A simplified example follows:

  ```javascript
  {
@@ -545,8 +584,8 @@ An example follows:
  '95': 11,
  '99': 15
  },
- rps: 2824,
- totalTimeSeconds: 0.354108,
+ effectiveRps: 2824,
+ elapsedSeconds: 0.354108,
  meanLatencyMs: 7.72,
  maxLatencyMs: 20,
  totalErrors: 3,
package/bin/loadtest.js CHANGED
@@ -3,6 +3,8 @@
  import {readFile} from 'fs/promises'
  import * as stdio from 'stdio'
  import {loadTest} from '../lib/loadtest.js'
+ import {runTask} from '../lib/cluster.js'
+ import {Result} from '../lib/result.js'


  const options = stdio.getopt({
@@ -32,8 +34,9 @@ const options = stdio.getopt({
  key: {args: 1, description: 'The client key to use'},
  cert: {args: 1, description: 'The client certificate to use'},
  quiet: {description: 'Do not log any messages'},
+ cores: {args: 1, description: 'Number of cores to use', default: 1},
  agent: {description: 'Use a keep-alive http agent (deprecated)'},
- debug: {description: 'Show debug messages (deprecated)'}
+ debug: {description: 'Show debug messages (deprecated)'},
  });

  async function processAndRun(options) {
@@ -51,21 +54,62 @@ async function processAndRun(options) {
  help();
  }
  options.url = options.args[0];
- try {
- const result = await loadTest(options)
- result.show()
- } catch(error) {
- console.error(error.message)
- help()
+ options.cores = parseInt(options.cores) || 1
+ const results = await runTask(options.cores, async workerId => await startTest(options, workerId))
+ if (!results) {
+ process.exit(0)
+ return
+ }
+ showResults(results)
+ }
+
+ function showResults(results) {
+ if (results.length == 1) {
+ results[0].show()
+ return
+ }
+ const combined = new Result()
+ for (const result of results) {
+ combined.combine(result)
+ }
+ combined.show()
+ }
+
+ async function startTest(options, workerId) {
+ if (!workerId) {
+ // standalone; controlled errors
+ try {
+ return await loadTest(options)
+ } catch(error) {
+ console.error(error.message)
+ return help()
+ }
+ }
+ shareWorker(options, workerId)
+ return await loadTest(options)
+ }
+
+ function shareWorker(options, workerId) {
+ options.maxRequests = shareOption(options.maxRequests, workerId, options.cores)
+ options.rps = shareOption(options.rps, workerId, options.cores)
+ }
+
+ function shareOption(option, workerId, cores) {
+ if (!option) return null
+ const total = parseInt(option)
+ const shared = Math.round(total / cores)
+ if (workerId == cores) {
+ // last worker gets remainder
+ return total - shared * (cores - 1)
+ } else {
+ return shared
  }
  }

  await processAndRun(options)

- /**
- * Show online help.
- */
  function help() {
  options.printHelp();
  process.exit(1);
  }
+
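The per-worker sharing added above splits `--maxRequests` and `--rps` across workers, with the last worker absorbing the rounding remainder. Extracted as a standalone sketch for illustration:

```javascript
// Sketch of shareOption from bin/loadtest.js above: split a numeric option
// across `cores` workers; the last worker gets the rounding remainder.
function shareOption(option, workerId, cores) {
	if (!option) return null
	const total = parseInt(option)
	const shared = Math.round(total / cores)
	if (workerId == cores) {
		// last worker gets remainder
		return total - shared * (cores - 1)
	}
	return shared
}

// 1000 requests over 3 cores: workers 1 and 2 get 333, worker 3 gets 334.
console.log(shareOption(1000, 1, 3), shareOption(1000, 3, 3))
```

The shares always sum back to the original total, so `--maxRequests` is honored exactly across the whole cluster.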
package/bin/testserver.js CHANGED
@@ -3,40 +3,51 @@
  import * as stdio from 'stdio'
  import {startServer} from '../lib/testserver.js'
  import {loadConfig} from '../lib/config.js'
+ import {getHalfCores, runTask} from '../lib/cluster.js'

+ const options = readOptions()
+ start(options)

- const options = stdio.getopt({
- delay: {key: 'd', args: 1, description: 'Delay the response for the given milliseconds'},
- error: {key: 'e', args: 1, description: 'Return an HTTP error code'},
- percent: {key: 'p', args: 1, description: 'Return an error (default 500) only for some % of requests'},
- });
- const configuration = loadConfig()
- if (options.args && options.args.length == 1) {
- options.port = parseInt(options.args[0], 10);
- if (!options.port) {
- console.error('Invalid port');
- options.printHelp();
- process.exit(1);
+
+ function readOptions() {
+ const options = stdio.getopt({
+ delay: {key: 'd', args: 1, description: 'Delay the response for the given milliseconds'},
+ error: {key: 'e', args: 1, description: 'Return an HTTP error code'},
+ percent: {key: 'p', args: 1, description: 'Return an error (default 500) only for some % of requests'},
+ cores: {key: 'c', args: 1, description: 'Number of cores to use, default is half the total', default: getHalfCores()}
+ });
+ const configuration = loadConfig()
+ if (options.args && options.args.length == 1) {
+ options.port = parseInt(options.args[0], 10);
+ if (!options.port) {
+ console.error('Invalid port');
+ options.printHelp();
+ process.exit(1);
+ }
  }
- }
- if(options.delay) {
- if(isNaN(options.delay)) {
- console.error('Invalid delay');
- options.printHelp();
- process.exit(1);
+ if(options.delay) {
+ if(isNaN(options.delay)) {
+ console.error('Invalid delay');
+ options.printHelp();
+ process.exit(1);
+ }
+ options.delay = parseInt(options.delay, 10);
  }
- options.delay = parseInt(options.delay, 10);
- }

- if(!options.delay) {
- options.delay = configuration.delay
- }
- if(!options.error) {
- options.error = configuration.error
+ if(!options.delay) {
+ options.delay = configuration.delay
+ }
+ if(!options.error) {
+ options.error = configuration.error
+ }
+ if(!options.percent) {
+ options.percent = configuration.percent
+ }
+ return options
  }
- if(!options.percent) {
- options.percent = configuration.percent
+
+ function start(options) {
+ runTask(options.cores, async () => await startServer(options))
  }

- startServer(options);

package/lib/cluster.js ADDED
@@ -0,0 +1,42 @@
+ process.env.NODE_CLUSTER_SCHED_POLICY = 'none'
+
+ import {cpus} from 'os'
+ // dynamic import as workaround: https://github.com/nodejs/node/issues/49240
+ const cluster = await import('cluster')
+
+
+ export function getHalfCores() {
+ const totalCores = cpus().length
+ return Math.round(totalCores / 2) || 1
+ }
+
+ export async function runTask(cores, task) {
+ if (cores == 1) {
+ return [await task()]
+ }
+ if (cluster.isPrimary) {
+ return await runWorkers(cores)
+ } else {
+ const result = await task(cluster.worker.id)
+ process.send(result)
+ }
+ }
+
+ function runWorkers(cores) {
+ return new Promise((resolve, reject) => {
+ const results = []
+ for (let index = 0; index < cores; index++) {
+ const worker = cluster.fork()
+ worker.on('message', message => {
+ results.push(message)
+ if (results.length === cores) {
+ return resolve(results)
+ }
+ })
+ worker.on('error', error => {
+ return reject(error)
+ })
+ }
+ })
+ }
+
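The default worker count comes from `getHalfCores` above. Its rounding behavior, reduced to a pure function for illustration (`halfOf` is a name invented here, with the core count passed in instead of read from `os.cpus()`):

```javascript
// Mirrors getHalfCores from lib/cluster.js above:
// half the cores, rounded, but never less than one.
function halfOf(totalCores) {
	return Math.round(totalCores / 2) || 1
}

// halfOf(8) → 4, halfOf(3) → 2, halfOf(1) → 1
```

The `|| 1` guard matters on single-core machines, where `Math.round(1 / 2)` is already 1 but a zero input would otherwise propagate.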
package/lib/latency.js CHANGED
@@ -17,8 +17,8 @@ export class Latency {
  this.partialRequests = 0;
  this.partialTime = 0;
  this.partialErrors = 0;
- this.lastShown = this.getTime();
- this.initialTime = this.getTime();
+ this.lastShownNs = this.getTimeNs();
+ this.startTimeNs = this.getTimeNs();
  this.totalRequests = 0;
  this.totalTime = 0;
  this.totalErrors = 0;
@@ -45,7 +45,7 @@ export class Latency {
  */
  start(requestId) {
  requestId = requestId || createId();
- this.requests[requestId] = this.getTime();
+ this.requests[requestId] = this.getTimeNs();
  this.requestIdToIndex[requestId] = this.requestIndex++;
  return requestId;
  }
@@ -61,7 +61,7 @@ export class Latency {
  if (!this.running) {
  return -1;
  }
- const elapsed = this.getElapsed(this.requests[requestId]);
+ const elapsed = this.getElapsedMs(this.requests[requestId]);
  this.add(elapsed, errorCode);
  delete this.requests[requestId];
  return elapsed;
@@ -105,7 +105,7 @@ export class Latency {
  * Show latency for partial requests.
  */
  showPartial() {
- const elapsedSeconds = this.getElapsed(this.lastShown) / 1000;
+ const elapsedSeconds = this.getElapsedMs(this.lastShownNs) / 1000;
  const meanTime = this.partialTime / this.partialRequests || 0.0;
  const result = {
  meanLatencyMs: Math.round(meanTime * 10) / 10,
@@ -125,25 +125,26 @@ export class Latency {
  this.partialTime = 0;
  this.partialRequests = 0;
  this.partialErrors = 0;
- this.lastShown = this.getTime();
+ this.lastShownNs = this.getTimeNs();
  }

  /**
- * Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array
+ * Returns the current high-resolution real time in nanoseconds as a big int.
  * @return {*}
  */
- getTime() {
- return process.hrtime();
+ getTimeNs() {
+ return process.hrtime.bigint();
  }

  /**
- * calculates the elapsed time between the assigned startTime and now
- * @param startTime
+ * Calculates the elapsed time between the assigned start time and now in ms.
+ * @param startTimeNs time in nanoseconds (bigint)
  * @return {Number} the elapsed time in milliseconds
  */
- getElapsed(startTime) {
- const elapsed = process.hrtime(startTime);
- return elapsed[0] * 1000 + elapsed[1] / 1000000;
+ getElapsedMs(startTimeNs) {
+ const endTimeNs = this.getTimeNs()
+ const elapsedNs = endTimeNs - startTimeNs
+ return Number(elapsedNs / 1000000n)
  }

  /**
@@ -153,7 +154,7 @@ export class Latency {
  if (this.options.maxRequests && this.totalRequests >= this.options.maxRequests) {
  return true;
  }
- const elapsedSeconds = this.getElapsed(this.initialTime) / 1000;
+ const elapsedSeconds = this.getElapsedMs(this.startTimeNs) / 1000;
  if (this.options.maxSeconds && elapsedSeconds >= this.options.maxSeconds) {
  return true;
  }
@@ -165,6 +166,7 @@ export class Latency {
  */
  finish() {
  this.running = false;
+ this.endTimeNs = this.getTimeNs()
  if (this.callback) {
  return this.callback(null, this.getResult());
  }
@@ -174,38 +176,11 @@ export class Latency {
  * Get final result.
  */
  getResult() {
- const result = new Result(this.options, this)
+ const result = new Result()
+ result.compute(this.options, this)
  return result
  }

- /**
- * Compute the percentiles.
- */
- computePercentiles() {
- const percentiles = {
- 50: false,
- 90: false,
- 95: false,
- 99: false
- };
- let counted = 0;
-
- for (let ms = 0; ms <= this.maxLatencyMs; ms++) {
- if (!this.histogramMs[ms]) {
- continue;
- }
- counted += this.histogramMs[ms];
- const percent = counted / this.totalRequests * 100;
-
- Object.keys(percentiles).forEach(percentile => {
- if (!percentiles[percentile] && percent > percentile) {
- percentiles[percentile] = ms;
- }
- });
- }
- return percentiles;
- }
-
  /**
  * Show final result.
  */
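The switch above from `process.hrtime()` tuples to `process.hrtime.bigint()` keeps every timestamp as a nanosecond `BigInt` and only converts at the edges. A minimal sketch of the new pattern (requires Node.js ≥ 10.7, where `hrtime.bigint` exists):

```javascript
// Nanosecond timestamps as BigInt; convert to whole milliseconds on read,
// as getTimeNs/getElapsedMs do in lib/latency.js above.
function getTimeNs() {
	return process.hrtime.bigint()
}

function getElapsedMs(startTimeNs) {
	const elapsedNs = process.hrtime.bigint() - startTimeNs
	// BigInt division truncates, so the result is an integer count of ms
	return Number(elapsedNs / 1000000n)
}

const start = getTimeNs()
const elapsed = getElapsedMs(start)
```

Note that the integer `BigInt` division truncates sub-millisecond precision, unlike the old tuple arithmetic (`elapsed[0] * 1000 + elapsed[1] / 1000000`), which returned fractional milliseconds.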
package/lib/loadtest.js CHANGED
@@ -160,7 +160,7 @@ class Operation {
  */
  stop() {
  this.running = false;
- this.latency.running = false;
+ this.latency.finish()
  if (this.showTimer) {
  this.showTimer.stop();
  }
package/lib/result.js CHANGED
@@ -4,26 +4,114 @@
  * Result of a load test.
  */
  export class Result {
- constructor(options, latency) {
- // options
+ constructor() {
+ this.url = null
+ this.cores = 0
+ this.maxRequests = 0
+ this.maxSeconds = 0
+ this.concurrency = 0
+ this.agent = null
+ this.requestsPerSecond = 0
+ this.startTimeMs = Number.MAX_SAFE_INTEGER
+ this.endTimeMs = 0
+ this.elapsedSeconds = 0
+ this.totalRequests = 0
+ this.totalErrors = 0
+ this.totalTimeSeconds = 0
+ this.accumulatedMs = 0
+ this.maxLatencyMs = 0
+ this.minLatencyMs = Number.MAX_SAFE_INTEGER
+ this.errorCodes = {}
+ this.histogramMs = {}
+ }
+
+ compute(options, latency) {
+ // configuration
  this.url = options.url
- this.maxRequests = options.maxRequests
- this.maxSeconds = options.maxSeconds
- this.concurrency = options.concurrency
+ this.cores = options.cores
+ this.maxRequests = parseInt(options.maxRequests)
+ this.maxSeconds = parseInt(options.maxSeconds)
+ this.concurrency = parseInt(options.concurrency)
  this.agent = options.agentKeepAlive ? 'keepalive' : 'none';
- this.requestsPerSecond = options.requestsPerSecond
- // results
- this.elapsedSeconds = latency.getElapsed(latency.initialTime) / 1000
- const meanTime = latency.totalTime / latency.totalRequests
+ this.requestsPerSecond = parseInt(options.requestsPerSecond)
+ // result
+ this.startTimeMs = Number(latency.startTimeNs / 1000000n)
+ this.endTimeMs = Number(latency.endTimeNs / 1000000n)
  this.totalRequests = latency.totalRequests
  this.totalErrors = latency.totalErrors
- this.totalTimeSeconds = this.elapsedSeconds
- this.rps = Math.round(latency.totalRequests / this.elapsedSeconds)
- this.meanLatencyMs = Math.round(meanTime * 10) / 10
+ this.accumulatedMs = latency.totalTime
  this.maxLatencyMs = latency.maxLatencyMs
  this.minLatencyMs = latency.minLatencyMs
- this.percentiles = latency.computePercentiles()
  this.errorCodes = latency.errorCodes
+ this.histogramMs = latency.histogramMs
+ this.computeDerived()
+ }
+
+ computeDerived() {
+ this.elapsedSeconds = (this.endTimeMs - this.startTimeMs) / 1000
+ this.totalTimeSeconds = this.elapsedSeconds // backwards compatibility
+ const meanTime = this.accumulatedMs / this.totalRequests
+ this.meanLatencyMs = Math.round(meanTime * 10) / 10
+ this.effectiveRps = Math.round(this.totalRequests / this.elapsedSeconds)
+ this.rps = this.effectiveRps // backwards compatibility
+ this.computePercentiles()
+ }
+
+ computePercentiles() {
+ this.percentiles = {
+ 50: false,
+ 90: false,
+ 95: false,
+ 99: false
+ };
+ let counted = 0;
+
+ for (let ms = 0; ms <= this.maxLatencyMs; ms++) {
+ if (!this.histogramMs[ms]) {
+ continue;
+ }
+ counted += this.histogramMs[ms];
+ const percent = counted / this.totalRequests * 100;
+
+ Object.keys(this.percentiles).forEach(percentile => {
+ if (!this.percentiles[percentile] && percent > percentile) {
+ this.percentiles[percentile] = ms;
+ }
+ });
+ }
+ }
+
+ combine(result) {
+ // configuration
+ this.url = this.url || result.url
+ this.cores += 1
+ this.maxRequests += result.maxRequests
+ this.maxSeconds = this.maxSeconds || result.maxSeconds
+ this.concurrency = this.concurrency || result.concurrency
+ this.agent = this.agent || result.agent
+ this.requestsPerSecond += result.requestsPerSecond || 0
+ // result
+ this.startTimeMs = Math.min(this.startTimeMs, result.startTimeMs)
+ this.endTimeMs = Math.max(this.endTimeMs, result.endTimeMs)
+ this.totalRequests += result.totalRequests
+ this.totalErrors += result.totalErrors
+ this.accumulatedMs += result.accumulatedMs
+ this.maxLatencyMs = Math.max(this.maxLatencyMs, result.maxLatencyMs)
+ this.minLatencyMs = Math.min(this.minLatencyMs, result.minLatencyMs)
+ this.combineMap(this.errorCodes, result.errorCodes)
+ this.combineMap(this.histogramMs, result.histogramMs)
+ this.computeDerived()
+ }
+
+ combineMap(originalMap, addedMap) {
+ for (const key in {...originalMap, ...addedMap}) {
+ if (!originalMap[key]) {
+ originalMap[key] = 0
+ }
+ if (addedMap[key]) {
+ originalMap[key] += addedMap[key]
+ }
+ }
  }

  /**
@@ -38,16 +126,19 @@ export class Result {
  console.info('Max time (s): %s', this.maxSeconds);
  }
  console.info('Concurrency level: %s', this.concurrency);
+ if (this.cores) {
+ console.info('Running on cores: %s', this.cores);
+ }
  console.info('Agent: %s', this.agent);
  if (this.requestsPerSecond) {
- console.info('Requests per second: %s', this.requestsPerSecond);
+ console.info('Target rps: %s', this.requestsPerSecond);
  }
  console.info('');
  console.info('Completed requests: %s', this.totalRequests);
  console.info('Total errors: %s', this.totalErrors);
- console.info('Total time: %s s', this.totalTimeSeconds);
- console.info('Requests per second: %s', this.rps);
+ console.info('Total time: %s s', this.elapsedSeconds);
  console.info('Mean latency: %s ms', this.meanLatencyMs);
+ console.info('Effective rps: %s', this.effectiveRps);
  console.info('');
  console.info('Percentage of the requests served within a certain time');

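The new `combine` merges per-worker results; histograms and error-code maps are merged key-wise by `combineMap`. Extracted for illustration:

```javascript
// combineMap from lib/result.js above: add counts from addedMap into
// originalMap, creating missing keys as zero first.
function combineMap(originalMap, addedMap) {
	for (const key in {...originalMap, ...addedMap}) {
		if (!originalMap[key]) {
			originalMap[key] = 0
		}
		if (addedMap[key]) {
			originalMap[key] += addedMap[key]
		}
	}
}

const histogramMs = {2: 1, 3: 4}
combineMap(histogramMs, {3: 2, 100: 5})
// histogramMs is now {2: 1, 3: 6, 100: 5}
```

Iterating over the spread of both maps ensures keys present in only one of them are still carried into the merged result.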
package/lib/testserver.js CHANGED
@@ -8,6 +8,30 @@ const PORT = 7357;
  const LOG_HEADERS_INTERVAL_MS = 5000;


+ /**
+ * Start a test server. Parameters:
+ * - `options`, can contain:
+ * - port: the port to use, default 7357.
+ * - delay: wait the given milliseconds before answering.
+ * - quiet: do not log any messages.
+ * - percent: give an error (default 500) on some % of requests.
+ * - error: set an HTTP error code, default is 500.
+ * - `callback`: optional callback, called after the server has started.
+ * If not present will return a promise.
+ */
+ export function startServer(options, callback) {
+ const server = new TestServer(options);
+ if (callback) {
+ return server.start(callback)
+ }
+ return new Promise((resolve, reject) => {
+ server.start((error, result) => {
+ if (error) return reject(error)
+ return resolve(result)
+ })
+ })
+ }
+
  /**
  * A test server, with the given options (see below on startServer()).
  */
@@ -82,6 +106,11 @@ class TestServer {
  request.on('data', data => {
  request.body += data.toString();
  });
+ request.on('error', () => {
+ // ignore request
+ response.end()
+ this.latency.end(id, -1);
+ })
  request.on('end', () => {
  this.totalRequests += 1
  const elapsedMs = Date.now() - this.debuggedTime
@@ -165,27 +194,3 @@ class TestServer {
  }
  }

- /**
- * Start a test server. Parameters:
- * - `options`, can contain:
- * - port: the port to use, default 7357.
- * - delay: wait the given milliseconds before answering.
- * - quiet: do not log any messages.
- * - percent: give an error (default 500) on some % of requests.
- * - error: set an HTTP error code, default is 500.
- * - `callback`: optional callback, called after the server has started.
- * If not present will return a promise.
- */
- export function startServer(options, callback) {
- const server = new TestServer(options);
- if (callback) {
- return server.start(callback)
- }
- return new Promise((resolve, reject) => {
- server.start((error, result) => {
- if (error) return reject(error)
- return resolve(result)
- })
- })
- }
-
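`startServer` above was moved to the top of the file but keeps its callback-or-promise contract. A reduced sketch of that pattern (`startSomething` is a hypothetical stand-in for `TestServer.start`):

```javascript
// Hypothetical stand-in for TestServer.start: reports success via callback.
function startSomething(options, callback) {
	callback(null, {port: options.port || 7357})
}

// The callback-or-promise pattern used by startServer in lib/testserver.js:
// honor a callback when given, otherwise wrap the same call in a promise.
function start(options, callback) {
	if (callback) {
		return startSomething(options, callback)
	}
	return new Promise((resolve, reject) => {
		startSomething(options, (error, result) => {
			if (error) return reject(error)
			return resolve(result)
		})
	})
}
```

This lets one entry point serve both the legacy callback API and `await`-based callers without duplicating the startup logic.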
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "loadtest",
- "version": "6.2.2",
+ "version": "6.3.1",
  "type": "module",
  "description": "Run load tests for your web application. Mostly ab-compatible interface, with an option to force requests per second. Includes an API for automated load testing.",
  "homepage": "https://github.com/alexfernandez/loadtest",
package/test/all.js CHANGED
@@ -14,6 +14,7 @@ import {test as testBodyGenerator} from './body-generator.js'
  import {test as testLoadtest} from './loadtest.js'
  import {test as testWebsocket} from './websocket.js'
  import {test as integrationTest} from './integration.js'
+ import {test as testResult} from './result.js'


  /**
@@ -23,7 +24,7 @@ function test() {
  const tests = [
  testHrtimer, testHeaders, testLatency, testHttpClient,
  testServer, integrationTest, testLoadtest, testWebsocket,
- testRequestGenerator, testBodyGenerator,
+ testRequestGenerator, testBodyGenerator, testResult,
  ];
  testing.run(tests, 4200);
  }
package/test/result.js ADDED
@@ -0,0 +1,48 @@
+ import testing from 'testing'
+ import {Result} from '../lib/result.js'
+
+
+ function testCombineEmptyResults(callback) {
+ const result = new Result()
+ result.combine(new Result())
+ testing.assert(!result.url, callback)
+ testing.success(callback)
+ }
+
+ function testCombineResults(callback) {
+ const combined = new Result()
+ const url = 'https://pinchito.es/'
+ for (let index = 0; index < 3; index++) {
+ const result = {
+ url,
+ cores: 7,
+ maxRequests: 1000,
+ concurrency: 10,
+ agent: 'none',
+ requestsPerSecond: 100,
+ totalRequests: 330,
+ totalErrors: 10,
+ startTimeMs: 1000 + index * 1000,
+ endTimeMs: 1000 + index * 2000,
+ accumulatedMs: 5000,
+ maxLatencyMs: 350 + index,
+ minLatencyMs: 2 + index,
+ errorCodes: {200: 100, 100: 200},
+ histogramMs: {2: 1, 3: 4, 100: 300},
+ }
+ combined.combine(result)
+ }
+ testing.assertEquals(combined.url, url, callback)
+ testing.assertEquals(combined.cores, 3, callback)
+ testing.assertEquals(combined.totalErrors, 30, callback)
+ testing.assertEquals(combined.elapsedSeconds, 4, callback)
+ testing.success(callback)
+ }
+
+ export function test(callback) {
+ const tests = [
+ testCombineEmptyResults, testCombineResults,
+ ];
+ testing.run(tests, callback);
+ }
+