loadtest 7.1.1 → 8.0.0

package/README.md CHANGED
@@ -77,7 +77,7 @@ but the resulting figure is much more robust.
77
77
  Using the provided API it is very easy to integrate loadtest with your package, and run programmatic load tests.
78
78
  loadtest makes it very easy to run load tests as part of systems tests, before deploying a new version of your software.
79
79
  The result includes mean response times and percentiles,
80
- so that you can abort deployment e.g. if 99% of the requests don't finish in 10 ms or less.
80
+ so that you can abort deployment e.g. if 99% of all requests don't finish in 10 ms or less.
81
81
 
82
82
  ### Usage Don'ts
83
83
 
@@ -116,43 +116,57 @@ It may need installing from source though, and its interface is not `ab`-compati
116
116
 
117
117
  The following parameters are compatible with Apache ab.
118
118
 
119
- #### `-n requests`
119
+ #### `-t`, `--maxSeconds`
120
+
121
+ Max number of seconds to wait until requests no longer go out.
122
+ Default is 10 seconds; it applies only if no `--maxRequests` is specified.
123
+
124
+ Note: this is different than Apache `ab`, which stops _receiving_ requests after the given seconds.
125
+
126
+ **Warning**: max seconds used to have no default value,
127
+ so tests would run indefinitely if no `--maxSeconds` and no `--maxRequests` were specified.
128
+ Max seconds was changed to default to 10 in version 8.
129
+
130
+ #### `-n`, `--maxRequests`
120
131
 
121
132
  Number of requests to send out.
122
- Default is no limit; will keep on sending if not specified.
133
+ Default is no limit;
134
+ will keep on sending until the time limit in `--maxSeconds` is reached.
123
135
 
124
136
  Note: the total number of requests sent can be bigger than the parameter if there is a concurrency parameter;
125
137
  loadtest will report just the first `n`.
126
138
 
127
- #### `-c concurrency`
139
+ #### `-c`, `--concurrency`
128
140
 
129
141
  loadtest will create a certain number of clients; this parameter controls how many.
130
142
  Requests from them will arrive concurrently to the server.
131
- Default value is 1.
143
+ Default value is 10.
132
144
 
133
145
  Note: requests are not sent in parallel (from different processes),
134
146
  but concurrently (a second request may be sent before the first has been answered).
147
+ Does not apply if `--requestsPerSecond` is specified.
135
148
 
136
- #### `-t timelimit`
137
-
138
- Max number of seconds to wait until requests no longer go out.
139
- Default is no limit; will keep on sending if not specified.
149
+ Beware: if concurrency is too low, there may not be enough clients
150
+ to send all the supported traffic;
151
+ adjust it with `-c` if needed.
140
152
 
141
- Note: this is different than Apache `ab`, which stops _receiving_ requests after the given seconds.
153
+ **Warning**: concurrency used to have a default value of 1,
154
+ until it was changed to 10 in version 8.
142
155
 
143
- #### `-k` or `--keepalive`
156
+ #### `-k`, `--keepalive`
144
157
 
145
- Open connections using keep-alive: use header 'Connection: Keep-alive' instead of 'Connection: Close'.
158
+ Open connections using keep-alive:
159
+ use header `Connection: keep-alive` instead of `Connection: close`.
146
160
 
147
161
  Note: Uses [agentkeepalive](https://npmjs.org/package/agentkeepalive),
148
162
  which performs better than the default node.js agent.
149
163
 
150
- #### `-C cookie-name=value`
164
+ #### `-C`, `--cookie cookie-name=value`
151
165
 
152
166
  Send a cookie with the request. The cookie `name=value` is then sent to the server.
153
167
  This parameter can be repeated as many times as needed.
154
168
 
155
- #### `-H header:value`
169
+ #### `-H`, `--header header:value`
156
170
 
157
171
  Send a custom header with the request. The line `header:value` is then sent to the server.
158
172
  This parameter can be repeated as many times as needed.
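
To make the renamed flags above concrete, here is a minimal programmatic sketch (not taken from the package docs; the URL, cookie and header values are placeholders) using the matching long option names:

```javascript
import {loadTest} from 'loadtest'

// Placeholder target and values; the option names mirror the long flags above.
const result = await loadTest({
  url: 'http://localhost:7357/',             // target under test (placeholder)
  maxSeconds: 10,                            // -t / --maxSeconds, the new version 8 default
  maxRequests: 1000,                         // -n / --maxRequests
  concurrency: 10,                           // -c / --concurrency, the new version 8 default
  cookies: ['session=abc123'],               // -C / --cookie, repeatable
  headers: {accept: 'text/plain;text/html'}, // -H / --header
})
result.show()
```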
@@ -172,19 +186,19 @@ Note: if you need to add a header with spaces, be sure to surround both header a
172
186
 
173
187
  $ loadtest -H "Authorization: Basic xxx=="
174
188
 
175
- #### `-T content-type`
189
+ #### `-T`, `--contentType`
176
190
 
177
191
  Set the MIME content type for POST data. Default: `text/plain`.
178
192
 
179
- #### `-P POST-body`
193
+ #### `-P`, `--postBody`
180
194
 
181
195
  Send the string as the POST body. E.g.: `-P '{"key": "a9acf03f"}'`
182
196
 
183
- #### `-A PATCH-body`
197
+ #### `-A`, `--patchBody`
184
198
 
185
199
  Send the string as the PATCH body. E.g.: `-A '{"key": "a9acf03f"}'`
186
200
 
187
- #### `-m method`
201
+ #### `-m`, `--method`
188
202
 
189
203
  Set method that will be sent to the test URL.
190
204
  Accepts: `GET`, `POST`, `PUT`, `DELETE`, `PATCH`,
@@ -198,7 +212,7 @@ Requires setting the method with `-m` and the type with `-T`.
198
212
  Example: `--data '{"username": "test", "password": "test"}' -T 'application/x-www-form-urlencoded' -m POST`
199
213
 
200
214
 
201
- #### `-p POST-file`
215
+ #### `-p`, `--postFile`
202
216
 
203
217
  Send the data contained in the given file in the POST body.
204
218
  Remember to set `-T` to the correct content-type.
@@ -222,7 +236,7 @@ export default function request(requestId) {
222
236
 
223
237
  See sample file in `sample/post-file.js`, and test in `test/body-generator.js`.
224
238
 
225
- #### `-u PUT-file`
239
+ #### `-u`, `--putFile`
226
240
 
227
241
  Send the data contained in the given file as a PUT request.
228
242
  Remember to set `-T` to the correct content-type.
@@ -233,7 +247,7 @@ to provide the body of each request.
233
247
  This is useful if you want to generate request bodies dynamically and vary them for each request.
234
248
  For examples see above for `-p`.
235
249
 
236
- #### `-a PATCH-file`
250
+ #### `-a`, `--patchFile`
237
251
 
238
252
  Send the data contained in the given file as a PATCH request.
239
253
  Remember to set `-T` to the correct content-type.
@@ -244,12 +258,12 @@ to provide the body of each request.
244
258
  This is useful if you want to generate request bodies dynamically and vary them for each request.
245
259
  For examples see above for `-p`.
246
260
 
247
- ##### `-r recover`
261
+ ##### `-r`, `--recover`
248
262
 
249
263
  Recover from errors. Always active: loadtest does not stop on errors.
250
264
  After the tests are finished, if there were errors a report with all error codes will be shown.
251
265
 
252
- #### `-s secureProtocol`
266
+ #### `-s`, `--secureProtocol`
253
267
 
254
268
  The TLS/SSL method to use. (e.g. TLSv1_method)
255
269
 
@@ -257,7 +271,7 @@ Example:
257
271
 
258
272
  $ loadtest -n 1000 -s TLSv1_method https://www.example.com
259
273
 
260
- #### `-V version`
274
+ #### `-V`, `--version`
261
275
 
262
276
  Show version number and exit.
263
277
 
@@ -265,25 +279,17 @@ Show version number and exit.
265
279
 
266
280
  The following parameters are _not_ compatible with Apache ab.
267
281
 
268
- #### `--rps requestsPerSecond`
282
+ #### `--rps`, `--requestsPerSecond`
269
283
 
270
284
  Controls the number of requests per second that are sent.
271
285
  Cannot be fractional, e.g. `--rps 0.5`.
272
286
  In this mode each request is not sent as soon as the previous one is responded,
273
287
  but periodically even if previous requests have not been responded yet.
274
288
 
275
- Note: Concurrency doesn't affect the final number of requests per second,
276
- since rps will be shared by all the clients. E.g.:
277
-
278
- loadtest <url> -c 10 --rps 10
279
-
280
- will send a total of 10 rps to the given URL, from 10 different clients
281
- (each client will send 1 request per second).
282
-
283
- Beware: if concurrency is too low then it is possible that there will not be enough clients
284
- to send all of the rps, adjust it with `-c` if needed.
289
+ Note: the `--concurrency` option will be ignored if `--requestsPerSecond` is specified;
290
+ clients will be created on demand.
285
291
 
286
- Note: --rps is not supported for websockets.
292
+ Note: `--rps` is not supported for websockets.
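
As an illustration of the behaviour described above (concurrency ignored, clients created on demand), a minimal sketch using the programmatic option; the URL and rate are placeholders:

```javascript
import {loadTest} from 'loadtest'

// Placeholder values; with requestsPerSecond set, any concurrency setting is ignored
// and clients are created on demand.
const result = await loadTest({
  url: 'http://localhost:7357/',
  requestsPerSecond: 500, // equivalent to --rps 500
  maxSeconds: 20,
})
result.show()
```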
287
293
 
288
294
  #### `--cores number`
289
295
 
@@ -310,7 +316,7 @@ Setting this to 0 disables timeout (default).
310
316
  #### `-R requestGeneratorModule.js`
311
317
 
312
318
  Use a custom request generator function from an external file.
313
- See an example of a request generator module in [`--requestGenerator`](#requestGenerator) below.
319
+ See an example of a request generator module in [`requestGenerator`](doc/api.md#requestGenerator).
314
320
  Also see [`sample/request-generator.js`](sample/request-generator.js) for some sample code including a body
315
321
  (or [`sample/request-generator.ts`](sample/request-generator.ts) for ES6/TypeScript).
316
322
 
@@ -342,6 +348,16 @@ Sets the certificate for the http client to use. Must be used with `--key`.
342
348
 
343
349
  Sets the key for the http client to use. Must be used with `--cert`.
344
350
 
351
+ #### `--tcp` (experimental)
352
+
353
+ Option to use low level TCP sockets,
354
+ faster than the standard HTTP library.
355
+ Not all options are supported.
356
+
357
+ **Warning**: experimental option.
358
+ May not work with your test case.
359
+ See [TCP Sockets Performance](doc/tcp-sockets.md) for details.
360
+
345
361
  ### Test Server
346
362
 
347
363
  loadtest bundles a test server. To run it:
@@ -407,7 +423,7 @@ with concurrency 10 (only relevant results are shown):
407
423
  Requests per second: 368
408
424
  Total time: 44.503181166000005 s
409
425
 
410
- Percentage of the requests served within a certain time
426
+ Percentage of requests served within a certain time
411
427
  50% 4 ms
412
428
  90% 5 ms
413
429
  95% 6 ms
@@ -424,7 +440,7 @@ Now we will try a fixed rate of 1000 rps:
424
440
  Requests: 9546, requests per second: 1000, mean latency: 0 ms
425
441
  Requests: 14549, requests per second: 1000, mean latency: 20 ms
426
442
  ...
427
- Percentage of the requests served within a certain time
443
+ Percentage of requests served within a certain time
428
444
  50% 1 ms
429
445
  90% 2 ms
430
446
  95% 8 ms
@@ -455,7 +471,7 @@ Let us lower the rate to 500 rps:
455
471
  Requests per second: 488
456
472
  Total time: 20.002735398000002 s
457
473
 
458
- Percentage of the requests served within a certain time
474
+ Percentage of requests served within a certain time
459
475
  50% 1 ms
460
476
  90% 1 ms
461
477
  95% 1 ms
@@ -478,7 +494,7 @@ The result (with the same test server) is impressive:
478
494
  ...
479
495
  Requests per second: 4099
480
496
 
481
- Percentage of the requests served within a certain time
497
+ Percentage of requests served within a certain time
482
498
  50% 2 ms
483
499
  90% 3 ms
484
500
  95% 3 ms
@@ -491,7 +507,7 @@ Now we're talking! The steady rate also goes up to 2 krps:
491
507
  ...
492
508
  Requests per second: 1950
493
509
 
494
- Percentage of the requests served within a certain time
510
+ Percentage of requests served within a certain time
495
511
  50% 1 ms
496
512
  90% 2 ms
497
513
  95% 2 ms
@@ -571,6 +587,8 @@ see [doc/api.md](doc/api.md) for details.
571
587
  * `percent`: return error only for the given % of requests.
572
588
  * `logger(request, response)`: function to call after every request.
573
589
 
590
+ Returns a test server that you can `close()` when finished.
591
+
574
592
  ### Configuration file
575
593
 
576
594
  It is possible to put configuration options in a file named `.loadtestrc` in your working directory or in a file whose name is specified in the `loadtest` entry of your `package.json`. The options in the file will be used only if they are not specified in the command line.
package/bin/loadtest.js CHANGED
@@ -9,9 +9,9 @@ import {getHalfCores} from '../lib/cluster.js'
9
9
 
10
10
 
11
11
  const options = stdio.getopt({
12
+ maxSeconds: {key: 't', args: 1, description: 'Max time in seconds to wait for responses, default 10'},
12
13
  maxRequests: {key: 'n', args: 1, description: 'Number of requests to perform'},
13
- concurrency: {key: 'c', args: 1, description: 'Number of requests to make'},
14
- maxSeconds: {key: 't', args: 1, description: 'Max time in seconds to wait for responses'},
14
+ concurrency: {key: 'c', args: 1, description: 'Number of concurrent requests, default 10'},
15
15
  timeout: {key: 'd', args: 1, description: 'Timeout for each request in milliseconds'},
16
16
  contentType: {key: 'T', args: 1, description: 'MIME type for the body'},
17
17
  cookies: {key: 'C', multiple: true, description: 'Send a cookie as name=value'},
@@ -36,6 +36,7 @@ const options = stdio.getopt({
36
36
  cert: {args: 1, description: 'The client certificate to use'},
37
37
  quiet: {description: 'Do not log any messages'},
38
38
  cores: {args: 1, description: 'Number of cores to use', default: getHalfCores()},
39
+ tcp: {description: 'Use TCP sockets (experimental)'},
39
40
  agent: {description: 'Use a keep-alive http agent (deprecated)'},
40
41
  debug: {description: 'Show debug messages (deprecated)'},
41
42
  });
@@ -0,0 +1,21 @@
1
+ import {loadTest, startServer} from '../index.js'
2
+
3
+ const port = 7359;
4
+ const serverOptions = {port}
5
+
6
+
7
+ async function runTcpPerformanceTest() {
8
+ const server = await startServer(serverOptions)
9
+ const options = {
10
+ url: `http://localhost:${port}`,
11
+ method: 'GET',
12
+ tcp: true,
13
+ };
14
+ const result = await loadTest(options)
15
+ await server.close()
16
+ console.log(`Requests received: ${server.totalRequests}`)
17
+ result.show()
18
+ }
19
+
20
+ await runTcpPerformanceTest()
21
+
package/bin/testserver.js CHANGED
@@ -5,49 +5,51 @@ import {startServer} from '../lib/testserver.js'
5
5
  import {loadConfig} from '../lib/config.js'
6
6
  import {getHalfCores, runTask} from '../lib/cluster.js'
7
7
 
8
+ const configuration = loadConfig()
8
9
  const options = readOptions()
9
10
  start(options)
10
11
 
11
12
 
12
13
  function readOptions() {
13
14
  const options = stdio.getopt({
15
+ port: {key: 'p', args: 1, description: 'Port for the server'},
14
16
  delay: {key: 'd', args: 1, description: 'Delay the response for the given milliseconds'},
15
17
  error: {key: 'e', args: 1, description: 'Return an HTTP error code'},
16
- percent: {key: 'p', args: 1, description: 'Return an error (default 500) only for some % of requests'},
17
- cores: {key: 'c', args: 1, description: 'Number of cores to use, default is half the total', default: getHalfCores()}
18
+ percent: {key: 'P', args: 1, description: 'Return an error (default 500) only for some % of requests'},
19
+ cores: {key: 'c', args: 1, description: 'Number of cores to use, default is half the total', default: getHalfCores()},
20
+ body: {key: 'b', args: 1, description: 'Body to return, default "OK"'},
21
+ file: {key: 'f', args: 1, description: 'File to read and return as body'},
18
22
  });
19
- const configuration = loadConfig()
20
23
  if (options.args && options.args.length == 1) {
21
- options.port = parseInt(options.args[0], 10);
22
- if (!options.port) {
23
- console.error('Invalid port');
24
- options.printHelp();
25
- process.exit(1);
26
- }
24
+ options.port = options.port || options.args[0]
27
25
  }
28
- if(options.delay) {
29
- if(isNaN(options.delay)) {
30
- console.error('Invalid delay');
31
- options.printHelp();
32
- process.exit(1);
33
- }
34
- options.delay = parseInt(options.delay, 10);
26
+ return {
27
+ port: readInt(options, 'port'),
28
+ delay: readInt(options, 'delay'),
29
+ error: readInt(options, 'error'),
30
+ percent: readInt(options, 'percent'),
31
+ cores: readInt(options, 'cores'),
32
+ body: readString(options, 'body'),
33
+ file: readString(options, 'file'),
35
34
  }
35
+ }
36
36
 
37
- if(!options.delay) {
38
- options.delay = configuration.delay
39
- }
40
- if(!options.error) {
41
- options.error = configuration.error
42
- }
43
- if(!options.percent) {
44
- options.percent = configuration.percent
37
+ function readString(options, key) {
38
+ return options[key] || configuration[key]
39
+ }
40
+
41
+ function readInt(options, key) {
42
+ if (options[key] && isNaN(options[key])) {
43
+ console.error(`Invalid ${key}`);
44
+ options.printHelp();
45
+ process.exit(1);
45
46
  }
46
- return options
47
+ const value = readString(options, key)
48
+ return parseInt(value) || undefined
47
49
  }
48
50
 
49
51
  function start(options) {
50
- runTask(options.cores, async () => await startServer(options))
52
+ runTask(options.cores, async () => {await startServer(options)})
51
53
  }
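
The flags above are passed straight through to `startServer`; a minimal programmatic sketch of the same server follows (the port is a placeholder, and the new `body` option is assumed to pass through like the other flags):

```javascript
import {startServer} from 'loadtest'

const server = await startServer({
  port: 7357,  // placeholder port
  delay: 10,   // delay every response by 10 ms
  error: 500,  // return HTTP 500...
  percent: 10, // ...for 10% of requests
  body: 'OK',  // assumed: returned as the response body, like the -b flag
})
// run load tests against http://localhost:7357/ here
await server.close()
```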
52
54
 
53
55
 
package/doc/api.md CHANGED
@@ -37,7 +37,7 @@ result.show()
37
37
  console.log('Tests run successfully')
38
38
  ```
39
39
 
40
- The call returns a `Result` object that contains all info about the load test, also described below.
40
+ The call returns a `Result` object that contains all info about the load test, also described [below](#result).
41
41
  Call `result.show()` to display the results in the standard format on the console.
42
42
 
43
43
  As a legacy from before promises existed,
@@ -61,46 +61,6 @@ loadTest(options, function(error, result) {
61
61
  })
62
62
  ```
63
63
 
64
-
65
- Beware: if there are no `maxRequests` and no `maxSeconds`, then tests will run forever
66
- and will not call the callback.
67
-
68
- ### Result
69
-
70
- The latency result returned at the end of the load test contains a full set of data, including:
71
- mean latency, number of errors and percentiles.
72
- A simplified example follows:
73
-
74
- ```javascript
75
- {
76
- url: 'http://localhost:80/',
77
- maxRequests: 1000,
78
- maxSeconds: 0,
79
- concurrency: 10,
80
- agent: 'none',
81
- requestsPerSecond: undefined,
82
- totalRequests: 1000,
83
- percentiles: {
84
- '50': 7,
85
- '90': 10,
86
- '95': 11,
87
- '99': 15
88
- },
89
- effectiveRps: 2824,
90
- elapsedSeconds: 0.354108,
91
- meanLatencyMs: 7.72,
92
- maxLatencyMs: 20,
93
- totalErrors: 3,
94
- errorCodes: {
95
- '0': 1,
96
- '500': 2
97
- },
98
- }
99
- ```
100
-
101
- The `result` object also has a `result.show()` function
102
- that displays the results on the console in the standard format.
103
-
104
64
  ### Options
105
65
 
106
66
  All options but `url` are, as their name implies, optional.
@@ -110,23 +70,34 @@ See also the [simplified list](../README.md#loadtest-parameters).
110
70
 
111
71
  The URL to invoke. Mandatory.
112
72
 
113
- #### `concurrency`
73
+ #### `maxSeconds`
114
74
 
115
- How many clients to start in parallel.
75
+ Max number of seconds to run the tests.
76
+ Default is 10 seconds; it applies only if no `maxRequests` is specified.
77
+
78
+ Note: after the given number of seconds `loadtest` will stop sending requests,
79
+ but may continue receiving responses afterwards.
80
+
81
+ **Warning**: max seconds used to have no default value,
82
+ so tests would run indefinitely if no `maxSeconds` and no `maxRequests` were specified.
83
+ Max seconds was changed to default to 10 in version 8.
116
84
 
117
85
  #### `maxRequests`
118
86
 
119
87
  A max number of requests; after they are reached the test will end.
88
+ Default is no limit;
89
+ will keep on sending until the time limit in `maxSeconds` is reached.
120
90
 
121
91
  Note: the actual number of requests sent can be bigger if there is a concurrency level;
122
92
  loadtest will report just on the max number of requests.
123
93
 
124
- #### `maxSeconds`
94
+ #### `concurrency`
125
95
 
126
- Max number of seconds to run the tests.
96
+ How many clients to start in parallel; default is 10.
97
+ Does not apply if `requestsPerSecond` is specified.
127
98
 
128
- Note: after the given number of seconds `loadtest` will stop sending requests,
129
- but may continue receiving tests afterwards.
99
+ **Warning**: concurrency used to have a default value of 1,
100
+ until it was changed to 10 in version 8.
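
For code that relied on the old defaults, a minimal sketch (placeholder URL) that pins the version 7 behaviour explicitly:

```javascript
import {loadTest} from 'loadtest'

const result = await loadTest({
  url: 'http://localhost:7357/', // placeholder
  maxRequests: 1000, // with maxRequests set, the new 10-second default does not apply
  concurrency: 1,    // the default before version 8
})
result.show()
```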
130
101
 
131
102
  #### `timeout`
132
103
 
@@ -134,7 +105,7 @@ Timeout for each generated request in milliseconds. Setting this to 0 disables t
134
105
 
135
106
  #### `cookies`
136
107
 
137
- An array of cookies to send. Each cookie should be a string of the form name=value.
108
+ An array of cookies to send. Each cookie should be a string of the form `name=value`.
138
109
 
139
110
  #### `headers`
140
111
 
@@ -146,9 +117,6 @@ like this:
146
117
  accept: "text/plain;text/html"
147
118
  }
148
119
 
149
- Note: when using the API, the "host" header is not inferred from the URL but needs to be sent
150
- explicitly.
151
-
152
120
  #### `method`
153
121
 
154
122
  The method to use: POST, PUT. Default: GET.
@@ -317,6 +285,86 @@ function contentInspector(result) {
317
285
  }
318
286
  },
319
287
  ```
288
+
289
+ #### `tcp`
290
+
291
+ If true, use low-level TCP sockets.
292
+ Faster option that can increase performance by up to 10x,
293
+ especially in local test setups.
294
+
295
+ **Warning**: Experimental option.
296
+ May not work for your test case.
297
+ Not compatible with options `indexParam`, `statusCallback`, `requestGenerator`.
298
+ See [TCP Sockets Performance](tcp-sockets.md) for details.
299
+
300
+ ### Result
301
+
302
+ The latency result returned at the end of the load test contains a full set of data, including:
303
+ mean latency, number of errors and percentiles.
304
+ A simplified example follows:
305
+
306
+ ```javascript
307
+ {
308
+ url: 'http://localhost:80/',
309
+ maxRequests: 1000,
310
+ maxSeconds: 0,
311
+ concurrency: 10,
312
+ agent: 'none',
313
+ requestsPerSecond: undefined,
314
+ totalRequests: 1000,
315
+ percentiles: {
316
+ '50': 7,
317
+ '90': 10,
318
+ '95': 11,
319
+ '99': 15
320
+ },
321
+ effectiveRps: 2824,
322
+ elapsedSeconds: 0.354108,
323
+ meanLatencyMs: 7.72,
324
+ maxLatencyMs: 20,
325
+ totalErrors: 3,
326
+ clients: 10,
327
+ errorCodes: {
328
+ '0': 1,
329
+ '500': 2
330
+ },
331
+ }
332
+ ```
333
+
334
+ The `result` object also has a `result.show()` function
335
+ that displays the results on the console in the standard format.
336
+
337
+ Some of the attributes (`url`, `concurrency`) will be identical to the parameters passed.
338
+ The following attributes can also be returned.
339
+
340
+ #### `totalRequests`
341
+
342
+ How many requests were actually processed.
343
+
344
+ #### `totalRequests`
345
+
346
+ How many requests resulted in an error.
347
+
348
+ #### `effectiveRps`
349
+
350
+ How many requests per second were actually processed.
351
+
352
+ #### `elapsedSeconds`
353
+
354
+ How many seconds the test lasted.
355
+
356
+ #### `meanLatencyMs`
357
+
358
+ Average latency in milliseconds.
359
+
360
+ #### `errorCodes`
361
+
362
+ Object containing a map with all error codes received and how many times each occurred.
363
+
364
+ #### `clients`
365
+
366
+ Number of concurrent clients started.
367
+ Should equal the concurrency level unless the `requestsPerSecond` option is specified.
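
As a usage sketch for these attributes (thresholds and URL are arbitrary placeholders), the result can gate a deployment as suggested in the README:

```javascript
import {loadTest} from 'loadtest'

const result = await loadTest({url: 'http://localhost:7357/', maxRequests: 1000})
result.show()
// Example gate: abort if the 99th percentile is above 10 ms or any request failed.
if (result.percentiles['99'] > 10 || result.totalErrors > 0) {
  console.error(`Load test failed: p99=${result.percentiles['99']} ms, errors=${result.totalErrors}`)
  process.exit(1)
}
```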
320
368
 
321
369
  ### Start Test Server
322
370
 
@@ -330,7 +378,7 @@ await server.close()
330
378
  ```
331
379
 
332
380
  This function returns when the server is up and running,
333
- with an HTTP server which can be `close()`d when it is no longer useful.
381
+ with a server object which can be `close()`d when it is no longer useful.
334
382
  As a legacy from before promises existed,
335
383
  if an optional callback is passed as second parameter then it will not behave as `async`:
336
384
 
@@ -338,6 +386,9 @@ if an optional callback is passed as second parameter then it will not behave as
338
386
  const server = startServer({port: 8000}, error => console.error(error))
339
387
  ```
340
388
 
389
+ **Warning**: up until version 7 this function returned an HTTP server;
390
+ this was changed to a test server object with an identical `close()` method.
391
+
341
392
  The following options are available.
342
393
 
343
394
  #### `port`