@mimik/sumologic-winston-logger 2.1.9 → 2.1.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -11,9 +11,9 @@ The following environment variables are used to configure the logger:
  | LOG_LEVEL | Log level for the running instance | debug | |
  | CONSOLE_LEVEL | Console log level | debug | |
  | FILTER_FILE | Filename containing filter rules | null | |
- | EXIT_DELAY | Delay before exiting gracefully | 2000 | in millisecond |
- | NO_STACK | Whether to include call stacks in all log messages | no | expected: yes/no |
- | LOG_MODE | Comma-separated list defining the log mode/backends | sumologic | enum: awsS3, awsKinesis, sumologic, all, none |
+ | EXIT_DELAY | Delay for flushing the transports and exiting | 2000 | in millisecond |
+ | NO_STACK | Whether to include call stacks in all log messages | yes | expected: yes/no |
+ | LOG_MODE | Comma-separated list defining the log mode/backends | none | enum: awsS3, awsKinesis, sumologic, all, none |

  If `LOG_MODE` includes `sumologic`, the following environment variables are required:

@@ -30,14 +30,14 @@ If `LOG_MODE` includes `awsKinesis`, the following environment variables are req
  | KINESIS_AWS_STREAM_NAME_ERROR | Stream name for `error` level logs | | |
  | KINESIS_AWS_STREAM_NAME_OTHER | Stream name for any other level | | |
  | KINESIS_AWS_REGION | AWS region of the stream | | |
- | KINESIS_AWS_TIMEOUT | Maximum time before flushing | 1000 | in millisecond |
- | KINESIS_AWS_MAX_SIZE | Maximum accumulated log size before flushing | 5 | in MB |
- | KINESIS_AWS_MAX_EVENTS | Maximum number of accumulated logs before flushing | 1000 | |
- | KINESIS_AWS_MAX_RETRIES | Maximum connection retries | 4 | |
+ | KINESIS_AWS_TIMEOUT | Max time before sending events to Kinesis | 5 | in minute |
+ | KINESIS_AWS_MAX_SIZE | Max size of the data before sending to Kinesis | 5 | in MB |
+ | KINESIS_AWS_MAX_EVENTS | Max number of events before sending to Kinesis | 1000 | |
+ | KINESIS_AWS_MAX_RETRIES | Max retries to connect to Kinesis | 4 | |
  | KINESIS_AWS_ACCESS_KEY_ID | AWS access key ID | | |
  | KINESIS_AWS_SECRET_ACCESS_KEY | AWS secret access key | | |
- | KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT | HTTP handler socket timeout | 5000 | in millisecond |
- | KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT | HTTP handler connection timeout | 5000 | in millisecond |
+ | KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT | Socket timeout for the http handler | 5000 | in millisecond |
+ | KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT | Connection timeout for the http handler | 5000 | in millisecond |

  If `LOG_MODE` includes `awsS3`, the following environment variables are required:

@@ -45,9 +45,9 @@ If `LOG_MODE` includes `awsS3`, the following environment variables are required
  | ----------------- | ----------- | ------- | -------- |
  | S3_AWS_BUCKET_NAME | S3 bucket name for storing logs | | |
  | S3_AWS_REGION | AWS region of the bucket | | |
- | S3_AWS_TIMEOUT | Maximum time before flushing | 5 | in minute |
- | S3_AWS_MAX_SIZE | Maximum accumulated log size before flushing | 5 | in MB |
- | S3_AWS_MAX_EVENTS | Maximum number of accumulated logs before flushing | 1000 | |
+ | S3_AWS_TIMEOUT | Max time before sending events to S3 | 5 | in minute |
+ | S3_AWS_MAX_SIZE | Max size of the data before sending to S3 | 5 | in MB |
+ | S3_AWS_MAX_EVENTS | Max number of events before sending to S3 | 1000 | |
  | S3_AWS_ACCESS_KEY_ID | AWS access key ID | | |
  | S3_AWS_SECRET_ACCESS_KEY | AWS secret access key | | |

@@ -66,10 +66,9 @@ If `global.serverId` is set, it overrides `SERVER_ID`.

  Sumologic-Winston-Logger is a log wrapper that can write to multiple logging services.
  Currently, Winston, SumoLogic, AWS Kinesis and AWS S3 are supported.
- The package also adds strackTrace info.
- StackTrace information gets tacked on to Sumologic calls, In addition and method, module, and line info or
- concatenated is added to the end of winston lines. **Line formatting is not currently configrable.**
- The package also allow some level of filtering.
+ The package also adds stackTrace info.
+ StackTrace information is included in all log entries. For error-level logs, the full stack trace is added. For other log levels, method name, file name, and line number are appended. **Line formatting is not currently configurable.**
+ The package also allows some level of filtering.

  ## Motivation ##
  To centralize logging in a single npm node package
@@ -80,12 +79,12 @@ The logger is discoverable on npmjs.org.

  To install:
  ```
- npm install sumologic-winston-logger --save
+ npm install @mimik/sumologic-winston-logger --save
  ```

  ## Configuration - Details ##

- The following sections decribe the details of each of the environment variables listed above.
+ The following sections describe the details of each of the environment variables listed above.

  ### `LOG_LEVEL` ###
  Log levels supported inclusively according to the following list, as borrowed from winston:
@@ -106,14 +105,14 @@ Thus, if one declares a log level of silly, the levels error, warn, info, debug,
  ### `SUMO_LOGIC_ENDPOINT` ###
  The endpoint defined in SumoLogic to where log information will be sent. See the section, _Finding SumoLogic endpoint and collector code_, below to find the value to assign to this environment variable.

- |Example to set the environment varible `SUMO_LOGIC_ENDPOINT`|
+ |Example to set the environment variable `SUMO_LOGIC_ENDPOINT`|
  |---|
  |`export SUMO_LOGIC_ENDPOINT=https://endpoint1.collection.us2.sumologic.com/receiver/v1/http/`|

  ### `SUMO_LOGIC_COLLECTOR_CODE` ###
  The collector code as defined in SumoLogic. The Collector Code is the Base64 string that appears after the last slash in the URL defined in the SumoLogic Control Panel.

- |Example to set the environment varible `SUMO_LOGIC_COLLECTOR_CODE`|
+ |Example to set the environment variable `SUMO_LOGIC_COLLECTOR_CODE`|
  |---|
  |`export SUMO_LOGIC_COLLECTOR_CODE=AhfheisdcOllectorCodeInSumoLogicuZw==`|

@@ -145,20 +144,20 @@ The collector code is the Base64 encrypted part of the URL that follows the last
  ### `FILTER_FILE` ###

  FILTER_FILE is the location where the definition of the filtering configuration is. The location has to be a full file name.
- When the environment (NODE_ENV) in which the logger is used is `prod` or `production`, the content of the log will be filtered according to the log filtering configuration included in the file refered by the FILTER_FILE valiable.
+ When the environment (NODE_ENV) in which the logger is used is `prod` or `production`, the content of the log will be filtered according to the log filtering configuration included in the file referred by the FILTER_FILE variable.
  The filter will replace values of the designated property names to '-----'.

  ## Sample Use ##

- The intent of this package is to support all the behavior common to console.log while also including support of multiple arguements and all data types.
+ The intent of this package is to support all the behavior common to console.log while also including support of multiple arguments and all data types.

  Formatting console logs is left to winston, except that some stackTrace info is appended to each line.

  Formatting of SumoLogic logs is handled by this module in the following ways:

- * only the first is use for message
+ * only the first argument is used as the message
  * only one object can be passed as parameter
- * structured stackTrace info is added to every log except is `NO_STACK` is set the 'yes'
+ * structured stackTrace info is added to every log except when NO_STACK is set to 'yes'
  * if the last parameter is a string it will be considered as a `correlationId`


@@ -166,7 +165,7 @@ Formatting of SumoLogic logs is handled by this module in the following ways:
  Listing 2 below show you how to declare a logger to run under ECMAScript 6 and then log using the various log levels supported by the Sumologic-Winston-Logger.

  ``` javascript
- import logger from 'sumologic-winston-logger';
+ import logger from '@mimik/sumologic-winston-logger';

  logger.log('debug', 'this is a debug statement using log');
  logger.debug({ message: 'this is a debug statement using an object'});
@@ -3,10 +3,9 @@

  Sumologic-Winston-Logger is a log wrapper that can write to multiple logging services.
  Currently, Winston, SumoLogic, AWS Kinesis and AWS S3 are supported.
- The package also adds strackTrace info.
- StackTrace information gets tacked on to Sumologic calls, In addition and method, module, and line info or
- concatenated is added to the end of winston lines. **Line formatting is not currently configrable.**
- The package also allow some level of filtering.
+ The package also adds stackTrace info.
+ StackTrace information is included in all log entries. For error-level logs, the full stack trace is added. For other log levels, method name, file name, and line number are appended. **Line formatting is not currently configurable.**
+ The package also allows some level of filtering.

  ## Motivation ##
  To centralize logging in a single npm node package
@@ -17,12 +16,12 @@ The logger is discoverable on npmjs.org.

  To install:
  ```
- npm install sumologic-winston-logger --save
+ npm install @mimik/sumologic-winston-logger --save
  ```

  ## Configuration - Details ##

- The following sections decribe the details of each of the environment variables listed above.
+ The following sections describe the details of each of the environment variables listed above.

  ### `LOG_LEVEL` ###
  Log levels supported inclusively according to the following list, as borrowed from winston:
@@ -43,14 +42,14 @@ Thus, if one declares a log level of silly, the levels error, warn, info, debug,
  ### `SUMO_LOGIC_ENDPOINT` ###
  The endpoint defined in SumoLogic to where log information will be sent. See the section, _Finding SumoLogic endpoint and collector code_, below to find the value to assign to this environment variable.

- |Example to set the environment varible `SUMO_LOGIC_ENDPOINT`|
+ |Example to set the environment variable `SUMO_LOGIC_ENDPOINT`|
  |---|
  |`export SUMO_LOGIC_ENDPOINT=https://endpoint1.collection.us2.sumologic.com/receiver/v1/http/`|

  ### `SUMO_LOGIC_COLLECTOR_CODE` ###
  The collector code as defined in SumoLogic. The Collector Code is the Base64 string that appears after the last slash in the URL defined in the SumoLogic Control Panel.

- |Example to set the environment varible `SUMO_LOGIC_COLLECTOR_CODE`|
+ |Example to set the environment variable `SUMO_LOGIC_COLLECTOR_CODE`|
  |---|
  |`export SUMO_LOGIC_COLLECTOR_CODE=AhfheisdcOllectorCodeInSumoLogicuZw==`|

@@ -82,20 +81,20 @@ The collector code is the Base64 encrypted part of the URL that follows the last
  ### `FILTER_FILE` ###

  FILTER_FILE is the location where the definition of the filtering configuration is. The location has to be a full file name.
- When the environment (NODE_ENV) in which the logger is used is `prod` or `production`, the content of the log will be filtered according to the log filtering configuration included in the file refered by the FILTER_FILE valiable.
+ When the environment (NODE_ENV) in which the logger is used is `prod` or `production`, the content of the log will be filtered according to the log filtering configuration included in the file referred by the FILTER_FILE variable.
  The filter will replace values of the designated property names to '-----'.

  ## Sample Use ##

- The intent of this package is to support all the behavior common to console.log while also including support of multiple arguements and all data types.
+ The intent of this package is to support all the behavior common to console.log while also including support of multiple arguments and all data types.

  Formatting console logs is left to winston, except that some stackTrace info is appended to each line.

  Formatting of SumoLogic logs is handled by this module in the following ways:

- * only the first is use for message
+ * only the first argument is used as the message
  * only one object can be passed as parameter
- * structured stackTrace info is added to every log except is `NO_STACK` is set the 'yes'
+ * structured stackTrace info is added to every log except when NO_STACK is set to 'yes'
  * if the last parameter is a string it will be considered as a `correlationId`


@@ -103,7 +102,7 @@ Formatting of SumoLogic logs is handled by this module in the following ways:
  Listing 2 below show you how to declare a logger to run under ECMAScript 6 and then log using the various log levels supported by the Sumologic-Winston-Logger.

  ``` javascript
- import logger from 'sumologic-winston-logger';
+ import logger from '@mimik/sumologic-winston-logger';

  logger.log('debug', 'this is a debug statement using log');
  logger.debug({ message: 'this is a debug statement using an object'});
@@ -1,19 +1,16 @@
- /* eslint no-process-env: "off" */
+ /* eslint-disable no-process-env, no-magic-numbers */
  import {
    ALL_MODE,
    ALL_MODES,
    AWS_KINESIS,
    AWS_S3,
-   DEFAULT,
+   DEBUG_LEVEL,
+   ENV_LOCAL,
    NONE_MODE,
    SUMOLOGIC,
  } from '../lib/common.js';
- import difference from 'lodash.difference';
- import isNil from 'lodash.isnil';
- import isUndefined from 'lodash.isundefined';
  import process from 'node:process';
  import { readFileSync } from 'node:fs';
- import split from 'lodash.split';

  const DECIMAL = 10;

@@ -33,9 +30,9 @@ const DECIMAL = 10;
  * | LOG_LEVEL | Log level for the running instance | debug | |
  * | CONSOLE_LEVEL | Console log level | debug | |
  * | FILTER_FILE | Filename containing filter rules | null | |
- * | EXIT_DELAY | Delay before exiting gracefully | 2000 | in millisecond |
- * | NO_STACK | Whether to include call stacks in all log messages | no | expected: yes/no |
- * | LOG_MODE | Comma-separated list defining the log mode/backends | sumologic | enum: awsS3, awsKinesis, sumologic, all, none |
+ * | EXIT_DELAY | Delay for flushing the transports and exiting | 2000 | in millisecond |
+ * | NO_STACK | Whether to include call stacks in all log messages | yes | expected: yes/no |
+ * | LOG_MODE | Comma-separated list defining the log mode/backends | none | enum: awsS3, awsKinesis, sumologic, all, none |
  *
  * If `LOG_MODE` includes `sumologic`, the following environment variables are required:
  *
@@ -52,14 +49,14 @@ const DECIMAL = 10;
  * | KINESIS_AWS_STREAM_NAME_ERROR | Stream name for `error` level logs | | |
  * | KINESIS_AWS_STREAM_NAME_OTHER | Stream name for any other level | | |
  * | KINESIS_AWS_REGION | AWS region of the stream | | |
- * | KINESIS_AWS_TIMEOUT | Maximum time before flushing | 1000 | in millisecond |
- * | KINESIS_AWS_MAX_SIZE | Maximum accumulated log size before flushing | 5 | in MB |
- * | KINESIS_AWS_MAX_EVENTS | Maximum number of accumulated logs before flushing | 1000 | |
- * | KINESIS_AWS_MAX_RETRIES | Maximum connection retries | 4 | |
+ * | KINESIS_AWS_TIMEOUT | Max time before sending events to Kinesis | 5 | in minute |
+ * | KINESIS_AWS_MAX_SIZE | Max size of the data before sending to Kinesis | 5 | in MB |
+ * | KINESIS_AWS_MAX_EVENTS | Max number of events before sending to Kinesis | 1000 | |
+ * | KINESIS_AWS_MAX_RETRIES | Max retries to connect to Kinesis | 4 | |
  * | KINESIS_AWS_ACCESS_KEY_ID | AWS access key ID | | |
  * | KINESIS_AWS_SECRET_ACCESS_KEY | AWS secret access key | | |
- * | KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT | HTTP handler socket timeout | 5000 | in millisecond |
- * | KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT | HTTP handler connection timeout | 5000 | in millisecond |
+ * | KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT | Socket timeout for the http handler | 5000 | in millisecond |
+ * | KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT | Connection timeout for the http handler | 5000 | in millisecond |
  *
  * If `LOG_MODE` includes `awsS3`, the following environment variables are required:
  *
@@ -67,9 +64,9 @@ const DECIMAL = 10;
  * | ----------------- | ----------- | ------- | -------- |
  * | S3_AWS_BUCKET_NAME | S3 bucket name for storing logs | | |
  * | S3_AWS_REGION | AWS region of the bucket | | |
- * | S3_AWS_TIMEOUT | Maximum time before flushing | 5 | in minute |
- * | S3_AWS_MAX_SIZE | Maximum accumulated log size before flushing | 5 | in MB |
- * | S3_AWS_MAX_EVENTS | Maximum number of accumulated logs before flushing | 1000 | |
+ * | S3_AWS_TIMEOUT | Max time before sending events to S3 | 5 | in minute |
+ * | S3_AWS_MAX_SIZE | Max size of the data before sending to S3 | 5 | in MB |
+ * | S3_AWS_MAX_EVENTS | Max number of events before sending to S3 | 1000 | |
  * | S3_AWS_ACCESS_KEY_ID | AWS access key ID | | |
  * | S3_AWS_SECRET_ACCESS_KEY | AWS secret access key | | |
  *
@@ -90,12 +87,12 @@ const checkConfig = (config) => {
      if (typeof node[prop] === 'object' && node[prop]) {
        traverseNodeSync(node[prop], `${path}.${prop}`);
      }
-     else if (isUndefined(node[prop])) errs.push(`${path}.${prop}`);
+     else if (node[prop] === undefined) errs.push(`${path}.${prop}`);
    });
  };

  traverseNodeSync(config, 'configuration');
- if (errs.length > 1) {
+ if (errs.length > 0) {
    throw new Error(`Missing values for ${errs}`);
  }
};
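The hunk above fixes a real off-by-one: `errs.length > 1` silently accepted a config with exactly one missing value. A minimal stand-alone sketch of the validation walk (simplified, not the package's actual export):

```javascript
// Collect every leaf path whose value is undefined, then let the caller fail
// if ANY path is missing. The old `errs.length > 1` check let a single
// missing value slip through.
const findMissing = (config) => {
  const errs = [];
  const walk = (node, path) => {
    Object.keys(node).forEach((prop) => {
      // typeof null === 'object', so the truthiness guard skips null leaves
      if (typeof node[prop] === 'object' && node[prop]) walk(node[prop], `${path}.${prop}`);
      else if (node[prop] === undefined) errs.push(`${path}.${prop}`);
    });
  };
  walk(config, 'configuration');
  return errs;
};
```

Note the `&& node[prop]` guard: without it, `null` leaves would be recursed into because `typeof null` is `'object'`.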
@@ -104,9 +101,9 @@ const checkMode = (mode) => {
  let logMode = null;

  if (mode) {
-   logMode = split(mode.trim(), /\s*,\s*/u);
+   logMode = mode.trim().split(/\s*,\s*/u);
    if (logMode.length === 0) throw new Error('Invalid LOG_MODE: cannot be an empty array');
-   if (difference(logMode, ALL_MODES).length !== 0) throw new Error(`Invalid items in LOG_MODE: ${mode}`);
+   if (logMode.some(item => !ALL_MODES.includes(item))) throw new Error(`Invalid items in LOG_MODE: ${mode}`);
    if (logMode.includes(NONE_MODE) && logMode.length !== 1) throw new Error(`Cannot have multiple modes when ${NONE_MODE} is selected`);
    if (logMode.includes(ALL_MODE)) logMode = [SUMOLOGIC, AWS_S3]; // legacy support
  }
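The lodash-free LOG_MODE parsing above can be sketched in isolation. A minimal version, assuming the same allow-list (the names here are hypothetical stand-ins, not the package's exports):

```javascript
// Split a comma-separated LOG_MODE string and validate each entry against an
// allow-list, using only built-ins (no lodash.split / lodash.difference).
const ALL_MODES = ['awsS3', 'awsKinesis', 'sumologic', 'all', 'none'];

const parseLogMode = (mode) => {
  if (!mode) return null;
  // /\s*,\s*/u tolerates spaces around commas: 'sumologic , awsS3' → 2 items
  const logMode = mode.trim().split(/\s*,\s*/u);
  // Array.prototype.some replaces lodash.difference for the membership check
  if (logMode.some(item => !ALL_MODES.includes(item))) {
    throw new Error(`Invalid items in LOG_MODE: ${mode}`);
  }
  if (logMode.includes('none') && logMode.length !== 1) {
    throw new Error('Cannot have multiple modes when none is selected');
  }
  return logMode;
};
```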
@@ -118,18 +115,18 @@ const configuration = {
    type: process.env.SERVER_TYPE || null,
    id: process.env.SERVER_ID || null,
  },
- env: process.env.NODE_ENV || DEFAULT.ENV,
+ env: process.env.NODE_ENV || ENV_LOCAL,
  level: {
-   log: process.env.LOG_LEVEL || DEFAULT.LEVEL,
-   console: process.env.CONSOLE_LEVEL || DEFAULT.LEVEL,
+   log: process.env.LOG_LEVEL || DEBUG_LEVEL,
+   console: process.env.CONSOLE_LEVEL || DEBUG_LEVEL,
  },
  filter: {
-   file: process.env.FILTER_FILE || DEFAULT.FILTER_FILE,
+   file: process.env.FILTER_FILE || null,
  },
- exitDelay: parseInt(process.env.EXIT_DELAY, DECIMAL) || DEFAULT.EXIT_DELAY, // in millisecond
- noStack: process.env.NO_STACK || DEFAULT.NO_STACK,
+ exitDelay: parseInt(process.env.EXIT_DELAY, DECIMAL) || 2000, // in millisecond
+ noStack: process.env.NO_STACK || 'yes',
};
- configuration.mode = checkMode(process.env.LOG_MODE) || DEFAULT.MODE;
+ configuration.mode = checkMode(process.env.LOG_MODE) || [NONE_MODE];

if (configuration.mode.includes(SUMOLOGIC)) {
  configuration[SUMOLOGIC] = {
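The inlined numeric defaults above rely on `parseInt` returning `NaN` for an unset env var, and `NaN` being falsy. A quick sketch of that pattern (the helper is hypothetical, not part of the package):

```javascript
// parseInt(undefined, 10) is NaN, which is falsy, so || picks the default.
// Caveat of this idiom: an explicit "0" is also falsy and falls back too.
const withDefault = (raw, fallback) => parseInt(raw, 10) || fallback;

const unset = withDefault(undefined, 2000);  // NaN || 2000
const explicit = withDefault('500', 2000);   // 500
const zero = withDefault('0', 2000);         // 0 is falsy, falls back
```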
@@ -143,30 +140,30 @@ if (configuration.mode.includes(AWS_KINESIS)) {
    streamNameError: process.env.KINESIS_AWS_STREAM_NAME_ERROR,
    streamNameOther: process.env.KINESIS_AWS_STREAM_NAME_OTHER,
    region: process.env.KINESIS_AWS_REGION,
-   timeout: parseInt(process.env.KINESIS_AWS_TIMEOUT, DECIMAL) || DEFAULT.KINESIS_TIMEOUT, // in millisecond
-   maxSize: parseInt(process.env.KINESIS_AWS_MAX_SIZE, DECIMAL) || DEFAULT.KINESIS_MAX_SIZE, // in mByte
-   maxEvents: parseInt(process.env.KINESIS_AWS_MAX_EVENTS, DECIMAL) || DEFAULT.KINESIS_MAX_EVENTS,
-   maxRetries: parseInt(process.env.KINESIS_AWS_MAX_RETRIES, DECIMAL) || DEFAULT.KINESIS_MAX_RETRIES,
+   timeout: parseInt(process.env.KINESIS_AWS_TIMEOUT, DECIMAL) || 5, // in minute
+   maxSize: parseInt(process.env.KINESIS_AWS_MAX_SIZE, DECIMAL) || 5, // in mByte
+   maxEvents: parseInt(process.env.KINESIS_AWS_MAX_EVENTS, DECIMAL) || 1000,
+   maxRetries: parseInt(process.env.KINESIS_AWS_MAX_RETRIES, DECIMAL) || 4,
    httpOptions: {
-     socketTimeout: parseInt(process.env.KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT, DECIMAL) || DEFAULT.KINESIS_HTTP_OPTIONS_SOCKET_TIMEOUT,
-     connectionTimeout: parseInt(process.env.KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT, DECIMAL) || DEFAULT.KINESIS_HTTP_OPTIONS_CONNECTION_TIMEOUT,
+     socketTimeout: parseInt(process.env.KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT, DECIMAL) || 5000, // in millisecond
+     connectionTimeout: parseInt(process.env.KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT, DECIMAL) || 5000, // in millisecond
    },
  };

- if (!isNil(process.env.KINESIS_AWS_ACCESS_KEY_ID)) configuration[AWS_KINESIS].accessKeyId = process.env.KINESIS_AWS_ACCESS_KEY_ID;
- if (!isNil(process.env.KINESIS_AWS_SECRET_ACCESS_KEY)) configuration[AWS_KINESIS].secretAccessKey = process.env.KINESIS_AWS_SECRET_ACCESS_KEY;
+ if (process.env.KINESIS_AWS_ACCESS_KEY_ID !== undefined) configuration[AWS_KINESIS].accessKeyId = process.env.KINESIS_AWS_ACCESS_KEY_ID;
+ if (process.env.KINESIS_AWS_SECRET_ACCESS_KEY !== undefined) configuration[AWS_KINESIS].secretAccessKey = process.env.KINESIS_AWS_SECRET_ACCESS_KEY;
}
if (configuration.mode.includes(AWS_S3)) {
  configuration[AWS_S3] = {
    bucketname: process.env.S3_AWS_BUCKET_NAME,
    region: process.env.S3_AWS_REGION,
-   timeout: parseInt(process.env.S3_AWS_TIMEOUT, DECIMAL) || DEFAULT.S3_TIMEOUT, // in minute
-   maxSize: parseInt(process.env.S3_AWS_MAX_SIZE, DECIMAL) || DEFAULT.S3_MAX_SIZE, // in mByte
-   maxEvents: parseInt(process.env.S3_AWS_MAX_EVENTS, DECIMAL) || DEFAULT.S3_MAX_EVENTS,
+   timeout: parseInt(process.env.S3_AWS_TIMEOUT, DECIMAL) || 5, // in minute
+   maxSize: parseInt(process.env.S3_AWS_MAX_SIZE, DECIMAL) || 5, // in mByte
+   maxEvents: parseInt(process.env.S3_AWS_MAX_EVENTS, DECIMAL) || 1000,
  };

- if (!isNil(process.env.S3_AWS_ACCESS_KEY_ID)) configuration[AWS_S3].accessKeyId = process.env.S3_AWS_ACCESS_KEY_ID;
- if (!isNil(process.env.S3_AWS_SECRET_ACCESS_KEY)) configuration[AWS_S3].secretAccessKey = process.env.S3_AWS_SECRET_ACCESS_KEY;
+ if (process.env.S3_AWS_ACCESS_KEY_ID !== undefined) configuration[AWS_S3].accessKeyId = process.env.S3_AWS_ACCESS_KEY_ID;
+ if (process.env.S3_AWS_SECRET_ACCESS_KEY !== undefined) configuration[AWS_S3].secretAccessKey = process.env.S3_AWS_SECRET_ACCESS_KEY;
}
const { filter } = configuration;
let filterConfig = [];
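Swapping `lodash.isNil` for a `!== undefined` guard is safe here because `process.env` values are only ever strings or `undefined`, never `null`, which is the one case where the two checks differ. A small sketch (`isNil` reimplemented inline for illustration):

```javascript
// lodash.isnil semantics: matches both null and undefined.
const isNil = v => v === null || v === undefined;

// The shapes a process.env value can actually take: a string (possibly
// empty) or undefined. null cannot occur, so both guards agree.
const samples = ['secret', '', undefined];
const nilView = samples.map(isNil);
const undefView = samples.map(v => v === undefined);
```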
package/eslint.config.js CHANGED
@@ -30,10 +30,8 @@ export default [
      ecmaVersion: ECMA_VERSION,
      globals: {
        ...globals.nodeBuiltin,
-       console: 'readonly',
        describe: 'readonly',
        it: 'readonly',
-       require: 'readonly',
      },
      sourceType: 'module',
    },
@@ -41,6 +39,7 @@ export default [
    '@stylistic/brace-style': ['warn', 'stroustrup', { allowSingleLine: true }],
    '@stylistic/line-comment-position': ['off'],
    '@stylistic/max-len': ['warn', MAX_LENGTH_LINE, { ignoreComments: true, ignoreStrings: true, ignoreRegExpLiterals: true }],
+   '@stylistic/quotes': ['warn', 'single'],
    '@stylistic/semi': ['error', 'always'],
    'capitalized-comments': ['off'],
    'complexity': ['error', MAX_COMPLEXITY],
@@ -51,6 +50,7 @@ export default [
    'init-declarations': ['off'],
    'linebreak-style': ['off'],
    'max-depth': ['error', MAX_DEPTH],
+   'max-len': 'off',
    'max-lines': ['warn', { max: MAX_LINES_IN_FILES, skipComments: true, skipBlankLines: true }],
    'max-lines-per-function': ['warn', { max: MAX_LINES_IN_FUNCTION, skipComments: true, skipBlankLines: true }],
    'max-params': ['error', MAX_FUNCTION_PARAMETERS],
@@ -63,7 +63,7 @@ export default [
    'no-undefined': ['off'],
    'one-var': ['error', 'never'],
    'processDoc/validate-document-env': ['error'],
-   'quotes': ['warn', 'single'],
+   'quotes': 'off',
    'sort-imports': ['error', { allowSeparatedGroups: true }],
    'sort-keys': ['error', 'asc', { caseSensitive: true, minKeys: MIN_KEYS_IN_OBJECT, natural: false, allowLineSeparatedGroups: true }],
  },
package/index.js CHANGED
@@ -118,7 +118,7 @@ logger.flushAndExit = (code) => {
  if (config.mode.includes(NONE_MODE)) return process.exit(code);
  if (awsS3) {
    awsS3.flush(FLUSH_EXIT);
-   awsS3.on(FLUSH_EXIT, () => {
+   awsS3.once(FLUSH_EXIT, () => {
      if ((!sumo || sumoDone) && (!awsKinesis || awsKinesisDone)) return process.exit(code);
      awsS3Done = true;
      return null;
@@ -126,7 +126,7 @@ logger.flushAndExit = (code) => {
  }
  if (sumo) {
    sumo.flush(FLUSH_EXIT);
-   sumo.on(FLUSH_EXIT, () => {
+   sumo.once(FLUSH_EXIT, () => {
      if ((!awsS3 || awsS3Done) && (!awsKinesis || awsKinesisDone)) return process.exit(code);
      sumoDone = true;
      return null;
@@ -134,7 +134,7 @@ logger.flushAndExit = (code) => {
  }
  if (awsKinesis) {
    awsKinesis.flush(FLUSH_EXIT);
-   awsKinesis.on(FLUSH_EXIT, () => {
+   awsKinesis.once(FLUSH_EXIT, () => {
      if ((!awsS3 || awsS3Done) && (!sumo || sumoDone)) return process.exit(code);
      awsKinesisDone = true;
      return null;
@@ -155,23 +155,23 @@ logger.flush = () => {
  if (config.mode.includes(NONE_MODE)) return null;
  if (awsS3) {
    awsS3.flush(FLUSH);
-   awsS3.on(FLUSH, () => {
+   awsS3.once(FLUSH, () => {
      if ((!sumo || sumoDone) && (!awsKinesis || awsKinesisDone)) return null;
      awsS3Done = true;
      return null;
    });
  }
  if (sumo) {
-   sumo.flush();
-   sumo.on(FLUSH, () => {
+   sumo.flush(FLUSH);
+   sumo.once(FLUSH, () => {
      if ((!awsS3 || awsS3Done) && (!awsKinesis || awsKinesisDone)) return null;
      sumoDone = true;
      return null;
    });
  }
  if (awsKinesis) {
-   awsKinesis.flush();
-   awsKinesis.on(FLUSH, () => {
+   awsKinesis.flush(FLUSH);
+   awsKinesis.once(FLUSH, () => {
      if ((!awsS3 || awsS3Done) && (!sumo || sumoDone)) return null;
      awsKinesisDone = true;
      return null;
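The `.on` → `.once` change matters because `flush` can be invoked repeatedly on the same transport emitters: with `.on`, every call stacks another listener, and each later flush event fires all of them. A minimal sketch of the difference (simulated transport, not the package's code):

```javascript
// Simulate three successive flush() calls against one emitter and count how
// many times the completion callback runs.
import { EventEmitter } from 'node:events';

const simulate = (method) => {
  const transport = new EventEmitter();
  let fired = 0;
  for (let i = 0; i < 3; i += 1) {
    transport[method]('flush', () => { fired += 1; }); // register per call
    transport.emit('flush');                           // transport signals done
  }
  return fired;
};

const withOn = simulate('on');     // listeners stack: 1 + 2 + 3 = 6 firings
const withOnce = simulate('once'); // one firing per registration = 3
```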
@@ -13,6 +13,7 @@ import {
    UNKNOWN_ID,
    UNKNOWN_TYPE,
    WARN,
+   safeStringify,
  } from './common.js';
  import {
    KinesisClient,
@@ -24,7 +25,6 @@ import {
  } from 'node:timers';
  import { Buffer } from 'node:buffer';
  import { NodeHttpHandler } from '@smithy/node-http-handler';
- import Promise from 'bluebird';
  import Transport from 'winston-transport';

  const RANDOM_MIN = 0;
@@ -101,11 +101,11 @@ export default class AwsKinesis extends Transport {
  }

  flushEvent() {
-   return Promise.map(Object.keys(events), (level) => {
+   return Promise.all(Object.keys(events).map((level) => {
      if (events[level].Records.length) {
        return this.put(events[level].Records, level)
          .then(() => {
-           events[level].data = []; // we may lose some logs due to concurrency issues
+           events[level].Records = []; // we may lose some logs due to concurrency issues
            events[level].size = 0;
          })
          .catch(err => this.emit(WARN, {
@@ -115,17 +115,16 @@ export default class AwsKinesis extends Transport {
        }));
      }
      return Promise.resolve();
-   })
+   }))
    .then(() => this.emit(LOG, { message: `logs sent to ${AWS_KINESIS}` }));
  }

  log(info, callback) {
-   const messageInfo = info[MESSAGE];
-   const infoSize = messageInfo.length;
+   const infoSize = Buffer.byteLength(info[MESSAGE]);
    let { level } = info;
    const { serverType } = globalThis;
    let { serverId } = globalThis;
-   const data = JSON.parse(messageInfo);
+   const data = { ...info };

    if (serverType) {
      if (!serverId) serverId = `${UNKNOWN_ID}${CLIENTS}`;
@@ -144,7 +143,7 @@ export default class AwsKinesis extends Transport {
      levelData = events[level];
    }
    levelData.size += infoSize;
-   levelData.Records.push({ Data: Buffer.from(JSON.stringify(data)), PartitionKey: `${PARTITION_KEY}-${randomInt(RANDOM_MIN, RANDOM_LIMIT)}` });
+   levelData.Records.push({ Data: Buffer.from(safeStringify(data)), PartitionKey: `${PARTITION_KEY}-${randomInt(RANDOM_MIN, RANDOM_LIMIT)}` });
    if (levelData.Records.length >= this.maxEvents || levelData.size >= this.maxSize) {
      this.send(levelData.Records, level);
      levelData.Records = [];
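The `Promise.map` → `Promise.all(…map(…))` rewrite above is behaviorally equivalent for this code, since it never used bluebird's concurrency option. A minimal sketch of the native pattern (stand-in data, not the package's transport):

```javascript
// Map each per-level buffer to a promise and await them all; levels with no
// buffered records resolve immediately.
const events = { error: { Records: ['a'] }, info: { Records: [] } };

const flushed = [];
const flushEvent = () => Promise.all(Object.keys(events).map((level) => {
  if (events[level].Records.length) {
    flushed.push(level);          // stand-in for this.put(Records, level)
    events[level].Records = [];   // clear the buffer after a send
  }
  return Promise.resolve();
}));
```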
@@ -9,6 +9,7 @@ import {
    UNKNOWN_ID,
    UNKNOWN_TYPE,
    WARN,
+   safeStringify,
  } from './common.js';
  import {
    PutObjectCommand,
@@ -18,7 +19,6 @@ import {
    setImmediate,
    setInterval,
  } from 'node:timers';
- import Promise from 'bluebird';
  import Transport from 'winston-transport';

  const events = {};
@@ -89,7 +89,7 @@ export default class AwsS3 extends Transport {
    const errors = [];
    let count = 0;

-   return Promise.map(Object.keys(data), sType => Promise.each(Object.keys(data[sType]), (sId) => {
+   return Promise.all(Object.keys(data).map(sType => Object.keys(data[sType]).reduce((chain, sId) => chain.then(() => {
      const command = new PutObjectCommand({
        Bucket: this.bucketname,
        Key: `${lvl}/${sType}/${sId}/${date.getFullYear()}/${date.getMonth() + 1}/${date.getDate()}/${date.toISOString()}.json`,
@@ -104,7 +104,7 @@ export default class AwsS3 extends Transport {
        data: data[sType][sId],
        error: { message: err.message, statusCode: err.statusCode || SYSTEM_ERROR, details: err },
      }));
-   }))
+   }), Promise.resolve())))
    .then(() => ({ count, errors }));
  }
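The reduce-over-a-resolved-promise chain above reproduces bluebird's sequential `Promise.each` with natives: one async step per item, strictly in order, while the outer `Promise.all(…map(…))` still lets independent groups run concurrently. A minimal sketch (`uploadOne` is a hypothetical stand-in for the S3 put):

```javascript
// Record the order in which items are processed by the sequential chain.
const order = [];
const uploadOne = id => new Promise((resolve) => {
  order.push(id);               // stand-in for sending a PutObjectCommand
  resolve();
});

// Each step waits for the previous one: chain.then(() => uploadOne(id)).
const sequential = ids =>
  ids.reduce((chain, id) => chain.then(() => uploadOne(id)), Promise.resolve());
```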
@@ -118,7 +118,7 @@ export default class AwsS3 extends Transport {
      if (count === 0) {
        return this.emit(WARN, {
          errors,
-         nblogs: errorCount,
+         nbLogs: errorCount,
          nbLogsSent: 0,
          message: `could not log to ${AWS_S3}`,
        });
@@ -133,7 +133,7 @@ export default class AwsS3 extends Transport {
  }

  flushEvent() {
-   return Promise.map(Object.keys(events), (level) => {
+   return Promise.all(Object.keys(events).map((level) => {
      if (events[level].data.length) {
        return this.put(events[level].data, level, new Date())
          .then(() => {
@@ -143,12 +143,12 @@ export default class AwsS3 extends Transport {
          .catch(err => this.emit(WARN, { error: err.message, message: `could not log to ${AWS_S3}` }));
      }
      return Promise.resolve();
-   })
+   }))
    .then(() => this.emit(LOG, { message: `logs sent to ${AWS_S3}` }));
  }

  flushTypeEvent() {
-   return Promise.map(Object.keys(typeEvents), (level) => {
+   return Promise.all(Object.keys(typeEvents).map((level) => {
      if (Object.keys(typeEvents[level].data).length) {
        return this.putRemote(typeEvents[level].data, level, new Date())
          .then((result) => {
@@ -162,7 +162,7 @@ export default class AwsS3 extends Transport {
      if (count === 0) {
        return this.emit(WARN, {
          errors,
-         nblogs: errorCount,
+         nbLogs: errorCount,
          nbLogsSent: 0,
          message: `could not log to ${AWS_S3}`,
        });
@@ -176,16 +176,15 @@ export default class AwsS3 extends Transport {
        });
      }
      return Promise.resolve();
-   });
+   }));
  }

  log(info, callback) {
-   const messageInfo = info[MESSAGE];
-   const infoSize = messageInfo.length;
+   const infoSize = Buffer.byteLength(info[MESSAGE]);
    const { level } = info;
    const { serverType } = globalThis;
    let { serverId } = globalThis;
-   const data = JSON.parse(messageInfo);
+   const data = { ...info };

    if (serverType) {
      if (!serverId) serverId = `${UNKNOWN_ID}${CLIENTS}`;
@@ -211,7 +210,7 @@ export default class AwsS3 extends Transport {
    }
    typeLevelData.size += infoSize;
    typeLevelData.nbEvents += 1;
-   serverData.push(data);
+   serverData.push(safeStringify(data));
    if (typeLevelData.nbEvents >= this.maxEvents || typeLevelData.size >= this.maxSize) {
      this.sendRemote(typeLevelData.data, level, new Date());
      typeLevelData.data = {};
@@ -229,7 +228,7 @@ export default class AwsS3 extends Transport {
      levelData = events[level];
    }
    levelData.size += infoSize;
-   levelData.data.push(data);
+   levelData.data.push(safeStringify(data));
    if (levelData.data.length >= this.maxEvents || levelData.size >= this.maxSize) {
      this.send(levelData.data, level, new Date());
      levelData.data = [];
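The switch from `message.length` to `Buffer.byteLength` in both transports matters because the size limits they enforce are in bytes, while `.length` counts UTF-16 code units. A quick sketch:

```javascript
// Non-ASCII characters encode to more than one UTF-8 byte, so .length
// understates the payload size that actually counts against the limit.
import { Buffer } from 'node:buffer';

const ascii = 'plain log line';
const accented = 'café ☕';   // 6 code units, but more UTF-8 bytes

const asciiSame = ascii.length === Buffer.byteLength(ascii);
const accentedDiffers = accented.length < Buffer.byteLength(accented);
```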