@mimik/sumologic-winston-logger 2.1.13 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,6 +1,10 @@
- <a name="config"></a>
+ # @mimik/sumologic-winston-logger
 
- ## config() <code>object</code>
+ Log wrapper for sumo, s3, kinesis and winston.
+
+ <a name="module_configuration"></a>
+
+ ## configuration
 The following environment variables are used to configure the logger:
 
 | Env variable name | Description | Default | Comments |
@@ -13,7 +17,7 @@ The following environment variables are used to configure the logger:
 | FILTER_FILE | Filename containing filter rules | null | |
 | FLUSH_EXIT_DELAY | Delay for flushing the transports and exiting | 2000 | in millisecond |
 | FLUSH_EXIT_TIMEOUT | Timeout safety net in case flush never completes | 5000 | in millisecond |
- | NO_STACK | Whether to include call stacks in all log messages | yes | expected: yes/no |
+ | NO_STACK | Set to 'yes' to suppress stack info on non-error logs | yes | expected: yes/no |
 | LOG_MODE | Comma-separated list defining the log mode/backends | none | enum: awsS3, awsKinesis, sumologic, all, none |
 
 If `LOG_MODE` includes `sumologic`, the following environment variables are required:
@@ -60,15 +64,49 @@ log payload for S3 and Kinesis.
 If `globalThis.serverType` is set, it overrides `SERVER_TYPE`.
 If `globalThis.serverId` is set, it overrides `SERVER_ID`.
 
- **Kind**: global function
- **Returns**: <code>object</code> - configuration - Logger configuration.
+
+ * [configuration](#module_configuration)
+ * [~checkConfig(config)](#module_configuration..checkConfig)
+ * [~checkMode(mode)](#module_configuration..checkMode) ⇒ <code>Array.&lt;string&gt;</code> \| <code>null</code>
+
+ <a name="module_configuration..checkConfig"></a>
+
+ ### configuration~checkConfig(config)
+ Validates that no property in the configuration tree is undefined.
+
+ **Kind**: inner method of [<code>configuration</code>](#module_configuration)
+ **Throws**:
+
+ - <code>Error</code> If any property value is undefined.
+
+
+ | Param | Type | Description |
+ | --- | --- | --- |
+ | config | <code>object</code> | The configuration object to validate. |
+
+ <a name="module_configuration..checkMode"></a>
+
+ ### configuration~checkMode(mode) ⇒ <code>Array.&lt;string&gt;</code> \| <code>null</code>
+ Parses and validates the LOG_MODE string.
+
+ **Kind**: inner method of [<code>configuration</code>](#module_configuration)
+ **Returns**: <code>Array.&lt;string&gt;</code> \| <code>null</code> - Array of validated mode strings, or null if mode is falsy.
+ **Throws**:
+
+ - <code>Error</code> If the mode string contains invalid values.
+
+
+ | Param | Type | Description |
+ | --- | --- | --- |
+ | mode | <code>string</code> \| <code>undefined</code> | Comma-separated list of log modes. |
+
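The documented contract of `checkMode` can be sketched as a standalone function. This is a hypothetical reimplementation for illustration only; `VALID_MODES` is an assumed name, and any normalization of the `all` shorthand in the real module is omitted here:

```javascript
// Hypothetical sketch of checkMode's documented contract (not the package's code).
const VALID_MODES = ['awsS3', 'awsKinesis', 'sumologic', 'all', 'none'];

const checkMode = (mode) => {
  if (!mode) return null; // falsy LOG_MODE yields null
  const modes = mode.split(',').map(item => item.trim());
  const invalid = modes.filter(item => !VALID_MODES.includes(item));
  if (invalid.length) throw new Error(`invalid log mode(s): ${invalid.join(', ')}`);
  return modes;
};
```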
 
 ## Synopsis ##
 
 Sumologic-Winston-Logger is a log wrapper that can write to multiple logging services.
 Currently, Winston, SumoLogic, AWS Kinesis and AWS S3 are supported.
 The package also adds stackTrace info.
- StackTrace information is included in all log entries. For error-level logs, the full stack trace is added. For other log levels, method name, file name, and line number are appended. **Line formatting is not currently configurable.**
+ StackTrace information is included in all log entries. For error-level logs, the full stack trace is added. For other log levels, method name, file name, and line number are appended. **Line formatting is not currently configurable.**
 The package also allows some level of filtering.
 
 ## Motivation ##
@@ -118,11 +156,11 @@ The collector code as defined in SumoLogic. The Collector Code is the Base64 str
 |`export SUMO_LOGIC_COLLECTOR_CODE=AhfheisdcOllectorCodeInSumoLogicuZw==`|
 
 
- To learn more about setting this environment variable, see the section, _Finding SumoLogic endpoint and collector code_, below.
+ To learn more about setting this environment variable, see the section, _Finding SumoLogic endpoint and collector code_, below.
 
 #### Finding SumoLogic endpoint and collector code ####
 
- To find the values that you will apply the environment variables, `SUMO_LOGIC_ENDPOINT` and `SUMO_LOGIC_COLLECTOR_CODE`, in the SumoLogic Control Panel, goto: Manage > Collection, and get the source category
+ To find the values that you will apply the environment variables, `SUMO_LOGIC_ENDPOINT` and `SUMO_LOGIC_COLLECTOR_CODE`, in the SumoLogic Control Panel, goto: Manage > Collection, and get the source category
 
 ![Screen Shot 2016-10-10 at 2.03.31 PM.png](https://bitbucket.org/repo/xbXGRo/images/3258406172-Screen%20Shot%202016-10-10%20at%202.03.31%20PM.png)
 
@@ -137,7 +175,7 @@ To find the values that you will apply the environment variables, `SUMO_LOGIC_EN
 
 **Figure 3: The HTTP Source Address dialog shows collector's URL. The collector code is a Base64 string appended after the last slash in the Source Address URL**
 
- The endpoint is the part of the url that ends with a slash. i.e.
+ The endpoint is the part of the url that ends with a slash. i.e.
 `https://endpoint1.collection.us2.sumologic.com/receiver/v1/http/`
 
 The collector code is the Base64 encoded part of the URL that follows the last slash in the url.
@@ -145,7 +183,7 @@ The collector code is the Base64 encoded part of the URL that follows the last s
 ### `FILTER_FILE` ###
 
 FILTER_FILE is the location where the definition of the filtering configuration is. The location has to be a full file name.
- When the environment (NODE_ENV) in which the logger is used is `prod` or `production`, the content of the log will be filtered according to the log filtering configuration included in the file referred by the FILTER_FILE variable.
+ When a `FILTER_FILE` is specified, the content of the log will be filtered according to the log filtering configuration included in the file referred by the FILTER_FILE variable.
 The filter will replace values of the designated property names to '-----'.
 
 ## Sample Use ##
@@ -157,9 +195,10 @@ Formatting console logs is left to winston, except that some stackTrace info is
 Formatting of SumoLogic logs is handled by this module in the following ways:
 
 * only the first argument is used as the message
- * only one object can be passed as parameter
+ * one metadata object can be passed as a parameter; its properties are included in the log entry
 * structured stackTrace info is added to every log except when NO_STACK is set to 'yes'
 * if the last parameter is a string it will be considered as a `correlationId`
+ * if the last parameter is a plain object **and** there is already an earlier metadata object, its properties are merged into the log entry at the top level (alongside `serverType`/`serverId`) — this is the _extra fields_ feature
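Taken together, the last two rules can be sketched as a standalone helper. The names here (`splitExtraFields`, `isPlainObject`) are hypothetical; the package implements this logic inside a winston format, not as a public function:

```javascript
// Hypothetical sketch of the "extra fields" rule described above.
const isPlainObject = v => typeof v === 'object' && v !== null && !Array.isArray(v);

// Given the arguments after the message, return [remainingArgs, extraFields].
const splitExtraFields = (args) => {
  const last = args[args.length - 1];
  const earlierObject = args.slice(0, -1).some(isPlainObject);
  if (args.length >= 2 && isPlainObject(last) && earlierObject) {
    return [args.slice(0, -1), last]; // trailing object becomes top-level fields
  }
  return [args, null];
};
```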
 
 
 ### Logging Examples ###
@@ -177,9 +216,14 @@ logger.info('this is an info statement', { meta: 'data' });
 logger.verbose('this is a verbose statement');
 logger.silly('this is a silly statement o_O');
 logger.debug('this is a debug statement with 2 params', { meta: 'data' });
- logger.debug('this is a debug statement with 2 params and a correlationId', { meta: 'data' }, '123456')
+ logger.debug('this is a debug statement with 2 params and a correlationId', { meta: 'data' }, '123456');
+
+ // Extra fields — the trailing plain object is merged into the log entry at the top level.
+ // Works with or without a correlationId.
+ logger.info('this is an info statement with extra fields', { meta: 'data' }, { type: 'myService' });
+ logger.info('this is an info statement with extra fields and a correlationId', { meta: 'data' }, '123456', { type: 'myService', version: '1.0' });
 ```
- **Listing 2: Examples of using the logger to make a log entry, using the log levels, log, silly, verbose, debug, info, warn, and error**
+ **Listing 2: Examples of using the logger to make a log entry, using the log levels, log, silly, verbose, debug, info, warn, and error**
 
 By default, log entries are logged to console.
 The log() method is also supported, and adds a level parameter in position 1.
@@ -194,9 +238,19 @@ To trail in SumoLogic go to Search > Live Tail in the SumoLogic user interface a
 |---|
 |`sourceCategory=local/node/challengeAPI/logs`|
 
+ ### Logger API ###
+
+ In addition to the standard Winston log methods (`error`, `warn`, `info`, `verbose`, `debug`, `silly`), the logger exposes the following:
+
+ | Property / Method | Description |
+ | --- | --- |
+ | `logger.LEVELS` | Array of supported log levels in severity order: `['error', 'warn', 'info', 'verbose', 'debug', 'silly']` |
+ | `logger.flush()` | Flushes all active transports (SumoLogic, S3, Kinesis). Completes asynchronously after the configured `FLUSH_EXIT_DELAY`. |
+ | `logger.flushAndExit(code)` | Flushes all active transports and then calls `process.exit(code)`. A safety-net timer (`FLUSH_EXIT_TIMEOUT`) ensures the process exits even if a transport never responds. |
+
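The safety-net idea behind `flushAndExit` can be illustrated in isolation. This is a minimal sketch with hypothetical names, not the package's implementation: race the flush against a timer so shutdown can never hang, and `unref()` the timer so it does not itself keep the process alive.

```javascript
// Minimal sketch: race a flush promise against a timeout so shutdown never hangs.
const flushWithSafetyNet = (flush, timeoutMs) => {
  const timeout = new Promise((resolve) => {
    const t = setTimeout(() => resolve('timeout'), timeoutMs);
    if (typeof t.unref === 'function') t.unref(); // don't hold the event loop open
  });
  return Promise.race([Promise.resolve(flush()).then(() => 'flushed'), timeout]);
};

// Usage idea (hypothetical): flushWithSafetyNet(() => logger.flush(), 5000)
//   .then(() => process.exit(0));
```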
 ### Stack Trace ###
 All calls to `error()` include the stack trace.
- All other log levels will include a line number and file name.
+ When `NO_STACK` is not set to `'yes'` and the environment is development or local, other log levels will include a line number and file name.
 
 ## License ##
 MIT
package/configuration/config.js CHANGED
@@ -8,19 +8,16 @@ import {
 ENV_LOCAL,
 NONE_MODE,
 SUMOLOGIC,
- isNil,
+ toInt,
 } from '../lib/common.js';
 import process from 'node:process';
 import { readFileSync } from 'node:fs';
 
- const DECIMAL = 10;
-
 /**
 *
 * Logger configuration.
 *
- * @function config
- * @return {object} configuration - Logger configuration.
+ * @module configuration
 * @description The following environment variables are used to configure the logger:
 *
 * | Env variable name | Description | Default | Comments |
@@ -33,7 +30,7 @@ const DECIMAL = 10;
 * | FILTER_FILE | Filename containing filter rules | null | |
 * | FLUSH_EXIT_DELAY | Delay for flushing the transports and exiting | 2000 | in millisecond |
 * | FLUSH_EXIT_TIMEOUT | Timeout safety net in case flush never completes | 5000 | in millisecond |
- * | NO_STACK | Whether to include call stacks in all log messages | yes | expected: yes/no |
+ * | NO_STACK | Set to 'yes' to suppress stack info on non-error logs | yes | expected: yes/no |
 * | LOG_MODE | Comma-separated list defining the log mode/backends | none | enum: awsS3, awsKinesis, sumologic, all, none |
 *
 * If `LOG_MODE` includes `sumologic`, the following environment variables are required:
@@ -81,6 +78,12 @@ const DECIMAL = 10;
 * If `globalThis.serverId` is set, it overrides `SERVER_ID`.
 */
 
+ /**
+ * Validates that no property in the configuration tree is undefined.
+ *
+ * @param {object} config - The configuration object to validate.
+ * @throws {Error} If any property value is undefined.
+ */
 const checkConfig = (config) => {
 const errs = [];
 
@@ -99,6 +102,13 @@ const checkConfig = (config) => {
 }
 };
 
+ /**
+ * Parses and validates the LOG_MODE string.
+ *
+ * @param {string|undefined} mode - Comma-separated list of log modes.
+ * @returns {string[]|null} Array of validated mode strings, or null if mode is falsy.
+ * @throws {Error} If the mode string contains invalid values.
+ */
 const checkMode = (mode) => {
 let logMode = null;
 
@@ -125,8 +135,8 @@ const configuration = {
 filter: {
 file: process.env.FILTER_FILE || null,
 },
- flushExitDelay: parseInt(process.env.FLUSH_EXIT_DELAY, DECIMAL) || 2000, // in millisecond
- flushExitTimeout: parseInt(process.env.FLUSH_EXIT_TIMEOUT, DECIMAL) || 5000, // in millisecond
+ flushExitDelay: toInt(process.env.FLUSH_EXIT_DELAY, 2000), // in millisecond
+ flushExitTimeout: toInt(process.env.FLUSH_EXIT_TIMEOUT, 5000), // in millisecond
 noStack: process.env.NO_STACK || 'yes',
 };
 configuration.mode = checkMode(process.env.LOG_MODE) || [NONE_MODE];
@@ -143,30 +153,30 @@ if (configuration.mode.includes(AWS_KINESIS)) {
 streamNameError: process.env.KINESIS_AWS_STREAM_NAME_ERROR,
 streamNameOther: process.env.KINESIS_AWS_STREAM_NAME_OTHER,
 region: process.env.KINESIS_AWS_REGION,
- timeout: parseInt(process.env.KINESIS_AWS_TIMEOUT, DECIMAL) || 5, // in minute
- maxSize: parseInt(process.env.KINESIS_AWS_MAX_SIZE, DECIMAL) || 5, // in mByte
- maxEvents: parseInt(process.env.KINESIS_AWS_MAX_EVENTS, DECIMAL) || 1000,
- maxRetries: parseInt(process.env.KINESIS_AWS_MAX_RETRIES, DECIMAL) || 4,
+ timeout: toInt(process.env.KINESIS_AWS_TIMEOUT, 5), // in minute
+ maxSize: toInt(process.env.KINESIS_AWS_MAX_SIZE, 5), // in mByte
+ maxEvents: toInt(process.env.KINESIS_AWS_MAX_EVENTS, 1000),
+ maxRetries: toInt(process.env.KINESIS_AWS_MAX_RETRIES, 4),
 httpOptions: {
- socketTimeout: parseInt(process.env.KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT, DECIMAL) || 5000, // in millisecond
- connectionTimeout: parseInt(process.env.KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT, DECIMAL) || 5000, // in millisecond
+ socketTimeout: toInt(process.env.KINESIS_AWS_HTTP_OPTIONS_SOCKET_TIMEOUT, 5000), // in millisecond
+ connectionTimeout: toInt(process.env.KINESIS_AWS_HTTP_OPTIONS_CONNECTION_TIMEOUT, 5000), // in millisecond
 },
 };
 
- if (!isNil(process.env.KINESIS_AWS_ACCESS_KEY_ID)) configuration[AWS_KINESIS].accessKeyId = process.env.KINESIS_AWS_ACCESS_KEY_ID;
- if (!isNil(process.env.KINESIS_AWS_SECRET_ACCESS_KEY)) configuration[AWS_KINESIS].secretAccessKey = process.env.KINESIS_AWS_SECRET_ACCESS_KEY;
+ if (process.env.KINESIS_AWS_ACCESS_KEY_ID !== undefined) configuration[AWS_KINESIS].accessKeyId = process.env.KINESIS_AWS_ACCESS_KEY_ID;
+ if (process.env.KINESIS_AWS_SECRET_ACCESS_KEY !== undefined) configuration[AWS_KINESIS].secretAccessKey = process.env.KINESIS_AWS_SECRET_ACCESS_KEY;
 }
 if (configuration.mode.includes(AWS_S3)) {
 configuration[AWS_S3] = {
 bucketname: process.env.S3_AWS_BUCKET_NAME,
 region: process.env.S3_AWS_REGION,
- timeout: parseInt(process.env.S3_AWS_TIMEOUT, DECIMAL) || 5, // in minute
- maxSize: parseInt(process.env.S3_AWS_MAX_SIZE, DECIMAL) || 5, // in mByte
- maxEvents: parseInt(process.env.S3_AWS_MAX_EVENTS, DECIMAL) || 1000,
+ timeout: toInt(process.env.S3_AWS_TIMEOUT, 5), // in minute
+ maxSize: toInt(process.env.S3_AWS_MAX_SIZE, 5), // in mByte
+ maxEvents: toInt(process.env.S3_AWS_MAX_EVENTS, 1000),
 };
 
- if (!isNil(process.env.S3_AWS_ACCESS_KEY_ID)) configuration[AWS_S3].accessKeyId = process.env.S3_AWS_ACCESS_KEY_ID;
- if (!isNil(process.env.S3_AWS_SECRET_ACCESS_KEY)) configuration[AWS_S3].secretAccessKey = process.env.S3_AWS_SECRET_ACCESS_KEY;
+ if (process.env.S3_AWS_ACCESS_KEY_ID !== undefined) configuration[AWS_S3].accessKeyId = process.env.S3_AWS_ACCESS_KEY_ID;
+ if (process.env.S3_AWS_SECRET_ACCESS_KEY !== undefined) configuration[AWS_S3].secretAccessKey = process.env.S3_AWS_SECRET_ACCESS_KEY;
 }
 const { filter } = configuration;
 let filterConfig = [];
package/index.js CHANGED
@@ -10,6 +10,7 @@ import {
 } from './lib/common.js';
 import {
 correlationId,
+ extraFields,
 filterMeta,
 stackInfo,
 } from './lib/formatLib.js';
@@ -42,6 +43,7 @@ const logger = createLogger({
 format: format.combine(
 filterMeta({ env: config.env, config: config.filter.config }),
 format.metadata(),
+ extraFields(),
 stackInfo({ env: config.env, noStack: config.noStack }),
 correlationId(),
 ),
@@ -73,7 +73,7 @@ export default class AwsKinesis extends Transport {
 events[level].size = 0;
 }
 });
- }, this.timeInterval);
+ }, this.timeInterval).unref();
 }
 
 put(Records, lvl) {
@@ -103,13 +103,14 @@ export default class AwsKinesis extends Transport {
 flushEvent() {
 return Promise.all(Object.keys(events).map((level) => {
 if (events[level].Records.length) {
- return this.put(events[level].Records, level)
+ const failedRecords = events[level].Records;
+ return this.put(failedRecords, level)
 .then(() => {
 events[level].Records = []; // we may lose some logs due to concurrency issues
 events[level].size = 0;
 })
 .catch(err => this.emit(WARN, {
- data: events[level].Records,
+ data: failedRecords,
 message: `could not log to ${AWS_KINESIS}`,
 error: { message: err.message, statusCode: err.statusCode || SYSTEM_ERROR, details: err },
 }));
@@ -124,7 +125,7 @@ export default class AwsKinesis extends Transport {
 let { level } = info;
 const { serverType } = globalThis;
 let { serverId } = globalThis;
- const data = { ...info };
+ const data = Object.fromEntries(Object.entries(info));
 
 if (serverType) {
 if (!serverId) serverId = `${UNKNOWN_ID}${CLIENTS}`;
@@ -62,7 +62,7 @@ export default class AwsS3 extends Transport {
 typeEvents[level].nbEvents = 0;
 }
 });
- }, this.timeInterval);
+ }, this.timeInterval).unref();
 }
 
 put(data, lvl, date) {
@@ -184,7 +184,7 @@ export default class AwsS3 extends Transport {
 const { level } = info;
 const { serverType } = globalThis;
 let { serverId } = globalThis;
- const data = { ...info };
+ const data = Object.fromEntries(Object.entries(info));
 
 if (serverType) {
 if (!serverId) serverId = `${UNKNOWN_ID}${CLIENTS}`;
package/lib/common.js CHANGED
@@ -53,7 +53,11 @@ const safeStringify = (obj) => {
 });
 };
 
- const isNil = value => value === null || value === undefined;
+ const DECIMAL = 10;
+ const toInt = (value, fallback) => {
+ const parsed = parseInt(value, DECIMAL);
+ return Number.isNaN(parsed) ? fallback : parsed;
+ };
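The point of `toInt` over the previous `parseInt(value, DECIMAL) || fallback` pattern used throughout the config hunk above is that an explicit `0` survives, since only `NaN` triggers the fallback. A quick sketch reproducing the helper:

```javascript
const DECIMAL = 10;
const toInt = (value, fallback) => {
  const parsed = parseInt(value, DECIMAL);
  return Number.isNaN(parsed) ? fallback : parsed;
};

// Unset or non-numeric input falls back...
toInt(undefined, 2000);     // 2000
// ...but unlike `parseInt(v, 10) || 2000`, an explicit zero is preserved.
toInt('0', 2000);           // 0
parseInt('0', 10) || 2000;  // 2000 — the old idiom silently drops 0
```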
 
 export {
 ALL_MODE,
@@ -61,6 +65,7 @@ export {
 AWS_KINESIS,
 AWS_S3,
 CLIENTS,
+ DECIMAL,
 DEBUG_LEVEL,
 ENV_DEV,
 ENV_LOCAL,
@@ -90,5 +95,5 @@ export {
 UNKNOWN_ID,
 WARN,
 safeStringify,
- isNil,
+ toInt,
 };
package/lib/formatLib.js CHANGED
@@ -1,4 +1,5 @@
 import {
+ DECIMAL,
 ENV_DEV,
 ENV_LOCAL,
 LEVEL,
@@ -10,6 +11,7 @@ import { logs } from '@mimik/lib-filters';
 import { parseStack } from './stackLib.js';
 
 const INDEX_ADJUST = 1;
+ const MIN_EXTRA_FIELDS_ARGS = 2;
 
 const isReserved = (value) => {
 if (value === LEVEL || value === 'level') return true;
@@ -49,8 +51,7 @@ const correlationId = format((origInfo) => {
 
 ([info.correlationId, info.step] = resultsSteps);
 }
- else info.step = undefined;
- if (info.step) info.step = parseInt(info.step, 10);
+ if (info.step) info.step = parseInt(info.step, DECIMAL);
 return info;
 });
 
@@ -99,8 +100,33 @@ const filterMeta = format((origInfo, opts) => {
 return info;
 });
 
+ const extraFields = format((origInfo) => {
+ const info = { ...origInfo };
+ const meta = info[SPLAT] ? [...info[SPLAT]] : undefined;
+
+ if (!meta || meta.length < MIN_EXTRA_FIELDS_ARGS) return info;
+
+ const last = meta[meta.length - 1];
+ if (typeof last !== 'object' || last === null || Array.isArray(last)) return info;
+
+ const hasOtherObject = meta.slice(0, -1).some(
+ item => typeof item === 'object' && item !== null && !Array.isArray(item),
+ );
+ if (!hasOtherObject) return info;
+
+ Object.entries(last).forEach(([key, value]) => {
+ if (!isReserved(key)) {
+ info[key] = value;
+ }
+ });
+
+ info[SPLAT] = meta.slice(0, -1);
+ return info;
+ });
+
 export {
 stackInfo,
 correlationId,
+ extraFields,
 filterMeta,
 };
package/lib/stackLib.js CHANGED
@@ -2,7 +2,7 @@ import { basename } from 'node:path';
 import { createHash } from 'node:crypto';
 import { fileURLToPath } from 'node:url';
 
- // Stack trace format :
+ // Stack trace format:
 // https://github.com/v8/v8/wiki/Stack%20Trace%20API
 // these regexes are used to pull out the parts of the stack trace like method name, line number, etc.
 const STACKREG = /at\s+(?<method>.*)\s+\((?<path>.*):(?<line>\d*):(?<position>\d*)\)/iu;
@@ -34,20 +34,16 @@ const parseStack = (newError) => {
 const stackParts = STACKREG.exec(firstLine) || STACKREG2.exec(firstLine);
 
 if (!stackParts || stackParts.length !== ALL_STACK) return null;
- const stackInfo = {
+ const stack = `${SHIFT}${truncatedList.join('\n').trimStart()}`;
+ return {
 method: stackParts[METHOD],
 path: stackParts[PATH],
 line: stackParts[LINE],
 pos: stackParts[POSITION],
 file: basename(stackParts[PATH]),
- stack: `${SHIFT}${truncatedList.join('\n').trimStart()}`,
+ stack,
+ hash: createHash('sha256').update(stack).digest('hex'),
 };
- stackInfo.hash = createHash('sha256')
- .update(stackInfo.stack)
- .digest('hex');
- // this is a hash of the stack trace for easier searching for the stack trace
- // security of this is not a concern since the stack trace is also sent unencrypted
- return stackInfo;
 };
 
 export {
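The `hash` field is a SHA-256 digest of the formatted stack string, giving identical traces a stable key to search on (as the removed comments noted, this is for searchability, not security). A standalone sketch of the idea, with a hypothetical `stackHash` helper and a made-up sample frame:

```javascript
import { createHash } from 'node:crypto';

// Hash a formatted stack string so identical traces share a searchable key.
const stackHash = stack => createHash('sha256').update(stack).digest('hex');

// Hypothetical frame; a SHA-256 hex digest is always 64 characters long.
const hash = stackHash('    at doWork (/app/lib/worker.js:12:5)');
```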
@@ -28,7 +28,8 @@ export default class Sumo extends Transport {
 log(info, callback) {
 const { serverType } = globalThis;
 let { serverId } = globalThis;
- const data = { ...info };
+ const splatArgs = info[SPLAT];
+ const data = Object.fromEntries(Object.entries(info));
 
 if (serverType) {
 if (!serverId) serverId = `${UNKNOWN_ID}${CLIENTS}`;
@@ -51,8 +52,8 @@ export default class Sumo extends Transport {
 const resp = { endpoint: this.endpoint, data, message: `could not log to ${SUMOLOGIC}` };
 
 if (data.correlationId) resp.correlationId = data.correlationId;
- else if (data[SPLAT] && Array.isArray(data[SPLAT])) {
- const last = data[SPLAT][data[SPLAT].length - 1];
+ else if (splatArgs && Array.isArray(splatArgs)) {
+ const last = splatArgs[splatArgs.length - 1];
 
 if (typeof last === 'string') resp.correlationId = last;
 }
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
 "name": "@mimik/sumologic-winston-logger",
- "version": "2.1.13",
+ "version": "2.2.0",
 "description": "Log wrapper for sumo, s3, kinesis and winston",
 "main": "./index.js",
 "type": "module",
@@ -10,7 +10,7 @@
 },
 "scripts": {
 "lint": "eslint . --no-error-on-unmatched-pattern",
- "docs": "jsdoc2md configuration/config.js > README.md && cat README_Supplement.md >> README.md",
+ "docs": "jsdoc2md --template docs/README.hbs configuration/config.js > README.md",
 "test": "mocha --reporter mochawesome --bail --exit --check-leaks --global serverType,serverId test/logger.spec.js test/loggerProd.spec.js",
 "test-ci": "c8 --reporter=lcov --reporter=text npm test",
 "prepublishOnly": "npm run docs && npm run lint && npm run test-ci",
@@ -32,29 +32,30 @@
 },
 "dependencies": {
 "@mimik/lib-filters": "^2.0.7",
- "@aws-sdk/client-s3": "3.995.0",
- "@aws-sdk/client-kinesis": "3.995.0",
- "@smithy/node-http-handler": "4.4.10",
- "axios": "1.13.5",
+ "@aws-sdk/client-s3": "3.1006.0",
+ "@aws-sdk/client-kinesis": "3.1006.0",
+ "@smithy/node-http-handler": "4.4.14",
+ "axios": "1.13.6",
 "winston": "3.19.0",
 "winston-transport": "4.9.0"
 },
 "devDependencies": {
- "@eslint/js": "9.39.2",
+ "@eslint/js": "9.39.4",
 "@mimik/eslint-plugin-document-env": "^2.0.8",
+ "@mimik/eslint-plugin-logger": "^1.0.2",
 "@mimik/request-helper": "^2.0.5",
- "@stylistic/eslint-plugin": "5.9.0",
+ "@stylistic/eslint-plugin": "5.10.0",
 "aws-sdk-client-mock": "4.1.0",
- "c8": "10.1.3",
+ "c8": "11.0.0",
 "chai": "6.2.2",
- "eslint": "9.39.2",
+ "eslint": "9.39.4",
 "eslint-plugin-import": "2.32.0",
 "express": "5.2.1",
- "globals": "17.3.0",
+ "globals": "17.4.0",
 "husky": "9.1.7",
 "jsdoc-to-markdown": "9.1.3",
 "mocha": "11.7.5",
 "mochawesome": "7.1.4",
- "sinon": "21.0.1"
+ "sinon": "21.0.2"
 }
 }
@@ -1,8 +0,0 @@
- {
- "permissions": {
- "allow": [
- "Bash(npx mocha:*)",
- "Bash(done)"
- ]
- }
- }
package/.husky/pre-commit DELETED
@@ -1,2 +0,0 @@
- #!/bin/sh
- npm run commit-ready
package/.husky/pre-push DELETED
@@ -1,2 +0,0 @@
- #!/bin/sh
- npm run test