@63klabs/cache-data 1.2.8 → 1.2.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,341 +0,0 @@
1
- # Endpoint Properties, Methods, and Use
2
-
3
- TODO
4
-
5
- ## Monitoring and Logging
6
-
7
- ### tools.Timer
8
-
9
- In its simplest form we can do the following:
10
-
11
- ```js
12
- /*
13
- Assuming:
14
- const { tools, cache, endpoint } = require('@63klabs/cache-data');
15
- */
16
-
17
- const timerTaskGetGames = new tools.Timer("Getting games", true); // We give it a name for logging, and we set to true so the timer starts right away
18
-
19
- /* A block of code we want to execute and get timing for */
20
- // do something
21
- // do something
22
-
23
- timerTaskGetGames.stop(); // if debug level is >= 3 (DebugAndLog.DIAG) it will log the elapsed time in ms
24
- ```
25
-
26
- The above code will create a timer which we can access by the variable name `timerTaskGetGames`. Since we set the second parameter to `true` it will start the timer upon creation.
27
-
28
- Then a block of code will execute.
29
-
30
- Then we stop the timer using `.stop()` and if the logging level is 3 or greater it will send a log entry with the elapsed time to the console.
31
-
32
- You are able to get the current elapsed time in milliseconds from a running Timer by calling `const ms = timerVarName.elapsed()`.
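Conceptually, the timer pattern above boils down to something like this minimal sketch (illustrative only, not the package's implementation):

```javascript
// Minimal sketch of the Timer pattern (illustrative; not the package source)
class SimpleTimer {
  constructor(name, startNow = false) {
    this.name = name;
    this.startTime = null;
    this.elapsedMs = null;
    if (startNow) this.start();
  }
  start() {
    this.startTime = Date.now();
  }
  // returns elapsed ms so far without stopping the timer
  elapsed() {
    return Date.now() - this.startTime;
  }
  stop() {
    this.elapsedMs = this.elapsed();
    // a real implementation would log here when the log level allows it
    return this.elapsedMs;
  }
}

const timer = new SimpleTimer("Getting games", true); // starts immediately
// ... do something ...
const ms = timer.stop(); // elapsed time in milliseconds
```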
33
-
34
- ### tools.DebugAndLog
35
-
36
- ```js
37
- /*
38
- Assuming:
39
- const { tools, cache, endpoint } = require('@63klabs/cache-data');
40
- */
41
-
42
- /* increase the log level - comment out when not needed */
43
- tools.DebugAndLog.setLogLevel(5, "2022-02-28T04:59:59Z"); // we can increase the debug level with an expiration
44
-
45
- tools.DebugAndLog.debug("Hello World");
46
- tools.DebugAndLog.msg("The sky is set to be blue today");
47
- tools.DebugAndLog.diag("Temperature log:", log);
48
-
49
- try {
50
- // some code
51
- } catch (error) {
52
- tools.DebugAndLog.error("We have an error in try/catch 1", error);
53
- }
54
-
55
- try {
56
- // some code
57
- } catch (error) {
58
- tools.DebugAndLog.warn("We have an error but will log it as a warning in try/catch 2", error);
59
- }
60
- ```
61
-
62
- Before calling `Config.init()` you can set the log level using `DebugAndLog.setLogLevel()`. If you set the log level after calling `Config.init()` OR after calling any `DebugAndLog` function, you will get an error. This is because a default log level has already been set, and changing the log level after a script has begun is not allowed.
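This lock-after-first-use behavior can be modeled roughly as follows (a sketch of the concept, not the package source):

```javascript
// Sketch: a log level that locks once logging has begun (illustrative)
let logLevel = 0;
let levelLocked = false;

function setLogLevel(level) {
  if (levelLocked) {
    throw new Error("Cannot change log level after logging has begun");
  }
  logLevel = level;
}

function logMessage(msg) {
  levelLocked = true; // the first log call locks the level
  if (logLevel >= 1) console.log(`[MSG] ${msg}`);
}

setLogLevel(5);      // OK: nothing has been logged yet
logMessage("Hello"); // locks the level
let changeFailed = false;
try {
  setLogLevel(2);    // throws: the level is locked
} catch (e) {
  changeFailed = true;
}
```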
63
-
64
- There are six (6) logging functions.
65
-
66
- ```js
67
- DebugAndLog.error(msgStr, obj); // logs at ALL logging levels
68
- DebugAndLog.warn(msgStr, obj); // logs at ALL logging levels
69
- DebugAndLog.log(msgStr, tagStr, obj); // logs at ALL logging levels
70
- DebugAndLog.msg(msgStr, obj); // logs at level 1 and above
71
- DebugAndLog.diag(msgStr, obj); // logs at level 3 and above
72
- DebugAndLog.debug(msgStr, obj); // logs at level 5
73
- ```
74
-
75
- In the above the `obj` parameter is optional and is an object you wish to log. Be careful of logging objects that may contain sensitive information.
76
-
77
- Choose the method based on how verbose you want your logging to be at the various log levels.
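The level gating described above can be modeled with a small sketch (illustrative, not the package source; `sink` is a stand-in for the console):

```javascript
// Sketch of level-gated logging (illustrative, not the package source)
function makeLogger(level, sink = (line) => console.log(line)) {
  const emit = (tag, msg) => sink(`[${tag}] ${msg}`);
  return {
    error: (msg) => emit("ERROR", msg),            // all levels
    warn:  (msg) => emit("WARN", msg),             // all levels
    log:   (msg, tag = "LOG") => emit(tag, msg),   // all levels
    msg:   (msg) => { if (level >= 1) emit("MSG", msg); },
    diag:  (msg) => { if (level >= 3) emit("DIAG", msg); },
    debug: (msg) => { if (level >= 5) emit("DEBUG", msg); },
  };
}

const lines = [];
const logger = makeLogger(3, (line) => lines.push(line));
logger.error("boom");   // always emitted
logger.diag("details"); // emitted at level 3 and above
logger.debug("trace");  // suppressed below level 5
// lines is now ["[ERROR] boom", "[DIAG] details"]
```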
78
-
79
- Note that `DebugAndLog.log(msgStr, tagStr)` allows you to add a tag. If a tag is not provided `LOG` will be used and your log entry will look like `[LOG] your message`.
80
-
81
- If you provide `TEMP` as a tag ('temperature' for example) then the log entry will look something like this: `[TEMP] your message`.
82
-
83
- ## Sanitize and Obfuscate functions
84
-
85
- These functions attempt to scrub items labeled as 'secret', 'key', 'token', and 'Authorization' from objects for logging purposes.
86
-
87
- Sanitization is also performed on objects passed to the DebugAndLog logging functions.
88
-
89
- ### Sanitize
90
-
91
- You can pass an object to sanitize for logging purposes.
92
-
93
- NOTE: This is a tool that attempts to sanitize and may miss sensitive information. Inspect the [regular expression used for performing search](https://regex101.com/r/IJp35p/3) for more information. Care should be taken when logging objects for purposes of debugging.
94
-
95
- What it attempts to do:
96
-
97
- - Finds object keys with 'secret', 'key', and 'token' in the name and obfuscates their values.
98
- - It checks string values for key:value and key=value pairs and obfuscates the value side if the key contains the words 'secret', 'key', or 'token'. For example, parameters in a query string `https://www.example.com?client=435&key=1234EXAMPLE783271234567` would produce `https://www.example.com?client=435&key=******4567`
99
- It checks for 'Authorization' object keys and sanitizes the value.
100
- - It checks for multi-value (arrays) of object keys named with secret, key, or token such as `"Client-Secrets":[123456789,1234567890,90987654321]`
101
-
102
- ```JavaScript
103
- // Note: These fake secrets are hard-coded for demo/test purposes only. NEVER hard-code secrets!
104
- const obj = {
105
- secret: "98765-EXAMPLE-1234567890efcd",
106
- apiKey: "123456-EXAMPLE-123456789bcea",
107
- kbToken: "ABCD-EXAMPLE-12345678901234567890",
108
- queryString: "?site=456&secret=12345EXAMPLE123456&b=1",
109
- headers: {
110
- Authorization: "Basic someBase64EXAMPLE1234567"
111
- }
112
- };
113
-
114
- console.log("My Sanitized Object", tools.sanitize(obj));
115
- /* output: My Sanitized Object {
116
- secret: '******efcd',
117
- apiKey: '******bcea',
118
- kbToken: '******7890',
119
- queryString: '?site=456&secret=******3456&b=1',
120
- headers: { Authorization: 'Basic ******4567' }
121
- }
122
- */
123
- ```
124
-
125
- > It is best to avoid logging ANY data that contains sensitive information. While this function provides an extra layer of protection, it should be used sparingly for debugging purposes (not on-going logging) in non-production environments.
126
-
127
- ### Obfuscate
128
-
129
- You can pass a string to obfuscate.
130
-
131
- For example, `12345EXAMPLE7890` will return `******7890`.
132
-
133
- By default, asterisks pad the left-hand side and only the last 4 characters are kept on the right. The returned string has a fixed length regardless of the input length, which hides the original length of the string. However, the right side will never reveal more than 25% of the string (rounded up by one character, so a 2-character string still reveals its final character).
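Those rules can be sketched as follows (a rough model of the described behavior, not the package source):

```javascript
// Sketch of the described obfuscation rules (illustrative, not the package source)
function obfuscateSketch(str, { keep = 4, char = "*", len = 10 } = {}) {
  // never reveal more than 25% of the string (rounded up one character)
  const maxKeep = Math.ceil(str.length * 0.25);
  const kept = Math.min(keep, maxKeep);
  // fixed output length hides the original string length
  return char.repeat(len - kept) + str.slice(-kept);
}

console.log(obfuscateSketch("12345EXAMPLE7890"));
// "******7890"

console.log(obfuscateSketch("EXAMPLE1234567890123456789", { keep: 6, char: "X", len: 16 }));
// "XXXXXXXXXX456789"
```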
134
-
135
- Default options can be changed by passing an options object.
136
-
137
- ```JavaScript
138
- const str = "EXAMPLE1234567890123456789";
139
-
140
- console.log( tools.obfuscate(str) );
141
- // output: ******6789
142
-
143
- const opt = { keep: 6, char: 'X', len: 16 };
144
- console.log( tools.obfuscate(str, opt) );
145
- // output: XXXXXXXXXX456789
146
- ```
147
-
148
- ### AWS-SDK
149
-
150
- The @63klabs/cache-data package will automatically detect and use the correct AWS SDK based on the version of Node.
151
-
152
- Node 16 environments will use AWS-SDK version 2.
153
-
154
- Node 18+ environments will use AWS-SDK version 3.
155
-
156
- Note that `package.json` for @63klabs/cache-data only installs the AWS-SDK in dev environments. This is because AWS Lambda already includes the AWS-SDK without requiring installs, which makes your application lighter and ensures you are always running the most recent SDK release. It also means that AWS SDK v3 is not available in Lambda functions using Node 16, and v2 is not available in Lambda Node >=18 environments.
157
-
158
- Because DynamoDb, S3, and SSM Parameter Store are used by cache-data, only those SDKs are included. A client is provided for each, along with a limited number of commands. To make gets and puts easier, a get and put command is mapped for DynamoDb and S3. (The appropriate underlying commands are used for V2 and V3 so your code doesn't need to change.)
159
-
160
- #### `tools.AWS` Object
161
-
162
- When `tools` is imported, you can use the `tools.AWS` object to perform common read/write operations on S3, DynamoDb, and SSM Parameter Store.
163
-
164
- ```javascript
165
- const { tools } = require('@63klabs/cache-data');
166
-
167
- console.log(`NODE VERSION ${tools.AWS.NODE_VER} USING AWS SDK ${tools.AWS.SDK_VER}`);
168
- console.log(`REGION: ${tools.AWS.REGION}`); // set from Lambda environment variable AWS_REGION
169
-
170
- var getParams = {
171
- Bucket: 'mybucket', // bucket name,
172
- Key: 'hello.txt' // object to get
173
- }
174
-
175
- const result = await tools.AWS.s3.get(getParams);
176
-
177
- let objectData = await result.Body.transformToString(); // V3: Object bodies are readable streams, so we convert to a string
178
- // let objectData = result.Body.toString('utf-8'); // V2: Object bodies are Buffers, so we convert to a string
179
- console.log(`hello.txt Body: ${objectData}`);
180
- // outputs "hello.txt Body: Hello, World!"
181
-
182
- ```
183
-
184
- The `tools.AWS` object provides the following:
185
-
186
- ```js
187
- {
188
- NODE_VER: '20.6.0',
189
- NODE_VER_MAJOR: 20,
190
- NODE_VER_MINOR: 6,
191
- NODE_VER_PATCH: 0,
192
- NODE_VER_MAJOR_MINOR: '20.6',
193
- NODE_VER_ARRAY: [ 20, 6, 0 ],
194
- REGION: "us-east-1", // Set from Node environment process.env.AWS_REGION
195
- SDK_VER: "V3",
196
- SDK_V2: false, // if (tools.AWS.SDK_V2) { console.log('AWS SDK Version 2!'); }
197
- SDK_V3: true, // if (tools.AWS.SDK_V3) { console.log('AWS SDK Version 3!'); }
198
- INFO: { /* an object containing all of the properties listed above */ }
199
- dynamo: {
200
- client: DynamoDBDocumentClient,
201
- put: (params) => client.send(new PutCommand(params)), // const result = await tools.AWS.dynamo.put(params);
202
- get: (params) => client.send(new GetCommand(params)), // const result = await tools.AWS.dynamo.get(params);
203
- scan: (params) => client.send(new ScanCommand(params)), // const result = await tools.AWS.dynamo.scan(params);
204
- delete: (params) => client.send(new DeleteCommand(params)), // const result = await tools.AWS.dynamo.delete(params);
205
- update: (params) => client.send(new UpdateCommand(params)), // const result = await tools.AWS.dynamo.update(params);
206
- sdk: {
207
- DynamoDBClient,
208
- DynamoDBDocumentClient,
209
- GetCommand,
210
- PutCommand
211
- }
212
- },
213
- s3: {
214
- client: S3,
215
- put: (params) => client.send(new PutObjectCommand(params)), // const result = await tools.AWS.s3.put(params)
216
- get: (params) => client.send(new GetObjectCommand(params)), // const result = await tools.AWS.s3.get(params)
217
- sdk: {
218
- S3,
219
- GetObjectCommand,
220
- PutObjectCommand
221
- }
222
-
223
- },
224
- ssm: {
225
- client: SSMClient,
226
- getByName: (query) => client.send(new GetParametersCommand(query)), // const params = await tools.AWS.ssm.getByName(query)
227
- getByPath: (query) => client.send(new GetParametersByPathCommand(query)), // const params = await tools.AWS.ssm.getByPath(query)
228
- sdk: {
229
- SSMClient,
230
- GetParametersByPathCommand,
231
- GetParametersCommand
232
- }
233
- }
234
- }
235
- ```
236
-
237
- Because Node 16 and the AWS SDK v2 are being deprecated, this documentation will mainly cover AWS SDK v3. However, `{DynamoDb, S3, SSM}` are still available when your environment is using Node 16 and AWS SDK v2 by importing `tools` from cache-data and accessing the `AWS` class. (See Using AWS SDK V2 through tools.AWS (Deprecated) below.)
238
-
239
- ##### Using AWS SDK V3 through tools.AWS
240
-
241
- To use the AWS SDK you normally have to import the proper SDKs and libraries, create a client, and then send the commands. The way this is accomplished in version 2 and version 3 of the AWS SDK is slightly different. How to use the AWS SDK is beyond the scope of this package. However, since the package uses reads and writes to S3 objects, DynamoDb tables, and SSM Parameter store, it readily makes these commands available through the `AWS` object from `tools`.
242
-
243
- Also, as a shortcut as you move from Node 16 to Node 18 (and above), the exposed methods do not differ; the correct underlying methods for the loaded SDK are used automatically.
244
-
245
- To use the methods you only need to pass the parameter or query object as you normally would.
246
-
247
- ```javascript
248
- // Given the two parameter/query objects:
249
-
250
- let paramsForPut = {
251
- TableName: 'myTable',
252
- Item: {
253
- 'hash_id': '8e91cef4a27',
254
- 'episode_name': "There's No Disgrace Like Home",
255
- 'air_date': "1990-01-28",
256
- 'production_code': '7G04'
257
- }
258
- }
259
-
260
- let paramsForGet = {
261
- TableName: 'myTable',
262
- Key: {'hash_id': '8e91cef4a27'}
263
- };
264
- ```
265
-
266
- ```javascript
267
- // Using AWS SDK V2
268
- const { DynamoDB } = require('aws-sdk');
269
-
270
- const dbDocClient = new DynamoDB.DocumentClient( {region: 'us-east-1'} );
271
-
272
- const dbPutResult = await dbDocClient.put(paramsForPut).promise();
273
- const dbGetResult = await dbDocClient.get(paramsForGet).promise();
274
- ```
275
-
276
- ```javascript
277
- // Using AWS SDK V3
278
- const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
279
- const { DynamoDBDocumentClient, GetCommand, PutCommand } = require("@aws-sdk/lib-dynamodb");
280
-
281
- const dbClient = new DynamoDBClient({ region: 'us-east-1' });
282
- const dbDocClient = DynamoDBDocumentClient.from(dbClient);
283
-
284
- const dbPutResult = await dbDocClient.send(new PutCommand(paramsForPut));
285
- const dbGetResult = await dbDocClient.send(new GetCommand(paramsForGet));
286
- ```
287
-
288
- ```javascript
289
- // Using tools to handle the SDK version and basic calls for you
290
- const { tools } = require('@63klabs/cache-data');
291
-
292
- const dbPutResult = await tools.AWS.dynamo.put(paramsForPut);
293
- const dbGetResult = await tools.AWS.dynamo.get(paramsForGet);
294
- ```
295
-
296
- Refer to the `tools.AWS` section above for the available variables, methods, and SDK objects.
297
-
298
- For more on creating parameter/query objects for S3, DynamoDb, and SSM Parameter Store:
299
-
300
- - [Amazon S3 examples using SDK for JavaScript (v3)](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_s3_code_examples.html)
301
- - [Using the DynamoDB Document Client](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/dynamodb-example-dynamodb-utilities.html)
303
-
304
- ##### Import Additional Commands
305
-
306
- When using AWS SDK version 3, you can import additional commands and use them with the client provided by `tools.AWS`.
307
-
308
- ```javascript
309
- const { tools } = require('@63klabs/cache-data');
310
- const { DeleteObjectCommand } = require('@aws-sdk/client-s3'); // AWS SDK v3
311
-
312
- const command = new DeleteObjectCommand({
313
- Bucket: "myBucket",
314
- Key: "good-bye.txt"
315
- });
316
-
317
- const response = await tools.AWS.s3.client.send(command);
318
- ```
319
-
320
- ##### Using AWS SDK V2 through tools.AWS (Deprecated)
321
-
322
- Because Node 16 and the AWS SDK v2 are being deprecated, this documentation mainly covers AWS SDK v3. However, `{DynamoDb, S3, SSM}` are still available by importing `tools` from cache-data and accessing the `AWS` class:
323
-
324
- ```js
325
- // NodeJS 16 using AWS SDK v2
326
- const {tools} = require("@63klabs/cache-data");
327
-
328
- // using the provided S3 client
329
- const s3result1 = await tools.AWS.s3.client.putObject(params).promise();
330
-
331
- // using your own client
332
- const s3Client = new tools.AWS.s3.sdk.S3();
333
- const s3result2 = await s3Client.putObject(params).promise();
334
-
335
- // similarly with DynamoDb
336
- const dbResult1 = await tools.AWS.dynamo.client.put(params).promise(); // tools.AWS.dynamo.client uses DynamoDB.DocumentClient
337
-
338
- // using your own DynamoDb Document client
339
- const dbClient = new tools.AWS.dynamo.sdk.DynamoDB.DocumentClient( {region: 'us-east-1'} );
340
- const dbResult2 = await dbClient.put(params).promise();
341
- ```
@@ -1,178 +0,0 @@
1
- # Lambda Optimizations
2
-
3
- Get the most out of your Lambda function!
4
-
5
- - Optimal performance is somewhere between 512MB and 1024MB. 1024MB is recommended.
6
- - Utilize the arm64 architecture for Lambda
7
-
8
- These are general recommendations from AWS and the Lambda developer community. Increasing memory and using the arm64 architecture improves performance, resulting in quicker execution (which can drive down cost). Since you are most likely using this package to call external endpoints, the additional memory helps there as well: more memory comes with more processor power, which can speed up network requests.
9
-
10
- Utilize the following best practices:
11
-
12
- - Take care of initialization outside of handler
13
- - Multi-task with async operations
14
- - Reduce the number of packages
15
- - Turn on X-Ray tracing
16
-
17
- ## Lambda Memory Allocation
18
-
19
- As pointed out in many online resources, including [AWS's own documentation](https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html), Lambda applications should be given more than the default 128MB when using network resources and processing data. I recommend trying 512MB and adjusting depending on your workload and execution experiences. See [Lower AWS Lambda bill by increasing memory by Taavi Rehemägi](https://dashbird.io/blog/lower-aws-lambda-bill-increasing-memory/).
20
-
21
- Example: The charts below reflect 1 million requests over a seven-day period. As you can see, the invocations remained at a high level throughout the seven-day period. There was a dramatic drop in execution time once the memory was increased from 128 to 512MB. Latency was also improved. This also reduced the number of concurrent executions taking place. (The spike in errors was due to a 3rd party endpoint being down.)
22
-
23
- ![Metrics before and after upgrade to 512MB with 1M invocations over a 7 day period](https://github.com/63klabs/cache-data/assets/17443749/0ec98af5-edcf-4e2a-8017-dd17b9c7a11c)
24
-
25
- If you are worried about cost, the Lambda function demonstrated above handles approximately 4.6 million requests a month, each averaging 46ms of Lambda run time. That totals about 211,600 seconds a month (roughly 105,800 GB-seconds at 512MB), which is still within the 400,000 GB-seconds provided by the Free Tier. Without the free tier, the cost would have been around USD $2.00.
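The arithmetic behind that estimate (a quick check you can run yourself):

```javascript
// Rough monthly compute estimate for the example function above
const requestsPerMonth = 4_600_000;
const avgRunTimeMs = 46;

const secondsPerMonth = (requestsPerMonth * avgRunTimeMs) / 1000;
console.log(secondsPerMonth); // 211600 seconds of Lambda run time per month

// At 512MB (0.5GB), billed compute in GB-seconds:
const gbSeconds = secondsPerMonth * 0.5;
console.log(gbSeconds); // 105800 GB-seconds
```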
26
-
27
- ## Use arm64 Architecture
28
-
29
- Regarding the AWS Graviton ARM architecture processor, Amazon touts that it is faster than the default processor and recommends its use.
30
-
31
- When I switched over to arm64, I saw a performance improvement.
32
-
33
- Note that if you are using precompiled packages, they must be compatible with arm64. For example, cache-data works with Lambda Insights and Lambda X-Ray layers. When specifying these Lambda Layers to be used by your function you must specify either the standard version or the arm64 version.
34
-
35
- ## Initialize Outside of Handler
36
-
37
- Run code that initializes your function outside of the handler. If you need to load secrets from Parameter Store, load configuration files, or initialize objects, do this at the top of the script that contains your handler function (typically src/index.js).
38
-
39
- All code outside of the handler function is executed when the script is loaded during a cold start.
40
-
41
- This will increase the cold start time (which is typically just a few hundred milliseconds) but will perform better on subsequent calls.
42
-
43
- Common practice for cache-data is to implement the Config and Cache init in a Configuration script that is imported at the top of the index script and then checked for completion before executing the handler.
44
-
45
- - [index.js example for handler](../00-example-implementation/example-handler.js)
46
- - [configuration example](../00-example-implementation/example-config.js)
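A stripped-down version of this pattern looks roughly like the sketch below (names are illustrative; the linked examples use `Config` and `Cache` from the package):

```javascript
// Sketch: initialization runs at module load (cold start), outside the handler
async function initConfig() {
  // stand-in for loading secrets, config files, and initializing objects
  return { ready: true };
}

const initPromise = initConfig(); // kicked off once, during the cold start

async function handler(event) {
  const config = await initPromise; // check for completion before handling
  return { statusCode: 200, ready: config.ready };
}

module.exports = { handler };
```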
47
-
48
- ## Multi-task with async operations
49
-
50
- Since the cache-data package is built for calling remote endpoints, it utilizes asynchronous functions so that your code doesn't have to wait for a response to perform other tasks that do not require the fetched data right away.
51
-
52
- Async is like doing laundry. When you put a load into the washer, you don't sit there staring at the machine with your hands folded, waiting while it completes all wash cycles. You put a load and then perform other chores until it signals that it is done. Then you put the clothes in the dryer (or hang on a line) and as they dry you perform other tasks.
53
-
54
- But you can't dry your clothes before you wash them, and you can't fold and put away your clothes until they are dry. So there are some tasks that require a previous step to be completed first.
55
-
56
- Cache-data operates like a laundromat with many washers and dryers so that you are never waiting for a single load to complete before starting the next.
57
-
58
- If you need to gather data from multiple endpoints, and the data from one doesn't rely on the data from another, you can dispatch multiple requests at once and wait for all to complete before moving on.
59
-
60
- Also, you can dispatch the requests, proceed with other processes, and then come back and check on the request before moving on. This is done in the example index.js code: the configuration is initialized and left to run on its own while the script continues with other tasks. This buys time (maybe just a few milliseconds) for other work before checking that all initialization tasks have completed.
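Both patterns can be sketched with plain promises (simulated fetches here; in practice these would be calls to cache or endpoint objects from the package):

```javascript
// Simulated endpoint calls that resolve after a short delay
const fetchGames = () => new Promise((resolve) => setTimeout(() => resolve("games"), 10));
const fetchScores = () => new Promise((resolve) => setTimeout(() => resolve("scores"), 10));

async function gatherData() {
  // Pattern 1: dispatch both requests at once and wait for all to complete
  const [games, scores] = await Promise.all([fetchGames(), fetchScores()]);

  // Pattern 2: dispatch, do other work, then come back and await the result
  const pending = fetchScores();        // dispatched, not yet awaited
  const summary = `${games}+${scores}`; // other work proceeds immediately
  const latestScores = await pending;   // check on the request before proceeding

  return { games, scores, summary, latestScores };
}
```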
61
-
62
- ## Reduce the Number of Packages Your Function Requires
63
-
64
- When you deploy your function, you perform an `npm install` in your buildspec or GitHub Actions script. Ensure that you only deploy production dependencies, as you do not need `devDependencies` included with your deployed Lambda function.
65
-
66
- `devDependencies` should only be used for local development and testing.
67
-
68
- Including all dev packages with your deployment (even just 5 direct packages can pull in hundreds of sub-dependencies) will:
69
-
70
- - Create a large function package with longer cold starts
71
- Likely exceed the size limit for inspecting code via the Lambda console
72
- - Introduce security vulnerabilities
73
-
74
- To prevent `devDependencies` from deploying, you can:
75
-
76
- - set the `NODE_ENV` environment variable for your GitHub Action or AWS CodeBuild environment to `production`
77
- run the install with the `--omit=dev` flag: `npm install --omit=dev` (`--production` on older npm versions)
78
-
79
- If you are performing automated tests during the build in CodeBuild or GitHub Actions, then you can perform an install with `devDependencies` first, run the tests, and then perform a prune command to remove the extra packages.
80
-
81
- ```yaml
82
- # buildspec.yml
83
-
84
- version: 0.2
85
-
86
- phases:
87
- install:
88
- runtime-versions:
89
- nodejs: latest
90
- commands:
91
-
92
- # ...
93
-
94
- pre_build:
95
- commands:
96
-
97
- # Install dev so we can run tests, but we will remove dev dependencies later
98
- - npm install --include=dev
99
-
100
- # Run Test
101
- - npm test
102
-
103
- # Remove dev dependencies, keep only production
104
- - npm prune --omit=dev
105
-
106
- build:
107
- commands:
108
- - aws cloudformation package --template template.yml --s3-bucket $S3_ARTIFACTS_BUCKET --output-template template-export.yml
109
-
110
- # ...
111
- ```
112
-
113
- ## Turn on X-Ray and Lambda Insights for Monitoring
114
-
115
- The cache-data package works with X-Ray and Lambda Insights for monitoring your application performance.
116
-
117
- When enabled, X-Ray will trace requests as they move through your application's resources. Each resource needs to have tracing enabled. When a request comes through, it is given a unique identifier which is passed from resource to resource. AWS X-Ray can then generate a map for each request showing how API Gateway, Lambda, S3, DynamoDb, and remote endpoints are connected.
118
-
119
- In addition to enabling tracing, make sure you set `CACHE_DATA_AWS_X_RAY_ON: true` in your Lambda environment variables.
120
-
121
- Lambda Insights provides additional metrics for your Lambda function and can be enabled just by including the Lambda Insights Layer in your function definition.
122
-
123
- Cache-data also provides:
124
-
125
- - Logging methods including `Timer` and `DebugAndLog`.
126
- - Automatic request response logging when using the `ClientRequest` and `Response` objects which can be used in CloudWatch Dashboards.
127
-
128
- For information on using these Class objects see [Features](../features/tools/README.md).
129
-
130
- ## CloudFormation Template Example
131
-
132
- Below is an example of various settings in a CloudFormation template, demonstrating key elements used to enable X-Ray tracing, set environment variables, include the Lambda Insights Layer, and grant your Lambda function execution privileges for using Lambda Insights and X-Ray.
133
-
134
- A more complete example is provided in [example-template-lambda-function.yml](../00-example-implementation/example-template-lambda-function.yml).
135
-
136
- An even more complete application starter template that implements cache-data as an API web service can be found in the [63Klabs GitHub repositories](https://github.com/63Klabs).
137
-
138
- ```yaml
139
- # template.yml
140
- Resources:
141
-
142
- # API Gateway
143
-
144
- WebApi:
145
- Type: AWS::Serverless::Api
146
- Properties:
147
- # ...
148
- TracingEnabled: True
149
-
150
- # Lambda Function
151
-
152
- AppFunction:
153
- Type: AWS::Serverless::Function
154
- Properties:
155
- # ...
156
- Tracing: "Active"
157
-
158
- Layers:
159
- - !Sub "arn:aws:lambda:${AWS::Region}:${InsightsAccount}:layer:LambdaInsightsExtension:${Version}"
160
-
161
- Environment:
162
- Variables:
163
- # ...
164
- LOG_LEVEL: 5 # 0 for prod, 2-5 for non-prod
165
- CACHE_DATA_AWS_X_RAY_ON: true
166
-
167
- # LambdaFunction Execution Role
168
-
169
- LambdaExecutionRole:
170
- Type: AWS::IAM::Role
171
- Properties:
172
-
173
- # ...
174
- # These are for application monitoring via LambdaInsights and X-Ray
175
- ManagedPolicyArns:
176
- - 'arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy'
177
- - 'arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess'
178
- ```