@pixlcore/xyplug-s3 1.0.0

package/LICENSE.md ADDED
MIT License

Copyright (c) 2025 Joseph Huckaby

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
<p align="center"><img src="https://raw.githubusercontent.com/pixlcore/xyplug-s3/refs/heads/main/logo.png" height="128" alt="S3"/></p>
<h1 align="center">S3 Toolbox</h1>

An AWS S3 event plugin for the [xyOps Workflow Automation System](https://xyops.io). It can upload, download, move, copy, list, grep, and delete files in S3 buckets, and is designed to work naturally with xyOps job input and output files.

## Requirements

- `Node.js`
- `npx`
- AWS credentials with permission to access your target S3 bucket

## Environment Variables

Create a [Secret Vault](https://xyops.io/docs/secrets) in xyOps and assign this Plugin to it. Add:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

These should be credentials for an IAM user or role with the minimum S3 permissions needed for your workflow.

## Data Collection

This plugin does not collect, store, or transmit user data anywhere except to the configured AWS S3 service endpoint. AWS may log requests according to its own policies.

## Overview

The plugin exposes a toolset with seven tools:

- [Upload Files](#upload-files)
- [Download Files](#download-files)
- [Delete Files](#delete-files)
- [Move Files](#move-files)
- [Copy Files](#copy-files)
- [List Files](#list-files)
- [Grep Files](#grep-files)

All tools operate on a single configured bucket and region per job run.

## Common Parameters

These parameters are always present regardless of the selected tool:

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Region ID` | Yes | AWS region containing the bucket, e.g. `us-east-1`. |
| `Bucket Name` | Yes | The S3 bucket to operate on. |

## General Notes

- `Remote Path` values are S3 key prefixes. Leave blank to operate at the bucket root.
- For folder-like prefixes, it is best to include a trailing slash, e.g. `incoming/` or `logs/2026/03/`.
- `Filename Pattern` uses glob-style matching and is applied to filenames, not full directory paths.
- `Older Than` and `Newer Than` can be specified as raw seconds or friendly text like `7 days`, `12 hours`, or `30 minutes`.
- `Maximum Files` is especially useful when combined with `Sort`.
- Progress is reported back to xyOps while large transfers are in progress.

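The plugin's own time parser is not shown in this README, but relative-time strings of that form can be normalized to seconds with a small helper. This is a minimal sketch; `parseRelativeTime` is a hypothetical name, not part of the plugin:

```javascript
// Hypothetical sketch: normalize a relative-time value to seconds.
// Accepts raw numbers, numeric strings, or text like "7 days" / "12 hours".
function parseRelativeTime(value) {
	if (typeof value === 'number') return value;
	const m = String(value).trim().match(/^(\d+(?:\.\d+)?)\s*(second|minute|hour|day|week)s?$/i);
	if (!m) return parseInt(value, 10) || 0;
	const mult = { second: 1, minute: 60, hour: 3600, day: 86400, week: 604800 };
	return Math.round(parseFloat(m[1]) * mult[m[2].toLowerCase()]);
}

console.log(parseRelativeTime('7 days'));    // 604800
console.log(parseRelativeTime('12 hours'));  // 43200
console.log(parseRelativeTime('30 minutes')); // 1800
console.log(parseRelativeTime(3600));        // 3600
```

Either form resolves to the same thing: a number of seconds compared against each object's modification date.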
57
## Tool Reference

### Upload Files

Uploads local files into S3.

In normal xyOps usage, if you leave `Local Path` blank, the plugin uploads files from the job temp directory. This means the easiest way to upload files to S3 is to attach them as job inputs or pass them from upstream workflow steps. The user does not need to know where those files live on disk, because xyOps and xySat handle that automatically.

If you do want to upload from a specific directory on the server, you can set `Local Path` explicitly.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Local Path` | No | Base local directory to upload from. Leave blank to use the job temp directory and any job input files already placed there by xyOps. |
| `Filename Pattern` | No | Optional glob to limit which local files are uploaded. |
| `Remote Path` | No | Base S3 prefix to upload into. Leave blank for the bucket root. |
| `Compress Files (Gzip)` | No | Gzip-compress files during upload. The plugin also appends `.gz` to the destination filename. |
| `Custom S3 Params` | No | JSON object passed to S3 for uploaded objects, for settings such as `ACL`, `StorageClass`, `ContentType`, or `CacheControl`. |

Notes:

- Compression is streamed directly to S3. Files are not written to a temporary `.gz` file first, so disk usage stays low.
- The response returns the uploaded file list in job `data`.

Example custom S3 params:

```json
{
  "ACL": "public-read",
  "StorageClass": "STANDARD_IA"
}
```

Example static asset upload params:

```json
{
  "ContentType": "text/css",
  "CacheControl": "public, max-age=86400"
}
```

### Download Files

Downloads files from S3 to the local machine running the job.

By default, if you leave `Local Path` blank, files are downloaded into the job temp directory. Also by default, `Attach Files` is enabled, which means the downloaded files are attached to the job output and become available to downstream workflow steps.

You can override either behavior:

- Set `Local Path` to save files somewhere else.
- Uncheck `Attach Files` if you want local files only and do not want them attached to job outputs.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Remote Path` | No | Base S3 prefix to download from. Leave blank for the bucket root. |
| `Filename Pattern` | No | Optional glob to limit which remote files are downloaded. |
| `Local Path` | No | Destination directory on local disk. Leave blank to use the xyOps job temp directory. |
| `Decompress Files (Gunzip)` | No | Automatically gunzip downloaded files and strip a trailing `.gz` from the local filename when present. |
| `Delete Files` | No | Delete the remote S3 files after they are successfully downloaded. |
| `Attach Files` | No | Attach downloaded files to the job output. Enabled by default. |
| `Maximum Files` | No | Limit how many files are downloaded. `0` means no limit. |
| `Sort Files` | No | Sort before downloading: `newest`, `oldest`, `largest`, or `smallest`. Useful with `Maximum Files`. |

Notes:

- This tool returns both the matched file metadata and the total transferred byte count in job `data`.
- `Delete Files` turns the tool into a download-and-consume action, which is useful for queue-like workflows.

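For example, combining `Sort Files` and `Maximum Files` to fetch only the ten newest objects might look like this on the wire (parameter ids match the Local Testing examples and the plugin manifest; the bucket and prefix here are illustrative):

```json
{
  "xy": 1,
  "params": {
    "region": "us-east-1",
    "bucket": "my-bucket",
    "tool": "downloadFiles",
    "remotePath": "incoming/",
    "sort": "newest",
    "max": 10
  }
}
```
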
125
### Delete Files

Deletes files from S3.

Use this tool when you want to purge objects matching a prefix, filename pattern, age filter, or size/date ordering strategy.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Remote Path` | No | Base S3 prefix to delete from. Leave blank for the bucket root. |
| `Filename Pattern` | No | Optional glob to limit which remote files are deleted. |
| `Older Than` | No | Delete only files older than this relative time, e.g. `7 days` or `3600`. |
| `Maximum Files` | No | Limit how many files are deleted. `0` means no limit. |
| `Sort Files` | No | Sort before deleting: `oldest`, `newest`, `largest`, or `smallest`. Useful with `Maximum Files`. |
| `Dry Run` | No | Preview the matched files without actually deleting anything. |

Notes:

- Start with `Dry Run` enabled when building destructive workflows.
- This tool returns the matched file metadata and total byte count in job `data`.

### Move Files

Moves files from one S3 path to another, optionally across buckets.

This is ideal for archive pipelines, inbox-to-processed flows, or lifecycle-style workflows controlled by xyOps.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Remote Path` | No | Base S3 prefix to move from. |
| `Filename Pattern` | No | Optional glob to limit which remote files are moved. |
| `Destination Path` | No | Base S3 prefix to move files into. Leave blank to preserve the relative path at the destination root. |
| `Destination Bucket` | No | Optional destination bucket. Leave blank to keep the source bucket. |
| `Maximum Files` | No | Limit how many files are moved. `0` means no limit. |
| `Sort Files` | No | Sort before moving: `oldest`, `newest`, `largest`, or `smallest`. |
| `Custom S3 Params` | No | JSON object applied to destination objects, e.g. `ACL` or `StorageClass`. |
| `Dry Run` | No | Preview the matched files without actually moving them. |

Notes:

- Under the hood, an S3 move is a copy followed by a delete.
- If you need destination metadata like `ACL` or `StorageClass`, specify it in `Custom S3 Params`.
- This tool returns the matched file metadata and total byte count in job `data`.

### Copy Files

Copies files from one S3 path to another, optionally across buckets.

Use this for replication, fan-out, staging, publishing, or storage-class transitions where you want to keep the source objects intact.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Remote Path` | No | Base S3 prefix to copy from. |
| `Filename Pattern` | No | Optional glob to limit which remote files are copied. |
| `Destination Path` | No | Base S3 prefix to copy files into. Leave blank to preserve the relative path at the destination root. |
| `Destination Bucket` | No | Optional destination bucket. Leave blank to keep the source bucket. |
| `Maximum Files` | No | Limit how many files are copied. `0` means no limit. |
| `Sort Files` | No | Sort before copying: `oldest`, `newest`, `largest`, or `smallest`. |
| `Custom S3 Params` | No | JSON object applied to destination objects, e.g. `ACL` or `StorageClass`. |

Notes:

- If you want copied objects to have explicit metadata or a different storage class, specify it in `Custom S3 Params`.
- This tool returns the matched file metadata and total byte count in job `data`.

### List Files

Lists files in S3 and returns metadata without downloading object contents.

This is useful for audits, inventory flows, preflight checks, or driving downstream workflow logic from S3 state.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Remote Path` | No | Base S3 prefix to list from. Leave blank for the bucket root. |
| `Filename Pattern` | No | Optional glob to limit which remote files are included. |
| `Older Than` | No | Include only files older than this relative time. |
| `Newer Than` | No | Include only files newer than this relative time. |
| `Maximum Files` | No | Limit how many files are returned. `0` means no limit. |
| `Sort Files` | No | Sort before returning metadata: `newest`, `oldest`, `largest`, or `smallest`. |

Output:

- `data.files`: Array of file objects containing `key`, `size`, and `mtime`
- `data.bytes`: Total bytes across all matched files

### Grep Files

Searches inside files stored in S3 using a regular expression.

This tool can stream through large remote files, including gzip-compressed files, and extract only the matching lines. It is especially useful for log processing and incident response workflows.

| Parameter | Required | Description |
|-----------|----------|-------------|
| `Remote Path` | No | Base S3 prefix to search under. Leave blank for the bucket root. |
| `Filename Pattern` | No | Optional glob to limit which remote files are searched. |
| `Match Pattern (Regex)` | Yes | JavaScript regular expression pattern used to match lines inside each file. |
| `Decompress Files (Gunzip)` | No | Automatically gunzip files while searching. Enable this for `.gz` log archives. |
| `Older Than` | No | Search only files older than this relative time. |
| `Newer Than` | No | Search only files newer than this relative time. |
| `Maximum Matches` | No | Stop after this many matching lines have been found. |
| `Output Format` | No | Return results as structured JSON (`data`) or write them into an attached text file (`file`). |

Output behavior:

- `JSON Data`: Returns `data.count` and `data.matches`, where each match contains the matched `line` and its source `file` metadata.
- `Text File`: Writes all matched lines into `matched-lines.txt` and attaches that file to the job output.

Notes:

- Matching is line-based.
- Searching is streamed, so memory usage stays low even for very large objects.
- Compressed `.gz` files can be searched without creating decompressed files on disk first.

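The line-based matching can be illustrated with a standalone sketch. This mirrors the behavior conceptually (using the `ERROR|FATAL` pattern from the Local Testing section); it is not the plugin's internal code:

```javascript
// Standalone sketch of line-based regex matching, as the grep tool applies it.
// Each line is tested independently against the pattern.
const match = new RegExp('ERROR|FATAL');

const lines = [
	'2026-03-01 12:00:00 INFO startup complete',
	'2026-03-01 12:00:05 ERROR disk full',
	'2026-03-01 12:00:06 FATAL shutting down'
];

// Keep only matching lines, the way the tool collects its results.
const matches = lines.filter((line) => match.test(line));
console.log(matches.length); // 2
```

With `Output Format` set to `JSON Data`, each kept line would appear in `data.matches` alongside its source file metadata.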
237
## Custom S3 Params

The `Upload Files`, `Move Files`, and `Copy Files` tools support `Custom S3 Params`, which is a raw JSON object passed through to S3 when creating destination objects.

Common examples:

```json
{
  "ACL": "public-read",
  "StorageClass": "STANDARD_IA"
}
```

```json
{
  "ContentType": "application/json",
  "CacheControl": "public, max-age=300"
}
```

Useful keys include:

- `ACL`
- `StorageClass`
- `ContentType`
- `CacheControl`
- `ContentDisposition`
- `ContentEncoding`
- `Metadata`

This is particularly useful when:

- publishing static assets from a workflow
- copying or moving objects into a lower-cost storage tier
- setting HTTP headers on web-facing objects
- forcing a fresh metadata policy on copied objects

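Conceptually, the custom params object is merged into each per-object request the plugin sends to S3. A minimal sketch, where the key names are standard S3 object parameters and the shallow merge shown is illustrative rather than the plugin's exact implementation:

```javascript
// Illustrative sketch: custom S3 params are combined with the per-object
// request fields. Bucket/Key here are hypothetical example values.
const baseParams = {
	Bucket: 'my-bucket',
	Key: 'assets/site.css'
};

const customParams = {
	ContentType: 'text/css',
	CacheControl: 'public, max-age=86400'
};

// Shallow merge: custom keys are added to (and can override) the base request.
const putParams = { ...baseParams, ...customParams };
console.log(putParams.ContentType); // text/css
```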
274
## Local Testing

When invoked by xyOps, the plugin expects a single JSON document on STDIN using the xyOps wire protocol. You can simulate this locally by piping JSON into `node index.js`.

Example upload test using the current directory as the local source:

```json
{
  "xy": 1,
  "params": {
    "region": "us-east-1",
    "bucket": "my-bucket",
    "tool": "uploadFiles",
    "localPath": "./",
    "filespec": "*.txt",
    "remotePath": "test-upload/"
  }
}
```

Example download test:

```json
{
  "xy": 1,
  "params": {
    "region": "us-east-1",
    "bucket": "my-bucket",
    "tool": "downloadFiles",
    "remotePath": "test-upload/",
    "localPath": "./downloads/",
    "attach": false
  }
}
```

Example grep test on compressed logs:

```json
{
  "xy": 1,
  "params": {
    "region": "us-east-1",
    "bucket": "my-bucket",
    "tool": "grepFiles",
    "remotePath": "logs/",
    "filespec": "*.log.gz",
    "match": "ERROR|FATAL",
    "decompress": true,
    "output": "data",
    "maxLines": 100
  }
}
```

Run any of the above like this:

```sh
export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY"
cat sample.json | node index.js
```

Or without a file:

```sh
echo '{"xy":1,"params":{"region":"us-east-1","bucket":"my-bucket","tool":"listFiles","remotePath":"incoming/","filespec":"*.csv"}}' | node index.js
```

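You can also build these request documents in code instead of hand-writing JSON. A minimal sketch (the `make-request.js` filename is hypothetical):

```javascript
// Sketch: construct an xyOps wire-protocol request and print it to stdout,
// suitable for piping into the plugin:
//   node make-request.js | node index.js
const request = {
	xy: 1,
	params: {
		region: 'us-east-1',
		bucket: 'my-bucket',
		tool: 'listFiles',
		remotePath: 'incoming/',
		filespec: '*.csv'
	}
};

// One JSON document on a single line, as the plugin expects on STDIN.
process.stdout.write(JSON.stringify(request) + '\n');
```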
343
## Output Summary

Depending on the selected tool, the plugin returns structured job `data` such as:

- uploaded file paths
- downloaded / moved / copied / deleted file metadata
- total byte counts
- grep match counts and matched lines

For file-producing tools, the plugin can also attach local files to the xyOps job output:

- `Download Files`: attaches downloaded files when `Attach Files` is enabled
- `Grep Files`: attaches `matched-lines.txt` when `Output Format` is set to `Text File`

## License

MIT
package/index.js ADDED
#!/usr/bin/env node

// S3 Utility Plugin for xyOps
// Copyright (c) 2026 PixlCore LLC
// MIT License

const fs = require('fs');
const Path = require('path');
const S3 = require('s3-api');
const Perf = require('pixl-perf');
const picomatch = require('picomatch');

const app = {
	
	async run() {
		// read job from STDIN
		const chunks = [];
		for await (const chunk of process.stdin) chunks.push(chunk);
		this.job = JSON.parse( chunks.join('').trim() );
		this.params = this.job.params;
		
		console.log(`Setting up S3 with bucket: ${this.params.bucket} in region: ${this.params.region}...`);
		
		// setup s3 instance
		this.s3 = new S3({
			bucket: this.params.bucket,
			region: this.params.region
		});
		
		// setup logging hook
		this.s3.attachLogAgent({
			debug: function(level, msg, data) {
				console.log( msg );
			},
			error: function(code, msg, data) {
				console.log( "Error: " + code + ": " + msg );
			}
		});
		
		// setup perf hook
		this.perf = new Perf();
		this.perf.begin();
		this.s3.attachPerfAgent( this.perf );
		
		// jump to tool handler
		let func = 'tool_' + this.params.tool;
		if (!this[func]) return this.fatal('tool', "Unknown tool: " + this.params.tool);
		
		// let the handler do the rest of the work
		await this[func]();
	},
	
	async tool_uploadFiles() {
		// upload files to s3
		let { files } = await this.s3.uploadFiles({
			localPath: this.params.localPath || './',
			remotePath: this.params.remotePath,
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			compress: this.params.compress || false,
			suffix: this.params.compress ? '.gz' : undefined,
			params: this.params.s3params || {},
			progress: this.progress
		});
		
		this.sendFinalResponse({
			code: 0,
			data: { files }
		});
	},
	
	async tool_downloadFiles() {
		// download files from s3
		let { files, bytes } = await this.s3.downloadFiles({
			remotePath: this.params.remotePath,
			localPath: this.params.localPath || './',
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			decompress: this.params.decompress || false,
			strip: this.params.decompress ? /\.gz$/ : undefined,
			max: this.params.max || 0,
			sort: this.params.sort || '',
			delete: this.params.delete || false,
			progress: this.progress
		});
		
		this.sendFinalResponse({
			code: 0,
			files: this.params.attach ? files.map( file => Path.resolve( this.params.localPath || './', Path.basename(file.key) ) ) : [],
			data: { files, bytes }
		});
	},
	
	async tool_deleteFiles() {
		// delete files in s3
		let { files, bytes } = await this.s3.deleteFiles({
			remotePath: this.params.remotePath,
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			older: this.params.older || '',
			max: this.params.max || 0,
			sort: this.params.sort || '',
			dry: this.params.dry || false,
			progress: this.progress
		});
		
		this.sendFinalResponse({
			code: 0,
			files: files.map( file => Path.basename(file.key) ),
			data: { files, bytes }
		});
	},
	
	async tool_moveFiles() {
		// move files in s3
		let { files, bytes } = await this.s3.moveFiles({
			remotePath: this.params.remotePath,
			destPath: this.params.destPath,
			bucket: this.params.destBucket || '',
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			max: this.params.max || 0,
			sort: this.params.sort || '',
			params: this.params.s3params || {},
			dry: this.params.dry || false,
			progress: this.progress
		});
		
		this.sendFinalResponse({
			code: 0,
			data: { files, bytes }
		});
	},
	
	async tool_copyFiles() {
		// copy files in s3
		let { files, bytes } = await this.s3.copyFiles({
			remotePath: this.params.remotePath,
			destPath: this.params.destPath,
			bucket: this.params.destBucket || '',
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			max: this.params.max || 0,
			sort: this.params.sort || '',
			params: this.params.s3params || {},
			progress: this.progress
		});
		
		this.sendFinalResponse({
			code: 0,
			data: { files, bytes }
		});
	},
	
	async tool_listFiles() {
		// list files in s3
		let { files, bytes } = await this.s3.list({
			remotePath: this.params.remotePath,
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			older: this.params.older || '',
			newer: this.params.newer || '',
			max: this.params.max || 0,
			sort: this.params.sort || '',
			progress: this.progress
		});
		
		this.sendFinalResponse({
			code: 0,
			data: { files, bytes }
		});
	},
	
	async tool_grepFiles() {
		// grep files in s3
		let mode = this.params.output;
		let matches = [];
		let count = 0;
		
		await this.s3.grepFiles({
			remotePath: this.params.remotePath,
			filespec: picomatch.makeRe( this.params.filespec || '*' ),
			match: new RegExp( this.params.match || '.+' ),
			decompress: this.params.decompress || false,
			maxLines: this.params.maxLines || 0,
			older: this.params.older || '',
			newer: this.params.newer || '',
			
			iterator: function(line, file) {
				count++;
				if (mode == 'data') matches.push({ file, line });
				else fs.appendFileSync( 'matched-lines.txt', line + "\n" );
			}
		});
		
		this.sendFinalResponse({
			code: 0,
			files: (mode == 'file') ? ['matched-lines.txt'] : null,
			data: { count, matches }
		});
	},
	
	progress(prog) {
		// send progress updates to xyops
		if (prog.loaded && prog.total) {
			console.log( JSON.stringify({ xy: 1, progress: prog.loaded / prog.total }) );
		}
	},
	
	fatal(code, description) {
		// Emit an error response and exit.
		return this.sendFinalResponse({ code, description });
	},
	
	sendFinalResponse(payload) {
		// Emit a final XYWP message and exit.
		payload.xy = 1;
		if (this.perf) {
			this.perf.end();
			payload.perf = this.perf.metrics();
		}
		process.stdout.write(`${JSON.stringify(payload)}\n`, () => process.exit(0));
	}
};

app.run().catch((err) => {
	// Catch-all handler for unexpected errors.
	console.error( err );
	return app.fatal("error", err && err.message ? err.message : "Unknown error");
});
package/logo.png ADDED
Binary file
package/package.json ADDED
{
  "name": "@pixlcore/xyplug-s3",
  "version": "1.0.0",
  "description": "An AWS S3 utility plugin for the xyOps workflow automation system.",
  "author": "Joseph Huckaby <jhuckaby@pixlcore.com>",
  "homepage": "https://github.com/pixlcore/xyplug-s3",
  "license": "MIT",
  "bin": {
    "xyplug-s3": "index.js"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/pixlcore/xyplug-s3"
  },
  "bugs": {
    "url": "https://github.com/pixlcore/xyplug-s3/issues"
  },
  "keywords": [
    "xyops",
    "s3",
    "aws"
  ],
  "dependencies": {
    "picomatch": "4.0.4",
    "pixl-perf": "^1.0.9",
    "s3-api": "^2.0.22"
  },
  "publishConfig": {
    "access": "public"
  }
}
package/xyops.json ADDED
{
  "type": "xypdf",
  "description": "xyOps Portable Data Object",
  "version": "1.0",
  "items": [
    {
      "type": "plugin",
      "data": {
        "id": "pmn6zec9v4vn5e7s3",
        "title": "S3 Toolbox",
        "enabled": true,
        "type": "event",
        "command": "npx -y @pixlcore/xyplug-s3@1.0.0",
        "script": "",
        "kill": "parent",
        "notes": "Upload, download, move, copy, list, grep, and delete your files in S3 buckets.",
        "icon": "aws",
        "env": {
          "AWS_ACCESS_KEY_ID": "Your AWS `accessKeyId` credential.",
          "AWS_SECRET_ACCESS_KEY": "Your AWS `secretAccessKey` credential."
        },
        "params": [
          {
            "id": "region",
            "title": "Region ID",
            "type": "text",
            "value": "us-east-1",
            "caption": "The AWS Region ID where your bucket resides, e.g. `us-east-1`.",
            "required": true
          },
          {
            "id": "bucket",
            "title": "Bucket Name",
            "type": "text",
            "value": "",
            "caption": "The name of your S3 bucket, e.g. `mybucket01`.",
            "required": true
          },
          {
            "id": "tool",
            "title": "Tool Select",
            "type": "toolset",
            "caption": "",
            "data": {
              "tools": [
                {
                  "id": "uploadFiles",
                  "title": "Upload Files",
                  "description": "Upload job input files to an S3 bucket.",
                  "fields": [
                    {
                      "id": "localPath",
                      "title": "Local Path",
                      "type": "text",
                      "value": "",
                      "caption": "The local path to find files under. Leave blank to use the job temp directory and job input files."
                    },
                    {
                      "id": "filespec",
                      "title": "Filename Pattern",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally filter the input files using a glob pattern, applied to the filenames."
                    },
                    {
                      "id": "remotePath",
                      "title": "Remote Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to store files under. Leave blank to store at the root level."
                    },
                    {
                      "id": "compress",
                      "title": "Compress Files (Gzip)",
                      "type": "checkbox",
                      "value": false,
                      "caption": "Optionally gzip-compress all files before upload. This also adds a `.gz` filename suffix."
                    },
                    {
                      "id": "s3params",
                      "title": "Custom S3 Params",
                      "type": "json",
                      "value": {},
                      "caption": "Optionally specify parameters to the S3 API, e.g. ACL and Storage Class."
                    }
                  ]
                },
                {
                  "id": "downloadFiles",
                  "title": "Download Files",
                  "description": "Download S3 files and attach to the job output.",
                  "fields": [
                    {
                      "id": "remotePath",
                      "title": "Remote Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to fetch files from. Leave blank to fetch files at the root level."
                    },
                    {
                      "id": "filespec",
                      "title": "Filename Pattern",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally filter the remote files using a glob pattern, applied to the filenames."
                    },
                    {
                      "id": "localPath",
                      "title": "Local Path",
                      "type": "text",
                      "value": "",
                      "caption": "The local path to save files in. Leave blank to use the job temp directory."
                    },
                    {
                      "id": "decompress",
                      "title": "Decompress Files (Gunzip)",
                      "type": "checkbox",
                      "value": false,
                      "caption": "Optionally decompress all gzipped files after download. This also strips the `.gz` filename suffix if found."
                    },
                    {
                      "id": "delete",
                      "title": "Delete Files",
                      "type": "checkbox",
                      "value": false,
                      "caption": "Optionally delete all remote S3 files after successful download."
                    },
                    {
                      "id": "attach",
                      "title": "Attach Files",
                      "type": "checkbox",
                      "value": true,
                      "caption": "Optionally attach all downloaded files to the job output."
                    },
                    {
                      "id": "max",
                      "title": "Maximum Files",
                      "type": "text",
                      "variant": "number",
                      "value": 0,
                      "caption": "Optionally set a maximum number of files to download."
                    },
                    {
                      "id": "sort",
                      "title": "Sort Files",
                      "type": "select",
                      "value": "Newest First [newest], Oldest First [oldest], Largest First [largest], Smallest First [smallest]",
                      "caption": "Optionally sort the files before downloading. Useful when combined with maximum."
                    }
                  ]
                },
                {
                  "id": "deleteFiles",
                  "title": "Delete Files",
                  "description": "Delete remote S3 files.",
                  "fields": [
                    {
                      "id": "remotePath",
                      "title": "Remote Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to fetch files from. Leave blank to fetch files at the root level."
                    },
                    {
                      "id": "filespec",
                      "title": "Filename Pattern",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally filter the remote files using a glob pattern, applied to the filenames."
                    },
                    {
                      "id": "older",
                      "title": "Older Than",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally filter the S3 files based on their mod date, so they must be older than the specified number of seconds (or specify a string like `7 days`)."
                    },
                    {
                      "id": "max",
                      "title": "Maximum Files",
                      "type": "text",
                      "variant": "number",
                      "value": 0,
                      "caption": "Optionally set a maximum limit on the number of files to delete."
                    },
                    {
                      "id": "sort",
                      "title": "Sort Files",
                      "type": "select",
                      "value": "Oldest First [oldest], Newest First [newest], Largest First [largest], Smallest First [smallest]",
                      "caption": "Optionally sort the files before deleting. Useful when combined with maximum."
                    },
                    {
                      "id": "dry",
                      "title": "Dry Run",
                      "type": "checkbox",
                      "value": false,
                      "caption": "When checked, will perform a 'dry run' and not actually perform the deletes."
                    }
                  ]
                },
                {
                  "id": "moveFiles",
                  "title": "Move Files",
                  "description": "Move remote S3 files to another path and/or bucket.",
                  "fields": [
                    {
                      "id": "remotePath",
                      "title": "Remote Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to move files from. Leave blank to move files from the root level."
                    },
                    {
                      "id": "filespec",
                      "title": "Filename Pattern",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally filter the remote files using a glob pattern, applied to the filenames."
                    },
                    {
                      "id": "destPath",
                      "title": "Destination Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to move files to. Leave blank to keep the same relative path at the destination root."
                    },
                    {
                      "id": "destBucket",
                      "title": "Destination Bucket",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally override the destination bucket name. Leave blank to use the source bucket configured above."
                    },
                    {
                      "id": "max",
                      "title": "Maximum Files",
                      "type": "text",
                      "variant": "number",
                      "value": 0,
                      "caption": "Optionally set a maximum limit on the number of files to move."
                    },
                    {
                      "id": "sort",
                      "title": "Sort Files",
                      "type": "select",
                      "value": "Oldest First [oldest], Newest First [newest], Largest First [largest], Smallest First [smallest]",
                      "caption": "Optionally sort the files before moving. Useful when combined with maximum."
                    },
                    {
                      "id": "s3params",
                      "title": "Custom S3 Params",
                      "type": "json",
                      "value": {},
                      "caption": "Optionally specify parameters to the S3 API for the destination objects, e.g. ACL and Storage Class."
                    },
                    {
                      "id": "dry",
                      "title": "Dry Run",
                      "type": "checkbox",
                      "value": false,
                      "caption": "When checked, will perform a 'dry run' and not actually perform the moves."
                    }
                  ]
                },
                {
                  "id": "copyFiles",
                  "title": "Copy Files",
                  "description": "Copy remote S3 files to another path and/or bucket.",
                  "fields": [
                    {
                      "id": "remotePath",
                      "title": "Remote Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to copy files from. Leave blank to copy files from the root level."
                    },
                    {
                      "id": "filespec",
                      "title": "Filename Pattern",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally filter the remote files using a glob pattern, applied to the filenames."
                    },
                    {
                      "id": "destPath",
                      "title": "Destination Path",
                      "type": "text",
                      "value": "",
                      "caption": "The base S3 path to copy files to. Leave blank to keep the same relative path at the destination root."
                    },
                    {
                      "id": "destBucket",
                      "title": "Destination Bucket",
                      "type": "text",
                      "value": "",
                      "caption": "Optionally override the destination bucket name. Leave blank to use the source bucket configured above."
298
+ },
299
+ {
300
+ "id": "max",
301
+ "title": "Maximum Files",
302
+ "type": "text",
303
+ "variant": "number",
304
+ "value": 0,
305
+ "caption": "Optionally set a maximum limit on the number of files to copy."
306
+ },
307
+ {
308
+ "id": "sort",
309
+ "title": "Sort Files",
310
+ "type": "select",
311
+ "value": "Oldest First [oldest], Newest First [newest], Largest First [largest], Smallest First [smallest]",
312
+ "caption": "Optionally sort the files before copying. Useful when combined with maximum."
313
+ },
314
+ {
315
+ "id": "s3params",
316
+ "title": "Custom S3 Params",
317
+ "type": "json",
318
+ "value": {},
319
+ "caption": "Optionally specify parameters to the S3 API for the destination objects, for e.g. ACL and Storage Class."
320
+ }
321
+ ]
322
+ },
323
+ {
324
+ "id": "listFiles",
325
+ "title": "List Files",
326
+ "description": "List remote S3 files with optional filter, and return metadata.",
327
+ "fields": [
328
+ {
329
+ "id": "remotePath",
330
+ "title": "Remote Path",
331
+ "type": "text",
332
+ "value": "",
333
+ "caption": "The base S3 path to list files from. Leave blank to list files at the root level."
334
+ },
335
+ {
336
+ "id": "filespec",
337
+ "title": "Filename Pattern",
338
+ "type": "text",
339
+ "value": "",
340
+ "caption": "Optionally filter the remote files using a glob pattern, applied to the filenames."
341
+ },
342
+ {
343
+ "id": "older",
344
+ "title": "Older Than",
345
+ "type": "text",
346
+ "value": "",
347
+ "caption": "Optionally filter the S3 files based on their mod date, so they must be older than the specified number of seconds (or specify a string like `7 days`)."
348
+ },
349
+ {
350
+ "id": "newer",
351
+ "title": "Newer Than",
352
+ "type": "text",
353
+ "value": "",
354
+ "caption": "Optionally filter the S3 files based on their mod date, so they must be newer than the specified number of seconds (or specify a string like `7 days`)."
355
+ },
356
+ {
357
+ "id": "max",
358
+ "title": "Maximum Files",
359
+ "type": "text",
360
+ "variant": "number",
361
+ "value": 0,
362
+ "caption": "Optionally set a maximum number of files to return."
363
+ },
364
+ {
365
+ "id": "sort",
366
+ "title": "Sort Files",
367
+ "type": "select",
368
+ "value": "Newest First [newest], Oldest First [oldest], Largest First [largest], Smallest First [smallest]",
369
+ "caption": "Optionally sort the files before returning metadata."
370
+ }
371
+ ]
372
+ },
373
+ {
374
+ "id": "grepFiles",
375
+ "title": "Grep Files",
376
+ "description": "Search inside remote S3 files for regular expression match.",
377
+ "fields": [
378
+ {
379
+ "id": "remotePath",
380
+ "title": "Remote Path",
381
+ "type": "text",
382
+ "value": "",
383
+ "caption": "The base S3 path to search under. Leave blank to search from the root level."
384
+ },
385
+ {
386
+ "id": "filespec",
387
+ "title": "Filename Pattern",
388
+ "type": "text",
389
+ "value": "",
390
+ "caption": "Optionally filter the remote files using a glob pattern, applied to the filenames."
391
+ },
392
+ {
393
+ "id": "match",
394
+ "title": "Match Pattern (Regex)",
395
+ "type": "text",
396
+ "value": "",
397
+ "caption": "The regular expression pattern to search for inside each remote file.",
398
+ "required": true
399
+ },
400
+ {
401
+ "id": "decompress",
402
+ "title": "Decompress Files (Gunzip)",
403
+ "type": "checkbox",
404
+ "value": false,
405
+ "caption": "Automatically decompress all gzipped files during download before searching their contents."
406
+ },
407
+ {
408
+ "id": "older",
409
+ "title": "Older Than",
410
+ "type": "text",
411
+ "value": "",
412
+ "caption": "Optionally filter the S3 files based on their mod date, so they must be older than the specified number of seconds (or specify a string like `7 days`)."
413
+ },
414
+ {
415
+ "id": "newer",
416
+ "title": "Newer Than",
417
+ "type": "text",
418
+ "value": "",
419
+ "caption": "Optionally filter the S3 files based on their mod date, so they must be newer than the specified number of seconds (or specify a string like `7 days`)."
420
+ },
421
+ {
422
+ "id": "maxLines",
423
+ "title": "Maximum Matches",
424
+ "type": "text",
425
+ "variant": "number",
426
+ "value": 1000,
427
+ "caption": "Optionally stop after this many matching lines have been found."
428
+ },
429
+ {
430
+ "id": "output",
431
+ "title": "Output Format",
432
+ "type": "select",
433
+ "value": "JSON Data [data], Text File [file]",
434
+ "caption": "Select whether you want the matched lines returned as job JSON data, or attached as a text file."
435
+ }
436
+ ]
437
+ }
438
+ ]
439
+ }
440
+ }
441
+ ]
442
+ }
443
+ }
444
+ ]
445
+ }