@pingpolls/redisq 0.1.0

# RedisQueue

A lightweight, type-safe Redis-based message queue for Bun with support for delayed messages, retries, and batch processing.

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
[![Built by PingPolls](https://img.shields.io/badge/Built%20by-PingPolls-orange)](https://pingpolls.com)

## Features

- ✨ **Type-safe** - Full TypeScript support with generic types
- 🚀 **Fast** - Built on Bun's native Redis client; processes thousands of messages per second
- ⏰ **Delayed Messages** - Schedule messages for future processing
- 🔄 **Automatic Retries** - Exponential backoff with configurable limits
- 📦 **Batch Processing** - Collect and process messages in batches at scheduled intervals
- 🎯 **Simple API** - Easy to use with minimal configuration
- 🔧 **Flexible** - Works with any Bun-based framework

## Installation

```bash
bun add @pingpolls/redisq
```

> **Note:** This package is built for Bun and requires the Bun runtime.

## Quick Start

```typescript
import RedisQueue from '@pingpolls/redisq';

// Initialize the queue
const queue = new RedisQueue({
  host: '127.0.0.1',
  port: '6379',
  namespace: 'myapp'
});

// Create a queue
await queue.createQueue({ qname: 'emails' });

// Send a message
await queue.sendMessage({
  qname: 'emails',
  message: JSON.stringify({ to: 'user@example.com', subject: 'Hello!' })
});

// Start a worker to process messages
await queue.startWorker('emails', async (message) => {
  const email = JSON.parse(message.message);
  console.log(`Sending email to ${email.to}`);

  // Process your message here
  await sendEmail(email);

  return { success: true };
});
```

## Core Concepts

### Regular Queues

Regular queues process messages individually as they arrive. Perfect for real-time processing such as sending emails, webhooks, or notifications.

### Batch Queues

Batch queues (suffix `:batch`) collect messages over a time period and process them together. Ideal for bulk operations like database imports, report generation, or aggregating analytics.

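Conceptually, a batch queue buckets incoming messages by `batchId` and hands each bucket to the worker on the next tick. A minimal in-memory sketch of that grouping step (the `Pending` shape is illustrative, not the package's internal type):

```typescript
interface Pending {
  batchId: string;
  message: string;
  sent: number;
}

// Group pending messages by batchId, as a batch queue does before each tick.
function groupByBatchId(pending: Pending[]): Map<string, Pending[]> {
  const batches = new Map<string, Pending[]>();
  for (const msg of pending) {
    const group = batches.get(msg.batchId) ?? [];
    group.push(msg);
    batches.set(msg.batchId, group);
  }
  return batches;
}
```
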
## Usage Examples

### SvelteKit

#### Option 1: Direct Import (Recommended for Simple Apps)

Create a queue service in `src/lib/server/queue.ts`:

```typescript
import RedisQueue from '@pingpolls/redisq';

export const queue = new RedisQueue({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: process.env.REDIS_PORT || '6379',
  namespace: 'sveltekit-app'
});

// Initialize queues
export async function initQueues() {
  await queue.createQueue({ qname: 'emails', maxRetries: 3 });
  await queue.createQueue({
    qname: 'analytics:batch',
    every: 60, // Process every 60 seconds
    maxRetries: 2
  });
}
```

Start workers in `src/hooks.server.ts`:

```typescript
import { queue, initQueues } from '$lib/server/queue';

await initQueues();

// Start email worker
queue.startWorker('emails', async (message) => {
  const data = JSON.parse(message.message);
  await sendEmail(data);
  return { success: true };
}, { silent: false });

// Start analytics batch worker
queue.startWorker('analytics:batch', async (batch) => {
  console.log(`Processing ${batch.messages.length} analytics events`);
  await saveToDB(batch.messages);
  return { success: true };
});
```

Use in your routes by importing directly:

```typescript
// src/routes/api/signup/+server.ts
import { queue } from '$lib/server/queue';
import { json } from '@sveltejs/kit';

export async function POST({ request }) {
  const { email, name } = await request.json();

  // Send welcome email
  await queue.sendMessage({
    qname: 'emails',
    message: JSON.stringify({ to: email, template: 'welcome', name })
  });

  // Track signup event
  await queue.sendBatchMessage({
    qname: 'analytics:batch',
    batchId: 'signups',
    message: JSON.stringify({ event: 'signup', email, timestamp: Date.now() })
  });

  return json({ success: true });
}
```

```typescript
// src/routes/dashboard/+page.server.ts
import { queue } from '$lib/server/queue';

export async function load() {
  // Queue a background task
  await queue.sendMessage({
    qname: 'emails',
    message: JSON.stringify({ type: 'daily-report' })
  });

  return {
    message: 'Report queued'
  };
}
```

#### Option 2: Using event.locals (Recommended for Larger Apps)

This approach provides better dependency injection and testing capabilities with SvelteKit 5.

Create the queue service in `src/lib/server/queue.ts`:

```typescript
import RedisQueue from '@pingpolls/redisq';

export function createQueue() {
  return new RedisQueue({
    host: process.env.REDIS_HOST || '127.0.0.1',
    port: process.env.REDIS_PORT || '6379',
    namespace: 'sveltekit-app'
  });
}

export async function initQueues(queue: RedisQueue) {
  await queue.createQueue({ qname: 'emails', maxRetries: 3 });
  await queue.createQueue({
    qname: 'analytics:batch',
    every: 60,
    maxRetries: 2
  });
}
```

Setup in `src/hooks.server.ts`:

```typescript
import type { Handle } from '@sveltejs/kit';
import { createQueue, initQueues } from '$lib/server/queue';

// Create a single queue instance
const queue = createQueue();
await initQueues(queue);

// Start workers
queue.startWorker('emails', async (message) => {
  const data = JSON.parse(message.message);
  await sendEmail(data);
  return { success: true };
});

queue.startWorker('analytics:batch', async (batch) => {
  await saveToDB(batch.messages);
  return { success: true };
});

export const handle: Handle = async ({ event, resolve }) => {
  // Attach queue to event.locals
  event.locals.queue = queue;

  return resolve(event);
};
```

Add TypeScript types in `src/app.d.ts`:

```typescript
import type RedisQueue from '@pingpolls/redisq';

declare global {
  namespace App {
    interface Locals {
      queue: RedisQueue;
    }
  }
}

export {};
```

Now use `event.locals.queue` in your routes:

```typescript
// src/routes/api/signup/+server.ts
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';

export const POST: RequestHandler = async ({ request, locals }) => {
  const { email, name } = await request.json();

  // Access queue from locals
  await locals.queue.sendMessage({
    qname: 'emails',
    message: JSON.stringify({ to: email, template: 'welcome', name })
  });

  await locals.queue.sendBatchMessage({
    qname: 'analytics:batch',
    batchId: 'signups',
    message: JSON.stringify({ event: 'signup', email, timestamp: Date.now() })
  });

  return json({ success: true });
};
```

```typescript
// src/routes/orders/[id]/+page.server.ts
import type { PageServerLoad, Actions } from './$types';

export const load: PageServerLoad = async ({ params, locals }) => {
  // Queue background task
  await locals.queue.sendMessage({
    qname: 'emails',
    message: JSON.stringify({ type: 'order-confirmation', orderId: params.id })
  });

  return {
    orderId: params.id
  };
};

export const actions: Actions = {
  cancel: async ({ params, locals }) => {
    await locals.queue.sendMessage({
      qname: 'emails',
      message: JSON.stringify({ type: 'order-cancellation', orderId: params.id })
    });

    return { success: true };
  }
};
```

**Benefits of using `event.locals`:**
- Single queue instance across your app
- Better for dependency injection and testing
- Cleaner imports in routes
- Type-safe access with TypeScript
- Follows SvelteKit 5 best practices

### Next.js (App Router)

Create the queue in `lib/queue.ts`:

```typescript
import RedisQueue from '@pingpolls/redisq';

export const queue = new RedisQueue({
  host: process.env.REDIS_HOST!,
  port: process.env.REDIS_PORT!,
  namespace: 'nextjs-app'
});
```

Initialize in `app/api/worker/route.ts`:

```typescript
import { queue } from '@/lib/queue';

// Create queues on startup
await queue.createQueue({ qname: 'notifications', maxRetries: 3 });

// Start worker
queue.startWorker('notifications', async (message) => {
  const notification = JSON.parse(message.message);
  await sendPushNotification(notification);
  return { success: true };
});

export async function GET() {
  return Response.json({ status: 'Worker running' });
}
```

Use in API routes:

```typescript
// app/api/orders/route.ts
import { queue } from '@/lib/queue';

export async function POST(request: Request) {
  const order = await request.json();

  // Queue order confirmation email
  await queue.sendMessage({
    qname: 'notifications',
    message: JSON.stringify({ type: 'order', orderId: order.id }),
    delay: 5000 // Send after 5 seconds
  });

  return Response.json({ success: true });
}
```

### Nuxt

Create a queue plugin in `server/plugins/queue.ts`:

```typescript
import RedisQueue from '@pingpolls/redisq';

export default defineNitroPlugin(async (nitroApp) => {
  const queue = new RedisQueue({
    host: process.env.REDIS_HOST || '127.0.0.1',
    port: process.env.REDIS_PORT || '6379',
    namespace: 'nuxt-app'
  });

  await queue.createQueue({ qname: 'tasks', maxRetries: 3 });

  // Start worker
  queue.startWorker('tasks', async (message) => {
    const task = JSON.parse(message.message);
    await processTask(task);
    return { success: true };
  });

  // Make queue available globally
  nitroApp.queue = queue;
});
```

Use in API routes:

```typescript
// server/api/process.post.ts
export default defineEventHandler(async (event) => {
  const body = await readBody(event);

  // useNitroApp() returns the same nitroApp instance the plugin decorated
  const { queue } = useNitroApp();
  await queue.sendMessage({
    qname: 'tasks',
    message: JSON.stringify(body)
  });

  return { queued: true };
});
```

### Hono

```typescript
import { Hono } from 'hono';
import RedisQueue from '@pingpolls/redisq';

const app = new Hono();

const queue = new RedisQueue({
  host: '127.0.0.1',
  port: '6379',
  namespace: 'hono-app'
});

// Initialize queues
await queue.createQueue({ qname: 'webhooks', maxRetries: 3 });
await queue.createQueue({
  qname: 'logs:batch',
  every: 30,
  maxRetries: 2
});

// Start regular queue worker
queue.startWorker('webhooks', async (message) => {
  const webhook = JSON.parse(message.message);
  await fetch(webhook.url, {
    method: 'POST',
    body: JSON.stringify(webhook.data),
    headers: { 'Content-Type': 'application/json' }
  });
  return { success: true };
});

// Start batch queue worker
queue.startWorker('logs:batch', async (batch) => {
  console.log(`Processing ${batch.messages.length} logs`);
  await saveLogsToDatabase(batch.messages);
  return { success: true };
});

// API endpoints
app.post('/webhook', async (c) => {
  const body = await c.req.json();

  await queue.sendMessage({
    qname: 'webhooks',
    message: JSON.stringify(body),
    delay: 1000 // Delay 1 second
  });

  return c.json({ queued: true });
});

app.post('/log', async (c) => {
  const log = await c.req.json();

  await queue.sendBatchMessage({
    qname: 'logs:batch',
    batchId: 'app-logs',
    message: JSON.stringify(log)
  });

  return c.json({ logged: true });
});

export default app;
```

## API Reference

### Constructor

```typescript
new RedisQueue(options: QueueOptions)
```

**Options:**

```typescript
type QueueOptions = {
  host: string;
  port: string;
  user?: string;
  password?: string;
  namespace?: string;
  tls?: boolean;
} | {
  redis: BunRedisClient;
}
```

### createQueue

Creates a new queue with the specified configuration.

```typescript
await queue.createQueue({
  qname: 'my-queue',
  maxsize: 65536,        // optional
  maxRetries: 0,         // optional
  maxBackoffSeconds: 30  // optional
});

// For batch queues
await queue.createQueue({
  qname: 'my-queue:batch',
  every: 60,             // optional; process every 60 seconds
  maxRetries: 0,         // optional
  maxBackoffSeconds: 30  // optional
});
```

**Parameters:**
- `qname` - Queue name (must end with `:batch` for batch queues)
- `maxsize` - Maximum message size in bytes (default: 65536)
- `maxRetries` - Max retry attempts: `0` = no retries, `-1` = unlimited, `n` = up to `n` retries (default: 0)
- `maxBackoffSeconds` - Maximum backoff time for exponential backoff, in seconds (default: 30)
- `every` - Batch processing interval in seconds (batch queues only; default: 60)

### sendMessage

Sends a message to a regular queue.

```typescript
const id = await queue.sendMessage({
  qname: 'my-queue',
  message: 'Hello, World!',
  delay: 5000 // optional delay in milliseconds
});
```

### sendBatchMessage

Sends a message to a batch queue.

```typescript
await queue.sendBatchMessage({
  qname: 'analytics:batch',
  batchId: 'user-events',
  message: JSON.stringify({ event: 'click', timestamp: Date.now() })
});
```

**Note:** Messages with the same `batchId` are processed together in the same batch.

### startWorker

Starts a worker to process messages.

```typescript
// Regular queue worker
await queue.startWorker('my-queue', async (message) => {
  console.log(message.id);
  console.log(message.message);
  console.log(message.attempt);
  console.log(message.sent);

  return { success: true }; // or { success: false } to retry
}, {
  concurrency: 1, // optional
  silent: false   // optional
});

// Batch queue worker
await queue.startWorker('analytics:batch', async (batch) => {
  console.log(batch.batchId);
  console.log(batch.messages); // Array of messages
  console.log(batch.attempt);
  console.log(batch.sent);

  return { success: true };
});
```

### deleteMessage

Manually delete a message from the queue.

```typescript
await queue.deleteMessage('my-queue', messageId);
```

### getQueue

Get queue attributes and message count.

```typescript
const attrs = await queue.getQueue('my-queue');
console.log(attrs.msgs); // Number of pending messages
```

### listQueues

List all queue names.

```typescript
const queues = await queue.listQueues();
```

### deleteQueue

Delete a queue and all its data.

```typescript
await queue.deleteQueue('my-queue');
```

### stopWorker

Stop a specific worker.

```typescript
queue.stopWorker('my-queue');
```

### close

Close all workers and the Redis connection.

```typescript
await queue.close();
```

## Message Types

### Message (Regular Queue)

```typescript
interface Message {
  id: string;
  message: string;
  sent: number;    // Timestamp
  attempt: number; // Current attempt (1-based)
}
```

### BatchMessage (Batch Queue)

```typescript
interface BatchMessage {
  batchId: string;
  messages: Array<{
    id: string;
    message: string;
    sent: number;
  }>;
  sent: number;    // Batch creation timestamp
  attempt: number; // Batch-level attempt counter
}
```

## Retry Behavior

When a worker returns `{ success: false }` or throws an error:

1. The message is retried with exponential backoff: `2^attempt` seconds plus 0-1000 ms of random jitter
2. Backoff is capped at `maxBackoffSeconds`
3. After `maxRetries` attempts, the message is deleted
4. Set `maxRetries: -1` for unlimited retries

**Example backoff times (maxBackoffSeconds = 30):**
- Attempt 1: ~2 seconds
- Attempt 2: ~4 seconds
- Attempt 3: ~8 seconds
- Attempt 4: ~16 seconds
- Attempt 5+: 30 seconds (capped)

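The schedule above can be expressed as a pure function (a sketch of the documented formula, not the package's internal implementation; jitter shown in milliseconds):

```typescript
// Backoff per the documented formula: 2^attempt seconds (capped at
// maxBackoffSeconds) plus 0-1000 ms of random jitter. Illustrative helper only.
function backoffMs(attempt: number, maxBackoffSeconds = 30): number {
  const baseSeconds = Math.min(2 ** attempt, maxBackoffSeconds);
  const jitterMs = Math.floor(Math.random() * 1000);
  return baseSeconds * 1000 + jitterMs;
}
```
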
## Batch Processing

Batch queues collect messages over time and process them together:

1. Messages are grouped by `batchId`
2. Every `every` seconds, all pending batches are processed
3. Each batch is processed independently
4. If a batch fails, only that batch is retried
5. Successful batches are deleted immediately

**Use cases:**
- Bulk database inserts
- Aggregating analytics events
- Processing CSV uploads in chunks
- Generating periodic reports
- Batching API calls to external services

## Environment Variables

```bash
# Redis Configuration
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=your-password
REDIS_USER=your-username
REDIS_TLS=false
```

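These variables can be mapped onto the constructor options with a small helper (a sketch; the defaults and the `envToOptions` name are assumptions, not part of the package):

```typescript
type ConnectionOptions = {
  host: string;
  port: string;
  user?: string;
  password?: string;
  tls?: boolean;
};

// Map the environment variables above onto RedisQueue connection options.
function envToOptions(env: Record<string, string | undefined>): ConnectionOptions {
  return {
    host: env.REDIS_HOST ?? '127.0.0.1',
    port: env.REDIS_PORT ?? '6379',
    user: env.REDIS_USER,
    password: env.REDIS_PASSWORD,
    tls: env.REDIS_TLS === 'true'
  };
}

// const queue = new RedisQueue({ ...envToOptions(process.env), namespace: 'myapp' });
```
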
## Best Practices

1. **Message Format**: Always use JSON for complex data
```typescript
message: JSON.stringify({ userId: 123, action: 'signup' })
```

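Since payloads arrive as plain strings, it also helps to validate them on the worker side before use (a sketch; the `EmailJob` shape is hypothetical):

```typescript
interface EmailJob {
  to: string;
  subject: string;
}

// Defensively parse a queue payload instead of trusting JSON.parse's `any`.
function parseEmailJob(raw: string): EmailJob {
  const data = JSON.parse(raw);
  if (typeof data?.to !== 'string' || typeof data?.subject !== 'string') {
    throw new Error('Malformed EmailJob payload');
  }
  return data;
}
```
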
2. **Error Handling**: Return `{ success: false }` for retriable errors; throw for fatal errors
```typescript
try {
  await sendEmail(data);
  return { success: true };
} catch (error: any) {
  if (error.code === 'RATE_LIMIT') {
    return { success: false }; // Retry
  }
  throw error; // Fatal error
}
```

3. **Idempotency**: Make sure your workers can handle duplicate messages safely

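One common approach is to track processed message IDs so a redelivered message becomes a no-op (an in-memory sketch; in practice you might use a Redis `SET ... NX` key or a database unique constraint):

```typescript
const processed = new Set<string>();

// Returns true if the message was handled, false if it was a duplicate.
async function handleOnce(
  id: string,
  handler: () => Promise<void>
): Promise<boolean> {
  if (processed.has(id)) return false; // duplicate delivery: skip
  await handler();
  processed.add(id); // mark done only after the handler succeeds
  return true;
}
```
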
4. **Monitoring**: Use `silent: false` in production to log errors

5. **Graceful Shutdown**: Always call `queue.close()` on app shutdown

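A typical pattern is to register signal handlers once at startup (a sketch; the stub object stands in for your `RedisQueue` instance):

```typescript
// Stub with the same close() shape as a RedisQueue instance.
const queue = {
  async close(): Promise<void> {
    /* stop workers and disconnect from Redis */
  }
};

// Drain and disconnect cleanly when the process is asked to stop.
function registerShutdown(q: { close(): Promise<void> }): void {
  for (const signal of ['SIGINT', 'SIGTERM'] as const) {
    process.once(signal, async () => {
      await q.close();
      process.exit(0);
    });
  }
}

registerShutdown(queue);
```
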
6. **Batch Sizing**: Choose appropriate `every` intervals based on your workload
   - High frequency: 10-30 seconds
   - Medium frequency: 60-300 seconds
   - Low frequency: 300-3600 seconds

## Testing

```typescript
import { describe, test, expect } from 'bun:test';
import RedisQueue from '@pingpolls/redisq';

describe('Queue Tests', () => {
  test('processes messages', async () => {
    const queue = new RedisQueue({ host: '127.0.0.1', port: '6379' });

    await queue.createQueue({ qname: 'test' });

    const received: string[] = [];

    const promise = new Promise<void>((resolve) => {
      queue.startWorker('test', async (msg) => {
        received.push(msg.message);
        resolve();
        return { success: true };
      });
    });

    await queue.sendMessage({ qname: 'test', message: 'Hello' });
    await promise;

    expect(received).toContain('Hello');
    await queue.close();
  });
});
```

## Performance

Run the stress test to benchmark on your own hardware:

```bash
bun run stress-test.ts
```

Expected results on a constrained 2-CPU / 2 GB Dockerized WSL2 environment:
- **Throughput**: ~7,010 messages/second
- **Latency (p50)**: 1.45 ms
- **Latency (p95)**: 3.94 ms
- **Latency (p99)**: 7.83 ms

*Results may vary based on hardware, Redis configuration, and network conditions.*

## License

Apache License 2.0

## Credits

Built with ❤️ by [PingPolls](https://pingpolls.com/?ref=redisq)

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

- 🐛 [Report bugs](https://github.com/pingpolls/redisq/issues)
- 💡 [Request features](https://github.com/pingpolls/redisq/issues)
- 📧 [Contact us](https://pingpolls.com/contact?ref=redisq)