@nomikos/module-comm 1.0.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +53 -0
- package/index.js +11 -0
- package/jest.config.js +25 -0
- package/package.json +19 -0
- package/src/lib/amqpServer.js +57 -0
- package/src/lib/env.js +12 -0
- package/src/services/AmqpEncolador.js +15 -0
- package/src/services/AmqpReceiver.js +65 -0
- package/src/services/AmqpSender.js +67 -0
- package/src/services/__tests__/AmqpReceiver.test.js +165 -0
package/README.md
ADDED
@@ -0,0 +1,53 @@

# @nomikos/module-google

FCM and GCE services

# On redundancy with several workers, processing one task at a time

One approach to handling failover when you want redundant consumers but need to process messages in a specific order is to use the exclusive-consumer option when binding to the queue, and to have two consumers that keep trying to bind even when they can't get the exclusive lock.

The process goes something like this:

Consumer A starts first and binds to the queue as an exclusive consumer. Consumer A begins processing messages from the queue.
Consumer B starts next and attempts to bind to the queue as an exclusive consumer, but is rejected because the queue already has one.
On a recurring basis, consumer B attempts to get an exclusive bind on the queue, but is rejected.
The process hosting consumer A crashes.
Consumer B attempts to bind to the queue as an exclusive consumer, and succeeds this time. Consumer B starts processing messages from the queue.
Consumer A is brought back online and attempts an exclusive bind, but is now rejected.
Consumer B continues to process messages in FIFO order.
While this approach doesn't provide load sharing, it does provide redundancy.
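A minimal sketch of that retry loop with amqplib (one of this package's dependencies); the five-second retry interval and the `handle` callback are illustrative assumptions:

```js
const amqp = require('amqplib')

// A rejected exclusive consume (403 ACCESS_REFUSED) closes the channel,
// so each attempt opens a fresh channel and retries on a timer.
async function consumeExclusively(conn, queueName, handle) {
  try {
    const ch = await conn.createChannel()
    await ch.consume(queueName, handle, { exclusive: true })
    console.log(`Got the exclusive lock on ${queueName}`)
  } catch (err) {
    setTimeout(() => consumeExclusively(conn, queueName, handle), 5000)
  }
}
```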

### Exclusive Queues

An exclusive queue can only be used (consumed from, purged, deleted, etc.) by its declaring connection. An attempt to use an exclusive queue from a different connection will result in a channel-level RESOURCE_LOCKED exception with an error message saying it cannot obtain exclusive access to the locked queue.

Exclusive queues are deleted when their declaring connection is closed or gone (e.g. due to underlying TCP connection loss). They are therefore only suitable for client-specific transient state.

It is common to make exclusive queues server-named.
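With amqplib, for instance, a server-named exclusive queue is declared by passing an empty name (a sketch, assuming an open channel `ch`):

```js
// An empty name asks the broker to generate one; the queue is deleted
// when this connection closes.
const { queue } = await ch.assertQueue('', { exclusive: true })
console.log(`Server-named exclusive queue: ${queue}`)
```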

### Single Active Consumer

Single active consumer allows only one consumer at a time to consume from a queue, failing over to another registered consumer if the active one is cancelled or dies. Consuming with only one consumer is useful when messages must be consumed and processed in the same order they arrive in the queue.

A typical sequence of events would be the following:

A queue is declared and some consumers register to it at roughly the same time.
The very first registered consumer becomes the single active consumer: messages are dispatched to it and the other consumers are ignored.
The single active consumer is cancelled for some reason or simply dies. One of the registered consumers becomes the new single active consumer and messages are now dispatched to it. In other words, the queue fails over automatically to another consumer.
Note that without the single active consumer feature enabled, messages would be dispatched to all consumers using round-robin.

Single active consumer can be enabled when declaring a queue, with the x-single-active-consumer argument set to true, e.g. with the Java client:

```java
Channel ch = ...;
Map<String, Object> arguments = new HashMap<String, Object>();
arguments.put("x-single-active-consumer", true);
ch.queueDeclare("my-queue", false, false, false, arguments);
```
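The equivalent with amqplib, which this package uses, passes the same argument through the `assertQueue` options (a sketch, assuming an open channel `ch`):

```js
// Same queue argument, expressed through amqplib's assertQueue options.
await ch.assertQueue('my-queue', {
  arguments: { 'x-single-active-consumer': true }
})
```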
Compared to AMQP exclusive consumers, single active consumer puts less pressure on the application side to maintain consumption continuity. Consumers just need to be registered and failover is handled automatically; there's no need to detect the active consumer's failure and register a new consumer.

The management UI and the CLI can report which consumer is the currently active one on a queue where the feature is enabled.

Please note the following about single active consumer:

There's no guarantee about which consumer is selected as active; it is picked randomly, even if consumer priorities are in use.
Trying to register a consumer with the exclusive consume flag set to true will result in an error if single active consumer is enabled on the queue.
Messages are always delivered to the active consumer, even if it is too busy at some point. This can happen when using manual acknowledgment and basic.qos: the consumer may be busy dealing with the maximum number of unacknowledged messages it requested with basic.qos. In this case, the other consumers are ignored and messages are enqueued.
It is not possible to enable single active consumer with a policy. Here is why: policies in RabbitMQ are dynamic by nature; they can come and go, enabling and disabling the features they declare. Imagine suddenly disabling single active consumer on a queue: the broker would start sending messages to inactive consumers and messages would be processed in parallel, exactly the opposite of what single active consumer is trying to achieve. As the semantics of single active consumer do not play well with the dynamic nature of policies, this feature can only be enabled when declaring a queue, with queue arguments.
package/index.js
ADDED
@@ -0,0 +1,11 @@
const AmqpReceiver = require('./src/services/AmqpReceiver')
const AmqpSender = require('./src/services/AmqpSender')
const AmqpEncolador = require('./src/services/AmqpEncolador')
const amqpConn = require('./src/lib/amqpServer')

module.exports = {
  AmqpReceiver,
  AmqpSender,
  AmqpEncolador,
  amqpConn
}
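A hedged usage sketch of these exports (note that `amqpConn` is, despite its name, the `amqpClientConnect` function from src/lib/amqpServer.js):

```js
const { AmqpSender, AmqpReceiver, amqpConn } = require('@nomikos/module-comm')

async function main() {
  // amqpConn is the connect function; calling it resolves to the shared
  // amqp-connection-manager connection.
  const conn = await amqpConn()
  console.log('connected:', conn.isConnected())
}

main()
```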
package/jest.config.js
ADDED
@@ -0,0 +1,25 @@
process.env.NODE_APP_PATH = __dirname
module.exports = {
  verbose: true,
  silent: true,
  // diabolical: it hides some logs when I'm inspecting them (in dev)
  forceExit: false,
  testEnvironment: 'node',
  testMatch: [
    '**/__tests__/**/*.test.js'
  ],
  collectCoverageFrom: [
    'src/**/*.js'
  ],
  coveragePathIgnorePatterns: [
    '/node_modules/',
    '__tests__',
    'src/bin'
  ],
  testPathIgnorePatterns: [
    '/node_modules/'
  ],
  transform: {
    '^.+\\.[tj]sx?$': ['babel-jest', { rootMode: 'upward' }]
  }
}
package/package.json
ADDED
@@ -0,0 +1,19 @@
{
  "name": "@nomikos/module-comm",
  "version": "1.0.4",
  "description": "Servicios externos de redis, amqp",
  "main": "index.js",
  "prettier": "@nomikos/prettierrc",
  "scripts": {
    "test": "NODE_ENV=test jest",
    "lint": "eslint --fix src && prettier --write \"src/**/*.js\""
  },
  "author": "Igor Parra B.",
  "license": "ISC",
  "dependencies": {
    "amqp-connection-manager": "^3.1.1",
    "amqplib": "^0.5.5",
    "awilix": "^4.2.6",
    "yenv": "^2.1.1"
  }
}
package/src/lib/amqpServer.js
ADDED
@@ -0,0 +1,57 @@
const { env } = require('./env')
const amqp = require('amqp-connection-manager')
const amqpHost = env['amqp-host']

let amqpConn = null

function amqpClientConnect(loggerRoot) {
  if (!loggerRoot) {
    loggerRoot = console
    // console.warn writes to the pm2 logs
    loggerRoot.debug = console.warn
    loggerRoot.fatal = console.warn
  }

  return new Promise((resolve, reject) => {
    if (amqpConn !== null) {
      if (amqpConn.isConnected()) {
        loggerRoot.debug('Returning already-started amqp connection')
        return resolve(amqpConn)
      }
    } else {
      loggerRoot.debug('Starting a new amqp connection')
    }

    // Returns a singleton. Connects immediately.
    amqpConn = amqp.connect(amqpHost)

    // Emitted whenever we successfully connect to a broker.
    amqpConn.on('connect', function () {
      loggerRoot.debug('AMQP CLIENT CONNECTED')
      resolve(amqpConn)
    })

    // Emitted whenever we disconnect from a broker.
    amqpConn.on('disconnect', function (params) {
      const error = params.err
      // err may be an Error or a string; normalize before matching.
      const message = (error && error.message) || String(error)
      if (message.includes('PRECONDITION_FAILED')) {
        /**
         * This error results in the channel that was used for the declaration
         * being forcibly closed by RabbitMQ. If the program subsequently tries
         * to communicate with RabbitMQ using the same channel without
         * re-opening it, then Bunny will raise a Bunny::ChannelAlreadyClosed
         * error. In order to continue communications in the same program after
         * such an error, a different channel would have to be used.
         *
         * TODO: Remove the queue from the queue list to avoid further errors
         * in the current app instance.
         * The queue would then have to be repaired.
         */
        loggerRoot.fatal('Remove queue from the queue list!', error)
      }
      loggerRoot.fatal('AMQP CLIENT DISCONNECTED, error.stack:', error.stack)
    })
  })
}

module.exports = amqpClientConnect
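Note: the test file below destructures `closeAmqpConn` from this module, but the published file never exports it, so the `afterAll` hook would throw. A sketch of the export the tests appear to expect (hypothetical, not part of the released code):

```js
// Hypothetical missing export: close the singleton connection so that
// test runners like jest can exit cleanly.
module.exports.closeAmqpConn = function closeAmqpConn() {
  if (amqpConn) {
    amqpConn.close() // amqp-connection-manager closes the underlying connection
    amqpConn = null
  }
}
```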
package/src/lib/env.js
ADDED
@@ -0,0 +1,12 @@
const yenv = require('yenv')
process.env.NODE_ENV = process.env.NODE_ENV || 'local'

/**
 * We just export what `yenv()` returns.
 * `keyblade` will make sure we don't rely on undefined values.
 */
exports.env = yenv('env.yaml', {
  cwd: process.env.NODE_APP_PATH,
  message: key => `[yenv] ${key} not found in the loaded environment`,
  logBeforeThrow: message => console.warn({ message })
})
package/src/services/AmqpEncolador.js
ADDED
@@ -0,0 +1,15 @@
module.exports = {

  send (queueName, payload, axiosEncolador, tracers, logger) {

    const headers = {
      'x-tracer-session-id': tracers.sessionId,
      'x-tracer-user-id': tracers.userId,
      'x-tracer-systems': tracers.systems
    }

    logger.info(`TO ENCOLADOR: ${queueName}`, { payload, headers })

    return axiosEncolador.post('queue', { queueName, payload, headers })
  }
}
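A usage sketch for this service; the base URL, tracer values, and queue name are illustrative assumptions:

```js
const axios = require('axios')
const { AmqpEncolador } = require('@nomikos/module-comm')

const axiosEncolador = axios.create({ baseURL: 'http://encolador.internal' })
const tracers = { sessionId: 'abc-123', userId: 42, systems: 'api' }

// POSTs {queueName, payload, headers} to the remote enqueuer's "queue" endpoint.
AmqpEncolador.send('task.test.q', { foo: 'bar' }, axiosEncolador, tracers, console)
```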
package/src/services/AmqpReceiver.js
ADDED
@@ -0,0 +1,65 @@
const amqpClientConnect = require('../lib/amqpServer')
let channelWrapper = null

module.exports = {
  async init(colas, amqpReceiverAdapter, loggerRoot = null) {
    const amqpConn = await amqpClientConnect(loggerRoot)

    if (!loggerRoot) {
      loggerRoot = console
      // console.warn writes to the pm2 logs
      loggerRoot.debug = console.warn
      loggerRoot.fatal = console.warn
    }

    await new Promise((resolve) => {
      if (channelWrapper) {
        loggerRoot.debug('FATAL: Use only one channel for receiving')
        resolve() // resolve before bailing out so the await doesn't hang
        return
      }

      channelWrapper = amqpConn.createChannel({
        name: 'channel-receiver',
        setup: async (ch) => {
          loggerRoot.debug('CREATING CHANNEL FOR CONSUMING')

          await Promise.all(
            Object.keys(colas).map(async (queueName) => {
              // Configure the queue
              const cola = colas[queueName]
              const options = cola.options
              const prefetch = cola.prefetch || 1

              await new Promise((resolve) => {
                ch.assertQueue(queueName, options)
                  .then(() => {
                    if (
                      options.arguments &&
                      options.arguments['x-dead-letter-routing-key']
                    ) {
                      // Create its corresponding dead-letter queue
                      loggerRoot.debug(`Creating dead letter: ${queueName}.dl`)
                      return ch.assertQueue(`${queueName}.dl`, { durable: true })
                    }
                  })
                  .then(() => {
                    ch.prefetch(prefetch)
                    amqpReceiverAdapter(ch, queueName, cola)
                  })
                  .then(() => {
                    loggerRoot.debug(`CONSUMING QUEUE ${queueName}`)
                    resolve()
                  })
              })
            })
          ) // end Promise.all
        }
      })

      channelWrapper.waitForConnect().then(function () {
        loggerRoot.debug('CHANNEL FOR CONSUMING READY')
        resolve()
      })
    })
  }
}
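An `amqpReceiverAdapter` is any function of `(ch, queueName, cola)` that starts consumption on the channel; a minimal acking sketch (the processing step is assumed):

```js
function receiverAdapter(ch, queueName, cola) {
  return ch.consume(queueName, (msg) => {
    const data = JSON.parse(msg.content.toString())
    // ... process `data` according to `cola`'s config ...
    ch.ack(msg)
  }, { noAck: false })
}
```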
package/src/services/AmqpSender.js
ADDED
@@ -0,0 +1,67 @@
const { env } = require('../lib/env')
const amqpClientConnect = require('../lib/amqpServer')
let channelWrapper = null

module.exports = {
  sendToQueue: async (queueName, { payload, headers }, logger = null) => {
    const data = headers ? { payload, headers } : { payload }

    await channelWrapper
      .sendToQueue(queueName, data, { persistent: true })
      .then(() => {
        if (logger) {
          logger.info(`SENT OK TO AMQP QUEUE: ${queueName}`, data)
        }
      })
  },

  async init(colas, loggerRoot = null) {
    const amqpConn = await amqpClientConnect(loggerRoot)
    if (!loggerRoot) {
      // Default to console so the debug calls below don't crash.
      loggerRoot = console
      loggerRoot.debug = console.warn
    }
    loggerRoot.debug(
      `Amqp write on ${env['amqp-host']} in ${env.NODE_ENV} mode.`
    )
    await new Promise((resolve) => {
      if (channelWrapper) {
        loggerRoot.debug('WARNING: Use only one channel for sending')
        resolve()
        return
      }

      channelWrapper = amqpConn.createChannel({
        name: 'channel-sender',
        json: true,
        setup: async (channel) => {
          loggerRoot.debug('CREATING CHANNEL FOR SENDING')

          await Promise.all(
            Object.keys(colas).map(async (queueName) => {
              const q = colas[queueName]
              const options = q.options
              await new Promise((resolve) => {
                channel.assertQueue(queueName, options).then(() => {
                  loggerRoot.debug(`SEND QUEUE ${queueName}`)
                  resolve()
                })
              })
            })
          ) // end Promise.all
        }
      })

      channelWrapper.waitForConnect().then(function () {
        loggerRoot.debug('CHANNEL FOR SENDING READY')
        resolve()
      })
    })
  }
}
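Both `init` methods take the same queue map; a sketch of its expected shape, including the dead-letter arguments the receiver looks for (names are illustrative):

```js
const colas = {
  'task.example.q': {
    options: {
      durable: true,
      arguments: {
        // When present, AmqpReceiver also asserts `task.example.q.dl`.
        'x-dead-letter-exchange': '',
        'x-dead-letter-routing-key': 'task.example.q.dl'
      }
    },
    prefetch: 1
  }
}
```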
package/src/services/__tests__/AmqpReceiver.test.js
ADDED
@@ -0,0 +1,165 @@
/* eslint-disable promise/always-return */
import AmqpSender from '../AmqpSender'
const { closeAmqpConn } = require('../../lib/amqpServer')
const AmqpReceiver = require('../AmqpReceiver')

const axios = require('axios')
const MockAdapter = require('axios-mock-adapter')

/**
 * Applying the mock to the shared axios instance!
 * All calls will be affected. This forces every test to set up its own mocks.
 */
const mockAxios = new MockAdapter(axios)

const EventEmitter = require('events').EventEmitter
const controlEmitter = new EventEmitter()

const uuidv1 = require('uuid/v1')

const colas = {
  'task.test.q': {
    options: {
      durable: true
    },
    forwardUrls: '/final-endpoint',
    prefetch: 1
  }
}

const bodyMessageToTest = {
  queueName: 'task.test.q',
  routingKey: 'task.test.q',
  headers: {
    'x-tracer-user-id': 3,
    'x-tracer-session-id': uuidv1()
  },
  payload: {
    foo: 'bar' // We don't test the content here
  }
}

/**
 * Check whether it acks, nacks, or requeues (this always consumes the message,
 * so afterwards we can't inspect what happened to it, because it's gone).
 *
 * If it requeues and it's a nack, check that it ended up in the dlx.
 * If it doesn't requeue and it's a nack, check that it did NOT end up in the dlx.
 * To inspect the dlx, consume the last message in the dead.letter.queue queue.
 *
 * Testing queues in prod is delicate, since they can cause infinite loops
 * by bringing the connection down. Because connections auto-restart,
 * those infinite loops are possible.
 */

describe('Dequeue API', () => {
  let receiver, sender

  beforeAll(async () => {
    // const desencolador = new Desencolador({axios})
    // receiver = await new AmqpReceiver({desencolador})
    // sender = await new AmqpSender()
  })

  afterAll(() => {
    // Always close, to keep jest from hanging
    closeAmqpConn()
  })

  it.only('Dequeues messages from queues', async () => {
    expect.assertions(1)

    mockAxios.onPost('/final-endpoint').reply(200)

    // Set up the promise first so the "consuming" event is heard from the start
    const result = new Promise(function (resolve, reject) {
      controlEmitter.on('consuming', ({ op }) => {
        if (op === 'ack') {
          return resolve(true)
        }
        reject(new Error())
      })
    })

    const onMessage = (queuePayload) => {
      const dataEnCola = JSON.parse(queuePayload.content.toString())
      console.warn({ dataEnCola }) // Contains {payload, headers}

      const op = 'ack'
      controlEmitter.emit('consuming', { op, queuePayload })
    }

    const queueName = 'task.test.q'

    function receiverAdapter(ch, queueName, cola) {
      console.warn({ cola })
      return ch.consume(queueName, onMessage, {
        noAck: false
      })
    }

    await AmqpSender.init(colas)
    await AmqpReceiver.init(colas, receiverAdapter)

    // Send a message to the queue
    await AmqpSender.sendToQueue(queueName, { testendo: 'esto' })

    await result.then((r) => {
      console.warn('Dequeued OK')
      expect(r).toBe(true)
    })
  })

  it('Message rejected by the remote api (simulating status 400); check whether it ended up in the dlx.', async () => {
    // NOTE: Desencolador, QueueConfig, and `sender.execute` are not defined
    // anywhere in this package; with `it.only` above, this test never runs.
    // expect.assertions(1)

    mockAxios.onPost('/final-endpoint').reply(400)
    const desencolador = new Desencolador({ axios })
    receiver = await new AmqpReceiver({ desencolador })

    // Set up the promise first so the "consuming" event
    // is heard from the start
    const result = new Promise(function (resolve, reject) {
      controlEmitter.on('consuming', ({ op }) => {
        if (op === 'nack') {
          return resolve(true)
        }
        reject(new Error())
      })
    })

    await receiver.loadColas(colas)

    // This is what we're testing
    await receiver.initListenig(controlEmitter)

    const body = bodyMessageToTest

    // Send a message to the queue
    const config = new QueueConfig(body)
    await sender.execute(config)

    await result.then((r) => {
      console.warn('Message rejected by the remote api; check whether it ended up in the dlx.')
      expect(r).toBe(true)
    })
  })

  /**
   * If I purge the queue before testing I can always inspect the last message;
   * otherwise I have to read until I find a given token
   */
})

function setup() {
  const axiosOK = {
    post: jest.fn(() => {
      return {
        status: 200
      }
    })
  }

  return { axiosOK }
}