a2a-distributed 0.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,98 @@
Metadata-Version: 2.3
Name: a2a-distributed
Version: 0.0.0
Summary: Add your description here
Author: Tomas Pilar
Author-email: Tomas Pilar <thomas7pilar@gmail.com>
Requires-Dist: a2a-redis>=0.2.1
Requires-Dist: a2a-sdk>=0.3.24
Requires-Dist: bullmq>=2.19.5
Requires-Dist: celery>=5.6.2
Requires-Dist: rq>=2.7.0
Requires-Python: >=3.13
Description-Content-Type: text/markdown

# a2a-distributed

`a2a-distributed` provides distributed execution capabilities for A2A agents. It allows you to offload agent execution to background workers using popular task queues like **Celery**, **BullMQ**, or **RQ (Redis Queue)**.

## Architecture

The package follows a Client/Worker architecture:

1. **DistributedAgentExecutor (Client Side):** Used by the A2A server to enqueue agent execution tasks into a distributed queue.
2. **DistributedAgentWorker (Worker Side):** A background process that listens to the queue, reconstructs the execution context, and runs the agent using a local `AgentExecutor`.

## Exports

### Base Classes
- `DistributedAgentExecutor`: Abstract base class for all distributed executors.
- `DistributedAgentWorker`: Abstract base class for all distributed workers.

### Celery Implementation
- `CeleryAgentExecutor`: Enqueues tasks to Celery.
- `CeleryAgentWorker`: Handles Celery tasks.

### BullMQ Implementation
- `BullMQAgentExecutor`: Enqueues jobs to BullMQ.
- `BullMQAgentWorker`: Processes BullMQ jobs.

### RQ Implementation
- `RQAgentExecutor`: Enqueues jobs to Redis Queue.
- `RQAgentWorker`: Processes RQ jobs.

### Utilities
- `run_worker`: A utility function to run any `DistributedAgentWorker` with built-in signal handling for graceful shutdown.

## Usage Example

### 1. Setting up the Executor (Client Side)

```python
from a2a_distributed import BullMQAgentExecutor

# Initialize the executor with connection options
executor = BullMQAgentExecutor(
    queue_name="agent-tasks",
    redis_opts={"host": "localhost", "port": 6379}
)

# Use it in your A2A server configuration
# server = A2AServer(executor=executor, ...)
```

### 2. Setting up the Worker (Worker Side)

```python
import asyncio
from a2a_distributed import BullMQAgentWorker, run_worker
from a2a.server.agent_execution import LocalAgentExecutor
from a2a_redis import RedisEventQueue

async def main():
    # 1. Local executor that actually runs the agent logic
    local_executor = LocalAgentExecutor(...)

    # 2. Event queue for reporting status/results back
    event_queue = RedisEventQueue(...)

    # 3. The Distributed Worker
    worker = BullMQAgentWorker(
        agent_executor=local_executor,
        event_queue=event_queue,
        queue_name="agent-tasks",
        redis_opts={"host": "localhost", "port": 6379}
    )

    # 4. Run the worker using the utility
    await run_worker(worker)

if __name__ == "__main__":
    asyncio.run(main())
```

## Running Workers

To run a worker, call `await worker.run()`, or use the `run_worker(worker)` utility, which handles `SIGINT` and `SIGTERM` signals for you.

For Celery and RQ, the workers are often started via their respective CLI tools, but the `DistributedAgentWorker` classes provided here allow for programmatic control and integration within your own async loops.
@@ -0,0 +1,84 @@
# a2a-distributed

`a2a-distributed` provides distributed execution capabilities for A2A agents. It allows you to offload agent execution to background workers using popular task queues like **Celery**, **BullMQ**, or **RQ (Redis Queue)**.

## Architecture

The package follows a Client/Worker architecture:

1. **DistributedAgentExecutor (Client Side):** Used by the A2A server to enqueue agent execution tasks into a distributed queue.
2. **DistributedAgentWorker (Worker Side):** A background process that listens to the queue, reconstructs the execution context, and runs the agent using a local `AgentExecutor`.

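The flow above can be sketched as a toy analogue with no real broker: an in-process `asyncio.Queue` stands in for BullMQ/Celery/RQ, and the function names are illustrative, not part of this package's API.

```python
import asyncio
from typing import Any, Dict, List

async def client_enqueue(queue: "asyncio.Queue[Dict[str, Any]]", task_id: str, text: str) -> None:
    # Client side (the DistributedAgentExecutor role): serialize the
    # request context into a plain payload and enqueue it.
    await queue.put({"task_id": task_id, "message": text})

async def worker_once(queue: "asyncio.Queue[Dict[str, Any]]", results: List[str]) -> None:
    # Worker side (the DistributedAgentWorker role): pull a payload,
    # reconstruct the context, and run the agent logic (here: record it).
    payload = await queue.get()
    results.append(f"task={payload['task_id']} message={payload['message']}")

async def main() -> List[str]:
    queue: "asyncio.Queue[Dict[str, Any]]" = asyncio.Queue()  # stands in for the distributed queue
    results: List[str] = []
    await client_enqueue(queue, "t-1", "hello")
    await worker_once(queue, results)
    return results

print(asyncio.run(main()))  # → ['task=t-1 message=hello']
```

In the real package the payload additionally carries `context_id`, the serialized message, and metadata, and the worker delegates to a local `AgentExecutor`.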
## Exports

### Base Classes
- `DistributedAgentExecutor`: Abstract base class for all distributed executors.
- `DistributedAgentWorker`: Abstract base class for all distributed workers.

### Celery Implementation
- `CeleryAgentExecutor`: Enqueues tasks to Celery.
- `CeleryAgentWorker`: Handles Celery tasks.

### BullMQ Implementation
- `BullMQAgentExecutor`: Enqueues jobs to BullMQ.
- `BullMQAgentWorker`: Processes BullMQ jobs.

### RQ Implementation
- `RQAgentExecutor`: Enqueues jobs to Redis Queue.
- `RQAgentWorker`: Processes RQ jobs.

### Utilities
- `run_worker`: A utility function to run any `DistributedAgentWorker` with built-in signal handling for graceful shutdown.

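The run-until-closed contract that `run_worker` relies on can be sketched with a stop event: `run()` blocks until `close()` sets the event. This is a simplified, self-contained sketch rather than the package's actual implementation; in `run_worker`, the `close()` call is triggered by `SIGINT`/`SIGTERM` handlers instead of being invoked directly.

```python
import asyncio

class SketchWorker:
    """Minimal worker honoring the run-until-closed contract."""

    def __init__(self) -> None:
        self._stop = asyncio.Event()

    async def run(self) -> str:
        # Block until close() is called, as the real workers do.
        await self._stop.wait()
        return "stopped"

    async def close(self) -> None:
        self._stop.set()

async def main() -> str:
    worker = SketchWorker()
    run_task = asyncio.create_task(worker.run())
    # In run_worker this call comes from a SIGINT/SIGTERM handler:
    await worker.close()
    return await run_task

print(asyncio.run(main()))  # → stopped
```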
## Usage Example

### 1. Setting up the Executor (Client Side)

```python
from a2a_distributed import BullMQAgentExecutor

# Initialize the executor with connection options
executor = BullMQAgentExecutor(
    queue_name="agent-tasks",
    redis_opts={"host": "localhost", "port": 6379}
)

# Use it in your A2A server configuration
# server = A2AServer(executor=executor, ...)
```

### 2. Setting up the Worker (Worker Side)

```python
import asyncio
from a2a_distributed import BullMQAgentWorker, run_worker
from a2a.server.agent_execution import LocalAgentExecutor
from a2a_redis import RedisEventQueue

async def main():
    # 1. Local executor that actually runs the agent logic
    local_executor = LocalAgentExecutor(...)

    # 2. Event queue for reporting status/results back
    event_queue = RedisEventQueue(...)

    # 3. The Distributed Worker
    worker = BullMQAgentWorker(
        agent_executor=local_executor,
        event_queue=event_queue,
        queue_name="agent-tasks",
        redis_opts={"host": "localhost", "port": 6379}
    )

    # 4. Run the worker using the utility
    await run_worker(worker)

if __name__ == "__main__":
    asyncio.run(main())
```

## Running Workers

To run a worker, call `await worker.run()`, or use the `run_worker(worker)` utility, which handles `SIGINT` and `SIGTERM` signals for you.

For Celery and RQ, the workers are often started via their respective CLI tools, but the `DistributedAgentWorker` classes provided here allow for programmatic control and integration within your own async loops.
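Whichever backend transports it, the payload must survive a trip through Redis, so every field has to be JSON-serializable. A sketch of the round-trip the executors and workers perform (the field names mirror `_get_payload` in the implementations; the values, including the message shape, are hypothetical):

```python
import json

# Client side: what _get_payload produces from the RequestContext.
payload = {
    "task_id": "task-123",
    "context_id": "ctx-456",
    "message": {"role": "user", "content": "hello"},  # e.g. message.model_dump()
    "metadata": {"priority": "high"},
}

# The queue transports it as JSON...
wire = json.dumps(payload)

# ...and the worker decodes it before reconstructing the RequestContext.
received = json.loads(wire)

assert received == payload
print(received["task_id"])  # → task-123
```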
@@ -0,0 +1,20 @@
[project]
name = "a2a-distributed"
version = "0.0.0"
description = "Add your description here"
readme = "README.md"
authors = [
    { name = "Tomas Pilar", email = "thomas7pilar@gmail.com" }
]
requires-python = ">=3.13"
dependencies = [
    "a2a-redis>=0.2.1",
    "a2a-sdk>=0.3.24",
    "bullmq>=2.19.5",
    "celery>=5.6.2",
    "rq>=2.7.0",
]

[build-system]
requires = ["uv_build>=0.10.4,<0.11.0"]
build-backend = "uv_build"
@@ -0,0 +1,17 @@
from a2a_distributed.executor import DistributedAgentExecutor, DistributedAgentWorker
from a2a_distributed.celery_executor import CeleryAgentExecutor, CeleryAgentWorker
from a2a_distributed.rq_executor import RQAgentExecutor, RQAgentWorker
from a2a_distributed.bullmq_executor import BullMQAgentExecutor, BullMQAgentWorker
from a2a_distributed.runner import run_worker

__all__ = [
    "DistributedAgentExecutor",
    "DistributedAgentWorker",
    "CeleryAgentExecutor",
    "CeleryAgentWorker",
    "RQAgentExecutor",
    "RQAgentWorker",
    "BullMQAgentExecutor",
    "BullMQAgentWorker",
    "run_worker",
]
@@ -0,0 +1,165 @@
import asyncio
from typing import Any, Dict, Optional
from bullmq import Queue, Worker
from a2a.server.agent_execution import AgentExecutor
from a2a.server.agent_execution.context import RequestContext
from a2a.server.events.event_queue import EventQueue
from a2a_distributed.executor import DistributedAgentExecutor, DistributedAgentWorker


class BullMQAgentExecutor(DistributedAgentExecutor):
    """Distributed Agent Executor using BullMQ.

    This executor delegates task execution to a BullMQ worker.
    """

    def __init__(
        self,
        queue_name: str = "default",
        redis_opts: Optional[Dict[str, Any]] = None,
        job_name: str = "a2a_execute_task",
    ):
        """Initializes the BullMQAgentExecutor.

        Args:
            queue_name: The name of the BullMQ queue.
            redis_opts: Redis connection options (e.g., {"host": "localhost", "port": 6379}).
            job_name: The name of the BullMQ job.
        """
        self.queue_name = queue_name
        self.redis_opts = redis_opts or {}
        self.job_name = job_name
        self._queue: Optional[Queue] = None

    def _get_queue(self) -> Queue:
        """Returns the BullMQ Queue instance, initializing it if necessary."""
        if self._queue is None:
            self._queue = Queue(self.queue_name, connection=self.redis_opts)
        return self._queue

    async def execute(
        self, context: RequestContext, event_queue: EventQueue
    ) -> None:
        """Executes the agent logic by adding a job to BullMQ."""
        payload = self._get_payload(context)
        queue = self._get_queue()

        # Add a job to the queue
        await queue.add(
            self.job_name,
            payload,
            jobId=context.task_id,
        )

    async def cancel(
        self, context: RequestContext, event_queue: EventQueue
    ) -> None:
        """Cancels the agent logic.

        BullMQ job cancellation is usually handled by job removal or worker-side
        interruption. Here we provide a way to mark it for cancellation.
        """
        if context.task_id:
            queue = self._get_queue()
            # In BullMQ, you can remove a job if it hasn't started,
            # or signal cancellation via a separate mechanism.
            # For now, we attempt to remove the job from the queue.
            job = await queue.getJob(context.task_id)
            if job:
                await job.remove()

    def _get_payload(self, context: RequestContext) -> Dict[str, Any]:
        """Extracts serializable payload from the request context."""
        return {
            "task_id": context.task_id,
            "context_id": context.context_id,
            "message": context.message.model_dump() if context.message else None,
            "metadata": context.metadata,
        }

    async def close(self) -> None:
        """Closes the BullMQ queue connection."""
        if self._queue:
            await self._queue.close()
            self._queue = None


class BullMQAgentWorker(DistributedAgentWorker):
    """Distributed Agent Worker using BullMQ.

    This worker handles jobs enqueued by BullMQAgentExecutor.
    """

    def __init__(
        self,
        agent_executor: AgentExecutor,
        event_queue: EventQueue,
        queue_name: str = "default",
        redis_opts: Optional[Dict[str, Any]] = None,
    ):
        """Initializes the BullMQAgentWorker.

        Args:
            agent_executor: The local AgentExecutor to use for actual execution.
            event_queue: The event queue to use for reporting status updates and results.
            queue_name: The name of the BullMQ queue to listen to.
            redis_opts: Redis connection options.
        """
        super().__init__(agent_executor, event_queue)
        self.queue_name = queue_name
        self.redis_opts = redis_opts or {}
        self._worker: Optional[Worker] = None
        self._stop_event: Optional[asyncio.Event] = None

    async def run(self) -> None:
        """Starts the BullMQ worker and waits until closed."""
        if self._worker is None:
            self._worker = Worker(
                self.queue_name,
                self._process_job,
                connection=self.redis_opts,
            )

        self._stop_event = asyncio.Event()
        await self._stop_event.wait()

    async def _process_job(self, job: Any, job_token: str) -> None:
        """Internal processor for BullMQ jobs (BullMQ passes the job and its token)."""
        await self._execute_job(job.data)

    async def _execute_job(self, payload: Dict[str, Any]) -> None:
        """Executes a single job with the given payload."""
        context = self._reconstruct_context(payload)
        await self.agent_executor.execute(context, self.event_queue)

    async def _cancel_job(self, payload: Dict[str, Any]) -> None:
        """Cancels a single job with the given payload.

        Note: In BullMQ, cancellation is often handled via job removal or
        external signals, but we provide this for consistency.
        """
        context = self._reconstruct_context(payload)
        await self.agent_executor.cancel(context, self.event_queue)

    def _reconstruct_context(self, payload: Dict[str, Any]) -> RequestContext:
        """Reconstructs RequestContext from payload."""
        from a2a.server.agent_execution.context import RequestContext
        from a2a.models.messages import BaseMessage

        message = None
        if payload.get("message"):
            message = BaseMessage.model_validate(payload["message"])

        return RequestContext(
            task_id=payload.get("task_id"),
            context_id=payload.get("context_id"),
            message=message,
            metadata=payload.get("metadata", {}),
        )

    async def close(self) -> None:
        """Closes the BullMQ worker."""
        if self._stop_event:
            self._stop_event.set()
        if self._worker:
            await self._worker.close()
            self._worker = None
@@ -0,0 +1,158 @@
import asyncio
from typing import Any, Dict, Optional
from celery import Celery
from a2a.server.agent_execution import AgentExecutor
from a2a.server.agent_execution.context import RequestContext
from a2a.server.events.event_queue import EventQueue
from a2a_distributed.executor import DistributedAgentExecutor, DistributedAgentWorker


class CeleryAgentExecutor(DistributedAgentExecutor):
    """Distributed Agent Executor using Celery.

    This executor delegates task execution to a Celery worker.
    """

    def __init__(
        self,
        celery_app: Celery,
        task_name: str = "a2a_execute_task",
        cancel_task_name: str = "a2a_cancel_task",
    ):
        """Initializes the CeleryAgentExecutor.

        Args:
            celery_app: The Celery application instance.
            task_name: The name of the Celery task to execute the agent logic.
            cancel_task_name: The name of the Celery task to cancel the agent logic.
        """
        self.celery_app = celery_app
        self.task_name = task_name
        self.cancel_task_name = cancel_task_name

    async def execute(
        self, context: RequestContext, event_queue: EventQueue
    ) -> None:
        """Executes the agent logic by triggering a Celery task.

        Note: In a distributed environment, the worker should report status updates
        and results back through a shared event mechanism (e.g., Redis Pub/Sub)
        which this executor or the server should listen to.
        """
        # Serialize the context or necessary parts of it
        # For now, we assume the worker can handle the task_id and message
        payload = self._get_payload(context)

        # Trigger the Celery task
        self.celery_app.send_task(
            self.task_name,
            kwargs={"payload": payload},
            task_id=context.task_id,
        )

    async def cancel(
        self, context: RequestContext, event_queue: EventQueue
    ) -> None:
        """Cancels the agent logic by revoking the Celery task or sending a cancel task."""
        if context.task_id:
            # Option 1: Revoke the task (standard Celery)
            self.celery_app.control.revoke(context.task_id, terminate=True)

            # Option 2: Send a dedicated cancellation task; the worker's
            # cancel handler expects the same payload shape as execution.
            self.celery_app.send_task(
                self.cancel_task_name,
                kwargs={"payload": self._get_payload(context)},
            )

    def _get_payload(self, context: RequestContext) -> Dict[str, Any]:
        """Extracts serializable payload from the request context."""
        return {
            "task_id": context.task_id,
            "context_id": context.context_id,
            "message": context.message.model_dump() if context.message else None,
            "metadata": context.metadata,
        }


class CeleryAgentWorker(DistributedAgentWorker):
    """Distributed Agent Worker using Celery.

    This worker handles tasks enqueued by CeleryAgentExecutor.
    """

    def __init__(
        self,
        agent_executor: AgentExecutor,
        event_queue: EventQueue,
        celery_app: Celery,
        task_name: str = "a2a_execute_task",
        cancel_task_name: str = "a2a_cancel_task",
    ):
        """Initializes the CeleryAgentWorker.

        Args:
            agent_executor: The local AgentExecutor to use for actual execution.
            event_queue: The event queue to use for reporting status updates and results.
            celery_app: The Celery application instance.
            task_name: The name of the Celery task to listen to.
            cancel_task_name: The name of the Celery task for cancellation.
        """
        super().__init__(agent_executor, event_queue)
        self.celery_app = celery_app
        self.task_name = task_name
        self.cancel_task_name = cancel_task_name
        self._stop_event: Optional[asyncio.Event] = None
        self._setup_tasks()

    def _setup_tasks(self) -> None:
        """Registers the Celery tasks."""

        @self.celery_app.task(name=self.task_name)
        def execute_task(payload: Dict[str, Any]) -> None:
            asyncio.run(self._execute_job(payload))

        @self.celery_app.task(name=self.cancel_task_name)
        def cancel_task(payload: Dict[str, Any]) -> None:
            asyncio.run(self._cancel_job(payload))

    async def run(self) -> None:
        """Starts the Celery worker and waits until closed.

        Note: In Celery, workers are typically started via the CLI.
        This method allows for programmatic waiting if needed.
        """
        self._stop_event = asyncio.Event()
        await self._stop_event.wait()

    async def close(self) -> None:
        """Closes the worker."""
        if self._stop_event:
            self._stop_event.set()

    async def _execute_job(self, payload: Dict[str, Any]) -> None:
        """Executes a single job with the given payload."""
        context = self._reconstruct_context(payload)
        await self.agent_executor.execute(context, self.event_queue)

    async def _cancel_job(self, payload: Dict[str, Any]) -> None:
        """Cancels a single job with the given payload."""
        context = self._reconstruct_context(payload)
        await self.agent_executor.cancel(context, self.event_queue)

    def _reconstruct_context(self, payload: Dict[str, Any]) -> RequestContext:
        """Reconstructs RequestContext from payload."""
        from a2a.server.agent_execution.context import RequestContext
        from a2a.models.messages import BaseMessage

        message = None
        if payload.get("message"):
            # Note: This assumes BaseMessage.model_validate can handle the dict
            message = BaseMessage.model_validate(payload["message"])

        return RequestContext(
            task_id=payload.get("task_id"),
            context_id=payload.get("context_id"),
            message=message,
            metadata=payload.get("metadata", {}),
        )
@@ -0,0 +1,49 @@
from abc import ABC, abstractmethod
from typing import Any, Dict
from a2a.server.agent_execution import AgentExecutor
from a2a.server.events.event_queue import EventQueue


class DistributedAgentExecutor(AgentExecutor, ABC):
    """Abstract base class for distributed agent executors (Client side).

    This class is responsible for enqueuing jobs and requesting cancellations.
    """
    pass


class DistributedAgentWorker(ABC):
    """Abstract base class for distributed agent workers (Worker side).

    This class is responsible for receiving jobs from the queue and
    executing them using a local AgentExecutor.
    """

    def __init__(self, agent_executor: AgentExecutor, event_queue: EventQueue):
        """Initializes the DistributedAgentWorker.

        Args:
            agent_executor: The local AgentExecutor to use for actual execution.
            event_queue: The event queue to use for reporting status updates and results.
        """
        self.agent_executor = agent_executor
        self.event_queue = event_queue

    @abstractmethod
    async def run(self) -> None:
        """Starts the worker to listen for and execute jobs."""
        pass

    async def close(self) -> None:
        """Closes the worker and releases resources."""
        pass

    @abstractmethod
    async def _execute_job(self, payload: Dict[str, Any]) -> None:
        """Internal: Executes a single job with the given payload."""
        pass

    @abstractmethod
    async def _cancel_job(self, payload: Dict[str, Any]) -> None:
        """Internal: Cancels a single job with the given payload."""
        pass
@@ -0,0 +1,157 @@
from typing import Any, Dict, Optional
from redis import Redis
from rq import Queue, Worker
from a2a.server.agent_execution import AgentExecutor
from a2a.server.agent_execution.context import RequestContext
from a2a.server.events.event_queue import EventQueue
from a2a_distributed.executor import DistributedAgentExecutor, DistributedAgentWorker


class RQAgentExecutor(DistributedAgentExecutor):
    """Distributed Agent Executor using RQ (Redis Queue).

    This executor delegates task execution to an RQ worker.
    """

    def __init__(
        self,
        redis_conn: Redis,
        queue_name: str = "default",
        task_func_path: str = "a2a_distributed.rq_executor.execute_task",
        cancel_func_path: str = "a2a_distributed.rq_executor.cancel_task",
    ):
        """Initializes the RQAgentExecutor.

        Args:
            redis_conn: The Redis connection to use for the RQ queue.
            queue_name: The name of the RQ queue to use.
            task_func_path: The fully qualified path to the worker function for execution.
            cancel_func_path: The fully qualified path to the worker function for cancellation.
        """
        self.redis_conn = redis_conn
        self.queue = Queue(queue_name, connection=self.redis_conn)
        self.task_func_path = task_func_path
        self.cancel_func_path = cancel_func_path

    async def execute(
        self, context: RequestContext, event_queue: EventQueue
    ) -> None:
        """Executes the agent logic by enqueuing a task to RQ."""
        payload = self._get_payload(context)

        # Enqueue the task with the context's task_id as the RQ job ID
        self.queue.enqueue(
            self.task_func_path,
            payload=payload,
            job_id=context.task_id,
        )

    async def cancel(
        self, context: RequestContext, event_queue: EventQueue
    ) -> None:
        """Cancels the agent logic by sending a cancel task to RQ."""
        if context.task_id:
            payload = self._get_payload(context)
            # Enqueue the cancellation task
            self.queue.enqueue(
                self.cancel_func_path,
                payload=payload,
            )

    def _get_payload(self, context: RequestContext) -> Dict[str, Any]:
        """Extracts serializable payload from the request context."""
        return {
            "task_id": context.task_id,
            "context_id": context.context_id,
            "message": context.message.model_dump() if context.message else None,
            "metadata": context.metadata,
        }


class RQAgentWorker(DistributedAgentWorker):
    """Distributed Agent Worker using RQ.

    This worker handles tasks enqueued by RQAgentExecutor.
    """

    _instance: Optional["RQAgentWorker"] = None

    def __init__(
        self,
        agent_executor: AgentExecutor,
        event_queue: EventQueue,
        redis_conn: Redis,
        queue_name: str = "default",
    ):
        """Initializes the RQAgentWorker.

        Args:
            agent_executor: The local AgentExecutor to use for actual execution.
            event_queue: The event queue to use for reporting status updates and results.
            redis_conn: The Redis connection to use.
            queue_name: The name of the RQ queue to listen to.
        """
        super().__init__(agent_executor, event_queue)
        self.redis_conn = redis_conn
        self.queue_name = queue_name
        self._worker: Optional[Worker] = None
        RQAgentWorker._instance = self

    @classmethod
    def get_instance(cls) -> "RQAgentWorker":
        if cls._instance is None:
            raise RuntimeError("RQAgentWorker instance not initialized")
        return cls._instance

    async def run(self) -> None:
        """Starts the RQ worker."""
        if self._worker is None:
            self._worker = Worker([self.queue_name], connection=self.redis_conn)
        self._worker.work()

    async def close(self) -> None:
        """Closes the RQ worker."""
        # Note: RQ's Worker.work() is usually blocking.
        # For a clean async stop, one might need to use signals or
        # a custom worker loop.
        pass

    async def _execute_job(self, payload: Dict[str, Any]) -> None:
        """Executes a single job with the given payload."""
        context = self._reconstruct_context(payload)
        await self.agent_executor.execute(context, self.event_queue)

    async def _cancel_job(self, payload: Dict[str, Any]) -> None:
        """Cancels a single job with the given payload."""
        context = self._reconstruct_context(payload)
        await self.agent_executor.cancel(context, self.event_queue)

    def _reconstruct_context(self, payload: Dict[str, Any]) -> RequestContext:
        """Reconstructs RequestContext from payload."""
        from a2a.server.agent_execution.context import RequestContext
        from a2a.models.messages import BaseMessage

        message = None
        if payload.get("message"):
            message = BaseMessage.model_validate(payload["message"])

        return RequestContext(
            task_id=payload.get("task_id"),
            context_id=payload.get("context_id"),
            message=message,
            metadata=payload.get("metadata", {}),
        )


def execute_task(payload: Dict[str, Any]) -> None:
    """Global task function for RQ."""
    import asyncio

    worker = RQAgentWorker.get_instance()
    asyncio.run(worker._execute_job(payload))


def cancel_task(payload: Dict[str, Any]) -> None:
    """Global cancel task function for RQ."""
    import asyncio

    worker = RQAgentWorker.get_instance()
    asyncio.run(worker._cancel_job(payload))
@@ -0,0 +1,33 @@
import asyncio
import signal
from a2a_distributed.executor import DistributedAgentWorker


async def run_worker(worker: DistributedAgentWorker) -> None:
    """Utility to run a DistributedAgentWorker and handle termination signals.

    This function starts the worker and waits for a SIGINT or SIGTERM signal
    to perform a graceful shutdown.

    Args:
        worker: The DistributedAgentWorker instance to run.
    """
    loop = asyncio.get_running_loop()

    # Start the worker; close() (triggered by a signal) makes run() return
    run_task = asyncio.create_task(worker.run())

    def shutdown_handler():
        print(f"Shutdown signal received. Closing {worker.__class__.__name__}...")
        asyncio.create_task(worker.close())

    # Register signal handlers
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, shutdown_handler)

    try:
        await run_task
    except asyncio.CancelledError:
        pass
    finally:
        print(f"{worker.__class__.__name__} stopped.")