AgenticBlocks.IO 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,100 @@
1
+ Metadata-Version: 2.4
2
+ Name: AgenticBlocks.IO
3
+ Version: 0.1.0
4
+ Summary: A building block library for composed agent workflows
5
+ Requires-Python: >=3.9
6
+ Description-Content-Type: text/markdown
7
+ Requires-Dist: pydantic>=2.0.0
8
+ Requires-Dist: networkx>=3.0
9
+ Requires-Dist: tenacity>=8.0
10
+ Requires-Dist: litellm>=1.0.0
11
+ Requires-Dist: mcp>=1.0.0
12
+
13
+ # agenticblocks 🧱
14
+
15
+ *A composable building block library for AI agent workflows. / Uma biblioteca componível para construir fluxos de agentes de IA.*
16
+
17
+ [🇺🇸 English](#english) | [🇧🇷 Português](#português)
18
+
19
+ ---
20
+
21
+ ## <a name="english"></a>🇺🇸 English
22
+
23
+ ### Philosophy
24
+ A library to build agent workflows like **Lego blocks**. Each step in your agentic pipeline is a self-contained block, with strictly typed inputs and outputs via **Pydantic** and natively concurrent execution using **AsyncIO** and **NetworkX** graphs.
25
+
26
+ - **Strong typing**: Pydantic validates connections and prevents unmatched dependencies between LLM tool calls.
27
+ - **Standardized connections**: Blocks only know their own inputs and outputs. Thus, entire workflows can act as single blocks later.
28
+ - **Smart Parallelism (Waves)**: The asyncio engine fires simultaneous tasks (waves) whenever dependencies are resolved, maximizing API speed.
29
+
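The wave model above can be sketched with plain `asyncio` (a minimal illustration of the idea, not the library's actual engine; block names here are hypothetical):

```python
import asyncio

async def step(name: str, results: dict) -> None:
    """Stand-in for a block: simulate a short I/O-bound task."""
    await asyncio.sleep(0.01)
    results[name] = f"{name}:done"

async def run_waves(waves: list, results: dict) -> None:
    # Each wave holds blocks whose dependencies are already satisfied,
    # so everything inside one wave runs concurrently.
    for wave in waves:
        await asyncio.gather(*(step(name, results) for name in wave))

results: dict = {}
# "fetch" and "enrich" are independent; "summarize" needs both.
asyncio.run(run_waves([["fetch", "enrich"], ["summarize"]], results))
```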
30
+ ### Getting Started
31
+
32
+ Install the module locally for development:
33
+ ```bash
34
+ pip install -e .
35
+ ```
36
+
37
+ #### 1. Define Input and Output Models
38
+ ```python
39
+ from pydantic import BaseModel
40
+
41
+ class HelloInput(BaseModel):
42
+ name: str
43
+
44
+ class HelloOutput(BaseModel):
45
+ greeting: str
46
+ ```
47
+
48
+ #### 2. Create the Logic Block
49
+ ```python
50
+ from agenticblocks.core.block import Block
51
+
52
+ class HelloWorldBlock(Block[HelloInput, HelloOutput]):
53
+ name: str = "say_hello"
54
+
55
+ async def run(self, input: HelloInput) -> HelloOutput:
56
+ msg = f"Hello, {input.name}! Welcome to agenticblocks."
57
+ return HelloOutput(greeting=msg)
58
+ ```
59
+
60
+ #### 3. Connect and Execute
61
+ ```python
62
+ import asyncio
63
+ from agenticblocks.core.graph import WorkflowGraph
64
+ from agenticblocks.runtime.executor import WorkflowExecutor
65
+
66
+ async def main():
67
+ graph = WorkflowGraph()
68
+ graph.add_block(HelloWorldBlock(name="say_hello"))
69
+
70
+ executor = WorkflowExecutor(graph)
71
+ ctx = await executor.run(initial_input={"name": "Alice"})
72
+
73
+ print(ctx.get_output("say_hello").greeting)
74
+
75
+ asyncio.run(main())
76
+ ```
77
+
78
+ Check the `examples/` directory for full demos.
79
+
80
+ ---
81
+
82
+ ## <a name="português"></a>🇧🇷 Português
83
+
84
+ ### Filosofia
85
+ Uma biblioteca para construir fluxos de agentes no estilo **Lego**. Cada passo do seu pipeline agêntico é um bloco auto-contido, com entradas e saídas rigorosamente tipadas via **Pydantic** e execução simultânea usando **AsyncIO** e grafos do **NetworkX**.
86
+
87
+ - **Forte tipagem**: Pydantic valida os encaixes e previne dependências não satisfeitas.
88
+ - **Encaixes padronizados**: Blocos só conhecem as próprias entradas e saídas. Workflows inteiros funcionam como blocos únicos.
89
+ - **Paralelismo Inteligente (Ondas)**: O motor dispara tarefas simultâneas (waves) sempre que as dependências de um bloco são resolvidas, otimizando a velocidade de conexões a APIs.
90
+
91
+ ### Primeiros Passos
92
+
93
+ Instale o módulo de forma local editável:
94
+ ```bash
95
+ pip install -e .
96
+ ```
97
+
98
+ A estrutura segue o modelo mostrado na seção em inglês (Input/Output Models, Logic Block e Graph Execution). Consulte os scripts interativos e completos dentro da pasta `examples/`:
99
+ - `01_hello_world.py`: Simulação básica e limpa do tutorial inicial.
100
+ - `02_llm_pipeline.py`: Um pipeline completo demonstrando a paralelização de parsing de dados complexos com um mock de LLM Call.
@@ -0,0 +1,88 @@
1
+ # agenticblocks 🧱
2
+
3
+ *A composable building block library for AI agent workflows. / Uma biblioteca componível para construir fluxos de agentes de IA.*
4
+
5
+ [🇺🇸 English](#english) | [🇧🇷 Português](#português)
6
+
7
+ ---
8
+
9
+ ## <a name="english"></a>🇺🇸 English
10
+
11
+ ### Philosophy
12
+ A library to build agent workflows like **Lego blocks**. Each step in your agentic pipeline is a self-contained block, with strictly typed inputs and outputs via **Pydantic** and natively concurrent execution using **AsyncIO** and **NetworkX** graphs.
13
+
14
+ - **Strong typing**: Pydantic validates connections and prevents unmatched dependencies between LLM tool calls.
15
+ - **Standardized connections**: Blocks only know their own inputs and outputs. Thus, entire workflows can act as single blocks later.
16
+ - **Smart Parallelism (Waves)**: The asyncio engine fires simultaneous tasks (waves) whenever dependencies are resolved, maximizing API speed.
17
+
18
+ ### Getting Started
19
+
20
+ Install the module locally for development:
21
+ ```bash
22
+ pip install -e .
23
+ ```
24
+
25
+ #### 1. Define Input and Output Models
26
+ ```python
27
+ from pydantic import BaseModel
28
+
29
+ class HelloInput(BaseModel):
30
+ name: str
31
+
32
+ class HelloOutput(BaseModel):
33
+ greeting: str
34
+ ```
35
+
36
+ #### 2. Create the Logic Block
37
+ ```python
38
+ from agenticblocks.core.block import Block
39
+
40
+ class HelloWorldBlock(Block[HelloInput, HelloOutput]):
41
+ name: str = "say_hello"
42
+
43
+ async def run(self, input: HelloInput) -> HelloOutput:
44
+ msg = f"Hello, {input.name}! Welcome to agenticblocks."
45
+ return HelloOutput(greeting=msg)
46
+ ```
47
+
48
+ #### 3. Connect and Execute
49
+ ```python
50
+ import asyncio
51
+ from agenticblocks.core.graph import WorkflowGraph
52
+ from agenticblocks.runtime.executor import WorkflowExecutor
53
+
54
+ async def main():
55
+ graph = WorkflowGraph()
56
+ graph.add_block(HelloWorldBlock(name="say_hello"))
57
+
58
+ executor = WorkflowExecutor(graph)
59
+ ctx = await executor.run(initial_input={"name": "Alice"})
60
+
61
+ print(ctx.get_output("say_hello").greeting)
62
+
63
+ asyncio.run(main())
64
+ ```
65
+
66
+ Check the `examples/` directory for full demos.
67
+
68
+ ---
69
+
70
+ ## <a name="português"></a>🇧🇷 Português
71
+
72
+ ### Filosofia
73
+ Uma biblioteca para construir fluxos de agentes no estilo **Lego**. Cada passo do seu pipeline agêntico é um bloco auto-contido, com entradas e saídas rigorosamente tipadas via **Pydantic** e execução simultânea usando **AsyncIO** e grafos do **NetworkX**.
74
+
75
+ - **Forte tipagem**: Pydantic valida os encaixes e previne dependências não satisfeitas.
76
+ - **Encaixes padronizados**: Blocos só conhecem as próprias entradas e saídas. Workflows inteiros funcionam como blocos únicos.
77
+ - **Paralelismo Inteligente (Ondas)**: O motor dispara tarefas simultâneas (waves) sempre que as dependências de um bloco são resolvidas, otimizando a velocidade de conexões a APIs.
78
+
79
+ ### Primeiros Passos
80
+
81
+ Instale o módulo de forma local editável:
82
+ ```bash
83
+ pip install -e .
84
+ ```
85
+
86
+ A estrutura segue o modelo mostrado na seção em inglês (Input/Output Models, Logic Block e Graph Execution). Consulte os scripts interativos e completos dentro da pasta `examples/`:
87
+ - `01_hello_world.py`: Simulação básica e limpa do tutorial inicial.
88
+ - `02_llm_pipeline.py`: Um pipeline completo demonstrando a paralelização de parsing de dados complexos com um mock de LLM Call.
@@ -0,0 +1,20 @@
1
+ [build-system]
2
+ requires = ["setuptools>=61.0"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [project]
6
+ name = "AgenticBlocks.IO"
7
+ version = "0.1.0"
8
+ description = "A building block library for composed agent workflows"
9
+ readme = "README.md"
10
+ requires-python = ">=3.9"
11
+ dependencies = [
12
+ "pydantic>=2.0.0",
13
+ "networkx>=3.0",
14
+ "tenacity>=8.0",
15
+ "litellm>=1.0.0",
16
+ "mcp>=1.0.0"
17
+ ]
18
+
19
+ [tool.setuptools.packages.find]
20
+ where = ["src"]
@@ -0,0 +1,4 @@
1
+ [egg_info]
2
+ tag_build =
3
+ tag_date = 0
4
+
@@ -0,0 +1,100 @@
1
+ Metadata-Version: 2.4
2
+ Name: AgenticBlocks.IO
3
+ Version: 0.1.0
4
+ Summary: A building block library for composed agent workflows
5
+ Requires-Python: >=3.9
6
+ Description-Content-Type: text/markdown
7
+ Requires-Dist: pydantic>=2.0.0
8
+ Requires-Dist: networkx>=3.0
9
+ Requires-Dist: tenacity>=8.0
10
+ Requires-Dist: litellm>=1.0.0
11
+ Requires-Dist: mcp>=1.0.0
12
+
13
+ # agenticblocks 🧱
14
+
15
+ *A composable building block library for AI agent workflows. / Uma biblioteca componível para construir fluxos de agentes de IA.*
16
+
17
+ [🇺🇸 English](#english) | [🇧🇷 Português](#português)
18
+
19
+ ---
20
+
21
+ ## <a name="english"></a>🇺🇸 English
22
+
23
+ ### Philosophy
24
+ A library to build agent workflows like **Lego blocks**. Each step in your agentic pipeline is a self-contained block, with strictly typed inputs and outputs via **Pydantic** and natively concurrent execution using **AsyncIO** and **NetworkX** graphs.
25
+
26
+ - **Strong typing**: Pydantic validates connections and prevents unmatched dependencies between LLM tool calls.
27
+ - **Standardized connections**: Blocks only know their own inputs and outputs. Thus, entire workflows can act as single blocks later.
28
+ - **Smart Parallelism (Waves)**: The asyncio engine fires simultaneous tasks (waves) whenever dependencies are resolved, maximizing API speed.
29
+
30
+ ### Getting Started
31
+
32
+ Install the module locally for development:
33
+ ```bash
34
+ pip install -e .
35
+ ```
36
+
37
+ #### 1. Define Input and Output Models
38
+ ```python
39
+ from pydantic import BaseModel
40
+
41
+ class HelloInput(BaseModel):
42
+ name: str
43
+
44
+ class HelloOutput(BaseModel):
45
+ greeting: str
46
+ ```
47
+
48
+ #### 2. Create the Logic Block
49
+ ```python
50
+ from agenticblocks.core.block import Block
51
+
52
+ class HelloWorldBlock(Block[HelloInput, HelloOutput]):
53
+ name: str = "say_hello"
54
+
55
+ async def run(self, input: HelloInput) -> HelloOutput:
56
+ msg = f"Hello, {input.name}! Welcome to agenticblocks."
57
+ return HelloOutput(greeting=msg)
58
+ ```
59
+
60
+ #### 3. Connect and Execute
61
+ ```python
62
+ import asyncio
63
+ from agenticblocks.core.graph import WorkflowGraph
64
+ from agenticblocks.runtime.executor import WorkflowExecutor
65
+
66
+ async def main():
67
+ graph = WorkflowGraph()
68
+ graph.add_block(HelloWorldBlock(name="say_hello"))
69
+
70
+ executor = WorkflowExecutor(graph)
71
+ ctx = await executor.run(initial_input={"name": "Alice"})
72
+
73
+ print(ctx.get_output("say_hello").greeting)
74
+
75
+ asyncio.run(main())
76
+ ```
77
+
78
+ Check the `examples/` directory for full demos.
79
+
80
+ ---
81
+
82
+ ## <a name="português"></a>🇧🇷 Português
83
+
84
+ ### Filosofia
85
+ Uma biblioteca para construir fluxos de agentes no estilo **Lego**. Cada passo do seu pipeline agêntico é um bloco auto-contido, com entradas e saídas rigorosamente tipadas via **Pydantic** e execução simultânea usando **AsyncIO** e grafos do **NetworkX**.
86
+
87
+ - **Forte tipagem**: Pydantic valida os encaixes e previne dependências não satisfeitas.
88
+ - **Encaixes padronizados**: Blocos só conhecem as próprias entradas e saídas. Workflows inteiros funcionam como blocos únicos.
89
+ - **Paralelismo Inteligente (Ondas)**: O motor dispara tarefas simultâneas (waves) sempre que as dependências de um bloco são resolvidas, otimizando a velocidade de conexões a APIs.
90
+
91
+ ### Primeiros Passos
92
+
93
+ Instale o módulo de forma local editável:
94
+ ```bash
95
+ pip install -e .
96
+ ```
97
+
98
+ A estrutura segue o modelo mostrado na seção em inglês (Input/Output Models, Logic Block e Graph Execution). Consulte os scripts interativos e completos dentro da pasta `examples/`:
99
+ - `01_hello_world.py`: Simulação básica e limpa do tutorial inicial.
100
+ - `02_llm_pipeline.py`: Um pipeline completo demonstrando a paralelização de parsing de dados complexos com um mock de LLM Call.
@@ -0,0 +1,19 @@
1
+ README.md
2
+ pyproject.toml
3
+ src/AgenticBlocks.IO.egg-info/PKG-INFO
4
+ src/AgenticBlocks.IO.egg-info/SOURCES.txt
5
+ src/AgenticBlocks.IO.egg-info/dependency_links.txt
6
+ src/AgenticBlocks.IO.egg-info/requires.txt
7
+ src/AgenticBlocks.IO.egg-info/top_level.txt
8
+ src/agenticblocks/__init__.py
9
+ src/agenticblocks/blocks/llm/agent.py
10
+ src/agenticblocks/blocks/llm/mock_llm.py
11
+ src/agenticblocks/core/agent.py
12
+ src/agenticblocks/core/block.py
13
+ src/agenticblocks/core/graph.py
14
+ src/agenticblocks/runtime/executor.py
15
+ src/agenticblocks/runtime/retry.py
16
+ src/agenticblocks/runtime/state.py
17
+ src/agenticblocks/tools/__init__.py
18
+ src/agenticblocks/tools/a2a_bridge.py
19
+ src/agenticblocks/tools/mcp_client.py
@@ -0,0 +1,5 @@
1
+ pydantic>=2.0.0
2
+ networkx>=3.0
3
+ tenacity>=8.0
4
+ litellm>=1.0.0
5
+ mcp>=1.0.0
@@ -0,0 +1,2 @@
1
+ """agenticblocks: A composable building block library for AI workflows."""
2
+ __version__ = "0.1.0"
@@ -0,0 +1,95 @@
1
+ import json
2
+ from pydantic import BaseModel
3
+ from typing import List
4
+ import litellm
5
+ from agenticblocks.core.agent import AgentBlock
6
+ from agenticblocks.core.block import Block
7
+ from agenticblocks.tools.a2a_bridge import block_to_tool_schema
8
+
9
+ class AgentInput(BaseModel):
10
+ prompt: str
11
+
12
+ class AgentOutput(BaseModel):
13
+ response: str
14
+ tool_calls_made: int = 0
15
+
16
+ class LLMAgentBlock(AgentBlock[AgentInput, AgentOutput]):
17
+ description: str = "Autonomous LLM-based agent managing its own tool loop."
18
+ model: str = "gpt-4o-mini"
19
+ system_prompt: str = "You are a helpful analyst and router agent. Use the tools when you lack context."
20
+ tools: List[Block] = []
21
+
22
+ async def run(self, input: AgentInput) -> AgentOutput:
23
+ # Transparent A2A Bridging
24
+ # Convert every sub-block into an OpenAI-compatible tool schema
25
+ litellm_tools = [block_to_tool_schema(b) for b in self.tools]
26
+
27
+ messages = [
28
+ {"role": "system", "content": self.system_prompt},
29
+ {"role": "user", "content": input.prompt}
30
+ ]
31
+
32
+ tool_call_count = 0
33
+
34
+ while True:
35
+ # Optional arguments: only pass tools when the agent actually has any
36
+ kwargs = {}
37
+ if litellm_tools:
38
+ kwargs["tools"] = litellm_tools
39
+
40
+ # Main completion call through LiteLLM
41
+ response = await litellm.acompletion(
42
+ model=self.model,
43
+ messages=messages,
44
+ **kwargs
45
+ )
46
+
47
+ message = response.choices[0].message
48
+ # Append dict format rather than object back to history
49
+ messages.append(message.model_dump(exclude_none=True))
50
+
51
+ # No tool calls requested: exit the loop and return the final response
52
+ if not message.tool_calls:
53
+ return AgentOutput(
54
+ response=message.content or "",
55
+ tool_calls_made=tool_call_count
56
+ )
57
+
58
+ # Transparent execution of tool calls (A2A and MCP)
59
+ for tool_call in message.tool_calls:
60
+ tool_call_count += 1
61
+ function_name = tool_call.function.name
62
+
63
+ # Look up the matching native tool among the connected blocks
64
+ matched_block = next((b for b in self.tools if b.name == function_name), None)
65
+ if not matched_block:
66
+ messages.append({
67
+ "role": "tool",
68
+ "tool_call_id": tool_call.id,
69
+ "name": function_name,
70
+ "content": json.dumps({"error": f"Tool {function_name} not found."})
71
+ })
72
+ continue
73
+
74
+ try:
75
+ # Dynamically parse the arguments with the block's Pydantic input model
76
+ args_dict = json.loads(tool_call.function.arguments)
77
+ input_model = matched_block.input_schema()(**args_dict)
78
+
79
+ # RUN: the main agent transparently invokes a subordinate block (A2A)
80
+ result = await matched_block.run(input=input_model)
81
+
82
+ # The typed output flows back into the LiteLLM conversation as JSON
83
+ messages.append({
84
+ "role": "tool",
85
+ "tool_call_id": tool_call.id,
86
+ "name": function_name,
87
+ "content": json.dumps(result.model_dump())
88
+ })
89
+ except Exception as e:
90
+ messages.append({
91
+ "role": "tool",
92
+ "tool_call_id": tool_call.id,
93
+ "name": function_name,
94
+ "content": json.dumps({"error": str(e)})
95
+ })
@@ -0,0 +1,59 @@
1
+ import asyncio
2
+ from pydantic import BaseModel
3
+ from agenticblocks.core.block import Block
4
+ from agenticblocks.runtime.retry import with_retry
5
+
6
+ class FetchInput(BaseModel):
7
+ url: str
8
+
9
+ class FetchOutput(BaseModel):
10
+ raw_data: str
11
+
12
+ class FetchDataBlock(Block[FetchInput, FetchOutput]):
13
+ name: str = "fetch"
14
+ description: str = "Fetches initial data"
15
+
16
+ @with_retry(max_attempts=2, delay=0.1)
17
+ async def run(self, input: FetchInput) -> FetchOutput:
18
+ await asyncio.sleep(0.1) # Mock io
19
+ return FetchOutput(raw_data=f"<html>Mock content from {input.url}</html>")
20
+
21
+ class ParseInput(BaseModel):
22
+ raw_data: str
23
+
24
+ class ParseOutput(BaseModel):
25
+ parsed_text: str
26
+
27
+ class ParseBlock(Block[ParseInput, ParseOutput]):
28
+ async def run(self, input: ParseInput) -> ParseOutput:
29
+ await asyncio.sleep(0.1)
30
+ return ParseOutput(parsed_text="Mock content mapped")
31
+
32
+ class EnrichInput(BaseModel):
33
+ raw_data: str
34
+
35
+ class EnrichOutput(BaseModel):
36
+ metadata: str
37
+
38
+ class EnrichBlock(Block[EnrichInput, EnrichOutput]):
39
+ async def run(self, input: EnrichInput) -> EnrichOutput:
40
+ await asyncio.sleep(0.2)
41
+ return EnrichOutput(metadata="{source: mock, verified: true}")
42
+
43
+ class SummarizeInput(BaseModel):
44
+ parsed_text: str
45
+ metadata: str
46
+
47
+ class SummarizeOutput(BaseModel):
48
+ message: str
49
+ tokens_used: int
50
+
51
+ class LLMCallBlock(Block[SummarizeInput, SummarizeOutput]):
52
+ model: str = "claude-mock-fast"
53
+
54
+ async def run(self, input: SummarizeInput) -> SummarizeOutput:
55
+ await asyncio.sleep(0.5)
56
+ return SummarizeOutput(
57
+ message=f"Summary of '{input.parsed_text}' with {input.metadata}",
58
+ tokens_used=42
59
+ )
@@ -0,0 +1,17 @@
1
+ from pydantic import BaseModel
2
+ from typing import TypeVar, Generic, List
3
+ from agenticblocks.core.block import Block, Input, Output
4
+
5
+ class AgentBlock(Block[Input, Output]):
6
+ """
7
+ Base class for agents, independent of the cognitive model (LLM or not).
8
+ An agent is characterized by owning its own decision loop and a set of
9
+ attachable "components" or "tools" (sub-blocks).
10
+ """
11
+ tools: List[Block] = []
12
+
13
+ async def run(self, input: Input) -> Output:
14
+ """
15
+ Subclasses must implement their own reasoning loop or heuristic here.
16
+ """
17
+ raise NotImplementedError("Implement your agent's cognitive loop here.")
@@ -0,0 +1,32 @@
1
+ from pydantic import BaseModel
2
+ from typing import TypeVar, Generic, Any, get_type_hints
3
+
4
+ Input = TypeVar("Input", bound=BaseModel)
5
+ Output = TypeVar("Output", bound=BaseModel)
6
+
7
+ class Block(BaseModel, Generic[Input, Output]):
8
+ name: str
9
+ description: str = ""
10
+
11
+ async def run(self, input: Input) -> Output:
12
+ raise NotImplementedError
13
+
14
+ @classmethod
15
+ def input_schema(cls) -> type[BaseModel]:
16
+ try:
17
+ hints = get_type_hints(cls.run)
18
+ if 'input' in hints:
19
+ return hints['input']
20
+ except Exception:
21
+ pass
22
+ return BaseModel
23
+
24
+ @classmethod
25
+ def output_schema(cls) -> type[BaseModel]:
26
+ try:
27
+ hints = get_type_hints(cls.run)
28
+ if 'return' in hints:
29
+ return hints['return']
30
+ except Exception:
31
+ pass
32
+ return BaseModel
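`input_schema` and `output_schema` above hinge on `typing.get_type_hints`; here is a dependency-free sketch of the same introspection trick (plain classes stand in for the Pydantic models):

```python
from typing import get_type_hints

class Greeting:
    """Stands in for a Pydantic model in this sketch."""
    text: str

class DemoBlock:
    async def run(self, input: Greeting) -> Greeting:
        return input

# get_type_hints resolves the annotations on the unbound method,
# recovering the input and output types without instantiating anything.
hints = get_type_hints(DemoBlock.run)
```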
@@ -0,0 +1,34 @@
1
+ import networkx as nx
2
+ from .block import Block
3
+ from pydantic import BaseModel
4
+
5
+ class WorkflowGraph:
6
+ def __init__(self):
7
+ self.graph = nx.DiGraph()
8
+
9
+ def add_block(self, block: Block) -> str:
10
+ node_id = block.name
11
+ if node_id in self.graph.nodes:
12
+ raise ValueError(f"A block named {node_id} already exists")
13
+ self.graph.add_node(node_id, block=block)
14
+ return node_id
15
+
16
+ def connect(self, from_id: str, to_id: str):
17
+ if from_id not in self.graph.nodes or to_id not in self.graph.nodes:
18
+ raise ValueError("Both blocks must exist in the graph")
19
+
20
+ # Edge-level schema validation
21
+ out_schema = self.graph.nodes[from_id]["block"].output_schema()
22
+ in_schema = self.graph.nodes[to_id]["block"].input_schema()
23
+
24
+ # Check (loosely) whether out_schema provides the fields in_schema expects;
25
+ # model_fields can be compared for a coarse compatibility check.
26
+ if out_schema != BaseModel and in_schema != BaseModel:
27
+ # Only meaningful when both sides declare concrete models; no-op for now.
28
+ pass
29
+
30
+ self.graph.add_edge(from_id, to_id)
31
+
32
+ def validate_connections(self):
33
+ # Walk the edges and strictly validate types where required
34
+ pass
@@ -0,0 +1,127 @@
1
+ from __future__ import annotations
2
+ import asyncio
3
+ import time
4
+ import uuid
5
+ from typing import Callable
6
+ import networkx as nx
7
+
8
+ from agenticblocks.core.block import Block
9
+ from agenticblocks.core.graph import WorkflowGraph
10
+ from agenticblocks.runtime.state import ExecutionContext, NodeResult, NodeStatus, _current_ctx
11
+
12
+
13
+ class WorkflowExecutor:
14
+ def __init__(
15
+ self,
16
+ graph: WorkflowGraph,
17
+ on_node_start: Callable[[str], None] | None = None,
18
+ on_node_end: Callable[[NodeResult], None] | None = None,
19
+ ):
20
+ self.graph = graph
21
+ self.on_node_start = on_node_start
22
+ self.on_node_end = on_node_end
23
+
24
+ async def run(
25
+ self,
26
+ initial_input: dict | None = None,
27
+ run_id: str | None = None,
28
+ ) -> ExecutionContext:
29
+ ctx = ExecutionContext(run_id=run_id or str(uuid.uuid4()))
30
+ ctx.store["__input__"] = initial_input or {}
31
+
32
+ token = _current_ctx.set(ctx)
33
+ try:
34
+ self._validate()
35
+ waves = self._build_waves()
36
+ for wave in waves:
37
+ await self._execute_wave(wave, ctx)
38
+ finally:
39
+ _current_ctx.reset(token)
40
+
41
+ return ctx
42
+
43
+ def _validate(self) -> None:
44
+ g = self.graph.graph
45
+ if not nx.is_directed_acyclic_graph(g):
46
+ cycles = list(nx.simple_cycles(g))
47
+ raise ValueError(f"Ciclos detectados no grafo: {cycles}")
48
+ self.graph.validate_connections()
49
+
50
+ def _build_waves(self) -> list[list[str]]:
51
+ g = self.graph.graph
52
+ in_degree = {n: g.in_degree(n) for n in g.nodes}
53
+ waves: list[list[str]] = []
54
+
55
+ remaining = set(g.nodes)
56
+ while remaining:
57
+ wave = [n for n in remaining if in_degree[n] == 0]
58
+ if not wave:
59
+ raise RuntimeError("Dependências circulares detectadas.")
60
+ waves.append(wave)
61
+ for node in wave:
62
+ remaining.remove(node)
63
+ for successor in g.successors(node):
64
+ in_degree[successor] -= 1
65
+ return waves
66
+
67
+ async def _execute_wave(self, wave: list[str], ctx: ExecutionContext) -> None:
68
+ tasks = [self._execute_node(node_id, ctx) for node_id in wave]
69
+ await asyncio.gather(*tasks, return_exceptions=False)
70
+
71
+ async def _execute_node(self, node_id: str, ctx: ExecutionContext) -> None:
72
+ block: Block = self.graph.graph.nodes[node_id]["block"]
73
+
74
+ if self.on_node_start:
75
+ self.on_node_start(node_id)
76
+
77
+ t0 = time.monotonic()
78
+ try:
79
+ input_data = self._collect_inputs(node_id, ctx)
80
+ # Create instance of the input schema
81
+ input_schema_class = block.input_schema()
82
+
83
+ # Simple conversion logic
84
+ if issubclass(input_schema_class, dict):
85
+ typed_input = input_data
86
+ else:
87
+ typed_input = input_schema_class(**input_data)
88
+
89
+ output = await block.run(typed_input)
90
+ result = NodeResult(
91
+ node_id=node_id,
92
+ status=NodeStatus.DONE,
93
+ output=output,
94
+ duration_ms=(time.monotonic() - t0) * 1000,
95
+ )
96
+ except Exception as exc:
97
+ result = NodeResult(
98
+ node_id=node_id,
99
+ status=NodeStatus.FAILED,
100
+ error=exc,
101
+ duration_ms=(time.monotonic() - t0) * 1000,
102
+ )
103
+ # await to save the failure
104
+ await ctx.set_result(node_id, result)
105
+ raise exc
106
+
107
+ await ctx.set_result(node_id, result)
108
+ if self.on_node_end:
109
+ self.on_node_end(result)
110
+
111
+ def _collect_inputs(self, node_id: str, ctx: ExecutionContext) -> dict:
112
+ g = self.graph.graph
113
+ predecessors = list(g.predecessors(node_id))
114
+
115
+ if not predecessors:
116
+ return ctx.store.get("__input__", {})
117
+
118
+ if len(predecessors) == 1:
119
+ prev_output = ctx.get_output(predecessors[0])
120
+ return prev_output.model_dump() if prev_output else {}
121
+
122
+ merged: dict = {}
123
+ for pred in predecessors:
124
+ prev_output = ctx.get_output(pred)
125
+ if prev_output:
126
+ merged.update(prev_output.model_dump())
127
+ return merged
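`_build_waves` above is a layered topological sort (Kahn's algorithm, one layer per wave). A dependency-free sketch of the same algorithm over a plain adjacency dict, with hypothetical node names:

```python
def build_waves(edges: dict) -> list:
    """Group nodes into waves: each wave only depends on earlier waves."""
    in_degree = {n: 0 for n in edges}
    for succs in edges.values():
        for s in succs:
            in_degree[s] += 1
    waves, remaining = [], set(edges)
    while remaining:
        # sorted() only makes the output deterministic for this demo
        wave = sorted(n for n in remaining if in_degree[n] == 0)
        if not wave:
            raise RuntimeError("cycle detected")
        waves.append(wave)
        for n in wave:
            remaining.remove(n)
            for s in edges[n]:
                in_degree[s] -= 1
    return waves

waves = build_waves({"fetch": ["parse", "enrich"], "parse": ["summarize"],
                     "enrich": ["summarize"], "summarize": []})
```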
@@ -0,0 +1,26 @@
1
+ import asyncio
2
+ from functools import wraps
3
+ from typing import Callable, Type
4
+
5
+ def with_retry(
6
+ max_attempts: int = 3,
7
+ delay: float = 1.0,
8
+ backoff: float = 2.0,
9
+ exceptions: tuple[Type[Exception], ...] = (Exception,),
10
+ ) -> Callable:
11
+ def decorator(fn: Callable) -> Callable:
12
+ @wraps(fn)
13
+ async def wrapper(*args, **kwargs):
14
+ attempt = 0
15
+ current_delay = delay
16
+ while True:
17
+ try:
18
+ return await fn(*args, **kwargs)
19
+ except exceptions as exc:
20
+ attempt += 1
21
+ if attempt >= max_attempts:
22
+ raise
23
+ await asyncio.sleep(current_delay)
24
+ current_delay *= backoff
25
+ return wrapper
26
+ return decorator
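A quick usage sketch of the decorator above. It is re-declared inline so the snippet is self-contained; in real code you would import it with `from agenticblocks.runtime.retry import with_retry`:

```python
import asyncio
from functools import wraps

def with_retry(max_attempts=3, delay=0.0, backoff=2.0, exceptions=(Exception,)):
    def decorator(fn):
        @wraps(fn)
        async def wrapper(*args, **kwargs):
            attempt, current_delay = 0, delay
            while True:
                try:
                    return await fn(*args, **kwargs)
                except exceptions:
                    attempt += 1
                    if attempt >= max_attempts:
                        raise
                    await asyncio.sleep(current_delay)
                    current_delay *= backoff
        return wrapper
    return decorator

calls = {"n": 0}

@with_retry(max_attempts=3, delay=0.0)
async def flaky():
    # Fails twice, then succeeds on the third attempt.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = asyncio.run(flaky())
```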
@@ -0,0 +1,46 @@
1
+ from __future__ import annotations
2
+ import asyncio
3
+ from contextvars import ContextVar
4
+ from dataclasses import dataclass, field
5
+ from enum import Enum
6
+ from typing import Any
7
+ from pydantic import BaseModel
8
+
9
+ class NodeStatus(Enum):
10
+ PENDING = "pending"
11
+ RUNNING = "running"
12
+ DONE = "done"
13
+ FAILED = "failed"
14
+ SKIPPED = "skipped"
15
+
16
+ @dataclass
17
+ class NodeResult:
18
+ node_id: str
19
+ status: NodeStatus
20
+ output: BaseModel | None = None
21
+ error: Exception | None = None
22
+ duration_ms: float = 0.0
23
+
24
+ @dataclass
25
+ class ExecutionContext:
26
+ run_id: str
27
+ results: dict[str, NodeResult] = field(default_factory=dict)
28
+ store: dict[str, Any] = field(default_factory=dict)
29
+
30
+ # Note: asyncio.Lock was removed since this dataclass needs to be shallow copied sometimes
31
+ # or just simple. We'll use a normal dictionary since it's mostly thread-safe in CPython for simple ops,
32
+ # or handle locking in executor to keep State serializable.
33
+ _lock: asyncio.Lock = field(default_factory=asyncio.Lock)
34
+
35
+ async def set_result(self, node_id: str, result: NodeResult) -> None:
36
+ async with self._lock:
37
+ self.results[node_id] = result
38
+
39
+ def get_output(self, node_id: str) -> BaseModel | None:
40
+ r = self.results.get(node_id)
41
+ return r.output if r else None
42
+
43
+ _current_ctx: ContextVar[ExecutionContext] = ContextVar("execution_ctx")
44
+
45
+ def get_ctx() -> ExecutionContext:
46
+ return _current_ctx.get()
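`_current_ctx` relies on `contextvars` so that concurrently running coroutines each see the right `ExecutionContext`. A minimal stdlib sketch of the set/reset pattern the executor uses:

```python
from contextvars import ContextVar

_current: ContextVar[str] = ContextVar("current_run")

def get_run_id() -> str:
    # Raises LookupError if called outside an active run.
    return _current.get()

# The executor sets the var before running nodes and resets it afterwards.
token = _current.set("run-123")
seen = get_run_id()
_current.reset(token)
```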
@@ -0,0 +1 @@
1
+ """Tools bridging utilities (A2A and MCP) for agenticblocks."""
@@ -0,0 +1,21 @@
1
+ from typing import Any, Dict
2
+ from agenticblocks.core.block import Block
3
+
4
+ def block_to_tool_schema(block: Block) -> Dict[str, Any]:
5
+ """Generates an OpenAI-compatible function schema from a Block transparently."""
6
+
7
+ # MCP proxy tools already carry a ready-made schema from the server:
8
+ if getattr(block, "is_mcp_proxy", False):
9
+ schema = block.raw_mcp_schema
10
+ else:
11
+ # Derive the JSON schema dynamically from the native block's Pydantic input model
12
+ schema = block.input_schema().model_json_schema()
13
+
14
+ return {
15
+ "type": "function",
16
+ "function": {
17
+ "name": block.name,
18
+ "description": block.description or f"Executes the block task: {block.name}",
19
+ "parameters": schema
20
+ }
21
+ }
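For reference, the dictionary returned above follows the OpenAI function-calling tool format. A hand-written sketch of what `block_to_tool_schema` might produce for the README's `say_hello` block (the parameter schema here is written by hand, not generated by Pydantic):

```python
def tool_schema(name: str, description: str, parameters: dict) -> dict:
    """Shape of an OpenAI-compatible function-calling tool entry."""
    return {
        "type": "function",
        "function": {"name": name, "description": description, "parameters": parameters},
    }

schema = tool_schema(
    "say_hello",
    "Greets a user by name",
    {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]},
)
```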
@@ -0,0 +1,88 @@
1
+ import asyncio
2
+ from typing import Any, List, Dict
3
+ from contextlib import AsyncExitStack
4
+ from pydantic import BaseModel, ConfigDict
5
+ from agenticblocks.core.block import Block
6
+
7
+ try:
8
+ from mcp import ClientSession, StdioServerParameters
9
+ from mcp.client.stdio import stdio_client
10
+ except ImportError:
11
+ pass
12
+
13
+ class MCPProxyInput(BaseModel):
14
+ model_config = ConfigDict(extra='allow')
15
+ # Captures whatever arguments the LLM produces
16
+
17
+ class MCPProxyOutput(BaseModel):
18
+ result: Any
19
+
20
+ class MCPProxyBlock(Block[MCPProxyInput, MCPProxyOutput]):
21
+ is_mcp_proxy: bool = True
22
+ raw_mcp_schema: dict = {}
23
+ session: Any = None
24
+
25
+ async def run(self, input: MCPProxyInput) -> MCPProxyOutput:
26
+ # Collect both declared and extra fields set by the LLM.
28
+ # Note: model_dump may not merge `extra` fields into the root on some
29
+ # Pydantic versions, so the field sets are read directly instead.
29
+
30
+ args = {k: getattr(input, k) for k in input.model_fields_set | set(input.model_extra or {})}
31
+
32
+ # Remote procedure call to the real MCP server
33
+ result = await self.session.call_tool(self.name, args)
34
+
35
+ # An MCP response carries .content (a list of TextContent/ImageContent etc.);
36
+ # extract the text parts returned by the server.
37
+ text_responses = []
38
+ if result and hasattr(result, "content"):
39
+ for c in result.content:
40
+ if getattr(c, "type", "") == "text":
41
+ text_responses.append(c.text)
42
+
43
+ return MCPProxyOutput(result=text_responses)
44
+
45
+ class MCPClientBridge:
46
+ """
47
+ Transparent abstraction that fetches tools from MCP-compatible servers
48
+ and turns them into insertable "ProxyBlocks" for agents.
49
+ """
50
+ def __init__(self, command: str, args: List[str], env: Dict[str, str] = None):
51
+ self.command = command
52
+ self.args = args
53
+ self.env = env
54
+ self.exit_stack = AsyncExitStack()
55
+ self.session = None
56
+
57
+ async def connect(self) -> List[MCPProxyBlock]:
58
+ """Opens an interactive stdio link to the MCP server and extracts its tools."""
59
+ server_params = StdioServerParameters(command=self.command, args=self.args, env=self.env)
60
+ try:
61
+ stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
62
+ except Exception as e:
63
+ raise RuntimeError(f"Failed to start the MCP subprocess. Check that '{self.command}' is installed. Error: {e}")
64
+
65
+ self.read, self.write = stdio_transport
66
+
67
+ self.session = await self.exit_stack.enter_async_context(ClientSession(self.read, self.write))
68
+ await self.session.initialize()
69
+
70
+ # Fetch the tool list from the remote server
71
+ response = await self.session.list_tools()
72
+
73
+ proxy_blocks = []
74
+ for tool in response.tools:
75
+ # Instantiate a proxy for each native MCP server tool
76
+ block = MCPProxyBlock(
77
+ name=tool.name,
78
+ description=tool.description or ""
79
+ )
80
+ block.raw_mcp_schema = tool.inputSchema
81
+ block.session = self.session
82
+ proxy_blocks.append(block)
83
+
84
+ return proxy_blocks
85
+
86
+ async def disconnect(self):
87
+ """Closes the connection safely."""
88
+ await self.exit_stack.aclose()