openai-agents 0.0.5__tar.gz → 0.0.7__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of openai-agents might be problematic.
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/workflows/issues.yml +6 -3
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/workflows/tests.yml +3 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.gitignore +2 -2
- {openai_agents-0.0.5 → openai_agents-0.0.7}/Makefile +1 -1
- {openai_agents-0.0.5 → openai_agents-0.0.7}/PKG-INFO +9 -1
- {openai_agents-0.0.5 → openai_agents-0.0.7}/README.md +2 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/agents.md +3 -1
- openai_agents-0.0.7/docs/assets/images/graph.png +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/context.md +3 -3
- openai_agents-0.0.7/docs/examples.md +36 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/guardrails.md +1 -1
- openai_agents-0.0.7/docs/mcp.md +51 -0
- openai_agents-0.0.7/docs/ref/mcp/server.md +3 -0
- openai_agents-0.0.7/docs/ref/mcp/util.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/events.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/exceptions.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/input.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/model.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/models/openai_provider.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/models/openai_stt.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/models/openai_tts.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/pipeline.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/pipeline_config.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/result.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/utils.md +3 -0
- openai_agents-0.0.7/docs/ref/voice/workflow.md +3 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/tracing.md +10 -1
- openai_agents-0.0.7/docs/visualization.md +86 -0
- openai_agents-0.0.7/docs/voice/pipeline.md +75 -0
- openai_agents-0.0.7/docs/voice/quickstart.md +194 -0
- openai_agents-0.0.7/docs/voice/tracing.md +14 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/lifecycle_example.py +1 -1
- openai_agents-0.0.7/examples/financial_research_agent/README.md +38 -0
- openai_agents-0.0.7/examples/financial_research_agent/agents/financials_agent.py +23 -0
- openai_agents-0.0.7/examples/financial_research_agent/agents/planner_agent.py +35 -0
- openai_agents-0.0.7/examples/financial_research_agent/agents/risk_agent.py +22 -0
- openai_agents-0.0.7/examples/financial_research_agent/agents/search_agent.py +18 -0
- openai_agents-0.0.7/examples/financial_research_agent/agents/verifier_agent.py +27 -0
- openai_agents-0.0.7/examples/financial_research_agent/agents/writer_agent.py +34 -0
- openai_agents-0.0.7/examples/financial_research_agent/main.py +17 -0
- openai_agents-0.0.7/examples/financial_research_agent/manager.py +135 -0
- openai_agents-0.0.7/examples/financial_research_agent/printer.py +46 -0
- openai_agents-0.0.7/examples/mcp/filesystem_example/README.md +26 -0
- openai_agents-0.0.7/examples/mcp/filesystem_example/main.py +57 -0
- openai_agents-0.0.7/examples/mcp/filesystem_example/sample_files/favorite_books.txt +20 -0
- openai_agents-0.0.7/examples/mcp/filesystem_example/sample_files/favorite_cities.txt +4 -0
- openai_agents-0.0.7/examples/mcp/filesystem_example/sample_files/favorite_songs.txt +10 -0
- openai_agents-0.0.7/examples/mcp/git_example/README.md +25 -0
- openai_agents-0.0.7/examples/mcp/git_example/main.py +48 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/agents/search_agent.py +1 -1
- openai_agents-0.0.7/examples/voice/static/README.md +26 -0
- openai_agents-0.0.7/examples/voice/static/main.py +88 -0
- openai_agents-0.0.7/examples/voice/static/util.py +69 -0
- openai_agents-0.0.7/examples/voice/streamed/README.md +25 -0
- openai_agents-0.0.7/examples/voice/streamed/__init__.py +0 -0
- openai_agents-0.0.7/examples/voice/streamed/main.py +233 -0
- openai_agents-0.0.7/examples/voice/streamed/my_workflow.py +81 -0
- openai_agents-0.0.7/mkdocs.yml +149 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/pyproject.toml +25 -13
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/__init__.py +16 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/_run_impl.py +56 -6
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/agent.py +25 -0
- openai_agents-0.0.7/src/agents/extensions/__init__.py +0 -0
- openai_agents-0.0.7/src/agents/extensions/visualization.py +137 -0
- openai_agents-0.0.7/src/agents/mcp/__init__.py +21 -0
- openai_agents-0.0.7/src/agents/mcp/server.py +301 -0
- openai_agents-0.0.7/src/agents/mcp/util.py +115 -0
- openai_agents-0.0.7/src/agents/models/__init__.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/models/openai_chatcompletions.py +1 -1
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/models/openai_provider.py +13 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/models/openai_responses.py +6 -2
- openai_agents-0.0.7/src/agents/py.typed +1 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/run.py +45 -7
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/__init__.py +16 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/create.py +150 -1
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/processors.py +26 -7
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/span_data.py +128 -2
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/util.py +5 -0
- openai_agents-0.0.7/src/agents/util/__init__.py +0 -0
- openai_agents-0.0.7/src/agents/voice/__init__.py +51 -0
- openai_agents-0.0.7/src/agents/voice/events.py +47 -0
- openai_agents-0.0.7/src/agents/voice/exceptions.py +8 -0
- openai_agents-0.0.7/src/agents/voice/imports.py +11 -0
- openai_agents-0.0.7/src/agents/voice/input.py +88 -0
- openai_agents-0.0.7/src/agents/voice/model.py +193 -0
- openai_agents-0.0.7/src/agents/voice/models/__init__.py +0 -0
- openai_agents-0.0.7/src/agents/voice/models/openai_model_provider.py +97 -0
- openai_agents-0.0.7/src/agents/voice/models/openai_stt.py +456 -0
- openai_agents-0.0.7/src/agents/voice/models/openai_tts.py +54 -0
- openai_agents-0.0.7/src/agents/voice/pipeline.py +151 -0
- openai_agents-0.0.7/src/agents/voice/pipeline_config.py +46 -0
- openai_agents-0.0.7/src/agents/voice/result.py +287 -0
- openai_agents-0.0.7/src/agents/voice/utils.py +37 -0
- openai_agents-0.0.7/src/agents/voice/workflow.py +93 -0
- openai_agents-0.0.7/tests/__init__.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/fake_model.py +10 -0
- openai_agents-0.0.7/tests/mcp/__init__.py +0 -0
- openai_agents-0.0.7/tests/mcp/conftest.py +11 -0
- openai_agents-0.0.7/tests/mcp/helpers.py +58 -0
- openai_agents-0.0.7/tests/mcp/test_caching.py +57 -0
- openai_agents-0.0.7/tests/mcp/test_connect_disconnect.py +69 -0
- openai_agents-0.0.7/tests/mcp/test_mcp_tracing.py +198 -0
- openai_agents-0.0.7/tests/mcp/test_mcp_util.py +109 -0
- openai_agents-0.0.7/tests/mcp/test_runner_calls_mcp.py +197 -0
- openai_agents-0.0.7/tests/mcp/test_server_errors.py +42 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_agent_runner_streamed.py +1 -1
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_agent_tracing.py +148 -62
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_global_hooks.py +2 -2
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_responses_tracing.py +8 -25
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_run_step_execution.py +1 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_run_step_processing.py +78 -22
- openai_agents-0.0.7/tests/test_tool_choice_reset.py +210 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_tracing.py +143 -142
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_tracing_errors.py +26 -89
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_tracing_errors_streamed.py +12 -171
- openai_agents-0.0.7/tests/test_visualization.py +136 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/testing_processor.py +23 -5
- openai_agents-0.0.7/tests/tracing/test_processor_api_key.py +27 -0
- openai_agents-0.0.7/tests/voice/__init__.py +0 -0
- openai_agents-0.0.7/tests/voice/conftest.py +14 -0
- openai_agents-0.0.7/tests/voice/fake_models.py +115 -0
- openai_agents-0.0.7/tests/voice/helpers.py +21 -0
- openai_agents-0.0.7/tests/voice/test_input.py +127 -0
- openai_agents-0.0.7/tests/voice/test_openai_stt.py +367 -0
- openai_agents-0.0.7/tests/voice/test_openai_tts.py +94 -0
- openai_agents-0.0.7/tests/voice/test_pipeline.py +179 -0
- openai_agents-0.0.7/tests/voice/test_workflow.py +188 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/uv.lock +627 -20
- openai_agents-0.0.5/mkdocs.yml +0 -121
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/ISSUE_TEMPLATE/bug_report.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/ISSUE_TEMPLATE/feature_request.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/ISSUE_TEMPLATE/model_provider.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/ISSUE_TEMPLATE/question.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/workflows/docs.yml +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.github/workflows/publish.yml +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/.prettierrc +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/LICENSE +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/assets/images/favicon-platform.svg +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/assets/images/orchestration.png +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/assets/logo.svg +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/config.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/handoffs.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/index.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/models.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/multi_agent.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/quickstart.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/agent.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/agent_output.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/exceptions.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/extensions/handoff_filters.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/extensions/handoff_prompt.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/function_schema.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/guardrail.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/handoffs.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/index.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/items.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/lifecycle.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/model_settings.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/models/interface.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/models/openai_chatcompletions.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/models/openai_responses.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/result.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/run.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/run_context.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/stream_events.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tool.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/create.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/index.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/processor_interface.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/processors.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/scope.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/setup.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/span_data.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/spans.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/traces.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/tracing/util.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/ref/usage.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/results.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/running_agents.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/streaming.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/stylesheets/extra.css +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/docs/tools.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/__init__.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/README.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/agents_as_tools.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/deterministic.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/forcing_tool_use.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/input_guardrails.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/llm_as_a_judge.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/output_guardrails.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/parallelization.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/agent_patterns/routing.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/agent_lifecycle_example.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/dynamic_system_prompt.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/hello_world.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/hello_world_jupyter.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/stream_items.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/stream_text.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/basic/tools.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/customer_service/main.py +0 -0
- {openai_agents-0.0.5/examples/research_bot/agents → openai_agents-0.0.7/examples/financial_research_agent}/__init__.py +0 -0
- {openai_agents-0.0.5/src/agents/extensions → openai_agents-0.0.7/examples/financial_research_agent/agents}/__init__.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/handoffs/message_filter.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/handoffs/message_filter_streaming.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/model_providers/README.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/model_providers/custom_example_agent.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/model_providers/custom_example_global.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/model_providers/custom_example_provider.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/README.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/__init__.py +0 -0
- {openai_agents-0.0.5/src/agents/models → openai_agents-0.0.7/examples/research_bot/agents}/__init__.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/agents/planner_agent.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/agents/writer_agent.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/main.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/manager.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/printer.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/sample_outputs/product_recs.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/sample_outputs/product_recs.txt +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/sample_outputs/vacation.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/research_bot/sample_outputs/vacation.txt +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/tools/computer_use.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/tools/file_search.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/examples/tools/web_search.py +0 -0
- {openai_agents-0.0.5/src/agents/util → openai_agents-0.0.7/examples/voice}/__init__.py +0 -0
- {openai_agents-0.0.5/tests → openai_agents-0.0.7/examples/voice/static}/__init__.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/_config.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/_debug.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/agent_output.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/computer.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/exceptions.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/extensions/handoff_filters.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/extensions/handoff_prompt.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/function_schema.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/guardrail.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/handoffs.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/items.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/lifecycle.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/logger.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/model_settings.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/models/_openai_shared.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/models/fake_id.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/models/interface.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/result.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/run_context.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/stream_events.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/strict_schema.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tool.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/logger.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/processor_interface.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/scope.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/setup.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/spans.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/tracing/traces.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/usage.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/util/_coro.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/util/_error_tracing.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/util/_json.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/util/_pretty_print.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/util/_transforms.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/util/_types.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/src/agents/version.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/README.md +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/conftest.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_agent_config.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_agent_hooks.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_agent_runner.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_computer_action.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_config.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_doc_parsing.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_extension_filters.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_function_schema.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_function_tool.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_function_tool_decorator.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_guardrails.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_handoff_tool.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_items_helpers.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_max_turns.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_openai_chatcompletions.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_openai_chatcompletions_converter.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_openai_chatcompletions_stream.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_openai_responses_converter.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_output_tool.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_pretty_print.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_responses.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_result_cast.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_run_config.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_strict_schema.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_tool_converter.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_tool_use_behavior.py +0 -0
- {openai_agents-0.0.5 → openai_agents-0.0.7}/tests/test_trace_processor.py +0 -0
.github/workflows/issues.yml
@@ -17,7 +17,10 @@ jobs:
 stale-issue-label: "stale"
 stale-issue-message: "This issue is stale because it has been open for 7 days with no activity."
 close-issue-message: "This issue was closed because it has been inactive for 3 days since being marked as stale."
-
-days-before-pr-
-
+any-of-issue-labels: 'question,needs-more-info'
+days-before-pr-stale: 10
+days-before-pr-close: 7
+stale-pr-label: "stale"
+stale-pr-message: "This PR is stale because it has been open for 10 days with no activity."
+close-pr-message: "This PR was closed because it has been inactive for 7 days since being marked as stale."
 repo-token: ${{ secrets.GITHUB_TOKEN }}
Makefile
@@ -5,6 +5,7 @@ sync:
 .PHONY: format
 format:
 	uv run ruff format
+	uv run ruff check --fix

 .PHONY: lint
 lint:
@@ -36,7 +37,6 @@ snapshots-create:
 .PHONY: old_version_tests
 old_version_tests:
 	UV_PROJECT_ENVIRONMENT=.venv_39 uv run --python 3.9 -m pytest
-	UV_PROJECT_ENVIRONMENT=.venv_39 uv run --python 3.9 -m mypy .

 .PHONY: build-docs
 build-docs:
PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: openai-agents
-Version: 0.0.5
+Version: 0.0.7
 Summary: OpenAI Agents SDK
 Project-URL: Homepage, https://github.com/openai/openai-agents-python
 Project-URL: Repository, https://github.com/openai/openai-agents-python
@@ -19,11 +19,17 @@ Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: Typing :: Typed
 Requires-Python: >=3.9
 Requires-Dist: griffe<2,>=1.5.6
+Requires-Dist: mcp; python_version >= '3.10'
 Requires-Dist: openai>=1.66.5
 Requires-Dist: pydantic<3,>=2.10
 Requires-Dist: requests<3,>=2.0
 Requires-Dist: types-requests<3,>=2.0
 Requires-Dist: typing-extensions<5,>=4.12.2
+Provides-Extra: viz
+Requires-Dist: graphviz>=0.17; extra == 'viz'
+Provides-Extra: voice
+Requires-Dist: numpy<3,>=2.2.0; (python_version >= '3.10') and extra == 'voice'
+Requires-Dist: websockets<16,>=15.0; extra == 'voice'
 Description-Content-Type: text/markdown

 # OpenAI Agents SDK
@@ -58,6 +64,8 @@ source env/bin/activate
 pip install openai-agents
 ```

+For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`.
+
 ## Hello world example

 ```python
docs/agents.md
@@ -142,4 +142,6 @@ Supplying a list of tools doesn't always mean the LLM will use a tool. You can f

 !!! note

-
+    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. The infinite loop is because tool results are sent to the LLM, which then generates another tool call because of `tool_choice`, ad infinitum.
+
+    If you want the Agent to completely stop after a tool call (rather than continuing with auto mode), you can set [`Agent.tool_use_behavior="stop_on_first_tool"`] which will directly use the tool output as the final response without further LLM processing.
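A minimal sketch of the two behaviors the note above describes (not part of the package diff; it assumes the `ModelSettings.tool_choice` setting and the `Agent.tool_use_behavior` parameter behave as the SDK documents them):

```python
import asyncio

from agents import Agent, ModelSettings, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


# tool_choice="required" forces a tool call on the first turn; after the call,
# the SDK resets tool_choice to "auto" so the next LLM turn does not loop.
auto_agent = Agent(
    name="Weather agent",
    tools=[get_weather],
    model_settings=ModelSettings(tool_choice="required"),
)

# Alternatively, stop right after the first tool call and use its output as
# the final answer, with no further LLM processing.
stop_agent = Agent(
    name="Weather agent (stop on first tool)",
    tools=[get_weather],
    model_settings=ModelSettings(tool_choice="required"),
    tool_use_behavior="stop_on_first_tool",
)


async def main() -> None:
    result = await Runner.run(stop_agent, input="What's the weather in Tokyo?")
    print(result.final_output)  # the raw get_weather output


if __name__ == "__main__":
    asyncio.run(main())
```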
docs/assets/images/graph.png (new file)
Binary file
docs/context.md
@@ -41,14 +41,14 @@ async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str: # (2)!
     return f"User {wrapper.context.name} is 47 years old"

 async def main():
-    user_info = UserInfo(name="John", uid=123)
+    user_info = UserInfo(name="John", uid=123)

-    agent = Agent[UserInfo](  # (
+    agent = Agent[UserInfo](  # (3)!
         name="Assistant",
         tools=[fetch_user_age],
     )

-    result = await Runner.run(
+    result = await Runner.run(  # (4)!
         starting_agent=agent,
         input="What is the age of the user?",
         context=user_info,
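For reference, the hunk above touches only part of the context example; a self-contained version is sketched below (not part of the diff; it assumes `UserInfo` is a plain dataclass and `fetch_user_age` is registered with `function_tool`, as in the surrounding documentation):

```python
import asyncio
from dataclasses import dataclass

from agents import Agent, RunContextWrapper, Runner, function_tool


@dataclass
class UserInfo:
    name: str
    uid: int


@function_tool
async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:
    # The tool reads the locally supplied context object; it is never sent to the LLM.
    return f"User {wrapper.context.name} is 47 years old"


async def main():
    user_info = UserInfo(name="John", uid=123)

    agent = Agent[UserInfo](
        name="Assistant",
        tools=[fetch_user_age],
    )

    result = await Runner.run(
        starting_agent=agent,
        input="What is the age of the user?",
        context=user_info,
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```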
docs/examples.md (new file)
@@ -0,0 +1,36 @@
+# Examples
+
+Check out a variety of sample implementations of the SDK in the examples section of the [repo](https://github.com/openai/openai-agents-python/tree/main/examples). The examples are organized into several categories that demonstrate different patterns and capabilities.
+
+
+## Categories
+
+- **agent_patterns:**
+  Examples in this category illustrate common agent design patterns, such as
+
+    - Deterministic workflows
+    - Agents as tools
+    - Parallel agent execution
+
+- **basic:**
+  These examples showcase foundational capabilities of the SDK, such as
+
+    - Dynamic system prompts
+    - Streaming outputs
+    - Lifecycle events
+
+- **tool examples:**
+  Learn how to implement OAI hosted tools such as web search and file search,
+  and integrate them into your agents.
+
+- **model providers:**
+  Explore how to use non-OpenAI models with the SDK.
+
+- **handoffs:**
+  See practical examples of agent handoffs.
+
+- **customer_service** and **research_bot:**
+  Two more built-out examples that illustrate real-world applications
+
+    - **customer_service**: Example customer service system for an airline.
+    - **research_bot**: Simple deep research clone.
docs/guardrails.md
@@ -29,7 +29,7 @@ Output guardrails run in 3 steps:

 !!! Note

-    Output guardrails are intended to run on the final agent
+    Output guardrails are intended to run on the final agent output, so an agent's guardrails only run if the agent is the *last* agent. Similar to the input guardrails, we do this because guardrails tend to be related to the actual Agent - you'd run different guardrails for different agents, so colocating the code is useful for readability.

 ## Tripwires

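To make the note above concrete, a rough sketch of an output guardrail attached to the last agent (not part of the diff; it assumes the `output_guardrail` decorator, `GuardrailFunctionOutput`, and `Agent.output_guardrails` from the SDK's guardrail API):

```python
from agents import (
    Agent,
    GuardrailFunctionOutput,
    RunContextWrapper,
    output_guardrail,
)


@output_guardrail
async def no_account_numbers(
    ctx: RunContextWrapper, agent: Agent, output: str
) -> GuardrailFunctionOutput:
    # Trip if the final answer looks like it leaks an account number.
    leaked = "account" in output.lower() and any(ch.isdigit() for ch in output)
    return GuardrailFunctionOutput(output_info={"leaked": leaked}, tripwire_triggered=leaked)


# The guardrail only runs here because this agent produces the final output.
support_agent = Agent(
    name="Support agent",
    instructions="Help the user without revealing account numbers.",
    output_guardrails=[no_account_numbers],
)
```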
docs/mcp.md (new file)
@@ -0,0 +1,51 @@
+# Model context protocol
+
+The [Model context protocol](https://modelcontextprotocol.io/introduction) (aka MCP) is a way to provide tools and context to the LLM. From the MCP docs:
+
+> MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
+
+The Agents SDK has support for MCP. This enables you to use a wide range of MCP servers to provide tools to your Agents.
+
+## MCP servers
+
+Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use:
+
+1. **stdio** servers run as a subprocess of your application. You can think of them as running "locally".
+2. **HTTP over SSE** servers run remotely. You connect to them via a URL.
+
+You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes to connect to these servers.
+
+For example, this is how you'd use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem).
+
+```python
+async with MCPServerStdio(
+    params={
+        "command": "npx",
+        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
+    }
+) as server:
+    tools = await server.list_tools()
+```
+
+## Using MCP servers
+
+MCP servers can be added to Agents. The Agents SDK will call `list_tools()` on the MCP servers each time the Agent is run. This makes the LLM aware of the MCP server's tools. When the LLM calls a tool from an MCP server, the SDK calls `call_tool()` on that server.
+
+```python
+
+agent=Agent(
+    name="Assistant",
+    instructions="Use the tools to achieve the task",
+    mcp_servers=[mcp_server_1, mcp_server_2]
+)
+```
+
+## Caching
+
+Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. You should only do this if you're certain the tool list will not change.
+
+If you want to invalidate the cache, you can call `invalidate_tools_cache()` on the servers.
+
+## End-to-end example
+
+View complete working examples at [examples/mcp](https://github.com/openai/openai-agents-python/tree/main/examples/mcp).
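A short sketch of the caching workflow described above (illustrative only, not part of the diff; `samples_dir` is a placeholder path, and the sketch assumes `cache_tools_list` and `invalidate_tools_cache()` work as the new doc states):

```python
from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def run_with_cached_tools(samples_dir: str) -> None:
    # cache_tools_list=True: list_tools() is fetched once and reused across runs.
    async with MCPServerStdio(
        cache_tools_list=True,
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        },
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the filesystem tools to answer questions.",
            mcp_servers=[server],
        )
        await Runner.run(agent, input="List the files you can see.")

        # If the server's tool set changes mid-session, drop the cached list.
        server.invalidate_tools_cache()
        await Runner.run(agent, input="List the files again.")
```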
docs/tracing.md
@@ -35,6 +35,9 @@ By default, the SDK traces the following:
 - Function tool calls are each wrapped in `function_span()`
 - Guardrails are wrapped in `guardrail_span()`
 - Handoffs are wrapped in `handoff_span()`
+- Audio inputs (speech-to-text) are wrapped in a `transcription_span()`
+- Audio outputs (text-to-speech) are wrapped in a `speech_span()`
+- Related audio spans may be parented under a `speech_group_span()`

 By default, the trace is named "Agent trace". You can set this name if you use `trace`, or you can can configure the name and other properties with the [`RunConfig`][agents.run.RunConfig].

@@ -76,7 +79,11 @@ Spans are automatically part of the current trace, and are nested under the near

 ## Sensitive data

-
+Certain spans may capture potentially sensitive data.
+
+The `generation_span()` stores the inputs/outputs of the LLM generation, and `function_span()` stores the inputs/outputs of function calls. These may contain sensitive data, so you can disable capturing that data via [`RunConfig.trace_include_sensitive_data`][agents.run.RunConfig.trace_include_sensitive_data].
+
+Similarly, Audio spans include base64-encoded PCM data for input and output audio by default. You can disable capturing this audio data by configuring [`VoicePipelineConfig.trace_include_sensitive_audio_data`][agents.voice.pipeline_config.VoicePipelineConfig.trace_include_sensitive_audio_data].

 ## Custom tracing processors

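The two opt-outs this hunk documents map to run-level and pipeline-level configuration; a hedged sketch (not part of the diff, assuming `RunConfig` and `VoicePipelineConfig` accept these fields as keyword arguments):

```python
from agents import Agent, RunConfig, Runner
from agents.voice import VoicePipeline, VoicePipelineConfig, VoiceWorkflowBase


async def run_without_sensitive_payloads(workflow: VoiceWorkflowBase) -> None:
    agent = Agent(name="Assistant", instructions="Be concise.")

    # Keep LLM generation and function-call payloads out of their spans.
    run_config = RunConfig(trace_include_sensitive_data=False)
    await Runner.run(agent, input="Hello", run_config=run_config)

    # Keep base64-encoded PCM audio out of the spans a voice pipeline emits.
    config = VoicePipelineConfig(trace_include_sensitive_audio_data=False)
    pipeline = VoicePipeline(workflow=workflow, config=config)
    # ...then run the pipeline with AudioInput or StreamedAudioInput as usual.
```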
docs/tracing.md
@@ -92,6 +99,7 @@ To customize this default setup, to send traces to alternative or additional bac

 ## External tracing processors list

+- [Weights & Biases](https://weave-docs.wandb.ai/guides/integrations/openai_agents)
 - [Arize-Phoenix](https://docs.arize.com/phoenix/tracing/integrations-tracing/openai-agents-sdk)
 - [MLflow](https://mlflow.org/docs/latest/tracing/integrations/openai-agent)
 - [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk)
@@ -102,3 +110,4 @@ To customize this default setup, to send traces to alternative or additional bac
 - [LangSmith](https://docs.smith.langchain.com/observability/how_to_guides/trace_with_openai_agents_sdk)
 - [Maxim AI](https://www.getmaxim.ai/docs/observe/integrations/openai-agents-sdk)
 - [Comet Opik](https://www.comet.com/docs/opik/tracing/integrations/openai_agents)
+- [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
docs/visualization.md (new file)
@@ -0,0 +1,86 @@
+# Agent Visualization
+
+Agent visualization allows you to generate a structured graphical representation of agents and their relationships using **Graphviz**. This is useful for understanding how agents, tools, and handoffs interact within an application.
+
+## Installation
+
+Install the optional `viz` dependency group:
+
+```bash
+pip install "openai-agents[viz]"
+```
+
+## Generating a Graph
+
+You can generate an agent visualization using the `draw_graph` function. This function creates a directed graph where:
+
+- **Agents** are represented as yellow boxes.
+- **Tools** are represented as green ellipses.
+- **Handoffs** are directed edges from one agent to another.
+
+### Example Usage
+
+```python
+from agents import Agent, function_tool
+from agents.extensions.visualization import draw_graph
+
+@function_tool
+def get_weather(city: str) -> str:
+    return f"The weather in {city} is sunny."
+
+spanish_agent = Agent(
+    name="Spanish agent",
+    instructions="You only speak Spanish.",
+)
+
+english_agent = Agent(
+    name="English agent",
+    instructions="You only speak English",
+)
+
+triage_agent = Agent(
+    name="Triage agent",
+    instructions="Handoff to the appropriate agent based on the language of the request.",
+    handoffs=[spanish_agent, english_agent],
+    tools=[get_weather],
+)
+
+draw_graph(triage_agent)
+```
+
+
+
+This generates a graph that visually represents the structure of the **triage agent** and its connections to sub-agents and tools.
+
+
+## Understanding the Visualization
+
+The generated graph includes:
+
+- A **start node** (`__start__`) indicating the entry point.
+- Agents represented as **rectangles** with yellow fill.
+- Tools represented as **ellipses** with green fill.
+- Directed edges indicating interactions:
+  - **Solid arrows** for agent-to-agent handoffs.
+  - **Dotted arrows** for tool invocations.
+- An **end node** (`__end__`) indicating where execution terminates.
+
+## Customizing the Graph
+
+### Showing the Graph
+By default, `draw_graph` displays the graph inline. To show the graph in a separate window, write the following:
+
+```python
+draw_graph(triage_agent).view()
+```
+
+### Saving the Graph
+By default, `draw_graph` displays the graph inline. To save it as a file, specify a filename:
+
+```python
+draw_graph(triage_agent, filename="agent_graph.png")
+```
+
+This will generate `agent_graph.png` in the working directory.
+
+
docs/voice/pipeline.md (new file)
@@ -0,0 +1,75 @@
+# Pipelines and workflows
+
+[`VoicePipeline`][agents.voice.pipeline.VoicePipeline] is a class that makes it easy to turn your agentic workflows into a voice app. You pass in a workflow to run, and the pipeline takes care of transcribing input audio, detecting when the audio ends, calling your workflow at the right time, and turning the workflow output back into audio.
+
+```mermaid
+graph LR
+    %% Input
+    A["🎤 Audio Input"]
+
+    %% Voice Pipeline
+    subgraph Voice_Pipeline [Voice Pipeline]
+        direction TB
+        B["Transcribe (speech-to-text)"]
+        C["Your Code"]:::highlight
+        D["Text-to-speech"]
+        B --> C --> D
+    end
+
+    %% Output
+    E["🎧 Audio Output"]
+
+    %% Flow
+    A --> Voice_Pipeline
+    Voice_Pipeline --> E
+
+    %% Custom styling
+    classDef highlight fill:#ffcc66,stroke:#333,stroke-width:1px,font-weight:700;
+
+```
+
+## Configuring a pipeline
+
+When you create a pipeline, you can set a few things:
+
+1. The [`workflow`][agents.voice.workflow.VoiceWorkflowBase], which is the code that runs each time new audio is transcribed.
+2. The [`speech-to-text`][agents.voice.model.STTModel] and [`text-to-speech`][agents.voice.model.TTSModel] models used
+3. The [`config`][agents.voice.pipeline_config.VoicePipelineConfig], which lets you configure things like:
+    - A model provider, which can map model names to models
+    - Tracing, including whether to disable tracing, whether audio files are uploaded, the workflow name, trace IDs etc.
+    - Settings on the TTS and STT models, like the prompt, language and data types used.
+
+## Running a pipeline
+
+You can run a pipeline via the [`run()`][agents.voice.pipeline.VoicePipeline.run] method, which lets you pass in audio input in two forms:
+
+1. [`AudioInput`][agents.voice.input.AudioInput] is used when you have a full audio transcript, and just want to produce a result for it. This is useful in cases where you don't need to detect when a speaker is done speaking; for example, when you have pre-recorded audio or in push-to-talk apps where it's clear when the user is done speaking.
+2. [`StreamedAudioInput`][agents.voice.input.StreamedAudioInput] is used when you might need to detect when a user is done speaking. It allows you to push audio chunks as they are detected, and the voice pipeline will automatically run the agent workflow at the right time, via a process called "activity detection".
+
+## Results
+
+The result of a voice pipeline run is a [`StreamedAudioResult`][agents.voice.result.StreamedAudioResult]. This is an object that lets you stream events as they occur. There are a few kinds of [`VoiceStreamEvent`][agents.voice.events.VoiceStreamEvent], including:
+
+1. [`VoiceStreamEventAudio`][agents.voice.events.VoiceStreamEventAudio], which contains a chunk of audio.
+2. [`VoiceStreamEventLifecycle`][agents.voice.events.VoiceStreamEventLifecycle], which informs you of lifecycle events like a turn starting or ending.
+3. [`VoiceStreamEventError`][agents.voice.events.VoiceStreamEventError], is an error event.
+
+```python
+
+result = await pipeline.run(input)
+
+async for event in result.stream():
+    if event.type == "voice_stream_event_audio":
+        # play audio
+    elif event.type == "voice_stream_event_lifecycle":
+        # lifecycle
+    elif event.type == "voice_stream_event_error"
+        # error
+    ...
+```
+
+## Best practices
+
+### Interruptions
+
+The Agents SDK currently does not support any built-in interruptions support for [`StreamedAudioInput`][agents.voice.input.StreamedAudioInput]. Instead for every detected turn it will trigger a separate run of your workflow. If you want to handle interruptions inside your application you can listen to the [`VoiceStreamEventLifecycle`][agents.voice.events.VoiceStreamEventLifecycle] events. `turn_started` will indicate that a new turn was transcribed and processing is beginning. `turn_ended` will trigger after all the audio was dispatched for a respective turn. You could use these events to mute the microphone of the speaker when the model starts a turn and unmute it after you flushed all the related audio for a turn.
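Tying the new pipeline doc together, a hedged end-to-end sketch (not part of the diff; it assumes `SingleAgentVoiceWorkflow` and `AudioInput` from `agents.voice`, and uses a silent numpy buffer purely as stand-in audio):

```python
import asyncio

import numpy as np

from agents import Agent
from agents.voice import AudioInput, SingleAgentVoiceWorkflow, VoicePipeline

agent = Agent(
    name="Assistant",
    instructions="You are a helpful voice assistant. Keep answers short.",
)


async def main() -> None:
    # Wrap a single agent as the pipeline's workflow.
    pipeline = VoicePipeline(workflow=SingleAgentVoiceWorkflow(agent))

    # Three seconds of silence at 24 kHz, standing in for push-to-talk audio.
    buffer = np.zeros(24000 * 3, dtype=np.int16)
    result = await pipeline.run(AudioInput(buffer=buffer))

    async for event in result.stream():
        if event.type == "voice_stream_event_audio":
            ...  # hand event.data to your audio player
        elif event.type == "voice_stream_event_lifecycle":
            print("lifecycle:", event.event)


if __name__ == "__main__":
    asyncio.run(main())
```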
|