deepset-mcp 0.0.5rc1.tar.gz → 0.0.7.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/.github/workflows/ci.yml +1 -1
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/.github/workflows/docker_push.yml +1 -1
- deepset_mcp-0.0.7/.github/workflows/publish_docs.yml +41 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/.github/workflows/pypi_release.yml +1 -1
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/.gitignore +1 -0
- deepset_mcp-0.0.7/PKG-INFO +100 -0
- deepset_mcp-0.0.7/README.md +63 -0
- deepset_mcp-0.0.7/docs/concepts/mcp_server_concepts.md +176 -0
- deepset_mcp-0.0.7/docs/concepts/sdk_concepts.md +48 -0
- deepset_mcp-0.0.7/docs/guides/api_sdk.md +399 -0
- deepset_mcp-0.0.7/docs/guides/mcp_server.md +995 -0
- deepset_mcp-0.0.7/docs/index.md +86 -0
- deepset_mcp-0.0.7/docs/installation.md +168 -0
- deepset_mcp-0.0.7/docs/reference/api_sdk_reference.md +129 -0
- deepset_mcp-0.0.7/docs/reference/mcp_reference.md +15 -0
- deepset_mcp-0.0.7/docs/reference/tool_reference.md +7 -0
- deepset_mcp-0.0.7/mkdocs.yml +57 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/pyproject.toml +7 -1
- deepset_mcp-0.0.7/src/deepset_mcp/__init__.py +9 -0
- deepset_mcp-0.0.7/src/deepset_mcp/api/__init__.py +7 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/client.py +126 -107
- deepset_mcp-0.0.7/src/deepset_mcp/api/custom_components/__init__.py +7 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/custom_components/models.py +7 -8
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/custom_components/protocols.py +4 -3
- deepset_mcp-0.0.7/src/deepset_mcp/api/custom_components/resource.py +86 -0
- deepset_mcp-0.0.7/src/deepset_mcp/api/haystack_service/__init__.py +7 -0
- deepset_mcp-0.0.7/src/deepset_mcp/api/haystack_service/protocols.py +38 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/haystack_service/resource.py +46 -0
- deepset_mcp-0.0.7/src/deepset_mcp/api/indexes/__init__.py +7 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/indexes/models.py +23 -11
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/indexes/protocols.py +13 -4
- deepset_mcp-0.0.7/src/deepset_mcp/api/indexes/resource.py +206 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/integrations/__init__.py +4 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/integrations/models.py +4 -13
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/integrations/protocols.py +3 -3
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/integrations/resource.py +5 -5
- deepset_mcp-0.0.7/src/deepset_mcp/api/pipeline/__init__.py +7 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/pipeline/models.py +66 -28
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/pipeline/protocols.py +6 -10
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/pipeline/resource.py +101 -58
- deepset_mcp-0.0.7/src/deepset_mcp/api/pipeline_template/__init__.py +7 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/pipeline_template/models.py +12 -23
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/pipeline_template/protocols.py +11 -5
- deepset_mcp-0.0.7/src/deepset_mcp/api/pipeline_template/resource.py +104 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/protocols.py +13 -11
- deepset_mcp-0.0.7/src/deepset_mcp/api/secrets/__init__.py +7 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/secrets/models.py +2 -8
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/secrets/protocols.py +4 -3
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/secrets/resource.py +32 -7
- deepset_mcp-0.0.7/src/deepset_mcp/api/shared_models.py +131 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/transport.py +30 -58
- deepset_mcp-0.0.7/src/deepset_mcp/api/user/__init__.py +7 -0
- deepset_mcp-0.0.7/src/deepset_mcp/api/workspace/__init__.py +9 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/workspace/models.py +4 -8
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/workspace/protocols.py +3 -3
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/workspace/resource.py +5 -9
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/main.py +5 -20
- deepset_mcp-0.0.7/src/deepset_mcp/mcp/__init__.py +10 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp → deepset_mcp-0.0.7/src/deepset_mcp/mcp}/server.py +8 -18
- {deepset_mcp-0.0.5rc1/src/deepset_mcp → deepset_mcp-0.0.7/src/deepset_mcp/mcp}/store.py +3 -3
- {deepset_mcp-0.0.5rc1/src/deepset_mcp → deepset_mcp-0.0.7/src/deepset_mcp/mcp}/tool_factory.py +21 -38
- deepset_mcp-0.0.7/src/deepset_mcp/mcp/tool_models.py +57 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp → deepset_mcp-0.0.7/src/deepset_mcp/mcp}/tool_registry.py +16 -6
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/tools → deepset_mcp-0.0.7/src/deepset_mcp}/tokonomics/__init__.py +3 -1
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/tools → deepset_mcp-0.0.7/src/deepset_mcp}/tokonomics/decorators.py +2 -2
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/tools → deepset_mcp-0.0.7/src/deepset_mcp}/tokonomics/explorer.py +1 -1
- deepset_mcp-0.0.7/src/deepset_mcp/tools/__init__.py +62 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/custom_components.py +7 -4
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/haystack_service.py +64 -22
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/haystack_service_models.py +40 -0
- deepset_mcp-0.0.7/src/deepset_mcp/tools/indexes.py +232 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/object_store.py +1 -1
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/pipeline.py +40 -10
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/pipeline_template.py +35 -18
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/secrets.py +29 -13
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/workspace.py +2 -2
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_haystack_service_resource.py +35 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_index_resource.py +104 -21
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_pipeline_logs.py +77 -6
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_pipeline_resource.py +42 -24
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_pipeline_template_resource.py +86 -18
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_secret_resource.py +64 -8
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_workspace_resource.py +4 -6
- {deepset_mcp-0.0.5rc1/test/integration/tools → deepset_mcp-0.0.7/test/integration}/tokonomics/test_integration_tokonomics.py +3 -4
- deepset_mcp-0.0.7/test/integration/tools/test_integration_haystack_service.py +207 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/custom_components/test_custom_components_resource.py +8 -14
- deepset_mcp-0.0.7/test/unit/api/haystack_service/test_haystack_service_resource.py +291 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/indexes/test_index_resource.py +249 -8
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/integrations/test_integration_resource.py +12 -14
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/pipeline/test_pipeline_resource.py +245 -54
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/pipeline_template/test_pipeline_template_resource.py +176 -39
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/secrets/test_secret_resource.py +80 -5
- deepset_mcp-0.0.7/test/unit/api/test_shared_models.py +315 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/workspace/test_workspace_resource.py +15 -21
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/test_server_base_url.py +3 -6
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/test_store.py +14 -14
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/test_tool_factory.py +28 -30
- {deepset_mcp-0.0.5rc1/test/unit/tools → deepset_mcp-0.0.7/test/unit}/tokonomics/test_decorators.py +2 -3
- {deepset_mcp-0.0.5rc1/test/unit/tools → deepset_mcp-0.0.7/test/unit}/tokonomics/test_explorer.py +1 -2
- {deepset_mcp-0.0.5rc1/test/unit/tools → deepset_mcp-0.0.7/test/unit}/tokonomics/test_integration.py +2 -2
- {deepset_mcp-0.0.5rc1/test/unit/tools → deepset_mcp-0.0.7/test/unit}/tokonomics/test_object_store.py +1 -1
- {deepset_mcp-0.0.5rc1/test/unit/tools → deepset_mcp-0.0.7/test/unit}/tokonomics/test_object_store_backends.py +1 -1
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_custom_components.py +71 -11
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_doc_search.py +8 -6
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_haystack_service.py +100 -0
- deepset_mcp-0.0.7/test/unit/tools/test_indexes.py +709 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_pipeline.py +60 -18
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_pipeline_template.py +24 -20
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_secrets.py +20 -17
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/tools/test_workspace.py +9 -13
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/uv.lock +286 -0
- deepset_mcp-0.0.5rc1/PKG-INFO +0 -807
- deepset_mcp-0.0.5rc1/README.md +0 -770
- deepset_mcp-0.0.5rc1/src/deepset_mcp/__init__.py +0 -10
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/README.md +0 -536
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/custom_components/resource.py +0 -60
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/haystack_service/protocols.py +0 -17
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/indexes/resource.py +0 -142
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/pipeline/__init__.py +0 -21
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/pipeline/log_level.py +0 -13
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/pipeline_template/resource.py +0 -92
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/shared_models.py +0 -21
- deepset_mcp-0.0.5rc1/src/deepset_mcp/api/workspace/__init__.py +0 -11
- deepset_mcp-0.0.5rc1/src/deepset_mcp/tool_models.py +0 -42
- deepset_mcp-0.0.5rc1/src/deepset_mcp/tools/indexes.py +0 -133
- deepset_mcp-0.0.5rc1/test/unit/api/custom_components/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/api/haystack_service/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/api/haystack_service/test_haystack_service_resource.py +0 -117
- deepset_mcp-0.0.5rc1/test/unit/api/indexes/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/api/pipeline/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/api/pipeline_template/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/api/user/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/tools/__init__.py +0 -4
- deepset_mcp-0.0.5rc1/test/unit/tools/test_indexes.py +0 -433
- deepset_mcp-0.0.5rc1/test/unit/tools/tokonomics/__init__.py +0 -4
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/.github/workflows/ai_agent.yml +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/.python-version +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/Dockerfile +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/LICENSE +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/Makefile +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/REPO.md +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/assets/claude_desktop_projects.png +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/assets/claude_desktop_with_tools.png +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/assets/deepset-mcp-3.gif +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/entrypoint.sh +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/license-header.txt +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/licenserc.toml +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/exceptions.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/user/protocols.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/api/user/resource.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/config.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/initialize_embedding_model.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/prompts/deepset_copilot_prompt.md +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/prompts/deepset_debugging_agent.md +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/py.typed +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/tools → deepset_mcp-0.0.7/src/deepset_mcp}/tokonomics/object_store.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/doc_search.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/src/deepset_mcp/tools/model_protocol.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api → deepset_mcp-0.0.7/test}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api/custom_components → deepset_mcp-0.0.7/test/integration}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/conftest.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/integration/test_integration_integrations_resource.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api/haystack_service → deepset_mcp-0.0.7/test/integration/tokonomics}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api/indexes → deepset_mcp-0.0.7/test/integration/tools}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api/pipeline_template → deepset_mcp-0.0.7/test/unit}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api/secrets → deepset_mcp-0.0.7/test/unit/api}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/api/user → deepset_mcp-0.0.7/test/unit/api/custom_components}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/src/deepset_mcp/tools → deepset_mcp-0.0.7/test/unit/api/haystack_service}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/test → deepset_mcp-0.0.7/test/unit/api/indexes}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/integrations/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/test/integration → deepset_mcp-0.0.7/test/unit/api/pipeline}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/pipeline/test_pipeline_resource_search.py +0 -0
- {deepset_mcp-0.0.5rc1/test/integration/tools → deepset_mcp-0.0.7/test/unit/api/pipeline_template}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/test_transport.py +0 -0
- {deepset_mcp-0.0.5rc1/test/integration/tools/tokonomics → deepset_mcp-0.0.7/test/unit/api/user}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/user/test_user_resource.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/api/workspace/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/conftest.py +0 -0
- {deepset_mcp-0.0.5rc1 → deepset_mcp-0.0.7}/test/unit/test_async_deepset_client.py +0 -0
- {deepset_mcp-0.0.5rc1/test/unit → deepset_mcp-0.0.7/test/unit/tokonomics}/__init__.py +0 -0
- {deepset_mcp-0.0.5rc1/test/unit/api → deepset_mcp-0.0.7/test/unit/tools}/__init__.py +0 -0
deepset_mcp-0.0.7/.github/workflows/publish_docs.yml

@@ -0,0 +1,41 @@
+name: Deploy Documentation
+on:
+  push:
+    branches: [main]
+  workflow_dispatch:
+
+permissions:
+  contents: read
+  pages: write
+  id-token: write
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    environment:
+      name: github-pages
+      url: ${{ steps.deployment.outputs.page_url }}
+    steps:
+      - uses: actions/checkout@v5
+
+      - uses: actions/setup-python@v5
+        with:
+          python-version: '3.11'
+
+      - name: Install uv
+        uses: astral-sh/setup-uv@v5
+
+      - name: Install project and docs dependencies
+        run: make install
+
+      - name: Build documentation
+        run: uv run mkdocs build
+
+      - uses: actions/configure-pages@v5
+
+      - uses: actions/upload-pages-artifact@v3
+        with:
+          path: ./site
+
+      - id: deployment
+        uses: actions/deploy-pages@v4
deepset_mcp-0.0.7/PKG-INFO

@@ -0,0 +1,100 @@
+Metadata-Version: 2.4
+Name: deepset-mcp
+Version: 0.0.7
+Summary: Collection of MCP tools and Agents to work with the deepset AI platform. Create, debug or learn about pipelines on the platform. Useable from the CLI, Cursor, Claude Code, or other MCP clients.
+Project-URL: Homepage, https://deepset.ai
+Author-email: Mathis Lucka <mathis.lucka@deepset.ai>, Tanay Soni <tanay.soni@deepset.ai>
+License-Expression: Apache-2.0
+License-File: LICENSE
+Keywords: Agents,Haystack,LLM,MCP,deepset,pipelines
+Classifier: Development Status :: 4 - Beta
+Classifier: Intended Audience :: Developers
+Classifier: License :: Freely Distributable
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.11
+Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
+Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+Requires-Python: >=3.11
+Requires-Dist: fastapi
+Requires-Dist: glom
+Requires-Dist: httpx
+Requires-Dist: mcp>=1.10.1
+Requires-Dist: model2vec
+Requires-Dist: numpy
+Requires-Dist: orjson
+Requires-Dist: pydantic>=2.0.0
+Requires-Dist: pyjwt[crypto]
+Requires-Dist: pyyaml
+Requires-Dist: rich
+Requires-Dist: typer
+Provides-Extra: redis
+Requires-Dist: redis>=4.0.0; extra == 'redis'
+Description-Content-Type: text/markdown
+
+# deepset-mcp
+
+**The official MCP server and Python SDK for the deepset AI platform**
+
+deepset-mcp enables AI agents to build and debug pipelines on the [deepset AI platform](https://www.deepset.ai/products-and-services/deepset-ai-platform) through 30+ specialized tools. It also provides a Python SDK for programmatic access to many platform resources.
+
+## Documentation
+
+📖 **[View the full documentation](https://deepset-ai.github.io/deepset-mcp-server/)**
+
+## Quick Links
+
+- 🔗 **[deepset AI Platform](https://www.deepset.ai/products-and-services/deepset-ai-platform)**
+- 📚 **[Installation Guide](https://deepset-ai.github.io/deepset-mcp-server/installation/)**
+- 🛠️ **[MCP Server Guide](https://deepset-ai.github.io/deepset-mcp-server/guides/mcp_server/)**
+- 🐍 **[Python SDK Guide](https://deepset-ai.github.io/deepset-mcp-server/guides/api_sdk/)**
+
+## Development
+
+### Installation
+
+Install the project using [uv](https://docs.astral.sh/uv/):
+
+```bash
+# Install uv first
+pipx install uv
+
+# Install project with all dependencies
+uv sync --locked --all-extras --all-groups
+```
+
+### Code Quality & Testing
+
+Run code quality checks and tests using the Makefile:
+
+```bash
+# Install dependencies
+make install
+
+# Code quality
+make lint # Run ruff linting
+make format # Format code with ruff
+make types # Run mypy type checking
+
+# Testing
+make test # Run unit tests (default)
+make test-unit # Run unit tests only
+make test-integration # Run integration tests
+make test-all # Run all tests
+
+# Clean up
+make clean # Remove cache files
+```
+
+### Documentation
+
+Documentation is built using [MkDocs](https://www.mkdocs.org/) with the Material theme:
+
+- Configuration: `mkdocs.yml`
+- Content: `docs/` directory
+- Auto-generated API docs via [mkdocstrings](https://mkdocstrings.github.io/)
+- Deployed via GitHub Pages (automated via GitHub Actions on push to main branch)
+
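The PKG-INFO block above uses RFC 822-style headers, so it can be read with Python's standard library email parser. A minimal sketch, with a few header lines copied from the metadata above:

```python
from email.parser import HeaderParser

# A few of the metadata headers from the PKG-INFO above.
raw = (
    "Metadata-Version: 2.4\n"
    "Name: deepset-mcp\n"
    "Version: 0.0.7\n"
    "Requires-Python: >=3.11\n"
    "Requires-Dist: httpx\n"
    "Requires-Dist: pydantic>=2.0.0\n"
)

meta = HeaderParser().parsestr(raw)
print(meta["Name"], meta["Version"])  # deepset-mcp 0.0.7

# Repeated headers such as Requires-Dist are collected with get_all().
print(meta.get_all("Requires-Dist"))  # ['httpx', 'pydantic>=2.0.0']
```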
deepset_mcp-0.0.7/README.md

@@ -0,0 +1,63 @@
+# deepset-mcp
+
+**The official MCP server and Python SDK for the deepset AI platform**
+
+deepset-mcp enables AI agents to build and debug pipelines on the [deepset AI platform](https://www.deepset.ai/products-and-services/deepset-ai-platform) through 30+ specialized tools. It also provides a Python SDK for programmatic access to many platform resources.
+
+## Documentation
+
+📖 **[View the full documentation](https://deepset-ai.github.io/deepset-mcp-server/)**
+
+## Quick Links
+
+- 🔗 **[deepset AI Platform](https://www.deepset.ai/products-and-services/deepset-ai-platform)**
+- 📚 **[Installation Guide](https://deepset-ai.github.io/deepset-mcp-server/installation/)**
+- 🛠️ **[MCP Server Guide](https://deepset-ai.github.io/deepset-mcp-server/guides/mcp_server/)**
+- 🐍 **[Python SDK Guide](https://deepset-ai.github.io/deepset-mcp-server/guides/api_sdk/)**
+
+## Development
+
+### Installation
+
+Install the project using [uv](https://docs.astral.sh/uv/):
+
+```bash
+# Install uv first
+pipx install uv
+
+# Install project with all dependencies
+uv sync --locked --all-extras --all-groups
+```
+
+### Code Quality & Testing
+
+Run code quality checks and tests using the Makefile:
+
+```bash
+# Install dependencies
+make install
+
+# Code quality
+make lint # Run ruff linting
+make format # Format code with ruff
+make types # Run mypy type checking
+
+# Testing
+make test # Run unit tests (default)
+make test-unit # Run unit tests only
+make test-integration # Run integration tests
+make test-all # Run all tests
+
+# Clean up
+make clean # Remove cache files
+```
+
+### Documentation
+
+Documentation is built using [MkDocs](https://www.mkdocs.org/) with the Material theme:
+
+- Configuration: `mkdocs.yml`
+- Content: `docs/` directory
+- Auto-generated API docs via [mkdocstrings](https://mkdocstrings.github.io/)
+- Deployed via GitHub Pages (automated via GitHub Actions on push to main branch)
+
deepset_mcp-0.0.7/docs/concepts/mcp_server_concepts.md

@@ -0,0 +1,176 @@
+# MCP Server Concepts
+
+This section explains key concepts that enable efficient AI tool orchestration between the deepset AI platform and various clients. Understanding these concepts helps you grasp why certain design decisions were made and how different components work together to create effective AI workflows.
+
+## deepset AI Platform
+
+The [deepset AI platform](https://www.deepset.ai/products-and-services/deepset-ai-platform) is a Software-as-a-Service solution for building and managing Large Language Model applications throughout their entire lifecycle. It serves as the foundation for creating AI-powered search and question-answering systems.
+
+**Pipeline-Based Architecture**: The platform organizes AI functionality into pipelines—modular building blocks that can be mixed, matched, and replaced to form various configurations. Components like retrievers, generators, and processors connect together to create complete AI workflows. This flexibility allows you to customize behavior for different use cases while maintaining a consistent development experience.
+
+**Model-Agnostic Design**: You can use all major LLMs in your applications without being locked into a specific vendor. The platform abstracts model differences, letting you switch between providers like OpenAI, Anthropic, or open-source models without rewriting your pipeline logic.
+
+**Comprehensive Workflow Support**: The platform handles the entire AI application lifecycle—from data preprocessing and pipeline creation through evaluation, prototyping, deployment, and monitoring. This eliminates the need to stitch together separate tools for different development phases.
+
+**Workspace Organization**: Multiple workspaces keep data and pipelines separate within an organization. Each workspace maintains its own indexes, files, and pipeline configurations, enabling clean separation between development, staging, and production environments.
+
+The platform builds on Haystack, the open-source Python framework, providing a production-ready implementation with additional deepset-specific components and enterprise features.
+
+## Model Context Protocol (MCP)
+
+The [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) is an open standard that enables Large Language Models to connect with external data sources and tools in a consistent way. Think of MCP as a universal adapter that lets AI applications access diverse systems without custom integration code for each connection.
+
+**Standardized Communication**: MCP defines how AI applications request information from external systems and how those systems respond. This standardization means that once you implement MCP support, your tools work with any MCP-compatible AI application—whether that's Claude Desktop, Cursor, or custom agents.
+
+**Three-Layer Architecture**: MCP operates through three distinct components:
+
+- **MCP Host**: The AI application (like Claude Desktop) that manages user interactions and orchestrates tool usage
+
+- **MCP Client**: The protocol component that maintains connections to servers and handles communication
+
+- **MCP Server**: Programs that expose specific capabilities like file access, database queries, or API interactions
+
+**Capability Exchange**: Servers expose three types of capabilities to AI applications:
+
+- **Tools**: Functions the AI can execute (like searching pipelines or creating indexes)
+
+- **Resources**: Data sources the AI can read (like configuration files or documentation)
+
+- **Prompts**: Template interactions that guide AI behavior
+
+**Transport Flexibility**: MCP supports different communication methods—stdio transport for local processes and HTTP transport for remote services. This flexibility enables both desktop integrations and cloud-based deployments.
+
+The protocol emphasizes human oversight and control, requiring user approval for tool executions while enabling sophisticated AI workflows across multiple specialized servers.
+
+## Integrating deepset Platform with MCP Clients
+
+MCP clients like Cursor, Claude Desktop, and Claude Code can connect to deepset platform capabilities through the deepset MCP server. This integration transforms how AI assistants interact with your search and AI pipeline infrastructure.
+
+**Client-Side Configuration**: MCP clients require configuration files that specify how to connect to the deepset MCP server. These configurations include the execution command (typically `uvx deepset-mcp`), environment variables for authentication, and workspace settings. The client handles launching the server process and managing the connection lifecycle.
+
+**Authentication Flow**: The integration supports both static API keys (set once in configuration) and dynamic authentication (extracted from request headers). Static authentication works well for single-user scenarios, while dynamic authentication enables multi-user deployments where different users access different deepset workspaces.
+
+**Tool Discovery**: Once connected, MCP clients automatically discover available deepset tools—pipeline management, search operations, file uploads, and index creation. The client presents these tools to the AI assistant, which can then reason about when and how to use them based on user requests.
+
+**Context Sharing**: The MCP protocol enables efficient sharing of complex data structures between tools. When a pipeline search returns large results, those results are stored in an object store rather than passed directly through the conversation. This approach prevents context window overflow while enabling sophisticated multi-step workflows.
+
+**Workspace Flexibility**: Clients can be configured with a default workspace for all operations, or they can operate in dynamic mode where the AI assistant specifies the workspace for each tool invocation. This flexibility supports both focused single-project work and multi-environment management.
+
+The integration creates a seamless experience where AI assistants can naturally work with your deepset platform resources, turning conversational requests like "search our documentation for deployment guides" into actual pipeline executions.
+
+View our [installation guides](../installation.md) to set up the deepset MCP server with various MCP clients.
+
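The client-side configuration the docs describe typically looks like the following sketch, shown in the `mcpServers` format used by Claude Desktop. The environment variable names here are assumptions for illustration; check the package's installation guide for the exact names this server expects.

```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": ["deepset-mcp"],
      "env": {
        "DEEPSET_API_KEY": "<your-api-key>",
        "DEEPSET_WORKSPACE": "<your-workspace>"
      }
    }
  }
}
```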
|
63
|
+
## Use with Custom Agents

Beyond MCP clients, the deepset MCP server tools can be used directly by custom AI agents that implement MCP client functionality. This approach enables building specialized AI applications that deeply integrate with deepset platform capabilities.

**Agent-Tool Interface**: AI agents can consume deepset MCP tools through the same protocol used by desktop clients. The tools are exposed as callable functions with typed parameters and structured return values, making them natural building blocks for agent workflows.

**Haystack Agent Integration**: Haystack provides a built-in MCPToolset that dynamically discovers and loads MCP server tools. This integration enables Haystack agents to use deepset platform capabilities alongside other tools in their workflows. The agent can reason about which tools to use, execute searches, analyze results, and take follow-up actions all within a single conversation.

**Custom Tool Orchestration**: When building custom agents, you can combine deepset MCP tools with other capabilities—web search, document processing, code execution, or domain-specific APIs. This combination creates powerful AI assistants that can bridge multiple systems while maintaining access to your deepset platform resources.

**Reference-Based Workflows**: The object store concept becomes particularly powerful in agent scenarios. Agents can chain operations together efficiently—search for documents, analyze the results, extract insights, and create new pipelines—all without re-transmitting large data structures between tool calls.

**Production Deployment**: Custom agents using deepset MCP tools can be deployed as remote services using HTTP transport. This enables building multi-user AI applications where different users access their own deepset workspaces through the same agent interface.

The flexibility of this approach means you can create everything from simple automation scripts that manage pipeline deployments to sophisticated AI assistants that help users explore and analyze their knowledge bases through natural conversation.

View [our guide](../guides/mcp_server.md#how-to-use-deepset-mcp-with-a-custom-haystack-agent) on how to integrate the deepset MCP package with a custom Haystack agent.

## Object Store

The Object Store is a key-value storage system that temporarily holds Python objects returned by tools. It addresses two critical challenges in AI tool orchestration:

**Context Window Management**: Large tool outputs can overwhelm the LLM's context window, making it impossible to process results effectively. The Object Store prevents this by storing complete objects separately from the conversation context.

**Cost and Performance Optimization**: When tools need to use outputs from previous tools, the LLM would normally regenerate that data, leading to increased costs and potential inconsistencies. The Object Store eliminates this redundancy by allowing direct data reuse.

### Storage Backends

The Object Store supports two backend implementations:

**In-Memory Backend** (`InMemoryBackend`): Stores objects in server memory with counter-based IDs (e.g., `obj_001`). Suitable for single-server deployments and development environments.

**Redis Backend** (`RedisBackend`): Uses Redis for distributed storage with UUID-based IDs (e.g., `obj_a7f3b2c1`). Required for multi-server deployments and production environments where persistence and scalability matter.

Both backends support configurable time-to-live (TTL) values to automatically clean up expired objects, preventing memory leaks in long-running deployments.
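
As an illustration of the counter-based IDs and TTL expiry described above, here is a minimal in-memory sketch (the class name mirrors the backend, but the fields and methods are simplified assumptions, not the package's actual implementation):

```python
import itertools
import time


class InMemoryBackend:
    """Minimal sketch: counter-based object IDs plus lazy TTL eviction."""

    def __init__(self, ttl_seconds: float = 600.0) -> None:
        self._ttl = ttl_seconds
        self._store: dict[str, tuple[object, float]] = {}
        self._counter = itertools.count(1)

    def put(self, obj: object) -> str:
        obj_id = f"obj_{next(self._counter):03d}"  # counter-based IDs, e.g. obj_001
        self._store[obj_id] = (obj, time.monotonic() + self._ttl)
        return obj_id

    def get(self, obj_id: str):
        entry = self._store.get(obj_id)
        if entry is None:
            return None
        obj, expires_at = entry
        if time.monotonic() > expires_at:  # evict expired objects on access
            del self._store[obj_id]
            return None
        return obj


backend = InMemoryBackend(ttl_seconds=0.05)
obj_id = backend.put({"name": "my-pipeline"})
print(obj_id)                # → obj_001
print(backend.get(obj_id))   # → {'name': 'my-pipeline'}
time.sleep(0.1)
print(backend.get(obj_id))   # → None (expired after TTL)
```

A Redis-backed variant would follow the same `put`/`get` shape but delegate expiry to Redis key TTLs and use UUID-based IDs.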

### Object Serialization

Objects are serialized using the `orjson` library for optimal performance. The serialization process handles:
- Pydantic models (using `model_dump()`)
- Sets and tuples (converted to lists)
- Nested objects with configurable depth limits
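
The same fallback logic can be sketched with the standard library's `json` module, which exposes the same `default=` hook that `orjson` provides (the helper below is illustrative, not the package's actual serializer):

```python
import json


def default(obj):
    """Fallback serializer mirroring the behaviors listed above."""
    if hasattr(obj, "model_dump"):       # Pydantic models
        return obj.model_dump()
    if isinstance(obj, (set, tuple)):    # sets and tuples become lists
        return list(obj)
    raise TypeError(f"Cannot serialize {type(obj).__name__}")


# The set is not JSON-native, so the default hook converts it to a list.
print(json.dumps({"tags": {"a"}, "pair": (1, 2)}, default=default))
# → {"tags": ["a"], "pair": [1, 2]}
```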
## Tool Output Truncation and Exploration

Tool output truncation addresses the challenge of presenting large, complex data structures to LLMs in a manageable format.

### The RichExplorer Component

The `RichExplorer` class generates human-readable, truncated representations of stored objects using the Rich library. It applies intelligent limits to prevent information overload:

**Collection Limits**: Lists and dictionaries are truncated after a configurable number of items (default: 25), with ellipsis indicating additional content.

**Depth Limits**: Nested structures are explored to a maximum depth (default: 4 levels) to prevent infinite expansion of complex hierarchies.

**String Truncation**: Long strings are cut at a configurable length (default: 300 characters) with clear truncation indicators.

**Context Headers**: Each output includes a header showing the object ID, type information, and size metadata (e.g., `@obj_123 → dict (length: 42)`).
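
A rough sketch of how these limits combine (the constants and logic are illustrative assumptions; the real `RichExplorer` renders output with Rich and adds the context headers described above):

```python
from typing import Any

MAX_ITEMS = 25    # collection limit
MAX_DEPTH = 4     # nesting limit
MAX_STRING = 300  # string limit


def preview(value: Any, depth: int = 0) -> Any:
    """Build a truncated preview of a stored object, applying all three limits."""
    if depth >= MAX_DEPTH:
        return "..."
    if isinstance(value, str):
        return value[:MAX_STRING] + ("..." if len(value) > MAX_STRING else "")
    if isinstance(value, dict):
        out = {k: preview(v, depth + 1) for k, v in list(value.items())[:MAX_ITEMS]}
        if len(value) > MAX_ITEMS:
            out["..."] = "..."
        return out
    if isinstance(value, list):
        out = [preview(v, depth + 1) for v in value[:MAX_ITEMS]]
        if len(value) > MAX_ITEMS:
            out.append("...")
        return out
    return value


print(preview(list(range(30))))   # first 25 items followed by "..."
print(preview("x" * 400)[-3:])    # → ...
```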
### Exploration Tools

Two specialized tools enable LLMs to navigate stored objects:

**`get_from_object_store`**: Retrieves objects or specific nested properties using dot-notation paths (e.g., `@obj_123.config.timeout`). This tool provides the primary interface for accessing stored data.

**`get_slice_from_object_store`**: Extracts specific ranges from strings and lists (e.g., characters 100-200 of a document, items 10-20 of a list). This enables efficient inspection of large sequences without loading entire contents.

The distinction between these tools reflects different access patterns:
- Use `get_from_object_store` for structural navigation (accessing object properties)
- Use `get_slice_from_object_store` for range-based access (viewing portions of sequences)
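
Dot-notation navigation of the kind `get_from_object_store` performs can be sketched as follows (the `resolve_path` helper is hypothetical, shown only to illustrate how a path like `config.timeout` walks a stored object):

```python
def resolve_path(obj, path: str):
    """Walk a dot-notation path such as 'config.timeout' through dicts,
    sequences (numeric parts), and object attributes."""
    current = obj
    for part in path.split("."):
        if isinstance(current, dict):
            current = current[part]
        elif part.isdigit() and isinstance(current, (list, tuple)):
            current = current[int(part)]
        else:
            current = getattr(current, part)
    return current


stored = {"config": {"timeout": 30, "retries": [1, 2, 5]}}
print(resolve_path(stored, "config.timeout"))  # → 30
print(stored["config"]["retries"][0:2])        # slice-style access → [1, 2]
```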
## Tool Invocation by Reference

Tool invocation by reference enables tools to accept previously stored objects as parameters, eliminating the need to re-pass large data structures through the conversation.

### Reference Syntax

References use a consistent `@obj_id` or `@obj_id.path.to.property` format:
- `@obj_123` references an entire stored object
- `@obj_123.config.database_url` references a nested property
- Mixed usage: `validate_pipeline(config="@obj_123.pipeline_config", dry_run=True)`
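
Parsing this syntax can be sketched with a small helper (the `parse_reference` function and its regex are illustrative assumptions, not the package's actual parser):

```python
import re

REF_PATTERN = re.compile(r"^@(?P<obj_id>obj_[A-Za-z0-9]+)(?:\.(?P<path>[\w.]+))?$")


def parse_reference(value: str):
    """Split '@obj_123.config.database_url' into (object id, dot path),
    or return None for a plain literal string."""
    match = REF_PATTERN.match(value)
    if match is None:
        return None  # not a reference; treat the value as a literal
    return match.group("obj_id"), match.group("path") or ""


print(parse_reference("@obj_123"))                      # → ('obj_123', '')
print(parse_reference("@obj_123.config.database_url"))  # → ('obj_123', 'config.database_url')
print(parse_reference("plain string"))                  # → None
```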
### The @referenceable Decorator

Tools that accept references are decorated with `@referenceable`, which:

**Type System Integration**: Automatically modifies function signatures to accept string references alongside original parameter types. For example, a parameter `config: dict` becomes `config: dict | str`.

**Runtime Resolution**: Transparently resolves references to actual objects before function execution. The LLM sees the enhanced signature while the underlying function receives correctly typed objects.

**Validation**: Ensures non-reference strings are rejected for type-safe parameters, preventing accidental misuse.

**Path Validation**: Validates object paths using allow-list patterns to prevent unauthorized access to object internals.
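
The runtime-resolution step can be sketched as a simplified decorator (the real `@referenceable` also rewrites type annotations, validates paths against allow-lists, and reads from the configured Object Store backend; the in-memory `STORE` here is a stand-in):

```python
import functools

STORE = {"obj_123": {"pipeline_config": {"name": "qa"}}}  # stand-in object store


def referenceable(func):
    """Sketch: resolve '@obj_id.path' string arguments before calling func."""
    @functools.wraps(func)
    def wrapper(**kwargs):
        resolved = {}
        for name, value in kwargs.items():
            if isinstance(value, str) and value.startswith("@"):
                obj_id, _, path = value[1:].partition(".")
                target = STORE[obj_id]
                for part in path.split(".") if path else []:
                    target = target[part]
                value = target
            resolved[name] = value
        return func(**resolved)
    return wrapper


@referenceable
def validate_pipeline(config: dict, dry_run: bool = False) -> str:
    # Receives the resolved dict, never the '@obj_...' string.
    return f"validated {config['name']} (dry_run={dry_run})"


print(validate_pipeline(config="@obj_123.pipeline_config", dry_run=True))
# → validated qa (dry_run=True)
```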
### Workflow Optimization

Reference-based invocation creates efficient multi-step workflows:

1. **Initial Tool Call**: `get_pipeline("my-pipeline")` returns configuration stored as `@obj_123`
2. **Reference Usage**: `validate_pipeline(config="@obj_123.pipeline_config")` processes the configuration without re-transmission
3. **Cost Reduction**: Eliminates token costs for re-generating large YAML configurations, API responses, or data structures
4. **Consistency**: Prevents subtle errors from LLM re-generation of complex data
### Decorator Combinations

The system provides three decorator patterns:

**`@explorable`**: Tools store outputs in the Object Store for later reference
**`@referenceable`**: Tools accept object references as input parameters
**`@explorable_and_referenceable`**: Tools both accept references and store outputs, enabling chainable workflows
These decorators work together to create seamless data flow between tools while maintaining type safety and performance optimization.

View [our in-depth guides](../guides/mcp_server.md#how-to-create-custom-referenceable-and-explorable-tools) on how to work with the object store and tool invocation by reference.

@@ -0,0 +1,48 @@

# SDK Concepts
## About the Client-Resource Architecture

The Deepset API SDK is designed around a client-resource pattern that reflects how the deepset platform organizes its services.

At the core is the `AsyncDeepsetClient`, which serves as your gateway to the platform. The client exposes resource classes
for each platform component.
The resource classes themselves act as domain-specific interfaces.
Accessing resources through a shared client instance is easy thanks to the built-in async context manager. Resources
and connections are cleaned up automatically as soon as the context exits.
This design enables the SDK to provide both type safety and operational clarity.
Each resource class knows its domain deeply, while the client handles cross-cutting concerns like authentication and connection pooling.
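
The pattern can be illustrated with a self-contained mock (the class and method names here are simplified stand-ins, not the SDK's actual classes):

```python
import asyncio


class PipelineResource:
    """Domain-specific interface: knows pipeline operations, delegates transport to the client."""

    def __init__(self, client: "Client", workspace: str) -> None:
        self._client = client
        self._workspace = workspace

    async def list(self) -> list:
        return await self._client.request(f"/workspaces/{self._workspace}/pipelines")


class Client:
    """Gateway: owns the connection and exposes resource classes."""

    def __init__(self) -> None:
        self.closed = False

    async def request(self, path: str) -> list:
        return [f"result for {path}"]  # stand-in for a real HTTP call

    def pipelines(self, workspace: str) -> PipelineResource:
        return PipelineResource(self, workspace)

    async def __aenter__(self) -> "Client":
        return self

    async def __aexit__(self, *exc) -> None:
        self.closed = True  # connections cleaned up on context exit


async def main() -> None:
    async with Client() as client:
        print(await client.pipelines("my-workspace").list())


asyncio.run(main())
```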
## Understanding Resource Scoping

The platform distinguishes between two fundamental scoping patterns:

**Workspace-scoped resources** operate within specific project boundaries. This scoping exists because these resources often contain sensitive data, custom business logic, or project-specific configurations that shouldn't leak between environments:

- **Pipelines**: AI workflows containing your custom logic and data processing rules
- **Indexes**: Document storage systems with your proprietary data
- **Pipeline Templates**: Reusable configurations specific to your use cases
- **Custom Components**: Your domain-specific Haystack components
**Global resources** operate at the platform level because they represent shared infrastructure or account-level concerns:
- **Workspaces**: Project organization and isolation boundaries
- **Integrations**: Platform-wide service connections and credentials
- **Secrets**: Centralized credential management across projects
- **Users**: Account and identity management
- **Haystack Service**: Shared component schemas and metadata
This scoping model enables both isolation (workspace resources) and efficiency (global resources), allowing teams to work independently while sharing common platform services.
## Why Asynchronous by Design
The SDK's async-first design reflects the reality of modern AI applications. Unlike traditional CRUD operations, AI workloads involve:
- **Long-running operations**: Pipeline deployments and large document indexing
- **Streaming responses**: Real-time text generation and search results
- **High concurrency needs**: Processing multiple queries simultaneously
- **Variable response times**: AI operations can take seconds to minutes
Asynchronous operations allow your application to remain responsive during these long-running tasks. The async context manager pattern ensures proper resource cleanup, which is critical when dealing with HTTP connections and streaming responses.
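
A small, self-contained example of why this matters (plain `asyncio`, with `asyncio.sleep` standing in for real API calls): three concurrent queries complete in roughly the time of one.

```python
import asyncio
import time


async def run_query(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)  # stands in for a long-running pipeline call
    return f"{name} done"


async def main() -> list:
    start = time.monotonic()
    # The three "pipeline queries" run concurrently instead of back to back.
    results = await asyncio.gather(
        run_query("query-1", 0.1),
        run_query("query-2", 0.1),
        run_query("query-3", 0.1),
    )
    elapsed = time.monotonic() - start
    print(f"{results} in ~{elapsed:.1f}s")  # roughly 0.1s total, not 0.3s
    return results


asyncio.run(main())
```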