@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama 0.3.4 → 0.3.5

Files changed (2)
  1. package/README.md +74 -2
  2. package/package.json +2 -2
package/README.md CHANGED
@@ -1,5 +1,77 @@
  # @sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama

- The embeddings-provider-ollama backend module for the ai-assistant plugin.
+ An embeddings provider module that lets the Backstage AI Assistant backend create vector
+ embeddings using Ollama-hosted models (local Ollama server or Ollama Cloud).

- _This plugin was created through the Backstage CLI_
+ This README explains how the provider works, when to use it, configuration options, and how
+ to wire it into your Backstage backend.
+
+ ## Features
+
+ - Converts text or documents to numeric vector embeddings using an Ollama model.
+ - Exposes a provider implementation compatible with the AI Assistant backend, so different
+ embeddings services can be swapped without changing the rest of the app.
+ - Requires minimal configuration for local or remote Ollama endpoints, with optional API key support.
+
+ ## When to use
+
+ Use this module when you run an Ollama embeddings-capable model and want the AI Assistant to
+ build semantic search indices or vector stores, or to provide retrieval-augmented generation
+ (RAG) capabilities in Backstage. It's a good fit for local development with Ollama or for an
+ Ollama-hosted endpoint.
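+
+ To make that concrete, here is a minimal sketch of producing an embedding with
+ `@langchain/ollama`, the client library this module depends on; the model name and prompt
+ are illustrative assumptions, not values required by the module:
+
+ ```ts
+ import { OllamaEmbeddings } from '@langchain/ollama';
+
+ // Assumes a local Ollama server with an embeddings-capable model already
+ // pulled, e.g. `ollama pull nomic-embed-text` (model choice is an assumption).
+ const embeddings = new OllamaEmbeddings({
+   model: 'nomic-embed-text',
+   baseUrl: 'http://localhost:11434',
+ });
+
+ // embedQuery returns a number[] vector; its length depends on the model.
+ const vector = await embeddings.embedQuery('How do I deploy a service?');
+ console.log(vector.length);
+ ```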
+
+ ## Configuration
+
+ Add the provider configuration to your Backstage `app-config.yaml` or `app-config.local.yaml`
+ under `aiAssistant.embeddings.ollama`.
+
+ Minimum configuration keys (example):
+
+ ```yaml
+ aiAssistant:
+   embeddings:
+     ollama:
+       baseUrl: 'http://localhost:11434'
+       model: 'nomic-embed-text'
+       apiKey: ${OLLAMA_API_KEY}
+ ```
+
+ Field descriptions:
+
+ - `baseUrl` - The base URL of your Ollama service. For a local Ollama server this is typically
+ `http://localhost:11434`. For Ollama Cloud or a proxied endpoint, use the full base URL. For
+ Ollama served behind Open WebUI, set the base URL to the `/ollama` route, e.g.
+ `http://localhost:11434/ollama`.
+ - `model` - The name of the model used to generate embeddings. The model must support
+ embeddings (check your Ollama model documentation for supported capabilities).
+ - `apiKey` - (Optional) An API key for Ollama Cloud or any endpoint that requires
+ authentication. Mark this value as secret in Backstage configuration when applicable, as
+ shown below.
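+
+ One way to keep the key out of checked-in files is Backstage's `$env` substitution - a
+ minimal sketch, assuming the key is exported as `OLLAMA_API_KEY` in the backend's
+ environment:
+
+ ```yaml
+ aiAssistant:
+   embeddings:
+     ollama:
+       # Resolved from the backend process environment at config load time
+       apiKey:
+         $env: OLLAMA_API_KEY
+ ```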
+
+ The exact keys available and required depend on your Ollama setup. Check the provider's
+ `config.d.ts` in the package for the canonical types used by the module.
+
+ ## Install
+
+ Install the module into your Backstage backend workspace:
+
+ ```sh
+ yarn workspace backend add @sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama
+ ```
+
+ ## Wire the provider into your backend
+
+ Add the provider module import to your backend entrypoint (usually `packages/backend/src/index.ts`):
+
+ ```diff
+ // packages/backend/src/index.ts
+
+ // other backend modules...
+ backend.add(import('@sweetoburrito/backstage-plugin-ai-assistant-backend'));
+
+ // Add the Ollama embeddings provider
+ +backend.add(
+ +  import(
+ +    '@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama'
+ +  ),
+ +);
+ ```
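+
+ Before restarting, you can sanity-check that the configured endpoint and model actually
+ serve embeddings by calling Ollama's embeddings API directly - a quick check, assuming the
+ `baseUrl` and `model` from the configuration example above:
+
+ ```sh
+ # POST to Ollama's embeddings endpoint; the response includes the vector(s).
+ curl http://localhost:11434/api/embed \
+   -d '{"model": "nomic-embed-text", "input": "hello world"}'
+ ```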
+
+ Restart your backend after adding the provider so it registers with the AI Assistant plugin.
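+
+ For orientation, the module itself follows the standard `createBackendModule` pattern from
+ `@backstage/backend-plugin-api`, wrapping `OllamaEmbeddings` from `@langchain/ollama` (both
+ are declared dependencies). The sketch below is illustrative only: the
+ `embeddingsProviderExtensionPoint` name and its `setEmbeddings` method are hypothetical
+ stand-ins for whatever `@sweetoburrito/backstage-plugin-ai-assistant-node` actually exports -
+ check the package source for the real wiring.
+
+ ```ts
+ import { createBackendModule } from '@backstage/backend-plugin-api';
+ import { OllamaEmbeddings } from '@langchain/ollama';
+ // Hypothetical import - the real extension point lives in
+ // @sweetoburrito/backstage-plugin-ai-assistant-node under some exported name.
+ import { embeddingsProviderExtensionPoint } from '@sweetoburrito/backstage-plugin-ai-assistant-node';
+
+ export default createBackendModule({
+   pluginId: 'ai-assistant',
+   moduleId: 'embeddings-provider-ollama',
+   register(reg) {
+     reg.registerInit({
+       deps: { provider: embeddingsProviderExtensionPoint },
+       async init({ provider }) {
+         // Hand the AI Assistant an embeddings client. (The real module reads
+         // these values from app-config rather than hardcoding them.)
+         provider.setEmbeddings(
+           new OllamaEmbeddings({
+             model: 'nomic-embed-text',
+             baseUrl: 'http://localhost:11434',
+           }),
+         );
+       },
+     });
+   },
+ });
+ ```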
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@sweetoburrito/backstage-plugin-ai-assistant-backend-module-embeddings-provider-ollama",
-   "version": "0.3.4",
+   "version": "0.3.5",
    "license": "Apache-2.0",
    "description": "The embeddings-provider-ollama backend module for the ai-assistant plugin.",
    "main": "dist/index.cjs.js",
@@ -30,7 +30,7 @@
    "dependencies": {
      "@backstage/backend-plugin-api": "backstage:^",
      "@langchain/ollama": "^0.2.3",
-     "@sweetoburrito/backstage-plugin-ai-assistant-node": "^0.5.0"
+     "@sweetoburrito/backstage-plugin-ai-assistant-node": "^0.6.0"
    },
    "devDependencies": {
      "@backstage/backend-test-utils": "backstage:^",