@equinor/fusion-framework-cli 13.0.0-cli-search-index.3 → 13.0.0-cli-search-index.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,3 +1,3 @@
1
1
  // Generated by genversion.
2
- export const version = '13.0.0-cli-search-index.3';
2
+ export const version = '13.0.0-cli-search-index.4';
3
3
  //# sourceMappingURL=version.js.map
@@ -1 +1 @@
1
- export declare const version = "13.0.0-cli-search-index.3";
1
+ export declare const version = "13.0.0-cli-search-index.4";
@@ -0,0 +1,573 @@
1
+ > [!CAUTION]
2
+ >
3
+ > **⚠️ INTERNAL USE ONLY - NOT FOR EXTERNAL USE**
4
+ >
5
+ > **This feature is exclusively for internal use by the Fusion Core team.**
6
+ >
7
+ > This AI functionality is **NOT supported for third-party users** and is not part of the public API. These commands are experimental and intended only for internal development and documentation purposes within the Fusion Framework team.
8
+ >
9
+ > **Do not use these commands if you are:**
10
+ > - A third-party developer using Fusion Framework
11
+ > - Building applications on top of Fusion Framework
12
+ > - Expecting official support or documentation for external use
13
+ >
14
+ > For questions or issues related to these commands, contact the Fusion Core team directly.
15
+
16
+ ---
17
+
18
+ The Fusion Framework CLI provides powerful AI commands for interacting with Large Language Models (LLMs), generating document embeddings, and performing semantic search. These commands integrate with Azure OpenAI and Azure Cognitive Search to enable intelligent codebase understanding and Q&A capabilities.
19
+
20
+ ## Overview
21
+
22
+ The `ai` command group includes three main subcommands:
23
+
24
+ - **`chat`** - Interactive chat with AI models using vector store context retrieval
25
+ - **`embeddings`** - Generate embeddings from markdown and TypeScript files for semantic search
26
+ - **`search`** - Search the vector store to validate embeddings and retrieve relevant documents
27
+
28
+ ## Prerequisites
29
+
30
+ Before using the AI commands, you need:
31
+
32
+ 1. **Azure OpenAI Service** with:
33
+ - Chat model deployment (e.g., GPT-4, GPT-3.5-turbo)
34
+ - Embedding model deployment (e.g., text-embedding-ada-002)
35
+
36
+ 2. **Azure Cognitive Search** with:
37
+ - A search service instance
38
+ - A search index configured for vector search
39
+
40
+ 3. **Configuration** - Via environment variables (`.env` file for local development or GitHub Variables/Secrets for CI/CD)
41
+
42
+ ## Configuration
43
+
44
+ ### Environment Variables
45
+
46
+ All AI commands are configured via environment variables. For local development, use a `.env` file in your project root. For CI/CD, use GitHub Variables and Secrets.
47
+
48
+ #### Local Development (.env file)
49
+
50
+ Create a `.env` file in your project root:
51
+
52
+ ```bash
53
+ # Azure OpenAI Configuration
54
+ AZURE_OPENAI_API_KEY=your-api-key
55
+ AZURE_OPENAI_API_VERSION=2024-02-15-preview
56
+ AZURE_OPENAI_INSTANCE_NAME=your-instance-name
57
+ AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=gpt-4
58
+ AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=text-embedding-ada-002
59
+
60
+ # Azure Cognitive Search Configuration
61
+ AZURE_SEARCH_ENDPOINT=https://your-search.search.windows.net
62
+ AZURE_SEARCH_API_KEY=your-search-api-key
63
+ AZURE_SEARCH_INDEX_NAME=your-index-name
64
+ ```
65
+
66
+ **Note:** Add `.env` to your `.gitignore` to keep credentials secure.
67
+
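+ Before running any commands, it can help to verify that these variables are actually being picked up. The snippet below is only a convenience sketch (the CLI performs its own validation): the file name `check-env.ts`, running it via `npx tsx`, and the use of the `dotenv` package are assumptions; the variable list mirrors the example above.
+
+ ```typescript
+ // Illustrative sanity check for the .env setup above - not part of the CLI.
+ // Assumes `dotenv` is installed; run with e.g. `npx tsx check-env.ts`.
+ import 'dotenv/config';
+
+ const required = [
+   'AZURE_OPENAI_API_KEY',
+   'AZURE_OPENAI_API_VERSION',
+   'AZURE_OPENAI_INSTANCE_NAME',
+   'AZURE_OPENAI_CHAT_DEPLOYMENT_NAME',
+   'AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME',
+   'AZURE_SEARCH_ENDPOINT',
+   'AZURE_SEARCH_API_KEY',
+   'AZURE_SEARCH_INDEX_NAME',
+ ];
+
+ const missing = required.filter((name) => !process.env[name]);
+ if (missing.length > 0) {
+   console.error(`Missing environment variables: ${missing.join(', ')}`);
+   process.exit(1);
+ }
+ console.log('All required AI environment variables are set.');
+ ```
+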
68
+ #### CI/CD (GitHub Actions)
69
+
70
+ For GitHub Actions workflows, configure:
71
+ - **Secrets** (Settings → Secrets and variables → Actions → Secrets): For sensitive data
72
+ - `AZURE_OPENAI_API_KEY`
73
+ - `AZURE_SEARCH_API_KEY`
74
+ - **Variables** (Settings → Secrets and variables → Actions → Variables): For non-sensitive configuration
75
+ - `AZURE_OPENAI_API_VERSION`
76
+ - `AZURE_OPENAI_INSTANCE_NAME`
77
+ - `AZURE_OPENAI_CHAT_DEPLOYMENT_NAME`
78
+ - `AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME`
79
+ - `AZURE_SEARCH_ENDPOINT`
80
+ - `AZURE_SEARCH_INDEX_NAME`
81
+
82
+ ### Configuration File
83
+
84
+ For the `embeddings` command, you can create a `fusion-ai.config.ts` file in your project root:
85
+
86
+ ```typescript
87
+ import { configureFusionAI, type FusionAIConfig } from '@equinor/fusion-framework-cli/lib/ai/fusion-ai';
88
+
89
+ export default configureFusionAI((): FusionAIConfig => {
90
+ return {
91
+ // File patterns to match for processing
92
+ patterns: [
93
+ 'packages/**/src/**/*.{ts,tsx}',
94
+ 'packages/**/docs/**/*.md',
95
+ 'packages/**/README.md',
96
+ ],
97
+ // Embedding generation configuration
98
+ embedding: {
99
+ // Size of text chunks for embedding
100
+ chunkSize: 1000,
101
+ // Overlap between chunks to maintain context
102
+ chunkOverlap: 200,
103
+ },
104
+ // Metadata processing configuration
105
+ metadata: {
106
+ // Optional: Custom metadata processor
107
+ // attributeProcessor: (metadata, document) => {
108
+ // // Transform or filter metadata attributes
109
+ // return metadata;
110
+ // },
111
+ },
112
+ };
113
+ });
114
+ ```
115
+
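+ The commented-out `attributeProcessor` above is the hook for adjusting metadata before documents are indexed. The function below is only a sketch of what such a hook could look like - the parameter shapes are assumptions (typed loosely here), not the CLI's published types.
+
+ ```typescript
+ // Hypothetical attributeProcessor for the `metadata` section above.
+ // Parameter shapes are assumed for illustration, not taken from the CLI's types.
+ type Attributes = Record<string, unknown>;
+
+ function attributeProcessor(metadata: Attributes, document: { pageContent?: string }): Attributes {
+   return {
+     ...metadata,
+     // Example: tag chunks whose content mentions a deprecation marker.
+     deprecated: document.pageContent?.includes('@deprecated') ?? false,
+   };
+ }
+ ```
+
+ A hook like this would then be passed as `metadata: { attributeProcessor }` in the configuration above.
+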
116
+ ## Commands
117
+
118
+ ### `ai chat`
119
+
120
+ Interactive chat with Large Language Models, enhanced with context retrieved from the vector store (Retrieval-Augmented Generation, RAG).
121
+
122
+ #### Features
123
+
124
+ - Real-time streaming responses from AI models
125
+ - Automatic context retrieval from vector store for enhanced accuracy
126
+ - Intelligent message history compression using AI summarization
127
+ - Special commands: `exit`, `quit`, `clear`, `help`
128
+ - Configurable context retrieval limits
129
+
130
+ #### Usage
131
+
132
+ ```bash
133
+ ffc ai chat [options]
134
+ ```
135
+
136
+ #### Options
137
+
138
+ | Option | Description | Default |
139
+ |--------|-------------|---------|
140
+ | `--context-limit <number>` | Max context documents to retrieve | `5` |
141
+ | `--history-limit <number>` | Max messages in conversation history | `20` |
142
+ | `--verbose` | Enable verbose output | `false` |
143
+
144
+ **Note:** Azure configuration (API keys, endpoints, etc.) is provided via environment variables (`.env` file or GitHub Variables/Secrets), not command-line options.
145
+
146
+ #### Interactive Commands
147
+
148
+ While in chat mode, you can use these special commands:
149
+
150
+ - `exit` or `quit` - End the conversation
151
+ - `clear` - Clear conversation history
152
+ - `help` - Show available commands
153
+ - `Ctrl+C` - Exit immediately
154
+
155
+ #### Examples
156
+
157
+ ```bash
158
+ # Start interactive chat with default settings
159
+ # (Azure configuration loaded from .env file)
160
+ ffc ai chat
161
+
162
+ # Increase context retrieval limit for more comprehensive responses
163
+ ffc ai chat --context-limit 10
164
+
165
+ # Increase conversation history limit for longer sessions
166
+ ffc ai chat --history-limit 100
167
+
168
+ # Enable verbose output for debugging
169
+ ffc ai chat --verbose
170
+ ```
171
+
172
+ #### How It Works
173
+
174
+ 1. **Context Retrieval**: When you ask a question, the system automatically searches the vector store for relevant documents
175
+ 2. **Message Formatting**: Retrieved context is included in the system message to provide the AI with relevant information
176
+ 3. **Streaming Response**: The AI response streams in real-time for immediate feedback
177
+ 4. **History Management**: Conversation history is automatically compressed when it reaches 10 messages to maintain context while reducing token usage
178
+
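+ The same loop, restated as simplified TypeScript. This is an illustration only, not the CLI's implementation: `searchVectorStore`, `streamChatCompletion` and `summarizeHistory` are hypothetical stand-ins for the Azure Cognitive Search and Azure OpenAI calls.
+
+ ```typescript
+ // Illustrative RAG chat turn; the declared helpers are hypothetical stand-ins.
+ type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };
+
+ declare function searchVectorStore(query: string, limit: number): Promise<string[]>;
+ declare function streamChatCompletion(messages: ChatMessage[]): AsyncIterable<string>;
+ declare function summarizeHistory(messages: ChatMessage[]): Promise<string>;
+
+ async function chatTurn(history: ChatMessage[], question: string, contextLimit = 5): Promise<ChatMessage[]> {
+   // 1. Context retrieval: look up documents relevant to the question.
+   const context = await searchVectorStore(question, contextLimit);
+
+   // 2. Message formatting: include the retrieved context in the system message.
+   const messages: ChatMessage[] = [
+     { role: 'system', content: `Answer using this context:\n${context.join('\n---\n')}` },
+     ...history,
+     { role: 'user', content: question },
+   ];
+
+   // 3. Streaming response: print tokens as they arrive.
+   let answer = '';
+   for await (const token of streamChatCompletion(messages)) {
+     process.stdout.write(token);
+     answer += token;
+   }
+
+   // 4. History management: once the history grows (around 10 messages),
+   //    replace it with an AI-generated summary to keep token usage down.
+   const updated: ChatMessage[] = [...history, { role: 'user', content: question }, { role: 'assistant', content: answer }];
+   return updated.length >= 10 ? [{ role: 'assistant', content: await summarizeHistory(updated) }] : updated;
+ }
+ ```
+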
179
+ ### `ai embeddings`
180
+
181
+ Generate embeddings from markdown and TypeScript files for semantic search indexing.
182
+
183
+ #### Features
184
+
185
+ - Markdown document chunking with frontmatter extraction
186
+ - TypeScript/TSX TSDoc extraction and chunking
187
+ - Glob pattern support for file collection
188
+ - Recursive directory processing
189
+ - Dry-run mode for testing
190
+ - Git diff-based processing for CI/CD workflows
191
+ - Automatic metadata extraction (git commit, author, etc.)
192
+
193
+ #### Usage
194
+
195
+ ```bash
196
+ ffc ai embeddings [options] [glob-patterns...]
197
+ ```
198
+
199
+ #### Options
200
+
201
+ | Option | Description | Default |
202
+ |--------|-------------|---------|
203
+ | `--dry-run` | Show what would be processed without making any changes | `false` |
204
+ | `--config <path>` | Path to config file | `fusion-ai.config.ts` |
205
+ | `--recursive` | Process directories recursively | `false` |
206
+ | `--diff` | Process only changed files (workflow mode) | `false` |
207
+ | `--base-ref <ref>` | Git reference to compare against | `HEAD~1` |
208
+ | `--clean` | Delete all existing documents before processing | `false` |
209
+
210
+ **Note:** Azure configuration (API keys, endpoints, etc.) is provided via environment variables (`.env` file or GitHub Variables/Secrets), not command-line options.
211
+
212
+ #### Examples
213
+
214
+ ```bash
215
+ # Dry run to see what would be processed
216
+ ffc ai embeddings --dry-run ./src
217
+
218
+ # Process all TypeScript and Markdown files in a directory
219
+ ffc ai embeddings ./src
220
+
221
+ # Process only changed files (useful for CI/CD)
222
+ ffc ai embeddings --diff ./src
223
+
224
+ # Process changed files compared to a specific branch
225
+ ffc ai embeddings --diff --base-ref origin/main ./src
226
+
227
+ # Clean and re-index all documents
228
+ ffc ai embeddings --clean ./src
229
+
230
+ # Process specific file patterns
231
+ ffc ai embeddings "packages/**/*.ts" "docs/**/*.md"
232
+
233
+ # Use custom config file
234
+ ffc ai embeddings --config ./custom-ai.config.ts ./src
235
+ ```
236
+
237
+ #### Workflow Integration
238
+
239
+ The `--diff` flag is particularly useful for CI/CD pipelines. See the [CI/CD Integration](#cicd-integration) section for complete workflow examples.
240
+
241
+ #### File Processing
242
+
243
+ The command processes two types of files:
244
+
245
+ 1. **Markdown Files** (`.md`):
246
+ - Extracts frontmatter metadata
247
+ - Chunks content based on headers and structure
248
+ - Preserves document hierarchy
249
+
250
+ 2. **TypeScript/TSX Files** (`.ts`, `.tsx`):
251
+ - Extracts TSDoc comments from functions, classes, and interfaces
252
+ - Chunks large TSDoc blocks intelligently
253
+ - Preserves code context and signatures
254
+
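+ To make the `chunkSize` and `chunkOverlap` settings concrete, here is a minimal character-based sketch. The CLI's actual splitter is structure-aware (headers, TSDoc blocks) as described above, so treat this only as an illustration of how the two numbers interact.
+
+ ```typescript
+ // Minimal overlapping chunker (illustration only - the real splitter is structure-aware).
+ function chunkText(text: string, chunkSize = 1000, chunkOverlap = 200): string[] {
+   const step = Math.max(1, chunkSize - chunkOverlap); // each chunk starts this far after the previous one
+   const chunks: string[] = [];
+   for (let start = 0; start < text.length; start += step) {
+     chunks.push(text.slice(start, start + chunkSize));
+     if (start + chunkSize >= text.length) break; // the final chunk reached the end of the text
+   }
+   return chunks;
+ }
+
+ // With the defaults, a 2200-character document becomes three chunks starting at
+ // offsets 0, 800 and 1600, each sharing 200 characters with its neighbour.
+ ```
+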
255
+ #### Metadata
256
+
257
+ Each document is enriched with metadata:
258
+
259
+ - `source` - File path relative to project root
260
+ - `rootPath` - Project root directory
261
+ - Git metadata (when available):
262
+ - `commit` - Git commit hash
263
+ - `author` - Commit author
264
+ - `date` - Commit date
265
+ - `message` - Commit message
266
+
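+ As a rough TypeScript picture of that metadata (field names are taken from the list above and from the OData filter examples later in this document; the actual index schema may differ):
+
+ ```typescript
+ // Illustrative shape only - the real index schema may name or nest fields differently.
+ interface EmbeddedDocumentMetadata {
+   source: string;    // file path relative to the project root
+   rootPath: string;  // project root directory
+   attributes?: {
+     commit?: string;   // git commit hash
+     author?: string;   // commit author
+     date?: string;     // commit date
+     message?: string;  // commit message
+   };
+ }
+ ```
+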
267
+ ### `ai search`
268
+
269
+ Search the vector store to validate embeddings and retrieve relevant documents using semantic search.
270
+
271
+ #### Features
272
+
273
+ - Semantic search using vector embeddings
274
+ - Configurable result limits
275
+ - OData filter support for metadata-based filtering
276
+ - JSON output option for programmatic use
277
+ - Detailed result display with scores and metadata
278
+
279
+ #### Usage
280
+
281
+ ```bash
282
+ ffc ai search <query> [options]
283
+ ```
284
+
285
+ #### Options
286
+
287
+ | Option | Description | Default |
288
+ |--------|-------------|---------|
289
+ | `<query>` | Search query string | Required |
290
+ | `--limit <number>` | Maximum number of results | `10` |
291
+ | `--filter <expression>` | OData filter expression for metadata filtering | - |
292
+ | `--json` | Output results as JSON | `false` |
293
+ | `--raw` | Output raw metadata without normalization | `false` |
294
+ | `--verbose` | Enable verbose output | `false` |
295
+
296
+ **Note:** Azure configuration (API keys, endpoints, etc.) is provided via environment variables (`.env` file or GitHub Variables/Secrets), not command-line options.
297
+
298
+ #### Examples
299
+
300
+ ```bash
301
+ # Basic search
302
+ ffc ai search "how to use the framework"
303
+
304
+ # Limit results
305
+ ffc ai search "authentication" --limit 5
306
+
307
+ # Filter by source file
308
+ ffc ai search "typescript" --filter "metadata/source eq 'src/index.ts'"
309
+
310
+ # Output as JSON for programmatic use
311
+ ffc ai search "documentation" --json
312
+
313
+ # Output raw metadata structure
314
+ ffc ai search "documentation" --json --raw
315
+
316
+ # Enable verbose output
317
+ ffc ai search "API reference" --verbose
318
+ ```
319
+
320
+ #### OData Filter Examples
321
+
322
+ The `--filter` option supports OData filter expressions:
323
+
324
+ ```bash
325
+ # Filter by source file
326
+ --filter "metadata/source eq 'packages/framework/src/index.ts'"
327
+
328
+ # Filter by multiple sources
329
+ --filter "metadata/source eq 'src/a.ts' or metadata/source eq 'src/b.ts'"
330
+
331
+ # Filter by commit author
332
+ --filter "metadata/attributes/author eq 'John Doe'"
333
+
334
+ # Filter by date range (if available in metadata)
335
+ --filter "metadata/attributes/date gt '2024-01-01'"
336
+ ```
337
+
338
+ #### Output Formats
339
+
340
+ **Human-readable format** (default):
341
+ - Shows results with scores, sources, and content previews
342
+ - Truncates long content for readability
343
+ - Displays metadata in a structured format
344
+
345
+ **JSON format** (`--json`):
346
+ - Machine-readable output
347
+ - Full document content and metadata
348
+ - Suitable for piping to other tools or scripts
349
+
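+ Since `--json` produces machine-readable output, results can also be consumed from a script. The sketch below shells out to the CLI and parses stdout; the result shape is left as `unknown` because the exact fields are not documented here.
+
+ ```typescript
+ // Illustrative: call `ffc ai search ... --json` from Node and parse the output.
+ import { execFile } from 'node:child_process';
+ import { promisify } from 'node:util';
+
+ const run = promisify(execFile);
+
+ async function search(query: string, limit = 5): Promise<unknown> {
+   const { stdout } = await run('ffc', ['ai', 'search', query, '--json', '--limit', String(limit)]);
+   return JSON.parse(stdout); // exact result shape is not documented here
+ }
+
+ search('how to use the framework').then((results) => console.log(results));
+ ```
+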
350
+ ## Common Workflows
351
+
352
+ ### Initial Setup
353
+
354
+ 1. **Create `.env` file** in your project root with Azure configuration:
355
+ ```bash
356
+ # See Configuration section above for all required variables
357
+ AZURE_OPENAI_API_KEY=your-key
358
+ AZURE_OPENAI_INSTANCE_NAME=your-instance
359
+ # ... other variables
360
+ ```
361
+
362
+ 2. **Create configuration file** (optional):
363
+ ```bash
364
+ # Create fusion-ai.config.ts in project root
365
+ ```
366
+
367
+ 3. **Generate initial embeddings**:
368
+ ```bash
369
+ ffc ai embeddings --clean ./src
370
+ ```
371
+
372
+ 4. **Verify embeddings with search**:
373
+ ```bash
374
+ ffc ai search "test query"
375
+ ```
376
+
377
+ 5. **Start using chat**:
378
+ ```bash
379
+ ffc ai chat
380
+ ```
381
+
382
+ ### Incremental Updates
383
+
384
+ For ongoing development, use diff-based processing:
385
+
386
+ ```bash
387
+ # Process only changed files
388
+ ffc ai embeddings --diff ./src
389
+ ```
390
+
391
+ ### CI/CD Integration
392
+
393
+ #### Basic Workflow
394
+
395
+ ```yaml
396
+ # Example GitHub Actions workflow
397
+ name: Update AI Index
398
+
399
+ on:
400
+ push:
401
+ branches: [main]
402
+
403
+ jobs:
404
+ update-embeddings:
405
+ runs-on: ubuntu-latest
406
+ steps:
407
+ - uses: actions/checkout@v4
408
+ with:
409
+ fetch-depth: 0 # Required for --diff to work
410
+
411
+ - uses: pnpm/action-setup@v2
412
+ with:
413
+ version: 9
414
+
415
+ - uses: actions/setup-node@v4
416
+ with:
417
+ node-version: '22'
418
+
419
+ - run: pnpm install
420
+
421
+ - name: Update embeddings
422
+ run: |
423
+ ffc ai embeddings --diff
424
+ env:
425
+ AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
426
+ AZURE_OPENAI_API_VERSION: ${{ vars.AZURE_OPENAI_API_VERSION }}
427
+ AZURE_OPENAI_INSTANCE_NAME: ${{ vars.AZURE_OPENAI_INSTANCE_NAME }}
428
+ AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME: ${{ vars.AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME }}
429
+ AZURE_SEARCH_ENDPOINT: ${{ vars.AZURE_SEARCH_ENDPOINT }}
430
+ AZURE_SEARCH_API_KEY: ${{ secrets.AZURE_SEARCH_API_KEY }}
431
+ AZURE_SEARCH_INDEX_NAME: ${{ vars.AZURE_SEARCH_INDEX_NAME }}
432
+ ```
433
+
434
+ #### Advanced Workflow with SHA Tracking
435
+
436
+ For production use, track the last indexed commit SHA to enable efficient incremental updates:
437
+
438
+ ```yaml
439
+ name: Index Documentation
440
+
441
+ on:
442
+ push:
443
+ branches: [main]
444
+
445
+ jobs:
446
+ index-docs:
447
+ runs-on: ubuntu-latest
448
+ permissions:
449
+ contents: write # Required to commit the SHA file
450
+
451
+ steps:
452
+ - name: Checkout repository
453
+ uses: actions/checkout@v4
454
+ with:
455
+ fetch-depth: 0 # Fetch full history for diff comparison
456
+
457
+       - name: Setup pnpm
+         uses: pnpm/action-setup@v2
+         with:
+           version: 9
+
+       - name: Setup Node.js
458
+ uses: actions/setup-node@v4
459
+ with:
460
+ node-version: '22'
461
+ cache: 'pnpm'
462
+
463
+ - name: Install dependencies
464
+ run: pnpm install
465
+
466
+ - name: Get last indexed SHA
467
+ id: last_sha
468
+ run: |
469
+ if [ -f .index-base-ref ]; then
470
+ echo "last_sha=$(cat .index-base-ref)" >> $GITHUB_OUTPUT
471
+ else
472
+ echo "last_sha=" >> $GITHUB_OUTPUT
473
+ fi
474
+
475
+ - name: Index documentation
476
+ run: |
477
+ if [ -n "${{ steps.last_sha.outputs.last_sha }}" ]; then
478
+ ffc ai embeddings --diff --base-ref ${{ steps.last_sha.outputs.last_sha }}
479
+ else
480
+ ffc ai embeddings
481
+ fi
482
+ env:
483
+ AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
484
+ AZURE_OPENAI_API_VERSION: ${{ vars.AZURE_OPENAI_API_VERSION }}
485
+ AZURE_OPENAI_INSTANCE_NAME: ${{ vars.AZURE_OPENAI_INSTANCE_NAME }}
486
+ AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME: ${{ vars.AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME }}
487
+ AZURE_SEARCH_API_KEY: ${{ secrets.AZURE_SEARCH_API_KEY }}
488
+ AZURE_SEARCH_ENDPOINT: ${{ vars.AZURE_SEARCH_ENDPOINT }}
489
+ AZURE_SEARCH_INDEX_NAME: ${{ vars.AZURE_SEARCH_INDEX_NAME }}
490
+
491
+ - name: Save current SHA
492
+ run: echo ${{ github.sha }} > .index-base-ref
493
+
494
+ - name: Commit and push SHA file
495
+ run: |
496
+ git config --local user.email "action@github.com"
497
+ git config --local user.name "GitHub Action"
498
+ git add .index-base-ref
499
+ git commit -m "Update last indexed SHA" || echo "No changes to commit"
500
+ git push
501
+ ```
502
+
503
+ **Benefits of SHA tracking:**
504
+ - Only processes files changed since the last successful indexing
505
+ - Handles cases where previous runs may have failed
506
+ - Automatically falls back to full indexing on first run
507
+ - More efficient than comparing against a fixed branch reference
508
+
509
+ **GitHub Secrets and Variables:**
510
+ - **Secrets** (Settings → Secrets and variables → Actions → Secrets): Store sensitive data like API keys
511
+ - `AZURE_OPENAI_API_KEY`
512
+ - `AZURE_SEARCH_API_KEY`
513
+ - **Variables** (Settings → Secrets and variables → Actions → Variables): Store non-sensitive configuration
514
+ - `AZURE_OPENAI_API_VERSION`
515
+ - `AZURE_OPENAI_INSTANCE_NAME`
516
+ - `AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME`
517
+ - `AZURE_SEARCH_ENDPOINT`
518
+ - `AZURE_SEARCH_INDEX_NAME`
519
+
520
+ ## Troubleshooting
521
+
522
+ ### Common Issues
523
+
524
+ **Error: "API key is required"**
525
+ - Ensure environment variables are set in your `.env` file
526
+ - Verify the `.env` file is in your project root
527
+ - Check that variable names match exactly (case-sensitive)
528
+ - For CI/CD, verify GitHub Secrets and Variables are configured
529
+
530
+ **Error: "Azure Search index name is required"**
531
+ - Ensure `AZURE_SEARCH_INDEX_NAME` is set
532
+ - Verify the index exists in your Azure Search service
533
+
534
+ **No results from search**
535
+ - Verify embeddings have been generated (`ffc ai embeddings --dry-run ./src` shows which files would be indexed)
536
+ - Check that the index contains documents
537
+ - Try a broader search query
538
+
539
+ **Chat not retrieving context**
540
+ - Verify vector store is configured correctly in your `.env` file
541
+ - Check that embeddings exist in the index
542
+ - Ensure `AZURE_SEARCH_INDEX_NAME` is set correctly
543
+
544
+ **Embeddings command processes no files**
545
+ - Check file patterns in config or command arguments
546
+ - Verify files match the patterns (use `--dry-run` to debug)
547
+ - Ensure files are not ignored by `.gitignore`
548
+
549
+ ### Debug Mode
550
+
551
+ Use the `--verbose` flag for detailed output:
552
+
553
+ ```bash
554
+ ffc ai chat --verbose
555
+ ffc ai search "query" --verbose
556
+ ```
557
+
558
+ ## Best Practices
559
+
560
+ 1. **Use environment variables** for sensitive credentials
561
+ 2. **Start with `--dry-run`** when testing new configurations
562
+ 3. **Use `--diff` in CI/CD** to only process changed files
563
+ 4. **Re-index regularly** with `--clean` to keep the index fresh
564
+ 5. **Monitor token usage** - embeddings and chat consume API tokens
565
+ 6. **Test search queries** before relying on chat context retrieval
566
+ 7. **Keep configuration files** in version control (without secrets)
567
+
568
+ ## Additional Resources
569
+
570
+ - [Azure OpenAI Documentation](https://learn.microsoft.com/azure/ai-services/openai/)
571
+ - [Azure Cognitive Search Documentation](https://learn.microsoft.com/azure/search/)
572
+ - [LangChain Documentation](https://js.langchain.com/) (used internally for RAG)
573
+
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@equinor/fusion-framework-cli",
3
- "version": "13.0.0-cli-search-index.3",
3
+ "version": "13.0.0-cli-search-index.4",
4
4
  "keywords": [
5
5
  "Fusion",
6
6
  "Fusion Framework",
@@ -154,11 +154,11 @@
154
154
  "type-fest": "^5.0.0",
155
155
  "typescript": "^5.8.2",
156
156
  "vitest": "^3.2.4",
157
- "@equinor/fusion-framework-module-app": "7.1.1-cli-search-index.0",
158
- "@equinor/fusion-framework-module-http": "7.0.6-cli-search-index.0",
159
157
  "@equinor/fusion-framework-module": "5.0.6-cli-search-index.0",
158
+ "@equinor/fusion-framework-module-ai": "1.1.0-cli-search-index.0",
159
+ "@equinor/fusion-framework-module-app": "7.1.1-cli-search-index.0",
160
160
  "@equinor/fusion-framework-module-service-discovery": "9.0.5-cli-search-index.0",
161
- "@equinor/fusion-framework-module-ai": "1.1.0-cli-search-index.0"
161
+ "@equinor/fusion-framework-module-http": "7.0.6-cli-search-index.0"
162
162
  },
163
163
  "peerDependenciesMeta": {
164
164
  "typescript": {