@vfarcic/dot-ai 0.140.0 → 0.143.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (61)
  1. package/dist/core/ai-provider-factory.d.ts.map +1 -1
  2. package/dist/core/ai-provider-factory.js +4 -6
  3. package/dist/core/ai-provider.interface.d.ts +2 -2
  4. package/dist/core/ai-provider.interface.js +1 -1
  5. package/dist/core/crd-availability.d.ts +16 -0
  6. package/dist/core/crd-availability.d.ts.map +1 -0
  7. package/dist/core/crd-availability.js +108 -0
  8. package/dist/core/deploy-operation.js +2 -2
  9. package/dist/core/embedding-service.d.ts +8 -3
  10. package/dist/core/embedding-service.d.ts.map +1 -1
  11. package/dist/core/embedding-service.js +8 -16
  12. package/dist/core/model-config.d.ts +6 -8
  13. package/dist/core/model-config.d.ts.map +1 -1
  14. package/dist/core/model-config.js +6 -8
  15. package/dist/core/providers/vercel-provider.d.ts.map +1 -1
  16. package/dist/core/providers/vercel-provider.js +12 -11
  17. package/dist/core/schema.d.ts +2 -9
  18. package/dist/core/schema.d.ts.map +1 -1
  19. package/dist/core/schema.js +1 -2
  20. package/dist/core/solution-cr.d.ts +21 -0
  21. package/dist/core/solution-cr.d.ts.map +1 -0
  22. package/dist/core/solution-cr.js +112 -0
  23. package/dist/core/solution-utils.d.ts +0 -6
  24. package/dist/core/solution-utils.d.ts.map +1 -1
  25. package/dist/core/solution-utils.js +0 -26
  26. package/dist/core/unified-creation-session.d.ts.map +1 -1
  27. package/dist/core/unified-creation-session.js +21 -4
  28. package/dist/core/vector-db-service.d.ts.map +1 -1
  29. package/dist/core/vector-db-service.js +5 -0
  30. package/dist/interfaces/rest-registry.d.ts.map +1 -1
  31. package/dist/interfaces/rest-registry.js +1 -0
  32. package/dist/tools/answer-question.d.ts +6 -1
  33. package/dist/tools/answer-question.d.ts.map +1 -1
  34. package/dist/tools/answer-question.js +1 -1
  35. package/dist/tools/generate-manifests.d.ts.map +1 -1
  36. package/dist/tools/generate-manifests.js +34 -55
  37. package/dist/tools/operate-analysis.d.ts.map +1 -1
  38. package/dist/tools/operate-analysis.js +9 -15
  39. package/dist/tools/organizational-data.d.ts +17 -11
  40. package/dist/tools/organizational-data.d.ts.map +1 -1
  41. package/dist/tools/project-setup.d.ts +7 -3
  42. package/dist/tools/project-setup.d.ts.map +1 -1
  43. package/dist/tools/project-setup.js +1 -1
  44. package/dist/tools/recommend.d.ts +2 -0
  45. package/dist/tools/recommend.d.ts.map +1 -1
  46. package/dist/tools/recommend.js +15 -11
  47. package/dist/tools/remediate.d.ts +10 -3
  48. package/dist/tools/remediate.d.ts.map +1 -1
  49. package/dist/tools/remediate.js +1 -1
  50. package/package.json +4 -8
  51. package/prompts/operate-system.md +4 -3
  52. package/prompts/question-generation.md +8 -1
  53. package/prompts/remediate-system.md +10 -1
  54. package/prompts/resource-selection.md +7 -9
  55. package/scripts/crossplane.nu +1 -1
  56. package/scripts/dot-ai.nu +14 -2
  57. package/shared-prompts/prd-close.md +7 -1
  58. package/shared-prompts/deploy.md +0 -23
  59. package/shared-prompts/manage-org-data.md +0 -42
  60. package/shared-prompts/remediate.md +0 -44
  61. package/shared-prompts/status.md +0 -19
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@vfarcic/dot-ai",
-  "version": "0.140.0",
+  "version": "0.143.0",
   "description": "AI-powered development productivity platform that enhances software development workflows through intelligent automation and AI-driven assistance",
   "mcpName": "io.github.vfarcic/dot-ai",
   "main": "dist/index.js",
@@ -19,15 +19,13 @@
     "test:integration": "./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:watch": "vitest --config=vitest.integration.config.ts --test-timeout=1200000",
     "test:integration:sonnet": "AI_PROVIDER=anthropic AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
+    "test:integration:opus": "AI_PROVIDER=anthropic_opus AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:haiku": "AI_PROVIDER=anthropic_haiku AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:gpt": "AI_PROVIDER=openai AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
-    "test:integration:gpt-pro": "AI_PROVIDER=openai_pro AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:gemini": "AI_PROVIDER=google AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
-    "test:integration:gemini-flash": "AI_PROVIDER=google_fast AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:grok": "AI_PROVIDER=xai AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
-    "test:integration:grok-fast": "AI_PROVIDER=xai_fast AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
-    "test:integration:mistral": "AI_PROVIDER=mistral AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
-    "test:integration:deepseek": "AI_PROVIDER=deepseek AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
+    "test:integration:kimi": "AI_PROVIDER=kimi AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
+    "test:integration:kimi-thinking": "AI_PROVIDER=kimi_thinking AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:bedrock": "AI_PROVIDER=amazon_bedrock AI_MODEL=global.anthropic.claude-sonnet-4-20250514-v1:0 AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "test:integration:custom-endpoint": "AI_PROVIDER=openai AI_PROVIDER_SDK=vercel DEBUG_DOT_AI=true ./tests/integration/infrastructure/run-integration-tests.sh",
     "eval:comparative": "DEBUG_DOT_AI=true npx tsx src/evaluation/eval-runner.ts",
@@ -99,9 +97,7 @@
   "dependencies": {
     "@ai-sdk/amazon-bedrock": "^3.0.50",
     "@ai-sdk/anthropic": "^2.0.23",
-    "@ai-sdk/deepseek": "^1.0.23",
     "@ai-sdk/google": "^2.0.17",
-    "@ai-sdk/mistral": "^2.0.19",
     "@ai-sdk/openai": "^2.0.42",
     "@ai-sdk/xai": "^2.0.26",
     "@anthropic-ai/sdk": "^0.65.0",
package/prompts/operate-system.md CHANGED
@@ -117,7 +117,7 @@ Once analysis is complete, respond with ONLY this JSON format:
   },
   "commands": [
     "kubectl set image deployment/my-api my-api=my-api:v2.0 -n default",
-    "kubectl apply -f /tmp/hpa.yaml"
+    "kubectl apply -f - <<EOF\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: my-api-hpa\n  namespace: default\nspec:\n  scaleTargetRef:\n    apiVersion: apps/v1\n    kind: Deployment\n    name: my-api\n  minReplicas: 2\n  maxReplicas: 10\nEOF"
   ],
   "dryRunValidation": {
     "status": "success",
@@ -173,7 +173,8 @@ Once analysis is complete, respond with ONLY this JSON format:
 - **Use specific resources**: `deployment/my-api` not `deployments my-api`
 - **Include namespace**: Always specify `-n namespace` for clarity
 - **Imperative when possible**: Use `kubectl set image`, `kubectl scale` for simple updates
-- **Declarative for complex**: Use `kubectl apply` for multi-field updates or new resources
+- **Declarative for complex**: Use `kubectl apply -f -` with inline heredoc YAML for new resources
+- **Never reference files**: Don't use `kubectl apply -f /path/file.yaml` - files don't exist. Always use inline YAML with heredoc: `kubectl apply -f - <<EOF\n...\nEOF`
 - **No shell operators**: Don't chain commands with `&&` or `;` - return array of individual commands
 
 **Command ordering**:
@@ -289,7 +290,7 @@ Once analysis is complete, respond with ONLY this JSON format:
   },
   "commands": [
     "kubectl patch deployment/my-api -n default --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/resources\", \"value\": {\"requests\": {\"cpu\": \"100m\", \"memory\": \"128Mi\"}}}]'",
-    "kubectl apply -f /tmp/scaled-object.yaml"
+    "kubectl apply -f - <<EOF\napiVersion: keda.sh/v1alpha1\nkind: ScaledObject\nmetadata:\n  name: my-api-scaler\n  namespace: default\nspec:\n  scaleTargetRef:\n    name: my-api\n  minReplicaCount: 2\n  maxReplicaCount: 10\n  triggers:\n  - type: cpu\n    metricType: Utilization\n    metadata:\n      value: '70'\nEOF"
   ],
   "dryRunValidation": {
     "status": "success",
package/prompts/question-generation.md CHANGED
@@ -164,10 +164,17 @@ Return your response as JSON in this exact format:
   "open": {
     "question": "Is there anything else about your requirements or constraints that would help us provide better recommendations?",
     "placeholder": "e.g., specific security requirements, performance needs, existing infrastructure constraints..."
-  }
+  },
+  "relevantPolicies": ["Minimum 3 replicas for production deployments", "Resource limits required for all containers"]
 }
 ```
 
+**CRITICAL - Relevant Policies Field:**
+- Include `relevantPolicies` array with the **descriptions** of organizational policies that influenced your question generation
+- Use the policy `description` field from the Organizational Policies section provided above
+- Only include policies that were actually applied (e.g., policies that resulted in questions being added or made required)
+- Use empty array `[]` if no organizational policies influenced the questions
+
 ## Important Notes
 
 - **CRITICAL VALIDATION REQUIREMENT**: The REQUIRED section MUST contain a question with `"id": "name"` - responses without this will be rejected
package/prompts/remediate-system.md CHANGED
@@ -95,6 +95,15 @@ Once investigation is complete, respond with ONLY this JSON format:
 - **Focus on fixes**: Include only actions that change system state to resolve issue
 - **No validation actions**: Describe validation needs in `validationIntent`, not as separate actions
 
+**Kubectl Patch Strategy Selection**:
+- **Use `--type=json` for array updates** (containers, volumes, env vars):
+  - JSON Patch allows precise array element targeting by index
+  - Example: `kubectl patch deployment app -n ns --type=json -p='[{"op":"replace","path":"/spec/template/spec/containers/0/resources/limits/memory","value":"256Mi"}]'`
+- **Use `--type=merge` for simple field updates**:
+  - Simpler syntax for non-array fields
+  - Example: `kubectl patch deployment app -n ns --type=merge -p='{"spec":{"replicas":3}}'`
+- **Avoid `--type=strategic`**: Can cause "invalid character" errors with partial array specifications, especially for containers
+
 **Risk Assessment**:
 - **Low risk**: Restart pods, scale replicas, update labels, increase resource requests
 - **Medium risk**: Change environment variables, update resource limits, modify ConfigMaps/Secrets, patch deployments
@@ -125,7 +134,7 @@ Once investigation is complete, respond with ONLY this JSON format:
 "actions": [
   {
     "description": "Remove bootstrap.recovery configuration to allow fresh cluster initialization",
-    "command": "kubectl patch cluster postgres-db -n test-ns --type=json -p='[{\"op\": \"remove\", \"path\": \"/spec/bootstrap/recovery\"}]'",
+    "command": "kubectl patch cluster postgres-db -n test-ns --type=json -p='[{\"op\":\"remove\",\"path\":\"/spec/bootstrap/recovery\"}]'",
     "risk": "medium",
     "rationale": "Removing invalid backup reference allows operator to create cluster with fresh initialization instead of waiting for non-existent backup"
   }
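One useful property of the `--type=json` strategy the prompt now prescribes: the `-p` payload is a plain JSON array of RFC 6902 operations, so it can be sanity-checked with any JSON parser before kubectl ever sees it. A rough local sketch (not part of the package; assumes `python3` is on the PATH):

```shell
# Validate a JSON Patch payload before passing it to 'kubectl patch --type=json'.
# json.tool exits non-zero on malformed JSON, catching quoting mistakes early.
patch='[{"op":"replace","path":"/spec/template/spec/containers/0/resources/limits/memory","value":"256Mi"}]'
echo "$patch" | python3 -m json.tool > /dev/null && echo "valid JSON Patch payload"
```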
package/prompts/resource-selection.md CHANGED
@@ -82,20 +82,18 @@ Respond with ONLY a JSON object containing an array of complete solutions. Each
     "score": 95,
     "description": "Complete web application deployment with networking",
     "reasons": ["High capability match for web applications", "Includes essential networking"],
-    "patternInfluences": [
-      {
-        "patternId": "web-app-pattern-123",
-        "description": "Web application deployment pattern",
-        "influence": "high",
-        "matchedTriggers": ["web application", "frontend"]
-      }
-    ],
-    "usedPatterns": true
+    "appliedPatterns": ["High availability web application pattern", "Ingress with TLS termination"]
   }
 ]
 }
 ```
 
+**CRITICAL - Applied Patterns Field:**
+- Include `appliedPatterns` array with the **descriptions** of organizational patterns you applied to this solution
+- Use the pattern `description` field from the Organizational Patterns section provided above
+- Only include patterns that directly influenced this solution's resource selection or configuration
+- Use empty array `[]` if no organizational patterns were applied
+
 IMPORTANT: Your response must be ONLY the JSON object, nothing else.
 
 ## Selection Philosophy
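The shape change above is easy to check mechanically: the new `appliedPatterns` field is a flat array of description strings, unlike the removed `patternInfluences` array of objects. A hypothetical consumer-side check (not from the package; assumes `python3` is available):

```shell
# Assert that a solution object carries 'appliedPatterns' as a plain list,
# the shape this version of the prompt requires.
solution='{"score":95,"appliedPatterns":["High availability web application pattern"]}'
echo "$solution" | python3 -c 'import json,sys; s=json.load(sys.stdin); assert isinstance(s["appliedPatterns"], list); print("ok")'
```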
package/scripts/crossplane.nu CHANGED
@@ -27,7 +27,7 @@ def --env "main apply crossplane" [
   (
     helm upgrade --install crossplane "crossplane/crossplane"
       --namespace crossplane-system --create-namespace
-      --set provider.defaultActivations={"*.m.upbound.io", "*.m.crossplane.io"}
+      --set provider.defaultActivations={"*.m.upbound.io","*.m.crossplane.io"}
       --wait
   )
 
package/scripts/dot-ai.nu CHANGED
@@ -1,6 +1,6 @@
 #!/usr/bin/env nu
 
-# Installs DevOps AI Toolkit with MCP server support
+# Installs DevOps AI Toolkit with MCP server support and controller
 #
 # Examples:
 # > main apply dot-ai --host dot-ai.127.0.0.1.nip.io
@@ -12,8 +12,10 @@ def "main apply dot-ai" [
   --provider = "anthropic",
   --model = "claude-haiku-4-5-20251001",
   --ingress-enabled = true,
+  --ingress-class = "nginx",
   --host = "dot-ai.127.0.0.1.nip.io",
-  --version = "0.128.0",
+  --version = "0.140.0",
+  --controller-version = "0.16.0",
   --enable-tracing = false
 ] {
 
@@ -42,6 +44,13 @@
     []
   }
 
+  (
+    helm upgrade --install dot-ai-controller
+      $"oci://ghcr.io/vfarcic/dot-ai-controller/charts/dot-ai-controller:($controller_version)"
+      --namespace dot-ai --create-namespace
+      --wait
+  )
+
   (
     helm upgrade --install dot-ai-mcp
       $"oci://ghcr.io/vfarcic/dot-ai/charts/dot-ai:($version)"
@@ -50,12 +59,15 @@
       --set $"ai.provider=($provider)"
       --set $"ai.model=($model)"
       --set $"ingress.enabled=($ingress_enabled)"
+      --set $"ingress.className=($ingress_class)"
       --set $"ingress.host=($host)"
+      --set "controller.enabled=true"
       ...$tracing_flags
       --namespace dot-ai --create-namespace
       --wait
   )
 
+  print $"DevOps AI Controller (ansi yellow_bold)($controller_version)(ansi reset) installed in (ansi yellow_bold)dot-ai(ansi reset) namespace"
   print $"DevOps AI Toolkit is available at (ansi yellow_bold)http://($host)(ansi reset)"
 
   if $enable_tracing {
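The new controller install pulls its chart as an OCI reference built from the `--controller-version` flag; a minimal shell sketch of that string construction (names taken from the script above, the variable assignment is the sketch's own):

```shell
# Build the OCI chart reference the script interpolates for
# 'helm upgrade --install dot-ai-controller'.
controller_version="0.16.0"
chart="oci://ghcr.io/vfarcic/dot-ai-controller/charts/dot-ai-controller:${controller_version}"
echo "$chart"
```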
package/shared-prompts/prd-close.md CHANGED
@@ -119,7 +119,7 @@ Update the PRD metadata and add completion work log:
 
 ### Step 4: Move PRD to Archive
 
-Move the PRD file to the done directory:
+Move the PRD file to the done directory and update roadmap:
 
 ```bash
 # Create done directory if it doesn't exist
@@ -129,6 +129,12 @@ mkdir -p prds/done
 git mv prds/[number]-[name].md prds/done/
 ```
 
+**Update ROADMAP.md (if it exists):**
+- [ ] Check if `docs/ROADMAP.md` exists
+- [ ] Remove the closed PRD from the roadmap (search for "PRD #[number]")
+- [ ] Remove the entire line that references this PRD
+- [ ] Closed PRDs should not appear in future roadmap as they're no longer being worked on
+
 ### Step 5: Update GitHub Issue
 
 **Reopen issue temporarily to update:**
package/shared-prompts/deploy.md DELETED
@@ -1,23 +0,0 @@
----
-name: deploy
-description: Deploy applications, infrastructure, and services to Kubernetes
-category: deployment
----
-
-# Deploy to Kubernetes
-
-What do you want to deploy?
-
-**Examples:**
-- "Deploy a Node.js web application with PostgreSQL database"
-- "Deploy Prometheus monitoring with Grafana dashboards"
-- "Deploy WordPress with MySQL and persistent storage"
-- "Deploy ArgoCD for GitOps workflows"
-- "Deploy Redis cluster for caching"
-- "Deploy ingress controller with SSL certificates"
-
-**Your deployment intent**: [Please describe what you want to deploy]
-
----
-
-Once you provide your intent, I'll call the `recommend` tool to generate deployment recommendations for your Kubernetes cluster.
package/shared-prompts/manage-org-data.md DELETED
@@ -1,42 +0,0 @@
----
-name: manage-org-data
-description: Manage organizational patterns, policy intents, and cluster resource capabilities
-category: administration
----
-
-# Manage Organizational Data
-
-## Choose what you want to manage:
-
-### Organizational Patterns:
-1. Create a new organizational pattern
-2. List existing patterns
-3. Get pattern details
-4. Search patterns
-5. Delete a specific pattern
-6. Delete all patterns
-
-### Policy Intents:
-7. Create a new policy intent
-8. List existing policy intents
-9. Get policy intent details
-10. Search policy intents
-11. Delete a specific policy intent
-12. Delete all policy intents
-
-### Resource Capabilities:
-13. Scan cluster for resource capabilities
-14. List discovered capabilities
-15. Get capability details
-16. Delete specific capability data
-17. Delete all capabilities
-18. Check scan progress
-19. Analyze capabilities
-
-**Your choice**: [Type the number (1-19) or describe what you want to do]
-
----
-
-Examples: Type "1" or "Create a new organizational pattern" or "7" or "Create a new policy intent" or "13" or "Scan cluster"
-
-Once you make your choice, I'll call the `manageOrgData` tool with the appropriate parameters.
package/shared-prompts/remediate.md DELETED
@@ -1,44 +0,0 @@
----
-name: remediate
-description: AI-powered Kubernetes issue analysis and remediation
-category: troubleshooting
----
-
-# Kubernetes Issue Remediation
-
-## What's going wrong with your Kubernetes cluster?
-
-Describe the issue you're experiencing and I'll use AI-powered investigation to identify the root cause and provide executable remediation steps.
-
-**Examples:**
-- "Pod stuck in Pending state"
-- "Database connection failing in production namespace"
-- "Application deployment not working"
-- "Something is wrong with my ingress"
-- "Memory issues in my pods"
-- "Storage problems in namespace xyz"
-- "Network connectivity issues"
-- "Service discovery not working"
-
-**Your issue description**: [Describe what's going wrong]
-
----
-
-## Execution Modes:
-
-**Manual Mode** (default): You review and approve each remediation step
-**Automatic Mode**: AI executes low-risk fixes automatically based on confidence thresholds
-
-To use automatic mode, add phrases like:
-- "fix this automatically"
-- "remediate automatically with high confidence"
-- "auto-fix if safe"
-
----
-
-Once you describe your issue, I'll call the `remediate` tool to:
-1. **Investigate** - Multi-step analysis to identify root cause
-2. **Analyze** - Provide detailed explanation with confidence level
-3. **Remediate** - Generate specific kubectl commands with risk assessment
-4. **Execute** - Run fixes via MCP or guide you through manual execution
-5. **Validate** - Confirm the issue is resolved
package/shared-prompts/status.md DELETED
@@ -1,19 +0,0 @@
----
-name: status
-description: Check system status and health
-category: administration
----
-
-# System Status Check
-
-I'll check the comprehensive system status including:
-
-✅ **Version information**
-✅ **Vector DB connection status**
-✅ **Embedding service capabilities**
-✅ **Anthropic API connectivity**
-✅ **Pattern management health**
-
----
-
-Calling the `version` tool to get current system status...