@vfarcic/dot-ai 0.19.0 → 0.20.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@vfarcic/dot-ai",
-  "version": "0.19.0",
+  "version": "0.20.0",
   "description": "Universal Kubernetes application deployment agent with CLI and MCP interfaces",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -20,7 +20,7 @@ Select all resources that could be relevant for this intent. Consider:
 - Platform-specific resources (e.g., Crossplane, Knative, Istio, ArgoCD) that could simplify the deployment
 - **CRD Selection Priority**: If you see multiple CRDs from the same group with similar purposes (like "App" and "AppClaim"), include the namespace-scoped ones (marked as "Namespaced: true") rather than cluster-scoped ones, as they're more appropriate for application deployments
 
-Don't limit yourself - if the intent is complex, select as many resources as needed.
+Don't limit yourself - if the intent is complex, select as many resources as needed. **Be extra inclusive** - the detailed schema analysis phase will filter out inappropriate resources, so it's better to include more candidates initially.
 
 ## Response Format
 
@@ -29,16 +29,48 @@ Consider:
 - **Custom Resource Definitions (CRDs)** that may provide simpler, higher-level abstractions
 - Platform operators (Crossplane, Knative, etc.) that might offer better user experience
 - User experience - simpler declarative approaches often score higher than complex multi-resource solutions
+- **Schema-based capability analysis**: Examine the actual resource schema fields to determine what capabilities each resource truly supports
+
+## Schema-Based Capability Analysis
+
+**CRITICAL**: Before scoring any solution, analyze all resource schemas in that solution to determine actual capabilities:
+
+### Capability Detection Method
+For each resource schema in the solution, examine field patterns that indicate capabilities:
+- **Field names and types**: Look for schema fields whose names, descriptions, or types relate to the user's intent
+- **Nested structures**: Check for complex objects that suggest advanced functionality
+- **Reference patterns**: Identify fields that reference other resources or external systems
+- **Configuration options**: Note fields that allow customization relevant to the user's needs
+
+### Intent-Schema Matching Process
+1. **Extract keywords** from user intent (e.g., "storage", "network", "scale", "database", "monitor")
+2. **Search all schemas** in the solution for matching or related terminology in field names, descriptions, and types
+3. **Evaluate field depth**: Complex nested structures often indicate more comprehensive capabilities
+4. **Check for extension points**: Fields that allow custom configuration or references to external resources
+
+### Solution Scoring Based on Schema Analysis
+- **High relevance (80-100 points)**: Schemas contain multiple fields directly related to user intent
+- **Medium relevance (50-79 points)**: Schemas contain some fields that could support user intent
+- **Low relevance (20-49 points)**: Schemas have minimal or indirect support for user intent
+- **Reject (0-19 points)**: Schemas lack any fields related to user intent - DO NOT include these solutions
 
 ## CRD Preference Guidelines
 
 When evaluating CRDs vs standard Kubernetes resources:
-- **Prefer CRDs with matching capabilities**: If a CRD's listed capabilities directly address the user's specific needs, it should score higher than manually combining multiple standard resources
-- **Favor purpose-built solutions**: CRDs designed for specific use cases should score higher than generic resource combinations when the use case aligns
-- **Value comprehensive functionality**: A single CRD that handles multiple related concerns (deployment + networking + scaling) should score higher than manually orchestrating separate resources for the same outcome
+- **Prefer CRDs with matching capabilities**: If a CRD's schemas directly address the user's specific needs, it should score higher than manually combining multiple standard resources
+- **Favor purpose-built solutions**: CRDs designed for specific use cases should score higher than generic resource combinations when the use case aligns AND the schemas support the required capabilities
+- **Value comprehensive functionality**: A single CRD that handles multiple related concerns should score higher than manually orchestrating separate resources for the same outcome
 - **Consider operational simplicity**: CRDs that provide intuitive, domain-specific interfaces should be preferred over complex multi-resource configurations
 - **Give preference to platform abstractions**: For application deployment scenarios, purpose-built CRDs with comprehensive application platform features should be weighted more favorably than basic resources requiring manual orchestration
-- **Match scope to intent**: Only prefer CRDs when their documented capabilities genuinely align with what the user is trying to achieve
+- **Match scope to intent**: Only prefer CRDs when their schemas genuinely align with what the user is trying to achieve
+
+## Solution Filtering Rules
+
+**IMPORTANT**: To avoid rejecting all solutions:
+- **Be inclusive initially**: The resource selection phase should identify MORE potential candidates, not fewer
+- **Apply schema filtering here**: Only reject solutions where schemas completely lack relevant fields
+- **Provide alternatives**: If rejecting solutions, always provide at least 2-3 viable alternatives
+- **Explain rejections**: When scoring low, clearly explain which schema fields are missing
 
 ## Response Format
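The keyword-extraction, schema-search, and scoring bands that this release adds to the prompt could be sketched in TypeScript roughly as follows. All names here (`ResourceSchema`, `extractKeywords`, `scoreSolution`, `classify`) are hypothetical illustrations of the described process, not part of `@vfarcic/dot-ai`'s actual code, which delegates this reasoning to the model:

```typescript
// Illustrative sketch only: types and weights are assumptions, not the package's API.

interface SchemaField {
  name: string;
  type: string;
  description?: string;
  fields?: SchemaField[]; // nested structure
}

interface ResourceSchema {
  kind: string;
  fields: SchemaField[];
}

// Step 1: extract keywords from the user's intent.
function extractKeywords(intent: string): string[] {
  return intent.toLowerCase().split(/[^a-z0-9]+/).filter(w => w.length > 2);
}

// Flatten nested fields so deeply nested capabilities are searchable too.
function flatten(fields: SchemaField[], depth = 0): Array<{ field: SchemaField; depth: number }> {
  return fields.flatMap(f => [{ field: f, depth }, ...flatten(f.fields ?? [], depth + 1)]);
}

// Steps 2-3: search field names, descriptions, and types for keyword matches,
// weighting deeper (more structured) matches slightly higher.
function scoreSolution(schemas: ResourceSchema[], intent: string): number {
  const keywords = extractKeywords(intent);
  let score = 0;
  for (const schema of schemas) {
    for (const { field, depth } of flatten(schema.fields)) {
      const haystack = `${field.name} ${field.description ?? ""} ${field.type}`.toLowerCase();
      if (keywords.some(k => haystack.includes(k))) {
        score += 25 + depth * 5; // nested matches suggest richer capability
      }
    }
  }
  return Math.min(score, 100);
}

// Apply the scoring bands from the prompt: anything under 20 is rejected.
function classify(score: number): "high" | "medium" | "low" | "reject" {
  if (score >= 80) return "high";
  if (score >= 50) return "medium";
  if (score >= 20) return "low";
  return "reject";
}
```

Under this reading, a solution whose schemas contain no intent-related fields lands in the 0-19 band and is dropped, while the new "Be extra inclusive" instruction in the earlier hunk ensures enough candidates survive to provide the required alternatives.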