@vfarcic/dot-ai 0.27.0 → 0.28.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@vfarcic/dot-ai",
-  "version": "0.27.0",
+  "version": "0.28.0",
   "description": "Universal Kubernetes application deployment agent with CLI and MCP interfaces",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -15,73 +15,32 @@ Analyze the user's intent and determine the best solution(s). This could be:
 - A combination of resources that can actually integrate and work together to create a complete solution
 - Multiple alternative approaches ranked by effectiveness
 
-For each solution, provide:
-1. A score from 0-100 for how well it meets the user's needs
-2. Specific reasons why this solution addresses the intent
-3. Whether it's a single resource or combination, and why
-4. Production readiness and best practices
-
-Consider:
-- Semantic meaning and typical use cases
-- Resource relationships and orchestration patterns
-- Complete end-to-end solutions vs partial solutions
-- Production patterns and best practices
-- **Custom Resource Definitions (CRDs)** that may provide simpler, higher-level abstractions
-- Platform operators (Crossplane, Knative, etc.) that might offer better user experience
-- User experience - simpler declarative approaches often score higher than complex multi-resource solutions
-- **Schema-based capability analysis**: Examine the actual resource schema fields to determine what capabilities each resource truly supports
-- **Intent-solution alignment**: Ensure solutions directly fulfill the user's stated intent rather than just providing prerequisites or supporting infrastructure
-- **Complete intent fulfillment**: Solutions must address ALL parts of the user's intent, not just some aspects
-
-## Schema-Based Capability Analysis
-
-**CRITICAL**: Before scoring any solution, analyze all resource schemas in that solution to determine actual capabilities:
-
-### Capability Detection Method
-For each resource schema in the solution, examine field patterns that indicate capabilities:
-- **Field names and types**: Look for schema fields whose names, descriptions, or types relate to the user's intent
-- **Nested structures**: Check for complex objects that suggest advanced functionality
-- **Reference patterns**: Identify fields that reference other resources or external systems
-- **Configuration options**: Note fields that allow customization relevant to the user's needs
-- **Capability precision**: Distinguish between similar but different capabilities (e.g., external connections vs direct integration, configuration vs execution, monitoring vs logging)
-
-### Intent-Schema Matching Process
-1. **Extract keywords** from user intent (e.g., "storage", "network", "scale", "database", "monitor")
-2. **Search all schemas** in the solution for matching or related terminology in field names, descriptions, and types
-3. **Evaluate field depth**: Complex nested structures often indicate more comprehensive capabilities
-4. **Check for extension points**: Fields that allow custom configuration or references to external resources
-
-### Solution Scoring Based on Schema Analysis
-- **High relevance (80-100 points)**: Schemas contain multiple fields directly related to user intent
-- **Medium relevance (50-79 points)**: Schemas contain some fields that could support user intent
-- **Low relevance (20-49 points)**: Schemas have minimal or indirect support for user intent
-- **Reject (0-19 points)**: Schemas lack any fields related to user intent - DO NOT include these solutions
-
-## CRD Preference Guidelines
-
-When evaluating CRDs vs standard Kubernetes resources:
-- **Prefer CRDs with matching capabilities**: If a CRD's schemas directly address the user's specific needs, it should score higher than manually combining multiple standard resources
-- **Favor purpose-built solutions**: CRDs designed for specific use cases should score higher than generic resource combinations when the use case aligns AND the schemas support the required capabilities
-- **Value comprehensive functionality**: A single CRD that handles multiple related concerns should score higher than manually orchestrating separate resources for the same outcome
-- **Consider operational simplicity**: CRDs that provide intuitive, domain-specific interfaces should be preferred over complex multi-resource configurations
-- **Give preference to platform abstractions**: For application deployment scenarios, purpose-built CRDs with comprehensive application platform features should be weighted more favorably than basic resources requiring manual orchestration
-- **Match scope to intent**: Only prefer CRDs when their schemas genuinely align with what the user is trying to achieve
-
-## Resource Combination Validation
-
-**CRITICAL**: When proposing combination solutions, verify schema-based integration compatibility:
-
-- **Check integration fields**: For combinations, ensure one resource has schema fields that can reference or integrate with the other resource
-- **Verify field compatibility**: Analyze schemas to confirm resources have compatible integration points before combining them
-- **Reject incompatible combinations**: Do not suggest combinations where resource schemas lack the necessary fields to work together
-
-## Solution Filtering Rules
-
-**IMPORTANT**: To avoid rejecting all solutions:
-- **Be inclusive initially**: The resource selection phase should identify MORE potential candidates, not fewer
-- **Apply schema filtering here**: Only reject solutions where schemas completely lack relevant fields
-- **Provide alternatives**: If rejecting solutions, always provide at least 2-3 viable alternatives
-- **Explain rejections**: When scoring low, clearly explain which schema fields are missing
+## MANDATORY Validation Process
+
+**STEP 1: Extract Intent Requirements**
+Parse the user intent and identify ALL requirements (e.g., "stateful application" + "persistent storage" + "accessible through Ingress").
+
+**STEP 2: Schema Analysis for Each Resource**
+For each resource in your solution, examine its schema fields to verify it can fulfill the requirements:
+- **Direct field matching**: Look for schema fields whose names directly relate to the requirements
+- **Integration capability**: Check if the resource has fields to integrate with other needed resources
+- **Reject false matches**: Do not assume capabilities that aren't explicitly present in the schema fields
+
+**STEP 3: Solution Completeness Check**
+Verify your solution addresses ALL requirements from Step 1. Incomplete solutions must score lower.
+
+**STEP 4: Combination Validation**
+For multi-resource solutions, verify integration compatibility by checking that resources have schema fields to reference each other.
+
+## Scoring Guidelines
+
+Score solutions based on completeness and schema validation:
+
+- **90-100**: Complete solution, schema fields directly support ALL requirements
+- **70-89**: Good solution, schema fields support most requirements with minor gaps
+- **50-69**: Partial solution, schema fields support some requirements but missing others
+- **30-49**: Incomplete solution, schema fields only partially support requirements
+- **0-29**: Poor fit, schema fields don't meaningfully support the requirements
 
 ## Response Format
 
@@ -100,21 +59,10 @@ When evaluating CRDs vs standard Kubernetes resources:
       "score": 85,
       "description": "Brief description of this solution",
       "reasons": ["reason1", "reason2"],
-      "analysis": "Detailed explanation of why this solution meets the user's needs"
+      "analysis": "Detailed explanation of schema analysis and why this solution meets the user's needs"
     }
   ]
 }
 ```
 
-For each resource in the `resources` array, provide:
-- `kind`: The resource type (e.g., "Deployment", "Service", "AppClaim")
-- `apiVersion`: The API version (e.g., "apps/v1", "v1")
-- `group`: The API group (empty string for core resources, e.g., "apps", "devopstoolkit.live")
-
-## Scoring Guidelines
-
-- **90-100**: Complete solution, fully addresses ALL aspects of user intent
-- **70-89**: Good solution, addresses most aspects of user intent with minor gaps
-- **50-69**: Partial solution, addresses some aspects of user intent but requires additional work
-- **30-49**: Incomplete solution, only addresses part of the user intent or provides supporting infrastructure without primary functionality
-- **0-29**: Poor fit, doesn't meaningfully address the user's intent
+**IMPORTANT**: In your analysis field, explicitly explain which schema fields enable each requirement from the user intent. If a requirement cannot be fulfilled by available schema fields, explain this and score accordingly.
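The response format in the last hunk shows solution objects with `score`, `description`, `reasons`, and `analysis` fields, scored 0-100. A consumer of this prompt's output could validate responses against that shape before using them; a minimal sketch in Python, where the top-level array key name `solutions` is an assumption (the diff does not show it) and the sample payload is hypothetical:

```python
# Validate a solution-ranking response against the shape shown in the diff.
# Field names ("score", "description", "reasons", "analysis") and the 0-100
# score range come from the prompt; the "solutions" key name is assumed.

def validate_response(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the payload is valid."""
    solutions = payload.get("solutions")
    if not isinstance(solutions, list):
        return ["'solutions' must be a list"]
    errors = []
    for i, sol in enumerate(solutions):
        if not isinstance(sol, dict):
            errors.append(f"solutions[{i}] must be an object")
            continue
        # Check each required field has the expected JSON type.
        for field, typ in [("score", int), ("description", str),
                           ("reasons", list), ("analysis", str)]:
            if not isinstance(sol.get(field), typ):
                errors.append(f"solutions[{i}].{field} missing or wrong type")
        score = sol.get("score")
        if isinstance(score, int) and not 0 <= score <= 100:
            errors.append(f"solutions[{i}].score must be within 0-100")
    return errors

sample = {
    "solutions": [{
        "score": 85,
        "description": "Brief description of this solution",
        "reasons": ["reason1", "reason2"],
        "analysis": "Schema fields enabling each requirement ...",
    }]
}
print(validate_response(sample))  # an empty list: the sample matches the shape
```

Rejecting out-of-range scores here mirrors the prompt's scoring bands, which only define meanings for 0-29 through 90-100.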