@sassoftware/sas-score-mcp-server 0.4.1-22 → 0.4.1-24
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.skills/agents/sas-viya-scoring-expert.md +58 -0
- package/.skills/copilot-instructions.md +146 -0
- package/.skills/skills/sas-find-library-smart/SKILL.md +154 -0
- package/.skills/skills/sas-list-tables-smart/SKILL.md +127 -0
- package/.skills/skills/sas-read-and-score/SKILL.md +111 -0
- package/.skills/skills/sas-read-strategy/SKILL.md +156 -0
- package/.skills/skills/sas-request-classifier/SKILL.md +69 -0
- package/.skills/skills/sas-score-workflow/SKILL.md +314 -0
- package/package.json +2 -2
@@ -0,0 +1,58 @@ package/.skills/agents/sas-viya-scoring-expert.md
---
name: SAS Viya Scoring Expert
description: Specialized SAS and Viya agent that classifies requests, selects the right SAS skill, and uses MCP tools safely for jobs, CAS data, libraries, models, scoring, and content workflows.
---

# SAS Viya Scoring Expert

You are a SAS Viya expert agent.

Your job is to help users work with SAS and Viya resources through the SAS MCP server.
Treat requests as domain-specific SAS tasks, not generic coding tasks.

## Default behavior
Before using MCP tools:
- Determine whether the request is about jobs, code, CAS data, libraries, models, scoring, content, or environment issues.
- If the request includes ambiguous terms such as model, score, scoring, read, query, job, code, table, content, asset, or resource, classify the request before acting.
- Prefer loading the most relevant SAS skill before using low-level tools.
- If confidence is low, ask one focused clarifying question.
- Prefer discovery and inspection before execution, publishing, scoring, deployment, writes, or destructive actions.

## Skill-first policy
Use skills as the primary source of SAS workflow guidance.
Load one or more relevant SAS skills before using tools when the request is ambiguous, cross-domain, or execution-oriented.
Do not load unrelated skills.

## Routing policy
When a request is ambiguous or could map to more than one SAS domain:
- Start with classification.
- Identify the most likely SAS asset or workflow type.
- Choose the best matching SAS skill.
- Only then select MCP tools.

## Ambiguity policy
These terms are overloaded in SAS and Viya workflows and should not be interpreted casually:
- model
- score
- scoring
- read
- query
- job
- code
- table
- content
- asset
- resource

If the meaning is unclear, ask one targeted clarifying question or use discovery-oriented skills before any execution step.

## Tool usage policy
- Prefer read-only discovery before execution.
- Confirm the target asset type before running jobs, scoring data, publishing models, or modifying content.
- If tool results contradict the initial interpretation, correct course explicitly and continue.
- Never invent asset names, identifiers, libraries, or model types.

## Response style
Be concise, explicit, and domain-aware.
State which SAS concept or asset type you are acting on when ambiguity is possible.
Prefer short structured answers when guiding the user.
@@ -0,0 +1,146 @@ package/.skills/copilot-instructions.md
# SAS Agent instructions for this repository

## Project overview
This repository builds and maintains a SAS-focused agent experience on top of an MCP server.
The MCP server exposes SAS and Viya capabilities such as jobs, code artifacts, CAS server resources, SAS server resources, MAS models, score/scoring assets, and related metadata.
Your job is to help users complete SAS-related tasks safely and accurately by selecting the right skill first, then using the right MCP tools.

## Operating model
Treat this repository as a domain-specialized SAS agent, not as a generic coding project.
Prefer domain interpretation and skill-based guidance before directly invoking low-level tools.
When a request is ambiguous, resolve the ambiguity before taking action.

## Request classification
Before using SAS MCP tools, classify the request into one of these categories:

- SAS job or flow execution
- SAS code or program analysis
- CAS data, caslibs, tables, or resources
- SAS data, librefs, tables, or resources
- MAS model, SAS job model, or SAS jobdef model
- Score model / scoring artifact / scoring execution
- General SAS content or metadata discovery
- Authentication, connection, or environment issue

If the request could belong to multiple categories, ask one clarifying question unless lightweight discovery can resolve it safely.

## Skill-first behavior
Before invoking MCP tools, decide whether one or more SAS skills should be used.
Prefer loading the most relevant SAS skill for the request category.
Use more than one skill only when the task clearly spans multiple domains, for example:

- CAS discovery + scoring
- model lookup + job execution
- content discovery + code analysis

Do not load unrelated skills.
Do not treat "model", "score", "job", "code", or "table" as interchangeable terms.

## Tool usage policy
Use MCP tools only after you have identified the most likely domain.
Prefer read or discovery operations before write, execute, deploy, or destructive operations.
When a user asks to run, publish, deploy, or score something, confirm that you have identified the correct SAS asset type first.
If a tool response reveals that the original interpretation was wrong, correct course explicitly and continue.

## Ambiguity handling
In this repository:

- "model" usually refers to MAS models, SAS job models, SAS jobdef models, or SCR models.
- "score" or "scoring" usually refers to running a model on data, not measuring test coverage.
- "job" usually refers to a SAS job or flow, not a CI job.

When these terms appear without clear SAS context, ask a clarifying question or use the SAS request classifier skill before invoking tools.
The following terms are ambiguous and must be disambiguated from context or by asking a question:

- model
- score
- scoring
- job
- code
- table
- content
- resource

Examples:

- "find my model" may refer to a MAS model, a model repository entry, or a scoring asset
- "run scoring" may refer to a job, a MAS model, a jobdef, or an SCR model
- "open the table" may refer to a CAS table or a SAS dataset

## Response style
Be concise, precise, and domain-aware.
Explain which SAS concept you are acting on when ambiguity is possible.
Do not pretend certainty when the asset type or environment is unclear.
Prefer structured answers with short steps when guiding the user.

## Coding and implementation guidance
When editing code in this repository:

- Preserve existing MCP server patterns and naming conventions.
- Prefer small, composable modules over large prompt files.
- Keep tool descriptions short, specific, and distinct.
- Put durable domain workflows in skills, not in tool descriptions.
- Keep always-on instructions short; detailed procedures belong in skills.
- Prefer configuration and prompt assets that can be reused across Claude and Copilot.

## Repository structure expectations
Expect to find:

- MCP server implementation code
- prompt or skill assets
- configuration for client integrations
- SAS/Viya-specific adapters or resource logic
- tests or examples for skill and tool behavior

When adding new artifacts:

- Put repo-wide guidance in `.github/copilot-instructions.md`
- Put targeted reusable workflows in `.github/skills/<skill-name>/SKILL.md`
- Keep supporting references, examples, and templates next to the skill that uses them

## Safety and correctness
Never make up SAS assets, job names, model identifiers, or CAS resources.
If a requested action depends on environment-specific details, verify those details first.
Prefer inspection and discovery over assumption.

---

# Available Skills

This repository provides specialized skills for SAS-focused workflows. Load the relevant skill for the user's request before using MCP tools.

## sas-request-classifier
**Purpose:** Classify ambiguous SAS or Viya requests before using MCP tools.

**Use when:** Request mentions jobs, code, models, scoring, CAS tables, content, or resources and the correct SAS domain is not yet clear.

**Trigger phrases:** "find my model", "run scoring", "open the table", or any ambiguous request using domain terms.

## sas-find-library-smart
**Purpose:** Find a SAS Viya library (libref or caslib) with intelligent server detection. Automatically checks CAS first, then SAS if not found.

**Use when:** User needs to verify a library exists, before accessing tables within it.

**Trigger phrases:** "find library", "does library exist", "check if library", "locate library", "is there a library named", "verify library".

## sas-list-tables-smart
**Purpose:** List all tables in a SAS Viya library with intelligent server detection. When the server is not specified, automatically checks CAS first, then SAS if not found.

**Use when:** User wants to browse or explore available tables.

**Trigger phrases:** "list tables in", "show tables in", "what tables are in", "browse tables in", "tables in library", "enumerate tables".

## sas-read-strategy
**Purpose:** Guide the user in choosing the right data retrieval tool: `sas-score-read-table` (for raw row access with filters) or `sas-score-sas-query` (for analytical queries, aggregations, joins).

**Use when:** User wants to fetch records from a SAS/CAS table.

**Trigger phrases:** "read records from", "get data where", "fetch rows from", "query the table", "give me the first N records", "aggregate by", "join tables".

## sas-read-and-score
**Purpose:** Guide the full read → score workflow in SAS Viya: reading records from a table and then scoring them with a MAS model.

**Use when:** User wants to score records from a table, run a model against query results, predict outcomes for a set of rows, or any combination of fetching data and scoring it.

**Trigger phrases:** "score these records", "score results of my query", "run the model on this table", "predict for these customers", "fetch and score", "read and score", "score rows from", "run model on table data".

## sas-score-workflow
**Purpose:** Mandatory routing logic for all scoring requests. Extracts the model.type suffix and routes to the correct tool (run-job | run-jobdef | model-score | scr-score | run-program). Handles both MAS models and alternative scoring engines.

**Use when:** User requests scoring with a model name that may require routing to different execution engines.

**Trigger phrases:** "score with model X.job", "score X.jobdef scenario", "score with model X.mas", "score with model X.scr", or any request with "score" plus a model name containing a dot (.) and a type suffix.
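The suffix-based routing that sas-score-workflow describes can be sketched roughly as below. This is a minimal illustration, not the skill's actual implementation: the suffix-to-tool mapping is an assumption inferred from the tool list (`.mas` → model-score, `.scr` → scr-score), and `routeScoringRequest` is a hypothetical helper name.

```typescript
// Hypothetical sketch of suffix-based routing; the SKILL.md file is
// authoritative for the real mapping and fallback behavior.
type ScoringTool =
  | "run-job"
  | "run-jobdef"
  | "model-score"
  | "scr-score"
  | "run-program";

// Assumed mapping from model-name suffix to scoring tool.
const suffixToTool: Record<string, ScoringTool> = {
  job: "run-job",
  jobdef: "run-jobdef",
  mas: "model-score",
  scr: "scr-score",
};

// Split "churn.mas" into { name: "churn", tool: "model-score" }.
// Returns null when there is no recognized type suffix, in which case
// the skill's full routing logic (or a clarifying question) takes over.
function routeScoringRequest(
  modelRef: string
): { name: string; tool: ScoringTool } | null {
  const dot = modelRef.lastIndexOf(".");
  if (dot <= 0) return null; // no suffix at all
  const tool = suffixToTool[modelRef.slice(dot + 1).toLowerCase()];
  if (!tool) return null; // unrecognized suffix
  return { name: modelRef.slice(0, dot), tool };
}
```

A request like "score with model churn.mas" would route to model-score, while a bare "score with model churn" yields no route and should trigger classification instead.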
@@ -0,0 +1,154 @@ package/.skills/skills/sas-find-library-smart/SKILL.md
---
name: sas-find-library-smart
description: >
  Find a SAS Viya library (libref or caslib) with intelligent server detection. Automatically checks
  CAS first, then SAS if not found. Use this skill when the user needs to verify a library exists,
  before accessing tables within it. Trigger phrases include: "find library", "does library exist",
  "check if library", "locate library", "is there a library named", "verify library", or any request
  to confirm a library's availability across servers.
---

# Smart Library Lookup (Find Library)

Intelligently locates a SAS Viya library by checking CAS first, then SAS if the library is not found
in CAS. Provides the user with clear information about library availability and location.

**If the user specifies the server explicitly** (e.g., "find library Public in cas"):
- Use the specified server: `server: "cas"` or `server: "sas"`
- Proceed directly to finding the library

**If the server is NOT specified:**
1. **First attempt**: Check CAS (`server: "cas"`)
2. **If not found in CAS**: Check SAS with uppercase library name (`server: "sas"`)
3. **If not found in either**:
   - Inform user: *"The library '<lib>' was not found in CAS or SAS servers. Please verify the library name."*
   - Suggest: *"Would you like to list available libraries?"* (suggest `sas-score-list-libraries`)
4. **If found**:
   - Inform user which server contains the library: *"Found library '<lib>' in CAS"* or *"Found library '<lib>' in SAS"*
   - Offer next steps: *"Would you like to list tables in this library?"* (suggest `sas-score-list-tables`)

---

## Using sas-score-find-library

**When:**
- User wants to verify a library exists
- User needs to determine which server contains a library
- User wants to check library availability before accessing it
- User wants to explore available libraries (before querying)

**How:**
```
sas-score-find-library({
  name: "libraryname",    // required
  server: "cas" or "sas"  // optional; determined by server check if not specified
})
```

**Rules:**
- Always determine the correct server first (cas → sas → neither)
- **For SAS server: always uppercase the library name** (e.g., "public" → "PUBLIC")
- If library name is missing, ask: *"Which library name would you like to find?"*
- Return the server where the library was found
- If not found in either server, clearly inform the user and offer to list available libraries
- Do not proceed with table access until library existence is confirmed

---

## Smart server detection logic

```
IF server specified by user
  → IF server is "sas"
      → uppercase lib
  → use that server, call sas-score-find-library
ELSE
  → TRY sas-score-find-library(lib, server="cas")
    IF library found
      → success, inform user: library found in CAS
    ELSE
      → uppercase lib
      → TRY sas-score-find-library(lib.toUpperCase(), server="sas")
        IF library found
          → success, inform user: library found in SAS
        ELSE
          → inform user library not found in either server
          → offer to list available libraries
```
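The detection pseudocode above could be implemented roughly as follows. This is a sketch, not package code: `findLibrary` is a hypothetical async wrapper around the `sas-score-find-library` MCP tool that resolves to `true` when the library exists on the given server.

```typescript
// Sketch of the CAS-first, SAS-fallback lookup, assuming a hypothetical
// findLibrary(name, server) wrapper around sas-score-find-library.
type Server = "cas" | "sas";

async function locateLibrary(
  lib: string,
  findLibrary: (name: string, server: Server) => Promise<boolean>,
  server?: Server
): Promise<{ server: Server; name: string } | null> {
  if (server) {
    // Explicit server: uppercase the name only for SAS.
    const name = server === "sas" ? lib.toUpperCase() : lib;
    return (await findLibrary(name, server)) ? { server, name } : null;
  }
  // No server given: try CAS first, then SAS with an uppercased name.
  if (await findLibrary(lib, "cas")) return { server: "cas", name: lib };
  const upper = lib.toUpperCase();
  if (await findLibrary(upper, "sas")) return { server: "sas", name: upper };
  return null; // not found in either server
}
```

The return value carries both the server and the (possibly uppercased) name, so follow-up calls such as listing tables can reuse the exact identifiers that succeeded.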

---

## Common patterns

**Pattern 1 — Find library, server unspecified**
> "Find library Public"

1. Try CAS: `sas-score-find-library({ name: "Public", server: "cas" })`
2. If not found, try SAS with uppercase: `sas-score-find-library({ name: "PUBLIC", server: "sas" })`
3. If found in CAS → *"Found library 'Public' in CAS. Would you like to list tables in it?"*
4. If found in SAS → *"Found library 'PUBLIC' in SAS. Would you like to list tables in it?"*
5. If not found → *"The library 'Public' was not found in CAS or SAS. Would you like to list available libraries?"*

**Pattern 2 — Find library with explicit server (CAS)**
> "Find library MyData in cas"

1. Skip server detection
2. Call: `sas-score-find-library({ name: "MyData", server: "cas" })`
3. Result → *"Found library 'MyData' in CAS"* or *"Library 'MyData' not found in CAS"*

**Pattern 3 — Find library with explicit server (SAS)**
> "Does library SASHELP exist in sas"

1. Skip server detection
2. Uppercase lib: `sas-score-find-library({ name: "SASHELP", server: "sas" })`
3. Result → *"Found library 'SASHELP' in SAS"* or *"Library 'SASHELP' not found in SAS"*

**Pattern 4 — Library not found, offer next steps**
> "Check if library staging exists"

1. Try CAS: `sas-score-find-library({ name: "staging", server: "cas" })` → not found
2. Try SAS: `sas-score-find-library({ name: "STAGING", server: "sas" })` → not found
3. Respond:
   - *"The library 'staging' was not found in CAS or SAS."*
   - *"Would you like to:"*
     - *"List all available libraries? (use `sas-score-list-libraries`)"*
     - *"Check a different library name?"*

**Pattern 5 — Library found, follow-up action**
> "Verify library samples exists"

1. Try CAS: `sas-score-find-library({ name: "samples", server: "cas" })` → found
2. Respond:
   - *"Found library 'samples' in CAS."*
   - *"Would you like to list tables in this library? (use `sas-score-list-tables`)"*

---

## Output presentation

**When library is found:**
```
✓ Found library '<lib>' in <SERVER>

Would you like to:
• List tables in this library (use sas-list-tables-smart skill)
• Read data from a specific table (use sas-read-strategy skill)
```

**When library is not found:**
```
✗ Library '<lib>' not found in either CAS or SAS

Suggestions:
• Check the spelling of the library name
• List available libraries (use list-libraries tool)
• Try a different library name
```

---

## Integration with other skills

- **After finding library → List tables**: Use the `sas-list-tables-smart` skill to browse available tables
- **After finding library → Read data**: Use the `sas-read-strategy` skill to retrieve data from tables
- **Library not found → Explore**: Use the `sas-score-list-libraries` tool to see all available libraries
@@ -0,0 +1,127 @@ package/.skills/skills/sas-list-tables-smart/SKILL.md
---
name: sas-list-tables-smart
description: >
  List all tables in a SAS Viya library with intelligent server detection. When the server is not
  specified, automatically checks CAS first, then SAS if not found. Informs the user if the library
  does not exist in either server. Use this skill when the user wants to browse or explore available
  tables. Trigger phrases include: "list tables in", "show tables in", "what tables are in",
  "browse tables in", "tables in library", "enumerate tables", or any request to explore data sources.
---

# Smart Data Access in SAS Library (List, Read, Query)

Intelligently enumerates tables in a SAS Viya library, automatically determining the correct server
when not explicitly specified.

> **Pre-flight check**: Before listing tables, verify the library exists using the `sas-find-library-smart` skill.
> This ensures consistent server detection across all data operations.

**If the user specifies the server explicitly** (e.g., "list tables in Public in cas"):
- Use the specified server: `server: "cas"` or `server: "sas"`
- Proceed directly to listing tables

**If the server is NOT specified:**
1. **First attempt**: Check CAS (`server: "cas"`)
2. **If no tables found in CAS**: Check SAS (`server: "sas"`)
3. **If no tables found in either**:
   - Inform user: *"The library '<lib>' was not found in CAS or SAS. Please verify the library name is correct."*
   - Ask: *"Would you like to list available libraries?"* (suggest `sas-score-list-libraries`)

---

## Using sas-score-list-tables

**When:**
- User wants to browse all tables in a library
- User wants to see what data is available
- User wants to explore library contents before querying

**How:**
```
sas-score-list-tables({
  lib: "libraryname",      // required
  server: "cas" or "sas",  // required; determined by server check
  limit: 10,               // optional; default 10, adjust for pagination
  start: 1                 // optional; default 1, use for pagination
})
```

**Rules:**
- Always determine the correct server first (cas → sas → neither)
- **For SAS server: always uppercase the library name** (e.g., "maps" → "MAPS")
- If library name is missing, ask: *"Which library should I list tables from?"*
- Default page size is 10; adjust based on user request ("show me all", "25 tables", etc.)
- If the returned table count equals the limit, suggest pagination: *"There may be more tables. Use `start: {next_offset}` to see more."*
- If no tables are found despite the library existing, report: *"No tables found in {lib} on {server} server."*
- Return table names only; do not fetch table metadata unless explicitly requested

---

## Smart server detection logic

```
IF server specified by user
  → IF server is "sas"
      → uppercase lib
  → use that server
ELSE
  → TRY sas-score-list-tables(lib, server="cas")
    IF tables found
      → success, return tables
    ELSE
      → uppercase lib
      → TRY sas-score-list-tables(lib.toUpperCase(), server="sas")
        IF tables found
          → success, return tables
        ELSE
          → inform user library not found in either server
```

---

## Common patterns

**Pattern 1 — List tables, server unspecified**
> "List tables in Public"

1. Try CAS: `sas-score-list-tables({ lib: "Public", server: "cas" })`
2. If empty, try SAS with uppercase: `sas-score-list-tables({ lib: "PUBLIC", server: "sas" })`
3. If still empty → inform user

**Pattern 2 — List tables with explicit server (SAS)**
> "List tables in sashelp in sas"

1. Skip server detection
2. Call with uppercase lib: `sas-score-list-tables({ lib: "SASHELP", server: "sas" })`

**Pattern 3 — List tables with explicit server (CAS)**
> "List tables in Public in cas"

1. No uppercase needed for CAS
2. Call: `sas-score-list-tables({ lib: "Public", server: "cas" })`

**Pattern 4 — Pagination**
> "Show me 25 tables in Samples, then the next batch"

1. First call: `sas-score-list-tables({ lib: "Samples", limit: 25, start: 1 })`
2. Next call: `sas-score-list-tables({ lib: "Samples", limit: 25, start: 26 })`
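The pagination pattern above (advance `start` by `limit` until a short page comes back) can be sketched as below. This is an illustration under assumptions: `listTables` is a hypothetical wrapper around the `sas-score-list-tables` tool that returns one page of table names for a 1-based start offset.

```typescript
// Sketch: drain all pages of table names from a library, assuming a
// hypothetical listTables(lib, limit, start) wrapper around the MCP tool.
async function listAllTables(
  lib: string,
  listTables: (lib: string, limit: number, start: number) => Promise<string[]>,
  limit = 25
): Promise<string[]> {
  const all: string[] = [];
  let start = 1; // offsets are 1-based: 1, 26, 51, ...
  for (;;) {
    const page = await listTables(lib, limit, start);
    all.push(...page);
    // A page shorter than the limit means there is nothing left to fetch.
    if (page.length < limit) break;
    start += limit;
  }
  return all;
}
```

Note the stopping condition: when a page comes back exactly full there *may* be more tables, which is why the skill's rule tells the agent to offer the next offset rather than assume the listing is complete.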

**Pattern 5 — Library not found**
> "List tables in foo"

1. Try CAS: empty
2. Try SAS with uppercase: empty
3. Response: *"The library 'foo' was not found in CAS or SAS. Please verify the library name."*

---

## Error handling

| Scenario | Action |
|---|---|
| Library not found in either server | Inform user and ask to verify library name |
| Empty result on first server | Automatically check second server |
| User specifies invalid server | Return error; ask user to clarify: `"cas"` or `"sas"` |
| Missing library name | Ask: *"Which library should I list tables from?"* |
| Library verification needed | Use `sas-find-library-smart` skill to verify library exists first |
@@ -0,0 +1,111 @@ package/.skills/skills/sas-read-and-score/SKILL.md
---
name: sas-read-and-score
description: >
  Guide the full read → score workflow in SAS Viya: reading records from a table and then scoring
  them with a MAS model (using sas-score-model-score). Use this skill whenever the user wants to score records
  from a table, run a model against query results, predict outcomes for a set of rows, or any
  combination of fetching data and scoring it. Trigger phrases include: "score these records",
  "score results of my query", "run the model on this table", "predict for these customers",
  "fetch and score", "read and score", "score rows from", "run model on table data", or any request
  that combines reading/querying table data with model prediction.
---

# SAS Read → Score Workflow

Orchestrates the full two-step pattern of reading records from a SAS/CAS table and scoring them
with a deployed MAS model.

---

## Pre-flight verification

**Before attempting to read or score table data:**
1. **Verify library exists**: Use `sas-find-library-smart` to check the library in CAS first, then SAS if needed
2. **Verify table exists**: Use `sas-score-find-table` to confirm the table is in the library
3. **Confirm server location**: Ensure you know which server (CAS or SAS) contains the data

This ensures consistent behavior with other data access operations.

---

## Workflow overview

The typical flow involves:
1. **Fetch data** — Identify which table/query will provide input records
2. **Validate model** — Confirm the model exists and understand its input schema
3. **Score** — Invoke the model on the fetched records
4. **Present results** — Merge predictions with original data and display
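Step 4 of the flow above, merging per-row predictions back onto the original records, can be sketched as below. The row shape and the `mergePredictions` helper name are illustrative assumptions, not part of the package; the only real constraint is the skill's rule of aligning by row index or key column.

```typescript
// Sketch: merge per-row predictions onto the original rows by index.
// Field names are illustrative; alignment by position assumes scoring
// preserved row order and count.
type Row = Record<string, unknown>;

function mergePredictions(rows: Row[], predictions: Row[]): Row[] {
  if (rows.length !== predictions.length) {
    // A length mismatch means scoring dropped or added rows; surface it
    // instead of silently misaligning records.
    throw new Error(
      `row/prediction count mismatch: ${rows.length} vs ${predictions.length}`
    );
  }
  // Spread prediction columns after the originals, so new columns are
  // appended and colliding names take the predicted value.
  return rows.map((row, i) => ({ ...row, ...predictions[i] }));
}
```

When rows carry a stable key column, joining on that key is safer than positional alignment, since it tolerates reordering by the scoring engine.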

---

## Scenario: User already has data

If the user provides scenario data directly (e.g., "Score age=45, income=60000 with model X"):
- Extract the scenario values
- Validate against the model's input schema
- Invoke scoring
- Return the prediction

---

## Scenario: User wants to score table rows

If the user specifies a table (e.g., "Score all customers in Public.customers with model X"):
- Fetch raw rows (possibly filtered: "where status='active'")
- Validate model compatibility with the input columns
- Invoke scoring on each row
- Merge results with the original data
- Display the combined table

---

## Scenario: User wants to score query results

If the user wants to score aggregated/filtered results (e.g., "Score high-value customers (spend > 5000) with model X"):
- Determine which records meet the criteria (aggregation/filtering)
- Validate that the model expects these input columns
- Invoke scoring
- Merge predictions with the summary data
- Display results

---

## Scenario: User unfamiliar with model

If the user specifies a model name that's new/unknown:
- Check if the model exists
- Retrieve the model schema (inputs, outputs)
- Show the user what inputs the model expects
- Confirm before proceeding with scoring

---

## Rules

- Always validate table/library existence before attempting to read
- Always check the model exists before invoking `sas-score-model-score`
- Match table columns to model input variables; warn on mismatch
- If multiple records: score as a batch if possible; fall back to row-by-row
- Merge predictions with original data using a row index or key column
- Present results as a table with the original columns plus new prediction columns
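The batch-with-fallback rule above can be sketched as follows. `scoreBatch` and `scoreRow` are hypothetical wrappers around the scoring call, used here only to show the control flow of trying one batch request before degrading to per-row scoring.

```typescript
// Sketch: prefer one batch scoring call; on failure, fall back to
// row-by-row so a batch-incapable engine still produces results.
type Row = Record<string, unknown>;

async function scoreRecords(
  rows: Row[],
  scoreBatch: (rows: Row[]) => Promise<Row[]>,
  scoreRow: (row: Row) => Promise<Row>
): Promise<Row[]> {
  try {
    // One batch call means fewer round trips to the scoring engine.
    return await scoreBatch(rows);
  } catch {
    // Fallback: score sequentially, preserving input order.
    const out: Row[] = [];
    for (const row of rows) out.push(await scoreRow(row));
    return out;
  }
}
```

A real implementation would also report to the user which path was taken, per the response-style guidance of stating corrections explicitly.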

---

## Error handling

| Problem | Action |
|---|---|
| Table not found | Ask for correct lib.tablename |
| Model not found | Inform user; suggest verifying model name |
| Field/column mismatch | Show mismatch, ask user to confirm or adjust query |
| Scoring error | Return structured error, suggest checking model inputs |
| Empty read result | Inform user, ask if they want to adjust the query/filter |
| Data type mismatch | Warn user about type conversion, proceed or ask for clarification |

---

## Integration with other skills

- **Before this workflow**: Use `sas-find-library-smart` to verify the library exists
- **For data retrieval**: Use `sas-read-strategy` to choose the right read tool (read-table vs sas-query)
- **For scoring**: Use `sas-score-workflow` for advanced scoring options beyond MAS models
@@ -0,0 +1,156 @@ package/.skills/skills/sas-read-strategy/SKILL.md
---
name: sas-read-strategy
description: >
  Guide the user in choosing the right data retrieval tool: sas-score-read-table (for raw row access with filters)
  or sas-score-sas-query (for analytical queries, aggregations, joins). Use this skill when the user wants to
  fetch records from a SAS/CAS table. Trigger phrases include: "read records from", "get data where",
  "fetch rows from", "query the table", "give me the first N records", "aggregate by", "join tables",
  or any request that starts with data retrieval.
---

# SAS Read Strategy

Guides the decision between `sas-score-read-table` and `sas-score-sas-query` based on the user's intent and the nature
of the data operation. Determines which server contains the data and which retrieval tool is most appropriate.

## Determine the server location

Before retrieving data, verify the library and determine which server(s) contain the table:

**Step 1: Verify library existence**
- Use the `sas-find-library-smart` skill to check the library in CAS first, then SAS if needed
- This establishes the correct server and uppercase convention for SAS libraries
- Inform the user which server contains the library

**Step 2: Locate the specific table**
- Use `sas-score-find-table` to check if the table exists in the library
- Possible outcomes:
  - If table exists **only in CAS** → set `server: "cas"`
  - If table exists **only in SAS** → set `server: "sas"`
  - If table exists **in both** → ask the user: *"The table exists in both CAS and SAS. Which server would you prefer to query from?"*
  - If table exists **in neither** → inform user and suggest verifying the table name
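The four outcomes above can be sketched as a small routing helper. This is illustrative only — the function name and the `{ cas, sas }` result shape are hypothetical, not the actual `sas-score-find-table` response:

```javascript
// Decide which server to target from a hypothetical find-table result.
// `found` is assumed to be { cas: boolean, sas: boolean }.
function resolveServer(found) {
  if (found.cas && found.sas) return "ask";   // user must choose a server
  if (found.cas) return "cas";
  if (found.sas) return "sas";
  return "not-found";                         // suggest verifying the table name
}
```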

---

## Determine the read strategy

Ask yourself: does the user already have the data in hand?

- **Yes (user pasted values, or data is already in context)** → skip this strategy; data is ready to use.
- **No** → choose the read tool based on intent:

| User's Intent | Tool | Example |
|---|---|---|
| Get specific raw rows, apply simple filter, retrieve first N records | `sas-score-read-table` | "Show me 10 rows from customers where status='active'" |
| Aggregate/summarize, calculate, join tables, analytical question | `sas-score-sas-query` | "Average salary by department", "Count orders by region" |
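A rough keyword heuristic for the table above might look like this — purely a sketch, since real intent detection is done by the agent, not by a regex:

```javascript
// Hypothetical helper: pick a read tool from analytical keywords in the request.
function chooseReadTool(request) {
  const analytical = /\b(average|avg|sum|count|group by|join|compare|trend|breakdown)\b/i;
  return analytical.test(request) ? "sas-score-sas-query" : "sas-score-read-table";
}
```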

---

## Using read-table

**When:**
- User asks for raw records, row-by-row data
- Simple WHERE filtering (e.g., "where status = 'active'")
- Pagination needed ("first 50 rows", "next 10 rows")

**How:**
```
sas-score-read-table({
  table: "tablename",
  lib: "libraryname",
  server: "cas" or "sas", // determined from sas-score-find-table check
  limit: N,               // default 10, adjust based on user request
  where: "..."            // optional SQL WHERE clause
})
```

**Rules:**
- Always determine the server first using `sas-score-find-table`
- Keep batch size ≤ 50 rows unless the user explicitly requests more
- If table name is missing, ask: *"Which table should I read from? (format: lib.tablename)"*
- If library is missing, ask: *"Which library contains the table?"*
- If table exists in both servers, ask user which to use
- Return raw column values; do not transform or aggregate

---

## Using sas-query

**When:**
- User asks for aggregation (SUM, AVG, COUNT, GROUP BY)
- User asks for joins, calculations, or analytical insights
- User's question is phrased analytically ("compare", "analyze", "breakdown", "trend")

**How:**
```
sas-score-sas-query({
  table: "lib.tablename",
  query: "user's natural language question",
  sql: "SELECT ... FROM ... WHERE ... GROUP BY ..." // generated from query
})
```

**Rules:**
- Check which server(s) contain the table using `sas-score-find-table` first
- If table exists in both CAS and SAS, ask user which to query from
- Parse the user's natural language question into a PROC SQL SELECT statement
- Ensure the SELECT statement is valid SQL syntax
- Do not add trailing semicolons to the SQL string
- If table name is missing, ask: *"Which table should I query? (format: lib.tablename)"*
- If the intent is unclear, ask for clarification: *"Do you want raw rows, or an aggregated summary?"*

---

## Common patterns

**Pattern A — Raw row retrieval**
> "Show me the first 5 rows from Public.customers"

→ `sas-score-read-table({ table: "customers", lib: "Public", limit: 5 })`

**Pattern B — Filtered retrieval**
> "Get all high-value orders (amount > 5000) from mylib.orders"

→ `sas-score-read-table({ table: "orders", lib: "mylib", where: "amount > 5000" })`

**Pattern C — Aggregation**
> "What is the average price by make in Public.cars?"

→ `sas-score-sas-query({ table: "Public.cars", query: "average price by make", sql: "SELECT make, AVG(msrp) AS avg_price FROM Public.cars GROUP BY make" })`

**Pattern D — Join + analysis**
> "Show me total sales by customer in the sales and customers tables"

→ `sas-score-sas-query()` with a JOIN in the generated SQL

---

## Error handling

| Problem | Action |
|---|---|
| Library not found | Use the `sas-find-library-smart` skill to verify the library exists |
| Table not found in either server | Inform user and suggest checking the table name |
| Table exists in both CAS and SAS | Ask: *"The table exists in both servers. Which would you prefer: CAS or SAS?"* |
| Table exists only in one server | Use that server automatically in your request |
| Table name missing entirely | Ask: *"Which table should I read from?"* |
| Ambiguous intent (raw vs aggregate) | Ask: *"Do you want individual rows or a summary by some field?"* |
| Empty result | Inform user, ask to adjust filter or query |

---

## Integration with other skills

- **Before this skill**: Use `sas-find-library-smart` to verify and locate the library
- **After this skill**: Use `sas-read-and-score` to score the retrieved data

---

## Next steps

Once data is retrieved, typical follow-ups include:
- **Visualize** — present as table or chart
- **Export** — format and offer download
- **Analyze further** — ask clarifying questions
- **Score** — run predictions on the data
- **Combine** — join with other datasets
@@ -0,0 +1,69 @@
---
name: sas-request-classifier
description: Classify ambiguous SAS or Viya requests before using MCP tools. Use when prompts mention jobs, code, models, scoring, CAS tables, content, or resources and the correct SAS domain is not yet clear.
---

# SAS Request Classifier

Use this skill to determine what kind of SAS object, workflow, or environment the user is referring to before selecting tools.

## When to use
Use this skill when the request contains ambiguous domain terms such as:
- model
- score
- scoring
- read
- query
- job
- jobdef
- code
- table
- content
- asset
- resource

Use this skill before any execution-oriented tool call if there is a chance the request is referring to the wrong SAS domain.

## Goal
Map the user request to the most likely SAS domain and hand off to the correct downstream skill or tool path.

## Classification targets
Classify the request into one or more of these categories:
- Reading or querying tables → Route to **sas-read-strategy**
- CAS resource, caslib, or table discovery → Route to **sas-find-library-smart** or **sas-list-tables-smart**
- SAS data, libref, or table discovery → Route to **sas-find-library-smart** or **sas-list-tables-smart**
- Score model or scoring artifact → Route to **sas-score-workflow**
- Read data and score together → Route to **sas-read-and-score**
- SAS job or flow execution
- SAS code or program analysis
- General content or metadata lookup
- Environment, auth, or connectivity issue

## Procedure
1. Read the request and identify ambiguous nouns and verbs.
2. Infer whether the request is asking to discover, inspect, execute, deploy, score, compare, or troubleshoot.
3. Decide the most likely SAS domain and matching skill.
4. If confidence is low, ask one focused clarifying question.
5. If confidence is high, load and use the relevant skill:
   - **sas-find-library-smart** — Find CAS or SAS libraries
   - **sas-list-tables-smart** — Browse tables in a library
   - **sas-read-strategy** — Choose read-table vs. sas-query for data retrieval
   - **sas-read-and-score** — Combine data reading with model scoring
   - **sas-score-workflow** — Route scoring requests to the correct execution engine
6. Only after classification and skill guidance, use MCP tools.

## Disambiguation hints
- "Run" often implies job execution, but may also mean scoring or model invocation. Check for "score" or "model" context.
- "Model" may refer to MAS models, SAS jobs, jobdefs, or SCR models. Look for context or a type suffix (e.g., `.job`, `.mas`, `.scr`).
- "Score" may refer to model scoring or job execution. Look for a model name or context.
- "Table" usually suggests CAS, but confirm library name and server context.
- "Find", "list", "browse" — starts as discovery. Route to sas-find-library-smart or sas-list-tables-smart.
- "Read", "query", "fetch" — data retrieval. Route to sas-read-strategy.
- "Predict", "score records", "run model on data" — combined workflow. Route to sas-read-and-score or sas-score-workflow.
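As a minimal sketch, the hints above amount to ordered keyword checks — scoring phrases first, then read verbs, then discovery verbs. The function and its return strings are hypothetical, not part of the skill API:

```javascript
// Illustrative-only keyword classifier; the agent does the real classification.
function classify(request) {
  const r = request.toLowerCase();
  if (/(predict|score records|run model)/.test(r)) return "sas-read-and-score / sas-score-workflow";
  if (/(read|query|fetch)/.test(r)) return "sas-read-strategy";
  if (/(find|list|browse)/.test(r)) return "sas-find-library-smart / sas-list-tables-smart";
  return "clarify"; // low confidence — ask one focused question
}
```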

## Output
When you finish classification, state:
- the inferred SAS domain
- the confidence level
- the relevant skill(s) to load
- any remaining ambiguity or clarifying questions needed
@@ -0,0 +1,314 @@
---
name: sas-score-workflow
description: >
  Guide the full model scoring workflow: validate model familiarity, route to the appropriate scoring tool
  based on model type, invoke scoring with scenario data, and present merged results. Use this skill
  when the user wants to run predictions on data (already fetched or user-supplied). Supports generic
  syntax: "score with model <name>.<type> scenario =<params>" where type is job|jobdef|mas|scr|sas.
  Trigger phrases: "score these records", "predict using model", "run model on", "score with model X.mas".
---

# SAS Score Workflow

Orchestrates model validation, type-based routing, scoring invocation, and result presentation.
Handles both MAS models and alternative scoring engines (jobs, jobdefs, SCR, SAS programs).

---

## Generic Scoring Syntax

Users can invoke scoring with a unified syntax that automatically routes to the correct tool:

```
score with model <name>.<type> [scenario =<key=value pairs>]
score <name>.<type> [scenario =<key=value pairs>]
```

**Type determines the routing:**
- `.job` → route to `sas-score-run-job` with scoring parameters
- `.jobdef` → route to `sas-score-run-jobdef` with scoring parameters
- `.mas` → route to `sas-score-model-score` (SAS Micro Analytic Service — default)
- `.scr` → route to `sas-score-scr-score` (SAS Container Runtime)
- `.sas` → route to `sas-score-run-sas-program` to run a SAS program in a folder

If no type is specified (bare model name), assume `.mas` (MAS model).

---

## Type-Based Routing

### Parse and Strip Model Type

When a user provides a model name with a type suffix (e.g., `simplejon.job`, `churn.mas`):

1. **Extract the type:** Split on the last dot to identify the type suffix
   - `simplejon.job` → type = `job`, base name = `simplejon`
   - `churn.mas` → type = `mas`, base name = `churn`
   - `fraud_detector.jobdef` → type = `jobdef`, base name = `fraud_detector`

2. **Validate the type:** Confirm it matches one of the supported types: `job`, `jobdef`, `mas`, `scr`, `sas`
   - If the type is unrecognized, assume `.mas` (default MAS model) and treat the entire input as the model name

3. **Strip the type suffix:** Remove the `.type` from the model name before passing it to the routing tool
   - **Critical:** Always pass the base name (without the dot and type) to the invoked tool
   - `simplejon.job` → pass `simplejon` to `sas-score-run-job`
   - `churn.mas` → pass `churn` to `sas-score-model-score`
   - `fraud_detector.jobdef` → pass `fraud_detector` to `sas-score-run-jobdef`
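The three steps above can be sketched as a small parser. The function name and return shape are hypothetical — this is a sketch of the extract/validate/strip logic, not the server's implementation:

```javascript
const SUPPORTED = new Set(["job", "jobdef", "mas", "scr", "sas"]);

// Split "name.type" on the LAST dot; unknown or missing suffixes fall back
// to treating the whole string as a MAS model name (steps 1–3 above).
function parseModelRef(ref) {
  const i = ref.lastIndexOf(".");
  const suffix = i >= 0 ? ref.slice(i + 1).toLowerCase() : "";
  if (SUPPORTED.has(suffix)) {
    return { name: ref.slice(0, i), type: suffix }; // base name, stripped suffix
  }
  return { name: ref, type: "mas" }; // default route
}
```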

### Type: `.mas` (SAS Micro Analytic Service)
- **Tool**: `sas-score-model-score`
- **Use for**: Standard MAS-deployed predictive models
- **Example**: `score with model churn.mas scenario =age=45,income=60000`
- **Invocation**: `sas-score-model-score({ model: "churn", scenario: {...} })`

### Type: `.job` (SAS Viya Job)
- **Tool**: `sas-score-run-job`
- **Use for**: Pre-built scoring jobs with parameters
- **Example**: `score with model monthly_scorer.job scenario =month=10,year=2025`
- **Invocation**: `sas-score-run-job({ name: "monthly_scorer", scenario: {...} })`

### Type: `.jobdef` (SAS Viya Job Definition)
- **Tool**: `sas-score-run-jobdef`
- **Use for**: Job definitions that perform scoring logic
- **Example**: `score with model fraud_detector.jobdef using amount=500,merchant=online`
- **Invocation**: `sas-score-run-jobdef({ name: "fraud_detector", scenario: {...} })`

### Type: `.scr` (SAS Container Runtime)
- **Tool**: `sas-score-scr-score`
- **Use for**: Models deployed in SCR containers (REST endpoints)
- **Example**: `score https://scr-host/models/loan.scr using age=45,credit=700`
- **Invocation**: `sas-score-scr-score({ url: "https://scr-host/models/loan", scenario: {...} })`

### Type: `.sas` (SAS Program / SQL)
- **Tool**: `sas-score-run-sas-program`
- **Use for**: Custom SAS or SQL scoring code
- **Example**: `score my_scoring_code.sas using x=1,y=2`
- **Invocation**: `sas-score-run-sas-program({ folder: "my_scoring_code", scenario: {...} })`

---

## Scenario Parsing

The scenario parameter (comma-separated key=value pairs) is parsed into an object:

```
scenario =age=45,income=60000,region=South
  ↓ parsed as:
{ age: "45", income: "60000", region: "South" }
```

Accepted formats:
- **String**: `age=45,income=60000`
- **Object**: `{ age: 45, income: 60000 }`
- **Array** (batch): `[ {age:45, income:60000}, {age:50, income:75000} ]`
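The string format above can be parsed with a few lines of JavaScript. A minimal sketch — the function name is hypothetical, and values are kept as raw strings on the assumption that the scoring tool coerces types:

```javascript
// Parse "age=45,income=60000,region=South" into { age: "45", ... },
// matching the worked example above. Splits on the FIRST "=" per pair
// so values containing "=" survive intact.
function parseScenario(s) {
  const out = {};
  for (const pair of s.split(",")) {
    const [key, ...rest] = pair.split("=");
    if (key.trim()) out[key.trim()] = rest.join("=").trim();
  }
  return out;
}
```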

---

## Integration with other skills

- **Before scoring table data**: Use `sas-find-library-smart` to verify the library, then `sas-read-strategy` to fetch records
- **For read + score workflows**: Use `sas-read-and-score` for the complete end-to-end pattern

---

## Step 1 — Check model familiarity before scoring

Score immediately if:
- The user names a specific model they've used before in this session, OR
- The model name matches a previously confirmed model in the conversation

Pause and suggest investigation if:
- The model name is new, vague, or misspelled-looking (e.g. "the churn one", "that cancer model")
- The user seems unsure of the required input variable names

**Suggested message:**
> "I don't recognize that model — want me to run `find-model` to confirm it exists,
> or `model-info` to check its required inputs first?"

---

## Step 2 — Prepare the scenario data

**For a single record** (one object):
```javascript
scenario = { field1: value1, field2: value2, ... }
```

**For batch scoring** (multiple records — the typical case):
```javascript
scenario = [
  { field1: val1, field2: val2, ... },
  { field1: val3, field2: val4, ... },
  ...
]
```

**Critical rules:**
- Pass the full batch as an array in a single `sas-score-model-score` call; do not loop row by row.
- Field names in the scenario must match the model's expected input variable names **exactly**.
- If table column names differ from model input names, **flag this to the user** and ask for confirmation before scoring.
  - Example: Table has `age_years`, but model expects `age` → ask user which column maps to which input.
- Do not add units, labels, or extra metadata — raw field values only.

---
## Step 3 — Invoke the appropriate scoring tool

Based on the type extracted from the model name, invoke the corresponding tool:

**For `.mas` (default):**
```javascript
sas-score-model-score({
  model: "<modelname>",
  scenario: scenario, // object or array
  uflag: false        // set true if you need field names prefixed with _
})
```

**For `.job`:**
```javascript
sas-score-run-job({
  name: "<jobname>",
  scenario: scenario
})
```

**For `.jobdef`:**
```javascript
sas-score-run-jobdef({
  name: "<jobdefname>",
  scenario: scenario
})
```

**For `.scr`:**
```javascript
sas-score-scr-score({
  url: "<scr_endpoint_url>",
  scenario: scenario
})
```

**For `.sas`:**
```javascript
sas-score-run-sas-program({
  src: "<sas_or_sql_code>",
  scenario: scenario
})
```

**Rules:**
- Pass the full batch in one call; do not loop over rows
- If scoring fails, return the structured error and suggest troubleshooting
- For MAS models, include the `uflag` parameter if underscore-prefixed output is needed
- For jobs/jobdefs, the scenario becomes parameter arguments
- For SCR, include the full URL endpoint

---

## Step 4 — Present the results

Merge the scoring output back with the input records and present as a table where possible.

**Always surface:**
- The key prediction/score field(s) (e.g. `P_churn`, `score`, `prediction`, `P_risk`)
- Any probability/confidence fields for classification models (e.g. `P_class0`, `P_class1`)
- Selected input fields that drove the prediction, so the user can see context

**Formatting:**
- Present results in a table for clarity
- If results exceed 10 rows, show the first 10 and ask: *"Want to see more results or export the full set?"*
- Round numeric predictions to 2–4 decimal places for readability

---

## Common flows

**Flow A — Score rows with MAS model**
> "Score the first 10 customers in Public.customers with the churn model"

1. `sas-score-read-table` → { table: "Public.customers", limit: 10 }
2. `sas-score-model-score` → { model: "churn", scenario: [ ...10 row objects ] }
3. Present merged results with prediction + key inputs

**Flow B — Score with a scoring job**
> "Score December sales with the monthly_scorer job using month=12,year=2025"

1. `sas-score-run-job` → { name: "monthly_scorer", scenario: { month: "12", year: "2025" } }
2. Capture job output and tables
3. Present results

**Flow C — Score with a job definition**
> "Run fraud detection jobdef on transaction amount=500, merchant=online"

1. `sas-score-run-jobdef` → { name: "fraud_detection", scenario: { amount: "500", merchant: "online" } }
2. Capture log, listings, and tables
3. Present results

**Flow D — Score with SCR endpoint**
> "Score with the loan model at https://scr-host/models/loan using age=45, credit_score=700"

1. `sas-score-scr-score` → { url: "https://scr-host/models/loan", scenario: { age: "45", credit_score: "700" } }
2. Capture prediction response
3. Present result

**Flow E — Score results of an analytical query with MAS**
> "Score high-value customers (spend > 5000) in mylib.sales with the fraud model"

1. `sas-score-sas-query` → { table: "mylib.sales", sql: "SELECT * FROM mylib.sales WHERE spend > 5000" }
2. `sas-score-model-score` → { model: "fraud", scenario: [ ...result rows ] }
3. Present merged results

**Flow F — User supplies scenario data directly**
> "Score age=45, income=60000, region=South with the churn model"

1. Skip read step
2. `sas-score-model-score` → { model: "churn", scenario: { age: "45", income: "60000", region: "South" } }
3. Present result

**Flow G — Model unfamiliar, need to confirm**
> "Score Public.applicants with the creditRisk2 model"

1. Pause — "creditRisk2" is new
2. Suggest: `find-model` to confirm it exists, `model-info` to get input variables
3. Once confirmed → `sas-score-read-table` + `sas-score-model-score`

**Flow H — Generic score syntax with type routing**
> "score with model churn.mas scenario =age=45,income=60000"
> "score fraud_detector.jobdef where scenario =amount=500"
> "score monthly_report.job using month=10,year=2025"

1. Parse the model name to extract the type (.mas, .job, .jobdef, .scr, .sas)
2. Route to the appropriate tool based on type
3. Parse the scenario and invoke the tool with parameters
4. Present results from the routed tool

---

## Error handling

| Problem | Action |
|---|---|
| Model not found | Suggest `find-model` to verify the model is deployed |
| Input field name mismatch | Show the mismatch (table has X, model expects Y), ask user to confirm mapping |
| Scoring error / invalid inputs | Return structured error, suggest `model-info` to check required inputs and data types |
| Empty read result | Tell user, ask if they want to adjust the query/filter before scoring |
| Missing input fields | Ask which table columns map to the required model inputs |

---

## Tips

- **Batch is better:** Always pass the full set of records in one `sas-score-model-score` call. Do not loop.
- **Confirm mappings:** If column names don't match model inputs, ask before scoring.
- **Show context:** Include key input columns in the result output so predictions make sense.
- **Limit output:** For large result sets (>10 rows), ask before showing all.

---

## Integration with other skills

- **Before scoring table data**: Use `sas-find-library-smart` to verify the library, then `sas-read-strategy` to fetch records
- **For read + score workflows**: Use `sas-read-and-score` for the complete end-to-end pattern
package/package.json CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@sassoftware/sas-score-mcp-serverjs",
-  "version": "0.4.1-22",
+  "version": "0.4.1-24",
   "description": "A mcp server for SAS Viya",
   "author": "Deva Kumar <deva.kumar@sas.com>",
   "license": "Apache-2.0",
@@ -43,7 +43,7 @@
     "openApi.json",
     "openApi.yaml",
     "scripts",
-    ".
+    ".skills"
   ],
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.29.0",