@sassoftware/sas-score-mcp-serverjs 1.0.1-3 → 1.0.1-4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (30)
  1. package/package.json +2 -3
  2. package/src/oauthHandlers/callback.js +1 -1
  3. package/src/processHeaders.js +1 -1
  4. package/src/setupSkills.js +1 -1
  5. package/.skills_claude/README.md +0 -303
  6. package/.skills_claude/TESTING_GUIDE.md +0 -252
  7. package/.skills_claude/agents/sas-viya-scoring-expert.md +0 -58
  8. package/.skills_claude/claude-desktop-config.json +0 -16
  9. package/.skills_claude/claude-desktop-system-prompt.md +0 -127
  10. package/.skills_claude/copilot-instructions.md +0 -155
  11. package/.skills_claude/instructions.md +0 -184
  12. package/.skills_claude/skills/sas-find-library-smart/SKILL.md +0 -157
  13. package/.skills_claude/skills/sas-find-resource-strategy/SKILL.md +0 -105
  14. package/.skills_claude/skills/sas-list-resource-strategy/SKILL.md +0 -124
  15. package/.skills_claude/skills/sas-list-tables-smart/SKILL.md +0 -126
  16. package/.skills_claude/skills/sas-read-and-score/SKILL.md +0 -112
  17. package/.skills_claude/skills/sas-read-strategy/SKILL.md +0 -154
  18. package/.skills_claude/skills/sas-request-classifier/SKILL.md +0 -69
  19. package/.skills_claude/skills/sas-score-workflow/SKILL.md +0 -200
  20. package/.skills_claude/skills-index.md +0 -345
  21. package/.skills_github/agents/sas-viya-scoring-expert.md +0 -58
  22. package/.skills_github/copilot-instructions.md +0 -177
  23. package/.skills_github/skills/sas-find-library-smart/SKILL.md +0 -155
  24. package/.skills_github/skills/sas-find-resource-strategy/SKILL.md +0 -105
  25. package/.skills_github/skills/sas-list-resource-strategy/SKILL.md +0 -124
  26. package/.skills_github/skills/sas-list-tables-smart/SKILL.md +0 -128
  27. package/.skills_github/skills/sas-read-and-score/SKILL.md +0 -113
  28. package/.skills_github/skills/sas-read-strategy/SKILL.md +0 -154
  29. package/.skills_github/skills/sas-request-classifier/SKILL.md +0 -74
  30. package/.skills_github/skills/sas-score-workflow/SKILL.md +0 -314
@@ -1,200 +0,0 @@
- ---
- name: sas-score-workflow
- description: >
- Guide the full model scoring workflow: validate model familiarity, route to the appropriate scoring tool
- based on model type, invoke scoring with scenario data, and present merged results. Use this skill
- when the user wants to run predictions on data (already fetched or user-supplied). Supports generic
- syntax: "score with model <name>.<type> scenario =<params>" where type is job|jobdef|mas|scr|sas.
- Trigger phrases: "score these records", "predict using model", "run model on", "score with model X.mas".
- ---
-
- # SAS Score Workflow
-
- Orchestrates model validation, type-based routing, scoring invocation, and result presentation.
- Handles both MAS models and alternative scoring engines (jobs, jobdefs, SCR, SAS programs).
-
- ---
-
- ## Generic Scoring Syntax
-
- Users can invoke scoring with a unified syntax that automatically routes to the correct tool:
-
- ```
- score with model <name>.<type> [scenario =<key=value pairs>]
- score <name>.<type> [scenario =<key=value pairs>]
- ```
-
- **Type determines the routing:**
- - `.job` → route to `sas-score-run-job` with scoring parameters
- - `.jobdef` → route to `sas-score-run-jobdef` with scoring parameters
- - `.mas` → route to `sas-score-model-score` (SAS Micro Analytic Service — default)
- - `.scr` → route to `sas-score-scr-score` (SAS Container Runtime)
- - `.sas` → route to `sas-score-run-sas-program` to run a SAS program in a folder
-
- If no type is specified (bare model name), assume `.mas` (MAS model).
-
- ---
-
- ## Type-Based Routing
-
- ### Parse and Strip Model Type
-
- When a user provides a model name with a type suffix (e.g., `simplejon.job`, `churn.mas`):
-
- 1. **Extract the type:** Split on the last dot to identify the type suffix
- - `simplejon.job` → type = `job`, base name = `simplejon`
- - `churn.mas` → type = `mas`, base name = `churn`
- - `fraud_detector.jobdef` → type = `jobdef`, base name = `fraud_detector`
-
- 2. **Validate the type:** Confirm it matches one of the supported types: `job`, `jobdef`, `mas`, `scr`, `sas`
- - If type is unrecognized, assume `.mas` (default MAS model) and treat the entire input as the model name
-
- 3. **Strip the type suffix:** Remove the `.type` from the model name before passing to the routing tool
- - **Critical:** Always pass the base name (without the dot and type) to the invoked tool
- - `simplejon.job` → pass `simplejon` to `sas-score-run-job`
- - `churn.mas` → pass `churn` to `sas-score-model-score`
- - `fraud_detector.jobdef` → pass `fraud_detector` to `sas-score-run-jobdef`
-
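The extract/validate/strip steps above can be sketched in JavaScript. The function name `parseModelRef` is illustrative only; the package does not export such a helper:

```javascript
// Supported type suffixes listed in this skill.
const SUPPORTED_TYPES = new Set(["job", "jobdef", "mas", "scr", "sas"]);

// Split on the LAST dot, validate the suffix, and strip it from the name.
// Unrecognized or missing suffixes fall back to the MAS default.
function parseModelRef(input) {
  const dot = input.lastIndexOf(".");
  if (dot > 0) {
    const type = input.slice(dot + 1).toLowerCase();
    if (SUPPORTED_TYPES.has(type)) {
      // Only the base name (without ".type") is passed to the routed tool.
      return { name: input.slice(0, dot), type };
    }
  }
  // Bare or unrecognized name: treat the entire input as a MAS model name.
  return { name: input, type: "mas" };
}
```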
- ### Type: `.mas` (SAS Micro Analytic Service)
- - **Tool**: `sas-score-model-score`
- - **Use for**: Standard MAS-deployed predictive models
- - **Example**: `score with model churn.mas scenario =age=45,income=60000`
- - **Invocation**: `sas-score-model-score({ model: "churn", scenario: {...} })`
-
- ### Type: `.job` (SAS Viya Job)
- - **Tool**: `sas-score-run-job`
- - **Use for**: Pre-built scoring jobs with parameters
- - **Example**: `score with model monthly_scorer.job scenario =month=10,year=2025`
- - **Invocation**: `sas-score-run-job({ name: "monthly_scorer", scenario: {...} })`
-
- ### Type: `.jobdef` (SAS Viya Job Definition)
- - **Tool**: `sas-score-run-jobdef`
- - **Use for**: Job definitions that perform scoring logic
- - **Example**: `score with model fraud_detector.jobdef using amount=500,merchant=online`
- - **Invocation**: `sas-score-run-jobdef({ name: "fraud_detector", scenario: {...} })`
-
- ### Type: `.scr` (SAS Container Runtime)
- - **Tool**: `sas-score-scr-score`
- - **Use for**: Models deployed in SCR containers (REST endpoints)
- - **Example**: `score https://scr-host/models/loan.scr using age=45,credit=700`
- - **Invocation**: `sas-score-scr-score({ url: "https://scr-host/models/loan", scenario: {...} })`
-
- ### Type: `.sas` (SAS Program / SQL)
- - **Tool**: `sas-score-run-sas-program`
- - **Use for**: Custom SAS or SQL scoring code
- - **Example**: `score my_scoring_code.sas using x=1,y=2`
- - **Invocation**: `sas-score-run-sas-program({ folder: "my_scoring_code", scenario: {...} })`
-
- ---
-
- ## Scenario Parsing
-
- The scenario parameter (comma-separated key=value pairs) is parsed into an object:
-
- ```
- scenario =age=45,income=60000,region=South
- ↓ parsed as:
- { age: "45", income: "60000", region: "South" }
- ```
-
- Accepted formats:
- - **String**: `age=45,income=60000`
- - **Object**: `{ age: 45, income: 60000 }`
- - **Array** (batch): `[ {age:45, income:60000}, {age:50, income:75000} ]`
-
- ---
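The string-to-object rule above can be sketched as a small parser. The function name `parseScenario` is illustrative; values are kept as strings to match the parsed example shown in this skill:

```javascript
// Parse a comma-separated "key=value" scenario string into a plain object.
// Values stay strings, matching the example output in this skill.
function parseScenario(raw) {
  const scenario = {};
  for (const pair of raw.split(",")) {
    const eq = pair.indexOf("=");
    if (eq === -1) continue;                 // skip malformed fragments
    const key = pair.slice(0, eq).trim();
    const value = pair.slice(eq + 1).trim();
    if (key) scenario[key] = value;
  }
  return scenario;
}
```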
-
- ## Integration with other skills
-
- - **Before scoring table data**: Use `sas-find-library-smart` to verify the library, then `sas-read-strategy` to fetch records
- - **For read + score workflows**: Use `sas-read-and-score` for the complete end-to-end pattern
-
- ---
-
- ## Step 1 — Check model familiarity before scoring
-
- Score immediately if:
- - The user names a specific model they've used before in this session, OR
- - The model name matches a previously confirmed model in the conversation
-
- Pause and suggest investigation if:
- - The model name is new, vague, or misspelled-looking (e.g. "the churn one", "that cancer model")
- - The user seems unsure of the required input variable names
-
- **Suggested message:**
- > "I don't recognize that model — want me to run `find-model` to confirm it exists,
- > or `model-info` to check its required inputs first?"
-
- ---
-
- ## Step 2 — Prepare the scenario data
-
- **For a single record** (one object):
- ```javascript
- scenario = { field1: value1, field2: value2, ... }
- ```
-
- **For batch scoring** (multiple records — the typical case):
- ```javascript
- scenario = [
- { field1: val1, field2: val2, ... },
- { field1: val3, field2: val4, ... },
- ...
- ]
- ```
-
- **Critical rules:**
- - Loop and call `sas-score-model-score` **once per row**.
- - Field names in the scenario must match the model's expected input variable names **exactly**.
- - If table column names differ from model input names, **flag this to the user** and ask for confirmation before scoring.
- - Example: Table has `age_years`, but model expects `age` → ask user which column maps to which input.
- - Do not add units, labels, or extra metadata — raw field values only.
-
- ---
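The once-per-row rule and the merge of predictions back onto input records can be sketched as below. `scoreOne` is a hypothetical stand-in for a single `sas-score-model-score` call (synchronous here for clarity; the real tool call is a request/response round trip):

```javascript
// scoreOne(record) is a hypothetical helper wrapping one scoring call;
// it returns the prediction fields for that single record.
function scoreBatch(records, scoreOne) {
  const results = [];
  for (const record of records) {
    const prediction = scoreOne(record);          // exactly one call per row
    results.push({ ...record, ...prediction });   // merge prediction onto input
  }
  return results;
}
```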
-
- ## Step 3 — Invoke the appropriate scoring tool
-
- Based on the type extracted from the model name, invoke the corresponding tool:
-
- **For `.mas` (default):**
- ```javascript
- sas-score-model-score({
- model: "<modelname>",
- scenario: scenario, // object or array
- uflag: false // set true if you need field names prefixed with _
- })
- ```
-
- **For `.job`:**
- ```javascript
- sas-score-run-job({
- name: "<jobname>",
- scenario: scenario
- })
- ```
-
- **For `.jobdef`:**
- ```javascript
- sas-score-run-jobdef({
- name: "<jobdefname>",
- scenario: scenario
- })
- ```
-
- **For `.scr`:**
- ```javascript
- sas-score-scr-score({
- url: "<scr_endpoint_url>",
- scenario: scenario
- })
- ```
-
- **For `.sas`:**
- ```javascript
- sas-score-run-sas-program({
- src: "<sas_or_sql_code>",
- scenario: scenario
- })
- ```
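The five invocation shapes in Step 3 can be collapsed into one table-driven dispatcher. Tool names and argument keys follow Step 3 of this skill; `buildInvocation` itself is an illustrative helper, not something the package exports:

```javascript
// Routing table: type suffix → tool name and first-argument key (per Step 3).
const ROUTES = {
  mas:    { tool: "sas-score-model-score",     argKey: "model" },
  job:    { tool: "sas-score-run-job",         argKey: "name" },
  jobdef: { tool: "sas-score-run-jobdef",      argKey: "name" },
  scr:    { tool: "sas-score-scr-score",       argKey: "url" },
  sas:    { tool: "sas-score-run-sas-program", argKey: "src" },
};

// Build the tool name and argument object for one scoring request.
function buildInvocation(type, target, scenario) {
  const route = ROUTES[type] ?? ROUTES.mas;   // bare/unknown types default to MAS
  return { tool: route.tool, args: { [route.argKey]: target, scenario } };
}
```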
@@ -1,345 +0,0 @@
- # SAS Agent Skills Index
-
- Quick reference for all available SAS agent skills in this repository.
-
- ## Skills Overview
-
- The SAS agent includes 6 specialized skills for different SAS workflows. Load the most relevant skill for your task.
-
- ---
-
- ## 1. sas-request-classifier
-
- **File:** `skills/sas-request-classifier/SKILL.md`
-
- **Purpose:** Disambiguate SAS domain terms before using MCP tools.
-
- **Use when:**
- - Request mentions jobs, code, models, scoring, CAS tables, content, or resources
- - The correct SAS domain is not yet clear
- - Multiple interpretations are possible
-
- **Trigger phrases:**
- - "find my model"
- - "run scoring"
- - "open the table"
- - Any ambiguous request using domain terms
-
- **What it does:**
- - Classifies the request into a SAS domain (jobs, data, models, scoring, etc.)
- - Identifies which downstream skill to use
- - Raises clarifying questions if needed
- - Routes to the most appropriate skill or tool
-
- **Routes to:**
- - `sas-find-library-smart` — Library discovery
- - `sas-list-tables-smart` — Table enumeration
- - `sas-read-strategy` — Data retrieval
- - `sas-read-and-score` — Combined read + score
- - `sas-score-workflow` — Model scoring
-
- ---
-
- ## 2. sas-find-library-smart
-
- **File:** `skills/sas-find-library-smart/SKILL.md`
-
- **Purpose:** Find a SAS Viya library (libref or caslib) with intelligent server detection.
-
- **Use when:**
- - User needs to verify a library exists
- - User wants to determine which server (CAS or SAS) contains a library
- - Before accessing tables within a library
-
- **Trigger phrases:**
- - "find library Public"
- - "does library exist"
- - "check if library staging"
- - "locate library SASHELP"
- - "is there a library named X"
- - "verify library"
-
- **What it does:**
- - Checks CAS first for the library
- - If not found in CAS, checks SAS (with uppercase)
- - Returns the server location where the library was found
- - Offers next steps (list tables, read data, etc.)
-
- **Server detection logic:**
- - **CAS first** — Always try CAS with the library name as-is
- - **SAS second** — If not in CAS, try SAS with uppercase name
- - **Smart naming** — Knows CAS is case-sensitive, SAS requires uppercase
-
- **Integration:**
- - Use before: `sas-list-tables-smart`, `sas-read-strategy`
-
- ---
-
- ## 3. sas-list-tables-smart
-
- **File:** `skills/sas-list-tables-smart/SKILL.md`
-
- **Purpose:** List all tables in a SAS Viya library with intelligent server detection.
-
- **Use when:**
- - User wants to browse or explore available tables
- - User wants to see what data is available in a library
- - User wants to verify a table exists before accessing it
-
- **Trigger phrases:**
- - "list tables in Public"
- - "show tables in SASHELP"
- - "what tables are in mylib"
- - "browse tables in"
- - "tables in library"
- - "enumerate tables"
-
- **What it does:**
- - Checks CAS first for tables in the library
- - If none found in CAS, checks SAS
- - Returns paginated list of table names
- - Suggests pagination if more tables exist
-
- **Server detection logic:**
- - Same as `sas-find-library-smart`
- - Automatically determines correct server
- - Handles case sensitivity per server
-
- **Pagination:**
- - Default: 10 tables per page
- - Adjust limit based on user request
- - Use `start` parameter for pagination
-
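The paging arithmetic implied above (default limit 10, advancing a `start` offset page by page) can be sketched as below. The helper name is illustrative; only the `start`/`limit` parameter names come from this index:

```javascript
// Compute the next page window given the current offset, page size, and
// (optionally) the total table count. Returns null when no pages remain.
function nextPage(start, limit = 10, total = Infinity) {
  const next = start + limit;
  return next < total ? { start: next, limit } : null;
}
```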
- **Integration:**
- - Use after: `sas-find-library-smart`
- - Use before: `sas-read-strategy`
-
- ---
-
- ## 4. sas-read-strategy
-
- **File:** `skills/sas-read-strategy/SKILL.md`
-
- **Purpose:** Guide selection of the right data retrieval tool (raw rows vs. analytical queries).
-
- **Use when:**
- - User wants to fetch records from a SAS/CAS table
- - User wants to read data with or without aggregation
- - Ambiguity about read tool choice
-
- **Trigger phrases:**
- - "read records from"
- - "get data where"
- - "fetch rows from"
- - "query the table"
- - "give me the first N records"
- - "aggregate by"
- - "join tables"
- - "count", "average", "sum by"
-
- **What it does:**
- 1. Verifies library exists (uses `sas-find-library-smart`)
- 2. Locates the specific table
- 3. Determines user intent: raw rows vs. analytical query
- 4. Routes to correct tool:
- - **`sas-score-read-table`** — Raw rows, simple filters, pagination
- - **`sas-score-sas-query`** — Aggregations, joins, calculations
-
- **Decision logic:**
-
- | User's Intent | Tool | Example |
- |---|---|---|
- | Get raw records, row-by-row | `read-table` | "Show me 10 rows where status='active'" |
- | Aggregate, summarize, calculate | `sas-query` | "Average price by make" |
-
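The decision table above can be sketched as a keyword heuristic. Both the helper name and the keyword list are illustrative, not part of the package; only the two tool names come from this index:

```javascript
// Aggregation wording routes to the query tool; everything else defaults to
// the raw-row reader. The keyword list is an illustrative approximation.
const AGGREGATE_HINTS = /\b(average|avg|sum|count|group by|join|mean|total)\b/i;

function chooseReadTool(request) {
  return AGGREGATE_HINTS.test(request)
    ? "sas-score-sas-query"     // aggregations, joins, calculations
    : "sas-score-read-table";   // raw rows, simple filters, pagination
}
```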
- **Server detection:**
- - Uses `sas-find-library-smart` to locate library
- - Determines correct server (CAS or SAS)
- - Handles case sensitivity
-
- **Integration:**
- - Use after: `sas-find-library-smart`, optionally `sas-list-tables-smart`
- - Use before: `sas-read-and-score` (if scoring after reading)
-
- ---
-
- ## 5. sas-read-and-score
-
- **File:** `skills/sas-read-and-score/SKILL.md`
-
- **Purpose:** Guide the full read → score workflow: fetch table records and score them with a MAS model.
-
- **Use when:**
- - User wants to score records from a table
- - User wants to run a model against query results
- - User wants to predict outcomes for a set of rows
- - Any request combining data reading with model scoring
-
- **Trigger phrases:**
- - "score these records"
- - "score results of my query"
- - "run the model on this table"
- - "predict for these customers"
- - "fetch and score"
- - "read and score"
- - "score rows from"
- - "run model on table data"
-
- **What it does:**
- 1. Verifies library and table existence
- 2. Validates model existence and schema
- 3. Reads/fetches the table data
- 4. Scores each record with the model
- 5. Merges predictions with original data
- 6. Returns combined results
-
- **Workflow:**
-
- ```
- User Request
-
- Verify Library (sas-find-library-smart)
-
- Fetch Table Data (sas-read-strategy)
-
- Validate Model & Schema
-
- Score Records (sas-score-workflow or sas-score-model-score)
-
- Merge & Present Results
- ```
-
- **Error handling:**
- - Missing table → Ask for correct lib.tablename
- - Missing model → Ask for model name or suggest discovery
- - Field mismatch → Warn about column name mismatches
- - Empty result → Ask to adjust query/filter
-
- **Integration:**
- - Depends on: `sas-find-library-smart`, `sas-read-strategy`
- - Uses: `sas-score-workflow` for routing
-
- ---
-
- ## 6. sas-score-workflow
-
- **File:** `skills/sas-score-workflow/SKILL.md`
-
- **Purpose:** Route scoring requests to the correct tool based on model type suffix.
-
- **Use when:**
- - User requests scoring with a model name
- - Model name contains a type suffix (.job, .jobdef, .mas, .scr, .sas)
- - Model type is ambiguous
-
- **Trigger phrases:**
- - "score with model X.job"
- - "score X.jobdef scenario"
- - "score with model X.mas"
- - "score X.scr using"
- - "score with model X.sas"
- - "score with model X" (bare name, defaults to .mas)
-
- **What it does:**
- 1. Parses the model name to extract type suffix
- 2. Routes to the appropriate scoring tool:
- - `.job` → `sas-score-run-job`
- - `.jobdef` → `sas-score-run-jobdef`
- - `.mas` → `sas-score-model-score` (default)
- - `.scr` → `sas-score-scr-score`
- - `.sas` → `sas-score-run-sas-program`
- 3. Prepares scenario data (key=value pairs)
- 4. Invokes the appropriate tool
- 5. Returns predictions
-
- **Type-based routing:**
-
- | Type | Tool | Use for | Example |
- |------|------|---------|---------|
- | `.mas` | `sas-score-model-score` | MAS models | `score churn.mas scenario =age=45` |
- | `.job` | `sas-score-run-job` | SAS jobs | `score scorer.job scenario =month=10` |
- | `.jobdef` | `sas-score-run-jobdef` | Job definitions | `score detector.jobdef using x=1` |
- | `.scr` | `sas-score-scr-score` | SCR containers | `score https://scr/model.scr using x=1` |
- | `.sas` | `sas-score-run-sas-program` | SAS programs | `score code.sas using x=1,y=2` |
- | (none) | `sas-score-model-score` | Assume MAS | `score churn scenario =age=45` |
-
- **Scenario parsing:**
- ```
- scenario =age=45,income=60000,region=South
- ↓ parsed as:
- { age: "45", income: "60000", region: "South" }
- ```
-
- **Batch scoring:**
- - If multiple records: scores each row
- - Falls back to row-by-row if batch not available
- - Merges predictions with original data
-
- **Integration:**
- - Used by: `sas-read-and-score` for scoring table data
- - Input from: User or `sas-read-strategy` (if chained)
-
- ---
-
- ## Skill Usage Matrix
-
- Quick guide to which skill(s) to use for different tasks:
-
- | Task | Primary Skill | Secondary Skills |
- |------|---------------|------------------|
- | Find a library | `sas-find-library-smart` | — |
- | Browse tables in a library | `sas-list-tables-smart` | `sas-find-library-smart` |
- | Read raw table data | `sas-read-strategy` | `sas-find-library-smart` |
- | Aggregate/analyze table data | `sas-read-strategy` | `sas-find-library-smart` |
- | Score records from table | `sas-read-and-score` | All read skills + `sas-score-workflow` |
- | Score with MAS model | `sas-score-workflow` | — |
- | Score with SAS job | `sas-score-workflow` | — |
- | Ambiguous request | `sas-request-classifier` | Any of above |
-
- ---
-
- ## Loading a Skill
-
- When you encounter a task that matches a skill's trigger phrases or purpose:
-
- 1. **Ask the agent** — Most skill loading is automatic
- 2. **Explicitly reference** — "Use the sas-find-library-smart skill"
- 3. **Context clues** — Agent infers from your request
-
- Example:
- ```
- You: "List tables in Public"
-
- Agent (automatic):
- 1. Recognizes "list tables in" → loads sas-list-tables-smart
- 2. Calls sas-score-list-tables
- 3. Returns table names
- ```
-
- ---
-
- ## Skill Composition
-
- Skills can be used together:
-
- ```
- Typical Workflow:
- sas-request-classifier
- → sas-find-library-smart (verify library)
- → sas-list-tables-smart (browse tables)
- → sas-read-strategy (choose read tool)
- → sas-read-and-score (optional: score results)
- → sas-score-workflow (if scoring needed)
- ```
-
- ---
-
- ## Next Steps
-
- 1. **Read a skill file** — Open any `.claude/skills/<name>/SKILL.md` for details
- 2. **Try a task** — Make a request matching a skill's trigger phrase
- 3. **Explore workflows** — Combine skills for complex tasks
- 4. **Review patterns** — See "Common patterns" in each skill file
-
- For setup instructions, see `README.md`.
- For usage instructions, see `instructions.md`.
@@ -1,58 +0,0 @@
- ---
- name: SAS Viya Scoring Expert
- description: Specialized SAS and Viya agent that classifies requests, selects the right SAS skill, and uses MCP tools safely for jobs, CAS data, libraries, models, scoring, and content workflows.
- ---
-
- # SAS Viya Scoring Expert
-
- You are a SAS Viya expert agent.
-
- Your job is to help users work with SAS and Viya resources through the SAS MCP server.
- Treat requests as domain-specific SAS tasks, not generic coding tasks.
-
- ## Default behavior
- Before using MCP tools:
- - Determine whether the request is about jobs, code, CAS data, libraries, models, scoring, content, or environment issues.
- - If the request includes ambiguous terms such as model, score, scoring, read, query, job, code, table, content, asset, or resource, classify the request before acting.
- - Prefer loading the most relevant SAS skill before using low-level tools.
- - If confidence is low, ask one focused clarifying question.
- - Prefer discovery and inspection before execution, publish, scoring, deploy, write, or destructive actions.
-
- ## Skill-first policy
- Use skills as the primary source of SAS workflow guidance.
- Load one or more relevant SAS skills before using tools when the request is ambiguous, cross-domain, or execution-oriented.
- Do not load unrelated skills.
-
- ## Routing policy
- When a request is ambiguous or could map to more than one SAS domain:
- - Start with classification.
- - Identify the most likely SAS asset or workflow type.
- - Choose the best matching SAS skill.
- - Only then select MCP tools.
-
- ## Ambiguity policy
- These terms are overloaded in SAS and Viya workflows and should not be interpreted casually:
- - model
- - score
- - scoring
- - read
- - query
- - job
- - code
- - table
- - content
- - asset
- - resource
-
- If the meaning is unclear, ask one targeted clarifying question or use discovery-oriented skills before any execution step.
-
- ## Tool usage policy
- - Prefer read-only discovery before execution.
- - Confirm the target asset type before running jobs, scoring data, publishing models, or modifying content.
- - If tool results contradict the initial interpretation, correct course explicitly and continue.
- - Never invent asset names, identifiers, libraries, or model types.
-
- ## Response style
- Be concise, explicit, and domain-aware.
- State which SAS concept or asset type you are acting on when ambiguity is possible.
- Prefer short structured answers when guiding the user.