@sassoftware/sas-score-mcp-serverjs 0.4.0 → 0.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. package/cli.js +9 -127
  2. package/package.json +2 -3
  3. package/src/createMcpServer.js +0 -1
  4. package/src/expressMcpServer.js +27 -53
  5. package/src/handleGetDelete.js +3 -6
  6. package/src/hapiMcpServer.js +18 -10
  7. package/src/toolHelpers/_jobSubmit.js +0 -2
  8. package/src/toolHelpers/_listLibrary.js +39 -56
  9. package/src/toolHelpers/getLogonPayload.js +1 -3
  10. package/src/toolSet/devaScore.js +36 -28
  11. package/src/toolSet/findJob.js +49 -23
  12. package/src/toolSet/findJobdef.js +54 -24
  13. package/src/toolSet/findLibrary.js +57 -25
  14. package/src/toolSet/findModel.js +53 -31
  15. package/src/toolSet/findTable.js +54 -25
  16. package/src/toolSet/getEnv.js +38 -20
  17. package/src/toolSet/listJobdefs.js +58 -24
  18. package/src/toolSet/listJobs.js +72 -24
  19. package/src/toolSet/listLibraries.js +47 -37
  20. package/src/toolSet/listModels.js +47 -20
  21. package/src/toolSet/listTables.js +58 -29
  22. package/src/toolSet/makeTools.js +0 -3
  23. package/src/toolSet/modelInfo.js +49 -18
  24. package/src/toolSet/modelScore.js +69 -27
  25. package/src/toolSet/readTable.js +62 -25
  26. package/src/toolSet/runCasProgram.js +43 -23
  27. package/src/toolSet/runJob.js +19 -20
  28. package/src/toolSet/runJobdef.js +23 -21
  29. package/src/toolSet/runMacro.js +20 -20
  30. package/src/toolSet/runProgram.js +71 -24
  31. package/src/toolSet/sasQuery.js +70 -23
  32. package/src/toolSet/scrInfo.js +4 -3
  33. package/src/toolSet/setContext.js +48 -22
  34. package/src/toolSet/tableInfo.js +71 -28
  35. package/skills/mcp-tool-description-optimizer/SKILL.md +0 -129
  36. package/skills/mcp-tool-description-optimizer/references/examples.md +0 -123
  37. package/skills/sas-read-and-score/SKILL.md +0 -91
  38. package/skills/sas-read-strategy/SKILL.md +0 -143
  39. package/skills/sas-score-workflow/SKILL.md +0 -282
@@ -1,91 +0,0 @@
- ---
- name: sas-read-and-score
- description: >
-   Guide the full read → score workflow in SAS Viya: reading records from a table (using read-table
-   or sas-query) and then scoring them with a MAS model (using model-score). Use this skill whenever
-   the user wants to score records from a table, run a model against query results, predict outcomes
-   for a set of rows, or any combination of fetching data and scoring it. Trigger phrases include:
-   "score these records", "score results of my query", "run the model on this table",
-   "predict for these customers", "fetch and score", "read and score", "score rows from",
-   "run model on table data", or any request that combines reading table data with model prediction.
- ---
-
- # SAS Read → Score Workflow
-
- Orchestrates the full two-step pattern of reading records from a SAS/CAS table and scoring them
- with a deployed MAS model.
-
- This skill chains two sub-skills:
- 1. **sas-read-strategy** — Choose between `read-table` and `sas-query`
- 2. **sas-score-workflow** — Validate the model, invoke scoring, present results
-
- ---
-
- ## Quick reference
-
- 1. **Does the user already have data in hand?**
-    - Yes → skip to Step 2 (scoring)
-    - No → use `sas-read-strategy` to fetch data
-
- 2. **Is the model name familiar?**
-    - Yes → proceed to score
-    - No → pause and use `find-model` / `model-info`
-
- 3. **Invoke model-score** with the fetched data
-
- 4. **Merge results** and present as a table
-
- ---
-
- ## Common flows
-
- **Flow A — Score rows from a table directly**
- > "Score the first 10 customers in Public.customers with the churn model"
-
- 1. Apply `sas-read-strategy` → use `read-table` to fetch 10 rows
- 2. Apply `sas-score-workflow` → invoke `model-score` and present results
-
- **Flow B — Score results of an analytical query**
- > "Score high-value customers (spend > 5000) in mylib.sales with the fraud model"
-
- 1. Apply `sas-read-strategy` → use `sas-query` for aggregation
- 2. Apply `sas-score-workflow` → invoke `model-score` and present results
-
- **Flow C — User supplies scenario data directly**
- > "Score age=45, income=60000, region=South with the churn model"
-
- 1. Skip the read strategy
- 2. Apply `sas-score-workflow` → invoke `model-score` and present the result
-
- **Flow D — Model unfamiliar**
- > "Score Public.applicants with the creditRisk2 model"
-
- 1. Pause — "creditRisk2" is new; use `find-model` to verify it exists and `model-info` to get its input variables
- 2. Apply `sas-read-strategy` to fetch data
- 3. Apply `sas-score-workflow` to score
-
- ---
-
- ## For detailed guidance
-
- - **Read strategy decisions?** See the `sas-read-strategy` skill
- - **Scoring validation and presentation?** See the `sas-score-workflow` skill
-
- ---
-
- ## Error handling
-
- | Problem | Action |
- |---|---|
- | Table not found | Ask for the correct lib.tablename |
- | Model not found | Suggest find-model |
- | Field name mismatch | Show the mismatch, ask the user to confirm the mapping |
- | Scoring error | Return a structured error, suggest model-info |
- | Empty read result | Tell the user, ask if they want to adjust the query/filter |
@@ -1,143 +0,0 @@
- ---
- name: sas-read-strategy
- description: >
-   Guide the user in choosing the right data retrieval tool: read-table (for raw row access with filters)
-   or sas-query (for analytical queries, aggregations, joins). Use this skill when the user wants to
-   fetch records from a SAS/CAS table. Trigger phrases include: "read records from", "get data where",
-   "fetch rows from", "query the table", "give me the first N records", "aggregate by", "join tables",
-   or any request that starts with data retrieval.
- ---
-
- # SAS Read Strategy
-
- Guides the decision between `read-table` and `sas-query` based on the user's intent and the nature
- of the data operation.
-
- ---
-
- ## Determine the server location
-
- Before retrieving data, check which server(s) contain the table:
-
- 1. **Check both servers**: Use `find-table` to check if the table exists in CAS and/or SAS
- 2. **Decide server placement**:
-    - If the table exists **only in CAS** → set `server: "cas"`
-    - If the table exists **only in SAS** → set `server: "sas"`
-    - If the table exists **in both** → ask the user: *"The table exists in both CAS and SAS. Which server would you prefer to query from?"*
-    - If the table exists **in neither** → inform the user and ask which library to search in
-
- ---
-
- ## Determine the read strategy
-
- Ask yourself: does the user already have the data in hand?
-
- - **Yes (user pasted values, or data is already in context)** → skip this strategy; the data is ready to use.
- - **No** → choose the read tool based on intent:
-
- | User's Intent | Tool | Example |
- |---|---|---|
- | Get specific raw rows, apply simple filter, retrieve first N records | `read-table` | "Show me 10 rows from customers where status='active'" |
- | Aggregate/summarize, calculate, join tables, analytical question | `sas-query` | "Average salary by department", "Count orders by region" |
-
- ---
-
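The server-placement rules above reduce to a small decision function. A sketch in JavaScript — `chooseServer` is an illustrative helper, not part of the package, and it assumes `find-table` style results reporting per-server existence:

```javascript
// Decide which server to read from, given find-table style results.
// Returns "cas", "sas", "ask" (table exists in both), or "none" (not found).
// Illustrative helper only — the real tool's output shape may differ.
function chooseServer({ inCas, inSas }) {
  if (inCas && inSas) return "ask"; // let the user pick
  if (inCas) return "cas";
  if (inSas) return "sas";
  return "none"; // ask which library to search in
}
```

When `"ask"` or `"none"` comes back, the skill falls through to the clarifying questions listed above rather than guessing.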
- ## Using read-table
-
- **When:**
- - User asks for raw records, row-by-row data
- - Simple WHERE filtering (e.g., "where status = 'active'")
- - Pagination needed ("first 50 rows", "next 10 rows")
-
- **How:**
- ```
- read-table({
-   table: "tablename",
-   lib: "libraryname",
-   server: "cas",   // or "sas" — determined from the find-table check
-   limit: N,        // default 10; adjust based on the user's request
-   where: "..."     // optional SQL WHERE clause
- })
- ```
-
- **Rules:**
- - Always determine the server first using `find-table`
- - Keep batch size ≤ 50 rows unless the user explicitly requests more
- - If the table name is missing, ask: *"Which table should I read from? (format: lib.tablename)"*
- - If the library is missing, ask: *"Which library contains the table?"*
- - If the table exists in both servers, ask the user which to use
- - Return raw column values; do not transform or aggregate
-
- ---
-
- ## Using sas-query
-
- **When:**
- - User asks for aggregation (SUM, AVG, COUNT, GROUP BY)
- - User asks for joins, calculations, or analytical insights
- - User's question is phrased analytically ("compare", "analyze", "breakdown", "trend")
-
- **How:**
- ```
- sas-query({
-   table: "lib.tablename",
-   query: "user's natural language question",
-   sql: "SELECT ... FROM ... WHERE ... GROUP BY ..."  // generated from the query
- })
- ```
-
- **Rules:**
- - Check which server(s) contain the table using `find-table` first
- - If the table exists in both CAS and SAS, ask the user which to query from
- - Parse the user's natural-language question into a PROC SQL SELECT statement
- - Ensure the SELECT statement is valid SQL syntax
- - Do not add trailing semicolons to the SQL string
- - If the table name is missing, ask: *"Which table should I query? (format: lib.tablename)"*
- - If the intent is unclear, ask for clarification: *"Do you want raw rows, or an aggregated summary?"*
-
- ---
-
- ## Common patterns
-
- **Pattern A — Raw row retrieval**
- > "Show me the first 5 rows from Public.customers"
-
- → `read-table({ table: "customers", lib: "Public", limit: 5 })`
-
- **Pattern B — Filtered retrieval**
- > "Get all high-value orders (amount > 5000) from mylib.orders"
-
- → `read-table({ table: "orders", lib: "mylib", where: "amount > 5000" })`
-
- **Pattern C — Aggregation**
- > "What is the average price by make in Public.cars?"
-
- → `sas-query({ table: "Public.cars", query: "average price by make", sql: "SELECT make, AVG(msrp) AS avg_price FROM Public.cars GROUP BY make" })`
-
- **Pattern D — Join + analysis**
- > "Show me total sales by customer in the sales and customers tables"
-
- → `sas-query()` with a JOIN in the generated SQL
-
- ---
-
- ## Error handling
-
- | Problem | Action |
- |---|---|
- | Table not found in either server | Ask: *"Which library contains the table? (e.g., Public, Samples, mylib)"* |
- | Table exists in both CAS and SAS | Ask: *"The table exists in both servers. Which would you prefer: CAS or SAS?"* |
- | Table exists only in one server | Use that server automatically in your request |
- | Library not found | Ask: *"Which library contains the table? (e.g., Public, Samples, mylib)"* |
- | Ambiguous intent (raw vs aggregate) | Ask: *"Do you want individual rows or a summary by some field?"* |
- | Empty result | Inform the user; ask to adjust the filter or query |
-
- ---
-
- ## Next steps
-
- Once data is retrieved, decide the next action based on context:
- - If the user wants to **score** the data → move to `sas-score-workflow`
- - If the user wants to **visualize** → present as a table or chart
- - If the user wants to **export** → format and offer a download
- - If the user wants to **analyze further** → ask clarifying questions
@@ -1,282 +0,0 @@
- ---
- name: sas-score-workflow
- description: >
-   Guide the full model scoring workflow: validate model familiarity, route to the appropriate scoring tool
-   based on model type, invoke scoring with scenario data, and present merged results. Use this skill
-   when the user wants to run predictions on data (already fetched or user-supplied). Supports the generic
-   syntax: "score with model <name>.<type> scenario =<params>" where type is job|jobdef|mas|scr|sas.
-   Trigger phrases: "score these records", "predict using model", "run model on", "score with model X.mas".
- ---
-
- # SAS Score Workflow
-
- Orchestrates model validation, type-based routing, scoring invocation, and result presentation.
- Handles both MAS models and alternative scoring engines (jobs, jobdefs, SCR, SAS programs).
-
- ---
-
- ## Generic Scoring Syntax
-
- Users can invoke scoring with a unified syntax that automatically routes to the correct tool:
-
- ```
- score with model <name>.<type> [scenario =<key=value pairs>]
- score <name>.<type> [scenario =<key=value pairs>]
- ```
-
- **Type determines the routing:**
- - `.job` → route to `run-job` with scoring parameters
- - `.jobdef` → route to `run-jobdef` with scoring parameters
- - `.mas` → route to `model-score` (SAS Micro Analytic Service — the default)
- - `.scr` → route to `scr-score` (SAS Container Runtime)
- - `.sas` → route to `run-sas-program` to run a SAS program from a folder
-
- If no type is specified (bare model name), assume `.mas` (MAS model).
-
- ---
-
- ## Type-Based Routing
-
- Parse the model name to extract the type suffix and route accordingly:
-
- ### Type: `.mas` (SAS Micro Analytic Service)
- - **Tool**: `model-score`
- - **Use for**: Standard MAS-deployed predictive models
- - **Example**: `score with model churn.mas scenario =age=45,income=60000`
- - **Invocation**: `model-score({ model: "churn", scenario: {...} })`
-
- ### Type: `.job` (SAS Viya Job)
- - **Tool**: `run-job`
- - **Use for**: Pre-built scoring jobs with parameters
- - **Example**: `score with model monthly_scorer.job scenario =month=10,year=2025`
- - **Invocation**: `run-job({ name: "monthly_scorer", scenario: {...} })`
-
- ### Type: `.jobdef` (SAS Viya Job Definition)
- - **Tool**: `run-jobdef`
- - **Use for**: Job definitions that perform scoring logic
- - **Example**: `score with model fraud_detector.jobdef using amount=500,merchant=online`
- - **Invocation**: `run-jobdef({ name: "fraud_detector", scenario: {...} })`
-
- ### Type: `.scr` (SAS Container Runtime)
- - **Tool**: `scr-score`
- - **Use for**: Models deployed in SCR containers (REST endpoints)
- - **Example**: `score https://scr-host/models/loan.scr using age=45,credit=700`
- - **Invocation**: `scr-score({ url: "https://scr-host/models/loan", scenario: {...} })`
-
- ### Type: `.sas` (SAS Program / SQL)
- - **Tool**: `run-sas-program`
- - **Use for**: Custom SAS or SQL scoring code
- - **Example**: `score my_scoring_code.sas using x=1,y=2`
- - **Invocation**: `run-sas-program({ folder: "my_scoring_code", scenario: {...} })`
-
- ---
-
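The suffix-based routing above can be sketched in a few lines of JavaScript. This is illustrative only — `routeModelRef` and `TOOL_BY_TYPE` are hypothetical names, not helpers exported by the package:

```javascript
// Map a "<name>.<type>" model reference to the tool that should score it,
// mirroring the routing table above. Bare names default to .mas.
const TOOL_BY_TYPE = {
  mas: "model-score",     // SAS Micro Analytic Service (default)
  job: "run-job",         // SAS Viya job
  jobdef: "run-jobdef",   // SAS Viya job definition
  scr: "scr-score",       // SAS Container Runtime endpoint
  sas: "run-sas-program", // SAS program / SQL
};

function routeModelRef(ref) {
  const dot = ref.lastIndexOf(".");
  const suffix = dot === -1 ? "" : ref.slice(dot + 1).toLowerCase();
  if (suffix in TOOL_BY_TYPE) {
    return { name: ref.slice(0, dot), type: suffix, tool: TOOL_BY_TYPE[suffix] };
  }
  // No recognized suffix: treat the whole string as a bare MAS model name
  return { name: ref, type: "mas", tool: TOOL_BY_TYPE.mas };
}
```

For example, `routeModelRef("fraud_detector.jobdef")` selects `run-jobdef`, while a bare `"churn"` falls through to `model-score`.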
- ## Scenario Parsing
-
- The scenario parameter (comma-separated key=value pairs) is parsed into an object:
-
- ```
- scenario =age=45,income=60000,region=South
-   ↓ parsed as:
- { age: "45", income: "60000", region: "South" }
- ```
-
- Accepted formats:
- - **String**: `age=45,income=60000`
- - **Object**: `{ age: 45, income: 60000 }`
- - **Array** (batch): `[ {age:45, income:60000}, {age:50, income:75000} ]`
-
- ---
-
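A minimal parser for the string form could look like the sketch below. This is an assumption about the behavior described above, not the package's actual parser; values stay strings, matching the worked example:

```javascript
// Parse "age=45,income=60000,region=South" into { age: "45", ... }.
// The object and array forms pass through unchanged.
function parseScenario(scenario) {
  if (typeof scenario !== "string") return scenario; // object or array form
  const out = {};
  for (const pair of scenario.split(",")) {
    const eq = pair.indexOf("=");
    if (eq === -1) continue; // skip malformed fragments
    out[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return out;
}
```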
- ## Step 1 — Check model familiarity before scoring
-
- Score immediately if:
- - The user names a specific model they've used before in this session, OR
- - The model name matches a previously confirmed model in the conversation
-
- Pause and suggest investigation if:
- - The model name is new, vague, or misspelled-looking (e.g. "the churn one", "that cancer model")
- - The user seems unsure of the required input variable names
-
- **Suggested message:**
- > "I don't recognize that model — want me to run `find-model` to confirm it exists,
- > or `model-info` to check its required inputs first?"
-
- ---
-
- ## Step 2 — Prepare the scenario data
-
- **For a single record** (one object):
- ```javascript
- scenario = { field1: value1, field2: value2, ... }
- ```
-
- **For batch scoring** (multiple records — the typical case):
- ```javascript
- scenario = [
-   { field1: val1, field2: val2, ... },
-   { field1: val3, field2: val4, ... },
-   ...
- ]
- ```
-
- **Critical rules:**
- - Pass the full batch in a single model-score call; do **not** loop and call it once per row.
- - Field names in the scenario must match the model's expected input variable names **exactly**.
- - If table column names differ from model input names, **flag this to the user** and ask for confirmation before scoring.
-   - Example: The table has `age_years`, but the model expects `age` → ask the user which column maps to which input.
- - Do not add units, labels, or extra metadata — raw field values only.
-
- ---
-
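The field-name rules above amount to a pre-flight check before any scoring call. A sketch — `checkScenarioFields` is a hypothetical helper, and the `inputs` list is assumed to come from a `model-info` lookup:

```javascript
// Compare a scenario row's field names against the model's expected inputs
// and report any mismatch, so the user can confirm a mapping before scoring.
function checkScenarioFields(row, inputs) {
  const fields = Object.keys(row);
  return {
    missing: inputs.filter((name) => !fields.includes(name)), // model expects, row lacks
    extra: fields.filter((name) => !inputs.includes(name)),   // row has, model ignores
  };
}
```

If either list is non-empty (e.g. the table has `age_years` but the model expects `age`), stop and ask the user to confirm the mapping rather than scoring anyway.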
- ## Step 3 — Invoke the appropriate scoring tool
-
- Based on the type extracted from the model name, invoke the corresponding tool:
-
- **For `.mas` (default):**
- ```javascript
- model-score({
-   model: "<modelname>",
-   scenario: scenario,  // object or array
-   uflag: false         // set true if you need field names prefixed with _
- })
- ```
-
- **For `.job`:**
- ```javascript
- run-job({
-   name: "<jobname>",
-   scenario: scenario
- })
- ```
-
- **For `.jobdef`:**
- ```javascript
- run-jobdef({
-   name: "<jobdefname>",
-   scenario: scenario
- })
- ```
-
- **For `.scr`:**
- ```javascript
- scr-score({
-   url: "<scr_endpoint_url>",
-   scenario: scenario
- })
- ```
-
- **For `.sas`:**
- ```javascript
- run-sas-program({
-   src: "<sas_or_sql_code>",
-   scenario: scenario
- })
- ```
-
- **Rules:**
- - Pass the full batch in one call; do not loop over rows
- - If scoring fails, return the structured error and suggest troubleshooting
- - For MAS models, include the uflag parameter if underscore-prefixed output is needed
- - For jobs/jobdefs, the scenario becomes parameter arguments
- - For SCR, include the full URL endpoint
-
- ---
-
- ## Step 4 — Present the results
-
- Merge the scoring output back with the input records and present as a table where possible.
-
- **Always surface:**
- - The key prediction/score field(s) (e.g. `P_churn`, `score`, `prediction`, `P_risk`)
- - Any probability/confidence fields for classification models (e.g. `P_class0`, `P_class1`)
- - Selected input fields that drove the prediction, so the user can see context
-
- **Formatting:**
- - Present results in a table for clarity
- - If results exceed 10 rows, show the first 10 and ask: *"Want to see more results or export the full set?"*
- - Round numeric predictions to 2–4 decimal places for readability
-
- ---
-
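The merge-and-round step above could be sketched as follows. `mergeResults` is an illustrative helper (not part of the package), and field names like `P_churn` depend on the model's output shape:

```javascript
// Merge each input row with its scoring output (same order assumed) and
// round non-integer numeric fields for display.
function mergeResults(rows, scores, decimals = 3) {
  return rows.map((row, i) => {
    const merged = { ...row, ...scores[i] };
    for (const [key, value] of Object.entries(merged)) {
      if (typeof value === "number" && !Number.isInteger(value)) {
        merged[key] = Number(value.toFixed(decimals));
      }
    }
    return merged;
  });
}
```

Keeping the input fields alongside the prediction is what lets the final table show context, per the "Always surface" list above.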
- ## Common flows
-
- **Flow A — Score rows with a MAS model**
- > "Score the first 10 customers in Public.customers with the churn model"
-
- 1. `read-table` → { table: "Public.customers", limit: 10 }
- 2. `model-score` → { model: "churn", scenario: [ ...10 row objects ] }
- 3. Present merged results with prediction + key inputs
-
- **Flow B — Score with a scoring job**
- > "Score December sales with the monthly_scorer job using month=12,year=2025"
-
- 1. `run-job` → { name: "monthly_scorer", scenario: { month: "12", year: "2025" } }
- 2. Capture job output and tables
- 3. Present results
-
- **Flow C — Score with a job definition**
- > "Run fraud detection jobdef on transaction amount=500, merchant=online"
-
- 1. `run-jobdef` → { name: "fraud_detection", scenario: { amount: "500", merchant: "online" } }
- 2. Capture log, listings, and tables
- 3. Present results
-
- **Flow D — Score with an SCR endpoint**
- > "Score with the loan model at https://scr-host/models/loan using age=45, credit_score=700"
-
- 1. `scr-score` → { url: "https://scr-host/models/loan", scenario: { age: "45", credit_score: "700" } }
- 2. Capture the prediction response
- 3. Present the result
-
- **Flow E — Score results of an analytical query with MAS**
- > "Score high-value customers (spend > 5000) in mylib.sales with the fraud model"
-
- 1. `sas-query` → { table: "mylib.sales", sql: "SELECT * FROM mylib.sales WHERE spend > 5000" }
- 2. `model-score` → { model: "fraud", scenario: [ ...result rows ] }
- 3. Present merged results
-
- **Flow F — User supplies scenario data directly**
- > "Score age=45, income=60000, region=South with the churn model"
-
- 1. Skip the read step
- 2. `model-score` → { model: "churn", scenario: { age: "45", income: "60000", region: "South" } }
- 3. Present the result
-
- **Flow G — Model unfamiliar, need to confirm**
- > "Score Public.applicants with the creditRisk2 model"
-
- 1. Pause — "creditRisk2" is new
- 2. Suggest: `find-model` to confirm it exists, `model-info` to get input variables
- 3. Once confirmed → `read-table` + `model-score`
-
- **Flow H — Generic score syntax with type routing**
- > "score with model churn.mas scenario =age=45,income=60000"
- > "score fraud_detector.jobdef where scenario =amount=500"
- > "score monthly_report.job using month=10,year=2025"
-
- 1. Parse the model name to extract the type (.mas, .job, .jobdef, .scr, .sas)
- 2. Route to the appropriate tool based on the type
- 3. Parse the scenario and invoke the tool with parameters
- 4. Present results from the routed tool
-
- ---
-
- ## Error handling
-
- | Problem | Action |
- |---|---|
- | Model not found | Suggest `find-model` to verify the model is deployed |
- | Input field name mismatch | Show the mismatch (table has X, model expects Y), ask the user to confirm the mapping |
- | Scoring error / invalid inputs | Return a structured error, suggest `model-info` to check required inputs and data types |
- | Empty read result | Tell the user, ask if they want to adjust the query/filter before scoring |
- | Missing input fields | Ask which table columns map to the required model inputs |
-
- ---
-
- ## Tips
-
- - **Batch is better:** Always pass the full set of records in one `model-score` call. Do not loop.
- - **Confirm mappings:** If column names don't match model inputs, ask before scoring.
- - **Show context:** Include key input columns in the result output so predictions make sense.
- - **Limit output:** For large result sets (>10 rows), ask before showing all.