@sassoftware/sas-score-mcp-serverjs 1.0.1-2 → 1.0.1-3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. package/.skills_claude/README.md +303 -0
  2. package/.skills_claude/TESTING_GUIDE.md +252 -0
  3. package/.skills_claude/agents/sas-viya-scoring-expert.md +58 -0
  4. package/.skills_claude/claude-desktop-config.json +16 -0
  5. package/.skills_claude/claude-desktop-system-prompt.md +127 -0
  6. package/.skills_claude/copilot-instructions.md +155 -0
  7. package/.skills_claude/instructions.md +184 -0
  8. package/.skills_claude/skills/sas-find-library-smart/SKILL.md +157 -0
  9. package/.skills_claude/skills/sas-find-resource-strategy/SKILL.md +105 -0
  10. package/.skills_claude/skills/sas-list-resource-strategy/SKILL.md +124 -0
  11. package/.skills_claude/skills/sas-list-tables-smart/SKILL.md +126 -0
  12. package/.skills_claude/skills/sas-read-and-score/SKILL.md +112 -0
  13. package/.skills_claude/skills/sas-read-strategy/SKILL.md +154 -0
  14. package/.skills_claude/skills/sas-request-classifier/SKILL.md +69 -0
  15. package/.skills_claude/skills/sas-score-workflow/SKILL.md +200 -0
  16. package/.skills_claude/skills-index.md +345 -0
  17. package/.skills_github/agents/sas-viya-scoring-expert.md +58 -0
  18. package/.skills_github/copilot-instructions.md +177 -0
  19. package/.skills_github/skills/sas-find-library-smart/SKILL.md +155 -0
  20. package/.skills_github/skills/sas-find-resource-strategy/SKILL.md +105 -0
  21. package/.skills_github/skills/sas-list-resource-strategy/SKILL.md +124 -0
  22. package/.skills_github/skills/sas-list-tables-smart/SKILL.md +128 -0
  23. package/.skills_github/skills/sas-read-and-score/SKILL.md +113 -0
  24. package/.skills_github/skills/sas-read-strategy/SKILL.md +154 -0
  25. package/.skills_github/skills/sas-request-classifier/SKILL.md +74 -0
  26. package/.skills_github/skills/sas-score-workflow/SKILL.md +314 -0
  27. package/package.json +3 -2
@@ -0,0 +1,113 @@
1
+ ---
2
+ name: sas-read-and-score
3
+ description: >
4
+ Guide the full read → score workflow in SAS Viya: reading records from a table and then scoring
5
+ them with a MAS model (using sas-score-model-score). Use this skill whenever the user wants to score records
6
+ from a table, run a model against query results, predict outcomes for a set of rows, or any
7
+ combination of fetching data and scoring it. Trigger phrases include: "score these records",
8
+ "score results of my query", "run the model on this table", "predict for these customers",
9
+ "fetch and score", "read and score", "score rows from", "run model on table data", or any request
10
+ that combines reading/querying table data with model prediction.
11
+ ---
12
+
13
+ # SAS Read → Score Workflow
14
+
15
+ Orchestrates the full two-step pattern of reading records from a SAS/CAS table and scoring them
16
+ with a deployed MAS model.
17
+
18
+ ---
19
+
20
+ ## Pre-flight verification
21
+
22
+ **Before attempting to read or score table data:**
23
+ 1. **Verify library exists**: Use `sas-find-resource-strategy` for library lookup (CAS first, then SAS if needed)
24
+ 2. **Verify table exists**: Use `sas-find-resource-strategy` for table lookup in the selected library/server
25
+ 3. **Verify model exists**: Use `sas-find-resource-strategy` for model lookup before scoring
26
+ 4. **Confirm server location**: Ensure you know which server (CAS or SAS) contains the data
27
+
28
+ This ensures consistent behavior with other data access operations.
29
+
30
+ ---
31
+
32
+ ## Workflow overview
33
+
34
+ The typical flow involves:
35
+ 1. **Fetch data** — Identify which table/query will provide input records
36
+ 2. **Validate model** — Confirm the model exists and understand its input schema
37
+ 3. **Score** — Invoke the model on the fetched records
38
+ 4. **Present results** — Merge predictions with original data and display
39
+
40
+ ---
41
+
42
+ ## Scenario: User already has data
43
+
44
+ If the user provides scenario data directly (e.g., "Score age=45, income=60000 with model X"):
45
+ - Extract the scenario values
46
+ - Validate against model's input schema
47
+ - Invoke scoring
48
+ - Return prediction
49
+
50
+ ---
51
+
52
+ ## Scenario: User wants to score table rows
53
+
54
+ If the user specifies a table (e.g., "Score all customers in Public.customers with model X"):
55
+ - Fetch raw rows (possibly filtered: "where status='active'")
56
+ - Validate model compatibility with input columns
57
+ - Invoke scoring on each row
58
+ - Merge results with original data
59
+ - Display combined table
60
+
61
+ ---
62
+
63
+ ## Scenario: User wants to score query results
64
+
65
+ If the user wants to score aggregated/filtered results (e.g., "Score high-value customers (spend > 5000) with model X"):
66
+ - Determine which records meet criteria (aggregation/filtering)
67
+ - Validate model expects these input columns
68
+ - Invoke scoring
69
+ - Merge predictions with summary data
70
+ - Display results
71
+
72
+ ---
73
+
74
+ ## Scenario: User unfamiliar with model
75
+
76
+ If the user specifies a model name that's new/unknown:
77
+ - Check if model exists
78
+ - Retrieve model schema (inputs, outputs)
79
+ - Show user what inputs the model expects
80
+ - Confirm before proceeding with scoring
81
+
82
+ ---
83
+
84
+ ## Rules
85
+
86
+ - Always validate table/library existence before attempting to read
87
+ - Always check model exists before invoking `sas-score-model-score`
88
+ - Match table columns to model input variables; warn on mismatch
89
+ - Use find tools only for resource existence checks; do not use list tools to find resources
90
+ - If multiple records: score batch if possible; fall back to row-by-row
91
+ - Merge predictions with original data using row index or key column
92
+ - Present results as table with original columns + new prediction columns
93
+
94
+ ---
95
+
96
+ ## Error handling
97
+
98
+ | Problem | Action |
99
+ |---|---|
100
+ | Table not found | Ask for correct lib.tablename |
101
+ | Model not found | Inform user; suggest verifying model name |
102
+ | Field/column mismatch | Show mismatch, ask user to confirm or adjust query |
103
+ | Scoring error | Return structured error, suggest checking model inputs |
104
+ | Empty read result | Inform user, ask if they want to adjust the query/filter |
105
+ | Data type mismatch | Warn user about type conversion, proceed or ask for clarification |
106
+
107
+ ---
108
+
109
+ ## Integration with other skills
110
+
111
+ - **Before this workflow**: Use `sas-find-resource-strategy` to verify library/table/model resources exist
112
+ - **For data retrieval**: Use `sas-read-strategy` to choose the right read tool (read-table vs sas-query)
113
+ - **For scoring**: Use `sas-score-workflow` for advanced scoring options beyond MAS models
@@ -0,0 +1,154 @@
1
+ ---
2
+ name: sas-read-strategy
3
+ description: >
4
+ Guide the user in choosing the right data retrieval tool: sas-score-read-table (for raw row access with filters)
5
+ or sas-score-sas-query (for analytical queries, aggregations, joins). Use this skill when the user wants to
6
+ fetch records from a SAS/CAS table. Trigger phrases include: "read records from", "get data where",
7
+ "fetch rows from", "query the table", "give me the first N records", "aggregate by", "join tables",
8
+ or any request that starts with data retrieval.
9
+ ---
10
+
11
+ # SAS Read Strategy
12
+
13
+ Guides the decision between `sas-score-read-table` and `sas-score-sas-query` based on the user's intent and the nature
14
+ of the data operation. Determines which server contains the data and which retrieval tool is most appropriate.
15
+
16
+ ## Determine the server location
17
+
18
+ Before retrieving data, locate the specific table first and determine which server(s) contain it:
19
+
20
+ **Step 1: Locate the specific table (table-first)**
21
+ - Use `sas-find-resource-strategy` to check if the table exists in CAS first, then SAS if needed
22
+ - Do not perform a separate library-first lookup; table lookup already handles server-aware discovery
23
+ - Possible outcomes:
24
+ - If table exists **only in CAS** → set `server: "cas"`
25
+ - If table exists **only in SAS** → set `server: "sas"`
26
+ - If table exists **in both** → ask the user: *"The table exists in both CAS and SAS. Which server would you prefer to query from?"*
27
+ - If table exists **in neither** → inform user and suggest verifying the table name
28
+
29
+ ---
30
+
31
+ ## Determine the read strategy
32
+
33
+ Ask yourself: does the user already have the data in hand?
34
+
35
+ - **Yes (user pasted values, or data is already in context)** → skip this strategy; data is ready to use.
36
+ - **No** → choose the read tool based on intent:
37
+
38
+ | User's Intent | Tool | Example |
39
+ |---|---|---|
40
+ | Get specific raw rows, apply simple filter, retrieve first N records | `sas-score-read-table` | "Show me 10 rows from customers where status='active'" |
41
+ | Aggregate/summarize, calculate, join tables, analytical question | `sas-score-sas-query` | "Average salary by department", "Count orders by region" |
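+ As a rough sketch, the intent split above could be coded as a keyword heuristic (the keyword list is an illustrative assumption; a real agent would weigh the full request context):

```javascript
// Hedged sketch: route raw-row requests to sas-score-read-table and
// analytical phrasing to sas-score-sas-query. Keyword list is illustrative.
const ANALYTICAL = /\b(average|avg|sum|count|group by|join|compare|analyze|breakdown|trend|total)\b/i;

function chooseReadTool(request) {
  return ANALYTICAL.test(request) ? "sas-score-sas-query" : "sas-score-read-table";
}
```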
42
+
43
+ ---
44
+
45
+ ## Using read-table
46
+
47
+ **When:**
48
+ - User asks for raw records, row-by-row data
49
+ - Simple WHERE filtering (e.g., "where status = 'active'")
50
+ - Pagination needed ("first 50 rows", "next 10 rows")
51
+
52
+ **How:**
53
+ ```
54
+ sas-score-read-table({
55
+ table: "tablename",
56
+ lib: "libraryname",
57
+ server: "cas" or "sas", // determined from table-first lookup via sas-find-resource-strategy
58
+ limit: N, // default 10, adjust based on user request
59
+ where: "..." // optional SQL WHERE clause
60
+ })
61
+ ```
62
+
63
+ **Rules:**
64
+ - Always determine the server first using `sas-find-resource-strategy`
65
+ - Keep batch size ≤ 50 rows unless the user explicitly requests more
66
+ - If table name is missing, ask: *"Which table should I read from? (format: lib.tablename)"*
67
+ - If library is missing, ask: *"Which library contains the table?"*
68
+ - Prefer table-first lookup (CAS first, then SAS) instead of a separate library lookup step
69
+ - If table exists in both servers, ask user which to use
70
+ - Return raw column values; do not transform or aggregate
71
+ - Do not use list tools to determine whether a resource exists
72
+
73
+ ---
74
+
75
+ ## Using sas-query
76
+
77
+ **When:**
78
+ - User asks for aggregation (SUM, AVG, COUNT, GROUP BY)
79
+ - User asks for joins, calculations, or analytical insights
80
+ - User's question is phrased analytically ("compare", "analyze", "breakdown", "trend")
81
+
82
+ **How:**
83
+ ```
84
+ sas-score-sas-query({
85
+ table: "lib.tablename",
86
+ query: "user's natural language question",
87
+ sql: "SELECT ... FROM ... WHERE ... GROUP BY ..." // generated from query
88
+ })
89
+ ```
90
+
91
+ **Rules:**
92
+ - Check which server(s) contain the table using `sas-find-resource-strategy` first
93
+ - If table exists in both CAS and SAS, ask user which to query from
94
+ - Parse the user's natural language question into a PROC SQL SELECT statement
95
+ - Ensure SELECT statement is valid SQL syntax
96
+ - Do not add trailing semicolons to the SQL string
97
+ - If table name is missing, ask: *"Which table should I query? (format: lib.tablename)"*
98
+ - If the intent is unclear, ask for clarification: *"Do you want raw rows, or an aggregated summary?"*
99
+
100
+ ---
101
+
102
+ ## Common patterns
103
+
104
+ **Pattern A — Raw row retrieval**
105
+ > "Show me the first 5 rows from Public.customers"
106
+
107
+ → `sas-score-read-table({ table: "customers", lib: "Public", limit: 5 })`
108
+
109
+ **Pattern B — Filtered retrieval**
110
+ > "Get all high-value orders (amount > 5000) from mylib.orders"
111
+
112
+ → `sas-score-read-table({ table: "orders", lib: "mylib", where: "amount > 5000" })`
113
+
114
+ **Pattern C — Aggregation**
115
+ > "What is the average price by make in Public.cars?"
116
+
117
+ → `sas-score-sas-query({ table: "Public.cars", query: "average price by make", sql: "SELECT make, AVG(msrp) AS avg_price FROM Public.cars GROUP BY make" })`
118
+
119
+ **Pattern D — Join + analysis**
120
+ > "Show me total sales by customer in the sales and customers tables"
121
+
122
+ → `sas-score-sas-query()` with a JOIN in the generated SQL
123
+
124
+ ---
125
+
126
+ ## Error handling
127
+
128
+ | Problem | Action |
129
+ |---|---|
130
+ | Library missing or unclear | Ask the user which library contains the table |
131
+ | Table not found in either server | Inform user and suggest checking the table name |
132
+ | Table exists in both CAS and SAS | Ask: *"The table exists in both servers. Which would you prefer: CAS or SAS?"* |
133
+ | Table exists only in one server | Use that server automatically in your request |
134
+ | Table name missing entirely | Ask: *"Which table should I read from?"* |
135
+ | Ambiguous intent (raw vs aggregate) | Ask: *"Do you want individual rows or a summary by some field?"* |
136
+ | Empty result | Inform user, ask to adjust filter or query |
137
+
138
+ ---
139
+
140
+ ## Integration with other skills
141
+
142
+ - **Before this skill**: Use `sas-find-resource-strategy` for table-first resource lookup
143
+ - **After this skill**: Use `sas-read-and-score` to score the retrieved data
144
+
145
+ ---
146
+
147
+ ## Next steps
148
+
149
+ Once data is retrieved, typical follow-ups include:
150
+ - **Visualize** — present as table or chart
151
+ - **Export** — format and offer download
152
+ - **Analyze further** — ask clarifying questions
153
+ - **Score** — run predictions on the data
154
+ - **Combine** — join with other datasets
@@ -0,0 +1,74 @@
1
+ ---
2
+ name: sas-request-classifier
3
+ description: Classify ambiguous SAS or Viya requests before using MCP tools. Use when prompts mention jobs, code, models, scoring, CAS tables, content, or resources and the correct SAS domain is not yet clear.
4
+ ---
5
+
6
+ # SAS Request Classifier
7
+
8
+ Use this skill to determine what kind of SAS object, workflow, or environment the user is referring to before selecting tools.
9
+
10
+ ## When to use
11
+ Use this skill when the request contains ambiguous domain terms such as:
12
+ - model
13
+ - score
14
+ - scoring
15
+ - read
16
+ - query
17
+ - job
18
+ - jobdef
19
+ - code
20
+ - table
21
+ - content
22
+ - asset
23
+ - resource
24
+
25
+ Use this skill before any execution-oriented tool call if there is a chance the request is referring to the wrong SAS domain.
26
+
27
+ ## Goal
28
+ Map the user request to the most likely SAS domain and hand off to the correct downstream skill or tool path.
29
+
30
+ ## Classification targets
31
+ Classify the request into one or more of these categories:
32
+ - Resource existence lookup (library/table/model/job/jobdef) → Route to **sas-find-resource-strategy**
33
+ - Resource listing (library/table/model/job/jobdef) → Route to **sas-list-resource-strategy**
34
+ - Reading or querying tables → Route to **sas-read-strategy**
35
+ - CAS resource, SAS resource, caslib, or table discovery → Route to **sas-find-library-smart**
36
+ - SAS data, libref, or table discovery → Route to **sas-find-library-smart**
37
+ - Score model or scoring artifact → Route to **sas-score-workflow**
38
+ - Read data and score together → Route to **sas-read-and-score**
39
+ - SAS job or flow execution
40
+ - SAS code or program analysis
41
+ - General content or metadata lookup
42
+ - Environment, auth, or connectivity issue
43
+
44
+ ## Procedure
45
+ 1. Read the request and identify ambiguous nouns and verbs.
46
+ 2. Infer whether the request is asking to discover, inspect, execute, deploy, score, compare, or troubleshoot.
47
+ 3. Decide the most likely SAS domain and matching skill.
48
+ 4. If confidence is low, ask one focused clarifying question.
49
+ 5. If confidence is high, load and use the relevant skill:
50
+ - **sas-find-resource-strategy** — Unified find-only strategy for library/table/model/job/jobdef
51
+ - **sas-list-resource-strategy** — Unified list strategy for library/table/model/job/jobdef with non-null pagination defaults
52
+ - **sas-find-library-smart** — Find CAS or SAS libraries
53
+ - **sas-list-tables-smart** — Browse tables in a library
54
+ - **sas-read-strategy** — Choose read-table vs. sas-query for data retrieval
55
+ - **sas-read-and-score** — Combine data reading with model scoring
56
+ - **sas-score-workflow** — Route scoring requests to correct execution engine
57
+ 6. Only after classification and skill guidance, use MCP tools.
58
+
59
+ ## Disambiguation hints
60
+ - "Run" often implies job execution, but may also mean scoring or model invocation. Check for "score" or "model" context.
61
+ - "Model" may refer to MAS models, SAS jobs, jobdefs, or SCR models. Look for context or type suffix (e.g., `.job`, `.mas`, `.scr`).
62
+ - "Score" may refer to model scoring or job execution. Look for model name or context.
63
+ - "Table" usually suggests CAS or SAS but confirm library name and server context.
64
+ - "Find" — resource lookup. Route to sas-find-resource-strategy.
65
+ - "List" or "browse" — resource exploration. Route to sas-list-resource-strategy.
66
+ - "Read", "query", "fetch" — data retrieval. Route to sas-read-strategy.
67
+ - "Predict", "score records", "run model on data" — combined workflow. Route to sas-read-and-score or sas-score-workflow.
68
+
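+ The verb-based hints above could be sketched as an ordered routing table (skill names are from this document; the helper and its null fallback are illustrative assumptions, not part of the server):

```javascript
// Hedged sketch of the verb-to-skill routing; first matching pattern wins.
const VERB_ROUTES = [
  [/\bfind\b/i, "sas-find-resource-strategy"],
  [/\b(list|browse)\b/i, "sas-list-resource-strategy"],
  [/\b(read|query|fetch)\b/i, "sas-read-strategy"],
  [/\b(predict|score)\b/i, "sas-score-workflow"],
];

function routeRequest(request) {
  for (const [pattern, skill] of VERB_ROUTES) {
    if (pattern.test(request)) return skill;
  }
  return null; // low confidence: ask a clarifying question instead
}
```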
69
+ ## Output
70
+ When you finish classification, state:
71
+ - the inferred SAS domain
72
+ - the confidence level
73
+ - the relevant skill(s) to load
74
+ - any remaining ambiguity or clarifying questions needed
@@ -0,0 +1,314 @@
1
+ ---
2
+ name: sas-score-workflow
3
+ description: >
4
+ Guide the full model scoring workflow: validate model familiarity, route to appropriate scoring tool
5
+ based on model type, invoke scoring with scenario data, and present merged results. Use this skill
6
+ when the user wants to run predictions on data (already fetched or user-supplied). Supports generic
7
+ syntax: "score with model <name>.<type> scenario =<params>" where type is job|jobdef|mas|scr|sas.
8
+ Trigger phrases: "score these records", "predict using model", "run model on", "score with model X.mas".
9
+ ---
10
+
11
+ # SAS Score Workflow
12
+
13
+ Orchestrates model validation, type-based routing, scoring invocation, and result presentation.
14
+ Handles both MAS models and alternative scoring engines (jobs, jobdefs, SCR, SAS programs).
15
+
16
+ ---
17
+
18
+ ## Generic Scoring Syntax
19
+
20
+ Users can invoke scoring with a unified syntax that automatically routes to the correct tool:
21
+
22
+ ```
23
+ score with model <name>.<type> [scenario =<key=value pairs>]
24
+ score <name>.<type> [scenario =<key=value pairs>]
25
+ ```
26
+
27
+ **Type determines the routing:**
28
+ - `.job` → route to `sas-score-run-job` with scoring parameters
29
+ - `.jobdef` → route to `sas-score-run-jobdef` with scoring parameters
30
+ - `.mas` → route to `sas-score-model-score` (Micro Analytic Service — default)
31
+ - `.scr` → route to `sas-score-scr-score` (SAS Container Runtime)
32
+ - `.sas` → route to `sas-score-run-sas-program` to run a SAS program from a folder
33
+
34
+ If no type is specified (bare model name), assume `.mas` (MAS model).
35
+
36
+ ---
37
+
38
+ ## Type-Based Routing
39
+
40
+ ### Parse and Strip Model Type
41
+
42
+ When a user provides a model name with a type suffix (e.g., `simplejon.job`, `churn.mas`):
43
+
44
+ 1. **Extract the type:** Split on the last dot to identify the type suffix
45
+ - `simplejon.job` → type = `job`, base name = `simplejon`
46
+ - `churn.mas` → type = `mas`, base name = `churn`
47
+ - `fraud_detector.jobdef` → type = `jobdef`, base name = `fraud_detector`
48
+
49
+ 2. **Validate the type:** Confirm it matches one of the supported types: `job`, `jobdef`, `mas`, `scr`, `sas`
50
+ - If type is unrecognized, assume `.mas` (default MAS model) and treat the entire input as the model name
51
+
52
+ 3. **Strip the type suffix:** Remove the `.type` from the model name before passing to the routing tool
53
+ - **Critical:** Always pass the base name (without the dot and type) to the invoked tool
54
+ - `simplejon.job` → pass `simplejon` to `sas-score-run-job`
55
+ - `churn.mas` → pass `churn` to `sas-score-model-score`
56
+ - `fraud_detector.jobdef` → pass `fraud_detector` to `sas-score-run-jobdef`
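+ A minimal sketch of this parse-and-strip logic (the routing map uses the tool names from this skill; the helper itself is hypothetical, not part of the published server API):

```javascript
// Hedged sketch: split on the last dot, validate the suffix, and strip it.
// Unrecognized suffixes fall through to the .mas default with the full input as the name.
const ROUTES = {
  job: "sas-score-run-job",
  jobdef: "sas-score-run-jobdef",
  mas: "sas-score-model-score",
  scr: "sas-score-scr-score",
  sas: "sas-score-run-sas-program",
};

function parseModelName(input) {
  const dot = input.lastIndexOf(".");
  if (dot > 0) {
    const type = input.slice(dot + 1).toLowerCase();
    if (ROUTES[type]) {
      // Strip the suffix: only the base name is passed to the routed tool.
      return { name: input.slice(0, dot), type, tool: ROUTES[type] };
    }
  }
  // No suffix, or unrecognized suffix: treat the entire input as a MAS model name.
  return { name: input, type: "mas", tool: ROUTES.mas };
}
```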
57
+
58
+ ### Type: `.mas` (Micro Analytic Service)
59
+ - **Tool**: `sas-score-model-score`
60
+ - **Use for**: Standard MAS-deployed predictive models
61
+ - **Example**: `score with model churn.mas scenario =age=45,income=60000`
62
+ - **Invocation**: `sas-score-model-score({ model: "churn", scenario: {...} })`
63
+
64
+ ### Type: `.job` (SAS Viya Job)
65
+ - **Tool**: `sas-score-run-job`
66
+ - **Use for**: Pre-built scoring jobs with parameters
67
+ - **Example**: `score with model monthly_scorer.job scenario =month=10,year=2025`
68
+ - **Invocation**: `sas-score-run-job({ name: "monthly_scorer", scenario: {...} })`
69
+
70
+ ### Type: `.jobdef` (SAS Viya Job Definition)
71
+ - **Tool**: `sas-score-run-jobdef`
72
+ - **Use for**: Job definitions that perform scoring logic
73
+ - **Example**: `score with model fraud_detector.jobdef using amount=500,merchant=online`
74
+ - **Invocation**: `sas-score-run-jobdef({ name: "fraud_detector", scenario: {...} })`
75
+
76
+ ### Type: `.scr` (Score Code Runtime)
77
+ - **Tool**: `sas-score-scr-score`
78
+ - **Use for**: Models deployed in SCR containers (REST endpoints)
79
+ - **Example**: `score https://scr-host/models/loan.scr using age=45,credit=700`
80
+ - **Invocation**: `sas-score-scr-score({ url: "https://scr-host/models/loan", scenario: {...} })`
81
+
82
+ ### Type: `.sas` (SAS Program / SQL)
83
+ - **Tool**: `sas-score-run-sas-program`
84
+ - **Use for**: Custom SAS or SQL scoring code
85
+ - **Example**: `score my_scoring_code.sas using x=1,y=2`
86
+ - **Invocation**: `sas-score-run-sas-program({ folder: "my_scoring_code", scenario: {...} })`
87
+
88
+
89
+
90
+ ---
91
+
92
+ ## Scenario Parsing
93
+
94
+ The scenario parameter (comma-separated key=value pairs) is parsed into an object:
95
+
96
+ ```
97
+ scenario =age=45,income=60000,region=South
98
+ ↓ parsed as:
99
+ { age: "45", income: "60000", region: "South" }
100
+ ```
101
+
102
+ Accepted formats:
103
+ - **String**: `age=45,income=60000`
104
+ - **Object**: `{ age: 45, income: 60000 }`
105
+ - **Array** (batch): `[ {age:45, income:60000}, {age:50, income:75000} ]`
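+ One way the three accepted formats could be normalized to a batch array (an illustrative sketch, not the server's actual parser):

```javascript
// Hedged sketch: normalize string, object, or array scenario input to an array of records.
function parseScenario(scenario) {
  if (Array.isArray(scenario)) return scenario;           // batch passes through
  if (typeof scenario === "object" && scenario !== null) return [scenario]; // wrap single record
  // String form: split on commas, then on the first "=" of each pair.
  const record = {};
  for (const pair of String(scenario).split(",")) {
    const eq = pair.indexOf("=");
    if (eq === -1) continue; // skip malformed fragments
    record[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return [record];
}
```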
106
+
107
+ ---
108
+
109
+ ## Integration with other skills
110
+
111
+ - **Before scoring table data**: Use `sas-find-resource-strategy` to verify library/table/model resources, then `sas-read-strategy` to fetch records
112
+ - **For read + score workflows**: Use `sas-read-and-score` for the complete end-to-end pattern
113
+
114
+ ---
115
+
116
+ ## Step 1 — Check model familiarity before scoring
117
+
118
+ Score immediately if:
119
+ - The user names a specific model they've used before in this session, OR
120
+ - The model name matches a previously confirmed model in the conversation
121
+
122
+ Pause and suggest investigation if:
123
+ - The model name is new, vague, or misspelled-looking (e.g. "the churn one", "that cancer model")
124
+ - The user seems unsure of the required input variable names
125
+
126
+ **Suggested message:**
127
+ > "I don't recognize that model — want me to use `sas-find-resource-strategy` to confirm it exists,
128
+ > or `model-info` to check its required inputs first?"
129
+
130
+ ---
131
+
132
+ ## Step 2 — Prepare the scenario data
133
+
134
+ **For a single record** (one object):
135
+ ```javascript
136
+ scenario = { field1: value1, field2: value2, ... }
137
+ ```
138
+
139
+ **For batch scoring** (multiple records — the typical case):
140
+ ```javascript
141
+ scenario = [
142
+ { field1: val1, field2: val2, ... },
143
+ { field1: val3, field2: val4, ... },
144
+ ...
145
+ ]
146
+ ```
147
+
148
+ **Critical rules:**
149
+ - Pass the full batch array to `sas-score-model-score` in a **single call**; do not loop row-by-row.
150
+ - Field names in the scenario must match the model's expected input variable names **exactly**.
151
+ - If table column names differ from model input names, **flag this to the user** and ask for confirmation before scoring.
152
+ - Example: Table has `age_years`, but model expects `age` → ask user which column maps to which input.
153
+ - Do not add units, labels, or extra metadata — raw field values only.
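+ The name-matching rule can be sketched as a simple pre-scoring check (`modelInputs` would come from something like `model-info`; the helper is hypothetical):

```javascript
// Hedged sketch: compare model input variable names against a row's column names
// so mismatches can be surfaced to the user before scoring.
function checkInputs(modelInputs, row) {
  const cols = Object.keys(row);
  return {
    missing: modelInputs.filter((v) => !cols.includes(v)), // model needs, table lacks
    extra: cols.filter((c) => !modelInputs.includes(c)),   // table has, model does not use
  };
}
```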
154
+
155
+ ---
156
+
157
+ ## Step 3 — Invoke the appropriate scoring tool
158
+
159
+ Based on the type extracted from the model name, invoke the corresponding tool:
160
+
161
+ **For `.mas` (default):**
162
+ ```javascript
163
+ sas-score-model-score({
164
+ model: "<modelname>",
165
+ scenario: scenario, // object or array
166
+ uflag: false // set true if you need field names prefixed with _
167
+ })
168
+ ```
169
+
170
+ **For `.job`:**
171
+ ```javascript
172
+ sas-score-run-job({
173
+ name: "<jobname>",
174
+ scenario: scenario
175
+ })
176
+ ```
177
+
178
+ **For `.jobdef`:**
179
+ ```javascript
180
+ sas-score-run-jobdef({
181
+ name: "<jobdefname>",
182
+ scenario: scenario
183
+ })
184
+ ```
185
+
186
+ **For `.scr`:**
187
+ ```javascript
188
+ sas-score-scr-score({
189
+ url: "<scr_endpoint_url>",
190
+ scenario: scenario
191
+ })
192
+ ```
193
+
194
+ **For `.sas`:**
195
+ ```javascript
196
+ sas-score-run-sas-program({
197
+ src: "<sas_or_sql_code>",
198
+ scenario: scenario
199
+ })
200
+ ```
201
+
202
+ **Rules:**
203
+ - Pass the full batch in one call; do not loop over rows
204
+ - If scoring fails, return the structured error and suggest troubleshooting
205
+ - For MAS models, include uflag parameter if underscore-prefixed output is needed
206
+ - For jobs/jobdefs, scenario becomes parameter arguments
207
+ - For SCR, include full URL endpoint
208
+
209
+ ---
210
+
211
+ ## Step 4 — Present the results
212
+
213
+ Merge the scoring output back with the input records and present as a table where possible.
214
+
215
+ **Always surface:**
216
+ - The key prediction/score field(s) (e.g. `P_churn`, `score`, `prediction`, `P_risk`)
217
+ - Any probability/confidence fields for classification models (e.g. `P_class0`, `P_class1`)
218
+ - Selected input fields that drove the prediction, so the user can see context
219
+
220
+ **Formatting:**
221
+ - Present results in a table for clarity
222
+ - If results exceed 10 rows, show the first 10 and ask: *"Want to see more results or export the full set?"*
223
+ - Round numeric predictions to 2–4 decimal places for readability
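+ The merge-by-row-index step can be sketched as follows (field names such as `P_churn` are examples only):

```javascript
// Hedged sketch: pair each input row with the prediction at the same index
// and merge them into one object for tabular display.
function mergeResults(rows, predictions) {
  return rows.map((row, i) => ({ ...row, ...predictions[i] }));
}
```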
224
+
225
+ ---
226
+
227
+ ## Common flows
228
+
229
+ **Flow A — Score rows with MAS model**
230
+ > "Score the first 10 customers in Public.customers with the churn model"
231
+
232
+ 1. `sas-score-read-table` → { table: "customers", lib: "Public", limit: 10 }
233
+ 2. `sas-score-model-score` → { model: "churn", scenario: [ ...10 row objects ] }
234
+ 3. Present merged results with prediction + key inputs
235
+
236
+ **Flow B — Score with a scoring job**
237
+ > "Score December sales with the monthly_scorer job using month=12,year=2025"
238
+
239
+ 1. `sas-score-run-job` → { name: "monthly_scorer", scenario: { month: "12", year: "2025" } }
240
+ 2. Capture job output and tables
241
+ 3. Present results
242
+
243
+ **Flow C — Score with a job definition**
244
+ > "Run fraud detection jobdef on transaction amount=500, merchant=online"
245
+
246
+ 1. `sas-score-run-jobdef` → { name: "fraud_detection", scenario: { amount: "500", merchant: "online" } }
247
+ 2. Capture log, listings, and tables
248
+ 3. Present results
249
+
250
+ **Flow D — Score with SCR endpoint**
251
+ > "Score with the loan model at https://scr-host/models/loan using age=45, credit_score=700"
252
+
253
+ 1. `sas-score-scr-score` → { url: "https://scr-host/models/loan", scenario: { age: "45", credit_score: "700" } }
254
+ 2. Capture prediction response
255
+ 3. Present result
256
+
257
+ **Flow E — Score results of an analytical query with MAS**
258
+ > "Score high-value customers (spend > 5000) in mylib.sales with the fraud model"
259
+
260
+ 1. `sas-score-sas-query` → { table: "mylib.sales", sql: "SELECT * FROM mylib.sales WHERE spend > 5000" }
261
+ 2. `sas-score-model-score` → { model: "fraud", scenario: [ ...result rows ] }
262
+ 3. Present merged results
263
+
264
+ **Flow F — User supplies scenario data directly**
265
+ > "Score age=45, income=60000, region=South with the churn model"
266
+
267
+ 1. Skip read step
268
+ 2. `sas-score-model-score` → { model: "churn", scenario: { age: "45", income: "60000", region: "South" } }
269
+ 3. Present result
270
+
271
+ **Flow G — Model unfamiliar, need to confirm**
272
+ > "Score Public.applicants with the creditRisk2 model"
273
+
274
+ 1. Pause — "creditRisk2" is new
275
+ 2. Suggest: `sas-find-resource-strategy` to confirm resource existence, `model-info` to get input variables
276
+ 3. Once confirmed → `sas-score-read-table` + `sas-score-model-score`
277
+
278
+ **Flow H — Generic score syntax with type routing**
279
+ > "score with model churn.mas scenario =age=45,income=60000"
280
+ > "score fraud_detector.jobdef where scenario =amount=500"
281
+ > "score monthly_report.job using month=10,year=2025"
282
+
283
+ 1. Parse model name to extract type (.mas, .job, .jobdef, .scr, .sas)
284
+ 2. Route to appropriate tool based on type
285
+ 3. Parse scenario and invoke tool with parameters
286
+ 4. Present results from routed tool
287
+
288
+ ---
289
+
290
+ ## Error handling
291
+
292
+ | Problem | Action |
293
+ |---|---|
294
+ | Model not found | Use `sas-find-resource-strategy` to verify the model (or job/jobdef type) exists |
295
+ | Input field name mismatch | Show the mismatch (table has X, model expects Y), ask user to confirm mapping |
296
+ | Scoring error / invalid inputs | Return structured error, suggest `model-info` to check required inputs and data types |
297
+ | Empty read result | Tell user, ask if they want to adjust the query/filter before scoring |
298
+ | Missing input fields | Ask which table columns map to the required model inputs |
299
+
300
+ ---
301
+
302
+ ## Tips
303
+
304
+ - **Batch is better:** Always pass the full set of records in one `sas-score-model-score` call. Do not loop.
305
+ - **Confirm mappings:** If column names don't match model inputs, ask before scoring.
306
+ - **Show context:** Include key input columns in the result output so predictions make sense.
307
+ - **Limit output:** For large result sets (>10 rows), ask before showing all.
308
+
309
+ ---
310
+
311
+ ## Integration with other skills
312
+
313
+ - **Before scoring table data**: Use `sas-find-resource-strategy` to verify resources, then `sas-read-strategy` to fetch records
314
+ - **For read + score workflows**: Use `sas-read-and-score` for the complete end-to-end pattern
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@sassoftware/sas-score-mcp-serverjs",
3
- "version": "1.0.1-2",
3
+ "version": "1.0.1-3",
4
4
  "description": "A mcp server for SAS Viya",
5
5
  "author": "Deva Kumar <deva.kumar@sas.com>",
6
6
  "license": "Apache-2.0",
@@ -42,7 +42,8 @@
42
42
  "openApi.json",
43
43
  "openApi.yaml",
44
44
  "scripts",
45
- ".skills"
45
+ ".skills_github",
46
+ ".skills_claude"
46
47
  ],
47
48
  "dependencies": {
48
49
  "@modelcontextprotocol/sdk": "^1.29.0",