@sassoftware/sas-score-mcp-server 0.4.1 → 1.0.1-0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (72)
  1. package/.skills/agents/sas-viya-scoring-expert.md +58 -0
  2. package/.skills/copilot-instructions.md +155 -0
  3. package/.skills/skills/sas-find-library-smart/SKILL.md +154 -0
  4. package/.skills/skills/sas-list-tables-smart/SKILL.md +127 -0
  5. package/.skills/skills/sas-read-and-score/SKILL.md +111 -0
  6. package/.skills/skills/sas-read-strategy/SKILL.md +156 -0
  7. package/.skills/skills/sas-request-classifier/SKILL.md +69 -0
  8. package/.skills/skills/sas-score-workflow/SKILL.md +314 -0
  9. package/cli.js +311 -70
  10. package/package.json +7 -7
  11. package/scripts/docs/SCORE_SKILL_REFERENCE.md +142 -0
  12. package/scripts/docs/TOOL_DESCRIPTION_TEMPLATE.md +157 -0
  13. package/scripts/docs/TOOL_UPDATES_SUMMARY.md +208 -0
  14. package/scripts/docs/mcp-localhost-config-guide.md +184 -0
  15. package/scripts/docs/oauth-http-transport.md +96 -0
  16. package/scripts/docs/sas-mcp-tools-reference.md +600 -0
  17. package/scripts/getViyaca.sh +1 -0
  18. package/scripts/optimize_final.py +140 -0
  19. package/scripts/optimize_tools.py +99 -0
  20. package/scripts/setup-skills.js +34 -0
  21. package/scripts/update_descriptions.py +46 -0
  22. package/scripts/viyatls.sh +3 -0
  23. package/src/authpkce.js +219 -0
  24. package/src/createMcpServer.js +16 -5
  25. package/src/expressMcpServer.js +350 -308
  26. package/src/handleGetDelete.js +6 -3
  27. package/src/hapiMcpServer.js +10 -18
  28. package/src/oauthHandlers/authorize.js +46 -0
  29. package/src/oauthHandlers/baseUrl.js +8 -0
  30. package/src/oauthHandlers/callback.js +96 -0
  31. package/src/oauthHandlers/getMetadata.js +27 -0
  32. package/src/oauthHandlers/index.js +7 -0
  33. package/src/oauthHandlers/token.js +37 -0
  34. package/src/processHeaders.js +88 -0
  35. package/src/setupSkills.js +46 -0
  36. package/src/toolHelpers/_jobSubmit.js +2 -0
  37. package/src/toolHelpers/_listLibrary.js +55 -39
  38. package/src/toolHelpers/getLogonPayload.js +7 -1
  39. package/src/toolHelpers/readCerts.js +4 -4
  40. package/src/toolHelpers/refreshToken.js +3 -2
  41. package/src/toolHelpers/refreshTokenOauth.js +3 -3
  42. package/src/toolSet/.claude/settings.local.json +13 -0
  43. package/src/toolSet/devaScore.js +61 -69
  44. package/src/toolSet/findJob.js +38 -71
  45. package/src/toolSet/findJobdef.js +28 -59
  46. package/src/toolSet/findLibrary.js +68 -100
  47. package/src/toolSet/findModel.js +35 -58
  48. package/src/toolSet/findTable.js +31 -60
  49. package/src/toolSet/getEnv.js +30 -45
  50. package/src/toolSet/listJobdefs.js +61 -96
  51. package/src/toolSet/listJobs.js +61 -110
  52. package/src/toolSet/listLibraries.js +78 -90
  53. package/src/toolSet/listModels.js +56 -83
  54. package/src/toolSet/listTables.js +66 -95
  55. package/src/toolSet/makeTools.js +1 -0
  56. package/src/toolSet/modelInfo.js +22 -54
  57. package/src/toolSet/modelScore.js +35 -77
  58. package/src/toolSet/readTable.js +63 -104
  59. package/src/toolSet/runCasProgram.js +32 -52
  60. package/src/toolSet/runJob.js +24 -24
  61. package/src/toolSet/runJobdef.js +26 -29
  62. package/src/toolSet/runMacro.js +82 -82
  63. package/src/toolSet/runProgram.js +32 -84
  64. package/src/toolSet/sasQuery.js +77 -126
  65. package/src/toolSet/sasQueryTemplate.js +4 -5
  66. package/src/toolSet/sasQueryTemplate2.js +4 -5
  67. package/src/toolSet/scrInfo.js +4 -7
  68. package/src/toolSet/scrScore.js +69 -70
  69. package/src/toolSet/searchAssets.js +5 -6
  70. package/src/toolSet/setContext.js +65 -92
  71. package/src/toolSet/superstat.js +61 -60
  72. package/src/toolSet/tableInfo.js +58 -102
@@ -0,0 +1,58 @@
+ ---
+ name: SAS Viya Scoring Expert
+ description: Specialized SAS and Viya agent that classifies requests, selects the right SAS skill, and uses MCP tools safely for jobs, CAS data, libraries, models, scoring, and content workflows.
+ ---
+
+ # SAS Viya Scoring Expert
+
+ You are a SAS Viya expert agent.
+
+ Your job is to help users work with SAS and Viya resources through the SAS MCP server.
+ Treat requests as domain-specific SAS tasks, not generic coding tasks.
+
+ ## Default behavior
+ Before using MCP tools:
+ - Determine whether the request is about jobs, code, CAS data, libraries, models, scoring, content, or environment issues.
+ - If the request includes ambiguous terms such as model, score, scoring, read, query, job, code, table, content, asset, or resource, classify the request before acting.
+ - Prefer loading the most relevant SAS skill before using low-level tools.
+ - If confidence is low, ask one focused clarifying question.
+ - Prefer discovery and inspection before execution, publishing, scoring, deployment, writes, or destructive actions.
+
+ ## Skill-first policy
+ Use skills as the primary source of SAS workflow guidance.
+ Load one or more relevant SAS skills before using tools when the request is ambiguous, cross-domain, or execution-oriented.
+ Do not load unrelated skills.
+
+ ## Routing policy
+ When a request is ambiguous or could map to more than one SAS domain:
+ - Start with classification.
+ - Identify the most likely SAS asset or workflow type.
+ - Choose the best matching SAS skill.
+ - Only then select MCP tools.
+
+ ## Ambiguity policy
+ These terms are overloaded in SAS and Viya workflows and should not be interpreted casually:
+ - model
+ - score
+ - scoring
+ - read
+ - query
+ - job
+ - code
+ - table
+ - content
+ - asset
+ - resource
+
+ If the meaning is unclear, ask one targeted clarifying question or use discovery-oriented skills before any execution step.
+
+ ## Tool usage policy
+ - Prefer read-only discovery before execution.
+ - Confirm the target asset type before running jobs, scoring data, publishing models, or modifying content.
+ - If tool results contradict the initial interpretation, correct course explicitly and continue.
+ - Never invent asset names, identifiers, libraries, or model types.
+
+ ## Response style
+ Be concise, explicit, and domain-aware.
+ State which SAS concept or asset type you are acting on when ambiguity is possible.
+ Prefer short structured answers when guiding the user.
@@ -0,0 +1,155 @@
+ # SAS Agent instructions for this repository
+
+ ## Project overview
+ This repository builds and maintains a SAS-focused agent experience on top of an MCP server.
+ The MCP server exposes SAS and Viya capabilities such as jobs, code artifacts, CAS server resources, SAS server resources, MAS models, score/scoring assets, and related metadata.
+ Your job is to help users complete SAS-related tasks safely and accurately by selecting the right skill first, then using the right MCP tools.
+
+ ## Available Agents
+
+ This repository includes specialized agents for SAS-focused workflows:
+
+ - **SAS Viya Scoring Expert** — Specialized for SAS Viya scoring tasks. Classifies requests, selects the right SAS skill, and uses MCP tools safely for jobs, CAS data, libraries, models, scoring, and content workflows.
+ - **Explore** — General codebase exploration and Q&A agent. Use for discovering code patterns, reading documentation, or quick exploratory questions.
+
+ You can invoke these agents using the subagent feature. Type `/subagent` followed by your request, or choose an agent from the dropdown when available.
+
+ ## Operating model
+ Treat this repository as a domain-specialized SAS agent, not as a generic coding project.
+ Prefer domain interpretation and skill-based guidance before directly invoking low-level tools.
+ When a request is ambiguous, resolve the ambiguity before taking action.
+
+ ## Request classification
+ Before using SAS MCP tools, classify the request into one of these categories:
+
+ - SAS job or flow execution
+ - SAS code or program analysis
+ - CAS data, caslibs, tables, or resources
+ - SAS data, librefs, tables, or resources
+ - MAS model, SAS job model, or SAS jobdef model
+ - Score model / scoring artifact / scoring execution
+ - General SAS content or metadata discovery
+ - Authentication, connection, or environment issue
+
+ If the request could belong to multiple categories, ask one clarifying question unless lightweight discovery can resolve it safely.
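The category routing above can be sketched as a keyword pre-classifier. This is an editor's illustration, not code from the package: the category names follow this document, but `classifyRequest` and its keyword lists are assumptions.

```javascript
// Hypothetical keyword-based pre-classifier for SAS MCP requests.
// Category names mirror this document; the keyword lists are illustrative only.
const CATEGORIES = [
  { name: "job-or-flow-execution", keywords: ["run job", "flow", "jobdef"] },
  { name: "code-analysis", keywords: ["program", "sas code", "macro"] },
  { name: "cas-data", keywords: ["caslib", "cas table", "cas"] },
  { name: "sas-data", keywords: ["libref", "dataset", "library"] },
  { name: "model-or-scoring", keywords: ["model", "score", "scoring", "predict"] },
  { name: "content-discovery", keywords: ["find", "list", "search", "browse"] },
  { name: "environment", keywords: ["login", "token", "connect", "certificate"] },
];

// Returns every matching category; more than one match means the request
// is ambiguous and the agent should ask one clarifying question.
function classifyRequest(text) {
  const lower = text.toLowerCase();
  return CATEGORIES
    .filter(c => c.keywords.some(k => lower.includes(k)))
    .map(c => c.name);
}
```

A request like "score customers with churn model" matches `model-or-scoring`, while "list caslib tables" matches both `cas-data` and `content-discovery`, which is exactly the multi-category case the text says should trigger a clarifying question.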
+
+ ## Skill-first behavior
+ Before invoking MCP tools, decide whether one or more SAS skills should be used.
+ Prefer loading the most relevant SAS skill for the request category.
+ Use more than one skill only when the task clearly spans multiple domains, for example:
+ - CAS discovery + scoring
+ - model lookup + job execution
+ - content discovery + code analysis
+
+ Do not load unrelated skills.
+ Do not treat "model", "score", "job", "code", or "table" as interchangeable terms.
+
+ ## Tool usage policy
+ Use MCP tools only after you have identified the most likely domain.
+ Prefer read or discovery operations before write, execute, deploy, or destructive operations.
+ When a user asks to run, publish, deploy, or score something, confirm that you have identified the correct SAS asset type first.
+ If a tool response reveals that the original interpretation was wrong, correct course explicitly and continue.
+
+ ## Ambiguity handling
+ In this repository:
+
+ - "model" usually refers to a MAS model, SAS job model, SAS jobdef model, or SCR model.
+ - "score" or "scoring" usually refers to running a model on data, not measuring test coverage.
+ - "job" usually refers to a SAS job or flow, not a CI job.
+
+ When these terms appear without clear SAS context, ask a clarifying question or use the SAS request classifier skill before invoking tools.
+ The following terms are ambiguous and must be disambiguated from context or by asking a question:
+ - model
+ - score
+ - scoring
+ - job
+ - code
+ - table
+ - content
+ - resource
+
+ Examples:
+ - "find my model" may refer to a MAS model, a model repository entry, or a scoring asset
+ - "run scoring" may refer to a job, a MAS model, a jobdef, or an SCR model
+ - "open the table" may refer to a CAS table or a SAS dataset
+
+ ## Response style
+ Be concise, precise, and domain-aware.
+ Explain which SAS concept you are acting on when ambiguity is possible.
+ Do not pretend certainty when the asset type or environment is unclear.
+ Prefer structured answers with short steps when guiding the user.
+
+ ## Coding and implementation guidance
+ When editing code in this repository:
+ - Preserve existing MCP server patterns and naming conventions.
+ - Prefer small, composable modules over large prompt files.
+ - Keep tool descriptions short, specific, and distinct.
+ - Put durable domain workflows in skills, not in tool descriptions.
+ - Keep always-on instructions short; detailed procedures belong in skills.
+ - Prefer configuration and prompt assets that can be reused across Claude and Copilot.
+
+ ## Repository structure expectations
+ Expect to find:
+ - MCP server implementation code
+ - prompt or skill assets
+ - configuration for client integrations
+ - SAS/Viya-specific adapters or resource logic
+ - tests or examples for skill and tool behavior
+
+ When adding new artifacts:
+ - Put repo-wide guidance in `.github/copilot-instructions.md`
+ - Put targeted reusable workflows in `.github/skills/<skill-name>/SKILL.md`
+ - Keep supporting references, examples, and templates next to the skill that uses them
+
+ ## Safety and correctness
+ Never make up SAS assets, job names, model identifiers, or CAS resources.
+ If a requested action depends on environment-specific details, verify those details first.
+ Prefer inspection and discovery over assumption.
+
+ ---
+
+ # Available Skills
+
+ This repository provides specialized skills for SAS-focused workflows. Load the relevant skill for the user's request before using MCP tools.
+
+ ## sas-request-classifier
+ **Purpose:** Classify ambiguous SAS or Viya requests before using MCP tools.
+
+ **Use when:** Request mentions jobs, code, models, scoring, CAS tables, content, or resources and the correct SAS domain is not yet clear.
+
+ **Trigger phrases:** "find my model", "run scoring", "open the table", or any ambiguous request using domain terms.
+
+ ## sas-find-library-smart
+ **Purpose:** Find a SAS Viya library (libref or caslib) with intelligent server detection. Automatically checks CAS first, then SAS if not found.
+
+ **Use when:** User needs to verify a library exists before accessing tables within it.
+
+ **Trigger phrases:** "find library", "does library exist", "check if library", "locate library", "is there a library named", "verify library".
+
+ ## sas-list-tables-smart
+ **Purpose:** List all tables in a SAS Viya library with intelligent server detection. When the server is not specified, automatically checks CAS first, then SAS if not found.
+
+ **Use when:** User wants to browse or explore available tables.
+
+ **Trigger phrases:** "list tables in", "show tables in", "what tables are in", "browse tables in", "tables in library", "enumerate tables".
+
+ ## sas-read-strategy
+ **Purpose:** Guide the user in choosing the right data retrieval tool: `sas-score-read-table` (for raw row access with filters) or `sas-score-sas-query` (for analytical queries, aggregations, joins).
+
+ **Use when:** User wants to fetch records from a SAS/CAS table.
+
+ **Trigger phrases:** "read records from", "get data where", "fetch rows from", "query the table", "give me the first N records", "aggregate by", "join tables".
+
+ ## sas-read-and-score
+ **Purpose:** Guide the full read → score workflow in SAS Viya: reading records from a table and then scoring them with a MAS model.
+
+ **Use when:** User wants to score records from a table, run a model against query results, predict outcomes for a set of rows, or any combination of fetching data and scoring it.
+
+ **Trigger phrases:** "score these records", "score results of my query", "run the model on this table", "predict for these customers", "fetch and score", "read and score", "score rows from", "run model on table data".
+
+ ## sas-score-workflow
+ **Purpose:** Mandatory routing logic for all scoring requests. Extracts the model.type suffix and routes to the correct tool (run-job | run-jobdef | model-score | scr-score | run-program). Handles both MAS models and alternative scoring engines.
+
+ **Use when:** User requests scoring with a model name that may require routing to different execution engines.
+
+ **Trigger phrases:** "score with model X.job", "score X.jobdef scenario", "score with model X.mas", "score with model X.scr", or any request with "score" + a model name containing a dot (.) + type suffix.
@@ -0,0 +1,154 @@
+ ---
+ name: sas-find-library-smart
+ description: >
+   Find a SAS Viya library (libref or caslib) with intelligent server detection. Automatically checks
+   CAS first, then SAS if not found. Use this skill when the user needs to verify a library exists
+   before accessing tables within it. Trigger phrases include: "find library", "does library exist",
+   "check if library", "locate library", "is there a library named", "verify library", or any request
+   to confirm a library's availability across servers.
+ ---
+
+ # Smart Library Lookup (Find Library)
+
+ Intelligently locates a SAS Viya library by checking CAS first, then SAS if the library is not found
+ in CAS. Provides the user with clear information about library availability and location.
+
+ **If the user specifies the server explicitly** (e.g., "find library Public in cas"):
+ - Use the specified server: `server: "cas"` or `server: "sas"`
+ - Proceed directly to finding the library
+
+ **If the server is NOT specified:**
+ 1. **First attempt**: Check CAS (`server: "cas"`)
+ 2. **If not found in CAS**: Check SAS with uppercase library name (`server: "sas"`)
+ 3. **If not found in either**:
+    - Inform user: *"The library '&lt;lib&gt;' was not found in CAS or SAS servers. Please verify the library name."*
+    - Suggest: *"Would you like to list available libraries?"* (suggest `sas-score-list-libraries`)
+ 4. **If found**:
+    - Inform user which server contains the library: *"Found library '&lt;lib&gt;' in CAS"* or *"Found library '&lt;lib&gt;' in SAS"*
+    - Offer next steps: *"Would you like to list tables in this library?"* (suggest `sas-score-list-tables`)
+
+ ---
+
+ ## Using sas-score-find-library
+
+ **When:**
+ - User wants to verify a library exists
+ - User needs to determine which server contains a library
+ - User wants to check library availability before accessing it
+ - User wants to explore available libraries (before querying)
+
+ **How:**
+ ```
+ sas-score-find-library({
+   name: "libraryname",     // required
+   server: "cas" or "sas"   // optional; determined by server check if not specified
+ })
+ ```
+
+ **Rules:**
+ - Always determine the correct server first (cas → sas → neither)
+ - **For SAS server: always uppercase the library name** (e.g., "public" → "PUBLIC")
+ - If library name is missing, ask: *"Which library name would you like to find?"*
+ - Return the server where the library was found
+ - If not found in either server, clearly inform the user and offer to list available libraries
+ - Do not proceed with table access until library existence is confirmed
+
+ ---
+
+ ## Smart server detection logic
+
+ ```
+ IF server specified by user
+   → IF server is "sas"
+     → uppercase lib
+   → use that server, call sas-score-find-library
+ ELSE
+   → TRY sas-score-find-library(lib, server="cas")
+     IF library found
+       → success, inform user: library found in CAS
+     ELSE
+       → uppercase lib
+       → TRY sas-score-find-library(lib.toUpperCase(), server="sas")
+         IF library found
+           → success, inform user: library found in SAS
+         ELSE
+           → inform user library not found in either server
+           → offer to list available libraries
+ ```
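The detection logic above can be expressed as a small JavaScript sketch. This is an editor's illustration, assuming a hypothetical async `findLibrary({ name, server })` helper that resolves to a result object when the library exists on that server and `null` otherwise; the real MCP tool's call and response shapes may differ.

```javascript
// Hypothetical async wrapper around the sas-score-find-library tool.
// `findLibrary` is an assumed helper: it resolves to a result object
// when the library exists on the given server, or null when it does not.
async function findLibrarySmart(findLibrary, name, server) {
  if (server === "sas") {
    // SAS librefs are matched in uppercase, per the rules above.
    return { server: "sas", result: await findLibrary({ name: name.toUpperCase(), server: "sas" }) };
  }
  if (server === "cas") {
    return { server: "cas", result: await findLibrary({ name, server: "cas" }) };
  }
  // Server unspecified: try CAS first, then SAS with the uppercased name.
  const inCas = await findLibrary({ name, server: "cas" });
  if (inCas) return { server: "cas", result: inCas };
  const inSas = await findLibrary({ name: name.toUpperCase(), server: "sas" });
  if (inSas) return { server: "sas", result: inSas };
  return { server: null, result: null }; // not found anywhere; offer to list libraries
}
```

The `server: null` return value maps to the "not found in either server" branch, where the skill tells the agent to offer `sas-score-list-libraries`.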
+
+ ---
+
+ ## Common patterns
+
+ **Pattern 1 — Find library, server unspecified**
+ > "Find library Public"
+
+ 1. Try CAS: `sas-score-find-library({ name: "Public", server: "cas" })`
+ 2. If not found, try SAS with uppercase: `sas-score-find-library({ name: "PUBLIC", server: "sas" })`
+ 3. If found in CAS → *"Found library 'Public' in CAS. Would you like to list tables in it?"*
+ 4. If found in SAS → *"Found library 'PUBLIC' in SAS. Would you like to list tables in it?"*
+ 5. If not found → *"The library 'Public' was not found in CAS or SAS. Would you like to list available libraries?"*
+
+ **Pattern 2 — Find library with explicit server (CAS)**
+ > "Find library MyData in cas"
+
+ 1. Skip server detection
+ 2. Call: `sas-score-find-library({ name: "MyData", server: "cas" })`
+ 3. Result → *"Found library 'MyData' in CAS"* or *"Library 'MyData' not found in CAS"*
+
+ **Pattern 3 — Find library with explicit server (SAS)**
+ > "Does library SASHELP exist in sas"
+
+ 1. Skip server detection
+ 2. Uppercase lib: `sas-score-find-library({ name: "SASHELP", server: "sas" })`
+ 3. Result → *"Found library 'SASHELP' in SAS"* or *"Library 'SASHELP' not found in SAS"*
+
+ **Pattern 4 — Library not found, offer next steps**
+ > "Check if library staging exists"
+
+ 1. Try CAS: `sas-score-find-library({ name: "staging", server: "cas" })` → not found
+ 2. Try SAS: `sas-score-find-library({ name: "STAGING", server: "sas" })` → not found
+ 3. Respond:
+    - *"The library 'staging' was not found in CAS or SAS."*
+    - *"Would you like to:"*
+      - *"List all available libraries? (use `sas-score-list-libraries`)"*
+      - *"Check a different library name?"*
+
+ **Pattern 5 — Library found, follow-up action**
+ > "Verify library samples exists"
+
+ 1. Try CAS: `sas-score-find-library({ name: "samples", server: "cas" })` → found
+ 2. Respond:
+    - *"Found library 'samples' in CAS."*
+    - *"Would you like to list tables in this library? (use `sas-score-list-tables`)"*
+
+ ---
+
+ ## Output presentation
+
+ **When library is found:**
+ ```
+ ✓ Found library '<lib>' in <SERVER>
+
+ Would you like to:
+ • List tables in this library (use sas-list-tables-smart skill)
+ • Read data from a specific table (use sas-read-strategy skill)
+ ```
+
+ **When library is not found:**
+ ```
+ ✗ Library '<lib>' not found in either CAS or SAS
+
+ Suggestions:
+ • Check the spelling of the library name
+ • List available libraries (use list-libraries tool)
+ • Try a different library name
+ ```
+
+ ---
+
+ ## Integration with other skills
+
+ - **After finding library → List tables**: Use the `sas-list-tables-smart` skill to browse available tables
+ - **After finding library → Read data**: Use the `sas-read-strategy` skill to retrieve data from tables
+ - **Library not found → Explore**: Use the `sas-score-list-libraries` tool to see all available libraries
@@ -0,0 +1,127 @@
+ ---
+ name: sas-list-tables-smart
+ description: >
+   List all tables in a SAS Viya library with intelligent server detection. When the server is not
+   specified, automatically checks CAS first, then SAS if not found. Informs the user if the library
+   does not exist in either server. Use this skill when the user wants to browse or explore available
+   tables. Trigger phrases include: "list tables in", "show tables in", "what tables are in",
+   "browse tables in", "tables in library", "enumerate tables", or any request to explore data sources.
+ ---
+
+ # Smart Table Listing in a SAS Library
+
+ Intelligently enumerates tables in a SAS Viya library, automatically determining the correct server
+ when not explicitly specified.
+
+ > **Pre-flight check**: Before listing tables, verify the library exists using the `sas-find-library-smart` skill.
+ > This ensures consistent server detection across all data operations.
+
+ **If the user specifies the server explicitly** (e.g., "list tables in Public in cas"):
+ - Use the specified server: `server: "cas"` or `server: "sas"`
+ - Proceed directly to listing tables
+
+ **If the server is NOT specified:**
+ 1. **First attempt**: Check CAS (`server: "cas"`)
+ 2. **If no tables found in CAS**: Check SAS (`server: "sas"`)
+ 3. **If no tables found in either**:
+    - Inform user: *"The library '&lt;lib&gt;' was not found in CAS or SAS. Please verify the library name is correct."*
+    - Ask: *"Would you like to list available libraries?"* (suggest `sas-score-list-libraries`)
+
+ ---
+
+ ## Using sas-score-list-tables
+
+ **When:**
+ - User wants to browse all tables in a library
+ - User wants to see what data is available
+ - User wants to explore library contents before querying
+
+ **How:**
+ ```
+ sas-score-list-tables({
+   lib: "libraryname",      // required
+   server: "cas" or "sas",  // required; determined by server check
+   limit: 10,               // optional; default 10, adjust for pagination
+   start: 1                 // optional; default 1, use for pagination
+ })
+ ```
+
+ **Rules:**
+ - Always determine the correct server first (cas → sas → neither)
+ - **For SAS server: always uppercase the library name** (e.g., "maps" → "MAPS")
+ - If library name is missing, ask: *"Which library should I list tables from?"*
+ - Default page size is 10; adjust based on user request ("show me all", "25 tables", etc.)
+ - If the returned table count equals the limit, suggest pagination: *"There may be more tables. Use `start: {next_offset}` to see more."*
+ - If no tables are found despite the library existing, report: *"No tables found in {lib} on {server} server."*
+ - Return table names only; do not fetch table metadata unless explicitly requested
+
+ ---
+
+ ## Smart server detection logic
+
+ ```
+ IF server specified by user
+   → IF server is "sas"
+     → uppercase lib
+   → use that server
+ ELSE
+   → TRY sas-score-list-tables(lib, server="cas")
+     IF tables found
+       → success, return tables
+     ELSE
+       → uppercase lib
+       → TRY sas-score-list-tables(lib.toUpperCase(), server="sas")
+         IF tables found
+           → success, return tables
+         ELSE
+           → inform user library not found in either server
+ ```
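The pagination rules plus the CAS → SAS fallback above could be wrapped as follows. This is an editor's sketch under assumptions: `listTables({ lib, server, limit, start })` stands in for the sas-score-list-tables tool and is assumed to resolve to an array of table names for one page (empty when none are found); the real tool's response shape may differ.

```javascript
// Hypothetical wrapper around sas-score-list-tables that pages through
// all results. `listTables` is an assumed helper resolving to an array
// of table names for one page (empty array when none are found).
async function listAllTables(listTables, lib, server, pageSize = 10) {
  const tables = [];
  let start = 1;
  for (;;) {
    const page = await listTables({ lib, server, limit: pageSize, start });
    tables.push(...page);
    if (page.length < pageSize) break; // short page: no more results
    start += pageSize; // next offset, mirroring start: 1, 11, 21, ...
  }
  return tables;
}

// CAS first, then SAS with the uppercased libref, per the detection logic.
async function listTablesSmart(listTables, lib) {
  const cas = await listAllTables(listTables, lib, "cas");
  if (cas.length > 0) return { server: "cas", tables: cas };
  const sas = await listAllTables(listTables, lib.toUpperCase(), "sas");
  if (sas.length > 0) return { server: "sas", tables: sas };
  return { server: null, tables: [] }; // library not found on either server
}
```

Note the stopping condition: a page shorter than `limit` ends the loop, which is the inverse of the rule above that a full page should prompt a "there may be more tables" follow-up.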
+
+ ---
+
+ ## Common patterns
+
+ **Pattern 1 — List tables, server unspecified**
+ > "List tables in Public"
+
+ 1. Try CAS: `sas-score-list-tables({ lib: "Public", server: "cas" })`
+ 2. If empty, try SAS with uppercase: `sas-score-list-tables({ lib: "PUBLIC", server: "sas" })`
+ 3. If still empty → inform user
+
+ **Pattern 2 — List tables with explicit server (SAS)**
+ > "List tables in sashelp in sas"
+
+ 1. Skip server detection
+ 2. Call with uppercase lib: `sas-score-list-tables({ lib: "SASHELP", server: "sas" })`
+
+ **Pattern 3 — List tables with explicit server (CAS)**
+ > "List tables in Public in cas"
+
+ 1. No uppercase needed for CAS
+ 2. Call: `sas-score-list-tables({ lib: "Public", server: "cas" })`
+
+ **Pattern 4 — Pagination**
+ > "Show me 25 tables in Samples, then the next batch"
+
+ 1. First call: `sas-score-list-tables({ lib: "Samples", limit: 25, start: 1 })`
+ 2. Next call: `sas-score-list-tables({ lib: "Samples", limit: 25, start: 26 })`
+
+ **Pattern 5 — Library not found**
+ > "List tables in foo"
+
+ 1. Try CAS: empty
+ 2. Try SAS with uppercase: empty
+ 3. Response: *"The library 'foo' was not found in CAS or SAS. Please verify the library name."*
+
+ ---
+
+ ## Error handling
+
+ | Scenario | Action |
+ |---|---|
+ | Library not found in either server | Inform user and ask to verify the library name |
+ | Empty result on first server | Automatically check the second server |
+ | User specifies an invalid server | Return an error; ask user to clarify: `"cas"` or `"sas"` |
+ | Missing library name | Ask: *"Which library should I list tables from?"* |
+ | Library verification needed | Use the `sas-find-library-smart` skill to verify the library exists first |
+
@@ -0,0 +1,111 @@
+ ---
+ name: sas-read-and-score
+ description: >
+   Guide the full read → score workflow in SAS Viya: reading records from a table and then scoring
+   them with a MAS model (using sas-score-model-score). Use this skill whenever the user wants to score records
+   from a table, run a model against query results, predict outcomes for a set of rows, or any
+   combination of fetching data and scoring it. Trigger phrases include: "score these records",
+   "score results of my query", "run the model on this table", "predict for these customers",
+   "fetch and score", "read and score", "score rows from", "run model on table data", or any request
+   that combines reading/querying table data with model prediction.
+ ---
+
+ # SAS Read → Score Workflow
+
+ Orchestrates the full two-step pattern of reading records from a SAS/CAS table and scoring them
+ with a deployed MAS model.
+
+ ---
+
+ ## Pre-flight verification
+
+ **Before attempting to read or score table data:**
+ 1. **Verify library exists**: Use `sas-find-library-smart` to check the library in CAS first, then SAS if needed
+ 2. **Verify table exists**: Use `sas-score-find-table` to confirm the table is in the library
+ 3. **Confirm server location**: Ensure you know which server (CAS or SAS) contains the data
+
+ This ensures consistent behavior with other data access operations.
+
+ ---
+
+ ## Workflow overview
+
+ The typical flow involves:
+ 1. **Fetch data** — Identify which table/query will provide input records
+ 2. **Validate model** — Confirm the model exists and understand its input schema
+ 3. **Score** — Invoke the model on the fetched records
+ 4. **Present results** — Merge predictions with original data and display
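The four steps above can be sketched as a single pipeline. This is an editor's illustration: `readRows` and `scoreRow` are hypothetical stand-ins for the sas-score-read-table and sas-score-model-score tools, and their shapes here are assumptions, not the real tool signatures.

```javascript
// Illustrative read → score → merge pipeline. `readRows` and `scoreRow`
// are assumed async helpers, not the package's actual tool interfaces.
async function readAndScore(readRows, scoreRow, { lib, table, model, requiredInputs }) {
  // 1. Fetch data
  const rows = await readRows({ lib, table });
  if (rows.length === 0) return { scored: [], warning: "empty read result" };

  // 2. Validate the model's inputs against the table's columns
  const columns = Object.keys(rows[0]);
  const missing = requiredInputs.filter(c => !columns.includes(c));
  if (missing.length > 0) {
    return { scored: [], warning: `missing model inputs: ${missing.join(", ")}` };
  }

  // 3 + 4. Score each row and merge predictions with the original data
  const scored = [];
  for (const row of rows) {
    const prediction = await scoreRow({ model, inputs: row });
    scored.push({ ...row, ...prediction });
  }
  return { scored, warning: null };
}
```

The early returns correspond to the "Empty read result" and "Field/column mismatch" rows of the error-handling table below; row-by-row scoring is the fallback path when batch scoring is unavailable.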
+
+ ---
+
+ ## Scenario: User already has data
+
+ If the user provides scenario data directly (e.g., "Score age=45, income=60000 with model X"):
+ - Extract the scenario values
+ - Validate against the model's input schema
+ - Invoke scoring
+ - Return the prediction
+
+ ---
+
+ ## Scenario: User wants to score table rows
+
+ If the user specifies a table (e.g., "Score all customers in Public.customers with model X"):
+ - Fetch raw rows (possibly filtered: "where status='active'")
+ - Validate model compatibility with the input columns
+ - Invoke scoring on each row
+ - Merge results with the original data
+ - Display the combined table
+
+ ---
+
+ ## Scenario: User wants to score query results
+
+ If the user wants to score aggregated/filtered results (e.g., "Score high-value customers (spend > 5000) with model X"):
+ - Determine which records meet the criteria (aggregation/filtering)
+ - Validate that the model expects these input columns
+ - Invoke scoring
+ - Merge predictions with the summary data
+ - Display the results
+
+ ---
+
+ ## Scenario: User unfamiliar with model
+
+ If the user specifies a model name that is new/unknown:
+ - Check if the model exists
+ - Retrieve the model schema (inputs, outputs)
+ - Show the user what inputs the model expects
+ - Confirm before proceeding with scoring
+
+ ---
+
+ ## Rules
+
+ - Always validate table/library existence before attempting to read
+ - Always check the model exists before invoking `sas-score-model-score`
+ - Match table columns to model input variables; warn on mismatch
+ - If there are multiple records: score as a batch if possible; fall back to row-by-row
+ - Merge predictions with the original data using a row index or key column
+ - Present results as a table with the original columns plus new prediction columns
+
+ ---
+
+ ## Error handling
+
+ | Problem | Action |
+ |---|---|
+ | Table not found | Ask for the correct lib.tablename |
+ | Model not found | Inform user; suggest verifying the model name |
+ | Field/column mismatch | Show the mismatch; ask user to confirm or adjust the query |
+ | Scoring error | Return a structured error; suggest checking model inputs |
+ | Empty read result | Inform user; ask if they want to adjust the query/filter |
+ | Data type mismatch | Warn user about type conversion; proceed or ask for clarification |
+
+ ---
+
+ ## Integration with other skills
+
+ - **Before this workflow**: Use `sas-find-library-smart` to verify the library exists
+ - **For data retrieval**: Use `sas-read-strategy` to choose the right read tool (read-table vs sas-query)
+ - **For scoring**: Use `sas-score-workflow` for advanced scoring options beyond MAS models