fa-mcp-sdk 0.4.68 → 0.4.70

@@ -1,6 +1,6 @@
1
1
  ---
2
- name: deploy-mcp
3
- description: "Implement an fa-mcp MCP server end-to-end in this already-scaffolded project: verify Agent Tester OpenAI creds, seed dev-time secrets and lenient config, push the scaffold to GitLab (creating a new repo OR reusing an existing one when instructed), draft an implementation plan, implement tools/prompts/resources, iterate via the Agent Tester headless API, then push the finished work. Use when the user asks to develop/implement/deploy the MCP server in this project, mentions 'deploy-mcp', 'развернуть MCP', 'реализовать MCP', or supplies a feature brief."
2
+ name: create-mcp-wizard
3
+ description: "Implement an fa-mcp MCP server end-to-end in this already-scaffolded project: verify Agent Tester OpenAI creds, seed dev-time secrets and lenient config, push the scaffold to GitLab (creating a new repo OR reusing an existing one when instructed), draft an implementation plan, implement tools/prompts/resources, iterate via the Agent Tester headless API, then push the finished work. Use when the user asks to develop/implement/deploy the MCP server in this project, mentions 'create-mcp-wizard', 'развернуть MCP', 'реализовать MCP', or supplies a feature brief."
4
4
  disable-model-invocation: true
5
5
  allowed-tools: Bash(node *), Bash(yarn *), Bash(npm *), Bash(git *), Bash(pwd), Bash(cd *), Bash(curl *), Read, Write, Edit, Glob, Grep
6
6
  ---
@@ -34,6 +34,13 @@ All supporting scripts live in `${CLAUDE_SKILL_DIR}/scripts/` and are invoked wi
34
34
  by the CLI / skill infrastructure and by the SDK maintainer. Do NOT modify, add, or delete files
35
35
  inside them unless the accompanying text explicitly instructs you to. This applies to every step
36
36
  below — implementation, tests, dev report, everything.
37
+ - **Reporting language**. Language for all generated artifacts (`claudedocs/*.md`, commit
38
+ messages, user-facing summaries) is resolved in this order:
39
+ 1. Explicit directive in the feature brief.
40
+ 2. Else, contents of `preferred-language.txt` in the project root, if it exists.
41
+ 3. Else — English.
42
+ Translate prose — headings and body text — to the resolved language; leave code, paths, YAML
43
+ keys, and CLI commands as-is. Report the resolved language and its source in the Step 1 summary.
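
The three-step resolution order above can be sketched as a small pure function. This is illustrative only — the function name and shape are not part of the skill's scripts; it just encodes the precedence: brief directive, then `preferred-language.txt` contents, then English.

```typescript
// Illustrative sketch of the reporting-language resolution order (not skill source code).
function resolveReportingLanguage(
  briefDirective: string | undefined,        // 1. explicit directive in the feature brief
  preferredLanguageFile: string | undefined, // 2. contents of preferred-language.txt, if present
): { language: string; source: string } {
  if (briefDirective?.trim()) {
    return { language: briefDirective.trim(), source: 'feature brief' };
  }
  if (preferredLanguageFile?.trim()) {
    return { language: preferredLanguageFile.trim(), source: 'preferred-language.txt' };
  }
  return { language: 'English', source: 'default' }; // 3. fallback
}
```

Whatever the source, both the language and where it came from are reported in the Step 1 summary.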
37
44
 
38
45
  ## Step 1 — Scan the accompanying text for requirements
39
46
 
@@ -50,8 +57,10 @@ Before touching code, read every message/file the user attached and extract:
50
57
  - **Agent Tester OpenAI creds** — `apiKey` (required for Step 2) and `baseURL` (optional — Azure /
51
58
  proxy / local LLM). If the text already supplies them, use those. If `config/local.yaml` already
52
59
  has a working `agentTester.openAi.apiKey`, re-use it instead of asking again.
60
+ - **Reporting language** — resolve per the Ground rule above; record it for later steps.
53
61
 
54
- Summarize what you found to the user in 3-6 bullets and get a one-line confirmation before proceeding.
62
+ Summarize what you found to the user in 3-6 bullets (including the resolved reporting language
63
+ and its source) and get a one-line confirmation before proceeding.
55
64
 
56
65
  ## Step 2 — Verify Agent Tester OpenAI credentials
57
66
 
@@ -128,7 +137,7 @@ what belongs in the initial commit. If the tree contains scratch notes, local-on
128
137
  anything the user flagged as not-for-commit, stash it with an untracked-inclusive stash:
129
138
 
130
139
  ```
131
- git stash push -u -m "deploy-mcp: pre-initial-push stash" -- <paths>
140
+ git stash push -u -m "create-mcp-wizard: pre-initial-push stash" -- <paths>
132
141
  ```
133
142
 
134
143
  Announce what you stashed so the user can recover it later via `git stash list` / `git stash pop`.
@@ -205,7 +214,8 @@ OR switch to branch 4a if the "collision" is in fact the already-existing target
205
214
 
206
215
  ## Step 6 — Draft and commit to a plan
207
216
 
208
- Create `claudedocs/impl-plan.md` (create the directory if needed). Structure:
217
+ Create `claudedocs/impl-plan.md` (create the directory if needed) in the reporting language.
218
+ Structure:
209
219
 
210
220
  ```markdown
211
221
  # Implementation Plan — <project name>
@@ -245,7 +255,9 @@ Create `claudedocs/impl-plan.md` (create the directory if needed). Structure:
245
255
  - [ ] `yarn typecheck` clean
246
256
  - [ ] `yarn test:mcp`, `:mcp-http`, `:mcp-sse` all green
247
257
  - [ ] Agent Tester iterations done, `claudedocs/test-log.md` has entries
248
- - [ ] `claudedocs/dev-report.md` written
258
+ - [ ] `claudedocs/dev-report.md` written (full report)
259
+ - [ ] `claudedocs/breef-report.md` written (brief of work + problems — same content echoed to console)
260
+ - [ ] `claudedocs/dev-problems.md` written (blockers, failed checks, open questions)
249
261
  - [ ] Final GitLab push (Step 10) complete
250
262
  ```
251
263
 
@@ -353,8 +365,9 @@ agent prompt, handler logic, error message — per `FA-MCP-SDK-DOC/08-agent-test
353
365
  fix, rebuild (`yarn cb`), restart, and re-run the scenario. After restart, in-memory sessions on
354
366
  the server are wiped — delete the stale `claudedocs/.agent-session` file before re-running.
355
367
 
356
- Log every iteration in `claudedocs/test-log.md` (session header + per-scenario: sent / expected /
357
- received / tools used / result / diagnosis / fix). This is the audit trail.
368
+ Log every iteration in `claudedocs/test-log.md` in the reporting language (session header +
369
+ per-scenario: sent / expected / received / tools used / result / diagnosis / fix). This is the
370
+ audit trail.
358
371
 
359
372
  Stop the server with `node scripts/kill-port.js <port>` (or Ctrl+C) when you're done iterating.
360
373
 
@@ -373,9 +386,22 @@ yarn test:mcp-sse
373
386
 
374
387
  Zero errors, zero warnings that matter, all transport tests green.
375
388
 
376
- Write `claudedocs/dev-report.md` per the structure in `CLAUDE.md` "Development Report"
377
- (what was built, architecture decisions, agent prompt rationale, test coverage, Agent Tester findings,
378
- configuration, known limitations).
389
+ Write `claudedocs/dev-report.md` in the reporting language, following the structure in
390
+ `CLAUDE.md` → "Development Report" (what was built, architecture decisions, agent prompt rationale,
391
+ test coverage, Agent Tester findings, configuration, known limitations).
392
+
393
+ Alongside the full report, produce two companion files in the reporting language:
394
+
395
+ - **`claudedocs/breef-report.md`** — a brief of the work done and problems encountered. Keep it
396
+ short and scannable (not a duplicate of `dev-report.md`): what was implemented, what passed,
397
+ what failed, the key problems in 1–2 lines each. The same content is echoed verbatim to the
398
+ console as part of the "Final report" step below — that is the primary way the user sees it.
399
+ - **`claudedocs/dev-problems.md`** — a focused list of what could NOT be done / tested / connected
400
+ to during this session, plus any open questions, unresolved blockers, or decisions the user
401
+ needs to make. Include: failed external connections (DB, upstream API, AD, Consul, etc.),
402
+ tests that were skipped or disabled and why, missing creds, ambiguous requirements from the
403
+ brief, anything deferred. If there are no problems, write the file anyway with a single
404
+ "No outstanding issues." line so the user can see the check was done.
379
405
 
380
406
  ## Step 10 — Final GitLab push
381
407
 
@@ -386,7 +412,7 @@ implemented feature and pushes it on top of the scaffold commit.
386
412
  content that shouldn't ship to the remote, stash it first:
387
413
 
388
414
  ```
389
- git stash push -u -m "deploy-mcp: pre-final-push stash" -- <paths>
415
+ git stash push -u -m "create-mcp-wizard: pre-final-push stash" -- <paths>
390
416
  ```
391
417
 
392
418
  Leave anything stashed from Step 5 still stashed — if it shouldn't be in the initial commit, it
@@ -423,8 +449,14 @@ Tell the user:
423
449
  (Step 5) and the feature push (Step 10) landed on `main`.
424
450
  3. Summary of tools/resources/prompts/endpoints that were implemented.
425
451
  4. Any flagged limitations from the dev report.
426
- 5. Link to `claudedocs/impl-plan.md`, `claudedocs/test-log.md`, `claudedocs/dev-report.md`.
452
+ 5. Links to `claudedocs/impl-plan.md`, `claudedocs/test-log.md`, `claudedocs/dev-report.md`,
453
+ `claudedocs/breef-report.md`, `claudedocs/dev-problems.md`.
427
454
  6. Anything still stashed from Step 5 / Step 10 (so the user remembers to `git stash pop` or drop).
455
+ 7. **Echo the full contents of `claudedocs/breef-report.md` to the console** (in the reporting
456
+ language, as written). This is the brief of work done + problems — it must appear inline in
457
+ the chat, not only as a file link, so the user can read it without opening the file. If
458
+ `claudedocs/dev-problems.md` contains anything other than "No outstanding issues.", also call
459
+ that out explicitly and point at the file.
428
460
 
429
461
  ## Troubleshooting
430
462
 
@@ -1,6 +1,6 @@
1
1
  #!/usr/bin/env node
2
2
  /**
3
- * Thin wrapper around POST /agent-tester/api/chat/test — used by the deploy-mcp skill
3
+ * Thin wrapper around POST /agent-tester/api/chat/test — used by the create-mcp-wizard skill
4
4
  * to exercise the freshly-built MCP server through the full agent loop.
5
5
  *
6
6
  * Usage:
@@ -107,4 +107,4 @@ const req = http.request({
107
107
  req.on('error', (e) => { console.error(`request error: ${e.message}`); process.exit(1); });
108
108
  req.on('timeout', () => { req.destroy(new Error('timeout')); });
109
109
  req.write(payload);
110
- req.end();
110
+ req.end();
@@ -15,12 +15,13 @@ npm install fa-mcp-sdk
15
15
  | [01-getting-started](01-getting-started.md) | `initMcpServer()`, `McpServerData`, `IPromptData`, `IResourceData`, `AppConfig` | Starting new project |
16
16
  | [02-1-tools-and-api](02-1-tools-and-api.md) | Tool definitions, `toolHandler`, outbound webhooks, REST API with tsoa, OpenAPI/Swagger | Creating tools, REST endpoints, webhook callbacks |
17
17
  | [02-2-prompts-and-resources](02-2-prompts-and-resources.md) | Standard/custom prompts, resources, `requireAuth` | Configuring prompts/resources |
18
- | [03-configuration](03-configuration.md) | `appConfig`, YAML config, access points for external services, cache, PostgreSQL | Server configuration, external services, DB |
18
+ | [03-configuration](03-configuration.md) | `appConfig`, YAML config, access points for external services, cache | Server configuration, external services |
19
19
  | [04-authentication](04-authentication.md) | JWT, Basic auth, server tokens, `createAuthMW()`, Token Generator, CLI Token Generator, JWT Generation API | Authentication setup |
20
20
  | [05-ad-authorization](05-ad-authorization.md) | AD group authorization at HTTP/tool levels | AD group restrictions |
21
21
  | [06-utilities](06-utilities.md) | `ServerError`, `normalizeHeaders`, logging, Consul, graceful shutdown | Error handling, utilities |
22
22
  | [07-testing-and-operations](07-testing-and-operations.md) | Test clients (STDIO, HTTP, SSE, Streamable HTTP) | Testing, deployment |
23
23
  | [08-agent-tester-and-headless-api](08-agent-tester-and-headless-api.md) | Agent Tester, Headless API, structured logging, automated testing, UI `data-testid` reference | Agent-driven tool development, CLI automation, UI E2E tests |
24
+ | [09-database](09-database.md) | PostgreSQL sugar layer (`queryMAIN`, `execMAIN`, `getInsertSqlMAIN`, `getMergeSqlMAIN`, `mergeByBatch`), `pgvector`, secondary DBs | Database access, upserts, batching |
24
25
 
25
26
  ## Key Exports
26
27
 
@@ -35,7 +36,13 @@ import { createAuthMW, generateToken, getAuthHeadersForTests, TTokenType, genera
35
36
  import { formatToolResult, ToolExecutionError, ServerError, BaseMcpError, ValidationError, getTools } from 'fa-mcp-sdk';
36
37
 
37
38
  // Database & Cache
38
- import { queryMAIN, execMAIN, oneRowMAIN, checkMainDB, getCache } from 'fa-mcp-sdk';
39
+ import {
40
+ queryMAIN, queryRsMAIN, oneRowMAIN, execMAIN,
41
+ getInsertSqlMAIN, getMergeSqlMAIN, mergeByBatch,
42
+ checkMainDB, getMainDBConnectionStatus,
43
+ IQueryPgArgsCOptional,
44
+ getCache,
45
+ } from 'fa-mcp-sdk';
39
46
 
40
47
  // Utilities
41
48
  import { logger, fileLogger, Logger, trim, ppj, toError, toStr, normalizeHeaders } from 'fa-mcp-sdk';
@@ -1,4 +1,4 @@
1
- # Configuration, Cache, and Database
1
+ # Configuration, Cache, and Access Points
2
2
 
3
3
  ## Custom Startup Diagnostics
4
4
 
@@ -362,151 +362,23 @@ cache.close();
362
362
  const data = await cache.getOrSet('key', async () => await fetchData(), 3600);
363
363
  ```
364
364
 
365
- ## Database Integration
365
+ ## Database
366
366
 
367
- To disable the use of the database, you need to set appConfig.db.postgres.dbs.main.host to an empty value.
368
- In this case, when the configuration is formed, appConfig.isMainDBUsed is set to false.
367
+ PostgreSQL integration (including the `MAIN` sugar layer `queryMAIN`, `execMAIN`, `getMergeSqlMAIN`,
368
+ `mergeByBatch`, `pgvector` support, etc.) is documented in [09-database.md](09-database.md).
369
369
 
370
+ Minimal config snippet (see [09-database.md](09-database.md) for the full reference):
370
371
 
371
- If you enable database support (`isMainDBUsed: true` in config):
372
-
373
- ```typescript
374
- import { queryMAIN, execMAIN, oneRowMAIN, queryRsMAIN, checkMainDB } from 'fa-mcp-sdk';
375
-
376
- // Check database connection. If there is no connection, the application stops
377
- await checkMainDB();
378
-
379
- // queryMAIN - the main function of executing SQL queries to the main database
380
-
381
- // Function Signature:
382
- const queryMAIN = async <R extends QueryResultRow = any> (
383
- arg: string | IQueryPgArgsCOptional,
384
- sqlValues?: any[],
385
- throwError = false,
386
- ): Promise<QueryResult<R> | undefined> {...}
387
-
388
- // Types used:
389
- export interface IQueryPgArgs {
390
- connectionId: string,
391
- poolConfig?: PoolConfig & IDbOptionsPg,
392
- client?: IPoolPg,
393
- sqlText: string,
394
- sqlValues?: any[],
395
- throwError?: boolean,
396
- prefix?: string,
397
- registerTypesFunctions?: IRegisterTypeFn[],
398
- }
399
- export interface IQueryPgArgsCOptional extends Omit<IQueryPgArgs, 'connectionId'> {
400
- connectionId?: string
401
- }
402
-
403
- // Examples of use
404
- const users1 = await queryMAIN('SELECT * FROM users WHERE active = $1', [true]);
405
- // Alternative use case
406
- const users2 = await queryMAIN({ sqlText: 'SELECT * FROM users WHERE active = $1', sqlValues: [true] });
407
-
408
-
409
- // execMAIN - execute SQL commands without returning result set
410
- // Function Signature:
411
- const execMAIN = async (
412
- arg: string | IQueryPgArgsCOptional,
413
- ): Promise<number | undefined> {...}
414
-
415
- // Examples:
416
- await execMAIN({ sqlText: 'INSERT INTO logs (message, created_at) VALUES ($1, $2)', sqlValues: ['Server started', new Date()] });
417
- await execMAIN({ sqlText: 'UPDATE users SET active = $1 WHERE id = $2', sqlValues: [false, userId] });
418
-
419
- // queryRsMAIN - execute SQL and return rows array directly
420
- // Function Signature:
421
- const queryRsMAIN = async <R extends QueryResultRow = any> (
422
- arg: string | IQueryPgArgsCOptional,
423
- sqlValues?: any[],
424
- throwError = false,
425
- ): Promise<R[] | undefined> {...}
426
-
427
- // Example:
428
- const users = await queryRsMAIN<User>('SELECT * FROM users WHERE active = $1', [true]);
429
-
430
- // oneRowMAIN - execute SQL and return single row
431
- // Function Signature:
432
- const oneRowMAIN = async <R extends QueryResultRow = any> (
433
- arg: string | IQueryPgArgsCOptional,
434
- sqlValues?: any[],
435
- throwError = false,
436
- ): Promise<R | undefined> {...}
437
-
438
- // Example:
439
- const user = await oneRowMAIN<User>('SELECT * FROM users WHERE id = $1', [userId]);
440
-
441
- // getMainDBConnectionStatus - check database connection status
442
- // Function Signature:
443
- const getMainDBConnectionStatus = async (): Promise<string> {...}
444
-
445
- // Possible return values: 'connected' | 'disconnected' | 'error' | 'db_not_used'
446
- const status = await getMainDBConnectionStatus();
447
-
448
- // checkMainDB - verify database connectivity (stops application if failed)
449
- // Function Signature:
450
- const checkMainDB = async (): Promise<void> {...}
451
-
452
- // Example:
453
- await checkMainDB(); // Throws or exits process if DB connection fails
454
-
455
- // getInsertSqlMAIN - generate INSERT SQL statement
456
- // Function Signature:
457
- const getInsertSqlMAIN = async <U extends TDBRecord = TDBRecord> (arg: {
458
- commonSchemaAndTable: string,
459
- recordset: TRecordSet<U>,
460
- excludeFromInsert?: string[],
461
- addOutputInserted?: boolean,
462
- isErrorOnConflict?: boolean,
463
- keepSerialFields?: boolean,
464
- }): Promise<string> {...}
465
-
466
- // Example:
467
- const insertSql = await getInsertSqlMAIN({
468
- commonSchemaAndTable: 'public.users',
469
- recordset: [{ name: 'John', email: 'john@example.com' }],
470
- addOutputInserted: true
471
- });
472
-
473
- // getMergeSqlMAIN - generate UPSERT (INSERT...ON CONFLICT) SQL statement
474
- // Function Signature:
475
- const getMergeSqlMAIN = async <U extends TDBRecord = TDBRecord> (arg: {
476
- commonSchemaAndTable: string,
477
- recordset: TRecordSet<U>,
478
- conflictFields?: string[],
479
- omitFields?: string[],
480
- updateFields?: string[],
481
- fieldsExcludedFromUpdatePart?: string[],
482
- noUpdateIfNull?: boolean,
483
- mergeCorrection?: (_sql: string) => string,
484
- returning?: string,
485
- }): Promise<string> {...}
486
-
487
- // Example:
488
- const mergeSql = await getMergeSqlMAIN({
489
- commonSchemaAndTable: 'public.users',
490
- recordset: [{ id: 1, name: 'John Updated', email: 'john@example.com' }],
491
- conflictFields: ['email'],
492
- returning: '*'
493
- });
494
-
495
- // mergeByBatch - execute merge operations in batches
496
- // Function Signature:
497
- const mergeByBatch = async <U extends TDBRecord = TDBRecord> (arg: {
498
- recordset: TRecordSet<U>,
499
- getMergeSqlFn: Function
500
- batchSize?: number
501
- }): Promise<any[]> {...}
502
-
503
- // Example:
504
- const results = await mergeByBatch({
505
- recordset: largeDataSet,
506
- getMergeSqlFn: (batch) => getMergeSqlMAIN({
507
- commonSchemaAndTable: 'public.users',
508
- recordset: batch
509
- }),
510
- batchSize: 500
511
- });
372
+ ```yaml
373
+ db:
374
+ postgres:
375
+ dbs:
376
+ main:
377
+ label: 'My Database'
378
+ host: '' # empty string disables DB (isMainDBUsed = false)
379
+ port: 5432
380
+ database: <database>
381
+ user: <user>
382
+ password: <password>
383
+ usedExtensions: [] # e.g. [pgvector]
512
384
  ```
@@ -0,0 +1,295 @@
1
+ # PostgreSQL Database
2
+
3
+ The SDK wraps [`af-db-ts`](https://www.npmjs.com/package/af-db-ts) with a thin sugar layer bound to a single
4
+ logical connection — `main`. All helper functions below are pre-configured with `connectionId = 'main'`,
5
+ automatically register `pgvector` when the extension is enabled, and normalize the call shape (SQL string or
6
+ full argument object).
7
+
8
+ For the vast majority of MCP servers **only the sugar layer is needed** — direct `af-db-ts` calls are
9
+ reserved for edge cases (secondary databases, transactions on an explicit client, cursor streaming,
10
+ cross-DB migration).
11
+
12
+ ## 1. Enabling / Disabling the Database
13
+
14
+ Database support is driven entirely by `config/*.yaml`. The SDK computes `appConfig.isMainDBUsed` at startup
15
+ based on whether a host is configured:
16
+
17
+ ```yaml
18
+ db:
19
+ postgres:
20
+ dbs:
21
+ main:
22
+ label: 'My Database' # shown in diagnostics and admin pages
23
+ host: '' # empty string disables DB (isMainDBUsed = false)
24
+ port: 5432
25
+ database: <database>
26
+ user: <user>
27
+ password: <password>
28
+ usedExtensions: [] # e.g. [pgvector]
29
+ ```
30
+
31
+ - `host: ''` — DB is disabled. `getMainDBConnectionStatus()` returns `'db_not_used'`; the `MAIN` helpers
32
+ are not meant to be called in this state.
33
+ - `host: <value>` — DB is enabled. Call `await checkMainDB()` early in startup so a misconfigured server
34
+ fails fast instead of returning 500s later.
35
+
36
+ ### Enabling `pgvector`
37
+
38
+ ```yaml
39
+ db:
40
+ postgres:
41
+ dbs:
42
+ main:
43
+ # ...
44
+ usedExtensions:
45
+ - pgvector
46
+ ```
47
+
48
+ When `pgvector` is listed, the SDK automatically injects `pgvector.registerType` into every `queryMAIN`
49
+ call, so `vector` columns come back as `number[]` with no per-call setup.
50
+
51
+ ## 2. Sugar Layer — the `MAIN` Family
52
+
53
+ All imports come from `fa-mcp-sdk`:
54
+
55
+ ```typescript
56
+ import {
57
+ queryMAIN, queryRsMAIN, oneRowMAIN, execMAIN,
58
+ getInsertSqlMAIN, getMergeSqlMAIN, mergeByBatch,
59
+ checkMainDB, getMainDBConnectionStatus,
60
+ IQueryPgArgsCOptional,
61
+ } from 'fa-mcp-sdk';
62
+ ```
63
+
64
+ Every query-style helper accepts **two call shapes**:
65
+
66
+ 1. `fn(sqlText, sqlValues?, throwError?)` — shortest form, preferred for most reads.
67
+ 2. `fn({ sqlText, sqlValues, throwError, client, ... })` — full `IQueryPgArgsCOptional` object, needed when
68
+ you want to pass `client` (external pool client for transactions), a log `prefix`, or other advanced
69
+ options.
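
Both call shapes reduce to one normalization step before the query runs. A minimal sketch, assuming only what the two shapes above describe (the helper name `normalizeArgs` and the trimmed `IQueryArgs` interface are illustrative, not SDK exports; the real helpers pre-fill `connectionId = 'main'` as stated):

```typescript
// Illustrative subset of the full query-arg shape.
interface IQueryArgs {
  connectionId: string;
  sqlText: string;
  sqlValues?: any[];
  throwError?: boolean;
  client?: unknown; // external pool client, e.g. for transactions
  prefix?: string;  // log prefix
}

// Turn either call shape into the full argument object with connectionId pre-filled.
function normalizeArgs(
  arg: string | Omit<IQueryArgs, 'connectionId'>,
  sqlValues?: any[],
  throwError = false,
): IQueryArgs {
  return typeof arg === 'string'
    ? { connectionId: 'main', sqlText: arg, sqlValues, throwError }
    : { connectionId: 'main', ...arg };
}
```

This mirrors the secondary-database wrapper pattern shown later in this file, just with `'main'` baked in.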
70
+
71
+ ### 2.1. `queryMAIN<R>(arg, sqlValues?, throwError?)`
72
+
73
+ Returns the full `QueryResult<R>` (`rows`, `rowCount`, `fields`, …) or `undefined` on error when
74
+ `throwError=false`.
75
+
76
+ ```typescript
77
+ // Prepared parameters — always preferred for user input
78
+ const res = await queryMAIN<{ id: number; email: string }>(
79
+ `SELECT id, email FROM public.users WHERE active = $1 ORDER BY id`,
80
+ [true],
81
+ );
82
+ const firstEmail = res?.rows?.[0]?.email;
83
+
84
+ // Object form — e.g. inside an externally-opened transaction
85
+ await queryMAIN({ client, sqlText: `TRUNCATE TABLE public.staging;` });
86
+ ```
87
+
88
+ ### 2.2. `queryRsMAIN<R>(arg, sqlValues?, throwError?)`
89
+
90
+ "Rows only" — returns `R[] | undefined`. Use in ~90% of reads when metadata isn't needed.
91
+
92
+ ```typescript
93
+ const rows = await queryRsMAIN<{ userId: number }>(
94
+ `SELECT "userId" FROM public.sessions WHERE "expiresAt" > NOW()`,
95
+ );
96
+ const ids = new Set((rows || []).map((r) => r.userId));
97
+ ```
98
+
99
+ ### 2.3. `oneRowMAIN<R>(arg, sqlValues?, throwError?)`
100
+
101
+ Returns the first row or `undefined` — the most readable form for look-ups.
102
+
103
+ ```typescript
104
+ const user = await oneRowMAIN<{ id: number; role: string }>(
105
+ `SELECT id, role FROM public.users WHERE email = $1`,
106
+ [email],
107
+ );
108
+ if (!user) throw new Error('User not found');
109
+ ```
110
+
111
+ ### 2.4. `execMAIN(arg): Promise<number | undefined>`
112
+
113
+ For DDL/DML without consuming rows. Returns `rowCount` (or the **sum** of `rowCount` for batch SQL
114
+ concatenated with `;`). Handy for "how many rows did I affect" counters and for transaction primitives.
115
+
116
+ ```typescript
117
+ // Single statement
117
+ // (the string form takes no parameters — interpolate only trusted, internal values;
118
+ // use the object form with sqlValues for anything user-supplied)
118
+ await execMAIN(`UPDATE public.jobs SET status = 'done' WHERE id = ${jobId}`);
119
+
120
+ // Batch UPDATE — sum of rowCount across ;-separated statements
121
+ const sqls = await Promise.all(items.map((it) => buildUpdateSql(it)));
122
+ const affected = await execMAIN(sqls.join('\n'));
123
+
124
+ // Transaction primitives — simple flow on the cached pool
125
+ try {
126
+ await execMAIN({ sqlText: 'BEGIN' });
127
+ // ... writes via queryMAIN / execMAIN ...
128
+ await execMAIN({ sqlText: 'COMMIT' });
129
+ } catch (err) {
130
+ await execMAIN({ sqlText: 'ROLLBACK' });
131
+ throw err;
132
+ }
133
+ ```
134
+
135
+ ### 2.5. `getInsertSqlMAIN<U>(arg): Promise<string>`
136
+
137
+ Generates an `INSERT` statement from table metadata — the recordset is filtered against the table schema,
138
+ so fields that don't exist in the table are silently dropped. Pair with `queryMAIN` to execute.
139
+
140
+ | Field | Purpose |
141
+ |------------------------|-----------------------------------------------------------------------------------|
142
+ | `commonSchemaAndTable` | `'schema.table'` |
143
+ | `recordset` | `TRecordSet<U>` — rows to insert |
144
+ | `excludeFromInsert` | Columns to skip (typically the auto-increment PK) |
145
+ | `addOutputInserted` | Append `RETURNING *` to get generated ids / defaults |
146
+ | `isErrorOnConflict` | Throw on uniqueness violation (default: swallowed) |
147
+ | `keepSerialFields` | Do **not** drop `serial` values from the recordset (used when migrating ids) |
148
+
149
+ ```typescript
150
+ const sql = await getInsertSqlMAIN({
151
+ commonSchemaAndTable: 'public.users',
152
+ recordset: [{ name: 'John', email: 'john@example.com' }],
153
+ excludeFromInsert: ['id'], // PK is auto-increment
154
+ addOutputInserted: true,
155
+ });
156
+ const res = await queryMAIN<{ id: number; name: string }>(sql, undefined, true);
157
+ const created = res?.rows?.[0];
158
+ ```
159
+
160
+ ### 2.6. `getMergeSqlMAIN<U>(arg): Promise<string>`
161
+
162
+ Generates an upsert — `INSERT ... ON CONFLICT (...) DO UPDATE ...`.
163
+
164
+ | Field | Purpose |
165
+ |--------------------------------|-----------------------------------------------------------------------------------------------|
166
+ | `commonSchemaAndTable` | `'schema.table'` |
167
+ | `recordset` | `TRecordSet<U>` — rows to upsert |
168
+ | `conflictFields` | Columns for `ON CONFLICT (...)`. Defaults to the PK |
169
+ | `omitFields` | Excluded from both `INSERT` and `UPDATE` (no effect when `updateFields` is set explicitly) |
170
+ | `updateFields` | If set — only these fields appear in `DO UPDATE` (minus `fieldsExcludedFromUpdatePart`) |
171
+ | `fieldsExcludedFromUpdatePart` | Present in `INSERT`, excluded from `UPDATE` — typical for `createdAt`, `createdBy` |
172
+ | `noUpdateIfNull` | Don't overwrite existing values with `NULL` — **critical for incremental syncs with partial payloads** |
173
+ | `mergeCorrection` | `(sql) => sql` — final rewrite hook |
174
+ | `returning` | `'*'` or quoted field list for `RETURNING` |
175
+
176
+ ```typescript
177
+ const mergeSql = await getMergeSqlMAIN({
178
+ commonSchemaAndTable: 'public.external_items',
179
+ recordset: batch,
180
+ noUpdateIfNull: true, // partial payload upsert
181
+ fieldsExcludedFromUpdatePart: ['createdBy', 'createdAt'],
182
+ });
183
+ await queryMAIN(mergeSql);
184
+ ```
185
+
186
+ ### 2.7. `mergeByBatch<U>({ recordset, getMergeSqlFn, batchSize? })`
187
+
188
+ Universal batched-upsert runner. Slices `recordset` into batches, calls `getMergeSqlFn(batch)` for each, and
189
+ executes the generated SQL through `queryMAIN`. Returns one entry per batch.
190
+
191
+ - Default `batchSize` is `999`; in practice **use 50–100 for wide rows** — you hit Postgres' parameter
192
+ limit or statement-size limit well before 999.
193
+ - **The runner mutates the input via `Array.prototype.splice`.** By the time it returns, `recordset` is
194
+ empty. Clone the array upfront if you need to retain the data.
195
+
196
+ ```typescript
197
+ const getMergeSqlFn = async (batch: TRecordSet) => getMergeSqlMAIN({
198
+ commonSchemaAndTable: 'public.publications',
199
+ recordset: batch,
200
+ noUpdateIfNull: true,
201
+ });
202
+ await mergeByBatch({ recordset: dataset, getMergeSqlFn, batchSize: 100 });
203
+ // dataset is now []
204
+ ```
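
The mutation caveat is easy to see with an illustrative model of the batching loop. This is **not** the SDK source — just a sketch of the documented behavior (splice-based batches, one `getMergeSqlFn` call per batch, one result entry per batch):

```typescript
// Illustrative model of mergeByBatch's documented contract — not the real implementation.
async function mergeByBatchModel<U>(arg: {
  recordset: U[];
  getMergeSqlFn: (batch: U[]) => Promise<string>;
  batchSize?: number;
}): Promise<string[]> {
  const { recordset, getMergeSqlFn, batchSize = 999 } = arg;
  const results: string[] = [];
  while (recordset.length) {
    const batch = recordset.splice(0, batchSize); // mutates the caller's array in place
    results.push(await getMergeSqlFn(batch));     // the real runner executes this SQL via queryMAIN
  }
  return results;
}
```

Hence the rule above: pass `[...dataset]` (a clone) if you still need the rows after the call.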
205
+
206
+ ### 2.8. `checkMainDB()`
207
+
208
+ Startup liveness check. Runs `SELECT 1 FROM pg_catalog.pg_class LIMIT 1` — a neutral query that works on
209
+ any PostgreSQL instance. On failure (except under `NODE_ENV=test`) the process exits with code `1`. Call
210
+ it early in `start.ts` so misconfigured servers fail immediately.
211
+
212
+ ### 2.9. `getMainDBConnectionStatus()`
213
+
214
+ Returns one of `'connected' | 'disconnected' | 'error' | 'db_not_used'`. Safe to call from a `/health`
215
+ endpoint or admin page — never throws, never exits.
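
One common way to wire this into `/health` — an assumption about your endpoint design, not SDK API — is to map the four statuses to HTTP codes, treating `'db_not_used'` as healthy since the DB is intentionally disabled there:

```typescript
type DbStatus = 'connected' | 'disconnected' | 'error' | 'db_not_used';

// Hypothetical mapping: unhealthy only when a configured DB is unreachable.
function healthHttpCode(status: DbStatus): number {
  return status === 'connected' || status === 'db_not_used' ? 200 : 503;
}
```

The handler would then `res.status(healthHttpCode(await getMainDBConnectionStatus())).send(...)` or similar.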
216
+
217
+ ## 3. Types
218
+
219
+ ```typescript
220
+ // Re-exported by the SDK
221
+ import { IQueryPgArgsCOptional } from 'fa-mcp-sdk';
222
+
223
+ // Directly from af-db-ts when you need them
224
+ import { IQueryPgArgs, TDBRecord, TRecordSet } from 'af-db-ts';
225
+ ```
226
+
227
+ - `IQueryPgArgs` — full query-arg shape used by `queryPg` directly; `connectionId` is required.
228
+ - `IQueryPgArgsCOptional` — what the `MAIN` helpers accept; `connectionId` is pre-filled by the SDK.
229
+ - `TDBRecord` — `Record<string, any>` — a generic row shape. Prefer concrete interfaces (`IUserRow`, …)
230
+ where they exist; use `TDBRecord` only when the row shape is not fixed.
231
+ - `TRecordSet<U extends TDBRecord = TDBRecord>` — the array shape expected by `getInsertSqlMAIN`,
232
+ `getMergeSqlMAIN`, and `mergeByBatch`.
233
+
234
+ ## 4. Decision Tree
235
+
236
+ ```
237
+ Need to talk to the main DB?
238
+ ├─ Yes → use the sugar layer
239
+ │ ├─ rows only (R[]) → queryRsMAIN
240
+ │ ├─ single row (R | undefined) → oneRowMAIN
241
+ │ ├─ full QueryResult (rowCount…) → queryMAIN
242
+ │ ├─ DDL / DML, no rows → execMAIN
243
+ │ ├─ generate INSERT SQL → getInsertSqlMAIN → queryMAIN
244
+ │ ├─ generate UPSERT SQL → getMergeSqlMAIN → queryMAIN
245
+ │ └─ batch upsert many rows → mergeByBatch + getMergeSqlMAIN
246
+ └─ No (secondary DB / low level) → direct af-db-ts imports
247
+ ├─ plain query → queryPg + IQueryPgArgs (wrap it, mirror pg-db.ts)
248
+ ├─ transaction / cursor → getPoolPg(<id>) + manual BEGIN/COMMIT/ROLLBACK
249
+ └─ cross-DB SQL generation → getInsertSqlPg / getMergeSqlPg / getUpdateSqlPg
250
+ ```
251
+
252
+ ## 5. Best-Practice Checklist
253
+
254
+ - [ ] Use the `MAIN` sugar for the main DB — reach for `queryPg` only when talking to a secondary database.
255
+ - [ ] Always pass user input through `sqlValues` (`$1`, `$2`, …) — no string concatenation.
256
+ - [ ] Type your rows: `queryMAIN<IUserRow>(...)`, `TRecordSet<IUserRow>` in SQL generators.
257
+ - [ ] For auto-increment tables: `excludeFromInsert: ['<pk>']` + `addOutputInserted: true` when you need
258
+ the generated id back.
259
+ - [ ] For incremental syncs of external sources with partial payloads: `noUpdateIfNull: true`; put audit
260
+ columns (`createdAt`, `createdBy`) into `fieldsExcludedFromUpdatePart`.
261
+ - [ ] For large recordsets go through `mergeByBatch` — remember it **mutates** the input.
262
+ - [ ] For transactions on the main DB the simplest form is
263
+ `execMAIN({ sqlText: 'BEGIN' | 'COMMIT' | 'ROLLBACK' })`. When you need a single physical client
264
+ across many operations, use `getPoolPg(...)` from `af-db-ts` and pass the resulting `client` through
265
+ the object form of the `MAIN` helpers.
266
+ - [ ] Never call `client.release()` on a client obtained from `getPoolPg` — pool lifecycle is owned by the
267
+ SDK and closed during graceful shutdown (via `closeAllPgConnectionsPg`).
268
+ - [ ] For writes whose success must be verified, pass `throwError = true` so failures surface instead of
269
+ silently returning `undefined`.
270
+ - [ ] Call `await checkMainDB()` early at startup; expose `getMainDBConnectionStatus()` from `/health`.
271
+
272
+ ## 6. Secondary Databases (advanced)
273
+
274
+ The SDK only exposes sugar for the single `main` connection. If your server needs extra databases, declare
275
+ them under `db.postgres.dbs.<alias>` and write a small wrapper mirroring `src/core/db/pg-db.ts` — set the
276
+ appropriate `connectionId` and, if needed, supply `registerTypesFunctions`. Typical cases: read-only
277
+ replicas, legacy sources, cross-service ETL jobs.
278
+
279
+ ```typescript
280
+ import { queryPg, IQueryPgArgs } from 'af-db-ts';
281
+ import type { QueryResult, QueryResultRow } from 'pg';
282
+
283
+ const SECONDARY = 'reporting'; // must match a key under db.postgres.dbs
284
+
285
+ export const queryReporting = async <R extends QueryResultRow = any> (
286
+ arg: string | Omit<IQueryPgArgs, 'connectionId'>,
287
+ sqlValues?: any[],
288
+ throwError = false,
289
+ ): Promise<QueryResult<R> | undefined> => {
290
+ const q: IQueryPgArgs = typeof arg === 'string'
291
+ ? { sqlText: arg, connectionId: SECONDARY, sqlValues, throwError }
292
+ : { ...arg, connectionId: SECONDARY };
293
+ return queryPg<R>(q);
294
+ };
295
+ ```
@@ -90,3 +90,4 @@ glm.sh
90
90
  /.serena/
91
91
  /.playwright-mcp/
92
92
  /claudedocs/
93
+ preferred-language.txt
@@ -50,7 +50,7 @@
50
50
  "dependencies": {
51
51
  "@modelcontextprotocol/sdk": "^1.29.0",
52
52
  "dotenv": "^17.4.1",
53
- "fa-mcp-sdk": "^0.4.68"
53
+ "fa-mcp-sdk": "^0.4.70"
54
54
  },
55
55
  "devDependencies": {
56
56
  "@types/express": "^5.0.6",
@@ -140,7 +140,7 @@ Characteristics:
140
140
 
141
141
  ---
142
142
 
143
- ### `/deploy-mcp` — End-to-End MCP Server Implementation
143
+ ### `/create-mcp-wizard` — End-to-End MCP Server Implementation
144
144
 
145
145
  Orchestrates the full implementation workflow from feature brief to a live GitLab repo. The project
146
146
  must already be scaffolded by the `fa-mcp` CLI — this skill picks up from `yarn install` onwards.
@@ -169,20 +169,25 @@ Pipeline (10 steps):
169
169
 
170
170
  Characteristics:
171
171
 
172
- - **Launch**: **command-only** via `/deploy-mcp`. `disable-model-invocation: true` — does NOT
172
+ - **Launch**: **command-only** via `/create-mcp-wizard`. `disable-model-invocation: true` — does NOT
173
173
  trigger on implicit mentions
174
174
  - **Input**: feature brief comes from the accompanying user message(s) and attached files. OpenAI
175
175
  and GitLab creds may be supplied inline or asked interactively
176
176
  - **Ground rules**: every step explicit and verified; free-form inputs asked in plain prose (never
177
177
  predefined options); exclusions from the brief honoured; dev defaults intentionally lenient;
178
178
  `.claude/`, `deploy/`, `FA-MCP-SDK-DOC/` are NOT modified unless the brief explicitly says to
179
+ - **Reporting language**: all generated artifacts (`claudedocs/*.md`, commit messages, user-facing
180
+ summaries) are written in a language resolved in this order: (1) explicit directive in the
181
+ feature brief, else (2) contents of `preferred-language.txt` in the project root, else
182
+ (3) English. Prose (headings + body) is translated; code, paths, YAML keys, and CLI commands
183
+ stay as-is
179
184
  - **Output**: implemented project + `claudedocs/{impl-plan,test-log,dev-report}.md`, GitLab repo
180
185
  with two commits on `main` (scaffold + feature)
181
186
 
182
187
  **Examples:**
183
188
 
184
189
  ```
185
- /deploy-mcp
186
- /deploy-mcp реализуй инструменты из task.md, OpenAI key sk-..., GitLab group mcp-servers
187
- /deploy-mcp implement tools from the message; repo уже существует, push to git@gitlab.example:ai/mcp-foo.git
190
+ /create-mcp-wizard
191
+ /create-mcp-wizard реализуй инструменты из task.md, OpenAI key sk-..., GitLab group mcp-servers
192
+ /create-mcp-wizard implement tools from the message; repo уже существует, push to git@gitlab.example:ai/mcp-foo.git
188
193
  ```
package/package.json CHANGED
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "fa-mcp-sdk",
3
3
  "productName": "FA MCP SDK",
4
- "version": "0.4.68",
4
+ "version": "0.4.70",
5
5
  "description": "Core infrastructure and templates for building Model Context Protocol (MCP) servers with TypeScript",
6
6
  "type": "module",
7
7
  "main": "dist/core/index.js",