@firstlovecenter/ai-chat 0.1.1 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,76 @@
+ # Changelog
+
+ All notable changes to `@firstlovecenter/ai-chat` are documented here.
+
+ The format follows [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
+ and the project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [0.2.1] — 2026-05-08
+
+ ### Fixed
+
+ - **Chat "Thinking…" indicator visibility** — the spinning indicator was being rendered at the bottom of the thread, below the input pill, where users had to scroll to see it. The thinking state now shows the animated spinner inline next to the AI sparkle avatar where the response will appear, and the redundant bottom indicator is removed. Closes a UX bug reported during initial smoke-testing of `<AiChat />` from the package.
+
+ ## [0.2.0] — 2026-05-08
+
+ ### Added
+
+ - **`ConfigureAiChatOpts.rolePrompt`** — host-supplied persona prepended to the system blocks on every turn. Accepts a `string` for a static org-wide role or a `(ctx) => string | Promise<string>` function for per-request variation (e.g. different role per scope, locale, or A/B cohort). The role text is marked `cached: true` so it benefits from Anthropic's ephemeral prompt cache and Vertex's automatic prefix cache. Sent every turn (stateless API), but with caching the marginal cost after the first turn rounds to zero for typical 200–500 token roles.
+
+ ### Fixed
+
+ - **Cache marker cap (Claude provider)** — Anthropic limits `cache_control` markers to 4 per request. The provider now silently keeps the marker on the first 4 cached system blocks and drops the hint from any beyond, rather than letting the request fail. This guards against the natural growth of cached blocks once `rolePrompt` is in use (`role` + `system` + `schema` + `semantic-layer` + `scope-summary` would have been 5).
+
+ ## [0.1.1] — 2026-05-08
+
+ ### Fixed
+
+ - **`AGENT_NO_PRESENT` fallback** — when the model emits text but never calls `present()` (common for advisory questions like "what should we do to increase X?"), the agent loop now wraps the model's text as a single-block `paragraph_brief` `PresentPayload` instead of failing hard. The user always gets a usable answer; the genuine dead end (no tools AND no text) still surfaces the error. The `SELF_VERIFY_REQUIRED` gate is unaffected — it still fires only when the model explicitly calls `present()` with `raw_numbers` it might have hallucinated.
+
+ ## [0.1.0] — 2026-05-08
+
+ Initial public release on npm under `@firstlovecenter/ai-chat`.
+
+ ### Added
+
+ - **`configureAiChat({...})` runtime factory** — single entry point that wires the package's agent loop, narrators, providers, and three route factories with host-supplied ports. Returns `{ runAgent, routes, registries }`.
+ - **Six injected ports**:
+   - `PersistencePort` — domain-shaped CRUD for `chat_sessions`, `chat_messages`, `ai_settings`.
+   - `AuthPort<S>` — `requireAuth(req)` + `isSuperAdmin(scope)`.
+   - `ScopePort<S>` — `resolveScopeLabel` + `buildScopeSummary` (UI chip + system-prompt context).
+   - `ToolsPort` — host's tool registry + `buildSystemBlocks(ctx)`.
+   - `VertexPort` — `projectId`, `defaultLocation`, `auth: GoogleAuth` (host constructs; package never reads `process.env.GCP_*`), `modelIds`.
+   - `LoggerPort` — optional structured logging.
+ - **Three reference persistence adapters** behind the same `PersistencePort` contract:
+   - `createDrizzlePersistence(db)` — drizzle-orm MySQL (canonical table objects shipped from `@firstlovecenter/ai-chat/server/drizzle`).
+   - `createPrismaPersistence(prisma)` — Prisma client (schema fragment shipped at `prisma/chat-models.prisma` for hosts to paste).
+   - `createMemoryPersistence()` — in-memory adapter for the package's own contract tests (internal-only).
+ - **Three route factories** returning Next-compatible handlers:
+   - `agentCustom` — POST SSE handler (Custom chat protocol).
+   - `chatSessions` — list/POST + detail GET/PATCH/DELETE.
+   - `adminSettings` — GET/PATCH for `tool_provider`, `gcp_location`, `chat_interface` (validated against registries).
+ - **Lifecycle hooks** consumed by route factories so consuming hosts can plumb in project-specific concerns without forking:
+   - `RouteHooks<S>` (all routes): `onRequest` (before auth — shutdown gating returns 503), `onAuthenticated` (after auth — rate limiting returns 429).
+   - `AgentCustomHooks<S>` (SSE route only): adds `generateSessionId` (per-request session id used for `ToolContext.sessionId` and downstream resources like SQL view names), `onSessionStart` (per-request setup, e.g. `CREATE VIEW`), `onSessionEnd` (always-runs cleanup with `cause: 'complete' | 'error' | 'abort'`).
+ - **Two registries** the host's settings UI maps over:
+   - `toolProviders` — built-in `claude` + `gemini` Vertex AI adapters; extensible via `extraToolProviders`.
+   - `chatInterfaces` — `custom` (this package's UI) + `vercel` (placeholder for the Vercel AI SDK build, landing in v0.x).
+ - **Custom chat UI** — `AiChat` and `AnswerBlocks` exported from `@firstlovecenter/ai-chat/ui`. Logic lifted byte-for-byte from the original FLC implementation; only host-coupled imports were rewritten.
+ - **MIT license**, public `publishConfig`, peer-deps marked optional for `drizzle-orm` and `@prisma/client` so a host installing only one ORM never pulls the other.
+
+ ### Subpath exports
+
+ - `@firstlovecenter/ai-chat/server` — runtime factory, agent loop, ports, registries, types.
+ - `@firstlovecenter/ai-chat/server/drizzle` — drizzle adapter + canonical table objects.
+ - `@firstlovecenter/ai-chat/server/prisma` — prisma adapter + schema fragment string.
+ - `@firstlovecenter/ai-chat/ui` — `AiChat`, `AnswerBlocks`, registries.
+
+ ### Build
+
+ - Dual ESM + CJS output via `tsup` with full `.d.ts` (and `.d.cts`) generation.
+ - UI bundle gets a post-build `'use client';` directive injection (tsup otherwise strips module-level directives during bundling, breaking RSC consumers that import the UI from a server file).
+
+ [0.2.1]: https://github.com/firstlovecenter/flc-ai-chat/compare/v0.2.0...v0.2.1
+ [0.2.0]: https://github.com/firstlovecenter/flc-ai-chat/compare/v0.1.1...v0.2.0
+ [0.1.1]: https://github.com/firstlovecenter/flc-ai-chat/compare/v0.1.0...v0.1.1
+ [0.1.0]: https://github.com/firstlovecenter/flc-ai-chat/releases/tag/v0.1.0
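The `AGENT_NO_PRESENT` fallback in the 0.1.1 entry above is the most algorithmic of the changelog items. A minimal self-contained sketch of the described behavior — the `PresentPayload` shape is reduced to essentials and the function name `resolveFinalAnswer` is a hypothetical stand-in, not the package's actual internal:

```typescript
// Sketch of the 0.1.1 fallback: model text without a present() call is
// wrapped as a single-block paragraph_brief; no tools AND no text still
// surfaces the error. Types here are assumed reductions.
type PresentPayload = { kind: "paragraph_brief"; blocks: { text: string }[] };

function resolveFinalAnswer(
  presented: PresentPayload | null, // what the model passed to present(), if anything
  modelText: string                 // free text the model emitted this turn
): PresentPayload {
  if (presented) return presented;
  if (modelText.trim()) {
    // Advisory-style answer in prose: wrap it so the user still gets a result.
    return { kind: "paragraph_brief", blocks: [{ text: modelText }] };
  }
  // Genuine dead end: no present() call and no text.
  throw new Error("AGENT_NO_PRESENT");
}
```

Note the `SELF_VERIFY_REQUIRED` gate is orthogonal: it applies only when the model *does* call `present()` with `raw_numbers`, so it never intersects this path.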
@@ -146,11 +146,19 @@ var ClaudeToolProvider = class {
     patchVertexBuildRequestSync(this.client);
   }
   async runTurn(input) {
-    const system = input.system.map((b) => ({
-      type: "text",
-      text: b.text,
-      ...b.cached ? { cache_control: { type: "ephemeral" } } : {}
-    }));
+    let cacheMarkersUsed = 0;
+    const MAX_CACHE_MARKERS = 4;
+    const system = input.system.map((b) => {
+      if (b.cached && cacheMarkersUsed < MAX_CACHE_MARKERS) {
+        cacheMarkersUsed++;
+        return {
+          type: "text",
+          text: b.text,
+          cache_control: { type: "ephemeral" }
+        };
+      }
+      return { type: "text", text: b.text };
+    });
     const messages = toAnthropicMessages(input.messages);
     const response = await this.client.messages.create({
       model: this.modelId,
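The capping logic in the hunk above can be exercised in isolation. A self-contained sketch with the block shapes reduced to what this hunk touches (the real provider types carry more fields):

```typescript
// Keep Anthropic's `cache_control` marker on at most the first 4 cached
// system blocks; extra cached blocks lose only the cache hint, not the text.
type SystemBlock = { text: string; cached?: boolean };
type AnthropicTextBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

const MAX_CACHE_MARKERS = 4; // Anthropic's per-request limit

function toAnthropicSystem(blocks: SystemBlock[]): AnthropicTextBlock[] {
  let used = 0;
  return blocks.map((b) => {
    if (b.cached && used < MAX_CACHE_MARKERS) {
      used++;
      return { type: "text", text: b.text, cache_control: { type: "ephemeral" } };
    }
    return { type: "text", text: b.text };
  });
}
```

With five cached blocks (`role` + `system` + `schema` + `semantic-layer` + `scope-summary`), the fifth is sent uncached instead of failing the whole request.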
@@ -1393,6 +1401,16 @@ function configureAiChat(opts) {
   ];
   const getProvider = (id) => toolProviders2.find((p) => p.id === id) ?? getToolProvider(id);
   const chatInterfaces = opts.chatInterfaces ?? BUILTIN_CHAT_INTERFACE_IDS.map((id) => ({ id }));
+  const tools = opts.rolePrompt ? {
+    tools: opts.tools.tools,
+    async buildSystemBlocks(ctx) {
+      const inner = await opts.tools.buildSystemBlocks(ctx);
+      const rolePrompt = opts.rolePrompt;
+      const role = typeof rolePrompt === "function" ? await rolePrompt(ctx) : rolePrompt;
+      if (!role || !role.trim()) return inner;
+      return [{ text: role, cached: true }, ...inner];
+    }
+  } : opts.tools;
   const runAgentBound = async ({
     question,
     ctx,
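The hunk above decorates the host's `ToolsPort` rather than mutating it. The same pattern, extracted into a typed standalone helper — the `ToolsPort`/`Ctx` shapes here are reduced to what this wrapper needs, and `withRolePrompt` is a hypothetical name for illustration:

```typescript
// Decorator sketch: prepend a (cached) role block to whatever system
// blocks the host's ToolsPort builds, leaving the port otherwise intact.
type Ctx = { scope: string };
type Block = { text: string; cached?: boolean };
type ToolsPort = {
  tools: unknown[];
  buildSystemBlocks(ctx: Ctx): Promise<Block[]>;
};
type RolePrompt = string | ((ctx: Ctx) => string | Promise<string>);

function withRolePrompt(inner: ToolsPort, rolePrompt?: RolePrompt): ToolsPort {
  if (!rolePrompt) return inner; // no option set: hand back the original port
  return {
    tools: inner.tools,
    async buildSystemBlocks(ctx) {
      const blocks = await inner.buildSystemBlocks(ctx);
      const role =
        typeof rolePrompt === "function" ? await rolePrompt(ctx) : rolePrompt;
      if (!role || !role.trim()) return blocks;
      // Role goes first, marked cached, so providers can put it at the
      // stable prompt prefix where ephemeral/prefix caches apply.
      return [{ text: role, cached: true }, ...blocks];
    },
  };
}
```

Keeping the wrapper stateless means the per-request function variant (different role per scope, locale, or cohort) costs nothing extra on the static-string path.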
@@ -1415,11 +1433,11 @@ function configureAiChat(opts) {
       modelIds: opts.vertex.modelIds,
       location: location ?? settings.gcpLocation
     });
-    const systemBlocks = await opts.tools.buildSystemBlocks(ctx);
+    const systemBlocks = await tools.buildSystemBlocks(ctx);
     const input = {
       question,
       ctx,
-      tools: opts.tools.tools,
+      tools: tools.tools,
       systemBlocks,
       provider,
       maxToolTurns,
@@ -1435,7 +1453,7 @@ function configureAiChat(opts) {
     persistence: opts.persistence,
     auth: opts.auth,
     scope: opts.scope,
-    tools: opts.tools,
+    tools,
     vertex: opts.vertex,
     logger: opts.logger,
     resolveNarratorId: opts.resolveNarratorId,