chat 4.26.0 → 4.28.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (50)
  1. package/dist/{chunk-OPV5U4WG.js → chunk-V25FKIIL.js} +44 -1
  2. package/dist/index.d.ts +485 -33
  3. package/dist/index.js +862 -135
  4. package/dist/{jsx-runtime-DxATbnrP.d.ts → jsx-runtime-DxGwoLu2.d.ts} +49 -5
  5. package/dist/jsx-runtime.d.ts +1 -1
  6. package/dist/jsx-runtime.js +1 -1
  7. package/docs/actions.mdx +52 -1
  8. package/docs/adapters.mdx +43 -37
  9. package/docs/api/cards.mdx +4 -0
  10. package/docs/api/chat.mdx +172 -6
  11. package/docs/api/index.mdx +2 -0
  12. package/docs/api/markdown.mdx +28 -5
  13. package/docs/api/message.mdx +58 -1
  14. package/docs/api/meta.json +2 -0
  15. package/docs/api/modals.mdx +50 -0
  16. package/docs/api/postable-message.mdx +55 -1
  17. package/docs/api/thread.mdx +33 -3
  18. package/docs/api/transcripts.mdx +220 -0
  19. package/docs/cards.mdx +6 -0
  20. package/docs/concurrency.mdx +4 -0
  21. package/docs/contributing/building.mdx +73 -1
  22. package/docs/contributing/publishing.mdx +33 -0
  23. package/docs/conversation-history.mdx +137 -0
  24. package/docs/direct-messages.mdx +13 -4
  25. package/docs/ephemeral-messages.mdx +1 -1
  26. package/docs/error-handling.mdx +15 -3
  27. package/docs/files.mdx +2 -1
  28. package/docs/getting-started.mdx +1 -11
  29. package/docs/index.mdx +7 -5
  30. package/docs/meta.json +14 -5
  31. package/docs/modals.mdx +97 -1
  32. package/docs/posting-messages.mdx +7 -3
  33. package/docs/streaming.mdx +74 -18
  34. package/docs/subject.mdx +53 -0
  35. package/docs/threads-messages-channels.mdx +43 -0
  36. package/docs/usage.mdx +11 -2
  37. package/package.json +3 -2
  38. package/resources/guides/create-a-discord-support-bot-with-nuxt-and-redis.md +180 -0
  39. package/resources/guides/how-to-build-a-slack-bot-with-next-js-and-redis.md +134 -0
  40. package/resources/guides/how-to-build-an-ai-agent-for-slack-with-chat-sdk-and-ai-sdk.md +220 -0
  41. package/resources/guides/run-and-track-deploys-from-slack.md +270 -0
  42. package/resources/guides/ship-a-github-code-review-bot-with-hono-and-redis.md +147 -0
  43. package/resources/guides/triage-form-submissions-with-chat-sdk.md +178 -0
  44. package/resources/templates.json +19 -0
  45. package/docs/guides/code-review-hono.mdx +0 -241
  46. package/docs/guides/discord-nuxt.mdx +0 -227
  47. package/docs/guides/durable-chat-sessions-nextjs.mdx +0 -337
  48. package/docs/guides/meta.json +0 -10
  49. package/docs/guides/scheduled-posts-neon.mdx +0 -447
  50. package/docs/guides/slack-nextjs.mdx +0 -234
@@ -0,0 +1,220 @@ package/resources/guides/how-to-build-an-ai-agent-for-slack-with-chat-sdk-and-ai-sdk.md
# How to build an AI agent for Slack with Chat SDK and AI SDK

**Author:** Ben Sabic

---

You can build an AI-powered Slack agent that responds to mentions, maintains conversation history, and calls tools autonomously using Chat SDK and AI SDK. Chat SDK handles the platform integration (webhooks, message formatting, thread tracking), while AI SDK's `ToolLoopAgent` manages the reasoning loop that lets your agent call tools and act on results. Together with Vercel AI Gateway and Redis for state, you get a production-ready Slack agent without managing infrastructure or juggling provider SDKs.

This guide walks you through building a Slack agent with Chat SDK, AI SDK's `ToolLoopAgent`, and Claude via the [Vercel AI Gateway](https://vercel.com/ai-gateway). You'll wire up streaming responses, tool calling, and multi-turn conversation history, then scale your tool set for production with toolpick.
## Prerequisites

Before you begin, make sure you have:

* Node.js 18+
* [pnpm](https://pnpm.io/) (or npm/yarn)
* A Slack workspace where you can install apps
* A Redis instance (local or hosted, such as [Upstash](https://vercel.com/marketplace/upstash))
* A [Vercel account](https://vercel.com/signup) with an AI Gateway API key
## How it works

Chat SDK is a unified TypeScript SDK for building chatbots across Slack, Teams, Discord, and other platforms. You register event handlers (like `onNewMention` and `onSubscribedMessage`), and the SDK routes incoming webhooks to them. The Slack adapter handles webhook verification, message parsing, and the Slack API. The Redis state adapter tracks which threads your bot has subscribed to and manages distributed locking for concurrent message handling.

AI SDK's `ToolLoopAgent` wraps a language model with tools and runs an autonomous loop: the model generates text or calls a tool, the SDK executes the tool, feeds the result back, and repeats until the model finishes. When you pass a model string like `"anthropic/claude-sonnet-4.6"` and host your application on Vercel, AI SDK routes the request through the AI Gateway automatically.

Chat SDK accepts any `AsyncIterable<string>` as a message, so you can pass the agent's `fullStream` directly to `thread.post()` for real-time streaming in Slack.
## Steps

### 1. Scaffold the project, install dependencies, and add the Vercel Plugin

Create a new Next.js app and add the Chat SDK, AI SDK, and adapter packages:

```bash
npx create-next-app@latest my-slack-agent --typescript --app
cd my-slack-agent
pnpm add chat @chat-adapter/slack @chat-adapter/state-redis ai zod
```

The `chat` package is the Chat SDK core. The `@chat-adapter/slack` and `@chat-adapter/state-redis` packages are the [Slack platform adapter](https://chat-sdk.dev/adapters/slack) and the [Redis state adapter](https://chat-sdk.dev/adapters/redis). The `ai` package is the AI SDK, which includes the AI Gateway provider and `ToolLoopAgent`. `zod` is used to define tool input schemas.

The [Vercel Plugin](https://vercel.com/docs/agent-resources/vercel-plugin) equips your AI coding agent (e.g., Claude Code) with skills, specialist agents, slash commands, and more:

```bash
npx plugins add vercel/vercel-plugin
```
### 2. Create a Slack app

Go to [api.slack.com/apps](https://api.slack.com/apps), click **Create New App**, then **From a manifest**.

Select your workspace and paste this manifest:

```yaml
display_information:
  name: AI Agent
  description: An AI agent built with Chat SDK and AI SDK
features:
  bot_user:
    display_name: AI Agent
    always_online: true
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - channels:history
      - channels:read
      - chat:write
      - groups:history
      - groups:read
      - im:history
      - im:read
      - mpim:history
      - mpim:read
      - reactions:read
      - reactions:write
      - users:read
settings:
  event_subscriptions:
    request_url: https://your-domain.com/api/webhooks/slack
    bot_events:
      - app_mention
      - message.channels
      - message.groups
      - message.im
      - message.mpim
  interactivity:
    is_enabled: true
    request_url: https://your-domain.com/api/webhooks/slack
  org_deploy_enabled: false
  socket_mode_enabled: false
  token_rotation_enabled: false
```

After creating the app:

1. Go to **Install App** and install the app to your workspace
2. Go to **OAuth & Permissions** > **OAuth Tokens** and copy the **Bot User OAuth Token**
3. Go to **Basic Information** > **App Credentials** and copy the **Signing Secret**

You'll replace the `request_url` placeholders with your real domain after deploying (or a tunnel URL for local testing).
### 3. Configure environment variables

Create a `.env.local` file in your project root:

```bash
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_SIGNING_SECRET=your-signing-secret
REDIS_URL=redis://localhost:6379
AI_GATEWAY_API_KEY=your-ai-gateway-api-key
```

The Slack adapter reads `SLACK_BOT_TOKEN` and `SLACK_SIGNING_SECRET` automatically, and the Redis state adapter reads `REDIS_URL`. AI SDK uses `AI_GATEWAY_API_KEY` to authenticate with the Vercel AI Gateway; alternatively, use [OIDC authentication](https://ai-sdk.dev/providers/ai-sdk-providers/ai-gateway#oidc-authentication-vercel-deployments).

To create an AI Gateway API key, open your [Vercel dashboard](https://vercel.com), go to **AI Gateway**, and click **Create an API Key**.
### 4. Define your agent's tools

Create `lib/tools.ts` with the tools your agent can call. This example defines a weather tool and a docs tool, but you can add any tools your use case requires:

```ts
import { tool } from "ai";
import { z } from "zod";

export const tools = {
  getWeather: tool({
    description: "Get the current weather for a location",
    inputSchema: z.object({
      location: z.string().describe("City name, e.g. San Francisco"),
    }),
    execute: async ({ location }) => {
      // Replace with a real weather API call
      const response = await fetch(
        `https://api.weatherapi.com/v1/current.json?key=${process.env.WEATHER_API_KEY}&q=${encodeURIComponent(location)}`
      );
      const data = await response.json();
      return {
        location,
        temperature: data.current.temp_f,
        condition: data.current.condition.text,
      };
    },
  }),
  searchDocs: tool({
    description: "Search the company documentation for a topic",
    inputSchema: z.object({
      query: z.string().describe("The search query"),
    }),
    execute: async ({ query }) => {
      // Replace with your actual search implementation
      return { results: [`Result for: ${query}`] };
    },
  }),
};
```

Each tool has a `description` (which tells the model when to use it), an `inputSchema` (a Zod schema that the model fills in), and an `execute` function that runs when the tool is called.
### 5. Create the agent and bot

Create `lib/bot.ts` with a `ToolLoopAgent` and a `Chat` instance:

```ts
import { Chat, toAiMessages } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";
import { ToolLoopAgent } from "ai";
import { tools } from "./tools";

const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6",
  instructions:
    "You are a helpful AI assistant in a Slack workspace. " +
    "Answer questions clearly and use your tools when you need " +
    "real-time data. Keep responses concise and well-formatted for chat.",
  tools,
});

export const bot = new Chat({
  userName: "ai-agent",
  adapters: {
    slack: createSlackAdapter(),
  },
  state: createRedisState(),
});

// Handle first-time mentions
bot.onNewMention(async (thread, message) => {
  await thread.subscribe();
  const result = await agent.stream({ prompt: message.text });
  await thread.post(result.fullStream);
});

// Handle follow-up messages in subscribed threads
bot.onSubscribedMessage(async (thread, message) => {
  const allMessages = [];
  for await (const msg of thread.allMessages) {
    allMessages.push(msg);
  }
  const history = await toAiMessages(allMessages);
  const result = await agent.stream({ messages: history });
  await thread.post(result.fullStream);
});
```

When someone @mentions the bot, `onNewMention` fires. The handler subscribes to the thread (to track future messages in it) and streams the agent's response. For follow-up messages, `onSubscribedMessage` retrieves the full thread history with `thread.allMessages`, converts it to the AI SDK message format with `toAiMessages`, and passes it to the agent so it has the complete conversation context.

`fullStream` is preferred over `textStream` because it preserves paragraph breaks between tool-calling steps. Chat SDK auto-detects the stream type and uses Slack's native streaming API for real-time updates.
### 6. Wire up the webhook route

Create the API route at `app/api/webhooks/[platform]/route.ts`:

```ts
import { after } from "next/server";
import { bot } from "@/lib/bot";

type Platform = keyof typeof bot.webhooks;

export async function POST(
  request: Request,
  context: RouteContext<"/api/webhooks/[platform]">
) {
  const { platform } = await context.params;
  const handler = bot.webhooks[platform as Platform];

  if (!handler) {
    return new Response(`Unknown platform: ${platform}`, { status: 404 });
  }

  return handler(request, {
    waitUntil: (task) => after(() => task),
  });
}
```

This creates a `POST /api/webhooks/slack` endpoint. The `waitUntil` option lets your event handlers finish processing after the HTTP response is sent, which is required on serverless platforms where the function would otherwise terminate early.
### 7. Test locally

1. Start the dev server:

   ```bash
   pnpm dev
   ```

2. Expose it with a tunnel:

   ```bash
   npx ngrok http 3000
   ```

3. Copy the tunnel URL (for example, `https://abc123.ngrok-free.dev`) and update both the **Event Subscriptions** and **Interactivity** Request URLs in your [Slack app settings](https://api.slack.com/apps) to `https://abc123.ngrok-free.dev/api/webhooks/slack`

4. Invite the bot to a channel (`/invite @AI Agent`)

5. @mention the bot with a question. You should see a streaming response appear in the thread. Try asking it to use one of your tools, such as "What's the weather in San Francisco?"
### 8. Deploy to Vercel

First, link your project and add your environment variables:

```bash
vercel link
vercel env add SLACK_BOT_TOKEN
vercel env add SLACK_SIGNING_SECRET
vercel env add REDIS_URL
vercel env add AI_GATEWAY_API_KEY
```

Alternatively, add them in the Vercel dashboard under **Settings** > **Environment Variables**.

Then deploy:

```bash
vercel
```

Update the **Event Subscriptions** and **Interactivity** Request URLs in your Slack app settings to your production URL, for example `https://my-slack-agent.vercel.app/api/webhooks/slack`.

When deployed to Vercel, AI Gateway supports OIDC-based authentication, so you can also authenticate without a static API key. See the [AI Gateway authentication docs](https://vercel.com/docs/ai-gateway/authentication-and-byok#oidc-tokens).
## Troubleshooting

### Bot doesn't respond to mentions

Check that your Slack app has the `app_mentions:read` scope and that the **Event Subscriptions** Request URL is correct. Slack sends a challenge request when you first set the URL, so your server must be running.

### Streaming appears choppy or delayed

Chat SDK uses Slack's native streaming API for smooth updates. If you're seeing issues, check that your Redis connection is stable, as the SDK uses distributed locks to manage concurrent messages.

### Tool calls fail silently

If the agent calls a tool but no result appears, check for errors in your tool's `execute` function. AI SDK surfaces tool execution errors back to the model, which may attempt to recover. Add error handling in your tools and check your server logs for details.
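One way to surface these failures — a sketch, not code from this guide's template — is to wrap each `execute` function so a thrown error comes back as a structured result the model can see. The `withErrorReporting` name is hypothetical:

```typescript
// Hypothetical helper: wraps a tool's execute function so a thrown error
// becomes a structured { error } result instead of a silent failure.
type Execute<In, Out> = (input: In) => Promise<Out>;

function withErrorReporting<In, Out>(
  execute: Execute<In, Out>
): Execute<In, Out | { error: string }> {
  return async (input: In) => {
    try {
      return await execute(input);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      // Shows up in your server logs
      console.error("tool execution failed:", message);
      // Returned as the tool result, so the model can see what went wrong
      return { error: `Tool failed: ${message}` };
    }
  };
}

// Usage with a tool body that always fails:
const flaky = withErrorReporting(async (_: { q: string }) => {
  throw new Error("upstream API returned 500");
});
```

The pattern is the point: return the error as data so the model can react, rather than letting the tool call vanish.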
### Thread history grows too large

For long-running threads, the conversation history can exceed the model's context window. Consider limiting the number of messages you pass to the agent by slicing the history array or by using a summarization step for older messages.
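A minimal trimming helper might look like this (illustrative only; `MAX_HISTORY` and the message shape are assumptions, not part of Chat SDK's API):

```typescript
// Illustrative sketch: cap the history passed to the agent by keeping the
// first message (the original question) plus the most recent turns.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

const MAX_HISTORY = 30; // hypothetical cap; tune to your model's context window

function trimHistory(
  messages: ChatMessage[],
  max: number = MAX_HISTORY
): ChatMessage[] {
  if (messages.length <= max) return messages;
  // Head anchors the conversation; tail keeps the freshest context.
  return [messages[0], ...messages.slice(messages.length - (max - 1))];
}
```

You would call `trimHistory(history)` before passing the messages to `agent.stream`; a summarization step over the dropped middle section is the natural next refinement.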
## Scaling to many tools with toolpick

The agent in this guide has two tools. In production, a Slack agent often grows to 15, 20, or 30 tools as you integrate services like GitHub, [Linear](https://vercel.com/marketplace/linear), [Upstash](https://vercel.com/marketplace/upstash), calendars, and deploy pipelines. At that scale, every tool definition is sent to the model on every step, which increases token costs and makes it harder for the model to pick the right tool.

[toolpick](https://www.npmjs.com/package/toolpick) solves this by indexing your tools at startup and selecting only the most relevant ones for each step. It hooks into `ToolLoopAgent` via the `prepareStep` option, so you don't need to change your handler logic.

### Install toolpick

```bash
pnpm add toolpick
```

### Create a tool index

Build an index from your full tool set. toolpick uses a combination of keyword matching and semantic embeddings to find the best tools for each step:

```ts
import { createToolIndex } from "toolpick";

const toolIndex = createToolIndex(tools, {
  embeddingModel: "openai/text-embedding-3-small",
});
```

For higher accuracy with vague queries (like "ship it" or "ping the team"), add a re-ranker model that uses a cheap LLM to pick the final candidates:

```ts
const toolIndex = createToolIndex(tools, {
  embeddingModel: "openai/text-embedding-3-small",
  rerankerModel: "openai/gpt-4o-mini",
});
```
### Update your agent to use toolpick

Pass `toolIndex.prepareStep()` to your `ToolLoopAgent`. This sets `activeTools` on each step, so the model only sees the tools it needs, while all tools remain available for execution:

```ts
const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6",
  instructions: "...",
  tools,
  prepareStep: toolIndex.prepareStep(),
});
```

If the model can't find a relevant tool in the current selection, toolpick automatically moves to the next page of results. After two misses, it exposes all tools as a fallback, so your agent never gets stuck unable to find the right tool.
### Enrich descriptions and cache embeddings

For an extra accuracy boost, enable `enrichDescriptions` to expand your tool descriptions with synonyms and alternative phrasings. This runs a one-time LLM call during `warmUp()` at server startup. You can also persist the computed embeddings to disk with `fileCache` so subsequent restarts skip the embedding API call entirely:

```ts
import { createToolIndex, fileCache } from "toolpick";

const toolIndex = createToolIndex(tools, {
  embeddingModel: "openai/text-embedding-3-small",
  rerankerModel: "openai/gpt-4o-mini",
  enrichDescriptions: true,
  embeddingCache: fileCache(".toolpick-cache.json"),
});

await toolIndex.warmUp();
```

This setup is optional for agents with a handful of tools, but becomes worthwhile as your tool set grows. The per-step cost of re-ranking with `gpt-4o-mini` is approximately $0.0001, which is negligible compared to the token savings from sending fewer tool definitions to the primary model.
## How to add Teams, Discord, or other platforms

Chat SDK supports multiple platforms from a single codebase. The event handlers and agent logic you've already defined work identically across all of them, since the SDK normalizes messages, threads, and reactions into a consistent format.

To add Microsoft Teams or another platform, register an additional adapter:

```ts
import { createSlackAdapter } from "@chat-adapter/slack";
import { createTeamsAdapter } from "@chat-adapter/teams";

export const bot = new Chat({
  adapters: {
    slack: createSlackAdapter(),
    teams: createTeamsAdapter(),
  },
  state,
  userName: "ai-agent",
});
```

The webhook route at `app/api/webhooks/[platform]/route.ts` already uses a dynamic platform segment, so Teams webhooks are handled at `/api/webhooks/teams` with no additional routing code.

Streaming behavior varies by platform. Slack uses its native streaming API for smooth real-time updates, while Teams, Discord, and Google Chat fall back to a post-then-edit pattern that throttles updates to avoid rate limits. You can adjust the update interval with the `streamingUpdateIntervalMs` option when creating your `Chat` instance.
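For example, a bot tuned for slower post-then-edit updates might be configured like this (a sketch based on the option named above; check the Chat SDK docs for the exact signature):

```typescript
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";

// Sketch: throttle post-then-edit streaming updates to roughly one edit
// every 2 seconds on platforms without native streaming. The option name
// comes from the text above; verify it against the Chat SDK reference.
export const bot = new Chat({
  userName: "ai-agent",
  adapters: { slack: createSlackAdapter() },
  state: createRedisState(),
  streamingUpdateIntervalMs: 2000,
});
```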
See the [Chat SDK adapter directory](https://chat-sdk.dev/adapters) for the full list of supported platforms.

## Related resources

* [Chat SDK streaming](https://chat-sdk.dev/docs/streaming)
* [Chat SDK actions](https://chat-sdk.dev/docs/actions) and [cards](https://chat-sdk.dev/docs/cards)
* [AI SDK agent documentation](https://ai-sdk.dev/docs/agents/building-agents)
* [AI Gateway documentation](https://vercel.com/docs/ai-gateway)
* [toolpick documentation](https://github.com/pontusab/toolpick)

---

[View full KB sitemap](/kb/sitemap.md)
@@ -0,0 +1,270 @@ package/resources/guides/run-and-track-deploys-from-slack.md
# Run and track deploys from Slack

**Author:** Ben Sabic

---

Build a Slack bot that orchestrates your entire deployment lifecycle in the workspace your team already uses daily.

This guide walks you through a Slack bot that orchestrates the entire deploy lifecycle from a single slash command. Type `/deploy staging` and the bot:

* Dispatches a GitHub Actions workflow
* Polls the run until it completes
* Comments on the relevant PR(s)
* Updates linked Linear issue(s)
* Posts a summary card back to Slack

For production deploys, the bot gates the workflow with an approval step, so the deploy proceeds only after an authorized team member approves it.

The bot is built with [Chat SDK](https://chat-sdk.dev) and [Vercel Workflow](https://vercel.com/workflow). Chat SDK handles the Slack interaction layer (cards, buttons, modals, and slash commands), while Vercel Workflow handles stateful orchestration (pausing for approval, polling GitHub, and resuming when events arrive). You write the deploy pipeline as a single function that pauses and resumes over minutes or hours without a database or state machine.

Deploy the template now, or read on for a deeper look at how it all works.
## Quick start with an AI coding agent

If you're working with an AI coding agent like Claude Code or Cursor, you can clone the template and hand off implementation with this prompt:

```text
I want to build a deploy bot for Slack using Chat SDK and Vercel Workflow. Clone the template repo at https://github.com/vercel-labs/chat-sdk-deploy-bot, install dependencies with pnpm, and walk me through setting up the environment variables in .env.local. I need a Slack app, a GitHub fine-grained personal access token with Actions (read/write), Contents (read), Issues (write), and Pull requests (read) permissions, and Redis (Upstash) configured. After setup, help me deploy it to Vercel and test the /deploy slash command. When searching for information, check for applicable skill(s) first and review local documentation.
```

### Vercel Plugin

Turn your agent into a Vercel expert with this [plugin](https://vercel.com/docs/agent-resources/vercel-plugin). The [Chat SDK](https://skills.sh/vercel/chat/chat-sdk) and [Workflow](https://skills.sh/vercel/workflow/workflow) skills are both included.

```bash
npx plugins add vercel/vercel-plugin
```
## Setup and deployment

### What you need before deploying

You'll need accounts with these services:

* **Slack** for the bot interface. Create a new app at [api.slack.com/apps](https://api.slack.com/apps).
* **GitHub** for workflow dispatch. You'll need a fine-grained personal access token for the target repository.
* **Redis** for Chat SDK state and Vercel Workflow. Any Redis provider works. [Upstash](https://upstash.com) supports serverless deployments and has a free tier.
* **Linear** (optional) for issue tracking. Set `LINEAR_API_KEY` to enable it.
### Configure your Slack app

1. Create a new Slack app from a manifest at [api.slack.com/apps](https://api.slack.com/apps). Use the [slack-manifest.json](https://github.com/vercel-labs/chat-sdk-deploy-bot/blob/main/slack-manifest.json) file included in the template repo. Replace the `https://example.com` URLs with your production domain (e.g. `https://your-app.vercel.app/api/webhooks/slack`).
2. Install the app in your workspace and copy the **Bot User OAuth Token**.
3. Copy the **Signing Secret** from the **Basic Information** page.
### Configure GitHub

1. Create a fine-grained [personal access token](https://github.com/settings/tokens) for the target repository with these permissions:

   * Actions: read and write
   * Contents: read
   * Issues: write
   * Pull requests: read

2. Configure the token for a repository that has a workflow triggered with `workflow_dispatch`. Here's an example:

```yaml
name: Deploy

on:
  workflow_dispatch:
    inputs:
      environment:
        description: Target environment
        required: true
        type: choice
        options:
          - staging
          - production
      deploy_id:
        description: Optional deploy correlation ID
        required: false
        type: string

run-name: Deploy ${{ inputs.environment }} (${{ inputs.deploy_id || github.sha }})
```
+
81
+ The `deploy_id` input is optional, but including it in `run-name` helps the bot reliably match the run it dispatched against other concurrent runs.
82
+
83
+ If you want the bot to comment on GitHub PRs as a thread (with webhook-driven replies):
84
+
85
+ 1. Add a repository webhook pointing at `https://<your-domain>/api/webhooks/github`
86
+
87
+ 2. Set the content type to `application/json`
88
+
89
+ 3. Use the same secret as `GITHUB_WEBHOOK_SECRET`
90
+
91
+ 4. Subscribe to `issue_comment` and `pull_request_review_comment` events
92
+
93
+
94
### Configure Linear (optional)

Set `LINEAR_API_KEY` to enable Linear integration. No separate webhook setup is required.

The bot extracts issue keys from branch names and commit messages using a team prefix (defaults to `ENG`, configurable via `LINEAR_TEAM_PREFIX`). On successful staging deploys, the bot comments on linked issues. On successful production deploys, it comments and transitions issues to the state configured in `LINEAR_PRODUCTION_STATE` (defaults to `Done`).
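Issue-key extraction of this kind can be sketched with a prefix-anchored regular expression (illustrative only; the template's actual implementation may differ):

```typescript
// Illustrative sketch: find Linear issue keys like ENG-123 in branch names
// ("eng-123-fix-login") and commit messages ("Fix login flow (ENG-123)").
function extractIssueKeys(text: string, teamPrefix = "ENG"): string[] {
  const pattern = new RegExp(`\\b${teamPrefix}-(\\d+)\\b`, "gi");
  const keys = new Set<string>(); // dedupe while preserving first-seen order
  for (const match of text.matchAll(pattern)) {
    keys.add(`${teamPrefix.toUpperCase()}-${match[1]}`);
  }
  return [...keys];
}
```

The case-insensitive flag matters because branch names are usually lowercase while issue keys in commit messages are usually uppercase.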
For the bot to know which commits are new in each deploy, your deploy pipeline must maintain four git tags in the target repo:

* `deploy/staging/previous`
* `deploy/staging/latest`
* `deploy/production/previous`
* `deploy/production/latest`

The bot compares `previous` to `latest` to find the commit range. It doesn't create or move these tags itself, so your CI pipeline should update them as part of the deploy process. If the tags don't exist, the bot skips Linear updates rather than guessing.
### Environment variables

| Variable | Required | Purpose |
| --- | --- | --- |
| `SLACK_BOT_TOKEN` | Yes | Bot User OAuth Token (`xoxb-...`) |
| `SLACK_SIGNING_SECRET` | Yes | Request verification; from the **Basic Information** page |
| `GITHUB_TOKEN` | Yes | Fine-grained personal access token (`github_pat_...`) |
| `GITHUB_WEBHOOK_SECRET` | Yes | Secret for verifying GitHub webhook payloads |
| `GITHUB_REPO_OWNER` | Yes | Repository owner or organization |
| `GITHUB_REPO_NAME` | Yes | Repository name |
| `GITHUB_WORKFLOW_ID` | Yes | Workflow filename (e.g. `deploy.yml`) or numeric ID |
| `REDIS_URL` | Yes | Redis connection string |
| `LINEAR_API_KEY` | No | Enables Linear integration (`lin_api_...`) |
| `LINEAR_TEAM_PREFIX` | No | Issue key prefix (default: `ENG`) |
| `LINEAR_PRODUCTION_STATE` | No | State to transition prod issues to (default: `Done`) |
| `DEPLOY_PROD_ALLOWED` | No | Comma-separated Slack user IDs allowed to trigger prod deploys |
| `DEPLOY_PROD_APPROVERS` | No | Comma-separated Slack user IDs allowed to approve prod deploys |

If `DEPLOY_PROD_ALLOWED` is empty or unset, nobody can trigger production deploys. If `DEPLOY_PROD_APPROVERS` is empty or unset, nobody can approve them. Staging deploys are available to everyone.
### Deploy to Vercel

[Deploy the bot with one click](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fvercel-labs%2Fchat-sdk-deploy-bot&env=SLACK_BOT_TOKEN,SLACK_SIGNING_SECRET,GITHUB_TOKEN,GITHUB_WEBHOOK_SECRET,GITHUB_REPO_OWNER,GITHUB_REPO_NAME,GITHUB_WORKFLOW_ID,REDIS_URL), or clone the repo and deploy manually:

```bash
git clone https://github.com/vercel-labs/chat-sdk-deploy-bot.git
cd chat-sdk-deploy-bot
pnpm install
vercel
```

After deploying, update your Slack app's request URLs to point to your production domain: `https://<your-vercel-domain>/api/webhooks/slack`.

### Test the slash command

Open Slack and type:

```text
/deploy staging
```

The bot should post a deploy card to the channel and dispatch your GitHub Actions workflow. You'll see status updates in the Slack thread as the run progresses, followed by a summary card when it completes.

### Local development

```bash
git clone https://github.com/vercel-labs/chat-sdk-deploy-bot.git
cd chat-sdk-deploy-bot
pnpm install
cp .env.example .env.local
pnpm dev
```

This starts a Next.js dev server. To receive Slack webhooks locally, use [ngrok](https://ngrok.com) to create a public tunnel:

```bash
ngrok http 3000
```

Then update your Slack app's request URLs to the ngrok URL (e.g. `https://abc123.ngrok-free.dev/api/webhooks/slack`).
## How the deploy bot works

The bot has three interfaces: Slack for user interaction, GitHub for dispatching and monitoring workflows, and (optionally) Linear for issue tracking. Here's the flow:

1. A user types `/deploy staging`, `/deploy production`, or `/deploy` (which opens a modal with environment and branch options)
2. For staging deploys, the bot posts a deploy card to Slack and immediately dispatches a GitHub Actions workflow
3. For production deploys, the bot adds Approve and Cancel buttons to the card and pauses. The workflow only continues if an authorized approver clicks Approve
4. Once dispatched, the bot polls the GitHub Actions run every 5 seconds for up to 60 minutes, updating a status message in Slack as it progresses
5. When the run completes, the bot comments on associated GitHub PRs and (if Linear is enabled) comments on linked issues and transitions production issues to your configured done state
6. The bot posts a final summary card to Slack with the environment, branch, commit, duration, linked issues, and a link to the workflow run

[Vercel Workflow](https://vercel.com/workflow) makes this possible. A Vercel Workflow function can suspend itself mid-execution and resume later with full state preserved. The approval gate and the polling loop are both regular code: the function pauses while waiting for a button click, resumes when it arrives, then loops while polling GitHub. No cron jobs, no queues, no external state store.
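The polling step in that flow is ordinary code inside the workflow function. A simplified standalone sketch — with hypothetical names and an injected `fetchRun`/`sleep` so the loop is easy to test — might look like:

```typescript
type RunStatus = {
  status: "queued" | "in_progress" | "completed";
  conclusion?: string;
};

// Simplified sketch of the poll-until-complete step. The real workflow
// polls GitHub every 5 seconds for up to 60 minutes; here the fetcher,
// interval, and sleep are parameters so the loop runs anywhere.
async function pollUntilComplete(
  fetchRun: () => Promise<RunStatus>,
  opts: { intervalMs: number; timeoutMs: number } = {
    intervalMs: 5_000,
    timeoutMs: 60 * 60_000,
  },
  sleep: (ms: number) => Promise<void> = (ms) =>
    new Promise((resolve) => setTimeout(resolve, ms))
): Promise<RunStatus> {
  const deadline = Date.now() + opts.timeoutMs;
  while (true) {
    const run = await fetchRun();
    if (run.status === "completed") return run;
    if (Date.now() >= deadline) {
      // Give up with a synthetic conclusion rather than looping forever
      return { status: "completed", conclusion: "timed_out" };
    }
    await sleep(opts.intervalMs);
  }
}
```

Inside a `"use workflow"` function, a loop like this can survive suspensions between iterations, which is what lets the bot track hour-long runs without external schedulers.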
## Code walkthrough

The template is a Next.js app. The bot logic lives in `lib/` (setup, handlers, and integrations) and `workflows/` (stateful deploy orchestration).

### Building the bot

The bot is a Chat SDK instance with adapters for Slack, GitHub, and optionally Linear, plus Redis-backed state:

```ts
import { Chat } from "chat";
import { createGitHubAdapter } from "@chat-adapter/github";
import { createLinearAdapter } from "@chat-adapter/linear";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";

const adapters = {
  github: createGitHubAdapter(),
  ...(LINEAR_ENABLED ? { linear: createLinearAdapter() } : {}),
  slack: createSlackAdapter(),
};

export const bot = new Chat<typeof adapters, DeployThreadState>({
  adapters,
  state: createRedisState(),
  userName: "deploy-bot",
}).registerSingleton();
```
+ Each deploy lives in a Slack thread with typed state (environment, branch, commit SHA, and the Slack user ID of whoever ran `/deploy`). This state is stored in Redis via Chat SDK's state adapter, so the approval handler and the workflow can coordinate without passing data through button payloads alone.
189
+
190
+ ### Slash command and permissions
191
+
192
+ The bot registers a `/deploy` slash command with two paths. If the user provides an argument (`/deploy staging` or `/deploy production`), the bot deploys immediately on the `main` branch. If no argument is given, the bot opens a modal where the user can pick an environment and optionally specify a branch:
193
+
194
```ts
bot.onSlashCommand("/deploy", async (event) => {
  const args = event.text.trim().toLowerCase();

  if (!args) {
    await event.openModal(
      Modal({
        callbackId: "deploy_form",
        children: [
          Select({
            id: "environment",
            label: "Environment",
            options: [
              SelectOption({ label: "Staging", value: "staging" }),
              SelectOption({ label: "Production", value: "production" }),
            ],
          }),
          TextInput({
            id: "branch",
            label: "Branch",
            optional: true,
            placeholder: "main",
          }),
        ],
        submitLabel: "Deploy",
        title: "Deploy",
      })
    );
    return;
  }

  const environment =
    args === "production" || args === "prod" ? "production" : "staging";

  // ... permission check, payload build, workflow start
});
```
195
+
196
+ The bot resolves the HEAD commit for the branch, posts a deploy card to Slack, and starts the Vercel Workflow. Staging deploys are open to everyone. Production deploys are gated by `DEPLOY_PROD_ALLOWED` (who can trigger) and `DEPLOY_PROD_APPROVERS` (who can approve). When a permission check fails, the bot sends an ephemeral message visible only to the user who tried.
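The permission check itself can be a simple lookup against a comma-separated allowlist in those environment variables. A minimal sketch, with helper names assumed (the template's actual implementation may differ):

```typescript
// Parse a comma-separated allowlist env var like "U01,U02" into user IDs.
// An unset or empty variable yields an empty list, which keeps production
// closed by default. Helper names here are illustrative.
const parseAllowlist = (csv: string | undefined): string[] =>
  (csv ?? "")
    .split(",")
    .map((id) => id.trim())
    .filter((id) => id.length > 0);

const canTriggerProduction = (
  userId: string,
  allowedCsv?: string
): boolean => parseAllowlist(allowedCsv).includes(userId);
```

On a failed check, the handler would reply with an ephemeral message (via the SDK's ephemeral-message support) so only the requester sees the rejection.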
197
+
198
+ ### The deploy workflow
199
+
200
+ The deploy workflow is the core of the bot.
201
+
202
+ It's a single function, marked with `"use workflow"`, that orchestrates the entire deploy lifecycle:
203
+
204
```ts
export const deployWorkflow = async (rawPayload: string) => {
  "use workflow";

  const parsed: unknown = JSON.parse(rawPayload);
  if (!isDeployWorkflowPayload(parsed)) {
    throw new Error("Invalid deploy workflow payload");
  }
  const { thread: serializedThread, ...deploy } = parsed;

  // Gate production behind approval
  if (deploy.environment === "production") {
    const approved = await runApprovalGate(serializedThread, deploy);
    if (!approved) return;
  }

  // Dispatch and find the GitHub Actions run
  const githubRunId = await findGitHubRun(serializedThread, deploy);
  if (githubRunId === null) return;

  // Poll until complete (up to 60 minutes)
  const result = await pollUntilComplete(deploy, githubRunId);

  // Notify Linear and GitHub
  const { prCount, resolved } = await notifyExternalSystems(
    serializedThread,
    deploy,
    result
  );

  // Post summary card
  await postFinalSummary(serializedThread, deploy, result, resolved, prCount);
};
```
205
+
206
+ This reads like sequential code, but it may take an hour to finish. Vercel Workflow handles the suspend-and-resume mechanics. When the function calls `sleep("5s")` during polling, or waits for a hook event during approval, it suspends. When the timer fires or the webhook arrives, it resumes exactly where it left off with all variables intact.
207
+
208
+ ### Approval gate
209
+
210
+ For production deploys, the workflow creates a hook and waits:
211
+
212
```ts
const runApprovalGate = async (serializedThread, deploy) => {
  const { workflowRunId } = getWorkflowMetadata();
  await postApprovalCard(serializedThread, deploy, workflowRunId);

  using hook = createHook<ApprovalPayload>({ token: workflowRunId });

  for await (const event of hook) {
    if (event.approved) return true;
    return false;
  }

  return false;
};
```
213
+
214
+ `createHook` registers a listener with a unique token (the workflow run ID). The workflow suspends at the `for await` loop. When someone clicks Approve in Slack, the action handler calls `resumeHook` with that same token, and the workflow picks up with `event.approved` set to `true`. If they click Cancel, it resumes with `false` and the workflow exits.
215
+
216
+ Only the person who triggered the deploy can cancel it. Anyone in the `DEPLOY_PROD_APPROVERS` list can approve.
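Those two rules reduce to a pair of pure checks on the button click, before the handler resumes the hook. A sketch (helper names and the exact `resumeHook` call shape are assumptions, not the template's verbatim code):

```typescript
// Approve: any user on the approvers list. Cancel: only the triggerer.
// These pure checks mirror the rules above; names are illustrative.
const canApprove = (userId: string, approvers: string[]): boolean =>
  approvers.includes(userId);

const canCancel = (userId: string, triggeredBy: string): boolean =>
  userId === triggeredBy;

// In the Slack action handler (sketch; resumeHook call shape assumed):
//   if (action === "approve" && canApprove(user, approvers)) {
//     await resumeHook({ token: workflowRunId, payload: { approved: true } });
//   } else if (action === "cancel" && canCancel(user, state.triggeredBy)) {
//     await resumeHook({ token: workflowRunId, payload: { approved: false } });
//   }
```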
217
+
218
+ ### GitHub Actions dispatch and polling
219
+
220
+ The bot dispatches a `workflow_dispatch` event to your GitHub Actions workflow, then finds the resulting run by matching it against the branch, commit SHA, and a deploy correlation ID:
221
+
222
```ts
const findGitHubRun = async (serializedThread, deploy) => {
  const dispatch = await dispatchGitHubWorkflow(deploy);

  let githubRunId = null;
  for (let attempt = 0; attempt < 10; attempt++) {
    await sleep("3s");
    githubRunId = await findDispatchedRunOnce(deploy, dispatch);
    if (githubRunId !== null) break;
  }

  return githubRunId;
};
```
223
+
224
+ The dispatch function gracefully degrades if your workflow doesn't accept all the expected inputs. It tries `{ environment, deploy_id }` first, then `{ environment }` alone, then no inputs at all. This makes the bot compatible with most existing deploy workflows without changes.
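That fallback order can be expressed as an ordered list of candidate input payloads, tried most-specific first. A sketch, assuming a `deploy_id` correlation input (names illustrative):

```typescript
interface DispatchDeploy {
  environment: string;
  deployId: string;
}

// Candidate `workflow_dispatch` inputs, tried in order. If GitHub rejects
// one payload (unexpected input for the target workflow), the dispatcher
// falls through to the next, ending with no inputs at all.
const dispatchInputVariants = (
  deploy: DispatchDeploy
): Record<string, string>[] => [
  { environment: deploy.environment, deploy_id: deploy.deployId },
  { environment: deploy.environment },
  {},
];
```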
225
+
226
+ Once a run is found, the bot polls every 5 seconds until the run completes or 60 minutes pass. Each `sleep("5s")` call suspends the Vercel Workflow function, and each `fetchRunSnapshot` is marked with `"use step"` so it retries automatically if the GitHub API call fails.
227
+
228
+ ### Linear and GitHub notifications
229
+
230
+ On a successful deploy, the bot notifies both Linear and GitHub.
231
+
232
+ Linear issues are found by comparing deploy tags. The bot looks at the commit range between `deploy/{environment}/previous` and `deploy/{environment}/latest` in your repo, extracts Linear issue keys (like `ENG-123`) from branch names and commit messages, then comments on each issue with the deploy details. For production deploys, it also transitions issues to your configured done state.
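Extracting those keys from branch names and commit messages is essentially a regex scan. A minimal sketch (the template's actual matching may be stricter about team prefixes):

```typescript
// Match Linear-style issue keys such as ENG-123 or OPS-7 in arbitrary
// text. Branch names are often lowercase (e.g. feat/eng-123-fix), so
// matching is case-insensitive and results are normalized to uppercase,
// deduplicated in first-seen order.
const extractLinearKeys = (texts: string[]): string[] => {
  const pattern = /\b[a-z][a-z0-9]+-\d+\b/gi;
  const seen = new Set<string>();
  for (const text of texts) {
    for (const match of text.match(pattern) ?? []) {
      seen.add(match.toUpperCase());
    }
  }
  return [...seen];
};
```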
233
+
234
+ GitHub pull requests associated with the deploy commit receive a comment with a summary table linking back to the workflow run.
235
+
236
+ Both steps are wrapped in `"use step"` directives, so they're retryable and isolated from each other. If the Linear step fails, the GitHub PR comments still proceed.
237
+
238
+ ### Summary card
239
+
240
+ When the run completes, the bot posts a final card to the Slack thread with the environment, branch, commit, duration, linked issues, and a link to the GitHub Actions run. If Linear is enabled, the card also includes a table of issue identifiers and titles. If the deploy fails to dispatch or the run can't be matched, the triggerer is notified.
241
+
242
+ ## How to add Teams, Discord, or other platforms
243
+
244
+ Chat SDK supports multiple platforms from a single codebase. The cards, fields, and buttons you've already defined render natively on each platform, including Block Kit on Slack, Adaptive Cards on Teams, and Google Chat Cards.
245
+
246
+ To add Microsoft Teams or another platform, register an additional adapter:
247
+
248
```ts
import { createTeamsAdapter } from "@chat-adapter/teams";

export const bot = new Chat({
  adapters: {
    github: createGitHubAdapter(),
    slack: createSlackAdapter(),
    teams: createTeamsAdapter(),
  },
  state: createRedisState(),
  userName: "deploy-bot",
});
```
249
+
250
+ The existing webhook route at `app/api/webhooks/[platform]/route.ts` already uses a dynamic segment, so Teams webhooks would be handled at `/api/webhooks/teams` with no additional routing code.
251
+
252
+ Modals are currently Slack-only, so the `/deploy` command with no arguments (which opens a modal) only works on Slack. On other platforms, require the environment argument.
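A portable way to handle this is to parse the argument identically on every platform and only fall back to the modal on Slack. The alias mapping the slash command uses can be isolated like so (helper name illustrative):

```typescript
// Map a user-typed argument to an environment, accepting the "prod"
// alias. Returns null when no argument was given, so the caller can
// decide whether to open a modal (Slack) or reply with usage help
// (platforms without modal support).
const parseEnvironmentArg = (
  text: string
): "staging" | "production" | null => {
  const arg = text.trim().toLowerCase();
  if (!arg) return null;
  return arg === "production" || arg === "prod" ? "production" : "staging";
};
```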
253
+
254
+ See the [Chat SDK adapter directory](https://chat-sdk.dev/adapters) for the full list of supported platforms.
255
+
256
+ ## Related resources
257
+
258
+ * [Chat SDK Deploy Bot template](https://github.com/vercel-labs/chat-sdk-deploy-bot)
259
+
260
+ * [Chat SDK documentation](https://chat-sdk.dev/docs)
261
+
262
+ * [Chat SDK GitHub](https://github.com/vercel/chat)
263
+
264
+ * [Vercel Workflow documentation](https://vercel.com/docs/workflow)
265
+
266
+ * [Workflow SDK](https://useworkflow.dev/)
267
+
268
+ ---
269
+
270
+ [View full KB sitemap](/kb/sitemap.md)
@@ -0,0 +1,147 @@
1
+ # Ship a GitHub code review bot with Hono and Redis
2
+
3
+ **Author:** Hayden Bleasel, Ben Sabic
4
+
5
+ ---
6
+
7
+ You can ship a GitHub bot that reviews pull requests on demand by combining Chat SDK, Vercel Sandbox, and AI SDK. When a user @mentions the bot on a PR, Chat SDK picks up the mention, spins up a Vercel Sandbox with the repo cloned, and uses AI SDK to analyze the diff. The sandbox gives the agent safe shell access to the repository, so it can run `git diff`, read source files, and explore the codebase without any code escaping a disposable environment.
8
+
9
+ This guide will walk you through scaffolding a Hono app, configuring a GitHub webhook, wiring up Chat SDK with the GitHub adapter, running a sandboxed AI review, and deploying to Vercel.
10
+
11
+ ## Prerequisites
12
+
13
+ Before you begin, make sure you have:
14
+
15
+ * Node.js 18+
16
+
17
+ * [pnpm](https://pnpm.io/) (or npm/yarn)
18
+
19
+ * A GitHub repository where you have admin access
20
+
21
+ * A Redis instance (local or hosted, such as [Upstash](https://vercel.com/marketplace/upstash))
22
+
23
+ * A [Vercel account](https://vercel.com/signup)
24
+
25
+
26
+ ## How it works
27
+
28
+ Chat SDK is a unified TypeScript SDK for building chatbots across GitHub, Slack, Teams, and other platforms. You register event handlers (like `onNewMention` and `onSubscribedMessage`), and the SDK routes incoming webhooks to them. The GitHub adapter handles signature verification, event parsing, and routing, while the Redis state adapter tracks which threads your bot has subscribed to and manages distributed locking for concurrent message handling.
29
+
30
+ When someone @mentions the bot on a pull request, the handler fetches the PR's head and base branches, creates a Vercel Sandbox with the repo cloned, and gives an AI SDK `ToolLoopAgent` a `bash` tool scoped to that sandbox. The agent can run `git diff`, read files, and explore the codebase freely. Everything it runs stays inside the sandbox, which is destroyed after the review completes.
31
+
32
+ ## Steps
33
+
34
+ ### 1. Scaffold the project and install dependencies
35
+
36
+ Create a new Hono app and add the Chat SDK, AI SDK, and adapter packages:
37
+
38
```sh
pnpm create hono my-review-bot
cd my-review-bot
pnpm add @octokit/rest @vercel/functions @vercel/sandbox ai bash-tool chat @chat-adapter/github @chat-adapter/state-redis
```
39
+
40
+ Select the `vercel` template when prompted by `create-hono`. This sets up the project for Vercel deployment with the correct entry point.
41
+
42
+ The `chat` package is the Chat SDK core. The `@chat-adapter/github` and `@chat-adapter/state-redis` packages are the [GitHub platform adapter](https://chat-sdk.dev/adapters/github) and [Redis state adapter](https://chat-sdk.dev/adapters/redis). `@vercel/sandbox` provides the ephemeral execution environment, and `bash-tool` wires it up as an AI SDK tool.
43
+
44
+ ### 2. Configure a GitHub webhook
45
+
46
+ 1. Go to your repository **Settings**, then **Webhooks**, then **Add webhook**
47
+
48
+ 2. Set **Payload URL** to `https://your-domain.com/api/webhooks/github`
49
+
50
+ 3. Set **Content type** to `application/json`
51
+
52
+ 4. Set a **Secret** and save it. You'll need this as `GITHUB_WEBHOOK_SECRET`
53
+
54
+ 5. Under **Which events would you like to trigger this webhook?**, select **Let me select individual events** and check:
55
+
56
+ * **Issue comments** (for @mention on the PR conversation tab)
57
+
58
+ * **Pull request review comments** (for @mention on inline review threads)
59
+
60
+
61
+ Then gather your credentials:
62
+
63
+ 1. Go to [Settings > Developer settings > Personal access tokens](https://github.com/settings/tokens) and create a token with `repo` scope. You'll need this as `GITHUB_TOKEN`
64
+
65
+ 2. Copy the **Webhook secret** you set above. You'll need this as `GITHUB_WEBHOOK_SECRET`
66
+
67
+
68
+ ### 3. Configure environment variables
69
+
70
+ Create a `.env` file in your project root:
71
+
72
```
GITHUB_TOKEN=ghp_your_personal_access_token
GITHUB_WEBHOOK_SECRET=your_webhook_secret
REDIS_URL=redis://localhost:6379
BOT_USERNAME=my-review-bot
```
73
+
74
+ The model (`anthropic/claude-sonnet-4.6`) is accessed through Vercel's AI Gateway, so no separate provider API key is required. For local development, link the project to Vercel with `vc link`, then pull your OIDC token with `vc pull --environment development`.
75
+
76
+ ### 4. Define the review function
77
+
78
+ Create the core review logic. This clones the repo into a Vercel Sandbox, then uses AI SDK with a bash tool to let Claude analyze the diff and read files directly.
79
+
80
```ts
import { Sandbox } from "@vercel/sandbox";
import { ToolLoopAgent, stepCountIs } from "ai";
import { createBashTool } from "bash-tool";

interface ReviewInput {
  owner: string;
  repo: string;
  prBranch: string;
  baseBranch: string;
}

export async function reviewPullRequest(input: ReviewInput): Promise<string> {
  const { owner, repo, prBranch, baseBranch } = input;

  const sandbox = await Sandbox.create({
    source: {
      type: "git",
      url: `https://github.com/${owner}/${repo}`,
      username: "x-access-token",
      password: process.env.GITHUB_TOKEN,
      depth: 50,
    },
    timeout: 5 * 60 * 1000,
  });

  try {
    await sandbox.runCommand("git", ["fetch", "origin", prBranch, baseBranch]);
    await sandbox.runCommand("git", ["checkout", prBranch]);

    const diffResult = await sandbox.runCommand("git", [
      "diff",
      `origin/${baseBranch}...HEAD`,
    ]);
    const diff = await diffResult.output("stdout");

    const { tools } = await createBashTool({ sandbox });

    const agent = new ToolLoopAgent({
      model: "anthropic/claude-sonnet-4.6",
      tools,
      stopWhen: stepCountIs(20),
    });

    const result = await agent.generate({
      prompt: `You are reviewing a pull request for bugs and issues.

Here is the diff for this PR:

\`\`\`diff
${diff}
\`\`\`

Use the bash and readFile tools to inspect any files you need more context on. Look for bugs, security issues, performance problems, and missing error handling. Organize findings by severity (critical, warning, suggestion). If the code looks good, say so.`,
    });

    return result.text;
  } finally {
    await sandbox.stop();
  }
}
```
81
+
82
+ The `createBashTool` gives the agent `bash`, `readFile`, and `writeFile` tools, all scoped to the sandbox. The agent can run `git diff`, read source files, and explore the repo freely without any code escaping the sandbox.
83
+
84
+ The function returns the review text instead of posting it directly. This lets the Chat SDK handler post it as a threaded reply.
85
+
86
+ ### 5. Create the bot
87
+
88
+ Create a `Chat` instance with the GitHub adapter. When someone @mentions the bot on a PR, it fetches the PR metadata, runs the review, and posts the result back to the thread.
89
+
90
```ts
import { Chat } from "chat";
import { createGitHubAdapter } from "@chat-adapter/github";
import { createRedisState } from "@chat-adapter/state-redis";
import { Octokit } from "@octokit/rest";
import { reviewPullRequest } from "./review";
import type { GitHubRawMessage } from "@chat-adapter/github";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

export const bot = new Chat({
  userName: process.env.BOT_USERNAME!,
  adapters: {
    github: createGitHubAdapter(),
  },
  state: createRedisState(),
});

bot.onNewMention(async (thread, message) => {
  const raw = message.raw as GitHubRawMessage;
  const owner = raw.repository.owner.login;
  const repo = raw.repository.name;
  const prNumber = raw.prNumber;

  // Fetch PR branch info
  const { data: pr } = await octokit.pulls.get({
    owner,
    repo,
    pull_number: prNumber,
  });

  await thread.post("Starting code review...");
  await thread.subscribe();

  const review = await reviewPullRequest({
    owner,
    repo,
    prBranch: pr.head.ref,
    baseBranch: pr.base.ref,
  });

  await thread.post(review);
});

bot.onSubscribedMessage(async (thread, message) => {
  await thread.post(
    "I've already reviewed this PR. @mention me on a new PR to start another review."
  );
});
```
91
+
92
+ `onNewMention` fires when a user @mentions the bot, for example `@my-review-bot can you review this?`. The handler extracts the PR details from the message's raw payload, runs the sandboxed review, and posts the result. Calling `thread.subscribe()` lets the bot respond to follow-up messages in the same thread.
93
+
94
+ ### 6. Handle the webhook
95
+
96
+ Create the Hono app with a single webhook route that delegates to Chat SDK:
97
+
98
```ts
import { Hono } from "hono";
import { waitUntil } from "@vercel/functions";
import { bot } from "./bot";

const app = new Hono();

app.post("/api/webhooks/github", async (c) => {
  const handler = bot.webhooks.github;
  if (!handler) {
    return c.text("GitHub adapter not configured", 404);
  }
  return handler(c.req.raw, { waitUntil });
});

export default app;
```
99
+
100
+ Chat SDK's GitHub adapter handles signature verification, event parsing, and routing internally. The `waitUntil` option lets the review keep running after the HTTP response is sent. This is required on serverless platforms, where the function would otherwise terminate before your handlers finish.
101
+
102
+ ### 7. Test locally
103
+
104
+ 1. Start your development server (`pnpm dev`)
105
+
106
+ 2. Expose it with a tunnel (e.g. `ngrok http 3000`)
107
+
108
+ 3. Update the webhook URL in your GitHub repository settings to your tunnel URL
109
+
110
+ 4. Open a pull request
111
+
112
+ 5. Comment `@my-review-bot can you review this?`. The bot should respond with "Starting code review..." followed by the full review
113
+
114
+
115
+ ### 8. Deploy to Vercel
116
+
117
+ Deploy your bot to Vercel:
118
+
119
```sh
vercel deploy
```
120
+
121
+ After deployment, set your environment variables in the Vercel dashboard (`GITHUB_TOKEN`, `GITHUB_WEBHOOK_SECRET`, `REDIS_URL`, `BOT_USERNAME`). Update the webhook URL in your GitHub repository settings to your production URL.
122
+
123
+ ## Troubleshooting
124
+
125
+ ### Bot doesn't respond to mentions
126
+
127
+ Check that your webhook is configured with the **Issue comments** and **Pull request review comments** events, and that the **Payload URL** matches your deployed endpoint. GitHub sends a `ping` event when you first save the webhook, so your server must be running and reachable.
128
+
129
+ ### Webhook signature verification fails
130
+
131
+ Confirm that `GITHUB_WEBHOOK_SECRET` matches the secret you set in the webhook configuration. A mismatched or missing secret will cause the adapter to reject incoming webhooks.
132
+
133
+ ### Sandbox fails to clone the repo
134
+
135
+ Verify that `GITHUB_TOKEN` has `repo` scope and hasn't expired. For private repositories, the token must also have access to the specific repo. Check the sandbox logs for authentication errors.
136
+
137
+ ### Review times out or runs out of steps
138
+
139
+ The sandbox has a 5-minute timeout and the agent stops after 20 steps. For large PRs, increase these limits in `src/review.ts` by adjusting the `timeout` option on `Sandbox.create()` and the `stepCountIs()` value on the agent.
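As a configuration fragment only (values illustrative, and the surrounding code is unchanged from step 4), raising both limits might look like:

```typescript
// Larger limits for big PRs: a 15-minute sandbox and up to 50 agent steps.
const sandbox = await Sandbox.create({
  // ...same git source config as before...
  timeout: 15 * 60 * 1000,
});

const agent = new ToolLoopAgent({
  model: "anthropic/claude-sonnet-4.6",
  tools,
  stopWhen: stepCountIs(50),
});
```

Note that longer timeouts and more steps also raise sandbox and model costs per review, so tune these to your typical PR size.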
140
+
141
+ ### Redis connection errors
142
+
143
+ Verify that `REDIS_URL` is reachable from your deployment environment. The state adapter uses Redis for distributed locking, so the bot won't process messages without a working connection.
144
+
145
+ ---
146
+
147
+ [View full KB sitemap](/kb/sitemap.md)