@kardoe/quickback 0.5.6 → 0.5.7

@@ -1,5 +1,452 @@
  // AUTO-GENERATED - DO NOT EDIT
  // Generated by scripts/bundle-docs.ts from marketing/docs
- export const DOCS = {};
- export const TOPIC_LIST = [];
+ export const DOCS = {
+ "account-ui/customization": {
+ "title": "Customization",
+ "content": "# Customization\n\nCustomize the Account UI to match your brand and messaging.\n\n**Standalone (template):** Configure via environment variables, edit `src/config/app.ts` directly, or use `setAppConfig()` for runtime overrides.\n\n**Library (npm):** Configure entirely via `setAppConfig()`. Import it from `quickback-better-auth-account-ui` instead of `@/config/app`.\n\n## Branding\n\n### App Identity\n\nSet your application's name and description:\n\n```bash\nVITE_APP_NAME=Acme SaaS\nVITE_APP_TAGLINE=Build faster, ship sooner\nVITE_APP_DESCRIPTION=The complete platform for modern web applications\n```\n\nThese values appear in:\n- Page titles (`<title>Acme SaaS | Login</title>`)\n- Meta descriptions\n- UI headers and footers\n- Email templates\n\n### Visual Identity\n\n```bash\n# Primary theme color (used for buttons, links, accents)\nVITE_THEME_COLOR=#3b82f6\n\n# Company name (shown in footer, legal pages)\nVITE_COMPANY_NAME=Acme Corporation\n\n# Company address (shown in footer, receipts)\nVITE_COMPANY_ADDRESS=123 Main St, Suite 100, San Francisco, CA 94105\n```\n\n### Logo and Favicon\n\nPlace your branding assets in the `public/` directory:\n\n```\npublic/\n├── logo.png # Main logo (shown in header)\n├── favicon.ico # Browser favicon\n└── og-image.png # Social media preview image\n```\n\nThen update the paths in `src/config/app.ts`:\n\n```ts title=\"src/config/app.ts\"\nbranding: {\n primaryColor: \"#1e293b\",\n logoUrl: \"/logo.png\",\n faviconUrl: \"/favicon.ico\"\n},\n```\n\nOr set them at runtime in `src/main.tsx`:\n\n```ts\n\nsetAppConfig({\n branding: {\n logoUrl: '/logo.png',\n faviconUrl: '/favicon.ico',\n },\n});\n```\n\n## Custom Labels\n\nOverride default UI labels using runtime configuration:\n\n```ts\n\nsetAppConfig({\n labels: {\n terms: 'Terms and Conditions',\n privacy: 'Privacy Notice',\n support: 'Help Center',\n company: 'My Company',\n organizations: 'Workspaces', // Rename \"Organizations\" to \"Workspaces\"\n },\n});\n```\n\n### Available Labels\n\n| Label | Default | Usage |\n|-------|---------|-------|\n| `terms` | \"Terms of Service\" | Footer link text |\n| `privacy` | \"Privacy Policy\" | Footer link text |\n| `support` | \"Contact Support\" | Footer link text |\n| `company` | (from `VITE_COMPANY_NAME`) | Footer company name |\n| `organizations` | \"Organizations\" | Navigation and page titles |\n\n## Custom Messages\n\nCustomize user-facing messages:\n\n```ts\n\nsetAppConfig({\n messages: {\n noOrganizations: \"You haven't joined any workspaces yet.\",\n contactAdminForInvite: \"Contact your team lead to get access.\",\n noPendingInvitations: \"No pending invites at this time\",\n createOrganization: \"Create Workspace\",\n // ... 
more messages\n },\n});\n```\n\n### Available Messages\n\n| Message | Default | Usage |\n|---------|---------|-------|\n| `noOrganizations` | \"You're not a member of any organizations yet.\" | Empty organizations list |\n| `contactAdminForInvite` | \"Contact an admin to be invited to an organization.\" | Organizations page help text |\n| `noPendingInvitations` | \"You have no pending invitations\" | Empty invitations list |\n| `createOrganization` | \"Create Organization\" | Button text |\n| `accountInformation` | \"Account Information\" | Profile section header |\n| `timezone` | \"Timezone\" | Label for timezone field |\n| `serverLocation` | \"Server Location\" | Label for server location |\n| `memberSince` | \"Member since\" | Account age label |\n| `pendingInvitations` | \"Pending Invitations\" | Section header |\n| `invitedAs` | \"Invited as\" | Invitation role label |\n| `accept` | \"Accept\" | Accept invitation button |\n| `decline` | \"Decline\" | Decline invitation button |\n| `userFallback` | \"User\" | Fallback for missing name |\n| `noEmailProvided` | \"No email provided\" | Fallback for missing email |\n| `more` | \"More\" | Pagination/load more text |\n| `total` | \"total\" | Total count label |\n| `active` | \"Active\" | Active status label |\n| `new` | \"New\" | New item label |\n\n## Route Customization\n\nCustomize route paths using runtime configuration:\n\n```ts\n\nsetAppConfig({\n routes: {\n public: {\n home: '/',\n login: '/sign-in', // Change from /login\n signup: '/sign-up', // Change from /signup\n forgotPassword: '/reset-password',\n welcome: '/getting-started',\n },\n authenticated: {\n dashboard: '/dashboard',\n profile: '/my-account',\n settings: '/preferences',\n },\n organizations: {\n list: '/workspaces', // Rename route\n create: '/workspaces/new',\n detail: (slug) => `/workspaces/${slug}`,\n },\n },\n});\n```\n\n## Password Requirements\n\nCustomize password validation rules:\n\n```ts\n\nsetAppConfig({\n auth: {\n passwordRequirements: {\n minLength: 12, // Minimum password length\n maxLength: 128, // Maximum password length\n requireUppercase: true, // Must have uppercase letter\n requireLowercase: true, // Must have lowercase letter\n requireNumbers: true, // Must have number\n requireSymbols: true, // Must have special character\n },\n },\n});\n```\n\nPassword requirements are displayed to users during signup and password changes.\n\n## Session Duration\n\nControl how long users stay logged in:\n\n```ts\n\nsetAppConfig({\n auth: {\n sessionDuration: 7 * 24 * 60 * 60, // 7 days in seconds\n },\n});\n```\n\n## Email Customization\n\nConfigure email sender information:\n\n```bash\nVITE_EMAIL_FROM=noreply@acme.com\nVITE_EMAIL_REPLY_TO=support@acme.com\nVITE_SUPPORT_EMAIL=help@acme.com\n```\n\nThe `fromName` is automatically set to `VITE_APP_NAME`:\n\n```ts\n// Emails will show:\n// From: \"Acme SaaS <noreply@acme.com>\"\n```\n\n## URL Configuration\n\nConfigure URLs for external links:\n\n```bash\n# Main application (redirected to after login)\nVITE_APP_URL=https://app.acme.com\n\n# Organization/tenant URL pattern\nVITE_TENANT_URL_PATTERN=/workspace/{slug}\n\n# Support pages\nVITE_SUPPORT_URL=https://help.acme.com\nVITE_PRIVACY_URL=https://acme.com/legal/privacy\nVITE_TERMS_URL=https://acme.com/legal/terms\n```\n\n### Tenant URL Patterns\n\nThe tenant URL pattern supports flexible organization routing:\n\n```bash\n# Path-based routing\nVITE_TENANT_URL_PATTERN=/organizations/{slug}\n# Result: https://app.acme.com/organizations/acme-corp\n\n# Subdomain 
routing\nVITE_TENANT_URL_PATTERN=https://{slug}.acme.com\n# Result: https://acme-corp.acme.com\n\n# Custom pattern\nVITE_TENANT_URL_PATTERN=/workspace/{slug}/dashboard\n# Result: https://app.acme.com/workspace/acme-corp/dashboard\n```\n\n## Complete Customization Example\n\nHere's a complete example showing all customization options:\n\n```ts title=\"src/main.tsx\"\n\n// Apply customizations before rendering\nsetAppConfig({\n name: 'Acme SaaS',\n tagline: 'Build faster, ship sooner',\n description: 'The complete platform for modern web applications',\n\n branding: {\n primaryColor: '#3b82f6',\n logoUrl: '/acme-logo.png',\n faviconUrl: '/favicon.ico',\n },\n\n labels: {\n organizations: 'Workspaces',\n terms: 'Terms and Conditions',\n privacy: 'Privacy Notice',\n support: 'Help Center',\n },\n\n messages: {\n noOrganizations: \"You haven't joined any workspaces yet.\",\n createOrganization: 'Create Workspace',\n contactAdminForInvite: 'Contact your team lead for access.',\n },\n\n routes: {\n public: {\n login: '/sign-in',\n signup: '/sign-up',\n },\n organizations: {\n list: '/workspaces',\n create: '/workspaces/new',\n detail: (slug) => `/workspaces/${slug}`,\n },\n },\n\n auth: {\n passwordRequirements: {\n minLength: 12,\n requireSymbols: true,\n },\n sessionDuration: 7 * 24 * 60 * 60, // 7 days\n },\n});\n\n// Render app\n\nconst root = document.getElementById('root')!;\ncreateRoot(root).render(\n \n);\n```\n\n## Custom Styles\n\nOverride default styles with custom CSS:\n\n```css title=\"custom-styles.css\"\n/* Override primary color */\n:root {\n --primary: 59 130 246; /* Tailwind blue-500 */\n --primary-foreground: 255 255 255;\n}\n\n/* Custom logo size */\n.logo {\n width: 180px;\n height: auto;\n}\n\n/* Custom button styles */\n.btn-primary {\n background-color: var(--primary);\n border-radius: 8px;\n font-weight: 600;\n}\n```\n\nImport your custom CSS after the Account UI styles:\n\n```ts\n\n```\n\n## Next Steps\n\n- **[Worker Setup](/account-ui/worker)** - Deploy your customized UI\n- **[Environment Variables](/account-ui/environment-variables)** - Complete configuration reference\n- **[Feature Flags](/account-ui/features)** - Enable/disable features"
+ },
+ "account-ui/environment-variables": {
+ "title": "Environment Variables",
+ "content": "# Environment Variables\n\nConfigure the Account UI by setting environment variables in your `.env` file or deployment platform.\n\n## Required Variables\n\nThese variables are **required** for the Account UI to function:\n\n### API Configuration\n\n```bash\n# Quickback API URL\nVITE_API_URL=https://api.example.com\n\n# Account UI URL (where the Account UI is deployed)\nVITE_ACCOUNT_APP_URL=https://account.example.com\n```\n\n## App Identity\n\nConfigure your application's basic information:\n\n```bash\n# App name (displayed in UI)\nVITE_APP_NAME=My Application\n\n# Tagline (shown on landing pages)\nVITE_APP_TAGLINE=Build faster, ship sooner\n\n# Full description\nVITE_APP_DESCRIPTION=A complete platform for building modern web applications\n\n# Company information\nVITE_COMPANY_NAME=My Company Inc.\nVITE_COMPANY_ADDRESS=123 Main St, Suite 100, San Francisco, CA 94105\n```\n\n## URLs\n\n### Main Application\n\n```bash\n# Your main application URL\n# Users will be redirected here after authentication\nVITE_APP_URL=https://app.example.com\n\n# Organization/Tenant URL pattern (optional)\n# Use {slug} as placeholder for organization identifier\n# Examples:\n# /organizations/{slug} -> app.example.com/organizations/acme\n# /workspace/{slug} -> app.example.com/workspace/acme\n# https://{slug}.example.com -> acme.example.com\nVITE_TENANT_URL_PATTERN=/organizations/{slug}\n```\n\n### Support and Legal\n\n```bash\n# Support/help URL\nVITE_SUPPORT_URL=https://support.example.com\n\n# Privacy policy URL\nVITE_PRIVACY_URL=https://example.com/privacy\n\n# Terms of service URL\nVITE_TERMS_URL=https://example.com/terms\n```\n\n## Email Configuration\n\n```bash\n# From address for all emails\nVITE_EMAIL_FROM=noreply@example.com\n\n# Reply-to address\nVITE_EMAIL_REPLY_TO=support@example.com\n\n# Support email address\nVITE_SUPPORT_EMAIL=support@example.com\n\n# AWS Region for SES (if using AWS SES)\nVITE_EMAIL_REGION=us-east-1\n```\n\n## Branding\n\n```bash\n# Primary theme color (hex code)\nVITE_THEME_COLOR=#1e293b\n\n# SEO keywords (comma-separated)\nVITE_SEO_KEYWORDS=saas,authentication,account management\n```\n\n## Feature Flags\n\nEnable or disable features using boolean environment variables:\n\n### Authentication Features\n\n```bash\n# Enable user signup (default: true)\nENABLE_SIGNUP=true\n\n# Enable email verification (default: true)\nENABLE_EMAIL_VERIFICATION=true\n\n# Disable email deliverability checks (default: false)\n# Set to true to skip checking if email addresses are deliverable\nDISABLE_EMAIL_STATUS_CHECK=false\n\n# Enable passkey (WebAuthn) authentication (default: true)\nENABLE_PASSKEYS=true\n\n# Enable passkey signup — create accounts with just a passkey (default: true)\nENABLE_PASSKEY_SIGNUP=true\n\n# Enable email OTP verification (default: true)\nENABLE_EMAIL_OTP=true\n\n# Enable magic link authentication (default: true)\nENABLE_MAGIC_LINK=true\n\n# Enable social authentication (default: false)\nENABLE_SOCIAL_AUTH=false\n```\n\n### Account Management Features\n\n```bash\n# Enable account deletion (default: true)\nENABLE_ACCOUNT_DELETION=true\n\n# Enable file uploads (avatar, etc.) 
(default: false)\nVITE_ENABLE_FILE_UPLOADS=false\n\n# Enable dark mode toggle (default: true)\nENABLE_THEME_TOGGLE=true\n```\n\n### Organization Features\n\n```bash\n# Enable multi-tenant organizations (default: true)\nENABLE_ORGANIZATIONS=true\n\n# Enable teams within organizations (default: true)\nENABLE_TEAMS=true\n```\n\n### Admin Features\n\n```bash\n# Enable admin panel (default: true)\nENABLE_ADMIN=true\n```\n\n## Complete Example\n\nHere's a complete `.env` file with all common configuration:\n\n```bash title=\".env\"\n# ===== REQUIRED =====\nVITE_API_URL=https://api.example.com\nVITE_ACCOUNT_APP_URL=https://account.example.com\n\n# ===== APP IDENTITY =====\nVITE_APP_NAME=Acme SaaS\nVITE_APP_TAGLINE=Build faster, ship sooner\nVITE_APP_DESCRIPTION=The complete platform for modern web applications\nVITE_COMPANY_NAME=Acme Corporation\nVITE_COMPANY_ADDRESS=123 Main St, Suite 100, San Francisco, CA 94105\n\n# ===== URLS =====\nVITE_APP_URL=https://app.acme.com\nVITE_TENANT_URL_PATTERN=/organizations/{slug}\nVITE_SUPPORT_URL=https://help.acme.com\nVITE_PRIVACY_URL=https://acme.com/privacy\nVITE_TERMS_URL=https://acme.com/terms\n\n# ===== EMAIL =====\nVITE_EMAIL_FROM=noreply@acme.com\nVITE_EMAIL_REPLY_TO=support@acme.com\nVITE_SUPPORT_EMAIL=support@acme.com\nVITE_EMAIL_REGION=us-east-1\n\n# ===== BRANDING =====\nVITE_THEME_COLOR=#3b82f6\n\n# ===== SEO =====\nVITE_SEO_KEYWORDS=saas,project management,team collaboration\n\n# ===== FEATURES =====\nENABLE_SIGNUP=true\nENABLE_EMAIL_VERIFICATION=true\nENABLE_PASSKEYS=true\nENABLE_PASSKEY_SIGNUP=true\nENABLE_EMAIL_OTP=true\nENABLE_MAGIC_LINK=true\nENABLE_ORGANIZATIONS=true\nENABLE_TEAMS=true\nENABLE_ADMIN=true\nENABLE_ACCOUNT_DELETION=true\nVITE_ENABLE_FILE_UPLOADS=true\nENABLE_THEME_TOGGLE=true\nDISABLE_EMAIL_STATUS_CHECK=false\nENABLE_SOCIAL_AUTH=false\n```\n\n## Environment-Specific Configuration\n\n### Development\n\n```bash title=\".env.development\"\nVITE_API_URL=http://localhost:8787\nVITE_ACCOUNT_APP_URL=http://localhost:5173\nVITE_APP_URL=http://localhost:3000\nDISABLE_EMAIL_STATUS_CHECK=true\n```\n\n### Staging\n\n```bash title=\".env.staging\"\nVITE_API_URL=https://api-staging.example.com\nVITE_ACCOUNT_APP_URL=https://account-staging.example.com\nVITE_APP_URL=https://app-staging.example.com\n```\n\n### Production\n\n```bash title=\".env.production\"\nVITE_API_URL=https://api.example.com\nVITE_ACCOUNT_APP_URL=https://account.example.com\nVITE_APP_URL=https://app.example.com\nENABLE_EMAIL_VERIFICATION=true\nDISABLE_EMAIL_STATUS_CHECK=false\n```\n\n## Cloudflare Pages Setup\n\nWhen deploying to Cloudflare Pages, set environment variables in the dashboard:\n\n1. Go to **Settings** → **Environment Variables**\n2. Add each `VITE_*` variable\n3. Separate values for **Production** and **Preview** branches\n\n## Wrangler Configuration\n\nFor `wrangler dev` and `wrangler deploy`, add variables to `wrangler.toml`:\n\n```toml title=\"wrangler.toml\"\n[env.production.vars]\nVITE_API_URL = \"https://api.example.com\"\nVITE_ACCOUNT_APP_URL = \"https://account.example.com\"\nVITE_APP_NAME = \"My App\"\n# ... 
other variables\n```\n\nOr use `.dev.vars` for local development (git-ignored):\n\n```bash title=\".dev.vars\"\nVITE_API_URL=http://localhost:8787\nVITE_ACCOUNT_APP_URL=http://localhost:5173\n```\n\n## Runtime Configuration\n\nSome values can be overridden at runtime using the config API:\n\n```ts\n\nsetAppConfig({\n name: 'My Custom App',\n branding: {\n primaryColor: '#ff6600',\n logoUrl: '/custom-logo.png',\n },\n urls: {\n base: 'https://account.custom.com',\n app: 'https://app.custom.com',\n },\n});\n```\n\n## Validation\n\nThe Account UI validates configuration on startup:\n\n- **Missing VITE_API_URL**: Throws error\n- **Invalid URLs**: Logs warning\n- **Missing optional fields**: Uses defaults\n\nCheck the browser console for configuration warnings.\n\n## Next Steps\n\n- **[Feature Flags](/account-ui/features)** - Detailed feature configuration\n- **[Customization](/account-ui/customization)** - Customize labels and messages\n- **[Worker Setup](/account-ui/worker)** - Deploy to Cloudflare"
+ },
+ "account-ui/features/admin": {
+ "title": "Admin Panel",
+ "content": "When `ENABLE_ADMIN=true`, Account UI provides an admin panel at `/admin` for managing users and system settings.\n\n## User Management\n\n- View all registered users\n- Search and filter users\n- Create users manually\n- Ban/unban users\n- Reset user passwords\n- View user sessions\n\n## Access Control\n\nThe admin panel is only accessible to users with the `admin` role. Non-admin users receive a 403 error.\n\n## Related\n\n- [Environment Variables](/account-ui/environment-variables)\n- [Access Control](/compiler/definitions/access) — Role-based permissions"
+ },
+ "account-ui/features/api-keys": {
+ "title": "API Key Management",
+ "content": "Account UI provides an API key management interface for users to create, view, and revoke API keys.\n\n## Creating API Keys\n\nUsers can create API keys with:\n\n- Custom name/description\n- Optional expiration date\n- Scoped to the current organization\n\n## Managing Keys\n\n- View all active API keys\n- See last-used timestamp\n- Revoke individual keys\n\n## Related\n\n- [API Keys Auth](/stack/auth/api-keys) — Server-side API key authentication\n- [Environment Variables](/account-ui/environment-variables)"
+ },
+ "account-ui/features/avatars": {
+ "title": "Avatars",
+ "content": "When `VITE_ENABLE_FILE_UPLOADS=true`, Account UI provides avatar upload functionality.\n\n## Upload Flow\n\n1. User clicks their profile picture\n2. Image picker opens (supports JPG, PNG, WebP)\n3. Image is cropped to square\n4. Uploaded to R2/S3 storage\n5. Profile updated with new avatar URL\n\n## Requirements\n\n- `VITE_ENABLE_FILE_UPLOADS=true` environment variable\n- R2 bucket or S3-compatible storage configured\n- Upload endpoints available in your API\n\n## Related\n\n- [R2 Storage](/stack/storage/r2) — File storage setup\n- [Environment Variables](/account-ui/environment-variables)"
+ },
+ "account-ui/features/cli-authorize": {
+ "title": "CLI Authorization",
+ "content": "Account UI includes a device authorization page that allows users to approve CLI login requests.\n\n## How It Works\n\n1. User runs `quickback login` in the CLI\n2. CLI displays a URL and user code\n3. User opens the URL in their browser\n4. Account UI shows the authorization page with the device code\n5. User confirms the code matches and approves\n6. CLI receives the authentication token\n\n## Configuration\n\nThe CLI authorization page is always available when the Account UI is deployed. No additional configuration is needed.\n\n## Related\n\n- [Device Auth](/stack/auth/device-auth) — Server-side device auth flow\n- [CLI Reference](/compiler/cloud-compiler/cli) — CLI commands"
+ },
+ "account-ui/features": {
+ "title": "Feature Flags",
+ "content": "# Feature Flags\n\nControl which features are available in your Account UI deployment using feature flags. All features are configured via environment variables.\n\n## Authentication Features\n\n### User Signup\n\n```bash\nENABLE_SIGNUP=true # default: true\n```\n\n**When enabled:**\n- Shows \"Sign Up\" link on login page\n- `/signup` route is accessible\n- New users can create accounts\n\n**When disabled:**\n- Signup route returns 404\n- Only existing users can log in\n- Useful for invite-only applications\n\n### Email Verification\n\n```bash\nENABLE_EMAIL_VERIFICATION=true # default: true\n```\n\n**When enabled:**\n- Users must verify email before full access\n- Verification email sent on signup\n- \"Resend verification email\" option available\n- Unverified users see verification prompt\n\n**When disabled:**\n- Email addresses are trusted without verification\n- Users have immediate access after signup\n\n### Email Deliverability Check\n\n```bash\nDISABLE_EMAIL_STATUS_CHECK=false # default: false\n```\n\n**When `false` (checking enabled):**\n- System validates email addresses are deliverable\n- Rejects disposable/temporary email providers\n- Prevents typos in domain names\n\n**When `true` (checking disabled):**\n- Accepts all email formats\n- Useful for development/testing\n- Allows `@test.com`, `@localhost`, etc.\n\n### Passkeys (WebAuthn)\n\n```bash\nENABLE_PASSKEYS=true # default: true\n```\n\n**When enabled:**\n- Users can register passkeys (fingerprint, Face ID, hardware keys)\n- Passwordless login option\n- \"Manage Passkeys\" page available\n- Passkey setup wizard\n\n**When disabled:**\n- No passkey registration\n- Password-only authentication\n\n**Requirements:**\n- HTTPS (passkeys require secure context)\n- Modern browser with WebAuthn support\n\n### Passkey Signup\n\n```bash\nENABLE_PASSKEY_SIGNUP=true # default: true\n```\n\n**When enabled (and browser supports WebAuthn):**\n- \"Create Account with Passkey\" button on signup page\n- Creates an anonymous session, registers a passkey, then shows an email collection step\n- Users can optionally provide their name and email, or skip to go straight to dashboard\n- If email is provided and verification is required, user verifies via OTP then goes to dashboard\n- When email delivery is also configured, both passkey and email signup options are shown with an \"Or\" divider\n\n**When disabled:**\n- Passkey signup option hidden on signup page\n- Users must sign up with email (passkey can still be added later from account settings)\n\n**Behavior when email is not configured:**\n- If `ENABLE_PASSKEY_SIGNUP=true` and email delivery is not available, only passkey signup is shown\n- If both passkey signup and email are unavailable, a fallback message directs users to contact an administrator\n\n**Requirements:**\n- HTTPS (WebAuthn requires secure context)\n- `ENABLE_PASSKEYS=true` (passkeys must be enabled)\n- `anonymous` plugin enabled on the backend\n\n### Email OTP\n\n```bash\nENABLE_EMAIL_OTP=true # default: true\n```\n\n**When enabled:**\n- Users can receive one-time passwords via email\n- Alternative to password login\n- `/email-otp` route available\n\n**When disabled:**\n- No email OTP option\n- Password or passkey required\n\n### Magic Link\n\n```bash\nENABLE_MAGIC_LINK=true # default: true\n```\n\n**When enabled:**\n- Users can request email login links\n- Passwordless authentication via email\n- No password required\n\n**When disabled:**\n- Password or other auth method required\n\n### Social 
Authentication\n\n```bash\nENABLE_SOCIAL_AUTH=false # default: false\n```\n\n**When enabled:**\n- OAuth login with Google, GitHub, etc.\n- \"Sign in with...\" buttons\n- Social account linking\n\n**When disabled:**\n- Email-based authentication only\n\n**Additional Configuration:**\nRequires Better Auth social providers to be configured in your API.\n\n## Account Management Features\n\n### Account Deletion\n\n```bash\nENABLE_ACCOUNT_DELETION=true # default: true\n```\n\n**When enabled:**\n- \"Delete Account\" option in settings\n- Confirmation dialog with password check\n- Permanent account removal\n\n**When disabled:**\n- No delete account option\n- Users must contact support to delete\n\n### File Uploads\n\n```bash\nVITE_ENABLE_FILE_UPLOADS=false # default: false\n```\n\n**When enabled:**\n- Avatar/profile picture upload\n- Image cropping and editing\n- File upload to R2/S3\n\n**When disabled:**\n- No file upload functionality\n- Users can only use default avatars\n\n**Requirements:**\n- R2 bucket or S3 configured\n- Upload endpoints in your API\n\n### Theme Toggle\n\n```bash\nENABLE_THEME_TOGGLE=true # default: true\n```\n\n**When enabled:**\n- Light/dark mode switcher\n- User preference saved\n- System theme detection\n\n**When disabled:**\n- Single theme mode\n- No theme switcher in UI\n\n## Organization Features\n\n### Organizations (Multi-Tenancy)\n\n```bash\nENABLE_ORGANIZATIONS=true # default: true\n```\n\n**When enabled:**\n- Users can create organizations\n- Organization management pages\n- Member invitations and roles\n- `/organizations/*` routes\n\n**When disabled:**\n- Single-user mode only\n- No organization features\n- Simpler user experience\n\n**Includes:**\n- Organization creation and deletion\n- Member management (owner, admin, member roles)\n- Invitation system\n- Organization settings\n\n### Teams\n\n```bash\nENABLE_TEAMS=true # default: true\n```\n\n**When enabled (requires ENABLE_ORGANIZATIONS=true):**\n- Sub-teams within organizations\n- Team-based permissions\n- Team management UI\n\n**When disabled:**\n- Organization members only\n- No team structure\n\n## Admin Features\n\n### Admin Panel\n\n```bash\nENABLE_ADMIN=true # default: true\n```\n\n**When enabled:**\n- `/admin` route accessible to admin users\n- User management dashboard\n- Subscription management\n- Admin-only features:\n - Create users manually\n - Ban/unban users\n - Reset user passwords\n - View all sessions\n - Manage subscriptions\n\n**When disabled:**\n- No admin panel\n- Admin must use database directly\n\n**Requirements:**\n- User must have admin role in database\n\n## Feature Combinations\n\n### Minimal Configuration (Password-Only)\n\n```bash\nENABLE_SIGNUP=true\nENABLE_EMAIL_VERIFICATION=false\nENABLE_PASSKEYS=false\nENABLE_PASSKEY_SIGNUP=false\nENABLE_EMAIL_OTP=false\nENABLE_MAGIC_LINK=false\nENABLE_SOCIAL_AUTH=false\nENABLE_ORGANIZATIONS=false\nENABLE_ADMIN=false\n```\n\nSimple email/password authentication for single-tenant apps.\n\n### Maximum Security\n\n```bash\nENABLE_SIGNUP=true\nENABLE_EMAIL_VERIFICATION=true\nDISABLE_EMAIL_STATUS_CHECK=false\nENABLE_PASSKEYS=true\nENABLE_PASSKEY_SIGNUP=true\nENABLE_EMAIL_OTP=true\nENABLE_MAGIC_LINK=true\nENABLE_SOCIAL_AUTH=true\nENABLE_ACCOUNT_DELETION=true\n```\n\nAll authentication methods with email verification and deliverability checks.\n\n### Multi-Tenant 
SaaS\n\n```bash\nENABLE_SIGNUP=true\nENABLE_EMAIL_VERIFICATION=true\nENABLE_PASSKEYS=true\nENABLE_PASSKEY_SIGNUP=true\nENABLE_ORGANIZATIONS=true\nENABLE_TEAMS=true\nENABLE_ADMIN=true\nVITE_ENABLE_FILE_UPLOADS=true\n```\n\nFull-featured SaaS with organizations, teams, and admin panel.\n\n### Invite-Only Platform\n\n```bash\nENABLE_SIGNUP=false\nENABLE_EMAIL_VERIFICATION=true\nENABLE_PASSKEYS=true\nENABLE_ORGANIZATIONS=true\nENABLE_ADMIN=true\n```\n\nNo public signup - users must be created by admin or invited to organizations.\n\n## Feature Detection\n\nCheck if a feature is enabled in your code:\n\n```ts\n\nif (isFeatureEnabled('organizations')) {\n // Show organizations menu\n}\n\nif (isFeatureEnabled('passkeys')) {\n // Offer passkey setup\n}\n```\n\nGet all enabled features:\n\n```ts\n\nconst enabled = getEnabledFeatures();\n// ['organizations', 'admin', 'passkeys', ...]\n```\n\n## Dynamic Feature Configuration\n\nOverride features at runtime:\n\n```ts\n\nsetAppConfig({\n features: {\n organizations: false, // Disable organizations\n passkeys: true, // Enable passkeys\n },\n});\n```\n\n## Testing Features\n\nFor local development, create `.env.local`:\n\n```bash title=\".env.local\"\n# Test with all features enabled\nENABLE_SIGNUP=true\nENABLE_EMAIL_VERIFICATION=true\nENABLE_PASSKEYS=true\nENABLE_PASSKEY_SIGNUP=true\nENABLE_EMAIL_OTP=true\nENABLE_MAGIC_LINK=true\nENABLE_ORGANIZATIONS=true\nENABLE_TEAMS=true\nENABLE_ADMIN=true\nVITE_ENABLE_FILE_UPLOADS=true\nENABLE_THEME_TOGGLE=true\nDISABLE_EMAIL_STATUS_CHECK=true # Allow test emails\n```\n\n## Next Steps\n\n- **[Environment Variables](/account-ui/environment-variables)** - Complete variable reference\n- **[Customization](/account-ui/customization)** - Customize UI text and labels\n- **[Worker Setup](/account-ui/worker)** - Deploy your configuration"
+ },
+ "account-ui/features/organizations": {
+ "title": "Organizations",
+ "content": "Account UI provides a complete organization management interface when `ENABLE_ORGANIZATIONS=true`.\n\n## Organization CRUD\n\n- Create new organizations\n- Update organization name and settings\n- Delete organizations (owner only)\n\n## Member Management\n\n- Invite members by email\n- Assign roles: **owner**, **admin**, **member**\n- Remove members\n- Transfer ownership\n\n## Invitations\n\n- Pending invitation list\n- Resend invitations\n- Cancel invitations\n\n> **Note:** These three roles (`owner`, `admin`, `member`) are Better Auth's built-in organization roles and map directly to Account UI's role picker. Use the same roles in your Quickback [Access](/compiler/definitions/access) rules for a seamless experience.\n\n## Related\n\n- [Environment Variables](/account-ui/environment-variables)\n- [Account UI Overview](/account-ui)"
+ },
+ "account-ui/features/passkeys": {
+ "title": "Passkeys",
+ "content": "When `ENABLE_PASSKEYS=true`, Account UI provides passkey registration and management.\n\n## Passkey Registration\n\n- Setup wizard guides users through passkey creation\n- Supports fingerprint, Face ID, and hardware security keys\n- Multiple passkeys can be registered per account\n\n## Passkey Signup\n\nWhen `ENABLE_PASSKEY_SIGNUP=true`, users can create an account using only a passkey — no email required.\n\n- \"Create Account with Passkey\" button on the signup page\n- Creates an anonymous session, registers a passkey, then shows an **email collection step**\n- Users can optionally provide their name and email address, or skip\n- If email is provided and email delivery is configured, users verify via OTP then go to dashboard\n- If email is skipped, users go straight to dashboard\n- When email delivery is configured, both passkey and email signup options are shown on the initial signup page\n- When email delivery is **not** configured, only passkey signup is shown — preventing users from getting stuck on a verification flow that can't send emails\n\nThe email collection step is always shown (even when email delivery isn't configured) because the email is still useful as an identifier for the account.\n\n## Passkey Login\n\n- \"Sign in with passkey\" button on login page\n- Browser's native WebAuthn dialog\n- No password required\n\n## Managing Passkeys\n\n- View registered passkeys\n- Remove individual passkeys\n- Register additional passkeys\n\n## Requirements\n\n- HTTPS (WebAuthn requires secure context)\n- Modern browser with WebAuthn support\n\n## Related\n\n- [Auth Plugins](/stack/auth/plugins) — Plugin configuration\n- [Environment Variables](/account-ui/environment-variables)"
+ },
+ "account-ui/features/passwordless": {
+ "title": "Passwordless Authentication",
+ "content": "Account UI supports passwordless authentication via email OTP and magic links.\n\n## Email OTP\n\nWhen `ENABLE_EMAIL_OTP=true`:\n\n- Users enter their email address\n- A one-time password is sent via email\n- User enters the OTP code to authenticate\n\n## Magic Links\n\nWhen `ENABLE_MAGIC_LINK=true`:\n\n- Users enter their email address\n- A login link is sent via email\n- Clicking the link authenticates the user\n\n## Combo Auth\n\nWhen both are enabled, Account UI uses the `@kardoe/better-auth-combo-auth` plugin to combine them into a single flow — the email contains both a clickable link and a code.\n\n## Related\n\n- [Auth Plugins](/stack/auth/plugins) — Plugin configuration\n- [Environment Variables](/account-ui/environment-variables)"
+ },
+ "account-ui/features/sessions": {
+ "title": "Session Management",
+ "content": "Account UI includes a session management page where users can view and revoke their active sessions.\n\n## Active Sessions\n\nUsers can see all devices where they're currently logged in, including:\n\n- Device type and browser\n- IP address and approximate location\n- Last active timestamp\n\n## Revoking Sessions\n\nUsers can revoke any session except their current one. This immediately invalidates the session token.\n\n## Related\n\n- [Environment Variables](/account-ui/environment-variables)\n- [Account UI Overview](/account-ui)"
+ },
+ "account-ui": {
+ "title": "Account UI",
+ "content": "# Quickback Account UI\n\nA production-ready React application providing complete authentication and account management functionality. Works with any Better Auth backend or Quickback-compiled API.\n\n## Overview\n\nAccount UI is a standalone SPA that provides:\n\n- **Authentication flows** - Login, signup, password reset, email verification\n- **Account management** - Profile editing, password changes, session management\n- **Organizations** - Multi-tenant organization management with roles and invitations\n- **Passkeys** - WebAuthn/FIDO2 passwordless authentication\n- **Admin panel** - User and subscription management\n- **API key management** - Generate and manage API keys for programmatic access\n\n## Features\n\n### Authentication\n\n- Email/password authentication with secure password requirements\n- Passkey (WebAuthn) support for passwordless login\n- Magic link authentication via email\n- Email OTP verification\n- Email verification flow\n- Password reset with secure tokens\n- Session management across devices\n\n### Account Management\n\n- User profile editing with avatar upload\n- Password management\n- Device and session management\n- Two-factor authentication setup\n- Account deletion with confirmation\n\n### Organizations\n\n- Multi-tenant organization support\n- Role-based access control (owner, admin, member)\n- Member invitation system\n- Organization settings management\n- Organization creation and deletion\n\n### Admin Features\n\n- User management dashboard\n- Subscription management\n- User creation and deletion\n- Password reset for users\n- Session management\n- Ban/unban users\n\n## Quick Start\n\nAccount UI can be consumed in two ways:\n\n### Option A: Standalone Template\n\nClone the repo and own the source — edit anything you want:\n\n```bash\nnpx degit Kardoe-com/quickback-better-auth-account-ui my-account-app\ncd my-account-app\nnpm install\n```\n\nConfigure environment variables, build, and deploy:\n\n```bash\ncp .env.example .env.development\nnpm run build\nnpx wrangler deploy\n```\n\n### Option B: Library Import\n\nInstall as a dependency and get updates via `npm update`:\n\n```bash\nnpm install quickback-better-auth-account-ui\n```\n\n```tsx\n\nsetAppConfig({\n authRoute: 'quickback',\n name: 'My App',\n companyName: 'My Company',\n});\n```\n\nSee the [Library Usage](/account-ui/library-usage) guide for full details.\n\n### Which Mode Should I Use?\n\n| | Standalone | Library |\n|---|---|---|\n| **Install** | `npx degit` | `npm install` |\n| **Customize** | Edit source directly | `setAppConfig()` + CSS |\n| **Updates** | Manual (re-clone/merge) | `npm update` |\n| **Best for** | Full control, heavy customization | Embedding in existing apps, staying up to date |\n\n## Architecture\n\n### Standalone Mode\n\nThe Account UI runs as a **standalone SPA** that:\n\n1. **Runs separately** from your main application\n2. **Communicates** with your API backend via REST\n3. **Serves** from its own subdomain (e.g., `account.example.com`)\n4. **Redirects** users to your main app after authentication\n\n**Why standalone?** Security isolation, reusability across apps, edge performance via Cloudflare, and independent maintenance.\n\n### Library Mode\n\nWhen imported as a library, the Account UI becomes a **component in your existing React app**:\n\n1. **Renders inside** your application's router\n2. **Shares** your app's React and router context\n3. **Configured** via `setAppConfig()` — no env files needed\n4. 
**Updated** via `npm update` — no manual merging\n\n**Why library?** Seamless integration, automatic updates, and simpler deployment when you already have a React app.\n\n## Configuration\n\n### Standalone (Template)\n\nSince you own the source code, you can customize the Account UI by:\n\n1. **Environment variables** — Set `VITE_*` variables for build-time configuration\n2. **Edit source directly** — Modify `src/config/app.ts` for defaults, labels, messages, routes, and features\n3. **Runtime overrides** — Call `setAppConfig()` in `src/main.tsx` for programmatic configuration\n\n### Library\n\nWhen using as a library, configure entirely via `setAppConfig()`:\n\n```ts\n\nsetAppConfig({\n authRoute: 'quickback',\n name: 'My App',\n companyName: 'My Company',\n tagline: 'Build faster',\n});\n```\n\nSee the following guides for detailed configuration:\n\n- **[Environment Variables](/account-ui/environment-variables)** - Complete list of all configuration options\n- **[Feature Flags](/account-ui/features)** - Enable/disable features\n- **[Customization](/account-ui/customization)** - Branding, labels, and messaging\n- **[Worker Setup](/account-ui/worker)** - Cloudflare Worker configuration\n\n## Integration with Your App\n\n### Redirecting to Account UI\n\nFrom your main application, redirect users to the Account UI for authentication:\n\n```ts\n// Redirect to login\nwindow.location.href = 'https://account.example.com/login';\n\n// Redirect to profile\nwindow.location.href = 'https://account.example.com/profile';\n\n// Redirect to organization\nwindow.location.href = 'https://account.example.com/organizations/acme-corp';\n```\n\n### Returning to Your App\n\nAfter authentication, the Account UI can redirect users back to your application:\n\n```bash\n# Set the main app URL\nVITE_APP_URL=https://app.example.com\n\n# Optional: Set tenant URL pattern\nVITE_TENANT_URL_PATTERN=/organizations/{slug}\n```\n\nWhen a user logs in, they'll be redirected to `VITE_APP_URL`. When they access an organization, they'll be sent to the tenant URL (e.g., `https://app.example.com/organizations/acme-corp`).\n\n## API Integration\n\nThe Account UI connects to your API backend. 
The API path structure depends on your auth route mode:\n\n### Better Auth Mode (default)\n\n```\nhttps://api.example.com/api/auth/sign-in/email\nhttps://api.example.com/api/auth/sign-up/email\nhttps://api.example.com/api/auth/get-session\nhttps://api.example.com/api/auth/passkey/register\n```\n\n### Quickback Mode\n\n```\nhttps://api.example.com/auth/v1/sign-in/email\nhttps://api.example.com/auth/v1/sign-up/email\nhttps://api.example.com/auth/v1/get-session\nhttps://api.example.com/auth/v1/passkey/register\n```\n\nSee [Standalone Usage](/account-ui/standalone) for Better Auth backends or [With Quickback](/account-ui/with-quickback) for Quickback-compiled backends.\n\n## Project Structure\n\n```\nmy-account-app/\n├── src/\n│ ├── main.tsx # App entry point\n│ ├── App.tsx # Router & routes\n│ ├── worker.ts # Cloudflare Worker entry\n│ ├── auth/\n│ │ └── authClient.ts # Better Auth client\n│ ├── config/\n│ │ ├── app.ts # App config, labels, messages, features\n│ │ ├── features.ts # Feature flag helpers\n│ │ ├── routes.ts # Route helpers\n│ │ └── runtime.ts # Runtime API URL resolution\n│ ├── pages/ # Page components\n│ ├── components/ # Shared UI components\n│ ├── layouts/ # Auth & public layouts\n│ └── lib/ # API client, utilities\n├── .env.example # Environment template\n├── wrangler.toml # Cloudflare Worker config\n├── vite.config.ts # Vite build config\n└── tailwind.config.ts # Tailwind CSS config\n```\n\n## Next Steps\n\n- **[Standalone Usage](/account-ui/standalone)** - Use with any Better Auth backend (template mode)\n- **[With Quickback](/account-ui/with-quickback)** - Use with a Quickback-compiled backend\n- **[Library Usage](/account-ui/library-usage)** - Import as an npm package into your existing React app\n- **[Environment Variables](/account-ui/environment-variables)** - Configure your deployment\n- **[Feature Flags](/account-ui/features)** - Enable optional features\n- **[Customization](/account-ui/customization)** - Match your brand\n- **[Worker Setup](/account-ui/worker)** - Deploy to Cloudflare"
+ },
+ "account-ui/library-usage": {
+ "title": "Library Usage",
+ "content": "# Library Usage\n\nInstead of cloning the template, you can install Account UI as an npm package. This gives you automatic updates via `npm update` while configuring the UI through `setAppConfig()`.\n\n## Installation\n\n```bash\nnpm install quickback-better-auth-account-ui\n```\n\n### Peer Dependencies\n\nAccount UI expects these packages in your app:\n\n```bash\nnpm install react react-dom react-router-dom @tanstack/react-query\n```\n\n## Basic Setup\n\n### 1. Import and Configure\n\n```tsx title=\"src/main.tsx\"\n\nsetAppConfig({\n authRoute: 'quickback', // or 'better-auth' for standalone Better Auth\n name: 'My App',\n companyName: 'My Company Inc.',\n tagline: 'Build faster, ship sooner',\n description: 'My app description',\n});\n\nReactDOM.createRoot(document.getElementById('root')!).render(\n \n </React.StrictMode>\n);\n```\n\n### 2. Set API URLs\n\nThe library reads API URLs from `globalThis` variables. Set these before importing the library:\n\n```tsx title=\"src/main.tsx\"\n// Set API URLs before auth-ui loads\n(globalThis as any).__QUICKBACK_API_URL__ = 'https://api.example.com';\n(globalThis as any).__QUICKBACK_APP_URL__ = 'https://account.example.com';\n\n// Then import dynamically to ensure globals are set first\nconst { AuthApp, setAppConfig } = await import('quickback-better-auth-account-ui');\nawait import('quickback-better-auth-account-ui/styles.css');\n```\n\nOr with Vite environment variables:\n\n```tsx title=\"src/main.tsx\"\n(globalThis as any).__QUICKBACK_API_URL__ = import.meta.env.VITE_API_URL;\n(globalThis as any).__QUICKBACK_APP_URL__ = import.meta.env.VITE_APP_URL;\n\nPromise.all([\n import('quickback-better-auth-account-ui'),\n import('quickback-better-auth-account-ui/styles.css'),\n]).then(([{ AuthApp, setAppConfig }]) => {\n setAppConfig({\n authRoute: 'quickback',\n name: import.meta.env.VITE_APP_NAME || 'My App',\n companyName: import.meta.env.VITE_COMPANY_NAME || 'My Company',\n tagline: import.meta.env.VITE_APP_TAGLINE || 'Welcome',\n });\n\n ReactDOM.createRoot(document.getElementById('root')!).render(\n \n </React.StrictMode>\n );\n});\n```\n\n## Cloudflare Worker\n\nThe library exports a Cloudflare Worker entry point for SPA routing and static asset serving:\n\n```ts title=\"src/worker.ts\"\nexport { default } from 'quickback-better-auth-account-ui/worker';\n```\n\nOr extend it with custom logic:\n\n```ts title=\"src/worker.ts\"\n\nexport default createAuthWorker();\n```\n\n## Configuration via `setAppConfig()`\n\nIn library mode, all configuration happens through `setAppConfig()` instead of environment variables.\n\n### Auth Route Mode\n\n```ts\nsetAppConfig({\n authRoute: 'quickback', // Quickback API: /auth/v1, /api/v1, /storage/v1\n // or\n authRoute: 'better-auth', // Better Auth default: /api/auth\n});\n```\n\n### Branding\n\n```ts\nsetAppConfig({\n name: 'Acme SaaS',\n companyName: 'Acme Corporation',\n tagline: 'Build faster, ship sooner',\n description: 'The complete platform for modern web applications',\n branding: {\n primaryColor: '#3b82f6',\n logoUrl: '/logo.png',\n faviconUrl: '/favicon.ico',\n },\n});\n```\n\n### Labels and Messages\n\n```ts\nsetAppConfig({\n labels: {\n organizations: 'Workspaces',\n terms: 'Terms and Conditions',\n privacy: 'Privacy Notice',\n },\n messages: {\n noOrganizations: \"You haven't joined any workspaces yet.\",\n createOrganization: 'Create Workspace',\n },\n});\n```\n\n### Routes\n\n```ts\nsetAppConfig({\n routes: {\n public: {\n login: '/sign-in',\n signup: '/sign-up',\n },\n 
organizations: {\n list: '/workspaces',\n create: '/workspaces/new',\n },\n },\n});\n```\n\n### Password and Session\n\n```ts\nsetAppConfig({\n auth: {\n passwordRequirements: {\n minLength: 12,\n requireSymbols: true,\n },\n sessionDuration: 7 * 24 * 60 * 60, // 7 days\n },\n});\n```\n\nSee [Customization](/account-ui/customization) for the full reference of all configurable options.\n\n## Exports\n\nThe library provides the following exports:\n\n### Main Entry (`quickback-better-auth-account-ui`)\n\n| Export | Type | Description |\n|--------|------|-------------|\n| `AuthApp` | Component | The main React application component |\n| `authClient` | Object | Pre-configured Better Auth client |\n| `appConfig` | Object | Current app configuration |\n| `setAppConfig` | Function | Set app configuration overrides |\n| `createAppConfig` | Function | Create a new config object |\n| `getApiBase` | Function | Get the current API base URL |\n| `setApiBase` | Function | Override the API base URL |\n| `getAuthApiUrl` | Function | Get the auth API URL |\n| `getDataApiUrl` | Function | Get the data API URL |\n| `getStorageApiUrl` | Function | Get the storage API URL |\n\n### Styles (`quickback-better-auth-account-ui/styles.css`)\n\nThe compiled CSS for the Account UI. Must be imported for the UI to render correctly.\n\n### Worker (`quickback-better-auth-account-ui/worker`)\n\n| Export | Type | Description |\n|--------|------|-------------|\n| `default` | Worker | Cloudflare Worker with SPA routing |\n| `createAuthWorker` | Function | Factory for creating a worker instance |\n\n## Custom Styles\n\nOverride styles by importing your CSS after the library styles:\n\n```ts\n\n```\n\n```css title=\"my-overrides.css\"\n:root {\n --primary: 59 130 246;\n --primary-foreground: 255 255 255;\n}\n```\n\n## Updating\n\nUpdate to the latest version:\n\n```bash\nnpm update quickback-better-auth-account-ui\n```\n\nThe library follows semantic versioning. Check the [changelog](https://github.com/Kardoe-com/quickback-better-auth-account-ui/releases) for breaking changes.\n\n## Next Steps\n\n- **[Customization](/account-ui/customization)** — Full configuration reference\n- **[Feature Flags](/account-ui/features)** — Enable and disable features\n- **[Worker Setup](/account-ui/worker)** — Deploy to Cloudflare Workers\n- **[With Quickback](/account-ui/with-quickback)** — Quickback-specific configuration"
+ },
+ "account-ui/standalone": {
+ "title": "Standalone Usage",
+ "content": "Account UI can be used with **any Better Auth-powered backend**, not just Quickback-compiled projects. This makes it a drop-in authentication frontend for any app using Better Auth.\n\n## Requirements\n\n- A backend running [Better Auth](https://better-auth.com) with API endpoints\n- Node.js 18+ and npm/pnpm/bun\n- A Cloudflare account (for deployment)\n\n## Setup\n\n### 1. Clone or Install the Account UI\n\n```bash\nnpx degit Kardoe-com/quickback-better-auth-account-ui my-account-app\ncd my-account-app\nnpm install\n```\n\n### 2. Configure the Auth Route Mode\n\nThe key difference from Quickback mode is the `VITE_AUTH_ROUTE` variable. Set it to `better-auth` to use Better Auth's default API paths:\n\n```bash title=\".env\"\n# Auth route mode — use 'better-auth' for standalone\nVITE_AUTH_ROUTE=better-auth\n\n# Your Better Auth backend URL\nVITE_API_URL=https://api.example.com\n\n# Where this Account UI is hosted\nVITE_ACCOUNT_APP_URL=https://account.example.com\n```\n\nThis configures the auth client to use `/api/auth` as the base path (Better Auth's default), instead of Quickback's `/auth/v1`.\n\n### 3. Configure Your App\n\n```bash title=\".env\"\n# App identity\nVITE_APP_NAME=My App\nVITE_APP_TAGLINE=Welcome to My App\n\n# Redirect after login\nVITE_APP_URL=https://app.example.com\n\n# Feature flags\nENABLE_SIGNUP=true\nENABLE_ORGANIZATIONS=true\nENABLE_PASSKEYS=true\nENABLE_EMAIL_OTP=true\n```\n\nSee [Environment Variables](/account-ui/environment-variables) for the complete reference.\n\n### 4. Build and Deploy\n\n```bash\nnpm run build\nnpx wrangler deploy\n```\n\n## Auth Route Modes\n\nAccount UI supports three routing modes, controlled by `VITE_AUTH_ROUTE`:\n\n| Mode | Value | Auth Path | Data Path | Storage Path |\n|------|-------|-----------|-----------|--------------|\n| Better Auth | `better-auth` | `/api/auth` | — | — |\n| Quickback | `quickback` | `/auth/v1` | `/api/v1` | `/storage/v1` |\n| Custom | `custom` | Custom | Custom | Custom |\n\n### Better Auth Mode (Default)\n\nRoutes all auth requests to `/api/auth/*` — the standard Better Auth convention:\n\n```bash\nVITE_AUTH_ROUTE=better-auth\n```\n\n### Custom Mode\n\nFor non-standard setups, use custom mode with explicit paths:\n\n```bash\nVITE_AUTH_ROUTE=custom\nVITE_AUTH_BASE_PATH=/my-auth\nVITE_DATA_BASE_PATH=/my-api\nVITE_STORAGE_BASE_PATH=/my-storage\n```\n\n## Backend Requirements\n\nYour Better Auth backend must have the following plugins enabled for full Account UI functionality:\n\n| Feature | Required Plugin | Required For |\n|---------|----------------|--------------|\n| Organizations | `organization` | Multi-tenant org management |\n| Admin panel | `admin` | User management dashboard |\n| API keys | `apiKey` | API key generation UI |\n| Passkeys | `@better-auth/passkey` | WebAuthn authentication |\n| Email OTP | `emailOTP` | One-time password login |\n| CLI login | `deviceAuthorization` | `quickback login` flow |\n\nOnly enable the features that match your backend's plugin configuration. Disable features for plugins you haven't installed:\n\n```bash\n# If your backend doesn't have the organization plugin:\nENABLE_ORGANIZATIONS=false\n\n# If you haven't set up passkeys:\nENABLE_PASSKEYS=false\n```\n\n## CORS Configuration\n\nYour Better Auth backend must allow requests from the Account UI's domain. 
Configure CORS to include:\n\n```typescript\n// In your Better Auth config\ntrustedOrigins: [\n \"https://account.example.com\", // Account UI domain\n],\n```\n\n## Differences from Quickback Mode\n\n| Aspect | Standalone | With Quickback |\n|--------|-----------|----------------|\n| Auth routes | `/api/auth/*` | `/auth/v1/*` |\n| Data API | Not used | `/api/v1/*` |\n| Storage API | Not used | `/storage/v1/*` |\n| File uploads | Manual setup | Auto-configured |\n| Backend | Any Better Auth app | Quickback-compiled |\n\n## Next Steps\n\n- [Environment Variables](/account-ui/environment-variables) — Complete configuration reference\n- [Customization](/account-ui/customization) — Branding, labels, and theming\n- [Worker Setup](/account-ui/worker) — Cloudflare Workers deployment\n- [Feature Flags](/account-ui/features) — Enable and disable features"
+ },
+ "account-ui/with-quickback": {
+ "title": "With Quickback",
+ "content": "When using Account UI with a Quickback-compiled backend, set the auth route mode to `quickback` to match Quickback's API structure. This is the recommended setup for Quickback projects.\n\n## Setup\n\nYou can use Account UI with Quickback as either a standalone template or an npm library.\n\n### Option A: Template (Clone the Source)\n\n#### 1. Clone the Template\n\n```bash\nnpx degit Kardoe-com/quickback-better-auth-account-ui my-account-app\ncd my-account-app\nnpm install\n```\n\n#### 2. Set the Auth Route Mode\n\nConfigure Account UI to use Quickback's route conventions:\n\n```bash title=\".env\"\n# Use Quickback route mode\nVITE_AUTH_ROUTE=quickback\n\n# Your Quickback API URL\nVITE_API_URL=https://api.example.com\n\n# Where this Account UI is hosted\nVITE_ACCOUNT_APP_URL=https://account.example.com\n\n# Your main application URL (redirect after login)\nVITE_APP_URL=https://app.example.com\n```\n\n#### 3. Configure Your App\n\n```bash title=\".env\"\n# App identity\nVITE_APP_NAME=My App\nVITE_APP_TAGLINE=Build faster, ship sooner\nVITE_COMPANY_NAME=My Company Inc.\n\n# Organization routing\nVITE_TENANT_URL_PATTERN=/organizations/{slug}\n\n# Features (match your quickback.config.ts)\nENABLE_SIGNUP=true\nENABLE_ORGANIZATIONS=true\nENABLE_PASSKEYS=true\nENABLE_EMAIL_OTP=true\nENABLE_ADMIN=true\n```\n\n#### 4. Deploy\n\n```bash\nnpm run build\nnpx wrangler deploy\n```\n\n### Option B: Library (npm install)\n\nInstall as a dependency in your existing React app:\n\n```bash\nnpm install quickback-better-auth-account-ui\n```\n\n```tsx title=\"src/main.tsx\"\n\nsetAppConfig({\n authRoute: 'quickback',\n name: 'My App',\n companyName: 'My Company Inc.',\n tagline: 'Build faster, ship sooner',\n});\n\n// Render AuthApp inside your router\n```\n\nFor a Cloudflare Worker, re-export the worker entry:\n\n```ts title=\"src/worker.ts\"\nexport { default } from 'quickback-better-auth-account-ui/worker';\n```\n\nSee [Library Usage](/account-ui/library-usage) for the complete guide.\n\n## How It Works\n\nIn Quickback mode, Account UI routes requests to three API paths:\n\n| Path | Purpose | Examples |\n|------|---------|---------|\n| `/auth/v1/*` | Authentication | Login, signup, sessions, passkeys |\n| `/api/v1/*` | Data API | Organizations, API keys |\n| `/storage/v1/*` | File storage | Avatar uploads, file management |\n\nThese paths match what the Quickback compiler generates in your backend's `src/index.ts`.\n\n## Architecture\n\nA typical Quickback deployment has three services:\n\n```\naccount.example.com → Account UI (this app)\napi.example.com → Quickback-compiled API\napp.example.com → Your main application\n```\n\nThe Account UI handles all authentication flows. After login, users are redirected to your main application at `VITE_APP_URL`. 
Organization-specific redirects use the `VITE_TENANT_URL_PATTERN`.\n\n## Matching Features to Config\n\nEnable Account UI features that match your `quickback.config.ts`:\n\n```typescript title=\"quickback/quickback.config.ts\"\nexport default defineConfig({\n features: [\"organizations\"], // → ENABLE_ORGANIZATIONS=true\n providers: {\n auth: {\n name: \"better-auth\",\n config: {\n plugins: [\n \"organization\", // → ENABLE_ORGANIZATIONS=true\n \"admin\", // → ENABLE_ADMIN=true\n \"apiKey\", // → API key UI available\n \"passkey\", // → ENABLE_PASSKEYS=true\n \"emailOtp\", // → ENABLE_EMAIL_OTP=true\n \"deviceAuthorization\", // → CLI login flow available\n ],\n },\n },\n },\n});\n```\n\nIf your config doesn't include a plugin, disable the corresponding feature flag in Account UI to hide that section of the UI.\n\n## File Uploads\n\nWhen your Quickback project includes R2 file storage, enable avatar uploads:\n\n```bash\nVITE_ENABLE_FILE_UPLOADS=true\n```\n\nThis allows users to upload profile avatars via the storage API at `/storage/v1/object/avatars/`.\n\n## CLI Authorization\n\nIf your Quickback config includes the `deviceAuthorization` plugin, Account UI provides the `/cli/authorize` page. When users run `quickback login`, they're directed to this page to approve the CLI session.\n\nNo additional configuration is needed — the device authorization endpoints are generated automatically by the compiler.\n\n## Environment-Specific Setup\n\n### Development\n\n```bash title=\".env.development\"\nVITE_AUTH_ROUTE=quickback\nVITE_API_URL=http://localhost:8787\nVITE_ACCOUNT_APP_URL=http://localhost:5173\nVITE_APP_URL=http://localhost:3000\nDISABLE_EMAIL_STATUS_CHECK=true\n```\n\n### Production\n\n```bash title=\".env.production\"\nVITE_AUTH_ROUTE=quickback\nVITE_API_URL=https://api.example.com\nVITE_ACCOUNT_APP_URL=https://account.example.com\nVITE_APP_URL=https://app.example.com\nENABLE_EMAIL_VERIFICATION=true\n```\n\n## Wrangler Configuration\n\nFor Cloudflare Workers deployment, set variables in `wrangler.toml`:\n\n```toml title=\"wrangler.toml\"\n[vars]\nVITE_AUTH_ROUTE = \"quickback\"\nVITE_API_URL = \"https://api.example.com\"\nVITE_ACCOUNT_APP_URL = \"https://account.example.com\"\nVITE_APP_URL = \"https://app.example.com\"\nVITE_APP_NAME = \"My App\"\nVITE_TENANT_URL_PATTERN = \"/organizations/{slug}\"\n```\n\n## Next Steps\n\n- [Environment Variables](/account-ui/environment-variables) — Complete variable reference\n- [Customization](/account-ui/customization) — Branding, labels, and theming\n- [Feature Flags](/account-ui/features) — Enable and disable features\n- [Worker Setup](/account-ui/worker) — Cloudflare Workers deployment details"
+ },
+ "account-ui/worker": {
+ "title": "Worker Setup",
+ "content": "# Worker Setup\n\nDeploy the Account UI as a Cloudflare Worker with static asset serving.\n\n## Quick Start\n\n### Using the Template\n\n```bash\nnpx degit Kardoe-com/quickback-better-auth-account-ui my-account-app\ncd my-account-app\nnpm install\n```\n\n### Using the Library\n\nIf you're consuming Account UI as an npm package, re-export the worker from the library:\n\n```ts title=\"src/worker.ts\"\nexport { default } from 'quickback-better-auth-account-ui/worker';\n```\n\nThen continue with the Wrangler configuration below.\n\n### 2. Configure Wrangler\n\nThe template includes a `wrangler.toml` pre-configured for Cloudflare Workers:\n\n```toml title=\"wrangler.toml\"\nname = \"my-account-app\"\ncompatibility_date = \"2025-01-01\"\ncompatibility_flags = [\"nodejs_compat\"]\n\n# Custom domain (uncomment and set your domain)\n# routes = [\n# { pattern = \"account.example.com\", custom_domain = true }\n# ]\n\n# Worker entry point\nmain = \"src/worker.ts\"\n\n# Static assets\n[assets]\nbinding = \"ASSETS\"\ndirectory = \"dist/client\"\n```\n\n### 3. Build and Deploy\n\n```bash\nnpm run build\nnpx wrangler deploy\n```\n\n## Worker Entry Point\n\nThe worker is at `src/worker.ts`. It serves static assets and handles SPA routing:\n\n- Static asset serving via the `ASSETS` binding\n- SPA routing (all routes serve `index.html`)\n- Health check endpoint (`/health`)\n\n### Custom Worker Logic\n\nEdit `src/worker.ts` directly to add custom logic:\n\n```ts title=\"src/worker.ts\"\nexport default {\n async fetch(request: Request, env: Env): Promise\n\n### Multiple Environments\n\n```toml title=\"wrangler.toml\"\n# Development\n[env.dev]\nname = \"my-account-app-dev\"\nroute = \"account-dev.example.com/*\"\n\n[env.dev.vars]\nVITE_API_URL = \"https://api-dev.example.com\"\nVITE_ACCOUNT_APP_URL = \"https://account-dev.example.com\"\n\n# Staging\n[env.staging]\nname = \"my-account-app-staging\"\nroute = \"account-staging.example.com/*\"\n\n[env.staging.vars]\nVITE_API_URL = \"https://api-staging.example.com\"\nVITE_ACCOUNT_APP_URL = \"https://account-staging.example.com\"\n\n# Production\n[env.production]\nname = \"my-account-app\"\nroute = \"account.example.com/*\"\n\n[env.production.vars]\nVITE_API_URL = \"https://api.example.com\"\nVITE_ACCOUNT_APP_URL = \"https://account.example.com\"\n```\n\nDeploy to specific environment:\n\n```bash\nwrangler deploy --env staging\nwrangler deploy --env production\n```\n\n## Cloudflare Pages Alternative\n\nYou can also deploy to Cloudflare Pages instead of Workers:\n\n### 1. Configure Build\n\n```toml title=\"wrangler.toml\"\n# Remove [assets] section - Pages handles this\n\nname = \"my-account-app\"\npages_build_output_dir = \"dist/client\"\n```\n\n### 2. Deploy to Pages\n\n```bash\n# First time setup\nwrangler pages project create my-account-app\n\n# Deploy\nwrangler pages deploy dist/client\n```\n\n### 3. Configure Environment Variables\n\nSet environment variables in Cloudflare Pages dashboard:\n1. Go to your Pages project\n2. **Settings** → **Environment Variables**\n3. Add `VITE_*` variables\n4. 
Redeploy\n\n## Local Development\n\n### Option 1: Vite Dev Server\n\nThe fastest way to develop locally:\n\n```bash\nnpx vite dev\n```\n\nThis starts the Vite dev server with hot module replacement on `localhost:5173`.\n\n### Option 2: Wrangler Dev\n\nTest the worker locally with `wrangler dev`:\n\n```bash\nnpm run build && wrangler dev\n```\n\nThis runs the worker on `localhost:8787`.\n\n## Health Check\n\nThe worker includes a built-in health check endpoint:\n\n```bash\ncurl https://account.example.com/health\n```\n\nResponse:\n\n```json\n{\n \"status\": \"ok\",\n \"app\": \"auth-ui\"\n}\n```\n\nUse this for:\n- Uptime monitoring\n- Load balancer health checks\n- Deployment verification\n\n## Security Headers\n\nAdd security headers by editing `src/worker.ts`:\n\n```ts title=\"src/worker.ts\"\nexport default {\n async fetch(request: Request, env: Env): Promise<Response> {\n // ... routing logic ...\n\n // Add security headers to response\n const response = await env.ASSETS.fetch(request);\n\n const headers = new Headers(response.headers);\n headers.set('X-Frame-Options', 'DENY');\n headers.set('X-Content-Type-Options', 'nosniff');\n headers.set('Referrer-Policy', 'strict-origin-when-cross-origin');\n headers.set('Permissions-Policy', 'geolocation=(), microphone=(), camera=()');\n\n return new Response(response.body, {\n status: response.status,\n statusText: response.statusText,\n headers,\n });\n },\n};\n```\n\n## Custom 404 Page\n\nServe a custom 404 page for unmatched routes:\n\n```ts title=\"src/worker.ts\"\nexport default {\n async fetch(request: Request, env: Env): Promise<Response> {\n const url = new URL(request.url);\n\n // Try to serve static asset\n const assetResponse = await env.ASSETS.fetch(request);\n\n if (assetResponse.status !== 404) {\n return assetResponse;\n }\n\n // Serve index.html for SPA routes\n if (!url.pathname.startsWith('/api/')) {\n const indexRequest = new Request(`${url.origin}/index.html`, request);\n return env.ASSETS.fetch(indexRequest);\n }\n\n // Custom 404 for API routes\n return new Response('Not Found', { status: 404 });\n },\n};\n```\n\n## Deployment Checklist\n\nBefore deploying to production:\n\n- [ ] Set all required environment variables (`VITE_API_URL`, etc.)\n- [ ] Configure custom domain in Cloudflare\n- [ ] Add SSL certificate (automatic with Cloudflare)\n- [ ] Test all authentication flows\n- [ ] Configure DNS records\n- [ ] Set up monitoring/health checks\n- [ ] Test email sending\n- [ ] Verify passkey functionality (requires HTTPS)\n- [ ] Check CORS configuration on API\n- [ ] Review security headers\n- [ ] Test organization invitations\n- [ ] Verify redirection to main app works\n\n## Troubleshooting\n\n### Assets Not Loading\n\n**Problem**: 404 errors for static assets (JS, CSS)\n\n**Solution**: Check `directory` path in `wrangler.toml`:\n\n```toml\n[assets]\nbinding = \"ASSETS\"\ndirectory = \"dist/client\" # Must point to Vite's client build output\n```\n\n### Environment Variables Not Working\n\n**Problem**: Config values are undefined\n\n**Solution**: Remember that `VITE_*` variables are build-time. Either:\n1. Set them in `.env.production` before building\n2. Use `setAppConfig()` in `src/main.tsx` for runtime overrides\n3. 
Edit `src/config/app.ts` defaults directly\n\n### SPA Routing Issues\n\n**Problem**: Refreshing on `/profile` returns 404\n\n**Solution**: Ensure your worker serves `index.html` for all non-asset routes:\n\n```ts\nif (assetResponse.status === 404 && !url.pathname.startsWith('/api/')) {\n return env.ASSETS.fetch(new Request(`${url.origin}/index.html`, request));\n}\n```\n\n## Next Steps\n\n- **[Environment Variables](/account-ui/environment-variables)** - Configure your deployment\n- **[Feature Flags](/account-ui/features)** - Enable features\n- **[Customization](/account-ui/customization)** - Brand your UI"
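The deployment checklist above includes setting up monitoring against the `/health` endpoint. A minimal TypeScript sketch of such a check, assuming your worker is served at `account.example.com` (substitute your own domain):

```ts
// Minimal post-deploy verification against the documented /health endpoint.
const res = await fetch('https://account.example.com/health');
if (!res.ok) {
  throw new Error(`Health check failed with status ${res.status}`);
}

const body = (await res.json()) as { status: string; app: string };
console.log(body.status === 'ok' ? 'Worker is healthy' : 'Unexpected payload', body);
```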
67
+ },
68
+ "compiler/cloud-compiler/authentication": {
69
+ "title": "Authentication",
70
+ "content": "The CLI supports two authentication methods: interactive login for development and API keys for CI/CD.\n\n## Interactive Login (Recommended)\n\nThe CLI uses [OAuth 2.0 Device Authorization](https://datatracker.ietf.org/doc/html/rfc8628) to authenticate:\n\n```bash\nquickback login\n```\n\nA code is displayed in your terminal. Approve it in your browser and you're authenticated. See the [CLI Reference](/compiler/cloud-compiler/cli#login) for the full flow.\n\nCredentials are stored at `~/.quickback/credentials.json` and include your session token, user info, and active organization.\n\n## API Key (CI/CD)\n\nFor non-interactive environments (CI/CD, scripts), use an API key:\n\n```bash\nQUICKBACK_API_KEY=your_api_key quickback compile\n```\n\nCreate API keys from your [Quickback account](https://account.quickback.dev/profile). Each key is scoped to your organization.\n\nThe API key takes precedence over stored credentials from `quickback login`.\n\n## How Tokens Are Validated\n\nThe compiler-cloud worker validates authentication by forwarding your token to the Quickback API's `/internal/validate` endpoint via a [Cloudflare service binding](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/). This resolves both session tokens (from `quickback login`) and API keys (from `QUICKBACK_API_KEY`).\n\n## Credential Storage\n\nCredentials are stored at `~/.quickback/credentials.json`:\n\n```json\n{\n \"token\": \"...\",\n \"user\": {\n \"id\": \"...\",\n \"email\": \"paul@example.com\",\n \"name\": \"Paul Stenhouse\",\n \"tier\": \"free\"\n },\n \"expiresAt\": \"2026-02-16T01:42:21.519Z\",\n \"organization\": {\n \"id\": \"...\",\n \"name\": \"Acme\",\n \"slug\": \"acme\"\n }\n}\n```\n\nSessions expire after 7 days. Run `quickback login` again to re-authenticate.\n\n## Organizations\n\nAfter login, the CLI auto-selects your organization:\n- **One organization** — automatically set as active.\n- **Multiple organizations** — you're prompted to choose one.\n\nThe active organization is stored in your credentials and sent with compile requests, so the compiler knows which org context to use."
71
+ },
72
+ "compiler/cloud-compiler/cli": {
73
+ "title": "CLI Reference",
74
+ "content": "The Quickback CLI is the fastest way to create, compile, and manage your backend projects.\n\n## Installation\n\n```bash\nnpm install -g @kardoe/quickback\n```\n\n## Commands\n\n### Create a Project\n\n```bash\nquickback create <template> <name>\n```\n\n**Templates:**\n- `cloudflare` - Cloudflare Workers + D1 + Better Auth (free)\n- `bun` - Bun + SQLite + Better Auth (free)\n- `turso` - Turso/LibSQL + Better Auth (pro)\n\n**Example:**\n```bash\nquickback create cloudflare my-app\n```\n\nThis scaffolds a complete project with:\n- `quickback.config.ts` - Project configuration\n- `definitions/features/` - Your table definitions\n- Example todos feature with full security configuration\n\n### Compile Definitions\n\n```bash\nquickback compile\n```\n\nReads your definitions and generates:\n- Database migrations (Drizzle)\n- API route handlers (Hono)\n- TypeScript client SDK\n- OpenAPI specification\n\nRun this after making changes to your definitions.\n\n### Initialize Structure\n\n```bash\nquickback init\n```\n\nCreates the Quickback folder structure in an existing project:\n```\nquickback/\n definitions/\n features/\n auth/\n quickback.config.ts\n```\n\n### View Documentation\n\n```bash\nquickback docs # List available topics\nquickback docs <topic> # Show documentation for a topic\n```\n\n**Available topics:**\n- `firewall` - Data isolation layer\n- `access` - Role-based permissions\n- `guards` - Field protection\n- `masking` - PII redaction\n- `actions` - Custom business logic\n- `api` - CRUD endpoints reference\n- `config` - Configuration reference\n- `features` - Schema definitions\n\nDocumentation is bundled with the CLI and works offline.\n\n### Manage Claude Code Skill\n\n```bash\nquickback claude install # Interactive install\nquickback claude install --global # Install to ~/.claude/skills/\nquickback claude install --local # Install to ./quickback/.claude/\nquickback claude update # Update to latest version\nquickback claude remove # Remove installed skill\nquickback claude status # Check installation status\n```\n\nThe Quickback skill for Claude Code provides AI assistance for:\n- Creating resource definitions with proper security layers\n- Configuring Firewall, Access, Guards, and Masking\n- Debugging configuration issues\n- Understanding security patterns\n\n## Authentication\n\n### Login\n\n```bash\nquickback login\n```\n\nUses the [OAuth 2.0 Device Authorization Grant](https://datatracker.ietf.org/doc/html/rfc8628) (RFC 8628) to authenticate securely without exposing tokens in URLs.\n\n**How it works:**\n\n1. The CLI requests a one-time device code from the Quickback API.\n2. A code is displayed in your terminal (e.g., `AUL8-H93S`).\n3. Your browser opens to the Quickback account page where you approve the code.\n4. The CLI detects approval and exchanges it for a session token.\n5. If you belong to one organization, it's auto-selected. If you have multiple, you choose one.\n6. 
Credentials are stored locally.\n\n```\n$ quickback login\n\n🔐 Quickback Login\n\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n Your code: AUL8-H93S\n\n Visit: https://account.quickback.dev/cli/authorize\n\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n✓ Login successful!\nWelcome, Paul Stenhouse!\n\nUsing organization: Acme\n✓ Active organization: Acme\n```\n\nThis flow works in headless environments (SSH, containers, WSL) since it doesn't require a localhost callback.\n\n### Logout\n\n```bash\nquickback logout\n```\n\nClears stored credentials from `~/.quickback/credentials.json`.\n\n### Check Auth Status\n\n```bash\nquickback whoami\n```\n\nShows the currently authenticated user, organization, and token expiration.\n\n### Credential Storage\n\nCredentials are stored at `~/.quickback/credentials.json`:\n\n```json\n{\n \"token\": \"...\",\n \"user\": {\n \"id\": \"...\",\n \"email\": \"paul@example.com\",\n \"name\": \"Paul Stenhouse\",\n \"tier\": \"free\"\n },\n \"expiresAt\": \"2026-02-16T01:42:21.519Z\",\n \"organization\": {\n \"id\": \"...\",\n \"name\": \"Acme\",\n \"slug\": \"acme\"\n }\n}\n```\n\nSessions expire after 7 days. Run `quickback login` again to re-authenticate.\n\n### Organizations\n\nAfter login, the CLI auto-selects your organization:\n- **One organization** - automatically set as active.\n- **Multiple organizations** - you're prompted to choose one.\n\nThe active organization is stored in your credentials and sent with compile requests, so the compiler knows which org context to use.\n\n## Quick Start\n\n```bash\n# 1. Create a new project\nquickback create cloudflare my-app\ncd my-app\n\n# 2. Define your tables in definitions/features/\n# (Already has example todos feature)\n\n# 3. Compile\nquickback compile\n\n# 4. Run\nnpm run dev\n```\n\n## Options\n\n| Flag | Description |\n|------|-------------|\n| `-v, --version` | Show version number |\n| `-h, --help` | Show help message |\n\n## Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| `QUICKBACK_API_KEY` | API key for authentication (alternative to `quickback login`) |\n| `QUICKBACK_API_URL` | Override compiler API URL |\n\n### API Key Authentication\n\nUse an API key instead of interactive login. Useful for CI/CD pipelines and automated workflows:\n\n```bash\n# Pass API key for a single command\nQUICKBACK_API_KEY=your_api_key quickback compile\n\n# Or export for the session\nexport QUICKBACK_API_KEY=your_api_key\nquickback compile\n```\n\nThe API key takes precedence over stored credentials from `quickback login`.\n\nYou can create API keys from your [Quickback account](https://account.quickback.dev/profile). Each key is scoped to your organization.\n\n### Custom Compiler URL\n\nPoint the CLI to a different compiler (local or custom):\n\n```bash\n# Use a local compiler\nQUICKBACK_API_URL=http://localhost:3020 quickback compile\n\n# Or export for the session\nexport QUICKBACK_API_URL=http://localhost:3020\nquickback compile\n```\n\nSee [Local Compiler](/compiler/cloud-compiler/local-compiler) for running the compiler locally with Docker.\n\n## Troubleshooting\n\n### \"Command not found: quickback\"\n\nMake sure the CLI is installed globally:\n```bash\nnpm install -g @kardoe/quickback\n```\n\nOr use npx:\n```bash\nnpx @kardoe/quickback create cloudflare my-app\n```\n\n### Compile errors\n\n1. Check your `quickback.config.ts` exists and is valid\n2. Ensure all tables in `definitions/features/` have valid exports\n3. 
Run `quickback compile` with `--verbose` for detailed output\n\n### Authentication issues\n\nClear credentials and re-authenticate:\n```bash\nquickback logout\nquickback login\n```\n\n### \"Could not load organizations\"\n\nThis can happen if your session token expired or if the API is temporarily unavailable. Re-login:\n```bash\nquickback logout\nquickback login\n```"
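The login steps above follow the standard RFC 8628 pattern: request a device code, show it to the user, then poll until the code is approved. The sketch below illustrates that pattern generically in TypeScript; the endpoint paths and field names are placeholders, not Quickback's actual API.

```ts
// Generic RFC 8628 device-flow loop (illustrative; /device/code and
// /device/token are hypothetical endpoint paths).
async function deviceLogin(authBase: string, clientId: string) {
  const start = await fetch(`${authBase}/device/code`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ client_id: clientId }),
  }).then((r) => r.json());

  console.log(`Your code: ${start.user_code}`);
  console.log(`Visit: ${start.verification_uri}`);

  // Poll until the user approves the code in their browser.
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, start.interval * 1000));
    const res = await fetch(`${authBase}/device/token`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ device_code: start.device_code, client_id: clientId }),
    });
    if (res.ok) return res.json(); // session token to persist locally
  }
}
```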
75
+ },
76
+ "compiler/cloud-compiler/endpoints": {
77
+ "title": "Endpoints",
78
+ "content": "The cloud compiler API is hosted at `https://compiler.quickback.dev`.\n\n## Public (No Auth)\n\n| Endpoint | Method | Description |\n|----------|--------|-------------|\n| `/health` | GET | Health check |\n| `/templates` | GET | List available templates |\n\n## Authenticated\n\n| Endpoint | Method | Description |\n|----------|--------|-------------|\n| `/compile` | POST | Compile definitions into backend files |\n\nThe `/compile` endpoint requires either a session token (from `quickback login`) or an API key (via `QUICKBACK_API_KEY` header). See [Authentication](/compiler/cloud-compiler/authentication) for details.\n\n## Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| `QUICKBACK_API_KEY` | API key for authentication (alternative to `quickback login`) |\n| `QUICKBACK_API_URL` | Override compiler API URL (default: `https://compiler.quickback.dev`) |\n\n### Custom Compiler URL\n\nPoint the CLI to a different compiler (local or custom):\n\n```bash\nQUICKBACK_API_URL=http://localhost:3020 quickback compile\n```\n\nSee [Local Compiler](/compiler/cloud-compiler/local-compiler) for running the compiler locally with Docker."
79
+ },
80
+ "compiler/cloud-compiler": {
81
+ "title": "Cloud Compiler",
82
+ "content": "Quickback offers a hosted compiler at `https://compiler.quickback.dev`. The CLI uses it to turn your definitions into a complete backend.\n\n## How It Works\n\n```\n┌──────────────┐ ┌──────────────────────┐\n│ quickback │ POST │ compiler.quickback │\n│ CLI │────────▶│ .dev/compile │\n│ │ │ │\n│ Sends: │ │ Returns: │\n│ - config │ │ - Hono routes │\n│ - features │◀────────│ - Drizzle migrations │\n│ - meta │ │ - TypeScript SDK │\n└──────────────┘ └──────────────────────┘\n```\n\n1. The CLI reads your `quickback.config.ts` and feature definitions.\n2. It sends them to the cloud compiler as a JSON payload.\n3. The compiler generates all backend files (routes, migrations, SDK, OpenAPI spec).\n4. The CLI writes the generated files to your project.\n\n## Quick Start\n\n```bash\n# 1. Install the CLI\nnpm install -g @kardoe/quickback\n\n# 2. Create a project\nquickback create cloudflare my-app\ncd my-app\n\n# 3. Log in\nquickback login\n\n# 4. Compile\nquickback compile\n```\n\n## Existing Databases\n\nWhen recompiling an existing project, the CLI automatically sends your Drizzle meta files so the compiler generates **incremental** migrations instead of fresh `CREATE TABLE` statements.\n\nMeta files are loaded from:\n- `drizzle/features/meta/` (dual database mode)\n- `drizzle/meta/` (single database mode)\n\nNo extra configuration needed — the CLI handles this automatically.\n\n## Next Steps\n\n- [CLI Reference](/compiler/cloud-compiler/cli) — All CLI commands\n- [Authentication](/compiler/cloud-compiler/authentication) — Login flow and API keys\n- [Endpoints](/compiler/cloud-compiler/endpoints) — Compiler API reference\n- [Troubleshooting](/compiler/cloud-compiler/troubleshooting) — Common issues\n- [Local Compiler](/compiler/cloud-compiler/local-compiler) — Run the compiler locally"
83
+ },
84
+ "compiler/cloud-compiler/local-compiler": {
85
+ "title": "Local Compiler",
86
+ "content": "Run the Quickback compiler locally using Docker Desktop. This is for select users with local compiler access.\n\n## Prerequisites\n\n- Docker Desktop installed and running\n- Local compiler image access (contact support)\n\n## Setup\n\nPull and run the compiler container:\n\n```bash\ndocker run -d -p 3020:3020 --name quickback-compiler ghcr.io/kardoe/quickback-compiler:latest\n```\n\n## Usage\n\nPoint the CLI to your local compiler:\n\n```bash\nQUICKBACK_API_URL=http://localhost:3020 quickback compile\n```\n\nOr export it for the session:\n\n```bash\nexport QUICKBACK_API_URL=http://localhost:3020\nquickback compile\n```\n\n### With API Key\n\nIf your local compiler requires authentication:\n\n```bash\nQUICKBACK_API_URL=http://localhost:3020 QUICKBACK_API_KEY=your_key quickback compile\n```\n\n## Verify\n\nCheck the compiler is running:\n\n```bash\ncurl http://localhost:3020/health\n```\n\n## Ports\n\nThe default port is `3020`. If you need a different port:\n\n```bash\ndocker run -d -p 3000:3020 --name quickback-compiler ghcr.io/kardoe/quickback-compiler:latest\nQUICKBACK_API_URL=http://localhost:3000 quickback compile\n```\n\n## Troubleshooting\n\n### Container not starting\n\n```bash\ndocker logs quickback-compiler\n```\n\n### Port already in use\n\n```bash\n# Find what's using the port\nlsof -i :3020\n\n# Use a different port\ndocker run -d -p 3021:3020 --name quickback-compiler ghcr.io/kardoe/quickback-compiler:latest\n```\n\n### Stop and remove\n\n```bash\ndocker stop quickback-compiler\ndocker rm quickback-compiler\n```"
87
+ },
88
+ "compiler/cloud-compiler/troubleshooting": {
89
+ "title": "Troubleshooting",
90
+ "content": "## 401 Unauthorized\n\nYour session may have expired (sessions last 7 days). Re-authenticate:\n\n```bash\nquickback logout\nquickback login\n```\n\n## Compilation timeout\n\nLarge projects may take longer to compile. The cloud compiler uses [Cloudflare Containers](https://developers.cloudflare.com/containers/) to run compilation in isolated environments. If you hit timeouts, try [running the compiler locally](/compiler/cloud-compiler/local-compiler).\n\n## \"Command not found: quickback\"\n\nMake sure the CLI is installed globally:\n```bash\nnpm install -g @kardoe/quickback\n```\n\nOr use npx:\n```bash\nnpx @kardoe/quickback create cloudflare my-app\n```\n\n## Compile errors\n\n1. Check your `quickback.config.ts` exists and is valid\n2. Ensure all tables in `definitions/features/` have valid exports\n3. Run `quickback compile` with `--verbose` for detailed output\n\n## \"Could not load organizations\"\n\nThis can happen if your session token expired or if the API is temporarily unavailable. Re-login:\n```bash\nquickback logout\nquickback login\n```"
91
+ },
92
+ "compiler/config": {
93
+ "title": "Configuration",
94
+ "content": "The `quickback.config.ts` file configures your Quickback project. It defines which providers to use, which features to enable, and how the compiler generates code.\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n template: \"hono\",\n features: [\"organizations\"],\n providers: {\n runtime: defineRuntime(\"cloudflare\"),\n database: defineDatabase(\"cloudflare-d1\"),\n auth: defineAuth(\"better-auth\"),\n },\n});\n```\n\n## Required Fields\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `name` | `string` | Project name |\n| `template` | `\"hono\"` | Application template (`\"nextjs\"` is experimental) |\n| `providers` | `object` | Runtime, database, and auth provider configuration |\n\n## Optional Fields\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `features` | `string[]` | Feature flags, e.g. `[\"organizations\"]` |\n| `compiler` | `object` | Compiler options (audit fields, etc.) |\n| `openapi` | `object` | OpenAPI spec generation (`generate`, `publish`) — see [OpenAPI](/compiler/using-the-api/openapi) |\n\nSee the sub-pages for detailed reference on each section:\n- [Providers](/compiler/config/providers) — Runtime, database, and auth options\n- [Variables](/compiler/config/variables) — Environment variables and flags\n- [Output](/compiler/config/output) — Generated file structure"
95
+ },
96
+ "compiler/config/output": {
97
+ "title": "Output Structure",
98
+ "content": "The compiler generates a complete project structure based on your definitions and provider configuration. The output varies depending on your runtime (Cloudflare vs Bun) and enabled features.\n\n**Warning:** Never edit files in `src/` directly. They are overwritten on every compile. Make changes in your `quickback/` definitions instead.\n\n## Cloudflare Output\n\n```\nsrc/\n├── index.ts # Hono app entry point (Workers export)\n├── env.d.ts # Cloudflare bindings TypeScript types\n├── db/\n│ ├── index.ts # Database connection factory\n│ ├── auth-schema.ts # Auth table schemas (dual mode)\n│ └── features-schema.ts # Feature table schemas (dual mode)\n├── auth/\n│ └── index.ts # Better Auth instance & config\n├── features/\n│ └── {feature}/\n│ ├── schema.ts # Drizzle table definition\n│ ├── routes.ts # CRUD + action endpoints\n│ └── actions.ts # Action handlers (if defined)\n├── lib/\n│ ├── access.ts # Access control helpers\n│ ├── types.ts # Runtime type definitions\n│ ├── masks.ts # Field masking utilities\n│ ├── services.ts # Service layer\n│ └── audit-wrapper.ts # Auto-inject audit fields\n└── middleware/\n ├── auth.ts # Auth context middleware\n ├── db.ts # Database instance middleware\n └── services.ts # Service injection middleware\n\ndrizzle/\n├── auth/ # Auth migrations (dual mode)\n│ ├── meta/\n│ │ ├── _journal.json\n│ │ └── 0000_snapshot.json\n│ └── 0000_initial.sql\n└── features/ # Feature migrations (dual mode)\n ├── meta/\n │ ├── _journal.json\n │ └── 0000_snapshot.json\n └── 0000_initial.sql\n\n# Root config files\n├── package.json\n├── tsconfig.json\n├── wrangler.toml # Cloudflare Workers config\n├── drizzle.config.ts # Features DB drizzle config\n└── drizzle.auth.config.ts # Auth DB drizzle config (dual mode)\n```\n\n## Bun Output\n\n```\nsrc/\n├── index.ts # Hono app entry point (Bun server)\n├── db/\n│ ├── index.ts # SQLite connection (bun:sqlite)\n│ └── schema.ts # Combined schema (single DB)\n├── auth/\n│ └── index.ts # Better Auth instance\n├── features/\n│ └── {feature}/\n│ ├── schema.ts\n│ ├── routes.ts\n│ └── actions.ts\n├── lib/\n│ ├── access.ts\n│ ├── types.ts\n│ ├── masks.ts\n│ ├── services.ts\n│ └── audit-wrapper.ts\n└── middleware/\n ├── auth.ts\n ├── db.ts\n └── services.ts\n\ndrizzle/ # Single migration directory\n├── meta/\n│ ├── _journal.json\n│ └── 0000_snapshot.json\n└── 0000_initial.sql\n\ndata/ # SQLite database files\n└── app.db\n\n├── package.json\n├── tsconfig.json\n└── drizzle.config.ts\n```\n\n## Key Differences by Runtime\n\n| Aspect | Cloudflare | Bun |\n|--------|-----------|-----|\n| Entry point | Workers `export default` | `Bun.serve()` with port |\n| Database | D1 bindings (dual mode) | SQLite file (single DB) |\n| Types | `env.d.ts` for bindings | No extra types needed |\n| Config | `wrangler.toml` | `.env` file |\n| Migrations | `drizzle/auth/` + `drizzle/features/` | `drizzle/` |\n| Drizzle configs | 2 configs (auth + features) | 1 config |\n\n## Optional Output Files\n\nThese files are generated only when the corresponding features are configured:\n\n### Embeddings\n\nWhen any feature has `embeddings` configured:\n\n```\nsrc/\n├── lib/\n│ └── embeddings.ts # Embedding helpers\n├── routes/\n│ └── embeddings.ts # POST /api/v1/embeddings endpoint\n└── queue-consumer.ts # Queue handler for async embedding jobs\n```\n\n### File Storage (R2)\n\nWhen `fileStorage` is configured:\n\n```\nsrc/\n└── routes/\n └── storage.ts # File upload/download endpoints\ndrizzle/\n└── files/ # File metadata migrations\n ├── meta/\n └── 
0000_*.sql\n```\n\n### Webhooks\n\nWhen webhooks are enabled:\n\n```\nsrc/\n└── lib/\n └── webhooks/\n ├── index.ts # Webhook module entry\n ├── sign.ts # Webhook payload signing\n ├── handlers.ts # Handler registry\n ├── emit.ts # Queue emission helpers\n ├── routes.ts # Inbound/outbound endpoints\n └── providers/\n ├── index.ts\n └── stripe.ts # Stripe webhook handler\ndrizzle/\n└── webhooks/ # Webhook schema migrations\n ├── meta/\n └── 0000_*.sql\n```\n\n### Realtime\n\nWhen any feature has `realtime` configured:\n\n```\nsrc/\n└── lib/\n └── realtime.ts # Broadcast helpers\n\ncloudflare-workers/ # Separate Durable Objects worker\n└── broadcast/\n ├── Broadcaster.ts\n ├── index.ts\n └── wrangler.toml\n```\n\n### Device Authorization\n\nWhen the `deviceAuthorization` plugin is enabled:\n\n```\nsrc/\n└── routes/\n └── cli-auth.ts # Device auth flow endpoints\n```\n\n## Database Schemas\n\n### Dual Database Mode (Cloudflare Default)\n\nThe compiler separates schemas into two files:\n\n**`src/db/auth-schema.ts`** — Re-exports Better Auth table schemas:\n- `users`, `sessions`, `accounts`\n- `organizations`, `members`, `invitations` (if organizations enabled)\n- Plugin-specific tables (`apiKeys`, etc.)\n\n**`src/db/features-schema.ts`** — Re-exports your feature schemas:\n- All tables defined with `defineTable()`\n- Audit field columns added automatically\n\n### Single Database Mode (Bun Default)\n\n**`src/db/schema.ts`** — Combined re-export of all schemas (auth + features).\n\n## Generated Routes\n\nFor each feature, the compiler generates a routes file at `src/features/{name}/routes.ts` containing:\n\n| Route | Generated When |\n|-------|----------------|\n| `GET /` | `crud.list` configured |\n| `GET /:id` | `crud.get` configured |\n| `POST /` | `crud.create` configured |\n| `PATCH /:id` | `crud.update` configured |\n| `DELETE /:id` | `crud.delete` configured |\n| `PUT /:id` | `crud.put` configured |\n| `POST /batch` | `crud.create` exists (auto-enabled) |\n| `PATCH /batch` | `crud.update` exists (auto-enabled) |\n| `DELETE /batch` | `crud.delete` exists (auto-enabled) |\n| `PUT /batch` | `crud.put` exists (auto-enabled) |\n| `GET /views/{name}` | `views` configured |\n| `POST /:id/{action}` | Record-based actions defined |\n| `POST /{action}` | Standalone actions defined |\n\nAll routes are mounted under `/api/v1/{feature}` in the main app.\n\n## Migrations\n\nThe compiler runs `drizzle-kit generate` during compilation to produce SQL migration files. On subsequent compiles, it uses `existingFiles` (your current migration state) to generate only incremental changes.\n\nMigration files follow the Drizzle Kit naming convention:\n```\n0000_initial.sql\n0001_add_status_column.sql\n0002_create_orders_table.sql\n```\n\n## See Also\n\n- [Providers](/compiler/config/providers) — Configure runtime, database, and auth providers\n- [Environment variables](/compiler/config/variables) — Required variables by runtime"
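For intuition about how feature routes end up under `/api/v1/{feature}`, the generated entry point mounts each feature's router onto the main Hono app. A simplified sketch, not the literal generated file (the feature names here are hypothetical):

```ts
import { Hono } from "hono";
// Hypothetical generated feature routers
import roomsRoutes from "./features/rooms/routes";
import invoicesRoutes from "./features/invoices/routes";

const app = new Hono();

// Each feature's CRUD, batch, view, and action routes live under /api/v1/{feature}
app.route("/api/v1/rooms", roomsRoutes);
app.route("/api/v1/invoices", invoicesRoutes);

// Cloudflare runtime: Workers-style default export
export default app;
```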
99
+ },
100
+ "compiler/config/providers": {
101
+ "title": "Providers",
102
+ "content": "Providers configure which services your compiled backend targets.\n\n## Runtime Providers\n\n| Provider | Description |\n|----------|-------------|\n| `cloudflare` | Cloudflare Workers (Hono) |\n| `bun` | Bun runtime (Hono) |\n| `node` | Node.js runtime (Hono) |\n\n```typescript\n\nproviders: {\n runtime: defineRuntime(\"cloudflare\"),\n}\n```\n\n## Database Providers\n\n| Provider | Description |\n|----------|-------------|\n| `cloudflare-d1` | Cloudflare D1 (SQLite) |\n| `better-sqlite3` | SQLite via better-sqlite3 (Bun/Node) |\n| `libsql` | LibSQL / Turso |\n| `neon` | Neon (PostgreSQL) |\n| `supabase` | Supabase (PostgreSQL) |\n\n```typescript\n\nproviders: {\n database: defineDatabase(\"cloudflare-d1\", {\n generateId: \"prefixed\", // \"uuid\" | \"cuid\" | \"nanoid\" | \"prefixed\" | \"serial\" | false\n namingConvention: \"snake_case\", // \"camelCase\" | \"snake_case\"\n usePlurals: false, // Auth table names: \"users\" vs \"user\"\n }),\n}\n```\n\n### Database Options\n\n| Option | Type | Default (D1) | Description |\n|--------|------|---------|-------------|\n| `generateId` | `string \\| false` | `\"prefixed\"` | ID generation strategy |\n| `namingConvention` | `string` | `\"snake_case\"` | Column naming convention |\n| `usePlurals` | `boolean` | `false` | Pluralize auth table names (e.g. `user` vs `users`) |\n| `splitDatabases` | `boolean` | `true` | Separate auth and features databases |\n| `authBinding` | `string` | `\"AUTH_DB\"` | Binding name for auth database |\n| `featuresBinding` | `string` | `\"DB\"` | Binding name for features database |\n\n### ID Generation Options\n\n| Value | Description | Example |\n|-------|-------------|---------|\n| `\"uuid\"` | Server generates UUID | `550e8400-e29b-41d4-a716-446655440000` |\n| `\"cuid\"` | Server generates CUID | `clh2v8k9g0000l508h5gx8j1a` |\n| `\"nanoid\"` | Server generates nanoid | `V1StGXR8_Z5jdHi6B-myT` |\n| `\"prefixed\"` | Prefixed ID from table name | `room_abc123` |\n| `\"serial\"` | Database auto-increments | `1`, `2`, `3` |\n| `false` | Client provides ID (enables PUT/upsert) | Any string |\n\n## Auth Providers\n\n| Provider | Description |\n|----------|-------------|\n| `better-auth` | Better Auth with plugins |\n| `supabase-auth` | Supabase Auth |\n| `external` | External auth via Cloudflare service binding |\n\n```typescript\n\nproviders: {\n auth: defineAuth(\"better-auth\", {\n session: {\n expiresInDays: 7,\n updateAgeInDays: 1,\n },\n rateLimit: {\n enabled: true,\n window: 60,\n max: 100,\n },\n }),\n}\n```\n\n### Better Auth Options\n\n| Option | Type | Description |\n|--------|------|-------------|\n| `session.expiresInDays` | `number` | Session expiration in days |\n| `session.updateAgeInDays` | `number` | Session refresh interval in days |\n| `rateLimit.enabled` | `boolean` | Enable rate limiting |\n| `rateLimit.window` | `number` | Rate limit window in seconds |\n| `rateLimit.max` | `number` | Max requests per window |\n| `socialProviders` | `object` | Social login providers (`google`, `github`, `discord`) |\n| `debugLogs` | `boolean` | Enable auth debug logging |\n\n### Better Auth Plugins\n\nWhen `features: [\"organizations\"]` is set in your config, the compiler automatically enables organization-related plugins. 
Additional plugins can be configured:\n\n| Plugin | Description |\n|--------|-------------|\n| `organization` | Multi-tenant organizations |\n| `admin` | Admin panel access |\n| `apiKey` | API key authentication |\n| `anonymous` | Anonymous sessions |\n| `upgradeAnonymous` | Convert anonymous to full accounts |\n| `twoFactor` | Two-factor authentication |\n| `passkey` | WebAuthn passkey login |\n| `magicLink` | Email magic link login |\n| `emailOtp` | Email one-time password |\n| `deviceAuthorization` | Device auth flow (CLI tools) |\n| `jwt` | JWT token support |\n| `openAPI` | OpenAPI spec generation |\n\n## Storage Providers\n\n```typescript\n\nproviders: {\n storage: defineStorage(\"kv\", {\n binding: \"KV_STORE\",\n }),\n fileStorage: defineFileStorage(\"r2\", {\n binding: \"FILES\",\n maxFileSize: \"10mb\",\n allowedTypes: [\"image/png\", \"image/jpeg\", \"application/pdf\"],\n publicDomain: \"files.example.com\",\n }),\n}\n```\n\n### Storage Types\n\n| Type | Description |\n|------|-------------|\n| `kv` | Key-value storage (Cloudflare KV, Redis) |\n| `r2` | Object storage (Cloudflare R2) |\n| `memory` | In-memory storage (development only) |\n| `redis` | Redis storage |\n\n### File Storage Options\n\n| Option | Type | Description |\n|--------|------|-------------|\n| `binding` | `string` | R2 bucket binding name |\n| `maxFileSize` | `string` | Maximum file size (e.g. `\"10mb\"`) |\n| `allowedTypes` | `string[]` | Allowed MIME types |\n| `publicDomain` | `string` | Public domain for file URLs |\n| `userScopedBuckets` | `boolean` | Scope files by user |"
103
+ },
104
+ "compiler/config/variables": {
105
+ "title": "Environment Variables",
106
+ "content": "## CLI Environment Variables\n\nThese variables configure the Quickback CLI itself.\n\n| Variable | Description | Default |\n|----------|-------------|---------|\n| `QUICKBACK_API_URL` | Compiler API endpoint | `https://compiler.quickback.dev` |\n| `QUICKBACK_API_KEY` | API key for headless authentication (CI/CD) | — |\n| `QUICKBACK_AUTH_URL` | Auth server URL (custom deployments) | — |\n\n### Authentication\n\nThe CLI authenticates via two methods:\n\n1. **Interactive login** — `quickback login` stores credentials in `~/.quickback/credentials.json`\n2. **API key** — Set `QUICKBACK_API_KEY` for CI/CD environments\n\n```bash\n# Use the cloud compiler (default)\nquickback compile\n\n# Use a local compiler instance\nQUICKBACK_API_URL=http://localhost:3000 quickback compile\n\n# CI/CD with API key\nQUICKBACK_API_KEY=qb_key_... quickback compile\n```\n\n## Cloudflare Variables\n\n### Wrangler Bindings\n\nThese are configured as bindings in `wrangler.toml`, not environment variables. The compiler generates them automatically.\n\n| Binding | Type | Description |\n|---------|------|-------------|\n| `AUTH_DB` | D1 Database | Better Auth tables (dual mode) |\n| `DB` | D1 Database | Feature tables (dual mode) |\n| `DATABASE` | D1 Database | All tables (single DB mode) |\n| `KV` | KV Namespace | Key-value storage |\n| `R2_BUCKET` | R2 Bucket | File storage (if configured) |\n| `AI` | Workers AI | Embedding generation (if configured) |\n| `VECTORIZE` | Vectorize | Vector similarity search (if configured) |\n| `EMBEDDINGS_QUEUE` | Queue | Async embedding jobs (if configured) |\n| `WEBHOOKS_DB` | D1 Database | Webhook events (if configured) |\n| `WEBHOOKS_QUEUE` | Queue | Webhook delivery (if configured) |\n| `FILES_DB` | D1 Database | File metadata (if R2 configured) |\n| `BROADCASTER` | Service Binding | Realtime broadcast worker (if configured) |\n\n### Worker Variables\n\nSet these in `wrangler.toml` under `[vars]` or in the Cloudflare dashboard:\n\n| Variable | Description | Required |\n|----------|-------------|----------|\n| `BETTER_AUTH_URL` | Public URL of your auth endpoint | Yes |\n| `APP_NAME` | Application name (used in emails) | No |\n\n### Email (AWS SES)\n\nRequired when using the `emailOtp` plugin with AWS SES:\n\n| Variable | Description |\n|----------|-------------|\n| `AWS_ACCESS_KEY_ID` | AWS access key |\n| `AWS_SECRET_ACCESS_KEY` | AWS secret key |\n| `AWS_SES_REGION` | SES region (e.g., `us-east-2`) |\n| `EMAIL_FROM` | Sender email address |\n| `EMAIL_FROM_NAME` | Sender display name |\n| `EMAIL_REPLY_TO` | Reply-to address |\n\n### Drizzle Kit (Migrations)\n\nFor running remote migrations with `drizzle-kit`, set these in `.env`:\n\n| Variable | Description |\n|----------|-------------|\n| `CLOUDFLARE_ACCOUNT_ID` | Your Cloudflare account ID |\n| `CLOUDFLARE_API_TOKEN` | API token with D1 permissions |\n| `CLOUDFLARE_AUTH_DATABASE_ID` | Auth D1 database ID (dual mode) |\n| `CLOUDFLARE_FEATURES_DATABASE_ID` | Features D1 database ID (dual mode) |\n| `CLOUDFLARE_DATABASE_ID` | Database ID (single DB mode) |\n\n## Bun Variables\n\nSet these in a `.env` file in your project root:\n\n| Variable | Description | Default |\n|----------|-------------|---------|\n| `NODE_ENV` | Runtime environment | `development` |\n| `PORT` | Server port | `3000` |\n| `BETTER_AUTH_SECRET` | Auth encryption secret | — (required) |\n| `BETTER_AUTH_URL` | Public URL of your server | `http://localhost:3000` |\n| `DATABASE_PATH` | Path to SQLite file | `./data/app.db` |\n\n## Turso (LibSQL) 
Variables\n\nIn addition to the Bun variables above:\n\n| Variable | Description | Default |\n|----------|-------------|---------|\n| `DATABASE_URL` | LibSQL connection URL | `file:./data/app.db` |\n| `DATABASE_AUTH_TOKEN` | Turso auth token (required for remote) | — |\n\n```bash\n# Local development\nDATABASE_URL=file:./data/app.db\n\n# Production (Turso cloud)\nDATABASE_URL=libsql://your-db-slug.turso.io\nDATABASE_AUTH_TOKEN=eyJhbGciOi...\n```\n\n## Social Login Providers\n\nWhen social login is configured in your auth provider:\n\n| Variable | Description |\n|----------|-------------|\n| `GOOGLE_CLIENT_ID` | Google OAuth client ID |\n| `GOOGLE_CLIENT_SECRET` | Google OAuth client secret |\n| `GITHUB_CLIENT_ID` | GitHub OAuth client ID |\n| `GITHUB_CLIENT_SECRET` | GitHub OAuth client secret |\n| `DISCORD_CLIENT_ID` | Discord OAuth client ID |\n| `DISCORD_CLIENT_SECRET` | Discord OAuth client secret |\n\n## See Also\n\n- [Output Structure](/compiler/config/output) — Generated file structure\n- [Providers](/compiler/config/providers) — Provider configuration reference\n- [Cloudflare Template](/compiler/getting-started/template-cloudflare) — Cloudflare setup guide\n- [Bun Template](/compiler/getting-started/template-bun) — Bun setup guide"
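To make the Bun table concrete, here is a minimal sketch of an entry point consuming those variables. It is illustrative only; the generated `src/index.ts` will differ.

```ts
// Illustrative only — shows the documented Bun variables being read.
const port = Number(process.env.PORT ?? 3000);
const databasePath = process.env.DATABASE_PATH ?? "./data/app.db";

if (!process.env.BETTER_AUTH_SECRET) {
  throw new Error("BETTER_AUTH_SECRET is required");
}

Bun.serve({
  port,
  fetch() {
    return new Response(`OK (database at ${databasePath})`);
  },
});
```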
107
+ },
108
+ "compiler/definitions/access": {
109
+ "title": "Access - Role & Condition-Based Access Control",
110
+ "content": "Define who can perform CRUD operations and under what conditions.\n\n## Basic Usage\n\n```typescript\n// features/rooms/rooms.ts\n\nexport const rooms = sqliteTable('rooms', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(rooms, {\n firewall: { organization: {} },\n guards: { createable: [\"name\"], updatable: [\"name\"] },\n crud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n get: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\"] } },\n update: { access: { roles: [\"owner\", \"admin\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] } },\n },\n});\n```\n\n## Configuration Options\n\n```typescript\ninterface Access {\n // Required roles (OR logic - user needs at least one)\n roles?: string[];\n\n // Record-level conditions\n record?: {\n [field: string]: FieldCondition;\n };\n\n // Combinators\n or?: Access[];\n and?: Access[];\n}\n\n// Field conditions - value can be string | number | boolean\ntype FieldCondition =\n | { equals: value | '$ctx.userId' | '$ctx.activeOrgId' }\n | { notEquals: value }\n | { in: value[] }\n | { notIn: value[] }\n | { lessThan: number }\n | { greaterThan: number }\n | { lessThanOrEqual: number }\n | { greaterThanOrEqual: number };\n```\n\n## CRUD Configuration\n\n```typescript\ncrud: {\n // LIST - GET /resource\n list: {\n access: { roles: [\"owner\", \"admin\", \"member\"] },\n pageSize: 25, // Default page size\n maxPageSize: 100, // Client can't exceed this\n fields: ['id', 'name', 'status'], // Selective field returns (optional)\n },\n\n // GET - GET /resource/:id\n get: {\n access: { roles: [\"owner\", \"admin\", \"member\"] },\n fields: ['id', 'name', 'status', 'details'], // Optional field selection\n },\n\n // CREATE - POST /resource\n create: {\n access: { roles: [\"owner\", \"admin\"] },\n defaults: { // Default values for new records\n status: 'pending',\n isActive: true,\n },\n },\n\n // UPDATE - PATCH /resource/:id\n update: {\n access: {\n or: [\n { roles: [\"owner\", \"admin\"] },\n { roles: [\"member\"], record: { status: { equals: \"draft\" } } }\n ]\n },\n },\n\n // DELETE - DELETE /resource/:id\n delete: {\n access: { roles: [\"owner\", \"admin\"] },\n mode: \"soft\", // 'soft' (default) or 'hard'\n },\n\n // PUT - PUT /resource/:id (only when generateId: false + guards: false)\n put: {\n access: { roles: [\"admin\", \"sync-service\"] },\n },\n}\n```\n\n## List Filtering (Query Parameters)\n\nThe LIST endpoint automatically supports filtering via query params:\n\n```\nGET /rooms?status=active # Exact match\nGET /rooms?capacity.gt=10 # Greater than\nGET /rooms?capacity.gte=10 # Greater than or equal\nGET /rooms?capacity.lt=50 # Less than\nGET /rooms?capacity.lte=50 # Less than or equal\nGET /rooms?status.ne=deleted # Not equal\nGET /rooms?name.like=Conference # Pattern match (LIKE %value%)\nGET /rooms?status.in=active,pending,review # IN clause\n```\n\n| Operator | Query Param | SQL Equivalent |\n|----------|-------------|----------------|\n| Equals | `?field=value` | `WHERE field = value` |\n| Not equals | `?field.ne=value` | `WHERE field != value` |\n| Greater than | `?field.gt=value` | `WHERE field > value` |\n| Greater or equal | `?field.gte=value` | `WHERE field >= value` |\n| Less than | `?field.lt=value` | `WHERE field < value` |\n| Less or equal | `?field.lte=value` | `WHERE field <= value` |\n| Pattern match | 
`?field.like=value` | `WHERE field LIKE '%value%'` |\n| In list | `?field.in=a,b,c` | `WHERE field IN ('a','b','c')` |\n\n## Sorting & Pagination\n\n```\nGET /rooms?sort=createdAt&order=desc # Sort by field\nGET /rooms?limit=25&offset=50 # Pagination\n```\n\n- **Default limit**: 50\n- **Max limit**: 100 (or `maxPageSize` if configured)\n- **Default order**: `asc`\n\n## Delete Modes\n\n```typescript\ndelete: {\n access: { roles: [\"owner\", \"admin\"] },\n mode: \"soft\", // Sets deletedAt/deletedBy, record stays in DB\n}\n\ndelete: {\n access: { roles: [\"owner\", \"admin\"] },\n mode: \"hard\", // Permanent deletion from database\n}\n```\n\n## Context Variables\n\nUse `$ctx.` prefix to reference context values in conditions:\n\n```typescript\n// User can only view their own records\naccess: {\n record: { userId: { equals: \"$ctx.userId\" } }\n}\n\n// Nested path support for complex context objects\naccess: {\n record: { ownerId: { equals: \"$ctx.user.id\" } }\n}\n```\n\n### AppContext Reference\n\n| Property | Type | Description |\n|----------|------|-------------|\n| `$ctx.userId` | `string` | Current authenticated user's ID |\n| `$ctx.activeOrgId` | `string` | User's active organization ID |\n| `$ctx.activeTeamId` | `string \\| null` | User's active team ID (if applicable) |\n| `$ctx.roles` | `string[]` | User's roles in current context |\n| `$ctx.isAnonymous` | `boolean` | Whether user is anonymous |\n| `$ctx.user` | `object` | Full user object from auth provider |\n| `$ctx.user.id` | `string` | User ID (nested path example) |\n| `$ctx.user.email` | `string` | User's email address |\n| `$ctx.{property}` | `any` | Any custom context property |\n\n## Function-Based Access\n\nFor complex access logic that can't be expressed declaratively, use a function:\n\n```typescript\ncrud: {\n update: {\n access: async (ctx, record) => {\n // Custom logic - return true to allow, false to deny\n if (ctx.roles.includes('admin')) return true;\n if (record.ownerId === ctx.userId) return true;\n\n // Check custom business logic\n const membership = await checkTeamMembership(ctx.userId, record.teamId);\n return membership.canEdit;\n }\n }\n}\n```\n\nFunction access receives:\n- `ctx`: The full AppContext object\n- `record`: The record being accessed (for get/update/delete operations)"
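Putting the filter, sort, and pagination parameters together, a client call against a generated list endpoint might look like this. The `/api/v1/rooms` mount path and bearer-token header are assumptions for illustration.

```ts
const token = process.env.API_TOKEN ?? ""; // assumed auth token source

const params = new URLSearchParams({
  status: "active",       // exact match
  "capacity.gte": "10",   // greater than or equal
  sort: "createdAt",
  order: "desc",
  limit: "25",
  offset: "0",
});

const res = await fetch(`https://api.example.com/api/v1/rooms?${params}`, {
  headers: { Authorization: `Bearer ${token}` }, // assumed auth scheme
});
const rooms = await res.json();
```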
111
+ },
112
+ "compiler/definitions/actions": {
113
+ "title": "Actions",
114
+ "content": "Actions are custom API endpoints for business logic beyond CRUD operations. They enable workflows, integrations, and complex operations.\n\n## Overview\n\nQuickback supports two types of actions:\n\n| Aspect | Record-Based | Standalone |\n|--------|--------------|------------|\n| Route | `{METHOD} /:id/{actionName}` | Custom `path` or `/{actionName}` |\n| Record fetching | Automatic | None (`record` is `undefined`) |\n| Firewall applied | Yes | No |\n| Preconditions | Supported via `access.record` | Not applicable |\n| Response types | JSON only | JSON, stream, file |\n| Use case | Approve invoice, archive order | AI chat, bulk import, webhooks |\n\n## Defining Actions\n\nActions are defined in a separate `actions.ts` file that references your table:\n\n```typescript\n// features/invoices/actions.ts\n\nexport default defineActions(invoices, {\n approve: {\n description: \"Approve invoice for payment\",\n input: z.object({ notes: z.string().optional() }),\n access: { roles: [\"admin\", \"finance\"] },\n execute: async ({ db, record, ctx }) => {\n // Business logic\n return record;\n },\n },\n});\n```\n\n### Configuration Options\n\n| Option | Required | Description |\n|--------|----------|-------------|\n| `description` | Yes | Human-readable description of the action |\n| `input` | Yes | Zod schema for request validation |\n| `access` | Yes | Access control (roles, record conditions, or function) |\n| `execute` | Yes* | Inline execution function |\n| `handler` | Yes* | File path for complex logic (alternative to `execute`) |\n| `standalone` | No | Set `true` for non-record actions |\n| `path` | No | Custom route path (standalone only) |\n| `method` | No | HTTP method: GET, POST, PUT, PATCH, DELETE (default: POST) |\n| `responseType` | No | Response format: json, stream, file (default: json) |\n| `sideEffects` | No | Hint for AI tools: 'sync', 'async', or 'fire-and-forget' |\n| `unsafe` | No | When `true`, provides `rawDb` (unscoped) to the executor |\n\n*Either `execute` or `handler` is required, not both.\n\n## Record-Based Actions\n\nRecord-based actions operate on an existing record. The record is automatically loaded and validated before your action executes.\n\n**Route pattern:** `{METHOD} /:id/{actionName}`\n\n```\nPOST /invoices/:id/approve\nGET /orders/:id/status\nDELETE /items/:id/archive\n```\n\n### Runtime Flow\n\n1. **Authentication** - User token is validated\n2. **Record Loading** - The record is fetched by ID\n3. **Firewall Check** - Ensures user can access this record\n4. **Access Check** - Validates roles and preconditions\n5. **Input Validation** - Request body validated against Zod schema\n6. **Execution** - Your action handler runs\n7. 
**Response** - Result is returned to client\n\n### Example: Invoice Approval\n\n```typescript\n// features/invoices/actions.ts\n\nexport default defineActions(invoices, {\n approve: {\n description: \"Approve invoice for payment\",\n input: z.object({\n notes: z.string().optional(),\n }),\n access: {\n roles: [\"admin\", \"finance\"],\n record: { status: { equals: \"pending\" } }, // Precondition\n },\n execute: async ({ db, ctx, record, input }) => {\n const [updated] = await db\n .update(invoices)\n .set({\n status: \"approved\",\n approvedBy: ctx.userId,\n approvedAt: new Date(),\n })\n .where(eq(invoices.id, record.id))\n .returning();\n\n return updated;\n },\n },\n});\n```\n\n### Request Example\n\n```\nPOST /invoices/inv_123/approve\nContent-Type: application/json\n\n{\n \"notes\": \"Approved for Q1 budget\"\n}\n```\n\n### Response Example\n\n```json\n{\n \"data\": {\n \"id\": \"inv_123\",\n \"status\": \"approved\",\n \"approvedBy\": \"user_456\",\n \"approvedAt\": \"2024-01-15T14:30:00Z\"\n }\n}\n```\n\n## Standalone Actions\n\nStandalone actions are independent endpoints that don't require a record context. Use `standalone: true` and optionally specify a custom `path`.\n\n**Route pattern:** Custom `path` or `/{actionName}`\n\n```\nPOST /chat\nGET /reports/summary\nPOST /webhooks/stripe\n```\n\n### Example: AI Chat with Streaming\n\n```typescript\nexport default defineActions(sessions, {\n chat: {\n description: \"Send a message to AI\",\n standalone: true,\n path: \"/chat\",\n method: \"POST\",\n responseType: \"stream\",\n input: z.object({\n message: z.string().min(1).max(2000),\n }),\n access: {\n roles: [\"member\", \"admin\"],\n },\n handler: \"./handlers/chat\",\n },\n});\n```\n\n### Streaming Response Example\n\nFor actions with `responseType: 'stream'`:\n\n```\nContent-Type: text/event-stream\n\ndata: {\"type\": \"start\"}\ndata: {\"type\": \"chunk\", \"content\": \"Hello\"}\ndata: {\"type\": \"chunk\", \"content\": \"! 
I'm\"}\ndata: {\"type\": \"chunk\", \"content\": \" here to help.\"}\ndata: {\"type\": \"done\"}\n```\n\n### Example: Report Generation\n\n```typescript\nexport default defineActions(invoices, {\n generateReport: {\n description: \"Generate PDF report\",\n standalone: true,\n path: \"/invoices/report\",\n method: \"GET\",\n responseType: \"file\",\n input: z.object({\n startDate: z.string().datetime(),\n endDate: z.string().datetime(),\n }),\n access: { roles: [\"admin\", \"finance\"] },\n handler: \"./handlers/generate-report\",\n },\n});\n```\n\n## Access Configuration\n\nAccess controls who can execute an action and under what conditions.\n\n### Role-Based Access\n\n```typescript\naccess: {\n roles: [\"admin\", \"manager\"] // OR logic: user needs any of these roles\n}\n```\n\n### Record Conditions\n\nFor record-based actions, you can require the record to be in a specific state:\n\n```typescript\naccess: {\n roles: [\"admin\"],\n record: {\n status: { equals: \"pending\" } // Precondition\n }\n}\n```\n\n**Supported operators:**\n\n| Operator | Description |\n|----------|-------------|\n| `equals` | Field must equal value |\n| `notEquals` | Field must not equal value |\n| `in` | Field must be one of the values |\n| `notIn` | Field must not be one of the values |\n\n### Context Substitution\n\nUse `$ctx` to reference the current user's context:\n\n```typescript\naccess: {\n record: {\n ownerId: { equals: \"$ctx.userId\" },\n orgId: { equals: \"$ctx.orgId\" }\n }\n}\n```\n\n### OR/AND Combinations\n\n```typescript\naccess: {\n or: [\n { roles: [\"admin\"] },\n {\n roles: [\"member\"],\n record: { ownerId: { equals: \"$ctx.userId\" } }\n }\n ]\n}\n```\n\n### Function Access\n\nFor complex logic, use an access function:\n\n```typescript\naccess: async (ctx, record) => {\n return ctx.roles.includes('admin') || record.ownerId === ctx.userId;\n}\n```\n\n## Scoped Database\n\nAll actions receive a **scoped `db`** instance that automatically enforces security:\n\n| Operation | Org Scoping | Owner Scoping | Soft Delete Filter | Auto-inject on INSERT |\n|-----------|-------------|---------------|--------------------|-----------------------|\n| `SELECT` | `WHERE organizationId = ?` | `WHERE ownerId = ?` | `WHERE deletedAt IS NULL` | n/a |\n| `INSERT` | n/a | n/a | n/a | `organizationId`, `ownerId` from ctx |\n| `UPDATE` | `WHERE organizationId = ?` | `WHERE ownerId = ?` | `WHERE deletedAt IS NULL` | n/a |\n| `DELETE` | `WHERE organizationId = ?` | `WHERE ownerId = ?` | `WHERE deletedAt IS NULL` | n/a |\n\nScoping is duck-typed at runtime — tables with an `organizationId` column get org scoping, tables with `ownerId` get owner scoping, tables with `deletedAt` get soft delete filtering.\n\n```typescript\nexecute: async ({ db, ctx, input }) => {\n // This query automatically includes WHERE organizationId = ? AND ownerId = ? 
AND deletedAt IS NULL\n const items = await db.select().from(claims).where(eq(claims.status, 'active'));\n\n // Inserts automatically include organizationId and ownerId\n await db.insert(claims).values({ title: input.title });\n\n return items;\n}\n```\n\n**Not enforced** in scoped DB (by design):\n- **Guards** — actions ARE the authorized way to modify guarded fields\n- **Masking** — actions are backend code that may need raw data\n- **Access** — already checked before action execution\n\n### Unsafe Mode\n\nActions that need to bypass security (e.g., cross-org admin queries) can declare `unsafe: true`:\n\n```typescript\nexport default defineActions(invoices, {\n adminReport: {\n description: \"Generate cross-org report\",\n unsafe: true,\n input: z.object({ startDate: z.string() }),\n access: { roles: [\"admin\"] },\n execute: async ({ db, rawDb, ctx, input }) => {\n // db is still scoped (safety net)\n // rawDb bypasses all security — only available with unsafe: true\n const allOrgs = await rawDb.select().from(invoices);\n return allOrgs;\n },\n },\n});\n```\n\nWithout `unsafe: true`, `rawDb` is `undefined` in the executor params.\n\n## Handler Files\n\nFor complex actions, separate the logic into handler files.\n\n### When to Use Handler Files\n\n- Complex business logic spanning multiple operations\n- External API integrations\n- File generation or processing\n- Logic reused across multiple actions\n\n### Handler Structure\n\n```typescript\n// handlers/generate-report.ts\n\nexport const execute: ActionExecutor = async ({ db, ctx, input, services }) => {\n const invoices = await db\n .select()\n .from(invoicesTable)\n .where(between(invoicesTable.createdAt, input.startDate, input.endDate));\n\n const pdf = await services.pdf.generate(invoices);\n\n return {\n file: pdf,\n filename: `invoices-${input.startDate}-${input.endDate}.pdf`,\n contentType: 'application/pdf',\n };\n};\n```\n\n### Executor Parameters\n\n```typescript\ninterface ActionExecutorParams {\n db: DrizzleDB; // Scoped database (auto-applies org/owner/soft-delete filters)\n rawDb?: DrizzleDB; // Raw database (only available when unsafe: true)\n ctx: AppContext; // User context (userId, roles, orgId)\n record?: TRecord; // The record (record-based only, undefined for standalone)\n input: TInput; // Validated input from Zod schema\n services: TServices; // Configured integrations (billing, notifications, etc.)\n c: HonoContext; // Raw Hono context for advanced use\n auditFields: object; // { createdAt, modifiedAt } timestamps\n}\n```\n\n## HTTP API Reference\n\n### Request Format\n\n| Method | Input Source | Use Case |\n|--------|--------------|----------|\n| `GET` | Query parameters | Read-only operations, fetching data |\n| `POST` | JSON body | Default, state-changing operations |\n| `PUT` | JSON body | Full replacement operations |\n| `PATCH` | JSON body | Partial updates |\n| `DELETE` | JSON body | Deletion with optional payload |\n\n```typescript\n// GET action - input comes from query params\ngetStatus: {\n method: \"GET\",\n input: z.object({ format: z.string().optional() }),\n // Called as: GET /invoices/:id/getStatus?format=detailed\n}\n\n// POST action (default) - input comes from JSON body\napprove: {\n // method: \"POST\" is implied\n input: z.object({ notes: z.string().optional() }),\n // Called as: POST /invoices/:id/approve with JSON body\n}\n```\n\n### Response Formats\n\n| Type | Content-Type | Use Case |\n|------|--------------|----------|\n| `json` | `application/json` | Standard API responses (default) 
|\n| `stream` | `text/event-stream` | Real-time streaming (AI chat, live updates) |\n| `file` | Varies | File downloads (reports, exports) |\n\n### Error Codes\n\n| Status | Description |\n|--------|-------------|\n| `400` | Invalid input / validation error |\n| `401` | Not authenticated |\n| `403` | Access check failed (role or precondition) |\n| `404` | Record not found (record-based actions) |\n| `500` | Handler execution error |\n\n### Validation Error Response\n\n```json\n{\n \"error\": \"Invalid request data\",\n \"layer\": \"validation\",\n \"code\": \"VALIDATION_FAILED\",\n \"details\": {\n \"fields\": {\n \"amount\": \"Expected positive number\"\n }\n },\n \"hint\": \"Check the input schema for this action\"\n}\n```\n\n### Throwing Custom Errors\n\n```typescript\nexecute: async ({ ctx, record, input }) => {\n if (record.balance < input.amount) {\n throw new ActionError('INSUFFICIENT_FUNDS', 'Not enough balance', 400);\n }\n // ... continue\n}\n```\n\n## Protected Fields\n\nActions can modify fields that are protected from regular CRUD operations:\n\n```typescript\n// In resource.ts\nguards: {\n protected: {\n status: [\"approve\", \"reject\"], // Only these actions can modify status\n amount: [\"reviseAmount\"],\n }\n}\n```\n\nThis allows the `approve` action to set `status = \"approved\"` even though the field is protected from regular PATCH requests.\n\n## Examples\n\n### Invoice Approval (Record-Based)\n\n```typescript\nexport default defineActions(invoices, {\n approve: {\n description: \"Approve invoice for payment\",\n input: z.object({ notes: z.string().optional() }),\n access: {\n roles: [\"admin\", \"finance\"],\n record: { status: { equals: \"pending\" } },\n },\n execute: async ({ db, ctx, record, input }) => {\n const [updated] = await db\n .update(invoices)\n .set({\n status: \"approved\",\n approvedBy: ctx.userId,\n notes: input.notes,\n })\n .where(eq(invoices.id, record.id))\n .returning();\n return updated;\n },\n },\n});\n```\n\n### AI Chat (Standalone with Streaming)\n\n```typescript\nexport default defineActions(sessions, {\n chat: {\n description: \"Send a message to AI assistant\",\n standalone: true,\n path: \"/chat\",\n responseType: \"stream\",\n input: z.object({\n message: z.string().min(1).max(2000),\n }),\n access: { roles: [\"member\", \"admin\"] },\n handler: \"./handlers/chat\",\n },\n});\n```\n\n### Bulk Import (Standalone, No Record)\n\n```typescript\nexport default defineActions(contacts, {\n bulkImport: {\n description: \"Import contacts from CSV\",\n standalone: true,\n path: \"/contacts/import\",\n input: z.object({\n data: z.array(z.object({\n email: z.string().email(),\n name: z.string(),\n })),\n }),\n access: { roles: [\"admin\"] },\n execute: async ({ db, input }) => {\n const inserted = await db\n .insert(contacts)\n .values(input.data)\n .returning();\n return { imported: inserted.length };\n },\n },\n});\n```"
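From the client side, invoking the record-based `approve` action and handling the documented error shape might look like the sketch below. The base URL and bearer-token header are assumptions for illustration.

```ts
const token = process.env.API_TOKEN ?? ""; // assumed auth token source

const res = await fetch("https://api.example.com/api/v1/invoices/inv_123/approve", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`, // assumed auth scheme
  },
  body: JSON.stringify({ notes: "Approved for Q1 budget" }),
});

if (!res.ok) {
  // 400/401/403/404/500 follow the error codes table above
  const err = await res.json();
  console.error(err.code, err.error, err.hint);
} else {
  const { data } = await res.json();
  console.log("Invoice status:", data.status);
}
```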
115
+ },
116
+ "compiler/definitions/firewall": {
117
+ "title": "Firewall - Data Isolation",
118
+ "content": "The firewall generates WHERE clauses automatically to isolate data by user, organization, or team.\n\n## Basic Usage\n\n```typescript\n// features/rooms/rooms.ts\n\nexport const rooms = sqliteTable('rooms', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(rooms, {\n firewall: { organization: {} }, // Isolate by organization\n // ... guards, crud\n});\n```\n\n## Configuration Options\n\n```typescript\nfirewall: {\n // User-level ownership (personal data)\n owner?: {\n column?: string; // Default: 'ownerId' or 'owner_id'\n source?: string; // Default: 'ctx.userId'\n mode?: 'required' | 'optional'; // Default: 'required'\n };\n\n // Organization-level ownership\n organization?: {\n column?: string; // Default: 'organizationId' or 'organization_id'\n source?: string; // Default: 'ctx.activeOrgId'\n };\n\n // Team-level ownership\n team?: {\n column?: string; // Default: 'teamId' or 'team_id'\n source?: string; // Default: 'ctx.activeTeamId'\n };\n\n // Soft delete filtering\n softDelete?: {\n column?: string; // Default: 'deletedAt'\n };\n\n // Opt-out for public tables (cannot combine with ownership)\n exception?: boolean;\n}\n```\n\n## Context Sources\n\nThe firewall pulls ownership values from the request context:\n\n| Scope | Default Source | Description |\n|-------|----------------|-------------|\n| `owner` | `ctx.userId` | Current authenticated user's ID |\n| `organization` | `ctx.activeOrgId` | User's active organization ID |\n| `team` | `ctx.activeTeamId` | User's active team ID |\n\nThese context values are populated by the auth middleware from the user's session.\n```\n\n## Auto-Detection\n\nQuickback automatically detects firewall scope based on your column names:\n\n| Column Name | Detected Scope |\n|-------------|----------------|\n| `organizationId` or `organization_id` | `organization` |\n| `ownerId` or `owner_id` | `owner` |\n| `teamId` or `team_id` | `team` |\n| `deletedAt` or `deleted_at` | `softDelete` |\n\nThis means you often don't need to specify column names:\n\n```typescript\n// Quickback detects organizationId column automatically\nfirewall: { organization: {} }\n\n// Quickback detects ownerId column automatically\nfirewall: { owner: {} }\n```\n\n## Common Patterns\n\n```typescript\n// Public/system table - no filtering\nfirewall: { exception: true }\n\n// Organization-scoped data (auto-detected from organizationId column)\nfirewall: { organization: {} }\n\n// Personal user data (auto-detected from ownerId column)\nfirewall: { owner: {} }\n\n// Org data with optional owner filtering\nfirewall: {\n organization: {},\n owner: { mode: 'optional' }\n}\n\n// With soft delete (auto-detected from deletedAt column)\nfirewall: {\n organization: {},\n softDelete: {}\n}\n```\n\n## Rules\n\n- Every resource MUST have at least one ownership scope OR `exception: true`\n- Cannot mix `exception: true` with ownership scopes\n\n## Why Can't You Mix `exception` with Ownership?\n\nThey represent opposite intentions:\n- `exception: true` = \"Generate NO WHERE clauses, data is public/global\"\n- Ownership scopes = \"Generate WHERE clauses to filter data\"\n\nCombining them would be contradictory - you can't both filter and not filter.\n\n## Handling \"Some Public, Some Private\" Data\n\nIf you need records that are sometimes public and sometimes scoped, you have two options:\n\n### Option 1: Two Separate Tables\n\nSplit into two tables - one public, one 
scoped:\n\n```typescript\n// features/templates/template-library.ts - Public templates\nexport default defineTable(templateLibrary, {\n firewall: { exception: true },\n // ...\n});\n\n// features/templates/user-templates.ts - User's custom templates\nexport default defineTable(userTemplates, {\n firewall: { owner: {} },\n // ...\n});\n```\n\n### Option 2: Use Access Control Instead\n\nKeep ownership scope but make access permissive, then control visibility via `access`:\n\n```typescript\n// features/documents/documents.ts\nexport default defineTable(documents, {\n firewall: {\n organization: {}, // Still scoped to org\n },\n crud: {\n list: {\n // Anyone in the org can list, but they see different things\n // based on a \"visibility\" field you check in your app logic\n access: { roles: [\"owner\", \"admin\", \"member\"] },\n },\n get: {\n // Use record conditions to allow public docs OR owned docs\n access: {\n or: [\n { record: { visibility: { equals: \"public\" } } },\n { record: { ownerId: { equals: \"$ctx.userId\" } } },\n { roles: [\"admin\"] }\n ]\n }\n },\n },\n // ...\n});\n```\n\n### Which to Choose?\n\n| Scenario | Recommendation |\n|----------|----------------|\n| Truly global data (app config, public templates) | `exception: true` in separate resource |\n| \"Public within org\" but still org-isolated | Ownership scope + permissive access rules |\n| User can toggle their own data public/private | Ownership scope + `visibility` field + access conditions |"
119
+ },
120
+ "compiler/definitions/guards": {
121
+ "title": "Guards - Field Modification Rules",
122
+ "content": "Control which fields can be modified in CREATE vs UPDATE operations.\n\n## Basic Usage\n\n```typescript\n// features/invoices/invoices.ts\n\nexport const invoices = sqliteTable('invoices', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n amount: integer('amount').notNull(),\n status: text('status').notNull().default('draft'),\n invoiceNumber: text('invoice_number'),\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(invoices, {\n firewall: { organization: {} },\n guards: {\n createable: [\"name\", \"amount\", \"invoiceNumber\"],\n updatable: [\"name\"],\n protected: {\n status: [\"approve\", \"reject\"],\n amount: [\"reviseAmount\"],\n },\n immutable: [\"invoiceNumber\"],\n },\n crud: {\n // ...\n },\n});\n```\n\n## Configuration Options\n\n```typescript\nguards: {\n // Fields allowed on CREATE\n createable?: string[];\n\n // Fields allowed on UPDATE/PATCH\n updatable?: string[];\n\n // Fields only modifiable via specific actions\n protected?: Record<string, string[]>;\n\n // Fields set on CREATE, never modified after\n immutable?: string[];\n}\n```\n\n## How It Works\n\n| List | What it controls |\n|------|------------------|\n| `createable` | Fields allowed in create (POST) request body |\n| `updatable` | Fields allowed in update (PATCH) request body |\n| `protected` | Fields blocked from CRUD, only modifiable via named actions |\n| `immutable` | Fields allowed on create, blocked on all updates |\n\n**Combining lists:**\n\n- `createable` + `updatable` - Most fields go in both (can set on create AND modify later)\n- `createable` only - Field is set once, cannot be changed via update\n- `protected` - Don't also list in `createable` or `updatable` (they're mutually exclusive)\n- `immutable` - Don't also list in `updatable` (contradiction)\n\n```typescript\nguards: {\n createable: [\"name\", \"description\", \"category\"],\n updatable: [\"name\", \"description\"],\n // \"category\" is only in createable = set once, can't change via update\n protected: {\n status: [\"approve\", \"reject\"], // NOT in createable/updatable\n },\n immutable: [\"invoiceNumber\"], // NOT in updatable\n}\n```\n\n**If a field is not listed anywhere, it cannot be set by the client.**\n\n## System-Managed Fields (Always Protected)\n\nThese are automatically protected - you cannot override:\n- `createdAt`, `createdBy`\n- `modifiedAt`, `modifiedBy`\n- `deletedAt`, `deletedBy`\n\n## Example\n\n```typescript\nguards: {\n createable: [\"name\", \"description\", \"amount\"],\n updatable: [\"name\", \"description\"],\n protected: {\n status: [\"approve\", \"reject\"], // Only via these actions\n amount: [\"reviseAmount\"],\n },\n immutable: [\"invoiceNumber\"],\n}\n```\n\n## Disabling Guards\n\n```typescript\nguards: false // Only system fields protected\n```\n\nWhen `guards: false` is set, all user-defined fields become writable via CRUD operations. This is useful for:\n- External sync scenarios where you need full field control\n- Batch upsert operations where field restrictions would be limiting\n- Simple tables where field-level protection isn't needed\n\n## PUT/Upsert with External IDs\n\nWhen you disable guards AND use client-provided IDs, you unlock PUT (upsert) operations. This is designed for **syncing data from external systems**.\n\n### Requirements for PUT\n\n1. `generateId: false` in database config (client provides IDs)\n2. `guards: false` in resource definition\n\n### How PUT Works\n\n```\nPUT /resource/:id\n├── Record exists? 
→ UPDATE (replace all fields)\n└── Record missing? → CREATE with provided ID\n```\n\n### Database Config\n\n```typescript\n// quickback.config.ts\nexport default {\n database: {\n generateId: false, // Client provides IDs (enables PUT)\n // Other options: 'uuid' | 'cuid' | 'nanoid' | 'prefixed' | 'serial'\n }\n};\n```\n\n### Table Definition\n\n```typescript\n// features/external/external-accounts.ts\nexport default defineTable(externalAccounts, {\n firewall: {\n organization: {}, // Still isolated by org\n },\n guards: false, // Disables field restrictions\n crud: {\n put: {\n access: { roles: ['admin', 'sync-service'] }\n }\n },\n});\n```\n\n## What's Still Protected with guards: false\n\nEven with `guards: false`, system-managed fields are ALWAYS protected:\n- `createdAt`, `createdBy` - Set on INSERT only\n- `modifiedAt`, `modifiedBy` - Auto-updated\n- `deletedAt`, `deletedBy` - Set on soft delete\n\n## Ownership Auto-Population\n\nWhen PUT creates a new record, ownership fields are auto-set from context:\n\n```typescript\n// Client sends: PUT /accounts/ext-123 { name: \"Acme\" }\n// Server creates:\n{\n id: \"ext-123\", // Client-provided\n name: \"Acme\", // Client-provided\n organizationId: ctx.activeOrgId, // Auto-set from firewall\n createdAt: now, // Auto-set\n createdBy: ctx.userId, // Auto-set\n modifiedAt: now, // Auto-set\n modifiedBy: ctx.userId, // Auto-set\n}\n```\n\n## Use Cases for PUT/External IDs\n\n| Use Case | Why PUT? |\n|----------|----------|\n| External API sync | External system controls the ID |\n| Webhook handlers | Events come with their own IDs |\n| Data migration | Preserve IDs from source system |\n| Idempotent updates | Safe to retry (no duplicate creates) |\n| Bulk upsert | Create or update in one operation |\n\n## ID Generation Options\n\n| `generateId` | PUT Available? | Notes |\n|--------------|----------------|-------|\n| `'uuid'` | No | Server generates UUID |\n| `'cuid'` | No | Server generates CUID |\n| `'nanoid'` | No | Server generates nanoid |\n| `'prefixed'` | No | Server generates prefixed ID (e.g. `room_abc123`) |\n| `'serial'` | No | Database auto-increments |\n| `false` | Yes (if guards: false) | Client provides ID |\n\n## Compile-Time Validation\n\nThe Quickback compiler validates your guards configuration and will error if:\n\n1. **Field in both `createable` and `protected`** - A field cannot be both client-writable on create and action-only\n2. **Field in both `updatable` and `protected`** - A field cannot be both client-writable on update and action-only\n3. **Field in both `updatable` and `immutable`** - Contradictory: immutable fields cannot be updated\n4. **Field in `protected` doesn't exist in schema** - Referenced field must exist in the table\n5. **Field in `createable`/`updatable`/`immutable` doesn't exist in schema** - All referenced fields must exist"
123
+ },
124
+ "compiler/definitions": {
125
+ "title": "Definitions Overview",
126
+ "content": "Before diving into specific features, let's understand how Quickback's pieces connect. This page gives you the mental model for everything that follows.\n\n## The Big Picture\n\nQuickback is a **backend compiler**. You write definition files, and Quickback compiles them into a production-ready API.\n\n1. **You write definitions** - Table files with schema and security config using `defineTable`\n2. **Quickback compiles them** - Analyzes your definitions at build time\n3. **You get a production API** - `GET /rooms`, `POST /rooms`, `PATCH /rooms/:id`, `DELETE /rooms/:id`, batch operations, plus custom actions\n\n## File Structure\n\nYour definitions live in a `definitions/` folder organized by feature:\n\n```\nmy-app/\n├── quickback/\n│ ├── quickback.config.ts # Compiler configuration\n│ └── definitions/\n│ └── features/\n│ └── {feature-name}/\n│ ├── claims.ts # Table + config (defineTable)\n│ ├── claim-versions.ts # Secondary table + config\n│ ├── claim-sources.ts # Internal table (no routes)\n│ ├── actions.ts # Custom actions (optional)\n│ └── handlers/ # Action handlers (optional)\n│ └── my-action.ts\n├── src/ # Generated code (output)\n├── drizzle/ # Generated migrations\n└── package.json\n```\n\n**Table files** use `defineTable` to combine schema and security config:\n\n```typescript\n// features/claims/claims.ts\n\nexport const claims = sqliteTable(\"claims\", {\n id: text(\"id\").primaryKey(),\n organizationId: text(\"organization_id\").notNull(),\n content: text(\"content\").notNull(),\n});\n\nexport default defineTable(claims, {\n firewall: { organization: {} },\n guards: {\n createable: [\"content\"],\n updatable: [\"content\"],\n },\n crud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n get: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n update: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] } },\n },\n});\n\nexport type Claim = typeof claims.$inferSelect;\n```\n\n**Key points:**\n- Tables with `export default defineTable(...)` → CRUD routes generated\n- Tables without default export → internal/junction tables (no routes)\n- Route path derived from filename: `claim-versions.ts` → `/api/v1/claim-versions`\n\n## The Four Security Layers\n\nEvery API request passes through four security layers, in order:\n\n```\nRequest → Firewall → Access → Guards → Masking → Response\n │ │ │ │\n │ │ │ └── Hide sensitive fields\n │ │ └── Block field modifications\n │ └── Check roles & conditions\n └── Isolate data by owner/org/team\n```\n\n### 1. Firewall (Data Isolation)\n\nThe firewall controls **which records** a user can see. It automatically adds WHERE clauses to every query based on your schema columns:\n\n| Column in Schema | What happens |\n|------------------|--------------|\n| `organization_id` | Data isolated by organization |\n| `user_id` | Data isolated by user (personal data) |\n\nNo manual configuration needed - Quickback applies smart rules based on your schema.\n\n[Learn more about Firewall →](/compiler/definitions/firewall)\n\n### 2. Access (CRUD Permissions)\n\nAccess controls **which operations** a user can perform. 
It checks roles and record conditions.\n\n```typescript\ncrud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\"] } },\n update: { access: { roles: [\"owner\", \"admin\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] } },\n}\n```\n\n[Learn more about Access →](/compiler/definitions/access)\n\n### 3. Guards (Field Modification Rules)\n\nGuards control **which fields** can be modified in each operation.\n\n| Guard Type | What it means |\n|------------|---------------|\n| `createable` | Fields that can be set when creating |\n| `updatable` | Fields that can be changed when updating |\n| `protected` | Fields that can only be changed via specific actions |\n| `immutable` | Fields that can never be changed after creation |\n\n[Learn more about Guards →](/compiler/definitions/guards)\n\n### 4. Masking (Data Redaction)\n\nMasking hides sensitive fields from users who shouldn't see them.\n\n```typescript\nmasking: {\n ssn: { type: 'ssn' }, // Shows: ***-**-1234\n email: { type: 'email' }, // Shows: j***@y***.com\n salary: { type: 'redact' }, // Shows: [REDACTED]\n}\n```\n\n[Learn more about Masking →](/compiler/definitions/masking)\n\n## How They Work Together\n\n**Scenario:** A member requests `GET /employees/123`\n\n1. **Firewall** checks: Is employee 123 in the user's organization?\n - Yes → Continue\n - No → 404 Not Found (as if it doesn't exist)\n\n2. **Access** checks: Can members perform GET?\n - Yes → Continue\n - No → 403 Forbidden\n\n3. **Guards** don't apply to GET (they're for writes)\n\n4. **Masking** applies: User is a member, not admin\n - SSN: `123-45-6789` → `***-**-6789`\n - Salary: `85000` → `[REDACTED]`\n\n5. **Response** sent with masked data\n\n## Locked Down by Default\n\nQuickback is **secure by default**. Nothing is accessible until you explicitly allow it.\n\n| Layer | Default | What you must do |\n|-------|---------|------------------|\n| Firewall | AUTO | Auto-detects from `organization_id`/`user_id` columns. Only configure for exceptions. |\n| Access | DENIED | Explicitly define `access` rules with roles |\n| Guards | LOCKED | Explicitly list `createable`, `updatable` fields |\n| Actions | BLOCKED | Explicitly define `access` for each action |\n\n**You must deliberately open each door.** This prevents accidental data exposure.\n\n## Next Steps\n\n1. [Database Schema](/compiler/definitions/schema) — Define your tables\n2. [Firewall](/compiler/definitions/firewall) — Set up data isolation\n3. [Access](/compiler/definitions/access) — Configure CRUD permissions\n4. [Guards](/compiler/definitions/guards) — Control field modifications\n5. [Masking](/compiler/definitions/masking) — Hide sensitive data\n6. [Views](/compiler/definitions/views) — Column-level security\n7. [Validation](/compiler/definitions/validation) — Field validation rules\n8. [Actions](/compiler/definitions/actions) — Add custom business logic"
127
+ },
128
+ "compiler/definitions/masking": {
129
+ "title": "Masking - Field Redaction",
130
+ "content": "Hide sensitive data from unauthorized users while showing it to those with permission.\n\n## Automatic Masking (Secure by Default)\n\nQuickback automatically applies masking to columns that match sensitive naming patterns. This ensures a high security posture even if you don't explicitly configure masking.\n\n### Sensitive Keywords & Default Masks\n\n| Pattern | Default Mask | Description |\n| :--- | :--- | :--- |\n| `email` | `email` | p***@e****.com |\n| `phone`, `mobile`, `fax` | `phone` | ***-***-4567 |\n| `ssn`, `socialsecurity`, `nationalid` | `ssn` | ***-**-6789 |\n| `creditcard`, `cc`, `cardnumber`, `cvv` | `creditCard` | ****-****-****-1234 |\n| `iban` | `creditCard` | ****-****-****-1234 |\n| `password`, `secret`, `token`, `apikey`, `privatekey` | `redact` | [REDACTED] |\n\n### Build-Time Alerts\n\nIf the compiler auto-detects a sensitive column that you haven't explicitly configured, it will emit a warning:\n\n```bash\n[Warning] Auto-masking enabled for sensitive column \"users.email\". Explicitly configure masking to silence this warning.\n```\n\nTo silence this warning or change the behavior, simply define the field explicitly in your `masking` configuration. Explicit configurations always take precedence over auto-detected defaults.\n\n### Overriding Defaults\n\nIf you want to show a sensitive field to everyone (disable masking) or use a different rule, define it explicitly in the `resource.masking` block:\n\n```typescript\nexport default defineTable(users, {\n masking: {\n // Show email to everyone (overrides auto-masking)\n email: { type: 'email', show: { roles: ['everyone'] } },\n \n // Silence warning but keep secure (owner-only)\n ssn: { type: 'ssn', show: { or: 'owner' } }\n }\n});\n```\n\n## Basic Usage\n\n```typescript\n// features/employees/employees.ts\n\nexport const employees = sqliteTable('employees', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n ssn: text('ssn'),\n salary: integer('salary'),\n email: text('email'),\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(employees, {\n firewall: { organization: {} },\n guards: { createable: [\"name\", \"ssn\", \"salary\", \"email\"], updatable: [\"name\"] },\n masking: {\n ssn: { type: 'ssn', show: { roles: ['hr', 'admin'] } },\n salary: { type: 'redact', show: { roles: ['hr', 'admin'] } },\n email: { type: 'email', show: { or: 'owner' } },\n },\n crud: {\n // ...\n },\n});\n```\n\n## Built-in Mask Types\n\n| Type | Example Input | Masked Output |\n|------|---------------|---------------|\n| `'email'` | `john@yourdomain.com` | `j***@y***.com` |\n| `'phone'` | `555-123-4567` | `***-***-4567` |\n| `'ssn'` | `123-45-6789` | `***-**-6789` |\n| `'creditCard'` | `4111111111111111` | `************1111` |\n| `'name'` | `John Smith` | `J*** S***` |\n| `'redact'` | `anything` | `[REDACTED]` |\n| `'custom'` | (your logic) | (your output) |\n\n## Configuration\n\n```typescript\nmasking: {\n // Basic masking - everyone sees masked value\n taxId: { type: 'ssn' },\n\n // Show unmasked to specific roles\n salary: {\n type: 'redact',\n show: { roles: ['admin', 'hr'] }\n },\n\n // Show unmasked to owner (createdBy === ctx.userId)\n email: {\n type: 'email',\n show: { or: 'owner' }\n },\n\n // Custom mask function\n apiKey: {\n type: 'custom',\n mask: (value) => value.slice(0, 4) + '...' 
+ value.slice(-4),\n show: { roles: ['admin'] }\n },\n}\n```\n\n## Show Conditions\n\n```typescript\nshow: {\n roles?: string[]; // Unmasked if user has any of these roles\n or?: 'owner'; // Unmasked if user is the record owner\n}\n```\n\nThe `'owner'` condition compares against the owner column configured in your firewall. If you have `firewall: { owner: {} }`, the owner column (default: `ownerId` or `owner_id`) is used to determine ownership. If no owner firewall is configured, it falls back to `createdBy`.\n\n## Complete Example\n\n```typescript\n// features/employees/employees.ts\nexport default defineTable(employees, {\n firewall: { organization: {} },\n guards: { createable: [\"name\", \"ssn\", \"salary\"], updatable: [\"name\"] },\n masking: {\n ssn: { type: 'ssn', show: { roles: ['hr', 'admin'] } },\n salary: { type: 'redact', show: { roles: ['hr', 'admin'] } },\n personalEmail: { type: 'email', show: { or: 'owner' } },\n bankAccount: {\n type: 'custom',\n mask: (val) => '****' + val.slice(-4),\n show: { roles: ['payroll'] }\n },\n },\n crud: {\n list: { access: { roles: [\"member\", \"admin\"] } },\n get: { access: { roles: [\"member\", \"admin\"] } },\n create: { access: { roles: [\"hr\", \"admin\"] } },\n update: { access: { roles: [\"hr\", \"admin\"] } },\n delete: { access: { roles: [\"admin\"] } },\n },\n});\n```"
131
+ },
132
+ "compiler/definitions/schema": {
133
+ "title": "Database Schema",
134
+ "content": "Quickback uses [Drizzle ORM](https://orm.drizzle.team/) to define your database schema. With `defineTable`, you combine your schema definition and security configuration in a single file.\n\n## Defining Tables with defineTable\n\nEach table gets its own file with schema and config together. Use the Drizzle dialect that matches your target database:\n\n| Target Database | Import From | Table Function |\n|-----------------|-------------|----------------|\n| Cloudflare D1, Turso, SQLite | `drizzle-orm/sqlite-core` | `sqliteTable` |\n| Supabase, Neon, PostgreSQL | `drizzle-orm/pg-core` | `pgTable` |\n| PlanetScale, MySQL | `drizzle-orm/mysql-core` | `mysqlTable` |\n\n```typescript\n// definitions/features/rooms/rooms.ts\n// For D1/SQLite targets:\n\nexport const rooms = sqliteTable('rooms', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n description: text('description'),\n capacity: integer('capacity').notNull().default(10),\n roomType: text('room_type').notNull(),\n isActive: integer('is_active', { mode: 'boolean' }).notNull().default(true),\n\n // Ownership - for firewall data isolation\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(rooms, {\n firewall: { organization: {} },\n guards: {\n createable: [\"name\", \"description\", \"capacity\", \"roomType\"],\n updatable: [\"name\", \"description\", \"capacity\"],\n },\n crud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n get: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\"] } },\n update: { access: { roles: [\"owner\", \"admin\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] } },\n },\n});\n\nexport type Room = typeof rooms.$inferSelect;\n```\n\n## Column Types\n\nDrizzle supports all standard SQL column types:\n\n| Type | Drizzle Function | Example |\n|------|------------------|---------|\n| String | `text()`, `varchar()` | `text('name')` |\n| Integer | `integer()`, `bigint()` | `integer('count')` |\n| Boolean | `boolean()` | `boolean('is_active')` |\n| Timestamp | `timestamp()` | `timestamp('created_at')` |\n| JSON | `json()`, `jsonb()` | `jsonb('metadata')` |\n| UUID | `uuid()` | `uuid('id')` |\n| Decimal | `decimal()`, `numeric()` | `decimal('price', { precision: 10, scale: 2 })` |\n\n## Column Modifiers\n\n```typescript\n// Required field\nname: text('name').notNull()\n\n// Default value\nisActive: boolean('is_active').default(true)\n\n// Primary key\nid: text('id').primaryKey()\n\n// Unique constraint\nemail: text('email').unique()\n\n// Default to current timestamp\ncreatedAt: timestamp('created_at').defaultNow()\n```\n\n## File Organization\n\nOrganize your tables by feature. 
Each feature directory contains table files:\n\n```\ndefinitions/\n└── features/\n ├── rooms/\n │ ├── rooms.ts # Main table + config\n │ ├── room-bookings.ts # Related table + config\n │ └── actions.ts # Custom actions\n ├── users/\n │ ├── users.ts # Table + config\n │ └── user-preferences.ts # Related table\n └── organizations/\n └── organizations.ts\n```\n\n**Key points:**\n- Tables with `export default defineTable(...)` get CRUD routes generated\n- Tables without a default export are internal (no routes, used for joins/relations)\n- Route paths are derived from filenames: `room-bookings.ts` → `/api/v1/room-bookings`\n\n### defineTable vs defineResource\n\nBoth functions are available:\n\n- **`defineTable`** - The standard function for defining tables with CRUD routes\n- **`defineResource`** - Alias for `defineTable`, useful when thinking in terms of REST resources\n\n```typescript\n// These are equivalent:\nexport default defineTable(rooms, { /* config */ });\nexport default defineResource(rooms, { /* config */ });\n```\n\n## 1 Resource = 1 Security Boundary\n\nEach `defineTable()` call defines a complete, self-contained security boundary. The security config you write — firewall, access, guards, and masking — is compiled into a single resource file that wraps all CRUD routes for that table.\n\nThis is a deliberate design choice. Mixing two resources with different security rules in one configuration would create ambiguity about which firewall, access, or masking rules apply to which table. By keeping it 1:1, there's never any question.\n\n| Scenario | What to do |\n|----------|------------|\n| Table needs its own API routes + security | Own file with `defineTable()` |\n| Table is internal/supporting (no direct API) | Extra `.ts` file in the parent feature directory, no `defineTable()` |\n\nA supporting table without `defineTable()` is useful when it's accessed internally — by action handlers, joins, or background jobs — but should never be directly exposed as its own API endpoint.\n\n## Internal Tables (No Routes)\n\nFor junction tables or internal data structures that shouldn't have API routes, simply omit the `defineTable` export:\n\n```typescript\n// definitions/features/rooms/room-amenities.ts\n\n// Junction table - no routes needed\nexport const roomAmenities = sqliteTable('room_amenities', {\n roomId: text('room_id').notNull(),\n amenityId: text('amenity_id').notNull(),\n});\n\n// No default export = no CRUD routes generated\n```\n\nThese internal tables still participate in the database schema and migrations — they just don't get API routes or security configuration.\n\n## Audit Fields\n\nQuickback automatically adds and manages these audit fields - you don't need to define them:\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `createdAt` | timestamp | Set when record is created |\n| `createdBy` | text | User ID who created the record |\n| `modifiedAt` | timestamp | Updated on every change |\n| `modifiedBy` | text | User ID who last modified |\n| `deletedAt` | timestamp | Set on soft delete (optional) |\n| `deletedBy` | text | User ID who deleted (optional) |\n\nThe soft delete fields (`deletedAt`, `deletedBy`) are only added if your resource uses soft delete mode.\n\n### Disabling Audit Fields\n\nTo disable automatic audit fields for your project:\n\n```typescript\n// quickback.config.ts\nexport default defineConfig({\n // ...\n compiler: {\n features: {\n auditFields: false, // Disable for entire project\n }\n }\n});\n```\n\n### Protected System 
Fields\n\nThese fields are always protected and cannot be set by clients, even with `guards: false`:\n\n- `id` (when `generateId` is not `false`)\n- `createdAt`, `createdBy`\n- `modifiedAt`, `modifiedBy`\n- `deletedAt`, `deletedBy`\n\n## Ownership Fields\n\nFor the firewall to work, include the appropriate ownership columns:\n\n```typescript\n// For organization-scoped data (most common)\norganizationId: text('organization_id').notNull()\n\n// For user-owned data (personal data)\nownerId: text('owner_id').notNull()\n\n// For team-scoped data\nteamId: text('team_id').notNull()\n```\n\n## Relations (Optional)\n\nDrizzle supports defining relations for type-safe joins:\n\n```typescript\n\nexport const roomsRelations = relations(rooms, ({ one, many }) => ({\n organization: one(organizations, {\n fields: [rooms.organizationId],\n references: [organizations.id],\n }),\n bookings: many(bookings),\n}));\n```\n\n## Database Configuration\n\nConfigure database options in your Quickback config:\n\n```typescript\n// quickback.config.ts\nexport default defineConfig({\n name: 'my-app',\n providers: {\n database: defineDatabase('cloudflare-d1', {\n generateId: 'prefixed', // 'uuid' | 'cuid' | 'nanoid' | 'prefixed' | 'serial' | false\n namingConvention: 'snake_case', // 'camelCase' | 'snake_case'\n usePlurals: false, // Auth table names: 'users' vs 'user'\n }),\n },\n compiler: {\n features: {\n auditFields: true, // Auto-manage audit timestamps\n }\n }\n});\n```\n\n## Choosing Your Dialect\n\nUse the Drizzle dialect that matches your database provider:\n\n### SQLite (D1, Turso, better-sqlite3)\n\n```typescript\n\nexport const posts = sqliteTable('posts', {\n id: text('id').primaryKey(),\n title: text('title').notNull(),\n metadata: text('metadata', { mode: 'json' }), // JSON stored as text\n isPublished: integer('is_published', { mode: 'boolean' }).default(false),\n organizationId: text('organization_id').notNull(),\n});\n```\n\n### PostgreSQL (Supabase, Neon)\n\n```typescript\n\nexport const posts = pgTable('posts', {\n id: serial('id').primaryKey(),\n title: text('title').notNull(),\n metadata: jsonb('metadata'), // Native JSONB\n isPublished: boolean('is_published').default(false),\n organizationId: text('organization_id').notNull(),\n});\n```\n\n### Key Differences\n\n| Feature | SQLite | PostgreSQL |\n|---------|--------|------------|\n| Boolean | `integer({ mode: 'boolean' })` | `boolean()` |\n| JSON | `text({ mode: 'json' })` | `jsonb()` or `json()` |\n| Auto-increment | `integer().primaryKey()` | `serial()` |\n| UUID | `text()` | `uuid()` |\n\n## Next Steps\n\n- [Configure the firewall](/compiler/definitions/firewall) for data isolation\n- [Set up access control](/compiler/definitions/access) for CRUD operations\n- [Define guards](/compiler/definitions/guards) for field modification rules\n- [Add custom actions](/compiler/definitions/actions) for business logic"
135
+ },
136
+ "compiler/definitions/validation": {
137
+ "title": "Validation",
138
+ "content": "## Current Validation: Action Input Schemas\n\nYou can validate input data in custom actions using Zod schemas:\n\n```typescript\n// features/invoices/actions.ts\n\nexport default defineActions(invoices, {\n approve: {\n description: \"Approve invoice for payment\",\n input: z.object({\n notes: z.string().min(1).max(500).optional(),\n amount: z.number().positive(),\n }),\n access: { roles: [\"admin\", \"finance\"] },\n execute: async ({ db, record, ctx, input }) => {\n // input is validated against the Zod schema before execution\n return record;\n },\n },\n});\n```\n\n## Planned: defineTable Validation\n\nIn a future release, `defineTable()` will support field-level validation rules enforced at the API level when creating or updating records.\n\n### Planned Validators\n\n| Validator | Description | Example |\n|-----------|-------------|---------|\n| `minLength` | Minimum string length | `minLength: 3` |\n| `maxLength` | Maximum string length | `maxLength: 255` |\n| `min` | Minimum number value | `min: 0` |\n| `max` | Maximum number value | `max: 100` |\n| `pattern` | Regex pattern match | `pattern: '^[a-z]+$'` |\n| `enum` | Allowed values list | `enum: ['draft', 'published']` |\n\n### Planned Usage\n\n```typescript\n// NOT YET AVAILABLE — planned syntax\nexport default defineTable(posts, {\n firewall: { organization: {} },\n guards: {\n createable: [\"title\", \"content\", \"status\"],\n updatable: [\"title\", \"content\", \"status\"],\n },\n validation: {\n title: { minLength: 1, maxLength: 255 },\n status: { enum: [\"draft\", \"published\", \"archived\"] },\n },\n crud: {\n list: { access: { roles: [\"member\"] } },\n create: { access: { roles: [\"member\"] } },\n update: { access: { roles: [\"member\"] } },\n },\n});\n```"
139
+ },
140
+ "compiler/definitions/views": {
141
+ "title": "Views - Column Level Security",
142
+ "content": "Views implement **Column Level Security (CLS)** - controlling which columns users can access. This complements **Row Level Security (RLS)** provided by the [Firewall](/compiler/definitions/firewall), which controls which rows users can access.\n\n| Security Layer | Controls | Quickback Feature |\n|----------------|----------|-------------------|\n| Row Level Security | Which records | Firewall |\n| Column Level Security | Which fields | Views |\n\nViews provide named field projections with role-based access control. Use views to return different sets of columns to different users without duplicating CRUD endpoints.\n\n## When to Use Views\n\n| Concept | Purpose | Example |\n|---------|---------|---------|\n| CRUD list | Full records | Returns all columns |\n| Masking | Hide values | SSN `***-**-6789` |\n| Views | Exclude columns | Only `id`, `name`, `status` |\n\n- **CRUD list** returns all fields to authorized users\n- **Masking** transforms sensitive values but still includes the column\n- **Views** completely exclude columns from the response\n\n## Basic Usage\n\n```typescript\n// features/customers/customers.ts\n\nexport const customers = sqliteTable('customers', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n email: text('email').notNull(),\n phone: text('phone'),\n ssn: text('ssn'),\n internalNotes: text('internal_notes'),\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(customers, {\n masking: {\n ssn: { type: 'ssn', show: { roles: ['admin'] } },\n },\n\n crud: {\n list: { access: { roles: ['owner', 'admin', 'member'] } },\n get: { access: { roles: ['owner', 'admin', 'member'] } },\n },\n\n // Views: Named field projections\n views: {\n summary: {\n fields: ['id', 'name', 'email'],\n access: { roles: ['owner', 'admin', 'member'] },\n },\n full: {\n fields: ['id', 'name', 'email', 'phone', 'ssn', 'internalNotes'],\n access: { roles: ['admin'] },\n },\n report: {\n fields: ['id', 'name', 'email', 'createdAt'],\n access: { roles: ['finance', 'admin'] },\n },\n },\n});\n```\n\n## Generated Endpoints\n\nViews generate GET endpoints at `/{resource}/views/{viewName}`:\n\n```\nGET /api/v1/customers # CRUD list - all fields\nGET /api/v1/customers/views/summary # View - only summary fields\nGET /api/v1/customers/views/full # View - all fields (admin only)\nGET /api/v1/customers/views/report # View - report fields\n```\n\n## Query Parameters\n\nViews support the same query parameters as the list endpoint:\n\n### Pagination\n\n| Parameter | Description | Default | Max |\n|-----------|-------------|---------|-----|\n| `limit` | Number of records to return | 50 | 100 |\n| `offset` | Number of records to skip | 0 | - |\n\n```bash\n# Get first 10 records\nGET /api/v1/customers/views/summary?limit=10\n\n# Get records 11-20\nGET /api/v1/customers/views/summary?limit=10&offset=10\n```\n\n### Filtering\n\n```bash\n# Filter by exact value\nGET /api/v1/customers/views/summary?status=active\n\n# Filter with operators\nGET /api/v1/customers/views/summary?createdAt.gt=2024-01-01\n\n# Multiple filters (AND logic)\nGET /api/v1/customers/views/summary?status=active&email.like=yourdomain.com\n```\n\n### Sorting\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `sort` | Field to sort by | `createdAt` |\n| `order` | Sort direction (`asc` or `desc`) | `desc` |\n\n```bash\nGET /api/v1/customers/views/summary?sort=name&order=asc\n```\n\n## Security\n\nAll four security pillars apply to views:\n\n| Pillar | 
Behavior |\n|--------|----------|\n| **Firewall** | WHERE clause applied (same as list) |\n| **Access** | Per-view access control |\n| **Guards** | N/A (read-only) |\n| **Masking** | Applied to returned fields |\n\n### Firewall\n\nViews automatically apply the same firewall conditions as the list endpoint. Users only see records within their organization scope.\n\n### Access Control\n\nEach view has its own access configuration:\n\n```typescript\nviews: {\n // Public view - available to all members\n summary: {\n fields: ['id', 'name', 'status'],\n access: { roles: ['owner', 'admin', 'member'] },\n },\n // Restricted view - admin only\n full: {\n fields: ['id', 'name', 'status', 'ssn', 'internalNotes'],\n access: { roles: ['admin'] },\n },\n}\n```\n\n### Masking\n\nMasking rules are applied to the returned fields. If a view includes a masked field like `ssn`, the masking rules still apply:\n\n```typescript\nmasking: {\n ssn: { type: 'ssn', show: { roles: ['admin'] } },\n},\n\nviews: {\n // Even if a member accesses the 'full' view, ssn will be masked\n // because masking rules take precedence\n full: {\n fields: ['id', 'name', 'ssn'],\n access: { roles: ['owner', 'admin', 'member'] },\n },\n}\n```\n\n### How Views and Masking Work Together\n\nViews and masking are orthogonal concerns:\n- **Views** control field selection (which columns appear)\n- **Masking** controls field transformation (how values appear based on role)\n\nExample configuration:\n\n```typescript\nmasking: {\n ssn: { type: 'ssn', show: { roles: ['admin'] } },\n},\n\nviews: {\n summary: {\n fields: ['id', 'name'], // ssn NOT included\n access: { roles: ['owner', 'admin', 'member'] },\n },\n full: {\n fields: ['id', 'name', 'ssn'], // ssn included\n access: { roles: ['owner', 'admin', 'member'] },\n },\n}\n```\n\n| Endpoint | Role | `ssn` in response? 
| `ssn` value |\n|----------|------|-------------------|-------------|\n| `/views/summary` | member | No | N/A |\n| `/views/summary` | admin | No | N/A |\n| `/views/full` | member | Yes | `***-**-6789` |\n| `/views/full` | admin | Yes | `123-45-6789` |\n\n## Response Format\n\nView responses include metadata about the view:\n\n```json\n{\n \"data\": [\n { \"id\": \"cust_123\", \"name\": \"John Doe\", \"email\": \"john@yourdomain.com\" },\n { \"id\": \"cust_456\", \"name\": \"Jane Smith\", \"email\": \"jane@yourdomain.com\" }\n ],\n \"view\": \"summary\",\n \"fields\": [\"id\", \"name\", \"email\"],\n \"pagination\": {\n \"limit\": 50,\n \"offset\": 0,\n \"count\": 2\n }\n}\n```\n\n## Complete Example\n\n```typescript\n// features/employees/employees.ts\nexport default defineTable(employees, {\n guards: {\n createable: ['name', 'email', 'phone', 'department'],\n updatable: ['name', 'email', 'phone'],\n },\n\n masking: {\n ssn: { type: 'ssn', show: { roles: ['hr', 'admin'] } },\n salary: { type: 'redact', show: { roles: ['hr', 'admin'] } },\n personalEmail: { type: 'email', show: { or: 'owner' } },\n },\n\n crud: {\n list: { access: { roles: ['owner', 'admin', 'member'] } },\n get: { access: { roles: ['owner', 'admin', 'member'] } },\n create: { access: { roles: ['owner', 'admin'] } },\n update: { access: { roles: ['owner', 'admin'] } },\n delete: { access: { roles: ['owner', 'admin'] } },\n },\n\n views: {\n // Directory view - public employee info\n directory: {\n fields: ['id', 'name', 'email', 'department'],\n access: { roles: ['owner', 'admin', 'member'] },\n },\n // HR view - includes sensitive info\n hr: {\n fields: ['id', 'name', 'email', 'phone', 'ssn', 'salary', 'department', 'startDate'],\n access: { roles: ['owner', 'admin'] },\n },\n // Payroll export\n payroll: {\n fields: ['id', 'name', 'ssn', 'salary', 'bankAccount'],\n access: { roles: ['owner', 'admin'] },\n },\n },\n});\n```"
143
+ },
144
+ "compiler/getting-started/claude-code": {
145
+ "title": "Claude Code Skill",
146
+ "content": "The Quickback CLI includes a skill for [Claude Code](https://claude.com/claude-code) that gives Claude context about your project structure, definitions, and the Quickback compiler. This helps Claude write correct `defineTable()` definitions, actions, and configuration.\n\n## Installation\n\n### Install the Skill\n\n```bash\nquickback claude install\n```\n\nThis installs the Quickback skill files into your project so Claude Code can use them.\n\n**Options:**\n\n| Flag | Description |\n|------|-------------|\n| `--global` | Install to `~/.claude/` (available in all projects) |\n| `--local` | Install to `./quickback/.claude/` (project-specific, default) |\n\nAfter installing, start Claude Code in your project directory. The Quickback skill will be available automatically.\n\n## What the Skill Provides\n\nThe Quickback skill teaches Claude Code about:\n\n- **Project structure** — Where definitions, features, and config files live\n- **`defineTable()` API** — How to write schema + security definitions correctly\n- **`defineActions()` API** — How to write custom actions with input validation\n- **Security pillars** — Firewall, Access, Guards, and Masking configuration\n- **Provider options** — Cloudflare, Bun, Turso runtime and database choices\n- **Compiler workflow** — How to compile and deploy\n\n## Usage Examples\n\nWith the skill installed, you can ask Claude Code to:\n\n```\n\"Add a customers feature with org-scoped firewall and email masking\"\n\n\"Create an action on orders called 'refund' that requires admin role\"\n\n\"Add a summary view to the projects feature showing only id, name, and status\"\n\n\"Switch my config from Bun to Cloudflare for production deployment\"\n```\n\nClaude will generate correct Quickback definitions that you can compile directly.\n\n## Updating\n\nTo update the skill to the latest version:\n\n```bash\nquickback claude install\n```\n\nThis overwrites the existing skill files with the latest version from the CLI.\n\n## See Also\n\n- [Claude Code Plugin](/plugins-tools/claude-code-skill) — Full plugin reference\n- [Getting Started](/compiler/getting-started) — Project setup guide"
147
+ },
148
+ "compiler/getting-started/full-example": {
149
+ "title": "Complete Example",
150
+ "content": "This example shows a complete resource definition using all five security layers, and the production code Quickback generates from it.\n\n## What You Define\n\nA `customers` resource for a multi-tenant SaaS application with:\n- Organization-level data isolation\n- Role-based CRUD permissions\n- Field-level write protection\n- PII masking for sensitive fields\n- Column-level views for different access levels\n\n### Schema\n\n```typescript title=\"definitions/features/customers/schema.ts\"\n\nexport const customers = sqliteTable('customers', {\n id: text('id').primaryKey(),\n organizationId: text('organization_id').notNull(),\n name: text('name').notNull(),\n email: text('email').notNull(),\n phone: text('phone'),\n ssn: text('ssn'),\n});\n\n// Note: createdAt, createdBy, modifiedAt, modifiedBy are auto-injected\n```\n\n### Resource Configuration\n\n```typescript title=\"definitions/features/customers/resource.ts\"\n\nexport default defineResource(customers, {\n // FIREWALL: Data isolation\n firewall: {\n organization: {}\n },\n\n // ACCESS: Role-based permissions\n crud: {\n list: { access: { roles: ['owner', 'admin', 'member', 'support'] } },\n get: { access: { roles: ['owner', 'admin', 'member', 'support'] } },\n create: { access: { roles: ['admin'] } },\n update: { access: { roles: ['admin', 'support'] } },\n delete: { access: { roles: ['admin'] }, mode: 'hard' },\n },\n\n // GUARDS: Field-level protection\n guards: {\n createable: ['name', 'email', 'phone', 'ssn'],\n updatable: ['name', 'email', 'phone'],\n immutable: ['ssn'], // Can set once, never change\n },\n\n // MASKING: PII redaction\n masking: {\n ssn: { type: 'ssn', show: { roles: ['admin'] } },\n phone: { type: 'phone', show: { roles: ['admin', 'support'] } },\n email: { type: 'email', show: { roles: ['admin'] } },\n },\n\n // VIEWS: Column-level projections\n views: {\n summary: {\n fields: ['id', 'name', 'email'],\n access: { roles: ['owner', 'admin', 'member', 'support'] },\n },\n full: {\n fields: ['id', 'name', 'email', 'phone', 'ssn'],\n access: { roles: ['owner', 'admin'] },\n },\n },\n});\n```\n\n---\n\n## What Quickback Generates\n\nRun `quickback compile` and Quickback generates production-ready code.\n\n### Security Helpers\n\n```typescript title=\"src/features/customers/customers.resource.ts\"\n\n/**\n * Firewall conditions for customers\n * Pattern: Organization only\n */\nexport function buildFirewallConditions(ctx: AppContext) {\n const conditions = [];\n\n // Organization isolation\n conditions.push(eq(customers.organizationId, ctx.activeOrgId!));\n\n return and(...conditions);\n}\n\n/**\n * Guards configuration for field modification rules\n */\nexport const GUARDS_CONFIG = {\n createable: new Set(['name', 'email', 'phone', 'ssn']),\n updatable: new Set(['name', 'email', 'phone']),\n immutable: new Set(['ssn']),\n protected: {},\n systemManaged: new Set([\n 'createdAt', 'createdBy',\n 'modifiedAt', 'modifiedBy',\n 'deletedAt', 'deletedBy'\n ]),\n};\n\n/**\n * Validates field input for CREATE operations.\n */\nexport function validateCreate(input: Record<string, any>) {\n const systemManaged: string[] = [];\n const protectedFields: Array<{ field: string; actions: string[] }> = [];\n const notCreateable: string[] = [];\n\n for (const field of Object.keys(input)) {\n if (GUARDS_CONFIG.systemManaged.has(field)) {\n systemManaged.push(field);\n continue;\n }\n if (field in GUARDS_CONFIG.protected) {\n const actions = [...GUARDS_CONFIG.protected[field]];\n protectedFields.push({ field, actions });\n 
continue;\n }\n if (!GUARDS_CONFIG.createable.has(field) && !GUARDS_CONFIG.immutable.has(field)) {\n notCreateable.push(field);\n }\n }\n\n const valid = systemManaged.length === 0 &&\n protectedFields.length === 0 &&\n notCreateable.length === 0;\n\n return { valid, systemManaged, protected: protectedFields, notCreateable };\n}\n\n/**\n * Validates field input for UPDATE operations.\n */\nexport function validateUpdate(input: Record<string, any>) {\n const systemManaged: string[] = [];\n const immutable: string[] = [];\n const protectedFields: Array<{ field: string; actions: string[] }> = [];\n const notUpdatable: string[] = [];\n\n for (const field of Object.keys(input)) {\n if (GUARDS_CONFIG.systemManaged.has(field)) {\n systemManaged.push(field);\n continue;\n }\n if (GUARDS_CONFIG.immutable.has(field)) {\n immutable.push(field);\n continue;\n }\n if (field in GUARDS_CONFIG.protected) {\n const actions = [...GUARDS_CONFIG.protected[field]];\n protectedFields.push({ field, actions });\n continue;\n }\n if (!GUARDS_CONFIG.updatable.has(field)) {\n notUpdatable.push(field);\n }\n }\n\n const valid = systemManaged.length === 0 &&\n immutable.length === 0 &&\n protectedFields.length === 0 &&\n notUpdatable.length === 0;\n\n return { valid, systemManaged, immutable, protected: protectedFields, notUpdatable };\n}\n\n/**\n * Masks sensitive fields based on user role\n */\nexport function maskCustomer<T extends Record<string, any>>(\n record: T,\n ctx: AppContext\n): T {\n const masked: any = { ...record };\n\n // email: show to admin only\n if (!ctx.roles?.includes('admin')) {\n if (masked['email'] != null) {\n masked['email'] = masks.email(masked['email']);\n }\n }\n\n // phone: show to admin, support\n if (!ctx.roles?.some(r => ['admin', 'support'].includes(r))) {\n if (masked['phone'] != null) {\n masked['phone'] = masks.phone(masked['phone']);\n }\n }\n\n // ssn: show to admin only\n if (!ctx.roles?.includes('admin')) {\n if (masked['ssn'] != null) {\n masked['ssn'] = masks.ssn(masked['ssn']);\n }\n }\n\n return masked as T;\n}\n\nexport function maskCustomers<T extends Record<string, any>>(\n records: T[],\n ctx: AppContext\n): T[] {\n return records.map(r => maskCustomer(r, ctx));\n}\n\n// CRUD Access configuration\nexport const CRUD_ACCESS = {\n list: { access: { roles: ['owner', 'admin', 'member', 'support'] } },\n get: { access: { roles: ['owner', 'admin', 'member', 'support'] } },\n create: { access: { roles: ['admin'] } },\n update: { access: { roles: ['admin', 'support'] } },\n delete: { access: { roles: ['admin'] }, mode: 'hard' },\n};\n\n// Views configuration\nexport const VIEWS_CONFIG = {\n summary: {\n fields: ['id', 'name', 'email'],\n access: { roles: ['owner', 'admin', 'member', 'support'] },\n },\n full: {\n fields: ['id', 'name', 'email', 'phone', 'ssn'],\n access: { roles: ['owner', 'admin'] },\n },\n};\n```\n\n### API Routes\n\nThe generated routes wire everything together:\n\n```typescript title=\"src/features/customers/customers.routes.ts\"\nimport {\n buildFirewallConditions,\n validateCreate,\n validateUpdate,\n maskCustomer,\n maskCustomers,\n CRUD_ACCESS\n} from './customers.resource';\n\nconst app = new Hono();\n\n// GET /customers - List with firewall + masking\napp.get('/', async (c) => {\n const ctx = c.get('ctx');\n const db = c.get('db');\n\n // Access check\n if (!await evaluateAccess(CRUD_ACCESS.list.access, ctx)) {\n return c.json(AccessErrors.roleRequired(\n CRUD_ACCESS.list.access.roles,\n ctx.roles\n ), 403);\n }\n\n // Query with firewall conditions\n const 
results = await db.select().from(customers)\n .where(buildFirewallConditions(ctx));\n\n // Apply masking before returning\n return c.json({\n data: maskCustomers(results, ctx),\n });\n});\n\n// POST /customers - Create with guards\napp.post('/', async (c) => {\n const ctx = c.get('ctx');\n const db = c.get('db');\n\n // Access check\n if (!await evaluateAccess(CRUD_ACCESS.create.access, ctx)) {\n return c.json(AccessErrors.roleRequired(\n CRUD_ACCESS.create.access.roles,\n ctx.roles\n ), 403);\n }\n\n const body = await c.req.json();\n\n // Guards validation\n const validation = validateCreate(body);\n if (!validation.valid) {\n if (validation.systemManaged.length > 0) {\n return c.json(GuardErrors.systemManaged(validation.systemManaged), 400);\n }\n if (validation.notCreateable.length > 0) {\n return c.json(GuardErrors.fieldNotCreateable(validation.notCreateable), 400);\n }\n }\n\n // Apply ownership and audit fields\n const data = {\n id: 'cst_' + crypto.randomUUID().replace(/-/g, ''),\n ...body,\n organizationId: ctx.activeOrgId,\n createdAt: new Date().toISOString(),\n createdBy: ctx.userId,\n modifiedAt: new Date().toISOString(),\n modifiedBy: ctx.userId,\n };\n\n const result = await db.insert(customers).values(data).returning();\n return c.json(maskCustomer(result[0], ctx), 201);\n});\n\n// PATCH /customers/:id - Update with guards\napp.patch('/:id', async (c) => {\n const ctx = c.get('ctx');\n const db = c.get('db');\n const id = c.req.param('id');\n\n // Fetch with firewall\n const [record] = await db.select().from(customers)\n .where(and(buildFirewallConditions(ctx), eq(customers.id, id)));\n\n if (!record) {\n return c.json({ error: 'Not found', code: 'NOT_FOUND' }, 404);\n }\n\n // Access check\n if (!await evaluateAccess(CRUD_ACCESS.update.access, ctx)) {\n return c.json(AccessErrors.roleRequired(\n CRUD_ACCESS.update.access.roles,\n ctx.roles\n ), 403);\n }\n\n const body = await c.req.json();\n\n // Guards validation\n const validation = validateUpdate(body);\n if (!validation.valid) {\n if (validation.immutable.length > 0) {\n return c.json(GuardErrors.fieldImmutable(validation.immutable), 400);\n }\n if (validation.notUpdatable.length > 0) {\n return c.json(GuardErrors.fieldNotUpdatable(validation.notUpdatable), 400);\n }\n }\n\n // Apply audit fields\n const data = {\n ...body,\n modifiedAt: new Date().toISOString(),\n modifiedBy: ctx.userId,\n };\n\n const result = await db.update(customers).set(data)\n .where(eq(customers.id, id)).returning();\n\n return c.json(maskCustomer(result[0], ctx));\n});\n\nexport default app;\n```\n\n---\n\n## Runtime Behavior\n\n### Masking by Role\n\n| Role | `email` | `phone` | `ssn` |\n|------|---------|---------|-------|\n| **admin** | `john@example.com` | `555-123-4567` | `123-45-6789` |\n| **support** | `j***@e***.com` | `555-123-4567` | `***-**-6789` |\n| **member** | `j***@e***.com` | `***-***-4567` | `***-**-6789` |\n\n### Views + Masking Interaction\n\nViews control **which fields are returned**. Masking controls **what values are shown**.\n\n| Endpoint | Role | `ssn` in response? 
| `ssn` value |\n|----------|------|--------------------|-------------|\n| `/customers/views/summary` | member | No | N/A |\n| `/customers/views/summary` | admin | No | N/A |\n| `/customers/views/full` | member | Yes | `***-**-6789` |\n| `/customers/views/full` | admin | Yes | `123-45-6789` |\n\n### Guards Enforcement\n\n```bash\n# ✅ Allowed: Create with createable fields\nPOST /customers\n{ \"name\": \"Jane\", \"email\": \"jane@example.com\", \"ssn\": \"987-65-4321\" }\n\n# ❌ Rejected: Update immutable field\nPATCH /customers/cst_123\n{ \"ssn\": \"000-00-0000\" }\n# → 400: \"Field 'ssn' is immutable and cannot be modified\"\n\n# ❌ Rejected: Set system-managed field\nPOST /customers\n{ \"name\": \"Jane\", \"createdBy\": \"hacker\" }\n# → 400: \"System-managed fields cannot be set: createdBy\"\n```\n\n---\n\n## Generated API Endpoints\n\n| Method | Endpoint | Access |\n|--------|----------|--------|\n| `GET` | `/customers` | owner, admin, member, support |\n| `GET` | `/customers/:id` | owner, admin, member, support |\n| `POST` | `/customers` | admin |\n| `PATCH` | `/customers/:id` | admin, support |\n| `DELETE` | `/customers/:id` | admin |\n| `GET` | `/customers/views/summary` | owner, admin, member, support |\n| `GET` | `/customers/views/full` | admin |\n\n---\n\n## Key Takeaways\n\n1. **~50 lines of configuration** generates **~500+ lines of production code**\n2. **Security is declarative** - you describe *what* you want, not *how*\n3. **All layers work together** - Firewall → Access → Guards → Masking\n4. **Code is readable and auditable** - no magic, just TypeScript\n5. **Type-safe from definition to API** - Drizzle schema drives everything"
151
+ },
152
+ "compiler/getting-started/hand-crafted": {
153
+ "title": "Hand-Crafted Setup",
154
+ "content": "If you have an existing project and want to add Quickback-generated API endpoints, you can set up the definitions directory manually instead of using a template.\n\n## Prerequisites\n\n- An existing Hono-based project (Cloudflare Workers or Bun)\n- Node.js 18+ or Bun installed\n- The Quickback CLI: `npm install -g @kardoe/quickback`\n\n## Setup\n\n### 1. Create the Quickback Directory\n\nCreate a `quickback/` directory in your project root with the following structure:\n\n```\nyour-project/\n├── quickback/\n│ ├── quickback.config.ts\n│ └── features/\n│ └── (your features go here)\n├── src/ # Your existing code\n├── package.json\n└── ...\n```\n\n```bash\nmkdir -p quickback/features\n```\n\n### 2. Write Your Config\n\nCreate `quickback/quickback.config.ts`:\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n template: \"hono\",\n features: [\"organizations\"],\n providers: {\n runtime: { name: \"cloudflare\", config: {} },\n database: {\n name: \"cloudflare-d1\",\n config: { binding: \"DB\" },\n },\n auth: { name: \"better-auth\", config: {} },\n },\n});\n```\n\nAdjust the providers to match your existing stack. See [Providers](/compiler/config/providers) for all options.\n\n### 3. Create Your First Feature\n\nCreate a feature directory with a schema + security definition:\n\n```bash\nmkdir quickback/features/customers\n```\n\nCreate `quickback/features/customers/customers.ts`:\n\n```typescript\n\nexport const customers = sqliteTable(\"customers\", {\n id: text(\"id\").primaryKey(),\n name: text(\"name\").notNull(),\n email: text(\"email\").notNull(),\n status: text(\"status\").default(\"active\"),\n organizationId: text(\"organization_id\").notNull(),\n});\n\nexport default defineTable(customers, {\n firewall: {\n organization: {},\n },\n guards: {\n createable: [\"name\", \"email\", \"status\"],\n updatable: [\"name\", \"email\", \"status\"],\n immutable: [\"id\", \"organizationId\"],\n },\n masking: {\n email: { type: \"email\", show: { roles: [\"admin\"] } },\n },\n crud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n get: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\"] } },\n update: { access: { roles: [\"owner\", \"admin\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] }, mode: \"soft\" },\n },\n});\n```\n\n### 4. Log In and Compile\n\n```bash\nquickback login\nquickback compile\n```\n\nThe compiler generates a complete `src/` directory with route handlers, middleware, database schemas, and migrations.\n\n### 5. Integrate with Your Existing Code\n\nThe compiled output creates a self-contained Hono app in `src/index.ts`. 
If you need to integrate the generated routes into an existing Hono app, you can import the feature routes directly:\n\n```typescript\n\nconst app = new Hono();\n\n// Your existing routes\napp.get(\"/\", (c) => c.json({ status: \"ok\" }));\n\n// Mount generated feature routes\napp.route(\"/api/v1/customers\", customersRoutes);\n\nexport default app;\n```\n\n## Directory Structure\n\nThe compiler expects this structure inside `quickback/`:\n\n```\nquickback/\n├── quickback.config.ts # Required: compiler configuration\n└── features/ # Required: feature definitions\n ├── customers/\n │ ├── customers.ts # Schema + security (defineTable)\n │ └── actions.ts # Optional: custom actions\n ├── orders/\n │ ├── orders.ts\n │ └── actions.ts\n └── ...\n```\n\nEach feature directory should contain:\n- **`{name}.ts`** — The main definition file using `defineTable()` (required)\n- **`actions.ts`** — Custom actions using `defineActions()` (optional)\n\n## Adding Features\n\nTo add a new feature, create a new directory under `quickback/features/` and recompile:\n\n```bash\nmkdir quickback/features/invoices\n# Create invoices/invoices.ts with defineTable(...)\nquickback compile\n```\n\nThe compiler detects all features automatically — no registration needed.\n\n## Recompiling\n\nAfter any change to your definitions, recompile to regenerate the output:\n\n```bash\nquickback compile\n```\n\nThe compiler regenerates the entire `src/` directory. Your definitions in `quickback/` are the source of truth — never edit the generated files directly.\n\n**Warning:** Don't edit files in `src/` manually. They will be overwritten on the next compile. All changes should be made in your `quickback/` definitions.\n\n## Next Steps\n\n- [Configuration reference](/compiler/config) — All config options\n- [Schema definitions](/compiler/definitions/schema) — Define tables with `defineTable()`\n- [Security pillars](/compiler/definitions) — Firewall, Access, Guards, Masking\n- [Templates](/compiler/getting-started/templates) — Use a template for new projects instead"
155
+ },
156
+ "compiler/getting-started": {
157
+ "title": "Getting Started",
158
+ "content": "Get started with Quickback in minutes. This guide shows you how to define a complete table with security configuration.\n\n## Install the CLI\n\n```bash\nnpm install -g @kardoe/quickback\n```\n\n## Create a Project\n\n```bash\nquickback create cloudflare my-app\ncd my-app\n```\n\nThis scaffolds a complete project with:\n- `quickback.config.ts` — Project configuration\n- `definitions/features/` — Your table definitions\n- Example todos feature with full security configuration\n\n**Available templates:**\n- `cloudflare` — Cloudflare Workers + D1 + Better Auth (free)\n- `bun` — Bun + SQLite + Better Auth (free)\n- `turso` — Turso/LibSQL + Better Auth (pro)\n\n## File Structure\n\nEach table gets its own file with schema and config together using `defineTable`:\n\n```\ndefinitions/\n└── features/\n └── rooms/\n ├── rooms.ts # Table + security config\n ├── room-bookings.ts # Related table + config\n ├── actions.ts # Custom actions (optional)\n └── handlers/ # Action handlers (optional)\n └── activate.ts\n```\n\n## Complete Example\n\nHere's a complete `rooms` table with all security layers:\n\n```typescript\n// definitions/features/rooms/rooms.ts\n\nexport const rooms = sqliteTable('rooms', {\n id: text('id').primaryKey(),\n name: text('name').notNull(),\n description: text('description'),\n capacity: integer('capacity').notNull().default(10),\n roomType: text('room_type').notNull(),\n isActive: integer('is_active', { mode: 'boolean' }).notNull().default(true),\n deactivationReason: text('deactivation_reason'),\n\n // Ownership - required for firewall data isolation\n organizationId: text('organization_id').notNull(),\n});\n\nexport default defineTable(rooms, {\n // 1. FIREWALL - Data isolation\n firewall: { organization: {} },\n\n // 2. GUARDS - Field modification rules\n guards: {\n createable: [\"name\", \"description\", \"capacity\", \"roomType\"],\n updatable: [\"name\", \"description\", \"capacity\"],\n protected: {\n isActive: [\"activate\", \"deactivate\"], // Only via actions\n },\n },\n\n // 3. CRUD - Role-based access control\n crud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] }, pageSize: 25 },\n get: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\"] } },\n update: { access: { roles: [\"owner\", \"admin\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] }, mode: \"soft\" },\n },\n});\n\nexport type Room = typeof rooms.$inferSelect;\n```\n\n## What Each Layer Does\n\n1. **Firewall**: Automatically adds `WHERE organizationId = ?` to every query. Users in Org A can never see Org B's data.\n2. **Guards**: Controls which fields can be modified — `createable` for POST, `updatable` for PATCH, `protected` for action-only fields.\n3. **CRUD Access**: Role-based access control for each operation. 
Members can read, only admins can write.\n\n## Compile and Run\n\n```bash\n# Log in (first time only)\nquickback login\n\n# Compile your definitions\nquickback compile\n\n# Run locally\nnpm run dev\n```\n\n## Generated Endpoints\n\nQuickback generates these endpoints from the example above:\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/v1/rooms` | List rooms (owners, admins, members) |\n| `GET` | `/api/v1/rooms/:id` | Get single room |\n| `POST` | `/api/v1/rooms` | Create room (admins only) |\n| `PATCH` | `/api/v1/rooms/:id` | Update room (admins only) |\n| `DELETE` | `/api/v1/rooms/:id` | Soft delete room (admins only) |\n\n## Next Steps\n\n- [Template Walkthroughs](/compiler/getting-started/templates) — Detailed setup guides\n- [Full Example](/compiler/getting-started/full-example) — Complete resource walkthrough\n- [Database Schema](/compiler/definitions/schema) — Column types, relations, audit fields\n- [Firewall](/compiler/definitions/firewall) — Data isolation patterns\n- [Access](/compiler/definitions/access) — Role & condition-based control\n- [Guards](/compiler/definitions/guards) — Field modification rules\n- [Masking](/compiler/definitions/masking) — Field redaction for sensitive data\n- [Actions](/compiler/definitions/actions) — Custom business logic endpoints"
159
+ },
160
+ "compiler/getting-started/template-bun": {
161
+ "title": "Bun Template",
162
+ "content": "The Bun template creates a backend running on Bun with a local SQLite database and Better Auth. No cloud account needed — ideal for local development and prototyping.\n\n## Create the Project\n\n```bash\nquickback create bun my-app\ncd my-app\n```\n\nAliases: `quickback create local my-app`\n\nThis scaffolds a project with:\n\n```\nmy-app/\n├── quickback/\n│ ├── quickback.config.ts # Compiler configuration\n│ └── features/\n│ └── todos/\n│ ├── todos.ts # Schema + security (defineTable)\n│ └── actions.ts # Custom actions\n├── src/ # Compiled output (generated)\n├── drizzle/ # Migrations (generated)\n├── data/ # SQLite database files\n├── package.json\n├── tsconfig.json\n└── drizzle.config.ts\n```\n\n## Generated Configuration\n\n### quickback.config.ts\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n template: \"hono\",\n features: [\"organizations\"],\n providers: {\n runtime: { name: \"bun\", config: {} },\n database: {\n name: \"better-sqlite3\",\n config: { path: \"./data/app.db\" },\n },\n auth: { name: \"better-auth\", config: {} },\n },\n});\n```\n\nThe Bun template includes the same `todos` feature as the [Cloudflare template](/compiler/getting-started/template-cloudflare) with all four security pillars configured.\n\n## Setup Steps\n\n### 1. Install Dependencies\n\n```bash\nbun install\n```\n\n### 2. Log In to the Compiler\n\n```bash\nquickback login\n```\n\n### 3. Compile Your Definitions\n\n```bash\nquickback compile\n```\n\n### 4. Run Migrations\n\n```bash\nnpm run db:migrate\n```\n\nThis creates the SQLite database in `data/app.db` and applies all migrations.\n\n### 5. Create Environment File\n\nCreate a `.env` file in your project root:\n\n```bash\n# Runtime\nNODE_ENV=development\nPORT=3000\n\n# Auth\nBETTER_AUTH_SECRET=your-secret-key-change-in-production\nBETTER_AUTH_URL=http://localhost:3000\n```\n\n**Note:** Generate a strong random string for `BETTER_AUTH_SECRET` in production. You can use `openssl rand -hex 32`.\n\n### 6. Start Development Server\n\n```bash\nnpm run dev\n```\n\nYour API is running at `http://localhost:3000` with hot reload.\n\n## Database\n\nThe Bun template uses `better-sqlite3` with a local SQLite file stored in `data/app.db`. The generated database module automatically:\n\n- Creates the `data/` directory if it doesn't exist\n- Enables WAL mode for better concurrent performance\n- Enables foreign key constraints\n\n```\ndata/\n└── app.db # SQLite database file\n```\n\nUnlike the Cloudflare template, Bun uses a **single database** for both auth and feature tables. 
Migrations are in a single directory:\n\n```\ndrizzle/\n├── meta/\n│ ├── _journal.json\n│ └── 0000_snapshot.json\n└── 0000_initial.sql\n```\n\n## Available Scripts\n\n| Script | Command | Description |\n|--------|---------|-------------|\n| `dev` | `bun run --hot src/index.ts` | Start dev server with hot reload |\n| `start` | `bun run src/index.ts` | Start production server |\n| `build` | `bun build src/index.ts --outdir ./dist --target bun` | Build for production |\n| `db:migrate` | `npx drizzle-kit migrate` | Apply database migrations |\n| `type-check` | `tsc --noEmit` | Run TypeScript type checking |\n\n## Switching to Cloudflare\n\nWhen you're ready to deploy to production, you can switch to the Cloudflare template by updating your `quickback.config.ts`:\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n template: \"hono\",\n features: [\"organizations\"],\n providers: {\n runtime: { name: \"cloudflare\", config: {} },\n database: {\n name: \"cloudflare-d1\",\n config: { binding: \"DB\" },\n },\n auth: { name: \"better-auth\", config: {} },\n },\n});\n```\n\nThen recompile:\n\n```bash\nquickback compile\n```\n\nThe compiler regenerates the entire `src/` directory for the new runtime target. Your feature definitions stay the same.\n\n## Next Steps\n\n- [Add a new feature](/compiler/definitions/schema) — Define your own tables with security\n- [Configure access control](/compiler/definitions/access) — Set role-based permissions\n- [Add custom actions](/compiler/definitions/actions) — Business logic beyond CRUD\n- [Cloudflare template](/compiler/getting-started/template-cloudflare) — Deploy to production"
163
+ },
164
+ "compiler/getting-started/template-cloudflare": {
165
+ "title": "Cloudflare Template",
166
+ "content": "The Cloudflare template creates a production-ready backend running on Cloudflare Workers with D1 (SQLite at the edge), KV storage, and Better Auth.\n\n## Create the Project\n\n```bash\nquickback create cloudflare my-app\ncd my-app\n```\n\nAliases: `quickback create cf my-app`\n\nThis scaffolds a project with:\n\n```\nmy-app/\n├── quickback/\n│ ├── quickback.config.ts # Compiler configuration\n│ └── features/\n│ └── todos/\n│ ├── todos.ts # Schema + security (defineTable)\n│ └── actions.ts # Custom actions\n├── src/ # Compiled output (generated)\n├── drizzle/ # Migrations (generated)\n├── package.json\n├── tsconfig.json\n├── wrangler.toml\n└── drizzle.config.ts\n```\n\n## Generated Configuration\n\n### quickback.config.ts\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n template: \"hono\",\n features: [\"organizations\"],\n providers: {\n runtime: { name: \"cloudflare\", config: {} },\n database: {\n name: \"cloudflare-d1\",\n config: { binding: \"DB\" },\n },\n auth: { name: \"better-auth\", config: {} },\n },\n});\n```\n\n### Example Feature (todos.ts)\n\nThe template includes a `todos` feature with all four security pillars configured:\n\n```typescript\n\nexport const todos = sqliteTable(\"todos\", {\n id: text(\"id\").primaryKey(),\n title: text(\"title\").notNull(),\n description: text(\"description\"),\n completed: integer(\"completed\", { mode: \"boolean\" }).default(false),\n priority: text(\"priority\").default(\"medium\"),\n dueDate: integer(\"due_date\", { mode: \"timestamp\" }),\n organizationId: text(\"organization_id\").notNull(),\n ownerId: text(\"owner_id\").notNull(),\n});\n\nexport default defineTable(todos, {\n firewall: {\n organization: {},\n owner: { mode: \"optional\" },\n softDelete: {},\n },\n guards: {\n createable: [\"title\", \"description\", \"completed\", \"priority\", \"dueDate\"],\n updatable: [\"title\", \"description\", \"completed\", \"priority\", \"dueDate\"],\n immutable: [\"id\", \"organizationId\", \"ownerId\"],\n },\n masking: {\n description: {\n type: \"redact\",\n show: { roles: [\"owner\", \"admin\"] },\n },\n },\n crud: {\n list: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n get: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n create: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n update: { access: { roles: [\"owner\", \"admin\", \"member\"] } },\n delete: { access: { roles: [\"owner\", \"admin\"] }, mode: \"soft\" },\n },\n});\n```\n\n## Setup Steps\n\n### 1. Install Dependencies\n\n```bash\nnpm install\n```\n\n### 2. Log In to the Compiler\n\n```bash\nquickback login\n```\n\n### 3. Compile Your Definitions\n\n```bash\nquickback compile\n```\n\nThis sends your definitions to the Quickback compiler and generates the full `src/` directory, migrations, and configuration files.\n\n### 4. Create D1 Databases\n\nThe Cloudflare template uses **dual database mode** by default — separate databases for auth and application data.\n\n```bash\n# Create the auth database\nnpx wrangler d1 create my-app-auth\n\n# Create the features database\nnpx wrangler d1 create my-app-features\n```\n\nCopy the database IDs from the output and update your `wrangler.toml`:\n\n```toml\n[[d1_databases]]\nbinding = \"AUTH_DB\"\ndatabase_name = \"my-app-auth\"\ndatabase_id = \"paste-auth-id-here\"\n\n[[d1_databases]]\nbinding = \"DB\"\ndatabase_name = \"my-app-features\"\ndatabase_id = \"paste-features-id-here\"\n```\n\n### 5. 
Run Migrations (Local)\n\n```bash\nnpm run db:migrate:local\n```\n\nThis applies migrations to both databases locally.\n\n### 6. Start Development Server\n\n```bash\nnpm run dev\n```\n\nYour API is running at `http://localhost:8787`.\n\n## Wrangler Configuration\n\nThe compiler generates a `wrangler.toml` with all required bindings:\n\n```toml\nname = \"my-app\"\nmain = \"src/index.ts\"\ncompatibility_date = \"2025-03-01\"\ncompatibility_flags = [\"nodejs_compat\"]\n\n[observability]\nenabled = true\n\n[placement]\nmode = \"smart\"\n\n# Auth database\n[[d1_databases]]\nbinding = \"AUTH_DB\"\ndatabase_name = \"my-app-auth\"\ndatabase_id = \"your-auth-db-id\"\n\n# Features database\n[[d1_databases]]\nbinding = \"DB\"\ndatabase_name = \"my-app-features\"\ndatabase_id = \"your-features-db-id\"\n\n# Key-value storage\n[[kv_namespaces]]\nbinding = \"KV\"\nid = \"your-kv-id\"\n```\n\n### Optional Bindings\n\nDepending on your configuration, the compiler may also generate bindings for:\n\n| Binding | Type | When Generated |\n|---------|------|----------------|\n| `R2_BUCKET` | R2 Bucket | `fileStorage` configured |\n| `AI` | Workers AI | `embeddings` configured on any feature |\n| `VECTORIZE` | Vectorize Index | `embeddings` configured |\n| `EMBEDDINGS_QUEUE` | Queue | `embeddings` configured |\n| `WEBHOOKS_DB` | D1 | Webhooks enabled |\n| `WEBHOOKS_QUEUE` | Queue | Webhooks enabled |\n| `BROADCASTER` | Service | `realtime` configured |\n\n## Dual Database Mode\n\nBy default, the Cloudflare template separates auth tables from application tables:\n\n- **`AUTH_DB`** — Better Auth tables (`users`, `sessions`, `accounts`, `organizations`, `members`)\n- **`DB`** — Your feature tables (`todos`, etc.)\n\nThis keeps auth data isolated and allows independent scaling. Each database gets its own migration directory:\n\n```\ndrizzle/\n├── auth/ # Auth migrations\n│ ├── meta/\n│ └── 0000_*.sql\n└── features/ # Feature migrations\n ├── meta/\n └── 0000_*.sql\n```\n\n## Deploying to Production\n\n### 1. Apply Remote Migrations\n\n```bash\nnpm run db:migrate:remote\n```\n\n### 2. Deploy to Cloudflare\n\n```bash\nnpm run deploy\n```\n\nThis runs `wrangler deploy` to push your worker to Cloudflare's edge network.\n\n## Available Scripts\n\n| Script | Command | Description |\n|--------|---------|-------------|\n| `dev` | `wrangler dev` | Start local dev server |\n| `deploy` | `npm run db:migrate:remote && wrangler deploy` | Deploy to production |\n| `db:migrate:local` | Runs auth + features migrations locally | Apply migrations to local D1 |\n| `db:migrate:remote` | Runs auth + features migrations remotely | Apply migrations to production D1 |\n\n## Next Steps\n\n- [Add a new feature](/compiler/definitions/schema) — Define your own tables with security\n- [Configure access control](/compiler/definitions/access) — Set role-based permissions\n- [Add custom actions](/compiler/definitions/actions) — Business logic beyond CRUD\n- [Environment variables](/compiler/config/variables) — Configure secrets and bindings"
167
+ },
168
+ "compiler/getting-started/templates": {
169
+ "title": "Templates",
170
+ "content": "Quickback provides pre-configured templates to get you started quickly. Each template combines a runtime, database, and auth provider into a working project with an example feature.\n\n## Available Templates\n\n| Template | Runtime | Database | Auth | Command |\n|----------|---------|----------|------|---------|\n| `cloudflare` | Cloudflare Workers | D1 (SQLite) | Better Auth | `quickback create cloudflare my-app` |\n| `bun` | Bun | better-sqlite3 | Better Auth | `quickback create bun my-app` |\n| `turso` | Bun | LibSQL (Turso) | Better Auth | `quickback create turso my-app` |\n\n## What Each Template Includes\n\nEvery template scaffolds a complete project with:\n\n- **`quickback.config.ts`** — Pre-configured with the right providers\n- **`definitions/features/`** — Example `todos` feature with full security configuration\n- **Database migrations** — Ready to apply\n- **Deployment scripts** — `npm run deploy` for Cloudflare, `npm start` for Bun\n\n## Choosing a Template\n\n### Cloudflare (Recommended for Production)\n\nBest for production deployments. Runs on Cloudflare's global edge network with zero cold starts. Includes D1 (SQLite at the edge), KV storage, and R2 file storage.\n\n- Free tier available (Workers free plan)\n- Global edge deployment\n- Built-in KV, R2, Queues, Vectorize\n- Dual database mode (separate auth and features DBs)\n\n### Bun (Best for Local Development)\n\nBest for local development and prototyping. Runs on Bun with a local SQLite file. No cloud account needed.\n\n- No cloud setup required\n- Fast local iteration\n- SQLite file stored in `data/` directory\n- Easy to switch to Cloudflare later\n\n### Turso (Best for Multi-Region)\n\nBest when you need SQLite with multi-region replication. Uses LibSQL via Turso's managed service.\n\n- Multi-region database replication\n- SQLite compatibility\n- Managed backups and branching\n\n## After Creating a Project\n\n```bash\n# 1. Create the project\nquickback create cloudflare my-app\ncd my-app\n\n# 2. Log in to the compiler\nquickback login\n\n# 3. Compile your definitions\nquickback compile\n\n# 4. Run locally\nnpm run dev\n```\n\nSee the individual template guides for detailed setup:\n- [Cloudflare Template](/compiler/getting-started/template-cloudflare)\n- [Bun Template](/compiler/getting-started/template-bun)"
171
+ },
172
+ "compiler": {
173
+ "title": "Quickback Compiler",
174
+ "content": "The Quickback compiler transforms your declarative resource definitions into a complete, production-ready API. It analyzes your TypeScript definitions at build time, generates optimized code, validates your security configuration, and creates database migrations.\n\n## What the Compiler Does\n\nWhen you run `quickback compile`, the compiler:\n\n1. **Reads your definitions** - Analyzes all `defineTable()` configurations in your `definitions/` folder\n2. **Validates security** - Checks that firewall, guards, access, and masking are properly configured\n3. **Generates API routes** - Creates REST endpoints for each resource (GET, POST, PATCH, DELETE, plus batch operations)\n4. **Generates actions** - Creates custom endpoints from your `defineActions()` definitions\n5. **Creates middleware** - Generates authentication, authorization, and data validation logic\n6. **Generates TypeScript types** - Creates type-safe interfaces for your API\n7. **Generates migrations** - Automatically runs `drizzle-kit generate` to create database migration files\n\n**Input:**\n```\nquickback/\n├── quickback.config.ts # Compiler configuration\n└── definitions/\n └── features/\n └── rooms/\n ├── rooms.ts # defineTable(...)\n └── actions.ts # defineActions(...)\n```\n\n**Output:**\n```\nsrc/\n├── routes/\n│ └── rooms.ts # Generated API handlers\n├── middleware/\n│ ├── auth.ts # Authentication logic\n│ └── firewall.ts # Data isolation queries\n└── types/\n └── rooms.ts # TypeScript interfaces\n```\n\n## Basic Usage\n\n### Compile Your Project\n\n```bash\nquickback compile\n```\n\nThis command:\n- Reads your `quickback.config.ts`\n- Analyzes all resource definitions\n- Generates the complete API codebase\n- Reports any configuration errors\n\n### After Compilation\n\nOnce compilation succeeds, you need to:\n\n1. **Apply migrations:**\n\n Run the database migration for your provider. The generated `package.json` includes the appropriate migration script for your setup.\n\n > **Note:** The compiler automatically runs `drizzle-kit generate` during compilation, so you only need to apply the migrations.\n\n2. **Deploy your API:**\n ```bash\n npm run deploy # Cloudflare Workers\n # or\n npm start # Local development\n ```\n\n## The Compilation Process\n\n### 1. Configuration Loading\n\nThe compiler reads your `quickback.config.ts`:\n\n```typescript\n\nexport default defineConfig({\n name: 'my-app',\n template: 'hono',\n features: ['organizations'],\n providers: {\n runtime: defineRuntime('cloudflare'),\n database: defineDatabase('cloudflare-d1'),\n auth: defineAuth('better-auth'),\n },\n});\n```\n\nThis determines:\n- Which application template to use (`hono` or experimental `nextjs`)\n- Which database adapter to use (D1, SQLite, Turso, etc.)\n- Which authentication system to integrate\n- Which runtime to target (Cloudflare Workers, Bun, Node.js)\n- Which auth plugins to enable (based on `features`)\n\n### 2. Resource Discovery\n\nThe compiler scans `definitions/features/` and identifies:\n\n**Resources (with routes):**\n```typescript\n// definitions/features/rooms/rooms.ts\nexport default defineTable(rooms, { ... 
});\n// → Generates: GET/POST/PATCH/DELETE /api/v1/rooms\n// → Also generates: POST/PATCH/DELETE/PUT /api/v1/rooms/batch (auto-enabled)\n```\n\n**Internal tables (no routes):**\n```typescript\n// definitions/features/rooms/room-bookings.ts\nexport const roomBookings = sqliteTable(...);\n// NO default export → No routes generated\n```\n\n**Actions:**\n```typescript\n// definitions/features/rooms/actions.ts\nexport default defineActions(rooms, {\n book: { ... },\n cancel: { ... }\n});\n// → Generates: POST /api/v1/rooms/:id/book, POST /api/v1/rooms/:id/cancel\n```\n\n### 3. Validation\n\nThe compiler validates your configuration at build time, catching errors before deployment. See [Definitions](/compiler/definitions) for details on what's validated.\n\n### 4. Code Generation\n\nThe compiler generates different files based on your provider:\n\n#### Cloudflare Workers (Hono)\n\n```typescript\n// Generated: src/routes/rooms.ts\n\nconst app = new Hono();\n\napp.get('/rooms', async (c) => {\n const ctx = c.get('ctx');\n let query = db.select().from(rooms);\n query = applyFirewall(query, ctx, 'rooms');\n await checkAccess(ctx, 'rooms', 'list');\n const results = await query;\n return c.json(results);\n});\n```\n\n### 5. Type Generation\n\nThe compiler generates TypeScript types for your API:\n\n```typescript\n// Generated: src/types/rooms.ts\nexport type Room = typeof rooms.$inferSelect;\nexport type RoomInsert = typeof rooms.$inferInsert;\n```\n\n## Build-Time vs Runtime\n\n### Build Time (Compilation)\n\nThe compiler validates security configuration, checks for schema/firewall mismatches, generates API route handlers, creates TypeScript types, and prepares migration setup.\n\n### Runtime (API Requests)\n\nWhen your API receives a request: Authentication → Firewall → Access → Guards → Masking → Response. The compiled code handles all of this automatically.\n\n## Development Workflow\n\n1. **Define resources** in `definitions/features/`\n2. **Compile** with `quickback compile`\n3. **Review migrations** in `drizzle/`\n4. **Apply migrations** for your provider\n5. **Test locally** with `npm run dev`\n6. **Deploy** with `npm run deploy`\n\n## Next Steps\n\n- [Getting Started](/compiler/getting-started) — Create your first project\n- [Definitions](/compiler/definitions) — Schema, security layers, and actions\n- [Using the API](/compiler/using-the-api) — CRUD endpoints and filtering\n- [Cloud Compiler](/compiler/cloud-compiler) — CLI and authentication"
175
+ },
176
+ "compiler/integrations/cloudflare": {
177
+ "title": "Cloudflare Workers",
178
+ "content": "The Cloudflare target generates a complete Hono-based API running on Cloudflare Workers with D1 as the database.\n\n## Configuration\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n providers: {\n runtime: defineRuntime(\"cloudflare\"),\n database: defineDatabase(\"cloudflare-d1\"),\n auth: defineAuth(\"better-auth\"),\n },\n});\n```\n\n## Generated Output\n\n```\nsrc/\n├── routes/ # Hono route handlers\n├── middleware/ # Auth, firewall, access middleware\n├── types/ # TypeScript interfaces\n├── db/\n│ └── schema.ts # Drizzle schema\n└── index.ts # Hono app entry point\n\ndrizzle/\n├── migrations/ # SQL migration files\n└── meta/ # Drizzle metadata\n```\n\n## Security Model\n\nAll four security layers run at the application level:\n\n1. **Firewall** — Drizzle WHERE clauses for data isolation\n2. **Access** — Role checks in middleware\n3. **Guards** — Field filtering in request handlers\n4. **Masking** — Response transformation before sending\n\n## Features\n\n- Full CRUD with batch operations\n- Custom actions with inline or handler-based execution\n- Soft delete support\n- Pagination, filtering, and sorting\n- Views (column-level projections)\n- OpenAPI specification generation\n- TypeScript client SDK generation\n\n## Deployment\n\n```bash\n# Development\nnpm run dev\n\n# Production\nnpm run deploy\n# or\nwrangler deploy\n```"
179
+ },
180
+ "compiler/integrations": {
181
+ "title": "Compile Targets",
182
+ "content": "The Quickback compiler generates different output based on your chosen providers. Each target includes optimized code for the specific runtime and database.\n\n## Available Targets\n\n| Target | Runtime | Database | Auth |\n|--------|---------|----------|------|\n| [Cloudflare](/compiler/integrations/cloudflare) | Cloudflare Workers | D1 (SQLite) | Better Auth |\n| [Neon](/compiler/integrations/neon) | Cloudflare Workers / Node.js | Neon (PostgreSQL) | Better Auth + Neon Authorize |\n| [Supabase](/compiler/integrations/supabase) | Supabase Edge Functions | Supabase (PostgreSQL) | Supabase Auth |\n\n## How Targets Differ\n\n### Cloudflare (Recommended)\n\n- **Runtime**: Hono on Cloudflare Workers\n- **Security**: Application-level firewall, access, guards, masking\n- **Database**: D1 with Drizzle ORM\n- **Best for**: Full-stack applications, edge deployment\n\n### Neon\n\n- **Runtime**: Hono on Cloudflare Workers or Node.js\n- **Security**: PostgreSQL Row Level Security (RLS) + application-level guards\n- **Database**: Neon PostgreSQL with Drizzle ORM\n- **Best for**: Applications needing full PostgreSQL capabilities\n\n### Supabase\n\n- **Runtime**: Supabase Edge Functions\n- **Security**: PostgreSQL Row Level Security (RLS) only\n- **Database**: Supabase PostgreSQL\n- **Best for**: Existing Supabase projects\n\n## Configuration\n\nSet your target in `quickback.config.ts`:\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n providers: {\n runtime: defineRuntime(\"cloudflare\"), // or \"supabase\"\n database: defineDatabase(\"cloudflare-d1\"), // or \"neon\", \"supabase\"\n auth: defineAuth(\"better-auth\"), // or \"supabase-auth\"\n },\n});\n```"
183
+ },
184
+ "compiler/integrations/neon": {
185
+ "title": "Neon",
186
+ "content": "Quickback supports Neon as a PostgreSQL database provider, generating schemas with Row Level Security (RLS) policies that enforce your firewall configuration at the database level.\n\n## Why Neon?\n\n- **Full PostgreSQL** — Advanced queries, joins, PostGIS, full-text search\n- **Database-Level Security** — RLS policies enforce access even if API is bypassed\n- **Serverless Architecture** — HTTP connection mode for Cloudflare Workers\n- **Neon Authorize** — JWT-based RLS with `auth.user_id()` function\n- **Better Auth Integration** — Works seamlessly with Better Auth\n\n## Configuration\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n providers: {\n runtime: defineRuntime(\"cloudflare\"),\n database: defineDatabase(\"neon\", {\n connectionMode: 'auto', // auto-detects based on runtime\n pooled: true,\n }),\n auth: defineAuth(\"better-auth\"),\n },\n});\n```\n\n### Environment Variables\n\n```bash\nDATABASE_URL=postgresql://user:pwd@ep-xxx.neon.tech/db?sslmode=require\n```\n\n## Row Level Security\n\nQuickback generates RLS policies from your firewall config:\n\n```typescript\nfirewall: {\n organization: {},\n owner: { mode: 'optional' }\n}\n```\n\n```sql\n-- Generated policy (uses auth.user_id() for Neon)\nCREATE POLICY \"documents_select\" ON documents FOR SELECT\nUSING (\n organization_id = get_active_org_id()\n AND (has_any_role(ARRAY['admin']) OR user_id = auth.user_id())\n);\n```\n\n### Firewall Patterns\n\n**Organization-Scoped:**\n```sql\nCREATE POLICY \"projects_select\" ON projects FOR SELECT\nUSING (organization_id = public.get_active_org_id());\n```\n\n**User-Scoped:**\n```sql\nCREATE POLICY \"preferences_select\" ON preferences FOR SELECT\nUSING (user_id = auth.user_id());\n```\n\n**Public Tables:**\n```sql\nCREATE POLICY \"categories_deny_anon\" ON categories FOR ALL TO anon USING (false);\nCREATE POLICY \"categories_all\" ON categories FOR ALL TO authenticated USING (true) WITH CHECK (true);\n```\n\n## Generated Helper Functions\n\n| Function | Purpose |\n|----------|---------|\n| `get_active_org_id()` | Returns user's active organization from `user_sessions` |\n| `has_any_role(roles[])` | Checks if user has any specified role |\n| `has_org_role(role)` | Checks for a specific role |\n| `is_org_member()` | Checks org membership |\n| `is_owner(owner_id)` | Checks record ownership |\n\n## Generated Files\n\n```\ndrizzle/\n├── migrations/\n│ ├── 0100_create_rls_helpers.sql\n│ ├── 0101_create_rls_policies.sql\n│ └── 0102_create_indexes.sql\nsrc/\n├── db/\n│ └── schema.ts\n├── auth/\n│ └── schema.ts\n└── lib/\n └── neon.ts\n```\n\n## Setting Up Neon Authorize\n\n1. Go to **Project Settings > Authorize** in Neon console\n2. Add your JWKS URL: `https://your-api.com/auth/v1/.well-known/jwks.json`\n3. The `auth.user_id()` function will return the authenticated user's ID from JWT\n\n## Getting Started\n\n1. Create a Neon project at [neon.tech](https://neon.tech)\n2. Configure your Quickback project for Neon\n3. Run `quickback compile`\n4. Apply migrations with `drizzle-kit migrate`"
187
+ },
188
+ "compiler/integrations/supabase": {
189
+ "title": "Supabase",
190
+ "content": "Quickback supports Supabase as a deployment target, generating PostgreSQL schemas with Row Level Security (RLS) policies that enforce your firewall configuration at the database level.\n\n## Why Supabase?\n\n- **Full PostgreSQL** — Advanced queries, joins, PostGIS, full-text search\n- **Database-Level Security** — RLS policies enforce access even if API is bypassed\n- **Integrated Auth** — Supabase Auth with JWT claims for RLS\n- **Real-time** — Postgres Changes for live updates\n- **Managed Infrastructure** — Automatic backups, scaling, monitoring\n\n## Configuration\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n providers: {\n runtime: defineRuntime(\"supabase\"),\n database: defineDatabase(\"supabase\"),\n auth: defineAuth(\"supabase-auth\"),\n },\n});\n```\n\n## Key Difference from Neon\n\n| Feature | Supabase | Neon |\n|---------|----------|------|\n| Auth function | `auth.uid()` | `auth.user_id()` |\n| Auth provider | Supabase Auth | Neon Authorize + Better Auth |\n| Users table | `auth.users` | `public.users` (Better Auth) |\n\n## Row Level Security\n\nQuickback generates RLS policies from your firewall config:\n\n```typescript\nfirewall: {\n organization: {},\n owner: { mode: 'optional' }\n}\n```\n\n```sql\n-- Generated policy\nCREATE POLICY \"documents_select\" ON documents FOR SELECT\nUSING (\n organization_id = get_active_org_id()\n AND (has_any_role(ARRAY['admin']) OR user_id = auth.uid())\n);\n```\n\n### Firewall Patterns\n\n**Organization-Scoped:**\n```sql\nCREATE POLICY \"projects_select\" ON projects FOR SELECT\nUSING (organization_id = public.get_active_org_id());\n```\n\n**User-Scoped:**\n```sql\nCREATE POLICY \"preferences_select\" ON preferences FOR SELECT\nUSING (user_id = auth.uid());\n```\n\n## Defense in Depth\n\nAll generated policies include a deny policy for anonymous users:\n\n```sql\nCREATE POLICY \"tablename_deny_anon\" ON tablename FOR ALL\nTO anon\nUSING (false);\n```\n\n## Generated Helper Functions\n\n| Function | Purpose |\n|----------|---------|\n| `get_active_org_id()` | Returns user's active organization from `user_sessions` |\n| `has_any_role(roles[])` | Checks if user has any specified role |\n| `has_org_role(role)` | Checks for a specific role |\n| `is_org_member()` | Checks org membership |\n| `get_user_role()` | Returns user's role in active org |\n\n## Generated Files\n\n```\nsupabase/\n├── migrations/\n│ ├── 0100_create_rls_helpers.sql\n│ ├── 0101_create_rls_policies.sql\n│ └── 0102_create_audit_triggers.sql\nsrc/\n├── db/\n│ └── schema.ts\n└── lib/\n └── supabase.ts\n```\n\n## Getting Started\n\n1. Install the Supabase CLI\n2. Configure your Quickback project for Supabase\n3. Run `quickback compile`\n4. Push migrations with `supabase db push`"
191
+ },
192
+ "compiler/using-the-api/actions-api": {
193
+ "title": "Actions API",
194
+ "content": "Actions provide custom endpoints beyond CRUD operations. They support input validation, access conditions, and multiple response types including streaming.\n\n## Record-Based Actions\n\nRecord-based actions operate on a specific record identified by ID.\n\n```\nPOST /api/v1/{resource}/:id/{action-name}\n```\n\n### Request\n\n```bash\n# With JSON body\ncurl -X POST /api/v1/orders/ord_123/refund \\\n -H \"Authorization: Bearer <token>\" \\\n -H \"Content-Type: application/json\" \\\n -d '{ \"reason\": \"Customer request\", \"amount\": 50.00 }'\n```\n\nThe action's input schema validates the request body using Zod. Invalid input returns a 400 error.\n\n### Response\n\n```json\n{\n \"success\": true,\n \"data\": {\n \"refundId\": \"ref_456\",\n \"amount\": 50.00,\n \"status\": \"processed\"\n }\n}\n```\n\n### Security Flow\n\n1. **Authentication** — User must be logged in (401 if not)\n2. **Firewall** — Record must exist and be accessible to the user (404 if not found or not owned)\n3. **Access** — User must have the required role (403 if insufficient)\n4. **Input validation** — Request body validated against the action's Zod schema (400 if invalid)\n5. **Access conditions** — Record must match access conditions (400 if not met)\n6. **Masking** — Applied to the response data\n\n### Access Conditions\n\nActions can require the record to be in a specific state:\n\n```typescript\n// In actions.ts\ncomplete: {\n access: {\n roles: [\"owner\", \"admin\", \"member\"],\n record: { completed: { equals: false } },\n },\n}\n```\n\nIf the record doesn't match (`completed` is already `true`), the action returns 400.\n\n## Standalone Actions\n\nStandalone actions don't require a record ID. They're for operations that aren't tied to a specific record.\n\n```\nPOST /api/v1/{resource}/{action-name}\n```\n\n### Request\n\n```bash\ncurl -X POST /api/v1/reports/generate-summary \\\n -H \"Authorization: Bearer <token>\" \\\n -H \"Content-Type: application/json\" \\\n -d '{ \"startDate\": \"2025-01-01\", \"endDate\": \"2025-01-31\" }'\n```\n\n### Response\n\nSame format as record-based actions. The difference is no record lookup or firewall check.\n\n### Security Flow\n\n1. **Authentication** — Required (401)\n2. **Organization check** — For org-scoped resources, user must have an active organization (403)\n3. **Access** — Role-based check (403)\n4. **Scoped DB** — The `db` passed to the handler is auto-scoped based on table columns (org isolation, owner filtering, soft delete)\n5. **Input validation** — Zod schema validation (400)\n\n### Scoped Database\n\nAll actions (both standalone and record-based) receive a **scoped `db`** instance that automatically enforces security based on the table's columns:\n\n- Tables with `organizationId` — org isolation (`WHERE organizationId = ?`)\n- Tables with `ownerId` — owner isolation (`WHERE ownerId = ?`)\n- Tables with `deletedAt` — soft delete filter (`WHERE deletedAt IS NULL`)\n- `INSERT` operations auto-inject `organizationId` and `ownerId` from context\n\n```typescript\n// Your handler receives a scoped db — no manual filtering needed\nexecute: async ({ db, ctx, input }) => {\n // This query automatically includes WHERE organizationId = ? AND ownerId = ? 
AND deletedAt IS NULL\n const items = await db.select().from(claims).where(eq(claims.status, 'active'));\n\n // Inserts automatically include organizationId and ownerId\n await db.insert(claims).values({ title: input.title });\n\n return items;\n}\n```\n\nIf you need unscoped access (e.g., cross-org admin operations), declare `unsafe: true` on the action:\n\n```typescript\nadminReport: {\n unsafe: true,\n execute: async ({ db, rawDb, ctx, input }) => {\n // rawDb bypasses all security — only available with unsafe: true\n const allOrgs = await rawDb.select().from(invoices);\n return allOrgs;\n },\n}\n```\n\nWithout `unsafe: true`, `rawDb` is `undefined`.\n\n## Input Validation\n\nActions use Zod schemas for input validation. The schema is defined in your `actions.ts` file:\n\n```typescript\n\nexport default defineActions(\"orders\", {\n refund: {\n type: \"record\",\n input: z.object({\n reason: z.string().min(1),\n amount: z.number().positive(),\n }),\n access: {\n roles: [\"admin\"],\n record: { status: { equals: \"completed\" } },\n },\n },\n});\n```\n\nWhen validation fails:\n\n```json\n{\n \"error\": \"Validation failed\",\n \"layer\": \"guards\",\n \"code\": \"GUARD_VALIDATION_FAILED\",\n \"details\": {\n \"issues\": [\n { \"path\": [\"amount\"], \"message\": \"Number must be greater than 0\" }\n ]\n }\n}\n```\n\n## Response Types\n\nActions support three response types:\n\n### JSON (Default)\n\nReturns a JSON object wrapped in `{ success: true, data: ... }`. Masking is applied to the data.\n\n### Streaming\n\nFor long-running operations, actions can return a streaming response:\n\n```typescript\ngenerateReport: {\n type: \"standalone\",\n responseType: \"stream\",\n // ...\n}\n```\n\nThe action handler returns a `ReadableStream` directly — no JSON wrapper.\n\n### File\n\nFor file downloads, actions can return a raw `Response` object:\n\n```typescript\nexportCsv: {\n type: \"record\",\n responseType: \"file\",\n // ...\n}\n```\n\n## Action Handler\n\nThe compiler generates a handler skeleton. Your action receives:\n\n```typescript\n{\n db, // Scoped database (auto-enforces org/owner/soft-delete filters)\n rawDb, // Raw database (only available when unsafe: true, otherwise undefined)\n ctx, // Auth context (user, session, org)\n record, // The existing record (record-based only, undefined for standalone)\n input, // Validated input from Zod schema\n services, // Injected services\n c, // Hono context\n auditFields, // { createdAt, modifiedAt } timestamps\n}\n```\n\n## HTTP Methods\n\nActions default to `POST` but can use any HTTP method:\n\n```typescript\ngetStatus: {\n type: \"record\",\n method: \"GET\",\n // GET actions receive input from query parameters instead of body\n}\n```\n\nWhen using `GET`, input is parsed from query parameters instead of the request body.\n\n## Cascading Soft Delete\n\nWhen soft-deleting a parent record, Quickback automatically soft-deletes related records in child/junction tables **within the same feature**. 
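The cascade is driven by the foreign key declared in your Drizzle schema. A minimal sketch of such a parent/child pair (illustrative columns; both tables carry the soft-delete fields the generated UPDATEs touch):\n\n```typescript\nimport { sqliteTable, text, integer } from \"drizzle-orm/sqlite-core\";\n\nexport const projects = sqliteTable(\"projects\", {\n  id: text(\"id\").primaryKey(),\n  name: text(\"name\").notNull(),\n  organizationId: text(\"organization_id\").notNull(),\n  deletedAt: integer(\"deleted_at\", { mode: \"timestamp\" }),\n  modifiedAt: integer(\"modified_at\", { mode: \"timestamp\" }),\n});\n\nexport const projectMembers = sqliteTable(\"project_members\", {\n  id: text(\"id\").primaryKey(),\n  // This reference is what links the child to its parent for the cascade\n  projectId: text(\"project_id\").notNull().references(() => projects.id),\n  userId: text(\"user_id\").notNull(),\n  deletedAt: integer(\"deleted_at\", { mode: \"timestamp\" }),\n  modifiedAt: integer(\"modified_at\", { mode: \"timestamp\" }),\n});\n```\n\n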
The compiler detects foreign key references at build time.\n\nFor example, if your feature has `projects` and `projectMembers` tables where `projectMembers` has a `.references(() => projects.id)`, deleting a project will also soft-delete all its members:\n\n```\nDELETE /api/v1/projects/proj_123\n```\n\nGenerated code:\n```typescript\n// Soft delete parent\nawait db.update(projects).set({ deletedAt: now, modifiedAt: now })\n .where(and(buildFirewallConditions(ctx), eq(projects.id, id)));\n\n// Cascade soft delete to child tables\nawait db.update(projectMembers).set({ deletedAt: now, modifiedAt: now })\n .where(eq(projectMembers.projectId, id));\n```\n\n**Cascade rules:**\n- **Soft delete (default)**: Application-level cascade via generated UPDATE statements\n- **Hard delete (`mode: 'hard'`)**: No compiler cascade — relies on DB-level `ON DELETE CASCADE`\n- **Cross-feature**: No cascade — only within the same feature's tables\n\n## See Also\n\n- [Defining Actions](/compiler/definitions/actions) — How to define actions in your feature\n- [Guards](/compiler/definitions/guards) — Guard conditions reference\n- [Error Responses](/compiler/using-the-api/errors) — Error format reference"
195
+ },
196
+ "compiler/using-the-api/batch-operations": {
197
+ "title": "Batch Operations",
198
+ "content": "Batch operations let you create, update, or delete multiple records in a single request. They are **auto-enabled** when the corresponding CRUD operation exists in your definition.\n\n## Available Endpoints\n\n| Endpoint | Method | Auto-enabled When |\n|----------|--------|-------------------|\n| `/{resource}/batch` | `POST` | `crud.create` exists |\n| `/{resource}/batch` | `PATCH` | `crud.update` exists |\n| `/{resource}/batch` | `DELETE` | `crud.delete` exists |\n| `/{resource}/batch` | `PUT` | `crud.put` exists |\n\nTo disable a batch operation, set it to `false` in your definition:\n\n```typescript\ncrud: {\n create: { access: { roles: ['admin'] } },\n batchCreate: false, // Disable batch create\n}\n```\n\n## Batch Create\n\n```\nPOST /api/v1/{resource}/batch\n```\n\n### Request\n\n```json\n{\n \"records\": [\n { \"name\": \"Room A\", \"capacity\": 10 },\n { \"name\": \"Room B\", \"capacity\": 20 },\n { \"name\": \"Room C\", \"capacity\": 30 }\n ],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\n### Response (201 or 207)\n\n```json\n{\n \"success\": [\n { \"id\": \"rm_abc1\", \"name\": \"Room A\", \"capacity\": 10, \"createdAt\": \"2025-01-15T10:00:00Z\" },\n { \"id\": \"rm_abc3\", \"name\": \"Room C\", \"capacity\": 30, \"createdAt\": \"2025-01-15T10:00:00Z\" }\n ],\n \"errors\": [\n {\n \"index\": 1,\n \"record\": { \"name\": \"Room B\", \"capacity\": 20 },\n \"error\": {\n \"error\": \"Database insert failed\",\n \"layer\": \"validation\",\n \"code\": \"INSERT_FAILED\",\n \"details\": { \"reason\": \"UNIQUE constraint failed: rooms.name\" }\n }\n }\n ],\n \"meta\": {\n \"total\": 3,\n \"succeeded\": 2,\n \"failed\": 1,\n \"atomic\": false\n }\n}\n```\n\n- **201** — All records created successfully\n- **207** — Partial success (some errors, some successes)\n\nFields applied automatically to each record:\n- ID generation (UUID, prefixed, etc.)\n- Ownership fields (`organizationId`, `ownerId`, `createdBy`)\n- Audit fields (`createdAt`, `modifiedAt`)\n- Default values and computed fields\n\n## Batch Update\n\n```\nPATCH /api/v1/{resource}/batch\n```\n\nEvery record **must include an `id` field**.\n\n### Request\n\n```json\n{\n \"records\": [\n { \"id\": \"rm_abc1\", \"capacity\": 15 },\n { \"id\": \"rm_abc2\", \"name\": \"Updated Room\" },\n { \"id\": \"rm_xyz9\", \"capacity\": 50 }\n ],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\n### Response (200 or 207)\n\n```json\n{\n \"success\": [\n { \"id\": \"rm_abc1\", \"name\": \"Room A\", \"capacity\": 15, \"modifiedAt\": \"2025-01-15T11:00:00Z\" },\n { \"id\": \"rm_abc2\", \"name\": \"Updated Room\", \"capacity\": 20, \"modifiedAt\": \"2025-01-15T11:00:00Z\" }\n ],\n \"errors\": [\n {\n \"index\": 2,\n \"record\": { \"id\": \"rm_xyz9\", \"capacity\": 50 },\n \"error\": {\n \"error\": \"Not found\",\n \"layer\": \"firewall\",\n \"code\": \"NOT_FOUND\",\n \"details\": { \"id\": \"rm_xyz9\" }\n }\n }\n ],\n \"meta\": {\n \"total\": 3,\n \"succeeded\": 2,\n \"failed\": 1,\n \"atomic\": false\n }\n}\n```\n\nRecords that don't exist or aren't accessible through the firewall return a `NOT_FOUND` error. 
Guard rules (immutable, protected, not-updatable fields) are checked per record.\n\n### Missing IDs\n\nIf any records are missing the `id` field, the entire request is rejected:\n\n```json\n{\n \"error\": \"Records missing required ID field\",\n \"layer\": \"validation\",\n \"code\": \"BATCH_MISSING_IDS\",\n \"details\": { \"indices\": [0, 2] },\n \"hint\": \"All records must include an ID field for batch update.\"\n}\n```\n\n## Batch Delete\n\n```\nDELETE /api/v1/{resource}/batch\n```\n\n### Request\n\n```json\n{\n \"ids\": [\"rm_abc1\", \"rm_abc2\", \"rm_abc3\"],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\nNote: Batch delete uses an `ids` array (not `records`).\n\n### Response (200 or 207)\n\n**Soft delete** (default):\n\n```json\n{\n \"success\": [\n { \"id\": \"rm_abc1\", \"name\": \"Room A\", \"deletedAt\": \"2025-01-15T12:00:00Z\" },\n { \"id\": \"rm_abc2\", \"name\": \"Room B\", \"deletedAt\": \"2025-01-15T12:00:00Z\" }\n ],\n \"errors\": [],\n \"meta\": {\n \"total\": 2,\n \"succeeded\": 2,\n \"failed\": 0,\n \"atomic\": false\n }\n}\n```\n\n**Hard delete**: Returns objects with only the `id` field (record data is deleted).\n\n## Batch Upsert\n\n```\nPUT /api/v1/{resource}/batch\n```\n\nInserts new records or updates existing ones. Every record must include an `id` field.\n\n**Note:** Batch upsert is only available when `generateId` is set to `false` (user-provided IDs) in your configuration.\n\n### Request\n\n```json\n{\n \"records\": [\n { \"id\": \"rm_001\", \"name\": \"Room A\", \"capacity\": 10 },\n { \"id\": \"rm_002\", \"name\": \"Updated Room B\", \"capacity\": 25 }\n ],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\n### Response (201 or 207)\n\nThe compiler checks which IDs already exist and splits the batch into creates and updates. Create records get ownership and default fields; update records get only `modifiedAt`.\n\n## Atomic Mode\n\nBy default, batch operations use **partial success** mode — each record is processed independently, and failures don't affect other records.\n\nSet `\"atomic\": true` to require all-or-nothing processing:\n\n```json\n{\n \"records\": [...],\n \"options\": {\n \"atomic\": true\n }\n}\n```\n\n### Atomic Failure Response (400)\n\nIf any record fails in atomic mode, the entire batch is rejected:\n\n```json\n{\n \"error\": \"Batch operation failed in atomic mode\",\n \"layer\": \"validation\",\n \"code\": \"BATCH_ATOMIC_FAILED\",\n \"details\": {\n \"failedAt\": 1,\n \"reason\": \"Database insert failed\",\n \"errorDetails\": { \"reason\": \"UNIQUE constraint failed\" }\n },\n \"hint\": \"Transaction rolled back. Fix the error and retry the entire batch.\"\n}\n```\n\n## Batch Size Limit\n\nThe default maximum batch size is **100 records**. Requests exceeding the limit are rejected:\n\n```json\n{\n \"error\": \"Batch size limit exceeded\",\n \"layer\": \"validation\",\n \"code\": \"BATCH_SIZE_EXCEEDED\",\n \"details\": { \"max\": 100, \"actual\": 250 },\n \"hint\": \"Maximum 100 records allowed per batch. Split into multiple requests.\"\n}\n```\n\n## Configuration\n\nBatch operations inherit access control from their corresponding CRUD operation. 
You can override per-batch settings:\n\n```typescript\nexport default defineTable(rooms, {\n crud: {\n create: { access: { roles: ['admin', 'member'] } },\n update: { access: { roles: ['admin', 'member'] } },\n delete: { access: { roles: ['admin'] }, mode: 'soft' },\n\n // Override batch-specific settings\n batchCreate: {\n access: { roles: ['admin'] }, // More restrictive than single create\n maxBatchSize: 50, // Default: 100\n allowAtomic: true, // Default: true\n },\n batchUpdate: {\n maxBatchSize: 100,\n allowAtomic: true,\n },\n batchDelete: {\n access: { roles: ['admin'] },\n maxBatchSize: 100,\n allowAtomic: true,\n },\n\n // Disable a batch operation entirely\n batchUpsert: false,\n },\n});\n```\n\n## Security\n\nAll security pillars apply to batch operations:\n\n1. **Authentication** — Required for all batch endpoints (401 if missing)\n2. **Firewall** — Applied per-record for update/delete/upsert (not applicable to create)\n3. **Access** — Role-based check using the batch operation's access config\n4. **Guards** — Per-record validation (createable/updatable/immutable fields)\n5. **Masking** — Applied to all records in the success array before response\n\n## See Also\n\n- [CRUD Endpoints](/compiler/using-the-api/crud) — Single-record operations\n- [Error Responses](/compiler/using-the-api/errors) — Error format reference\n- [Guards](/compiler/definitions/guards) — Field-level write protection"
199
+ },
200
+ "compiler/using-the-api/crud": {
201
+ "title": "CRUD Endpoints",
202
+ "content": "Quickback automatically generates RESTful CRUD endpoints for each resource you define. This page covers how to use these endpoints.\n\n## Endpoint Overview\n\nFor a resource named `rooms`, Quickback generates:\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/rooms` | List all records |\n| `GET` | `/rooms/:id` | Get a single record |\n| `POST` | `/rooms` | Create a new record |\n| `POST` | `/rooms/batch` | Batch create multiple records |\n| `PATCH` | `/rooms/:id` | Update a record |\n| `PATCH` | `/rooms/batch` | Batch update multiple records |\n| `DELETE` | `/rooms/:id` | Delete a record |\n| `DELETE` | `/rooms/batch` | Batch delete multiple records |\n| `PUT` | `/rooms/:id` | Upsert a record (requires config) |\n| `PUT` | `/rooms/batch` | Batch upsert multiple records (requires config) |\n\n## List Records\n\n```\nGET /rooms\n```\n\nReturns a paginated list of records the user has access to.\n\n### Query Parameters\n\n| Parameter | Description | Example |\n|-----------|-------------|---------|\n| `limit` | Number of records to return (default: 50, max: 100) | `?limit=25` |\n| `offset` | Number of records to skip | `?offset=50` |\n| `sort` | Field to sort by | `?sort=createdAt` |\n| `order` | Sort direction: `asc` or `desc` | `?order=desc` |\n\n### Filtering\n\nFilter records using query parameters:\n\n```\nGET /rooms?status=active # Exact match\nGET /rooms?capacity.gt=10 # Greater than\nGET /rooms?capacity.gte=10 # Greater than or equal\nGET /rooms?capacity.lt=50 # Less than\nGET /rooms?capacity.lte=50 # Less than or equal\nGET /rooms?status.ne=deleted # Not equal\nGET /rooms?name.like=Conference # Pattern match (LIKE %value%)\nGET /rooms?status.in=active,pending,review # IN clause\n```\n\n### Response\n\n```json\n{\n \"data\": [\n {\n \"id\": \"room_123\",\n \"name\": \"Conference Room A\",\n \"capacity\": 10,\n \"status\": \"active\",\n \"createdAt\": \"2024-01-15T10:30:00Z\"\n }\n ],\n \"meta\": {\n \"total\": 42,\n \"limit\": 50,\n \"offset\": 0\n }\n}\n```\n\n## Get Single Record\n\n```\nGET /rooms/:id\n```\n\nReturns a single record by ID.\n\n### Response\n\n```json\n{\n \"data\": {\n \"id\": \"room_123\",\n \"name\": \"Conference Room A\",\n \"capacity\": 10,\n \"status\": \"active\",\n \"createdAt\": \"2024-01-15T10:30:00Z\"\n }\n}\n```\n\n### Errors\n\n| Status | Description |\n|--------|-------------|\n| `404` | Record not found or not accessible |\n| `403` | User lacks permission to view this record |\n\n## Create Record\n\n```\nPOST /rooms\nContent-Type: application/json\n\n{\n \"name\": \"Conference Room B\",\n \"capacity\": 8,\n \"roomType\": \"meeting\"\n}\n```\n\nCreates a new record. Only fields listed in `guards.createable` are accepted.\n\n### Response\n\n```json\n{\n \"data\": {\n \"id\": \"room_456\",\n \"name\": \"Conference Room B\",\n \"capacity\": 8,\n \"roomType\": \"meeting\",\n \"createdAt\": \"2024-01-15T11:00:00Z\",\n \"createdBy\": \"user_789\"\n }\n}\n```\n\n### Errors\n\n| Status | Description |\n|--------|-------------|\n| `400` | Invalid field or missing required field |\n| `403` | User lacks permission to create records |\n\n## Update Record\n\n```\nPATCH /rooms/:id\nContent-Type: application/json\n\n{\n \"name\": \"Updated Room Name\",\n \"capacity\": 12\n}\n```\n\nUpdates an existing record. 
Only fields listed in `guards.updatable` are accepted.\n\n### Response\n\n```json\n{\n \"data\": {\n \"id\": \"room_123\",\n \"name\": \"Updated Room Name\",\n \"capacity\": 12,\n \"modifiedAt\": \"2024-01-15T12:00:00Z\",\n \"modifiedBy\": \"user_789\"\n }\n}\n```\n\n### Errors\n\n| Status | Description |\n|--------|-------------|\n| `400` | Invalid field or field not updatable |\n| `403` | User lacks permission to update this record |\n| `404` | Record not found |\n\n## Delete Record\n\n```\nDELETE /rooms/:id\n```\n\nDeletes a record. Behavior depends on the `delete.mode` configuration.\n\n### Soft Delete (default)\n\nSets `deletedAt` and `deletedBy` fields. Record remains in database but is filtered from queries.\n\n### Hard Delete\n\nPermanently removes the record from the database.\n\n### Response\n\n```json\n{\n \"data\": {\n \"id\": \"room_123\",\n \"deleted\": true\n }\n}\n```\n\n### Errors\n\n| Status | Description |\n|--------|-------------|\n| `403` | User lacks permission to delete this record |\n| `404` | Record not found |\n\n## Upsert Record (PUT)\n\n```\nPUT /rooms/:id\nContent-Type: application/json\n\n{\n \"name\": \"External Room\",\n \"capacity\": 20,\n \"externalId\": \"ext-123\"\n}\n```\n\nCreates or updates a record by ID. Requires special configuration:\n\n1. `generateId: false` in database config\n2. `guards: false` in resource definition\n\n### Behavior\n\n- If record exists: Updates all provided fields\n- If record doesn't exist: Creates with the provided ID\n\n### Use Cases\n\n- Syncing data from external systems\n- Webhook handlers with external IDs\n- Idempotent operations (safe to retry)\n\nSee [Guards documentation](/compiler/definitions/guards#putupsert-with-external-ids) for setup details.\n\n## Batch Operations\n\nQuickback provides batch endpoints for efficient bulk operations. Batch operations automatically inherit from their corresponding CRUD operations and maintain full security layer consistency.\n\n### Batch Create Records\n\n```\nPOST /rooms/batch\nContent-Type: application/json\n\n{\n \"records\": [\n { \"name\": \"Room A\", \"capacity\": 10 },\n { \"name\": \"Room B\", \"capacity\": 20 },\n { \"name\": \"Room C\", \"capacity\": 15 }\n ],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\nCreates multiple records in a single request. 
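Because batch size is capped (100 records by default, see the limit note below), clients typically send large arrays in chunks; a sketch (path and token are placeholders):\n\n```typescript\n// Hypothetical helper that respects the default batch size limit\nasync function batchCreateRooms(records: Record<string, unknown>[], chunkSize = 100) {\n  for (let i = 0; i < records.length; i += chunkSize) {\n    const res = await fetch(\"/rooms/batch\", {\n      method: \"POST\",\n      headers: {\n        Authorization: \"Bearer <token>\",\n        \"Content-Type\": \"application/json\",\n      },\n      body: JSON.stringify({ records: records.slice(i, i + chunkSize), options: { atomic: false } }),\n    });\n    console.log(`chunk starting at ${i}:`, res.status); // 201 (all created) or 207 (partial success)\n  }\n}\n```\n\n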
Each record follows the same validation rules as single create operations.\n\n#### Request Body\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `records` | Array | Array of record objects to create |\n| `options.atomic` | Boolean | If `true`, all records succeed or all fail (default: `false`) |\n\n#### Response (Partial Success - Default)\n\n```json\n{\n \"success\": [\n { \"id\": \"room_1\", \"name\": \"Room A\", \"capacity\": 10 },\n { \"id\": \"room_2\", \"name\": \"Room B\", \"capacity\": 20 }\n ],\n \"errors\": [\n {\n \"index\": 2,\n \"record\": { \"name\": \"Room C\", \"capacity\": 15 },\n \"error\": {\n \"error\": \"Field cannot be set during creation\",\n \"layer\": \"guards\",\n \"code\": \"GUARD_FIELD_NOT_CREATEABLE\",\n \"details\": { \"fields\": [\"status\"] }\n }\n }\n ],\n \"meta\": {\n \"total\": 3,\n \"succeeded\": 2,\n \"failed\": 1,\n \"atomic\": false\n }\n}\n```\n\n#### HTTP Status Codes\n\n- `201` - All records created successfully\n- `207` - Partial success (some records failed)\n- `400` - Atomic mode enabled and one or more records failed\n\n#### Batch Size Limit\n\nDefault: 100 records per request (configurable via `maxBatchSize`)\n\n### Batch Update Records\n\n```\nPATCH /rooms/batch\nContent-Type: application/json\n\n{\n \"records\": [\n { \"id\": \"room_1\", \"capacity\": 12 },\n { \"id\": \"room_2\", \"name\": \"Updated Room B\" },\n { \"id\": \"room_3\", \"capacity\": 25 }\n ],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\nUpdates multiple records in a single request. All records must include an `id` field.\n\n#### Request Body\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `records` | Array | Array of record objects with `id` and fields to update |\n| `options.atomic` | Boolean | If `true`, all records succeed or all fail (default: `false`) |\n\n#### Response\n\n```json\n{\n \"success\": [\n { \"id\": \"room_1\", \"capacity\": 12, \"modifiedAt\": \"2024-01-15T14:00:00Z\" },\n { \"id\": \"room_2\", \"name\": \"Updated Room B\", \"modifiedAt\": \"2024-01-15T14:00:00Z\" }\n ],\n \"errors\": [\n {\n \"index\": 2,\n \"record\": { \"id\": \"room_3\", \"capacity\": 25 },\n \"error\": {\n \"error\": \"Not found\",\n \"layer\": \"firewall\",\n \"code\": \"NOT_FOUND\",\n \"details\": { \"id\": \"room_3\" }\n }\n }\n ],\n \"meta\": {\n \"total\": 3,\n \"succeeded\": 2,\n \"failed\": 1,\n \"atomic\": false\n }\n}\n```\n\n#### Features\n\n- **Batch fetching**: Single database query for all IDs (with firewall)\n- **Per-record access**: Access checks run with record context\n- **Field validation**: Guards apply to each record individually\n\n### Batch Delete Records\n\n```\nDELETE /rooms/batch\nContent-Type: application/json\n\n{\n \"ids\": [\"room_1\", \"room_2\", \"room_3\"],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\nDeletes multiple records in a single request. 
Supports both soft and hard delete modes.\n\n#### Request Body\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `ids` | Array | Array of record IDs to delete |\n| `options.atomic` | Boolean | If `true`, all records succeed or all fail (default: `false`) |\n\n#### Response (Soft Delete)\n\n```json\n{\n \"success\": [\n { \"id\": \"room_1\", \"deletedAt\": \"2024-01-15T15:00:00Z\", \"deletedBy\": \"user_789\" },\n { \"id\": \"room_2\", \"deletedAt\": \"2024-01-15T15:00:00Z\", \"deletedBy\": \"user_789\" }\n ],\n \"errors\": [\n {\n \"index\": 2,\n \"id\": \"room_3\",\n \"error\": {\n \"error\": \"Not found\",\n \"layer\": \"firewall\",\n \"code\": \"NOT_FOUND\",\n \"details\": { \"id\": \"room_3\" }\n }\n }\n ],\n \"meta\": {\n \"total\": 3,\n \"succeeded\": 2,\n \"failed\": 1,\n \"atomic\": false\n }\n}\n```\n\n#### Delete Modes\n\n- **Soft delete** (default): Sets `deletedAt`, `deletedBy`, `modifiedAt`, `modifiedBy` fields\n- **Hard delete**: Permanently removes records from database\n\n### Batch Upsert Records\n\n```\nPUT /rooms/batch\nContent-Type: application/json\n\n{\n \"records\": [\n { \"id\": \"room_1\", \"name\": \"Updated Room A\", \"capacity\": 10 },\n { \"id\": \"new_room\", \"name\": \"New Room\", \"capacity\": 30 }\n ],\n \"options\": {\n \"atomic\": false\n }\n}\n```\n\nCreates or updates multiple records in a single request. Creates if ID doesn't exist, updates if it does.\n\n**Strict Requirements** (same as single PUT):\n1. `generateId: false` in database config (user provides IDs)\n2. `guards: false` in resource definition (no field restrictions)\n3. All records must include an `id` field\n\n**Note**: System-managed fields (`createdAt`, `createdBy`, `modifiedAt`, `modifiedBy`, `deletedAt`, `deletedBy`) are always protected and will be rejected if included in the request, regardless of guards configuration.\n\n#### Request Body\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `records` | Array | Array of record objects with `id` and all fields |\n| `options.atomic` | Boolean | If `true`, all records succeed or all fail (default: `false`) |\n\n#### Response\n\n```json\n{\n \"success\": [\n { \"id\": \"room_1\", \"name\": \"Updated Room A\", \"capacity\": 10, \"modifiedAt\": \"2024-01-15T16:00:00Z\" },\n { \"id\": \"new_room\", \"name\": \"New Room\", \"capacity\": 30, \"createdAt\": \"2024-01-15T16:00:00Z\" }\n ],\n \"errors\": [],\n \"meta\": {\n \"total\": 2,\n \"succeeded\": 2,\n \"failed\": 0,\n \"atomic\": false\n }\n}\n```\n\n#### How It Works\n\n1. Batch existence check with firewall\n2. Split records into CREATE and UPDATE batches\n3. Validate new records with `validateCreate()`\n4. Validate existing records with `validateUpdate()`\n5. Check CREATE access for new records\n6. Check UPDATE access for existing records (per-record)\n7. Execute bulk insert and individual updates\n8. 
Return combined results\n\n### Batch Operation Features\n\n#### Partial Success Mode (Default)\n\nBy default, batch operations use **partial success** mode:\n- All records are processed independently\n- Failed records go into `errors` array with detailed error information\n- Successful records go into `success` array\n- HTTP status `207 Multi-Status` if any errors, `201`/`200` if all success\n\n```json\n{\n \"success\": [ /* succeeded records */ ],\n \"errors\": [\n {\n \"index\": 2,\n \"record\": { /* original input */ },\n \"error\": {\n \"error\": \"Human-readable message\",\n \"layer\": \"guards\",\n \"code\": \"GUARD_FIELD_NOT_CREATEABLE\",\n \"details\": { \"fields\": [\"status\"] },\n \"hint\": \"These fields are set automatically or must be omitted\"\n }\n }\n ],\n \"meta\": {\n \"total\": 10,\n \"succeeded\": 8,\n \"failed\": 2,\n \"atomic\": false\n }\n}\n```\n\n#### Atomic Mode (Opt-in)\n\nEnable **atomic mode** for all-or-nothing behavior:\n\n```json\n{\n \"records\": [ /* ... */ ],\n \"options\": {\n \"atomic\": true\n }\n}\n```\n\n**Atomic mode behavior**:\n- First error immediately stops processing\n- All changes are rolled back (database transaction)\n- HTTP status `400 Bad Request`\n- Returns single error with failure details\n\n```json\n{\n \"error\": \"Batch operation failed in atomic mode\",\n \"layer\": \"validation\",\n \"code\": \"BATCH_ATOMIC_FAILED\",\n \"details\": {\n \"failedAt\": 2,\n \"reason\": { /* the actual error */ }\n },\n \"hint\": \"Transaction rolled back. Fix the error and retry the entire batch.\"\n}\n```\n\n#### Human-Readable Errors\n\nAll batch operation errors include:\n- **Layer identification**: Which security layer rejected the request\n- **Error code**: Machine-readable code for programmatic handling\n- **Clear message**: Human-readable explanation\n- **Details**: Contextual information (fields, IDs, reasons)\n- **Helpful hints**: Actionable guidance for resolution\n\n#### Performance Optimizations\n\n- **Batch size limits**: Default 100 records (prevents memory exhaustion)\n- **Single firewall query**: `WHERE id IN (...)` instead of N queries\n- **Bulk operations**: Single INSERT for multiple records (CREATE, UPSERT)\n- **O(1) lookups**: Map-based record lookup instead of Array.find()\n\n#### Configuration\n\nBatch operations are **auto-enabled** when corresponding CRUD operations exist:\n\n```typescript\n// Auto-enabled - no configuration needed\ncrud: {\n create: { access: { roles: ['member'] } },\n update: { access: { roles: ['member'] } }\n // batchCreate and batchUpdate automatically available\n}\n\n// Customize batch operations\ncrud: {\n create: { access: { roles: ['member'] } },\n batchCreate: {\n access: { roles: ['admin'] }, // Different access rules\n maxBatchSize: 50, // Lower limit\n allowAtomic: false // Disable atomic mode\n }\n}\n\n// Disable batch operations\ncrud: {\n create: { access: { roles: ['member'] } },\n batchCreate: false // Explicitly disable\n}\n```\n\n#### Security Layer Application\n\nBatch operations maintain **full security layer consistency**:\n\n1. **Firewall**: Auto-apply ownership fields, batch fetch with isolation\n2. **Access**: Operation-level for CREATE, per-record for UPDATE/DELETE\n3. **Guards**: Per-record field validation (same rules as single operations)\n4. **Masking**: Applied to success array (respects user permissions)\n5. **Audit**: Single timestamp for entire batch for consistency\n\n## Authentication\n\nAll endpoints require authentication. 
Include your auth token in the request header:\n\n```\nAuthorization: Bearer <your-token>\n```\n\nThe user's context (userId, roles, organizationId) is extracted from the token and used to:\n\n1. Apply firewall filters (data isolation)\n2. Check access permissions\n3. Set audit fields (createdBy, modifiedBy)\n\n## Error Responses\n\nAll errors use a flat structure with contextual fields:\n\n```json\n{\n \"error\": \"Insufficient permissions\",\n \"layer\": \"access\",\n \"code\": \"ACCESS_ROLE_REQUIRED\",\n \"details\": {\n \"required\": [\"admin\"],\n \"current\": [\"member\"]\n },\n \"hint\": \"Contact an administrator to grant necessary permissions\"\n}\n```\n\nSee [Errors](/compiler/using-the-api/errors) for the complete reference of error codes by security layer."
203
+ },
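The batch endpoints described above are plain JSON over HTTP, so any client can drive them. Below is a minimal sketch of calling batch create and branching on the documented status codes (201, 207, 400); the host, resource name, and token are placeholders, and the request/response shapes follow the examples in the CRUD reference above.

```typescript
// Sketch: batch create against the documented POST /api/v1/rooms/batch endpoint.
// BASE_URL and TOKEN are placeholders.
const BASE_URL = "https://your-api.example.com";
const TOKEN = "<your-token>";

async function batchCreateRooms(records: Array<{ name: string; capacity: number }>) {
  const res = await fetch(`${BASE_URL}/api/v1/rooms/batch`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${TOKEN}`,
    },
    body: JSON.stringify({ records, options: { atomic: false } }),
  });

  if (res.status === 201) {
    // All records created
    return (await res.json()).success;
  }
  if (res.status === 207) {
    // Partial success: inspect per-record errors from the `errors` array
    const body = await res.json();
    for (const failure of body.errors) {
      console.warn(`record ${failure.index} failed: ${failure.error.code}`);
    }
    return body.success;
  }
  // 400 (atomic failure, batch size exceeded) or any other error
  throw new Error(`batch create failed with status ${res.status}`);
}
```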
204
+ "compiler/using-the-api/errors": {
205
+ "title": "Errors",
206
+ "content": "The generated API returns structured error responses with consistent fields across all security layers.\n\n## Status Codes\n\n| Code | Meaning | When |\n|------|---------|------|\n| `200` | OK | Successful GET, PATCH, DELETE |\n| `201` | Created | Successful POST |\n| `207` | Multi-Status | Batch operation with mixed results |\n| `400` | Bad Request | Guard violation, validation error, invalid input |\n| `401` | Unauthorized | Missing or expired authentication |\n| `403` | Forbidden | Access denied (wrong role, condition failed) |\n| `404` | Not Found | Record doesn't exist or firewall blocked it |\n| `429` | Too Many Requests | Rate limit exceeded |\n\n## Error Response Format\n\nAll errors use a flat structure with contextual fields:\n\n```json\n{\n \"error\": \"Insufficient permissions\",\n \"layer\": \"access\",\n \"code\": \"ACCESS_ROLE_REQUIRED\",\n \"details\": {\n \"required\": [\"admin\"],\n \"current\": [\"member\"]\n },\n \"hint\": \"Contact an administrator to grant necessary permissions\"\n}\n```\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `error` | string | Human-readable error message |\n| `layer` | string | Security layer that rejected the request |\n| `code` | string | Machine-readable error code |\n| `details` | object | Layer-specific context (optional) |\n| `hint` | string | Actionable guidance for resolution (optional) |\n\n## Errors by Security Layer\n\n### Authentication (401)\n\nMissing or invalid authentication tokens.\n\n```json\n{\n \"error\": \"Authentication required\",\n \"layer\": \"authentication\",\n \"code\": \"AUTH_MISSING\",\n \"hint\": \"Include Authorization header with Bearer token\"\n}\n```\n\n**Error codes:**\n\n| Code | Description |\n|------|-------------|\n| `AUTH_MISSING` | No Authorization header provided |\n| `AUTH_INVALID_TOKEN` | Token is malformed or invalid |\n| `AUTH_EXPIRED` | Token has expired |\n| `AUTH_RATE_LIMITED` | Too many auth attempts |\n\n### Firewall (404)\n\nRecords outside the user's firewall scope return **404 Not Found** — as if the record doesn't exist. 
This prevents information leakage about records in other organizations.\n\n```json\n{\n \"error\": \"Not found\",\n \"code\": \"NOT_FOUND\"\n}\n```\n\nFirewall filtering is transparent — the query is scoped by `WHERE organizationId = ?` so inaccessible records simply don't appear in results.\n\n### Access (403)\n\nAccess violations return **403 Forbidden** when the user's role doesn't match the required roles for the operation.\n\n```json\n{\n \"error\": \"Insufficient permissions\",\n \"layer\": \"access\",\n \"code\": \"ACCESS_ROLE_REQUIRED\",\n \"details\": {\n \"required\": [\"admin\"],\n \"current\": [\"member\"]\n },\n \"hint\": \"Contact an administrator to grant necessary permissions\"\n}\n```\n\n**Error codes:**\n\n| Code | Description |\n|------|-------------|\n| `ACCESS_ROLE_REQUIRED` | User doesn't have the required role |\n| `ACCESS_CONDITION_FAILED` | Record-level access condition not met |\n| `ACCESS_OWNERSHIP_REQUIRED` | User must own the record |\n| `ACCESS_NO_ORG` | No active organization set |\n\n### Guards (400)\n\nGuard violations return **400 Bad Request** when the request body contains fields that aren't allowed.\n\n```json\n{\n \"error\": \"Field cannot be set during creation\",\n \"layer\": \"guards\",\n \"code\": \"GUARD_FIELD_NOT_CREATEABLE\",\n \"details\": {\n \"fields\": [\"status\"]\n },\n \"hint\": \"These fields are set automatically or must be omitted\"\n}\n```\n\n**Error codes:**\n\n| Code | Description |\n|------|-------------|\n| `GUARD_FIELD_NOT_CREATEABLE` | Field not in `createable` list |\n| `GUARD_FIELD_NOT_UPDATABLE` | Field not in `updatable` list |\n| `GUARD_FIELD_PROTECTED` | Field is action-only (protected) |\n| `GUARD_FIELD_IMMUTABLE` | Field cannot be modified after creation |\n| `GUARD_SYSTEM_MANAGED` | System field (createdAt, modifiedAt, etc.) |\n\n### Masking\n\nMasking doesn't produce errors — it silently transforms field values in the response.\n\n## Batch Errors\n\nBatch operations can return:\n- **201** — All records succeeded\n- **207** — Partial success (some records failed)\n- **400** — Atomic mode and at least one record failed (all rolled back)\n\n### Partial Success (207)\n\n```json\n{\n \"success\": [{ \"id\": \"room_1\", \"name\": \"Room A\" }],\n \"errors\": [\n {\n \"index\": 1,\n \"record\": { \"name\": \"Room B\", \"status\": \"active\" },\n \"error\": {\n \"error\": \"Field cannot be set during creation\",\n \"layer\": \"guards\",\n \"code\": \"GUARD_FIELD_NOT_CREATEABLE\",\n \"details\": { \"fields\": [\"status\"] },\n \"hint\": \"These fields are set automatically or must be omitted\"\n }\n }\n ],\n \"meta\": { \"total\": 2, \"succeeded\": 1, \"failed\": 1, \"atomic\": false }\n}\n```\n\n### Atomic Failure (400)\n\n```json\n{\n \"error\": \"Batch operation failed in atomic mode\",\n \"layer\": \"validation\",\n \"code\": \"BATCH_ATOMIC_FAILED\",\n \"details\": {\n \"failedAt\": 2,\n \"reason\": { \"error\": \"Not found\", \"code\": \"NOT_FOUND\" }\n },\n \"hint\": \"Transaction rolled back. Fix the error and retry the entire batch.\"\n}\n```\n\n**Batch error codes:**\n\n| Code | Description |\n|------|-------------|\n| `BATCH_SIZE_EXCEEDED` | Too many records in a single request |\n| `BATCH_ATOMIC_FAILED` | Atomic batch failed, all changes rolled back |\n| `BATCH_MISSING_IDS` | Batch update/delete missing required IDs |"
207
+ },
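Because every error uses the same flat shape (`error`, `layer`, `code`, optional `details` and `hint`), clients can handle failures generically. A small sketch, using only the field names and status codes documented above:

```typescript
// Sketch: generic handling of the documented flat error response shape.
interface ApiError {
  error: string;
  layer?: string;
  code?: string;
  details?: Record<string, unknown>;
  hint?: string;
}

async function requestOrThrow(input: string, init?: RequestInit) {
  const res = await fetch(input, init);
  if (res.ok) return res.json();

  const body = (await res.json()) as ApiError;
  switch (res.status) {
    case 401:
      throw new Error(`Not authenticated (${body.code}): ${body.error}`);
    case 403:
      // e.g. ACCESS_ROLE_REQUIRED with details.required / details.current
      throw new Error(`Forbidden (${body.code}): ${body.hint ?? body.error}`);
    case 404:
      // Firewall-scoped records also surface as NOT_FOUND
      throw new Error(`Not found: ${body.error}`);
    default:
      throw new Error(`${res.status} ${body.code ?? ""}: ${body.error}`);
  }
}
```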
208
+ "compiler/using-the-api": {
209
+ "title": "Using the API",
210
+ "content": "When you run `quickback compile`, Quickback generates a complete REST API from your definitions. This section covers how to use the generated endpoints.\n\n## Base URL\n\nAll generated endpoints are served under `/api/v1/`:\n\n```\nhttps://your-api.com/api/v1/{resource}\n```\n\nThe resource name is derived from the filename: `rooms.ts` → `/api/v1/rooms`.\n\n## Endpoint Overview\n\nFor each resource with `export default defineTable(...)`:\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/v1/{resource}` | List records (paginated) |\n| `GET` | `/api/v1/{resource}/:id` | Get single record |\n| `POST` | `/api/v1/{resource}` | Create record |\n| `PATCH` | `/api/v1/{resource}/:id` | Update record |\n| `DELETE` | `/api/v1/{resource}/:id` | Delete record |\n| `POST` | `/api/v1/{resource}/batch` | Batch create |\n| `PATCH` | `/api/v1/{resource}/batch` | Batch update |\n| `DELETE` | `/api/v1/{resource}/batch` | Batch delete |\n\n## Guides\n\n- [CRUD Endpoints](/compiler/using-the-api/crud) — Detailed CRUD endpoint reference\n- [Query Parameters](/compiler/using-the-api/query-params) — Filtering, pagination, and sorting\n- [Batch Operations](/compiler/using-the-api/batch-operations) — Bulk create, update, delete with atomic mode\n- [Views API](/compiler/using-the-api/views-api) — Column-level projections\n- [Actions API](/compiler/using-the-api/actions-api) — Custom business logic endpoints\n- [Errors](/compiler/using-the-api/errors) — Error responses and HTTP status codes"
211
+ },
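As a quick orientation to the endpoint table above, a minimal list request might look like the following sketch; the host, resource name (`rooms`), and token are placeholders:

```typescript
// Sketch: listing records from a generated resource endpoint.
const res = await fetch("https://your-api.example.com/api/v1/rooms?limit=25", {
  headers: { Authorization: "Bearer <your-token>" },
});
const { data, pagination } = await res.json();
console.log(`fetched ${pagination.count} rooms`, data);
```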
212
+ "compiler/using-the-api/openapi": {
213
+ "title": "OpenAPI Spec",
214
+ "content": "The compiler generates an [OpenAPI 3.1](https://spec.openapis.org/oas/v3.1.0) specification from your feature definitions. By default it's both written to `openapi.json` at your project root and served as a runtime route.\n\n## Endpoint\n\n```\nGET /openapi.json\n```\n\nReturns the full OpenAPI spec as JSON.\n\n```bash\ncurl https://your-api.example.com/openapi.json\n```\n\n## What's included\n\nThe generated spec documents every route the compiler produces:\n\n- CRUD endpoints (list, get, create, update, delete)\n- Batch operations\n- Views\n- Actions (including standalone actions)\n- Auth routes (`/api/auth/**`)\n- Storage, embeddings, and webhook routes (when configured)\n\nEach endpoint includes request/response schemas derived from your Drizzle column types, access control metadata, and error responses.\n\n## Configuration\n\nBoth generation and publishing default to `true`. You can control them independently in `quickback.config.ts`:\n\n```typescript\nexport default defineConfig({\n name: \"my-app\",\n // ...\n openapi: {\n generate: true, // write openapi.json to project root\n publish: true, // serve GET /openapi.json at runtime\n },\n});\n```\n\n| Option | Default | Description |\n|--------|---------|-------------|\n| `generate` | `true` | Write `openapi.json` to the project root during compilation |\n| `publish` | `true` | Serve the spec at `GET /openapi.json` (requires `generate: true`) |\n\nOmitting the `openapi` key entirely is equivalent to both being `true`.\n\n### Generate only (no runtime route)\n\n```typescript\nopenapi: {\n generate: true,\n publish: false,\n}\n```\n\nThe file is still written so you can use it in CI or commit it to your repo, but no route is added to the app.\n\n### Disable entirely\n\n```typescript\nopenapi: {\n generate: false,\n}\n```\n\nNo file is written and no route is served.\n\n## Usage with tools\n\n### Import into Postman\n\n1. Open Postman and click **Import**\n2. Paste the URL `https://your-api.example.com/openapi.json` or upload the file\n3. Postman creates a collection with every endpoint pre-configured\n\n### Generate a typed client\n\nUse [openapi-typescript](https://github.com/openapi-ts/openapi-typescript) to generate TypeScript types from the spec:\n\n```bash\nnpx openapi-typescript https://your-api.example.com/openapi.json -o src/api.d.ts\n```\n\n### Swagger UI\n\nPoint any OpenAPI-compatible viewer at the endpoint:\n\n```\nhttps://petstore.swagger.io/?url=https://your-api.example.com/openapi.json\n```\n\n## Example output (excerpt)\n\n```json\n{\n \"openapi\": \"3.1.0\",\n \"info\": {\n \"title\": \"my-app API\",\n \"version\": \"1.0.0\"\n },\n \"paths\": {\n \"/api/v1/todos\": {\n \"get\": {\n \"summary\": \"List todos\",\n \"operationId\": \"listTodos\",\n \"parameters\": [\n { \"name\": \"limit\", \"in\": \"query\", \"schema\": { \"type\": \"integer\" } },\n { \"name\": \"offset\", \"in\": \"query\", \"schema\": { \"type\": \"integer\" } }\n ],\n \"responses\": {\n \"200\": {\n \"description\": \"Success\",\n \"content\": {\n \"application/json\": {\n \"schema\": {\n \"type\": \"object\",\n \"properties\": {\n \"data\": { \"type\": \"array\", \"items\": { \"$ref\": \"#/components/schemas/Todo\" } }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```"
215
+ },
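Once `src/api.d.ts` has been generated with openapi-typescript as shown above, the resulting `paths` interface can be used to derive response types by path and method. A sketch, using the `/api/v1/todos` route from the example excerpt; the lookup keys follow openapi-typescript's output structure:

```typescript
// Sketch: deriving a response type from the openapi-typescript output.
import type { paths } from "./api"; // generated via `npx openapi-typescript ... -o src/api.d.ts`

// 200 response body for GET /api/v1/todos, as shown in the example excerpt.
type ListTodos =
  paths["/api/v1/todos"]["get"]["responses"]["200"]["content"]["application/json"];

async function listTodos(baseUrl: string, token: string): Promise<ListTodos> {
  const res = await fetch(`${baseUrl}/api/v1/todos`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return (await res.json()) as ListTodos;
}
```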
216
+ "compiler/using-the-api/query-params": {
217
+ "title": "Query Parameters",
218
+ "content": "The generated API supports filtering, pagination, sorting, field selection, search, and total count via query parameters on `GET` list endpoints.\n\n## Filter Operators\n\n| Operator | Query Param | SQL Equivalent |\n|----------|-------------|----------------|\n| Equals | `?field=value` | `WHERE field = value` |\n| Not equals | `?field.ne=value` | `WHERE field != value` |\n| Greater than | `?field.gt=value` | `WHERE field > value` |\n| Greater or equal | `?field.gte=value` | `WHERE field >= value` |\n| Less than | `?field.lt=value` | `WHERE field < value` |\n| Less or equal | `?field.lte=value` | `WHERE field <= value` |\n| Pattern match | `?field.like=value` | `WHERE field LIKE '%value%'` |\n| In list | `?field.in=a,b,c` | `WHERE field IN ('a','b','c')` |\n\n### Examples\n\n```bash\n# Filter by status\nGET /api/v1/rooms?status=active\n\n# Range query\nGET /api/v1/rooms?capacity.gte=10&capacity.lte=50\n\n# Pattern matching\nGET /api/v1/rooms?name.like=conference\n\n# Multiple values\nGET /api/v1/rooms?roomType.in=meeting,conference,workshop\n```\n\n## Pagination\n\n| Parameter | Default | Description |\n|-----------|---------|-------------|\n| `limit` | `50` | Number of records to return (min: 1, max: 100) |\n| `offset` | `0` | Number of records to skip |\n\n```bash\nGET /api/v1/rooms?limit=25&offset=50\n```\n\nThe default `limit` can be configured per-resource in your definition:\n\n```typescript\ncrud: {\n list: {\n access: { roles: [\"member\"] },\n pageSize: 25, // Default limit\n maxPageSize: 100, // Maximum allowed limit\n },\n},\n```\n\n### Response Shape\n\n```json\n{\n \"data\": [ /* records */ ],\n \"pagination\": {\n \"limit\": 25,\n \"offset\": 50,\n \"count\": 12\n }\n}\n```\n\n- `count` — number of records returned on this page\n- `total` — total matching records across all pages (only when `?total=true`, see below)\n\n## Sorting\n\nSort by one or more fields. Use the `-` prefix for descending order.\n\n### Multi-Sort (Recommended)\n\n```bash\n# Sort by status ascending, then createdAt descending\nGET /api/v1/rooms?sort=status,-createdAt\n\n# Single field descending\nGET /api/v1/rooms?sort=-createdAt\n\n# Multiple fields\nGET /api/v1/rooms?sort=priority,-createdAt,name\n```\n\n| Prefix | Direction |\n|--------|-----------|\n| (none) | Ascending |\n| `-` | Descending |\n\n### Legacy Format\n\nThe original `sort` + `order` format is still supported for backwards compatibility:\n\n```bash\nGET /api/v1/rooms?sort=name&order=desc\n```\n\n| Parameter | Values | Default | Description |\n|-----------|--------|---------|-------------|\n| `sort` | Any column name | `createdAt` | Field to sort by |\n| `order` | `asc`, `desc` | `desc` | Sort direction |\n\nWhen the multi-sort format is detected (comma or `-` prefix), the `order` parameter is ignored.\n\n## Field Selection\n\nSelect which columns to return using `?fields=`. Available on LIST and GET routes (not Views — they define their own field set).\n\n```bash\n# Return only id, name, and status\nGET /api/v1/rooms?fields=id,name,status\n\n# Combine with other query params\nGET /api/v1/rooms?fields=id,name,status&status=active&sort=-createdAt\n\n# Single record\nGET /api/v1/rooms/rm_123?fields=id,name,capacity\n```\n\nAll columns are available including system columns (`id`, `organizationId`, `createdAt`, `modifiedAt`, etc.). Invalid field names are silently ignored. If no valid fields are provided, all columns are returned.\n\n**Security**: Masking still applies to selected fields. 
Requesting `?fields=ssn` will return the masked value, not the raw data.\n\n## Total Count\n\nGet the total number of matching records across all pages by adding `?total=true`. Available on LIST and VIEW routes.\n\n```bash\nGET /api/v1/rooms?status=active&total=true\n```\n\n```json\n{\n \"data\": [ /* 25 records */ ],\n \"pagination\": {\n \"limit\": 25,\n \"offset\": 0,\n \"count\": 25,\n \"total\": 142\n }\n}\n```\n\nThis is opt-in because it runs an additional `COUNT(*)` query. Only use it when you need the total (e.g., for pagination UI).\n\n## Search\n\nFull-text search across all text columns using `?search=`. Available on LIST and VIEW routes.\n\n```bash\n# Search across all text fields\nGET /api/v1/rooms?search=conference\n\n# Combine with filters\nGET /api/v1/rooms?search=conference&status=active\n```\n\nThe search generates an OR'd `LIKE` condition across all `text()` columns in your schema:\n\n```sql\nWHERE (name LIKE '%conference%' OR description LIKE '%conference%')\n```\n\nOnly columns defined with `text()` in your Drizzle schema are searchable. Non-text columns (integers, timestamps, UUIDs, blobs) are automatically excluded.\n\n## Complete Example\n\nCombine all query parameters together:\n\n```bash\nGET /api/v1/rooms?fields=id,name,status,capacity&status=active&capacity.gte=10&search=conference&sort=capacity,-createdAt&limit=10&offset=20&total=true\n```\n\nThis request:\n1. **Selects** only `id`, `name`, `status`, `capacity` fields\n2. **Filters** to active rooms with capacity >= 10\n3. **Searches** text columns for \"conference\"\n4. **Sorts** by capacity ascending, then createdAt descending\n5. **Paginates** with 10 results starting at offset 20\n6. **Counts** total matching records\n\n## Parameter Summary\n\n| Parameter | Applies To | Description |\n|-----------|-----------|-------------|\n| `limit` | LIST, VIEW | Page size (default: 50, max: 100) |\n| `offset` | LIST, VIEW | Skip N records |\n| `sort` | LIST, VIEW | Sort fields (comma-separated, `-` prefix for desc) |\n| `order` | LIST, VIEW | Legacy sort direction (`asc` or `desc`) |\n| `fields` | LIST, GET | Comma-separated column names to return |\n| `total` | LIST, VIEW | Set to `true` to include total count |\n| `search` | LIST, VIEW | Search text across all text columns |\n| `field=value` | LIST, VIEW | Filter by exact match |\n| `field.op=value` | LIST, VIEW | Filter with operator (gt, gte, lt, lte, ne, like, in) |"
219
+ },
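Since filters are plain `field` or `field.operator` query parameters, it can be convenient to build list URLs programmatically. The sketch below uses only the parameters documented above; the resource and field names are placeholders.

```typescript
// Sketch: building a list URL from the documented query parameters.
type Operator = "ne" | "gt" | "gte" | "lt" | "lte" | "like" | "in";

function buildListUrl(
  base: string,
  resource: string,
  opts: {
    filters?: Record<string, string | Partial<Record<Operator, string>>>;
    sort?: string[];   // e.g. ["capacity", "-createdAt"]
    fields?: string[]; // e.g. ["id", "name"]
    limit?: number;
    offset?: number;
    search?: string;
    total?: boolean;
  } = {},
): string {
  const params = new URLSearchParams();
  for (const [field, value] of Object.entries(opts.filters ?? {})) {
    if (typeof value === "string") {
      params.set(field, value); // ?field=value
    } else {
      for (const [op, v] of Object.entries(value)) {
        params.set(`${field}.${op}`, v as string); // ?field.op=value
      }
    }
  }
  if (opts.sort?.length) params.set("sort", opts.sort.join(","));
  if (opts.fields?.length) params.set("fields", opts.fields.join(","));
  if (opts.limit !== undefined) params.set("limit", String(opts.limit));
  if (opts.offset !== undefined) params.set("offset", String(opts.offset));
  if (opts.search) params.set("search", opts.search);
  if (opts.total) params.set("total", "true");
  return `${base}/api/v1/${resource}?${params.toString()}`;
}

// Reproduces the "Complete Example" request above.
buildListUrl("https://your-api.example.com", "rooms", {
  fields: ["id", "name", "status", "capacity"],
  filters: { status: "active", capacity: { gte: "10" } },
  search: "conference",
  sort: ["capacity", "-createdAt"],
  limit: 10,
  offset: 20,
  total: true,
});
```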
220
+ "compiler/using-the-api/views-api": {
221
+ "title": "Views API",
222
+ "content": "Views provide named column projections that limit which fields are returned in API responses. Each view can have its own access control, and all security pillars (firewall, masking) still apply.\n\n## Endpoint\n\n```\nGET /api/v1/{resource}/views/{view-name}\n```\n\n## Example\n\nGiven this definition:\n\n```typescript\nexport default defineTable(customers, {\n views: {\n summary: {\n fields: ['id', 'name', 'status'],\n access: { roles: ['owner', 'admin', 'member'] },\n },\n full: {\n fields: ['id', 'name', 'email', 'status', 'ssn', 'internalNotes'],\n access: { roles: ['owner', 'admin'] },\n },\n },\n // ...\n});\n```\n\n### Request\n\n```bash\ncurl /api/v1/customers/views/summary \\\n -H \"Authorization: Bearer <token>\"\n```\n\n### Response\n\n```json\n{\n \"data\": [\n { \"id\": \"cust_001\", \"name\": \"Acme Corp\", \"status\": \"active\" },\n { \"id\": \"cust_002\", \"name\": \"Widget Inc\", \"status\": \"pending\" }\n ],\n \"view\": \"summary\",\n \"fields\": [\"id\", \"name\", \"status\"],\n \"pagination\": {\n \"limit\": 50,\n \"offset\": 0,\n \"count\": 2\n }\n}\n```\n\nOnly the fields specified in the view definition are returned. Requesting the `full` view with a `member` role returns 403.\n\n## Query Parameters\n\nViews support the same query parameters as the list endpoint:\n\n| Parameter | Description | Default |\n|-----------|-------------|---------|\n| `limit` | Number of records to return (1–100) | `50` |\n| `offset` | Number of records to skip | `0` |\n| `sort` | Field to sort by | `createdAt` |\n| `order` | Sort direction (`asc` or `desc`) | `desc` |\n| `{field}` | Filter by exact value | — |\n| `{field}.gt` | Greater than | — |\n| `{field}.gte` | Greater than or equal | — |\n| `{field}.lt` | Less than | — |\n| `{field}.lte` | Less than or equal | — |\n| `{field}.ne` | Not equal | — |\n| `{field}.like` | Pattern match (SQL LIKE) | — |\n| `{field}.in` | Match any value in comma-separated list | — |\n\n### Examples\n\n```bash\n# Paginated with sorting\nGET /api/v1/customers/views/summary?limit=10&offset=0&sort=name&order=asc\n\n# With filters\nGET /api/v1/customers/views/summary?status=active&name.like=%25Corp%25\n\n# Combined\nGET /api/v1/customers/views/summary?status.in=active,pending&sort=name&order=asc&limit=25\n```\n\n## Security\n\nAll security pillars apply to view endpoints:\n\n1. **Authentication** — Required (401 if missing)\n2. **Firewall** — Organization/user isolation applied to all results. Only records the user owns or has access to are returned.\n3. **Access** — Per-view role check. Each view can require different roles.\n4. **Masking** — Applied to all returned fields. If a masked field is in the view's field list, the masking rules still apply based on the user's role.\n\n### Access Control Per View\n\nDifferent views can require different permission levels:\n\n```typescript\nviews: {\n // Available to all org members\n summary: {\n fields: ['id', 'name', 'status'],\n access: { roles: ['owner', 'admin', 'member'] },\n },\n // Admin-only view with sensitive data\n full: {\n fields: ['id', 'name', 'email', 'ssn', 'internalNotes'],\n access: { roles: ['owner', 'admin'] },\n },\n}\n```\n\nA `member` requesting the `full` view receives:\n\n```json\n{\n \"error\": \"Insufficient permissions\",\n \"layer\": \"access\",\n \"code\": \"ACCESS_ROLE_REQUIRED\",\n \"details\": { \"required\": [\"admin\"], \"current\": [\"member\"] }\n}\n```\n\n### Masking in Views\n\nMasking is applied after field projection. 
If your masking config hides `ssn` from non-admins:\n\n```typescript\nmasking: {\n ssn: { type: 'ssn', show: { roles: ['admin'] } },\n}\n```\n\nA `member` requesting a view that includes `ssn` will see the masked value (e.g., `***-**-1234`), while an `admin` sees the full value.\n\n## See Also\n\n- [Defining Views](/compiler/definitions/views) — How to configure views in defineTable\n- [Query Parameters](/compiler/using-the-api/query-params) — Full query parameter reference\n- [Access Control](/compiler/definitions/access) — Role-based access configuration"
223
+ },
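The view response envelope above (`data`, `view`, `fields`, `pagination`) is the same for every view, so it is easy to model on the client. A minimal sketch; the `CustomerSummary` fields mirror the example `summary` view, and the host and token are placeholders:

```typescript
// Sketch: typing the documented view response envelope.
interface ViewResponse<T> {
  data: T[];
  view: string;
  fields: string[];
  pagination: { limit: number; offset: number; count: number; total?: number };
}

// Fields returned by the example `summary` view.
interface CustomerSummary {
  id: string;
  name: string;
  status: string;
}

async function fetchCustomerSummaries(baseUrl: string, token: string) {
  const res = await fetch(
    `${baseUrl}/api/v1/customers/views/summary?sort=name&order=asc`,
    { headers: { Authorization: `Bearer ${token}` } },
  );
  return (await res.json()) as ViewResponse<CustomerSummary>;
}
```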
224
+ "index": {
225
+ "title": "Quickback Documentation",
226
+ "content": "# Quickback Documentation\n\nWelcome to the Quickback documentation. Quickback is a **backend compiler** that transforms declarative resource definitions into a fully-featured, production-ready API with built-in security layers.\n\n## Products\n\n### [Quickback Compiler](/compiler)\n\nThe core product. Define your database schema and security rules in TypeScript, then compile them into a complete backend.\n\n- **[Getting Started](/compiler/getting-started)** — Create your first project\n- **[Definitions](/compiler/definitions)** — Schema, Firewall, Access, Guards, Masking, Views, Actions\n- **[Using the API](/compiler/using-the-api)** — CRUD, filtering, batch operations\n- **[Cloud Compiler](/compiler/cloud-compiler)** — CLI, authentication, endpoints\n\n---\n\n### [Quickback Stack](/stack)\n\nProduction-ready Cloudflare + Better Auth infrastructure that runs on YOUR Cloudflare account.\n\n- **[Auth](/stack/auth)** — Better Auth plugins, security, device auth, API keys\n- **[Database](/stack/database)** — D1, Neon\n- **[Storage](/stack/storage)** — KV, R2\n- **[Realtime](/stack/realtime)** — Durable Objects + WebSocket\n\n---\n\n### [Account UI](/account-ui)\n\nPre-built authentication and account management UI. Deploy to Cloudflare Workers.\n\n- **[Features](/account-ui)** — Sessions, organizations, passkeys, admin panel\n- **[Customization](/account-ui/customization)** — Branding, theming, feature flags\n\n---\n\n### [Plugins & Tools](/plugins-tools)\n\nOpen-source Better Auth plugins and developer tools.\n\n- **[Better Auth Plugins](/plugins-tools)** — AWS SES, combo auth, upgrade anonymous\n- **[Claude Code Skill](/plugins-tools/claude-code-skill)** — AI-powered Quickback assistance\n\n## Quick Start\n\n```bash\nnpm install -g @kardoe/quickback\nquickback create cloudflare my-app\ncd my-app\nquickback compile\n```\n\n## How It Works\n\n1. **Define** your database schema using Drizzle ORM\n2. **Configure** security layers (firewall, access, guards, masking) for each resource\n3. **Compile** your definitions into a deployable backend\n4. **Deploy** to Cloudflare Workers or your own infrastructure\n\n## Security Philosophy: Locked Down by Default\n\nQuickback is **secure by default**. Nothing is accessible until you explicitly open it up. Learn more in [Definitions](/compiler/definitions)."
227
+ },
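To make the define/configure/compile steps concrete, here is a sketch of a resource definition assembled from the examples elsewhere in these docs (crud access, views, masking). It is illustrative only: the `defineTable` import path, the `customers` Drizzle table, and its columns are assumptions, and the firewall and guards options are omitted because their exact syntax lives in the Definitions section.

```typescript
// Sketch only: assembled from the crud, views, and masking examples in these docs.
import { defineTable } from "@kardoe/quickback"; // assumed import path
import { customers } from "../schema";            // assumed Drizzle table

export default defineTable(customers, {
  crud: {
    list: { access: { roles: ["owner", "admin", "member"] } },
    create: { access: { roles: ["member"] } },
    update: { access: { roles: ["member"] } },
  },
  views: {
    summary: {
      fields: ["id", "name", "status"],
      access: { roles: ["owner", "admin", "member"] },
    },
  },
  masking: {
    ssn: { type: "ssn", show: { roles: ["admin"] } },
  },
  // firewall and guards omitted here; see /compiler/definitions for their syntax.
});
```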
228
+ "plugins-tools/better-auth-plugins/aws-ses": {
229
+ "title": "AWS SES Plugin",
230
+ "content": "`@kardoe/better-auth-aws-ses` is a Better Auth plugin that sends transactional emails via AWS SES. It handles AWS Signature V4 signing, HTML/text email templates, and rate limiting — designed to work in Cloudflare Workers (no Node.js `aws-sdk` dependency).\n\n## Installation\n\n```bash\nnpm install @kardoe/better-auth-aws-ses\n```\n\nThis plugin is auto-included when `emailOtp` or `magicLink` is enabled in your Quickback config.\n\n## Email Types\n\nThe plugin handles 8 email types:\n\n| Endpoint | When Sent |\n|----------|-----------|\n| `/ses/send-verification` | Signup email verification |\n| `/ses/send-password-reset` | Password reset request |\n| `/ses/send-magic-link` | Passwordless sign-in link |\n| `/ses/send-otp` | One-time password code |\n| `/ses/send-welcome` | Post-signup welcome email |\n| `/ses/send-organization-invitation` | Team/org invite with role |\n| Combo auth | Magic link + OTP in a single email |\n| Delete account | Account deletion confirmation |\n\n### Combo Auth Mode\n\nWhen `useComboAuthEmail: true` is set, the plugin sends a single email that includes both a magic link button and an OTP code. Users can choose either method to authenticate.\n\n## AWS Signature V4\n\nThe plugin signs requests to SES using AWS Signature V4 with the Web Crypto API (`crypto.subtle`), making it compatible with Cloudflare Workers without any AWS SDK dependency.\n\n## Templates\n\nEach email type has both HTML and plain text templates. Templates include:\n\n- Responsive HTML layout with MSO (Microsoft Outlook) support\n- Security warning banners for sensitive operations\n- Fallback plain text versions for all email clients\n- XSS prevention via HTML escaping and URL sanitization\n\n## Configuration\n\n### Server Plugin\n\n```typescript\n\nconst auth = betterAuth({\n plugins: [\n awsSESPlugin({\n region: env.AWS_SES_REGION,\n accessKeyId: env.AWS_ACCESS_KEY_ID,\n secretAccessKey: env.AWS_SECRET_ACCESS_KEY,\n fromEmail: env.EMAIL_FROM,\n fromName: env.EMAIL_FROM_NAME || \"My App\",\n replyTo: env.EMAIL_REPLY_TO,\n appName: env.APP_NAME || \"My App\",\n appUrl: env.APP_URL,\n useComboAuthEmail: true, // Optional: combine magic link + OTP\n }),\n ],\n});\n```\n\n### Client Plugin\n\n```typescript\n\nconst authClient = createAuthClient({\n plugins: [awsSESClientPlugin()],\n});\n```\n\n## Required Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| `AWS_ACCESS_KEY_ID` | AWS access key (use Wrangler secrets) |\n| `AWS_SECRET_ACCESS_KEY` | AWS secret key (use Wrangler secrets) |\n| `AWS_SES_REGION` | SES region (e.g., `us-east-2`) |\n| `EMAIL_FROM` | Sender email address (must be SES-verified) |\n\n## Optional Variables\n\n| Variable | Description |\n|----------|-------------|\n| `EMAIL_FROM_NAME` | Sender display name |\n| `EMAIL_REPLY_TO` | Reply-to address (defaults to `EMAIL_FROM`) |\n| `APP_NAME` | Application name shown in email templates |\n| `APP_URL` | Application URL for links in emails |\n| `BETTER_AUTH_URL` | Auth API URL (for verification/reset links) |\n\n## Rate Limiting\n\nThe plugin includes basic in-memory rate limiting per recipient:\n\n- Configurable hourly and daily limits\n- Per-recipient tracking\n- For production, consider using an external store (Redis, KV)\n\n```typescript\nawsSESPlugin({\n // ...\n rateLimit: {\n maxEmailsPerHour: 10,\n maxEmailsPerDay: 50,\n },\n})\n```\n\n## Error Handling\n\n- Email failures return `false` (don't throw) — signup/login flows continue even if email delivery fails\n- Invalid or placeholder 
AWS credentials are detected at startup\n- Detailed console logging for debugging delivery issues\n\n## See Also\n\n- [Combo Auth Plugin](/plugins-tools/better-auth-plugins/combo-auth) — Combined magic link + OTP authentication\n- [Auth Configuration](/compiler/config/providers) — Configuring auth plugins"
231
+ },
232
+ "plugins-tools/better-auth-plugins/combo-auth": {
233
+ "title": "Combo Auth Plugin",
234
+ "content": "`@kardoe/better-auth-combo-auth` provides a combined authentication flow that sends both a magic link and an OTP code in a single email.\n\n## Installation\n\n```bash\nnpm install @kardoe/better-auth-combo-auth\n```\n\n## How It Works\n\nWhen a user requests authentication:\n1. A single email is sent with both a magic link and a 6-digit OTP\n2. The user can click the link (one-click) or enter the OTP code\n3. Either method authenticates the user\n\nThis provides the best of both worlds — convenience of magic links with the fallback of OTP codes."
235
+ },
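On the sending side, the combined email is produced by the AWS SES plugin's `useComboAuthEmail` option documented on the AWS SES plugin page. A minimal sketch of wiring the two together, assuming the plugin is exported as `awsSESPlugin` as used in that page's server example; environment variable names mirror that page:

```typescript
// Sketch: enabling the combined magic link + OTP email via the SES plugin option.
import { betterAuth } from "better-auth";
import { awsSESPlugin } from "@kardoe/better-auth-aws-ses"; // export name assumed

const auth = betterAuth({
  plugins: [
    awsSESPlugin({
      region: process.env.AWS_SES_REGION!,
      accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
      fromEmail: process.env.EMAIL_FROM!,
      appName: process.env.APP_NAME ?? "My App",
      appUrl: process.env.APP_URL,
      useComboAuthEmail: true, // one email containing both the link and the OTP code
    }),
  ],
});
```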
236
+ "plugins-tools/better-auth-plugins/upgrade-anonymous": {
237
+ "title": "Upgrade Anonymous Plugin",
238
+ "content": "`@kardoe/better-auth-upgrade-anonymous` provides a single endpoint to convert anonymous users into full authenticated users. It flips the `isAnonymous` flag, optionally updates email and name, and refreshes the session — no re-authentication required.\n\n## Installation\n\n```bash\nnpm install @kardoe/better-auth-upgrade-anonymous\n```\n\n## Configuration\n\nEnable both the `anonymous` and `upgradeAnonymous` plugins:\n\n```typescript\nauth: defineAuth(\"better-auth\", {\n plugins: [\"anonymous\", \"upgradeAnonymous\"],\n})\n```\n\nThe plugin accepts optional configuration:\n\n```typescript\n\nupgradeAnonymous({\n emailConfigured: true, // Whether email delivery is available\n requireEmailVerification: true, // Whether to require email verification\n})\n```\n\n| Option | Default | Description |\n|--------|---------|-------------|\n| `emailConfigured` | `false` | When `true` and an email is provided, `emailVerified` is set based on `requireEmailVerification` |\n| `requireEmailVerification` | `true` | When `true`, provided emails are marked unverified (requiring OTP). When `false`, emails are marked verified immediately |\n\n## Endpoint\n\n```\nPOST /auth/v1/upgrade-anonymous\n```\n\n### Flow\n\n1. Validates the user's session (returns `401` if not authenticated)\n2. Checks if the user is already upgraded (returns success immediately if so)\n3. If an email is provided, checks for duplicates (returns `400` if email is taken)\n4. Updates `isAnonymous` to `false`, plus optional `email`, `name`, and `emailVerified` fields\n5. Refreshes the session cookie with the updated user object\n6. Returns the updated user, session, and `verificationRequired` flag\n\n### Request\n\nThe body is optional. When provided, it accepts:\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `email` | `string` (email) | Optional. Real email to replace the anonymous placeholder |\n| `name` | `string` | Optional. 
User's display name |\n\n```bash\n# Minimal — just upgrade, no email/name\ncurl -X POST https://api.example.com/auth/v1/upgrade-anonymous \\\n -H \"Cookie: better-auth.session_token=...\"\n\n# With email and name\ncurl -X POST https://api.example.com/auth/v1/upgrade-anonymous \\\n -H \"Cookie: better-auth.session_token=...\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\"email\": \"jane@example.com\", \"name\": \"Jane\"}'\n```\n\n### Response\n\n```json\n{\n \"success\": true,\n \"verificationRequired\": true,\n \"user\": {\n \"id\": \"usr_abc123\",\n \"email\": \"jane@example.com\",\n \"isAnonymous\": false\n },\n \"session\": {\n \"id\": \"sess_xyz\",\n \"userId\": \"usr_abc123\"\n }\n}\n```\n\nThe `verificationRequired` field is `true` when all three conditions are met:\n- An email was provided in the request body\n- `emailConfigured` is `true` in plugin config\n- `requireEmailVerification` is `true` in plugin config\n\nWhen `verificationRequired` is `true`, the frontend should send a verification OTP and redirect the user to verify their email.\n\n## How It Works\n\nThe plugin is designed to be flexible:\n\n- **Minimal by default** — With no body, only changes `isAnonymous` from `true` to `false`\n- **Optional email/name** — Can update email and name in the same request\n- **Duplicate protection** — Rejects emails already associated with another account\n- **Verification-aware** — Returns `verificationRequired` so the frontend knows whether to trigger OTP\n- **Session preserved** — Same session ID, no re-authentication needed\n- **Idempotent** — Calling on an already-upgraded user returns success immediately\n\n## Typical Flows\n\n### Passkey Signup (no email)\n\n1. User creates anonymous session\n2. User registers a passkey\n3. App calls `POST /auth/v1/upgrade-anonymous` with no body\n4. User is now a full user with passkey auth — no email on file\n\n### Passkey Signup (with email)\n\n1. User creates anonymous session\n2. User registers a passkey\n3. User optionally enters email/name on the email collection step\n4. App calls `POST /auth/v1/upgrade-anonymous` with `{ email, name }`\n5. If `verificationRequired`: app sends OTP, user verifies, then redirects to dashboard\n6. Otherwise: user goes straight to dashboard\n\n### Email Signup\n\n1. User starts as anonymous (created via Better Auth's `anonymous` plugin)\n2. User provides email/password through your UI (via combo auth or signup)\n3. Your app calls `POST /auth/v1/upgrade-anonymous`\n4. User is now a full user with the same ID and all their data intact\n\n### Client Plugin\n\n```typescript\n\nconst authClient = createAuthClient({\n plugins: [upgradeAnonymousClient()],\n});\n\n// Upgrade the current anonymous user\nawait authClient.upgradeAnonymous();\n```\n\n## See Also\n\n- [Combo Auth Plugin](/plugins-tools/better-auth-plugins/combo-auth) — Combined magic link + OTP for collecting credentials\n- [Auth Configuration](/compiler/config/providers) — Configuring auth plugins"
239
+ },
240
+ "plugins-tools/claude-code-skill": {
241
+ "title": "Claude Code Integration",
242
+ "content": "Build Quickback apps faster with Claude Code. The Quickback skill gives Claude deep knowledge of security layers, patterns, and best practices—so you can describe what you want and let Claude write the configuration.\n\n## Installation\n\n### Option 1: Install globally (Recommended)\n\nInstall the skill once and use it across all your projects:\n\n```bash\nnpm install -g @kardoe/quickback-skill\n```\n\nThis installs to `~/.claude/skills/quickback/` and is automatically available in Claude Code.\n\n### Option 2: New Quickback project\n\nCreate a new Quickback project—the skill is included automatically:\n\n```bash\nnpx @kardoe/quickback create cloudflare my-app\ncd my-app\n```\n\n### Option 3: Manual installation\n\nDownload the skill directly:\n\n```bash\nmkdir -p ~/.claude/skills/quickback\ncurl -o ~/.claude/skills/quickback/SKILL.md \\\n https://raw.githubusercontent.com/kardoe/quickback/main/.claude/skills/quickback/SKILL.md\n```\n\n## What You Get\n\nWhen you install the Quickback skill, you get:\n\n### Quickback Skill\n\nClaude understands Quickback concepts and can answer questions about:\n\n- **Security layers** - Firewall, Access, Guards, Masking, Actions\n- **Common patterns** - Multi-tenant, owner-scoped, hierarchical access\n- **Best practices** - Field protection, role-based access, PII handling\n- **Database dialects** - SQLite/D1, PostgreSQL, MySQL syntax\n\n### Quickback Specialist Agent\n\nClaude also gets a specialized agent that activates automatically when you're:\n\n- Creating new resources with schemas and security configurations\n- Configuring security layers (Firewall, Access, Guards, Masking)\n- Defining actions for business logic\n- Debugging configuration issues\n\nThe agent generates complete, working code in your `definitions/features/` directory.\n\n## Usage\n\n### Let Claude help automatically\n\nJust describe what you need. Claude will use Quickback knowledge when relevant:\n\n```\n\"Create a tasks resource where users can only see their own tasks,\nbut admins can see all tasks in the organization\"\n```\n\n### Invoke directly\n\nUse `/quickback` to explicitly activate the skill:\n\n```\n/quickback How do I configure soft delete?\n```\n\n## Example Conversations\n\n### Building a New Resource\n\n**You:** \"I need a resource for invoices. Users should only see invoices from their organization. The status field should only be changeable through approve/reject actions. Mask the customer email for non-admins.\"\n\n**Claude:** Creates complete configuration with:\n- Firewall scoped to organization\n- Protected status field with approve/reject actions\n- Email masking with admin bypass\n- Appropriate guards for createable/updatable fields\n\n### Understanding Existing Code\n\n**You:** \"Explain what this firewall configuration does\"\n\n**Claude:** Breaks down the WHERE clauses, explains the security implications, and identifies potential issues.\n\n### Debugging Configuration\n\n**You:** \"My users can see records from other organizations. 
What's wrong?\"\n\n**Claude:** Analyzes your firewall setup, checks for `exception: true` or missing organization scope, and suggests fixes.\n\n## What Claude Knows\n\n### Security Layers\n\n| Layer | What Claude Helps With |\n|-------|------------------------|\n| **Firewall** | Data isolation patterns, WHERE clause generation, soft delete |\n| **Access** | Role-based permissions, record-level conditions, combining rules |\n| **Guards** | Field protection, createable vs updatable, immutable fields |\n| **Masking** | PII redaction, role-based visibility, mask types |\n| **Actions** | Custom endpoints, protected field updates, input validation |\n\n### Common Patterns\n\nClaude recognizes and can implement these patterns:\n\n- **Multi-tenant SaaS** - Organization-scoped with role hierarchy\n- **Personal data apps** - Owner-scoped resources\n- **Hierarchical access** - Admins see all, users see own\n- **Public resources** - Reference data, system tables\n- **Workflow resources** - Status fields with action-based transitions\n\n### Database Support\n\nClaude generates correct syntax for your database:\n\n```typescript\n// SQLite / Cloudflare D1\ncreatedAt: integer('created_at', { mode: 'timestamp' })\n\n// PostgreSQL / Supabase\ncreatedAt: timestamp('created_at').defaultNow()\n\n// MySQL\ncreatedAt: timestamp('created_at')\n```\n\n## Tips for Best Results\n\n**Be specific about security requirements:**\n\n```\n// Good\n\"Users can only see their own tasks. Admins can see all tasks\nin the organization. The priority field can only be set by admins.\"\n\n// Less helpful\n\"Create a tasks resource\"\n```\n\n**Describe your user types:**\n\n```\n\"We have three roles: admin (full access), manager (can approve),\nand member (can only edit their own records)\"\n```\n\n**Mention sensitive fields:**\n\n```\n\"The ssn field should be masked for everyone except HR admins\"\n```\n\n## Common Tasks\n\n### Create a complete resource\n\n```\n\"Create an employees resource for a multi-tenant HR app. Include\nfields for name, email, department, salary. Mask salary for\nnon-admins. Only HR can create/delete employees.\"\n```\n\n### Add an action to existing resource\n\n```\n\"Add an 'approve' action to the expenses resource that sets\nstatus to 'approved' and records the approver\"\n```\n\n### Configure access control\n\n```\n\"Update the projects resource so managers can edit any project\nin their organization, but members can only edit projects they created\"\n```\n\n### Set up masking\n\n```\n\"Add masking to the customers resource: email partially masked,\nphone last 4 digits only, SSN fully redacted except for finance role\"\n```\n\n## Troubleshooting\n\n### Skill not found\n\nVerify the skill is installed:\n\n```bash\nls ~/.claude/skills/quickback/SKILL.md\n```\n\nIf missing, reinstall:\n\n```bash\nnpm install -g @kardoe/quickback-skill\n```\n\n### Claude doesn't understand Quickback\n\nMake sure the skill file exists and Claude Code is restarted. You can also invoke it directly with `/quickback` to force it to load.\n\n### Generated code has errors\n\nRun `quickback compile` to validate. 
Share the error messages with Claude for fixes.\n\n## Updating the Skill\n\nTo get the latest version:\n\n```bash\nnpm update -g @kardoe/quickback-skill\n```\n\n## Resources\n\n- [Getting Started](/compiler/getting-started) - Get your first Quickback project running\n- [Definitions Overview](/compiler/definitions) - Understand the security layer model\n- [npm package](https://www.npmjs.com/package/@kardoe/quickback-skill) - Skill package\n\n## Feedback\n\nFound an issue with the Claude Code integration?\n\n- [GitHub Issues](https://github.com/kardoe/quickback/issues)\n- [Quickback Documentation](https://docs.quickback.dev)"
243
+ },
244
+ "plugins-tools": {
245
+ "title": "Plugins & Tools",
246
+ "content": "Quickback provides open-source plugins and tools that work with or without the full Quickback Stack.\n\n## Better Auth Plugins\n\nPublished under the `@kardoe/` npm scope:\n\n| Package | Description |\n|---------|-------------|\n| [@kardoe/better-auth-aws-ses](/plugins-tools/better-auth-plugins/aws-ses) | AWS SES email delivery for Better Auth |\n| [@kardoe/better-auth-combo-auth](/plugins-tools/better-auth-plugins/combo-auth) | Magic link + Email OTP combined auth flow |\n| [@kardoe/better-auth-upgrade-anonymous](/plugins-tools/better-auth-plugins/upgrade-anonymous) | Convert anonymous users to full accounts |\n\n## Developer Tools\n\n- [Claude Code Skill](/plugins-tools/claude-code-skill) — AI-powered Quickback assistance"
247
+ },
248
+ "stack/auth/api-keys": {
249
+ "title": "API Keys",
250
+ "content": "API keys provide programmatic access for CI/CD pipelines, scripts, and server-to-server communication.\n\n## Creating API Keys\n\nCreate API keys from your [Quickback account](https://account.quickback.dev/profile). Each key is scoped to your organization.\n\n## Usage\n\n```bash\nQUICKBACK_API_KEY=your_api_key quickback compile\n```\n\nAPI keys are validated by the same `/internal/validate` endpoint as session tokens. See [Authentication](/compiler/cloud-compiler/authentication) for details."
251
+ },
252
+ "stack/auth/device-auth": {
253
+ "title": "Device Authorization",
254
+ "content": "The Quickback CLI uses the [OAuth 2.0 Device Authorization Grant](https://datatracker.ietf.org/doc/html/rfc8628) (RFC 8628) for authentication in headless environments.\n\n## Flow\n\n1. CLI requests a device code from the API\n2. User sees a code in the terminal (e.g., `AUL8-H93S`)\n3. User visits the approval URL and enters the code\n4. CLI polls for approval and exchanges for a session token\n\nThis flow works in SSH sessions, containers, and WSL where browser redirects aren't possible.\n\nSee [CLI Reference](/compiler/cloud-compiler/cli#login) for usage details."
255
+ },
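For orientation, the four steps above correspond to the init, poll, and token-exchange phases of RFC 8628. The sketch below illustrates the client side of such a flow; the `/device/init`, `/device/poll`, and `/device/token` paths come from the rate-limit defaults on the security page, but their base path and the request/response field names are assumptions here, and in practice `quickback login` performs this flow for you.

```typescript
// Hedged sketch of an RFC 8628-style device flow. Paths and field names are
// illustrative assumptions; the Quickback CLI handles this automatically.
async function deviceLogin(authBase: string) {
  // 1. Request a device + user code
  const init = await fetch(`${authBase}/device/init`, { method: "POST" }).then((r) => r.json());

  // 2. Ask the user to approve in a browser
  console.log(`Visit ${init.verification_uri} and enter code: ${init.user_code}`);

  // 3. Poll until the code is approved
  let approved = false;
  while (!approved) {
    await new Promise((r) => setTimeout(r, (init.interval ?? 5) * 1000));
    const poll = await fetch(`${authBase}/device/poll`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ device_code: init.device_code }),
    }).then((r) => r.json());
    approved = poll.approved === true;
  }

  // 4. Exchange the device code for a session token
  return fetch(`${authBase}/device/token`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ device_code: init.device_code }),
  }).then((r) => r.json());
}
```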
256
+ "stack/auth": {
257
+ "title": "Auth",
258
+ "content": "Quickback Stack uses [Better Auth](https://www.better-auth.com/) for authentication, running on Cloudflare Workers with D1 as the session store.\n\n## Overview\n\nBetter Auth provides:\n- Email/password authentication\n- Session management with cookies\n- Multi-tenant organizations with roles\n- Plugin ecosystem for passwordless auth, passkeys, and more\n\n## Configuration\n\nAuth is configured in your `quickback.config.ts`:\n\n```typescript\n\nexport default defineConfig({\n name: \"my-app\",\n providers: {\n runtime: defineRuntime(\"cloudflare\"),\n database: defineDatabase(\"cloudflare-d1\"),\n auth: defineAuth(\"better-auth\", {\n emailAndPassword: { enabled: true },\n plugins: [\"emailOtp\", \"passkey\", \"magicLink\"],\n }),\n },\n});\n```\n\n## Auth Base Path\n\nAll Better Auth routes are served under:\n\n```\n/auth/v1/*\n```\n\nCommon endpoints:\n- `POST /auth/v1/sign-in/email` — Email/password sign in\n- `POST /auth/v1/sign-up/email` — Create account\n- `GET /auth/v1/get-session` — Get current session\n- `POST /auth/v1/sign-out` — Sign out\n\n## Organization Roles\n\n| Role | Description |\n|------|-------------|\n| `owner` | Full access — can delete the organization and transfer ownership |\n| `admin` | Full access — can manage members and resources, cannot delete the organization |\n| `member` | Standard access — read and limited write, cannot delete or manage members |\n\nThese are Better Auth's built-in organization roles — no configuration needed. The `creatorRole` defaults to `owner`.\n\n> **Tip:** Account UI's role picker uses these exact three roles. Use `[\"owner\", \"admin\", \"member\"]` in your [Access](/compiler/definitions/access) rules so generated projects plug into Better Auth and Account UI seamlessly.\n\nRoles are used throughout the security layers — in [Access](/compiler/definitions/access) rules, [Firewall](/compiler/definitions/firewall) owner checks, and RLS policies.\n\n## Next Steps\n\n- [Plugins](/stack/auth/plugins) — Email OTP, passkeys, magic links, and more\n- [Security](/stack/auth/security) — Cookies, rate limiting, cross-domain auth"
259
+ },
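The common endpoints listed above are standard Better Auth routes served under the `/auth/v1` base path. A minimal email/password sign-in from a browser client might look like the following sketch; the host and credentials are placeholders, and the request body shape follows Better Auth's email sign-in endpoint:

```typescript
// Sketch: email/password sign-in against the documented /auth/v1 base path.
const res = await fetch("https://api.example.com/auth/v1/sign-in/email", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  credentials: "include", // let the browser store the session cookie
  body: JSON.stringify({ email: "user@example.com", password: "a-strong-password" }),
});

if (!res.ok) throw new Error(`sign-in failed: ${res.status}`);

// Subsequent requests can confirm the session:
const session = await fetch("https://api.example.com/auth/v1/get-session", {
  credentials: "include",
}).then((r) => r.json());
```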
260
+ "stack/auth/plugins": {
261
+ "title": "Auth Plugins",
262
+ "content": "Quickback ships with a curated set of Better Auth plugins and helper wiring so your auth stack is production-ready by default. Enable these in `quickback.config.ts` under `providers.auth`.\n\n## Available Plugins\n\n### Email OTP (with AWS SES)\n\nQuickback wires the Better Auth Email OTP flow to AWS SES. When `emailOtp` is enabled, the compiler emits the SES plugin and combo-auth email flow (magic link + OTP).\n\n**Enable in config:**\n\n```ts\n\nexport default defineConfig({\n name: \"quickback-api\",\n providers: {\n runtime: defineRuntime(\"cloudflare\"),\n database: defineDatabase(\"cloudflare-d1\", {\n vars: {\n AWS_SES_REGION: \"us-east-2\",\n EMAIL_FROM: \"noreply@yourdomain.com\",\n EMAIL_FROM_NAME: \"Your App | Account Services\",\n APP_NAME: \"Your App\",\n APP_URL: \"https://account.yourdomain.com\",\n BETTER_AUTH_URL: \"https://api.yourdomain.com\",\n },\n }),\n auth: defineAuth(\"better-auth\", {\n emailAndPassword: { enabled: true },\n plugins: [\"emailOtp\"],\n }),\n },\n});\n```\n\n**Endpoints:**\n- `POST /auth/v1/email-otp/send-verification-otp`\n- `POST /auth/v1/email-otp/verify-otp`\n\n**Email readiness check:**\n\nUse this to show UI warnings when SES isn't configured.\n\n- `GET /api/v1/system/email-status`\n- Response: `{ \"emailConfigured\": true|false }`\n\n### Upgrade Anonymous\n\nAdds a first-class endpoint to convert an anonymous user into a full user. This flips `isAnonymous` to `false` and refreshes the session cache immediately.\n\n**Enable in config:**\n\n```ts\nauth: defineAuth(\"better-auth\", {\n emailAndPassword: { enabled: true },\n plugins: [\"anonymous\", \"upgradeAnonymous\"],\n}),\n```\n\n**Endpoint:**\n- `POST /auth/v1/upgrade-anonymous`\n\n**Note:** If you send `Content-Type: application/json`, include a body (an empty `{}` is fine). This avoids request parsing errors in some clients.\n\n### AWS SES Plugin\n\nThis is a Quickback-provided Better Auth plugin used by Email OTP. It handles SES signing and delivery and is auto-included when `emailOtp` is enabled.\n\nRequired vars (use Wrangler secrets for credentials):\n- `AWS_ACCESS_KEY_ID` (secret)\n- `AWS_SECRET_ACCESS_KEY` (secret)\n- `AWS_SES_REGION`\n- `EMAIL_FROM`\n\nOptional vars:\n- `EMAIL_FROM_NAME`\n- `EMAIL_REPLY_TO` - Reply-to address for emails (defaults to `EMAIL_FROM`)\n- `APP_NAME`\n- `APP_URL`\n- `BETTER_AUTH_URL`\n\n### Magic Links\n\nMagic links provide passwordless authentication by sending a unique login link to the user's email. When clicked, the link authenticates the user without requiring a password.\n\n**Enable in config:**\n\n```ts\nauth: defineAuth(\"better-auth\", {\n plugins: [\"magicLink\"],\n}),\n```\n\n**Endpoints:**\n- `POST /auth/v1/magic-link/send` - Send magic link email\n- `GET /auth/v1/magic-link/verify` - Verify magic link token\n\n**Email customization:**\n\nMagic link emails use the same AWS SES configuration as Email OTP. 
Customize the email template via environment variables:\n\n- `APP_NAME` - Application name in email subject\n- `APP_URL` - Base URL for magic link redirect\n- `EMAIL_FROM` - Sender email address\n- `EMAIL_FROM_NAME` - Sender display name\n\n**Frontend integration:**\n\n```ts\n// Request magic link\nawait fetch('/auth/v1/magic-link/send', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ email: 'user@yourdomain.com' })\n});\n\n// User clicks link in email, which redirects to your app\n// The token is verified automatically via the callback URL\n```\n\n### Passkeys\n\nPasskeys provide passwordless authentication using WebAuthn/FIDO2. Users authenticate with biometrics (fingerprint, face) or device PIN instead of passwords.\n\n**Enable in config:**\n\n```ts\nauth: defineAuth(\"better-auth\", {\n plugins: [\"passkey\"],\n}),\n```\n\n**Required environment variable:**\n- `ACCOUNT_URL` - Your account/frontend URL (used as the relying party origin for WebAuthn)\n\n**Endpoints:**\n- `POST /auth/v1/passkey/register/options` - Get registration challenge\n- `POST /auth/v1/passkey/register/verify` - Complete registration\n- `POST /auth/v1/passkey/authenticate/options` - Get authentication challenge\n- `POST /auth/v1/passkey/authenticate/verify` - Complete authentication\n- `GET /auth/v1/passkey/list-user-passkeys` - List user's registered passkeys\n- `POST /auth/v1/passkey/delete-passkey` - Delete a registered passkey\n\n**Browser support:**\n\nPasskeys are supported in all modern browsers:\n- Chrome 67+\n- Safari 14+\n- Firefox 60+\n- Edge 79+\n\n**Registration flow:**\n\n```ts\n// 1. Get registration options\nconst optionsRes = await fetch('/auth/v1/passkey/register/options', {\n method: 'POST',\n credentials: 'include'\n});\nconst options = await optionsRes.json();\n\n// 2. Create credential using WebAuthn API\nconst credential = await navigator.credentials.create({\n publicKey: options\n});\n\n// 3. Send credential to server\nawait fetch('/auth/v1/passkey/register/verify', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n credentials: 'include',\n body: JSON.stringify(credential)\n});\n```\n\n**Authentication flow:**\n\n```ts\n// 1. Get authentication options\nconst optionsRes = await fetch('/auth/v1/passkey/authenticate/options', {\n method: 'POST'\n});\nconst options = await optionsRes.json();\n\n// 2. Get credential using WebAuthn API\nconst credential = await navigator.credentials.get({\n publicKey: options\n});\n\n// 3. Verify credential\nawait fetch('/auth/v1/passkey/authenticate/verify', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n credentials: 'include',\n body: JSON.stringify(credential)\n});\n```\n\n## Where This Runs\n\nAll Better Auth plugin routes are served under your auth base path:\n\n```\n/auth/v1/*\n```"
263
+ },
264
+ "stack/auth/security": {
265
+ "title": "Authentication Security",
266
+ "content": "Quickback enforces secure authentication defaults out of the box. This page explains how cookie security, rate limiting, and cross-domain authentication work, plus how to configure them for your use case.\n\n## Cookie Security\n\nQuickback sets secure cookie defaults to protect against CSRF, XSS, and man-in-the-middle attacks.\n\n### Default Attributes\n\n```ts\nauth: defineAuth(\"better-auth\", {\n advanced: {\n defaultCookieAttributes: {\n sameSite: 'lax', // CSRF protection\n secure: true, // HTTPS only\n httpOnly: true, // No JavaScript access\n }\n }\n})\n```\n\n**What each attribute does:**\n\n| Attribute | Value | Purpose |\n|-----------|-------|---------|\n| `sameSite` | `lax` | Blocks cookies on cross-site POST requests (CSRF protection) |\n| `secure` | `true` | Cookies only sent over HTTPS (prevents interception) |\n| `httpOnly` | `true` | JavaScript cannot access cookies via `document.cookie` (XSS mitigation) |\n\n### SameSite Options\n\n- **`lax` (recommended)**: Blocks cross-site POST requests but allows top-level navigation. Best balance of security and usability.\n- **`strict`**: Blocks cookies on all cross-site requests. More secure but may break legitimate flows (e.g., returning from payment provider).\n- **`none`**: Allows all cross-site requests. Required for cross-domain authentication (requires `secure: true`).\n\n### Development Override\n\nFor local development over HTTP, you can relax the `secure` flag:\n\n```ts\nauth: defineAuth(\"better-auth\", {\n advanced: {\n defaultCookieAttributes: {\n sameSite: 'lax',\n secure: process.env.NODE_ENV === 'production',\n httpOnly: true,\n }\n }\n})\n```\n\n**Warning**: Never use `secure: false` in production.\n\n---\n\n## Rate Limiting\n\nQuickback includes per-IP rate limiting by default to protect against brute-force attacks and abuse.\n\n### How It Works\n\n**Rate limits are applied per IP address**, not globally:\n\n```\n┌─────────────────────────────────────────────────┐\n│ IP: 192.168.1.1 │\n│ Endpoint: /sign-in/email │\n│ Limit: 10 requests per 60 seconds │\n│ │\n│ Request 1-10: ✅ Allowed │\n│ Request 11: ❌ 429 Too Many Requests │\n│ After 60s: Counter resets to 0 │\n└─────────────────────────────────────────────────┘\n\n┌─────────────────────────────────────────────────┐\n│ IP: 192.168.1.2 (different IP) │\n│ Endpoint: /sign-in/email │\n│ Independent counter: Not affected by 1.1 │\n│ Starts at 0 │\n└─────────────────────────────────────────────────┘\n```\n\n### Default Configuration\n\n```ts\nauth: defineAuth(\"better-auth\", {\n rateLimit: {\n enabled: true,\n window: 60, // 60 second window\n max: 100, // 100 requests per IP per window\n customRules: {\n \"/sign-in/email\": { window: 60, max: 10 },\n \"/sign-in/social\": { window: 60, max: 10 },\n \"/device/init\": { window: 60, max: 5 },\n \"/device/poll\": { window: 60, max: 30 },\n \"/device/token\": { window: 60, max: 10 },\n }\n }\n})\n```\n\n### Custom Rules by Endpoint\n\nSet different limits based on endpoint sensitivity:\n\n| Endpoint Type | Recommended Limit | Reasoning |\n|---------------|-------------------|-----------|\n| Sign-in | 10/min per IP | Prevent credential stuffing |\n| Device codes | 5/min per IP | Prevent device code enumeration |\n| Token exchange | 10/min per IP | Prevent token brute-force |\n| Polling | 30/min per IP | Allow reasonable polling (every 2s) |\n| General API | 100/min per IP | Balance abuse prevention and usability |\n\n### Why Per-IP?\n\n✅ **Per-IP** (what Quickback uses):\n- Prevents single attacker from 
brute-forcing\n- Doesn't penalize legitimate users when one IP is malicious\n- Standard for auth endpoints\n\n❌ **Global total** (not used):\n- Would allow distributed attacks to succeed\n- One attacker with many IPs could exhaust the limit for everyone\n- Not effective for security\n\n### Disabling Rate Limiting\n\nNot recommended, but you can disable for testing:\n\n```ts\nrateLimit: {\n enabled: process.env.NODE_ENV === 'production',\n window: 60,\n max: 1000, // Higher limit for development\n}\n```\n\n---\n\n## Cross-Domain Authentication\n\nFor applications spanning multiple subdomains (e.g., `account.example.com`, `api.example.com`, `dashboard.example.com`), Quickback provides dual-layer security.\n\n### The Problem\n\nBrowsers restrict cookies to exact domains by default. If your auth API is at `api.example.com`, cookies won't be sent to `dashboard.example.com`.\n\n### The Solution: Dual-Layer Protection\n\nQuickback uses two complementary security layers:\n\n#### 1. Cookie Domain (Browser-Level)\n\n```ts\nauth: defineAuth(\"better-auth\", {\n advanced: {\n crossSubDomainCookies: {\n enabled: true,\n domain: 'example.com' // All *.example.com can access cookies\n }\n }\n})\n```\n\n**Behavior**: Allows ALL subdomains to access the cookie (browser limitation per RFC 6265)\n- Cannot selectively allow only specific subdomains\n- All-or-nothing: either exact domain only, or all subdomains\n\n#### 2. Trusted Origins (Application-Level)\n\n```ts\nauth: defineAuth(\"better-auth\", {\n trustedOrigins: [\n 'https://account.example.com',\n 'https://api.example.com',\n 'https://dashboard.example.com'\n ]\n})\n```\n\n**Behavior**: Better Auth validates request origins before processing auth operations\n- Granular whitelist of allowed origins\n- Blocks requests from untrusted subdomains even if they have the cookie\n- Prevents CSRF and unauthorized auth operations\n\n### Security Model\n\n```\n┌─────────────────────────────────────────────────┐\n│ Subdomain: evil.example.com │\n│ Has Cookie: ✓ (via crossSubDomainCookies) │\n│ In trustedOrigins: ✗ │\n│ Result: REQUEST BLOCKED by Better Auth │\n└─────────────────────────────────────────────────┘\n\n┌─────────────────────────────────────────────────┐\n│ Subdomain: account.example.com │\n│ Has Cookie: ✓ (via crossSubDomainCookies) │\n│ In trustedOrigins: ✓ │\n│ Result: REQUEST ALLOWED │\n└─────────────────────────────────────────────────┘\n```\n\n### Full Cross-Domain Configuration\n\n```ts\n\nexport default defineConfig({\n name: \"my-saas\",\n providers: {\n runtime: defineRuntime(\"cloudflare\"),\n database: defineDatabase(\"cloudflare-d1\", {\n vars: {\n ACCOUNT_URL: \"https://account.example.com\",\n BETTER_AUTH_URL: \"https://api.example.com\",\n }\n }),\n auth: defineAuth(\"better-auth\", {\n emailAndPassword: { enabled: true },\n plugins: [\"organization\"],\n\n // Trusted origins (granular whitelist)\n trustedOrigins: [\n \"https://account.example.com\",\n \"https://api.example.com\",\n \"https://dashboard.example.com\",\n \"http://localhost:3000\", // Development\n ],\n\n // Cookie configuration\n advanced: {\n crossSubDomainCookies: {\n enabled: true,\n domain: \"example.com\",\n },\n defaultCookieAttributes: {\n secure: true,\n sameSite: \"none\", // Required for cross-domain\n httpOnly: true,\n }\n }\n })\n }\n})\n```\n\n**Important**: When using `sameSite: \"none\"`, you must also set `secure: true`. 
This is required by browsers.\n\n---\n\n## Environment Variables\n\nSet these in your Cloudflare Workers environment or `.env` file:\n\n### Required\n\n```bash\n# Better Auth secret (minimum 32 characters)\nBETTER_AUTH_SECRET=your-secret-key-min-32-chars\n\n# Base URL for Better Auth endpoints\nBETTER_AUTH_URL=https://api.example.com\n```\n\n### Optional (Cross-Domain)\n\n```bash\n# Enable cross-subdomain cookies\nCROSS_SUBDOMAIN_COOKIES=true\nCOOKIE_DOMAIN=example.com\n\n# Trusted origins (comma-separated)\nTRUSTED_ORIGINS=https://account.example.com,https://api.example.com\n\n# CORS allowed origins (typically matches trustedOrigins)\nALLOWED_ORIGINS=https://account.example.com,https://api.example.com\n```\n\n### Setting Secrets with Wrangler\n\nFor sensitive values like `BETTER_AUTH_SECRET`:\n\n```bash\nwrangler secret put BETTER_AUTH_SECRET\n# Paste your secret when prompted\n```\n\nFor non-secret vars, add to `wrangler.toml`:\n\n```toml\n[vars]\nBETTER_AUTH_URL = \"https://api.example.com\"\nCROSS_SUBDOMAIN_COOKIES = \"false\"\n```\n\n---\n\n## Security Best Practices\n\n### Single Domain (Recommended)\n\nFor most applications, use exact domain cookies:\n\n```ts\nauth: defineAuth(\"better-auth\", {\n trustedOrigins: [\"https://app.example.com\"],\n advanced: {\n crossSubDomainCookies: {\n enabled: false // Default: exact domain only\n },\n defaultCookieAttributes: {\n sameSite: 'lax',\n secure: true,\n httpOnly: true,\n }\n }\n})\n```\n\n**Benefits**:\n- Maximum security (smallest cookie scope)\n- No cross-subdomain attack surface\n- Simpler configuration\n\n### Multi-Subdomain (When Needed)\n\nOnly enable cross-subdomain cookies if you need SSO across multiple subdomains:\n\n```ts\nauth: defineAuth(\"better-auth\", {\n trustedOrigins: [\n \"https://account.example.com\",\n \"https://dashboard.example.com\",\n ],\n advanced: {\n crossSubDomainCookies: {\n enabled: true,\n domain: \"example.com\",\n },\n defaultCookieAttributes: {\n sameSite: 'none', // Required for cross-domain\n secure: true,\n httpOnly: true,\n }\n }\n})\n```\n\n**Security requirements**:\n- Trust all subdomains equally (any compromised subdomain can steal cookies)\n- Use separate root domains for untrusted services\n- Always set `trustedOrigins` for granular control\n\n### CORS vs Trusted Origins\n\nThese serve different purposes and should typically match:\n\n| Setting | Purpose |\n|---------|---------|\n| `ALLOWED_ORIGINS` (CORS) | Controls which origins can make credentialed requests |\n| `trustedOrigins` (Better Auth) | Validates origins for auth operations + prevents open redirects |\n\n**Example**:\n\n```ts\ndatabase: defineDatabase(\"cloudflare-d1\", {\n vars: {\n ALLOWED_ORIGINS: \"https://account.example.com,https://api.example.com\"\n }\n}),\nauth: defineAuth(\"better-auth\", {\n trustedOrigins: [\n \"https://account.example.com\",\n \"https://api.example.com\"\n ]\n})\n```\n\n### Checklist\n\nBefore deploying to production:\n\n- [ ] `BETTER_AUTH_SECRET` is set (minimum 32 random characters)\n- [ ] `secure: true` is enabled (HTTPS only)\n- [ ] `httpOnly: true` is enabled (XSS protection)\n- [ ] `sameSite: 'lax'` or `'strict'` unless cross-domain required\n- [ ] `trustedOrigins` includes only your domains\n- [ ] `ALLOWED_ORIGINS` matches `trustedOrigins`\n- [ ] Rate limiting is enabled (`rateLimit.enabled: true`)\n- [ ] Custom rate limits set for sensitive endpoints\n\n---\n\n## Deployment Considerations\n\n### Behind Proxy/Load Balancer\n\nIf your app is behind a proxy, ensure IP address headers are 
trusted:\n\n```ts\nauth: defineAuth(\"better-auth\", {\n autoDetectIpAddress: true, // Uses X-Forwarded-For or CF-Connecting-IP\n})\n```\n\n**Cloudflare Workers**: IP automatically available via `CF-Connecting-IP` header\n\n**Other proxies**: Configure to trust `X-Forwarded-For` or `X-Real-IP`\n\n### Multiple Environments\n\nUse environment variables to configure different settings per environment:\n\n```ts\nauth: defineAuth(\"better-auth\", {\n trustedOrigins: process.env.TRUSTED_ORIGINS?.split(',') || [],\n advanced: {\n crossSubDomainCookies: {\n enabled: process.env.CROSS_SUBDOMAIN_COOKIES === 'true',\n domain: process.env.COOKIE_DOMAIN,\n },\n defaultCookieAttributes: {\n sameSite: process.env.COOKIE_SAMESITE || 'lax',\n secure: process.env.NODE_ENV === 'production',\n httpOnly: true,\n }\n }\n})\n```\n\n**Development** (`.dev.vars`):\n```bash\nTRUSTED_ORIGINS=http://localhost:3000,http://localhost:5173\nCROSS_SUBDOMAIN_COOKIES=false\nCOOKIE_SAMESITE=lax\n```\n\n**Production** (Wrangler secrets):\n```bash\nTRUSTED_ORIGINS=https://account.example.com,https://api.example.com\nCROSS_SUBDOMAIN_COOKIES=true\nCOOKIE_DOMAIN=example.com\nCOOKIE_SAMESITE=none\n```\n\n---\n\n## Further Reading\n\n- [Better Auth: Cookies Documentation](https://www.better-auth.com/docs/concepts/cookies)\n- [Better Auth: Security Guide](https://www.better-auth.com/docs/reference/security)\n- [OWASP: Cross-Site Request Forgery Prevention](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html)\n- [MDN: SameSite Cookies](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#samesitesamesite-value)"
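Because every rule above returns `429 Too Many Requests` once a per-IP window is exhausted, clients should treat that status as retryable. The sketch below is an illustrative fetch wrapper with backoff; whether the response carries a `Retry-After`-style header is an assumption, so it falls back to a fixed delay when none is present.

```ts
// Illustrative client-side backoff for rate-limited auth endpoints.
// The Retry-After / X-Retry-After headers are assumptions; a fixed 5s delay
// is used when neither is present or parsable.
export async function fetchWithBackoff(
  input: RequestInfo | URL,
  init?: RequestInit,
  maxAttempts = 3,
): Promise<Response> {
  for (let attempt = 1; ; attempt++) {
    const res = await fetch(input, init);
    if (res.status !== 429 || attempt >= maxAttempts) return res;

    const header = res.headers.get('Retry-After') ?? res.headers.get('X-Retry-After');
    const parsed = header ? Number(header) : NaN;
    const delaySeconds = Number.isFinite(parsed) && parsed > 0 ? parsed : 5;
    await new Promise((resolve) => setTimeout(resolve, delaySeconds * 1000));
  }
}
```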
267
+ },
268
+ "stack/auth/using-auth": {
269
+ "title": "Using Auth",
270
+ "content": "This page covers how to use Better Auth in your Quickback Stack application — sign-in flows, session management, and working with organization context.\n\n## Sign-In Flows\n\nBetter Auth supports multiple sign-in methods:\n\n- **Email/Password** — Traditional credential-based authentication\n- **Email OTP** — One-time passwords sent via email\n- **Magic Links** — Passwordless email links\n- **Passkeys** — WebAuthn biometric/hardware key authentication\n\n## Session Handling\n\nSessions are stored in Cloudflare KV for fast edge-based lookups.\n\n```typescript\n// Access the current session in a Hono route\napp.get('/api/me', async (c) => {\n const session = c.get('session');\n const user = c.get('user');\n return c.json({ user, session });\n});\n```\n\n## Organization Context\n\nWhen organizations are enabled, the current organization is available in the request context:\n\n```typescript\nconst orgId = c.get('session')?.activeOrganizationId;\n```\n\n## Related\n\n- [Auth Overview](/stack/auth) — Setup and configuration\n- [Auth Plugins](/stack/auth/plugins) — Available plugins\n- [Auth Security](/stack/auth/security) — Security best practices"
271
+ },
272
+ "stack/database/d1": {
273
+ "title": "D1 Database",
274
+ "content": "Cloudflare D1 is SQLite at the edge. Quickback uses D1 as the primary database for Cloudflare deployments, with a multi-database pattern for separation of concerns.\n\n## What is D1?\n\nD1 is Cloudflare's serverless SQL database built on SQLite:\n\n- **Edge-native** - Runs in Cloudflare's global network\n- **SQLite compatible** - Use familiar SQL syntax\n- **Zero configuration** - No connection strings or pooling\n- **Automatic replication** - Read replicas at every edge location\n\n## Multi-Database Pattern\n\nQuickback generates separate D1 databases for different concerns:\n\n| Database | Binding | Purpose |\n|----------|---------|---------|\n| `AUTH_DB` | `AUTH_DB` | Better Auth tables (user, session, account) |\n| `DB` | `DB` | Your application data |\n| `FILES_DB` | `FILES_DB` | File metadata for R2 uploads |\n| `WEBHOOKS_DB` | `WEBHOOKS_DB` | Webhook delivery tracking |\n\nThis separation provides:\n- **Independent scaling** - Auth traffic doesn't affect app queries\n- **Isolation** - Auth schema changes don't touch your data\n- **Clarity** - Clear ownership of each database\n\n## Drizzle ORM Integration\n\nQuickback uses [Drizzle ORM](https://orm.drizzle.team/) for type-safe database access:\n\n```ts\n// schema/tables.ts - Your schema definition\n\nexport const posts = sqliteTable(\"posts\", {\n id: text(\"id\").primaryKey(),\n title: text(\"title\").notNull(),\n content: text(\"content\"),\n authorId: text(\"author_id\").notNull(),\n createdAt: integer(\"created_at\", { mode: \"timestamp\" }),\n});\n```\n\nThe compiler generates Drizzle queries based on your security rules:\n\n```ts\n// Generated query with firewall applied\nconst result = await db\n .select()\n .from(posts)\n .where(eq(posts.authorId, userId)); // Firewall injects ownership\n```\n\n## Migrations\n\nQuickback generates migrations automatically at compile time based on your schema changes:\n\n```bash\n# Compile your project (generates migrations)\nquickback compile\n\n# Apply migrations locally\nwrangler d1 migrations apply DB --local\n\n# Apply to production D1\nwrangler d1 migrations apply DB --remote\n```\n\nMigration files are generated in `drizzle/migrations/` and version-controlled with your code. You never need to manually generate migrations—just define your schema and compile.\n\n## wrangler.toml Bindings\n\nConfigure D1 bindings in `wrangler.toml`:\n\n```toml\n[[d1_databases]]\nbinding = \"DB\"\ndatabase_name = \"my-app-db\"\ndatabase_id = \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n\n[[d1_databases]]\nbinding = \"AUTH_DB\"\ndatabase_name = \"my-app-auth\"\ndatabase_id = \"yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy\"\n\n[[d1_databases]]\nbinding = \"FILES_DB\"\ndatabase_name = \"my-app-files\"\ndatabase_id = \"zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz\"\n```\n\nCreate databases via Wrangler:\n\n```bash\nwrangler d1 create my-app-db\nwrangler d1 create my-app-auth\nwrangler d1 create my-app-files\n```\n\n## Accessing Data via API\n\nAll data access in Quickback goes through the generated API endpoints—never direct database queries. 
This ensures security rules (firewall, access, guards, masking) are always enforced.\n\n### CRUD Operations\n\n```bash\n# List posts (firewall automatically filters by ownership)\nGET /api/v1/posts\n\n# Get a single post\nGET /api/v1/posts/:id\n\n# Create a post (guards validate allowed fields)\nPOST /api/v1/posts\n{ \"title\": \"Hello World\", \"content\": \"...\" }\n\n# Update a post (guards validate updatable fields)\nPATCH /api/v1/posts/:id\n{ \"title\": \"Updated Title\" }\n\n# Delete a post\nDELETE /api/v1/posts/:id\n```\n\n### Filtering and Pagination\n\n```bash\n# Filter by field\nGET /api/v1/posts?status=published\n\n# Pagination\nGET /api/v1/posts?limit=10&offset=20\n\n# Sort\nGET /api/v1/posts?sort=createdAt&order=desc\n```\n\n### Why No Direct Database Access?\n\nDirect database queries bypass Quickback's security layers:\n- **Firewall** - Data isolation by user/org/team\n- **Access** - Role-based permissions\n- **Guards** - Field-level create/update restrictions\n- **Masking** - Sensitive data redaction\n\nAlways use the API endpoints. For custom business logic, use [Actions](/compiler/definitions/actions).\n\n## Local Development\n\nD1 works locally with Wrangler:\n\n```bash\n# Start local dev server with D1\nwrangler dev\n\n# D1 data persists in .wrangler/state/\n```\n\nLocal D1 uses SQLite files in `.wrangler/state/v3/d1/`, which you can inspect with any SQLite client.\n\n## Security Architecture\n\nD1 uses application-layer security that is as secure as Supabase RLS when using Quickback-generated code. The key difference is where enforcement happens.\n\n### How Security Works\n\n| Component | Enforcement | Notes |\n|-----------|-------------|-------|\n| CRUD endpoints | ✅ Firewall auto-applied | All generated routes enforce security |\n| Actions | ✅ Firewall auto-applied | Both standalone and record-based |\n| Manual routes | ⚠️ Must apply firewall | Use `withFirewall` helper |\n\n### Why D1 is Secure\n\n1. **No external database access** - D1 can only be queried through your Worker. There's no connection string or external endpoint.\n2. **Generated code enforces rules** - All CRUD and Action endpoints automatically apply firewall, access, guards, and masking.\n3. 
**Single entry point** - Every request flows through your API where security is enforced.\n\nUnlike Supabase where PostgreSQL RLS provides database-level enforcement, D1's security comes from architecture: the database is inaccessible except through your Worker, and all generated routes apply the four security pillars.\n\n### Comparison with Supabase RLS\n\n| Scenario | Supabase | D1 |\n|----------|----------|-----|\n| CRUD endpoints | ✅ Secure (RLS + App) | ✅ Secure (App) |\n| Actions | ✅ Secure (RLS + App) | ✅ Secure (App) |\n| Manual routes | ✅ RLS still protects | ⚠️ Must apply firewall |\n| External DB access | ⚠️ Possible with credentials | ✅ Not possible |\n| Dashboard queries | Via Supabase Studio | ⚠️ Admin only (audit logged) |\n\n### Writing Manual Routes\n\nIf you write custom routes outside of Quickback compilation (e.g., custom reports, integrations), use the generated `withFirewall` helper to ensure security:\n\n```ts\n\napp.get('/reports/monthly', async (c) => {\n return withFirewall(c, async (ctx, firewall) => {\n const results = await db.select()\n .from(invoices)\n .where(firewall);\n return c.json(results);\n });\n});\n```\n\nThe `withFirewall` helper:\n- Validates authentication\n- Builds the correct WHERE conditions for the current user/org\n- Returns 401 if not authenticated\n\n### Best Practices\n\n1. **Use generated endpoints** - Prefer CRUD and Actions over manual routes\n2. **Always apply firewall** - When writing manual routes, always use `withFirewall`\n3. **Avoid raw SQL** - Raw SQL bypasses application security; use Drizzle ORM\n4. **Review custom code** - Manual routes should be code-reviewed for security\n\n## Limitations\n\nD1 is built on SQLite, which means:\n\n- **No stored procedures** - Business logic lives in your Worker\n- **Single-writer** - One write connection at a time (reads scale horizontally)\n- **Size limits** - 10GB per database (free tier: 500MB)\n- **No PostgreSQL extensions** - Use Supabase if you need PostGIS, etc.\n\nFor most applications, these limits are non-issues. D1's edge distribution and zero-config setup outweigh the constraints."
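As a quick illustration of the API-only access model, here is a hedged client-side sketch against the generated CRUD endpoints shown earlier on this page. The paths and query parameters come from this page; the base URL and response shape are assumptions to adapt to your deployment.

```ts
// Hedged sketch of consuming the generated CRUD endpoints from a client.
// Base URL and response shape are assumptions; paths and params come from this page.
const API_BASE = 'https://api.yourdomain.com/api/v1';

export async function listPosts(limit = 10, offset = 0): Promise<unknown> {
  const params = new URLSearchParams({
    limit: String(limit),
    offset: String(offset),
    sort: 'createdAt',
    order: 'desc',
  });
  const res = await fetch(`${API_BASE}/posts?${params}`, { credentials: 'include' });
  if (!res.ok) throw new Error(`List failed: ${res.status}`);
  return res.json();
}

export async function createPost(title: string, content: string): Promise<unknown> {
  const res = await fetch(`${API_BASE}/posts`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ title, content }),
  });
  if (!res.ok) throw new Error(`Create failed: ${res.status}`);
  return res.json();
}
```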
275
+ },
276
+ "stack/database": {
277
+ "title": "Database",
278
+ "content": "The Quickback Stack supports multiple database providers. Choose the one that fits your deployment target.\n\n## Providers\n\n| Provider | Best For | Features |\n|----------|----------|----------|\n| [Cloudflare D1](/stack/database/d1) | Edge-first apps | SQLite at the edge, zero-latency reads |\n| [Neon](/stack/database/neon) | Postgres apps | Serverless Postgres, branching, autoscaling |\n\n## Getting Started\n\n- **[D1 Setup](/stack/database/d1)** — Bindings, wrangler config, migrations\n- **[Using D1](/stack/database/using-d1)** — Drizzle queries, dual-DB mode, schema patterns\n- **[Neon](/stack/database/neon)** — Serverless Postgres setup and usage"
279
+ },
280
+ "stack/database/neon": {
281
+ "title": "Neon",
282
+ "content": "Neon provides serverless PostgreSQL that works with Cloudflare Workers via HTTP connections.\n\n## Configuration\n\n```typescript\ndatabase: defineDatabase(\"neon\", {\n connectionMode: 'auto',\n pooled: true,\n})\n```\n\n## Connection Modes\n\n- **HTTP Mode** — For Cloudflare Workers and edge functions (stateless)\n- **WebSocket Mode** — For Node.js and Bun (persistent connections)\n\nSee [Neon Integration](/compiler/integrations/neon) for the complete setup guide and RLS policy details."
283
+ },
284
+ "stack/database/using-d1": {
285
+ "title": "Using D1",
286
+ "content": "This page covers how to use Cloudflare D1 with Drizzle ORM in your Quickback Stack application.\n\n## Drizzle Queries\n\nThe generated API uses Drizzle ORM for all database operations:\n\n```typescript\n\n// Select\nconst allRooms = await db.select().from(rooms);\n\n// Where clause\nconst room = await db.select().from(rooms).where(eq(rooms.id, id));\n\n// Insert\nawait db.insert(rooms).values({ name: 'Conference A' });\n```\n\n## Dual-DB Mode\n\nQuickback supports separate databases for auth and application data, useful for multi-tenant SaaS apps.\n\n## Migrations\n\nMigrations are generated by the compiler using `drizzle-kit generate`. Apply them with:\n\n```bash\nnpx wrangler d1 migrations apply my-db\n```\n\n## Related\n\n- [D1 Setup](/stack/database/d1) — Bindings and wrangler config\n- [Schema](/compiler/definitions/schema) — Defining your database schema"
287
+ },
288
+ "stack": {
289
+ "title": "Quickback Stack",
290
+ "content": "Quickback Stack is the production-ready Cloudflare + Better Auth integration that runs entirely on YOUR Cloudflare account.\n\n## What is Quickback Stack?\n\nWhile the [Quickback Compiler](/compiler) transforms your definitions into deployable code, Quickback Stack is the runtime environment where that code runs. It's a complete backend architecture built on Cloudflare's edge platform:\n\n- **Your account, your data** — Everything runs on your Cloudflare account\n- **Edge-first** — Global distribution with sub-50ms latency worldwide\n- **Integrated services** — D1, R2, KV, Workers, and Durable Objects working together\n- **Production-ready auth** — Better Auth with plugins for every use case\n\n## Architecture Overview\n\n```\n┌─────────────────────────────────────────────────────────────┐\n│ Cloudflare Edge │\n├─────────────────────────────────────────────────────────────┤\n│ │\n│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │\n│ │ Workers │ │ Durable │ │ KV │ │\n│ │ (API) │ │ Objects │ │ (Cache) │ │\n│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │\n│ │ │ │ │\n│ ┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐ │\n│ │ D1 │ │ Realtime │ │ Sessions │ │\n│ │ (SQLite) │ │ (WebSocket)│ │ (Auth) │ │\n│ └─────────────┘ └─────────────┘ └─────────────┘ │\n│ │\n│ ┌─────────────┐ ┌─────────────────────────────────┐ │\n│ │ R2 │ │ Better Auth │ │\n│ │ (Storage) │ │ (Email OTP, Passkeys, etc.) │ │\n│ └─────────────┘ └─────────────────────────────────┘ │\n│ │\n└─────────────────────────────────────────────────────────────┘\n```\n\n## Stack Components\n\n| Component | Service | Purpose |\n|-----------|---------|---------|\n| [Auth](/stack/auth) | Better Auth | Authentication, sessions, organizations |\n| [D1 Database](/stack/database/d1) | Cloudflare D1 | SQLite at the edge for your data |\n| [File Storage](/stack/storage/r2) | Cloudflare R2 | S3-compatible object storage |\n| [KV Storage](/stack/storage/kv) | Workers KV | Key-value for sessions and cache |\n| [Realtime](/stack/realtime/durable-objects) | Durable Objects | WebSocket connections for live updates |\n| [Embeddings](/stack/vector/embeddings) | Workers AI | Auto-generated vector embeddings |\n| [Queues](/stack/queues/handlers) | Cloudflare Queues | Background job processing |\n\n## What You Own\n\nEverything in Quickback Stack runs on your Cloudflare account:\n\n- **Databases** — Your D1 instances, your data\n- **Storage** — Your R2 buckets, your files\n- **Workers** — Your deployments, your logs\n- **Secrets** — Your API keys, stored in your Wrangler secrets\n\nQuickback provides the architecture and code generation. You own the infrastructure.\n\n## Quick Start\n\n1. **Cloudflare Account** — Sign up at [cloudflare.com](https://cloudflare.com)\n2. **Wrangler CLI** — `npm install -g wrangler`\n3. **Quickback CLI** — `npm install -g @kardoe/quickback`\n4. **Create Project** — `quickback create cloudflare my-app`\n5. **Compile** — `quickback compile`\n6. **Deploy** — `wrangler deploy`"
291
+ },
292
+ "stack/queues/handlers": {
293
+ "title": "Custom Queue Handlers",
294
+ "content": "Quickback lets you define custom queue handlers that integrate seamlessly with the generated queue consumer. This is useful for background processing like material extraction pipelines, batch jobs, and async workflows.\n\n## Overview\n\nCustom queue handlers are defined in the `services/queues/` directory using the `defineQueue` helper. The compiler extracts your handler logic and generates a unified queue consumer that dispatches messages based on their type.\n\n```\nquickback/\n├── definitions/\n│ ├── features/ # Data layer (CRUD, actions, security)\n│ │ ├── materials/\n│ │ └── claims/\n│ └── services/ # Infrastructure layer\n│ └── queues/\n│ ├── ingest.ts # Material processing\n│ └── claim-batches.ts # Batch processing\n```\n\n## Defining a Queue Handler\n\nUse `defineQueue` to create a handler:\n\n```typescript\n// services/queues/ingest.ts\n\ninterface ProcessMaterialMessage {\n type: 'process_material';\n materialId: string;\n organizationId: string;\n}\n\nexport default defineQueue<ProcessMaterialMessage>({\n name: 'ingest',\n messageType: 'process_material',\n description: 'Process materials through extraction pipeline',\n\n execute: async ({ message, db, env, services, ack, retry }) => {\n const { materialId, organizationId } = message;\n\n // Dynamic import of schema\n const { materials } = await import('./features/materials/schema');\n\n // Get material from database\n const [material] = await db\n .select()\n .from(materials)\n .where(eq(materials.id, materialId))\n .limit(1);\n\n if (!material) {\n console.error('[IngestQueue] Material not found:', materialId);\n return { success: false, error: 'Material not found' };\n }\n\n // Process the material...\n await db\n .update(materials)\n .set({ extractionStatus: 'processing' })\n .where(eq(materials.id, materialId));\n\n // Your processing logic here\n\n return { success: true };\n },\n});\n```\n\n## Configuration Options\n\n| Option | Type | Required | Description |\n|--------|------|----------|-------------|\n| `name` | `string` | Yes | Handler identifier (used in logs) |\n| `messageType` | `string` | Yes | Message type to match (e.g., `'process_material'`) |\n| `description` | `string` | No | Human-readable description |\n| `execute` | `function` | Yes | Handler function |\n\n## Execute Function\n\nThe `execute` function receives a context object:\n\n```typescript\nexecute: async ({ message, db, env, services, ack, retry }) => {\n // message - The message payload (typed via generic)\n // db - Drizzle database instance\n // env - Cloudflare bindings (AI, queues, etc.)\n // services - Generated services (ai, etc.)\n // ack() - Acknowledge message (auto-called on success)\n // retry() - Retry message (auto-called on failure)\n\n return { success: true }; // or { success: false, error: 'reason' }\n}\n```\n\n## Constants and Imports\n\nThe compiler extracts top-level constants and imports from your handler file and includes them in the generated queue consumer:\n\n```typescript\n// services/queues/ingest.ts\n\nconst TEXT_GENERATION_MODEL = '@cf/meta/llama-3.1-8b-instruct';\n\nconst EXTRACTION_PROMPT = `You are a news analyst. Extract factual claims from this article.\n\nFor each claim, output a JSON object with:\n- content: The claim as a complete, standalone sentence\n- claimType: One of NEWS, QUOTE, OFFICIAL, ALLEGATION, STATISTIC, FINDING\n- urgency: 1-5 (5 = most urgent/breaking)\n\nOutput ONLY a JSON array of claims, no other text.`;\n\nexport default defineQueue({\n // ... 
handler definition\n});\n```\n\nThe compiler:\n1. Extracts all `import` statements\n2. Extracts top-level `const` declarations (including multi-line template literals)\n3. Inlines them in the generated queue consumer\n\n## Dynamic Schema Imports\n\nSince your schema files are generated by Quickback, use dynamic imports to reference them:\n\n```typescript\nexecute: async ({ message, db }) => {\n // Dynamic import - path is relative to generated src/ directory\n const { materials, claimExtractionBatches } = await import('./features/materials/schema');\n\n const [material] = await db\n .select()\n .from(materials)\n .where(eq(materials.id, message.materialId));\n\n // ...\n}\n```\n\n## Sending to Other Queues\n\nChain queue handlers by sending messages to other queues:\n\n```typescript\nexecute: async ({ message, db, env }) => {\n const { materialId } = message;\n\n // Process and create batch...\n const batchId = generateId('batch');\n\n // Send to next stage\n if (env.CLAIM_BATCHES_QUEUE) {\n await env.CLAIM_BATCHES_QUEUE.send({\n type: 'claim_batch_stage',\n batch_id: batchId,\n material_id: materialId,\n stage: 'subeditor',\n });\n }\n\n return { success: true };\n}\n```\n\n## Generated Output\n\nWhen queue handlers are defined, the compiler generates a unified `queue-consumer.ts`:\n\n```typescript\n// src/queue-consumer.ts (generated)\n\nconst TEXT_GENERATION_MODEL = '@cf/meta/llama-3.1-8b-instruct';\nconst EXTRACTION_PROMPT = `...`;\n\n// Embedding handler (if embeddings configured)\nconst embeddingQueueHandler = async (batch, env) => { ... };\n\n// Custom handler: ingest\nconst ingestQueueHandler = async (batch, env) => {\n const db = drizzle(env.DB);\n const services = createServices(env);\n\n for (const message of batch.messages) {\n try {\n const result = await executeHandler({\n message: message.body,\n env, db, services,\n ack: () => message.ack(),\n retry: () => message.retry(),\n });\n\n if (result.success) message.ack();\n else message.retry();\n } catch (error) {\n message.retry();\n }\n }\n};\n\n// Unified dispatcher\nexport const queue = async (batch, env) => {\n const messageType = batch.messages[0]?.body?.type;\n\n switch (messageType) {\n case 'embedding':\n await embeddingQueueHandler(batch, env);\n break;\n case 'process_material':\n await ingestQueueHandler(batch, env);\n break;\n default:\n console.warn('[Queue] Unknown message type:', messageType);\n for (const msg of batch.messages) msg.ack();\n }\n};\n```\n\n## wrangler.toml Configuration\n\nConfigure your queues in wrangler.toml:\n\n```toml\n# Queue producers (send to queues)\n[[queues.producers]]\nqueue = \"my-app-ingest-queue\"\nbinding = \"INGEST_QUEUE\"\n\n[[queues.producers]]\nqueue = \"my-app-claim-batches-queue\"\nbinding = \"CLAIM_BATCHES_QUEUE\"\n\n# Queue consumers (receive from queues)\n[[queues.consumers]]\nqueue = \"my-app-ingest-queue\"\nmax_batch_size = 10\nmax_batch_timeout = 30\nmax_retries = 3\n\n[[queues.consumers]]\nqueue = \"my-app-claim-batches-queue\"\nmax_batch_size = 10\nmax_batch_timeout = 30\nmax_retries = 3\n```\n\n## Deployment\n\nAfter defining queue handlers:\n\n1. **Create the queues:**\n ```bash\n wrangler queues create my-app-ingest-queue\n wrangler queues create my-app-claim-batches-queue\n ```\n\n2. **Compile:**\n ```bash\n quickback compile\n ```\n\n3. **Deploy:**\n ```bash\n wrangler deploy\n ```\n\n## Error Handling\n\nThe queue consumer handles errors automatically:\n\n- **Return `{ success: true }`** - Message is acknowledged\n- **Return `{ success: false, error: '...' 
}`** - Message is retried\n- **Throw an exception** - Message is retried\n\nAfter `max_retries` (default 3), failed messages are dropped or sent to a dead-letter queue if configured.\n\n```typescript\nexecute: async ({ message, db }) => {\n try {\n // Your processing logic\n return { success: true };\n } catch (error) {\n console.error('[Handler] Failed:', error);\n return {\n success: false,\n error: error instanceof Error ? error.message : 'Unknown error'\n };\n }\n}\n```\n\n## Combining with Embeddings\n\nCustom queue handlers work alongside automatic embeddings. The compiler generates a single queue consumer that handles both:\n\n```typescript\nswitch (messageType) {\n case 'embedding':\n // Auto-generated embedding handler\n await embeddingQueueHandler(batch, env);\n break;\n case 'process_material':\n // Your custom handler\n await ingestQueueHandler(batch, env);\n break;\n}\n```\n\nSee [Automatic Embeddings](/stack/vector/embeddings) for configuring auto-embedding on CRUD operations."
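For completeness, here is a hedged sketch of the producer side: an action that enqueues a `process_material` job on the `INGEST_QUEUE` binding configured in the wrangler.toml section above. The `ActionExecutor` shape mirrors the Using Queues page; treat the field names as illustrative.

```typescript
// Hedged sketch: enqueue work for the ingest handler from an action.
// INGEST_QUEUE is the producer binding from the wrangler.toml section above.
export const execute: ActionExecutor = async ({ ctx, input }) => {
  await ctx.env.INGEST_QUEUE.send({
    type: 'process_material', // must match the handler's messageType
    materialId: input.materialId,
    organizationId: ctx.activeOrgId,
  });

  return { queued: true };
};
```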
295
+ },
296
+ "stack/queues": {
297
+ "title": "Queues",
298
+ "content": "Cloudflare Queues provide reliable, at-least-once message delivery for background processing. Quickback uses queues for embedding generation, webhook delivery, and custom background jobs.\n\n## Built-in Queues\n\nThe compiler auto-configures queues when certain features are enabled:\n\n| Feature | Queue Binding | Purpose |\n|---------|--------------|---------|\n| Embeddings | `EMBEDDINGS_QUEUE` | Async embedding generation via Workers AI |\n| Webhooks | `WEBHOOKS_QUEUE` | Async webhook delivery with retry |\n| Custom | User-defined | Your own background processing |\n\n## Queue Defaults\n\n| Setting | Default |\n|---------|---------|\n| Max batch size | 10 messages |\n| Max batch timeout | 30 seconds |\n| Max retries | 3 |\n\n## Custom Queues\n\nDefine custom queue handlers for background processing like data pipelines, batch jobs, and async workflows. Custom handlers integrate into the same queue consumer as built-in queues.\n\n```typescript\ndatabase: defineDatabase(\"cloudflare-d1\", {\n binding: \"DB\",\n additionalQueues: [\n {\n name: \"my-app-processing-queue\",\n binding: \"PROCESSING_QUEUE\",\n maxBatchSize: 5,\n maxBatchTimeout: 60,\n maxRetries: 5,\n },\n ],\n})\n```\n\n## How It Works\n\nAll queues share a single Cloudflare Workers queue consumer. The consumer inspects each message's `type` field to dispatch to the correct handler:\n\n```\nMessage arrives → Check type field\n ├─ \"embedding\" → Embedding handler\n ├─ \"inbound\"/\"outbound\" → Webhook handler\n └─ \"process_material\" → Custom handler\n```\n\n## Pages\n\n- **[Custom Queue Handlers](/stack/queues/handlers)** — `defineQueue()`, handler context, and generated output\n- **[Using Queues](/stack/queues/using-queues)** — Publishing messages, wrangler config, and deployment"
299
+ },
300
+ "stack/queues/using-queues": {
301
+ "title": "Using Queues",
302
+ "content": "This page covers the practical aspects of working with Cloudflare Queues in a Quickback project — publishing messages, configuring wrangler, and deploying.\n\n## Publishing Messages\n\nSend messages to a queue using the Cloudflare Queue binding in your action or route code:\n\n```typescript\n// In an action handler\nexport const execute: ActionExecutor = async ({ ctx, input, db }) => {\n await ctx.env.PROCESSING_QUEUE.send({\n type: \"process_material\",\n materialId: input.materialId,\n organizationId: ctx.activeOrgId,\n });\n\n return { queued: true };\n};\n```\n\nThe `type` field is required — the queue consumer uses it to dispatch messages to the correct handler.\n\n### Sending to Built-in Queues\n\n**Embeddings** are auto-enqueued by the compiler after CRUD operations. You can also manually enqueue:\n\n```typescript\nawait ctx.env.EMBEDDINGS_QUEUE.send({\n type: \"embedding\",\n table: \"claims\",\n id: record.id,\n content: `${record.title} ${record.content}`,\n model: \"@cf/baai/bge-base-en-v1.5\",\n embeddingColumn: \"embedding\",\n metadata: { organizationId: ctx.activeOrgId },\n organizationId: ctx.activeOrgId,\n});\n```\n\n**Webhooks** are enqueued via `emitWebhookEvent()` — see [Outbound Webhooks](/stack/webhooks/outbound).\n\n## Chaining Queues\n\nQueue handlers can publish to other queues for multi-stage pipelines:\n\n```typescript\nexport default defineQueue({\n name: \"ingest\",\n messageType: \"process_material\",\n\n execute: async ({ message, db, env }) => {\n // Stage 1: Extract data\n const claims = await extractClaims(message.materialId, db);\n\n // Stage 2: Send each claim to next queue\n for (const claim of claims) {\n await env.ANALYSIS_QUEUE.send({\n type: \"analyze_claim\",\n claimId: claim.id,\n });\n }\n\n return { success: true };\n },\n});\n```\n\n## Wrangler Configuration\n\nQueue bindings are auto-generated in `wrangler.toml`. 
Here's what the compiler produces:\n\n```toml\n# Embeddings queue (auto-configured when embeddings enabled)\n[[queues.producers]]\nqueue = \"my-app-embeddings-queue\"\nbinding = \"EMBEDDINGS_QUEUE\"\n\n[[queues.consumers]]\nqueue = \"my-app-embeddings-queue\"\nmax_batch_size = 10\nmax_batch_timeout = 30\nmax_retries = 3\n\n# Webhooks queue (auto-configured when webhooks enabled)\n[[queues.producers]]\nqueue = \"my-app-webhooks-queue\"\nbinding = \"WEBHOOKS_QUEUE\"\n\n[[queues.consumers]]\nqueue = \"my-app-webhooks-queue\"\nmax_batch_size = 10\nmax_batch_timeout = 30\nmax_retries = 3\ndead_letter_queue = \"my-app-webhooks-dlq\"\n\n# Custom queue (from additionalQueues config)\n[[queues.producers]]\nqueue = \"my-app-processing-queue\"\nbinding = \"PROCESSING_QUEUE\"\n\n[[queues.consumers]]\nqueue = \"my-app-processing-queue\"\nmax_batch_size = 5\nmax_batch_timeout = 60\nmax_retries = 5\n```\n\n## Generated Queue Consumer\n\nThe compiler generates a unified `src/queue-consumer.ts` that handles all message types:\n\n```typescript\n// src/queue-consumer.ts (generated)\nexport const queue = async (\n batch: MessageBatch<any>,\n env: CloudflareBindings\n): Promise<void> => {\n const messageType = batch.messages[0]?.body?.type;\n\n switch (messageType) {\n case \"embedding\":\n await embeddingQueueHandler(batch, env);\n break;\n case \"process_material\":\n await ingestQueueHandler(batch, env);\n break;\n default:\n console.warn(\"[Queue] Unknown message type:\", messageType);\n for (const msg of batch.messages) msg.ack();\n }\n};\n```\n\nThis consumer is exported alongside the `fetch` handler in the Workers entry point.\n\n## Deployment\n\n### 1. Create Queues\n\nBefore deploying, create the queues in Cloudflare:\n\n```bash\nwrangler queues create my-app-embeddings-queue\nwrangler queues create my-app-processing-queue\n```\n\n### 2. Compile and Deploy\n\n```bash\nquickback compile\nwrangler deploy\n```\n\nThe single worker handles both HTTP requests and queue consumption.\n\n### 3. Verify\n\nCheck queue status:\n\n```bash\nwrangler queues list\n```\n\n## Error Handling\n\nThe queue consumer handles errors per-message:\n\n- **Return `{ success: true }`** — Message is acknowledged\n- **Return `{ success: false, error: \"...\" }`** — Message is retried\n- **Throw an exception** — Message is retried\n\nAfter `max_retries` (default 3), failed messages are dropped or sent to a dead-letter queue if configured.\n\n## Monitoring\n\nQueue metrics are available in the Cloudflare dashboard:\n\n- Messages enqueued / processed\n- Retry count\n- Consumer lag\n- Dead-letter queue depth\n\n## Cloudflare Only\n\nQueues require the Cloudflare runtime. They are not available with the Bun or Node runtimes.\n\n## See Also\n\n- [Custom Queue Handlers](/stack/queues/handlers) — `defineQueue()` API and handler context\n- [Automatic Embeddings](/stack/vector/embeddings) — How embedding jobs are enqueued\n- [Outbound Webhooks](/stack/webhooks/outbound) — How webhook deliveries are enqueued"
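When a single request needs to enqueue many messages, Cloudflare queue producer bindings also expose `sendBatch()` alongside `send()`. A hedged sketch is below; the `ANALYSIS_QUEUE` binding and message shape are illustrative, and Cloudflare caps the number of messages per batch call, so chunk very large fan-outs.

```typescript
// Hedged sketch: batch-enqueue messages with sendBatch(). Each entry wraps its
// payload in { body }. ANALYSIS_QUEUE and the message shape are illustrative.
export const execute: ActionExecutor = async ({ ctx, input }) => {
  const claimIds: string[] = input.claimIds;

  await ctx.env.ANALYSIS_QUEUE.sendBatch(
    claimIds.map((claimId) => ({
      body: { type: "analyze_claim", claimId },
    }))
  );

  return { queued: claimIds.length };
};
```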
303
+ },
304
+ "stack/realtime/durable-objects": {
305
+ "title": "Realtime",
306
+ "content": "Quickback can generate realtime notification helpers for broadcasting changes to connected clients. This enables live updates via WebSocket using Cloudflare Durable Objects.\n\n## Enabling Realtime\n\nAdd `realtime` configuration to individual table definitions:\n\n```typescript\n// features/claims/claims.ts\n\nexport const claims = sqliteTable(\"claims\", {\n id: text(\"id\").primaryKey(),\n title: text(\"title\").notNull(),\n status: text(\"status\").notNull(),\n organizationId: text(\"organization_id\").notNull(),\n});\n\nexport default defineTable(claims, {\n firewall: { organization: {} },\n realtime: {\n enabled: true, // Enable realtime for this table\n onInsert: true, // Broadcast on INSERT (default: true)\n onUpdate: true, // Broadcast on UPDATE (default: true)\n onDelete: true, // Broadcast on DELETE (default: true)\n requiredRoles: [\"member\", \"admin\"], // Who receives broadcasts\n fields: [\"id\", \"title\", \"status\"], // Fields to include (optional)\n },\n // ... guards, crud\n});\n```\n\nWhen any table has realtime enabled, the compiler generates `src/lib/realtime.ts` with helper functions for sending notifications.\n\n## Generated Helper\n\nThe `createRealtime()` factory provides convenient methods for both Postgres Changes (CRUD events) and custom broadcasts:\n\n```typescript\n\nexport const execute: ActionExecutor = async ({ db, ctx, input }) => {\n const realtime = createRealtime(ctx.env);\n\n // After creating a record\n await realtime.insert('claims', newClaim, ctx.activeOrgId!);\n\n // After updating a record\n await realtime.update('claims', newClaim, oldClaim, ctx.activeOrgId!);\n\n // After deleting a record\n await realtime.delete('claims', { id: claimId }, ctx.activeOrgId!);\n\n return { success: true };\n};\n```\n\n## Event Format\n\nQuickback broadcasts events in a structured JSON format for easy client-side handling.\n\n### Insert Event\n\n```typescript\nawait realtime.insert('materials', {\n id: 'mat_123',\n title: 'Breaking News',\n status: 'pending'\n}, ctx.activeOrgId!);\n```\n\nBroadcasts:\n```json\n{\n \"type\": \"postgres_changes\",\n \"table\": \"materials\",\n \"eventType\": \"INSERT\",\n \"schema\": \"public\",\n \"new\": {\n \"id\": \"mat_123\",\n \"title\": \"Breaking News\",\n \"status\": \"pending\"\n },\n \"old\": null\n}\n```\n\n### Update Event\n\n```typescript\nawait realtime.update('materials',\n { id: 'mat_123', status: 'completed' }, // new\n { id: 'mat_123', status: 'pending' }, // old\n ctx.activeOrgId!\n);\n```\n\nBroadcasts:\n```json\n{\n \"type\": \"postgres_changes\",\n \"table\": \"materials\",\n \"eventType\": \"UPDATE\",\n \"schema\": \"public\",\n \"new\": { \"id\": \"mat_123\", \"status\": \"completed\" },\n \"old\": { \"id\": \"mat_123\", \"status\": \"pending\" }\n}\n```\n\n### Delete Event\n\n```typescript\nawait realtime.delete('materials',\n { id: 'mat_123' },\n ctx.activeOrgId!\n);\n```\n\nBroadcasts:\n```json\n{\n \"type\": \"postgres_changes\",\n \"table\": \"materials\",\n \"eventType\": \"DELETE\",\n \"schema\": \"public\",\n \"new\": null,\n \"old\": { \"id\": \"mat_123\" }\n}\n```\n\n## Custom Broadcasts\n\nFor arbitrary events that don't map to CRUD operations:\n\n```typescript\nawait realtime.broadcast('processing-complete', {\n materialId: 'mat_123',\n claimCount: 5,\n duration: 1234\n}, ctx.activeOrgId!);\n```\n\nBroadcasts:\n```json\n{\n \"type\": \"broadcast\",\n \"event\": \"processing-complete\",\n \"payload\": {\n \"materialId\": \"mat_123\",\n \"claimCount\": 5,\n \"duration\": 1234\n 
}\n}\n```\n\n## User-Specific Broadcasts\n\nTarget a specific user instead of the entire organization:\n\n```typescript\n// Only this user receives the notification\nawait realtime.insert('notifications', newNotification, ctx.activeOrgId!, {\n userId: ctx.userId\n});\n\nawait realtime.broadcast('task-assigned', {\n taskId: 'task_123'\n}, ctx.activeOrgId!, {\n userId: assigneeId\n});\n```\n\n## Role-Based Filtering\n\nLimit which roles receive a broadcast using `targetRoles`:\n\n```typescript\n// Only admins and editors receive this broadcast\nawait realtime.insert('admin-actions', action, ctx.activeOrgId!, {\n targetRoles: ['admin', 'editor']\n});\n\n// Members won't see this update\nawait realtime.update('sensitive-data', newRecord, oldRecord, ctx.activeOrgId!, {\n targetRoles: ['admin']\n});\n```\n\nIf `targetRoles` is not specified, all authenticated users in the organization receive the broadcast.\n\n## Per-Role Field Masking\n\nApply different field masking based on the subscriber's role using `maskingConfig`:\n\n```typescript\nawait realtime.insert('employees', newEmployee, ctx.activeOrgId!, {\n maskingConfig: {\n ssn: { type: 'ssn', show: { roles: ['admin', 'hr'] } },\n salary: { type: 'redact', show: { roles: ['admin'] } },\n email: { type: 'email', show: { roles: ['admin', 'hr', 'manager'] } },\n },\n});\n```\n\n**How masking works:**\n- Each subscriber receives a payload masked according to their role\n- Roles in the `show.roles` array see unmasked values\n- All other roles see the masked version\n- Masking is pre-computed per-role (O(roles) not O(subscribers))\n\n**Available mask types:**\n| Type | Example Output |\n|------|----------------|\n| `email` | `j***@y***.com` |\n| `phone` | `***-***-4567` |\n| `ssn` | `***-**-6789` |\n| `creditCard` | `**** **** **** 1111` |\n| `name` | `J***` |\n| `redact` | `[REDACTED]` |\n\n### Owner-Based Masking\n\nShow unmasked data to the record owner:\n\n```typescript\nmaskingConfig: {\n ssn: {\n type: 'ssn',\n show: { roles: ['admin'], or: 'owner' } // Admin OR owner sees unmasked\n },\n}\n```\n\n## Client-Side Subscription\n\n### WebSocket Authentication\n\nThe Broadcaster supports two authentication methods:\n\n**Session Token (Browser/App):**\n```typescript\nconst ws = new WebSocket('wss://api.yourdomain.com/realtime/v1/websocket');\n\nws.onopen = () => {\n ws.send(JSON.stringify({\n type: 'auth',\n token: sessionToken, // JWT from Better Auth session\n organizationId: activeOrgId,\n }));\n};\n\nws.onmessage = (e) => {\n const msg = JSON.parse(e.data);\n if (msg.type === 'auth_success') {\n console.log('Authenticated:', {\n role: msg.role,\n roles: msg.roles,\n authMethod: msg.authMethod, // 'session'\n });\n }\n};\n```\n\n**API Key (Server/CLI):**\n```typescript\nconst ws = new WebSocket('wss://api.yourdomain.com/realtime/v1/websocket');\n\nws.onopen = () => {\n ws.send(JSON.stringify({\n type: 'auth',\n token: apiKey, // API key for machine-to-machine auth\n organizationId: orgId,\n }));\n};\n\n// authMethod will be 'api_key' on success\n```\n\n### Handling Messages\n\n```typescript\nws.onmessage = (event) => {\n const msg = JSON.parse(event.data);\n\n // Handle CRUD events\n if (msg.type === 'postgres_changes') {\n const { table, eventType, new: newRecord, old: oldRecord } = msg;\n\n if (eventType === 'INSERT') {\n addRecord(table, newRecord);\n } else if (eventType === 'UPDATE') {\n updateRecord(table, newRecord);\n } else if (eventType === 'DELETE') {\n removeRecord(table, oldRecord.id);\n }\n }\n\n // Handle custom broadcasts\n if 
(msg.type === 'broadcast') {\n const { event, payload } = msg;\n\n if (event === 'processing-complete') {\n refreshMaterial(payload.materialId);\n } else if (event === 'task-assigned') {\n showTaskNotification(payload.taskId);\n }\n }\n};\n```\n\n## Required Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| `REALTIME_URL` | URL of the broadcast/realtime worker |\n| `ACCESS_TOKEN` | Internal service-to-service auth token |\n\nExample configuration:\n\n```toml\n# wrangler.toml\n[vars]\nREALTIME_URL = \"https://your-realtime-worker.workers.dev\"\nACCESS_TOKEN = \"your-internal-secret-token\"\n```\n\n## Architecture\n\n```\n┌─────────────┐ POST /broadcast ┌──────────────────┐\n│ API Worker │ ───────────────────────► │ Realtime Worker │\n│ (Quickback)│ │ (Durable Object)│\n└─────────────┘ └────────┬─────────┘\n │\n WebSocket│\n │\n ┌────────▼─────────┐\n │ Browser Clients │\n │ (WebSocket) │\n └──────────────────┘\n```\n\n1. **API Worker** - Your Quickback-generated API. Calls `realtime.insert()` etc. after CRUD operations.\n2. **Realtime Worker** - Separate worker with Durable Object for managing WebSocket connections.\n3. **Browser Clients** - Connect via WebSocket, subscribe to channels.\n\n## Best Practices\n\n### Broadcast After Commit\n\nAlways broadcast after the database operation succeeds:\n\n```typescript\n// Good - broadcast after successful insert\nconst [newClaim] = await db.insert(claims).values(data).returning();\nawait realtime.insert('claims', newClaim, ctx.activeOrgId!);\n\n// Bad - don't broadcast before confirming success\nawait realtime.insert('claims', data, ctx.activeOrgId!);\nawait db.insert(claims).values(data); // Could fail!\n```\n\n### Minimal Payloads\n\nOnly include necessary data in broadcasts:\n\n```typescript\n// Good - minimal payload\nawait realtime.update('materials',\n { id: record.id, status: 'completed' },\n { id: record.id, status: 'pending' },\n ctx.activeOrgId!\n);\n\n// Avoid - sending entire record with large content\nawait realtime.update('materials', fullRecord, oldRecord, ctx.activeOrgId!);\n```\n\n### Use Broadcasts for Complex Events\n\nFor events that don't map cleanly to CRUD:\n\n```typescript\n// Processing pipeline completed\nawait realtime.broadcast('pipeline-complete', {\n materialId: material.id,\n stages: ['fetch', 'extract', 'analyze'],\n claimsCreated: 5,\n quotesExtracted: 3\n}, ctx.activeOrgId!);\n```\n\n## Custom Event Namespaces\n\nFor complex applications with many custom events, you can define typed event namespaces using `defineRealtime`. 
This generates strongly-typed helper methods for your custom events.\n\n### Defining Event Namespaces\n\nCreate a file in `services/realtime/`:\n\n```typescript\n// services/realtime/extraction.ts\n\nexport default defineRealtime({\n name: 'extraction',\n events: ['started', 'progress', 'completed', 'failed'],\n description: 'Material extraction pipeline events',\n});\n```\n\n### Generated Helper Methods\n\nAfter compilation, the `createRealtime()` helper includes your custom namespace:\n\n```typescript\n\nexport const execute: ActionExecutor = async ({ ctx, input }) => {\n const realtime = createRealtime(ctx.env);\n\n // Type-safe custom event methods\n await realtime.extraction.started({\n materialId: input.materialId,\n }, ctx.activeOrgId!);\n\n // Progress updates\n await realtime.extraction.progress({\n materialId: input.materialId,\n percent: 50,\n stage: 'extracting',\n }, ctx.activeOrgId!);\n\n // Completion\n await realtime.extraction.completed({\n materialId: input.materialId,\n claimsExtracted: 15,\n duration: 1234,\n }, ctx.activeOrgId!);\n\n return { success: true };\n};\n```\n\n### Event Format\n\nCustom namespace events use the `broadcast` type with namespaced event names:\n\n```json\n{\n \"type\": \"broadcast\",\n \"event\": \"extraction:started\",\n \"payload\": {\n \"materialId\": \"mat_123\"\n }\n}\n```\n\n### Multiple Namespaces\n\nDefine multiple namespaces for different parts of your application:\n\n```typescript\n// services/realtime/notifications.ts\nexport default defineRealtime({\n name: 'notifications',\n events: ['new', 'read', 'dismissed'],\n description: 'User notification events',\n});\n\n// services/realtime/presence.ts\nexport default defineRealtime({\n name: 'presence',\n events: ['joined', 'left', 'typing', 'idle'],\n description: 'User presence events',\n});\n```\n\nUsage:\n```typescript\nconst realtime = createRealtime(ctx.env);\n\n// Notification events\nawait realtime.notifications.new({ ... }, ctx.activeOrgId!);\n\n// Presence events\nawait realtime.presence.joined({ userId: ctx.userId }, ctx.activeOrgId!);\n```\n\n### Namespace vs Generic Broadcast\n\n| Use Case | Approach |\n|----------|----------|\n| One-off custom event | `realtime.broadcast('event-name', payload, orgId)` |\n| Repeated event patterns | `defineRealtime` namespace |\n| Type-safe events | `defineRealtime` namespace |\n| Event discovery | `defineRealtime` (appears in generated types) |"
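Custom broadcasts pair naturally with background processing. Below is a hedged sketch of a custom queue handler (see the queues docs) that emits a `processing-complete` event when it finishes. Passing the handler's `env` straight into `createRealtime()` is an assumption, since the generated helper is shown with `ctx.env` in action examples.

```typescript
// Hedged sketch: broadcast a custom event from a custom queue handler once work
// completes. createRealtime(env) from a queue context is an assumption.
export default defineQueue({
  name: 'ingest',
  messageType: 'process_material',

  execute: async ({ message, db, env }) => {
    // ... run the extraction pipeline ...

    const realtime = createRealtime(env);
    await realtime.broadcast('processing-complete', {
      materialId: message.materialId,
      claimCount: 5,
    }, message.organizationId);

    return { success: true };
  },
});
```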
307
+ },
308
+ "stack/realtime": {
309
+ "title": "Realtime",
310
+ "content": "The Quickback Stack uses Cloudflare Durable Objects to broadcast real-time updates over WebSockets. CRUD events and custom broadcasts are delivered to connected clients with the same security layers (firewall isolation, role-based filtering, field masking) applied.\n\n## Architecture\n\n```\n┌─────────────┐ POST /broadcast ┌──────────────────┐\n│ API Worker │ ───────────────────────► │ Realtime Worker │\n│ (Quickback) │ │ (Durable Object) │\n└─────────────┘ └────────┬─────────┘\n │\n WebSocket│\n │\n ┌────────▼─────────┐\n │ Browser Clients │\n │ (WebSocket) │\n └──────────────────┘\n```\n\n1. **API Worker** — Your Quickback-compiled API. Calls `realtime.insert()` etc. after CRUD operations.\n2. **Realtime Worker** — Separate Cloudflare Worker with Durable Object for managing WebSocket connections, one per organization.\n3. **Browser Clients** — Connect via WebSocket, receive filtered and masked broadcasts.\n\n## Key Features\n\n| Feature | Description |\n|---------|-------------|\n| Organization-scoped | Each org gets its own Durable Object instance |\n| Role-based filtering | Only send events to users with matching roles |\n| Per-role masking | Different users see different field values based on their role |\n| User-specific targeting | Send events to a specific user within an org |\n| Custom broadcasts | Arbitrary events beyond CRUD |\n| Custom namespaces | `defineRealtime()` for type-safe event helpers |\n| Two auth methods | Session tokens (browser) and API keys (server/CLI) |\n\n## Enabling Realtime\n\nAdd `realtime` to individual table definitions:\n\n```typescript\nexport default defineTable(claims, {\n firewall: { organization: {} },\n realtime: {\n enabled: true,\n onInsert: true,\n onUpdate: true,\n onDelete: true,\n requiredRoles: [\"member\", \"admin\"],\n fields: [\"id\", \"title\", \"status\"],\n },\n});\n```\n\nAnd enable the realtime binding in your database config. The compiler generates the Durable Object worker and helper functions.\n\n## Pages\n\n- **[Durable Objects Setup](/stack/realtime/durable-objects)** — Configuration, wrangler bindings, event formats, masking, and custom namespaces\n- **[Using Realtime](/stack/realtime/using-realtime)** — WebSocket connection, authentication, and client-side handling"
311
+ },
312
+ "stack/realtime/using-realtime": {
313
+ "title": "Using Realtime",
314
+ "content": "This page covers connecting to the Quickback realtime system from client applications — authentication, subscribing to events, and handling messages.\n\n## Connecting\n\nOpen a WebSocket connection to the realtime worker:\n\n```typescript\nconst ws = new WebSocket(\"wss://api.yourdomain.com/realtime/v1/websocket\");\n```\n\n## Authentication\n\nAfter connecting, send an auth message to authenticate. Two methods are supported.\n\n### Session Token (Browser/App)\n\n```typescript\nws.onopen = () => {\n ws.send(JSON.stringify({\n type: \"auth\",\n token: sessionToken, // JWT from Better Auth session\n organizationId: activeOrgId,\n }));\n};\n```\n\n### API Key (Server/CLI)\n\n```typescript\nws.onopen = () => {\n ws.send(JSON.stringify({\n type: \"auth\",\n token: apiKey, // API key for machine-to-machine auth\n organizationId: orgId,\n }));\n};\n```\n\n### Auth Response\n\nOn success:\n\n```json\n{\n \"type\": \"auth_success\",\n \"organizationId\": \"org_123\",\n \"userId\": \"user_456\",\n \"role\": \"admin\",\n \"roles\": [\"admin\", \"member\"],\n \"authMethod\": \"session\"\n}\n```\n\nOn failure, the connection is closed with an error message.\n\n## Handling Messages\n\n### CRUD Events\n\nCRUD events use the `postgres_changes` type:\n\n```typescript\nws.onmessage = (event) => {\n const msg = JSON.parse(event.data);\n\n if (msg.type === \"postgres_changes\") {\n const { table, eventType, new: newRecord, old: oldRecord } = msg;\n\n switch (eventType) {\n case \"INSERT\":\n addRecord(table, newRecord);\n break;\n case \"UPDATE\":\n updateRecord(table, newRecord);\n break;\n case \"DELETE\":\n removeRecord(table, oldRecord.id);\n break;\n }\n }\n};\n```\n\n**Event payload:**\n\n```json\n{\n \"type\": \"postgres_changes\",\n \"table\": \"claims\",\n \"schema\": \"public\",\n \"eventType\": \"INSERT\",\n \"new\": { \"id\": \"clm_123\", \"title\": \"Breaking News\", \"status\": \"pending\" },\n \"old\": null\n}\n```\n\nFor UPDATE events, both `new` and `old` are populated. For DELETE events, only `old` is populated.\n\n### Custom Broadcasts\n\nCustom events use the `broadcast` type:\n\n```typescript\nif (msg.type === \"broadcast\") {\n const { event, payload } = msg;\n\n if (event === \"processing-complete\") {\n refreshMaterial(payload.materialId);\n } else if (event === \"extraction:progress\") {\n updateProgressBar(payload.percent);\n }\n}\n```\n\n**Event payload:**\n\n```json\n{\n \"type\": \"broadcast\",\n \"event\": \"processing-complete\",\n \"payload\": {\n \"materialId\": \"mat_123\",\n \"claimCount\": 5,\n \"duration\": 1234\n }\n}\n```\n\nCustom namespaces (from `defineRealtime()`) use the format `namespace:event` — e.g., `extraction:started`, `extraction:progress`.\n\n## Security\n\n### Role-Based Filtering\n\nEvents are only delivered to users whose role matches the broadcast's `targetRoles`. If a table's realtime config specifies `requiredRoles: [\"admin\", \"member\"]`, only users with those roles receive the events.\n\n### Per-Role Masking\n\nField values are masked according to the subscriber's role. For example, with this masking config:\n\n```typescript\nmasking: {\n ssn: { type: \"ssn\", show: { roles: [\"admin\"] } },\n}\n```\n\n- **Admin sees:** `{ ssn: \"123-45-6789\" }`\n- **Member sees:** `{ ssn: \"***-**-6789\" }`\n\nMasking is pre-computed per-role (O(roles), not O(subscribers)) for efficiency.\n\n### User-Specific Events\n\nEvents can target a specific user within an organization. 
Only that user receives the broadcast — all other connections in the org are skipped.\n\n### Organization Isolation\n\nEach organization has its own Durable Object instance. Users can only subscribe to events in organizations they belong to, enforced during authentication.\n\n## Reconnection\n\nWebSocket connections can drop due to network issues. Implement reconnection logic in your client:\n\n```typescript\nfunction connect() {\n const ws = new WebSocket(\"wss://api.yourdomain.com/realtime/v1/websocket\");\n\n ws.onopen = () => {\n ws.send(JSON.stringify({\n type: \"auth\",\n token: sessionToken,\n organizationId: activeOrgId,\n }));\n };\n\n ws.onclose = () => {\n // Reconnect after delay\n setTimeout(connect, 2000);\n };\n\n ws.onmessage = (event) => {\n const msg = JSON.parse(event.data);\n handleMessage(msg);\n };\n\n return ws;\n}\n```\n\n## Required Environment Variables\n\n| Variable | Description |\n|----------|-------------|\n| `REALTIME_URL` | URL of the broadcast/realtime worker |\n| `ACCESS_TOKEN` | Internal service-to-service auth token |\n\n```toml\n# wrangler.toml\n[vars]\nREALTIME_URL = \"https://my-app-broadcast.workers.dev\"\nACCESS_TOKEN = \"your-internal-secret-token\"\n```\n\n## Cloudflare Only\n\nRealtime requires Cloudflare Durable Objects and is only available with the Cloudflare runtime.\n\n## See Also\n\n- [Durable Objects Setup](/stack/realtime/durable-objects) — Configuration, event formats, masking, and custom namespaces\n- [Masking](/compiler/definitions/masking) — Field masking configuration"
315
+ },
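Editor's note: the reconnection example on this page uses a fixed 2-second delay. A minimal sketch of the same client loop with exponential backoff and jitter follows; the backoff parameters (1s base, 30s cap, 500ms jitter) are illustrative choices, not part of the generated API, and `handleMessage` is the page's own dispatcher for `postgres_changes` / `broadcast` messages.

```typescript
// Reconnection with exponential backoff and jitter — a sketch building on the
// fixed-delay example above. Endpoint and auth message mirror the page.
declare function handleMessage(msg: unknown): void; // the page's own dispatcher

function connectWithBackoff(sessionToken: string, activeOrgId: string) {
  let attempt = 0;

  function open() {
    const ws = new WebSocket("wss://api.yourdomain.com/realtime/v1/websocket");

    ws.onopen = () => {
      attempt = 0; // reset backoff once a connection succeeds
      ws.send(JSON.stringify({
        type: "auth",
        token: sessionToken,
        organizationId: activeOrgId,
      }));
    };

    ws.onmessage = (event) => {
      handleMessage(JSON.parse(event.data));
    };

    ws.onclose = () => {
      // 1s, 2s, 4s, ... capped at 30s, plus random jitter to avoid thundering herds.
      const delay = Math.min(1000 * 2 ** attempt, 30_000) + Math.random() * 500;
      attempt += 1;
      setTimeout(open, delay);
    };
  }

  open();
}
```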
316
+ "stack/storage": {
317
+ "title": "Storage",
318
+ "content": "The Quickback Stack uses Cloudflare's storage primitives for different use cases.\n\n## Storage Types\n\n| Service | Use Case | Pages |\n|---------|----------|-------|\n| **KV** | Sessions, caching, rate limiting | [Setup](/stack/storage/kv) · [Usage](/stack/storage/using-kv) |\n| **R2** | File uploads, avatars, attachments | [Setup](/stack/storage/r2) · [Usage](/stack/storage/using-r2) |\n\n## Getting Started\n\n- **[KV Setup](/stack/storage/kv)** — Namespace and bindings configuration\n- **[Using KV](/stack/storage/using-kv)** — Sessions, caching, rate limiting\n- **[R2 Setup](/stack/storage/r2)** — Bucket, bindings, CORS configuration\n- **[Using R2](/stack/storage/using-r2)** — Uploads, downloads, role-based access"
319
+ },
320
+ "stack/storage/kv": {
321
+ "title": "KV Storage",
322
+ "content": "Cloudflare Workers KV is a global key-value store optimized for read-heavy workloads. Quickback uses KV for session storage, caching, and fast lookups.\n\n## What is KV?\n\nWorkers KV is a distributed key-value store:\n\n- **Global distribution** - Data replicated to 300+ edge locations\n- **Low-latency reads** - Cached at the edge, sub-millisecond access\n- **Eventually consistent** - Writes propagate globally within 60 seconds\n- **Simple API** - Get, put, delete, list\n\n## Use Cases in Quickback\n\n| Use Case | Description |\n|----------|-------------|\n| **Sessions** | Better Auth stores sessions in KV for fast validation |\n| **Cache** | Cache expensive database queries or API responses |\n| **Rate Limiting** | Track request counts per user/IP |\n| **Feature Flags** | Store configuration that changes infrequently |\n\n## Namespace Setup\n\nCreate a KV namespace via Wrangler:\n\n```bash\n# Create namespace\nwrangler kv namespace create \"SESSIONS\"\n\n# Output:\n# Add the following to your wrangler.toml:\n# [[kv_namespaces]]\n# binding = \"SESSIONS\"\n# id = \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n```\n\nAdd to `wrangler.toml`:\n\n```toml\n[[kv_namespaces]]\nbinding = \"SESSIONS\"\nid = \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n\n[[kv_namespaces]]\nbinding = \"CACHE\"\nid = \"yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\"\n```\n\n## Reading and Writing Values\n\n### Basic Operations\n\n```ts\n// Write a value\nawait env.CACHE.put(\"user:123\", JSON.stringify({ name: \"Alice\" }));\n\n// Read a value\nconst data = await env.CACHE.get(\"user:123\");\nconst user = data ? JSON.parse(data) : null;\n\n// Delete a value\nawait env.CACHE.delete(\"user:123\");\n\n// Check if key exists\nconst exists = await env.CACHE.get(\"user:123\") !== null;\n```\n\n### With Metadata\n\nKV supports metadata attached to keys:\n\n```ts\n// Write with metadata\nawait env.CACHE.put(\"user:123\", JSON.stringify(userData), {\n metadata: { updatedAt: Date.now(), version: 1 },\n});\n\n// Read with metadata\nconst { value, metadata } = await env.CACHE.getWithMetadata(\"user:123\");\n```\n\n### List Keys\n\n```ts\n// List all keys with prefix\nconst list = await env.CACHE.list({ prefix: \"user:\" });\n\nfor (const key of list.keys) {\n console.log(key.name, key.metadata);\n}\n\n// Paginate through results\nlet cursor = undefined;\ndo {\n const result = await env.CACHE.list({ prefix: \"user:\", cursor });\n // Process result.keys\n cursor = result.cursor;\n} while (cursor);\n```\n\n## Expiration and TTL\n\nSet automatic expiration on keys:\n\n```ts\n// Expire in 1 hour (3600 seconds)\nawait env.CACHE.put(\"session:abc\", sessionData, {\n expirationTtl: 3600,\n});\n\n// Expire at specific timestamp\nawait env.CACHE.put(\"session:abc\", sessionData, {\n expiration: Math.floor(Date.now() / 1000) + 3600,\n});\n```\n\nCommon TTL patterns:\n\n| Use Case | TTL |\n|----------|-----|\n| Session tokens | 24 hours (86400) |\n| API cache | 5 minutes (300) |\n| Rate limit counters | 1 minute (60) |\n| Feature flags | No expiration |\n\n## Session Storage\n\nBetter Auth uses KV for session storage in Cloudflare deployments:\n\n```ts\n// Quickback configures this automatically\nauth: defineAuth(\"better-auth\", {\n session: {\n storage: \"kv\", // Uses SESSIONS namespace\n },\n}),\n```\n\nSessions are stored as:\n- Key: `session:{sessionId}`\n- Value: Serialized session object\n- TTL: Configured session expiration\n\n## Caching Patterns\n\n### Cache-Aside Pattern\n\n```ts\nasync function getUser(userId: string) {\n // Check cache 
first\n const cached = await env.CACHE.get(`user:${userId}`);\n if (cached) {\n return JSON.parse(cached);\n }\n\n // Fetch from database\n const user = await db.select().from(users).where(eq(users.id, userId));\n\n // Store in cache\n await env.CACHE.put(`user:${userId}`, JSON.stringify(user), {\n expirationTtl: 300, // 5 minutes\n });\n\n return user;\n}\n```\n\n### Cache Invalidation\n\n```ts\n// Invalidate on update\nasync function updateUser(userId: string, data: UserUpdate) {\n await db.update(users).set(data).where(eq(users.id, userId));\n await env.CACHE.delete(`user:${userId}`);\n}\n```\n\n## wrangler.toml Reference\n\n```toml\n# Production namespace\n[[kv_namespaces]]\nbinding = \"SESSIONS\"\nid = \"abc123...\"\n\n# Preview namespace (for wrangler dev)\n[[kv_namespaces]]\nbinding = \"SESSIONS\"\nid = \"def456...\"\npreview_id = \"ghi789...\"\n```\n\n## Limitations\n\n- **Value size** - Maximum 25 MB per value\n- **Key size** - Maximum 512 bytes\n- **Write limits** - 1 write per second per key\n- **Eventual consistency** - Changes may take up to 60 seconds to propagate\n- **Not for hot writes** - Use Durable Objects for high-write scenarios\n\nKV is optimized for read-heavy workloads. For write-heavy use cases or strong consistency, consider [Durable Objects](/stack/realtime/durable-objects)."
323
+ },
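Editor's note: the KV snippets on this page read from `env.SESSIONS` and `env.CACHE` without showing how those bindings are typed. A minimal sketch of a typed Worker entry point follows, assuming the binding names from the `wrangler.toml` above and the `KVNamespace` / `ExportedHandler` types from `@cloudflare/workers-types`.

```typescript
// Typing the KV bindings declared in wrangler.toml — a minimal sketch.
// Binding names must match the [[kv_namespaces]] entries exactly.
/// <reference types="@cloudflare/workers-types" />

interface Env {
  SESSIONS: KVNamespace;
  CACHE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Simple read-through: return a cached JSON value or a default.
    const flags = await env.CACHE.get("feature-flags", "json");
    return Response.json(flags ?? { flags: {} });
  },
} satisfies ExportedHandler<Env>;
```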
324
+ "stack/storage/r2": {
325
+ "title": "File Storage (R2)",
326
+ "content": "Quickback provides built-in file storage using Cloudflare R2, with role-based access control and a dedicated files worker for serving.\n\n## Architecture\n\n```\n┌─────────────────────────────────────────────────────────────────────┐\n│ Your Application │\n│ │\n│ Upload/Manage Files Serve Files │\n│ ───────────────────── ────────────── │\n│ api.yourdomain.com files.yourdomain.com │\n│ /storage/v1/* /* │\n│ │\n│ ┌─────────────────────┐ ┌─────────────────────┐ │\n│ │ API Worker │ │ Files Worker │ │\n│ │ │ │ │ │\n│ │ POST /bucket │ │ GET /public/* │ │\n│ │ GET /bucket │ │ → No auth │ │\n│ │ POST /object/* │ │ │ │\n│ │ DELETE /object/* │ │ GET /* │ │\n│ │ │ │ → Session + RBAC │ │\n│ └──────────┬──────────┘ └──────────┬──────────┘ │\n│ │ │ │\n│ └───────────┬───────────────────┘ │\n│ │ │\n│ ▼ │\n│ ┌─────────────────────┐ │\n│ │ R2 Bucket │ │\n│ │ quickback-files │ │\n│ └─────────────────────┘ │\n└─────────────────────────────────────────────────────────────────────┘\n```\n\n## Enabling File Storage\n\nAdd the `fileStorage` provider to your `quickback.config.ts`:\n\n```typescript\nexport default {\n name: 'my-app',\n providers: {\n runtime: { name: 'cloudflare' },\n database: { name: 'cloudflare-d1' },\n auth: { name: 'better-auth' },\n fileStorage: {\n name: 'cloudflare-r2',\n config: {\n binding: 'R2_BUCKET',\n bucketName: 'my-app-files',\n filesBinding: 'FILES_DB',\n maxFileSize: 10 * 1024 * 1024, // 10MB\n allowedTypes: ['image/jpeg', 'image/png', 'image/webp'],\n },\n },\n },\n};\n```\n\n## API Endpoints\n\n### Storage API (api.yourdomain.com/storage/v1)\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| POST | `/bucket` | Create a bucket |\n| GET | `/bucket` | List buckets |\n| GET | `/bucket/:name` | Get bucket info |\n| DELETE | `/bucket/:name` | Delete bucket (must be empty) |\n| POST | `/object/:bucket/*path` | Upload file |\n| GET | `/object/:bucket/*path` | Download file |\n| HEAD | `/object/:bucket/*path` | Get file metadata |\n| DELETE | `/object/:bucket/*path` | Delete file (soft delete) |\n| GET | `/object` | List objects |\n| POST | `/url` | Get file URL for serving |\n\n### Files Worker (files.yourdomain.com)\n\n| Method | Path | Auth Required |\n|--------|------|---------------|\n| GET/HEAD | `/public/*` | No |\n| GET/HEAD | `/*` | Yes (session + RBAC) |\n\n## Buckets\n\nBuckets organize files and define access control policies.\n\n### Creating a Bucket\n\n```bash\nPOST /storage/v1/bucket\n{\n \"name\": \"avatars\",\n \"readScope\": \"organization\",\n \"writeScope\": \"organization\",\n \"readRoles\": [\"admin\", \"member\"],\n \"writeRoles\": [\"admin\"],\n \"deleteRoles\": [\"admin\"]\n}\n```\n\n### Scope Options\n\n| Scope | Read Behavior | Write Behavior |\n|-------|---------------|----------------|\n| `public` | Anyone can read | N/A |\n| `organization` | Org members only | Org members only |\n| `user` | Owner only | Owner only |\n\n### Role-Based Access\n\nYou can restrict operations to specific roles:\n\n```json\n{\n \"readRoles\": [\"admin\", \"member\"],\n \"writeRoles\": [\"admin\", \"editor\"],\n \"deleteRoles\": [\"admin\"]\n}\n```\n\nAn empty array `[]` means no role restriction (all authenticated users).\n\n## Uploading Files\n\n```bash\nPOST /storage/v1/object/avatars/profile.jpg\nContent-Type: image/jpeg\nContent-Length: 12345\n\n<binary data>\n```\n\nResponse:\n```json\n{\n \"id\": \"obj_123\",\n \"key\": \"org_abc/avatars/profile.jpg\",\n \"bucket\": \"avatars\",\n \"name\": \"profile.jpg\",\n \"size\": 12345,\n 
\"mimeType\": \"image/jpeg\",\n \"readScope\": \"organization\"\n}\n```\n\n## Serving Files\n\n### Public Files\n\nFiles in buckets with `readScope: \"public\"` are stored with a `public/` prefix:\n\n```\nhttps://files.yourdomain.com/public/org_abc/avatars/logo.png\n```\n\nNo authentication required.\n\n### Private Files\n\nPrivate files require a valid Better Auth session cookie:\n\n```\nhttps://files.yourdomain.com/org_abc/documents/report.pdf\n```\n\nThe files worker:\n1. Validates the session token\n2. Checks the user's organization matches\n3. Verifies role permissions (if `readRoles` configured)\n4. Serves the file or returns 403\n\n## Generated Files\n\nWhen file storage is configured, the compiler generates:\n\n| File | Purpose |\n|------|---------|\n| `src/routes/storage.ts` | Storage API routes |\n| `src/files.ts` | Files database schema (buckets, objects) |\n| `cloudflare-workers/files/index.ts` | Files worker for serving |\n| `cloudflare-workers/files/wrangler.toml` | Files worker config |\n\n## Deployment\n\nAfter compiling:\n\n1. **Create the R2 bucket:**\n ```bash\n wrangler r2 bucket create my-app-files\n ```\n\n2. **Create the files database:**\n ```bash\n wrangler d1 create my-app-files\n ```\n\n3. **Run migrations:**\n ```bash\n wrangler d1 migrations apply my-app-files --local\n wrangler d1 migrations apply my-app-files --remote\n ```\n\n4. **Deploy the API:**\n ```bash\n wrangler deploy\n ```\n\n5. **Deploy the files worker:**\n ```bash\n cd cloudflare-workers/files\n wrangler deploy\n ```\n\n## Configuration Reference\n\n| Option | Type | Default | Description |\n|--------|------|---------|-------------|\n| `binding` | `string` | `R2_BUCKET` | R2 bucket binding name |\n| `bucketName` | `string` | Required | R2 bucket name |\n| `filesBinding` | `string` | `FILES_DB` | Files metadata D1 binding |\n| `maxFileSize` | `number` | 10MB | Max upload size in bytes |\n| `allowedTypes` | `string[]` | Images only | Allowed MIME types |\n| `publicDomain` | `string` | - | Custom domain for files worker |\n\n## Security Model\n\n- **Upload security**: Enforced by API worker (auth middleware, org check, role check)\n- **Serve security**: Enforced by files worker (session validation, metadata RBAC)\n- **Soft deletes**: Files are marked deleted in metadata but retained in R2\n- **Tenant isolation**: All files are prefixed with organization ID"
327
+ },
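Editor's note: the upload section above shows the raw HTTP request. A browser-side sketch against the documented `POST /storage/v1/object/:bucket/*path` endpoint follows; the API base URL is a placeholder, and cookie-based session auth (`credentials: "include"`) is assumed to match the Better Auth session used elsewhere in these docs.

```typescript
// Uploading a file from a browser client — a sketch against the documented
// storage endpoint. The response shape mirrors the example response above.
async function uploadAvatar(file: File): Promise<{ id: string; key: string }> {
  const res = await fetch(
    `https://api.yourdomain.com/storage/v1/object/avatars/${encodeURIComponent(file.name)}`,
    {
      method: "POST",
      credentials: "include", // send the Better Auth session cookie
      headers: { "Content-Type": file.type || "application/octet-stream" },
      body: file,
    }
  );

  if (!res.ok) {
    throw new Error(`Upload failed: ${res.status} ${await res.text()}`);
  }
  // { id, key, bucket, name, size, mimeType, readScope }
  return res.json();
}
```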
328
+ "stack/storage/using-kv": {
329
+ "title": "Using KV",
330
+ "content": "Cloudflare KV is used by the Quickback Stack for session storage, caching, and rate limiting.\n\n## Session Storage\n\nBetter Auth stores sessions in KV by default for fast edge-based lookups:\n\n```typescript\n// KV session adapter (auto-configured)\nconst session = await kv.get(`session:${sessionId}`);\n```\n\n## Rate Limiting\n\nKV powers the rate limiting middleware:\n\n```typescript\n// Rate limit key structure\nconst key = `ratelimit:${ip}:${endpoint}`;\n```\n\n## Caching\n\nUse KV for caching frequently accessed data with configurable TTL.\n\n## Related\n\n- [KV Setup](/stack/storage/kv) — Namespace and bindings configuration\n- [Auth Security](/stack/auth/security) — Rate limiting configuration"
331
+ },
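Editor's note: this page names the rate-limit key structure but shows no counter logic. A rough fixed-window limiter sketch follows, using that key shape. KV enforces a minimum `expirationTtl` of 60 seconds and is eventually consistent, so treat this as approximate; strict limits belong in Durable Objects, as the KV setup page notes.

```typescript
// Approximate fixed-window rate limiting on KV, using the documented key shape.
async function checkRateLimit(
  kv: KVNamespace,
  ip: string,
  endpoint: string,
  limit = 60
): Promise<boolean> {
  const key = `ratelimit:${ip}:${endpoint}`;
  const current = parseInt((await kv.get(key)) ?? "0", 10);

  if (current >= limit) {
    return false; // over the limit for this window
  }

  // Not atomic: concurrent requests may read the same count before writing.
  await kv.put(key, String(current + 1), { expirationTtl: 60 });
  return true;
}
```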
332
+ "stack/storage/using-r2": {
333
+ "title": "Using R2",
334
+ "content": "Cloudflare R2 provides S3-compatible object storage for file uploads in the Quickback Stack.\n\n## Upload Flow\n\nThe generated API includes upload endpoints:\n\n1. Client requests a presigned upload URL\n2. Client uploads directly to R2\n3. API stores the file reference in D1\n\n## Download Flow\n\nFiles can be accessed via the generated download endpoint with role-based access control.\n\n## Role-Based Access\n\nFile access respects the same security layers as your API:\n\n- **Firewall** — Users can only access files belonging to their organization\n- **Access** — Role-based download permissions\n\n## Related\n\n- [R2 Setup](/stack/storage/r2) — Bucket and bindings configuration\n- [Avatars](/account-ui/features/avatars) — Avatar upload UI"
335
+ },
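Editor's note: downloads of private files go through the files worker described on the R2 setup page (session cookie, organization check, optional `readRoles`). A client-side sketch follows; `files.yourdomain.com` is a placeholder domain and the path is the org-prefixed key format shown in those docs.

```typescript
// Fetching a private file through the files worker — a sketch. The session
// cookie must be sent; a 403 means the organization or role check failed.
async function downloadPrivateFile(orgId: string, path: string): Promise<Blob> {
  const res = await fetch(`https://files.yourdomain.com/${orgId}/${path}`, {
    credentials: "include", // Better Auth session cookie checked by the files worker
  });

  if (res.status === 403) {
    throw new Error("Not permitted to read this file (readRoles check failed)");
  }
  if (!res.ok) {
    throw new Error(`Download failed: ${res.status}`);
  }
  return res.blob();
}
```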
336
+ "stack/vector/embeddings": {
337
+ "title": "Automatic Embeddings",
338
+ "content": "Quickback can automatically generate embeddings for your data using Cloudflare Queues and Workers AI. When configured, INSERT and UPDATE operations automatically enqueue embedding jobs that are processed asynchronously.\n\n## Enabling Embeddings\n\nAdd an `embeddings` configuration to your resource definition:\n\n```typescript\n// claims/resource.ts\n\nexport default defineResource(claims, {\n firewall: { organization: {} },\n\n embeddings: {\n fields: ['content'], // Fields to concatenate and embed\n model: '@cf/baai/bge-base-en-v1.5', // Embedding model (optional)\n onInsert: true, // Auto-embed on create (default: true)\n onUpdate: ['content'], // Re-embed when these fields change\n embeddingColumn: 'embedding', // Column to store embedding\n metadata: ['storyId'], // Metadata for Vectorize index\n },\n\n crud: {\n create: { access: { roles: ['member'] } },\n update: { access: { roles: ['member'] } },\n },\n});\n```\n\n## Configuration Options\n\n| Option | Type | Default | Description |\n|--------|------|---------|-------------|\n| `fields` | `string[]` | **Required** | Fields to concatenate and embed |\n| `model` | `string` | `'@cf/baai/bge-base-en-v1.5'` | Workers AI embedding model |\n| `onInsert` | `boolean` | `true` | Embed on INSERT operations |\n| `onUpdate` | `boolean \\| string[]` | `true` | Embed on UPDATE; array limits to specific fields |\n| `embeddingColumn` | `string` | `'embedding'` | Column to store the embedding vector |\n| `metadata` | `string[]` | `[]` | Fields to include in Vectorize metadata |\n\n### onUpdate Options\n\n```typescript\n// Always re-embed on any update\nonUpdate: true\n\n// Never re-embed on update\nonUpdate: false\n\n// Only re-embed when specific fields change\nonUpdate: ['content', 'title']\n```\n\n## How It Works\n\n```\n┌─────────────────────────────────────────────────────────────────────┐\n│ Main API Worker │\n│ │\n│ ┌─────────────────────┐ ┌────────────────────┐ │\n│ │ POST /claims │ │ Queue Consumer │ │\n│ │ │ │ │ │\n│ │ 1. Auth middleware │ │ 1. Workers AI │ │\n│ │ 2. Firewall │ │ embed() │ │\n│ │ 3. Guards │ │ 2. D1 update() │ │\n│ │ 4. Insert to D1 │ │ 3. Vectorize │ │\n│ │ 5. Enqueue job ─────┼───▶│ upsert() │ │\n│ └─────────────────────┘ └────────────────────┘ │\n│ ▲ │\n│ ┌─────────────────┴──────────────────┐ │\n│ │ EMBEDDINGS_QUEUE │ │\n│ └────────────────────────────────────┘ │\n└─────────────────────────────────────────────────────────────────────┘\n```\n\n1. **API Request** - POST/PATCH arrives and passes through auth, firewall, guards\n2. **Database Insert** - Record is created/updated in D1\n3. **Enqueue Job** - Embedding job is sent to Cloudflare Queue\n4. 
**Queue Consumer** - Processes job asynchronously:\n - Calls Workers AI to generate embedding\n - Updates D1 with embedding vector\n - Optionally upserts to Vectorize index\n\n## Security Model\n\n**Security is enforced at enqueue time, not consume time:**\n\n- Jobs are only enqueued after passing all security checks (auth, firewall, guards)\n- The queue consumer is an internal process that executes pre-validated jobs\n- If a user can't create a claim, they can't trigger an embedding job\n\n## Generic Embeddings API\n\nIn addition to automatic embeddings on CRUD operations, Quickback generates a generic embeddings API endpoint that allows you to trigger embeddings on arbitrary content.\n\n### POST /api/v1/embeddings\n\nGenerate an embedding for any text content:\n\n```bash\ncurl -X POST https://your-api.workers.dev/api/v1/embeddings \\\n -H \"Content-Type: application/json\" \\\n -H \"Cookie: better-auth.session_token=...\" \\\n -d '{\n \"content\": \"Text to embed\",\n \"model\": \"@cf/baai/bge-base-en-v1.5\",\n \"table\": \"claims\",\n \"id\": \"clm_123\"\n }'\n```\n\n#### Request Body\n\n| Field | Type | Required | Description |\n|-------|------|----------|-------------|\n| `content` | `string` | Yes | Text to generate embedding for |\n| `model` | `string` | No | Embedding model (defaults to table config or `@cf/baai/bge-base-en-v1.5`) |\n| `table` | `string` | No | Table to store embedding back to |\n| `id` | `string` | Conditional | Record ID to update (required if `table` is specified) |\n\n#### Response\n\n```json\n{\n \"queued\": true,\n \"jobId\": \"550e8400-e29b-41d4-a716-446655440000\",\n \"table\": \"claims\",\n \"id\": \"clm_123\",\n \"model\": \"@cf/baai/bge-base-en-v1.5\"\n}\n```\n\n### GET /api/v1/embeddings/tables\n\nList tables that have embeddings configured:\n\n```bash\ncurl https://your-api.workers.dev/api/v1/embeddings/tables \\\n -H \"Cookie: better-auth.session_token=...\"\n```\n\nResponse:\n\n```json\n{\n \"tables\": [\n {\n \"name\": \"claims\",\n \"embeddingColumn\": \"embedding\",\n \"model\": \"@cf/baai/bge-base-en-v1.5\"\n }\n ]\n}\n```\n\n### Authentication & Authorization\n\nThe generic embeddings API requires authentication and uses the `activeOrgId` from the user's context to enforce organization-level isolation. 
Embedding jobs are scoped to the user's current organization.\n\n### Use Cases\n\nThe generic embeddings API is useful for:\n\n- **Batch embedding**: Embed content without going through CRUD routes\n- **Re-embedding**: Force re-generation of embeddings for existing records\n- **Preview embeddings**: Test embedding generation before persisting\n- **External content**: Embed content that doesn't fit your defined schemas\n\n## Generated Files\n\nWhen embeddings are configured, the compiler generates:\n\n| File | Purpose |\n|------|---------|\n| `src/queue-consumer.ts` | Queue consumer handler for processing embedding jobs |\n| `src/routes/embeddings.ts` | Generic embeddings API routes |\n| `wrangler.toml` | Queue producer/consumer bindings, AI binding |\n| `src/env.d.ts` | `EMBEDDINGS_QUEUE`, `AI` types |\n| `src/index.ts` | Exports `queue` handler, mounts `/api/v1/embeddings` |\n\n### wrangler.toml additions\n\n```toml\n# Embeddings Queue\n[[queues.producers]]\nqueue = \"your-app-embeddings-queue\"\nbinding = \"EMBEDDINGS_QUEUE\"\n\n[[queues.consumers]]\nqueue = \"your-app-embeddings-queue\"\nmax_batch_size = 10\nmax_batch_timeout = 30\nmax_retries = 3\n\n# Workers AI\n[ai]\nbinding = \"AI\"\n```\n\n## Multiple Fields\n\nEmbed multiple fields by concatenating them:\n\n```typescript\nembeddings: {\n fields: ['title', 'content', 'summary'], // Joined with spaces\n // ...\n}\n```\n\nGenerated embedding text: `\"${title} ${content} ${summary}\"`\n\n## Vectorize Integration\n\nIf you have a Vectorize index configured, embeddings are automatically upserted:\n\n```typescript\n// quickback.config.ts\nproviders: {\n database: {\n config: {\n vectorizeIndexName: 'claims-embeddings', // Your Vectorize index\n vectorizeBinding: 'VECTORIZE',\n },\n },\n},\n```\n\nThe queue consumer will:\n1. Generate the embedding via Workers AI\n2. Store the vector in D1 (JSON string)\n3. Upsert to Vectorize with metadata\n\n### Vectorize Metadata\n\nInclude fields in Vectorize metadata for filtering:\n\n```typescript\nembeddings: {\n fields: ['content'],\n metadata: ['storyId', 'organizationId', 'claimType'],\n}\n```\n\nEnables queries like:\n```typescript\nconst results = await env.VECTORIZE.query(vector, {\n topK: 10,\n filter: { storyId: 'story_123' }\n});\n```\n\n## Schema Requirements\n\nYour schema must include the embedding column:\n\n```typescript\n// claims/schema.ts\n\nexport const claims = sqliteTable(\"claims\", {\n id: text(\"id\").primaryKey(),\n content: text(\"content\").notNull(),\n storyId: text(\"story_id\"),\n organizationId: text(\"organization_id\").notNull(),\n\n // Embedding column - stores JSON array of floats\n embedding: text(\"embedding\"),\n});\n```\n\n## Supported Models\n\nAny Workers AI embedding model can be used:\n\n| Model | Dimensions | Notes |\n|-------|------------|-------|\n| `@cf/baai/bge-base-en-v1.5` | 768 | Default, good general-purpose |\n| `@cf/baai/bge-small-en-v1.5` | 384 | Faster, smaller |\n| `@cf/baai/bge-large-en-v1.5` | 1024 | Higher quality |\n\nSee [Workers AI Models](https://developers.cloudflare.com/workers-ai/models/) for the full list.\n\n## Deployment\n\nAfter compiling with embeddings:\n\n1. **Create the queue:**\n ```bash\n wrangler queues create your-app-embeddings-queue\n ```\n\n2. 
**Deploy the worker:**\n ```bash\n wrangler deploy\n ```\n\nThe single worker handles both HTTP requests and queue consumption.\n\n## Monitoring\n\nQueue metrics are available in the Cloudflare dashboard:\n- Messages enqueued\n- Messages processed\n- Retry count\n- Consumer lag\n\nCheck queue health:\n```bash\nwrangler queues list\nwrangler queues consumer your-app-embeddings-queue\n```\n\n## Error Handling\n\nFailed jobs are automatically retried (up to `max_retries`):\n\n```typescript\n// In queue consumer\ntry {\n const embedding = await env.AI.run(job.model, { text: job.content });\n // ...\n message.ack();\n} catch (error) {\n console.error('[Queue] Embedding job failed:', error);\n message.retry(); // Will retry up to 3 times\n}\n```\n\nAfter max retries, the message is dead-lettered (if configured) or dropped.\n\n## Similarity Search Service\n\nFor applications that need typed similarity search with classification, you can define embedding search configurations using `defineEmbedding`. This generates a service layer with typed search functions.\n\n### Defining Search Configurations\n\nCreate a file in `services/embeddings/`:\n\n```typescript\n// services/embeddings/claim-similarity.ts\n\nexport default defineEmbedding({\n name: 'claim-similarity',\n description: 'Find similar claims with classification',\n\n // Source configuration\n source: 'claims', // Table name\n vectorIndex: 'VECTORIZE', // Binding name\n model: '@cf/baai/bge-base-en-v1.5',\n\n // Search configuration\n search: {\n threshold: 0.60, // Minimum similarity (default: 0.60)\n limit: 10, // Max results (default: 10)\n classify: {\n DUPLICATE: 0.90, // Score >= 0.90 = DUPLICATE\n CONFIRMS: 0.85, // Score >= 0.85 = CONFIRMS\n RELATED: 0.75, // Score >= 0.75 = RELATED\n },\n filters: ['storyId', 'organizationId'], // Filterable fields\n },\n\n // Generation triggers (beyond CRUD)\n triggers: {\n onQueueMessage: 'embed_claim', // Listen for queue messages\n },\n});\n```\n\n### Generated Service Layer\n\nAfter compilation, a `createEmbeddings()` helper is generated in `src/lib/embeddings.ts`:\n\n```typescript\n\nexport const execute: ActionExecutor = async ({ ctx, input }) => {\n const embeddings = createEmbeddings(ctx.env);\n\n // Search with automatic classification\n const similar = await embeddings.claimSimilarity.search(\n 'Police arrested three suspects in downtown robbery',\n {\n storyId: 'story_123',\n limit: 5,\n threshold: 0.70,\n }\n );\n\n // Returns: [{ id, score: 0.87, classification: 'CONFIRMS', metadata }]\n for (const match of similar) {\n console.log(`${match.classification}: ${match.id} (${match.score})`);\n }\n\n return { similar };\n};\n```\n\n### Classification Thresholds\n\nResults are automatically classified based on similarity score:\n\n| Classification | Default Threshold | Meaning |\n|----------------|-------------------|---------|\n| `DUPLICATE` | >= 0.90 | Near-identical content |\n| `CONFIRMS` | >= 0.85 | Strongly supports same claim |\n| `RELATED` | >= 0.75 | Topically related |\n| `NEW` | < 0.75 | No significant match |\n\nCustomize thresholds per use case:\n\n```typescript\nsearch: {\n classify: {\n DUPLICATE: 0.95, // Stricter duplicate detection\n CONFIRMS: 0.88,\n RELATED: 0.70, // Broader \"related\" category\n },\n}\n```\n\n### Gray Zone Detection\n\nFor cases where automatic classification isn't sufficient, use gray zone detection to get matches that need semantic evaluation:\n\n```typescript\nconst results = await embeddings.claimSimilarity.findWithGrayZone(\n 'Some claim text',\n 
{ min: 0.60, max: 0.85 }\n);\n\n// Returns structured results:\n// {\n// high_confidence: [...], // Score >= 0.85 (auto-classified)\n// gray_zone: [...] // 0.60 <= score < 0.85 (needs review)\n// }\n\n// Process high confidence matches automatically\nfor (const match of results.high_confidence) {\n await markAsDuplicate(match.id);\n}\n\n// Queue gray zone for semantic review\nfor (const match of results.gray_zone) {\n await queueForReview(match.id, match.score);\n}\n```\n\n### Generate Embeddings Directly\n\nGenerate embeddings without searching:\n\n```typescript\nconst embeddings = createEmbeddings(ctx.env);\n\n// Get raw embedding vector\nconst vector = await embeddings.claimSimilarity.embed(\n 'Text to embed'\n);\n// Returns: number[] (768 dimensions for bge-base)\n```\n\n### Multiple Search Configurations\n\nDefine different configurations for different use cases:\n\n```typescript\n// services/embeddings/story-similarity.ts\nexport default defineEmbedding({\n name: 'story-similarity',\n source: 'stories',\n search: {\n threshold: 0.65,\n limit: 20,\n classify: {\n DUPLICATE: 0.92,\n CONFIRMS: 0.80,\n RELATED: 0.65,\n },\n filters: ['streamId'],\n },\n});\n\n// services/embeddings/material-similarity.ts\nexport default defineEmbedding({\n name: 'material-similarity',\n source: 'materials',\n search: {\n threshold: 0.70,\n limit: 5,\n classify: {\n DUPLICATE: 0.95,\n CONFIRMS: 0.90,\n RELATED: 0.80,\n },\n filters: ['sourceType', 'organizationId'],\n },\n});\n```\n\nUsage:\n```typescript\nconst embeddings = createEmbeddings(ctx.env);\n\n// Different search behaviors for different content types\nconst similarClaims = await embeddings.claimSimilarity.search(text, opts);\nconst similarStories = await embeddings.storySimilarity.search(text, opts);\nconst similarMaterials = await embeddings.materialSimilarity.search(text, opts);\n```\n\n### Table-Level vs Service-Level\n\n| Feature | Table-level (`defineTable`) | Service-level (`defineEmbedding`) |\n|---------|----------------------------|----------------------------------|\n| Auto-embed on INSERT | ✅ | ❌ |\n| Auto-embed on UPDATE | ✅ | ❌ |\n| Custom search functions | ❌ | ✅ |\n| Classification thresholds | ❌ | ✅ |\n| Gray zone detection | ❌ | ✅ |\n| Filterable searches | ❌ | ✅ |\n| Queue message triggers | ❌ | ✅ |\n\n**Use both together:**\n- `defineTable` with `embeddings` config for automatic embedding generation\n- `defineEmbedding` for typed search functions with classification"
339
+ },
340
+ "stack/vector": {
341
+ "title": "Vector & AI",
342
+ "content": "The Quickback Stack integrates with Cloudflare Workers AI to automatically generate vector embeddings for semantic search. Embeddings are generated asynchronously via Cloudflare Queues and stored in both D1 (as JSON) and optionally in a Vectorize index for fast similarity search.\n\n## Two Levels of Configuration\n\n| Level | API | Purpose |\n|-------|-----|---------|\n| Table-level | `embeddings` in `defineTable()` | Auto-embed on INSERT/UPDATE |\n| Service-level | `defineEmbedding()` | Typed search functions with classification |\n\nUse both together: table-level for auto-generation, service-level for search.\n\n## How It Works\n\n```\nCRUD operation → Enqueue job → Queue consumer\n │\n ├─ Workers AI (generate embedding)\n ├─ D1 (store vector as JSON)\n └─ Vectorize (upsert for search)\n```\n\n1. A record is created or updated via the API\n2. The compiler auto-enqueues an embedding job to `EMBEDDINGS_QUEUE`\n3. The queue consumer generates the embedding using Workers AI\n4. The vector is stored in D1 and optionally upserted to Vectorize\n\n## Cloudflare Bindings\n\n| Binding | Purpose |\n|---------|---------|\n| `AI` | Workers AI for embedding generation |\n| `VECTORIZE` | Vectorize index for similarity search |\n| `EMBEDDINGS_QUEUE` | Queue for async processing |\n\n## Default Model\n\nThe default embedding model is `@cf/baai/bge-base-en-v1.5` (768 dimensions). You can configure a different model per table.\n\n## Pages\n\n- **[Automatic Embeddings](/stack/vector/embeddings)** — `defineTable()` embeddings config, queue consumer, Vectorize integration, and `defineEmbedding()` search service\n- **[Using Embeddings](/stack/vector/using-embeddings)** — Embeddings API endpoints, semantic search, and practical usage"
343
+ },
344
+ "stack/vector/using-embeddings": {
345
+ "title": "Using Embeddings",
346
+ "content": "This page covers the runtime API for working with embeddings — the generated endpoints, the search service, and practical usage patterns.\n\n## Embeddings API\n\nWhen any feature has embeddings configured, the compiler generates these endpoints:\n\n### Generate Embedding\n\n```\nPOST /api/v1/embeddings\n```\n\nEnqueue an embedding job for arbitrary content:\n\n```bash\ncurl -X POST https://api.example.com/api/v1/embeddings \\\n -H \"Content-Type: application/json\" \\\n -H \"Cookie: better-auth.session_token=...\" \\\n -d '{\n \"content\": \"Text to generate an embedding for\",\n \"model\": \"@cf/baai/bge-base-en-v1.5\",\n \"table\": \"claims\",\n \"id\": \"clm_123\"\n }'\n```\n\n**Request body:**\n\n| Field | Type | Required | Description |\n|-------|------|----------|-------------|\n| `content` | string | Yes | Text to embed |\n| `model` | string | No | Embedding model (defaults to table config or `@cf/baai/bge-base-en-v1.5`) |\n| `table` | string | No | Table to store the embedding in |\n| `id` | string | Conditional | Record ID (required when `table` is provided) |\n\n**Response:**\n\n```json\n{\n \"queued\": true,\n \"jobId\": \"550e8400-e29b-41d4-a716-446655440000\",\n \"table\": \"claims\",\n \"id\": \"clm_123\",\n \"model\": \"@cf/baai/bge-base-en-v1.5\"\n}\n```\n\nThis endpoint is useful for:\n- Re-embedding existing records\n- Embedding content that doesn't go through CRUD routes\n- Batch embedding via scripts\n\n### List Embedding Tables\n\n```\nGET /api/v1/embeddings/tables\n```\n\nList tables that have embeddings configured:\n\n```json\n{\n \"tables\": [\n {\n \"name\": \"claims\",\n \"embeddingColumn\": \"embedding\",\n \"model\": \"@cf/baai/bge-base-en-v1.5\"\n }\n ]\n}\n```\n\n## Search Service\n\nFor typed similarity search with classification, use the `createEmbeddings()` service generated from `defineEmbedding()` configurations.\n\n### Basic Search\n\n```typescript\nconst embeddings = createEmbeddings(ctx.env);\n\nconst results = await embeddings.claimSimilarity.search(\n \"Police arrested three suspects\",\n {\n storyId: \"story_123\",\n limit: 5,\n threshold: 0.70,\n }\n);\n\n// Returns: [{ id, score, classification, metadata }]\nfor (const match of results) {\n console.log(`${match.classification}: ${match.id} (score: ${match.score})`);\n}\n```\n\n### Classification\n\nResults are automatically classified based on similarity score:\n\n| Classification | Default Threshold | Meaning |\n|----------------|-------------------|---------|\n| `DUPLICATE` | >= 0.90 | Near-identical content |\n| `CONFIRMS` | >= 0.85 | Strongly supports same claim |\n| `RELATED` | >= 0.75 | Topically related |\n| `NEW` | < 0.75 | No significant match |\n\n### Gray Zone Detection\n\nFor cases where automatic classification isn't sufficient, use gray zone detection:\n\n```typescript\nconst results = await embeddings.claimSimilarity.findWithGrayZone(\n \"Some claim text\",\n { min: 0.60, max: 0.85 }\n);\n\n// results.high_confidence — Score >= 0.85 (auto-classified)\n// results.gray_zone — 0.60 <= score < 0.85 (needs review)\n```\n\n### Raw Embedding\n\nGenerate an embedding vector without searching:\n\n```typescript\nconst vector = await embeddings.claimSimilarity.embed(\"Text to embed\");\n// Returns: number[] (768 dimensions for bge-base)\n```\n\n## Vectorize Queries\n\nIf your project uses a Vectorize index, you can query it directly for custom search logic:\n\n```typescript\n// Generate embedding for query text\nconst queryVector = await 
embeddings.claimSimilarity.embed(searchText);\n\n// Query Vectorize with metadata filters\nconst results = await ctx.env.VECTORIZE.query(queryVector, {\n topK: 10,\n filter: {\n storyId: \"story_123\",\n organizationId: ctx.activeOrgId,\n },\n});\n```\n\nThe `metadata` fields in your `defineTable` embeddings config are automatically included in the Vectorize index, enabling filtered searches.\n\n## Auto-Embed vs Manual\n\n| Trigger | How | Use Case |\n|---------|-----|----------|\n| Auto (INSERT) | Compiler enqueues after successful create | Default behavior |\n| Auto (UPDATE) | Compiler enqueues when watched fields change | Keeps embeddings fresh |\n| Manual (API) | `POST /api/v1/embeddings` | Re-embedding, batch jobs |\n| Manual (Queue) | `env.EMBEDDINGS_QUEUE.send(...)` | Custom pipelines |\n\n## Security\n\n- The embeddings API requires authentication\n- Jobs are scoped to the user's `activeOrgId`\n- Embedding jobs are only enqueued after all security checks pass (auth, firewall, guards)\n- The queue consumer is an internal process — it trusts pre-validated jobs\n\n## Supported Models\n\n| Model | Dimensions | Notes |\n|-------|------------|-------|\n| `@cf/baai/bge-base-en-v1.5` | 768 | Default, good general-purpose |\n| `@cf/baai/bge-small-en-v1.5` | 384 | Faster, smaller |\n| `@cf/baai/bge-large-en-v1.5` | 1024 | Higher quality |\n\nSee [Workers AI Models](https://developers.cloudflare.com/workers-ai/models/) for the full list.\n\n## Cloudflare Only\n\nEmbeddings require Cloudflare Workers AI and Queues. They are not available with the Bun or Node runtimes.\n\n## See Also\n\n- [Automatic Embeddings](/stack/vector/embeddings) — `defineTable()` config, `defineEmbedding()` search service, and Vectorize integration\n- [Queues](/stack/queues) — How embedding jobs are processed"
347
+ },
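Editor's note: the "Auto-Embed vs Manual" table mentions enqueueing jobs directly with `env.EMBEDDINGS_QUEUE.send(...)` but does not show the call. The sketch below is an assumption: the real job shape is defined by the generated queue consumer, and the fields here simply mirror the `POST /api/v1/embeddings` request body documented above.

```typescript
// Manually enqueueing an embedding job for a custom pipeline — a sketch.
// EMBEDDINGS_QUEUE is the queue binding from the generated wrangler.toml; the
// EmbeddingJob fields below are an assumption mirroring the embeddings API body.
interface EmbeddingJob {
  content: string;
  model?: string;
  table?: string;
  id?: string;
}

async function enqueueEmbedding(
  env: { EMBEDDINGS_QUEUE: Queue<EmbeddingJob> },
  job: EmbeddingJob
): Promise<void> {
  await env.EMBEDDINGS_QUEUE.send(job);
}

// Example: re-embed a single claim outside the CRUD path.
// await enqueueEmbedding(ctx.env, { content: claim.content, table: "claims", id: claim.id });
```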
348
+ "stack/webhooks/inbound": {
349
+ "title": "Inbound Webhooks",
350
+ "content": "Inbound webhooks allow external services (Stripe, Paddle, GitHub, etc.) to notify your API of events. Quickback generates endpoints that verify signatures, deduplicate events, and dispatch to your handler functions via a Cloudflare Queue.\n\n## Endpoint\n\n```\nPOST /webhooks/v1/inbound/:provider\n```\n\nThe `:provider` parameter identifies which service sent the webhook (e.g., `stripe`, `paddle`, `github`). Each provider has its own signature verification logic.\n\n## How It Works\n\n```\nExternal Service → POST /webhooks/v1/inbound/stripe\n │\n ├─ 1. Verify signature (HMAC-SHA256)\n ├─ 2. Parse event (provider-specific)\n ├─ 3. Deduplicate via externalId\n ├─ 4. Store in webhook_events table\n ├─ 5. Enqueue for async processing\n └─ 6. Return 200 OK immediately\n\n Queue Consumer\n │\n ├─ 1. Update status → 'processing'\n ├─ 2. Call registered handlers\n └─ 3. Update status → 'processed' or 'failed'\n```\n\nThe API returns `200 OK` immediately after enqueuing. Your handler logic runs asynchronously via the queue consumer, so external services don't time out waiting for your processing to complete.\n\n## Registering Handlers\n\nUse `onWebhookEvent()` in your action code to register handlers for specific events:\n\n```typescript\nonWebhookEvent(\"stripe:checkout.session.completed\", async (ctx) => {\n const { type, data, provider, env } = ctx;\n await createSubscription(data, env);\n});\n\n// Wildcard — handle all events from a provider\nonWebhookEvent(\"stripe:*\", async (ctx) => {\n console.log(`Received ${ctx.type} from ${ctx.provider}`);\n});\n```\n\n**Handler context:**\n\n| Field | Type | Description |\n|-------|------|-------------|\n| `type` | `string` | Full event type (e.g., `stripe:checkout.session.completed`) |\n| `data` | `unknown` | Parsed event payload |\n| `provider` | `string` | Provider name (e.g., `stripe`) |\n| `env` | `CloudflareBindings` | Cloudflare environment bindings |\n\n**Execution order:**\n1. Specific handlers run first (exact match on event type)\n2. Wildcard handlers run after\n3. All matching handlers run in parallel\n4. If any handler throws, the message is retried\n\n## Signature Verification\n\nEach provider verifies signatures differently. The secret is read from an environment variable.\n\n### Stripe\n\n**Environment variable:** `STRIPE_WEBHOOK_SECRET`\n\nStripe uses HMAC-SHA256 with a timestamp-based signature:\n\n```\nStripe-Signature: t=1614556800,v1=abc123...\n```\n\nThe signed payload is `<timestamp>.<raw_body>`. Verification checks:\n- HMAC matches\n- Timestamp is within 5 minutes (prevents replay attacks)\n\n### Custom Providers\n\nProviders implement this interface:\n\n```typescript\ninterface InboundProvider {\n name: string;\n verifySignature(payload: string, signature: string, secret: string): Promise<boolean>;\n parseEvent(payload: string): {\n type: string;\n data: unknown;\n externalId?: string;\n };\n}\n```\n\n## Idempotency\n\nEvents are deduplicated using the `externalId` field (e.g., Stripe's `event.id`). 
If an event with the same `externalId` has already been received, the endpoint returns `200 OK` without re-processing.\n\n## Database Schema\n\nInbound events are stored in the `webhook_events` table:\n\n| Column | Type | Description |\n|--------|------|-------------|\n| `id` | text | Primary key (`whe_<uuid>`) |\n| `provider` | text | Provider name |\n| `eventType` | text | Event type from provider |\n| `externalId` | text | Provider's event ID (for deduplication) |\n| `payload` | text | Raw JSON payload |\n| `status` | text | `received`, `processing`, `processed`, `failed` |\n| `attempts` | integer | Processing attempt count |\n| `processedAt` | timestamp | When successfully processed |\n| `error` | text | Error message if failed |\n| `createdAt` | timestamp | When received |\n\n## Admin Routes\n\nTwo admin-only routes are generated for monitoring inbound webhooks:\n\n### List Events\n\n```\nGET /webhooks/v1/inbound/events\n```\n\nReturns recent inbound events with their status. Requires admin role.\n\n### Retry Failed Event\n\n```\nPOST /webhooks/v1/inbound/events/:id/retry\n```\n\nRe-queues a failed event for processing. Requires admin role.\n\n## Error Handling\n\n- **Invalid signature** — Returns `401 Unauthorized`, event not stored\n- **Handler failure** — Event marked as `failed`, message retried (up to 3 times via queue)\n- **All retries exhausted** — Event remains in `failed` status; use the admin retry endpoint to re-process manually\n\n## Configuration\n\nWebhooks are enabled by setting `webhooksBinding` in your database config:\n\n```typescript\ndatabase: defineDatabase(\"cloudflare-d1\", {\n binding: \"DB\",\n webhooksBinding: \"WEBHOOKS_DB\",\n})\n```\n\nSet the provider secret as a Wrangler secret:\n\n```bash\nwrangler secret put STRIPE_WEBHOOK_SECRET\n```\n\n## See Also\n\n- [Outbound Webhooks](/stack/webhooks/outbound) — Send webhooks when data changes\n- [Queues](/stack/queues) — Background processing infrastructure"
351
+ },
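Editor's note: the page documents the `InboundProvider` interface but does not show an implementation. Below is a sketch for a hypothetical provider that signs the raw body with HMAC-SHA256 and sends the hex digest as its signature header, using Web Crypto (available in Workers). The interface is re-declared here only for self-containment; how a custom provider is registered with Quickback is not shown in these docs.

```typescript
// Interface as documented on this page.
interface InboundProvider {
  name: string;
  verifySignature(payload: string, signature: string, secret: string): Promise<boolean>;
  parseEvent(payload: string): { type: string; data: unknown; externalId?: string };
}

const encoder = new TextEncoder();

// A sketch of a custom provider: plain HMAC-SHA256 over the raw body, no timestamp.
const exampleProvider: InboundProvider = {
  name: "example",

  async verifySignature(payload, signature, secret) {
    const key = await crypto.subtle.importKey(
      "raw",
      encoder.encode(secret),
      { name: "HMAC", hash: "SHA-256" },
      false,
      ["sign"]
    );
    const mac = await crypto.subtle.sign("HMAC", key, encoder.encode(payload));
    const hex = [...new Uint8Array(mac)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
    return hex === signature; // note: not constant-time; acceptable for a sketch
  },

  parseEvent(payload) {
    const event = JSON.parse(payload);
    return {
      type: event.type,      // e.g. "invoice.paid"
      data: event.data,
      externalId: event.id,  // used for deduplication
    };
  },
};
```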
352
+ "stack/webhooks": {
353
+ "title": "Webhooks",
354
+ "content": "Quickback supports both inbound and outbound webhooks for integrating with external services. Inbound webhooks receive events from providers like Stripe, while outbound webhooks notify your users' integrations when data changes.\n\n## Enabling Webhooks\n\nSet `webhooksBinding` in your database provider config:\n\n```typescript\ndatabase: defineDatabase(\"cloudflare-d1\", {\n binding: \"DB\",\n webhooksBinding: \"WEBHOOKS_DB\",\n})\n```\n\nThis generates:\n- Webhook database schema (3 tables: events, endpoints, deliveries)\n- Inbound endpoint at `POST /webhooks/v1/inbound/:provider`\n- Outbound endpoint management API at `/webhooks/v1/endpoints`\n- Queue consumer for async delivery with retry\n- HMAC-SHA256 payload signing (Stripe-compatible format)\n\n## Infrastructure\n\nWebhooks use a Cloudflare Queue for reliable async processing:\n\n| Component | Binding | Purpose |\n|-----------|---------|---------|\n| Queue | `WEBHOOKS_QUEUE` | Process inbound events and deliver outbound webhooks |\n| Dead letter queue | `WEBHOOKS_DLQ` | Capture failed deliveries after max retries |\n| Database | `WEBHOOKS_DB` | Store events, endpoints, and delivery records |\n\n## Pages\n\n- [Inbound Webhooks](/stack/webhooks/inbound) — Receive webhooks from Stripe, Paddle, GitHub, etc.\n- [Outbound Webhooks](/stack/webhooks/outbound) — Send signed webhooks when data changes"
355
+ },
356
+ "stack/webhooks/outbound": {
357
+ "title": "Outbound Webhooks",
358
+ "content": "Outbound webhooks notify external services when events happen in your API. Users register webhook endpoints, subscribe to event types, and receive signed HTTP POST requests when events are emitted.\n\n## Overview\n\nOutbound webhooks are a multi-step system:\n\n1. **Users register endpoints** via the REST API\n2. **Your code emits events** using `emitWebhookEvent()`\n3. **Quickback queues deliveries** for each matching endpoint\n4. **The queue consumer delivers** with retry and exponential backoff\n5. **Payloads are signed** with HMAC-SHA256 (Stripe-compatible format)\n\n## Emitting Events\n\nCall `emitWebhookEvent()` in your actions or custom routes:\n\n```typescript\n\n// After creating a user\nconst newUser = await db.insert(users).values(data).returning();\n\nawait emitWebhookEvent(\n \"user.created\",\n newUser[0],\n { organizationId: ctx.activeOrgId },\n env\n);\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n|-----------|------|-------------|\n| `eventType` | `string` | Event name (e.g., `user.created`) |\n| `payload` | `unknown` | Event data (auto-wrapped with metadata) |\n| `options` | `object` | `{ organizationId?, userId? }` — scope for endpoint matching |\n| `env` | `CloudflareBindings` | Cloudflare environment bindings |\n\n**Note:** The compiler does NOT auto-emit events after CRUD operations. You must call `emitWebhookEvent()` explicitly where you want webhooks triggered.\n\n## Event Types\n\nQuickback provides recommended event naming conventions:\n\n| Category | Events |\n|----------|--------|\n| User | `user.created`, `user.updated`, `user.deleted` |\n| Subscription | `subscription.created`, `subscription.updated`, `subscription.cancelled`, `subscription.renewed` |\n| Organization | `organization.created`, `organization.updated`, `organization.deleted`, `organization.member_added`, `organization.member_removed` |\n| File | `file.uploaded`, `file.deleted` |\n\nYou can emit any custom event type — these are conventions, not restrictions.\n\n## Payload Format\n\nThe emitted payload is wrapped automatically:\n\n```json\n{\n \"type\": \"user.created\",\n \"data\": {\n \"id\": \"usr_abc123\",\n \"name\": \"Jane Doe\",\n \"email\": \"jane@example.com\"\n },\n \"createdAt\": \"2025-01-15T10:00:00.000Z\"\n}\n```\n\n## Payload Signing\n\nEvery delivery is signed with HMAC-SHA256 using the endpoint's secret. 
The signature format is Stripe-compatible:\n\n```\nX-Webhook-Signature: t=1705312800,v1=5257a869e7ecebeda32affa62cdca3fa51cad7e77a0e56ff536d0ce8e108d8bd\n```\n\n**Signed payload:** `<timestamp>.<json_body>`\n\n**Additional headers:**\n\n| Header | Description |\n|--------|-------------|\n| `X-Webhook-Signature` | `t=<timestamp>,v1=<hmac>` |\n| `X-Webhook-Event` | Event type (e.g., `user.created`) |\n| `X-Webhook-Delivery` | Delivery ID for tracking |\n| `Content-Type` | `application/json` |\n\n### Verifying Signatures (Consumer Side)\n\n```typescript\n\nfunction verifyWebhook(payload: string, signature: string, secret: string): boolean {\n const [tPart, v1Part] = signature.split(',');\n const timestamp = tPart.replace('t=', '');\n const expectedSig = v1Part.replace('v1=', '');\n\n // Check timestamp (5 minute tolerance)\n const age = Date.now() / 1000 - parseInt(timestamp);\n if (age > 300) return false;\n\n const signed = `${timestamp}.${payload}`;\n const computed = createHmac('sha256', secret).update(signed).digest('hex');\n return timingSafeEqual(Buffer.from(computed), Buffer.from(expectedSig));\n}\n```\n\n## Endpoint Management API\n\n### Register an Endpoint\n\n```\nPOST /webhooks/v1/endpoints\n```\n\n```json\n{\n \"name\": \"My Integration\",\n \"url\": \"https://example.com/webhook\",\n \"events\": [\"user.created\", \"subscription.*\"]\n}\n```\n\n**Response (201):**\n\n```json\n{\n \"id\": \"wep_abc123\",\n \"name\": \"My Integration\",\n \"url\": \"https://example.com/webhook\",\n \"secret\": \"whsec_a1b2c3d4...\",\n \"events\": [\"user.created\", \"subscription.*\"],\n \"enabled\": true,\n \"createdAt\": \"2025-01-15T10:00:00.000Z\"\n}\n```\n\nThe `secret` is only returned on creation. Store it securely.\n\n### Event Pattern Matching\n\nEndpoints subscribe to events using patterns:\n\n| Pattern | Matches |\n|---------|---------|\n| `user.created` | Exact match only |\n| `user.*` | All user events |\n| `*` | All events |\n\n### Other Endpoints\n\n| Method | Path | Description |\n|--------|------|-------------|\n| `GET` | `/webhooks/v1/endpoints` | List your endpoints |\n| `GET` | `/webhooks/v1/endpoints/:id` | Get endpoint details (secret masked) |\n| `PATCH` | `/webhooks/v1/endpoints/:id` | Update endpoint (name, url, events, enabled) |\n| `DELETE` | `/webhooks/v1/endpoints/:id` | Delete endpoint |\n| `GET` | `/webhooks/v1/endpoints/:id/deliveries` | List delivery attempts |\n| `POST` | `/webhooks/v1/endpoints/:id/test` | Send a test event |\n| `POST` | `/webhooks/v1/endpoints/:id/rotate-secret` | Generate a new signing secret |\n| `GET` | `/webhooks/v1/events` | List available event types |\n\n### Access Control\n\nEndpoints are scoped by `userId` or `organizationId`. Users can only manage their own endpoints. When emitting events, only endpoints matching the event's scope receive deliveries.\n\n## Delivery and Retry\n\nDeliveries are processed asynchronously via a Cloudflare Queue.\n\n**Success:** HTTP 2xx response marks the delivery as `delivered`.\n\n**Failure:** HTTP 4xx/5xx or network errors trigger retries with exponential backoff:\n\n| Attempt | Delay |\n|---------|-------|\n| 1st retry | ~2 seconds |\n| 2nd retry | ~4 seconds |\n| 3rd retry (final) | ~8 seconds |\n\nMaximum backoff is capped at 1 hour. 
After 3 failed attempts, the delivery is marked as `failed`.\n\n**Disabled endpoints:** If an endpoint is disabled or deleted between emission and delivery, the delivery is marked as failed immediately.\n\n### Delivery Record\n\nEach delivery attempt is tracked in the `webhook_deliveries` table:\n\n| Field | Description |\n|-------|-------------|\n| `status` | `pending`, `delivered`, or `failed` |\n| `attempts` | Number of delivery attempts |\n| `responseStatus` | HTTP status from last attempt |\n| `responseBody` | Response body (truncated to 1000 chars) |\n| `lastAttemptAt` | Timestamp of last attempt |\n| `nextRetryAt` | When the next retry will occur |\n\n## Infrastructure\n\n### Queue Configuration\n\nOutbound webhooks use a dedicated Cloudflare Queue:\n\n```toml\n# Auto-generated in wrangler.toml\n[[queues.producers]]\nqueue = \"my-app-webhooks-queue\"\nbinding = \"WEBHOOKS_QUEUE\"\n\n[[queues.consumers]]\nqueue = \"my-app-webhooks-queue\"\nmax_batch_size = 10\nmax_batch_timeout = 30\nmax_retries = 3\ndead_letter_queue = \"my-app-webhooks-dlq\"\n```\n\n### Dead Letter Queue\n\nFailed webhook deliveries (after all retries) are sent to a dead letter queue for investigation:\n\n```toml\n[[queues.consumers]]\nqueue = \"my-app-webhooks-dlq\"\nmax_batch_size = 1\n```\n\n## Enabling Webhooks\n\nSet `webhooksBinding` in your database provider config:\n\n```typescript\ndatabase: defineDatabase(\"cloudflare-d1\", {\n binding: \"DB\",\n webhooksBinding: \"WEBHOOKS_DB\",\n})\n```\n\nThis generates the webhook database schema, routes, queue consumer, and all supporting code.\n\n## See Also\n\n- [Inbound Webhooks](/stack/webhooks/inbound) — Receive webhooks from external services\n- [Queues](/stack/queues) — Background processing infrastructure"
359
+ }
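Editor's note: the consumer-side verification snippet above omits its imports and will throw if the two buffers passed to `timingSafeEqual` differ in length. A self-contained Node.js version follows, with the imports added and a length guard; the signature format and 5-minute tolerance match the docs.

```typescript
// Self-contained consumer-side verification of X-Webhook-Signature (Node.js).
import { createHmac, timingSafeEqual } from "node:crypto";

export function verifyWebhook(payload: string, signature: string, secret: string): boolean {
  const [tPart, v1Part] = signature.split(",");
  if (!tPart?.startsWith("t=") || !v1Part?.startsWith("v1=")) return false;

  const timestamp = tPart.slice(2);
  const expected = v1Part.slice(3);

  // Reject stale signatures (5 minute tolerance, per the docs).
  const age = Date.now() / 1000 - Number.parseInt(timestamp, 10);
  if (!Number.isFinite(age) || age > 300) return false;

  // Signed payload is "<timestamp>.<json_body>".
  const computed = createHmac("sha256", secret)
    .update(`${timestamp}.${payload}`)
    .digest("hex");

  const a = Buffer.from(computed);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```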
360
+ };
361
+ export const TOPIC_LIST = [
362
+ "account-ui/customization",
363
+ "account-ui/environment-variables",
364
+ "account-ui/features/admin",
365
+ "account-ui/features/api-keys",
366
+ "account-ui/features/avatars",
367
+ "account-ui/features/cli-authorize",
368
+ "account-ui/features",
369
+ "account-ui/features/organizations",
370
+ "account-ui/features/passkeys",
371
+ "account-ui/features/passwordless",
372
+ "account-ui/features/sessions",
373
+ "account-ui",
374
+ "account-ui/library-usage",
375
+ "account-ui/standalone",
376
+ "account-ui/with-quickback",
377
+ "account-ui/worker",
378
+ "compiler/cloud-compiler/authentication",
379
+ "compiler/cloud-compiler/cli",
380
+ "compiler/cloud-compiler/endpoints",
381
+ "compiler/cloud-compiler",
382
+ "compiler/cloud-compiler/local-compiler",
383
+ "compiler/cloud-compiler/troubleshooting",
384
+ "compiler/config",
385
+ "compiler/config/output",
386
+ "compiler/config/providers",
387
+ "compiler/config/variables",
388
+ "compiler/definitions/access",
389
+ "compiler/definitions/actions",
390
+ "compiler/definitions/firewall",
391
+ "compiler/definitions/guards",
392
+ "compiler/definitions",
393
+ "compiler/definitions/masking",
394
+ "compiler/definitions/schema",
395
+ "compiler/definitions/validation",
396
+ "compiler/definitions/views",
397
+ "compiler/getting-started/claude-code",
398
+ "compiler/getting-started/full-example",
399
+ "compiler/getting-started/hand-crafted",
400
+ "compiler/getting-started",
401
+ "compiler/getting-started/template-bun",
402
+ "compiler/getting-started/template-cloudflare",
403
+ "compiler/getting-started/templates",
404
+ "compiler",
405
+ "compiler/integrations/cloudflare",
406
+ "compiler/integrations",
407
+ "compiler/integrations/neon",
408
+ "compiler/integrations/supabase",
409
+ "compiler/using-the-api/actions-api",
410
+ "compiler/using-the-api/batch-operations",
411
+ "compiler/using-the-api/crud",
412
+ "compiler/using-the-api/errors",
413
+ "compiler/using-the-api",
414
+ "compiler/using-the-api/openapi",
415
+ "compiler/using-the-api/query-params",
416
+ "compiler/using-the-api/views-api",
417
+ "index",
418
+ "plugins-tools/better-auth-plugins/aws-ses",
419
+ "plugins-tools/better-auth-plugins/combo-auth",
420
+ "plugins-tools/better-auth-plugins/upgrade-anonymous",
421
+ "plugins-tools/claude-code-skill",
422
+ "plugins-tools",
423
+ "stack/auth/api-keys",
424
+ "stack/auth/device-auth",
425
+ "stack/auth",
426
+ "stack/auth/plugins",
427
+ "stack/auth/security",
428
+ "stack/auth/using-auth",
429
+ "stack/database/d1",
430
+ "stack/database",
431
+ "stack/database/neon",
432
+ "stack/database/using-d1",
433
+ "stack",
434
+ "stack/queues/handlers",
435
+ "stack/queues",
436
+ "stack/queues/using-queues",
437
+ "stack/realtime/durable-objects",
438
+ "stack/realtime",
439
+ "stack/realtime/using-realtime",
440
+ "stack/storage",
441
+ "stack/storage/kv",
442
+ "stack/storage/r2",
443
+ "stack/storage/using-kv",
444
+ "stack/storage/using-r2",
445
+ "stack/vector/embeddings",
446
+ "stack/vector",
447
+ "stack/vector/using-embeddings",
448
+ "stack/webhooks/inbound",
449
+ "stack/webhooks",
450
+ "stack/webhooks/outbound"
451
+ ];
5
452
  //# sourceMappingURL=content.js.map