dojo.md 0.2.2 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (149)
  1. package/courses/GENERATION_LOG.md +20 -0
  2. package/courses/api-documentation-writing/course.yaml +12 -0
  3. package/courses/api-documentation-writing/scenarios/level-1/authentication-basics.yaml +46 -0
  4. package/courses/api-documentation-writing/scenarios/level-1/data-types-formats.yaml +45 -0
  5. package/courses/api-documentation-writing/scenarios/level-1/endpoint-description.yaml +45 -0
  6. package/courses/api-documentation-writing/scenarios/level-1/error-documentation.yaml +45 -0
  7. package/courses/api-documentation-writing/scenarios/level-1/first-documentation-shift.yaml +47 -0
  8. package/courses/api-documentation-writing/scenarios/level-1/getting-started-guide.yaml +42 -0
  9. package/courses/api-documentation-writing/scenarios/level-1/pagination-docs.yaml +51 -0
  10. package/courses/api-documentation-writing/scenarios/level-1/request-parameters.yaml +46 -0
  11. package/courses/api-documentation-writing/scenarios/level-1/request-response-examples.yaml +48 -0
  12. package/courses/api-documentation-writing/scenarios/level-1/status-codes.yaml +45 -0
  13. package/courses/api-documentation-writing/scenarios/level-2/error-patterns.yaml +48 -0
  14. package/courses/api-documentation-writing/scenarios/level-2/intermediate-documentation-shift.yaml +48 -0
  15. package/courses/api-documentation-writing/scenarios/level-2/oauth-documentation.yaml +47 -0
  16. package/courses/api-documentation-writing/scenarios/level-2/openapi-specification.yaml +46 -0
  17. package/courses/api-documentation-writing/scenarios/level-2/rate-limiting-docs.yaml +45 -0
  18. package/courses/api-documentation-writing/scenarios/level-2/request-body-schemas.yaml +46 -0
  19. package/courses/api-documentation-writing/scenarios/level-2/schema-definitions.yaml +41 -0
  20. package/courses/api-documentation-writing/scenarios/level-2/swagger-redoc-rendering.yaml +43 -0
  21. package/courses/api-documentation-writing/scenarios/level-2/validation-documentation.yaml +47 -0
  22. package/courses/api-documentation-writing/scenarios/level-2/versioning-changelog.yaml +42 -0
  23. package/courses/api-documentation-writing/scenarios/level-3/advanced-documentation-shift.yaml +43 -0
  24. package/courses/api-documentation-writing/scenarios/level-3/api-style-guide.yaml +40 -0
  25. package/courses/api-documentation-writing/scenarios/level-3/code-samples-multilang.yaml +40 -0
  26. package/courses/api-documentation-writing/scenarios/level-3/content-architecture.yaml +47 -0
  27. package/courses/api-documentation-writing/scenarios/level-3/deprecation-communication.yaml +44 -0
  28. package/courses/api-documentation-writing/scenarios/level-3/interactive-api-explorer.yaml +42 -0
  29. package/courses/api-documentation-writing/scenarios/level-3/migration-guides.yaml +42 -0
  30. package/courses/api-documentation-writing/scenarios/level-3/sdk-documentation.yaml +40 -0
  31. package/courses/api-documentation-writing/scenarios/level-3/webhook-documentation.yaml +48 -0
  32. package/courses/api-documentation-writing/scenarios/level-3/websocket-sse-docs.yaml +47 -0
  33. package/courses/api-documentation-writing/scenarios/level-4/api-changelog-management.yaml +44 -0
  34. package/courses/api-documentation-writing/scenarios/level-4/api-governance-standards.yaml +41 -0
  35. package/courses/api-documentation-writing/scenarios/level-4/api-product-strategy.yaml +41 -0
  36. package/courses/api-documentation-writing/scenarios/level-4/developer-portal-design.yaml +48 -0
  37. package/courses/api-documentation-writing/scenarios/level-4/docs-as-code.yaml +41 -0
  38. package/courses/api-documentation-writing/scenarios/level-4/documentation-localization.yaml +46 -0
  39. package/courses/api-documentation-writing/scenarios/level-4/documentation-metrics.yaml +45 -0
  40. package/courses/api-documentation-writing/scenarios/level-4/documentation-testing.yaml +41 -0
  41. package/courses/api-documentation-writing/scenarios/level-4/expert-documentation-shift.yaml +45 -0
  42. package/courses/api-documentation-writing/scenarios/level-4/multi-audience-docs.yaml +46 -0
  43. package/courses/api-documentation-writing/scenarios/level-5/ai-powered-documentation.yaml +44 -0
  44. package/courses/api-documentation-writing/scenarios/level-5/api-first-documentation.yaml +45 -0
  45. package/courses/api-documentation-writing/scenarios/level-5/api-marketplace-docs.yaml +42 -0
  46. package/courses/api-documentation-writing/scenarios/level-5/board-api-strategy.yaml +48 -0
  47. package/courses/api-documentation-writing/scenarios/level-5/documentation-program-strategy.yaml +42 -0
  48. package/courses/api-documentation-writing/scenarios/level-5/documentation-team-structure.yaml +47 -0
  49. package/courses/api-documentation-writing/scenarios/level-5/dx-competitive-advantage.yaml +46 -0
  50. package/courses/api-documentation-writing/scenarios/level-5/ecosystem-documentation.yaml +45 -0
  51. package/courses/api-documentation-writing/scenarios/level-5/industry-documentation-patterns.yaml +46 -0
  52. package/courses/api-documentation-writing/scenarios/level-5/master-documentation-shift.yaml +46 -0
  53. package/courses/code-review-feedback-writing/course.yaml +12 -0
  54. package/courses/code-review-feedback-writing/scenarios/level-1/approve-vs-request-changes.yaml +48 -0
  55. package/courses/code-review-feedback-writing/scenarios/level-1/asking-questions.yaml +50 -0
  56. package/courses/code-review-feedback-writing/scenarios/level-1/clear-comment-writing.yaml +45 -0
  57. package/courses/code-review-feedback-writing/scenarios/level-1/constructive-tone.yaml +43 -0
  58. package/courses/code-review-feedback-writing/scenarios/level-1/first-review-shift.yaml +46 -0
  59. package/courses/code-review-feedback-writing/scenarios/level-1/giving-praise.yaml +44 -0
  60. package/courses/code-review-feedback-writing/scenarios/level-1/nitpick-etiquette.yaml +44 -0
  61. package/courses/code-review-feedback-writing/scenarios/level-1/providing-context.yaml +46 -0
  62. package/courses/code-review-feedback-writing/scenarios/level-1/reviewing-small-prs.yaml +43 -0
  63. package/courses/code-review-feedback-writing/scenarios/level-1/style-vs-logic.yaml +48 -0
  64. package/courses/code-review-feedback-writing/scenarios/level-2/architectural-feedback.yaml +52 -0
  65. package/courses/code-review-feedback-writing/scenarios/level-2/intermediate-review-shift.yaml +46 -0
  66. package/courses/code-review-feedback-writing/scenarios/level-2/performance-feedback.yaml +50 -0
  67. package/courses/code-review-feedback-writing/scenarios/level-2/reviewing-breaking-changes.yaml +44 -0
  68. package/courses/code-review-feedback-writing/scenarios/level-2/reviewing-complex-prs.yaml +43 -0
  69. package/courses/code-review-feedback-writing/scenarios/level-2/reviewing-documentation.yaml +47 -0
  70. package/courses/code-review-feedback-writing/scenarios/level-2/reviewing-error-handling.yaml +50 -0
  71. package/courses/code-review-feedback-writing/scenarios/level-2/reviewing-tests.yaml +53 -0
  72. package/courses/code-review-feedback-writing/scenarios/level-2/security-review-comments.yaml +50 -0
  73. package/courses/code-review-feedback-writing/scenarios/level-2/suggesting-alternatives.yaml +42 -0
  74. package/courses/code-review-feedback-writing/scenarios/level-3/cross-team-review.yaml +45 -0
  75. package/courses/code-review-feedback-writing/scenarios/level-3/mentoring-through-review.yaml +46 -0
  76. package/courses/code-review-feedback-writing/scenarios/level-3/reviewing-unfamiliar-code.yaml +43 -0
  77. package/courses/terraform-infrastructure-setup/scenarios/level-1/first-debugging-shift.yaml +66 -0
  78. package/courses/terraform-infrastructure-setup/scenarios/level-1/plan-output-reading.yaml +71 -0
  79. package/courses/terraform-infrastructure-setup/scenarios/level-1/resource-creation-failures.yaml +54 -0
  80. package/courses/terraform-infrastructure-setup/scenarios/level-1/resource-references.yaml +70 -0
  81. package/courses/terraform-infrastructure-setup/scenarios/level-1/state-file-basics.yaml +73 -0
  82. package/courses/terraform-infrastructure-setup/scenarios/level-1/terraform-fmt-validate.yaml +58 -0
  83. package/courses/terraform-infrastructure-setup/scenarios/level-2/count-vs-for-each.yaml +58 -0
  84. package/courses/terraform-infrastructure-setup/scenarios/level-2/dependency-management.yaml +80 -0
  85. package/courses/terraform-infrastructure-setup/scenarios/level-2/intermediate-debugging-shift.yaml +66 -0
  86. package/courses/terraform-infrastructure-setup/scenarios/level-2/lifecycle-rules.yaml +51 -0
  87. package/courses/terraform-infrastructure-setup/scenarios/level-2/locals-and-expressions.yaml +58 -0
  88. package/courses/terraform-infrastructure-setup/scenarios/level-2/module-structure.yaml +75 -0
  89. package/courses/terraform-infrastructure-setup/scenarios/level-2/provisioner-pitfalls.yaml +64 -0
  90. package/courses/terraform-infrastructure-setup/scenarios/level-2/remote-state-backend.yaml +55 -0
  91. package/courses/terraform-infrastructure-setup/scenarios/level-2/terraform-import.yaml +55 -0
  92. package/courses/terraform-infrastructure-setup/scenarios/level-2/workspace-management.yaml +51 -0
  93. package/courses/terraform-infrastructure-setup/scenarios/level-3/advanced-debugging-shift.yaml +63 -0
  94. package/courses/terraform-infrastructure-setup/scenarios/level-3/api-rate-limiting.yaml +50 -0
  95. package/courses/terraform-infrastructure-setup/scenarios/level-3/conditional-resources.yaml +66 -0
  96. package/courses/terraform-infrastructure-setup/scenarios/level-3/drift-detection.yaml +66 -0
  97. package/courses/terraform-infrastructure-setup/scenarios/level-3/dynamic-blocks.yaml +71 -0
  98. package/courses/terraform-infrastructure-setup/scenarios/level-3/large-scale-refactoring.yaml +59 -0
  99. package/courses/terraform-infrastructure-setup/scenarios/level-3/multi-provider-config.yaml +69 -0
  100. package/courses/terraform-infrastructure-setup/scenarios/level-3/state-surgery.yaml +57 -0
  101. package/courses/terraform-infrastructure-setup/scenarios/level-3/terraform-cloud-enterprise.yaml +59 -0
  102. package/courses/terraform-infrastructure-setup/scenarios/level-3/terraform-debugging.yaml +51 -0
  103. package/courses/terraform-infrastructure-setup/scenarios/level-4/blast-radius-management.yaml +51 -0
  104. package/courses/terraform-infrastructure-setup/scenarios/level-4/cicd-pipeline-design.yaml +50 -0
  105. package/courses/terraform-infrastructure-setup/scenarios/level-4/compliance-as-code.yaml +46 -0
  106. package/courses/terraform-infrastructure-setup/scenarios/level-4/cost-estimation-governance.yaml +42 -0
  107. package/courses/terraform-infrastructure-setup/scenarios/level-4/expert-debugging-shift.yaml +51 -0
  108. package/courses/terraform-infrastructure-setup/scenarios/level-4/iac-organization-strategy.yaml +45 -0
  109. package/courses/terraform-infrastructure-setup/scenarios/level-4/incident-response-iac.yaml +47 -0
  110. package/courses/terraform-infrastructure-setup/scenarios/level-4/infrastructure-testing.yaml +41 -0
  111. package/courses/terraform-infrastructure-setup/scenarios/level-4/module-registry-design.yaml +45 -0
  112. package/courses/terraform-infrastructure-setup/scenarios/level-4/multi-account-strategy.yaml +57 -0
  113. package/courses/terraform-infrastructure-setup/scenarios/level-5/board-infrastructure-investment.yaml +53 -0
  114. package/courses/terraform-infrastructure-setup/scenarios/level-5/disaster-recovery-iac.yaml +47 -0
  115. package/courses/terraform-infrastructure-setup/scenarios/level-5/enterprise-iac-transformation.yaml +48 -0
  116. package/courses/terraform-infrastructure-setup/scenarios/level-5/iac-technology-evolution.yaml +49 -0
  117. package/courses/terraform-infrastructure-setup/scenarios/level-5/ma-infrastructure-consolidation.yaml +54 -0
  118. package/courses/terraform-infrastructure-setup/scenarios/level-5/master-debugging-shift.yaml +53 -0
  119. package/courses/terraform-infrastructure-setup/scenarios/level-5/multi-cloud-strategy.yaml +49 -0
  120. package/courses/terraform-infrastructure-setup/scenarios/level-5/platform-engineering.yaml +47 -0
  121. package/courses/terraform-infrastructure-setup/scenarios/level-5/regulatory-compliance-automation.yaml +47 -0
  122. package/courses/terraform-infrastructure-setup/scenarios/level-5/terraform-vs-alternatives.yaml +46 -0
  123. package/dist/cli/commands/generate.d.ts.map +1 -1
  124. package/dist/cli/commands/generate.js +2 -1
  125. package/dist/cli/commands/generate.js.map +1 -1
  126. package/dist/cli/commands/train.d.ts.map +1 -1
  127. package/dist/cli/commands/train.js +6 -3
  128. package/dist/cli/commands/train.js.map +1 -1
  129. package/dist/cli/index.js +9 -6
  130. package/dist/cli/index.js.map +1 -1
  131. package/dist/cli/run-demo.js +3 -2
  132. package/dist/cli/run-demo.js.map +1 -1
  133. package/dist/engine/model-utils.d.ts +6 -0
  134. package/dist/engine/model-utils.d.ts.map +1 -1
  135. package/dist/engine/model-utils.js +28 -1
  136. package/dist/engine/model-utils.js.map +1 -1
  137. package/dist/engine/training.d.ts.map +1 -1
  138. package/dist/engine/training.js +4 -3
  139. package/dist/engine/training.js.map +1 -1
  140. package/dist/generator/course-generator.d.ts.map +1 -1
  141. package/dist/generator/course-generator.js +4 -3
  142. package/dist/generator/course-generator.js.map +1 -1
  143. package/dist/mcp/server.d.ts.map +1 -1
  144. package/dist/mcp/server.js +7 -3
  145. package/dist/mcp/server.js.map +1 -1
  146. package/dist/mcp/session-manager.d.ts.map +1 -1
  147. package/dist/mcp/session-manager.js +3 -2
  148. package/dist/mcp/session-manager.js.map +1 -1
  149. package/package.json +1 -1
package/courses/api-documentation-writing/scenarios/level-3/interactive-api-explorer.yaml
@@ -0,0 +1,42 @@
+ meta:
+   id: interactive-api-explorer
+   level: 3
+   course: api-documentation-writing
+   type: output
+   description: "Design interactive API explorer — create documentation for an interactive playground that lets developers test API calls directly from the docs"
+   tags: [API, documentation, interactive, playground, try-it-out, sandbox, advanced]
+
+ state: {}
+
+ trigger: |
+   Your API documentation is static — developers must switch between
+   reading docs and their terminal to test calls. You want to add an
+   interactive "Try it out" experience directly in the docs:
+
+   Requirements:
+   - Developers can make real API calls from the documentation page
+   - Sandbox environment with test data (pre-populated accounts, payments)
+   - Auto-fill authentication (sandbox API key) for logged-in developers
+   - Show request being constructed as developers fill in parameters
+   - Display formatted response with syntax highlighting
+   - Save and share request/response pairs
+   - Show equivalent curl/SDK code for any request built in the explorer
+
+   Task: Design and document the interactive API explorer experience.
+   Write the developer-facing documentation for using the explorer,
+   the sandbox environment setup guide, and the technical specification
+   for how the explorer generates code examples from interactions.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Explorer UX is fully documented — three-panel layout: (1) parameter input form (auto-populated from OpenAPI schema), (2) live request preview (curl command updating as parameters change), (3) response display (formatted JSON, status code, headers, timing). Authentication: sandbox key auto-injected for logged-in users, manual key input for others. Parameter inputs: dropdowns for enums, date pickers for dates, JSON editor for body. Validation: client-side validation before sending (required fields, format checks). History: recent requests saved per developer, shareable via URL. Request builder: clicking 'Send' makes real API call to sandbox, shows loading state, displays response"
+     weight: 0.35
+     description: "Explorer UX"
+   - type: llm_judge
+     criteria: "Sandbox environment is documented — pre-populated test data: 10 test customers, 50 test payments in various states, test webhook endpoint. Test credentials: publishable key (pk_test_...) and secret key (sk_test_...) per developer account. Sandbox limitations: no real charges, no real emails, data resets weekly (or on-demand). Test scenarios: special test values that trigger specific responses (amount 99999 triggers decline, customer_id 'cus_fail' triggers error). Sandbox vs production differences documented. How to create additional test data. Sandbox rate limits (more generous than production)"
+     weight: 0.35
+     description: "Sandbox docs"
+   - type: llm_judge
+     criteria: "Code generation from explorer is documented — after making a request, show equivalent code in: curl, Python, Node.js, Go, Java. Code is copy-paste ready with actual values from the request. 'Copy to clipboard' button for each language. Code includes error handling boilerplate. Option to generate a complete runnable script (with imports, auth setup, API call, response handling). Export options: save as Postman collection, import into Insomnia, generate SDK client code. Technical: code templates are Mustache/Handlebars with OpenAPI schema variables, maintained alongside API spec"
+     weight: 0.30
+     description: "Code generation"
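The third assertion expects snippets rendered from templates kept beside the OpenAPI spec. A minimal sketch of that idea, substituting Python's stdlib `string.Template` for Mustache/Handlebars; the template text, helper name, and base URL are illustrative assumptions, not part of the scenario:

```python
# Sketch of template-driven snippet generation for a "Try it out" explorer.
# A real explorer would keep one Mustache/Handlebars template per language,
# maintained alongside the OpenAPI spec; string.Template stands in here.
from string import Template

CURL_TEMPLATE = Template(
    "curl -X $method $base_url$path \\\n"
    '  -H "Authorization: Bearer $api_key" \\\n'
    "  -d '$body'"
)

def render_curl(method, path, body, api_key, base_url="https://api.example.com"):
    """Render a copy-paste-ready curl command from the explorer's form values."""
    return CURL_TEMPLATE.substitute(
        method=method, base_url=base_url, path=path, body=body, api_key=api_key
    )

print(render_curl("POST", "/v1/payments", '{"amount": 1000}', "sk_test_123"))
```

Because the template only sees plain variables, the same form values can feed a Python, Node.js, or Go template without extra plumbing.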
package/courses/api-documentation-writing/scenarios/level-3/migration-guides.yaml
@@ -0,0 +1,42 @@
+ meta:
+   id: migration-guides
+   level: 3
+   course: api-documentation-writing
+   type: output
+   description: "Write API migration guides — document breaking changes, provide step-by-step migration paths, and minimize developer friction during version upgrades"
+   tags: [API, documentation, migration, versioning, breaking-changes, upgrade, advanced]
+
+ state: {}
+
+ trigger: |
+   You're releasing API v2 with breaking changes from v1. 2,000 active
+   integrations need to migrate. The changes:
+
+   - Response envelope: { "user": {...} } → { "data": {...}, "meta": {...} }
+   - Pagination: offset/limit → cursor-based
+   - Authentication: API key in query param → Bearer token in header
+   - Field renames: "userName" → "user_name", "createdAt" → "created_at"
+   - Removed endpoints: GET /users/search (merged into GET /users with
+     query params)
+   - New required field: "idempotency_key" on all POST requests
+
+   Timeline: v1 deprecated now, sunset in 12 months.
+
+   Task: Write a comprehensive migration guide that walks developers
+   through upgrading from v1 to v2. Include: change catalog, migration
+   steps, code examples showing before/after, compatibility helpers,
+   and a testing strategy for the migration.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Change catalog is complete and categorized — every breaking change listed with: what changed, why it changed (business reason), impact level (high/medium/low), migration effort estimate. Categorized: (1) Authentication changes (API key → Bearer), (2) Response format changes (envelope, field renames), (3) Pagination changes (offset → cursor), (4) Endpoint changes (removed/merged), (5) New requirements (idempotency key). Each change has before/after code comparison. Impact assessment: which changes affect all integrations vs specific use cases. Quick reference table: v1 pattern → v2 pattern for scanning"
+     weight: 0.35
+     description: "Change catalog"
+   - type: llm_judge
+     criteria: "Migration steps are ordered and practical — recommended migration order: (1) Update authentication first (affects all requests), (2) Handle response envelope changes (wrap response parsing), (3) Update field names (find/replace with mapping), (4) Switch pagination (may require storage schema changes), (5) Update removed endpoints, (6) Add idempotency keys. Each step: code example (before → after), common pitfalls, how to test this step in isolation, rollback strategy if something breaks. Compatibility helpers: v1-compat middleware/SDK option that translates v2 responses to v1 format (temporary bridge). Dual-write period: send to both v1 and v2, compare responses"
+     weight: 0.35
+     description: "Migration steps"
+   - type: llm_judge
+     criteria: "Timeline and testing strategy enable safe migration — timeline: v1 deprecated now (no new features), v1 supported 12 months, monthly email reminders at 9/6/3/1 months, dashboard showing v1 usage per integration. Testing: sandbox environment with v2, migration verification endpoint (POST /v2/migration/verify — tests your integration against v2), automated compatibility checker that scans your code for v1 patterns. Monitoring: dashboard showing v1 vs v2 request ratio, per-endpoint migration status. Support: dedicated migration support channel, office hours for complex migrations, case studies of successful migrations. FAQ: address common migration fears (data loss, downtime, partial migration)"
+     weight: 0.30
+     description: "Timeline and testing"
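The "compatibility helpers" this scenario asks for reduce to a small translation shim. A hedged sketch: the field renames and envelope shapes come from the change list in the trigger, while the helper name and the single-resource assumption are illustrative:

```python
# Sketch of a v1-compat bridge: unwrap the v2 {"data": ..., "meta": ...}
# envelope and restore the v1 camelCase field names, so existing v1
# response-parsing code keeps working during a staged migration.

V2_TO_V1_FIELDS = {"user_name": "userName", "created_at": "createdAt"}

def v2_response_to_v1(v2_body: dict) -> dict:
    """Translate a v2 user response into the v1 {"user": {...}} shape."""
    data = v2_body.get("data", {})
    v1_user = {V2_TO_V1_FIELDS.get(key, key): value for key, value in data.items()}
    return {"user": v1_user}

v2 = {"data": {"user_name": "ada", "created_at": "2024-01-01T00:00:00Z"}, "meta": {}}
print(v2_response_to_v1(v2))
```

A real bridge would also cover pagination cursors and the idempotency-key requirement; the point is that the guide can hand developers a temporary adapter instead of a flag-day rewrite.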
package/courses/api-documentation-writing/scenarios/level-3/sdk-documentation.yaml
@@ -0,0 +1,40 @@
+ meta:
+   id: sdk-documentation
+   level: 3
+   course: api-documentation-writing
+   type: output
+   description: "Document SDK libraries — write SDK documentation that bridges REST API docs and language-idiomatic client libraries with installation, configuration, and usage patterns"
+   tags: [API, documentation, SDK, client-library, language-specific, advanced]
+
+ state: {}
+
+ trigger: |
+   Your API has official SDKs in Python, Node.js, Go, and Java. But the
+   SDK documentation is just auto-generated from code comments. Developers
+   complain:
+
+   - "The Python SDK docs are just a class list — where's the getting started?"
+   - "How do I configure retries? The REST docs mention it but the SDK docs don't"
+   - "Is the Node.js SDK async? Do I use callbacks or promises?"
+   - "The Go SDK returns errors differently than I expected"
+   - "How do I mock the SDK for testing?"
+
+   Task: Write SDK documentation for the Python and Node.js clients that
+   goes beyond auto-generated API reference. Include: installation,
+   configuration (auth, retries, timeouts), idiomatic usage patterns,
+   error handling in each language, pagination helpers, and testing
+   guidance. The docs should feel native to each language's ecosystem.
+
+ assertions:
+   - type: llm_judge
+     criteria: "SDK setup and configuration is language-idiomatic — Python: pip install, client initialization with api_key parameter or PAYMENTS_API_KEY env var, configure timeout/retries via client options, type hints shown throughout. Node.js: npm install, ESM and CommonJS import examples, async/await throughout (no callbacks), TypeScript types included. Both: show how to configure base URL for sandbox vs production, set custom headers, configure HTTP proxy. Authentication: constructor parameter vs environment variable vs config file — show all three. Version pinning recommendation. Quick example: 5 lines from install to first API call"
+     weight: 0.35
+     description: "Language-idiomatic setup"
+   - type: llm_judge
+     criteria: "Error handling follows language conventions — Python: SDK raises typed exceptions (PaymentError, AuthenticationError, RateLimitError, NotFoundError) inheriting from base APIError. Show try/except patterns with specific exception types. Access error.code, error.message, error.param. Node.js: rejects with typed error classes, show try/catch with async/await, error instanceof checks. Both: map HTTP status codes to exception types (401→AuthenticationError, 404→NotFoundError, 429→RateLimitError). Auto-retry on 429 with configurable max_retries. Show idiomatic error handling for each language, not just translated REST error docs"
+     weight: 0.35
+     description: "Error handling"
+   - type: llm_judge
+     criteria: "Advanced patterns cover real usage — Pagination: Python SDK provides auto_paging_iter() that handles cursor pagination automatically (for payment in client.payments.list(limit=100).auto_paging_iter()). Node.js: async iterator (for await (const payment of client.payments.list())). Webhook verification: client.webhooks.verify_signature(payload, header, secret). Testing: mock the client for unit tests — show how to stub responses in pytest/jest. Logging: enable debug logging to see HTTP requests. Resource cleanup: context managers in Python (async with), proper client.close() in Node.js. Bulk operations: helpers for batch processing that respect rate limits"
+     weight: 0.30
+     description: "Advanced patterns"
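The `auto_paging_iter()` helper named in the last criterion is essentially a generator that follows cursors until the API stops returning one. A minimal sketch, where the page shape (`data` plus `next_cursor`) and the function names are assumptions for illustration; `fetch_page` stands in for the SDK's HTTP call:

```python
# Sketch of a cursor-pagination helper like auto_paging_iter(): callers
# iterate items, the generator handles cursors behind the scenes.

def auto_paging_iter(fetch_page, limit=100):
    """Yield items across pages, following next_cursor until exhausted."""
    cursor = None
    while True:
        page = fetch_page(cursor=cursor, limit=limit)
        yield from page["data"]
        cursor = page.get("next_cursor")
        if cursor is None:
            break

# Fake two-page backend standing in for the real API.
PAGES = {
    None: {"data": [1, 2], "next_cursor": "c1"},
    "c1": {"data": [3], "next_cursor": None},
}

def fake_fetch(cursor=None, limit=100):
    return PAGES[cursor]

print(list(auto_paging_iter(fake_fetch)))  # [1, 2, 3]
```

Documenting the helper this way (loop over items, never touch cursors) is exactly the "idiomatic, not auto-generated" gap the developer complaints describe.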
package/courses/api-documentation-writing/scenarios/level-3/webhook-documentation.yaml
@@ -0,0 +1,48 @@
+ meta:
+   id: webhook-documentation
+   level: 3
+   course: api-documentation-writing
+   type: output
+   description: "Document webhooks — write comprehensive webhook documentation covering event types, payload schemas, security verification, and retry behavior"
+   tags: [API, documentation, webhooks, events, signatures, retry, advanced]
+
+ state: {}
+
+ trigger: |
+   Your platform sends webhooks for key events but the documentation
+   is sparse. Developers keep asking:
+
+   - "What events can I subscribe to?"
+   - "What does the payload look like for each event?"
+   - "How do I verify the webhook is really from you?"
+   - "My endpoint was down — will you retry? How many times?"
+   - "I'm getting duplicate events — is that expected?"
+   - "The timestamps don't match — what timezone are these in?"
+
+   Webhook events:
+   - payment.succeeded, payment.failed, payment.refunded
+   - customer.created, customer.updated, customer.deleted
+   - subscription.created, subscription.renewed, subscription.cancelled
+   - invoice.created, invoice.paid, invoice.overdue
+
+   Security: HMAC-SHA256 signature in X-Webhook-Signature header
+   Retry policy: 3 retries with exponential backoff (1min, 10min, 1hr)
+   Delivery: at-least-once (duplicates possible)
+
+   Task: Write complete webhook documentation including event catalog,
+   payload schemas, signature verification guide, retry behavior, and
+   best practices for building reliable webhook consumers.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Event catalog and payloads are complete — every event type listed with: event name, trigger condition (when it fires), payload schema with all fields typed and described. Common envelope: { id (unique event ID), type (event string), created_at (ISO 8601 UTC), data (event-specific payload), api_version }. For payment.succeeded: data includes payment object (id, amount, currency, customer_id, payment_method). For customer.updated: data includes previous_attributes showing what changed. Each event has a realistic full JSON example. Events grouped by resource (payment, customer, subscription, invoice)"
+     weight: 0.35
+     description: "Event catalog"
+   - type: llm_judge
+     criteria: "Security verification is step-by-step — HMAC-SHA256 verification: (1) extract X-Webhook-Signature header, (2) get raw request body (important: use raw bytes, not parsed JSON), (3) compute HMAC-SHA256 using your webhook secret, (4) compare signatures using constant-time comparison (prevent timing attacks). Code examples in at least 2 languages (Node.js, Python). Explain WHY each step matters (raw body because JSON parsing changes field order, constant-time comparison to prevent timing attacks). How to get/rotate webhook secret. What to do if verification fails (return 401, log, alert). Timestamp tolerance: reject events older than 5 minutes to prevent replay attacks"
+     weight: 0.35
+     description: "Security verification"
+   - type: llm_judge
+     criteria: "Reliability patterns are documented — retry behavior: 3 attempts at 1min, 10min, 1hr intervals, then event marked as failed (visible in dashboard). Your endpoint must return 2xx within 30 seconds or it's considered failed. At-least-once delivery: you MAY receive the same event twice — use event ID for idempotency. Best practices: (1) respond with 200 immediately, process async, (2) store event ID to deduplicate, (3) use a queue for processing, (4) handle out-of-order events (check timestamps), (5) set up webhook endpoint monitoring. Testing: provide a 'Send test event' button in dashboard. Webhook logs: show recent deliveries with response codes for debugging"
+     weight: 0.30
+     description: "Reliability patterns"
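The four verification steps in the security criterion map directly onto Python's stdlib. A minimal sketch; the header name comes from the scenario, while the hex encoding of the signature is an assumption (providers also use base64 or `t=...,v1=...` formats):

```python
# Sketch of HMAC-SHA256 webhook verification:
#   1. take the X-Webhook-Signature header value,
#   2. use the RAW request bytes (parsing and re-serializing JSON would
#      change the bytes and break the signature),
#   3. recompute HMAC-SHA256 with the webhook secret,
#   4. compare in constant time to avoid leaking a timing signal.
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Return True iff the header matches the HMAC of the raw body."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

body = b'{"id": "evt_1", "type": "payment.succeeded"}'
secret = "whsec_test"
good = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_signature(body, good, secret))      # True
print(verify_signature(body, "0" * 64, secret))  # False
```

A production consumer would additionally check the event timestamp against the 5-minute replay window the criterion mentions before trusting the payload.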
@@ -0,0 +1,47 @@
1
+ meta:
2
+ id: websocket-sse-docs
3
+ level: 3
4
+ course: api-documentation-writing
5
+ type: output
6
+ description: "Document real-time APIs — write documentation for WebSocket and Server-Sent Events endpoints covering connection lifecycle, message formats, and reconnection strategies"
7
+ tags: [API, documentation, WebSocket, SSE, real-time, streaming, events, advanced]
8
+
9
+ state: {}
10
+
11
+ trigger: |
12
+ Your platform is adding real-time features alongside the REST API:
13
+
14
+ 1. WebSocket endpoint: wss://api.example.com/ws
15
+ - Real-time payment status updates
16
+ - Live dashboard data streaming
17
+ - Bi-directional: client can subscribe/unsubscribe to channels
18
+
19
+ 2. Server-Sent Events: https://api.example.com/events/payments
20
+ - Uni-directional stream of payment events
21
+ - Simpler alternative for clients that only need to listen
22
+
23
+ Developers are confused:
24
+ - "Which should I use — WebSocket or SSE?"
25
+ - "How do I authenticate a WebSocket connection?"
26
+ - "What happens when the connection drops?"
27
+ - "What's the message format? Is it JSON?"
28
+ - "How do I subscribe to specific payment updates?"
29
+
30
+ Task: Document both the WebSocket and SSE endpoints. Include:
31
+ connection setup, authentication, message formats, subscription
32
+ management, error handling, reconnection strategies, and a
33
+ comparison to help developers choose between them.
34
+
35
+ assertions:
36
+ - type: llm_judge
37
+     criteria: "WebSocket documentation covers full lifecycle — connection: wss:// URL, authentication via token query param or first message. Handshake example with headers. Message format: JSON with type field (subscribe, unsubscribe, ping, pong, event). Subscribe: { type: 'subscribe', channel: 'payments', filters: { customer_id: 'cus_123' } }. Server events: { type: 'event', channel: 'payments', event: 'payment.succeeded', data: {...}, timestamp: '...' }. Heartbeat: server sends ping every 30s, client must respond with pong within 10s or connection is closed. Connection states: connecting, open, closing, closed. Max message size, max connections per client, idle timeout"
+     weight: 0.35
+     description: "WebSocket docs"
+   - type: llm_judge
+     criteria: "SSE documentation and comparison are clear — SSE setup: EventSource API in browser, curl with streaming, library examples. Authentication: Bearer token via headers (not supported by native EventSource — show polyfill or library). Event format: event type, data (JSON), id (for resumption), retry. Last-Event-ID header for reconnection — server resumes from where client left off. Comparison table: WebSocket vs SSE — bidirectional vs unidirectional, protocol complexity, browser support, firewall friendliness, auto-reconnection (SSE has built-in), load balancer compatibility. Recommendation: SSE for simple event listening (payments feed), WebSocket for interactive features (live dashboard with filtering)"
+     weight: 0.35
+     description: "SSE and comparison"
+   - type: llm_judge
+     criteria: "Reconnection and error handling are production-ready — WebSocket reconnection: detect close event, implement exponential backoff (1s, 2s, 4s, 8s, max 30s), re-authenticate on reconnect, re-subscribe to channels, request missed events by timestamp. SSE reconnection: automatic via EventSource (browser handles it), Last-Event-ID for resumption, server backfills missed events. Error scenarios: invalid auth (close code 4001), rate limited (close code 4029), server restart (close code 1001), invalid subscription (error message with details). Code examples showing robust connection management in JavaScript and Python. Monitoring: how to detect connection health, metrics to track (connection duration, reconnection frequency, message latency)"
+     weight: 0.30
+     description: "Reconnection patterns"
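The backoff schedule named in the reconnection criteria (1s, 2s, 4s, 8s, capped at 30s) can be sketched in a few lines. This is a minimal illustration; the function name and jitter-free form are assumptions, not part of the scenario:

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Delay in seconds before reconnect attempt `attempt` (0-based):
    doubles each try (1s, 2s, 4s, 8s, ...) and is capped at `cap`."""
    return min(base * (2 ** attempt), cap)

# First attempts double; from attempt 5 onward the cap applies.
delays = [backoff_delay(n) for n in range(6)]
```

Production clients usually add random jitter on top of this schedule so that many clients don't reconnect in lockstep after a server restart.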
@@ -0,0 +1,44 @@
+ meta:
+   id: api-changelog-management
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Manage API changelog — design a changelog system that communicates API changes effectively to different stakeholders with appropriate urgency and detail"
+   tags: [API, documentation, changelog, release-notes, communication, versioning, expert]
+
+ state: {}
+
+ trigger: |
+   Your API ships weekly updates. The changelog is a mess:
+
+   - Buried in a docs page nobody reads
+   - Mix of breaking changes and minor fixes with no differentiation
+   - No way for developers to know if a change affects their integration
+   - Enterprise customers surprised by breaking changes
+   - "When was this field added?" — nobody knows without git blame
+
+   Developers want:
+   - Advance notice of breaking changes
+   - Easy way to see what changed since their last integration update
+   - Ability to subscribe to changes relevant to their usage
+   - Clear migration guidance for breaking changes
+   - Historical record of all changes
+
+   Task: Design a comprehensive API changelog system. Include: change
+   categorization, notification system, migration guidance integration,
+   developer-facing changelog format, and tooling for automated changelog
+   generation from code changes.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Change categorization enables targeted communication — categories: (1) Breaking: requires integration changes (field removal, type change, endpoint removal), (2) Deprecation: still works but will break in future, (3) Feature: new endpoints, fields, or capabilities, (4) Fix: bug fixes (behavior corrected to match the documentation), (5) Security: security patches (may require action). Each entry: date, category, affected endpoints, description, migration guide link (for breaking). Severity levels for breaking changes: critical (integration will fail), moderate (degraded behavior), minor (cosmetic). Structured format: each entry has tags (resources affected, API version) enabling filtering"
+     weight: 0.35
+     description: "Change categorization"
+   - type: llm_judge
+     criteria: "Notification system reaches developers proactively — subscription options: email digest (weekly), RSS feed, webhook (for CI/CD integration), Slack/Discord bot, in-dashboard banner. Developers can filter: by resource (only payment changes), by severity (only breaking), by API version (only v2 changes). Breaking changes: 30-day advance notice via email + dashboard banner + API response header (X-API-Deprecation-Notice). Timeline: announced → preview (available in sandbox) → released → old behavior sunset. Personalized notifications: analyze developer's API usage to highlight changes that affect their specific integration patterns. Emergency changes: immediate notification via all channels with migration urgency"
+     weight: 0.35
+     description: "Notification system"
+   - type: llm_judge
+     criteria: "Automation and format are developer-friendly — automated generation: conventional commits or PR labels generate changelog entries. OpenAPI diff tool detects spec changes and auto-generates entries for new/modified/removed endpoints. Human review: auto-generated entries reviewed by tech writer for clarity before publishing. Format: reverse chronological, filterable by category, searchable. Each entry is linkable (deep link to specific change). Diff view: compare OpenAPI specs between any two dates. 'Changes since' query: developers enter their last integration date, see all changes since then. API endpoint: GET /changelog with query params (since, category, resource) — developers can programmatically check for changes in CI/CD. Integration: changelog entries link to relevant documentation sections and migration guides"
+     weight: 0.30
+     description: "Automation and format"
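The GET /changelog query described in the automation criteria (params: since, category, resource) amounts to a filter over structured entries. A minimal sketch, assuming each entry carries the fields the categorization criteria list (date, category, affected resources); the function and field names are illustrative:

```python
from datetime import date

def query_changelog(entries, since=None, category=None, resource=None):
    """Filter changelog entries the way a documented
    GET /changelog?since=&category=&resource= endpoint would."""
    out = []
    for e in entries:
        if since and e["date"] < since:
            continue
        if category and e["category"] != category:
            continue
        if resource and resource not in e["resources"]:
            continue
        out.append(e)
    return out

entries = [
    {"date": date(2024, 1, 10), "category": "feature", "resources": ["payments"]},
    {"date": date(2024, 3, 5), "category": "breaking", "resources": ["payments"]},
    {"date": date(2024, 3, 20), "category": "fix", "resources": ["refunds"]},
]
# "Everything affecting payments since my last integration update":
hits = query_changelog(entries, since=date(2024, 2, 1), resource="payments")
```

The same filter, exposed over HTTP, is what lets developers check for relevant changes programmatically in CI/CD.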
@@ -0,0 +1,41 @@
+ meta:
+   id: api-governance-standards
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Establish API documentation governance — create organization-wide standards, review processes, and quality metrics for API documentation across teams"
+   tags: [API, documentation, governance, standards, organization, quality, expert]
+
+ state: {}
+
+ trigger: |
+   Your organization has 8 engineering teams producing 15 APIs.
+   Documentation quality varies wildly:
+
+   - Team A: beautiful docs, interactive examples, 98% developer satisfaction
+   - Team B: auto-generated Swagger UI with no descriptions
+   - Team C: outdated wiki pages that reference deprecated endpoints
+   - Team D: no documentation at all ("read the code")
+
+   Leadership wants consistent documentation quality across all APIs.
+   You're tasked with creating the governance framework.
+
+   Task: Design an API documentation governance program. Include:
+   documentation standards (what every API must have), quality scoring
+   rubric, review and approval process, tooling requirements, team
+   responsibilities, and an adoption strategy that doesn't alienate
+   the teams that are currently behind.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Documentation standards define minimum requirements — tiered requirements: Bronze (minimum viable): OpenAPI spec with descriptions for all endpoints, at least one example per endpoint, authentication documented, error codes listed. Silver (good): getting started guide, all request/response schemas with examples, changelog maintained, sandbox available. Gold (excellent): SDK documentation, interactive explorer, migration guides, tutorials, code samples in 3+ languages. Every API must reach Bronze within 3 months, Silver within 6 months. Gold is aspirational. Checklist for each tier is specific and measurable (not subjective). Non-negotiables: valid OpenAPI spec, no undocumented public endpoints, security documentation"
+     weight: 0.35
+     description: "Standards tiers"
+   - type: llm_judge
+     criteria: "Quality scoring and review process are objective — scoring rubric: completeness (are all endpoints documented?), accuracy (do examples match actual behavior?), usability (can a new developer integrate in under 30 minutes?), maintenance (is the changelog current?). Automated scoring: CI tool that checks OpenAPI completeness, link validity, example freshness. Manual scoring: quarterly review by documentation team using rubric. API review gate: new APIs cannot launch without Bronze documentation (enforced in release process). Existing APIs: audit current state, create remediation plan per team. Documentation review board: cross-team group that reviews standards quarterly and handles exceptions"
+     weight: 0.35
+     description: "Quality scoring"
+   - type: llm_judge
+     criteria: "Adoption strategy is empathetic and practical — don't shame teams that are behind — celebrate teams that are ahead as examples. Provide: documentation templates (copy and fill in), starter OpenAPI specs generated from existing code, dedicated documentation support (tech writer office hours). Incentives: documentation quality in team OKRs, internal 'best docs' award, developer satisfaction scores shared. Training: documentation writing workshops, OpenAPI authoring courses, tool training sessions. Phased rollout: start with one willing team as pilot, iterate on standards based on feedback, then expand. Address common objections: 'no time' (templates reduce effort), 'docs get outdated' (automation keeps them fresh), 'developers should read code' (external developers can't)"
+     weight: 0.30
+     description: "Adoption strategy"
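The Bronze/Silver/Gold tiers above lend themselves to an automated check in the CI scoring tool the criteria mention. A minimal sketch, with hypothetical checklist keys standing in for the real per-tier requirements and assuming tiers are cumulative (Silver presumes Bronze, Gold presumes Silver):

```python
# Tier checklists are cumulative: a team must clear every lower tier first.
# The checklist keys below are illustrative placeholders, not the real rubric.
TIERS = [
    ("bronze", {"openapi_spec", "example_per_endpoint", "auth_documented", "errors_listed"}),
    ("silver", {"getting_started", "schema_examples", "changelog", "sandbox"}),
    ("gold",   {"sdk_docs", "interactive_explorer", "migration_guides", "tutorials"}),
]

def doc_tier(completed: set) -> str:
    """Return the highest tier whose checklist (and all lower ones) is fully met."""
    achieved = "none"
    for name, checklist in TIERS:
        if checklist <= completed:  # set containment: every item checked off
            achieved = name
        else:
            break
    return achieved
```

Because the output is a single label per API, the same function can drive both the release gate ("no launch below Bronze") and the leadership dashboard.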
@@ -0,0 +1,41 @@
+ meta:
+   id: api-product-strategy
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Position documentation as product enabler — align API documentation strategy with product goals, partner onboarding, and revenue generation"
+   tags: [API, documentation, product-strategy, monetization, partner, business, expert]
+
+ state: {}
+
+ trigger: |
+   Your company's API is becoming a revenue product, not just a feature.
+   The API generates $2M ARR from 200 paying integrations. Growth targets:
+   $5M ARR, 500 integrations by end of year.
+
+   Current bottlenecks:
+   - Average onboarding: 3 weeks (competitor: 3 days)
+   - 30% of trials abandon during integration (most cite "confusing docs")
+   - Partner certification takes 2 months of back-and-forth
+   - Enterprise prospects require custom documentation for security review
+   - No self-service tier — all onboarding requires a sales call
+
+   Task: Create a documentation strategy that directly supports revenue
+   growth. Include: self-service onboarding (eliminate sales dependency
+   for standard tier), partner acceleration program, enterprise
+   documentation package, and metrics linking documentation quality
+   to revenue. Show how documentation investment has measurable ROI.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Self-service onboarding enables growth without sales — self-service tier: developer signs up, gets sandbox key instantly, follows guided quickstart, upgrades to paid via self-service checkout. Documentation enables this by: comprehensive quickstart (no gaps requiring support), interactive sandbox with pre-populated data, troubleshooting guide that resolves 90% of integration issues, production readiness checklist. Pricing page with clear tier comparison. Self-service target: 80% of new integrations onboard without human contact. ROI: current cost-per-integration $5,000 (sales + support time), self-service reduces to $200 (documentation + infrastructure). Path: documentation investment of $X enables 300 additional self-service integrations"
+     weight: 0.35
+     description: "Self-service strategy"
+   - type: llm_judge
+     criteria: "Partner program uses docs to accelerate certification — partner documentation package: architecture guide, best practices, sample integration, certification test suite. Certification program: partners complete integration milestones documented as a checklist (authenticate, create first payment, handle webhooks, error handling, go-live review). Automated certification: partner runs test suite, scores automatically, badge issued. Reduces certification from 2 months to 2 weeks. Partner-specific docs: co-branded guides, partner-specific use cases, featured in marketplace. Revenue impact: faster partner certification → more partners → more transaction volume. Partner tiers (bronze/silver/gold) with documentation requirements for each"
+     weight: 0.35
+     description: "Partner acceleration"
+   - type: llm_judge
+     criteria: "Enterprise package and ROI metrics are compelling — enterprise documentation package: security whitepaper (encryption, compliance, pen test results), architecture diagram (data flow, multi-tenancy), SLA documentation, disaster recovery plan, data processing agreement — reduces security review from 6 weeks to 1 week. Revenue metrics dashboard: time to first call (TTFC) correlated with conversion rate (faster onboarding → higher conversion), documentation page visits mapped to integration completion, support ticket cost savings, trial-to-paid conversion by documentation path. Investment case: $500K annual documentation investment generates $3M incremental revenue (6x ROI) through faster onboarding, reduced support, and partner acceleration. Quarterly business review template with documentation KPIs"
+     weight: 0.30
+     description: "Enterprise and ROI"
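The cost model in the self-service criteria ($5,000 per assisted integration, $200 self-service, 80% self-service target) reduces to simple arithmetic. The function below only works through those stated figures; the function name and parameterization are illustrative:

```python
def self_service_savings(integrations: int,
                         assisted_cost: float = 5000.0,
                         self_service_cost: float = 200.0,
                         self_service_share: float = 0.8) -> float:
    """Annual savings if `self_service_share` of new integrations onboard
    without human contact, using the scenario's per-integration cost figures."""
    moved = integrations * self_service_share
    return moved * (assisted_cost - self_service_cost)

# 300 new integrations at the 80% self-service target:
# 240 integrations move to self-service, each saving $4,800.
savings = self_service_savings(300)
```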
@@ -0,0 +1,48 @@
+ meta:
+   id: developer-portal-design
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Design developer portal — architect a complete developer portal with documentation, API keys, dashboard, community, and self-service onboarding"
+   tags: [API, documentation, developer-portal, DX, onboarding, self-service, expert]
+
+ state: {}
+
+ trigger: |
+   You're building a developer portal from scratch for your platform.
+   It needs to be more than just API docs — it's the entire developer
+   experience:
+
+   Current developer journey (painful):
+   1. Find docs via Google (SEO is poor)
+   2. Read API reference (no quickstart)
+   3. Email sales to get API keys (2-day wait)
+   4. Trial and error with undocumented edge cases
+   5. Email support when stuck (1-day response)
+   6. Finally integrate after 2 weeks
+
+   Target developer journey:
+   1. Find portal via Google (optimized)
+   2. Sign up and get API keys instantly
+   3. Follow quickstart guide (10-minute integration)
+   4. Use interactive explorer for testing
+   5. Go to production with confidence
+
+   Task: Design the complete developer portal. Include: site architecture,
+   self-service features (API key management, usage dashboard), documentation
+   sections, community features, and the onboarding flow. Focus on reducing
+   time-to-first-call (TTFC) from 2 weeks to 10 minutes.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Portal architecture serves the complete developer lifecycle — sections: (1) Landing page with value prop, quickstart CTA, and trust signals. (2) Documentation: getting started, API reference, guides, tutorials, SDKs, webhooks, changelog. (3) Dashboard: API key management (create, rotate, restrict by IP/scope), usage analytics (requests, errors, latency), billing. (4) Interactive: API explorer, sandbox environment, webhook testing. (5) Community: forum/Discord, Stack Overflow tag, GitHub issues, status page. (6) Account: team management, notification preferences. Navigation: persistent sidebar, global search, contextual help. SEO: each endpoint is a crawlable page with structured data"
+     weight: 0.35
+     description: "Portal architecture"
+   - type: llm_judge
+     criteria: "Onboarding flow reduces TTFC to under 10 minutes — step 1: sign up (email/GitHub OAuth, no sales call required for sandbox). Step 2: instant sandbox API key displayed on welcome page. Step 3: interactive quickstart — choose your language, copy-paste code, make first call, see response — all on one page. Step 4: guided exploration — 'Now try creating a payment' with pre-filled values. Step 5: production checklist (webhook endpoint, error handling, go-live review). Progress tracker: visual indicator of onboarding progress. Skip option: experienced developers can skip to API reference. Measurement: track TTFC (time from signup to first successful API call), identify and remove friction points"
+     weight: 0.35
+     description: "Onboarding flow"
+   - type: llm_judge
+     criteria: "Self-service features reduce support dependency — API key management: create multiple keys with different scopes (read-only, write, admin), IP allowlisting, key rotation without downtime (grace period). Usage dashboard: real-time request volume, error rate, latency percentiles, top endpoints, rate limit proximity warnings. Logs: searchable request/response logs for debugging (last 7 days, filterable by endpoint, status code, time). Alerts: email/Slack when error rate spikes or approaching rate limits. Support integration: 'Report a bug' button pre-fills context (endpoint, error, request_id). Self-service: upgrade tier, manage team members, generate invoices — no support tickets needed. Estimated support ticket reduction: 60% from self-service + better docs"
+     weight: 0.30
+     description: "Self-service features"
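The TTFC measurement the onboarding criteria call for — median time from signup to first successful API call — can be computed from the two timestamps the scenario says live in the API logs. A sketch with a hypothetical per-developer event structure:

```python
from datetime import datetime
from statistics import median

def ttfc_minutes(events):
    """Median minutes from signup to first successful API call.
    `events` maps developer id -> (signup_time, first_success_time)."""
    durations = [
        (first_call - signup).total_seconds() / 60
        for signup, first_call in events.values()
    ]
    return median(durations)

events = {
    "dev_1": (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 8)),
    "dev_2": (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 30)),
    "dev_3": (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 14, 12)),
}
ttfc = ttfc_minutes(events)
```

Median (rather than mean) keeps one developer who signed up on Friday and integrated on Monday from distorting the metric.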
@@ -0,0 +1,41 @@
+ meta:
+   id: docs-as-code
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Implement docs-as-code workflow — design a documentation pipeline with version control, CI/CD, automated testing, and review processes for API documentation"
+   tags: [API, documentation, docs-as-code, CI/CD, automation, workflow, expert]
+
+ state: {}
+
+ trigger: |
+   Your documentation lives in a CMS, edited by technical writers.
+   Problems:
+   - Docs drift from actual API behavior (no one updates after code changes)
+   - No review process — writers publish directly
+   - No versioning — can't see what changed or roll back
+   - Developers don't contribute because they don't have CMS access
+   - Documentation builds break silently
+
+   You want to move to docs-as-code: documentation in the same repo
+   as code, using the same tools (Git, PRs, CI/CD).
+
+   Task: Design the complete docs-as-code workflow. Include: repository
+   structure, documentation format (MDX, OpenAPI, or both), build
+   pipeline, automated quality checks, review process, and how to keep
+   docs in sync with code changes. Explain how this reduces documentation
+   drift and enables developer contributions.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Repository structure and format choices are justified — documentation lives alongside code in /docs directory (or monorepo with docs package). OpenAPI spec is source of truth for API reference (auto-generates endpoint pages). Guides and tutorials in MDX (Markdown + React components for interactive elements). Directory structure: /docs/api (auto-generated from OpenAPI), /docs/guides (hand-written), /docs/tutorials, /docs/changelog. Build tool: Docusaurus, Nextra, or similar static site generator that supports OpenAPI rendering. Version docs alongside API versions (docs/v1, docs/v2). PR template includes 'Documentation updated?' checkbox"
+     weight: 0.35
+     description: "Repo structure"
+   - type: llm_judge
+     criteria: "CI/CD pipeline ensures quality — automated checks on every PR: (1) OpenAPI spec validation (no broken $ref, valid schemas), (2) link checking (no dead links), (3) spell checking, (4) code example testing (extract and run code snippets), (5) build check (docs site builds successfully), (6) screenshot comparison (visual regression for rendered docs). Deployment: staging preview for every PR (Vercel preview deployments or similar), production deploy on merge to main. OpenAPI diff: show breaking changes in PR comments when spec changes. Automated changelog generation from conventional commits. Build badge in README"
+     weight: 0.35
+     description: "CI/CD pipeline"
+   - type: llm_judge
+     criteria: "Review process and sync strategy prevent drift — review workflow: code PRs that change API behavior must include docs PR (enforced by CI check — 'API changed but no docs updated'). Documentation review: technical writer reviews developer-written docs, developer reviews writer-written technical accuracy. Sync mechanisms: (1) OpenAPI spec generates reference docs automatically, (2) contract tests verify examples match actual API behavior, (3) scheduled job runs all code examples against sandbox. Contribution guide: how developers write docs (templates, style guide link, local preview instructions). Metrics: track docs-to-code commit ratio, average time from API change to docs update, documentation coverage (% of endpoints with examples)"
+     weight: 0.30
+     description: "Review and sync"
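The "API changed but no docs updated" CI gate in the review criteria is essentially a path check over a PR's changed files. A minimal sketch; the path prefixes are assumptions about repository layout, not prescribed by the scenario:

```python
# Assumed layout: API code under src/api/, spec at openapi.yaml, docs under docs/.
API_PATHS = ("src/api/", "openapi.yaml")
DOCS_PATHS = ("docs/",)

def docs_gate(changed_files):
    """CI check: fail a PR that touches API code or the OpenAPI spec
    without also touching documentation."""
    touches_api = any(f.startswith(API_PATHS) for f in changed_files)
    touches_docs = any(f.startswith(DOCS_PATHS) for f in changed_files)
    if touches_api and not touches_docs:
        return "fail: API changed but no docs updated"
    return "pass"
```

In practice the check would read the changed-file list from the CI provider (e.g. `git diff --name-only main...HEAD`) and would allow an explicit "no docs needed" PR label as an escape hatch.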
@@ -0,0 +1,46 @@
+ meta:
+   id: documentation-localization
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Localize API documentation — design a localization strategy for translating API docs into multiple languages while maintaining technical accuracy and consistency"
+   tags: [API, documentation, localization, i18n, translation, multi-language, expert]
+
+ state: {}
+
+ trigger: |
+   Your API is expanding internationally. Developer demographics:
+   - 45% English-speaking
+   - 20% Japanese-speaking
+   - 15% Portuguese-speaking (Brazil)
+   - 10% German-speaking
+   - 10% other languages
+
+   Current state: English-only documentation. You're losing deals in
+   Japan and Brazil where developers prefer native-language docs.
+
+   Challenges:
+   - Technical terms: should "endpoint" be translated or kept in English?
+   - Code examples: variable names in English, comments translated?
+   - Keeping translations in sync when English docs change
+   - Budget: can't translate everything — what to prioritize?
+   - Quality: machine translation of technical docs is often wrong
+
+   Task: Design a documentation localization strategy. Include: what to
+   translate (and what not to), translation workflow, quality assurance,
+   synchronization with English source, and a phased rollout plan
+   prioritized by market impact.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Translation scope is strategically prioritized — translate first: getting started guide, authentication, top 10 most-visited pages, error messages, dashboard UI. Translate later: full API reference, advanced guides, architecture docs. Never translate: code examples (keep English variable names), API field names, URLs/paths, technical terms that are industry-standard in English (API, JSON, REST, OAuth, webhook). For each language: glossary of technical terms with approved translations (or decision to keep English). Code comments and descriptions: translate. Variable names and code: never translate. Phased rollout: Japanese first (highest revenue impact per developer), then Portuguese, then German"
+     weight: 0.35
+     description: "Translation scope"
+   - type: llm_judge
+     criteria: "Translation workflow ensures quality and consistency — workflow: (1) English content finalized, (2) extract translatable strings (separate from code/markup), (3) professional translator with technical API background translates, (4) technical reviewer (native-speaking developer) verifies accuracy, (5) in-context review (see translation in actual docs layout). Tools: translation management system (Crowdin, Lokalise, Phrase) integrated with docs repo. Translation memory: reuse approved translations across pages. Glossary enforcement: TMS flags when glossary terms are translated inconsistently. Machine translation: use as first pass for professional translator to refine (not as final output). Style guide per language: tone, formality level, formatting conventions"
+     weight: 0.35
+     description: "Translation workflow"
+   - type: llm_judge
+     criteria: "Synchronization and maintenance prevent translation drift — sync strategy: when English source changes, affected translations flagged as 'needs update' in TMS. Change types: (1) minor (typo fix) — apply to all languages automatically, (2) content update — flag translations, prioritize by page traffic, (3) new content — add to translation queue. Dashboard: translation coverage percentage per language, pages needing update, average translation lag (days between English change and translation update). Fallback: untranslated pages show English with banner 'This page is not yet available in [language]'. URL strategy: /ja/docs/..., /pt-br/docs/..., with language switcher. Budget: estimate cost per word, annual budget per language, measure ROI (conversion rate lift per language launched)"
+     weight: 0.30
+     description: "Sync and maintenance"
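The "needs update" flagging in the sync criteria compares when the English source last changed against when each translation was made. A minimal sketch, assuming both are available as comparable per-page timestamps (the names and date encoding below are illustrative):

```python
def pages_needing_update(source_changed, translated_at):
    """Flag pages whose English source changed after the translation was made,
    plus pages never translated. Both args map page path -> comparable timestamp."""
    return sorted(
        page for page, changed in source_changed.items()
        if page not in translated_at or translated_at[page] < changed
    )

stale = pages_needing_update(
    {"auth": 20240301, "quickstart": 20240110},
    {"auth": 20240201},  # auth translated before its last English change;
)                        # quickstart was never translated at all
```

Sorting by page traffic instead of alphabetically would give the prioritized queue the criteria describe.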
@@ -0,0 +1,45 @@
+ meta:
+   id: documentation-metrics
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Measure documentation effectiveness — define and track metrics for developer experience including TTFC, task completion, satisfaction, and support ticket deflection"
+   tags: [API, documentation, metrics, TTFC, analytics, developer-experience, expert]
+
+ state: {}
+
+ trigger: |
+   Leadership asks: "How good are our API docs?" You have no data to
+   answer. You need metrics to:
+
+   1. Measure current documentation quality
+   2. Track improvement over time
+   3. Justify investment in documentation
+   4. Identify which docs need the most work
+
+   Available data sources:
+   - Documentation site analytics (page views, time on page, bounce rate)
+   - API usage logs (first API call timestamps, error rates by integration)
+   - Support tickets (tagged by category)
+   - Developer surveys (quarterly, low response rate)
+   - SDK download counts
+   - Sandbox vs production API key creation rates
+
+   Task: Design a documentation metrics program. Define key metrics
+   (leading and lagging), set up measurement methods, create dashboards,
+   and explain how to use metrics to prioritize documentation improvements.
+   Include benchmark targets for each metric.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Key metrics are defined with measurement methods — primary metrics: (1) Time to First Call (TTFC): median time from signup to first successful API call — measure via API logs. Target: under 15 minutes. (2) Task completion rate: can developers complete common tasks (create payment, set up webhooks) using only docs? — measure via usability testing. Target: 90%. (3) Support ticket deflection: percentage of docs-answerable questions that don't become tickets — measure via ticket analysis. Target: reduce doc-answerable tickets by 50%. (4) Developer satisfaction (CSAT/NPS): quarterly survey. Target: NPS > 40. Secondary: page views, search queries (what are developers looking for?), 404 rate, time on page, bounce rate per section"
+     weight: 0.35
+     description: "Key metrics"
+   - type: llm_judge
+     criteria: "Analytics implementation is detailed — documentation analytics: track page views, scroll depth, time on page, search queries (especially zero-result searches), copy-to-clipboard clicks on code examples, 'Was this helpful?' feedback per page. API correlation: link documentation page visits to subsequent API calls (did reading the guide lead to a successful integration?). Funnel analysis: signup → docs visit → sandbox call → production call — where do developers drop off? Support correlation: when a developer files a ticket, what docs pages did they visit first (indicates doc failure)? A/B testing: test different documentation approaches (more examples vs more explanation) and measure completion rates. Tools: Google Analytics/Plausible for docs, custom events for API correlation"
+     weight: 0.35
+     description: "Analytics implementation"
+   - type: llm_judge
+     criteria: "Metrics drive prioritization and ROI — prioritization framework: fix docs with highest (traffic × bounce rate × support ticket volume) first. Dashboard: real-time documentation health scorecard — TTFC trend, top searched terms, most-visited error pages, support ticket volume by category. ROI calculation: each support ticket costs $X, documentation that deflects Y tickets saves $XY per month — justify technical writer headcount. Quarterly review: compare metrics against targets, identify top 5 documentation improvements needed, report to leadership. Feedback loop: metrics identify problem areas → improve docs → measure impact → iterate. Common pitfall: don't optimize for page views (vanity metric) — optimize for task completion and reduced support load"
+     weight: 0.30
+     description: "Prioritization and ROI"
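The fix-first formula in the prioritization criteria (traffic × bounce rate × support ticket volume) can be applied directly. A sketch with made-up page stats; the field names are illustrative:

```python
def doc_priority(pages):
    """Rank pages by the fix-first score:
    traffic x bounce rate x related support-ticket volume, worst first."""
    return sorted(
        pages,
        key=lambda p: p["traffic"] * p["bounce_rate"] * p["tickets"],
        reverse=True,
    )

pages = [
    {"path": "/docs/webhooks", "traffic": 5000, "bounce_rate": 0.6, "tickets": 40},
    {"path": "/docs/auth", "traffic": 20000, "bounce_rate": 0.3, "tickets": 15},
    {"path": "/docs/pagination", "traffic": 800, "bounce_rate": 0.7, "tickets": 5},
]
worst_first = doc_priority(pages)
```

Note how the formula surfaces the webhooks page ahead of the much higher-traffic auth page: multiplying the three signals rewards pages that are both visited, abandoned, and generating tickets, which is exactly the "don't optimize for page views" point in the criteria.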
@@ -0,0 +1,41 @@
+ meta:
+   id: documentation-testing
+   level: 4
+   course: api-documentation-writing
+   type: output
+   description: "Test API documentation — implement automated testing for documentation accuracy including contract tests, example validation, and link checking"
+   tags: [API, documentation, testing, contract-tests, validation, automation, expert]
+
+ state: {}
+
+ trigger: |
+   Your documentation has a trust problem. Developers have been burned
+   by inaccurate docs:
+
+   - Code example in docs returns 400 when copied verbatim
+   - Response schema in docs doesn't match actual response
+   - Documented endpoint was renamed 3 months ago
+   - "Required field" in docs is actually optional
+   - Link to authentication guide is 404
+
+   You need automated testing to guarantee documentation accuracy.
+
+   Task: Design a documentation testing strategy. Include: contract
+   testing (docs match API behavior), code example testing (examples
+   actually work), schema validation (OpenAPI spec matches responses),
+   link checking, and freshness monitoring. Provide implementation
+   details for each type of test.
+
+ assertions:
+   - type: llm_judge
+     criteria: "Contract testing validates docs match API — contract tests: for each documented endpoint, make the documented request and verify: (1) response status matches documented status, (2) response body matches documented schema (all fields present, types correct), (3) required fields in request are actually required (test with/without), (4) documented error scenarios return documented error codes. Tools: Dredd (OpenAPI contract testing), Schemathesis (property-based API testing from OpenAPI), custom test suite. Run against sandbox in CI/CD on every API or docs change. Report: percentage of endpoints passing contract tests, specific failures with diff between documented and actual"
+     weight: 0.35
+     description: "Contract testing"
+   - type: llm_judge
+     criteria: "Code examples and schemas are tested automatically — code example extraction: parse documentation (MDX/Markdown), extract fenced code blocks tagged with language, identify API calls. Execution: run each code example against sandbox, verify it succeeds (non-error response). For curl examples: execute directly. For Python/Node.js: run in Docker container with dependencies installed. Schema validation: compare OpenAPI response schema against actual API responses using JSON Schema validation. Detect: extra fields in response not in schema, missing fields, type mismatches. Freshness: flag docs pages not updated in 90 days for review, detect OpenAPI spec changes that don't have corresponding docs updates"
+     weight: 0.35
+     description: "Example and schema testing"
+   - type: llm_judge
+     criteria: "Link checking and monitoring are comprehensive — link checking: crawl all documentation pages, verify internal links (no 404s), external links (still valid), anchor links (target exists on page). Run on every build and weekly scheduled scan. Monitoring dashboard: documentation health score (contract pass rate, broken links, stale pages, example failures). Alerts: Slack notification when contract test fails, weekly report of documentation health. Metrics tracked over time: documentation accuracy trend, mean time to fix broken docs, percentage of endpoints with passing tests. Integration: documentation test results visible in PR reviews, blocking merge if critical docs tests fail"
+     weight: 0.30
+     description: "Monitoring"
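The schema checks named in the example-testing criteria (missing fields, extra fields, type mismatches) can be sketched as a flat comparison between a documented schema and an actual response body. Real OpenAPI schemas are nested and would go through a JSON Schema validator, so this is a simplified illustration only:

```python
def schema_diff(documented: dict, actual: dict) -> list:
    """Compare a documented response schema (field -> Python type name)
    against an actual response body; report missing fields, type
    mismatches, and undocumented extra fields."""
    problems = []
    for field, type_name in documented.items():
        if field not in actual:
            problems.append(f"missing field: {field}")
        elif type(actual[field]).__name__ != type_name:
            problems.append(f"type mismatch: {field} documented {type_name}, "
                            f"got {type(actual[field]).__name__}")
    for field in actual:
        if field not in documented:
            problems.append(f"undocumented field: {field}")
    return problems

documented = {"id": "str", "amount": "int", "currency": "str"}
actual = {"id": "pay_123", "amount": "10.00", "status": "succeeded"}
issues = schema_diff(documented, actual)
```

The `amount` mismatch (documented as an integer, returned as a string) is exactly the kind of drift the trigger describes developers being burned by.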