@liqhtworks/sophon-sdk 0.1.0 → 0.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (162)
  1. package/.github/workflows/publish.yml +14 -1
  2. package/dist/apis/DownloadsApi.d.ts +1 -1
  3. package/dist/apis/DownloadsApi.js +1 -1
  4. package/dist/apis/HealthApi.d.ts +1 -1
  5. package/dist/apis/HealthApi.js +1 -1
  6. package/dist/apis/JobsApi.d.ts +5 -5
  7. package/dist/apis/JobsApi.js +3 -3
  8. package/dist/apis/UploadsApi.d.ts +1 -1
  9. package/dist/apis/UploadsApi.js +1 -1
  10. package/dist/apis/WebhooksApi.d.ts +1 -1
  11. package/dist/apis/WebhooksApi.js +1 -1
  12. package/dist/esm/apis/DownloadsApi.d.ts +1 -1
  13. package/dist/esm/apis/DownloadsApi.js +1 -1
  14. package/dist/esm/apis/HealthApi.d.ts +1 -1
  15. package/dist/esm/apis/HealthApi.js +1 -1
  16. package/dist/esm/apis/JobsApi.d.ts +5 -5
  17. package/dist/esm/apis/JobsApi.js +3 -3
  18. package/dist/esm/apis/UploadsApi.d.ts +1 -1
  19. package/dist/esm/apis/UploadsApi.js +1 -1
  20. package/dist/esm/apis/WebhooksApi.d.ts +1 -1
  21. package/dist/esm/apis/WebhooksApi.js +1 -1
  22. package/dist/esm/helpers/uploads.js +9 -2
  23. package/dist/esm/models/CompleteUploadResponse.d.ts +1 -1
  24. package/dist/esm/models/CompleteUploadResponse.js +1 -1
  25. package/dist/esm/models/CreateJobOutputOptions.d.ts +1 -1
  26. package/dist/esm/models/CreateJobOutputOptions.js +1 -1
  27. package/dist/esm/models/CreateJobRequest.d.ts +1 -1
  28. package/dist/esm/models/CreateJobRequest.js +1 -1
  29. package/dist/esm/models/CreateUploadRequest.d.ts +1 -1
  30. package/dist/esm/models/CreateUploadRequest.js +1 -1
  31. package/dist/esm/models/CreateUploadResponse.d.ts +1 -1
  32. package/dist/esm/models/CreateUploadResponse.js +1 -1
  33. package/dist/esm/models/CreateWebhookRequest.d.ts +1 -1
  34. package/dist/esm/models/CreateWebhookRequest.js +1 -1
  35. package/dist/esm/models/ErrorBody.d.ts +1 -1
  36. package/dist/esm/models/ErrorBody.js +1 -1
  37. package/dist/esm/models/ErrorEnvelope.d.ts +1 -1
  38. package/dist/esm/models/ErrorEnvelope.js +1 -1
  39. package/dist/esm/models/JobOutputInfo.d.ts +1 -1
  40. package/dist/esm/models/JobOutputInfo.js +1 -1
  41. package/dist/esm/models/JobProfile.d.ts +53 -9
  42. package/dist/esm/models/JobProfile.js +53 -9
  43. package/dist/esm/models/JobProgress.d.ts +1 -1
  44. package/dist/esm/models/JobProgress.js +1 -1
  45. package/dist/esm/models/JobResponse.d.ts +6 -3
  46. package/dist/esm/models/JobResponse.js +1 -1
  47. package/dist/esm/models/JobSourceInfo.d.ts +1 -1
  48. package/dist/esm/models/JobSourceInfo.js +1 -1
  49. package/dist/esm/models/JobSourceType.d.ts +1 -1
  50. package/dist/esm/models/JobSourceType.js +1 -1
  51. package/dist/esm/models/JobStatus.d.ts +1 -1
  52. package/dist/esm/models/JobStatus.js +1 -1
  53. package/dist/esm/models/ListJobsResponse.d.ts +1 -1
  54. package/dist/esm/models/ListJobsResponse.js +1 -1
  55. package/dist/esm/models/OutputContainer.d.ts +1 -1
  56. package/dist/esm/models/OutputContainer.js +1 -1
  57. package/dist/esm/models/ReadyResponse.d.ts +1 -1
  58. package/dist/esm/models/ReadyResponse.js +1 -1
  59. package/dist/esm/models/UploadJobSource.d.ts +1 -1
  60. package/dist/esm/models/UploadJobSource.js +1 -1
  61. package/dist/esm/models/UploadPartResponse.d.ts +1 -1
  62. package/dist/esm/models/UploadPartResponse.js +1 -1
  63. package/dist/esm/models/UploadStatusResponse.d.ts +1 -1
  64. package/dist/esm/models/UploadStatusResponse.js +1 -1
  65. package/dist/esm/models/WebhookDeliveryPayload.d.ts +1 -1
  66. package/dist/esm/models/WebhookDeliveryPayload.js +1 -1
  67. package/dist/esm/models/WebhookListItem.d.ts +1 -1
  68. package/dist/esm/models/WebhookListItem.js +1 -1
  69. package/dist/esm/models/WebhookListResponse.d.ts +1 -1
  70. package/dist/esm/models/WebhookListResponse.js +1 -1
  71. package/dist/esm/models/WebhookResponse.d.ts +1 -1
  72. package/dist/esm/models/WebhookResponse.js +1 -1
  73. package/dist/esm/runtime.d.ts +1 -1
  74. package/dist/esm/runtime.js +1 -1
  75. package/dist/helpers/uploads.js +9 -2
  76. package/dist/models/CompleteUploadResponse.d.ts +1 -1
  77. package/dist/models/CompleteUploadResponse.js +1 -1
  78. package/dist/models/CreateJobOutputOptions.d.ts +1 -1
  79. package/dist/models/CreateJobOutputOptions.js +1 -1
  80. package/dist/models/CreateJobRequest.d.ts +1 -1
  81. package/dist/models/CreateJobRequest.js +1 -1
  82. package/dist/models/CreateUploadRequest.d.ts +1 -1
  83. package/dist/models/CreateUploadRequest.js +1 -1
  84. package/dist/models/CreateUploadResponse.d.ts +1 -1
  85. package/dist/models/CreateUploadResponse.js +1 -1
  86. package/dist/models/CreateWebhookRequest.d.ts +1 -1
  87. package/dist/models/CreateWebhookRequest.js +1 -1
  88. package/dist/models/ErrorBody.d.ts +1 -1
  89. package/dist/models/ErrorBody.js +1 -1
  90. package/dist/models/ErrorEnvelope.d.ts +1 -1
  91. package/dist/models/ErrorEnvelope.js +1 -1
  92. package/dist/models/JobOutputInfo.d.ts +1 -1
  93. package/dist/models/JobOutputInfo.js +1 -1
  94. package/dist/models/JobProfile.d.ts +53 -9
  95. package/dist/models/JobProfile.js +53 -9
  96. package/dist/models/JobProgress.d.ts +1 -1
  97. package/dist/models/JobProgress.js +1 -1
  98. package/dist/models/JobResponse.d.ts +6 -3
  99. package/dist/models/JobResponse.js +1 -1
  100. package/dist/models/JobSourceInfo.d.ts +1 -1
  101. package/dist/models/JobSourceInfo.js +1 -1
  102. package/dist/models/JobSourceType.d.ts +1 -1
  103. package/dist/models/JobSourceType.js +1 -1
  104. package/dist/models/JobStatus.d.ts +1 -1
  105. package/dist/models/JobStatus.js +1 -1
  106. package/dist/models/ListJobsResponse.d.ts +1 -1
  107. package/dist/models/ListJobsResponse.js +1 -1
  108. package/dist/models/OutputContainer.d.ts +1 -1
  109. package/dist/models/OutputContainer.js +1 -1
  110. package/dist/models/ReadyResponse.d.ts +1 -1
  111. package/dist/models/ReadyResponse.js +1 -1
  112. package/dist/models/UploadJobSource.d.ts +1 -1
  113. package/dist/models/UploadJobSource.js +1 -1
  114. package/dist/models/UploadPartResponse.d.ts +1 -1
  115. package/dist/models/UploadPartResponse.js +1 -1
  116. package/dist/models/UploadStatusResponse.d.ts +1 -1
  117. package/dist/models/UploadStatusResponse.js +1 -1
  118. package/dist/models/WebhookDeliveryPayload.d.ts +1 -1
  119. package/dist/models/WebhookDeliveryPayload.js +1 -1
  120. package/dist/models/WebhookListItem.d.ts +1 -1
  121. package/dist/models/WebhookListItem.js +1 -1
  122. package/dist/models/WebhookListResponse.d.ts +1 -1
  123. package/dist/models/WebhookListResponse.js +1 -1
  124. package/dist/models/WebhookResponse.d.ts +1 -1
  125. package/dist/models/WebhookResponse.js +1 -1
  126. package/dist/runtime.d.ts +1 -1
  127. package/dist/runtime.js +1 -1
  128. package/docs/JobProfile.md +1 -1
  129. package/docs/JobsApi.md +1 -1
  130. package/package.json +1 -1
  131. package/src/apis/DownloadsApi.ts +1 -1
  132. package/src/apis/HealthApi.ts +1 -1
  133. package/src/apis/JobsApi.ts +5 -5
  134. package/src/apis/UploadsApi.ts +1 -1
  135. package/src/apis/WebhooksApi.ts +1 -1
  136. package/src/helpers/uploads.ts +9 -2
  137. package/src/models/CompleteUploadResponse.ts +1 -1
  138. package/src/models/CreateJobOutputOptions.ts +1 -1
  139. package/src/models/CreateJobRequest.ts +1 -1
  140. package/src/models/CreateUploadRequest.ts +1 -1
  141. package/src/models/CreateUploadResponse.ts +1 -1
  142. package/src/models/CreateWebhookRequest.ts +1 -1
  143. package/src/models/ErrorBody.ts +1 -1
  144. package/src/models/ErrorEnvelope.ts +1 -1
  145. package/src/models/JobOutputInfo.ts +1 -1
  146. package/src/models/JobProfile.ts +53 -9
  147. package/src/models/JobProgress.ts +1 -1
  148. package/src/models/JobResponse.ts +6 -3
  149. package/src/models/JobSourceInfo.ts +1 -1
  150. package/src/models/JobSourceType.ts +1 -1
  151. package/src/models/JobStatus.ts +1 -1
  152. package/src/models/ListJobsResponse.ts +1 -1
  153. package/src/models/OutputContainer.ts +1 -1
  154. package/src/models/ReadyResponse.ts +1 -1
  155. package/src/models/UploadJobSource.ts +1 -1
  156. package/src/models/UploadPartResponse.ts +1 -1
  157. package/src/models/UploadStatusResponse.ts +1 -1
  158. package/src/models/WebhookDeliveryPayload.ts +1 -1
  159. package/src/models/WebhookListItem.ts +1 -1
  160. package/src/models/WebhookListResponse.ts +1 -1
  161. package/src/models/WebhookResponse.ts +1 -1
  162. package/src/runtime.ts +1 -1
@@ -34,11 +34,24 @@ jobs:
         with:
           node-version: "20"
           registry-url: "https://registry.npmjs.org"
+          # Scoped packages like @liqhtworks/sophon-sdk need the scope here
+          # so actions/setup-node writes the scoped registry mapping into
+          # .npmrc alongside the auth token. Without it, `npm publish`
+          # cannot resolve a registry for @liqhtworks and fails ENEEDAUTH.
+          scope: "@liqhtworks"
 
       - name: Verify tag matches package.json
         shell: bash
+        env:
+          # `GITHUB_REF_NAME` is the *dispatched* ref (often `main`), not the
+          # checked-out tag. Prefer the explicit workflow_dispatch input,
+          # then fall back to GITHUB_REF_NAME for the tag-push trigger.
+          INPUT_TAG: ${{ github.event.inputs.tag }}
         run: |
-          tag="${GITHUB_REF_NAME#v}"
+          tag="${INPUT_TAG#v}"
+          if [ -z "${tag}" ]; then
+            tag="${GITHUB_REF_NAME#v}"
+          fi
          pkg="$(node -p "require('./package.json').version")"
          if [ "${tag}" != "${pkg}" ]; then
            echo "::error::tag v${tag} does not match package.json version ${pkg}" >&2
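The tag-resolution precedence introduced above (dispatch input first, pushed ref as fallback, leading `v` stripped either way) can be exercised outside CI. A minimal sketch, assuming an illustrative `resolve_tag` wrapper that is not part of the workflow itself:

```shell
#!/usr/bin/env sh
# Mirrors the workflow's tag-resolution logic: prefer the
# workflow_dispatch input, fall back to the pushed ref name,
# and strip a leading "v" in either case via ${var#v}.
resolve_tag() {
  INPUT_TAG="$1"
  GITHUB_REF_NAME="$2"
  tag="${INPUT_TAG#v}"
  if [ -z "${tag}" ]; then
    tag="${GITHUB_REF_NAME#v}"
  fi
  printf '%s\n' "${tag}"
}

resolve_tag "v0.1.2" "main"   # dispatch input wins → 0.1.2
resolve_tag "" "v0.1.2"       # tag-push fallback   → 0.1.2
```

Note that an empty `INPUT_TAG` stays empty after `#v`, which is what makes the `-z` fallback fire only for the tag-push trigger.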
@@ -1,6 +1,6 @@
 /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ *
+ * ---
+ *
+ * ## Integration example
+ *
+ * A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration.
+ *
+ * ### 1. One thin adapter, one method per endpoint
+ *
+ * Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names:
+ *
+ * ```ts
+ * @Injectable()
+ * export class SophonService {
+ *   private client() {
+ *     return axios.create({
+ *       baseURL: process.env.SOPHON_BASE_URL,
+ *       headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` },
+ *       timeout: 60_000,
+ *     });
+ *   }
+ *
+ *   async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ }
+ *   async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ }
+ *   async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ }
+ *   async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ }
+ *   async getJob(id) { /_* GET /v1/jobs/{id} *_/ }
+ *   async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ }
+ * }
+ * ```
+ *
+ * **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route but a shared key collides across retries that hit different endpoints. Do this:
+ *
+ * ```ts
+ * const base = `video:${video.id}:v1`;
+ * await sophon.createUploadSession(req, `${base}:create-upload`);
+ * await sophon.completeUpload(uploadId, `${base}:complete-upload`);
+ * await sophon.createJob(req, `${base}:create-job`);
+ * ```
+ *
+ * **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate.
+ *
+ * ### 2. Model your pipeline as a state machine
+ *
+ * Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job:
+ *
+ * ```ts
+ * interface SophonState {
+ *   jobId: string | null;    // null = not dispatched; string = poll it
+ *   uploadId?: string;       // persist between upload + createJob
+ *   profile?: string;        // sophon-auto | sophon-espresso | ...
+ *   dispatchRetries: number; // 3 strikes → fallback
+ *   downloadRetries: number;
+ *   lastError?: { stage, code, message, at };
+ * }
+ *
+ * // In your cron (5-second tick is plenty):
+ * if (state.jobId === null) {
+ *   await dispatch(video, state); // upload + createJob
+ * } else {
+ *   await poll(video, state);     // getJob + (if completed) downloadAndComplete
+ * }
+ * ```
+ *
+ * Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file.
+ *
+ * ### 3. Stream for large sources; buffer for small
+ *
+ * User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response:
+ *
+ * ```ts
+ * async uploadStream(stream, fileName, mimeType, fileSize) {
+ *   const session = await this.createUploadSession({
+ *     file_name: fileName,
+ *     file_size: fileSize,
+ *     mime_type: mimeType,
+ *   });
+ *   let partIndex = 0, buffer = Buffer.alloc(0);
+ *   for await (const chunk of stream) {
+ *     buffer = Buffer.concat([buffer, chunk]);
+ *     while (buffer.length >= session.chunk_size) {
+ *       await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size));
+ *       buffer = buffer.subarray(session.chunk_size);
+ *     }
+ *   }
+ *   if (buffer.length > 0) {
+ *     await this.uploadChunk(session.id, partIndex, buffer);
+ *   }
+ *   return this.completeUpload(session.id);
+ * }
+ * ```
+ *
+ * Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime.
+ *
+ * ### 4. Always keep a fallback URL
+ *
+ * Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original" — the user's video never disappears because SOPHON is slow or down. This is the single most important invariant:
+ *
+ * ```ts
+ * await videoRepository.update({ id: video.id }, {
+ *   videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact
+ *   status: VideoStatus.EncodingPending,
+ *   sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 },
+ *   sourceFileSize: sourceBytes,
+ * });
+ * ```
+ *
+ * On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency.
+ *
+ * ### 5. Handle the "no-gain" success path
+ *
+ * `sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket:
+ *
+ * ```ts
+ * if (job.status === 'completed') {
+ *   if (job.final_artifact === 'original') {
+ *     // Persist outputFileSize = sourceFileSize so your UI shows
+ *     // "no reduction" instead of a missing value.
+ *     await completeWithFallbackOutput(video, job.output?.bytes ?? null);
+ *     return;
+ *   }
+ *   await downloadAndComplete(video, state, job.output?.bytes ?? null);
+ * }
+ * ```
+ *
+ * ### 6. Finalize by streaming into your own storage
+ *
+ * `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering:
+ *
+ * ```ts
+ * const { stream } = await sophon.downloadOutputStream(state.jobId);
+ * const outputKey = `encoded/${video.userId}/${video.id}.mp4`;
+ * await fileService.uploadStream(outputKey, stream, 'video/mp4');
+ * await videoRepository.update({ id: video.id }, {
+ *   videoUrl: fileService.cloudfrontUrl(outputKey),
+ *   outputFileSize: sophonOutputBytes,
+ *   status: VideoStatus.Done,
+ * });
+ * ```
+ *
+ * ### 7. Failure taxonomy
+ *
+ * | Error | Handling |
+ * |---|---|
+ * | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. |
+ * | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. |
+ * | Retryable download failure | Increment `downloadRetries`; after 3, fall back. |
+ * | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. |
+ * | Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. |
+ * | Row stuck in encode state > 23h | Fall back (safety net against orphans). |
+ *
+ * ### Minimal config
+ *
+ * ```bash
+ * SOPHON_API_KEY=sk_live_...
+ * SOPHON_BASE_URL=https://api.liqhtworks.xyz
+ * ```
  *
  * The version of the OpenAPI document: 1.0.0
  *
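The failure taxonomy in the new description reduces to one small, pure decision function. A minimal sketch in TypeScript, assuming hypothetical names (`routeFailure`, the `"fallback" | "retry" | "wait"` labels) that are illustrative and not part of the SDK:

```typescript
type FailureStage = "dispatch" | "download" | "poll";

interface RetryState {
  dispatchRetries: number; // budget of 3, per the taxonomy
  downloadRetries: number; // budget of 3, per the taxonomy
}

const MAX_RETRIES = 3;

// Route a failure per the taxonomy: non-retryable errors and getJob 404s
// are terminal (fall back to the source URL); retryable dispatch/download
// failures burn a budget of 3; transient poll errors burn nothing and
// simply wait for the next cron tick.
function routeFailure(
  state: RetryState,
  stage: FailureStage,
  opts: { retryable: boolean; httpStatus?: number },
): "fallback" | "retry" | "wait" {
  if (!opts.retryable || opts.httpStatus === 404) return "fallback";
  if (stage === "poll") return "wait"; // transient; next tick re-polls
  const used =
    stage === "dispatch" ? ++state.dispatchRetries : ++state.downloadRetries;
  return used >= MAX_RETRIES ? "fallback" : "retry";
}
```

Keeping this a pure function over the persisted `sophonState` counters makes the cron handler trivial to unit-test without touching the API.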
@@ -3,7 +3,7 @@
3
3
  /* eslint-disable */
4
4
  /**
5
5
  * SOPHON Encoding API
6
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
6
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack\'s equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). 
A TypeScript union will drift; let the server validate. ### 2. Model your pipeline as a state machine Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job: ```ts interface SophonState { jobId: string | null; // null = not dispatched; string = poll it uploadId?: string; // persist between upload + createJob profile?: string; // sophon-auto | sophon-espresso | ... dispatchRetries: number; // 3 strikes → fallback downloadRetries: number; lastError?: { stage, code, message, at }; } // In your cron (5-second tick is plenty): if (state.jobId === null) { await dispatch(video, state); // upload + createJob } else { await poll(video, state); // getJob + (if completed) downloadAndComplete } ``` Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file. ### 3. Stream for large sources; buffer for small User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response: ```ts async uploadStream(stream, fileName, mimeType, fileSize) { const session = await this.createUploadSession({ file_name: fileName, file_size: fileSize, mime_type: mimeType, }); let partIndex = 0, buffer = Buffer.alloc(0); for await (const chunk of stream) { buffer = Buffer.concat([buffer, chunk]); while (buffer.length >= session.chunk_size) { await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size)); buffer = buffer.subarray(session.chunk_size); } } if (buffer.length > 0) { await this.uploadChunk(session.id, partIndex, buffer); } return this.completeUpload(session.id); } ``` Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime. ### 4. Always keep a fallback URL Before a row enters your encoding state, make sure the source is already playable from your CDN. 
Every SOPHON failure then degrades to \"use the original\" — the user\'s video never disappears because SOPHON is slow or down. This is the single most important invariant: ```ts await videoRepository.update({ id: video.id }, { videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact status: VideoStatus.EncodingPending, sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 }, sourceFileSize: sourceBytes, }); ``` On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency. ### 5. Handle the \"no-gain\" success path `sophon-auto` runs a pre-probe and, when it decides the output wouldn\'t be smaller than the source, returns `final_artifact: \"original\"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket: ```ts if (job.status === \'completed\') { if (job.final_artifact === \'original\') { // Persist outputFileSize = sourceFileSize so your UI shows // \"no reduction\" instead of a missing value. await completeWithFallbackOutput(video, job.output?.bytes ?? null); return; } await downloadAndComplete(video, state, job.output?.bytes ?? null); } ``` ### 6. Finalize by streaming into your own storage `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering: ```ts const { stream } = await sophon.downloadOutputStream(state.jobId); const outputKey = `encoded/${video.userId}/${video.id}.mp4`; await fileService.uploadStream(outputKey, stream, \'video/mp4\'); await videoRepository.update({ id: video.id }, { videoUrl: fileService.cloudfrontUrl(outputKey), outputFileSize: sophonOutputBytes, status: VideoStatus.Done, }); ``` ### 7. Failure taxonomy | Error | Handling | |---|---| | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. 
| | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. | | Retryable download failure | Increment `downloadRetries`; after 3, fall back. | | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. | | Transient poll network error | Do nothing; next tick retries. Don\'t burn retry budget. | | Row stuck in encode state > 23h | Fall back (safety net against orphans). | ### Minimal config ```bash SOPHON_API_KEY=sk_live_... SOPHON_BASE_URL=https://api.liqhtworks.xyz ```
7
7
  *
8
8
  * The version of the OpenAPI document: 1.0.0
9
9
  *
@@ -1,6 +1,6 @@
1
1
  /**
2
2
  * SOPHON Encoding API
3
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
3
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack\'s equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). 
A TypeScript union will drift; let the server validate. ### 2. Model your pipeline as a state machine Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job: ```ts interface SophonState { jobId: string | null; // null = not dispatched; string = poll it uploadId?: string; // persist between upload + createJob profile?: string; // sophon-auto | sophon-espresso | ... dispatchRetries: number; // 3 strikes → fallback downloadRetries: number; lastError?: { stage, code, message, at }; } // In your cron (5-second tick is plenty): if (state.jobId === null) { await dispatch(video, state); // upload + createJob } else { await poll(video, state); // getJob + (if completed) downloadAndComplete } ``` Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file. ### 3. Stream for large sources; buffer for small User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response: ```ts async uploadStream(stream, fileName, mimeType, fileSize) { const session = await this.createUploadSession({ file_name: fileName, file_size: fileSize, mime_type: mimeType, }); let partIndex = 0, buffer = Buffer.alloc(0); for await (const chunk of stream) { buffer = Buffer.concat([buffer, chunk]); while (buffer.length >= session.chunk_size) { await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size)); buffer = buffer.subarray(session.chunk_size); } } if (buffer.length > 0) { await this.uploadChunk(session.id, partIndex, buffer); } return this.completeUpload(session.id); } ``` Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime. ### 4. Always keep a fallback URL Before a row enters your encoding state, make sure the source is already playable from your CDN. 
Every SOPHON failure then degrades to \"use the original\" — the user\'s video never disappears because SOPHON is slow or down. This is the single most important invariant: ```ts await videoRepository.update({ id: video.id }, { videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact status: VideoStatus.EncodingPending, sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 }, sourceFileSize: sourceBytes, }); ``` On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency. ### 5. Handle the \"no-gain\" success path `sophon-auto` runs a pre-probe and, when it decides the output wouldn\'t be smaller than the source, returns `final_artifact: \"original\"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket: ```ts if (job.status === \'completed\') { if (job.final_artifact === \'original\') { // Persist outputFileSize = sourceFileSize so your UI shows // \"no reduction\" instead of a missing value. await completeWithFallbackOutput(video, job.output?.bytes ?? null); return; } await downloadAndComplete(video, state, job.output?.bytes ?? null); } ``` ### 6. Finalize by streaming into your own storage `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering: ```ts const { stream } = await sophon.downloadOutputStream(state.jobId); const outputKey = `encoded/${video.userId}/${video.id}.mp4`; await fileService.uploadStream(outputKey, stream, \'video/mp4\'); await videoRepository.update({ id: video.id }, { videoUrl: fileService.cloudfrontUrl(outputKey), outputFileSize: sophonOutputBytes, status: VideoStatus.Done, }); ``` ### 7. Failure taxonomy | Error | Handling | |---|---| | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. 
| | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. | | Retryable download failure | Increment `downloadRetries`; after 3, fall back. | | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. | | Transient poll network error | Do nothing; next tick retries. Don\'t burn retry budget. | | Row stuck in encode state > 23h | Fall back (safety net against orphans). | ### Minimal config ```bash SOPHON_API_KEY=sk_live_... SOPHON_BASE_URL=https://api.liqhtworks.xyz ```
4
4
  *
5
5
  * The version of the OpenAPI document: 1.0.0
6
6
  *
@@ -3,7 +3,7 @@
3
3
  /* eslint-disable */
4
4
  /**
5
5
  * SOPHON Encoding API
6
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route, but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate. ### 2. Model your pipeline as a state machine Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job: ```ts interface SophonState { jobId: string | null; // null = not dispatched; string = poll it uploadId?: string; // persist between upload + createJob profile?: string; // sophon-auto | sophon-espresso | ... dispatchRetries: number; // 3 strikes → fallback downloadRetries: number; lastError?: { stage, code, message, at }; } // In your cron (5-second tick is plenty): if (state.jobId === null) { await dispatch(video, state); // upload + createJob } else { await poll(video, state); // getJob + (if completed) downloadAndComplete } ``` Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file. ### 3. Stream for large sources; buffer for small User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response: ```ts async uploadStream(stream, fileName, mimeType, fileSize) { const session = await this.createUploadSession({ file_name: fileName, file_size: fileSize, mime_type: mimeType, }); let partIndex = 0, buffer = Buffer.alloc(0); for await (const chunk of stream) { buffer = Buffer.concat([buffer, chunk]); while (buffer.length >= session.chunk_size) { await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size)); buffer = buffer.subarray(session.chunk_size); } } if (buffer.length > 0) { await this.uploadChunk(session.id, partIndex, buffer); } return this.completeUpload(session.id); } ``` Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime. ### 4. Always keep a fallback URL Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original" — the user's video never disappears because SOPHON is slow or down. This is the single most important invariant: ```ts await videoRepository.update({ id: video.id }, { videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact status: VideoStatus.EncodingPending, sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 }, sourceFileSize: sourceBytes, }); ``` On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency. ### 5. Handle the "no-gain" success path `sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket: ```ts if (job.status === 'completed') { if (job.final_artifact === 'original') { // Persist outputFileSize = sourceFileSize so your UI shows // "no reduction" instead of a missing value. await completeWithFallbackOutput(video, job.output?.bytes ?? null); return; } await downloadAndComplete(video, state, job.output?.bytes ?? null); } ``` ### 6. Finalize by streaming into your own storage `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering: ```ts const { stream } = await sophon.downloadOutputStream(state.jobId); const outputKey = `encoded/${video.userId}/${video.id}.mp4`; await fileService.uploadStream(outputKey, stream, 'video/mp4'); await videoRepository.update({ id: video.id }, { videoUrl: fileService.cloudfrontUrl(outputKey), outputFileSize: sophonOutputBytes, status: VideoStatus.Done, }); ``` ### 7. Failure taxonomy | Error | Handling | |---|---| | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. | | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. | | Retryable download failure | Increment `downloadRetries`; after 3, fall back. | | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. | | Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. | | Row stuck in encode state > 23h | Fall back (safety net against orphans). | ### Minimal config ```bash SOPHON_API_KEY=sk_live_... SOPHON_BASE_URL=https://api.liqhtworks.xyz ```
  *
  * The version of the OpenAPI document: 1.0.0
  *
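The failure taxonomy in the integration example above condenses into a single routing function. A minimal sketch — names like `routeFailure` and `FailureInput` are illustrative, not part of the SDK:

```typescript
// Sketch of the failure-handling table: map a classified failure to an
// action. `retries` is the stage counter AFTER incrementing for this failure.
type Stage = "dispatch" | "download" | "poll";
type Action = "fallback" | "retry" | "wait";

interface FailureInput {
  stage: Stage;
  httpStatus?: number;    // e.g. 404 from getJob
  retryable?: boolean;    // structured flag from a SOPHON error body
  retries: number;        // dispatchRetries or downloadRetries, post-increment
  hoursInState: number;   // for the 23h stuck-row guard
}

const MAX_RETRIES = 3;
const STUCK_HOURS = 23;

function routeFailure(f: FailureInput): Action {
  if (f.hoursInState > STUCK_HOURS) return "fallback";   // orphan safety net
  if (f.retryable === false) return "fallback";          // terminal per API
  if (f.stage === "poll") {
    if (f.httpStatus === 404) return "fallback";         // job expired / never created
    return "wait";                                       // transient; next tick retries
  }
  // dispatch/download failures burn their retry budget, then fall back
  return f.retries >= MAX_RETRIES ? "fallback" : "retry";
}
```

Every path ends in `fallback`, `retry`, or `wait` — there is no state in which the row can be lost.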
@@ -1,6 +1,6 @@
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route, but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate. ### 2. Model your pipeline as a state machine Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job: ```ts interface SophonState { jobId: string | null; // null = not dispatched; string = poll it uploadId?: string; // persist between upload + createJob profile?: string; // sophon-auto | sophon-espresso | ... dispatchRetries: number; // 3 strikes → fallback downloadRetries: number; lastError?: { stage, code, message, at }; } // In your cron (5-second tick is plenty): if (state.jobId === null) { await dispatch(video, state); // upload + createJob } else { await poll(video, state); // getJob + (if completed) downloadAndComplete } ``` Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file. ### 3. Stream for large sources; buffer for small User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response: ```ts async uploadStream(stream, fileName, mimeType, fileSize) { const session = await this.createUploadSession({ file_name: fileName, file_size: fileSize, mime_type: mimeType, }); let partIndex = 0, buffer = Buffer.alloc(0); for await (const chunk of stream) { buffer = Buffer.concat([buffer, chunk]); while (buffer.length >= session.chunk_size) { await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size)); buffer = buffer.subarray(session.chunk_size); } } if (buffer.length > 0) { await this.uploadChunk(session.id, partIndex, buffer); } return this.completeUpload(session.id); } ``` Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime. ### 4. Always keep a fallback URL Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original" — the user's video never disappears because SOPHON is slow or down. This is the single most important invariant: ```ts await videoRepository.update({ id: video.id }, { videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact status: VideoStatus.EncodingPending, sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 }, sourceFileSize: sourceBytes, }); ``` On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency. ### 5. Handle the "no-gain" success path `sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket: ```ts if (job.status === 'completed') { if (job.final_artifact === 'original') { // Persist outputFileSize = sourceFileSize so your UI shows // "no reduction" instead of a missing value. await completeWithFallbackOutput(video, job.output?.bytes ?? null); return; } await downloadAndComplete(video, state, job.output?.bytes ?? null); } ``` ### 6. Finalize by streaming into your own storage `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering: ```ts const { stream } = await sophon.downloadOutputStream(state.jobId); const outputKey = `encoded/${video.userId}/${video.id}.mp4`; await fileService.uploadStream(outputKey, stream, 'video/mp4'); await videoRepository.update({ id: video.id }, { videoUrl: fileService.cloudfrontUrl(outputKey), outputFileSize: sophonOutputBytes, status: VideoStatus.Done, }); ``` ### 7. Failure taxonomy | Error | Handling | |---|---| | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. | | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. | | Retryable download failure | Increment `downloadRetries`; after 3, fall back. | | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. | | Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. | | Row stuck in encode state > 23h | Fall back (safety net against orphans). | ### Minimal config ```bash SOPHON_API_KEY=sk_live_... SOPHON_BASE_URL=https://api.liqhtworks.xyz ```
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -66,7 +66,7 @@ export interface JobsApiInterface {
  */
  createJobRequestOpts(requestParameters: CreateJobOperationRequest): Promise<runtime.RequestOpts>;
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * @summary Submit an encoding job
  * @param {string} idempotencyKey Client-generated UUID or string for exactly-once semantics. Required on all POST endpoints. Replaying the same key with the same request body returns the original response without side effects.
  * @param {CreateJobRequest} createJobRequest
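The exactly-once contract described for `idempotencyKey` can be modeled in a few lines. A toy in-memory sketch — the rejection of a key reuse with a *different* body is an assumption; the docstring above only specifies same-body replay:

```typescript
// Toy model of Idempotency-Key replay: same key + same body returns the
// original response with no second side effect. Throwing on key reuse with
// a different body is an ASSUMED behavior, not stated by the API docs.
const seen = new Map<string, { body: string; response: string }>();

function handlePost(key: string, body: string, compute: () => string): string {
  const prior = seen.get(key);
  if (prior) {
    if (prior.body !== body) throw new Error("Idempotency-Key reused with a different body");
    return prior.response;             // replay: no side effects
  }
  const response = compute();          // the one real execution
  seen.set(key, { body, response });
  return response;
}
```

This is why the integration example suffixes keys per endpoint: `compute()` must run exactly once per logical operation, not once per key prefix.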
@@ -76,7 +76,7 @@ export interface JobsApiInterface {
  */
  createJobRaw(requestParameters: CreateJobOperationRequest, initOverrides?: RequestInit | runtime.InitOverrideFunction): Promise<runtime.ApiResponse<JobResponse>>;
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  createJob(requestParameters: CreateJobOperationRequest, initOverrides?: RequestInit | runtime.InitOverrideFunction): Promise<JobResponse>;
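The profile guidance in the docstring above can be sketched as a selection helper. Hedged sketch: only `sophon-auto` and the coffee profile names are confirmed by the docstring; the concrete `-hq` / `-hq-10bit` ids below are assumed spellings — check `JobProfile` for the real enum:

```typescript
// Illustrative profile selection following the docstring guidance.
// Profile ids with -hq / -hq-10bit suffixes are ASSUMED spellings.
interface SourceProbe {
  codec: string;            // e.g. "prores", "dnxhd", "braw", "h264"
  bitDepth: number;         // 8 or 10
  deterministic?: boolean;  // caller wants fixed encoder settings
}

function pickProfile(p: SourceProbe): string {
  const heavy = ["prores", "dnxhd", "braw"].includes(p.codec.toLowerCase());
  if (heavy && p.bitDepth >= 10) return "sophon-espresso-hq-10bit"; // assumed id
  if (heavy) return "sophon-espresso-hq";                           // assumed id
  if (p.deterministic) return "sophon-espresso";  // explicit coffee profile
  return "sophon-auto";                           // default recommendation
}
```

Since profiles are plain strings, a helper like this can evolve without an SDK update when new variants ship.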
@@ -171,12 +171,12 @@ export declare class JobsApi extends runtime.BaseAPI implements JobsApiInterface
  */
  createJobRequestOpts(requestParameters: CreateJobOperationRequest): Promise<runtime.RequestOpts>;
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  createJobRaw(requestParameters: CreateJobOperationRequest, initOverrides?: RequestInit | runtime.InitOverrideFunction): Promise<runtime.ApiResponse<JobResponse>>;
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  createJob(requestParameters: CreateJobOperationRequest, initOverrides?: RequestInit | runtime.InitOverrideFunction): Promise<JobResponse>;
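The `output.target_height` rule in the docstring above (aspect-preserving, both dimensions rounded to even, no-op when absent or not smaller than the source) works out to roughly the following. The exact server-side rounding mode is an assumption — round-to-nearest-even-integer here:

```typescript
// Sketch of the target_height sizing rule: width derived from the source
// aspect ratio, both dimensions rounded to even. The rounding mode is an
// ASSUMPTION about the server's behavior.
function outputDims(srcW: number, srcH: number, targetH?: number): { width: number; height: number } {
  if (targetH === undefined || targetH >= srcH) {
    return { width: srcW, height: srcH };   // absent or larger: source dims
  }
  const even = (n: number) => 2 * Math.round(n / 2);
  return { width: even(srcW * (targetH / srcH)), height: even(targetH) };
}
```

For example, a 1920×1080 source with `target_height: 720` yields 1280×720, while `target_height: 2160` leaves the output at source dimensions.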
@@ -3,7 +3,7 @@
  /* eslint-disable */
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route, but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate. ### 2. Model your pipeline as a state machine Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job: ```ts interface SophonState { jobId: string | null; // null = not dispatched; string = poll it uploadId?: string; // persist between upload + createJob profile?: string; // sophon-auto | sophon-espresso | ... dispatchRetries: number; // 3 strikes → fallback downloadRetries: number; lastError?: { stage, code, message, at }; } // In your cron (5-second tick is plenty): if (state.jobId === null) { await dispatch(video, state); // upload + createJob } else { await poll(video, state); // getJob + (if completed) downloadAndComplete } ``` Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file. ### 3. Stream for large sources; buffer for small User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response: ```ts async uploadStream(stream, fileName, mimeType, fileSize) { const session = await this.createUploadSession({ file_name: fileName, file_size: fileSize, mime_type: mimeType, }); let partIndex = 0, buffer = Buffer.alloc(0); for await (const chunk of stream) { buffer = Buffer.concat([buffer, chunk]); while (buffer.length >= session.chunk_size) { await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size)); buffer = buffer.subarray(session.chunk_size); } } if (buffer.length > 0) { await this.uploadChunk(session.id, partIndex, buffer); } return this.completeUpload(session.id); } ``` Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime. ### 4. Always keep a fallback URL Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original" — the user's video never disappears because SOPHON is slow or down. This is the single most important invariant: ```ts await videoRepository.update({ id: video.id }, { videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact status: VideoStatus.EncodingPending, sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 }, sourceFileSize: sourceBytes, }); ``` On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency. ### 5. Handle the "no-gain" success path `sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket: ```ts if (job.status === 'completed') { if (job.final_artifact === 'original') { // Persist outputFileSize = sourceFileSize so your UI shows // "no reduction" instead of a missing value. await completeWithFallbackOutput(video, job.output?.bytes ?? null); return; } await downloadAndComplete(video, state, job.output?.bytes ?? null); } ``` ### 6. Finalize by streaming into your own storage `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering: ```ts const { stream } = await sophon.downloadOutputStream(state.jobId); const outputKey = `encoded/${video.userId}/${video.id}.mp4`; await fileService.uploadStream(outputKey, stream, 'video/mp4'); await videoRepository.update({ id: video.id }, { videoUrl: fileService.cloudfrontUrl(outputKey), outputFileSize: sophonOutputBytes, status: VideoStatus.Done, }); ``` ### 7. Failure taxonomy | Error | Handling | |---|---| | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. | | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. | | Retryable download failure | Increment `downloadRetries`; after 3, fall back. | | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. | | Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. | | Row stuck in encode state > 23h | Fall back (safety net against orphans). | ### Minimal config ```bash SOPHON_API_KEY=sk_live_... SOPHON_BASE_URL=https://api.liqhtworks.xyz ```
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -95,7 +95,7 @@ class JobsApi extends runtime.BaseAPI {
  };
  }
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  async createJobRaw(requestParameters, initOverrides) {
@@ -104,7 +104,7 @@ class JobsApi extends runtime.BaseAPI {
  return new runtime.JSONApiResponse(response, (jsonValue) => (0, index_1.JobResponseFromJSON)(jsonValue));
  }
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  async createJob(requestParameters, initOverrides) {
@@ -1,6 +1,6 @@
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
3
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack\'s equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). 
A TypeScript union will drift; let the server validate.

### 2. Model your pipeline as a state machine

Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job:

```ts
interface SophonState {
  jobId: string | null;     // null = not dispatched; string = poll it
  uploadId?: string;        // persist between upload + createJob
  profile?: string;         // sophon-auto | sophon-espresso | ...
  dispatchRetries: number;  // 3 strikes → fallback
  downloadRetries: number;
  lastError?: { stage: string; code: string; message: string; at: string };
}

// In your cron (a 5-second tick is plenty):
if (state.jobId === null) {
  await dispatch(video, state); // upload + createJob
} else {
  await poll(video, state);     // getJob + (if completed) downloadAndComplete
}
```

Persisting `uploadId` between upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file.

### 3. Stream for large sources; buffer for small

User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response:

```ts
async uploadStream(stream, fileName, mimeType, fileSize) {
  const session = await this.createUploadSession({
    file_name: fileName,
    file_size: fileSize,
    mime_type: mimeType,
  });
  let partIndex = 0;
  let buffer = Buffer.alloc(0);
  for await (const chunk of stream) {
    buffer = Buffer.concat([buffer, chunk]);
    while (buffer.length >= session.chunk_size) {
      await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size));
      buffer = buffer.subarray(session.chunk_size);
    }
  }
  if (buffer.length > 0) {
    await this.uploadChunk(session.id, partIndex, buffer);
  }
  return this.completeUpload(session.id);
}
```

Generated outputs from a model run are typically under 30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime.

### 4. Always keep a fallback URL

Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original" — the user's video never disappears because SOPHON is slow or down. This is the single most important invariant:

```ts
await videoRepository.update({ id: video.id }, {
  videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact
  status: VideoStatus.EncodingPending,
  sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 },
  sourceFileSize: sourceBytes,
});
```

On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, the 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is an enhancement, not a delivery dependency.

### 5. Handle the "no-gain" success path

`sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket:

```ts
if (job.status === 'completed') {
  if (job.final_artifact === 'original') {
    // Persist outputFileSize = sourceFileSize so your UI shows
    // "no reduction" instead of a missing value.
    await completeWithFallbackOutput(video, job.output?.bytes ?? null);
    return;
  }
  await downloadAndComplete(video, state, job.output?.bytes ?? null);
}
```

### 6. Finalize by streaming into your own storage

`GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering:

```ts
const { stream } = await sophon.downloadOutputStream(state.jobId);
const outputKey = `encoded/${video.userId}/${video.id}.mp4`;
await fileService.uploadStream(outputKey, stream, 'video/mp4');
await videoRepository.update({ id: video.id }, {
  videoUrl: fileService.cloudfrontUrl(outputKey),
  outputFileSize: sophonOutputBytes,
  status: VideoStatus.Done,
});
```

### 7. Failure taxonomy

| Error | Handling |
|---|---|
| Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with the source URL. |
| Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. |
| Retryable download failure | Increment `downloadRetries`; after 3, fall back. |
| `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. |
| Transient poll network error | Do nothing; the next tick retries. Don't burn retry budget. |
| Row stuck in encode state > 23h | Fall back (safety net against orphans). |

### Minimal config

```bash
SOPHON_API_KEY=sk_live_...
SOPHON_BASE_URL=https://api.liqhtworks.xyz
```
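The taxonomy above collapses naturally into a single pure classifier that the cron tick can dispatch on. This is a minimal sketch; `classifyFailure`, `FailureContext`, and the action names are hypothetical, not part of the SDK:

```typescript
// Hypothetical shapes for illustration; field names are assumptions.
type FailureAction = 'fallback' | 'retry-dispatch' | 'retry-download' | 'wait';

interface FailureContext {
  stage: 'dispatch' | 'poll' | 'download';
  httpStatus?: number;      // present when the failure was an HTTP error
  retryable?: boolean;      // structured flag from a SOPHON error body
  stuckHours: number;       // time the row has spent in the encode state
  dispatchRetries: number;  // prior attempts, per SophonState
  downloadRetries: number;
}

const MAX_RETRIES = 3;
const STUCK_LIMIT_HOURS = 23;

function classifyFailure(ctx: FailureContext): FailureAction {
  if (ctx.stuckHours > STUCK_LIMIT_HOURS) return 'fallback'; // orphan safety net
  if (ctx.retryable === false) return 'fallback';            // terminal by contract
  if (ctx.stage === 'poll') {
    if (ctx.httpStatus === 404) return 'fallback';           // job expired or never created
    return 'wait';                                           // transient; next tick retries free
  }
  if (ctx.stage === 'dispatch') {
    // This attempt counts: fall back once the third try has failed.
    return ctx.dispatchRetries + 1 >= MAX_RETRIES ? 'fallback' : 'retry-dispatch';
  }
  return ctx.downloadRetries + 1 >= MAX_RETRIES ? 'fallback' : 'retry-download';
}
```

Keeping the classifier pure (no I/O) makes the retry policy trivially unit-testable, which matters once the table grows new rows.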
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -3,7 +3,7 @@
  /* eslint-disable */
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. […]
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -1,6 +1,6 @@
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. […]
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -3,7 +3,7 @@
  /* eslint-disable */
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. […]
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -1,6 +1,6 @@
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. […]
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -2,7 +2,7 @@
  /* eslint-disable */
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
5
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ *
+ * ---
+ *
+ * ## Integration example
+ *
+ * A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows — user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration.
+ *
+ * ### 1. One thin adapter, one method per endpoint
+ *
+ * Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names:
+ *
+ * ```ts
+ * @Injectable()
+ * export class SophonService {
+ *   private client() {
+ *     return axios.create({
+ *       baseURL: process.env.SOPHON_BASE_URL,
+ *       headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` },
+ *       timeout: 60_000,
+ *     });
+ *   }
+ *   async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ }
+ *   async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ }
+ *   async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ }
+ *   async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ }
+ *   async getJob(id) { /_* GET /v1/jobs/{id} *_/ }
+ *   async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ }
+ * }
+ * ```
+ *
+ * **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route, but a shared key collides across retries that hit different endpoints. Do this:
+ *
+ * ```ts
+ * const base = `video:${video.id}:v1`;
+ * await sophon.createUploadSession(req, `${base}:create-upload`);
+ * await sophon.completeUpload(uploadId, `${base}:complete-upload`);
+ * await sophon.createJob(req, `${base}:create-job`);
+ * ```
+ *
+ * **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate.
+ *
+ * ### 2. Model your pipeline as a state machine
+ *
+ * Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job:
+ *
+ * ```ts
+ * interface SophonState {
+ *   jobId: string | null;     // null = not dispatched; string = poll it
+ *   uploadId?: string;        // persist between upload + createJob
+ *   profile?: string;         // sophon-auto | sophon-espresso | ...
+ *   dispatchRetries: number;  // 3 strikes → fallback
+ *   downloadRetries: number;
+ *   lastError?: { stage, code, message, at };
+ * }
+ *
+ * // In your cron (5-second tick is plenty):
+ * if (state.jobId === null) {
+ *   await dispatch(video, state); // upload + createJob
+ * } else {
+ *   await poll(video, state);     // getJob + (if completed) downloadAndComplete
+ * }
+ * ```
+ *
+ * Persisting `uploadId` between the upload completion and the `createJob` call matters — a crash in that window otherwise re-uploads the file.
+ *
+ * ### 3. Stream for large sources; buffer for small
+ *
+ * User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response:
+ *
+ * ```ts
+ * async uploadStream(stream, fileName, mimeType, fileSize) {
+ *   const session = await this.createUploadSession({
+ *     file_name: fileName, file_size: fileSize, mime_type: mimeType,
+ *   });
+ *   let partIndex = 0, buffer = Buffer.alloc(0);
+ *   for await (const chunk of stream) {
+ *     buffer = Buffer.concat([buffer, chunk]);
+ *     while (buffer.length >= session.chunk_size) {
+ *       await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size));
+ *       buffer = buffer.subarray(session.chunk_size);
+ *     }
+ *   }
+ *   if (buffer.length > 0) {
+ *     await this.uploadChunk(session.id, partIndex, buffer);
+ *   }
+ *   return this.completeUpload(session.id);
+ * }
+ * ```
+ *
+ * Generated outputs from a model run are typically <30 MB — for those, a buffered upload path is simpler and avoids managing a stream lifetime.
+ *
+ * ### 4. Always keep a fallback URL
+ *
+ * Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original" — the user's video never disappears because SOPHON is slow or down. This is the single most important invariant:
+ *
+ * ```ts
+ * await videoRepository.update({ id: video.id }, {
+ *   videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact
+ *   status: VideoStatus.EncodingPending,
+ *   sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 },
+ *   sourceFileSize: sourceBytes,
+ * });
+ * ```
+ *
+ * On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency.
+ *
+ * ### 5. Handle the "no-gain" success path
+ *
+ * `sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download — the source already lives in your bucket:
+ *
+ * ```ts
+ * if (job.status === 'completed') {
+ *   if (job.final_artifact === 'original') {
+ *     // Persist outputFileSize = sourceFileSize so your UI shows
+ *     // "no reduction" instead of a missing value.
+ *     await completeWithFallbackOutput(video, job.output?.bytes ?? null);
+ *     return;
+ *   }
+ *   await downloadAndComplete(video, state, job.output?.bytes ?? null);
+ * }
+ * ```
+ *
+ * ### 6. Finalize by streaming into your own storage
+ *
+ * `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket — no temp file, no buffering:
+ *
+ * ```ts
+ * const { stream } = await sophon.downloadOutputStream(state.jobId);
+ * const outputKey = `encoded/${video.userId}/${video.id}.mp4`;
+ * await fileService.uploadStream(outputKey, stream, 'video/mp4');
+ * await videoRepository.update({ id: video.id }, {
+ *   videoUrl: fileService.cloudfrontUrl(outputKey),
+ *   outputFileSize: sophonOutputBytes,
+ *   status: VideoStatus.Done,
+ * });
+ * ```
+ *
+ * ### 7. Failure taxonomy
+ *
+ * | Error | Handling |
+ * |---|---|
+ * | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. |
+ * | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. |
+ * | Retryable download failure | Increment `downloadRetries`; after 3, fall back. |
+ * | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. |
+ * | Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. |
+ * | Row stuck in encode state > 23h | Fall back (safety net against orphans). |
+ *
+ * ### Minimal config
+ *
+ * ```bash
+ * SOPHON_API_KEY=sk_live_...
+ * SOPHON_BASE_URL=https://api.liqhtworks.xyz
+ * ```
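The failure taxonomy reduces to a single dispatch decision per tick. A minimal TypeScript sketch of that mapping; `classify`, `Failure`, and `enteredAt` are illustrative names patterned on the guide's `sophonState`, not SDK exports:

```typescript
// Illustrative sketch of the failure taxonomy above; shapes are assumptions.
type Stage = "dispatch" | "poll" | "download";

interface Failure {
  stage: Stage;
  httpStatus?: number;   // present for HTTP-level failures
  retryable?: boolean;   // SOPHON's structured flag, when available
}

interface EncodeState {
  dispatchRetries: number;
  downloadRetries: number;
  enteredAt: number;     // epoch ms when the row entered the encode state
}

type Action = "fallback" | "retry" | "wait";

const MAX_RETRIES = 3;
const STUCK_MS = 23 * 60 * 60 * 1000; // 23h stuck-row guard

function classify(f: Failure, s: EncodeState, now: number): Action {
  if (now - s.enteredAt > STUCK_MS) return "fallback"; // orphan safety net
  if (f.retryable === false) return "fallback";        // structured terminal error
  if (f.stage === "poll") {
    if (f.httpStatus === 404) return "fallback";       // job expired or never created
    return "wait";                                     // transient poll error: free retry next tick
  }
  // Upload/createJob and download failures burn their own retry budgets.
  const used = f.stage === "dispatch" ? s.dispatchRetries : s.downloadRetries;
  return used >= MAX_RETRIES ? "fallback" : "retry";   // caller increments on "retry"
}
```

On `"retry"` the caller increments the matching counter before the next attempt, so the fourth consecutive failure in a stage falls back to the source URL.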
   *
   * The version of the OpenAPI document: 1.0.0
   *
@@ -66,7 +66,7 @@ export interface JobsApiInterface {
      */
     createJobRequestOpts(requestParameters: CreateJobOperationRequest): Promise<runtime.RequestOpts>;
     /**
-     * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+     * Creates a queued encoding job from a completed upload source.
+     *
+     * **Picking `profile`:**
+     * - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up.
+     * - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior — same settings regardless of source.
+     * - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation.
+     * - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters).
+     *
+     * See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
      * @summary Submit an encoding job
      * @param {string} idempotencyKey Client-generated UUID or string for exactly-once semantics. Required on all POST endpoints. Replaying the same key with the same request body returns the original response without side effects.
      * @param {CreateJobRequest} createJobRequest
@@ -2,7 +2,7 @@
2
2
  /* eslint-disable */
3
3
  /**
4
4
  * SOPHON Encoding API
5
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
5
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.

---

## Integration example

A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows: user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration.

### 1. One thin adapter, one method per endpoint

Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names:

```ts
@Injectable()
export class SophonService {
  private client() {
    return axios.create({
      baseURL: process.env.SOPHON_BASE_URL,
      headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` },
      timeout: 60_000,
    });
  }

  async createUploadSession(req, idempotencyKey) { /* POST /v1/uploads */ }
  async uploadChunk(uploadId, partNumber, bytes) { /* PUT /v1/uploads/{id}/parts/{n} */ }
  async completeUpload(uploadId, idempotencyKey) { /* POST /v1/uploads/{id}/complete */ }
  async createJob(req, idempotencyKey) { /* POST /v1/jobs */ }
  async getJob(id) { /* GET /v1/jobs/{id} */ }
  async downloadOutputStream(jobId) { /* GET /v1/jobs/{id}/output */ }
}
```

**Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route, but a shared key collides across retries that hit different endpoints. Do this:

```ts
const base = `video:${video.id}:v1`;
await sophon.createUploadSession(req, `${base}:create-upload`);
await sophon.completeUpload(uploadId, `${base}:complete-upload`);
await sophon.createJob(req, `${base}:create-job`);
```

**Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate.

### 2. Model your pipeline as a state machine

Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job:

```ts
interface SophonState {
  jobId: string | null;     // null = not dispatched; string = poll it
  uploadId?: string;        // persist between upload + createJob
  profile?: string;         // sophon-auto | sophon-espresso | ...
  dispatchRetries: number;  // 3 strikes → fallback
  downloadRetries: number;
  lastError?: { stage: string; code: string; message: string; at: string };
}

// In your cron (a 5-second tick is plenty):
if (state.jobId === null) {
  await dispatch(video, state); // upload + createJob
} else {
  await poll(video, state);     // getJob + (if completed) downloadAndComplete
}
```

Persisting `uploadId` between the upload completion and the `createJob` call matters: a crash in that window otherwise re-uploads the file.

### 3. Stream for large sources; buffer for small

User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response:

```ts
async uploadStream(stream, fileName, mimeType, fileSize) {
  const session = await this.createUploadSession({
    file_name: fileName,
    file_size: fileSize,
    mime_type: mimeType,
  });
  let partIndex = 0, buffer = Buffer.alloc(0);
  for await (const chunk of stream) {
    buffer = Buffer.concat([buffer, chunk]);
    while (buffer.length >= session.chunk_size) {
      await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size));
      buffer = buffer.subarray(session.chunk_size);
    }
  }
  if (buffer.length > 0) {
    await this.uploadChunk(session.id, partIndex, buffer);
  }
  return this.completeUpload(session.id);
}
```

Generated outputs from a model run are typically <30 MB; for those, a buffered upload path is simpler and avoids managing a stream lifetime.

### 4. Always keep a fallback URL

Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original": the user's video never disappears because SOPHON is slow or down. This is the single most important invariant:

```ts
await videoRepository.update({ id: video.id }, {
  videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact
  status: VideoStatus.EncodingPending,
  sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 },
  sourceFileSize: sourceBytes,
});
```

On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency.

### 5. Handle the "no-gain" success path

`sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download; the source already lives in your bucket:

```ts
if (job.status === 'completed') {
  if (job.final_artifact === 'original') {
    // Persist outputFileSize = sourceFileSize so your UI shows
    // "no reduction" instead of a missing value.
    await completeWithFallbackOutput(video, job.output?.bytes ?? null);
    return;
  }
  await downloadAndComplete(video, state, job.output?.bytes ?? null);
}
```

### 6. Finalize by streaming into your own storage

`GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket, with no temp file and no buffering:

```ts
const { stream } = await sophon.downloadOutputStream(state.jobId);
const outputKey = `encoded/${video.userId}/${video.id}.mp4`;
await fileService.uploadStream(outputKey, stream, 'video/mp4');
await videoRepository.update({ id: video.id }, {
  videoUrl: fileService.cloudfrontUrl(outputKey),
  outputFileSize: sophonOutputBytes,
  status: VideoStatus.Done,
});
```

### 7. Failure taxonomy

| Error | Handling |
|---|---|
| Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. |
| Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. |
| Retryable download failure | Increment `downloadRetries`; after 3, fall back. |
| `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. |
| Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. |
| Row stuck in encode state > 23h | Fall back (safety net against orphans). |

### Minimal config

```bash
SOPHON_API_KEY=sk_live_...
SOPHON_BASE_URL=https://api.liqhtworks.xyz
```
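The dispatch/poll routing from section 2 and the failure taxonomy from section 7 collapse naturally into one pure transition function, which makes the rules unit-testable without touching the network. A minimal sketch; `EncodeState`, `TickEvent`, and `nextAction` are hypothetical names, not SDK API:

```typescript
// Hypothetical sketch of the routing + failure-taxonomy rules as a pure function.
interface EncodeState {
  dispatchRetries: number;
  downloadRetries: number;
}

type TickEvent =
  | { kind: "not_dispatched" }
  | { kind: "job_running" }
  | { kind: "job_completed"; finalArtifact: "original" | "encoded" }
  | { kind: "job_failed"; retryable: boolean }
  | { kind: "get_job_404" }
  | { kind: "poll_network_error" };

type Action = "dispatch" | "download" | "complete_with_source" | "fallback" | "wait";

const MAX_RETRIES = 3;
const STUCK_MS = 23 * 60 * 60 * 1000; // the 23h stuck-row guard

function nextAction(state: EncodeState, event: TickEvent, enteredAt: number, now: number): Action {
  if (now - enteredAt > STUCK_MS) return "fallback"; // safety net against orphans
  switch (event.kind) {
    case "not_dispatched":
      return state.dispatchRetries >= MAX_RETRIES ? "fallback" : "dispatch";
    case "job_running":
      return "wait"; // still encoding, check again next tick
    case "job_completed":
      // "original" = no-gain path: keep the source, skip the output download
      return event.finalArtifact === "original" ? "complete_with_source" : "download";
    case "job_failed":
      return event.retryable && state.dispatchRetries < MAX_RETRIES ? "dispatch" : "fallback";
    case "get_job_404":
      return "fallback"; // terminal: job expired or never created
    case "poll_network_error":
      return "wait"; // transient: do nothing, don't burn retry budget
  }
}
```

Because the function is pure, the whole taxonomy table can be asserted in a handful of test cases, and the cron tick stays a thin shell around it.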
  *
  * The version of the OpenAPI document: 1.0.0
  *
@@ -92,7 +92,7 @@ export class JobsApi extends runtime.BaseAPI {
  };
  }
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior: same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  async createJobRaw(requestParameters, initOverrides) {
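The `createJobRaw` description above pairs with the per-endpoint idempotency-key convention from the integration example. A minimal sketch of a request builder under those conventions; `buildCreateJob` is a hypothetical helper and `source_upload_id` is an assumed field name (the real request shape is defined by `CreateJobRequest` in this SDK), while `profile` and `output.target_height` come from the documentation above:

```typescript
// Hypothetical helper, not SDK API.
interface CreateJobDraft {
  uploadId: string;       // from a completed upload
  profile: string;        // keep profiles as strings, not an enum
  targetHeight?: number;  // optional aspect-preserving downscale
}

function buildCreateJob(draft: CreateJobDraft, entityId: string): {
  body: Record<string, unknown>;
  idempotencyKey: string;
} {
  const body: Record<string, unknown> = {
    source_upload_id: draft.uploadId, // assumption: check CreateJobRequest for the real name
    profile: draft.profile,
  };
  if (draft.targetHeight !== undefined) {
    body.output = { target_height: draft.targetHeight };
  }
  // Per-endpoint suffix so retries against different endpoints never share a key.
  return { body, idempotencyKey: `video:${entityId}:v1:create-job` };
}
```

Keeping the builder separate from the HTTP call makes the body and key trivially assertable in tests, and leaves the adapter method a one-liner.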
@@ -101,7 +101,7 @@ export class JobsApi extends runtime.BaseAPI {
  return new runtime.JSONApiResponse(response, (jsonValue) => JobResponseFromJSON(jsonValue));
  }
  /**
- * Creates a queued encoding job from a completed upload source. The `profile` field accepts explicit coffee profiles or `sophon-auto`, and `output.target_height` can request aspect-preserving downscale.
+ * Creates a queued encoding job from a completed upload source. **Picking `profile`:** - Use `sophon-auto` unless you have a specific reason not to. It picks per-source settings tuned for consistent output and re-encodes at stricter settings if the first pass doesn't hold up. - Use an explicit coffee profile (`sophon-espresso` / `-cortado` / `-americano`) when you want deterministic encoder behavior: same settings regardless of source. - Use an `-hq` variant when the source is a heavy format (ProRes, DNxHD, high-bitrate camera originals). Larger output files, maximum detail preservation. - Use an `-hq-10bit` variant when the source is 10-bit and you want to preserve that depth end-to-end (ProRes 422/4444, DNxHD, BRAW, camera masters). See `JobProfile` for the full enum. `output.target_height` requests an aspect-preserving downscale (width derived from source, both dims rounded to even). If absent or larger than source, output uses source dimensions.
  * Submit an encoding job
  */
  async createJob(requestParameters, initOverrides) {
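The `output.target_height` sizing rule in the description above reduces to a few lines of arithmetic: derive width from the source aspect ratio, round both dimensions to even, and keep source dimensions when the target is absent or larger than the source. A sketch; `outputDims` is a hypothetical helper, and since the docs don't say which direction "rounded to even" goes, rounding down is an assumption here:

```typescript
// Hypothetical sketch of the target_height sizing rule.
function outputDims(srcW: number, srcH: number, targetHeight?: number): [number, number] {
  // Absent or larger than source → output uses source dimensions.
  if (targetHeight === undefined || targetHeight > srcH) return [srcW, srcH];
  const even = (n: number) => n - (n % 2);                // round down to even (assumed direction)
  const width = Math.round(srcW * (targetHeight / srcH)); // width derived from source aspect
  return [even(width), even(targetHeight)];
}
```

For a 1920×1080 source with `target_height: 720` this yields 1280×720; requesting 1080 on a 720p source is a no-op.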
@@ -1,6 +1,6 @@
  /**
  * SOPHON Encoding API
- * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination.
+ * REST API for submitting, monitoring, and retrieving SOPHON encoding jobs. Authentication is via Bearer API key or session cookie. All POST endpoints require an Idempotency-Key header. List endpoints use opaque cursor-based pagination. --- ## Integration example A real-world walkthrough of how [Daisy](https://daisy.so) wires SOPHON into two production flows: user-uploaded video compression and automatic post-generation encoding after video rendering. Both converge on the same adapter and state machine; only the source differs. The patterns below are the ones that transfer cleanly to any integration. ### 1. One thin adapter, one method per endpoint Keep the HTTP surface boring. Axios (or your stack's equivalent), a per-endpoint idempotency key, and no enum for profile names: ```ts @Injectable() export class SophonService { private client() { return axios.create({ baseURL: process.env.SOPHON_BASE_URL, headers: { Authorization: `Bearer ${process.env.SOPHON_API_KEY}` }, timeout: 60_000, }); } async createUploadSession(req, idempotencyKey) { /_* POST /v1/uploads *_/ } async uploadChunk(uploadId, partNumber, bytes) { /_* PUT /v1/uploads/{id}/parts/{n} *_/ } async completeUpload(uploadId, idempotencyKey) { /_* POST /v1/uploads/{id}/complete *_/ } async createJob(req, idempotencyKey) { /_* POST /v1/jobs *_/ } async getJob(id) { /_* GET /v1/jobs/{id} *_/ } async downloadOutputStream(jobId) { /_* GET /v1/jobs/{id}/output *_/ } } ``` **Suffix idempotency keys per endpoint.** SOPHON scopes dedupe per route, but a shared key collides across retries that hit different endpoints. Do this: ```ts const base = `video:${video.id}:v1`; await sophon.createUploadSession(req, `${base}:create-upload`); await sophon.completeUpload(uploadId, `${base}:complete-upload`); await sophon.createJob(req, `${base}:create-job`); ``` **Profile names are strings, not an enum.** We add and rename profiles (`sophon-espresso` → `sophon-auto` → future variants). A TypeScript union will drift; let the server validate. ### 2. Model your pipeline as a state machine Persist a single `sophonState` JSON column per row. `jobId === null` routes to dispatch; anything else polls that job: ```ts interface SophonState { jobId: string | null; // null = not dispatched; string = poll it uploadId?: string; // persist between upload + createJob profile?: string; // sophon-auto | sophon-espresso | ... dispatchRetries: number; // 3 strikes → fallback downloadRetries: number; lastError?: { stage, code, message, at }; } // In your cron (5-second tick is plenty): if (state.jobId === null) { await dispatch(video, state); // upload + createJob } else { await poll(video, state); // getJob + (if completed) downloadAndComplete } ``` Persisting `uploadId` between the upload completion and the `createJob` call matters: a crash in that window otherwise re-uploads the file. ### 3. Stream for large sources; buffer for small User-uploaded sources can be 1 GB+. Stream S3 → SOPHON in chunks equal to `session.chunk_size` from the createUploadSession response: ```ts async uploadStream(stream, fileName, mimeType, fileSize) { const session = await this.createUploadSession({ file_name: fileName, file_size: fileSize, mime_type: mimeType, }); let partIndex = 0, buffer = Buffer.alloc(0); for await (const chunk of stream) { buffer = Buffer.concat([buffer, chunk]); while (buffer.length >= session.chunk_size) { await this.uploadChunk(session.id, partIndex++, buffer.subarray(0, session.chunk_size)); buffer = buffer.subarray(session.chunk_size); } } if (buffer.length > 0) { await this.uploadChunk(session.id, partIndex, buffer); } return this.completeUpload(session.id); } ``` Generated outputs from a model run are typically <30 MB; for those, a buffered upload path is simpler and avoids managing a stream lifetime. ### 4. Always keep a fallback URL Before a row enters your encoding state, make sure the source is already playable from your CDN. Every SOPHON failure then degrades to "use the original": the user's video never disappears because SOPHON is slow or down. This is the single most important invariant: ```ts await videoRepository.update({ id: video.id }, { videoUrl: sourceCloudfrontUrl, // fallback URL, stays intact status: VideoStatus.EncodingPending, sophonState: { jobId: null, profile, dispatchRetries: 0, downloadRetries: 0 }, sourceFileSize: sourceBytes, }); ``` On any terminal failure (structured `retryable: false`, retry budget exhausted, 404 on getJob, 23h stuck-row guard), flip status back to `Done` with `videoUrl` unchanged. SOPHON is enhancement, not a delivery dependency. ### 5. Handle the "no-gain" success path `sophon-auto` runs a pre-probe and, when it decides the output wouldn't be smaller than the source, returns `final_artifact: "original"` and `saved_percent: 0`. Skip the output download; the source already lives in your bucket: ```ts if (job.status === 'completed') { if (job.final_artifact === 'original') { // Persist outputFileSize = sourceFileSize so your UI shows // "no reduction" instead of a missing value. await completeWithFallbackOutput(video, job.output?.bytes ?? null); return; } await downloadAndComplete(video, state, job.output?.bytes ?? null); } ``` ### 6. Finalize by streaming into your own storage `GET /v1/jobs/{id}/output` returns a 302 to a presigned URL with a 24h TTL. Stream that directly into your bucket: no temp file, no buffering: ```ts const { stream } = await sophon.downloadOutputStream(state.jobId); const outputKey = `encoded/${video.userId}/${video.id}.mp4`; await fileService.uploadStream(outputKey, stream, 'video/mp4'); await videoRepository.update({ id: video.id }, { videoUrl: fileService.cloudfrontUrl(outputKey), outputFileSize: sophonOutputBytes, status: VideoStatus.Done, }); ``` ### 7. Failure taxonomy | Error | Handling | |---|---| | Structured `retryable: false` from SOPHON | Terminal. Fall back to `Done` with source URL. | | Retryable upload / createJob failure | Increment `dispatchRetries`; after 3, fall back. | | Retryable download failure | Increment `downloadRetries`; after 3, fall back. | | `getJob` → HTTP 404 | Terminal. Job expired or never created. Fall back. | | Transient poll network error | Do nothing; next tick retries. Don't burn retry budget. | | Row stuck in encode state > 23h | Fall back (safety net against orphans). | ### Minimal config ```bash SOPHON_API_KEY=sk_live_... SOPHON_BASE_URL=https://api.liqhtworks.xyz ```
  *
  * The version of the OpenAPI document: 1.0.0
  *
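The buffering loop in the streaming-upload example of the integration walkthrough reduces to pure arithmetic: split `fileSize` bytes into parts of `session.chunk_size`, with one short final part. A sketch under that assumption; `partPlan` is a hypothetical helper, and part numbering starts at 0 as in the walkthrough:

```typescript
// Hypothetical sketch: compute the [offset, length) of each upload part
// for a file of fileSize bytes and a server-assigned chunkSize (> 0).
function partPlan(
  fileSize: number,
  chunkSize: number
): Array<{ part: number; offset: number; length: number }> {
  const parts: Array<{ part: number; offset: number; length: number }> = [];
  let offset = 0;
  let part = 0;
  while (offset < fileSize) {
    const length = Math.min(chunkSize, fileSize - offset); // final part may be short
    parts.push({ part: part++, offset, length });
    offset += length;
  }
  return parts;
}
```

Computing the plan up front (rather than inside the stream loop) also makes resumable uploads easier to reason about, since each part's byte range is known before any bytes move.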