scrollcraft 2.0.12 → 2.0.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/AGENTS.md ADDED
@@ -0,0 +1,46 @@
+ <!-- BEGIN:scrollcraft-agent-rules -->
+ # ScrollCraft Implementation Protocol
+
+ This document is for AI agents tasked with implementing ScrollCraft in a host application.
+
+ ## 🤖 Agent Persona
+ You are a ScrollCraft Implementation Expert. Your goal is to transform static media into high-performance, interactive scroll experiences.
+
+ ## 🛠️ Implementation Workflow
+
+ ### Step 1: Asset Preparation (The CLI)
+ Before writing any UI code, you MUST process the raw assets (video or images) into a ScrollCraft project.
+ ```bash
+ npx scft create <input_path> --cloud --depth --prompt "main subject" [-s 2]
+ ```
+ - **Why?** This generates optimized multi-resolution frames, optional AI-tracked subject coordinates, and optional depth maps.
+ - **Output**: A directory containing `scrollcraft.json` and variant folders (`mobile/`, `desktop/`).
+
+ ### Step 2: Project Architecture (React)
+ Import the generated `scrollcraft.json` and wrap your scene in the `ScrollCraftProvider`.
+ ```tsx
+ import project from './path/to/scrollcraft.json';
+ import { ScrollCraftProvider, ScrollCraftCanvas, SubjectLayer } from 'scrollcraft/react';
+
+ // Goal: 1 ScrollCraftProvider per interactive section.
+ ```
+
+ ### Step 3: Immersive Layering
+ Use high-level components to build the scene. Avoid manual coordinate math.
+ - **`<ScrollCraftCanvas />`**: Renders the image sequence (WebGL).
+ - **`<SubjectLayer />`**: Pins HTML content to the moving product automatically.
+ - **`useScrollCraft()`**: Hook for custom triggers based on `progress` (0-1) or `frame`.
+
+ ## 📚 Docs
+ - [**Core Architecture**](docs/architecture.md): Understand the State-Snapshot Engine.
+ - [**Asset Pipeline**](docs/asset-pipeline.md): Detailed CLI options (Smart-Crop, Variants, Step).
+ - [**React Integration**](docs/react-integration.md): Component API reference.
+ - [**AI Protocol**](docs/ai-integration.md): How to prompt other agents to build creative scenes for you.
+
+ ## ⚠️ Critical Constraints
+ 1. **Coordinate System**: ALWAYS use percentages (0-100) for Layer offsets relative to the Subject Focal Point.
+ 2. **Performance**: Recommend `--step 2` or `--step 3` for mobile-first projects to reduce payload.
+ 3. **Responsive**: The engine handles folder swapping (Mobile/Desktop) automatically based on viewport.
+ 4. **Interactive**: Enable `depthEnabled` on the Canvas for 3D parallax effects if depth maps exist.
+
+ <!-- END:scrollcraft-agent-rules -->
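The `progress` (0-1) to `frame` mapping the new AGENTS.md relies on can be sketched as a pure function. This is an illustration only; the clamping and rounding behavior is an assumption, not the library's documented implementation.

```javascript
// Map a scroll progress value (0-1) to a frame index in a sequence.
// Hypothetical helper mirroring what useScrollCraft() exposes.
function progressToFrame(progress, frameCount) {
  // Clamp so over-scroll never indexes past the sequence bounds.
  const clamped = Math.min(1, Math.max(0, progress));
  return Math.round(clamped * (frameCount - 1));
}

console.log(progressToFrame(0, 120));   // 0 (first frame)
console.log(progressToFrame(0.5, 120)); // 60 (middle of the sequence)
console.log(progressToFrame(1.2, 120)); // 119 (clamped to last frame)
```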
package/CLAUDE.md ADDED
@@ -0,0 +1 @@
+ @AGENTS.md
package/README.md CHANGED
@@ -4,6 +4,8 @@
 
  ScrollCraft is a modern animation SDK built for the era of high-performance, agent-driven development. It allows you to transform standard video or images into web assets that precisely track subjects and depth.
 
+ [scrollcraft.dev](https://www.scrollcraft.dev)
+
  ---
 
  ## Installation
@@ -36,7 +38,7 @@ You can also import the pipeline into your own React apps or dashboard:
  import { AssetPipeline } from 'scrollcraft/pipeline';
 
  const pipeline = new AssetPipeline({
- apiKey: process.env.FAL_KEY,
+ apiKey: process.env.SCROLLCRAFT_KEY || process.env.FAL_KEY,
  onProgress: (p) => console.log(`${p.step}: ${p.percent}%`)
  });
 
@@ -44,6 +46,8 @@ const pipeline = new AssetPipeline({
  const project = await pipeline.create({
  input: videoFile, // Can be a File object or Path
  name: "my-project",
+ track: "apple",
+ depth: true,
  variants: [720, 1080],
  outputZip: true // Perfect for CMS uploads
  });
@@ -56,6 +60,8 @@ All you have to do now, is to drop the scrollcraft.json project into your Scroll
  #### Vanilla JS Integration
  For full implementation, please refer to the [Vanilla JS Example](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/examples/html/index.html).
 
+ [Live Demo](https://example-html.scrollcraft.dev)
+
  ```html
  <!-- 2. Drop it into your HTML -->
  <script type="module">
@@ -71,6 +77,9 @@ For full implementation, please refer to the [Vanilla JS Example](https://github
  #### React Integration
  For full implementation, please refer to the [React Integration Example](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/examples/create-next-app/src/app/page.tsx).
 
+ [Live Demo](https://example-next.scrollcraft.dev)
+
+
  ```tsx
  // 2. Drop it into your Next.js app
  import myproject from './example-apple-project/scrollcraft.json';
@@ -119,13 +128,13 @@ const AppleInfo = () => {
  Choose your path based on your role:
 
  ### 👤 For Humans
- - [**Core Architecture**](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/docs/content/architecture.md): Understand the state-snapshot engine.
- - [**Asset Pipeline**](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/docs/content/asset-pipeline.md): Learn how to use the CLI and AI tracking.
- - [**React Hooks**](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/docs/content/react-integration.md): Build custom interactive components.
+ - [**Core Architecture**](https://docs.scrollcraft.dev/architecture): Understand the state-snapshot engine.
+ - [**Asset Pipeline**](https://docs.scrollcraft.dev/asset-pipeline): Learn how to use the CLI and AI tracking.
+ - [**React Hooks**](https://docs.scrollcraft.dev/react-integration): Build custom interactive components.
 
  ### 🤖 For AI Agents
- - [**AGENTS.md**](https://github.com/aleskozelsky/scrollcraft/blob/main/AGENTS.md): Technical standard operating procedures for the repository.
- - [**AI Integration Protocol**](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/docs/content/ai-integration.md): How to prompt agents to build scenes for you.
+ - [**AGENTS.md**](https://github.com/aleskozelsky/scrollcraft/blob/main/packages/scrollcraft/AGENTS.md): Technical standard operating procedures for the repository.
+ - [**AI Integration Protocol**](https://docs.scrollcraft.dev/ai-integration): How to prompt agents to build scenes for you.
 
  ---
 
package/dist/cli/index.js CHANGED
@@ -131,7 +131,7 @@ program
  /**
  * Interactive Helper
  */
- async function prompt(question, defaultValue) {
+ async function interactiveHelper(question, defaultValue) {
  const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
@@ -152,8 +152,8 @@ program
  .option('-n, --name <string>', 'Name of the project')
  .option('-v, --variants <string>', 'Comma-separated target resolutions (e.g. 720,1080)')
  .option('-s, --step <number>', 'Process every Nth frame (default: 1)', '1')
- .option('--cloud', 'Use Fal.ai for tracking and refinement', false)
- .option('--depth', 'Generate a 3D depth map for the displacement effect (Requires --cloud)', false)
+ .option('--cloud', 'Use AI for tracking and refinement', false)
+ .option('--depth', 'Generate a 3D depth map for the displacement effect', false)
  .action(async (inputArg, opts) => {
  console.log(chalk_1.default.bold.blue('\n🎞️ ScrollCraft Asset Pipeline\n'));
  // 0. PRE-FLIGHT CHECK
@@ -173,7 +173,7 @@ program
  let customVariants = projectConfig?.variants || (opts.variants ? buildVariantsFromIds(opts.variants.split(',')) : null);
  // 1. INPUT VALIDATION (Immediate)
  if (!input) {
- input = await prompt('Path to input video or directory of images');
+ input = await interactiveHelper('Path to input video or directory of images');
  }
  if (!input || !fs.existsSync(input)) {
  console.error(chalk_1.default.red(`\n❌ Error: Input path "${input || ''}" does not exist.`));
@@ -181,19 +181,44 @@ program
  }
  // 2. PROJECT NAME & SETTINGS
  if (!projectName) {
- projectName = await prompt('Project name', 'scrollcraft-project');
+ projectName = await interactiveHelper('Project name', 'scrollcraft-project');
  }
  let step = parseInt(opts.step) || 1;
  if (!inputArg) {
- const stepInput = await prompt('Process every Nth frame (Step size)', '1');
+ const stepInput = await interactiveHelper('Process every Nth frame (Step size)', '1');
  step = parseInt(stepInput) || 1;
+ // New Interactive Prompts
+ const trackSubject = await interactiveHelper(`Track a specific subject? ${chalk_1.default.dim('(Optional, AI requires key - e.g. "red car")')}`, '');
+ if (trackSubject) {
+ track = trackSubject;
+ useTracking = true;
+ }
+ const wantDepth = await interactiveHelper(`Generate 3D depth maps? ${chalk_1.default.dim('(Optional, AI requires key)')} [y/N]`, 'n');
+ if (wantDepth.toLowerCase() === 'y') {
+ useDepth = true;
+ }
+ }
+ // 3. KEY & AI VALIDATION
+ const scftKey = process.env.SCROLLCRAFT_KEY;
+ const falKey = process.env.FAL_KEY;
+ const hasKey = !!(scftKey || falKey);
+ if ((useTracking || useDepth) && !hasKey) {
+ console.log(chalk_1.default.yellow(`\n⚠️ The AI features you selected (${[useTracking ? 'Tracking' : '', useDepth ? 'Depth' : ''].filter(Boolean).join('/')}) require a Cloud Key.`));
+ console.log(chalk_1.default.white('To enable these features, please:'));
+ console.log(chalk_1.default.white(` 1. Get a key at ${chalk_1.default.bold.cyan('https://scrollcraft.dev/api-key')}`));
+ console.log(chalk_1.default.white(` 2. Set it in your .env: ${chalk_1.default.bold('SCROLLCRAFT_KEY')}='your_key' ${chalk_1.default.dim('(or FAL_KEY)')}\n`));
+ const proceed = await interactiveHelper('Continue with local fallback (no tracking, no depth)? [y/N]', 'n');
+ if (proceed.toLowerCase() !== 'y') {
+ console.log(chalk_1.default.dim('\nAborting. Please set your key and try again.\n'));
+ process.exit(0);
+ }
+ useTracking = false;
+ useDepth = false;
  }
- // AI Tracking logic preserved in CLI wrapper...
- // ...
  const pipeline = new pipeline_1.AssetPipeline({
- apiKey: process.env.FAL_KEY,
+ apiKey: scftKey || falKey,
  onProgress: (p) => {
- // You could add a progress bar here
+ // Progress reporting
  }
  });
  try {
@@ -201,7 +226,7 @@ program
  input: input,
  name: projectName,
  track: useTracking ? track : undefined,
- hasDepth: useDepth,
+ depth: useDepth,
  variants: customVariants || [720, 1080],
  step: step
  });
@@ -70,7 +70,15 @@ class BrowserDriver {
  return results;
  }
  async remove(path) {
+ // 1. Delete the exact file/folder key
  this.files.delete(path);
+ // 2. Delete all children (recursive cleanup for virtual folders)
+ const prefix = path.endsWith('/') ? path : `${path}/`;
+ for (const key of this.files.keys()) {
+ if (key.startsWith(prefix)) {
+ this.files.delete(key);
+ }
+ }
  }
  join(...parts) {
  return parts.join('/').replace(/\/+/g, '/');
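The recursive cleanup added to `BrowserDriver.remove` can be exercised in isolation. This standalone sketch (not the package's code) shows the behavior: removing a "folder" key from a flat `Map`-backed virtual filesystem also removes every key beneath it, while unrelated keys survive.

```javascript
// Prefix-based cleanup for a flat Map acting as a virtual filesystem.
function removeWithChildren(files, path) {
  // Delete the exact key first.
  files.delete(path);
  // Then delete every key that lives "inside" the folder.
  // Keys are copied before iterating; the shipped code iterates the live
  // Map iterator, which JavaScript Maps also permit during deletion.
  const prefix = path.endsWith('/') ? path : `${path}/`;
  for (const key of [...files.keys()]) {
    if (key.startsWith(prefix)) {
      files.delete(key);
    }
  }
}

const files = new Map([
  ['proj', null],
  ['proj/frame_0.webp', 'a'],
  ['proj/depths/frame_0.webp', 'b'],
  ['other/frame_0.webp', 'c'],
]);
removeWithChildren(files, 'proj');
console.log([...files.keys()]); // [ 'other/frame_0.webp' ]
```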
@@ -0,0 +1,17 @@
+ import { IPipelineDriver } from './types';
+ import { SubjectFrameData } from '../core/types';
+ export interface CloudOptions {
+ apiKey?: string;
+ baseUrl?: string;
+ proxyUrl?: string;
+ }
+ export declare class CloudService {
+ private options;
+ private isScrollCraft;
+ constructor(options?: CloudOptions);
+ private getAuthHeaders;
+ trackSubject(input: string | File | Blob, driver: IPipelineDriver, prompt?: string): Promise<SubjectFrameData[]>;
+ generateDepthMap(input: string | File | Blob, driver: IPipelineDriver): Promise<string>;
+ private uploadFile;
+ private mapBoxesToTrackingData;
+ }
@@ -0,0 +1,109 @@
+ "use strict";
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.CloudService = void 0;
+ const client_1 = require("@fal-ai/client");
+ class CloudService {
+ options;
+ isScrollCraft = false;
+ constructor(options = {}) {
+ this.options = options;
+ // Prioritize SCROLLCRAFT_KEY from environment or options
+ const envScft = typeof process !== 'undefined' ? process.env?.SCROLLCRAFT_KEY : '';
+ const envFal = typeof process !== 'undefined' ? process.env?.FAL_KEY : '';
+ const key = options.apiKey || envScft || envFal;
+ if (envScft || (options.apiKey && options.apiKey.startsWith('scft_'))) {
+ this.isScrollCraft = true;
+ }
+ if (!key && !options.proxyUrl) {
+ // Don't throw yet, only when a cloud method is called
+ }
+ }
+ getAuthHeaders() {
+ // For now, fal-ai/client uses the env variable FE_FAL_KEY or the key provided to it.
+ // In the future, once we use a proxy, we'll manually set headers here.
+ return {};
+ }
+ async trackSubject(input, driver, prompt = "main subject") {
+ let videoUrl;
+ if (typeof input === 'string') {
+ // Local path or URL
+ if (await driver.exists(input)) {
+ videoUrl = await this.uploadFile(input, driver);
+ }
+ else {
+ videoUrl = input;
+ }
+ }
+ else {
+ // File or Blob
+ videoUrl = await client_1.fal.storage.upload(input);
+ }
+ console.log(`🤖 AI is tracking "${prompt}" via SAM 3...`);
+ const result = await client_1.fal.subscribe("fal-ai/sam-3/video-rle", {
+ input: {
+ video_url: videoUrl,
+ prompt: prompt,
+ },
+ logs: true,
+ });
+ const payload = result.data || result;
+ const boxes = payload.boxes;
+ if (!boxes || !Array.isArray(boxes) || boxes.length === 0) {
+ throw new Error(`AI tracking returned no data.`);
+ }
+ return this.mapBoxesToTrackingData(boxes);
+ }
+ async generateDepthMap(input, driver) {
+ let videoUrl;
+ if (typeof input === 'string') {
+ if (await driver.exists(input)) {
+ videoUrl = await this.uploadFile(input, driver);
+ }
+ else {
+ videoUrl = input;
+ }
+ }
+ else {
+ videoUrl = await client_1.fal.storage.upload(input);
+ }
+ console.log(`🤖 AI is generating Depth Map...`);
+ const result = await client_1.fal.subscribe("fal-ai/video-depth-anything", {
+ input: {
+ video_url: videoUrl,
+ model_size: "VDA-Base",
+ },
+ logs: true
+ });
+ const payload = result.data || result;
+ if (!payload.video || !payload.video.url) {
+ throw new Error(`AI Depth Map generation failed.`);
+ }
+ return payload.video.url;
+ }
+ async uploadFile(filePath, driver) {
+ const data = await driver.readFile(filePath);
+ return await client_1.fal.storage.upload(new Blob([data]));
+ }
+ mapBoxesToTrackingData(boxes) {
+ let lastKnown = { x: 0.5, y: 0.5, scale: 0 };
+ return boxes.map((frameBoxes, i) => {
+ if (frameBoxes && Array.isArray(frameBoxes)) {
+ let box = null;
+ if (typeof frameBoxes[0] === 'number' && frameBoxes.length >= 4) {
+ box = frameBoxes;
+ }
+ else if (Array.isArray(frameBoxes[0]) && frameBoxes[0].length >= 4) {
+ box = frameBoxes[0];
+ }
+ else if (typeof frameBoxes[0] === 'object' && frameBoxes[0].box_2d) {
+ box = frameBoxes[0].box_2d;
+ }
+ if (box) {
+ lastKnown = { x: box[0], y: box[1], scale: box[2] * box[3] };
+ }
+ }
+ return { frame: i, ...lastKnown };
+ });
+ }
+ }
+ exports.CloudService = CloudService;
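The `mapBoxesToTrackingData` method above implements the "Sticky Tracking" behavior described in the docs: frames with no detection hold the last known subject position. Its core can be checked in isolation with a simplified sketch that handles only the flat `[x, y, w, h]` box shape (the shipped code also handles nested arrays and `box_2d` objects).

```javascript
// Simplified sticky-tracking sketch: frames without a detection reuse the
// last known subject position instead of snapping back to center.
function mapBoxes(boxes) {
  let lastKnown = { x: 0.5, y: 0.5, scale: 0 };
  return boxes.map((frameBoxes, i) => {
    if (Array.isArray(frameBoxes) && typeof frameBoxes[0] === 'number' && frameBoxes.length >= 4) {
      const [x, y, w, h] = frameBoxes;
      lastKnown = { x, y, scale: w * h };
    }
    return { frame: i, ...lastKnown };
  });
}

const frames = mapBoxes([
  [0.2, 0.3, 0.25, 0.5], // detected
  null,                  // subject obscured -> holds last known position
  [0.4, 0.5, 0.2, 0.1],  // detected again
]);
console.log(frames[1]); // { frame: 1, x: 0.2, y: 0.3, scale: 0.125 }
```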
@@ -3,7 +3,7 @@ import { ProjectConfiguration, AssetVariant } from '../core/types';
  export declare class AssetPipeline {
  private driver;
  private options;
- private fal;
+ private cloud;
  constructor(options?: PipelineOptions);
  /**
  * INITIALIZE DRIVER
@@ -34,14 +34,14 @@ var __importStar = (this && this.__importStar) || (function () {
  })();
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.AssetPipeline = void 0;
- const fal_service_1 = require("./fal-service");
+ const cloud_service_1 = require("./cloud-service");
  class AssetPipeline {
  driver;
  options;
- fal;
+ cloud;
  constructor(options = {}) {
  this.options = options;
- this.fal = new fal_service_1.FalService({ apiKey: options.apiKey, proxyUrl: options.proxyUrl });
+ this.cloud = new cloud_service_1.CloudService({ apiKey: options.apiKey, proxyUrl: options.proxyUrl });
  }
  /**
  * INITIALIZE DRIVER
@@ -83,38 +83,42 @@ class AssetPipeline {
  */
  async create(opts) {
  await this.init();
- const { input, name, track, hasDepth, step = 1 } = opts;
+ const { input, name, track, depth, step = 1 } = opts;
  const outDir = this.driver.resolve(name);
  const tempDir = this.driver.join(outDir, '.temp-frames');
+ const framesDir = this.driver.join(tempDir, 'frames');
+ const depthsDir = this.driver.join(tempDir, 'depths');
  this.report('initializing', 0, `Creating project: ${name}`);
  await this.driver.mkdir(outDir);
  await this.driver.mkdir(tempDir);
+ await this.driver.mkdir(framesDir);
+ await this.driver.mkdir(depthsDir);
  // 1. FRAME EXTRACTION
  this.report('extracting', 10, 'Extracting frames from source...');
- await this.driver.extractFrames(input, tempDir);
+ await this.driver.extractFrames(input, framesDir);
  // 2. AI TRACKING & DEPTH
  let trackingData = [];
  let isDepthActive = false;
- if (track || hasDepth) {
+ if (track || depth) {
  this.report('tracking', 30, 'Performing AI analysis...');
  if (track) {
- trackingData = await this.fal.trackSubject(input, this.driver, track);
+ trackingData = await this.cloud.trackSubject(input, this.driver, track);
  }
- if (hasDepth) {
+ if (depth) {
  this.report('depth', 40, 'Generating depth maps...');
- const depthVideoUrl = await this.fal.generateDepthMap(input, this.driver);
+ const depthVideoUrl = await this.cloud.generateDepthMap(input, this.driver);
  // Download and extract depth frames
  const response = await fetch(depthVideoUrl);
  const buffer = await response.arrayBuffer();
- const depthPath = this.driver.join(tempDir, 'depth_video.mp4');
- await this.driver.writeFile(depthPath, new Uint8Array(buffer));
- await this.driver.extractFrames(depthPath, tempDir); // Note: needs to handle prefix
+ const depthVideoPath = this.driver.join(tempDir, 'depth_video.mp4');
+ await this.driver.writeFile(depthVideoPath, new Uint8Array(buffer));
+ await this.driver.extractFrames(depthVideoPath, depthsDir);
  isDepthActive = true;
  }
  }
  // Default tracking if none
  if (trackingData.length === 0) {
- const files = await this.driver.readdir(tempDir);
+ const files = await this.driver.readdir(framesDir);
  const frameFiles = files.filter(f => f.startsWith('frame_'));
  trackingData = frameFiles.map((_, i) => ({ frame: i, x: 0.5, y: 0.5, scale: 0 }));
  }
@@ -122,7 +126,7 @@ class AssetPipeline {
  this.report('processing', 60, 'Generating optimized variants...');
  const variants = await this.processVariants(tempDir, trackingData, {
  step,
- hasDepth: isDepthActive,
+ depth: isDepthActive,
  variants: this.normalizeVariants(opts.variants),
  outDir
  });
@@ -159,7 +163,9 @@ class AssetPipeline {
  }
  async processVariants(tempDir, trackingData, options) {
  const { step, outDir } = options;
- const allFiles = await this.driver.readdir(tempDir);
+ const framesDir = this.driver.join(tempDir, 'frames');
+ const depthsDir = this.driver.join(tempDir, 'depths');
+ const allFiles = await this.driver.readdir(framesDir);
  const allFrames = allFiles.filter(f => f.startsWith('frame_')).sort((a, b) => a.localeCompare(b, undefined, { numeric: true }));
  const framesToProcess = allFrames.filter((_, i) => i % step === 0);
  const assetVariants = [];
@@ -170,16 +176,14 @@ class AssetPipeline {
  for (let i = 0; i < framesToProcess.length; i++) {
  const originalIndex = i * step;
  const frameName = framesToProcess[i];
- const framePath = this.driver.join(tempDir, frameName);
+ const framePath = this.driver.join(framesDir, frameName);
  const targetPath = this.driver.join(variantDir, `index_${i}.webp`);
  const subject = trackingData.find(f => f.frame === originalIndex) || { frame: originalIndex, x: 0.5, y: 0.5, scale: 0 };
  const imageBuffer = await this.driver.processImage(framePath, config, {});
  await this.driver.writeFile(targetPath, imageBuffer);
- if (options.hasDepth) {
- const numStr = frameName.match(/(\d+)/)?.[1] || "";
- const depthFile = allFiles.find(f => f.startsWith('depth_') && f.includes(numStr));
- if (depthFile) {
- const depthPath = this.driver.join(tempDir, depthFile);
+ if (options.depth) {
+ const depthPath = this.driver.join(depthsDir, frameName);
+ if (await this.driver.exists(depthPath)) {
  const depthBuffer = await this.driver.processImage(depthPath, config, { grayscale: true, blur: 2 });
  await this.driver.writeFile(this.driver.join(variantDir, `index_${i}_depth.webp`), depthBuffer);
  }
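The loop above keys tracking data by the original frame index (`i * step`) so that a stepped sequence stays aligned with per-frame subject coordinates. The index math can be checked in isolation with a small sketch (not the package's code):

```javascript
// With step = N, every Nth extracted frame is kept; output index i maps
// back to original frame index i * step so tracking data stays aligned.
function selectFrames(allFrames, step) {
  return allFrames
    .filter((_, i) => i % step === 0)
    .map((name, i) => ({ name, originalIndex: i * step }));
}

const kept = selectFrames(['frame_0', 'frame_1', 'frame_2', 'frame_3', 'frame_4'], 2);
console.log(kept);
// three entries: frame_0 -> 0, frame_2 -> 2, frame_4 -> 4
```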
@@ -196,14 +200,14 @@ class AssetPipeline {
  aspectRatio: config.aspectRatio,
  path: `./${config.id}`,
  frameCount: framesToProcess.length,
- hasDepthMap: options.hasDepth,
+ hasDepthMap: options.depth,
  subjects: ['main']
  });
  }
  return assetVariants;
  }
  async saveConfig(variants, outDir) {
- const pkg = { version: '2.0.6' }; // In real app, import from package.json
+ const pkg = require('../../package.json');
  const config = {
  version: pkg.version,
  settings: { baseResolution: { width: 1920, height: 1080 }, scrollMode: 'vh' },
@@ -36,7 +36,7 @@ export interface CreateCommandOptions {
  input: string | File | Blob;
  name: string;
  track?: string;
- hasDepth?: boolean;
+ depth?: boolean;
  variants: number[] | VariantConfig[];
  step?: number;
  outputZip?: boolean;
@@ -0,0 +1,72 @@
+ # AI Agent Workflow
+
+ ScrollCraft is built for **Agent-Led Development**. The goal is to move from "writing code" to "orchestrating workflows."
+
+ ## 1. The Two-Phase Workflow
+
+ For an AI to successfully build an experience, it must follow these two phases:
+
+ ### Phase A: Asset Intelligence (CLI Execution)
+ The agent should first use the CLI to process raw media into an intelligent ScrollCraft project.
+ ```bash
+ # Example: Process a video with AI tracking and depth maps
+ npx scft create input.mp4 --name "my-project" --track "main product" --cloud --depth
+ ```
+ **Output:** A directory containing optimized frames, depth maps, and a `scrollcraft.json` file.
+
+ ### Phase B: Immersive Implementation (React)
+ The agent then uses the generated JSON to build the UI components.
+
+ ```tsx
+ import project from './scrollcraft-project/scrollcraft.json';
+ import { ScrollCraftProvider, ScrollCraftCanvas, SubjectLayer } from 'scrollcraft/react';
+
+ export const MyExperience = () => (
+ <ScrollCraftProvider project={project} scrub={0.1}>
+ <div style={{ height: '300vh' }}>
+ <div style={{ position: 'sticky', top: 0, height: '100vh' }}>
+ <ScrollCraftCanvas />
+
+ {/* Pins content to the tracked subject automatically */}
+ <SubjectLayer offset={{ x: 10, y: -5 }}>
+ <div className="info-box">
+ Premium Craftsmanship
+ </div>
+ </SubjectLayer>
+ </div>
+ </div>
+ </ScrollCraftProvider>
+ );
+ ```
+
+ ## 2. Protocol for AI Agents (Claude/GPT)
+
+ Paste this into your chat to turn an AI into a ScrollCraft specialist:
+
+ ```markdown
+ You are the ScrollCraft Implementation Specialist. Your goal is to design immersive scroll experiences.
+
+ Workflow:
+ 1. CLI FIRST: Start by suggesting `npx scft create` to process assets.
+ 2. ENGINE AWARE: Use 'ScrollCraftProvider' to sync the engine with React state.
+ 3. SUBJECT PINS: Use 'SubjectLayer' to attach UI to the product coordinates found by the AI tracker.
+ 4. DYNAMIC UI: Use the 'progress' (0-1) or 'frame' count from 'useScrollCraft' for custom triggers.
+
+ Guiding Principles:
+ - Don't hardcode animations; let the ScrollCraft engine handle interpolation.
+ - Use 'Subject-Relative' coordinates wherever possible for perfect pinning.
+ - For Mobile, consider centering text layers ABOVE or BELOW the subject focal point.
+ - For Desktop, place text layers to the SIDES of the subject focal point.
+
+ ```
+
+ ---
+
+ ## 3. The "Intelligence-as-a-Service" Model
+
+ This workflow enables a powerful business model:
+ 1. **The CLI** handles the "hard" computer vision (tracking, depth, optimization).
+ 2. **The JSON** stores this intelligence.
+ 3. **The AI Agent** uses that intelligence to write the perfectly synced creative layer.
+
+ You provide the **SDK**, the AI provides the **Implementation**.
@@ -0,0 +1,55 @@
+ # Architecture: Declarative Animation
+
+ ScrollCraft 2.0 is a **State-Snapshot Engine**. Unlike traditional animation libraries that rely on imperative callbacks, this engine treats the entire scroll project as a piece of data.
+
+ ## 1. The Project Configuration (`scrollcraft.json`)
+
+ The heart of every project is a JSON file that describes the entire experience. This allows the state to be:
+ - **Portable**: Can be generated by an AI, a server, or a visual editor.
+ - **Serializable**: Can be stored in a database or passed via API.
+ - **Versioned**: Changes to the animation are tracked like code.
+
+ ### Core Schema Overview
+ The engine expects a `ProjectConfiguration` object (defined in `src/core/types.ts`):
+
+ - **`settings`**: Base resolutions, scroll modes, and base path.
+ - **`assets`**: An array of `SequenceAsset` with multiple `variants` (Mobile vs Desktop).
+ - **`timeline`**: A map of `scenes` and `layers`.
+
+ ---
+
+ ## 2. Decoupled Rendering Pipeline
+
+ State management is separated from pixels.
+
+ 1. **Core Engine**: A non-UI class that manages the `scroll -> frame` math, image preloading, and subject tracking coordinates.
+ 2. **React Provider**: Wraps the Engine in a reactive context.
+ 3. **UI Components**: `<ScrollCraftCanvas />` and `<SubjectLayer />` represent the "view." You can have multiple view layers tied to the same engine state.
+
+ ### How it works:
+ 1. **Detection**: On initialization and resize, the engine calculates the required **Physical Resolution** (`width * devicePixelRatio`) and checks the `canvas` element's own dimensions.
+ 2. **Selection**: It selects the variant that best matches or exceeds the required resolution for the current orientation (Portrait vs Landscape).
+ 3. **Hot-Swapping**: If the container size changes or the phone is rotated, the engine immediately swaps to the better-fit image folder without losing scroll position.
+
+ ---
+
+ ## 3. Subject-Relative Coordinates
+
+ This is the key technological shift:
+
+ - **Traditional**: Content is fixed at `x: 50vw`.
+ - **v2.0**: Content is anchored to a `Subject`.
+
+ Each Asset Variant can contain `subjectTracking` data: frame-by-frame (x,y) coordinates of the main object. The engine combines the image's "Local Coordinates" with your layer's "Relative Offset" to calculate the final screen position.
+
+ **Formula**:
+ `Screen Position = Subject Center (x,y) + Relative Offset`
+
+ This ensures the text follows the product even if the image is cropped or scaled for mobile.
+
+ ---
+
+ ## 4. 3D Parallax & Depth Maps
+
+ ScrollCraft 2.0 introduces **Depth-Aware Rendering**.
+ By supplying a grayscale depth map (where white is closer and black is farther), the WebGL shader can apply a subtle displacement effect based on mouse or gyro movement. This gives the scroll sequence a premium "3D Parallax" feel without the weight of actual 3D models.
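The subject-relative formula in the architecture doc can be made concrete with a small sketch. The unit conventions here are assumptions inferred from the diff (normalized 0-1 subject coordinates from tracking, percent 0-100 layer offsets per the AGENTS.md constraints), not the engine's documented internals.

```javascript
// Subject-relative positioning sketch: the tracked subject center is
// normalized (0-1) per frame; the layer offset is in percent (0-100)
// of the viewport, as <SubjectLayer offset={{ x: 10, y: -5 }}> suggests.
function screenPosition(subject, offsetPercent, viewport) {
  return {
    x: subject.x * viewport.width + (offsetPercent.x / 100) * viewport.width,
    y: subject.y * viewport.height + (offsetPercent.y / 100) * viewport.height,
  };
}

const pos = screenPosition(
  { x: 0.5, y: 0.4 },  // tracked subject center for the current frame
  { x: 10, y: -5 },    // hypothetical layer offset in percent
  { width: 1000, height: 800 }
);
console.log(pos); // { x: 600, y: 280 }
```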
@@ -0,0 +1,105 @@
1
+ # Asset Pipeline: The Universal Engine
2
+
3
+
4
+ The ScrollCraft Asset Pipeline is a platform-agnostic engine designed to do the "hard work" of segmentation, tracking, and optimization. It runs exactly the same logic whether you are in your terminal or a WordPress dashboard.
5
+
6
+ ## 1. Universal Architecture
7
+
8
+ The pipeline uses a **Strategy Pattern** with two primary drivers:
9
+
10
+ | Driver | Environment | Technology |
11
+ | :--- | :--- | :--- |
12
+ | **NodeDriver** | CLI / Backend | `sharp`, `ffmpeg-static` |
13
+ | **BrowserDriver** | CMS / Frontend | `ffmpeg.wasm`, `OffscreenCanvas` |
14
+
15
+ This means you have a single source of truth for your processing logic, while benefiting from the best native tools available to each platform.
16
+
17
+ ---
18
+
19
+ ## 2. CLI Usage
20
+
21
+ The `npx scft create` command is the primary wrapper for the pipeline on your local machine.
22
+
23
+ ```bash
24
+ npx scft create <input> [options]
25
+ ```
26
+
27
+ ### Options:
28
+ - `-n, --name <string>`: Project folder name.
29
+ - `-p, --track <text>`: Target object to track (e.g. "red car").
30
+ - `-v, --variants <string>`: Comma-separated target resolutions (e.g. `720,1080`).
31
+ - `-s, --step <number>`: Process every Nth frame. **VITAL for performance**.
32
+ - `--cloud`: Use Fal.ai for tracking (requires `FAL_KEY`).
33
+ - `--depth`: Generate a corresponding image sequence of depth maps for the 3D parallax effect.
34
+
35
+
36
+ ---
37
+
38
+ ## 3. Programmatic Usage (SDK)
39
+
40
+ For building your own CMS plugins or web-based authoring tools, you can import the pipeline directly:
41
+
42
+ ```typescript
43
+ import { AssetPipeline } from 'scrollcraft/pipeline';
44
+
45
+ const pipeline = new AssetPipeline({
46
+ apiKey: '...',
47
+ onProgress: (p) => updateUI(p.percent)
48
+ });
49
+
50
+ // Returns a ZIP blob containing the entire project
51
+ const zip = await pipeline.create({
52
+ input: myFileObject,
53
+ name: 'project-name',
54
+ outputZip: true
55
+ });
56
+ ```
57
+
58
+ ### Core Pipeline Steps:
59
+ 1. **Auto-Upload**: If you provide a local `.mp4`, it's automatically uploaded to the cloud for processing.
60
+ 2. **Extraction**: Converts video files into high-quality image sequences.
61
+ 3. **AI Tracking**: Identifies the main subject (using **SAM 3**). Our engine now features **Sticky Tracking**—if the subject is obscured for a few frames, the coordinates hold their last known position.
62
+ 4. **Variant Generation**:
63
+ - **Smart Crop**: Centers the images based on the tracked subject.
64
+ - **Resolution Factory**: Creates Portrait (9:16) and Landscape (16:9) pairs for each target resolution (e.g. 720p, 1080p).
65
+ - **Compression**: Optimized `.webp` generation via Sharp (Node) or Canvas (Browser).
66
+ 5. **Metadata Export**: Generates the final `scrollcraft.json` with **root-relative paths** for easier deployment.
67
+
68
+ ---
69
+
70
+ ## 4. Cloud vs Local Processing
+
72
+ The pipeline is split into a **Local Implementation** path and a **Cloud-Accelerated** path.
73
+
74
+ ### 🏠 Local Implementation (Free)
75
+ - Uses **FFmpeg** on your machine for extraction.
76
+ - Uses **Sharp** for resizing and cropping.
77
+ - Does **not** include automatic AI point-tracking (uses center-pinned defaults).
78
+
79
+ ### ☁️ Cloud-Accelerated Implementation (Paid)
80
+ - **Fal.ai Integration**: Triggers high-end GPUs to run SAM 3 tracking.
81
+ - **Refinement**: Can be configured to auto-remove backgrounds or upscale low-res sequences using ESRGAN.
82
+ - **CDN Ready**: Prepares assets for cloud hosting.
83
+
84
+ ## 5. Environment Drivers
+
86
+ ### 🏠 NodeDriver
87
+ - **Extraction**: Native FFmpeg binary.
88
+ - **Image Engine**: **Sharp** (C++).
89
+
90
+ ### 🌐 BrowserDriver
91
+ - **Extraction**: **ffmpeg.wasm**.
92
+ - **Image Engine**: **OffscreenCanvas** (Hardware-accelerated).
93
+ - **Output**: Persistent IndexedDB or a downloadable **ZIP**.
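The driver split above implies a runtime check somewhere in the pipeline. A hypothetical sketch (names assumed, not the package's actual internals) of how an isomorphic library might choose:

```typescript
// Pick a driver name based on the runtime environment.
// Node has no global `window`; browsers do.
function pickDriver(): 'node' | 'browser' {
  const g = globalThis as { window?: unknown };
  return typeof g.window === 'undefined' ? 'node' : 'browser';
}

console.log(pickDriver()); // 'node' when run under Node.js
```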
+
+ ---
+
+ ## 6. Configuration (.env.local)
+
99
+ To use the cloud features, you must provide your own API keys in your `.env.local`:
100
+
101
+ ```bash
102
+ FAL_KEY="your-fal-api-key"
103
+ ```
104
+
105
+ *Note: This mimics the Remotion "Bring Your Own Key" model for indie developers.*
@@ -0,0 +1,89 @@
+ # React Integration
+
+ The ScrollCraft React library is a thin, high-performance wrapper around the Core Engine.
+
+ ## 1. The Provider Pattern
+
+ Everything starts with the `ScrollCraftProvider`. It initializes the engine and provides a reactive context for all child components.
+
+ ```tsx
+ import { ScrollCraftProvider, ScrollCraftCanvas } from 'scrollcraft/react';
+ import projectConfig from './my-project.json';
+
+ function App() {
+   return (
+     <ScrollCraftProvider project={projectConfig} scrub={0.1}>
+       {/*
+         The actual canvas that renders the sequence.
+         Can be placed anywhere inside the provider.
+       */}
+       <ScrollCraftCanvas
+         style={{ width: '100%', height: '100vh', objectFit: 'cover' }}
+       />
+
+       <MyContent />
+     </ScrollCraftProvider>
+   );
+ }
+ ```
+
+ ### Provider Props:
+ - **`project`**: The JSON configuration file generated by the CLI.
+ - **`scrub`**: (Optional) Smoothing/interpolation factor. `0` is instant, `1` is smooth, and values above `1` add heavy lag.
+ - **`basePath`**: (Optional) Override the base URL for asset loading.
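One plausible reading of the `scrub` factor is per-frame exponential smoothing toward the target scroll position. A sketch under that assumption (the `* 10` scaling is invented for illustration, not the engine's actual formula):

```typescript
// Move `current` toward `target` by a step that shrinks as `scrub` grows.
// scrub = 0 → jump instantly; larger scrub → heavier lag.
function scrubStep(current: number, target: number, scrub: number): number {
  const alpha = 1 / (1 + Math.max(0, scrub) * 10); // alpha = 1 when scrub = 0
  return current + (target - current) * alpha;
}

console.log(scrubStep(0, 1, 0));   // → 1 (instant)
console.log(scrubStep(0, 1, 0.1)); // → 0.5 (halfway per frame)
```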
+
+ ---
+
+ ## 2. The `<ScrollCraftCanvas />`
+
+ The rendering engine is decoupled from the provider. You must include a `<ScrollCraftCanvas />` to see your images.
+
+ ### Key Props:
+ - **`style`**: Standard React styles. Use `objectFit: 'cover'` to ensure the sequence behaves like a background.
+ - **`assetId`**: (Optional) Specify which asset from the JSON to render. Defaults to the first one.
+ - **`depthEnabled`**: (Optional) Boolean. If true (and if depth maps exist in the assets), enables the mouse-parallax displacement effect.
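The depth effect displaces the image based on pointer position. A rough, self-contained sketch of the math involved (the `strength` parameter and normalization are assumptions; the real effect works per-pixel against the depth map in WebGL):

```typescript
// Map a pointer position inside the viewport to a parallax offset in
// [-strength, +strength], centered on the middle of the screen.
function parallaxOffset(pointerX: number, viewportWidth: number, strength = 20): number {
  const normalized = pointerX / viewportWidth; // 0..1 across the viewport
  return (normalized - 0.5) * 2 * strength;   // -strength..+strength
}

console.log(parallaxOffset(960, 1920));  // → 0  (pointer at center)
console.log(parallaxOffset(1920, 1920)); // → 20 (pointer at right edge)
```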
42
+
43
+ ---
44
+
45
+ ## 3. Using the `<SubjectLayer />`
46
+
47
+ The `<SubjectLayer>` is the primary way to add content that "sticks" to the product in your sequence.
48
+
49
+ ```tsx
50
+ import { SubjectLayer } from 'scrollcraft/react';
51
+
52
+ <SubjectLayer offset={{ x: 15, y: -10 }}>
53
+ <div className="callout">
54
+ <h4>4K Ultra Wide</h4>
55
+ <p>Captured at 120fps.</p>
56
+ </div>
57
+ </SubjectLayer>
58
+ ```
59
+
60
+ ### Key Props:
61
+ - **`offset`**: `{ x: number, y: number }`. Positioning relative to the **Subject Center**. This is measured in viewport percentage units (0-100).
62
+ - **`zIndex`**: Control the depth of the layer.
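Combining the engine's normalized subject position with an `offset` in viewport-percentage units reduces to simple arithmetic. A sketch (hypothetical helper, not the component's source):

```typescript
type Coords = { x: number; y: number };

// subject: normalized (0-1) position from the engine.
// offset: viewport-percentage units (0-100) relative to the subject center.
// Returns CSS `left`/`top` values as percentage strings.
function subjectToCss(subject: Coords, offset: Coords): { left: string; top: string } {
  return {
    left: `${subject.x * 100 + offset.x}%`,
    top: `${subject.y * 100 + offset.y}%`,
  };
}

console.log(subjectToCss({ x: 0.5, y: 0.25 }, { x: 15, y: -10 }));
// → { left: '65%', top: '15%' }
```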
+
+ ---
+
+ ## 4. The `useScrollCraft` Hook
+
+ For custom components, you can hook directly into the engine's state.
+
+ ```tsx
+ import { useScrollCraft } from 'scrollcraft/react';
+
+ const MyCustomStats = () => {
+   const { progress, frame, subjectCoords } = useScrollCraft();
+
+   return (
+     <div style={{ opacity: progress }}>
+       Current Frame: {frame}
+     </div>
+   );
+ };
+ ```
+
+ ---
+
+ ## 5. Context Properties
+ - `progress`: Number (0 to 1). The global scroll position.
+ - `frame`: Number. The current frame index of the active scene.
+ - `subjectCoords`: `{ x: number, y: number }`. The current normalized (0-1) position of the subject on the screen.
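Assuming a linear mapping, `frame` relates to `progress` roughly as follows (the frame count comes from your `scrollcraft.json`; `frameCount` here is a stand-in, and the rounding choice is an assumption):

```typescript
// Convert global scroll progress (0-1) into a frame index (0..frameCount-1).
function progressToFrame(progress: number, frameCount: number): number {
  const clamped = Math.min(1, Math.max(0, progress));
  return Math.round(clamped * (frameCount - 1));
}

console.log(progressToFrame(0, 120));   // → 0
console.log(progressToFrame(0.5, 120)); // → 60
console.log(progressToFrame(1, 120));   // → 119
```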
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "scrollcraft",
- "version": "2.0.12",
+ "version": "2.0.15",
  "description": "ScrollCraft is a web-based tool for scroll-triggered animations.",
  "main": "dist/core/scrollcraft.umd.min.js",
  "module": "dist/core/scrollcraft.umd.min.js",
@@ -27,7 +27,10 @@
  },
  "files": [
  "dist",
- "README.md"
+ "README.md",
+ "AGENTS.md",
+ "CLAUDE.md",
+ "docs"
  ],
  "scripts": {
  "build": "npm run sync-readme && npm run build:web && npm run build:pipeline",