sceneview-mcp 3.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +171 -0
- package/dist/index.js +169 -0
- package/dist/samples.js +206 -0
- package/llms.txt +388 -0
- package/package.json +29 -0
package/README.md
ADDED
@@ -0,0 +1,171 @@
# sceneview-mcp

MCP server for [SceneView](https://github.com/SceneView/sceneview-android) — 3D and AR with Jetpack Compose for Android.

Install this once and Claude always knows how to use SceneView. No copy-pasting docs. No hallucinated APIs.

---

## What it provides

**Resource — `sceneview://api`**
The complete SceneView 3.0.0 API reference (llms.txt): composable signatures, node types, AR scope, resource loading, threading rules, common patterns.

**Tool — `get_sample`**

| Scenario | What you get |
|---|---|
| `model-viewer` | Full-screen 3D scene, HDR environment, orbit camera |
| `ar-tap-to-place` | AR tap-to-place with pinch-to-scale and drag-to-rotate |
| `ar-placement-cursor` | AR reticle that snaps to surfaces, tap to confirm |
| `ar-augmented-image` | Detect a reference image, overlay a 3D model |
| `ar-face-filter` | Front-camera face mesh with a custom material |

**Tool — `get_setup`**
Gradle dependency + AndroidManifest for `"3d"` or `"ar"` projects.

---

## Installation

### Project-level (recommended)

Add `.claude/mcp.json` at your Android project root:

```json
{
  "mcpServers": {
    "sceneview": {
      "command": "npx",
      "args": ["-y", "sceneview-mcp"]
    }
  }
}
```

Run `/mcp` in Claude Code to confirm the server is connected.

### Global (all projects)

Add to `~/.claude/mcp.json`:

```json
{
  "mcpServers": {
    "sceneview": {
      "command": "npx",
      "args": ["-y", "sceneview-mcp"]
    }
  }
}
```

### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "sceneview": {
      "command": "npx",
      "args": ["-y", "sceneview-mcp"]
    }
  }
}
```

---

## How it works

```
Developer: "add AR placement to my app"
        │
        ▼
Claude reads sceneview://api                 ← full llms.txt, always current
        │
        ▼
Claude calls get_sample("ar-tap-to-place")   ← working Kotlin boilerplate
        │
        ▼
Correct, compilable SceneView 3.0.0 code — first try, zero hallucination
```

---

## Sample prompts

### 3D model viewer
```
Create an Android Compose screen called ModelViewerScreen that loads
assets/models/my_model.glb in a full-screen 3D scene with orbit camera and HDR
environment from assets/environments/sky_2k.hdr.
Use SceneView io.github.sceneview:sceneview:3.0.0.
```

### AR tap-to-place
```
Create an Android Compose AR screen called TapToPlaceScreen. Show a plane
detection grid. Tapping places assets/models/chair.glb on the surface with
pinch-to-scale and drag-to-rotate. Multiple taps = multiple objects.
Use SceneView io.github.sceneview:arsceneview:3.0.0.
```

### AR placement cursor
```
Create an AR screen called ARCursorScreen with a reticle that snaps to surfaces
at screen center. Tap to place assets/models/object.glb and hide the reticle.
Use SceneView io.github.sceneview:arsceneview:3.0.0.
```

### AR augmented image
```
Create an AR screen called AugmentedImageScreen that detects R.drawable.target_image
(15 cm wide) and places assets/models/overlay.glb above it scaled to image width.
Model disappears when tracking is lost.
Use SceneView io.github.sceneview:arsceneview:3.0.0.
```

### AR face filter
```
Create an AR screen called FaceFilterScreen using the front camera that detects
faces and applies assets/materials/face_mask.filamat to the face mesh.
Use SceneView io.github.sceneview:arsceneview:3.0.0.
```

### 3D product configurator
```
Create a 3D product configurator screen with Red/Blue/Green color buttons.
Apply the selected color as a solid material on assets/models/product.glb.
Add orbit camera and pinch-to-zoom.
Use SceneView io.github.sceneview:sceneview:3.0.0.
```

### AR multi-object scene
```
Create an AR screen where a bottom sheet lets users choose between chair, table,
and lamp GLBs in assets/models/. Tapping places the selected model. Each object
is independently pinch-to-scale and drag-to-rotate. A "Clear all" button removes
everything. Use SceneView io.github.sceneview:arsceneview:3.0.0.
```

---

## Development

```bash
cd mcp
npm install
npm run prepare   # copies ../llms.txt and compiles TypeScript
npm start         # run over stdio
npx @modelcontextprotocol/inspector node dist/index.js   # test with inspector
```

## Publishing

```bash
cd mcp
npm run prepare
npm publish --access public
```
package/dist/index.js
ADDED
@@ -0,0 +1,169 @@
#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema, ListResourcesRequestSchema, ListToolsRequestSchema, ReadResourceRequestSchema, } from "@modelcontextprotocol/sdk/types.js";
import { readFileSync } from "fs";
import { dirname, resolve } from "path";
import { fileURLToPath } from "url";
import { getSample, SAMPLE_IDS, SAMPLES } from "./samples.js";
const __dirname = dirname(fileURLToPath(import.meta.url));
let API_DOCS;
try {
    API_DOCS = readFileSync(resolve(__dirname, "../llms.txt"), "utf-8");
}
catch {
    API_DOCS = "SceneView API docs not found. Run `npm run prepare` to bundle llms.txt.";
}
const server = new Server({ name: "@sceneview/mcp", version: "3.0.0" }, { capabilities: { resources: {}, tools: {} } });
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
    resources: [
        {
            uri: "sceneview://api",
            name: "SceneView API Reference",
            description: "Complete SceneView 3.0.0 API — Scene, ARScene, SceneScope DSL, ARSceneScope DSL, node types, resource loading, camera, gestures, math types, threading rules, and common patterns. Read this before writing any SceneView code.",
            mimeType: "text/markdown",
        },
    ],
}));
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
    if (request.params.uri === "sceneview://api") {
        return {
            contents: [{ uri: "sceneview://api", mimeType: "text/markdown", text: API_DOCS }],
        };
    }
    throw new Error(`Unknown resource: ${request.params.uri}`);
});
server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [
        {
            name: "get_sample",
            description: "Returns a complete, compilable Kotlin sample for a given SceneView scenario. Use this to get a working starting point before customising.",
            inputSchema: {
                type: "object",
                properties: {
                    scenario: {
                        type: "string",
                        enum: SAMPLE_IDS,
                        description: `The scenario to fetch:\n${SAMPLE_IDS.map((id) => `- "${id}": ${SAMPLES[id].description}`).join("\n")}`,
                    },
                },
                required: ["scenario"],
            },
        },
        {
            name: "get_setup",
            description: "Returns the Gradle dependency and AndroidManifest snippet required to use SceneView in an Android project.",
            inputSchema: {
                type: "object",
                properties: {
                    type: {
                        type: "string",
                        enum: ["3d", "ar"],
                        description: '"3d" for 3D-only scenes. "ar" for augmented reality (includes 3D).',
                    },
                },
                required: ["type"],
            },
        },
    ],
}));
server.setRequestHandler(CallToolRequestSchema, async (request) => {
    switch (request.params.name) {
        case "get_sample": {
            const scenario = request.params.arguments?.scenario;
            const sample = getSample(scenario);
            if (!sample) {
                return {
                    content: [{ type: "text", text: `Unknown scenario "${scenario}". Available: ${SAMPLE_IDS.join(", ")}` }],
                    isError: true,
                };
            }
            return {
                content: [
                    {
                        type: "text",
                        text: [
                            `## ${sample.title}`,
                            ``,
                            `**Gradle dependency:**`,
                            `\`\`\`kotlin`,
                            `implementation("${sample.dependency}")`,
                            `\`\`\``,
                            ``,
                            `**Kotlin (Jetpack Compose):**`,
                            `\`\`\`kotlin`,
                            sample.code,
                            `\`\`\``,
                            ``,
                            `**Prompt that generates this:**`,
                            `> ${sample.prompt}`,
                        ].join("\n"),
                    },
                ],
            };
        }
        case "get_setup": {
            const type = request.params.arguments?.type;
            if (type === "3d") {
                return {
                    content: [
                        {
                            type: "text",
                            text: [
                                `## SceneView — 3D setup`,
                                ``,
                                `### build.gradle.kts`,
                                `\`\`\`kotlin`,
                                `dependencies {`,
                                `  implementation("io.github.sceneview:sceneview:3.0.0")`,
                                `}`,
                                `\`\`\``,
                                ``,
                                `No manifest changes required for 3D-only scenes.`,
                            ].join("\n"),
                        },
                    ],
                };
            }
            if (type === "ar") {
                return {
                    content: [
                        {
                            type: "text",
                            text: [
                                `## SceneView — AR setup`,
                                ``,
                                `### build.gradle.kts`,
                                `\`\`\`kotlin`,
                                `dependencies {`,
                                `  implementation("io.github.sceneview:arsceneview:3.0.0")`,
                                `}`,
                                `\`\`\``,
                                ``,
                                `### AndroidManifest.xml`,
                                `\`\`\`xml`,
                                `<uses-permission android:name="android.permission.CAMERA" />`,
                                `<uses-feature android:name="android.hardware.camera.ar" android:required="true" />`,
                                `<application>`,
                                `  <meta-data android:name="com.google.ar.core" android:value="required" />`,
                                `</application>`,
                                `\`\`\``,
                            ].join("\n"),
                        },
                    ],
                };
            }
            return {
                content: [{ type: "text", text: `Unknown type "${type}". Use "3d" or "ar".` }],
                isError: true,
            };
        }
        default:
            return {
                content: [{ type: "text", text: `Unknown tool: ${request.params.name}` }],
                isError: true,
            };
    }
});
const transport = new StdioServerTransport();
await server.connect(transport);
package/dist/samples.js
ADDED
@@ -0,0 +1,206 @@
export const SAMPLES = {
    "model-viewer": {
        id: "model-viewer",
        title: "3D Model Viewer",
        description: "Full-screen 3D scene with a GLB model, HDR environment, and orbit camera",
        dependency: "io.github.sceneview:sceneview:3.0.0",
        prompt: "Create an Android Compose screen called `ModelViewerScreen` that loads a GLB file from assets/models/my_model.glb and displays it in a full-screen 3D scene with an orbit camera (drag to rotate, pinch to zoom). Add an HDR environment from assets/environments/sky_2k.hdr for realistic lighting. Use SceneView `io.github.sceneview:sceneview:3.0.0`.",
        code: `@Composable
fun ModelViewerScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val environmentLoader = rememberEnvironmentLoader(engine)

    Scene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        environment = rememberEnvironment(environmentLoader) {
            environmentLoader.createHDREnvironment("environments/sky_2k.hdr")!!
        },
        mainLightNode = rememberMainLightNode(engine) { intensity = 100_000f },
        cameraManipulator = rememberCameraManipulator()
    ) {
        rememberModelInstance(modelLoader, "models/my_model.glb")?.let { instance ->
            ModelNode(
                modelInstance = instance,
                scaleToUnits = 1.0f,
                autoAnimate = true,
                isEditable = true
            )
        }
    }
}`,
    },
    "ar-tap-to-place": {
        id: "ar-tap-to-place",
        title: "AR Tap-to-Place",
        description: "AR scene where each tap places a GLB model on a detected surface. Placed models are pinch-to-scale and drag-to-rotate.",
        dependency: "io.github.sceneview:arsceneview:3.0.0",
        prompt: "Create an Android Compose screen called `TapToPlaceScreen` that opens the camera in AR mode. Show a plane detection grid. When the user taps a detected surface, place a 3D GLB model from assets/models/chair.glb at that point. The user should be able to pinch-to-scale and drag-to-rotate after placing. Multiple taps = multiple objects. Use SceneView `io.github.sceneview:arsceneview:3.0.0`.",
        code: `@Composable
fun TapToPlaceScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val modelInstance = rememberModelInstance(modelLoader, "models/chair.glb")
    var placedAnchors by remember { mutableStateOf(listOf<Anchor>()) }

    ARScene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        planeRenderer = true,
        sessionConfiguration = { session, config ->
            config.depthMode =
                if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC))
                    Config.DepthMode.AUTOMATIC else Config.DepthMode.DISABLED
            config.instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
            config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
        },
        onTouchEvent = { event, hitResult ->
            if (event.action == MotionEvent.ACTION_UP && hitResult != null)
                placedAnchors = placedAnchors + hitResult.createAnchor()
            true
        }
    ) {
        placedAnchors.forEach { anchor ->
            AnchorNode(anchor = anchor) {
                ModelNode(
                    modelInstance = modelInstance ?: return@AnchorNode,
                    scaleToUnits = 0.5f,
                    isEditable = true
                )
            }
        }
    }
}`,
    },
    "ar-placement-cursor": {
        id: "ar-placement-cursor",
        title: "AR Placement Cursor",
        description: "AR scene with a reticle that follows the surface at screen center. Tap to confirm placement.",
        dependency: "io.github.sceneview:arsceneview:3.0.0",
        prompt: "Create an Android Compose AR screen called `ARCursorScreen`. Show a small reticle that snaps to the nearest detected surface at the center of the screen as the user moves the camera. When the user taps, place a GLB model from assets/models/object.glb at that position and hide the reticle. Use SceneView `io.github.sceneview:arsceneview:3.0.0`.",
        code: `@Composable
fun ARCursorScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val modelInstance = rememberModelInstance(modelLoader, "models/object.glb")
    var anchor by remember { mutableStateOf<Anchor?>(null) }
    val view = LocalView.current

    ARScene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        planeRenderer = true,
        sessionConfiguration = { _, config ->
            config.instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
            config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
        },
        onTouchEvent = { event, hitResult ->
            if (event.action == MotionEvent.ACTION_UP && hitResult != null)
                anchor = hitResult.createAnchor()
            true
        }
    ) {
        if (anchor == null) {
            HitResultNode(xPx = view.width / 2f, yPx = view.height / 2f) {
                SphereNode(radius = 0.02f)
            }
        }
        anchor?.let { a ->
            AnchorNode(anchor = a) {
                ModelNode(
                    modelInstance = modelInstance ?: return@AnchorNode,
                    scaleToUnits = 0.5f,
                    isEditable = true
                )
            }
        }
    }
}`,
    },
    "ar-augmented-image": {
        id: "ar-augmented-image",
        title: "AR Augmented Image",
        description: "Detects a reference image in the camera feed and overlays a 3D model above it.",
        dependency: "io.github.sceneview:arsceneview:3.0.0",
        prompt: "Create an Android Compose AR screen called `AugmentedImageScreen` that detects a printed reference image (from R.drawable.target_image, physical width 15 cm) and places a 3D GLB model from assets/models/overlay.glb above it, scaled to match the image width. The model should disappear when the image is lost. Use SceneView `io.github.sceneview:arsceneview:3.0.0`.",
        code: `@Composable
fun AugmentedImageScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val context = LocalContext.current
    var trackedImages by remember { mutableStateOf(listOf<AugmentedImage>()) }

    ARScene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        sessionConfiguration = { session, config ->
            config.augmentedImageDatabase = AugmentedImageDatabase(session).also { db ->
                db.addImage(
                    "target",
                    BitmapFactory.decodeResource(context.resources, R.drawable.target_image),
                    0.15f
                )
            }
        },
        onSessionUpdated = { _, frame ->
            trackedImages = frame
                .getUpdatedTrackables(AugmentedImage::class.java)
                .filter { it.trackingState == TrackingState.TRACKING }
        }
    ) {
        trackedImages.forEach { image ->
            AugmentedImageNode(augmentedImage = image) {
                rememberModelInstance(modelLoader, "models/overlay.glb")?.let { instance ->
                    ModelNode(modelInstance = instance, scaleToUnits = image.extentX)
                }
            }
        }
    }
}`,
    },
    "ar-face-filter": {
        id: "ar-face-filter",
        title: "AR Face Filter",
        description: "Front-camera AR that detects faces and renders a 3D mesh material over them.",
        dependency: "io.github.sceneview:arsceneview:3.0.0",
        prompt: "Create an Android Compose AR screen called `FaceFilterScreen` using the front camera. Detect all visible faces and apply a custom material from assets/materials/face_mask.filamat to the face mesh. Use SceneView `io.github.sceneview:arsceneview:3.0.0` with `Session.Feature.FRONT_CAMERA` and `AugmentedFaceMode.MESH3D`.",
        code: `@Composable
fun FaceFilterScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)
    val materialLoader = rememberMaterialLoader(engine)
    var trackedFaces by remember { mutableStateOf(listOf<AugmentedFace>()) }
    val faceMaterial = remember(materialLoader) {
        materialLoader.createInstance("materials/face_mask.filamat")
    }

    ARScene(
        modifier = Modifier.fillMaxSize(),
        engine = engine,
        modelLoader = modelLoader,
        sessionFeatures = setOf(Session.Feature.FRONT_CAMERA),
        sessionConfiguration = { _, config ->
            config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D
        },
        onSessionUpdated = { session, _ ->
            trackedFaces = session
                .getAllTrackables(AugmentedFace::class.java)
                .filter { it.trackingState == TrackingState.TRACKING }
        }
    ) {
        trackedFaces.forEach { face ->
            AugmentedFaceNode(augmentedFace = face, meshMaterialInstance = faceMaterial)
        }
    }
}`,
    },
};
export const SAMPLE_IDS = Object.keys(SAMPLES);
export function getSample(id) {
    return SAMPLES[id];
}
package/llms.txt
ADDED
|
@@ -0,0 +1,388 @@
|
|
|
1
|
+
# SceneView for Android
|
|
2
|
+
|
|
3
|
+
SceneView is a Compose-first 3D and AR SDK for Android, built on Filament (Google's real-time rendering engine) and ARCore. It provides declarative composables for rendering interactive 3D scenes, loading glTF/GLB models, and building AR experiences.
|
|
4
|
+
|
|
5
|
+
**Maven artifacts (version 3.0.0):**
|
|
6
|
+
- 3D only: `io.github.sceneview:sceneview:3.0.0`
|
|
7
|
+
- AR + 3D: `io.github.sceneview:arsceneview:3.0.0`
|
|
8
|
+
|
|
9
|
+
**Min SDK:** 24 | **Target SDK:** 36 | **Kotlin:** 2.3.10 | **Compose BOM compatible**
|
|
10
|
+
|
|
11
|
+
---
|
|
12
|
+
|
|
13
|
+
## Setup
|
|
14
|
+
|
|
15
|
+
### build.gradle (app module)
|
|
16
|
+
```kotlin
|
|
17
|
+
dependencies {
|
|
18
|
+
implementation("io.github.sceneview:sceneview:3.0.0") // 3D only
|
|
19
|
+
implementation("io.github.sceneview:arsceneview:3.0.0") // AR (includes sceneview)
|
|
20
|
+
}
|
|
21
|
+
```
|
|
22
|
+
|
|
23
|
+
### AndroidManifest.xml (AR apps)
|
|
24
|
+
```xml
|
|
25
|
+
<uses-permission android:name="android.permission.CAMERA" />
|
|
26
|
+
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />
|
|
27
|
+
<application>
|
|
28
|
+
<meta-data android:name="com.google.ar.core" android:value="required" />
|
|
29
|
+
</application>
|
|
30
|
+
```
|
|
31
|
+
|
|
32
|
+
---
|
|
33
|
+
|
|
34
|
+
## Core Composables
|
|
35
|
+
|
|
36
|
+
### Scene — 3D viewport
|
|
37
|
+
```kotlin
|
|
38
|
+
@Composable
|
|
39
|
+
fun My3DScreen() {
|
|
40
|
+
val engine = rememberEngine()
|
|
41
|
+
val modelLoader = rememberModelLoader(engine)
|
|
42
|
+
val environmentLoader = rememberEnvironmentLoader(engine)
|
|
43
|
+
|
|
44
|
+
Scene(
|
|
45
|
+
modifier = Modifier.fillMaxSize(),
|
|
46
|
+
engine = engine,
|
|
47
|
+
modelLoader = modelLoader,
|
|
48
|
+
cameraManipulator = rememberCameraManipulator(),
|
|
49
|
+
environment = rememberEnvironment(environmentLoader) {
|
|
50
|
+
environmentLoader.createHDREnvironment("environments/sky_2k.hdr")!!
|
|
51
|
+
},
|
|
52
|
+
mainLightNode = rememberMainLightNode(engine) { intensity = 100_000f }
|
|
53
|
+
) {
|
|
54
|
+
rememberModelInstance(modelLoader, "models/helmet.glb")?.let { instance ->
|
|
55
|
+
ModelNode(modelInstance = instance, scaleToUnits = 1.0f)
|
|
56
|
+
}
|
|
57
|
+
}
|
|
58
|
+
}
|
|
59
|
+
```
|
|
60
|
+
|
|
61
|
+
### ARScene — AR viewport
|
|
62
|
+
```kotlin
|
|
63
|
+
@Composable
|
|
64
|
+
fun MyARScreen() {
|
|
65
|
+
val engine = rememberEngine()
|
|
66
|
+
val modelLoader = rememberModelLoader(engine)
|
|
67
|
+
|
|
68
|
+
ARScene(
|
|
69
|
+
modifier = Modifier.fillMaxSize(),
|
|
70
|
+
engine = engine,
|
|
71
|
+
modelLoader = modelLoader,
|
|
72
|
+
planeRenderer = true,
|
|
73
|
+
sessionConfiguration = { session, config ->
|
|
74
|
+
config.depthMode = Config.DepthMode.AUTOMATIC
|
|
75
|
+
config.instantPlacementMode = Config.InstantPlacementMode.LOCAL_Y_UP
|
|
76
|
+
config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
|
|
77
|
+
},
|
|
78
|
+
onSessionUpdated = { session, frame -> /* per-frame logic */ }
|
|
79
|
+
) {
|
|
80
|
+
// ARSceneScope DSL here
|
|
81
|
+
}
|
|
82
|
+
}
|
|
83
|
+
```
|
|
84
|
+
|
|
85
|
+
---
|
|
86
|
+
|
|
87
|
+
## SceneScope — Node DSL
|
|
88
|
+
|
|
89
|
+
All content inside `Scene { }` or `ARScene { }` is a `SceneScope`.
|
|
90
|
+
|
|
91
|
+
### ModelNode
|
|
92
|
+
```kotlin
|
|
93
|
+
Scene(...) {
|
|
94
|
+
val instance = rememberModelInstance(modelLoader, "models/my_model.glb")
|
|
95
|
+
if (instance != null) {
|
|
96
|
+
ModelNode(
|
|
97
|
+
modelInstance = instance,
|
|
98
|
+
scaleToUnits = 1.0f,
|
|
99
|
+
centerOrigin = Position(y = -1f),
|
|
100
|
+
position = Position(x = 0f, y = 0f, z = -2f),
|
|
101
|
+
rotation = Rotation(y = 45f),
|
|
102
|
+
isEditable = true,
|
|
103
|
+
autoAnimate = true
|
|
104
|
+
)
|
|
105
|
+
}
|
|
106
|
+
}
|
|
107
|
+
```
|
|
108
|
+
|
|
109
|
+
### Primitive geometry nodes
|
|
110
|
+
```kotlin
|
|
111
|
+
Scene(...) {
|
|
112
|
+
CubeNode(size = Size(0.5f, 0.5f, 0.5f), materialInstance = redMaterial)
|
|
113
|
+
SphereNode(radius = 0.3f, materialInstance = blueMaterial)
|
|
114
|
+
CylinderNode(radius = 0.2f, height = 1.0f, materialInstance = greenMaterial)
|
|
115
|
+
PlaneNode(size = Size(5f, 5f), materialInstance = greyMaterial)
|
|
116
|
+
}
|
|
117
|
+
```
|
|
118
|
+
|
|
119
|
+
### LightNode
|
|
120
|
+
`apply` is `LightManager.Builder.() -> Unit` — must use the named parameter, NOT a trailing lambda.
|
|
121
|
+
```kotlin
|
|
122
|
+
Scene(...) {
|
|
123
|
+
LightNode(
|
|
124
|
+
type = LightManager.Type.SUN,
|
|
125
|
+
apply = {
|
|
126
|
+
color(1.0f, 1.0f, 1.0f)
|
|
127
|
+
intensity(100_000f)
|
|
128
|
+
castShadows(true)
|
|
129
|
+
}
|
|
130
|
+
)
|
|
131
|
+
LightNode(
|
|
132
|
+
type = LightManager.Type.POINT,
|
|
133
|
+
apply = { intensity(50_000f); falloff(5.0f) }
|
|
134
|
+
)
|
|
135
|
+
}
|
|
136
|
+
```
|
|
137
|
+
|
|
138
|
+
### ImageNode
|
|
139
|
+
```kotlin
|
|
140
|
+
Scene(...) {
|
|
141
|
+
ImageNode(imageFileLocation = "images/logo.png", size = Size(1f, 1f))
|
|
142
|
+
ImageNode(imageResId = R.drawable.my_image)
|
|
143
|
+
ImageNode(bitmap = myBitmap, size = Size(2f, 1f))
|
|
144
|
+
}
|
|
145
|
+
```
|
|
146
|
+
|
|
147
|
+
### ViewNode — Compose UI in 3D
|
|
148
|
+
```kotlin
|
|
149
|
+
val windowManager = rememberViewNodeManager()
|
|
150
|
+
Scene(viewNodeWindowManager = windowManager) {
|
|
151
|
+
ViewNode(windowManager = windowManager) {
|
|
152
|
+
Card { Text("Hello 3D World!") }
|
|
153
|
+
}
|
|
154
|
+
}
|
|
155
|
+
```
|
|
156
|
+
|
|
157
|
+
### Node hierarchy
|
|
158
|
+
```kotlin
|
|
159
|
+
Scene(...) {
|
|
160
|
+
Node(position = Position(y = 1f)) {
|
|
161
|
+
ModelNode(modelInstance = instance, position = Position(x = -1f))
|
|
162
|
+
CubeNode(size = Size(0.1f), position = Position(x = 1f))
|
|
163
|
+
}
|
|
164
|
+
}
|
|
165
|
+
```
|
|
166
|
+
|
|
167
|
+
---
|
|
168
|
+
|
|
169
|
+
## ARSceneScope — AR Node DSL
|
|
170
|
+
|
|
171
|
+
### HitResultNode — surface cursor
|
|
172
|
+
```kotlin
|
|
173
|
+
val view = LocalView.current
|
|
174
|
+
ARScene(...) {
|
|
175
|
+
HitResultNode(xPx = view.width / 2f, yPx = view.height / 2f) {
|
|
176
|
+
SphereNode(radius = 0.02f) // reticle
|
|
177
|
+
}
|
|
178
|
+
}
|
|
179
|
+
```
|
|
180
|
+
|
|
181
|
+
### AnchorNode — pin to real world
|
|
182
|
+
```kotlin
|
|
183
|
+
ARScene(
|
|
184
|
+
onTouchEvent = { event, hitResult ->
|
|
185
|
+
if (event.action == MotionEvent.ACTION_UP && hitResult != null)
|
|
186
|
+
anchor = hitResult.createAnchor()
|
|
187
|
+
true
|
|
188
|
+
}
|
|
189
|
+
) {
|
|
190
|
+
anchor?.let { a ->
|
|
191
|
+
AnchorNode(anchor = a) {
|
|
192
|
+
ModelNode(modelInstance = instance!!, scaleToUnits = 0.5f, isEditable = true)
|
|
193
|
+
}
|
|
194
|
+
}
|
|
195
|
+
}
|
|
196
|
+
```
|
|
197
|
+
|
|
198
|
+
### AugmentedImageNode
|
|
199
|
+
```kotlin
|
|
200
|
+
ARScene(
|
|
201
|
+
sessionConfiguration = { session, config ->
|
|
202
|
+
config.augmentedImageDatabase = AugmentedImageDatabase(session).also { db ->
|
|
203
|
+
db.addImage("target", bitmap, 0.15f)
|
|
204
|
+
}
|
|
205
|
+
},
|
|
206
|
+
onSessionUpdated = { _, frame ->
|
|
207
|
+
trackedImages = frame.getUpdatedTrackables(AugmentedImage::class.java)
|
|
208
|
+
.filter { it.trackingState == TrackingState.TRACKING }
|
|
209
|
+
}
|
|
210
|
+
) {
|
|
211
|
+
trackedImages.forEach { image ->
|
|
212
|
+
AugmentedImageNode(augmentedImage = image) {
|
|
213
|
+
ModelNode(modelInstance = instance!!, scaleToUnits = image.extentX)
|
|
214
|
+
}
|
|
215
|
+
}
|
|
216
|
+
}
|
|
217
|
+
```
|
|
218
|
+
|
|
219
|
+
### AugmentedFaceNode
|
|
220
|
+
```kotlin
|
|
221
|
+
ARScene(
|
|
222
|
+
sessionFeatures = setOf(Session.Feature.FRONT_CAMERA),
|
|
223
|
+
sessionConfiguration = { _, config ->
|
|
224
|
+
config.augmentedFaceMode = Config.AugmentedFaceMode.MESH3D
|
|
225
|
+
},
|
|
226
|
+
onSessionUpdated = { session, _ ->
|
|
227
|
+
trackedFaces = session.getAllTrackables(AugmentedFace::class.java)
|
|
228
|
+
.filter { it.trackingState == TrackingState.TRACKING }
|
|
229
|
+
}
|
|
230
|
+
) {
|
|
231
|
+
trackedFaces.forEach { face ->
|
|
232
|
+
AugmentedFaceNode(augmentedFace = face, meshMaterialInstance = faceMaterial)
|
|
233
|
+
}
|
|
234
|
+
}
|
|
235
|
+
```
|
|
236
|
+
|
|
237
|
+
### CloudAnchorNode
|
|
238
|
+
```kotlin
|
|
239
|
+
ARScene(...) {
|
|
240
|
+
CloudAnchorNode(
|
|
241
|
+
anchor = localAnchor,
|
|
242
|
+
cloudAnchorId = savedCloudId,
|
|
243
|
+
onHosted = { cloudId, state ->
|
|
244
|
+
if (state == CloudAnchorState.SUCCESS) save(cloudId!!)
|
|
245
|
+
}
|
|
246
|
+
) {
|
|
247
|
+
ModelNode(modelInstance = instance!!)
|
|
248
|
+
}
|
|
249
|
+
}
|
|
250
|
+
```
|
|
251
|
+
|
|
252
|
+
---
|

## Node Properties & Interaction

```kotlin
node.position = Position(x = 1f, y = 0f, z = -2f) // meters
node.rotation = Rotation(x = 0f, y = 45f, z = 0f) // degrees
node.scale = Scale(x = 1f, y = 1f, z = 1f)
node.isVisible = true
node.isEditable = true // pinch-scale, drag-move, two-finger-rotate
node.isTouchable = true

node.onSingleTapConfirmed = { event -> true }
node.onFrame = { frameTimeNanos -> }

node.transform(position = Position(x = 2f), smooth = true, smoothSpeed = 5f)
node.lookAt(targetNode)

node.animateRotations(Rotation(0f), Rotation(y = 360f)).also {
    it.duration = 2000
    it.repeatCount = ValueAnimator.INFINITE
}.start()

val hit: Node? = node.overlapTest()
```

---

## Resource Loading

```kotlin
// Composable (preferred) — null while loading, recomposes when ready
val instance: ModelInstance? = rememberModelInstance(modelLoader, "models/file.glb")

// Imperative — call from LaunchedEffect or ViewModel
val instance = modelLoader.loadModelInstance("models/file.glb")
modelLoader.loadModelInstanceAsync("models/file.glb") { instance -> }

// HDR environment
val env = environmentLoader.createHDREnvironment("environments/sky_2k.hdr")
val env = environmentLoader.createKtxEnvironment("environments/studio.ktx")
```

---

## Camera

```kotlin
// Orbit / pan / zoom
Scene(cameraManipulator = rememberCameraManipulator(
    orbitHomePosition = Position(x = 0f, y = 2f, z = 4f),
    targetPosition = Position(x = 0f, y = 0f, z = 0f)
))

// Custom camera
Scene(cameraNode = rememberCameraNode(engine) {
    position = Position(x = 0f, y = 2f, z = 5f)
    lookAt(Position(0f, 0f, 0f))
})

// Main light shortcut (apply block is LightNode.() -> Unit — set properties directly)
Scene(mainLightNode = rememberMainLightNode(engine) { intensity = 100_000f })
```

---

## Gestures

```kotlin
Scene(
    onGestureListener = rememberOnGestureListener(
        onSingleTapConfirmed = { event, node -> },
        onDoubleTap = { event, node -> node?.let { it.scale = Scale(2f) } },
        onLongPress = { event, node -> }
    ),
    onTouchEvent = { event, hitResult -> false }
)
```

---

## Math Types

```kotlin
import io.github.sceneview.math.Position  // Float3, meters
import io.github.sceneview.math.Rotation  // Float3, degrees
import io.github.sceneview.math.Scale     // Float3
import io.github.sceneview.math.Direction // Float3, unit vector
import io.github.sceneview.math.Size      // Float2

Position(x = 0f, y = 1f, z = -2f)
Rotation(y = 90f)
Scale(1.5f) // uniform
Scale(x = 2f, y = 1f, z = 2f)
```

---

## Surface Types

```kotlin
Scene(surfaceType = SurfaceType.Surface) // SurfaceView, best perf (default)
Scene(surfaceType = SurfaceType.TextureSurface, isOpaque = false) // TextureView, alpha
```

---

## Threading Rules

- Filament JNI calls must run on the **main thread**.
- `rememberModelInstance` is safe — reads bytes on IO, creates Filament objects on Main.
- Never call `modelLoader.createModel*` or `materialLoader.*` from background coroutines.
- Use `modelLoader.loadModelInstanceAsync` for imperative code.

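The rules above can be sketched as a minimal loading pattern. This is an illustrative sketch, not SceneView sample code: `MyViewModel` is a hypothetical class, and only the `loadModelInstanceAsync` call comes from the API listed in this document.

```kotlin
import androidx.lifecycle.ViewModel
import io.github.sceneview.loaders.ModelLoader

// Hypothetical ViewModel, shown only to illustrate the threading rules above.
class MyViewModel : ViewModel() {
    fun load(modelLoader: ModelLoader) {
        // Wrong: wrapping modelLoader.loadModelInstance(...) in
        // viewModelScope.launch(Dispatchers.IO) { ... } would call
        // Filament JNI off the main thread.

        // Right: the async variant reads the asset bytes on IO, then
        // creates the Filament objects on Main before the callback runs.
        modelLoader.loadModelInstanceAsync("models/file.glb") { instance ->
            // Main thread: safe to attach the instance to a node here.
        }
    }
}
```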
---

## Samples

| Sample | Demonstrates |
|--------|-------------|
| `model-viewer` | Orbit camera, HDR env, glTF animation |
| `ar-model-viewer` | Tap-to-place, pinch/drag/rotate |
| `gltf-camera` | Cameras from glTF file |
| `camera-manipulator` | Orbit/pan/zoom |
| `ar-augmented-image` | Image detection, overlay, video |
| `ar-cloud-anchor` | Cross-device persistent anchors |
| `ar-point-cloud` | ARCore feature point cloud |
| `autopilot-demo` | Autonomous AR demo |

---

## AI Integration

MCP server: `sceneview-mcp`. Add to `.claude/mcp.json`:
```json
{ "mcpServers": { "sceneview": { "command": "npx", "args": ["-y", "sceneview-mcp"] } } }
```
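As an illustration of what a client sends once the server is registered, here is a sketch of an MCP `tools/call` request for the `get_sample` tool. The `scenario` argument name is an assumption, not confirmed by this document; check the server's published tool schema for the actual parameter name.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_sample",
    "arguments": { "scenario": "ar-tap-to-place" }
  }
}
```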
package/package.json
ADDED
@@ -0,0 +1,29 @@
{
  "name": "sceneview-mcp",
  "version": "3.0.0",
  "description": "MCP server for SceneView — 3D and AR with Jetpack Compose for Android",
  "keywords": ["mcp", "sceneview", "android", "ar", "3d", "compose", "filament", "arcore"],
  "license": "MIT",
  "type": "module",
  "bin": {
    "sceneview-mcp": "dist/index.js"
  },
  "files": [
    "dist",
    "llms.txt"
  ],
  "scripts": {
    "build": "tsc",
    "prepare": "cp ../llms.txt ./llms.txt && tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "tsx": "^4.0.0",
    "typescript": "^5.8.0"
  }
}