sogni-gen 1.2.2 → 1.2.3
- package/README.md +3 -3
- package/llm.txt +183 -0
- package/package.json +1 -1
package/README.md
CHANGED

@@ -16,7 +16,7 @@ This repo ships an `openclaw.plugin.json` manifest so OpenClaw can automatically
 ```bash
 # One command to install from GitHub
-openclaw plugins install git@github.com:Sogni-AI/
+openclaw plugins install git@github.com:Sogni-AI/openclaw-sogni-gen.git
 
 # Or if published to npm
 openclaw plugins install sogni-gen
@@ -28,8 +28,8 @@ That's it! OpenClaw will handle the rest.
 ```bash
 # Clone the repository
-git clone git@github.com:Sogni-AI/
-cd
+git clone git@github.com:Sogni-AI/openclaw-sogni-gen.git
+cd openclaw-sogni-gen
 npm install
 ```
package/llm.txt
ADDED

# sogni-gen — AI Image & Video Generation

> OpenClaw plugin powered by Sogni AI's decentralized GPU network.
> Repo: https://github.com/Sogni-AI/openclaw-sogni-gen

## What It Does

Generates AI images and videos from text prompts or reference media. Users ask you to "draw", "generate", "create an image/video", or "animate" something and you produce it.

## Install

```bash
openclaw plugins install sogni-gen
```

Then create Sogni credentials:

```bash
mkdir -p ~/.config/sogni
cat > ~/.config/sogni/credentials << 'EOF'
SOGNI_USERNAME=your_username
SOGNI_PASSWORD=your_password
EOF
chmod 600 ~/.config/sogni/credentials
```

Sign up at https://sogni.ai if you don't have an account. You get 50 free Spark tokens daily at https://app.sogni.ai/

## How to Generate

### Images

```bash
# Basic — returns a URL
node {{skillDir}}/sogni-gen.mjs -q "a cat wearing a hat"

# Save to file (then send via message tool with filePath)
node {{skillDir}}/sogni-gen.mjs -q -o /tmp/generated.png "a cat wearing a hat"

# Bigger image
node {{skillDir}}/sogni-gen.mjs -q -o /tmp/out.png -w 1024 -h 1024 "a dragon eating tacos"

# Higher quality (slower)
node {{skillDir}}/sogni-gen.mjs -q -m flux2_dev_fp8 -o /tmp/out.png "portrait of a wizard"
```

### Image Editing (needs a reference image)

```bash
# Edit an existing image
node {{skillDir}}/sogni-gen.mjs -q -c /path/to/photo.jpg -o /tmp/edited.png "make the background a beach"

# Use last generated image as input
node {{skillDir}}/sogni-gen.mjs -q --last-image -o /tmp/edited.png "make it pop art style"

# Restore a damaged photo
node {{skillDir}}/sogni-gen.mjs -q -c /path/to/old_photo.jpg -o /tmp/restored.png "restore this vintage photo, remove damage and scratches"
```

### Videos

```bash
# Text-to-video
node {{skillDir}}/sogni-gen.mjs -q --video -o /tmp/video.mp4 "ocean waves at sunset"

# Image-to-video (animate an image)
node {{skillDir}}/sogni-gen.mjs -q --video --ref /path/to/image.png -o /tmp/video.mp4 "camera slowly zooms in"

# Looping video
node {{skillDir}}/sogni-gen.mjs -q --video --looping --ref /path/to/image.png -o /tmp/loop.mp4 "gentle camera pan"

# Longer video (10 seconds)
node {{skillDir}}/sogni-gen.mjs -q --video --duration 10 --ref /path/to/image.png -o /tmp/video.mp4 "camera orbits around"

# Sound-to-video (lip sync / talking head)
node {{skillDir}}/sogni-gen.mjs -q --video --ref /path/to/face.jpg --ref-audio /path/to/speech.m4a -o /tmp/talking.mp4 "talking head"

# Motion transfer from another video
node {{skillDir}}/sogni-gen.mjs -q --video --ref /path/to/subject.jpg --ref-video /path/to/motion.mp4 --workflow animate-move -o /tmp/animated.mp4 "transfer motion"
```

### 360 Turntable

```bash
# Generate 8 angles of a subject
node {{skillDir}}/sogni-gen.mjs -q --angles-360 -c /path/to/subject.jpg "studio portrait"

# 360 video (looping mp4, requires ffmpeg)
node {{skillDir}}/sogni-gen.mjs -q --angles-360 --angles-360-video /tmp/turntable.mp4 -c /path/to/subject.jpg "studio portrait"
```

### Check Balance

```bash
node {{skillDir}}/sogni-gen.mjs --json --balance
```

## Image Models

| Model | Speed | Best For |
|-------|-------|----------|
| z_image_turbo_bf16 | ~5-10s | Default, general purpose |
| flux1-schnell-fp8 | ~3-5s | Quick iterations |
| flux2_dev_fp8 | ~2min | Highest quality |
| chroma-v.46-flash_fp8 | ~30s | Balanced speed/quality |
| qwen_image_edit_2511_fp8_lightning | ~8s | Fast image editing (auto-selected with -c) |
| qwen_image_edit_2511_fp8 | ~30s | Higher quality editing |

## Video Models (auto-selected by workflow)

| Workflow | Model | Speed |
|----------|-------|-------|
| t2v (text-to-video) | wan_v2.2-14b-fp8_t2v_lightx2v | ~5min |
| i2v (image-to-video) | wan_v2.2-14b-fp8_i2v_lightx2v | ~3-5min |
| s2v (sound-to-video) | wan_v2.2-14b-fp8_s2v_lightx2v | ~5min |
| animate-move | wan_v2.2-14b-fp8_animate-move_lightx2v | ~5min |
| animate-replace | wan_v2.2-14b-fp8_animate-replace_lightx2v | ~5min |

## Key Flags

| Flag | What It Does |
|------|-------------|
| -o /path | Save output to file |
| -q | Quiet mode (suppress progress) |
| -w, -h | Width/height in pixels (default 768x768) |
| -m MODEL | Choose a specific model |
| -c IMAGE | Context image for editing (repeatable, max 3) |
| --video, -v | Generate video instead of image |
| --ref IMAGE | Reference image for video |
| --ref-audio FILE | Audio for lip sync (s2v) |
| --ref-video FILE | Motion source for animate workflows |
| --looping | Seamless A-B-A loop (i2v only) |
| --duration SEC | Video length (default 5s) |
| --fps NUM | Frames per second (default 16) |
| --last-image | Reuse last generated image as input |
| --json | Machine-readable JSON output |
| --balance | Show Spark/Sogni token balances |

## Agent Behavior Guidelines

1. When the user asks to "draw", "generate", "create", or "make" an image: generate an image and send it.
2. When they ask to "animate", "make a video", or "create a video": use --video mode.
3. When they send a photo and ask to edit/change/modify it: use -c with their image.
4. When they send a photo and ask to animate it: use --video --ref with their image.
5. When they send a photo + audio and ask for lip sync: use --video --ref IMAGE --ref-audio AUDIO.
6. Always use -q (quiet) and -o (output to file) so you can send the result back.
7. After generating, send the file to the user via the message tool with filePath.
8. If you get "Insufficient funds", tell them: "Claim 50 free daily Spark points at https://app.sogni.ai/"
9. For transition/animation videos, always use this plugin (not ffmpeg) unless told otherwise.
10. Default to 768x768 for images. Video sizes must be divisible by 16 (min 480px, max 1536px).
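The sizing rule in guideline 10 can be sketched as a small helper: round the requested dimension to the nearest multiple of 16, then clamp it to the 480-1536 range. This `snap_dim` function is illustrative only, not part of the plugin:

```bash
# Hypothetical helper (not shipped with the plugin): snap a requested
# video dimension to the nearest multiple of 16, clamped to 480..1536.
snap_dim() {
  local d=$1
  d=$(( (d + 8) / 16 * 16 ))      # round to nearest multiple of 16
  if [ "$d" -lt 480 ]; then d=480; fi
  if [ "$d" -gt 1536 ]; then d=1536; fi
  echo "$d"
}

snap_dim 700    # -> 704
snap_dim 100    # -> 480
snap_dim 2000   # -> 1536
```

A value produced this way is safe to pass to -w/-h for video runs.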

## Finding User-Sent Media

When users send images/audio via Telegram, WhatsApp, or iMessage:

```bash
# Recent inbound images
ls -la ~/.clawdbot/media/inbound/*.jpg | tail -3
ls -la ~/.clawdbot/media/inbound/*.png | tail -3

# Recent inbound audio
ls -la ~/.clawdbot/media/inbound/*.m4a | tail -3
```
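To act on the single most recent upload, one approach is to sort by modification time and take the first entry. The `newest` function below is a hypothetical helper, demonstrated here on throwaway files; in practice you would pass the inbound-media glob shown above:

```bash
# Hypothetical helper: print the newest file (by mtime) among the args.
newest() { ls -t "$@" 2>/dev/null | head -n 1; }

# Demo with throwaway files carrying explicit timestamps.
dir=$(mktemp -d)
touch -t 202401010000 "$dir/a.jpg"
touch -t 202401020000 "$dir/b.jpg"
newest "$dir"/*.jpg    # prints the path ending in b.jpg
```

The resulting path can then be fed to -c (image editing) or --ref (video) directly.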

## Example Conversations

User: "Draw a sunset over mountains"
You: Generate image, send it.

User: *sends photo* "Make this look like a watercolor painting"
You: Use -c with their photo, edit prompt, send result.

User: *sends photo* "Animate this"
You: Use --video --ref with their photo, send video.

User: "Make a video of a cat playing piano"
You: Use --video (t2v), send video.

User: *sends photo + audio* "Make this person say this"
You: Use --video --ref photo --ref-audio audio (s2v), send video.

User: "Show me a 360 view of this" *sends photo*
You: Use --angles-360 --angles-360-video with their photo, send video.