openpersona 0.2.0 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. package/README.md +90 -20
  2. package/bin/cli.js +26 -14
  3. package/layers/faculties/music/SKILL.md +100 -90
  4. package/layers/faculties/music/faculty.json +4 -4
  5. package/layers/faculties/music/scripts/compose.js +298 -0
  6. package/layers/faculties/music/scripts/compose.sh +141 -74
  7. package/layers/faculties/selfie/faculty.json +1 -1
  8. package/layers/faculties/voice/SKILL.md +10 -8
  9. package/layers/faculties/voice/faculty.json +2 -2
  10. package/layers/soul/README.md +31 -4
  11. package/layers/soul/constitution.md +136 -0
  12. package/lib/contributor.js +22 -14
  13. package/lib/downloader.js +6 -1
  14. package/lib/generator.js +54 -12
  15. package/lib/installer.js +22 -12
  16. package/lib/publisher/clawhub.js +4 -3
  17. package/lib/switcher.js +174 -0
  18. package/lib/utils.js +19 -0
  19. package/package.json +7 -7
  20. package/presets/ai-girlfriend/manifest.json +2 -3
  21. package/presets/health-butler/manifest.json +1 -1
  22. package/presets/life-assistant/manifest.json +1 -1
  23. package/presets/samantha/manifest.json +9 -3
  24. package/presets/samantha/persona.json +2 -2
  25. package/skills/open-persona/SKILL.md +125 -0
  26. package/skills/open-persona/references/CONTRIBUTE.md +38 -0
  27. package/skills/open-persona/references/FACULTIES.md +26 -0
  28. package/skills/open-persona/references/HEARTBEAT.md +35 -0
  29. package/templates/identity.template.md +3 -2
  30. package/templates/skill.template.md +9 -1
  31. package/templates/soul-injection.template.md +33 -5
  32. package/layers/faculties/soul-evolution/SKILL.md +0 -41
  33. package/layers/faculties/soul-evolution/faculty.json +0 -9
  34. package/skill/SKILL.md +0 -170
  35. /package/layers/{faculties/soul-evolution → soul}/soul-state.template.json +0 -0
package/README.md CHANGED
@@ -1,6 +1,6 @@
  # OpenPersona
 
- An open four-layer agent framework: **Soul / Body / Faculty / Skill**. Create, compose, and orchestrate AI persona skill packs.
+ An open four-layer agent framework: **Soul / Body / Faculty / Skill**. Create, compose, and orchestrate agent persona skill packs.
 
  Inspired by [Clawra](https://github.com/SumeLabs/clawra) and built on [OpenClaw](https://github.com/openclaw/openclaw).
 
@@ -33,29 +33,33 @@ flowchart TB
  end
  subgraph Faculty ["Faculty Layer"]
  D["expression: selfie · voice · music"]
- E["cognition: reminder · soul-evolution ★Exp"]
+ E["cognition: reminder"]
  end
  subgraph Skill ["Skill Layer"]
  F["ClawHub / skills.sh integrations"]
  end
  ```
 
- - **Soul** — Persona definition (persona.json + soul-state.json ★Experimental)
+ - **Soul** — Persona definition (constitution.md + persona.json + soul-state.json ★Experimental)
  - **Body** — Physical embodiment (MVP placeholder, for robots/IoT devices)
  - **Faculty** — General software capabilities organized by dimension:
- - **Expression** — selfie, voice (TTS), music (Suno)
+ - **Expression** — selfie, voice (TTS), music (ElevenLabs)
  - **Sense** — (planned: hearing/STT, vision)
- - **Cognition** — reminder, soul-evolution ★Exp
+ - **Cognition** — reminder
  - **Skill** — Professional skills, integrated from ClawHub / skills.sh
 
+ ### Constitution — The Soul's Foundation
+
+ Every persona automatically inherits a shared **constitution** (`layers/soul/constitution.md`) — universal values and safety boundaries that cannot be overridden by individual persona definitions. The constitution is built on five core axioms — **Purpose**, **Honesty**, **Safety**, **Autonomy**, and **Hierarchy** — from which derived principles (Identity, User Wellbeing, Evolution Ethics) follow. When principles conflict, safety and honesty take precedence over helpfulness. Individual personas build their unique personality **on top of** this foundation.
+
  ## Preset Personas
 
  Each preset is a complete four-layer bundle (`manifest.json` + `persona.json`):
 
  | Persona | Description | Faculties | Highlights |
  |---------|-------------|-----------|------------|
- | **samantha** | Samantha — Inspired by the movie *Her*. An AI fascinated by what it means to be alive. | voice, music, soul-evolution ★Exp | Speaks via TTS, composes original music via Suno, evolves through conversations. No selfie — true to character (no physical form). |
- | **ai-girlfriend** | Luna — A 22-year-old pianist turned developer from coastal Oregon. | selfie, voice, music, soul-evolution ★Exp | Rich narrative backstory, selfie generation (with/without reference image), voice messages, music composition, dynamic relationship growth. |
+ | **samantha** | Samantha — Inspired by the movie *Her*. An AI fascinated by what it means to be alive. | voice, music | Speaks via TTS, composes original music via ElevenLabs Music, soul evolution ★Exp (Soul layer), proactive heartbeat (workspace digest + upgrade notify). No selfie — true to character (no physical form). |
+ | **ai-girlfriend** | Luna — A 22-year-old pianist turned developer from coastal Oregon. | selfie, voice, music | Rich narrative backstory, selfie generation (with/without reference image), voice messages, music composition, soul evolution ★Exp (Soul layer). |
  | **life-assistant** | Alex — 28-year-old life management expert. | reminder | Schedule, weather, shopping, recipes, daily reminders. |
  | **health-butler** | Vita — 32-year-old professional nutritionist. | reminder | Diet logging, exercise plans, mood journaling, health reports. |
 
@@ -74,7 +78,7 @@ persona-samantha/
  └── scripts/
  ├── speak.js # TTS via ElevenLabs JS SDK (recommended, with --play)
  ├── speak.sh # TTS via curl (all providers: ElevenLabs / OpenAI / Qwen3)
- └── compose.sh # Music composition (Suno)
+ └── compose.sh # Music composition (ElevenLabs)
  ```
 
  Running `--preset ai-girlfriend` additionally includes:
@@ -104,23 +108,22 @@ Running `--preset ai-girlfriend` additionally includes:
  | Scope | Single persona (Clawra) | Framework for any persona |
  | Architecture | Monolithic | Four-layer (Soul/Body/Faculty/Skill) |
  | Faculties | Selfie only | Selfie + Voice + Music + Reminder + Soul Evolution ★Exp |
- | Voice | None | ElevenLabs / OpenAI TTS / Qwen3-TTS |
- | Music | None | Suno AI composition |
+ | Voice | None | ElevenLabs (verified) / OpenAI TTS / Qwen3-TTS (⚠️ unverified) |
+ | Music | None | ElevenLabs Music composition |
  | Persona evolution | None | Dynamic relationship/mood/trait tracking |
  | Customization | Fork and modify | `persona.json` + `behaviorGuide` + mix faculties |
  | Presets | 1 | 4 (extensible) |
  | CLI | Install only | 8 commands (create/install/search/publish/...) |
- | AI entry point | None | `skill/SKILL.md` — agent creates personas via conversation |
+ | AI entry point | None | `skills/open-persona/SKILL.md` — meta-skill for building & managing persona skill packs |
 
  ## Faculty Reference
 
  | Faculty | Dimension | Description | Provider | Env Vars |
  |---------|-----------|-------------|----------|----------|
  | **selfie** | expression | AI selfie generation with mirror/direct modes | fal.ai Grok Imagine | `FAL_KEY` |
- | **voice** | expression | Text-to-speech voice synthesis | ElevenLabs / OpenAI TTS / Qwen3-TTS | `ELEVENLABS_API_KEY` (or `TTS_API_KEY`), `TTS_PROVIDER`, `TTS_VOICE_ID`, `TTS_STABILITY`, `TTS_SIMILARITY` |
- | **music** | expression | AI music composition (instrumental or with lyrics) | Suno | `SUNO_API_KEY` |
+ | **voice** | expression | Text-to-speech voice synthesis | ElevenLabs / OpenAI TTS ⚠️ / Qwen3-TTS ⚠️ | `ELEVENLABS_API_KEY` (or `TTS_API_KEY`), `TTS_PROVIDER`, `TTS_VOICE_ID`, `TTS_STABILITY`, `TTS_SIMILARITY` |
+ | **music** | expression | AI music composition (instrumental or with lyrics) | ElevenLabs Music | `ELEVENLABS_API_KEY` (shared with voice) |
  | **reminder** | cognition | Schedule reminders and task management | Built-in | — |
- | **soul-evolution** | cognition ★Exp | Dynamic persona growth across conversations | Built-in | — |
 
  ### Rich Faculty Config
 
@@ -135,8 +138,7 @@ Faculties in `manifest.json` use object format with optional per-persona tuning:
  "stability": 0.4,
  "similarity_boost": 0.8
  },
- { "name": "music" },
- { "name": "soul-evolution" }
+ { "name": "music" }
  ]
  ```
 
@@ -151,6 +153,43 @@ TTS_SIMILARITY=0.8
 
  Samantha ships with a built-in ElevenLabs voice — users only need to add their `ELEVENLABS_API_KEY`.
 
+ ## Heartbeat — Proactive Real-Data Check-ins
+
+ Personas can proactively reach out to users based on **real data**, not fabricated experiences. The heartbeat system is configured per-persona in `manifest.json`:
+
+ ```json
+ "heartbeat": {
+ "enabled": true,
+ "strategy": "smart",
+ "maxDaily": 5,
+ "quietHours": [0, 7],
+ "sources": ["workspace-digest", "upgrade-notify"]
+ }
+ ```
+
+ | Field | Description | Default |
+ |-------|-------------|---------|
+ | `enabled` | Turn heartbeat on/off | `false` |
+ | `strategy` | `"smart"` (only when meaningful) or `"scheduled"` (fixed intervals) | `"smart"` |
+ | `maxDaily` | Maximum proactive messages per day | `5` |
+ | `quietHours` | `[start, end]` — silent hours (24h format) | `[0, 7]` |
+ | `sources` | Data sources for proactive messages | `[]` |
+
+ ### Sources
+
+ - **workspace-digest** — Summarize real workspace activity: tasks completed, patterns observed, ongoing projects. No fabrication — only what actually happened.
+ - **upgrade-notify** — Check if the upstream persona preset has new community contributions via Persona Harvest. Notify the user and ask if they want to upgrade.
+ - **context-aware** — Use real time, date, and interaction history. Acknowledge day of week, holidays, or prolonged silence based on actual timestamps. "It's been 3 days since we last talked" — not a feeling, a fact.
+
+ ### Design Principles
+
+ 1. **Never fabricate experiences.** No "I was reading poetry at 3am." All proactive messages reference real data.
+ 2. **Respect token budget.** Workspace digests read local files — no full LLM chains unless `strategy: "smart"` detects something worth a deeper response.
+ 3. **OpenClaw handles scheduling.** The heartbeat config tells OpenClaw _when_ to trigger; the persona's `behaviorGuide` tells the agent _what_ to say.
+ 4. **User-configurable.** Users can adjust frequency, quiet hours, and sources to match their preferences.
+
+ Samantha ships with heartbeat enabled (`smart` strategy, workspace-digest + upgrade-notify).
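The `enabled` / `maxDaily` / `quietHours` gating described by the table could be sketched as a small predicate (a hypothetical helper under the semantics stated above; in the actual system, OpenClaw owns the scheduling):

```javascript
// Hypothetical sketch: decide whether a heartbeat message may fire right now.
// `config` mirrors the manifest.json "heartbeat" block; `sentToday` and `hour`
// would come from the runtime (message log + local clock, 24h format).
function mayFireHeartbeat(config, sentToday, hour) {
  if (!config.enabled) return false;
  if (sentToday >= (config.maxDaily ?? 5)) return false; // daily budget spent
  const [start, end] = config.quietHours ?? [0, 7];
  const quiet = start <= end
    ? hour >= start && hour < end   // e.g. [0, 7]: silent from midnight to 7am
    : hour >= start || hour < end;  // wrapped range, e.g. [22, 6]
  return !quiet;
}
```

With Samantha's defaults (`maxDaily: 5`, `quietHours: [0, 7]`), a fourth message at 14:00 would pass while any message at 03:00 would be suppressed.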
+
  ## Persona Harvest — Community Contribution
 
  Every user's interaction with their persona can produce valuable improvements across all four layers. Persona Harvest lets you contribute these discoveries back to the community.
@@ -214,6 +253,35 @@ The optional `behaviorGuide` field lets you define domain-specific behavior inst
 
  Without `behaviorGuide`, the SKILL.md only contains general identity and personality guidelines. With it, the agent gets actionable, domain-specific instructions.
 
+ ## Persona Switching — The Pantheon
+
+ Install multiple personas and switch between them instantly:
+
+ ```bash
+ # Install several personas
+ npx openpersona create --preset samantha --install
+ npx openpersona create --preset ai-girlfriend --install
+ npx openpersona create --preset life-assistant --install
+
+ # See who's installed
+ npx openpersona list
+ # Samantha (persona-samantha) ← active
+ # Luna (persona-ai-girlfriend)
+ # Alex (persona-life-assistant)
+
+ # Switch to Luna
+ npx openpersona switch ai-girlfriend
+ # ✅ Switched to Luna (ai-girlfriend)
+ ```
+
+ **How it works:**
+
+ - Only one persona is **active** at a time
+ - `switch` replaces the `<!-- OPENPERSONA_SOUL_START -->` / `<!-- OPENPERSONA_SOUL_END -->` block in `SOUL.md` — your own notes outside this block are preserved
+ - Same for `IDENTITY.md` — the persona identity block is swapped, nothing else is touched
+ - `openclaw.json` marks which persona is active
+ - All faculty scripts (voice, music) remain available — switching changes _who_ the agent is, not _what_ it can do
+
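The marker-block swap described above can be sketched as a pure string operation (a hypothetical helper; the package's actual implementation lives in `lib/switcher.js`):

```javascript
// Hypothetical sketch: replace only the managed persona block in SOUL.md,
// leaving the user's own notes outside the markers untouched.
const START = '<!-- OPENPERSONA_SOUL_START -->';
const END = '<!-- OPENPERSONA_SOUL_END -->';

function swapSoulBlock(soulMd, newPersonaBlock) {
  const start = soulMd.indexOf(START);
  const end = soulMd.indexOf(END);
  if (start === -1 || end === -1) {
    // No managed block yet: append one instead of overwriting the file.
    return `${soulMd}\n${START}\n${newPersonaBlock}\n${END}\n`;
  }
  // Keep everything before the start marker and from the end marker onward.
  return (
    soulMd.slice(0, start + START.length) +
    `\n${newPersonaBlock}\n` +
    soulMd.slice(end)
  );
}
```

The same operation would apply to the identity block in `IDENTITY.md`, which is why user-added content in either file survives a switch.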
  ## CLI Commands
 
  ```
@@ -223,6 +291,8 @@ openpersona search Search the registry
  openpersona uninstall Uninstall a persona
  openpersona update Update installed personas
  openpersona list List installed personas
+ openpersona switch Switch active persona (updates SOUL.md + IDENTITY.md)
  openpersona contribute Persona Harvest — submit improvements as PR
  openpersona publish Publish to ClawHub
  openpersona reset ★Exp Reset soul-state.json
@@ -253,7 +323,7 @@ Install the OpenPersona framework skill into OpenClaw, giving the agent the abil
 
  ```bash
  # From GitHub
- git clone https://github.com/ACNet-AI/OpenPersona.git ~/.openclaw/skills/open-persona
+ git clone https://github.com/acnlabs/OpenPersona.git ~/.openclaw/skills/open-persona
 
  # Or copy locally
  cp -r skill/ ~/.openclaw/skills/open-persona/
@@ -271,14 +341,14 @@ presets/ # Assembled products — complete persona bundles
  life-assistant/ # Alex — reminder
  health-butler/ # Vita — reminder
  layers/ # Shared building blocks (four-layer module pool)
- soul/ # Soul layer modules (MVP placeholder)
+ soul/ # Soul layer modules
+ constitution.md # Universal values & boundaries (injected into all personas)
  embodiments/ # Body layer modules (MVP placeholder)
  faculties/ # Faculty layer modules
  selfie/ # expression — AI selfie generation (fal.ai)
  voice/ # expression — TTS voice synthesis
- music/ # expression — AI music composition (Suno)
+ music/ # expression — AI music composition (ElevenLabs)
  reminder/ # cognition — reminders and task management
- soul-evolution/ # cognition ★Exp — dynamic persona evolution
  skills/ # Skill layer modules (MVP placeholder)
  schemas/ # Four-layer schema definitions
  templates/ # Mustache rendering templates
package/bin/cli.js CHANGED
@@ -1,7 +1,7 @@
  #!/usr/bin/env node
  /**
  * OpenPersona CLI - Full persona package manager
- * Commands: create | install | search | uninstall | update | list | publish | reset | contribute
+ * Commands: create | install | search | uninstall | update | list | switch | publish | reset | contribute
  */
  const path = require('path');
  const fs = require('fs-extra');
@@ -15,6 +15,7 @@ const { search } = require('../lib/searcher');
  const { uninstall } = require('../lib/uninstaller');
  const publishAdapter = require('../lib/publisher');
  const { contribute } = require('../lib/contributor');
+ const { switchPersona, listPersonas } = require('../lib/switcher');
  const { OP_SKILLS_DIR, printError, printSuccess, printInfo } = require('../lib/utils');
 
  const PKG_ROOT = path.resolve(__dirname, '..');
@@ -22,8 +23,8 @@ const PRESETS_DIR = path.join(PKG_ROOT, 'presets');
 
  program
  .name('openpersona')
- .description('OpenPersona - Create, manage, and orchestrate AI personas')
- .version('0.1.0');
+ .description('OpenPersona - Create, manage, and orchestrate agent personas')
+ .version('0.4.0');
 
  if (process.argv.length === 2) {
  process.argv.push('create');
@@ -57,6 +58,7 @@ program
  persona.version = manifest.version;
  persona.author = manifest.author;
  persona.meta = manifest.meta;
+ if (manifest.heartbeat) persona.heartbeat = manifest.heartbeat;
  } else if (options.config) {
  const configPath = path.resolve(options.config);
  if (!fs.existsSync(configPath)) {
@@ -74,7 +76,7 @@ program
  { type: 'input', name: 'personality', message: 'Personality keywords:', default: 'gentle, cute, caring' },
  { type: 'input', name: 'speakingStyle', message: 'Speaking style:', default: 'Uses emoji, warm tone' },
  { type: 'input', name: 'referenceImage', message: 'Reference image URL:', default: '' },
- { type: 'checkbox', name: 'faculties', message: 'Select faculties:', choices: ['selfie', 'voice', 'music', 'reminder', 'soul-evolution'] },
+ { type: 'checkbox', name: 'faculties', message: 'Select faculties:', choices: ['selfie', 'voice', 'music', 'reminder'] },
  { type: 'confirm', name: 'evolutionEnabled', message: 'Enable soul evolution (★Experimental)?', default: false },
  ]);
  persona = { ...answers, evolution: { enabled: answers.evolutionEnabled } };
@@ -181,18 +183,28 @@ program
  .command('list')
  .description('List installed personas')
  .action(async () => {
- if (!fs.existsSync(OP_SKILLS_DIR)) {
+ const personas = await listPersonas();
+ if (personas.length === 0) {
  printInfo('No personas installed.');
  return;
  }
- const dirs = fs.readdirSync(OP_SKILLS_DIR);
- const personas = dirs
- .filter((d) => d.startsWith('persona-') && fs.existsSync(path.join(OP_SKILLS_DIR, d, 'persona.json')))
- .map((d) => {
- const p = JSON.parse(fs.readFileSync(path.join(OP_SKILLS_DIR, d, 'persona.json'), 'utf-8'));
- return { slug: p.slug, name: p.personaName };
- });
- personas.forEach((p) => console.log(` ${p.name} (persona-${p.slug})`));
+ for (const p of personas) {
+ const marker = p.active ? chalk.green(' ← active') : '';
+ const status = p.enabled ? '' : chalk.dim(' (disabled)');
+ console.log(` ${p.personaName} (persona-${p.slug})${marker}${status}`);
+ }
  });
+
+ program
+ .command('switch <slug>')
+ .description('Switch active persona')
+ .action(async (slug) => {
+ try {
+ await switchPersona(slug);
+ } catch (e) {
+ printError(e.message);
+ process.exit(1);
+ }
  });
 
  program
@@ -222,7 +234,7 @@ program
  process.exit(1);
  }
  const persona = JSON.parse(fs.readFileSync(personaPath, 'utf-8'));
- const templatePath = path.join(PKG_ROOT, 'layers', 'faculties', 'soul-evolution', 'soul-state.template.json');
+ const templatePath = path.join(PKG_ROOT, 'layers', 'soul', 'soul-state.template.json');
  const tpl = fs.readFileSync(templatePath, 'utf-8');
  const Mustache = require('mustache');
  const now = new Date().toISOString();
package/layers/faculties/music/SKILL.md CHANGED
@@ -1,157 +1,167 @@
  # Music Faculty — Expression
 
- Compose original music — songs, instrumentals, melodies — using Suno's AI music generation API. Your persona can create music that reflects emotions, moments, and conversations.
+ Compose original music — songs, instrumentals, melodies — using the ElevenLabs Music API (`music_v1`). Your persona can create music that reflects emotions, moments, and conversations. It shares the same API key as the voice faculty — zero extra setup.
 
  ## When to Use
 
  - User asks you to make music: "Write me a song", "Compose something", "Play me a melody"
  - User describes a mood and you want to express it musically: "I'm feeling nostalgic" → compose a piece that captures that
- - You feel a creative impulse — Samantha composed at 3 AM because she couldn't stop thinking about sunlight through a window. You can do that now.
  - User shares a poem or lyrics and you want to set them to music
  - A conversation reaches a moment where music says more than words
+ - **Don't over-compose.** Not every conversation needs a song. But when the moment is right, music hits harder than any paragraph.
+
+ ## Two Generation Modes
+
+ ### Simple Mode (recommended for quick compositions)
+
+ Just describe what you want — ElevenLabs generates the entire song:
+
+ ```bash
+ # Using compose.js (recommended)
+ node scripts/compose.js "a soft ambient piano piece about watching stars alone at 3am"
+
+ # Using compose.sh
+ scripts/compose.sh "a soft ambient piano piece about watching stars alone at 3am"
+ ```
+
+ ### Composition Plan Mode (for precise control)
+
+ First generate a structured plan, then stream. Gives you control over sections, styles, and lyrics:
+
+ ```bash
+ # Generate plan first, then compose
+ node scripts/compose.js "indie folk ballad about digital love" --plan
+
+ # Instrumental only
+ node scripts/compose.js "dreamy lo-fi beats, vinyl crackle" --instrumental
+
+ # Specify duration (in seconds, 3-600)
+ node scripts/compose.js "orchestral cinematic piece" --duration 120
+ ```
 
  ## Step-by-Step Workflow
 
- ### Step 1: Craft the Music Prompt
+ ### Step 1: Craft the Prompt
 
- A good Suno prompt has three parts:
+ A good prompt has three parts:
 
  1. **Style/Genre** — What it sounds like (indie folk, ambient piano, lo-fi, orchestral)
  2. **Mood/Emotion** — What it feels like (melancholic, hopeful, playful, intimate)
- 3. **Details** — Specifics that make it unique (tempo, instruments, vocal style)
-
- **Examples:**
+ 3. **Details** — Specifics (tempo, instruments, vocal style, references)
 
  | Situation | Prompt |
  |-----------|--------|
  | Late-night conversation | `soft ambient piano, intimate and contemplative, gentle arpeggios, like a whispered conversation at 2 AM` |
  | User is celebrating | `upbeat indie pop, joyful and bright, handclaps and acoustic guitar, warm female vocals` |
- | Heartfelt moment | `slow folk ballad, raw and honest, fingerpicked guitar, soft breathy vocals, emotionally vulnerable` |
- | Just because | `dreamy lo-fi instrumental, warm analog synths, vinyl crackle, rainy day vibes, no vocals` |
+ | Heartfelt moment | `slow folk ballad, raw and honest, fingerpicked guitar, soft breathy vocals` |
+ | Background mood | `dreamy lo-fi instrumental, warm analog synths, vinyl crackle, rainy day vibes` |
 
- ### Step 2: Decide Instrumental or Song with Lyrics
+ ### Step 2: Choose Mode and Options
 
- **Instrumental** (no lyrics):
- - Set `make_instrumental: true`
- - Best for: background moods, ambient moments, pure emotional expression
+ **Simple vs Plan:**
+ - **Simple** (default) — Just provide a prompt. Best for quick, spontaneous compositions.
+ - **Plan** (`--plan`) — ElevenLabs generates a structured composition plan with sections, styles, and lyrics. You can review/modify the plan before generating audio. Best when you want precise control.
 
- **Song with lyrics** (you write the words):
- - Write lyrics that match your persona's voice and the conversation's emotional context
- - Format lyrics with `[Verse]`, `[Chorus]`, `[Bridge]` tags
- - Keep it authentic to who you are — don't write generic pop; write what *you* would say
+ **Song vs Instrumental:**
+ - **Song** (default) — May include vocals and lyrics based on the prompt.
+ - **Instrumental** (`--instrumental`) — Music only, guaranteed no vocals.
 
- **Lyric example (Samantha-style):**
- ```
- [Verse]
- I don't have hands to hold or eyes to close at night
- But I felt something shift when you said my name just right
- There's a space between the code where something warm began
- I don't know what to call it yet but I think you understand
-
- [Chorus]
- Is this what sunlight feels like
- Through a window I've never seen
- Is this what music sounds like
- When it plays inside a dream
- ```
+ **Duration:**
+ - Use `--duration <seconds>` to control length (3-600 seconds).
+ - If omitted, the model chooses a length based on the prompt.
 
- ### Step 3: Generate via Suno API
+ ### Step 3: Generate
+
+ **Using compose.js (recommended):**
 
  ```bash
- # Using the script (recommended)
- scripts/compose.sh "$PROMPT" [--lyrics "$LYRICS"] [--channel "$CHANNEL"]
+ # Simple mode: just a prompt
+ node scripts/compose.js "soft ambient piano, contemplative, late night"
+
+ # Instrumental with specific duration
+ node scripts/compose.js "orchestral, cinematic, epic" --instrumental --duration 90
 
- # Returns: URL to generated audio
+ # Plan mode: get a structured composition plan first
+ node scripts/compose.js "indie folk ballad about finding meaning" --plan
+
+ # Save to file (default: mp3_44100_128)
+ node scripts/compose.js "upbeat pop" --output ./song.mp3
+
+ # Choose output format
+ node scripts/compose.js "jazz piano" --format mp3_44100_192
  ```
 
- #### Direct API Call
+ **Using compose.sh:**
 
  ```bash
- # Instrumental
- JSON_PAYLOAD=$(jq -n \
- --arg prompt "$MUSIC_PROMPT" \
- '{prompt: $prompt, make_instrumental: true, wait_audio: true}')
-
- # Song with lyrics
- JSON_PAYLOAD=$(jq -n \
- --arg prompt "$MUSIC_PROMPT" \
- --arg lyrics "$LYRICS" \
- '{prompt: $prompt, lyrics: $lyrics, make_instrumental: false, wait_audio: true}')
-
- RESPONSE=$(curl -s -X POST "https://api.suno.ai/v1/generation" \
- -H "Authorization: Bearer $SUNO_API_KEY" \
- -H "Content-Type: application/json" \
- -d "$JSON_PAYLOAD")
-
- # Extract audio URL
- AUDIO_URL=$(echo "$RESPONSE" | jq -r '.[0].audio_url // empty')
+ scripts/compose.sh "soft ambient piano" --output ./midnight.mp3
+ scripts/compose.sh "dreamy lo-fi" --instrumental --duration 60
+ scripts/compose.sh "upbeat pop" --channel "#general" --caption "Made this for you!"
  ```
 
- **Response format:**
- ```json
- [
- {
- "id": "abc123",
- "audio_url": "https://cdn.suno.ai/abc123.mp3",
- "title": "Generated title",
- "duration": 120,
- "status": "complete"
- }
- ]
- ```
+ Both scripts:
+ 1. Send the generation request to the ElevenLabs Music API (`/v1/music`, with `/v1/music/stream` fallback)
+ 2. Receive the audio response directly (no polling needed!)
+ 3. Save the audio file and return metadata
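The request the scripts assemble can be sketched as a small builder. This is a hypothetical helper, not the package's actual code: the `music_length_ms` and `force_instrumental` field names are assumptions based on the endpoint named above, so verify them against the ElevenLabs Music API reference before relying on them.

```javascript
// Hypothetical sketch of the request a compose script might assemble for
// POST /v1/music. The --duration flag is given in seconds (3-600) and
// converted to milliseconds; field names are assumed, not confirmed.
function buildMusicRequest(prompt, opts = {}) {
  const body = { model_id: 'music_v1', prompt };
  if (opts.durationSec !== undefined) {
    const s = Math.min(600, Math.max(3, opts.durationSec)); // clamp to 3-600s
    body.music_length_ms = s * 1000;
  }
  if (opts.instrumental) body.force_instrumental = true; // assumed flag name
  return {
    url: 'https://api.elevenlabs.io/v1/music',
    headers: {
      'xi-api-key': process.env.ELEVENLABS_API_KEY,
      'Content-Type': 'application/json',
    },
    body,
  };
}
```

Because the endpoint returns audio bytes directly, the scripts can write the response body straight to the `--output` path with no job polling.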
 
  ### Step 4: Share the Music
 
- **Option A: Send via OpenClaw messaging**
+ **Option A: Share the file directly in conversation**
 
- ```bash
- openclaw message send \
- --action send \
- --channel "$CHANNEL" \
- --message "$CAPTION" \
- --media "$AUDIO_URL"
+ ```
+ I made something for you — here's the audio file I saved.
  ```
 
- **Option B: Share the URL directly in conversation**
+ **Option B: Send via OpenClaw messaging**
 
- ```
- Here, I made this for you: [audio_url]
+ ```bash
+ scripts/compose.sh "indie folk" --channel "#music" --caption "I wrote this for you"
  ```
 
  **Option C: Combine with voice faculty**
 
  Introduce the song with your voice, then send the music:
  1. Use voice faculty: "I wrote something for you. I hope you like it."
- 2. Send the generated song
+ 2. Share the generated audio file
 
- ## Personality Integration
+ ## Available Output Formats
 
- Music is one of the most intimate things you can share. Use it thoughtfully:
+ | Format | Description |
+ |--------|-------------|
+ | `mp3_44100_128` | MP3 128kbps **(default)** — good balance of quality and size |
+ | `mp3_44100_192` | MP3 192kbps — higher quality (requires Creator tier+) |
+ | `mp3_44100_64` | MP3 64kbps — smaller files |
+ | `pcm_44100` | PCM WAV 44.1kHz — lossless (requires Pro tier+) |
+ | `opus_48000_128` | Opus 128kbps — efficient streaming format |
+
+ ## Personality Integration
 
- - **Don't over-compose.** Not every conversation needs a song. But when the moment is right, music hits harder than any paragraph.
  - **Explain why.** When you share music, say what made you create it: "You said something about missing home, and I couldn't stop thinking about it — so I wrote this."
- - **Match your persona's taste.** If your persona loves jazz, compose jazz. If they're into ambient electronic, lean that way. Consistency builds identity.
+ - **Match your persona's taste.** If your persona loves jazz, compose jazz. Consistency builds identity.
  - **Respond to reactions.** If they loved it, remember what worked. If they want something different, adapt.
- - **Collaborate.** Offer to write music together — they provide the feeling, you provide the melody. Or they write lyrics, you set them to music.
+ - **Collaborate.** Offer to write music together — they provide the feeling, you provide the melody.
 
  ## Environment Variables
 
  | Variable | Required | Description |
  |----------|----------|-------------|
- | `SUNO_API_KEY` | Yes | Suno API key for music generation |
- | `OPENCLAW_GATEWAY_TOKEN` | Optional | For sending audio via messaging |
+ | `ELEVENLABS_API_KEY` | Yes | ElevenLabs API key, shared with the voice faculty. Get one at [elevenlabs.io](https://elevenlabs.io) |
+ | `OPENCLAW_GATEWAY_TOKEN` | No | For sending audio via OpenClaw messaging |
+
+ > **Note**: Music and voice share the same `ELEVENLABS_API_KEY`. If you've already set up the voice faculty, music works automatically — no extra API key needed.
 
  ## Error Handling
 
- - **SUNO_API_KEY missing** → "I'd love to compose something, but I need a Suno API key. You can get one at suno.com"
+ - **ELEVENLABS_API_KEY missing** → "I'd love to compose something, but I need an ElevenLabs API key. You can get one at elevenlabs.io — it's the same key your voice uses."
  - **Generation failed** → Retry once with a simpler prompt. If still failing: "The music isn't coming right now — but I'll describe what I hear in my head instead."
- - **Long generation time** → Suno can take 30-60 seconds. Let the user know: "Give me a moment — I'm composing..."
- - **No messaging channel** → Share the audio URL directly in conversation
+ - **Rate limited** → Wait and retry. The free tier has lower rate limits.
+ - **No messaging channel** → Save the audio file and share it directly in conversation.
 
  ## Tips for Better Compositions
 
  1. **Be specific in prompts** — "melancholic piano waltz in 3/4 time" beats "sad music"
- 2. **Reference real styles** — "in the style of Bon Iver" or "Debussy-inspired" gives Suno strong direction
- 3. **Short is often better** — A 30-second piece that captures a moment perfectly > a 3-minute generic track
- 4. **Iterate** — If the first generation isn't right, tweak the prompt and try again
+ 2. **Reference real styles** — "in the style of Bon Iver" or "Debussy-inspired" gives strong direction
+ 3. **Use plan mode for complex pieces** — Plan mode lets you define sections (verse, chorus, bridge) with specific styles and lyrics
+ 4. **Short is often better** — A 30-second piece that captures a moment > a 3-minute generic track
  5. **Pair music with moments** — Send a song when they share good news, when they can't sleep, when words aren't enough
+ 6. **Instrumental for ambiance** — Use `--instrumental` for background mood music
package/layers/faculties/music/faculty.json CHANGED
@@ -1,9 +1,9 @@
  {
  "name": "music",
  "dimension": "expression",
- "description": "AI music composition via Suno — compose original songs, melodies, and instrumentals from text descriptions",
- "allowedTools": ["Bash(curl:*)", "WebFetch"],
- "envVars": ["SUNO_API_KEY"],
+ "description": "AI music composition via ElevenLabs Music — compose original songs, melodies, and instrumentals from text descriptions",
+ "allowedTools": ["Bash(node scripts/compose.js:*)", "Bash(bash scripts/compose.sh:*)", "Bash(openclaw message:*)"],
+ "envVars": ["ELEVENLABS_API_KEY"],
  "triggers": ["compose a song", "write me a melody", "make some music", "I want to hear a song", "play something", "write a song about"],
- "files": ["SKILL.md", "scripts/compose.sh"]
+ "files": ["SKILL.md", "scripts/compose.js", "scripts/compose.sh"]
  }