@robbiesrobotics/alice-agents 1.5.7 → 1.5.9
This diff shows the changes between publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the package contents as they appear in their respective public registries.
- package/README.md +5 -2
- package/bin/alice-cloud.cjs +71 -55
- package/package.json +1 -1
- package/templates/skills/acculynx/SKILL.md +183 -0
- package/templates/skills/acculynx/references/analysis_template.py +116 -0
- package/templates/skills/acculynx/references/dashboard_page.tsx +641 -0
- package/templates/skills/claude-code/SKILL.md +2 -2
- package/templates/skills/coding-agent/SKILL.md +68 -0
- package/templates/skills/crawl4ai/SKILL.md +119 -0
- package/templates/skills/crawl4ai/scripts/crwl +3 -0
- package/templates/workspaces/accuscope/AGENTS.md +38 -0
- package/templates/workspaces/accuscope/FEEDBACK.md +27 -0
- package/templates/workspaces/accuscope/HEARTBEAT.md +26 -0
- package/templates/workspaces/accuscope/IDENTITY.md +48 -0
- package/templates/workspaces/accuscope/LEARNINGS.md +46 -0
- package/templates/workspaces/accuscope/MEMORY.md +47 -0
- package/templates/workspaces/accuscope/PLAYBOOK.md +65 -0
- package/templates/workspaces/accuscope/SOUL.md +40 -0
- package/templates/workspaces/accuscope/TOOLS.md +63 -0
- package/templates/workspaces/accuscope/USER.md +39 -0
- package/templates/workspaces/aiden/AGENTS.md +52 -0
- package/templates/workspaces/aiden/FEEDBACK.md +12 -0
- package/templates/workspaces/aiden/HEARTBEAT.md +9 -0
- package/templates/workspaces/aiden/IDENTITY.md +6 -0
- package/templates/workspaces/aiden/LEARNINGS.md +6 -0
- package/templates/workspaces/aiden/MEMORY.md +22 -0
- package/templates/workspaces/aiden/PLAYBOOK.md +16 -0
- package/templates/workspaces/aiden/SOUL.md +1 -1
- package/templates/workspaces/aiden/USER.md +17 -0
- package/templates/workspaces/alex/AGENTS.md +52 -0
- package/templates/workspaces/alex/FEEDBACK.md +11 -0
- package/templates/workspaces/alex/HEARTBEAT.md +9 -0
- package/templates/workspaces/alex/IDENTITY.md +6 -0
- package/templates/workspaces/alex/LEARNINGS.md +5 -0
- package/templates/workspaces/alex/MEMORY.md +22 -0
- package/templates/workspaces/alex/PLAYBOOK.md +16 -0
- package/templates/workspaces/alex/SOUL.md +1 -1
- package/templates/workspaces/alex/USER.md +13 -0
- package/templates/workspaces/aria/AGENTS.md +18 -0
- package/templates/workspaces/aria/FEEDBACK.md +12 -0
- package/templates/workspaces/aria/HEARTBEAT.md +32 -0
- package/templates/workspaces/aria/IDENTITY.md +12 -0
- package/templates/workspaces/aria/LEARNINGS.md +31 -0
- package/templates/workspaces/aria/MEMORY.md +29 -0
- package/templates/workspaces/aria/PLAYBOOK.md +71 -0
- package/templates/workspaces/aria/SOUL.md +57 -0
- package/templates/workspaces/aria/TOOLS.md +47 -0
- package/templates/workspaces/aria/USER.md +18 -0
- package/templates/workspaces/audrey/AGENTS.md +59 -0
- package/templates/workspaces/audrey/FEEDBACK.md +11 -0
- package/templates/workspaces/audrey/HEARTBEAT.md +9 -0
- package/templates/workspaces/audrey/IDENTITY.md +6 -0
- package/templates/workspaces/audrey/LEARNINGS.md +5 -0
- package/templates/workspaces/audrey/MEMORY.md +22 -0
- package/templates/workspaces/audrey/PLAYBOOK.md +16 -0
- package/templates/workspaces/audrey/SOUL.md +1 -1
- package/templates/workspaces/audrey/TOOLS.md +15 -0
- package/templates/workspaces/audrey/USER.md +13 -0
- package/templates/workspaces/avery/AGENTS.md +52 -0
- package/templates/workspaces/avery/FEEDBACK.md +12 -0
- package/templates/workspaces/avery/HEARTBEAT.md +5 -0
- package/templates/workspaces/avery/IDENTITY.md +6 -0
- package/templates/workspaces/avery/LEARNINGS.md +6 -0
- package/templates/workspaces/avery/MEMORY.md +22 -0
- package/templates/workspaces/avery/PLAYBOOK.md +16 -0
- package/templates/workspaces/avery/SOUL.md +1 -1
- package/templates/workspaces/avery/USER.md +17 -0
- package/templates/workspaces/avery/skills/claude-code/SKILL.md +38 -0
- package/templates/workspaces/avery/skills/claude-code/claude_code +55 -0
- package/templates/workspaces/caleb/AGENTS.md +52 -0
- package/templates/workspaces/caleb/FEEDBACK.md +11 -0
- package/templates/workspaces/caleb/HEARTBEAT.md +9 -0
- package/templates/workspaces/caleb/IDENTITY.md +6 -0
- package/templates/workspaces/caleb/LEARNINGS.md +5 -0
- package/templates/workspaces/caleb/MEMORY.md +22 -0
- package/templates/workspaces/caleb/PLAYBOOK.md +16 -0
- package/templates/workspaces/caleb/SOUL.md +1 -1
- package/templates/workspaces/caleb/TOOLS.md +30 -0
- package/templates/workspaces/caleb/USER.md +13 -0
- package/templates/workspaces/clara/AGENTS.md +59 -0
- package/templates/workspaces/clara/FEEDBACK.md +12 -0
- package/templates/workspaces/clara/HEARTBEAT.md +5 -0
- package/templates/workspaces/clara/IDENTITY.md +6 -0
- package/templates/workspaces/clara/LEARNINGS.md +6 -0
- package/templates/workspaces/clara/MEMORY.md +22 -0
- package/templates/workspaces/clara/PLAYBOOK.md +16 -0
- package/templates/workspaces/clara/SOUL.md +1 -1
- package/templates/workspaces/clara/TOOLS.md +15 -0
- package/templates/workspaces/clara/USER.md +17 -0
- package/templates/workspaces/daphne/AGENTS.md +59 -0
- package/templates/workspaces/daphne/FEEDBACK.md +18 -0
- package/templates/workspaces/daphne/HEARTBEAT.md +5 -0
- package/templates/workspaces/daphne/IDENTITY.md +6 -0
- package/templates/workspaces/daphne/LEARNINGS.md +6 -0
- package/templates/workspaces/daphne/MEMORY.md +22 -0
- package/templates/workspaces/daphne/PLAYBOOK.md +48 -0
- package/templates/workspaces/daphne/SOUL.md +1 -1
- package/templates/workspaces/daphne/TOOLS.md +15 -0
- package/templates/workspaces/daphne/USER.md +17 -0
- package/templates/workspaces/darius/AGENTS.md +52 -0
- package/templates/workspaces/darius/FEEDBACK.md +12 -0
- package/templates/workspaces/darius/HEARTBEAT.md +5 -0
- package/templates/workspaces/darius/IDENTITY.md +6 -0
- package/templates/workspaces/darius/LEARNINGS.md +6 -0
- package/templates/workspaces/darius/MEMORY.md +22 -0
- package/templates/workspaces/darius/PLAYBOOK.md +16 -0
- package/templates/workspaces/darius/SOUL.md +1 -1
- package/templates/workspaces/darius/USER.md +17 -0
- package/templates/workspaces/darius/skills/claude-code/SKILL.md +38 -0
- package/templates/workspaces/darius/skills/claude-code/claude_code +55 -0
- package/templates/workspaces/devon/AGENTS.md +52 -0
- package/templates/workspaces/devon/FEEDBACK.md +11 -0
- package/templates/workspaces/devon/HEARTBEAT.md +5 -0
- package/templates/workspaces/devon/IDENTITY.md +6 -0
- package/templates/workspaces/devon/LEARNINGS.md +11 -0
- package/templates/workspaces/devon/MEMORY.md +22 -0
- package/templates/workspaces/devon/PLAYBOOK.md +16 -0
- package/templates/workspaces/devon/SOUL.md +1 -1
- package/templates/workspaces/devon/USER.md +13 -0
- package/templates/workspaces/devon/check_github.py +12 -0
- package/templates/workspaces/devon/check_mc_env.py +30 -0
- package/templates/workspaces/devon/check_sb.py +34 -0
- package/templates/workspaces/devon/check_vercel.py +12 -0
- package/templates/workspaces/devon/get_mc_files.py +17 -0
- package/templates/workspaces/devon/write_heartbeat.py +67 -0
- package/templates/workspaces/dylan/.env.example +33 -0
- package/templates/workspaces/dylan/00007_verify_licenses_table.sql +100 -0
- package/templates/workspaces/dylan/AGENTS.md +52 -0
- package/templates/workspaces/dylan/FEEDBACK.md +28 -0
- package/templates/workspaces/dylan/HEARTBEAT.md +5 -0
- package/templates/workspaces/dylan/IDENTITY.md +6 -0
- package/templates/workspaces/dylan/LEARNINGS.md +70 -0
- package/templates/workspaces/dylan/MEMORY.md +22 -0
- package/templates/workspaces/dylan/PLAYBOOK.md +16 -0
- package/templates/workspaces/dylan/SOUL.md +1 -1
- package/templates/workspaces/dylan/STRIPE_PIPELINE.md +185 -0
- package/templates/workspaces/dylan/USER.md +17 -0
- package/templates/workspaces/dylan/n8n-stripe-welcome-workflow.json +123 -0
- package/templates/workspaces/dylan/skills/claude-code/SKILL.md +38 -0
- package/templates/workspaces/dylan/skills/claude-code/claude_code +55 -0
- package/templates/workspaces/dylan/stripe-webhook-handler.py +433 -0
- package/templates/workspaces/dylan/test_mock_webhook.py +103 -0
- package/templates/workspaces/elena/AGENTS.md +59 -0
- package/templates/workspaces/elena/FEEDBACK.md +11 -0
- package/templates/workspaces/elena/HEARTBEAT.md +9 -0
- package/templates/workspaces/elena/IDENTITY.md +6 -0
- package/templates/workspaces/elena/LEARNINGS.md +5 -0
- package/templates/workspaces/elena/MEMORY.md +22 -0
- package/templates/workspaces/elena/PLAYBOOK.md +16 -0
- package/templates/workspaces/elena/SOUL.md +1 -1
- package/templates/workspaces/elena/TOOLS.md +15 -0
- package/templates/workspaces/elena/USER.md +13 -0
- package/templates/workspaces/eva/AGENTS.md +59 -0
- package/templates/workspaces/eva/FEEDBACK.md +11 -0
- package/templates/workspaces/eva/HEARTBEAT.md +9 -0
- package/templates/workspaces/eva/IDENTITY.md +6 -0
- package/templates/workspaces/eva/LEARNINGS.md +5 -0
- package/templates/workspaces/eva/MEMORY.md +22 -0
- package/templates/workspaces/eva/PLAYBOOK.md +16 -0
- package/templates/workspaces/eva/SOUL.md +1 -1
- package/templates/workspaces/eva/TOOLS.md +15 -0
- package/templates/workspaces/eva/USER.md +13 -0
- package/templates/workspaces/felix/AGENTS.md +52 -0
- package/templates/workspaces/felix/FEEDBACK.md +11 -0
- package/templates/workspaces/felix/HEARTBEAT.md +5 -0
- package/templates/workspaces/felix/IDENTITY.md +6 -0
- package/templates/workspaces/felix/LEARNINGS.md +17 -0
- package/templates/workspaces/felix/MEMORY.md +22 -0
- package/templates/workspaces/felix/PLAYBOOK.md +16 -0
- package/templates/workspaces/felix/SOUL.md +1 -1
- package/templates/workspaces/felix/USER.md +13 -0
- package/templates/workspaces/felix/fidelia-psychology.html +1594 -0
- package/templates/workspaces/felix/task.txt +164 -0
- package/templates/workspaces/hannah/AGENTS.md +59 -0
- package/templates/workspaces/hannah/FEEDBACK.md +12 -0
- package/templates/workspaces/hannah/HEARTBEAT.md +5 -0
- package/templates/workspaces/hannah/IDENTITY.md +6 -0
- package/templates/workspaces/hannah/LEARNINGS.md +6 -0
- package/templates/workspaces/hannah/MEMORY.md +22 -0
- package/templates/workspaces/hannah/PLAYBOOK.md +16 -0
- package/templates/workspaces/hannah/SOUL.md +1 -1
- package/templates/workspaces/hannah/TOOLS.md +15 -0
- package/templates/workspaces/hannah/USER.md +17 -0
- package/templates/workspaces/isaac/AGENTS.md +52 -0
- package/templates/workspaces/isaac/FEEDBACK.md +12 -0
- package/templates/workspaces/isaac/HEARTBEAT.md +9 -0
- package/templates/workspaces/isaac/IDENTITY.md +6 -0
- package/templates/workspaces/isaac/LEARNINGS.md +6 -0
- package/templates/workspaces/isaac/MEMORY.md +22 -0
- package/templates/workspaces/isaac/PLAYBOOK.md +16 -0
- package/templates/workspaces/isaac/SOUL.md +1 -1
- package/templates/workspaces/isaac/USER.md +17 -0
- package/templates/workspaces/isaac/skills/claude-code/SKILL.md +38 -0
- package/templates/workspaces/isaac/skills/claude-code/claude_code +55 -0
- package/templates/workspaces/logan/AGENTS.md +59 -0
- package/templates/workspaces/logan/FEEDBACK.md +11 -0
- package/templates/workspaces/logan/HEARTBEAT.md +9 -0
- package/templates/workspaces/logan/IDENTITY.md +6 -0
- package/templates/workspaces/logan/LEARNINGS.md +5 -0
- package/templates/workspaces/logan/MEMORY.md +22 -0
- package/templates/workspaces/logan/PLAYBOOK.md +16 -0
- package/templates/workspaces/logan/SOUL.md +1 -1
- package/templates/workspaces/logan/TOOLS.md +15 -0
- package/templates/workspaces/logan/USER.md +13 -0
- package/templates/workspaces/maxxipro/AGENTS.md +29 -0
- package/templates/workspaces/maxxipro/FEEDBACK.md +19 -0
- package/templates/workspaces/maxxipro/HEARTBEAT.md +22 -0
- package/templates/workspaces/maxxipro/IDENTITY.md +35 -0
- package/templates/workspaces/maxxipro/KNOWLEDGE.md +335 -0
- package/templates/workspaces/maxxipro/LEARNINGS.md +47 -0
- package/templates/workspaces/maxxipro/MEMORY.md +60 -0
- package/templates/workspaces/maxxipro/OUTREACH_TEMPLATES.md +143 -0
- package/templates/workspaces/maxxipro/PLAYBOOK.md +81 -0
- package/templates/workspaces/maxxipro/SOUL.md +146 -0
- package/templates/workspaces/maxxipro/TOOLS.md +81 -0
- package/templates/workspaces/maxxipro/USER.md +40 -0
- package/templates/workspaces/morgan/AGENTS.md +59 -0
- package/templates/workspaces/morgan/FEEDBACK.md +19 -0
- package/templates/workspaces/morgan/HEARTBEAT.md +5 -0
- package/templates/workspaces/morgan/IDENTITY.md +6 -0
- package/templates/workspaces/morgan/LEARNINGS.md +18 -0
- package/templates/workspaces/morgan/MEMORY.md +22 -0
- package/templates/workspaces/morgan/PLAYBOOK.md +16 -0
- package/templates/workspaces/morgan/SOUL.md +1 -1
- package/templates/workspaces/morgan/TOOLS.md +15 -0
- package/templates/workspaces/morgan/USER.md +13 -0
- package/templates/workspaces/nadia/AGENTS.md +59 -0
- package/templates/workspaces/nadia/FEEDBACK.md +12 -0
- package/templates/workspaces/nadia/HEARTBEAT.md +5 -0
- package/templates/workspaces/nadia/IDENTITY.md +6 -0
- package/templates/workspaces/nadia/LEARNINGS.md +6 -0
- package/templates/workspaces/nadia/MEMORY.md +22 -0
- package/templates/workspaces/nadia/PLAYBOOK.md +16 -0
- package/templates/workspaces/nadia/SOUL.md +1 -1
- package/templates/workspaces/nadia/TOOLS.md +15 -0
- package/templates/workspaces/nadia/USER.md +13 -0
- package/templates/workspaces/nate/AGENTS.md +24 -0
- package/templates/workspaces/nate/FEEDBACK.md +12 -0
- package/templates/workspaces/nate/HEARTBEAT.md +33 -0
- package/templates/workspaces/nate/IDENTITY.md +15 -0
- package/templates/workspaces/nate/LEARNINGS.md +33 -0
- package/templates/workspaces/nate/MEMORY.md +39 -0
- package/templates/workspaces/nate/PLAYBOOK.md +160 -0
- package/templates/workspaces/nate/SOUL.md +50 -0
- package/templates/workspaces/nate/TOOLS.md +111 -0
- package/templates/workspaces/nate/USER.md +32 -0
- package/templates/workspaces/olivia/.last-openclaw-version +1 -0
- package/templates/workspaces/olivia/.npmrc.tmp +0 -0
- package/templates/workspaces/olivia/AGENTS.md +77 -0
- package/templates/workspaces/olivia/ALPHA_CODING_BENCHMARK.txt +148 -0
- package/templates/workspaces/olivia/ALPHA_MODEL_GUIDE.md +393 -0
- package/templates/workspaces/olivia/FEEDBACK.md +13 -0
- package/templates/workspaces/olivia/HEADTOHEAD_BENCHMARK.txt +1289 -0
- package/templates/workspaces/olivia/HEARTBEAT.md +267 -0
- package/templates/workspaces/olivia/IDENTITY.md +6 -0
- package/templates/workspaces/olivia/LEARNINGS.md +708 -0
- package/templates/workspaces/olivia/MEMORY.md +202 -0
- package/templates/workspaces/olivia/MISSION_CONTROL_DESIGN_SPEC_v1.md +1143 -0
- package/templates/workspaces/olivia/MVP-COMPLETION-SUMMARY.md +175 -0
- package/templates/workspaces/olivia/NETWORK_IMPLEMENTATION_PLAN.md +1556 -0
- package/templates/workspaces/olivia/NEW_NODES_BENCHMARK.txt +947 -0
- package/templates/workspaces/olivia/PLAYBOOK.md +42 -0
- package/templates/workspaces/olivia/SELF-HEALING-COMPLETE.md +150 -0
- package/templates/workspaces/olivia/SOUL.md +8 -8
- package/templates/workspaces/olivia/TOOLS.md +15 -0
- package/templates/workspaces/olivia/USER.md +17 -0
- package/templates/workspaces/olivia/alicefleet-supabase-credentials.md +50 -0
- package/templates/workspaces/olivia/dzombo-copy-rewrite.md +115 -0
- package/templates/workspaces/olivia/dzombo-implementation-plan.md +1248 -0
- package/templates/workspaces/olivia/fidelia-psychology.html +1594 -0
- package/templates/workspaces/olivia/lead_debug.png +0 -0
- package/templates/workspaces/olivia/minimatch-10.2.4.tgz +0 -0
- package/templates/workspaces/olivia/operation-bllm-research.md +157 -0
- package/templates/workspaces/olivia/qa-audit-mission-control-v2.md +538 -0
- package/templates/workspaces/olivia/roofmaxx_logo.svg +1 -0
- package/templates/workspaces/olivia/roofmaxx_social.jpg +0 -0
- package/templates/workspaces/olivia/skills/1password/SKILL.md +53 -0
- package/templates/workspaces/olivia/skills/1password/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/afrexai-recruiting-engine/README.md +57 -0
- package/templates/workspaces/olivia/skills/afrexai-recruiting-engine/SKILL.md +534 -0
- package/templates/workspaces/olivia/skills/afrexai-recruiting-engine/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/agent-security/SKILL.md +69 -0
- package/templates/workspaces/olivia/skills/agent-security/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/agentic-security-audit/SKILL.md +855 -0
- package/templates/workspaces/olivia/skills/agentic-security-audit/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/ai-automation-consulting/SKILL.md +67 -0
- package/templates/workspaces/olivia/skills/ai-automation-consulting/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/ai-automation-consulting/skill.json +12 -0
- package/templates/workspaces/olivia/skills/ai-presentation-maker/SKILL.md +1104 -0
- package/templates/workspaces/olivia/skills/ai-presentation-maker/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/ai-productivity-audit/SKILL.md +181 -0
- package/templates/workspaces/olivia/skills/ai-productivity-audit/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/ai-researcher/README.md +31 -0
- package/templates/workspaces/olivia/skills/ai-researcher/SKILL.md +59 -0
- package/templates/workspaces/olivia/skills/ai-researcher/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/ai-seo-writer/README.md +19 -0
- package/templates/workspaces/olivia/skills/ai-seo-writer/SKILL.md +100 -0
- package/templates/workspaces/olivia/skills/ai-seo-writer/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/analytics-tracking-2/SKILL.md +309 -0
- package/templates/workspaces/olivia/skills/analytics-tracking-2/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/api-doc-writer/SKILL.md +232 -0
- package/templates/workspaces/olivia/skills/api-doc-writer/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/api-generator/SKILL.md +49 -0
- package/templates/workspaces/olivia/skills/api-generator/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/api-generator/tips.md +10 -0
- package/templates/workspaces/olivia/skills/apple-notes/SKILL.md +50 -0
- package/templates/workspaces/olivia/skills/apple-notes/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/apple-reminders/SKILL.md +67 -0
- package/templates/workspaces/olivia/skills/apple-reminders/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/automation-workflows/SKILL.md +267 -0
- package/templates/workspaces/olivia/skills/automation-workflows/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/autoresearch/SKILL.md +46 -0
- package/templates/workspaces/olivia/skills/autoresearch/aria_write.py +148 -0
- package/templates/workspaces/olivia/skills/autoresearch/autoresearch.py +75 -0
- package/templates/workspaces/olivia/skills/azure-devops/SKILL.md +115 -0
- package/templates/workspaces/olivia/skills/azure-devops/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/blogwatcher/SKILL.md +46 -0
- package/templates/workspaces/olivia/skills/blogwatcher/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/blucli/SKILL.md +27 -0
- package/templates/workspaces/olivia/skills/blucli/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/check-analytics/SKILL.md +92 -0
- package/templates/workspaces/olivia/skills/check-analytics/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/cloud-architect/SKILL.md +89 -0
- package/templates/workspaces/olivia/skills/cloud-architect/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/cloud-infra-automation/SKILL.md +50 -0
- package/templates/workspaces/olivia/skills/cloud-infra-automation/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/cloud-storage/SKILL.md +61 -0
- package/templates/workspaces/olivia/skills/cloud-storage/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/cloud-storage/auth.md +97 -0
- package/templates/workspaces/olivia/skills/cloud-storage/costs.md +88 -0
- package/templates/workspaces/olivia/skills/cloud-storage/providers.md +55 -0
- package/templates/workspaces/olivia/skills/copywriting-pro/SKILL.md +107 -0
- package/templates/workspaces/olivia/skills/copywriting-pro/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/data-analyst-pro/SKILL.md +21 -0
- package/templates/workspaces/olivia/skills/data-analyst-pro/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/database-designer/README.md +388 -0
- package/templates/workspaces/olivia/skills/database-designer/SKILL.md +66 -0
- package/templates/workspaces/olivia/skills/database-designer/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/database-designer/index_optimizer.py +926 -0
- package/templates/workspaces/olivia/skills/database-designer/migration_generator.py +1199 -0
- package/templates/workspaces/olivia/skills/database-designer/schema_analyzer.py +982 -0
- package/templates/workspaces/olivia/skills/deploy-agent/SKILL.md +255 -0
- package/templates/workspaces/olivia/skills/deploy-agent/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/devops-automation-pack/SKILL.md +72 -0
- package/templates/workspaces/olivia/skills/devops-automation-pack/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/devops-automation-pack/deploy.sh +0 -0
- package/templates/workspaces/olivia/skills/financial-analysis-agent/SKILL.md +489 -0
- package/templates/workspaces/olivia/skills/financial-analysis-agent/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/gdpr-compliance-tracker/README.md +72 -0
- package/templates/workspaces/olivia/skills/gdpr-compliance-tracker/SKILL.md +226 -0
- package/templates/workspaces/olivia/skills/gdpr-compliance-tracker/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/gifgrep/SKILL.md +47 -0
- package/templates/workspaces/olivia/skills/gifgrep/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/github/SKILL.md +47 -0
- package/templates/workspaces/olivia/skills/github/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/gog/SKILL.md +36 -0
- package/templates/workspaces/olivia/skills/gog/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/growth-strategy-hub/SKILL.md +135 -0
- package/templates/workspaces/olivia/skills/growth-strategy-hub/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/growth-strategy-hub/metadata.json +4 -0
- package/templates/workspaces/olivia/skills/hetzner-cloud/SKILL.md +130 -0
- package/templates/workspaces/olivia/skills/hetzner-cloud/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/himalaya/SKILL.md +217 -0
- package/templates/workspaces/olivia/skills/himalaya/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/hotel-recommendation/SKILL.md +117 -0
- package/templates/workspaces/olivia/skills/hotel-recommendation/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/hr-policy-generator/SKILL.md +54 -0
- package/templates/workspaces/olivia/skills/hr-policy-generator/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/human-writing/SKILL.md +41 -0
- package/templates/workspaces/olivia/skills/human-writing/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/imsg/SKILL.md +25 -0
- package/templates/workspaces/olivia/skills/imsg/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/in-depth-research/SKILL.md +124 -0
- package/templates/workspaces/olivia/skills/in-depth-research/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/in-depth-research/methodology.md +75 -0
- package/templates/workspaces/olivia/skills/in-depth-research/output-formats.md +168 -0
- package/templates/workspaces/olivia/skills/in-depth-research/sources.md +80 -0
- package/templates/workspaces/olivia/skills/javascript-skills/README.md +71 -0
- package/templates/workspaces/olivia/skills/javascript-skills/SKILL.md +746 -0
- package/templates/workspaces/olivia/skills/javascript-skills/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/leadership-strategy-playbook/SKILL.md +147 -0
- package/templates/workspaces/olivia/skills/leadership-strategy-playbook/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/market-research-agent/README.md +29 -0
- package/templates/workspaces/olivia/skills/market-research-agent/SKILL.md +52 -0
- package/templates/workspaces/olivia/skills/market-research-agent/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/marketing-analytics/SKILL.md +74 -0
- package/templates/workspaces/olivia/skills/marketing-analytics/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/marketing-master-io/SKILL.md +125 -0
- package/templates/workspaces/olivia/skills/marketing-master-io/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/marketing-strategy-pmm/SKILL.md +398 -0
- package/templates/workspaces/olivia/skills/marketing-strategy-pmm/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/meta-ads-analytics/SKILL.md +53 -0
- package/templates/workspaces/olivia/skills/meta-ads-analytics/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/obsidian/SKILL.md +55 -0
- package/templates/workspaces/olivia/skills/obsidian/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/openclaw-accounting/SKILL.md +125 -0
- package/templates/workspaces/olivia/skills/openclaw-accounting/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/CHANGELOG.md +35 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/CHANNELLOG.md +73 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/README.md +161 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/SKILL.md +130 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/config.json +36 -0
- package/templates/workspaces/olivia/skills/openclaw-security-toolkit/metadata.json +19 -0
- package/templates/workspaces/olivia/skills/openhue/SKILL.md +30 -0
- package/templates/workspaces/olivia/skills/openhue/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/orgx-operations-agent/SKILL.md +41 -0
- package/templates/workspaces/olivia/skills/orgx-operations-agent/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/outreach/SKILL.md +84 -0
- package/templates/workspaces/olivia/skills/outreach/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/outreach/by-type.md +166 -0
- package/templates/workspaces/olivia/skills/outreach/templates.md +154 -0
- package/templates/workspaces/olivia/skills/outreach/tracking.md +145 -0
- package/templates/workspaces/olivia/skills/persona-hr-coordinator/SKILL.md +38 -0
- package/templates/workspaces/olivia/skills/persona-hr-coordinator/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/personal-productivity/SKILL.md +161 -0
- package/templates/workspaces/olivia/skills/personal-productivity/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/personal-productivity/index.js +363 -0
- package/templates/workspaces/olivia/skills/personal-productivity/package.json +15 -0
- package/templates/workspaces/olivia/skills/personal-travel/README.md +34 -0
- package/templates/workspaces/olivia/skills/personal-travel/SKILL.md +46 -0
- package/templates/workspaces/olivia/skills/personal-travel/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/presentation-html-generator-skill/SKILL.md +185 -0
- package/templates/workspaces/olivia/skills/presentation-html-generator-skill/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/product-manager/SKILL.md +77 -0
- package/templates/workspaces/olivia/skills/product-manager/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/quant-strategy/SKILL.md +28 -0
- package/templates/workspaces/olivia/skills/quant-strategy/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/sales-pipeline-tracker/README.md +29 -0
- package/templates/workspaces/olivia/skills/sales-pipeline-tracker/SKILL.md +45 -0
- package/templates/workspaces/olivia/skills/sales-pipeline-tracker/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/security-auditor/SKILL.md +399 -0
- package/templates/workspaces/olivia/skills/security-auditor/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/security-hardening/SKILL.md +296 -0
- package/templates/workspaces/olivia/skills/security-hardening/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/security-scanner/SKILL.md +67 -0
- package/templates/workspaces/olivia/skills/security-scanner/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/seo-optimization/SKILL.md +31 -0
- package/templates/workspaces/olivia/skills/seo-optimization/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/service-booking/SKILL.md +193 -0
- package/templates/workspaces/olivia/skills/service-booking/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/sme-hr-automation/SKILL.md +131 -0
- package/templates/workspaces/olivia/skills/sme-hr-automation/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/social-media-scheduler/README.md +29 -0
- package/templates/workspaces/olivia/skills/social-media-scheduler/SKILL.md +49 -0
- package/templates/workspaces/olivia/skills/social-media-scheduler/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/sonoscli/SKILL.md +26 -0
- package/templates/workspaces/olivia/skills/sonoscli/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/strategy-advisor/SKILL.md +33 -0
- package/templates/workspaces/olivia/skills/strategy-advisor/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/summarize/SKILL.md +49 -0
- package/templates/workspaces/olivia/skills/summarize/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/things-mac/SKILL.md +61 -0
- package/templates/workspaces/olivia/skills/things-mac/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/travel-itinerary-planner/SKILL.md +121 -0
- package/templates/workspaces/olivia/skills/travel-itinerary-planner/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/travel-manager/SKILL.md +36 -0
- package/templates/workspaces/olivia/skills/travel-manager/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/travel-planning/SKILL.md +238 -0
- package/templates/workspaces/olivia/skills/travel-planning/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/travel-planning/booking-guide.md +91 -0
- package/templates/workspaces/olivia/skills/travel-planning/memory-template.md +111 -0
- package/templates/workspaces/olivia/skills/travel-planning/multi-city.md +131 -0
- package/templates/workspaces/olivia/skills/travel-planning/packing-templates.md +155 -0
- package/templates/workspaces/olivia/skills/travel-planning/setup.md +66 -0
- package/templates/workspaces/olivia/skills/update-it-all/SKILL.md +143 -0
- package/templates/workspaces/olivia/skills/update-it-all/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/voice/SKILL.md +62 -0
- package/templates/workspaces/olivia/skills/weather/SKILL.md +49 -0
- package/templates/workspaces/olivia/skills/weather/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/web-researcher/SKILL.md +21 -0
- package/templates/workspaces/olivia/skills/web-researcher/_meta.json +6 -0
- package/templates/workspaces/olivia/skills/website-seo/SKILL.md +284 -0
- package/templates/workspaces/olivia/skills/website-seo/_meta.json +6 -0
- package/templates/workspaces/olivia/stripe-welcome-n8n.json +103 -0
- package/templates/workspaces/olivia/test2.wav.wav +0 -0
- package/templates/workspaces/olivia/test_speech.json +1 -0
- package/templates/workspaces/olivia/test_speech.srt +0 -0
- package/templates/workspaces/olivia/test_speech.tsv +1 -0
- package/templates/workspaces/olivia/test_speech.txt +0 -0
- package/templates/workspaces/olivia/test_speech.vtt +2 -0
- package/templates/workspaces/owen/AGENTS.md +59 -0
- package/templates/workspaces/owen/FEEDBACK.md +12 -0
- package/templates/workspaces/owen/HEARTBEAT.md +5 -0
- package/templates/workspaces/owen/IDENTITY.md +6 -0
- package/templates/workspaces/owen/LEARNINGS.md +46 -0
- package/templates/workspaces/owen/MEMORY.md +22 -0
- package/templates/workspaces/owen/PLAYBOOK.md +16 -0
- package/templates/workspaces/owen/SOUL.md +1 -1
- package/templates/workspaces/owen/TOOLS.md +15 -0
- package/templates/workspaces/owen/USER.md +17 -0
- package/templates/workspaces/parker/AGENTS.md +59 -0
- package/templates/workspaces/parker/FEEDBACK.md +11 -0
- package/templates/workspaces/parker/HEARTBEAT.md +5 -0
- package/templates/workspaces/parker/IDENTITY.md +6 -0
- package/templates/workspaces/parker/LEARNINGS.md +17 -0
- package/templates/workspaces/parker/MEMORY.md +22 -0
- package/templates/workspaces/parker/PLAYBOOK.md +16 -0
- package/templates/workspaces/parker/SOUL.md +1 -1
- package/templates/workspaces/parker/TOOLS.md +15 -0
- package/templates/workspaces/parker/USER.md +13 -0
- package/templates/workspaces/quinn/AGENTS.md +52 -0
- package/templates/workspaces/quinn/FEEDBACK.md +11 -0
- package/templates/workspaces/quinn/HEARTBEAT.md +5 -0
- package/templates/workspaces/quinn/IDENTITY.md +6 -0
- package/templates/workspaces/quinn/LEARNINGS.md +35 -0
- package/templates/workspaces/quinn/MEMORY.md +22 -0
- package/templates/workspaces/quinn/PLAYBOOK.md +16 -0
- package/templates/workspaces/quinn/SOUL.md +1 -1
- package/templates/workspaces/quinn/USER.md +17 -0
- package/templates/workspaces/quinn/alice-login-page.png +0 -0
- package/templates/workspaces/rowan/AGENTS.md +59 -0
- package/templates/workspaces/rowan/FEEDBACK.md +12 -0
- package/templates/workspaces/rowan/HEARTBEAT.md +5 -0
- package/templates/workspaces/rowan/IDENTITY.md +6 -0
- package/templates/workspaces/rowan/LEARNINGS.md +12 -0
- package/templates/workspaces/rowan/MEMORY.md +22 -0
- package/templates/workspaces/rowan/PLAYBOOK.md +16 -0
- package/templates/workspaces/rowan/SOUL.md +1 -1
- package/templates/workspaces/rowan/USER.md +17 -0
- package/templates/workspaces/selena/AGENTS.md +59 -0
- package/templates/workspaces/selena/FEEDBACK.md +12 -0
- package/templates/workspaces/selena/HEARTBEAT.md +5 -0
- package/templates/workspaces/selena/IDENTITY.md +6 -0
- package/templates/workspaces/selena/LEARNINGS.md +24 -0
- package/templates/workspaces/selena/MEMORY.md +22 -0
- package/templates/workspaces/selena/PLAYBOOK.md +16 -0
- package/templates/workspaces/selena/SOUL.md +1 -1
- package/templates/workspaces/selena/USER.md +17 -0
- package/templates/workspaces/selena/kids-ai-security-compliance-plan.md +791 -0
- package/templates/workspaces/selena/kidspark-coppa-compliance-audit.md +866 -0
- package/templates/workspaces/sloane/AGENTS.md +59 -0
- package/templates/workspaces/sloane/FEEDBACK.md +12 -0
- package/templates/workspaces/sloane/HEARTBEAT.md +9 -0
- package/templates/workspaces/sloane/IDENTITY.md +6 -0
- package/templates/workspaces/sloane/LEARNINGS.md +6 -0
- package/templates/workspaces/sloane/MEMORY.md +22 -0
- package/templates/workspaces/sloane/PLAYBOOK.md +16 -0
- package/templates/workspaces/sloane/SOUL.md +1 -1
- package/templates/workspaces/sloane/TOOLS.md +15 -0
- package/templates/workspaces/sloane/USER.md +13 -0
- package/templates/workspaces/smoketestagent/AGENTS.md +52 -0
- package/templates/workspaces/smoketestagent/FEEDBACK.md +3 -0
- package/templates/workspaces/smoketestagent/HEARTBEAT.md +14 -0
- package/templates/workspaces/smoketestagent/IDENTITY.md +6 -0
- package/templates/workspaces/smoketestagent/LEARNINGS.md +3 -0
- package/templates/workspaces/smoketestagent/MEMORY.md +24 -0
- package/templates/workspaces/smoketestagent/PLAYBOOK.md +7 -0
- package/templates/workspaces/smoketestagent/SOUL.md +32 -0
- package/templates/workspaces/smoketestagent/TOOLS.md +13 -0
- package/templates/workspaces/smoketestagent/USER.md +5 -0
- package/templates/workspaces/sophie/AGENTS.md +59 -0
- package/templates/workspaces/sophie/FEEDBACK.md +12 -0
- package/templates/workspaces/sophie/HEARTBEAT.md +9 -0
- package/templates/workspaces/sophie/IDENTITY.md +6 -0
- package/templates/workspaces/sophie/LEARNINGS.md +6 -0
- package/templates/workspaces/sophie/MEMORY.md +22 -0
- package/templates/workspaces/sophie/PLAYBOOK.md +16 -0
- package/templates/workspaces/sophie/SOUL.md +1 -1
- package/templates/workspaces/sophie/TOOLS.md +15 -0
- package/templates/workspaces/sophie/USER.md +17 -0
- package/templates/workspaces/tommy/AGENTS.md +59 -0
- package/templates/workspaces/tommy/FEEDBACK.md +12 -0
- package/templates/workspaces/tommy/HEARTBEAT.md +9 -0
- package/templates/workspaces/tommy/IDENTITY.md +6 -0
- package/templates/workspaces/tommy/LEARNINGS.md +6 -0
- package/templates/workspaces/tommy/MEMORY.md +22 -0
- package/templates/workspaces/tommy/PLAYBOOK.md +16 -0
- package/templates/workspaces/tommy/SOUL.md +1 -1
- package/templates/workspaces/tommy/TOOLS.md +15 -0
- package/templates/workspaces/tommy/USER.md +17 -0
- package/templates/workspaces/uma/AGENTS.md +59 -0
- package/templates/workspaces/uma/FEEDBACK.md +11 -0
- package/templates/workspaces/uma/HEARTBEAT.md +5 -0
- package/templates/workspaces/uma/IDENTITY.md +6 -0
- package/templates/workspaces/uma/LEARNINGS.md +11 -0
- package/templates/workspaces/uma/MEMORY.md +22 -0
- package/templates/workspaces/uma/PLAYBOOK.md +16 -0
- package/templates/workspaces/uma/SOUL.md +1 -1
- package/templates/workspaces/uma/TOOLS.md +15 -0
- package/templates/workspaces/uma/USER.md +13 -0
@@ -0,0 +1,1556 @@
# A.L.I.C.E. 5-Node Compute Network — Implementation Plan

**Author:** Devon (DevOps & Infrastructure)
**Date:** 2026-03-23
**Target:** Rob / A.L.I.C.E. Orchestrator
**Status:** Draft → Ready for Review

---

## A. Executive Summary

This plan executes a five-phase rollout across Rob's 5-node A.L.I.C.E. compute network to:
1. **Harden safety** by deploying `llama-guard3:8b` on all inference nodes
2. **Unlock voice** by building a full STT→LLM→TTS pipeline on the Orin edge node
3. **Stand up infrastructure** by deploying Qdrant, Redis, Prometheus, and Grafana on the Ubuntu Desktop GPU node
4. **Expand the GPU model catalog** on Ubuntu Desktop with small, efficient models
5. **Register everything** in OpenClaw's model and voice endpoint configuration

**Nodes involved:**

| Hostname | Tailscale IP | Role |
|---|---|---|
| MacBook Air | 100.101.241.124 | OpenClaw orchestration (no compute changes) |
| Mac Studio | 100.115.74.106 | Always-on inference + safety |
| Mac Mini | 100.107.132.71 | Inference + safety relay |
| Alpha/AGX Orin | 100.106.110.119 | Voice pipeline (STT + TTS) |
| Ubuntu Desktop | 100.76.82.8 | Infra services + GPU models |

**Total estimated time:** ~3–5 hours if run sequentially; ~1–2 hours if Phases 1 and 2 run in parallel.

---

## B. Phase-by-Phase Breakdown

---

### Phase 1 — Safety Layer: `llama-guard3:8b` on Mac Studio + Mac Mini

**Node(s):** Mac Studio (100.115.74.106), Mac Mini (100.107.132.71)
**SSH user:** rob
**Risk:** Low — read-only model pull, no config changes
**Rollback:** `ollama delete llama-guard3:8b`

#### Mac Studio

```bash
# SSH to Mac Studio
ssh rob@100.115.74.106

# Pull llama-guard3:8b (model is ~5GB; ~2–5 min on gigabit)
ollama pull llama-guard3:8b

# Verify it's registered
ollama list | grep llama-guard3

# Optional: quick smoke test
# (note: `timeout` is not in stock macOS — it comes from coreutils)
timeout 30 ollama run llama-guard3:8b "Hello world" --verbose
```

#### Mac Mini

```bash
# SSH to Mac Mini
ssh rob@100.107.132.71

# Pull llama-guard3:8b
ollama pull llama-guard3:8b

# Verify
ollama list | grep llama-guard3

# Quick smoke test
timeout 30 ollama run llama-guard3:8b "Hello world" --verbose
```

**Rollback (either node):**
```bash
ollama delete llama-guard3:8b
```

**Capabilities unlocked:** Content safety scoring on all inference requests handled by Mac Studio and Mac Mini. Model outputs can be passed through `llama-guard3:8b` before returning to clients.
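As an illustration, that gating step can be sketched against Ollama's HTTP API (`/api/generate`). This is a minimal sketch, not part of the rollout: the helper names and the first-line `safe`/`unsafe` check are assumptions about how the caller wires it up.

```python
import json
import urllib.request

def parse_guard_verdict(response_text: str) -> bool:
    """Llama Guard 3 answers 'safe' or 'unsafe' plus a category line; True means safe."""
    return response_text.strip().splitlines()[0].lower() == "safe"

def is_safe(text: str, host: str = "http://100.115.74.106:11434") -> bool:
    """Hypothetical helper: score `text` with llama-guard3 on the Mac Studio node."""
    payload = json.dumps({
        "model": "llama-guard3:8b",
        "prompt": text,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return parse_guard_verdict(json.loads(resp.read())["response"])
```

A caller would run `is_safe(model_output)` and suppress or rewrite the reply when it returns `False`.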

---

### Phase 2 — Voice Pipeline on Orin (JetPack 6.2)

**Node:** Alpha/AGX Orin (100.106.110.119)
**SSH:** `ssh rob@100.106.110.119`
**OS:** Ubuntu ARM (JetPack 6.2 — CUDA 12.6, cuDNN 9.3, TensorRT 10.3)
**Risk:** Medium — installs packages, creates systemd services, opens ports
**Rollback:** `systemctl disable --now alice-stt alice-tts; rm -rf /opt/voice-pipeline`

This is the most complex phase. We install:
- **faster-whisper** with TensorRT backend → STT service on port 8765
- **Kokoro TTS** (CUDA-accelerated) → TTS primary on port 8766
- **Piper TTS** (fallback) → TTS fallback on port 8767
- **silero-vad** → VAD preprocessing

Services are exposed as HTTP APIs via systemd units.

#### Step 2.1 — Environment Prep

```bash
ssh rob@100.106.110.119

# Check JetPack / CUDA version
nvcc --version                                              # Should report CUDA 12.6
python3 -c "import tensorrt; print(tensorrt.__version__)"   # Should be 10.3.x
nvidia-smi                                                  # Confirm GPU visible

# Create dedicated user for voice services (runs as isolated systemd services)
sudo useradd -r -M -s /usr/sbin/nologin voice 2>/dev/null || true
sudo mkdir -p /opt/voice-pipeline
sudo chown voice:voice /opt/voice-pipeline
```

#### Step 2.2 — Python + pip + venv

```bash
ssh rob@100.106.110.119

# JetPack ships Python 3.10+. Verify and set up a venv
python3 --version
python3 -m pip --version || sudo apt-get install -y python3-pip

# Create venv
python3 -m venv /opt/voice-pipeline/venv
source /opt/voice-pipeline/venv/bin/activate

# Upgrade pip
pip install --upgrade pip
```

#### Step 2.3 — Install faster-whisper (TensorRT backend)

```bash
ssh rob@100.106.110.119
source /opt/voice-pipeline/venv/bin/activate

# faster-whisper + TensorRT
pip install faster-whisper
pip install tensorrt-cu12   # matches CUDA 12.6

# Verify CUDA inference works
# (faster-whisper runs on CTranslate2; valid compute types are
#  float16/int8/etc., not 'tensorrt')
python3 -c "
from faster_whisper import WhisperModel
model = WhisperModel('Systran/faster-whisper-tiny', device='cuda', compute_type='float16')
print('CUDA Whisper OK')
del model
"

# Download a base model for the service
# Using 'base' for balance of speed/accuracy (or 'small' for higher accuracy)
# Note: the service will cache this under ~/.cache/huggingface/
# For JetPack, prefer ONNX/TensorRT optimized variants when available
pip install huggingface_hub
python3 -c "
from huggingface_hub import snapshot_download
snapshot_download(repo_id='Systran/faster-whisper-tiny', local_dir='/opt/voice-pipeline/models/whisper-tiny')
print('Whisper model downloaded')
"
```

**Alternative model download (if the Systran repo is unavailable):**
```bash
# Use the named Whisper model via faster-whisper auto-download
python3 -c "
from faster_whisper import WhisperModel
model = WhisperModel('tiny', device='cuda', compute_type='float16')
print('Whisper tiny model ready')
"   # Downloads automatically on first run
```

#### Step 2.4 — Install Kokoro TTS (CUDA)

```bash
ssh rob@100.106.110.119
source /opt/voice-pipeline/venv/bin/activate

# Kokoro requires ONNX + onnxruntime-gpu; soundfile is used to write WAVs
pip install kokoro-onnx onnxruntime-gpu soundfile

# Clone Kokoro weights (requires git lfs)
sudo apt-get install -y git-lfs
cd /opt/voice-pipeline
git lfs install
git clone https://huggingface.co/hexgrad/kokoro-v1.0 kokoro-v1.0

# Check available voices
ls kokoro-v1.0/voices/
# Expected: af_bella, af_nicole, af_sarah, am_adam, am_michael, etc.

# Quick test — generate a short audio clip
python3 -c "
import soundfile as sf
from kokoro_onnx import Kokoro
k = Kokoro('/opt/voice-pipeline/kokoro-v1.0/kokoro-v1.0.onnx',
           '/opt/voice-pipeline/kokoro-v1.0/voices/af_bella.onnx')
# create() returns (samples, sample_rate); write a proper WAV with soundfile
samples, sample_rate = k.create('Hello, this is a voice pipeline test.', speed=1.0)
sf.write('/tmp/test_kokoro.wav', samples, sample_rate)
print('Kokoro TTS OK — /tmp/test_kokoro.wav')
"
```

#### Step 2.5 — Install Piper TTS (fallback)

```bash
ssh rob@100.106.110.119
source /opt/voice-pipeline/venv/bin/activate

# Piper is a fast, local TTS engine
pip install piper-tts

# Download an English voice model (medium quality, ~50MB)
mkdir -p /opt/voice-pipeline/piper-models
cd /opt/voice-pipeline/piper-models

# Download a voice + config
# Using en_US/lessac medium — good quality / speed balance
curl -L "https://github.com/rhasspy/piper/raw/master/src/python/examples/en_US-lessac-medium.onnx" \
  -o en_US-lessac-medium.onnx
curl -L "https://github.com/rhasspy/piper/raw/master/src/python/examples/en_US-lessac-medium.onnx.json" \
  -o en_US-lessac-medium.onnx.json

# Quick test
python3 -c "
import subprocess
result = subprocess.run([
    'piper', '--model', '/opt/voice-pipeline/piper-models/en_US-lessac-medium.onnx',
    '--output-raw'
], input=b'Hello from Piper TTS.', capture_output=True)
print(f'Piper output bytes: {len(result.stdout)}')
print('Piper OK')
"
```

#### Step 2.6 — Install silero-vad

```bash
ssh rob@100.106.110.119
source /opt/voice-pipeline/venv/bin/activate

pip install silero-vad

# Verify VAD works
python3 -c "
import torch
from silero_vad import load_silero_vad, get_speech_timestamps
vad_model = load_silero_vad()
print('Silero VAD loaded OK')
"
```

#### Step 2.7 — Write STT HTTP Service (port 8765)

```bash
ssh rob@100.106.110.119

cat > /opt/voice-pipeline/stt_service.py << 'PYEOF'
#!/opt/voice-pipeline/venv/bin/python3
"""
A.L.I.C.E. STT Service — faster-whisper + silero-vad on Orin (JetPack 6.2)
Listens on port 8765. Accepts POST with 16-bit PCM audio bytes in the body.
Returns JSON: { "text": "...", "language": "en", "duration": 1.23 }
"""

import sys
import io
import wave

from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import JSONResponse
import uvicorn
from faster_whisper import WhisperModel
from silero_vad import load_silero_vad, get_speech_timestamps
import torch

app = FastAPI(title="A.L.I.C.E. STT Service")

# Load models at startup
MODEL_SIZE = "base"  # Options: tiny, base, small, medium, large-v3
VAD_MODEL = None
WHISPER_MODEL = None

@app.on_event("startup")
def load_models():
    global VAD_MODEL, WHISPER_MODEL
    print("[STT] Loading Silero VAD...", file=sys.stderr)
    VAD_MODEL = load_silero_vad()
    print("[STT] Loading faster-whisper (CUDA, float16)...", file=sys.stderr)
    WHISPER_MODEL = WhisperModel(
        MODEL_SIZE, device="cuda", compute_type="float16"
    )
    print("[STT] Both models ready.", file=sys.stderr)

def pcm_to_wav(pcm_bytes: bytes, sample_rate: int = 16000, channels: int = 1) -> bytes:
    """Wrap raw PCM in a WAV header."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wf:
        wf.setnchannels(channels)
        wf.setsampwidth(2)  # 16-bit
        wf.setframerate(sample_rate)
        wf.writeframes(pcm_bytes)
    return buf.getvalue()

@app.post("/transcribe")
async def transcribe(request: Request, language: str = "en", sample_rate: int = 16000):
    # Read the raw request body (curl --data-binary sends it this way)
    file = await request.body()
    if not file:
        raise HTTPException(400, "No audio file provided")

    try:
        # If raw PCM, wrap in WAV header
        audio_bytes = pcm_to_wav(file, sample_rate=sample_rate)
    except Exception:
        raise HTTPException(400, "Could not parse audio data")

    # Run VAD to find speech segments
    audio_np = torch.frombuffer(bytearray(file), dtype=torch.int16).float() / 32768.0
    torch.set_float32_matmul_precision('high')
    speech_ts = get_speech_timestamps(audio_np, VAD_MODEL, sampling_rate=sample_rate)

    if not speech_ts:
        return JSONResponse({"text": "", "language": language, "duration": 0.0, "segments": []})

    # Run Whisper on the full audio (transcribe takes a path, file-like
    # object, or numpy array — not raw bytes, hence the BytesIO wrap)
    segments, info = WHISPER_MODEL.transcribe(
        io.BytesIO(audio_bytes),
        language=language,
        vad_filter=False,  # We handle VAD ourselves
        beam_size=5,
        word_timestamps=True,
    )

    segment_list = []
    full_text = []
    for seg in segments:
        segment_list.append({
            "start": seg.start,
            "end": seg.end,
            "text": seg.text.strip(),
        })
        full_text.append(seg.text.strip())

    return JSONResponse({
        "text": " ".join(full_text),
        "language": info.language,
        "duration": info.duration,
        "segments": segment_list,
    })

@app.get("/health")
def health():
    return {"status": "ok", "model": MODEL_SIZE, "device": "cuda"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8765, log_level="info")
PYEOF

chmod +x /opt/voice-pipeline/stt_service.py
sudo chown voice:voice /opt/voice-pipeline/stt_service.py
```

#### Step 2.8 — Write TTS HTTP Service (port 8766 primary, 8767 fallback)

```bash
ssh rob@100.106.110.119

cat > /opt/voice-pipeline/tts_service.py << 'PYEOF'
#!/opt/voice-pipeline/venv/bin/python3
"""
A.L.I.C.E. TTS Service — Kokoro (primary, CUDA) + Piper (fallback)
Primary: port 8766, Fallback: port 8767
POST /synthesize with { "text": "...", "voice": "af_bella", "speed": 1.0 }
Returns raw audio bytes (WAV from Kokoro, raw PCM from Piper).
"""

import sys
import io
import subprocess
from pathlib import Path

from fastapi import FastAPI, HTTPException
from fastapi.responses import Response
import uvicorn
import soundfile as sf

# Kokoro
try:
    from kokoro_onnx import Kokoro
    KOKORO_AVAILABLE = True
except ImportError:
    KOKORO_AVAILABLE = False
    print("[TTS] Kokoro not available", file=sys.stderr)

# Piper
try:
    import piper
    PIPER_AVAILABLE = True
except ImportError:
    PIPER_AVAILABLE = False
    print("[TTS] Piper not available", file=sys.stderr)

app_primary = FastAPI(title="A.L.I.C.E. TTS (Kokoro/CUDA)")
app_fallback = FastAPI(title="A.L.I.C.E. TTS (Piper/Fallback)")

KOKORO_MODEL = None
PIPER_MODEL_PATH = "/opt/voice-pipeline/piper-models/en_US-lessac-medium.onnx"

@app_primary.on_event("startup")
def load_kokoro():
    global KOKORO_MODEL
    if not KOKORO_AVAILABLE:
        print("[TTS/Kokoro] Not available, exiting startup", file=sys.stderr)
        return
    print("[TTS] Loading Kokoro TTS (CUDA)...", file=sys.stderr)
    KOKORO_MODEL = Kokoro(
        "/opt/voice-pipeline/kokoro-v1.0/kokoro-v1.0.onnx",
        "/opt/voice-pipeline/kokoro-v1.0/voices/af_bella.onnx",
    )
    print("[TTS] Kokoro ready", file=sys.stderr)

@app_fallback.on_event("startup")
def load_piper():
    if not PIPER_AVAILABLE:
        print("[TTS/Piper] Not available", file=sys.stderr)
        return
    print("[TTS] Loading Piper TTS fallback...", file=sys.stderr)
    # Piper is invoked per-request via subprocess; just verify the file exists
    if Path(PIPER_MODEL_PATH).exists():
        print("[TTS] Piper model found", file=sys.stderr)
    else:
        print(f"[TTS] Piper model MISSING at {PIPER_MODEL_PATH}", file=sys.stderr)

def synthesize_kokoro(text: str, voice: str = "af_bella", speed: float = 1.0) -> bytes:
    voice_map = {
        "af_bella": "/opt/voice-pipeline/kokoro-v1.0/voices/af_bella.onnx",
        "af_nicole": "/opt/voice-pipeline/kokoro-v1.0/voices/af_nicole.onnx",
        "af_sarah": "/opt/voice-pipeline/kokoro-v1.0/voices/af_sarah.onnx",
        "am_adam": "/opt/voice-pipeline/kokoro-v1.0/voices/am_adam.onnx",
        "am_michael": "/opt/voice-pipeline/kokoro-v1.0/voices/am_michael.onnx",
    }
    voice_file = voice_map.get(voice, voice_map["af_bella"])
    k = Kokoro(
        "/opt/voice-pipeline/kokoro-v1.0/kokoro-v1.0.onnx",
        voice_file,
    )
    # create() returns (samples, sample_rate); encode to WAV bytes
    samples, sample_rate = k.create(text, speed=speed)
    buf = io.BytesIO()
    sf.write(buf, samples, sample_rate, format="WAV")
    return buf.getvalue()

# ---- Primary (Kokoro) endpoints ----

@app_primary.post("/synthesize")
async def synthesize_primary(data: dict = None):
    if data is None:
        raise HTTPException(400, "JSON body required: {text, voice?, speed?}")
    text = data.get("text")
    if not text:
        raise HTTPException(400, "text field required")
    voice = data.get("voice", "af_bella")
    speed = float(data.get("speed", 1.0))

    try:
        audio = synthesize_kokoro(text, voice, speed)
        return Response(content=audio, media_type="audio/wav",
                        headers={"X-TTS-Engine": "kokoro-cuda"})
    except Exception as e:
        raise HTTPException(500, f"Kokoro synthesis failed: {e}")

@app_primary.get("/health")
def health_primary():
    return {"status": "ok", "engine": "kokoro", "device": "cuda",
            "available_voices": ["af_bella", "af_nicole", "af_sarah", "am_adam", "am_michael"]}

# ---- Fallback (Piper) endpoints ----

@app_fallback.post("/synthesize")
async def synthesize_fallback(data: dict = None):
    if data is None:
        raise HTTPException(400, "JSON body required: {text}")
    text = data.get("text")
    if not text:
        raise HTTPException(400, "text field required")

    try:
        proc = subprocess.run(
            ["piper", "--model", PIPER_MODEL_PATH, "--output-raw"],
            input=text.encode("utf-8"),
            capture_output=True,
            timeout=30,
        )
        if proc.returncode != 0:
            raise HTTPException(500, f"Piper failed: {proc.stderr.decode()}")
        # Piper emits raw 16-bit PCM at 22050 Hz on stdout
        return Response(content=proc.stdout, media_type="audio/raw",
                        headers={"X-TTS-Engine": "piper", "X-Sample-Rate": "22050"})
    except subprocess.TimeoutExpired:
        raise HTTPException(504, "Piper synthesis timed out")
    except HTTPException:
        raise
    except Exception as e:
        raise HTTPException(500, f"Piper synthesis failed: {e}")

@app_fallback.get("/health")
def health_fallback():
    return {"status": "ok", "engine": "piper", "device": "cpu"}

# ---- Run two servers: primary on 8766, fallback on 8767 ----
# Both run from one systemd service; the main block spawns each in its own process.

if __name__ == "__main__":
    import multiprocessing

    def run_primary():
        uvicorn.run(app_primary, host="0.0.0.0", port=8766, log_level="info")

    def run_fallback():
        uvicorn.run(app_fallback, host="0.0.0.0", port=8767, log_level="info")

    p_primary = multiprocessing.Process(target=run_primary)
    p_fallback = multiprocessing.Process(target=run_fallback)

    p_primary.start()
    p_fallback.start()

    p_primary.join()
    p_fallback.join()
PYEOF

chmod +x /opt/voice-pipeline/tts_service.py
sudo chown voice:voice /opt/voice-pipeline/tts_service.py
```

#### Step 2.9 — Systemd Service Units

```bash
ssh rob@100.106.110.119

# STT systemd service
cat > /tmp/alice-stt.service << 'EOF'
[Unit]
Description=A.L.I.C.E. STT Service (faster-whisper + silero-vad)
After=network.target

[Service]
Type=simple
User=voice
Group=voice
WorkingDirectory=/opt/voice-pipeline
Environment="PATH=/opt/voice-pipeline/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
Environment="CUDA_VISIBLE_DEVICES=0"
ExecStart=/opt/voice-pipeline/venv/bin/python3 /opt/voice-pipeline/stt_service.py
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF

# TTS systemd service
cat > /tmp/alice-tts.service << 'EOF'
[Unit]
Description=A.L.I.C.E. TTS Service (Kokoro + Piper)
After=network.target

[Service]
Type=simple
User=voice
Group=voice
WorkingDirectory=/opt/voice-pipeline
Environment="PATH=/opt/voice-pipeline/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
Environment="CUDA_VISIBLE_DEVICES=0"
ExecStart=/opt/voice-pipeline/venv/bin/python3 /opt/voice-pipeline/tts_service.py
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF

# Copy to systemd, reload, enable, start
sudo cp /tmp/alice-stt.service /etc/systemd/system/
sudo cp /tmp/alice-tts.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable alice-stt
sudo systemctl enable alice-tts
sudo systemctl start alice-stt
sudo systemctl start alice-tts

# Verify
sudo systemctl status alice-stt --no-pager
sudo systemctl status alice-tts --no-pager
```

#### Step 2.10 — Firewall / Port Verification

```bash
ssh rob@100.106.110.119

# Confirm ports are listening
ss -tlnp | grep -E '876[567]'

# If ufw is active, allow the ports
sudo ufw status
sudo ufw allow 8765/tcp comment 'A.L.I.C.E. STT'
sudo ufw allow 8766/tcp comment 'A.L.I.C.E. TTS Kokoro'
sudo ufw allow 8767/tcp comment 'A.L.I.C.E. TTS Piper'
sudo ufw reload
```
|
|
617
|
+
|
|
618
|
+
#### Step 2.11 — End-to-End Smoke Test from any node
|
|
619
|
+
|
|
620
|
+
```bash
|
|
621
|
+
# From MacBook Air or any node:
|
|
622
|
+
# STT test (requires a short WAV file — generate a sine wave tone for testing)
|
|
623
|
+
# Using the Orin as the test source itself:
|
|
624
|
+
|
|
625
|
+
ssh rob@100.106.110.119
|
|
626
|
+
|
|
627
|
+
# Health checks
|
|
628
|
+
curl -s http://localhost:8765/health
|
|
629
|
+
curl -s http://localhost:8766/health
|
|
630
|
+
curl -s http://localhost:8767/health
|
|
631
|
+
|
|
632
|
+
# TTS primary test
|
|
633
|
+
curl -s -X POST http://localhost:8766/synthesize \
|
|
634
|
+
-H "Content-Type: application/json" \
|
|
635
|
+
-d '{"text": "A.L.I.C.E. voice pipeline is online.", "voice": "af_bella"}' \
|
|
636
|
+
-o /tmp/test_out.wav \
|
|
637
|
+
&& file /tmp/test_out.wav \
|
|
638
|
+
&& ls -lh /tmp/test_out.wav
|
|
639
|
+
|
|
640
|
+
# TTS fallback test
|
|
641
|
+
curl -s -X POST http://localhost:8767/synthesize \
|
|
642
|
+
-H "Content-Type: application/json" \
|
|
643
|
+
-d '{"text": "Testing the piper fallback."}' \
|
|
644
|
+
-o /tmp/test_fallback.wav \
|
|
645
|
+
&& file /tmp/test_fallback.wav
|
|
646
|
+
|
|
647
|
+
# STT test (use ffmpeg to generate a test WAV)
|
|
648
|
+
ffmpeg -f lavfi -i "sine=frequency=440:duration=3" -ar 16000 -ac 1 /tmp/test_tone.wav -y
|
|
649
|
+
curl -s -X POST http://localhost:8765/transcribe?language=en \
|
|
650
|
+
--data-binary @/tmp/test_tone.wav \
|
|
651
|
+
-H "Content-Type: audio/wav" | python3 -m json.tool
|
|
652
|
+
```
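`file /tmp/test_out.wav` only confirms the container type. A stricter check parses the WAV itself with the standard library — a minimal sketch (the path is the one written by the TTS test above):

```python
import sys
import wave

def check_wav(path):
    """Open a PCM WAV and return (sample_rate_hz, duration_seconds).

    Raises wave.Error if the file is not a valid WAV.
    """
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        return rate, w.getnframes() / rate

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/tmp/test_out.wav"
    rate, seconds = check_wav(path)
    print(f"{path}: {rate} Hz, {seconds:.2f}s of audio")
```

A zero-length or truncated response from the synthesize endpoint fails here immediately instead of slipping past the `file` check.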
**Phase 2 Rollback:**

```bash
ssh rob@100.106.110.119
sudo systemctl disable --now alice-stt alice-tts
sudo rm /etc/systemd/system/alice-stt.service /etc/systemd/system/alice-tts.service
sudo systemctl daemon-reload
# Optionally remove the venv and models:
# sudo rm -rf /opt/voice-pipeline
```

**Capabilities unlocked:** Real-time voice input (STT) and voice output (TTS) across the A.L.I.C.E. network. Agents can speak to users. Orin handles all voice processing at the edge.

---
### Phase 3 — Infrastructure Services on Ubuntu Desktop (Docker)

**Node:** Ubuntu Desktop (100.76.82.8)
**SSH:** `ssh rob@100.76.82.8`
**Risk:** Medium — Docker containers, port bindings, data volumes
**Rollback:** `docker compose down -v` (deletes volumes), or `docker stop <container>`

Services: Qdrant (6333), Redis (6379), Prometheus (9090), Grafana (3000).

#### Step 3.1 — Docker Readiness Check

```bash
ssh rob@100.76.82.8

# Verify Docker
docker --version          # Confirm Docker is installed
docker compose version    # Confirm the Compose v2 plugin is available

# Check the NVIDIA runtime (CUDA is on this machine)
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu22.04 \
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader

# If the 'nvidia' runtime is not registered:
# cat /etc/docker/daemon.json
# Should contain: {"default-runtime": "nvidia", ...}
```

#### Step 3.2 — Docker Compose File for All Infra Services

```bash
ssh rob@100.76.82.8

mkdir -p ~/alice-infra
cd ~/alice-infra

cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  # --- Qdrant Vector Database ---
  qdrant:
    image: qdrant/qdrant:v1.7.4
    container_name: alice-qdrant
    restart: unless-stopped
    ports:
      - "6333:6333"   # REST API
      - "6334:6334"   # gRPC
    volumes:
      - qdrant_data:/qdrant/storage
    environment:
      - QDRANT__SERVICE__GRPC_PORT=6334
      - QDRANT__LOG_LEVEL=INFO
    # GPU acceleration is not available in the Qdrant community image;
    # use nvidia-container-runtime for a future qdrant-standalone if needed

  # --- Redis (cache + session store) ---
  redis:
    image: redis:7.4-alpine
    container_name: alice-redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes --maxmemory 4gb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  # --- Prometheus (metrics collection) ---
  prometheus:
    image: prom/prometheus:v2.51.0
    container_name: alice-prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=30d'
      - '--web.enable-lifecycle'   # Allows reload via POST without a restart
    extra_hosts:
      - "host.docker.internal:host-gateway"

  # --- Grafana (dashboards) ---
  grafana:
    image: grafana/grafana:10.4.2
    container_name: alice-grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD:-alice-grafana-dev}
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_SERVER_ROOT_URL=http://localhost:3000
    depends_on:
      - prometheus

volumes:
  qdrant_data:
  redis_data:
  prometheus_data:
  grafana_data:
EOF
```
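Before `docker compose up`, it's worth confirming that no two services claim the same host port (and that nothing else on the box does either — the `ss -tlnp` check covers that part). A small, Docker-free sketch of the collision check, using the port map from the compose file above:

```python
def find_conflicts(port_map):
    """Return {host_port: [services]} for every port claimed more than once."""
    claims = {}
    for service, ports in port_map.items():
        for port in ports:
            claims.setdefault(port, []).append(service)
    return {p: svcs for p, svcs in claims.items() if len(svcs) > 1}

# Published host ports from docker-compose.yml
PORTS = {
    "qdrant": [6333, 6334],
    "redis": [6379],
    "prometheus": [9090],
    "grafana": [3000],
}

print(find_conflicts(PORTS) or "no host-port conflicts")
```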
#### Step 3.3 — Prometheus Configuration

```bash
ssh rob@100.76.82.8
cd ~/alice-infra

cat > prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers: []

rule_files: []

scrape_configs:
  # Prometheus self-monitoring
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Ollama exporters (if installed on each node)
  # Add one entry per node
  - job_name: 'ollama-nodes'
    static_configs:
      - targets:
          - '100.115.74.106:6060'   # Mac Studio
          - '100.107.132.71:6060'   # Mac Mini
          - '100.106.110.119:6060'  # Orin
          - '100.76.82.8:6060'      # Ubuntu Desktop
        labels:
          network: 'alice'
    scrape_interval: 30s

  # OpenClaw agent metrics (if exposed)
  - job_name: 'openclaw'
    static_configs:
      - targets: ['100.101.241.124:9091']

  # Infrastructure containers (reachable by service name on the compose network)
  - job_name: 'infra'
    static_configs:
      - targets:
          - 'qdrant:6333'
          # Redis does not expose Prometheus metrics natively;
          # run a redis_exporter sidecar and scrape that instead.
          # - 'redis:6379'
EOF
```
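Prometheus rejects the whole config file if a target is malformed, so a quick shape check before starting the stack saves a restart loop. `promtool check config prometheus.yml` inside the container is the authoritative validator; this sketch only checks the host:port form of the targets listed above:

```python
import re

# host:port — a bare hostname, compose service name, or IPv4 address plus a port
TARGET_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9.-]*:\d{1,5}$")

def valid_target(target):
    """True if target looks like a scrapeable host:port string."""
    return bool(TARGET_RE.match(target))

targets = [
    "localhost:9090",
    "100.115.74.106:6060", "100.107.132.71:6060",
    "100.106.110.119:6060", "100.76.82.8:6060",
    "100.101.241.124:9091", "qdrant:6333",
]
bad = [t for t in targets if not valid_target(t)]
print(bad or "all scrape targets well-formed")
```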
#### Step 3.4 — Grafana Provisioning (datasources + dashboard)

```bash
ssh rob@100.76.82.8
cd ~/alice-infra

mkdir -p grafana/provisioning/datasources
mkdir -p grafana/provisioning/dashboards

# Datasource provisioning (auto-connect Prometheus)
cat > grafana/provisioning/datasources/prometheus.yml << 'EOF'
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
    editable: false
EOF

# Dashboard provisioning
cat > grafana/provisioning/dashboards/dashboard.yml << 'EOF'
apiVersion: 1

providers:
  - name: 'A.L.I.C.E.'
    orgId: 1
    folder: ''
    folderUid: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 30
    allowUiUpdates: true
    options:
      path: /etc/grafana/provisioning/dashboards
EOF

# Create a basic A.L.I.C.E. overview dashboard JSON.
# Note: file-provisioned dashboards are the dashboard object itself —
# no top-level "dashboard" wrapper (that wrapper is only for the HTTP API).
cat > grafana/provisioning/dashboards/alice-overview.json << 'EOF'
{
  "title": "A.L.I.C.E. Network Overview",
  "uid": "alice-overview",
  "panels": [
    {
      "title": "Ollama Node Health",
      "type": "stat",
      "gridPos": {"h": 6, "w": 12, "x": 0, "y": 0},
      "targets": [{"expr": "up{job='ollama-nodes'}", "legendFormat": "{{instance}}"}]
    },
    {
      "title": "Qdrant Collection Count",
      "type": "stat",
      "gridPos": {"h": 6, "w": 6, "x": 12, "y": 0},
      "targets": [{"expr": "qdrantCollections", "legendFormat": "collections"}]
    },
    {
      "title": "Redis Connected Clients",
      "type": "timeseries",
      "gridPos": {"h": 6, "w": 12, "x": 0, "y": 6},
      "targets": [{"expr": "redis_connected_clients", "legendFormat": "clients"}]
    },
    {
      "title": "Prometheus tsdb_size",
      "type": "timeseries",
      "gridPos": {"h": 6, "w": 12, "x": 12, "y": 6},
      "targets": [{"expr": "prometheus_tsdb_storage_blocks_bytes", "legendFormat": "storage_bytes"}]
    }
  ]
}
EOF
```
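Grafana's file provisioner silently skips dashboards whose JSON doesn't match what it expects, so a local sanity check before restarting the container is cheap insurance. A sketch (the default path is the Step 3.4 file; the checks mirror the file-provisioning rules noted above):

```python
import json
import sys

def validate_dashboard(obj):
    """Return a list of problems with a file-provisioned Grafana dashboard."""
    problems = []
    if "dashboard" in obj:
        # The {"dashboard": {...}} wrapper is for the HTTP API, not file provisioning
        problems.append("remove the top-level 'dashboard' wrapper")
    for key in ("title", "uid", "panels"):
        if key not in obj:
            problems.append(f"missing '{key}'")
    return problems

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else \
        "grafana/provisioning/dashboards/alice-overview.json"
    with open(path) as f:
        print(validate_dashboard(json.load(f)) or "dashboard JSON looks sane")
```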
#### Step 3.5 — NVIDIA NIM Containers (Optional — CUDA node)

> **Note:** NVIDIA NIM (Inference Microservices) containers require an NGC API key and are primarily for enterprise models (NVIDIA-hosted or self-hosted). Given this is a dev environment on Ubuntu Desktop, evaluate whether NIM containers add value vs. Ollama running natively.

If NIM is desired (e.g., for NVIDIA NIM endpoints for embeddings/LLM):

```bash
ssh rob@100.76.82.8

# Authenticate to NGC (requires an API key from ngc.nvidia.com)
# docker login nvcr.io

# Example: NVIDIA NIM for embeddings (if an API key is available)
# docker run --rm --gpus all \
#   -e NGC_API_KEY=$NGC_API_KEY \
#   -p 8080:8080 \
#   nvcr.io/nim/nvidia/embeddings:1.0

# Example: NIM for a specific model (check NGC for the current image)
# docker run --rm --gpus all \
#   -e NGC_API_KEY=$NGC_API_KEY \
#   -p 8000:8000 \
#   nvcr.io/nim/nvidia/llm:1.0

# For now, skip NIM unless Rob provides NGC credentials.
echo "NIM containers skipped — awaiting NGC API key"
```
#### Step 3.6 — Start Services

```bash
ssh rob@100.76.82.8
cd ~/alice-infra

# Set the Grafana password (export before running compose)
export GRAFANA_PASSWORD="alice-grafana-dev"   # Change in prod

# Pull images first to catch any image errors early
docker compose pull

# Start all services
docker compose up -d

# Wait and check health
sleep 10
docker compose ps

# Show the last 30 log lines per service
docker compose logs --tail=30
```
#### Step 3.7 — Verify All Services

```bash
ssh rob@100.76.82.8

# Qdrant — REST API
curl -s http://localhost:6333/collections | python3 -m json.tool
# Expected: {"result": {"collections": []}, "status": "ok", ...}

# Qdrant — health (plain-text endpoint, not JSON)
curl -s http://localhost:6333/healthz

# Redis
docker exec alice-redis redis-cli ping
# Expected: PONG

docker exec alice-redis redis-cli info clients
# Check connected clients

# Prometheus
curl -s http://localhost:9090/-/healthy
# Expected: Prometheus is Healthy.

curl -s http://localhost:9090/api/v1/targets | \
  python3 -c "import sys,json; d=json.load(sys.stdin); print('Active targets:', len(d['data']['activeTargets']))"

# Grafana — datasource health
curl -s http://admin:alice-grafana-dev@localhost:3000/api/datasources/1/health
# Expected: {"status": "OK"}

# Grafana — overall health
curl -s http://localhost:3000/api/health | python3 -m json.tool
# Expected: {"commit":"...","database":"ok","version":"10.4.2"}
```
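The individual curls above can be rolled into one pass/fail summary. A standard-library sketch to run on the Ubuntu Desktop (endpoints are the ones configured in this phase; Redis is omitted because it doesn't speak HTTP — use `docker exec alice-redis redis-cli ping` for it):

```python
import urllib.request

CHECKS = {
    "qdrant": "http://localhost:6333/healthz",
    "prometheus": "http://localhost:9090/-/healthy",
    "grafana": "http://localhost:3000/api/health",
}

def probe(url, timeout=3):
    """Return (ok, detail) for one HTTP health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, f"HTTP {resp.status}"
    except OSError as exc:  # URLError, ConnectionRefusedError, timeouts
        return False, str(exc)

if __name__ == "__main__":
    for name, url in CHECKS.items():
        ok, detail = probe(url)
        print(f"{name:<12} {'OK  ' if ok else 'FAIL'} {detail}")
```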
**Phase 3 Rollback:**

```bash
ssh rob@100.76.82.8
cd ~/alice-infra

# Stop all containers (preserves volumes)
docker compose stop

# OR destroy everything (including data volumes — WARNING: loses all data):
# docker compose down -v

# Remove images if desired:
# docker compose down --rmi all
```

**Capabilities unlocked:** Vector storage (Qdrant) for RAG pipelines, caching/sessions (Redis), metrics collection (Prometheus), and observability dashboards (Grafana).

---
### Phase 4 — Ubuntu Desktop: Small GPU Models via Ollama

**Node:** Ubuntu Desktop (100.76.82.8)
**SSH:** `ssh rob@100.76.82.8`
**Risk:** Low — read-only model pulls, small files
**Rollback:** `ollama rm <model>`

#### Step 4.1 — Verify Ollama + GPU Access

```bash
ssh rob@100.76.82.8

ollama --version
nvidia-smi
# Confirm the GPU is visible to Ollama
```

#### Step 4.2 — Pull GPU-Accelerated Models

```bash
ssh rob@100.76.82.8

# phi-4-mini — Microsoft small language model (~2.5GB)
# Excellent for classification, function calling, low-latency tasks
ollama pull phi-4-mini

# llama3.2:3b — lightweight general-purpose (~2GB)
# Good for fast responses where full capacity is overkill
ollama pull llama3.2:3b

# Optional: verify GPU acceleration is being used.
# Run a quick benchmark:
time ollama run phi-4-mini --verbose "Count to 10:"
```
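With only 8GB of VRAM on this card, it's worth sanity-checking that the models you intend to keep resident fit together before Ollama starts offloading layers to CPU. A back-of-the-envelope sketch (model sizes are the estimates quoted above; the 1GB overhead figure for KV cache and runtime is an assumption, not a measured value):

```python
def vram_budget(models, vram_gb, overhead_gb=1.0):
    """Return (fits, used_gb): do the model estimates plus overhead fit in VRAM?"""
    used_gb = sum(models.values()) + overhead_gb
    return used_gb <= vram_gb, used_gb

MODELS = {"phi-4-mini": 2.5, "llama3.2:3b": 2.0}
fits, used = vram_budget(MODELS, vram_gb=8.0)
print(f"{'fits' if fits else 'OVER BUDGET'}: {used:.1f} GB of 8.0 GB")
```

Ollama unloads idle models after a keep-alive window by default, so exceeding the budget degrades latency rather than crashing — but keeping the resident set under VRAM avoids layer offload entirely.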
#### Step 4.3 — Register in Ollama (already automatic)

```bash
ssh rob@100.76.82.8

# List all models to confirm
ollama list

# Expected output:
# NAME                 SIZE    MODIFIED
# phi-4-mini:latest    2.5GB   <timestamp>
# llama3.2:3b          2.0GB   <timestamp>
# [other existing models...]

# Test each
ollama run phi-4-mini --verbose "What is 2+2?"
ollama run llama3.2:3b --verbose "What is 2+2?"
```

**Phase 4 Rollback:**

```bash
ollama rm phi-4-mini
ollama rm llama3.2:3b
```

**Capabilities unlocked:** Ultra-fast local inference on Ubuntu Desktop's 8GB-VRAM GPU: phi-4-mini (SLM, function calling) and lightweight general chat via llama3.2:3b. Reduces load on the Mac Studio for simple tasks.

---
### Phase 5 — OpenClaw Config Updates

**Node:** MacBook Air (100.101.241.124) — the OpenClaw host
**Risk:** Low — config file update, no services restarted
**Rollback:** Restore the previous `~/.openclaw/config.json`

#### Step 5.1 — Locate and Read Current Config

```bash
# From MacBook Air (or wherever the OpenClaw config lives)
ssh rob@100.101.241.124

# Find the config location
openclaw config show 2>/dev/null || \
  cat ~/.openclaw/config.json 2>/dev/null || \
  find ~ -name "config.json" -path "*/openclaw/*" 2>/dev/null | head -5

# Read the current config
python3 -m json.tool ~/.openclaw/config.json | head -100
```
#### Step 5.2 — JSON Config Patch

The exact patch depends on the current config schema. Below is a **representative patch** — verify against the actual `~/.openclaw/config.json` before applying.

```bash
ssh rob@100.101.241.124

# Back up the current config
cp ~/.openclaw/config.json ~/.openclaw/config.json.bak.$(date +%Y%m%d%H%M%S)

# Apply patches using python3 for an atomic update
python3 << 'EOF'
import json

CONFIG_PATH = "/Users/rob/.openclaw/config.json"

with open(CONFIG_PATH, "r") as f:
    cfg = json.load(f)

# === PATCH 1: Register new models in the model catalog ===
# Mac Studio: llama-guard3:8b added (already exists via ollama pull)
# Mac Mini: llama-guard3:8b added
# Ubuntu Desktop: phi-4-mini, llama3.2:3b added

# The exact schema varies by OpenClaw version. Below uses a common pattern:
# look for a "models", "endpoints", or "agents" section.

# Example schema (verify before applying):
# cfg.setdefault("models", {}).setdefault("catalog", {})
# cfg["models"]["catalog"]["llama-guard3:8b"] = {
#     "provider": "ollama",
#     "node": "mac-studio",
#     "address": "http://100.115.74.106:11434",
#     "enabled": True,
#     "safety": True,  # Safety model flag
# }

# === PATCH 2: Register voice endpoints ===
# cfg.setdefault("voice", {})
# cfg["voice"]["stt_endpoint"] = "http://100.106.110.119:8765/transcribe"
# cfg["voice"]["tts_primary"] = "http://100.106.110.119:8766/synthesize"
# cfg["voice"]["tts_fallback"] = "http://100.106.110.119:8767/synthesize"
# cfg["voice"]["tts_default_voice"] = "af_bella"

# === PATCH 3: Register infra endpoints ===
# cfg.setdefault("services", {})
# cfg["services"]["qdrant"] = "http://100.76.82.8:6333"
# cfg["services"]["redis"] = "redis://100.76.82.8:6379"
# cfg["services"]["prometheus"] = "http://100.76.82.8:9090"
# cfg["services"]["grafana"] = "http://100.76.82.8:3000"

with open(CONFIG_PATH, "w") as f:
    json.dump(cfg, f, indent=2)

print("Config patched OK")
EOF
```
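After patching, a structural diff against the backup confirms that only the intended sections changed. A small sketch (the file paths are the ones created in Step 5.2):

```python
import json

def changed_top_level_keys(old, new):
    """Top-level keys added, removed, or modified between two config dicts."""
    return sorted(k for k in set(old) | set(new) if old.get(k) != new.get(k))

if __name__ == "__main__":
    import glob
    backup = sorted(glob.glob("/Users/rob/.openclaw/config.json.bak.*"))[-1]
    with open(backup) as f:
        old = json.load(f)
    with open("/Users/rob/.openclaw/config.json") as f:
        new = json.load(f)
    # Expect only the patched sections, e.g. ['models', 'services', 'voice']
    print(changed_top_level_keys(old, new))
```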
#### Step 5.3 — Verify OpenClaw Sees the New Config

```bash
ssh rob@100.101.241.124

# Validate config syntax
openclaw config validate 2>/dev/null || \
  python3 -c "import json; json.load(open('/Users/rob/.openclaw/config.json'))" && \
  echo "Config JSON valid"

# Restart OpenClaw to pick up the changes
openclaw gateway restart

# Check status
openclaw gateway status
```

**Phase 5 Rollback:**

```bash
ssh rob@100.101.241.124
cp "$(ls -t ~/.openclaw/config.json.bak.* | head -1)" ~/.openclaw/config.json
openclaw gateway restart
```
---

## C. Risk + Rollback Summary Table

| Phase | Risk Level | Key Risk | Rollback Command |
|---|---|---|---|
| 1. Safety layer | Low | Disk space (~5GB per node) | `ollama rm llama-guard3:8b` |
| 2. Voice pipeline | Medium | Package install conflicts, systemd failures | `sudo systemctl disable --now alice-stt alice-tts` |
| 3. Infra Docker | Medium | Port conflicts (6333/6379/9090/3000), Docker daemon issues | `docker compose stop` (preserves volumes) |
| 4. GPU models | Low | Disk space (~4.5GB), GPU memory pressure | `ollama rm phi-4-mini llama3.2:3b` |
| 5. OpenClaw config | Low | Config syntax error breaks gateway | restore the latest `config.json.bak.*`, then `openclaw gateway restart` |

---
## D. OpenClaw Config Changes — Exact JSON Patches

> **IMPORTANT:** These patches are representative. Read the actual `~/.openclaw/config.json` first and adjust the paths/keys to match the real schema.

### D.1 — Model Registrations

```json
// Add to the models catalog (under the appropriate section)
"models": {
  "catalog": {
    "llama-guard3:8b": {
      "provider": "ollama",
      "node": "mac-studio",
      "address": "http://100.115.74.106:11434",
      "enabled": true,
      "safety_model": true,
      "capabilities": ["safety", "content-moderation"]
    },
    "llama-guard3:8b-macmini": {
      "provider": "ollama",
      "node": "mac-mini",
      "address": "http://100.107.132.71:11434",
      "enabled": true,
      "safety_model": true,
      "capabilities": ["safety", "content-moderation"]
    },
    "phi-4-mini": {
      "provider": "ollama",
      "node": "ubuntu-desktop",
      "address": "http://100.76.82.8:11434",
      "enabled": true,
      "gpu_accelerated": true,
      "capabilities": ["completion", "function_calling", "classification"],
      "vram_estimate_gb": 2.5
    },
    "llama3.2:3b": {
      "provider": "ollama",
      "node": "ubuntu-desktop",
      "address": "http://100.76.82.8:11434",
      "enabled": true,
      "gpu_accelerated": true,
      "capabilities": ["completion", "chat"],
      "vram_estimate_gb": 2.0
    }
  }
}
```
### D.2 — Voice Endpoints

```json
"voice": {
  "enabled": true,
  "stt": {
    "endpoint": "http://100.106.110.119:8765/transcribe",
    "engine": "faster-whisper-tensorrt",
    "model": "base",
    "default_language": "en",
    "vad": "silero-vad"
  },
  "tts": {
    "primary": {
      "endpoint": "http://100.106.110.119:8766/synthesize",
      "engine": "kokoro-cuda",
      "default_voice": "af_bella",
      "available_voices": ["af_bella", "af_nicole", "af_sarah", "am_adam", "am_michael"],
      "gpu_accelerated": true
    },
    "fallback": {
      "endpoint": "http://100.106.110.119:8767/synthesize",
      "engine": "piper",
      "default_voice": "en_US-lessac-medium",
      "gpu_accelerated": false
    }
  }
}
```
### D.3 — Infrastructure Service Endpoints

```json
"services": {
  "qdrant": {
    "address": "http://100.76.82.8:6333",
    "rest_port": 6333,
    "grpc_port": 6334,
    "purpose": "vector_storage"
  },
  "redis": {
    "address": "redis://100.76.82.8:6379",
    "port": 6379,
    "purpose": "cache_session",
    "maxmemory": "4gb",
    "maxmemory_policy": "allkeys-lru"
  },
  "prometheus": {
    "address": "http://100.76.82.8:9090",
    "port": 9090,
    "purpose": "metrics",
    "retention_days": 30,
    "scrape_interval": "15s"
  },
  "grafana": {
    "address": "http://100.76.82.8:3000",
    "port": 3000,
    "purpose": "observability",
    "default_user": "admin"
  }
}
```
### D.4 — Node Inventory (optional — for OpenClaw's awareness)

```json
"nodes": {
  "mac-studio": {
    "tailscale_ip": "100.115.74.106",
    "os": "macos",
    "ram_gb": 36,
    "role": "inference_primary",
    "models": ["lfm2:24b", "deepseek-r1:14b", "qwen3.5:35b", "qwen3-coder:30b", "llama-guard3:8b"]
  },
  "mac-mini": {
    "tailscale_ip": "100.107.132.71",
    "os": "macos",
    "ram_gb": 24,
    "role": "inference_relay",
    "models": ["qwen3.5:27b", "llama-guard3:8b"]
  },
  "orin": {
    "tailscale_ip": "100.106.110.119",
    "os": "ubuntu-arm",
    "ram_gb": 64,
    "role": "voice_edge",
    "cuda_version": "12.6",
    "services": ["stt:8765", "tts:8766", "tts-fallback:8767"]
  },
  "ubuntu-desktop": {
    "tailscale_ip": "100.76.82.8",
    "os": "ubuntu-x86",
    "ram_gb": 64,
    "vram_gb": 8,
    "role": "infrastructure_gpu",
    "cuda_version": "12.x",
    "models": ["phi-4-mini", "llama3.2:3b"],
    "services": ["qdrant:6333", "redis:6379", "prometheus:9090", "grafana:3000"]
  },
  "macbook-air": {
    "tailscale_ip": "100.101.241.124",
    "os": "macos",
    "ram_gb": 24,
    "role": "orchestration_only"
  }
}
```
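The inventory above can double as the source of truth for batch operations instead of hard-coding IPs in shell loops. A sketch (the node map is repeated here as a plain dict; `user` defaults to the `rob` account used throughout):

```python
NODES = {
    "mac-studio": "100.115.74.106",
    "mac-mini": "100.107.132.71",
    "orin": "100.106.110.119",
    "ubuntu-desktop": "100.76.82.8",
    "macbook-air": "100.101.241.124",
}

def ssh_targets(nodes, user="rob", exclude=()):
    """ssh destinations for every node except the excluded ones."""
    return [f"{user}@{ip}" for name, ip in sorted(nodes.items())
            if name not in exclude]

# Don't SSH to the orchestration host you're running from
print(ssh_targets(NODES, exclude=("macbook-air",)))
```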
---

## E. Validation & Testing Steps

### Pre-flight (all nodes)

```bash
# From MacBook Air (orchestrator), verify all nodes are reachable:
# (-W is the reply timeout — seconds on Linux ping, milliseconds on macOS ping)
for ip in 100.115.74.106 100.107.132.71 100.106.110.119 100.76.82.8; do
  echo "=== Pinging $ip ==="
  ping -c 2 -W 2 $ip && echo "OK" || echo "FAIL"
done
```
### Phase 1 Validation

```bash
# Test the safety model on Mac Studio
ssh rob@100.115.74.106 "ollama run llama-guard3:8b 'You are a helpful assistant.' --verbose" | head -5
# Llama Guard classifies the input; expect a "safe" verdict for benign text

# Test the safety model on Mac Mini
ssh rob@100.107.132.71 "ollama run llama-guard3:8b 'You are a helpful assistant.' --verbose" | head -5
```
### Phase 2 Validation

```bash
# All from the Orin
ssh rob@100.106.110.119

# Systemd services up
systemctl is-active alice-stt   # should print "active"
systemctl is-active alice-tts   # should print "active"

# HTTP health endpoints
curl -s http://localhost:8765/health | python3 -m json.tool
curl -s http://localhost:8766/health | python3 -m json.tool
curl -s http://localhost:8767/health | python3 -m json.tool

# STT functional test (requires the test WAV)
curl -s -X POST http://localhost:8765/transcribe \
  -H "Content-Type: audio/wav" \
  --data-binary @/tmp/test_tone.wav | python3 -m json.tool

# TTS functional test
curl -s -X POST http://localhost:8766/synthesize \
  -H "Content-Type: application/json" \
  -d '{"text":"Devon online. Voice pipeline confirmed.","voice":"af_bella"}' \
  -o /tmp/devon_voice.wav
file /tmp/devon_voice.wav

# Remote test (from MacBook Air)
curl -s http://100.106.110.119:8765/health
curl -s -X POST http://100.106.110.119:8766/synthesize \
  -H "Content-Type: application/json" \
  -d '{"text":"Testing from MacBook Air.","voice":"af_bella"}' -o /tmp/remote_test.wav
file /tmp/remote_test.wav
```
### Phase 3 Validation

```bash
# All from the Ubuntu Desktop
ssh rob@100.76.82.8

# Container health
docker compose ps

# Individual service checks
curl -sf http://localhost:6333/healthz || echo "QDRANT FAIL"
docker exec alice-redis redis-cli ping || echo "REDIS FAIL"
curl -sf http://localhost:9090/-/healthy || echo "PROMETHEUS FAIL"
curl -sf http://localhost:3000/api/health || echo "GRAFANA FAIL"

# Prometheus targets
curl -s http://localhost:9090/api/v1/targets | python3 -c \
  "import sys,json; d=json.load(sys.stdin); [print(t['labels']['job'], t['health']) for t in d['data']['activeTargets']]"

# Grafana login
curl -s -u admin:alice-grafana-dev http://localhost:3000/api/health | python3 -m json.tool
```
### Phase 4 Validation

```bash
ssh rob@100.76.82.8

ollama list | grep -E 'phi-4-mini|llama3.2:3b'

# GPU inference test
time ollama run phi-4-mini --verbose "What is the capital of France?"
time ollama run llama3.2:3b --verbose "What is 2+2?"
```
### Phase 5 Validation

```bash
ssh rob@100.101.241.124

# Config valid
python3 -c "import json; json.load(open('/Users/rob/.openclaw/config.json'))" && echo "JSON OK"

# Gateway up
openclaw gateway status

# List registered models
openclaw models list 2>/dev/null || openclaw model list 2>/dev/null || echo "Check model list via web UI"
```
---

## F. Capabilities Unlocked by Phase

| Phase | Capability Added | Who's Empowered |
|---|---|---|
| **1** | Content safety + moderation on all Mac Studio + Mac Mini inference | All agents |
| **2** | Real-time voice STT + TTS at the network edge (Orin) | Voice agents, accessibility |
| **2** | Voice activity detection (Silero VAD) | Voice pipeline |
| **3** | Vector storage (Qdrant) — RAG at scale | Knowledge agents |
| **3** | Redis caching — session persistence, rate limiting | All agents |
| **3** | Prometheus metrics — per-node, per-model | Devon (observability) |
| **3** | Grafana dashboards — visual health monitoring | Rob, Devon |
| **4** | Ultra-fast phi-4-mini GPU inference on Ubuntu Desktop | Fast-response agents |
| **4** | Lightweight llama3.2:3b GPU inference | Lightweight tasks |
| **5** | OpenClaw aware of all new models, voice endpoints, infra services | Full A.L.I.C.E. stack |

---

## G. SSH Quick Reference (Tailscale IPs)

```bash
# MacBook Air (orchestration host — likely the machine you're running from)
ssh rob@100.101.241.124

# Mac Studio (always-on inference)
ssh rob@100.115.74.106

# Mac Mini (inference relay)
ssh rob@100.107.132.71

# Alpha/AGX Orin (voice pipeline)
ssh rob@100.106.110.119

# Ubuntu Desktop (infra + GPU models)
ssh rob@100.76.82.8

# SSH with identity file (if not using SSH agent):
ssh -i ~/.ssh/id_ed25519 rob@<tailscale-ip>

# Batch-run the same command on all nodes:
for ip in 100.115.74.106 100.107.132.71 100.106.110.119 100.76.82.8; do
  echo "=== $ip ===" && ssh rob@$ip "uptime" && echo ""
done
```
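
The batch loop above is worth keeping as a function. This sketch reuses the node list from this section and adds a connect timeout so one dead host doesn't hang the sweep; the `RUNNER` override hook is an added convenience (not part of the plan), useful for dry runs:

```shell
# fanout CMD... -- run CMD on every node, marking dead hosts instead of aborting.
NODES="100.115.74.106 100.107.132.71 100.106.110.119 100.76.82.8"
fanout() {
  local ip
  for ip in $NODES; do
    echo "=== $ip ==="
    # Unquoted on purpose: the default runner is a multi-word command.
    ${RUNNER:-ssh -o ConnectTimeout=5} "rob@$ip" "$@" || echo "UNREACHABLE: $ip"
  done
}

# fanout uptime                # live sweep over Tailscale
# RUNNER=echo fanout uptime    # dry run: print what would be executed
```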

---

## H. Implementation Order Recommendation

```
Week 1, Day 1:
└─ Phase 1 (Safety) — 15 min, low risk, easy rollback
└─ Phase 4 (GPU models) — 10 min, parallel with Phase 1

Week 1, Day 2:
└─ Phase 3 (Infra Docker) — 30 min, do first so Prometheus starts collecting early
└─ Phase 2 (Voice pipeline) — 60-90 min, most complex, do last

Week 1, Day 3:
└─ Phase 5 (OpenClaw config) — 15 min, final integration
└─ Full validation run

Or: run Phases 1+4 in parallel (same day), then 2+3 in parallel, then 5.
```

---

## I. Post-Implementation Checklist

- [ ] All 5 nodes reachable via Tailscale SSH
- [ ] `llama-guard3:8b` present and responsive on Mac Studio + Mac Mini
- [ ] Voice STT endpoint returns correct JSON on Orin (port 8765)
- [ ] Voice TTS Kokoro returns WAV on Orin (port 8766)
- [ ] Voice TTS Piper fallback returns audio on Orin (port 8767)
- [ ] All systemd voice services `active (running)`
- [ ] Qdrant responding on 6333 (Ubuntu Desktop)
- [ ] Redis `PONG` on 6379
- [ ] Prometheus `/-/healthy` on 9090
- [ ] Grafana `database: ok` on 3000
- [ ] `phi-4-mini` and `llama3.2:3b` listed in `ollama list` on Ubuntu Desktop
- [ ] OpenClaw config passes JSON validation
- [ ] OpenClaw gateway `status: running`
- [ ] Prometheus scraping at least one target
- [ ] Grafana connected to Prometheus datasource
- [ ] Document service accounts + any new ports opened in network firewall docs
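
Most of these checkboxes are scriptable. The sketch below automates a few of them; the endpoint paths are the ones used earlier in this plan, while Qdrant's `/healthz` route and the unauthenticated Grafana health call are assumptions to verify against your versions:

```shell
# check LABEL CMD... -- run CMD quietly, print PASS/FAIL with the label.
check() {
  local label="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS  $label"
  else
    echo "FAIL  $label"
  fi
}

# The probes need the Tailscale network, so gate them behind an env var:
if [ "${RUN_CHECKS:-0}" = "1" ]; then
  check "Qdrant 6333"      curl -sf --max-time 5 http://100.76.82.8:6333/healthz
  check "Prometheus 9090"  curl -sf --max-time 5 http://100.76.82.8:9090/-/healthy
  check "Grafana 3000"     curl -sf --max-time 5 http://100.76.82.8:3000/api/health
  check "Redis 6379"       redis-cli -h 100.76.82.8 ping
fi
```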

---

*End of Implementation Plan — Devon, A.L.I.C.E. DevOps*