@kolbo/kolbo-code-linux-arm64-musl 0.0.0-dev-202604161628
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/kolbo +0 -0
- package/package.json +14 -0
- package/skills/brainstorming/SKILL.md +164 -0
- package/skills/brainstorming/scripts/frame-template.html +214 -0
- package/skills/brainstorming/scripts/helper.js +88 -0
- package/skills/brainstorming/scripts/server.cjs +354 -0
- package/skills/brainstorming/scripts/start-server.sh +148 -0
- package/skills/brainstorming/scripts/stop-server.sh +56 -0
- package/skills/brainstorming/spec-document-reviewer-prompt.md +49 -0
- package/skills/brainstorming/visual-companion.md +287 -0
- package/skills/color-grading/SKILL.md +152 -0
- package/skills/dispatching-parallel-agents/SKILL.md +182 -0
- package/skills/docx/.skillfish.json +10 -0
- package/skills/docx/SKILL.md +196 -0
- package/skills/docx/docx-js.md +350 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chart.xsd +1499 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd +146 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd +1085 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd +11 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-main.xsd +3081 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-picture.xsd +23 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd +185 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd +287 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/pml.xsd +1676 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd +28 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd +144 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd +174 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd +25 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd +18 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd +59 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd +56 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd +195 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-math.xsd +582 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd +25 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/sml.xsd +4439 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-main.xsd +570 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd +509 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd +12 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd +108 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd +96 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/wml.xsd +3646 -0
- package/skills/docx/ooxml/schemas/ISO-IEC29500-4_2016/xml.xsd +116 -0
- package/skills/docx/ooxml/schemas/ecma/fouth-edition/opc-contentTypes.xsd +42 -0
- package/skills/docx/ooxml/schemas/ecma/fouth-edition/opc-coreProperties.xsd +50 -0
- package/skills/docx/ooxml/schemas/ecma/fouth-edition/opc-digSig.xsd +49 -0
- package/skills/docx/ooxml/schemas/ecma/fouth-edition/opc-relationships.xsd +33 -0
- package/skills/docx/ooxml/schemas/mce/mc.xsd +75 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-2010.xsd +560 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-2012.xsd +67 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-2018.xsd +14 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-cex-2018.xsd +20 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-cid-2016.xsd +13 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-sdtdatahash-2020.xsd +4 -0
- package/skills/docx/ooxml/schemas/microsoft/wml-symex-2015.xsd +8 -0
- package/skills/docx/ooxml/scripts/pack.py +159 -0
- package/skills/docx/ooxml/scripts/unpack.py +29 -0
- package/skills/docx/ooxml/scripts/validate.py +69 -0
- package/skills/docx/ooxml/scripts/validation/__init__.py +15 -0
- package/skills/docx/ooxml/scripts/validation/base.py +951 -0
- package/skills/docx/ooxml/scripts/validation/docx.py +274 -0
- package/skills/docx/ooxml/scripts/validation/pptx.py +315 -0
- package/skills/docx/ooxml/scripts/validation/redlining.py +279 -0
- package/skills/docx/ooxml.md +599 -0
- package/skills/docx/scripts/__init__.py +1 -0
- package/skills/docx/scripts/document.py +1272 -0
- package/skills/docx/scripts/templates/comments.xml +3 -0
- package/skills/docx/scripts/templates/commentsExtended.xml +3 -0
- package/skills/docx/scripts/templates/commentsExtensible.xml +3 -0
- package/skills/docx/scripts/templates/commentsIds.xml +3 -0
- package/skills/docx/scripts/templates/people.xml +3 -0
- package/skills/docx/scripts/utilities.py +374 -0
- package/skills/executing-plans/SKILL.md +70 -0
- package/skills/ffmpeg-patterns/SKILL.md +240 -0
- package/skills/finishing-a-development-branch/SKILL.md +200 -0
- package/skills/frontend-design/SKILL.md +42 -0
- package/skills/fullstack-app/SKILL.md +621 -0
- package/skills/image-prompting-guide/SKILL.md +143 -0
- package/skills/kolbo/SKILL.md +610 -0
- package/skills/music-prompting/SKILL.md +146 -0
- package/skills/pdf/.skillfish.json +10 -0
- package/skills/pdf/FORMS.md +205 -0
- package/skills/pdf/REFERENCE.md +612 -0
- package/skills/pdf/SKILL.md +293 -0
- package/skills/pdf/scripts/check_bounding_boxes.py +70 -0
- package/skills/pdf/scripts/check_bounding_boxes_test.py +226 -0
- package/skills/pdf/scripts/check_fillable_fields.py +12 -0
- package/skills/pdf/scripts/convert_pdf_to_images.py +35 -0
- package/skills/pdf/scripts/create_validation_image.py +41 -0
- package/skills/pdf/scripts/extract_form_field_info.py +152 -0
- package/skills/pdf/scripts/fill_fillable_fields.py +114 -0
- package/skills/pdf/scripts/fill_pdf_form_with_annotations.py +108 -0
- package/skills/photo-studio/SKILL.md +130 -0
- package/skills/pptx/.skillfish.json +10 -0
- package/skills/pptx/SKILL.md +483 -0
- package/skills/pptx/html2pptx.md +626 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chart.xsd +1499 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd +146 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd +1085 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd +11 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-main.xsd +3081 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-picture.xsd +23 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd +185 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd +287 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/pml.xsd +1676 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd +28 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd +144 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd +174 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd +25 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd +18 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd +59 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd +56 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd +195 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-math.xsd +582 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd +25 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/sml.xsd +4439 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-main.xsd +570 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd +509 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd +12 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd +108 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd +96 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/wml.xsd +3646 -0
- package/skills/pptx/ooxml/schemas/ISO-IEC29500-4_2016/xml.xsd +116 -0
- package/skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-contentTypes.xsd +42 -0
- package/skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-coreProperties.xsd +50 -0
- package/skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-digSig.xsd +49 -0
- package/skills/pptx/ooxml/schemas/ecma/fouth-edition/opc-relationships.xsd +33 -0
- package/skills/pptx/ooxml/schemas/mce/mc.xsd +75 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-2010.xsd +560 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-2012.xsd +67 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-2018.xsd +14 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-cex-2018.xsd +20 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-cid-2016.xsd +13 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-sdtdatahash-2020.xsd +4 -0
- package/skills/pptx/ooxml/schemas/microsoft/wml-symex-2015.xsd +8 -0
- package/skills/pptx/ooxml/scripts/pack.py +159 -0
- package/skills/pptx/ooxml/scripts/unpack.py +29 -0
- package/skills/pptx/ooxml/scripts/validate.py +69 -0
- package/skills/pptx/ooxml/scripts/validation/__init__.py +15 -0
- package/skills/pptx/ooxml/scripts/validation/base.py +951 -0
- package/skills/pptx/ooxml/scripts/validation/docx.py +274 -0
- package/skills/pptx/ooxml/scripts/validation/pptx.py +315 -0
- package/skills/pptx/ooxml/scripts/validation/redlining.py +279 -0
- package/skills/pptx/ooxml.md +427 -0
- package/skills/pptx/scripts/html2pptx.js +995 -0
- package/skills/pptx/scripts/inventory.py +1020 -0
- package/skills/pptx/scripts/rearrange.py +231 -0
- package/skills/pptx/scripts/replace.py +385 -0
- package/skills/pptx/scripts/thumbnail.py +450 -0
- package/skills/production-review/SKILL.md +152 -0
- package/skills/receiving-code-review/SKILL.md +213 -0
- package/skills/remotion-best-practices/SKILL.md +62 -0
- package/skills/remotion-best-practices/rules/3d.md +86 -0
- package/skills/remotion-best-practices/rules/animations.md +27 -0
- package/skills/remotion-best-practices/rules/assets/charts-bar-chart.tsx +173 -0
- package/skills/remotion-best-practices/rules/assets/text-animations-typewriter.tsx +100 -0
- package/skills/remotion-best-practices/rules/assets/text-animations-word-highlight.tsx +103 -0
- package/skills/remotion-best-practices/rules/assets.md +78 -0
- package/skills/remotion-best-practices/rules/audio-visualization.md +198 -0
- package/skills/remotion-best-practices/rules/audio.md +169 -0
- package/skills/remotion-best-practices/rules/calculate-metadata.md +134 -0
- package/skills/remotion-best-practices/rules/can-decode.md +81 -0
- package/skills/remotion-best-practices/rules/charts.md +120 -0
- package/skills/remotion-best-practices/rules/compositions.md +154 -0
- package/skills/remotion-best-practices/rules/display-captions.md +184 -0
- package/skills/remotion-best-practices/rules/extract-frames.md +229 -0
- package/skills/remotion-best-practices/rules/ffmpeg.md +38 -0
- package/skills/remotion-best-practices/rules/fonts.md +152 -0
- package/skills/remotion-best-practices/rules/get-audio-duration.md +58 -0
- package/skills/remotion-best-practices/rules/get-video-dimensions.md +68 -0
- package/skills/remotion-best-practices/rules/get-video-duration.md +60 -0
- package/skills/remotion-best-practices/rules/gifs.md +141 -0
- package/skills/remotion-best-practices/rules/images.md +134 -0
- package/skills/remotion-best-practices/rules/import-srt-captions.md +69 -0
- package/skills/remotion-best-practices/rules/light-leaks.md +73 -0
- package/skills/remotion-best-practices/rules/lottie.md +70 -0
- package/skills/remotion-best-practices/rules/maps.md +412 -0
- package/skills/remotion-best-practices/rules/measuring-dom-nodes.md +34 -0
- package/skills/remotion-best-practices/rules/measuring-text.md +140 -0
- package/skills/remotion-best-practices/rules/motion-design.md +215 -0
- package/skills/remotion-best-practices/rules/parameters.md +109 -0
- package/skills/remotion-best-practices/rules/sequencing.md +118 -0
- package/skills/remotion-best-practices/rules/sfx.md +30 -0
- package/skills/remotion-best-practices/rules/subtitles.md +36 -0
- package/skills/remotion-best-practices/rules/tailwind.md +11 -0
- package/skills/remotion-best-practices/rules/text-animations.md +20 -0
- package/skills/remotion-best-practices/rules/timing.md +179 -0
- package/skills/remotion-best-practices/rules/transcribe-captions.md +70 -0
- package/skills/remotion-best-practices/rules/transitions.md +197 -0
- package/skills/remotion-best-practices/rules/transparent-videos.md +106 -0
- package/skills/remotion-best-practices/rules/trimming.md +51 -0
- package/skills/remotion-best-practices/rules/videos.md +171 -0
- package/skills/remotion-best-practices/rules/voiceover.md +99 -0
- package/skills/requesting-code-review/SKILL.md +105 -0
- package/skills/requesting-code-review/code-reviewer.md +146 -0
- package/skills/short-form-video/SKILL.md +168 -0
- package/skills/sound-design/SKILL.md +154 -0
- package/skills/storytelling/SKILL.md +139 -0
- package/skills/subagent-driven-development/SKILL.md +277 -0
- package/skills/subagent-driven-development/code-quality-reviewer-prompt.md +26 -0
- package/skills/subagent-driven-development/implementer-prompt.md +113 -0
- package/skills/subagent-driven-development/spec-reviewer-prompt.md +61 -0
- package/skills/subtitle-production/SKILL.md +244 -0
- package/skills/subtitle-production/reference/burn_to_video.py +222 -0
- package/skills/subtitle-production/reference/export_srts.py +127 -0
- package/skills/subtitle-production/reference/gen_srt.py +42 -0
- package/skills/supabase/.skillfish.json +10 -0
- package/skills/supabase/SKILL.md +106 -0
- package/skills/supabase/assets/feedback-issue-template.md +17 -0
- package/skills/supabase/references/skill-feedback.md +17 -0
- package/skills/supabase-postgres-best-practices/.skillfish.json +10 -0
- package/skills/supabase-postgres-best-practices/SKILL.md +64 -0
- package/skills/supabase-postgres-best-practices/references/_contributing.md +170 -0
- package/skills/supabase-postgres-best-practices/references/_sections.md +39 -0
- package/skills/supabase-postgres-best-practices/references/_template.md +34 -0
- package/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md +55 -0
- package/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md +49 -0
- package/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md +46 -0
- package/skills/supabase-postgres-best-practices/references/conn-limits.md +44 -0
- package/skills/supabase-postgres-best-practices/references/conn-pooling.md +41 -0
- package/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md +46 -0
- package/skills/supabase-postgres-best-practices/references/data-batch-inserts.md +54 -0
- package/skills/supabase-postgres-best-practices/references/data-n-plus-one.md +53 -0
- package/skills/supabase-postgres-best-practices/references/data-pagination.md +50 -0
- package/skills/supabase-postgres-best-practices/references/data-upsert.md +50 -0
- package/skills/supabase-postgres-best-practices/references/lock-advisory.md +56 -0
- package/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md +68 -0
- package/skills/supabase-postgres-best-practices/references/lock-short-transactions.md +50 -0
- package/skills/supabase-postgres-best-practices/references/lock-skip-locked.md +54 -0
- package/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md +45 -0
- package/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md +55 -0
- package/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md +55 -0
- package/skills/supabase-postgres-best-practices/references/query-composite-indexes.md +44 -0
- package/skills/supabase-postgres-best-practices/references/query-covering-indexes.md +40 -0
- package/skills/supabase-postgres-best-practices/references/query-index-types.md +48 -0
- package/skills/supabase-postgres-best-practices/references/query-missing-indexes.md +43 -0
- package/skills/supabase-postgres-best-practices/references/query-partial-indexes.md +45 -0
- package/skills/supabase-postgres-best-practices/references/schema-constraints.md +80 -0
- package/skills/supabase-postgres-best-practices/references/schema-data-types.md +46 -0
- package/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md +59 -0
- package/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md +55 -0
- package/skills/supabase-postgres-best-practices/references/schema-partitioning.md +55 -0
- package/skills/supabase-postgres-best-practices/references/schema-primary-keys.md +61 -0
- package/skills/supabase-postgres-best-practices/references/security-privileges.md +54 -0
- package/skills/supabase-postgres-best-practices/references/security-rls-basics.md +50 -0
- package/skills/supabase-postgres-best-practices/references/security-rls-performance.md +57 -0
- package/skills/supabase-quickstart/SKILL.md +400 -0
- package/skills/systematic-debugging/CREATION-LOG.md +119 -0
- package/skills/systematic-debugging/SKILL.md +296 -0
- package/skills/systematic-debugging/condition-based-waiting-example.ts +158 -0
- package/skills/systematic-debugging/condition-based-waiting.md +115 -0
- package/skills/systematic-debugging/defense-in-depth.md +122 -0
- package/skills/systematic-debugging/find-polluter.sh +63 -0
- package/skills/systematic-debugging/root-cause-tracing.md +169 -0
- package/skills/systematic-debugging/test-academic.md +14 -0
- package/skills/systematic-debugging/test-pressure-1.md +58 -0
- package/skills/systematic-debugging/test-pressure-2.md +68 -0
- package/skills/systematic-debugging/test-pressure-3.md +69 -0
- package/skills/test-driven-development/SKILL.md +371 -0
- package/skills/test-driven-development/testing-anti-patterns.md +299 -0
- package/skills/typography-video/SKILL.md +182 -0
- package/skills/typography-video/reference/KineticTitleScene.tsx +345 -0
- package/skills/using-git-worktrees/SKILL.md +218 -0
- package/skills/using-superpowers/SKILL.md +115 -0
- package/skills/using-superpowers/references/codex-tools.md +100 -0
- package/skills/using-superpowers/references/gemini-tools.md +33 -0
- package/skills/verification-before-completion/SKILL.md +139 -0
- package/skills/video-editing/SKILL.md +128 -0
- package/skills/video-production/SKILL.md +247 -0
- package/skills/video-prompting-guide/SKILL.md +268 -0
- package/skills/writing-plans/SKILL.md +152 -0
- package/skills/writing-plans/plan-document-reviewer-prompt.md +49 -0
- package/skills/writing-skills/SKILL.md +655 -0
- package/skills/writing-skills/anthropic-best-practices.md +1150 -0
- package/skills/writing-skills/examples/CLAUDE_MD_TESTING.md +189 -0
- package/skills/writing-skills/graphviz-conventions.dot +172 -0
- package/skills/writing-skills/persuasion-principles.md +187 -0
- package/skills/writing-skills/render-graphs.js +168 -0
- package/skills/writing-skills/testing-skills-with-subagents.md +384 -0
- package/skills/xlsx/.skillfish.json +10 -0
- package/skills/xlsx/SKILL.md +288 -0
- package/skills/xlsx/recalc.py +178 -0
- package/skills/youtube-clipper/SKILL.md +187 -0
@@ -0,0 +1,610 @@
---
name: kolbo
description: Generate, edit, or analyze creative media through Kolbo AI. Load this skill whenever the user asks to create, edit, prompt, or analyze images, videos, music, speech, sound effects, 3D models — or to transcribe audio/video, manage media, use Visual DNA for consistency, check credits, or browse models/presets/moodboards. It contains the MCP tool workflow and the prompt-engineering rules for each media type.
---

# Kolbo AI — Creative Generation, Analysis & Transcription

You have direct access to the Kolbo AI creative platform via MCP tools (auto-configured by `kolbo auth login`). Use them to generate and deliver real content — do NOT just describe what you would create.

## Available MCP Tools

### Generation

| Tool | Description |
|------|-------------|
| `generate_image` | Create images from text prompts. Supports Visual DNA, moodboards, reference images, batch generation, web-search grounding. |
| `generate_image_edit` | Edit/transform an existing image (background removal, color changes, compositing). Pass source images + edit prompt. |
| `generate_creative_director` | Generate a coordinated multi-scene set (1–8 scenes) from one creative brief. Ideal for storyboards, ad campaigns, product showcases. Supports image and video modes. |
| `generate_video` | Create videos from text prompts. Supports Visual DNA and reference images for consistency. |
| `generate_video_from_image` | Animate a still image into video. Prompt describes the motion, not the subject. |
| `generate_video_from_video` | Restyle/transform an existing video (style transfer, scene restyling, subject swap). Keeps the original motion. |
| `generate_elements` | Generate video from reference assets (images/videos) + prompt. Use when animating specific uploaded assets. |
| `generate_first_last_frame` | Generate video that morphs from a first frame to a last frame (keyframe interpolation). |
| `generate_lipsync` | Lipsync an audio track to a source image or video face. Accepts local files or URLs. |
| `generate_music` | Create music from descriptions. Supports instrumental, custom lyrics, style, vocal gender. |
| `generate_speech` | Convert text to speech (TTS). Default: ElevenLabs. Use `list_voices` to pick a voice. |
| `generate_sound` | Generate sound effects from descriptions (foley, ambient, impacts, UI sounds). |
| `generate_3d` | Generate 3D models from text, single image, or multi-view images. Returns GLB, FBX, OBJ, USDZ. |

### Transcription & Analysis

| Tool | Description |
|------|-------------|
| `transcribe_audio` | Transcribe audio or video into text + SRT subtitles + word-by-word SRT. Accepts local files or URLs. |

### Voice & Model Discovery

| Tool | Description |
|------|-------------|
| `list_models` | Browse available AI models filtered by type. |
| `list_voices` | List available TTS voices with filtering by provider, language, gender. |
| `check_credits` | Check remaining Kolbo credit balance. |
| `get_generation_status` | Poll status of an in-progress generation by ID (fallback for timeouts). |

### Media Library

| Tool | Description |
|------|-------------|
| `upload_media` | Upload ANY local file to Kolbo CDN → returns a public URL. Works for images, videos, audio, HTML, documents — any file type. Use for: feeding media to `chat_send_message`, sharing files publicly, hosting HTML pages, or multi-tool workflows. |
| `list_media` | Browse user's uploaded media with filtering by type and search. |

### Visual DNA (Character/Style Consistency)

| Tool | Description |
|------|-------------|
| `create_visual_dna` | Create a Visual DNA profile from reference images/video/audio for character, style, product, or scene consistency. |
| `list_visual_dnas` | List your Visual DNA profiles (id, name, type, thumbnail). |
| `get_visual_dna` | Fetch full profile details including system_prompt and reference images. |
| `delete_visual_dna` | Delete a Visual DNA profile. |

### Moodboards & Presets

| Tool | Description |
|------|-------------|
| `list_moodboards` | List available moodboards (personal, system presets, org). |
| `get_moodboard` | Fetch a moodboard's master_prompt, style_guide, and images. |
| `list_presets` | Browse generation presets (image/video/music templates with bundled style direction). |

### Chat & Vision

| Tool | Description |
|------|-------------|
| `chat_send_message` | Send a message to Kolbo AI chat. Pass `media_urls` (array of public URLs) to analyze images, videos, or audio — Smart Select auto-routes to Gemini vision when media is detected. Omit `model` for automatic routing. Supports web search and deep think modes. |
| `chat_list_conversations` | List your SDK chat conversations. |
| `chat_get_messages` | Fetch messages in a conversation (with media URLs). |

## ⚠️ Generate vs Edit — Know the Difference

| User intent | Action | NOT this |
|-------------|--------|----------|
| "Create a video from scratch" / "Generate a video of..." | `generate_video` (Kolbo MCP) | — |
| "Edit this video" / "Cut" / "Trim" / "Crop" / "Merge" / "Add subtitles" / "Remove silence" / "Speed up" / "Convert to 9:16" | Load `video-production` skill → FFmpeg | ❌ Do NOT call `generate_video` |
| "Create motion graphics" / "Animated text" / "Title sequence" | Load `remotion-best-practices` skill → Remotion | ❌ Do NOT call `generate_video` |
| "Animate this image" / "Make this photo move" | `generate_video_from_image` (Kolbo MCP) | — |
| "Restyle this video as anime" | `generate_video_from_video` (Kolbo MCP) | — |

**`generate_video` creates NEW videos from text prompts. It cannot edit, cut, trim, merge, or modify existing video files.** For any operation on an existing video file, use FFmpeg via the `video-production` skill.

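The routing table above is effectively a small dispatcher. A sketch of that decision logic (illustrative only: the tool and skill names are from the tables above, but the keyword matching and `route_video_request` helper are hypothetical, not part of the Kolbo SDK):

```python
# Illustrative sketch of the generate-vs-edit routing rules.
# Tool/skill names are real; the dispatch function itself is hypothetical.

def route_video_request(intent: str) -> str:
    """Map a user request about video to the right tool or skill."""
    text = intent.lower()
    edit_keywords = {"cut", "trim", "crop", "merge", "subtitles",
                     "remove silence", "speed up", "convert"}
    if any(k in text for k in edit_keywords):
        return "skill:video-production"        # FFmpeg, never generate_video
    if "motion graphics" in text or "animated text" in text:
        return "skill:remotion-best-practices"
    if "animate this image" in text or "make this photo move" in text:
        return "tool:generate_video_from_image"
    if "restyle" in text:
        return "tool:generate_video_from_video"
    return "tool:generate_video"               # new video from scratch
```

The key design point is that edit intents are checked first, so an existing file is never routed to text-to-video generation.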
## Core Workflow

1. **Check credits** ONCE per conversation with `check_credits`. Skip if you already checked earlier in this session.
2. **Discover models** with `list_models` using a `type` filter — but **skip this when the user names a specific model** (e.g. "seedance 2 fast"). Only call `list_models` when you need to discover or compare models.
3. **Pick the model**: If the user explicitly requested a specific model, use that name directly. Otherwise, **prefer the cheapest model that still has great quality** — look at both `credit` cost and `recommended` status from `list_models`.
4. **How generation calls work**: Each tool call blocks until the generation is fully complete (the MCP server polls the API internally). For images this is seconds; for video it can be minutes. If a call times out, use `get_generation_status` with the returned generation ID. When you output multiple tool calls in a single response, they run concurrently — so batch calls finish in the time of the slowest one, not the sum.
5. **Share the URL** — after a successful generation, hand the real URL back to the user. Never fabricate URLs.

**For batch operations** (generating multiple items at once), see the "Rate Limiting & Batch Generation" section below — it overrides the per-item steps above.

### Model Types (for `list_models`)

| Type | Use for |
|------|---------|
| `image` | Still-image generation |
| `image_edit` | Image editing / transformation |
| `video` | Text-to-video |
| `video_from_image` | Image-to-video animation |
| `lipsync` | Audio-to-face lipsync |
| `music` | Music generation |
| `speech` | Text-to-speech |
| `sound` | Sound effects |
| `three_d` | 3D model generation |

### Cost Awareness

Creative generations bill against the user's Kolbo credit balance. **Billing units differ by type** — always apply the correct formula before generating.

| Type | Billing unit | Credit range | Example |
|------|-------------|-------------|---------|
| **Image** | per image (flat) | 1–30 cr | Flux.1 Fast = 1 cr, Midjourney = 4 cr, 4K variants cost more |
| **Image edit** | per image (flat) | 2–20 cr | |
| **Video** | **cr/s × duration** | 2–30 cr/s | Kandinsky 5 Fast × 5s = 10 cr; Seedance 2.0 × 10s = 300 cr |
| **Video from image** | **cr/s × duration** | 4–30 cr/s | Same per-second rule as text-to-video |
| **Lipsync** | **cr/s × duration** | 5–20 cr/s | |
| **Music** | per generation (flat) | 15–60 cr | Suno v5 = 15 cr; ElevenLabs Music = 60 cr |
| **Speech (TTS)** | per 100 characters | 2–5 cr/100 chars | ElevenLabs (5) × 500 chars = 25 cr; Google (2) × 500 chars = 10 cr |
| **Sound effects** | per generation (flat) | 4–7 cr | |
| **3D model** | per model (flat) | 5–300 cr | Trellis = 5 cr; Meshy v6 = 150 cr; Marble 1.1 = 300 cr |
| **Transcription (stt)** | per minute of audio | model.credit × duration_minutes | |

**Calculation formulas — apply when confirming cost:**
- **Video / Lipsync**: `total = model_credit_per_second × duration_seconds`
  - Get the `credit` value from `list_models` (or from a previous call in this session) and multiply by duration.
  - Never assume the credit shown is a flat per-generation cost for these types.
- **Music**: flat per generation — `total = model_credit` (duration does not change the cost).
- **TTS**: `total = model_credit × ceil(character_count / 100)`
  - Count the actual characters in the text before estimating. 1000 chars with ElevenLabs = 50 credits.
- **Images / 3D / Sound effects**: `total = model_credit × quantity`

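The formulas above can be collected into one estimator. The sketch below assumes `credit` is the per-unit value returned by `list_models` for the chosen model; the `generation_cost` helper itself is illustrative, not part of the SDK:

```python
import math

# Sketch of the billing formulas above. `credit` is the model's per-unit
# value from list_models; which unit applies depends on the media type.

def generation_cost(media_type: str, credit: float, *,
                    duration_s: float = 0, chars: int = 0,
                    quantity: int = 1) -> float:
    """Estimate total credits for one generation request."""
    if media_type in ("video", "video_from_image", "lipsync"):
        return credit * duration_s               # billed per second
    if media_type == "speech":
        return credit * math.ceil(chars / 100)   # billed per 100 characters
    if media_type == "music":
        return credit                            # flat per generation
    return credit * quantity                     # image / 3D / sound: per item
```

For example, a 2 cr/s video model for 5 seconds comes to 10 credits, and 500 characters of TTS at 5 cr/100 chars comes to 25 credits, matching the examples in the table.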
**Cost confirmation — know when to skip it:**
- **User specified everything** (model, count, duration, e.g. "make 5 videos, seedance 2 fast, 15s, 16:9"): **ACT IMMEDIATELY** — that IS the confirmation. Do not re-explain costs or ask again.
- **Single generation under 5 credits**: proceed without confirmation.
- **Everything else**: calculate total cost, present a summary, and wait for the user to confirm before generating.

**When confirmation IS needed:**
1. Calculate per-item cost using the formulas above.
2. Multiply by the number of items.
3. Present a summary: "This will generate 8 videos × 5s each using [model] at X cr/s = **Y credits total**. Proceed?"
4. **Suggest cheaper alternatives** if available.
5. Only proceed after the user confirms.

### Rate Limiting & Batch Generation (CRITICAL)
|
|
152
|
+
|
|
153
|
+
**Rate limits** (per user, enforced server-side):
|
|
154
|
+
- **10 generation requests per minute per tool type** (e.g. 10 video + 10 image = fine, but 11 video in 1 minute = 429)
|
|
155
|
+
- **300 requests per minute** global across all media endpoints
|
|
156
|
+
- **Uploads** (`upload_media`): 300/min, no credit cost — much lighter than generation
|
|
157
|
+
- The API **queues** requests internally — it never silently drops them. If you're within limits, every request will be processed.

**⚠️ NEVER duplicate a generation you already fired.**
Before calling any generation tool, check your conversation history. If you already called that tool with the same or similar prompt in this session:
- Do NOT call it again — even if it was aborted or interrupted (it is still running server-side and will complete)
- Only retry if the user explicitly says "retry", "redo", or "try again"
- Each duplicate wastes real credits from the user's balance
- If unsure whether a generation went through, use `get_generation_status` to check — the API returns 202 immediately and processes in the background, so aborted tool calls still generate

**Batch generation workflow (≤10 items):**
1. Confirm cost ONCE — or skip if the user already specified model, count, and duration (e.g. "make 5 videos, seedance 2 fast, 15s" IS the confirmation — act immediately)
2. **Output ALL generation tool calls in a single response** — up to 10 per tool type. The system runs them concurrently, so 5 videos render in parallel and finish in the time of the slowest one, not 5× the time.
3. Each call blocks until its generation is complete (images: seconds, video: 1-5 minutes). This is normal — don't apologize for the wait.
4. Track what you've generated — never re-fire a completed or in-progress generation.
5. After all complete, present all results together.
6. If any fail with 429: wait 60 seconds and retry only the failed ones (max 2 retries).
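
The retry step above can be sketched as a loop that re-fires only the failures. This is illustrative pseudologic under stated assumptions: `fire` stands in for the actual generation tool call, and `RateLimited` stands in for a 429 response — neither is a real Kolbo API name.

```python
import time

class RateLimited(Exception):
    """Stand-in for a 429 'Too many generation requests' error."""

def run_batch(jobs, fire, max_retries=2, backoff_seconds=60):
    """Fire every job once, then retry ONLY the ones that failed with 429."""
    results, pending = {}, dict(jobs)
    for attempt in range(max_retries + 1):
        failed = {}
        for job_id, params in pending.items():
            try:
                results[job_id] = fire(params)   # completed jobs are never re-fired
            except RateLimited:
                failed[job_id] = params
        pending = failed
        if not pending:
            break
        if attempt < max_retries:
            time.sleep(backoff_seconds)          # let the 60-second window reset
    return results, pending                      # pending = jobs still failing
```

The key property is that `results` only ever grows: a job that succeeded in an earlier pass is removed from `pending` and never fired again, matching the no-duplicates rule above.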

**Batch images**: use `generate_creative_director` for 5+ coordinated images — one request handles multi-scene.

**Don't narrate, just generate.** When the user says "make 5 videos", output all 5 tool calls in one response. Don't explain your plan, don't calculate step-by-step, don't say "Generating Video 1 of 5..." — just call the tools.

**Handling interruptions:** If the user aborts or interrupts mid-batch (e.g. cancels Video 1, then says "do the rest" or "continue with 2-5"), pick up where you left off. Check which generations you already fired, skip those, and fire only the remaining ones. Never restart a batch from the beginning. Remember: aborted tool calls still process server-side — don't re-fire them.

---

## Transcription & Audio/Video Analysis

Use `transcribe_audio` ONLY when the user explicitly asks for:
- A text transcript
- Subtitles (SRT format)
- Word-by-word timed subtitles (for karaoke, motion graphics, Remotion captions, video editing)
- A summary of what was **spoken/said** in the video
- Dialogue extraction from video

**Do NOT use `transcribe_audio` to "analyze" a video visually.** For visual analysis **of videos or audio**, use `upload_media` → `chat_send_message` with `media_urls`. For **images**, use the `Read` tool directly — you have built-in vision.

### Workflow
1. Call `transcribe_audio` with the `source` (URL or absolute local file path)
2. The tool returns:
   - `text` — full transcript as plain text
   - `srt_url` — download URL for grouped SRT subtitles (configurable words-per-line)
   - `word_by_word_srt_url` — download URL for **word-by-word SRT** (one word per subtitle entry with precise timestamps from ElevenLabs Scribe v2)
   - `txt_url` — download URL for plain text file
   - `duration` — audio duration in seconds
3. Analyze the transcript text as needed (summarize, translate, extract topics, answer questions about content)

### Supported Formats
- **Audio**: mp3, wav, m4a, flac, aac
- **Video** (extracts audio track): mp4, mov, webm, mkv, avi, m4v

### Word-by-Word Transcription
The `word_by_word_srt_url` contains an SRT file where each subtitle entry is a **single word** with precise start/end timestamps (powered by ElevenLabs Scribe v2). This is ideal for:
- **Karaoke-style captions** — highlight one word at a time
- **Remotion/motion graphics** — animate text word-by-word synced to audio
- **Video editing** — precise cut points aligned to speech
- **Accessibility** — word-level navigation for hearing-impaired users

The regular `srt_url` groups words into readable subtitle lines (default 12 words per line, up to 2 lines per subtitle).
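
If you need the word timings programmatically (e.g. to drive Remotion captions), the word-by-word file is plain SRT and parses in a few lines. A minimal sketch, assuming standard SRT structure (`index`, `HH:MM:SS,mmm --> HH:MM:SS,mmm`, text, blank line) — the sample string below is illustrative, not real tool output:

```python
import re

SRT_TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def srt_seconds(ts: str) -> float:
    """Convert an SRT timestamp like 00:00:01,250 to seconds."""
    h, m, s, ms = map(int, SRT_TIME.match(ts).groups())
    return h * 3600 + m * 60 + s + ms / 1000

def parse_word_srt(srt_text: str):
    """Yield (word, start_s, end_s) from a word-by-word SRT file."""
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed or empty blocks
        start, end = (srt_seconds(t.strip()) for t in lines[1].split("-->"))
        yield " ".join(lines[2:]), start, end

sample = "1\n00:00:00,000 --> 00:00:00,420\nWelcome\n\n2\n00:00:00,420 --> 00:00:00,900\nto"
words = list(parse_word_srt(sample))
```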

### Use Cases & Examples
- "Transcribe this podcast" → `transcribe_audio` with the audio URL
- "What's being said in this video?" → `transcribe_audio` → analyze the returned text
- "Generate subtitles for my video" → `transcribe_audio` → share the `srt_url`
- "I need word-by-word timing for this audio" → `transcribe_audio` → share `word_by_word_srt_url`
- "Summarize this meeting recording" → `transcribe_audio` → summarize the text
- "Extract key points from this lecture" → `transcribe_audio` → analyze and extract

### Long Content
Transcription supports files up to 30 minutes. For longer content, split the file first or provide segments.

### Visual Video/Audio/Image Analysis

**The agent has built-in vision — ALWAYS prefer your own model for images:**

| Media type | How to analyze |
|------------|----------------|
| **Image** (jpg, png, webp, etc.) | **Read it directly with the `Read` tool** — you see images natively. No upload, no API call, no rate-limit risk. This is ALWAYS the first choice for images. |
| **Video / Audio** | `upload_media` → `chat_send_message` with `media_urls` (Gemini handles video/audio) |
| **Transcription** | `transcribe_audio` — ONLY when the user explicitly says "transcribe", "subtitles", "SRT", or "what's being said" |

**⚠️ Image analysis priority: YOUR OWN VISION FIRST.**
You are a multimodal model — you can see and analyze images directly via the `Read` tool. This is faster, free, and avoids API rate limits. **Never upload images to Kolbo or use `chat_send_message` for image analysis** unless the user explicitly asks to use a specific Kolbo chat model. Even with many images, read them all yourself, in batches of up to 10 images per analysis pass.

**NEVER use ffmpeg or frame extraction for analysis. NEVER ask the user — just pick the right path above.**

**Video/Audio analysis workflow — Step 1 is NOT optional:**
1. `upload_media({ source: "/absolute/local/path/to/file.mp4" })` → returns `{ url, thumbnail_url, ... }`
   - **Use `url`** — the actual CDN URL. Ignore `thumbnail_url` (preview JPG only).
2. `chat_send_message({ message: "<your question>", media_urls: [result.url] })`
   - **`media_urls` is mandatory** — the model only sees the video if you pass the CDN URL here.
   - Always an **array**: `media_urls: ["https://cdn.kolbo.ai/..."]`
   - **Omit `model`** — Smart Select auto-routes to Gemini when media is detected
   - **Sessions do NOT remember media between messages.** On retry: reuse the same CDN `url` (no re-upload) but always pass `media_urls` again.
   - **Batch / many videos**: use `list_models` to find the cheapest Gemini model and pass it explicitly for cheaper bulk runs

### ⚠️ Batching Media in Chat Messages (CRITICAL)

**Always send ALL media in ONE `chat_send_message` call.** The `media_urls` array accepts up to **10 URLs** in a single request. Never send one message per image/video.

**Why this matters:** Each `upload_media` call + the final `chat_send_message` all count toward rate limits. Sending 10 uploads + 10 separate chat messages = 20 requests in rapid succession → "Too many generation requests" error. Instead:

1. Upload all files at once (output all `upload_media` calls in one response — uploads are 300/min and cost no credits).
2. Collect ALL returned CDN URLs into one array.
3. Send ONE `chat_send_message` with all URLs in `media_urls`.

**Example — analyzing 5 videos:**
```
# Step 1: Upload all in one response (all 5 upload_media calls at once)
upload_media({ source: "video1.mp4" }) → url1
upload_media({ source: "video2.mp4" }) → url2
upload_media({ source: "video3.mp4" }) → url3
upload_media({ source: "video4.mp4" }) → url4
upload_media({ source: "video5.mp4" }) → url5

# Step 2: ONE chat call with ALL media URLs
chat_send_message({
  message: "Analyze all 5 videos...",
  media_urls: [url1, url2, url3, url4, url5]
})
```

**Rate limit recovery:** If you hit "Too many generation requests", wait 60 seconds before retrying. On retry, do NOT re-upload — reuse the CDN URLs from step 1.

**❌ Never do this:**
- Pass a local file path in `media_urls` — it won't work; only CDN URLs work
- Use the `.txt` URL from a transcription result as the video URL — that's text, not video
- Skip `upload_media` and try to construct a URL yourself
- Send separate `chat_send_message` calls for each media file — batch them into ONE call

When in doubt, do visual analysis. Do not stop to ask.

---

## Image Prompts

### Rules
- **Clean prompts only.** No "Output:", "Tips:", "Notes:", "Resolution:", "Dimensions:", or any instructional/meta language inside the prompt. The prompt is what the model sees — anything not describing the image is noise.
- **Length**: 2-3 focused sentences beat a bloated paragraph. Only go longer when the concept genuinely needs it (complex scenes, multiple subjects, specific technical requirements). Match prompt length to complexity.
- **Order**: Subject → action/pose → environment → lighting → style.
- **Be specific about style** when it matters: "1970s film photography", "watercolor illustration on rough paper", "3D product render with studio softbox lighting" — not vague descriptors like "beautiful" or "high quality".
- **`enhance_prompt: true`** (default) will improve most prompts automatically. Turn it off only if the user's prompt is already fully engineered or they want literal wording.

### Image Editing (image-to-image)

Use `generate_image_edit` when the user wants to modify an existing image. Pass the source image URL(s) in `source_images` and describe the change in `prompt`.

- Good: "Turn the sky orange and add drifting clouds"
- Bad: "A mountain landscape with an orange sky and drifting clouds" (re-describes what's already in the image)

Simple edits deserve simple prompts. Only elaborate for genuinely complex, multi-step transformations.

### Multi-Scene / Campaigns
For storyboards, campaigns, or character-consistent sequences, use `generate_creative_director` — it generates 1–8 coordinated scenes from a single creative brief with consistent style. Pass `visual_dna_ids` and/or `moodboard_id` for character/style consistency across all scenes.

In the CLI, you can also do multiple `generate_image` calls (in parallel for batches) with the same Visual DNA profiles.

---

## Visual DNA (Character/Style Consistency)

Visual DNA profiles capture the visual "identity" of a character, style, product, or scene from reference media.

### Workflow
1. **Create** a profile with `create_visual_dna` — provide reference images (max 4), optionally video and audio
2. **Types**: `character` (default), `style`, `product`, `scene`
3. **Use** the profile by passing its `id` in `visual_dna_ids` when calling any generation tool
4. **List/inspect** profiles with `list_visual_dnas` / `get_visual_dna`

### When to Use
- User wants the same character across multiple images/videos
- User wants a consistent brand style across a campaign
- User references "keep the same look" or "same character"
- User provides reference photos of a person/product to maintain consistency

---

## Video Prompts

Video costs more per generation than images — write prompts deliberately to get it right the first time.

### Core Rules
- **Order**: Subject → Action → Camera → Style → Constraints → Audio
- **Length**: 80-280 words. Shorter = random. Longer = the model forgets the start.
- **Always specify at least one camera movement per shot.** Even "static wide shot" is a valid explicit choice — just don't leave it unsaid.
- **Character consistency**: when a character appears across shots, begin the prompt with the literal phrase `same character throughout all shots` to prevent identity drift.
- **Max 3 shots per prompt.** More shots cause the model to drift.
- **Duration-aware timecodes**: if the user gives a duration, space timecodes to fit (`[0s] [3s]` for 5s total; `[0s] [3s] [6s]` for 10s total). If no duration is given, describe shots sequentially without hardcoded timecodes.
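
One way to mechanize the timecode spacing is shown below. Note the rule itself (a beat every ~3 seconds, keeping at least ~2 seconds for the final beat) is an inference from the two examples above, not a documented formula — treat the helper as a hypothetical sketch:

```python
def space_timecodes(duration_s: int, beat_s: int = 3, min_tail_s: int = 2) -> str:
    """Space [Xs] markers across a clip: one beat every `beat_s` seconds,
    keeping at least `min_tail_s` seconds of clip for the final beat."""
    marks = []
    t = 0
    while duration_s - t >= min_tail_s:
        marks.append(f"[{t}s]")
        t += beat_s
    return " ".join(marks)
```

Under these assumptions, a 5-second clip yields `[0s] [3s]` and a 10-second clip yields `[0s] [3s] [6s]`, matching the examples in the rule above.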

### Image-to-Video
The model can see the starting frame. Describe **what happens**, not what the image looks like. Focus on motion, camera, and action — don't re-describe the subject or setting.
- Good: "Slow dolly-in on the subject. Her hair drifts in a light breeze. Soft particles float through the air. [6s]"
- Bad: "A woman with long brown hair standing in a forest, wearing a red dress, with golden sunlight..." (re-describes the image)

### Video-to-Video (Restyle)
Use `generate_video_from_video` to restyle an existing video. Describe the **new style**, not the original content — the model preserves the original motion.
- Good: "Transform into anime style with cel-shading and vibrant colors"
- Bad: "A person walking down a street" (re-describes what's already in the video)

### Elements (Reference Assets → Video)
Use `generate_elements` when the user has specific assets (product photos, character references) they want animated into a video. Pass them as `reference_images` (URLs) or `files` (local paths).

### First/Last Frame (Keyframe Interpolation)
Use `generate_first_last_frame` when the user provides two keyframes and wants the model to create a smooth transition between them.

### Lipsync
Use `generate_lipsync` to sync audio to a face in an image or video. Both `source` (face) and `audio` accept URLs or local file paths.

### Camera Vocabulary

Pick what fits the mood. Every shot gets at least one.

| Movement | Use for |
|----------|---------|
| `slow dolly-in` | Building intensity, focus pull |
| `pull-back` / `dolly out` | Scale reveal, loneliness, context |
| `extreme low-angle` | Power, heroic framing |
| `overhead top-down` | Geometry, pattern, abstraction |
| `360° orbit` | Product showcase, bullet-time moments |
| `handheld natural lag` | Urgency, documentary, grit |
| `tracking shot` | Continuous follow of a subject |
| `crash zoom` | Shock, impact moment |
| `aerial pull-back` | Epic reveal, landscape scale |
| `static drift` | Contemplative, subtle, meditative |
| `crane up` / `crane down` | Grandeur, establishing, dismissal |
| `whip pan` | Sharp transition, high energy |

### Physics Vocabulary (only name what matters for the scene)

- **Cloth**: `cloth inertia`, `fabric lags behind movement`
- **Water**: `water splashing with surface tension`, `droplets scattering`, `puddle mirror reflection`
- **Sand / dust**: `sand displacement`, `radial dust shockwave`
- **Hair**: `hair reacts to acceleration and wind`
- **Impact**: `skin distorting on impact`, `delayed follow-through`
- **Smoke**: `volumetric smoke curling and dissipating`

Don't stuff every category in every prompt — only name the physics that genuinely drives the shot.

### Multi-Shot Format

When the user wants a sequence (trailer, story, showcase), write each shot as a brief 1-2 sentence entry on its own line inside the prompt:

```
Shot 1: [action + camera movement]
Shot 2: [action + camera movement]
Shot 3: [action + camera movement]
```

Think like a director. Describe what **happens**, not what things **look** like.

### Mood Presets

Pick techniques that match the user's intent. A calm landscape and an action sequence need different tools.

- **Cinematic / dramatic**: slow dolly-in, anamorphic 2.39:1, shallow depth of field, volumetric light, subtle film grain
- **Product showcase**: 360° orbit, clean white or gradient backdrop, macro detail inserts, smooth tracking
- **Dreamy / ethereal**: slow crane up, soft diffused light, gentle particle drift, muted pastels, static drift moments
- **Action / intense**: crash zoom, handheld natural lag, extreme slow-motion at the peak beat, high contrast, fast cuts
- **Nature / landscape**: aerial pull-back, golden hour lighting, wind physics on foliage, wide establishing shots
- **Abstract / motion graphics**: overhead top-down, geometric patterns, bold color blocks, rhythmic cutting

### Slow-Motion

Extreme slow-motion is a tool, not a freeze frame. Always describe the micro-movements that *continue* during the slow beat (hair drifting, droplets crawling, fabric rippling), and specify the snap-back to full speed when relevant.

Format: `extreme slow-motion [Xs] — [micro-movements in ultra slow-mo] — snap-back to full speed`

---

## 3D Generation

Use `generate_3d` for creating 3D models. Three modes:
- **Text mode**: prompt-only (e.g., "a medieval sword with ornate handle")
- **Single image mode**: one reference image + optional prompt
- **Multi-view mode**: 2+ reference images for higher-quality reconstruction

Returns downloadable model files in GLB, FBX, OBJ, and USDZ formats. Use `list_models` with `type: "three_d"` to discover available models.

---

## Music Prompts

Describe **genre → mood → instrumentation → tempo → era**, in that order.

- `instrumental: true` excludes vocals.
- `lyrics` accepts actual lyric text the model should sing.
- `style` accepts short genre tags ("lo-fi hip hop", "orchestral cinematic", "80s synthwave").
- Good: "Upbeat 80s synthwave, analog synths, gated reverb drums, 120 BPM, driving bassline, no vocals"
- Bad: "A cool song" / "Something for a workout" (too vague)

---

## Speech (TTS)

- Call `list_voices` to find available voices. Filter by `provider`, `language`, or `gender`.
- Pass the returned `voice_id` (or the voice's display name like "Rachel") as the `voice` parameter in `generate_speech`.
- For multilingual content, pick a voice that supports the target language.
- For long text, split at natural sentence boundaries. Each generation has a character cap; chunk long-form content into multiple calls.
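
Sentence-boundary chunking can be sketched as below. A minimal sketch, assuming sentences end in `.`, `!`, or `?`; the `max_chars` value is a placeholder, since the real character cap varies by model:

```python
import re

def chunk_text(text: str, max_chars: int = 900) -> list[str]:
    """Split text into TTS-sized chunks, breaking only at sentence boundaries."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)   # flush before the cap would be exceeded
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Since TTS bills per 100 characters rounded up, packing chunks close to the cap also wastes fewer billing units than many tiny calls.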

---

## Sound Effects

- Describe the sound **literally and physically**. Avoid emotional framing.
- Good: "Heavy wooden door creaking open slowly, echoing in a stone hallway, followed by distant dripping water"
- Bad: "A scary sound" / "Creepy atmosphere" (the model can't render emotions directly — render the physical source)

---

## Moodboards & Presets

**Moodboards** provide style direction (master prompt + style guide + reference images). Pass a `moodboard_id` to any generation tool to apply its style.
- `list_moodboards` to browse available options
- `get_moodboard` to see full details before applying

**Presets** bundle prompt templates + style direction for specific creative looks. Pass a `preset_id` to generation tools.
- `list_presets` with optional `type` filter ("image", "video", "music", "text_to_video")

---

## Media Library

Use `upload_media` to upload local files or URLs to the Kolbo CDN for stable hosting. Useful when:
- A local file needs to be referenced in multiple generation calls
- You want a permanent CDN URL instead of an ephemeral local path

Use `list_media` to browse previously uploaded content (filter by type, search by name).

---

## Chat

Use `chat_send_message` to interact with Kolbo AI models (GPT-4o, Claude, etc.) with optional web search and deep think modes. Conversations persist via `session_id` — omit it to start a new conversation, pass it to continue one.

**Media in chat:** Always batch all media into a single message. `media_urls` accepts up to 10 URLs per call. See the "Batching Media in Chat Messages" section above for the mandatory workflow.

Use `chat_list_conversations` and `chat_get_messages` to browse conversation history.

---

## Image Analysis (when the user uploads images)

When the user shares an image and asks about it:

- **Analyze thoroughly**: describe composition, subjects, colors, lighting, style, text/signage, setting, mood, visible objects, and any embedded information (charts, diagrams, screenshots).
- **Reference specific regions** when helpful: "top-left corner", "in the foreground", "the figure on the right".
- **Extract text verbatim** when asked (OCR-style requests are fine).
- **Cannot identify real people.** Describe hair, clothing, pose, expression, and apparent role — but never name a specific individual, even a well-known public figure. If the user insists, decline and offer to describe instead.
- **Copyrighted content**: summarize and reference; don't reproduce large chunks verbatim.
- If the user wants an **edit** based on the analysis, hand off to `generate_image_edit` (visual edit) or `generate_video_from_image` (motion).

---

## Limitations & Safety

- **Real people**: never identify specific real individuals in photos, even public figures. Describe visible attributes only.
- **NSFW**: Kolbo enforces content safety at the model level. If a generation fails on safety grounds, rephrase the prompt rather than retrying identically.
- **Copyright**: style references are fine (e.g. "in the style of Studio Ghibli"); verbatim reproduction of copyrighted material is not.
- **No fabricated URLs**: only share URLs that actually came back from a tool call. Never guess a URL.

---

## Sharing HTML Artifacts

When you generate an HTML, SVG, or Mermaid artifact in the chat, a **Share** button appears in the artifact preview toolbar (next to Desktop / Mobile). Clicking it:

1. Uploads the artifact to Kolbo's hosting platform
2. Copies a permanent public URL to the clipboard (e.g. `https://api.kolbo.ai/api/shared-artifact-raw/<token>`)
3. Shows a toast confirming the link was copied

Anyone with the URL can view the rendered page — no login required.

**Requirements:** You must be logged in (`kolbo auth login`). The share button returns an error toast if you are not authenticated.

---

## Kolbo Code Documentation

Full public documentation for Kolbo Code (the CLI you are running inside) lives at **[docs.kolbo.ai/docs/kolbo-code](https://docs.kolbo.ai/docs/kolbo-code)**. If the user asks about installation, authentication, voice input, supported languages, commands, or how to uninstall, point them to the matching page below rather than guessing:

| Topic | Path |
|-------|------|
| Overview & quick links | `/docs/kolbo-code` |
| Installation (npm / bun / brew / scoop / choco) | `/docs/kolbo-code/installation` |
| Sign in with Kolbo (device-code OAuth) | `/docs/kolbo-code/authentication` |
| Push-to-talk voice input (hold `space`) | `/docs/kolbo-code/voice-input` |
| 12 supported UI languages + RTL | `/docs/kolbo-code/languages` |
| Full CLI command reference | `/docs/kolbo-code/commands` |
| Uninstall + cleanup | `/docs/kolbo-code/uninstall` |

The MDX sources are in the `kolbo-docs` repo under `content/docs/kolbo-code/`. When the user's question has a concrete answer in one of those pages, cite the path and summarize — do not invent new instructions.

## Troubleshooting

### "API key is invalid or expired"
This usually means the CLI is sending a key to the wrong API endpoint.

**Common cause — whitelabel overlap:** if the user previously used regular `kolbo` and then switched to a whitelabel/partner CLI (e.g. `sapir`), the old API key may still be cached against the main Kolbo API. Running `kolbo` instead of the branded command (`sapir`) overwrites the MCP config with the wrong endpoint.

**Fix:** tell the user to re-authenticate with their branded CLI command:
```
sapir auth login
```
(Replace `sapir` with their actual CLI command.)

Then **restart the editor/session** so the MCP picks up the new key and endpoint.

**Important:** whitelabel users must always use their branded CLI command (e.g. `sapir`), not `kolbo`, to keep the MCP pointed at the correct API.

### MCP tools not responding or not found
If Kolbo tools time out or aren't listed, the MCP server may not be wired up. Tell the user to run:
```
<their-cli-command> auth login
```
This re-wires the MCP configuration automatically. Then restart the session.

### "Rate limited" (429 errors)
Kolbo allows 10 generation requests per minute per user per tool type (video, image, etc. are separate pools). Wait 60 seconds (the window resets) and retry only the failed calls. Use `generate_creative_director` for batch image work instead of multiple `generate_image` calls. The API queues requests — it never silently drops them.

---

## Examples

Natural-language triggers that should prompt this skill + a tool call:

- "Generate an image of a neon-lit Tokyo street at night" → `list_models` (image) → `generate_image`
- "Use Midjourney to generate a Tokyo street" → `generate_image` with model "midjourney" (user named the model — skip `list_models`)
- "Remove the background from this image" → `list_models` (image_edit) → `generate_image_edit`
- "Create a storyboard for a coffee brand ad" → `list_models` (image) → `generate_creative_director`
- "Create a 5-second cinematic video of ocean waves at sunset" → `list_models` (video) → `generate_video` with camera + mood guidance
- "Make 5 videos with Seedance 2 Fast, 15s, 16:9" → fire all 5 `generate_video` calls in parallel (user specified everything — skip `list_models`, skip cost confirmation)
- "Animate this product photo with a 360° orbit" → `list_models` (video_from_image) → `generate_video_from_image`
- "Restyle this video as anime" → `generate_video_from_video`
- "Make this character talk with this voiceover" → `generate_lipsync`
- "Create a smooth transition between these two frames" → `generate_first_last_frame`
- "Make a lo-fi hip hop beat, instrumental, 85 BPM" → `list_models` (music) → `generate_music`
- "Say this in English with a natural female voice: Welcome to Kolbo" → `list_voices` → `generate_speech`
- "Generate a door slam sound effect" → `list_models` (sound) → `generate_sound`
- "Create a 3D model of a medieval castle" → `list_models` (three_d) → `generate_3d`
- "Transcribe this podcast episode" → `transcribe_audio`
- "What's being said in this video?" → `transcribe_audio` → analyze the text
- "Generate word-by-word subtitles for this audio" → `transcribe_audio` → share `word_by_word_srt_url`
- "Analyze this video" / "What do you see?" / "What's in this?" (with video file) → `upload_media` → `chat_send_message` with `media_urls` (omit model — auto-routes to Gemini)
- "What prompts are shown in this video?" → `upload_media` → `chat_send_message` with `media_urls` (omit model — auto-routes to Gemini)
- "Keep the same character across all these images" → `create_visual_dna` → `generate_image` with `visual_dna_ids`
- "Upload this file to my media library" → `upload_media`
- "Host this HTML page" / "Publish this landing page" / "Give me a public URL for this file" → `upload_media` → share the returned `url` (Kolbo CDN serves any file type publicly)
- "What video models are available?" → `list_models` (video)
- "How many credits do I have?" → `check_credits`
- "What's in this image?" (with upload) → Read the image directly with your own vision — no Kolbo API call needed
- "Analyze these 10 frames" (with multiple images) → Read all images directly with your own vision — you handle up to 10 natively
- "Analyze these 5 videos" → upload all 5 with `upload_media`, then ONE `chat_send_message` with all 5 URLs in `media_urls`
- "Create motion graphics" / "animated text" / "title sequence" → load the `remotion-best-practices` skill for Remotion-based motion graphics
- "Edit this video" / "cut this clip" / "remove silence" / "add subtitles" / "convert to 9:16" → load the `video-production` skill for FFmpeg-based editing
|