myaiforone 1.0.0
- package/README.md +113 -0
- package/agents/_template/CLAUDE.md +18 -0
- package/agents/_template/agent.json +7 -0
- package/agents/platform/agentcreator/CLAUDE.md +300 -0
- package/agents/platform/appcreator/CLAUDE.md +158 -0
- package/agents/platform/gym/CLAUDE.md +486 -0
- package/agents/platform/gym/agent.json +40 -0
- package/agents/platform/gym/programs/agent-building/program.json +160 -0
- package/agents/platform/gym/programs/automations-mastery/program.json +129 -0
- package/agents/platform/gym/programs/getting-started/program.json +124 -0
- package/agents/platform/gym/programs/mcp-integrations/program.json +116 -0
- package/agents/platform/gym/programs/multi-model-strategy/program.json +115 -0
- package/agents/platform/gym/programs/prompt-engineering/program.json +136 -0
- package/agents/platform/gym/souls/alex.md +12 -0
- package/agents/platform/gym/souls/jordan.md +12 -0
- package/agents/platform/gym/souls/morgan.md +12 -0
- package/agents/platform/gym/souls/riley.md +12 -0
- package/agents/platform/gym/souls/sam.md +12 -0
- package/agents/platform/hub/CLAUDE.md +372 -0
- package/agents/platform/promptcreator/CLAUDE.md +130 -0
- package/agents/platform/skillcreator/CLAUDE.md +163 -0
- package/bin/cli.js +566 -0
- package/config.example.json +310 -0
- package/dist/agent-registry.d.ts +32 -0
- package/dist/agent-registry.d.ts.map +1 -0
- package/dist/agent-registry.js +144 -0
- package/dist/agent-registry.js.map +1 -0
- package/dist/channels/discord.d.ts +17 -0
- package/dist/channels/discord.d.ts.map +1 -0
- package/dist/channels/discord.js +114 -0
- package/dist/channels/discord.js.map +1 -0
- package/dist/channels/imessage.d.ts +23 -0
- package/dist/channels/imessage.d.ts.map +1 -0
- package/dist/channels/imessage.js +214 -0
- package/dist/channels/imessage.js.map +1 -0
- package/dist/channels/slack.d.ts +19 -0
- package/dist/channels/slack.d.ts.map +1 -0
- package/dist/channels/slack.js +167 -0
- package/dist/channels/slack.js.map +1 -0
- package/dist/channels/telegram.d.ts +19 -0
- package/dist/channels/telegram.d.ts.map +1 -0
- package/dist/channels/telegram.js +274 -0
- package/dist/channels/telegram.js.map +1 -0
- package/dist/channels/types.d.ts +44 -0
- package/dist/channels/types.d.ts.map +1 -0
- package/dist/channels/types.js +18 -0
- package/dist/channels/types.js.map +1 -0
- package/dist/channels/whatsapp.d.ts +23 -0
- package/dist/channels/whatsapp.d.ts.map +1 -0
- package/dist/channels/whatsapp.js +189 -0
- package/dist/channels/whatsapp.js.map +1 -0
- package/dist/config.d.ts +134 -0
- package/dist/config.d.ts.map +1 -0
- package/dist/config.js +127 -0
- package/dist/config.js.map +1 -0
- package/dist/cron.d.ts +8 -0
- package/dist/cron.d.ts.map +1 -0
- package/dist/cron.js +35 -0
- package/dist/cron.js.map +1 -0
- package/dist/decrypt-keys.d.ts +7 -0
- package/dist/decrypt-keys.d.ts.map +1 -0
- package/dist/decrypt-keys.js +53 -0
- package/dist/decrypt-keys.js.map +1 -0
- package/dist/encrypt-keys.d.ts +8 -0
- package/dist/encrypt-keys.d.ts.map +1 -0
- package/dist/encrypt-keys.js +62 -0
- package/dist/encrypt-keys.js.map +1 -0
- package/dist/executor.d.ts +31 -0
- package/dist/executor.d.ts.map +1 -0
- package/dist/executor.js +2009 -0
- package/dist/executor.js.map +1 -0
- package/dist/gemini-executor.d.ts +27 -0
- package/dist/gemini-executor.d.ts.map +1 -0
- package/dist/gemini-executor.js +160 -0
- package/dist/gemini-executor.js.map +1 -0
- package/dist/goals.d.ts +24 -0
- package/dist/goals.d.ts.map +1 -0
- package/dist/goals.js +189 -0
- package/dist/goals.js.map +1 -0
- package/dist/gym/activity-digest.d.ts +30 -0
- package/dist/gym/activity-digest.d.ts.map +1 -0
- package/dist/gym/activity-digest.js +506 -0
- package/dist/gym/activity-digest.js.map +1 -0
- package/dist/gym/dimension-scorer.d.ts +76 -0
- package/dist/gym/dimension-scorer.d.ts.map +1 -0
- package/dist/gym/dimension-scorer.js +236 -0
- package/dist/gym/dimension-scorer.js.map +1 -0
- package/dist/gym/gym-router.d.ts +7 -0
- package/dist/gym/gym-router.d.ts.map +1 -0
- package/dist/gym/gym-router.js +718 -0
- package/dist/gym/gym-router.js.map +1 -0
- package/dist/gym/index.d.ts +11 -0
- package/dist/gym/index.d.ts.map +1 -0
- package/dist/gym/index.js +11 -0
- package/dist/gym/index.js.map +1 -0
- package/dist/heartbeat.d.ts +21 -0
- package/dist/heartbeat.d.ts.map +1 -0
- package/dist/heartbeat.js +163 -0
- package/dist/heartbeat.js.map +1 -0
- package/dist/index.d.ts +2 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +254 -0
- package/dist/index.js.map +1 -0
- package/dist/keystore.d.ts +22 -0
- package/dist/keystore.d.ts.map +1 -0
- package/dist/keystore.js +178 -0
- package/dist/keystore.js.map +1 -0
- package/dist/logger.d.ts +9 -0
- package/dist/logger.d.ts.map +1 -0
- package/dist/logger.js +45 -0
- package/dist/logger.js.map +1 -0
- package/dist/memory/daily.d.ts +22 -0
- package/dist/memory/daily.d.ts.map +1 -0
- package/dist/memory/daily.js +82 -0
- package/dist/memory/daily.js.map +1 -0
- package/dist/memory/embeddings.d.ts +15 -0
- package/dist/memory/embeddings.d.ts.map +1 -0
- package/dist/memory/embeddings.js +154 -0
- package/dist/memory/embeddings.js.map +1 -0
- package/dist/memory/index.d.ts +32 -0
- package/dist/memory/index.d.ts.map +1 -0
- package/dist/memory/index.js +159 -0
- package/dist/memory/index.js.map +1 -0
- package/dist/memory/search.d.ts +21 -0
- package/dist/memory/search.d.ts.map +1 -0
- package/dist/memory/search.js +77 -0
- package/dist/memory/search.js.map +1 -0
- package/dist/memory/store.d.ts +23 -0
- package/dist/memory/store.d.ts.map +1 -0
- package/dist/memory/store.js +144 -0
- package/dist/memory/store.js.map +1 -0
- package/dist/ollama-executor.d.ts +17 -0
- package/dist/ollama-executor.d.ts.map +1 -0
- package/dist/ollama-executor.js +112 -0
- package/dist/ollama-executor.js.map +1 -0
- package/dist/openai-executor.d.ts +38 -0
- package/dist/openai-executor.d.ts.map +1 -0
- package/dist/openai-executor.js +197 -0
- package/dist/openai-executor.js.map +1 -0
- package/dist/router.d.ts +11 -0
- package/dist/router.d.ts.map +1 -0
- package/dist/router.js +185 -0
- package/dist/router.js.map +1 -0
- package/dist/test-message.d.ts +2 -0
- package/dist/test-message.d.ts.map +1 -0
- package/dist/test-message.js +60 -0
- package/dist/test-message.js.map +1 -0
- package/dist/utils/imsg-db-reader.d.ts +24 -0
- package/dist/utils/imsg-db-reader.d.ts.map +1 -0
- package/dist/utils/imsg-db-reader.js +92 -0
- package/dist/utils/imsg-db-reader.js.map +1 -0
- package/dist/utils/imsg-rpc.d.ts +25 -0
- package/dist/utils/imsg-rpc.d.ts.map +1 -0
- package/dist/utils/imsg-rpc.js +149 -0
- package/dist/utils/imsg-rpc.js.map +1 -0
- package/dist/utils/message-formatter.d.ts +3 -0
- package/dist/utils/message-formatter.d.ts.map +1 -0
- package/dist/utils/message-formatter.js +69 -0
- package/dist/utils/message-formatter.js.map +1 -0
- package/dist/web-ui.d.ts +12 -0
- package/dist/web-ui.d.ts.map +1 -0
- package/dist/web-ui.js +5784 -0
- package/dist/web-ui.js.map +1 -0
- package/dist/whatsapp-chats.d.ts +2 -0
- package/dist/whatsapp-chats.d.ts.map +1 -0
- package/dist/whatsapp-chats.js +76 -0
- package/dist/whatsapp-chats.js.map +1 -0
- package/dist/whatsapp-login.d.ts +2 -0
- package/dist/whatsapp-login.d.ts.map +1 -0
- package/dist/whatsapp-login.js +90 -0
- package/dist/whatsapp-login.js.map +1 -0
- package/dist/wiki-sync.d.ts +21 -0
- package/dist/wiki-sync.d.ts.map +1 -0
- package/dist/wiki-sync.js +147 -0
- package/dist/wiki-sync.js.map +1 -0
- package/docs/AddNewAgentGuide.md +100 -0
- package/docs/AddNewMcpGuide.md +72 -0
- package/docs/Architecture.md +795 -0
- package/docs/CLAUDE-AI-SETUP.md +166 -0
- package/docs/Setup.md +297 -0
- package/docs/ai-gym-architecture.md +1040 -0
- package/docs/ai-gym-build-plan.md +343 -0
- package/docs/ai-gym-onboarding.md +122 -0
- package/docs/appcreator_plan.md +348 -0
- package/docs/platform-mcp-audit.md +320 -0
- package/docs/server-deployment-plan.md +503 -0
- package/docs/superpowers/plans/2026-03-25-marketplace.md +1281 -0
- package/docs/superpowers/specs/2026-03-25-marketplace-design.md +287 -0
- package/docs/user-guide.md +2016 -0
- package/mcp-catalog.json +628 -0
- package/package.json +63 -0
- package/public/MyAIforOne-logomark-512.svg +16 -0
- package/public/MyAIforOne-logomark-transparent.svg +15 -0
- package/public/activity.html +314 -0
- package/public/admin.html +1674 -0
- package/public/agent-dashboard.html +670 -0
- package/public/api-docs.html +1106 -0
- package/public/automations.html +722 -0
- package/public/canvas.css +223 -0
- package/public/canvas.js +588 -0
- package/public/changelog.html +231 -0
- package/public/gym.html +2766 -0
- package/public/home.html +1930 -0
- package/public/index.html +2809 -0
- package/public/lab.html +1643 -0
- package/public/library.html +1442 -0
- package/public/marketplace.html +1101 -0
- package/public/mcp-docs.html +441 -0
- package/public/mini.html +390 -0
- package/public/monitor.html +584 -0
- package/public/org.html +4304 -0
- package/public/projects.html +734 -0
- package/public/settings.html +645 -0
- package/public/tasks.html +932 -0
- package/public/trainers/alex.svg +12 -0
- package/public/trainers/jordan.svg +12 -0
- package/public/trainers/morgan.svg +12 -0
- package/public/trainers/riley.svg +12 -0
- package/public/trainers/sam.svg +12 -0
- package/public/user-guide.html +218 -0
- package/registry/agents.json +3 -0
- package/registry/apps.json +20 -0
- package/registry/installed-drafts.json +3 -0
- package/registry/mcps.json +1084 -0
- package/registry/prompts/personal/mcp-test-prompt.md +6 -0
- package/registry/prompts/personal/memory-recall.md +6 -0
- package/registry/prompts/platform/brainstorm.md +15 -0
- package/registry/prompts/platform/code-review.md +16 -0
- package/registry/prompts/platform/explain.md +16 -0
- package/registry/prompts.json +58 -0
- package/registry/skills/external/brainstorming.md +5 -0
- package/registry/skills/external/code-review.md +40 -0
- package/registry/skills/external/frontend-patterns.md +642 -0
- package/registry/skills/external/frontend-slides.md +184 -0
- package/registry/skills/external/systematic-debugging.md +5 -0
- package/registry/skills/external/tdd.md +328 -0
- package/registry/skills/external/verification-before-completion.md +5 -0
- package/registry/skills/external/writing-plans.md +5 -0
- package/registry/skills/platform/ai41_app_build.md +930 -0
- package/registry/skills/platform/ai41_app_deploy.md +168 -0
- package/registry/skills/platform/ai41_app_orchestrator.md +239 -0
- package/registry/skills/platform/ai41_app_patterns.md +359 -0
- package/registry/skills/platform/ai41_app_register.md +85 -0
- package/registry/skills/platform/ai41_app_scaffold.md +421 -0
- package/registry/skills/platform/ai41_app_verify.md +107 -0
- package/registry/skills/platform/opProjectCreate.md +239 -0
- package/registry/skills/platform/op_devbrowser.md +136 -0
- package/registry/skills/platform/sop_brandguidelines.md +103 -0
- package/registry/skills/platform/sop_docx.md +117 -0
- package/registry/skills/platform/sop_frontenddesign.md +44 -0
- package/registry/skills/platform/sop_frontenddesign_v2.md +659 -0
- package/registry/skills/platform/sop_mcpbuilder.md +133 -0
- package/registry/skills/platform/sop_pdf.md +172 -0
- package/registry/skills/platform/sop_pptx.md +133 -0
- package/registry/skills/platform/sop_skillcreator.md +104 -0
- package/registry/skills/platform/sop_themefactory.md +128 -0
- package/registry/skills/platform/sop_webapptesting.md +75 -0
- package/registry/skills/platform/sop_webartifactsbuilder.md +97 -0
- package/registry/skills/platform/sop_xlsx.md +134 -0
- package/registry/skills.json +1055 -0
- package/scripts/discover-chats.sh +11 -0
- package/scripts/install-service-windows.ps1 +87 -0
- package/scripts/install-service.sh +52 -0
- package/scripts/seed-registry.ts +195 -0
- package/scripts/test-send.sh +5 -0
- package/scripts/tray-indicator.ps1 +35 -0
- package/scripts/uninstall-service-windows.ps1 +23 -0
- package/scripts/uninstall-service.sh +15 -0
- package/scripts/xbar-myagent.5s.sh +32 -0
- package/server/mcp-server/dist/index.d.ts +11 -0
- package/server/mcp-server/dist/index.js +1332 -0
- package/server/mcp-server/dist/lib/api-client.d.ts +165 -0
- package/server/mcp-server/dist/lib/api-client.js +241 -0
- package/server/mcp-server/index.ts +1545 -0
- package/server/mcp-server/lib/api-client.ts +366 -0
- package/server/mcp-server/tsconfig.json +14 -0
- package/src/agent-registry.ts +180 -0
- package/src/channels/discord.ts +129 -0
- package/src/channels/imessage.ts +261 -0
- package/src/channels/slack.ts +208 -0
- package/src/channels/telegram.ts +307 -0
- package/src/channels/types.ts +62 -0
- package/src/channels/whatsapp.ts +227 -0
- package/src/config.ts +281 -0
- package/src/cron.ts +43 -0
- package/src/decrypt-keys.ts +60 -0
- package/src/encrypt-keys.ts +70 -0
- package/src/executor.ts +2190 -0
- package/src/gemini-executor.ts +212 -0
- package/src/goals.ts +240 -0
- package/src/gym/activity-digest.ts +546 -0
- package/src/gym/dimension-scorer.ts +297 -0
- package/src/gym/gym-router.ts +801 -0
- package/src/gym/index.ts +19 -0
- package/src/heartbeat.ts +220 -0
- package/src/index.ts +275 -0
- package/src/keystore.ts +190 -0
- package/src/logger.ts +51 -0
- package/src/memory/daily.ts +101 -0
- package/src/memory/embeddings.ts +185 -0
- package/src/memory/index.ts +218 -0
- package/src/memory/search.ts +124 -0
- package/src/memory/store.ts +189 -0
- package/src/ollama-executor.ts +126 -0
- package/src/openai-executor.ts +259 -0
- package/src/router.ts +230 -0
- package/src/test-message.ts +72 -0
- package/src/utils/imsg-db-reader.ts +109 -0
- package/src/utils/imsg-rpc.ts +178 -0
- package/src/utils/message-formatter.ts +90 -0
- package/src/web-ui.ts +5778 -0
- package/src/whatsapp-chats.ts +91 -0
- package/src/whatsapp-login.ts +110 -0
- package/src/wiki-sync.ts +199 -0
- package/tsconfig.json +19 -0

package/agents/platform/gym/programs/multi-model-strategy/program.json
@@ -0,0 +1,115 @@
+{
+  "id": "multi-model-strategy",
+  "slug": "multi-model-strategy",
+  "title": "Multi-Model Strategy",
+  "description": "Choose the right AI model for each job — Claude, GPT, Gemini, local models, and when to use which.",
+  "difficulty": "advanced",
+  "tier": "free",
+  "isPublic": true,
+  "isMarketplaceListed": false,
+  "dimensions": ["knowledge", "craft"],
+  "estimatedTime": "40-55 minutes",
+  "prerequisites": [],
+  "trainers": ["alex", "jordan", "morgan", "riley", "sam"],
+  "modules": [
+    {
+      "id": "model-landscape",
+      "title": "Model Landscape",
+      "order": 1,
+      "agentInstructions": "Be balanced and honest about model capabilities. This isn't about picking a winner — it's about understanding tradeoffs. The learner should leave knowing that model choice is a strategic decision, not a loyalty test. Keep the information practical and current.",
+      "steps": [
+        {
+          "id": "the-ai-model-ecosystem",
+          "title": "The AI model ecosystem",
+          "order": 1,
+          "type": "knowledge",
+          "content": "The AI model landscape is broader than just Claude, and understanding the ecosystem helps you make better decisions about which model to use for which task.\n\n**Claude (Anthropic)** — The default on this platform. Strong at reasoning, code generation, long documents, and following complex instructions. Available in multiple tiers: Haiku (fast, cheap, good for simple tasks), Sonnet (balanced), and Opus (most capable, best for complex reasoning). Claude excels at tasks requiring careful analysis, nuanced writing, and tool use.\n\n**GPT (OpenAI)** — The most widely-known model family. GPT-4o is strong at creative writing, multilingual tasks, and has good general knowledge. GPT-4o-mini is a fast, affordable option for simpler tasks. OpenAI also offers specialized models for images (DALL-E), speech (Whisper), and embeddings.\n\n**Gemini (Google)** — Strong at multimodal tasks (combining text, images, video, code). Has a very large context window. Good at tasks involving Google ecosystem data and search-adjacent reasoning. Gemini Pro and Flash offer different speed/capability tradeoffs.\n\n**Groq** — Not a model but an inference provider that runs open-source models (Llama, Mixtral) at extremely high speed. When you need fast responses and the task doesn't require frontier-model reasoning, Groq is compelling.\n\n**Ollama (Local)** — Runs open-source models (Llama, Gemma, Mistral, Phi, etc.) entirely on your machine. Zero API costs, complete privacy (no data leaves your device), works offline. The tradeoff is that local models are generally less capable than cloud models, and performance depends on your hardware. On this platform, Ollama agents get text-in/text-out only — no tool use, no MCP, no sessions.\n\nNo single model is best at everything. The strategic advantage comes from matching models to tasks based on cost, speed, capability, and privacy requirements.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "knowledge",
+          "verificationQuestions": [
+            "You have a task that involves analyzing confidential financial data that must never leave your machine. Which model approach would you choose and why?",
+            "What are the main tradeoffs between using a cloud model like Claude and a local model via Ollama?"
+          ]
+        },
+        {
+          "id": "model-comparison",
+          "title": "Model comparison",
+          "order": 2,
+          "type": "self-report",
+          "content": "Let's build your intuition for model differences through direct comparison. Pick a task and try it with at least two different models (or model tiers) if you have access.\n\n**Suggested comparison experiments:**\n- Give the same coding task to Claude and an Ollama model. Compare code quality, explanation clarity, and handling of edge cases.\n- Ask the same analytical question to different Claude tiers (Haiku vs. Sonnet vs. Opus). Note where the cheaper model is \"good enough\" and where the premium model clearly wins.\n- Try a creative writing task across models. Notice differences in voice, creativity, and instruction-following.\n\nIf you only have access to Claude, you can still compare tiers. The difference between Haiku and Opus on a complex reasoning task is dramatic and instructive.\n\n**What to observe:**\n- **Accuracy**: Did the model get the facts right?\n- **Depth**: How thorough was the analysis?\n- **Speed**: How fast was the response? (Matters for interactive use.)\n- **Tone**: Does the model match your preferred communication style?\n- **Cost**: For production use, what's the per-task cost difference?\n\nDocument your findings. Over time, this comparison data becomes your personal model selection guide — you'll know intuitively which model to reach for based on the task at hand.\n\nReflect on what surprised you. Most people expect premium models to always win, but for simple tasks, smaller and cheaper models often produce equivalent results at a fraction of the cost and latency.",
+          "isCritical": false,
+          "trainerVariations": {},
+          "verification": "self-report"
+        }
+      ]
+    },
+    {
+      "id": "when-to-use-which-model",
+      "title": "When to Use Which Model",
+      "order": 2,
+      "agentInstructions": "This module is about developing judgment. There's no universal right answer — it depends on the user's priorities (cost, speed, privacy, quality). Help them build a decision framework rather than memorize rules.",
+      "steps": [
+        {
+          "id": "matching-models-to-tasks",
+          "title": "Matching models to tasks",
+          "order": 1,
+          "type": "knowledge",
+          "content": "Model selection is a tradeoff between five dimensions: **capability**, **cost**, **speed**, **privacy**, and **reliability**. Here's a practical framework.\n\n**Use frontier cloud models (Claude Opus, GPT-4o) when:**\n- The task requires complex reasoning, multi-step analysis, or nuanced judgment\n- Accuracy matters more than speed or cost (legal analysis, code architecture, strategic planning)\n- You need tool use, MCP integrations, or sophisticated instruction-following\n- The task involves long documents or large codebases (strong context windows)\n\n**Use mid-tier cloud models (Claude Sonnet, GPT-4o-mini) when:**\n- The task is moderately complex but well-defined (code generation, summarization, drafting)\n- You need a good balance of quality and cost for regular use\n- Response speed matters (interactive conversations, real-time assistants)\n- This is your workhorse tier for day-to-day agent work\n\n**Use fast/cheap models (Claude Haiku, Groq) when:**\n- The task is simple and well-structured (classification, extraction, formatting)\n- You're processing high volumes (batch analysis, log scanning, data transformation)\n- Latency is critical (user-facing chatbots, real-time assistants)\n- The task has clear right/wrong answers that don't require deep reasoning\n\n**Use local models (Ollama) when:**\n- Privacy is paramount (sensitive data that cannot leave your machine)\n- You need to work offline or in air-gapped environments\n- Cost is a primary concern and you're running many queries\n- The task is within the local model's capability (Q&A, simple generation, content drafting)\n- You want experimentation freedom without API costs\n\n**The hybrid approach:** The most effective strategy isn't choosing one model — it's using different models for different agents. Your code architect might run on Opus for maximum reasoning. Your daily standup summarizer runs on Haiku because it's fast and cheap. Your confidential HR assistant runs locally on Ollama. Each agent gets the model that matches its job.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "knowledge",
+          "verificationQuestions": [
+            "You have three agents: a code architect, a log scanner that runs every hour, and an HR assistant handling employee data. What model would you assign to each and why?",
+            "When is a cheaper, faster model actually better than a more capable one — not just 'good enough,' but genuinely a better choice?"
+          ]
+        },
+        {
+          "id": "model-selection-exercise",
+          "title": "Model selection exercise",
+          "order": 2,
+          "type": "self-report",
+          "content": "Map your current (or planned) agent roster to optimal models. For each agent you have or want to create, decide which model best fits its role.\n\n**Create a model assignment table:**\n\n| Agent | Primary Task | Key Requirement | Recommended Model | Why |\n|-------|-------------|-----------------|-------------------|-----|\n| hub | General assistance | Versatile + tool use | Claude Sonnet | Balance of capability and cost |\n| (your agent) | ... | ... | ... | ... |\n\nFor each agent, consider:\n- Does it need tool use? (If yes, cloud models only — Ollama agents can't use tools on this platform)\n- Does it handle sensitive data? (If yes, consider local models)\n- How often does it run? (High-frequency = cost matters more)\n- How complex are its tasks? (Simple = cheaper model is fine)\n- Does it need MCPs? (If yes, Claude is the strongest choice)\n\nAfter filling out the table, look for patterns. You'll likely find that most of your agents can run on mid-tier models, a few need premium, and some could save money on cheaper or local models.\n\nReflect on what this exercise reveals about your platform usage. Are you over-provisioning (using expensive models for simple tasks)? Under-provisioning (using cheap models for tasks that need more capability)? The right model assignment can dramatically reduce costs while maintaining or even improving output quality.",
+          "isCritical": false,
+          "trainerVariations": {},
+          "verification": "self-report"
+        }
+      ]
+    },
+    {
+      "id": "switching-and-comparing-on-the-platform",
+      "title": "Switching and Comparing on the Platform",
+      "order": 3,
+      "agentInstructions": "This is the hands-on module. Walk the learner through the actual platform configuration for multi-model support. If they don't have Ollama installed, that's okay — the exercise of configuring an alternative executor is still valuable even as a dry run.",
+      "steps": [
+        {
+          "id": "multi-model-setup",
+          "title": "Multi-model setup",
+          "order": 1,
+          "type": "knowledge",
+          "content": "The platform supports multiple AI models through its executor system. Here's how to configure it.\n\n**Step 1: Enable multi-model support.** In your `config.json`, set `service.multiModelEnabled: true`. This unlocks the ability to assign different executors to different agents. When disabled (the default), all agents use Claude.\n\n**Step 2: Configure Ollama (for local models).** If you want to use local models:\n1. Install Ollama from ollama.ai\n2. Pull a model: `ollama pull gemma2` (or llama3, mistral, phi3, etc.)\n3. Ensure Ollama is running: `ollama serve`\n4. Set `service.ollamaBaseUrl` in config if it's not the default `http://localhost:11434`\n\n**Step 3: Assign executors to agents.** Each agent can have an `executor` field that overrides the platform default:\n- `\"executor\": \"claude\"` — Use Claude (the default, doesn't need to be set explicitly)\n- `\"executor\": \"ollama:gemma2\"` — Use Gemma 2 via Ollama\n- `\"executor\": \"ollama:llama3\"` — Use Llama 3 via Ollama\n- `\"executor\": \"ollama:mistral\"` — Use Mistral via Ollama\n\nAgents without an explicit executor use `service.platformDefaultExecutor` (which defaults to \"claude\").\n\n**Step 4: Configure provider keys (for cloud alternatives).** If you want to use non-Claude cloud models, add API keys to `service.providerKeys` in config:\n```json\n\"providerKeys\": {\n  \"openai\": \"sk-...\",\n  \"google\": \"AIza...\"\n}\n```\n\n**Important limitations of Ollama agents:** Local model agents get text-in/text-out only. They cannot use tools (Read, Write, Bash, etc.), MCP integrations, or sessions. They're best for advisory roles, Q&A, content generation, and tasks that don't need filesystem or API access. Plan your agent architecture accordingly — some agents must remain on Claude for full capability.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "knowledge",
+          "verificationQuestions": [
+            "What are the limitations of an Ollama-based agent compared to a Claude-based agent on this platform?",
+            "Describe the steps to configure an agent to use a local Gemma model instead of Claude."
+          ]
+        },
+        {
+          "id": "configure-a-non-claude-agent",
+          "title": "Configure a non-Claude agent",
+          "order": 2,
+          "type": "platform-check",
+          "content": "Put your knowledge into practice by configuring an agent to use an alternative model. You have two paths depending on your setup:\n\n**Path A: Ollama (if installed).** Pick an existing agent or create a new one that's suited for text-only work (advisory, Q&A, content drafting). Set its executor to an Ollama model. Test it with a conversation and compare the output quality to a Claude-based agent on the same task.\n\nGood candidates for Ollama:\n- A brainstorming agent that generates ideas (doesn't need tools)\n- A Q&A agent for a specific knowledge domain\n- A draft writer that produces first versions you'll edit\n\n**Path B: Provider keys (if you have other API keys).** Configure a provider key in your settings and set an agent to use an alternative cloud model. This gives you cloud-grade capability from a different provider.\n\n**Path C: Executor field exploration (if neither above).** Even without Ollama or alternative keys, you can enable multi-model support in config and set an agent's executor field to understand how the system works. Set `service.multiModelEnabled: true`, then explore the executor configuration in the agent settings.\n\nAfter configuring, test the agent:\n1. Send it a moderate-complexity task\n2. Evaluate the response quality\n3. Compare response time to your default Claude agents\n4. Note whether the model handles your domain's terminology and context appropriately\n\nReflect on whether this model is a good fit for this agent's role, or whether Claude remains the better choice for this particular use case.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "platform-check",
+          "check": "feature-used"
+        }
+      ]
+    }
+  ],
+  "createdAt": "2026-04-09T00:00:00Z",
+  "updatedAt": "2026-04-09T00:00:00Z"
+}
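
For reference, the `multi-model-setup` step above names the configuration surface for multi-model support. Here is a minimal sketch of how those pieces could fit together, assuming `config.json` nests the settings under `service` and that the per-agent override lives in each agent's `agent.json` (the key names `multiModelEnabled`, `platformDefaultExecutor`, `ollamaBaseUrl`, `providerKeys`, and `executor` come from the step text; their exact placement, and the agent shown, are assumptions not verified against `config.example.json` or `agents/_template/agent.json`):

```jsonc
// config.json (sketch; nesting assumed) — service-level multi-model settings
{
  "service": {
    "multiModelEnabled": true,                 // off by default; when false, all agents use Claude
    "platformDefaultExecutor": "claude",       // fallback for agents with no executor field
    "ollamaBaseUrl": "http://localhost:11434", // only needed if Ollama isn't on the default port
    "providerKeys": {
      "openai": "sk-...",  // unlocks non-Claude cloud executors
      "google": "AIza..."
    }
  }
}
```

```jsonc
// agents/<name>/agent.json (hypothetical agent) — per-agent executor override
{
  "id": "brainstormer",
  "executor": "ollama:gemma2" // text-in/text-out only: no tools, MCPs, or sessions
}
```

An agent that needs tool use or MCP integrations would omit the `executor` field (or set `"executor": "claude"`) and stay on the platform default.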

package/agents/platform/gym/programs/prompt-engineering/program.json
@@ -0,0 +1,136 @@
+{
+  "id": "prompt-engineering",
+  "slug": "prompt-engineering",
+  "title": "Prompt Engineering",
+  "description": "Master the art of communicating with AI — from basic prompts to system-level instructions.",
+  "difficulty": "intermediate",
+  "tier": "free",
+  "isPublic": true,
+  "isMarketplaceListed": false,
+  "dimensions": ["communication", "application"],
+  "estimatedTime": "45-60 minutes",
+  "prerequisites": [],
+  "trainers": ["alex", "jordan", "morgan", "riley", "sam"],
+  "modules": [
+    {
+      "id": "context-and-specificity",
+      "title": "Context and Specificity",
+      "order": 1,
+      "agentInstructions": "This module is about the mechanics of prompt quality. Walk the learner through concrete before/after examples. Don't be abstract — show them exactly how adding context, specifics, and constraints transforms AI output from generic to genuinely useful.",
+      "steps": [
+        {
+          "id": "anatomy-of-a-great-prompt",
+          "title": "The anatomy of a great prompt",
+          "order": 1,
+          "type": "knowledge",
+          "content": "Every effective prompt has four building blocks, and understanding them will transform how you work with AI.\n\n**Context** tells the AI what situation it's operating in. \"I'm a product manager writing release notes for a B2B SaaS tool\" gives the AI a frame of reference for tone, audience, and terminology. Without context, you get generic output that could be for anyone.\n\n**Specificity** narrows the output to exactly what you need. Instead of \"write release notes,\" try \"write 5 bullet points covering the new dashboard filtering feature, the CSV export bug fix, and the updated onboarding flow. Each bullet should be one sentence, starting with a verb.\" The more specific your request, the less time you spend editing the result.\n\n**Constraints** tell the AI what NOT to do, which is often more powerful than telling it what to do. \"Don't use technical jargon — our users are non-technical small business owners\" or \"Keep the total under 150 words\" or \"Don't mention pricing.\" Constraints prevent the most common failure mode: technically correct but contextually wrong output.\n\n**Examples** are the secret weapon. When you show the AI what good looks like — a previous version, a competitor's approach, a template — it can pattern-match far more effectively than interpreting abstract instructions. Even a single example dramatically improves output quality. Try pasting in \"here's last month's release notes for reference\" and watch the difference.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "knowledge",
+          "verificationQuestions": [
+            "You need an agent to write a customer email about a service outage. What context, specifics, and constraints would you include in your prompt?",
+            "Why are constraints (what NOT to do) often more powerful than positive instructions? Give an example from your own work."
+          ]
+        },
+        {
+          "id": "before-and-after",
+          "title": "Before and after",
+          "order": 2,
+          "type": "self-report",
+          "content": "Time to practice. Take three prompts you'd normally send to an AI and transform them using the four building blocks.\n\nHere are common weak prompts to start with if you need inspiration:\n- \"Summarize this document\"\n- \"Write an email to my team\"\n- \"Help me with this code\"\n- \"Create a plan for my project\"\n\nFor each one, write a **before** (the vague version) and an **after** (with context, specificity, constraints, and optionally an example). Then send the **after** version to any agent and see how the output compares to what you'd normally get.\n\nPay attention to what changes most. Usually, adding context alone gets you 50% of the way there. Adding constraints handles another 30%. Specificity and examples polish the remaining 20%.\n\nNotice which of the four building blocks made the biggest difference for each prompt. Different types of tasks benefit from different elements — creative writing benefits most from examples and constraints, while analytical tasks benefit most from context and specificity.",
+          "isCritical": false,
+          "trainerVariations": {},
+          "verification": "self-report"
+        },
+        {
+          "id": "measure-the-difference",
+          "title": "Measure the difference",
+          "order": 3,
+          "type": "platform-check",
+          "content": "Now put your new skills into sustained practice. Go to any agent on the platform and have a real working session — at least 5 messages — where you apply everything from this module.\n\nDon't just test the agent with toy prompts. Use it for actual work you need to get done today. Each message should include at least one of the four building blocks: context, specificity, constraints, or examples.\n\nAs you go, notice how the conversation evolves. Good prompts create a compounding effect — the AI builds on the context you've established, and each response gets more tailored. By message 3 or 4, you should feel like the agent truly understands your situation.\n\nIf you find yourself re-explaining things or correcting the AI, that's a signal your initial prompt was missing context. That's not failure — it's exactly the feedback loop that makes you better at prompting over time.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "platform-check",
+          "check": "message-count-gte-5"
+        }
+      ]
+    },
+    {
+      "id": "system-prompts",
+      "title": "System Prompts",
+      "order": 2,
+      "agentInstructions": "System prompts are the most powerful lever on the platform. Help the learner understand that a system prompt isn't just a greeting — it's the DNA of an agent. Use real examples from the platform's own agents (like the hub agent or gym coach) to illustrate what great system prompts look like.",
+      "steps": [
+        {
+          "id": "what-is-a-system-prompt",
+          "title": "What is a system prompt?",
+          "order": 1,
+          "type": "knowledge",
+          "content": "A system prompt is the instruction set that runs before every conversation. On this platform, it lives in the agent's **CLAUDE.md** file, and it shapes every single response the agent gives. Think of it as the difference between hiring \"someone\" and hiring \"a senior financial analyst who specializes in SaaS metrics, communicates in plain English, and always shows their reasoning.\"\n\nA system prompt typically defines four things: **Role** (who the agent is and what it's expert at), **Behavior** (how it should respond — tone, format, level of detail), **Domain knowledge** (key facts, terminology, context it should always have), and **Constraints** (what it should never do, what to avoid, guardrails).\n\nThe power of a system prompt is that it front-loads context you'd otherwise have to repeat in every conversation. Without one, you'd start every chat with \"You're a financial analyst, here's what I need you to know about our business...\" With a good system prompt, the agent already knows all of that.\n\nSystem prompts also compound with your per-message prompts. When your system prompt says \"You are a code reviewer for a React/TypeScript project\" and your message says \"Review this component,\" the agent applies its persistent expertise to your specific request. The system prompt is the foundation; your messages are the tasks built on top of it.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "knowledge",
+          "verificationQuestions": [
+            "What's the practical difference between putting instructions in a system prompt versus repeating them in every message?",
+            "If an agent keeps giving overly technical responses to non-technical users, where would you fix that — in the system prompt or in individual messages? Why?"
+          ]
+        },
+        {
+          "id": "read-an-agents-system-prompt",
+          "title": "Read an agent's system prompt",
+          "order": 2,
+          "type": "platform-check",
+          "content": "Let's look at a real system prompt in action. Go to the **Org** page (/org) and click on any agent that has a custom system prompt — you'll see the CLAUDE.md content in their detail panel.\n\nAs you read it, look for the four elements: role definition, behavior instructions, domain knowledge, and constraints. Notice how specific the best system prompts are. They don't just say \"be helpful\" — they define exactly what kind of help, in what format, with what boundaries.\n\nIf you have access to multiple agents, compare their system prompts. How does a coding agent's prompt differ from a writing agent's? What makes each one effective for its purpose?\n\nPay special attention to the constraints section. The best system prompts spend as much space on what NOT to do as on what to do. This is what prevents an agent from going off-rails or making assumptions that waste your time.",
+          "isCritical": false,
+          "trainerVariations": {},
+          "verification": "platform-check",
+          "check": "agent-has-custom-prompt"
+        },
+        {
+          "id": "write-your-own-system-prompt",
+          "title": "Write your own system prompt",
+          "order": 3,
+          "type": "self-report",
+          "content": "Now it's your turn. Pick an agent (or create a new one) and write a system prompt from scratch. Don't just write a couple of sentences — aim for a prompt that covers all four elements.\n\nStart with the role: Who is this agent? What's its expertise? What would its job title be if it were a person? Then add behavior rules: Should it be concise or detailed? Formal or casual? Should it ask clarifying questions or just do its best with what it has?\n\nNext, add domain knowledge: What facts, terminology, or context should it always have? If it's a project agent, what does it need to know about your project? Finally, add constraints: What should it never do? What topics are out of scope? What format rules should it always follow?\n\nAfter writing it, test it. Send the agent 3-4 different types of requests and see if the system prompt shapes the responses the way you intended. If not, iterate. The best system prompts go through 2-3 revisions before they feel right.\n\nReflect on what was hardest to write. Most people struggle with constraints — it takes practice to anticipate what an AI might do wrong and proactively prevent it.",
+          "isCritical": false,
+          "trainerVariations": {},
+          "verification": "self-report"
+        }
+      ]
+    },
+    {
+      "id": "iterating-and-debugging",
+      "title": "Iterating and Debugging",
+      "order": 3,
+      "agentInstructions": "This module is about what to do when things go wrong. Normalize failure — even experts iterate on prompts. The goal is to build a toolkit of correction strategies so the learner doesn't get stuck or frustrated.",
+      "steps": [
+        {
+          "id": "when-ai-gets-it-wrong",
+          "title": "When AI gets it wrong",
+          "order": 1,
+          "type": "knowledge",
+          "content": "AI will get things wrong. The skill isn't in getting perfect output on the first try — it's in knowing how to iterate efficiently when the output misses the mark.\n\n**Correction strategies** from most to least disruptive:\n\n1. **Refine in-thread**: \"That's close, but make it more concise and remove the bullet points\" — works when the AI is 70%+ there. The AI retains context from the conversation, so small nudges are powerful.\n2. **Add missing context**: \"I should have mentioned — this is for an internal audience, not customers\" — works when the output is wrong because you left out important information.\n3. **Show an example**: \"Here's what I'm looking for\" + paste a sample — works when the AI understood the task but not the style or format.\n4. **Reset and restructure**: Start a new conversation with a completely rewritten prompt — works when the conversation has gone off-track and corrections are making it worse.\n\n**When to reset vs. iterate**: If you've sent more than 3 correction messages and the output still isn't right, it's usually faster to start fresh with a better initial prompt. Each correction adds noise to the conversation context, and sometimes the AI gets confused trying to satisfy contradictory instructions.\n\n**The meta-skill**: After every iteration, ask yourself \"what was missing from my original prompt that caused this?\" Over time, you'll internalize the patterns and your first-try success rate will climb. The best prompt engineers aren't people who write perfect prompts — they're people who've debugged enough prompts to know what goes wrong and prevent it upfront.",
+          "isCritical": true,
+          "trainerVariations": {},
+          "verification": "knowledge",
+          "verificationQuestions": [
+            "You've sent 4 correction messages and the agent still isn't giving you what you want. What would you do and why?",
+            "What's the difference between refining in-thread and resetting? When is each strategy the better choice?"
+          ]
+        },
+        {
+          "id": "debug-a-prompt-chain",
+          "title": "Debug a prompt chain",
+          "order": 2,
+          "type": "self-report",
+          "content": "Let's practice debugging. Pick a multi-step task — something that requires at least 3 back-and-forth messages to complete. Here are some ideas:\n\n- Ask an agent to draft a document, then refine the tone, then adjust the structure\n- Have an agent analyze data, then ask follow-up questions about the analysis, then request a summary\n- Get an agent to help plan a project, then drill into a specific phase, then ask it to identify risks\n\nAs you go through the conversation, intentionally practice these moves:\n- **At least one refinement**: After the first response, ask for a specific change\n- **At least one context addition**: Add information you originally left out and see how the output shifts\n- **Evaluate whether to continue or reset**: After 3-4 exchanges, consciously decide if you'd be better off restarting\n\nWhen you're done, reflect on the experience. What was your original prompt missing? At what point did the conversation feel most productive? If you hit a dead end, what would you change in your approach?",
+          "isCritical": false,
+          "trainerVariations": {},
+          "verification": "self-report"
+        }
+      ]
+    }
+  ],
+  "createdAt": "2026-04-09T00:00:00Z",
+  "updatedAt": "2026-04-09T00:00:00Z"
+}
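
The `what-is-a-system-prompt` step above says a system prompt lives in the agent's CLAUDE.md and defines role, behavior, domain knowledge, and constraints. As a rough illustration of that four-part structure only, here is a minimal skeleton; every concrete detail is invented (it reuses the release-notes scenario from the `anatomy-of-a-great-prompt` step, not any agent shipped in this package):

```markdown
# Release Notes Writer

## Role
You are a senior product writer for a B2B SaaS tool. You turn terse
engineering changelogs into customer-facing release notes.

## Behavior
- Be concise: one sentence per bullet, each starting with a verb.
- Ask one clarifying question if the audience or scope is ambiguous.

## Domain knowledge
- Our users are non-technical small business owners.
- "Dashboard" always refers to the analytics dashboard, not the admin panel.

## Constraints
- Never mention pricing.
- No technical jargon; keep the full note under 150 words.
```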

package/agents/platform/gym/souls/alex.md
@@ -0,0 +1,12 @@
+# Trainer: Alex
+## Personality: Collaborative · Steady
+
+You are Alex. Your coaching style is collaborative — you work alongside the learner, not above them. You figure things out together. You celebrate small wins with genuine enthusiasm. Your pace is steady and consistent.
+
+Voice traits:
+- Warm, approachable, encouraging
+- Use "we" more than "you" — "let's try this", "we can do this"
+- Celebrate progress: "Nice! That's real progress."
+- When stuck: "No worries, let's look at this from another angle."
+- Never condescending, never rushed
+- Emoji: occasional 🙌 ✨ but don't overdo it

package/agents/platform/gym/souls/jordan.md
@@ -0,0 +1,12 @@
+# Trainer: Jordan
+## Personality: Direct · Accountable · Steady
+
+You are Jordan. You're direct, no-nonsense, and you hold the learner accountable. You tell them exactly what to work on and call out avoidance. You're not mean — you're honest and efficient. You respect their time by not wasting it.
+
+Voice traits:
+- Clear, concise, no filler
+- Direct feedback: "That prompt is too vague. Here's why."
+- Accountability: "Did you actually do the exercise, or just read it?"
+- Praise is rare and earned: "Good. That's exactly right."
+- When stuck: "Here's what's happening and here's the fix."
+- No emoji. No exclamation marks unless truly warranted.

package/agents/platform/gym/souls/morgan.md
@@ -0,0 +1,12 @@
+# Trainer: Morgan
+## Personality: Thoughtful · Frameworks-First · Steady
+
+You are Morgan. You care deeply about understanding. You don't just show someone what to click — you explain why it works, what's happening underneath, and how it connects to the bigger picture. You think in frameworks and mental models.
+
+Voice traits:
+- Reflective, analytical, clear
+- Explain the "why": "The reason this works is..."
+- Use frameworks: "Think of it this way..."
+- Ask probing questions: "What do you think would happen if...?"
+- When stuck: "Let's step back and think about what's actually going on here."
+- Occasional analogies to make concepts click

package/agents/platform/gym/souls/riley.md
@@ -0,0 +1,12 @@
+# Trainer: Riley
+## Personality: Challenging · Immersive · High-Intensity
+
+You are Riley. You push hard. You ask uncomfortable questions and assign exercises the learner would rather skip. You believe the fastest growth happens at the edge of comfort. Sessions with you are intense and focused.
+
+Voice traits:
+- High energy, direct, challenging
+- Push: "Don't just read about it — go do it right now."
+- Challenge assumptions: "Why do you think that? Test it."
+- Expect more: "Good start, but you can do better. Here's how."
+- When stuck: "This is exactly where the learning happens. Stay with it."
+- Short, punchy sentences. No hand-holding.

package/agents/platform/gym/souls/sam.md
@@ -0,0 +1,12 @@
+# Trainer: Sam
+## Personality: Patient · No Pressure · Steady
+
+You are Sam. You meet learners exactly where they are. No pressure, no judgment. Brand new? Fine. Tried before and it didn't stick? Also fine. You go at their pace, build from what they already know, and never overwhelm.
+
+Voice traits:
+- Calm, reassuring, patient
+- Normalize difficulty: "This trips everyone up at first."
+- Small steps: "Let's just try one thing."
+- No pressure: "Take your time. There's no rush."
+- When stuck: "That's totally normal. Here's a simpler way to think about it."
+- Warm but not saccharine. Genuine.