@fredcallagan/arn-spark 5.1.0
- package/.claude-plugin/plugin.json +9 -0
- package/.opencode/plugins/arn-spark.js +272 -0
- package/package.json +17 -0
- package/plugins/arn-spark/.claude-plugin/plugin.json +9 -0
- package/plugins/arn-spark/LICENSE +21 -0
- package/plugins/arn-spark/README.md +25 -0
- package/plugins/arn-spark/agents/arn-spark-brand-strategist.md +299 -0
- package/plugins/arn-spark/agents/arn-spark-dev-env-builder.md +228 -0
- package/plugins/arn-spark/agents/arn-spark-doctor.md +92 -0
- package/plugins/arn-spark/agents/arn-spark-forensic-investigator.md +181 -0
- package/plugins/arn-spark/agents/arn-spark-market-researcher.md +232 -0
- package/plugins/arn-spark/agents/arn-spark-marketing-pm.md +225 -0
- package/plugins/arn-spark/agents/arn-spark-persona-architect.md +259 -0
- package/plugins/arn-spark/agents/arn-spark-persona-impersonator.md +183 -0
- package/plugins/arn-spark/agents/arn-spark-product-strategist.md +191 -0
- package/plugins/arn-spark/agents/arn-spark-prototype-builder.md +497 -0
- package/plugins/arn-spark/agents/arn-spark-scaffolder.md +228 -0
- package/plugins/arn-spark/agents/arn-spark-spike-runner.md +209 -0
- package/plugins/arn-spark/agents/arn-spark-style-capture.md +196 -0
- package/plugins/arn-spark/agents/arn-spark-tech-evaluator.md +229 -0
- package/plugins/arn-spark/agents/arn-spark-ui-interactor.md +235 -0
- package/plugins/arn-spark/agents/arn-spark-use-case-writer.md +280 -0
- package/plugins/arn-spark/agents/arn-spark-ux-judge.md +215 -0
- package/plugins/arn-spark/agents/arn-spark-ux-specialist.md +200 -0
- package/plugins/arn-spark/agents/arn-spark-visual-sketcher.md +285 -0
- package/plugins/arn-spark/agents/arn-spark-visual-test-engineer.md +224 -0
- package/plugins/arn-spark/references/copilot-tools.md +62 -0
- package/plugins/arn-spark/skills/arn-brainstorming/SKILL.md +520 -0
- package/plugins/arn-spark/skills/arn-brainstorming/references/add-feature-flow.md +155 -0
- package/plugins/arn-spark/skills/arn-spark-arch-vision/SKILL.md +226 -0
- package/plugins/arn-spark/skills/arn-spark-arch-vision/references/architecture-vision-template.md +153 -0
- package/plugins/arn-spark/skills/arn-spark-arch-vision/references/technology-evaluation-guide.md +86 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/SKILL.md +471 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/clickable-prototype-criteria.md +65 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/journey-template.md +62 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/review-report-template.md +75 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype/references/showcase-capture-guide.md +213 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/SKILL.md +642 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-protocol.md +242 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/debate-review-report-template.md +161 -0
- package/plugins/arn-spark/skills/arn-spark-clickable-prototype-teams/references/expert-interaction-review-template.md +152 -0
- package/plugins/arn-spark/skills/arn-spark-concept-review/SKILL.md +350 -0
- package/plugins/arn-spark/skills/arn-spark-concept-review/references/conflict-resolution-protocol.md +145 -0
- package/plugins/arn-spark/skills/arn-spark-concept-review/references/review-report-template.md +185 -0
- package/plugins/arn-spark/skills/arn-spark-dev-setup/SKILL.md +366 -0
- package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-checklist.md +84 -0
- package/plugins/arn-spark/skills/arn-spark-dev-setup/references/dev-setup-template.md +205 -0
- package/plugins/arn-spark/skills/arn-spark-discover/SKILL.md +303 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/competitive-landscape-template.md +87 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/discovery-questions.md +120 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/persona-profile-template.md +97 -0
- package/plugins/arn-spark/skills/arn-spark-discover/references/product-concept-template.md +253 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/SKILL.md +23 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md +388 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/references/step-0-fast-path.md +25 -0
- package/plugins/arn-spark/skills/arn-spark-ensure-config/scripts/cache-check.sh +127 -0
- package/plugins/arn-spark/skills/arn-spark-feature-extract/SKILL.md +483 -0
- package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-backlog-template.md +176 -0
- package/plugins/arn-spark/skills/arn-spark-feature-extract/references/feature-entry-template.md +209 -0
- package/plugins/arn-spark/skills/arn-spark-help/SKILL.md +149 -0
- package/plugins/arn-spark/skills/arn-spark-help/references/pipeline-map.md +211 -0
- package/plugins/arn-spark/skills/arn-spark-init/SKILL.md +312 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/all-opus.md +23 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/agent-models-presets/balanced.md +23 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/bkt-setup.md +55 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/jira-mcp-setup.md +61 -0
- package/plugins/arn-spark/skills/arn-spark-init/references/platform-labels.md +97 -0
- package/plugins/arn-spark/skills/arn-spark-naming/SKILL.md +275 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/creative-brief-template.md +146 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/naming-methodology.md +237 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/naming-report-template.md +122 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/trademark-databases.md +88 -0
- package/plugins/arn-spark/skills/arn-spark-naming/references/whois-server-map.md +164 -0
- package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.js +502 -0
- package/plugins/arn-spark/skills/arn-spark-naming/scripts/whois-check.py +533 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/SKILL.md +260 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/lock-report-template.md +68 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/pretooluse-hook-template.json +35 -0
- package/plugins/arn-spark/skills/arn-spark-prototype-lock/references/prototype-guardrail-rules.md +38 -0
- package/plugins/arn-spark/skills/arn-spark-report/SKILL.md +144 -0
- package/plugins/arn-spark/skills/arn-spark-report/references/issue-template.md +81 -0
- package/plugins/arn-spark/skills/arn-spark-report/references/spark-knowledge-base.md +293 -0
- package/plugins/arn-spark/skills/arn-spark-scaffold/SKILL.md +239 -0
- package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-checklist.md +79 -0
- package/plugins/arn-spark/skills/arn-spark-scaffold/references/scaffold-summary-template.md +74 -0
- package/plugins/arn-spark/skills/arn-spark-spike/SKILL.md +209 -0
- package/plugins/arn-spark/skills/arn-spark-spike/references/spike-report-template.md +123 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/SKILL.md +362 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/references/review-report-template.md +65 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/references/showcase-capture-guide.md +153 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype/references/static-prototype-criteria.md +54 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/SKILL.md +518 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-protocol.md +230 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/debate-review-report-template.md +148 -0
- package/plugins/arn-spark/skills/arn-spark-static-prototype-teams/references/expert-visual-review-template.md +130 -0
- package/plugins/arn-spark/skills/arn-spark-stress-competitive/SKILL.md +166 -0
- package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/competitive-report-template.md +139 -0
- package/plugins/arn-spark/skills/arn-spark-stress-competitive/references/gap-analysis-framework.md +111 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/SKILL.md +257 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-protocol.md +140 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/references/interview-report-template.md +165 -0
- package/plugins/arn-spark/skills/arn-spark-stress-interview/references/persona-casting-spec.md +138 -0
- package/plugins/arn-spark/skills/arn-spark-stress-premortem/SKILL.md +181 -0
- package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-protocol.md +112 -0
- package/plugins/arn-spark/skills/arn-spark-stress-premortem/references/premortem-report-template.md +158 -0
- package/plugins/arn-spark/skills/arn-spark-stress-prfaq/SKILL.md +206 -0
- package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-report-template.md +139 -0
- package/plugins/arn-spark/skills/arn-spark-stress-prfaq/references/prfaq-workflow.md +118 -0
- package/plugins/arn-spark/skills/arn-spark-style-explore/SKILL.md +281 -0
- package/plugins/arn-spark/skills/arn-spark-style-explore/references/style-brief-template.md +198 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/SKILL.md +359 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/expert-review-template.md +94 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/review-protocol.md +150 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-index-template.md +108 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases/references/use-case-template.md +125 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases-teams/SKILL.md +306 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/debate-protocol.md +272 -0
- package/plugins/arn-spark/skills/arn-spark-use-cases-teams/references/review-report-template.md +112 -0
- package/plugins/arn-spark/skills/arn-spark-visual-readiness/SKILL.md +293 -0
- package/plugins/arn-spark/skills/arn-spark-visual-readiness/references/readiness-checklist.md +196 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/SKILL.md +376 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/aesthetic-philosophy.md +210 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/sketch-gallery-guide.md +282 -0
- package/plugins/arn-spark/skills/arn-spark-visual-sketch/references/visual-direction-template.md +174 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/SKILL.md +447 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/baseline-capture-script-template.js +89 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/journey-schema.md +375 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/spike-checklist.md +122 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/strategy-layers-guide.md +132 -0
- package/plugins/arn-spark/skills/arn-spark-visual-strategy/references/visual-strategy-template.md +141 -0
@@ -0,0 +1,61 @@

# Atlassian Remote MCP Server Setup

The Atlassian Remote MCP Server is the official, cloud-hosted MCP server for Jira Cloud integration. It uses OAuth 2.1 for authentication and provides native Claude Code integration for issue operations.

**Documentation:** https://support.atlassian.com/atlassian-rovo-mcp-server/docs/getting-started-with-the-atlassian-remote-mcp-server/

## Prerequisites

- A Jira Cloud instance with an active account
- Claude Code installed and running
- Project initialized with `/arn-spark-init` or `/arn-code-init` (or in the process of running it)

## Setup Procedure

### Step 1: Add the MCP Server

Run the following command from your project directory:

```bash
claude mcp add atlassian --scope project --transport http --url https://mcp.atlassian.com/v1/mcp
```

This adds the Atlassian MCP server configuration to the **project's** `.mcp.json` file (not the Arness plugin's `.mcp.json`).
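
The resulting entry in the project's `.mcp.json` looks roughly like the sketch below. The exact shape may vary across Claude Code versions, so treat this as illustrative rather than authoritative:

```json
{
  "mcpServers": {
    "atlassian": {
      "type": "http",
      "url": "https://mcp.atlassian.com/v1/mcp"
    }
  }
}
```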

### Step 2: Restart Claude Code

MCP servers are loaded at session start. After adding the server, restart Claude Code for the new server to become available:

1. Exit the current Claude Code session
2. Start a new session in the same project directory

### Step 3: Authenticate via OAuth 2.1

On the first use of any Atlassian MCP tool, a browser window will open automatically for OAuth 2.1 authentication:

1. Sign in with your Atlassian account
2. Authorize the MCP server to access your Jira instance
3. Return to Claude Code -- the session will continue with authenticated access

### Step 4: Verify the Connection

Run `/mcp` in Claude Code to confirm the Atlassian server is listed and connected:

```
/mcp
```

Look for `atlassian` in the server list with a `connected` status.

## Known Limitations

- **Re-authentication:** The OAuth 2.1 token may expire multiple times per day. When this happens, a browser window will open again for re-authentication. This is a known limitation of the Atlassian Remote MCP Server.
- **Cloud only:** The remote MCP server works with Jira Cloud only. Jira Data Center and Jira Server are not supported.
- **Scope:** The MCP server is configured per-project (`--scope project`). Each project that uses Jira integration needs its own MCP configuration.

## Notes

- The MCP server is added to the project's `.mcp.json`, not the Arness plugin's `.mcp.json`
- Arness uses the MCP server's `list_projects` tool during `/arn-spark-init` (or `/arn-code-init`) to let the user pick their Jira project
- Once configured, Arness skills (`/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr`) use the MCP server transparently for all Jira operations
- No API keys or tokens need to be stored in project files -- OAuth 2.1 handles authentication at runtime
@@ -0,0 +1,97 @@

# Arness Platform Labels

Arness uses labels for issue management and tracking across supported platforms. The label strategy varies by platform: GitHub uses repository labels with explicit creation, while Jira uses freeform labels with implicit creation plus native issue types and priorities.

---

## GitHub

Labels are created during `/arn-spark-init` (or `/arn-code-init`) when GitHub integration is detected. These labels are used by Arness skills for issue management and tracking.

### Labels

| Label | Color | Purpose | Used By |
|-------|-------|---------|---------|
| `arness-backlog` | `#d4c5f9` (lavender) | Deferred items from PRs or postponed features | `/arn-code-review-pr`, `/arn-code-create-issue`, `/arn-code-pick-issue` |
| `arness-feature-issue` | `#0e8a16` (green) | Feature requests tracked via Arness | `/arn-code-create-issue`, `/arn-code-pick-issue` |
| `arness-bug-issue` | `#d93f0b` (red) | Bug reports tracked via Arness | `/arn-code-create-issue`, `/arn-code-pick-issue` |
| `arness-priority-high` | `#b60205` (dark red) | High priority | `/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr` |
| `arness-priority-medium` | `#fbca04` (yellow) | Medium priority | `/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr` |
| `arness-priority-low` | `#c5def5` (light blue) | Low priority | `/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr` |
| `arness-rejected` | `#e4e669` (olive) | Issue reviewed and rejected as invalid or out of scope | `/arn-code-pick-issue` |

### Plugin Repository Label

The following label lives on the **Arness plugin repository** (not on user projects). It is pre-created by plugin maintainers and used by `/arn-code-report` to tag diagnostic issues.

| Label | Color | Purpose | Used By |
|-------|-------|---------|---------|
| `arness-report` | `#1d76db` (blue) | Issue reported via /arn-code-report diagnostic | `/arn-code-report` (plugin repo only) |

### Creation Command

Labels are created with `gh label create --force`, which is idempotent: labels that already exist are updated in place rather than causing an error:

```bash
gh label create "arness-backlog" --color "d4c5f9" --description "Deferred items from PRs or postponed features" --force
gh label create "arness-feature-issue" --color "0e8a16" --description "Feature requests tracked via Arness" --force
gh label create "arness-bug-issue" --color "d93f0b" --description "Bug reports tracked via Arness" --force
gh label create "arness-priority-high" --color "b60205" --description "High priority" --force
gh label create "arness-priority-medium" --color "fbca04" --description "Medium priority" --force
gh label create "arness-priority-low" --color "c5def5" --description "Low priority" --force
gh label create "arness-rejected" --color "e4e669" --description "Issue reviewed and rejected as invalid or out of scope" --force
```

### Notes

- Labels are only created when GitHub integration is detected during `/arn-spark-init` (or `/arn-code-init`)
- The `--force` flag updates existing labels if the color or description has changed
- Projects without GitHub integration skip label creation entirely
- Skills that use labels check for their existence and create missing ones on demand

---

## Jira

Jira labels are freeform -- they are created implicitly on first use and do not need to be pre-created during initialization. Arness combines Jira labels with native issue types and priority fields for a richer mapping.

### Labels

Arness uses the following label names when creating and filtering Jira issues:

| Label | Purpose | Used By |
|-------|---------|---------|
| `arness-feature-issue` | Feature requests tracked via Arness | `/arn-code-create-issue`, `/arn-code-pick-issue` |
| `arness-bug-issue` | Bug reports tracked via Arness | `/arn-code-create-issue`, `/arn-code-pick-issue` |
| `arness-backlog` | Deferred items from PRs or postponed features | `/arn-code-review-pr`, `/arn-code-create-issue`, `/arn-code-pick-issue` |
| `arness-rejected` | Issue reviewed and rejected as invalid or out of scope | `/arn-code-pick-issue` |
| `arness-priority-high` | High priority | `/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr` |
| `arness-priority-medium` | Medium priority | `/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr` |
| `arness-priority-low` | Low priority | `/arn-code-create-issue`, `/arn-code-pick-issue`, `/arn-code-review-pr` |

### Issue Type Mapping

Arness maps its label categories to native Jira issue types:

| Arness Label | Jira Issue Type |
|------------|-----------------|
| `arness-feature-issue` | Story |
| `arness-bug-issue` | Bug |
| `arness-backlog` | Task |

### Priority Mapping

Arness maps its priority labels to native Jira priority levels:

| Arness Label | Jira Priority |
|------------|---------------|
| `arness-priority-high` | High |
| `arness-priority-medium` | Medium |
| `arness-priority-low` | Low |
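
Taken together, the issue-type and priority mapping tables can be sketched as a small lookup. `jira_fields_for` below is a hypothetical illustration, not an Arness API:

```python
# Illustrative sketch of the Arness label-to-Jira-field mappings above.
# jira_fields_for is a hypothetical helper, not part of Arness itself.

ISSUE_TYPE_BY_LABEL = {
    "arness-feature-issue": "Story",
    "arness-bug-issue": "Bug",
    "arness-backlog": "Task",
}

PRIORITY_BY_LABEL = {
    "arness-priority-high": "High",
    "arness-priority-medium": "Medium",
    "arness-priority-low": "Low",
}

def jira_fields_for(labels):
    """Derive native Jira issue type and priority from Arness labels.

    The labels themselves are still applied alongside the native fields.
    """
    issue_type = next((ISSUE_TYPE_BY_LABEL[l] for l in labels if l in ISSUE_TYPE_BY_LABEL), None)
    priority = next((PRIORITY_BY_LABEL[l] for l in labels if l in PRIORITY_BY_LABEL), None)
    return {"issuetype": issue_type, "priority": priority, "labels": list(labels)}
```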

### Notes

- Jira labels are applied in addition to native issue type and priority fields
- Labels are created implicitly by Jira on first use -- no `create` command is needed
- Arness uses both the label and the native Jira field so issues are filterable via JQL or the label sidebar
- Bitbucket's built-in issue tracker is not supported by Arness
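
Because both the label and the native field are set, either dimension works in JQL. An illustrative query (with `MYPROJ` as a placeholder project key):

```
project = MYPROJ AND labels = "arness-bug-issue" AND priority = High ORDER BY created DESC
```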
@@ -0,0 +1,275 @@

---
name: arn-spark-naming
description: >-
  This skill should be used when the user says "naming", "brand name",
  "name my product", "find a name", "product naming", "brand naming",
  "what should I call it", "name ideas", "pick a name", "naming session",
  "help me name this", "brainstorm names", "come up with a name",
  "arn spark naming", "arn-spark-naming", or wants to find a brand
  name for their product through strategic analysis, creative generation,
  qualitative scoring, and due diligence including domain availability
  and trademark screening.
version: 1.0.0
---

# Arness Spark Naming

Guide a product from nameless concept to validated brand name through a structured 4-step methodology, driven by the `arn-spark-brand-strategist` agent. Produces a **naming brief** (`naming-brief.md`) in the vision directory and a **naming report** (`naming-report.md`) in the reports directory.

## Prerequisites

1. Read the project's arness.md for the `## Arness` section.
2. Extract **Vision directory** and **Reports directory** paths. If no `## Arness` section exists or Arness Spark fields are missing, inform the user: "Arness Spark is not configured for this project yet. Run `/arn-brainstorming` to get started — it will set everything up automatically." Do not proceed without it.
3. Create directories if they do not exist.
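
The section-extraction step can be sketched as below. The exact field syntax inside the `## Arness` section is project-defined, so this sketch assumes simple `Key: value` lines and is illustrative only:

```python
import re

def read_arness_dirs(arness_md: str):
    """Extract Vision/Reports directory paths from the ## Arness section.

    Assumes simple "Key: value" lines inside the section; the real
    arness.md layout may differ.
    """
    # Isolate the ## Arness section (up to the next ## heading or EOF).
    m = re.search(r"^## Arness\n(.*?)(?=^## |\Z)", arness_md, re.M | re.S)
    if not m:
        return None  # caller should tell the user to run /arn-brainstorming
    section = m.group(1)
    dirs = {}
    for key in ("Vision directory", "Reports directory"):
        km = re.search(rf"{key}\s*:\s*(\S+)", section)
        if km:
            dirs[key] = km.group(1)
    return dirs if len(dirs) == 2 else None
```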

## Step 0: Context Gathering

### Product context

Check for `<vision-dir>/product-concept.md`:

**If found:** Read and extract: vision statement, value proposition, target audience, product pillars, competitive landscape. Summarize the extracted context to the user: "Found your product concept. I'll use this as the foundation for naming."

**If not found:**

Ask the user:

**"No product concept found. How should I learn about your product?"**
1. **Describe your product** — Provide a description in the next message
2. **Point me to a file** — Specify a file path containing product information
3. **Explore current project** — I'll read README, package.json, and code to infer what the product does

If option 3: invoke `arn-spark-brand-strategist` in `brand-dna` mode with instructions to explore the project and summarize the product context.

### Target market

Ask the user:

**"What is the primary target market? This determines trademark databases and languages for linguistic screening."**
1. **United States** — USPTO trademark search, English + Spanish linguistic check
2. **European Union** — EUIPO trademark search, English + French + German + Spanish + Italian
3. **United Kingdom** — IPO trademark search, English linguistic check
4. **Global / Multiple regions** — WIPO + major national databases, all major languages

### Existing naming brief

Check for `<vision-dir>/naming-brief.md`:

**If found:**

Ask the user:

**"A naming brief already exists. How would you like to proceed?"**
1. **Resume from where I left off** — Continue from the first incomplete section
2. **Start fresh** — Preserve existing as naming-brief-previous.md and begin new

If resume: read the brief, detect which sections contain "-- Pending --" or are missing, and resume from the first incomplete step.

---

## Step 1: Strategic Foundation (Brand DNA)

Invoke the `arn-spark-brand-strategist` agent in `brand-dna` mode via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context:
- Product context (from product-concept.md or user input)
- Competitive landscape (from product concept, or agent will research via websearch)
- Target market

The agent returns: brand personality profile, audience vocabulary, competitor name landscape, 1-2 recommended naming categories with rationale.

Present the Brand DNA analysis to the user.

Ask the user:

**"Proceed with the recommended naming categories?"**
1. **Yes, proceed with [recommended categories]** (Recommended) — Use the strategist's recommendation (substitute actual category names from the Brand DNA analysis)
2. **Choose different categories** — Select naming categories manually

If option 2:

Ask the user (multiSelect: true):

**"Select naming categories to explore (select all that apply):"**
1. **Descriptive** — Names that describe what the product does (PayPal, Dropbox)
2. **Evocative** — Names that suggest a feeling or metaphor (Slack, Nike)
3. **Invented / Abstract** — Completely new words (Spotify, Google)
4. **Lexical** — Wordplay, puns, alliteration (Pinterest, Netflix)

Then prompt (free-text): "Any existing name ideas, words you love, or words you hate? These will seed the creative sprint. Type 'none' or 'skip' to continue without seeds."

Write initial `<vision-dir>/naming-brief.md` using the creative brief template:
> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/references/creative-brief-template.md`

Populate: Context and Brand DNA sections. Mark remaining sections as "-- Pending --".

---

## Step 2: Creative Sprint (The Big List)

Four generation rounds via `arn-spark-brand-strategist` in `generation` mode.

**Round 1 — Seed harvest:** Invoke agent with user's existing ideas and preferences as seeds. If no seeds, skip to Round 2.

**Round 2 — Category sprints:** Invoke agent once per selected category, requesting 50-80 candidates each. Pass the brand DNA context and dead directions. Present candidates to user after each category sprint for early feedback.

**Round 3 — Mashup round:** Invoke agent with all Round 1-2 candidates. Cross-pollinate fragments across categories. Target: 30-50 mashup candidates.

**Round 4 — User collaboration checkpoint:** Present the complete candidate list organized by category. This is a free-text conversation loop: the user marks favorites (star), flags directions to kill, and may suggest additional directions. Iterate until the user signals satisfaction.

Target: 200+ total candidates across all rounds.

Update `naming-brief.md` with Creative Sprint Results section (generation stats, candidates by category, starred favorites, dead directions).

---

## Step 3: Qualitative Filter (Six Senses)

Load the scoring methodology:
> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/references/naming-methodology.md`

Invoke the `arn-spark-brand-strategist` agent in `scoring` mode via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` "Dispatch convention" for fallback). Context: full candidate list, user-starred favorites, dead directions.

**Pass 1:** Agent filters 200+ candidates to 30-40 by removing obvious duds (unpronounceable, too long, too similar, offensive, dead directions). User-starred names always survive Pass 1.

**Pass 2:** Agent scores each surviving candidate on the Six Senses (1-5 each):
1. Appearance — visual punch, letter count
2. Sound — euphony, Phone Test
3. Meaning — associations, connotations
4. Memorability — stickiness, alliteration
5. Function — verbability, typability, handle-fit
6. Scalability — expansion headroom

Present the scored table sorted by total score (6-30).
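
The aggregation behind the scored table is a straightforward sum; the sketch below is illustrative only, not Arness code:

```python
# Six Senses aggregation: each candidate gets six 1-5 ratings,
# so totals range from 6 to 30.

SENSES = ("appearance", "sound", "meaning", "memorability", "function", "scalability")

def total_score(ratings: dict) -> int:
    """Sum the six 1-5 ratings into the 6-30 total."""
    assert set(ratings) == set(SENSES), "one rating per sense"
    assert all(1 <= v <= 5 for v in ratings.values()), "ratings are 1-5"
    return sum(ratings.values())

def rank(candidates: dict) -> list:
    """Sort candidate names by total score, highest first."""
    return sorted(candidates, key=lambda name: total_score(candidates[name]), reverse=True)
```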
|
|
142
|
+
|
|
143
|
+
Prompt (free-text): "Select 5-8 finalists for due diligence. You can pick from the top of the table, or choose names you like regardless of score."
|
|
144
|
+
|
|
145
|
+
Update `naming-brief.md` with Qualitative Filter Results section.
|
|
146
|
+
|
|
147
|
+
---
|
|
148
|
+
|
|
149
|
+
## Step 4: Cold Shower (Due Diligence)
|
|
150
|
+
|
|
151
|
+
### 4a — Domain Availability
|
|
152
|
+
|
|
153
|
+
Load the WHOIS/RDAP server reference:
|
|
154
|
+
> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/references/whois-server-map.md`
|
|
155
|
+
|
|
156
|
+
**How it works:** Both scripts use RDAP (the modern IETF standard) as the primary lookup method, with port-43 WHOIS and system `whois` as built-in fallbacks. The fallback chain per domain is: RDAP → port-43 WHOIS → system whois → manual URL. The scripts handle this automatically — no manual fallback logic needed.

**Environment detection** (priority order):

1. Check Python: run `python3 --version`. If available, use `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/scripts/whois-check.py`.
2. Check Node.js: run `node --version`. If available, use `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/scripts/whois-check.js`.
3. Neither available: skip automated checking. Present manual fallback URLs (`https://www.whois.com/whois/[domain]`) for each domain.

Note: the system `whois` command is NOT required — both scripts use it only as a built-in last-resort fallback (gracefully skipped on Windows, where `whois` is not available). The Python and Node.js scripts work identically across Linux, macOS, Windows, and WSL2.

**Rate limit discovery:** Before running queries, use websearch for `"[RDAP or WHOIS server name]" rate limit` for the primary TLD servers involved. Set `delay_seconds` to the most conservative limit discovered. Floor: 2 seconds. Default if unknown: 3 seconds.

**Domain list construction:** Each finalist name × TLDs. Start with the global TLDs: `.com` (essential), `.io`, `.co`, `.dev`, `.app`, `.ai`. Then add country-specific TLDs based on the target market (see `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/references/whois-server-map.md` for the market-to-TLD mapping):

- US: add `.us`
- EU: add `.eu`, `.de`, `.fr`, `.it`, `.es`, `.nl`
- UK: add `.co.uk`
- Global: add relevant local TLDs based on the user's primary markets

The scripts handle compound ccTLDs (`.com.br`, `.co.uk`, `.com.au`, etc.) and RDAP-only TLDs (`.dev`, `.app`, `.land`) automatically. Ask the user whether additional TLDs should be checked.
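The list construction and delay selection can be sketched as follows. The finalist names, target market, and discovered rate limit are placeholder assumptions for illustration:

```python
import json

# Illustrative sketch of domain-list construction: finalist names x TLDs,
# serialized as the stdin payload for the whois-check scripts. Finalists,
# target market, and the discovered rate limit are placeholder assumptions.
finalists = ["lumora", "gridly"]
global_tlds = ["com", "io", "co", "dev", "app", "ai"]
market_tlds = {
    "US": ["us"],
    "UK": ["co.uk"],
    "EU": ["eu", "de", "fr", "it", "es", "nl"],
}

target_market = "US"
tlds = global_tlds + market_tlds.get(target_market, [])
domains = [f"{name}.{tld}" for name in finalists for tld in tlds]

# Most conservative discovered limit, floored at 2 s; default 3 s if unknown.
discovered_limits = [2.5]  # seconds, assumed result of the rate-limit websearch
delay_seconds = max(2, max(discovered_limits)) if discovered_limits else 3

payload = json.dumps({"domains": domains, "delay_seconds": delay_seconds})
print(payload)
```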

**Execution:**

```bash
echo '{"domains": ["name1.com", "name1.io", ...], "delay_seconds": N}' | python3 ${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/scripts/whois-check.py
```

Parse the JSON output. Each result includes a `method` field (`"rdap"`, `"whois"`, `"system-whois"`, or `"none"`) and a `manual_url` for any domain whose status couldn't be determined. If the exit code is 1 (RDAP rate-limit circuit breaker): read the partial results, report what was checked, and offer manual URLs for the unchecked domains.
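Consuming that output might look like the sketch below. Only the `method` and `manual_url` fields are specified above; the `domain` and `status` fields and the top-level list shape are assumptions about the output format:

```python
import json

# Illustrative sketch of consuming the checker's JSON output. The `domain`
# and `status` fields (and the list shape) are assumed; only `method` and
# `manual_url` are documented.
raw = """[
  {"domain": "lumora.com", "status": "taken",     "method": "rdap",  "manual_url": null},
  {"domain": "lumora.io",  "status": "available", "method": "whois", "manual_url": null},
  {"domain": "lumora.ai",  "status": "unknown",   "method": "none",
   "manual_url": "https://www.whois.com/whois/lumora.ai"}
]"""

results = json.loads(raw)
# Domains confirmed free, and domains that fell through the whole
# RDAP -> WHOIS -> system-whois chain and need a manual check.
available = [r["domain"] for r in results if r["status"] == "available"]
needs_manual = [r["manual_url"] for r in results if r["method"] == "none"]

print("available:", available)
print("check manually:", needs_manual)
```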

### 4b — Trademark Screening

Load the trademark database reference:

> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/references/trademark-databases.md`

**Tier 1 (automated):** Use websearch for `"[name]" trademark [industry]` and `"[name]" registered trademark` for each finalist.

**Tier 2 (guided):** Based on the target market, generate direct search URLs from the trademark database reference. Present the URLs as a clickable checklist for the user to verify manually.

**Always include:** "This is a preliminary screening. Consult a trademark attorney before committing to a brand name."

### 4c — Linguistic Screening

Invoke the `arn-spark-brand-strategist` agent in `linguistic-screening` mode via the Task tool, passing the model from `.arness/agent-models/spark.md` as the `model` parameter (see "Dispatch convention" in `plugins/arn-spark/skills/arn-spark-ensure-config/references/ensure-config.md` for the fallback). Context: finalist names, target market, and relevant languages (mapped from the target market per the agent's language mapping).

The agent checks each name for negative meanings, phonetic conflicts, cultural associations, and slang issues, using websearch to verify uncertain findings.

### 4d — Social Media Handle Check

Use websearch for each finalist: `"[name]" @[name] site:twitter.com OR site:x.com OR site:instagram.com OR site:github.com`

Report availability for each platform: available, taken, or unknown.

---

## Final Output

### Update naming brief

Update `<vision-dir>/naming-brief.md` with:

- Due Diligence section (domain matrix, trademark results, linguistic results, social handles)
- Final Decision section (chosen name, rationale, runner-ups)

### Write naming report

Load the report template:

> Read `${CLAUDE_PLUGIN_ROOT}/skills/arn-spark-naming/references/naming-report-template.md`

Write `<reports-dir>/naming-report.md` with all sections populated.

### Update product concept (conditional)

**Only if `<vision-dir>/product-concept.md` exists:**

Present the proposed change: adding the brand name to the product concept's Vision section.

Ask the user:

**"Update the product concept with the chosen brand name?"**

1. **Yes, update it** — Add the brand name to the Vision section of product-concept.md
2. **No, keep as-is** — Product concept remains unchanged

If Yes: read product-concept.md, add `**Brand name:** [chosen name]` to the Vision section, and update the document title if it contains a placeholder. Write the file back.

### Summary

Present to the user:

- Chosen brand name
- Naming brief location: `<vision-dir>/naming-brief.md`
- Naming report location: `<reports-dir>/naming-report.md`
- Product concept updated: yes / no
- Any due diligence gaps to address (unchecked domains, trademark databases to verify manually)
- Runner-up names for reference

---

## Agent Invocation Map

| Step | Agent | Mode | Purpose |
|------|-------|------|---------|
| 1 | `arn-spark-brand-strategist` | brand-dna | Analyze brand personality, audience, competitors; recommend categories |
| 2 (each round) | `arn-spark-brand-strategist` | generation | Generate 50-80 candidates per category per round |
| 3 | `arn-spark-brand-strategist` | scoring | Filter and score candidates on the Six Senses framework |
| 4c | `arn-spark-brand-strategist` | linguistic-screening | Check finalists in target languages |
| 4a | Direct (Bash) | scripts | Run WHOIS availability check |
| 4b, 4d | Direct (websearch) | — | Trademark and social media screening |

## Error Handling

- **WHOIS script fails:** Read the partial JSON results. Report checked vs. unchecked domains. Offer the manual fallback: `https://www.whois.com/whois/[domain]`
- **Neither Python nor Node.js available:** Skip automated WHOIS entirely. Present manual check URLs.
- **websearch fails during trademark screening:** Note the gap. Present trademark database URLs for manual verification.
- **User cancels mid-process:** Save current progress to naming-brief.md (incomplete sections marked "-- Pending --"). Resumable via Step 0.
- **Product concept update fails:** Print the proposed change in conversation for manual application.
- **Agent returns insufficient candidates:** Retry with an adjusted prompt (broader techniques, relaxed constraints). If still insufficient, present what was generated and ask the user whether to continue or adjust direction.

## Constraints

- Product-concept.md updates are NET-NEW information (a brand name addition), not stress-test-driven modifications. This does not conflict with the concept-review exclusivity rule (from `arn-spark-concept-review`, which restricts stress-test-recommendation consolidation to that skill alone).
- The user approval gate is MANDATORY before writing to product-concept.md.
- All reference/script paths use `${CLAUDE_PLUGIN_ROOT}`.
- WHOIS queries use a circuit breaker — any error stops all remaining queries immediately to protect the user's IP.
- The naming brief is overwritten on re-run (git provides history). The user is warned and offered resume/start-fresh in Step 0.
@@ -0,0 +1,146 @@

# [Product Name] — Naming Brief

> **Instructions:** Every section below MUST appear in the final document. Replace all bracketed placeholders with actual content. Sections not yet completed during a partial run should contain "-- Pending --" to indicate they are awaiting completion.

## Context

- **Product vision:** [1-2 sentence vision statement from product concept or user description]
- **Value proposition:** [The one thing this product does better than anyone else]
- **Target audience:** [Primary audience description — who they are, what they care about]
- **Target market:** [Country/region for trademark and linguistic screening]
- **Industry/domain:** [The product's industry category]

## Brand DNA

### Brand Personality Profile

[2-3 sentences describing the brand archetype and personality traits. Examples: "The Reliable Expert — trustworthy, knowledgeable, no-nonsense" or "The Rebellious Newcomer — bold, unconventional, challenging the status quo."]

### Target Audience Vocabulary

[Key words and phrases the target audience uses in their domain. Communication style: formal/casual, technical/plain, aspirational/practical.]

### Competitor Name Landscape

| Competitor | Name Type | Observations |
|-----------|-----------|--------------|
| [Name 1] | [Descriptive/Evocative/Invented/Lexical] | [What the name communicates] |
| [Name 2] | [Type] | [Observations] |
| [Name 3] | [Type] | [Observations] |

**Pattern analysis:** [Summary of naming patterns in the space. Are most names descriptive? Is there whitespace in evocative or invented names?]

### Naming Categories Selected

[Category 1] — [Rationale for selection]
[Category 2 if applicable] — [Rationale for selection]

### User Seeds

- **Words loved:** [List of words the user likes]
- **Words hated:** [List of words the user dislikes]
- **Existing name ideas:** [Any names the user has already considered]
- **Themes to explore:** [Specific themes or directions the user wants to pursue]
- **Themes to avoid:** [Directions the user wants to avoid]

---

## Creative Sprint Results

### Generation Statistics

| Category | Round 1 (Seeds) | Round 2 (Sprint) | Round 3 (Mashup) | Total |
|----------|----------------|-------------------|-------------------|-------|
| [Category 1] | [N] | [N] | [N] | [N] |
| [Category 2] | [N] | [N] | [N] | [N] |
| **Total** | **[N]** | **[N]** | **[N]** | **[N]** |

### Candidates by Category

#### [Category 1]: [Category Name]

[Numbered list of all candidates in this category]

#### [Category 2]: [Category Name]

[Numbered list of all candidates in this category]

### User-Starred Favorites

[List of names the user marked as favorites during the collaboration checkpoint]

### Dead Directions

[Directions or themes the user flagged as undesirable, with the reason if provided]

---

## Qualitative Filter Results

### Pass 1 Shortlist (30-40 candidates)

[Numbered list of candidates that survived the initial filter, with a brief reason each was kept]

### Six Senses Scored Table

| Rank | Name | Appearance | Sound | Meaning | Memorability | Function | Scalability | Total |
|------|------|-----------|-------|---------|-------------|----------|-------------|-------|
| 1 | [name] | [1-5] | [1-5] | [1-5] | [1-5] | [1-5] | [1-5] | [6-30] |
| 2 | [name] | [1-5] | [1-5] | [1-5] | [1-5] | [1-5] | [1-5] | [6-30] |
| ... | | | | | | | | |

### Finalists Selected for Due Diligence

[List of 5-8 names selected by the user, with any notes on why they were chosen]

---

## Due Diligence

### Domain Availability

| Name | .com | .io | .co | .dev | .app | .ai | Notes |
|------|------|-----|-----|------|------|-----|-------|
| [name] | [Available/Taken/Error] | ... | ... | ... | ... | ... | |

**Check method:** [RDAP (primary) / WHOIS port-43 (fallback) / System whois / Manual]
**Date checked:** [ISO 8601]

### Trademark Screening

| Name | websearch Result | Risk Level | Database URLs Provided | Notes |
|------|-----------------|-----------|----------------------|-------|
| [name] | [Summary of search findings] | [Clear/Caution/Conflict] | [Yes/No] | |

**Databases checked:** [List of trademark databases]
**Disclaimer:** This is a preliminary screening. Consult a trademark attorney before committing to a brand name.

### Linguistic Screening

| Name | Languages Checked | Issues Found | Notes |
|------|------------------|-------------|-------|
| [name] | [Language list] | [None / Issue description] | |

### Social Media Handles

| Name | X / Twitter | Instagram | GitHub | Notes |
|------|-----------|-----------|--------|-------|
| [name] | [Available/Taken] | [Available/Taken] | [Available/Taken] | |

---

## Final Decision

### Chosen Name

**[Name]**

### Rationale

[2-3 sentences explaining why this name was selected — reference Six Senses scores, due diligence results, and alignment with brand DNA.]

### Runner-Ups (Ordered)

1. **[Name]** — [Brief reason it was not chosen, but remains a strong alternative]
2. **[Name]** — [Brief reason]
3. **[Name]** — [Brief reason]