@warpmetrics/blog-seo 0.0.1

package/README.md ADDED

# @warpmetrics/blog-seo

Self-improving technical blog engine. Generates code-heavy blog posts, tracks search performance via Google Search Console, learns what works, and writes better content each cycle.

## Setup

```bash
npm install
cp .env.example .env  # Add your API keys
```

### Environment variables

```
OPENAI_API_KEY=sk-...
WARPMETRICS_API_KEY=wm_...
```

### Configuration

Create `blog-seo.config.json` in the directory you'll run the CLI from:

```json
{
  "domain": "warpmetrics.com",
  "siteUrl": "sc-domain:warpmetrics.com",
  "outputDir": "./src/content/blog",
  "manifest": "./blog-seo.json"
}
```

| Field | Description |
|-------|-------------|
| `domain` | Your domain. Used to fetch product context from `llms-full.txt` / `llms.txt` |
| `siteUrl` | Google Search Console site URL (usually `sc-domain:yourdomain.com`) |
| `outputDir` | Where generated markdown files are written |
| `manifest` | Path to the manifest file that tracks generated posts |

### Product context

blog-seo fetches product context to ground generated content in accurate information. It looks for:

1. `https://{domain}/llms-full.txt` (preferred)
2. `https://{domain}/llms.txt` (fallback)

These follow the [llms.txt convention](https://llmstxt.org/) — a plain-text file describing your product, APIs, features, and pricing. When present, this context is included in every generation prompt so the LLM uses real API names and features instead of hallucinating.

If neither file exists, posts are generated without product context.
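The fallback order above can be sketched as follows (a minimal illustration; the function names are assumptions, and the package's actual implementation in `src/core/context.js` may differ):

```javascript
// Candidate context URLs for a domain, in preference order.
function contextUrls(domain) {
  return [
    `https://${domain}/llms-full.txt`, // preferred
    `https://${domain}/llms.txt`,      // fallback
  ];
}

// Try each URL in order; return the first body that exists, else null.
async function fetchContext(domain) {
  for (const url of contextUrls(domain)) {
    try {
      const res = await fetch(url);
      if (res.ok) return await res.text();
    } catch {
      // Network error; try the next candidate.
    }
  }
  return null;
}
```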

## Commands

### `blog-seo auth`

Authenticate with Google Search Console via OAuth.

```bash
npx blog-seo auth
```

Saves credentials to `.gsc-credentials.json`.

### `blog-seo seed`

Generate initial blog posts from built-in seed topics. No GSC data needed.

```bash
npx blog-seo seed --max 3
```

| Flag | Default | Description |
|------|---------|-------------|
| `--max` | 5 | Maximum posts to generate |
| `--output` | from config | Output directory |
| `--max-retries` | 2 | Retries per post on validation failure |

### `blog-seo analyze`

Display GSC performance for existing blog posts.

```bash
npx blog-seo analyze --days 30
```

| Flag | Default | Description |
|------|---------|-------------|
| `--days` | 30 | Days of data to analyze |
| `--min-impressions` | 50 | Minimum impressions to consider |

### `blog-seo run`

Full flywheel: Feedback → Learn → Plan → Generate.

```bash
npx blog-seo run --max-new 3 --max-rewrites 2
```

| Flag | Default | Description |
|------|---------|-------------|
| `--max-new` | 3 | Max new posts per run |
| `--max-rewrites` | 2 | Max rewrites of underperformers |
| `--min-days` | 14 | Days before tracking a post |
| `--min-impressions` | 50 | Min impressions for feedback |
| `--max-retries` | 2 | Retries per post on validation failure |
| `--output` | from config | Output directory |
| `--manifest` | from config | Manifest file path |

## How it works

Each `run` cycle executes four phases:

1. **Feedback** — Fetches GSC metrics for tracked posts. Classifies each as High Traffic, Improved, Stagnant, or Declining.

2. **Learn** — Analyzes high-performing posts to find patterns (topic, structure, code density). Updates generation prompts with learned patterns.

3. **Plan** — Identifies topic gaps from GSC query data (queries with impressions but no matching content). Selects underperformers for rewrites.

4. **Generate** — Writes new posts or rewrites underperformers. Each post goes through a two-step process (outline → full post) and is validated for structure, code correctness, quality, and SEO before being written.
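The generate-and-validate step in phase 4 can be sketched like this (illustrative only; `generatePost` and `validatePost` are hypothetical stand-ins for the package's internals, injected so the loop stays testable):

```javascript
// Generate a post, retrying up to maxRetries times when validation fails.
// Prior validation errors are passed back to the generator as feedback.
async function generateWithRetries(topic, { generatePost, validatePost, maxRetries = 2 }) {
  let lastErrors = [];
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const post = await generatePost(topic, lastErrors);
    const { ok, errors } = validatePost(post);
    if (ok) return { post, attempts: attempt + 1 };
    lastErrors = errors;
  }
  return { post: null, attempts: maxRetries + 1, errors: lastErrors };
}
```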

Every cycle creates a WarpMetrics run with outcomes and a "Continue Optimization" act that links to the next run, forming an unbroken improvement chain.

## Generated post format

Posts are markdown with YAML frontmatter:

```markdown
---
title: "How to Track LLM API Costs"
description: "Track OpenAI, Anthropic, and Cohere costs..."
date: 2026-02-15
author: "blog-seo"
keywords: ["llm costs", "api cost tracking"]
generated: true
---

# How to Track LLM API Costs

[code-heavy content]
```

The `generated: true` field distinguishes machine-generated posts from human-written ones.
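For example, a site build could use that flag to separate machine-generated posts from human-written ones. A minimal check (not part of this package) might look like:

```javascript
// Returns true when a markdown post's YAML frontmatter contains `generated: true`.
function isGenerated(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return false; // no frontmatter block at the top of the file
  return /^generated:\s*true\s*$/m.test(match[1]);
}
```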

## Manifest

`blog-seo.json` tracks which posts were generated and their baselines:

```json
{
  "tracking-llm-costs": {
    "slug": "tracking-llm-costs",
    "title": "How to Track LLM API Costs",
    "generatedAt": "2026-02-15T10:00:00Z",
    "runId": "wm_run_xxx",
    "targetKeywords": ["track llm costs"],
    "version": 1,
    "baseline": { "ctr": null, "position": null, "impressions": 0 }
  }
}
```

## Validation

Every generated post passes four validators before being written:

1. **Structure** — Frontmatter, H1, 3+ H2 sections, 2+ code blocks, 800-3000 words
2. **Code** — Matched code fences, no truncated snippets
3. **Quality** — LLM-scored 1-10 (must score 7+)
4. **SEO** — Title contains keyword, description 140-160 chars, H2 contains keyword
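The structure checks can be sketched as a single function (an illustration of the rules above, not the package's actual validator; counting words with code blocks excluded is an assumption):

```javascript
// Check a generated post against the structural rules:
// frontmatter block, an H1, 3+ H2 sections, 2+ code blocks, 800-3000 words.
function validateStructure(markdown) {
  const errors = [];
  if (!/^---\n[\s\S]*?\n---/.test(markdown)) errors.push('missing frontmatter');
  const body = markdown.replace(/^---\n[\s\S]*?\n---/, '');
  if (!/^# .+/m.test(body)) errors.push('missing H1');
  const h2Count = (body.match(/^## .+/gm) || []).length;
  if (h2Count < 3) errors.push(`only ${h2Count} H2 sections (need 3+)`);
  const fenceCount = (body.match(/^```/gm) || []).length;
  if (fenceCount < 4) errors.push('fewer than 2 code blocks'); // 2 blocks = 4 fence lines
  const words = body.replace(/```[\s\S]*?```/g, '').split(/\s+/).filter(Boolean).length;
  if (words < 800 || words > 3000) errors.push(`word count ${words} outside 800-3000`);
  return { ok: errors.length === 0, errors };
}
```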
package/package.json ADDED

```json
{
  "name": "@warpmetrics/blog-seo",
  "version": "0.0.1",
  "description": "Self-improving technical blog engine with Google Search Console feedback",
  "type": "module",
  "bin": {
    "blog-seo": "./src/cli/index.js"
  },
  "exports": {
    ".": "./src/index.js",
    "./core/*": "./src/core/*.js"
  },
  "files": [
    "src",
    "!src/**/*.test.js"
  ],
  "scripts": {
    "dev": "node src/cli/index.js",
    "test": "vitest run",
    "test:watch": "vitest",
    "preversion": "npm test",
    "release:patch": "npm version patch && git push origin main --tags",
    "release:minor": "npm version minor && git push origin main --tags"
  },
  "dependencies": {
    "@warpmetrics/warp": "^0.0.13",
    "chalk": "^5.4.1",
    "commander": "^12.1.0",
    "dotenv": "^16.4.7",
    "googleapis": "^144.0.0",
    "open": "^10.1.0",
    "openai": "^4.77.3",
    "ora": "^8.1.1"
  },
  "keywords": [
    "seo",
    "blog",
    "google-search-console",
    "self-improving",
    "ai",
    "technical-writing"
  ],
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://github.com/warpmetrics/blog-seo.git"
  },
  "homepage": "https://github.com/warpmetrics/blog-seo#readme",
  "bugs": {
    "url": "https://github.com/warpmetrics/blog-seo/issues"
  },
  "devDependencies": {
    "vitest": "^3.2.4"
  }
}
```
package/src/cli/analyze.js ADDED

```js
import { createGSCClient } from '../core/gsc-client.js';
import chalk from 'chalk';
import ora from 'ora';
import fs from 'fs/promises';

export async function analyzeCommand(options) {
  const spinner = ora('Loading configuration...').start();

  try {
    const config = JSON.parse(await fs.readFile('./blog-seo.config.json', 'utf-8'));

    const gsc = createGSCClient('./.gsc-credentials.json');

    spinner.text = 'Authenticating with Google Search Console...';
    await gsc.authenticate();

    spinner.text = 'Fetching performance data...';

    const endDate = new Date().toISOString().split('T')[0];
    const startDate = new Date(Date.now() - parseInt(options.days) * 24 * 60 * 60 * 1000)
      .toISOString()
      .split('T')[0];

    const rows = await gsc.getPagePerformance(config.siteUrl, startDate, endDate);

    // Filter to blog posts only
    const minImpressions = parseInt(options.minImpressions);
    const blogPosts = rows
      .filter(row => {
        const url = row.keys[0];
        if (!url.includes('/blog/')) return false;
        return row.impressions >= minImpressions;
      })
      .sort((a, b) => b.impressions - a.impressions);

    spinner.succeed(`Analyzed ${blogPosts.length} blog posts`);

    console.log(chalk.bold('\nBlog Performance Summary\n'));

    if (blogPosts.length === 0) {
      console.log(chalk.gray('No blog posts with enough impressions found.'));
      console.log(chalk.gray(`Run ${chalk.white('blog-seo seed')} to generate initial posts.`));
      return;
    }

    const avgCTR = (blogPosts.reduce((sum, r) => sum + r.ctr, 0) / blogPosts.length * 100).toFixed(2);
    const avgPosition = (blogPosts.reduce((sum, r) => sum + r.position, 0) / blogPosts.length).toFixed(1);
    const totalClicks = blogPosts.reduce((sum, r) => sum + r.clicks, 0);

    console.log(`Posts analyzed: ${chalk.cyan(blogPosts.length)}`);
    console.log(`Average CTR: ${chalk.cyan(avgCTR + '%')}`);
    console.log(`Average Position: ${chalk.cyan(avgPosition)}`);
    console.log(`Total Clicks: ${chalk.cyan(totalClicks.toLocaleString())}`);

    const topPerformers = blogPosts.filter(r => r.position < 10);
    const underperformers = blogPosts.filter(r => r.position > 15 && r.impressions >= 200);

    if (topPerformers.length > 0) {
      console.log(chalk.bold('\nTop Performers (position < 10)\n'));
      topPerformers.slice(0, 5).forEach((row, i) => {
        const slug = row.keys[0].split('/blog/')[1]?.replace(/\/$/, '') || row.keys[0];
        console.log(`${i + 1}. ${chalk.green(slug)}`);
        console.log(`   Position: ${chalk.cyan(row.position.toFixed(1))} | CTR: ${chalk.cyan((row.ctr * 100).toFixed(2) + '%')} | Clicks: ${chalk.gray(row.clicks)}\n`);
      });
    }

    if (underperformers.length > 0) {
      console.log(chalk.bold('\nRewrite Candidates (position > 15, 200+ impressions)\n'));
      underperformers.slice(0, 5).forEach((row, i) => {
        const slug = row.keys[0].split('/blog/')[1]?.replace(/\/$/, '') || row.keys[0];
        console.log(`${i + 1}. ${chalk.yellow(slug)}`);
        console.log(`   Position: ${chalk.red(row.position.toFixed(1))} | Impressions: ${chalk.gray(row.impressions.toLocaleString())} | CTR: ${chalk.gray((row.ctr * 100).toFixed(2) + '%')}\n`);
      });
    }

    console.log(chalk.gray(`Run ${chalk.white('blog-seo run')} to generate new posts and rewrite underperformers`));

  } catch (err) {
    spinner.fail('Analysis failed');
    console.error(chalk.red(err.message));
    if (err.stack) {
      console.error(chalk.gray(err.stack));
    }
    process.exit(1);
  }
}
```
package/src/cli/auth.js ADDED

```js
import { createGSCClient } from '../core/gsc-client.js';
import chalk from 'chalk';
import ora from 'ora';

export async function authCommand() {
  const spinner = ora('Authenticating with Google Search Console...').start();

  try {
    const client = createGSCClient('./.gsc-credentials.json');

    // Check if already authenticated
    try {
      const isAuth = await client.authenticate();
      if (isAuth) {
        spinner.succeed('Already authenticated!');
        process.exit(0);
      }
    } catch {
      // Not authenticated or token expired — proceed to OAuth
    }

    spinner.text = 'Opening browser for OAuth...';
    await client.initiateOAuth();

    spinner.succeed('Authentication successful!');
    console.log(chalk.gray('Credentials saved to .gsc-credentials.json'));
    console.log(chalk.yellow('\nWARNING: Add .gsc-credentials.json to .gitignore'));
    process.exit(0);
  } catch (err) {
    spinner.fail('Authentication failed');
    console.error(chalk.red(err.message));
    process.exit(1);
  }
}
```
package/src/cli/index.js ADDED

```js
#!/usr/bin/env node
import { Command } from 'commander';
import { authCommand } from './auth.js';
import { analyzeCommand } from './analyze.js';
import { runCommand } from './run.js';
import { seedCommand } from './seed.js';
import { readFileSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import dotenv from 'dotenv';

dotenv.config();

const __dirname = dirname(fileURLToPath(import.meta.url));
const { version } = JSON.parse(readFileSync(join(__dirname, '../../package.json'), 'utf-8'));

const program = new Command();

program
  .name('blog-seo')
  .description('Self-improving technical blog engine with Google Search Console feedback')
  .version(version);

program
  .command('auth')
  .description('Authenticate with Google Search Console')
  .action(authCommand);

program
  .command('analyze')
  .description('Display GSC performance for existing blog posts')
  .option('--days <number>', 'Days of data to analyze', '30')
  .option('--min-impressions <number>', 'Minimum impressions to consider', '50')
  .action(analyzeCommand);

program
  .command('seed')
  .description('Generate initial blog posts from built-in seed topics')
  .option('--max <number>', 'Maximum posts to generate', '5')
  .option('--output <path>', 'Output directory for markdown files')
  .option('--max-retries <number>', 'Max retries per post on validation failure', '2')
  .action(seedCommand);

program
  .command('run')
  .description('Run the full blog-seo flywheel: feedback → learn → plan → generate')
  .option('--max-new <number>', 'Maximum new posts per run', '3')
  .option('--max-rewrites <number>', 'Maximum rewrites of underperformers', '2')
  .option('--min-days <number>', 'Days before tracking a post', '14')
  .option('--min-impressions <number>', 'Minimum impressions for feedback', '50')
  .option('--max-retries <number>', 'Max retries per post on validation failure', '2')
  .option('--output <path>', 'Output directory for markdown files')
  .option('--manifest <path>', 'Manifest file path')
  .action(runCommand);

program.parse();
```
package/src/cli/run.js ADDED

```js
import { createGSCClient } from '../core/gsc-client.js';
import { generate } from '../core/generator.js';
import { trackPerformance } from '../core/tracker.js';
import { analyze } from '../core/improver.js';
import { identifyTopics } from '../core/topic-planner.js';
import { createPromptManager } from '../core/prompt-manager.js';
import { fetchContext } from '../core/context.js';
import OpenAI from 'openai';
import { warp, run, group, outcome, act, flush } from '@warpmetrics/warp';
import chalk from 'chalk';
import ora from 'ora';
import fs from 'fs/promises';
import path from 'path';

export async function runCommand(options) {
  const spinner = ora('Starting Blog SEO...').start();

  try {
    const config = JSON.parse(await fs.readFile('./blog-seo.config.json', 'utf-8'));

    // Poll for Continue Optimization act from previous run
    let prevAct = null;
    try {
      const res = await fetch(
        'https://api.warpmetrics.com/v1/acts?name=Continue%20Optimization&hasFollowUp=false&limit=1',
        { headers: { 'Authorization': `Bearer ${process.env.WARPMETRICS_API_KEY}` } }
      );
      if (res.ok) {
        const body = await res.json();
        if (body.data?.length > 0) prevAct = body.data[0].id;
      }
    } catch {
      // API unreachable; start a fresh chain
    }

    // Create the run, linked to the previous run if one exists
    const r = prevAct
      ? run(prevAct, 'Blog SEO', { domain: config.domain })
      : run('Blog SEO', { domain: config.domain });

    // Setup
    const openai = warp(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }), {
      apiKey: process.env.WARPMETRICS_API_KEY,
    });

    const gsc = createGSCClient('./.gsc-credentials.json');
    await gsc.authenticate();

    const prompts = createPromptManager('.');
    await prompts.initialize();

    // Fetch product context
    spinner.text = 'Fetching product context...';
    const context = config.domain ? await fetchContext(config.domain) : null;
    if (context) {
      spinner.succeed('Product context loaded');
    } else {
      spinner.warn('No product context found (no llms-full.txt or llms.txt)');
    }

    const outputDir = path.resolve(options.output || config.outputDir || './output');
    await fs.mkdir(outputDir, { recursive: true });

    const manifestPath = options.manifest || config.manifest || './blog-seo.json';

    // Load manifest
    let manifest = {};
    try {
      manifest = JSON.parse(await fs.readFile(manifestPath, 'utf-8'));
    } catch {
      // No manifest yet; start with an empty one
    }

    // ═══════════════════════════════════════════
    // Phase 1: Feedback
    // ═══════════════════════════════════════════
    spinner.text = 'Collecting feedback...';
    const feedbackGrp = group(r, 'Feedback');

    const feedbackResults = await trackPerformance(
      gsc,
      feedbackGrp,
      config.siteUrl,
      manifest,
      parseInt(options.minDays)
    );

    // Update baselines in manifest
    if (feedbackResults.updates) {
      for (const [slug, update] of Object.entries(feedbackResults.updates)) {
        if (manifest[slug]) {
          manifest[slug].baseline = update;
        }
      }
    }

    if (feedbackResults.tracked > 0) {
      spinner.succeed(`Feedback: ${feedbackResults.tracked} tracked, ${feedbackResults.highTraffic} high traffic`);
    } else {
      spinner.info('Feedback: no posts eligible yet');
    }

    // ═══════════════════════════════════════════
    // Phase 2: Learn
    // ═══════════════════════════════════════════
    spinner.start('Analyzing patterns...');
    const learnGrp = group(r, 'Learn');

    const learnResults = await analyze(
      openai,
      process.env.WARPMETRICS_API_KEY,
      prompts,
      learnGrp,
      config.domain
    );

    if (learnResults) {
      spinner.succeed(`Learn: ${learnResults.patternsLearned} patterns learned`);
    } else {
      spinner.info('Learn: not enough data yet');
    }

    // ═══════════════════════════════════════════
    // Phase 3: Plan
    // ═══════════════════════════════════════════
    spinner.start('Identifying topics...');
    const planGrp = group(r, 'Plan');

    const planResults = await identifyTopics(
      openai,
      gsc,
      planGrp,
      config.siteUrl,
      manifest,
      {
        maxNew: parseInt(options.maxNew),
        maxRewrites: parseInt(options.maxRewrites),
      }
    );

    spinner.succeed(
      `Plan: ${planResults.newTopics.length} new topics, ${planResults.rewriteCandidates.length} rewrites`
    );

    // ═══════════════════════════════════════════
    // Phase 4: Generate
    // ═══════════════════════════════════════════
    spinner.start('Generating posts...');
    const generateGrp = group(r, 'Generate');

    // Generate new posts
    const allTopics = [...planResults.newTopics];

    // For rewrites, load existing content
    for (const candidate of planResults.rewriteCandidates) {
      try {
        const filePath = path.join(outputDir, `${candidate.slug}.md`);
        const existingContent = await fs.readFile(filePath, 'utf-8');
        allTopics.push({ ...candidate, existingContent });
      } catch {
        // File not found — skip rewrite
      }
    }

    const genResults = await generate(openai, prompts, generateGrp, allTopics, {
      maxRetries: parseInt(options.maxRetries) || 2,
      context,
    });

    spinner.succeed(
      `Generate: ${genResults.results.length} created` +
      (genResults.failures.length > 0 ? `, ${genResults.failures.length} failed` : '')
    );

    // ═══════════════════════════════════════════
    // Write files and update manifest
    // ═══════════════════════════════════════════
    for (const result of genResults.results) {
      const filePath = path.join(outputDir, `${result.slug}.md`);
      await fs.writeFile(filePath, result.markdown);

      manifest[result.slug] = {
        slug: result.slug,
        title: result.title,
        generatedAt: result.generatedAt,
        runId: r.id,
        targetKeywords: result.targetKeywords,
        version: (manifest[result.slug]?.version || 0) + 1,
        baseline: manifest[result.slug]?.baseline || { ctr: null, position: null, impressions: 0 },
      };
    }

    await fs.writeFile(manifestPath, JSON.stringify(manifest, null, 2));

    // ═══════════════════════════════════════════
    // Run complete — link to next run
    // ═══════════════════════════════════════════
    const allFailed = genResults.results.length === 0 && genResults.failures.length > 0;
    // Filter callbacks use `p` to avoid shadowing the run handle `r`
    const runComplete = outcome(r, allFailed ? 'Run Failed' : 'Run Complete', {
      postsGenerated: genResults.results.filter(p => !p.rewrite).length,
      postsRewritten: genResults.results.filter(p => p.rewrite).length,
      generationFailed: genResults.failures.length,
      tracked: feedbackResults.tracked,
      highTraffic: feedbackResults.highTraffic,
      patternsLearned: learnResults?.patternsLearned || 0,
      topicsIdentified: planResults.newTopics.length,
    });

    act(runComplete, 'Continue Optimization');

    await flush();

    // Summary
    console.log(chalk.bold('\nBlog SEO Complete\n'));
    console.log(`  Feedback: ${feedbackResults.tracked} tracked, ${feedbackResults.highTraffic} high traffic`);
    console.log(`  Learn: ${learnResults ? `${learnResults.patternsLearned} patterns` : 'insufficient data'}`);
    console.log(`  Plan: ${planResults.newTopics.length} new, ${planResults.rewriteCandidates.length} rewrites`);
    console.log(`  Generate: ${genResults.results.length} created, ${genResults.failures.length} failed`);
    console.log(chalk.gray(`\n  Run: https://app.warpmetrics.com/runs/${r.id}`));
    console.log(chalk.gray(`  Output: ${outputDir}`));
    console.log(chalk.gray(`  Manifest: ${manifestPath}`));

  } catch (err) {
    spinner.fail('Blog SEO failed');
    console.error(chalk.red(err.message));
    if (err.stack) {
      console.error(chalk.gray(err.stack));
    }
    process.exit(1);
  }
}
```