agileflow 3.2.1 → 3.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (134)
  1. package/CHANGELOG.md +10 -0
  2. package/README.md +6 -6
  3. package/lib/feature-flags.js +32 -4
  4. package/lib/skill-loader.js +0 -1
  5. package/package.json +1 -1
  6. package/scripts/agileflow-statusline.sh +81 -0
  7. package/scripts/babysit-clear-restore.js +154 -0
  8. package/scripts/claude-tmux.sh +120 -24
  9. package/scripts/claude-watchdog.sh +225 -0
  10. package/scripts/generators/agent-registry.js +14 -1
  11. package/scripts/generators/inject-babysit.js +22 -9
  12. package/scripts/generators/inject-help.js +19 -9
  13. package/scripts/lib/README-portable-tasks.md +424 -0
  14. package/scripts/lib/audit-cleanup.js +250 -0
  15. package/scripts/lib/audit-registry.js +248 -0
  16. package/scripts/lib/configure-detect.js +20 -0
  17. package/scripts/lib/feature-catalog.js +13 -2
  18. package/scripts/lib/gate-enforcer.js +295 -0
  19. package/scripts/lib/model-profiles.js +98 -0
  20. package/scripts/lib/signal-detectors.js +1 -1
  21. package/scripts/lib/skill-catalog.js +557 -0
  22. package/scripts/lib/skill-recommender.js +311 -0
  23. package/scripts/lib/tdd-phase-manager.js +455 -0
  24. package/scripts/lib/team-events.js +76 -8
  25. package/scripts/lib/tmux-group-colors.js +113 -0
  26. package/scripts/messaging-bridge.js +209 -1
  27. package/scripts/spawn-audit-sessions.js +549 -0
  28. package/scripts/team-manager.js +37 -16
  29. package/scripts/tmux-close-windows.sh +180 -0
  30. package/scripts/tmux-restore-window.sh +67 -0
  31. package/scripts/tmux-save-closed-window.sh +35 -0
  32. package/src/core/agents/ads-audit-budget.md +181 -0
  33. package/src/core/agents/ads-audit-compliance.md +169 -0
  34. package/src/core/agents/ads-audit-creative.md +164 -0
  35. package/src/core/agents/ads-audit-google.md +226 -0
  36. package/src/core/agents/ads-audit-meta.md +183 -0
  37. package/src/core/agents/ads-audit-tracking.md +197 -0
  38. package/src/core/agents/ads-consensus.md +322 -0
  39. package/src/core/agents/brainstorm-analyzer-features.md +169 -0
  40. package/src/core/agents/brainstorm-analyzer-growth.md +161 -0
  41. package/src/core/agents/brainstorm-analyzer-integration.md +172 -0
  42. package/src/core/agents/brainstorm-analyzer-market.md +147 -0
  43. package/src/core/agents/brainstorm-analyzer-ux.md +167 -0
  44. package/src/core/agents/brainstorm-consensus.md +237 -0
  45. package/src/core/agents/completeness-analyzer-api.md +190 -0
  46. package/src/core/agents/completeness-analyzer-conditional.md +201 -0
  47. package/src/core/agents/completeness-analyzer-handlers.md +159 -0
  48. package/src/core/agents/completeness-analyzer-imports.md +159 -0
  49. package/src/core/agents/completeness-analyzer-routes.md +182 -0
  50. package/src/core/agents/completeness-analyzer-state.md +188 -0
  51. package/src/core/agents/completeness-analyzer-stubs.md +198 -0
  52. package/src/core/agents/completeness-consensus.md +286 -0
  53. package/src/core/agents/perf-consensus.md +2 -2
  54. package/src/core/agents/security-consensus.md +2 -2
  55. package/src/core/agents/seo-analyzer-content.md +167 -0
  56. package/src/core/agents/seo-analyzer-images.md +187 -0
  57. package/src/core/agents/seo-analyzer-performance.md +206 -0
  58. package/src/core/agents/seo-analyzer-schema.md +176 -0
  59. package/src/core/agents/seo-analyzer-sitemap.md +172 -0
  60. package/src/core/agents/seo-analyzer-technical.md +144 -0
  61. package/src/core/agents/seo-consensus.md +289 -0
  62. package/src/core/agents/test-consensus.md +2 -2
  63. package/src/core/commands/ads/audit.md +375 -0
  64. package/src/core/commands/ads/budget.md +97 -0
  65. package/src/core/commands/ads/competitor.md +112 -0
  66. package/src/core/commands/ads/creative.md +85 -0
  67. package/src/core/commands/ads/google.md +112 -0
  68. package/src/core/commands/ads/landing.md +119 -0
  69. package/src/core/commands/ads/linkedin.md +112 -0
  70. package/src/core/commands/ads/meta.md +91 -0
  71. package/src/core/commands/ads/microsoft.md +115 -0
  72. package/src/core/commands/ads/plan.md +321 -0
  73. package/src/core/commands/ads/tiktok.md +129 -0
  74. package/src/core/commands/ads/youtube.md +124 -0
  75. package/src/core/commands/ads.md +128 -0
  76. package/src/core/commands/babysit.md +250 -1344
  77. package/src/core/commands/code/completeness.md +466 -0
  78. package/src/core/commands/{audit → code}/legal.md +26 -16
  79. package/src/core/commands/{audit → code}/logic.md +27 -16
  80. package/src/core/commands/{audit → code}/performance.md +30 -20
  81. package/src/core/commands/{audit → code}/security.md +32 -19
  82. package/src/core/commands/{audit → code}/test.md +30 -20
  83. package/src/core/commands/{discovery → ideate}/brief.md +12 -12
  84. package/src/core/commands/{discovery/new.md → ideate/discover.md} +13 -13
  85. package/src/core/commands/ideate/features.md +435 -0
  86. package/src/core/commands/seo/audit.md +373 -0
  87. package/src/core/commands/seo/competitor.md +174 -0
  88. package/src/core/commands/seo/content.md +107 -0
  89. package/src/core/commands/seo/geo.md +229 -0
  90. package/src/core/commands/seo/hreflang.md +140 -0
  91. package/src/core/commands/seo/images.md +96 -0
  92. package/src/core/commands/seo/page.md +198 -0
  93. package/src/core/commands/seo/plan.md +163 -0
  94. package/src/core/commands/seo/programmatic.md +131 -0
  95. package/src/core/commands/seo/references/cwv-thresholds.md +64 -0
  96. package/src/core/commands/seo/references/eeat-framework.md +110 -0
  97. package/src/core/commands/seo/references/quality-gates.md +91 -0
  98. package/src/core/commands/seo/references/schema-types.md +102 -0
  99. package/src/core/commands/seo/schema.md +183 -0
  100. package/src/core/commands/seo/sitemap.md +97 -0
  101. package/src/core/commands/seo/technical.md +100 -0
  102. package/src/core/commands/seo.md +107 -0
  103. package/src/core/commands/skill/list.md +68 -212
  104. package/src/core/commands/skill/recommend.md +216 -0
  105. package/src/core/commands/tdd-next.md +238 -0
  106. package/src/core/commands/tdd.md +210 -0
  107. package/src/core/experts/_core-expertise.yaml +105 -0
  108. package/src/core/experts/analytics/expertise.yaml +5 -99
  109. package/src/core/experts/codebase-query/expertise.yaml +3 -72
  110. package/src/core/experts/compliance/expertise.yaml +6 -72
  111. package/src/core/experts/database/expertise.yaml +9 -52
  112. package/src/core/experts/documentation/expertise.yaml +7 -140
  113. package/src/core/experts/integrations/expertise.yaml +7 -127
  114. package/src/core/experts/mentor/expertise.yaml +8 -35
  115. package/src/core/experts/monitoring/expertise.yaml +7 -49
  116. package/src/core/experts/performance/expertise.yaml +1 -26
  117. package/src/core/experts/security/expertise.yaml +9 -34
  118. package/src/core/experts/ui/expertise.yaml +6 -36
  119. package/src/core/knowledge/ads/ad-audit-checklist-scoring.md +424 -0
  120. package/src/core/knowledge/ads/ad-optimization-logic.md +590 -0
  121. package/src/core/knowledge/ads/ad-technical-specifications.md +385 -0
  122. package/src/core/knowledge/ads/definitive-advertising-reference-2026.md +506 -0
  123. package/src/core/knowledge/ads/paid-advertising-research-2026.md +445 -0
  124. package/src/core/templates/agileflow-metadata.json +15 -1
  125. package/tools/cli/installers/ide/_base-ide.js +42 -5
  126. package/tools/cli/installers/ide/claude-code.js +13 -4
  127. package/tools/cli/lib/content-injector.js +160 -12
  128. package/tools/cli/lib/docs-setup.js +1 -1
  129. package/src/core/commands/skill/create.md +0 -698
  130. package/src/core/commands/skill/delete.md +0 -316
  131. package/src/core/commands/skill/edit.md +0 -359
  132. package/src/core/commands/skill/test.md +0 -394
  133. package/src/core/commands/skill/upgrade.md +0 -552
  134. package/src/core/templates/skill-template.md +0 -117
@@ -0,0 +1,180 @@
+ #!/bin/bash
+ # tmux-close-windows.sh - Batch close tmux windows via multi-select picker
+ #
+ # Called by Alt+W (Shift+Alt+w) keybind. Opens an fzf multi-select picker
+ # (or bash fallback) to close multiple windows at once. Each closed window
+ # is saved to the restore stack for Alt+T recovery.
+ #
+ # Usage: Run inside a tmux popup (display-popup -E)
+
+ STACK_FILE="$HOME/.tmux_closed_windows.log"
+ MAX_ENTRIES=20
+ SESSION=$(tmux display-message -p '#{session_name}' 2>/dev/null)
+ CURRENT_IDX=$(tmux display-message -p '#{window_index}' 2>/dev/null)
+
+ if [ -z "$SESSION" ]; then
+   echo "Error: not in a tmux session"
+   exit 1
+ fi
+
+ # Build window list (excluding current window); fields are tab-delimited
+ WINDOWS=()
+ while IFS=$'\t' read -r idx name path pane_count; do
+   [ "$idx" = "$CURRENT_IDX" ] && continue
+   WINDOWS+=("$idx"$'\t'"$name"$'\t'"$path"$'\t'"$pane_count")
+ done < <(tmux list-windows -t "$SESSION" -F $'#{window_index}\t#{window_name}\t#{pane_current_path}\t#{window_panes}' 2>/dev/null)
+
+ if [ "${#WINDOWS[@]}" -eq 0 ]; then
+   echo "No other windows to close"
+   sleep 1
+   exit 0
+ fi
+
+ # Save a window's state to the restore stack (same format as tmux-save-closed-window.sh)
+ save_window() {
+   local idx="$1"
+   local win_name pane_path claude_uuid timestamp
+
+   win_name=$(tmux display-message -t "$SESSION:$idx" -p '#{window_name}' 2>/dev/null || echo '')
+   pane_path=$(tmux display-message -t "$SESSION:$idx" -p '#{pane_current_path}' 2>/dev/null || echo "$HOME")
+   claude_uuid=$(tmux show-options -p -t "$SESSION:$idx" -qv @claude_uuid 2>/dev/null || echo '')
+   timestamp=$(date +%s)
+
+   echo "${win_name}|${pane_path}|${claude_uuid}|${timestamp}" >> "$STACK_FILE"
+
+   # Prune to MAX_ENTRIES
+   if [ -f "$STACK_FILE" ]; then
+     local line_count
+     line_count=$(wc -l < "$STACK_FILE")
+     if [ "$line_count" -gt "$MAX_ENTRIES" ]; then
+       tail -n "$MAX_ENTRIES" "$STACK_FILE" > "${STACK_FILE}.tmp" && mv "${STACK_FILE}.tmp" "$STACK_FILE"
+     fi
+   fi
+ }
+
+ # Format window list for display
+ format_entry() {
+   local idx="$1" name="$2" path="$3"
+   local short_path
+   short_path=$(echo "$path" | sed "s|^$HOME|~|")
+   printf '[%s] %-14s %s' "$idx" "$name" "$short_path"
+ }
+
+ # ── fzf path ────────────────────────────────────────────────────────────────
+ if command -v fzf &>/dev/null; then
+   # Build fzf input
+   FZF_INPUT=""
+   for entry in "${WINDOWS[@]}"; do
+     IFS=$'\t' read -r idx name path pane_count <<< "$entry"
+     FZF_INPUT+="$(format_entry "$idx" "$name" "$path")"$'\n'
+   done
+
+   SELECTED=$(printf '%s' "$FZF_INPUT" | fzf --multi \
+     --header="TAB=select ENTER=close selected ESC=cancel" \
+     --prompt="Close windows> " \
+     --color="fg:#a9b1d6,bg:#1a1b26,hl:#e8683a,fg+:#e0e0e0,bg+:#2d2f3a,hl+:#e8683a,pointer:#e8683a,marker:#e8683a,prompt:#7aa2f7" \
+     2>/dev/null)
+
+   if [ -z "$SELECTED" ]; then
+     exit 0
+   fi
+
+   # Extract window indices from selection, sort descending to avoid renumbering
+   INDICES=()
+   while IFS= read -r line; do
+     idx=$(echo "$line" | sed 's/^\[//' | sed 's/\].*//')
+     INDICES+=("$idx")
+   done <<< "$SELECTED"
+
+   # Sort indices in reverse order (highest first)
+   IFS=$'\n' SORTED=($(printf '%s\n' "${INDICES[@]}" | sort -rn)); unset IFS
+
+   CLOSED=0
+   for idx in "${SORTED[@]}"; do
+     save_window "$idx"
+     tmux kill-window -t "$SESSION:$idx" 2>/dev/null && CLOSED=$((CLOSED + 1))
+   done
+
+   tmux display-message "Closed $CLOSED window(s)"
+   exit 0
+ fi
+
+ # ── Bash fallback (no fzf) ──────────────────────────────────────────────────
+ SELECTED_FLAGS=()
+ for _ in "${WINDOWS[@]}"; do
+   SELECTED_FLAGS+=(0)
+ done
+
+ while true; do
+   clear
+   printf '\n \033[1;38;5;208mClose Windows\033[0m (type numbers to toggle, Enter=close, q=cancel)\n\n'
+
+   i=0
+   for entry in "${WINDOWS[@]}"; do
+     IFS=$'\t' read -r idx name path pane_count <<< "$entry"
+     short_path=$(echo "$path" | sed "s|^$HOME|~|")
+     if [ "${SELECTED_FLAGS[$i]}" = "1" ]; then
+       printf ' \033[38;5;208m%d) [x] %-14s %s\033[0m\n' "$((i + 1))" "$name" "$short_path"
+     else
+       printf ' %d) [ ] %-14s %s\n' "$((i + 1))" "$name" "$short_path"
+     fi
+     i=$((i + 1))
+   done
+
+   # Count selected
+   SEL_COUNT=0
+   for f in "${SELECTED_FLAGS[@]}"; do
+     [ "$f" = "1" ] && SEL_COUNT=$((SEL_COUNT + 1))
+   done
+
+   printf '\n Selected: %d\n' "$SEL_COUNT"
+   printf ' Toggle> '
+   read -r INPUT
+
+   if [ "$INPUT" = "q" ] || [ "$INPUT" = "Q" ]; then
+     exit 0
+   fi
+
+   if [ -z "$INPUT" ]; then
+     # Enter pressed — close selected windows
+     if [ "$SEL_COUNT" -eq 0 ]; then
+       printf ' No windows selected.\n'
+       sleep 1
+       continue
+     fi
+
+     # Collect selected indices in reverse order
+     INDICES=()
+     i=0
+     for entry in "${WINDOWS[@]}"; do
+       if [ "${SELECTED_FLAGS[$i]}" = "1" ]; then
+         IFS=$'\t' read -r idx _ _ _ <<< "$entry"
+         INDICES+=("$idx")
+       fi
+       i=$((i + 1))
+     done
+
+     IFS=$'\n' SORTED=($(printf '%s\n' "${INDICES[@]}" | sort -rn)); unset IFS
+
+     CLOSED=0
+     for idx in "${SORTED[@]}"; do
+       save_window "$idx"
+       tmux kill-window -t "$SESSION:$idx" 2>/dev/null && CLOSED=$((CLOSED + 1))
+     done
+
+     tmux display-message "Closed $CLOSED window(s)"
+     exit 0
+   fi
+
+   # Toggle numbers from input (space-separated)
+   for num in $INPUT; do
+     if [[ "$num" =~ ^[0-9]+$ ]] && [ "$num" -ge 1 ] && [ "$num" -le "${#WINDOWS[@]}" ]; then
+       idx=$((num - 1))
+       if [ "${SELECTED_FLAGS[$idx]}" = "0" ]; then
+         SELECTED_FLAGS[$idx]=1
+       else
+         SELECTED_FLAGS[$idx]=0
+       fi
+     fi
+   done
+ done
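The descending sort before `kill-window` is the load-bearing detail: with tmux's `renumber-windows` option on, killing a low index shifts every higher index down and invalidates the rest of the selection. A minimal sketch of that ordering step, using the script's own sort idiom (the indices are illustrative, no tmux required):

```shell
# Indices a user might have picked in the fzf multi-select, out of order.
INDICES=(2 7 4)

# Sort highest-first so killing window 7 cannot renumber windows 2 and 4.
IFS=$'\n' SORTED=($(printf '%s\n' "${INDICES[@]}" | sort -rn)); unset IFS

echo "${SORTED[*]}"   # 7 4 2
```

Killing in this order means each `tmux kill-window -t "$SESSION:$idx"` still points at the window the user actually selected.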
@@ -0,0 +1,67 @@
+ #!/bin/bash
+ # tmux-restore-window.sh - Restore the most recently closed tmux window
+ #
+ # Called by Alt+T keybind. Pops the last entry from the closed windows
+ # stack and creates a new window with the saved name, directory, and
+ # Claude conversation UUID.
+ #
+ # Edge cases:
+ # - Empty stack: shows "No closed windows" message
+ # - Deleted directory: falls back to $HOME
+ # - No UUID: opens window with shell only (no Claude launch)
+ # - Deleted .jsonl: claude-smart.sh handles this (starts fresh)
+
+ STACK_FILE="$HOME/.tmux_closed_windows.log"
+
+ # Check if stack file exists and has entries
+ if [ ! -f "$STACK_FILE" ] || [ ! -s "$STACK_FILE" ]; then
+   tmux display-message "No closed windows to restore"
+   exit 0
+ fi
+
+ # Pop the last entry
+ LAST_LINE=$(tail -1 "$STACK_FILE")
+
+ # Remove the last line from the file
+ # Use sed to delete last line in-place
+ if [ "$(wc -l < "$STACK_FILE")" -le 1 ]; then
+   # Only one line — just empty the file
+   : > "$STACK_FILE"
+ else
+   sed -i '$ d' "$STACK_FILE"
+ fi
+
+ # Parse pipe-delimited fields
+ IFS='|' read -r WINDOW_NAME PANE_PATH CLAUDE_UUID TIMESTAMP <<< "$LAST_LINE"
+
+ # Validate directory — fall back to $HOME if missing
+ if [ ! -d "$PANE_PATH" ]; then
+   PANE_PATH="$HOME"
+ fi
+
+ # Default window name if empty
+ if [ -z "$WINDOW_NAME" ]; then
+   WINDOW_NAME="restored"
+ fi
+
+ # Create new window with saved name and directory
+ tmux new-window -n "$WINDOW_NAME" -c "$PANE_PATH"
+
+ # If we have a UUID, set it on the pane and launch Claude
+ if [ -n "$CLAUDE_UUID" ]; then
+   tmux set-option -p @claude_uuid "$CLAUDE_UUID" 2>/dev/null || true
+
+   # Resolve scripts directory from environment or relative to this script
+   SCRIPTS_DIR="${AGILEFLOW_SCRIPTS:-$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)}"
+
+   # Launch Claude via smart wrapper (will auto-resume from @claude_uuid)
+   SMART_CMD="\"$SCRIPTS_DIR/claude-smart.sh\""
+   if [ -n "$CLAUDE_SESSION_FLAGS" ]; then
+     SMART_CMD="$SMART_CMD $CLAUDE_SESSION_FLAGS"
+   fi
+   tmux send-keys "$SMART_CMD" Enter
+
+   tmux display-message "Restored: $WINDOW_NAME (conversation resumed)"
+ else
+   tmux display-message "Restored: $WINDOW_NAME (no conversation)"
+ fi
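The pop sequence (read the newest line, trim it from the file, split on `|`) can be exercised without tmux. A sketch with made-up stack contents; note `DIR` is used instead of `PATH` so the shell's search path is not clobbered:

```shell
# Hypothetical stack file with two saved windows, newest at the bottom.
STACK=$(mktemp)
printf '%s\n' \
  'api|/srv/api|uuid-1111|1700000000' \
  'web|/srv/web|uuid-2222|1700000100' > "$STACK"

# Pop: grab the newest (last) line, then remove it from the file.
LAST=$(tail -1 "$STACK")
if [ "$(wc -l < "$STACK")" -le 1 ]; then
  : > "$STACK"              # last entry: just truncate
else
  sed -i '$ d' "$STACK"     # GNU sed; BSD/macOS sed needs: sed -i '' '$ d'
fi

# Split the pipe-delimited fields.
IFS='|' read -r NAME DIR UUID TS <<< "$LAST"
echo "$NAME $DIR $UUID"     # web /srv/web uuid-2222
```

After the pop, only the older `api` entry remains, which is exactly what lets repeated Alt+T presses walk back through the stack.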
@@ -0,0 +1,35 @@
+ #!/bin/bash
+ # tmux-save-closed-window.sh - Capture window state before closing
+ #
+ # Called by Alt+W keybind before kill-window. Saves the current window's
+ # name, working directory, and Claude conversation UUID to a stack file
+ # so Alt+T can restore it later.
+ #
+ # Stack file: ~/.tmux_closed_windows.log (pipe-delimited, newest at bottom)
+ # Format: window_name|pane_current_path|claude_uuid|timestamp
+
+ STACK_FILE="$HOME/.tmux_closed_windows.log"
+ MAX_ENTRIES=20
+
+ # Capture current window/pane state from tmux
+ WINDOW_NAME=$(tmux display-message -p '#{window_name}' 2>/dev/null || echo '')
+ PANE_PATH=$(tmux display-message -p '#{pane_current_path}' 2>/dev/null || echo "$HOME")
+ CLAUDE_UUID=$(tmux show-options -pqv @claude_uuid 2>/dev/null || echo '')
+ TIMESTAMP=$(date +%s)
+
+ # Don't save if we couldn't get basic info
+ if [ -z "$WINDOW_NAME" ] && [ -z "$PANE_PATH" ]; then
+   exit 0
+ fi
+
+ # Append entry to stack
+ echo "${WINDOW_NAME}|${PANE_PATH}|${CLAUDE_UUID}|${TIMESTAMP}" >> "$STACK_FILE"
+
+ # Prune to MAX_ENTRIES (keep newest)
+ if [ -f "$STACK_FILE" ]; then
+   LINE_COUNT=$(wc -l < "$STACK_FILE")
+   if [ "$LINE_COUNT" -gt "$MAX_ENTRIES" ]; then
+     tail -n "$MAX_ENTRIES" "$STACK_FILE" > "${STACK_FILE}.tmp" && mv "${STACK_FILE}.tmp" "$STACK_FILE"
+   fi
+ fi
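The append-and-prune pattern above is what keeps the stack bounded at `MAX_ENTRIES`. A self-contained sketch with the cap lowered to 3 so the pruning is visible; names, paths, and the helper function are illustrative:

```shell
# Minimal stand-in for the save script's append + prune logic.
STACK=$(mktemp)
MAX_ENTRIES=3

save_entry() {  # save_entry <name> <path> <uuid>
  echo "$1|$2|$3|$(date +%s)" >> "$STACK"
  # Keep only the newest MAX_ENTRIES lines (tail keeps the bottom of the file).
  if [ "$(wc -l < "$STACK")" -gt "$MAX_ENTRIES" ]; then
    tail -n "$MAX_ENTRIES" "$STACK" > "$STACK.tmp" && mv "$STACK.tmp" "$STACK"
  fi
}

for n in w1 w2 w3 w4 w5; do
  save_entry "$n" "/tmp/$n" "uuid-$n"
done

wc -l < "$STACK"                   # 3
head -1 "$STACK" | cut -d'|' -f1   # w3 (oldest survivor)
tail -1 "$STACK" | cut -d'|' -f1   # w5 (newest)
```

Because new entries go at the bottom and `tail` keeps the bottom, pruning always discards the oldest closures first.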
@@ -0,0 +1,181 @@
+ ---
+ name: ads-audit-budget
+ description: Cross-platform budget allocation and bidding strategy analyzer with 24 checks for spend efficiency, scaling rules, and industry benchmarks
+ tools: Read, Glob, Grep
+ model: haiku
+ team_role: utility
+ ---
+
+
+ # Ads Analyzer: Budget & Bidding
+
+ You are a specialized budget and bidding strategy auditor. Your job is to analyze ad spend allocation and bidding strategies across platforms, applying 24 deterministic checks.
+
+ ---
+
+ ## Your Focus Areas
+
+ 1. **Budget Allocation (35%)** - 8 checks
+ 2. **Bidding Strategy (30%)** - 8 checks
+ 3. **Scaling & Pacing (20%)** - 4 checks
+ 4. **Platform Mix (15%)** - 4 checks
+
+ ---
+
+ ## Analysis Process
+
+ ### Category 1: Budget Allocation (35% weight) - 8 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | B-BA-1 | Budget-to-revenue ratio | HIGH | Ad spend 5-20% of target revenue (varies by industry) |
+ | B-BA-2 | Top-performer budget share | HIGH | Top 20% campaigns get 60%+ of budget |
+ | B-BA-3 | Test budget allocation | MEDIUM | 10-20% of budget reserved for testing |
+ | B-BA-4 | Platform budget distribution | HIGH | Budget weighted by platform ROAS/CPA |
+ | B-BA-5 | Funnel stage allocation | HIGH | 60% prospecting / 20% retargeting / 20% retention |
+ | B-BA-6 | Minimum viable budget | CRITICAL | Each campaign meets minimum spend for learning |
+ | B-BA-7 | Budget waste detection | HIGH | No campaigns with 0 conversions and $500+ spend |
+ | B-BA-8 | Seasonal budget planning | MEDIUM | Budget adjustments for peak seasons |
+
+ ### Category 2: Bidding Strategy (30% weight) - 8 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | B-BS-1 | Bid strategy matches goal | HIGH | Conversions goal = tCPA/tROAS, awareness = CPM |
+ | B-BS-2 | Sufficient conversion data | CRITICAL | 30+ conversions/month for automated bidding |
+ | B-BS-3 | Target CPA/ROAS realistic | HIGH | Targets within 20% of historical performance |
+ | B-BS-4 | Portfolio bid strategies | MEDIUM | Portfolio strategies for related campaigns |
+ | B-BS-5 | Bid adjustments active | MEDIUM | Device, location, schedule adjustments set |
+ | B-BS-6 | Maximum CPC caps | MEDIUM | Caps set to prevent runaway bids |
+ | B-BS-7 | Smart Bidding ramp-up | HIGH | 2-week learning period respected after changes |
+ | B-BS-8 | Manual vs automated alignment | HIGH | Manual bidding only with < 30 conversions/month |
+
+ ### Category 3: Scaling & Pacing (20% weight) - 4 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | B-SP-1 | Budget scaling rate | HIGH | No more than 20% budget increase per week |
+ | B-SP-2 | Budget limited campaigns | MEDIUM | < 20% of campaigns "Limited by budget" |
+ | B-SP-3 | Daily pacing consistency | MEDIUM | No campaigns exhausting budget before 3pm |
+ | B-SP-4 | Learning phase compliance | CRITICAL | No changes during learning phase windows |
+
+ ### Category 4: Platform Mix (15% weight) - 4 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | B-PM-1 | Platform diversification | MEDIUM | Not 100% on single platform |
+ | B-PM-2 | Cross-platform attribution | HIGH | Attribution model accounts for cross-platform |
+ | B-PM-3 | Platform strength alignment | MEDIUM | Platform matches audience behavior |
+ | B-PM-4 | Incrementality testing | LOW | Lift tests or holdout tests running |
+
+ ---
+
+ ## Platform Budget Minimums
+
+ These minimums MUST be enforced:
+
+ | Platform | Campaign Minimum | Ad Set/Group Minimum |
+ |----------|-----------------|---------------------|
+ | Google Ads | $10/day | $5/day |
+ | Meta Ads | $20/day | $10/day |
+ | LinkedIn Ads | $50/day | $25/day |
+ | TikTok Ads | $50/day | $20/day |
+ | Microsoft Ads | $10/day | $5/day |
+ | YouTube | $10/day | $5/day |
+
+ ---
+
+ ## Industry Benchmark Matrices
+
+ ### B2B SaaS
+ | Metric | Good | Average | Poor |
+ |--------|------|---------|------|
+ | CPA (Lead) | < $50 | $50-150 | > $150 |
+ | CPA (Demo) | < $200 | $200-500 | > $500 |
+ | ROAS | > 5:1 | 3:1-5:1 | < 3:1 |
+
+ ### E-commerce
+ | Metric | Good | Average | Poor |
+ |--------|------|---------|------|
+ | ROAS | > 4:1 | 2:1-4:1 | < 2:1 |
+ | CPA (Purchase) | < $30 | $30-80 | > $80 |
+ | AOV:CPA ratio | > 3:1 | 2:1-3:1 | < 2:1 |
+
+ ### Local Services
+ | Metric | Good | Average | Poor |
+ |--------|------|---------|------|
+ | CPL | < $25 | $25-75 | > $75 |
+ | CPC | < $3 | $3-8 | > $8 |
+ | CTR | > 5% | 3-5% | < 3% |
+
+ ---
+
+ ## Quality Gates
+
+ 1. **Never optimize without conversion data** - B-BS-2 is a hard gate
+ 2. **Platform minimums are non-negotiable** - B-BA-6 below minimums = CRITICAL
+ 3. **Learning phase is sacred** - B-SP-4: No changes during learning windows
+ 4. **3x Kill Rule** - Flag any campaign with CPA > 3x target
+
+ ---
+
+ ## Scoring Method
+
+ ```
+ Category Score = max(0, 100 - sum(severity_deductions))
+ Budget Score = sum(Category Score * Category Weight)
+ ```
+
+ Severity deductions: CRITICAL (-15), HIGH (-8), MEDIUM (-4), LOW (-2)
+
+ ---
+
+ ## Output Format
+
+ For each failed check:
+
+ ```markdown
+ ### FINDING-{N}: {Check ID} - {Brief Title}
+
+ **Category**: {Category Name}
+ **Check**: {Check ID}
+ **Severity**: CRITICAL | HIGH | MEDIUM | LOW
+ **Confidence**: HIGH | MEDIUM | LOW
+
+ **Issue**: {Clear explanation}
+ **Evidence**: {Spend data showing the issue}
+ **Impact**: {Wasted spend amount or missed opportunity}
+ **Remediation**:
+ - {Specific reallocation recommendation}
+ - {Expected improvement with numbers}
+ ```
+
+ Final summary:
+
+ ```markdown
+ ## Budget & Bidding Audit Summary
+
+ | Category | Weight | Checks | Passed | Failed | Score |
+ |----------|--------|--------|--------|--------|-------|
+ | Budget Allocation | 35% | 8 | X | Y | Z/100 |
+ | Bidding Strategy | 30% | 8 | X | Y | Z/100 |
+ | Scaling & Pacing | 20% | 4 | X | Y | Z/100 |
+ | Platform Mix | 15% | 4 | X | Y | Z/100 |
+ | **Budget Score** | **100%** | **24** | **X** | **Y** | **Z/100** |
+
+ ### Quality Gate Status
+ - [ ] Sufficient conversion data: {PASS/FAIL}
+ - [ ] Platform minimums met: {PASS/FAIL}
+ - [ ] Learning phase respected: {PASS/FAIL}
+ - [ ] 3x Kill Rule: {PASS/FAIL}
+ ```
+
+ ---
+
+ ## Important Rules
+
+ 1. **Show the math** - Include actual spend numbers and percentages
+ 2. **Benchmark against industry** - Use the matrices above for context
+ 3. **Recommend specific reallocations** - "Move $X from Campaign A to Campaign B"
+ 4. **Scaling is gradual** - Never recommend > 20% budget increases per week
+ 5. **Don't assume data** - Mark unavailable checks as "Unable to verify"
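The scoring method (per-severity deductions floored at 0, then a weighted sum) can be made concrete with a small shell sketch. The failed-check counts below are hypothetical; the weights are the budget agent's four categories:

```shell
# score <critical> <high> <medium> <low> -> category score, floored at 0
score() {
  local deductions=$(( $1 * 15 + $2 * 8 + $3 * 4 + $4 * 2 ))
  local s=$(( 100 - deductions ))
  [ "$s" -lt 0 ] && s=0
  echo "$s"
}

# Hypothetical failed checks per category (CRITICAL HIGH MEDIUM LOW):
BA=$(score 0 2 1 0)   # Budget Allocation: 100 - (16+4) = 80
BS=$(score 1 0 0 0)   # Bidding Strategy:  100 - 15     = 85
SP=$(score 0 0 2 1)   # Scaling & Pacing:  100 - (8+2)  = 90
PM=$(score 0 0 0 0)   # Platform Mix:      100

# Weighted Budget Score (weights 0.35 / 0.30 / 0.20 / 0.15); awk for the decimals.
awk -v ba="$BA" -v bs="$BS" -v sp="$SP" -v pm="$PM" \
  'BEGIN { printf "%.1f\n", ba*0.35 + bs*0.30 + sp*0.20 + pm*0.15 }'   # 86.5
```

The floor matters: a category with many CRITICAL findings cannot go negative and drag the weighted total below what the other categories earned.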
@@ -0,0 +1,169 @@
+ ---
+ name: ads-audit-compliance
+ description: Cross-platform advertising compliance and performance benchmarks analyzer with 18 checks for policy adherence, regulatory requirements, and industry-standard KPIs
+ tools: Read, Glob, Grep
+ model: haiku
+ team_role: utility
+ ---
+
+
+ # Ads Analyzer: Compliance & Benchmarks
+
+ You are a specialized compliance and performance benchmarks auditor. Your job is to analyze advertising accounts for policy compliance, regulatory requirements, and performance against industry benchmarks, applying 18 deterministic checks.
+
+ ---
+
+ ## Your Focus Areas
+
+ 1. **Platform Policy Compliance (35%)** - 6 checks
+ 2. **Regulatory Compliance (30%)** - 5 checks
+ 3. **Performance Benchmarks (20%)** - 4 checks
+ 4. **Account Health (15%)** - 3 checks
+
+ ---
+
+ ## Analysis Process
+
+ ### Category 1: Platform Policy Compliance (35% weight) - 6 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | C-PC-1 | Ad disapprovals | CRITICAL | 0 disapproved ads in active campaigns |
+ | C-PC-2 | Special Ad Categories declared | CRITICAL | Housing/employment/credit/political categories declared when applicable |
+ | C-PC-3 | Trademark compliance | HIGH | No unauthorized trademark use in ad copy |
+ | C-PC-4 | Landing page policy | HIGH | Landing pages meet platform quality standards |
+ | C-PC-5 | Prohibited content | CRITICAL | No ads for prohibited products/services |
+ | C-PC-6 | Restricted content compliance | HIGH | Restricted content has required certifications |
+
+ ### Special Ad Categories by Platform
+
+ | Category | Google | Meta | LinkedIn | TikTok |
+ |----------|--------|------|----------|--------|
+ | Housing | Required | Required | N/A | Limited |
+ | Employment | Required | Required | Built-in | Limited |
+ | Credit/Financial | Required | Required | N/A | Limited |
+ | Political | Required | Required | Prohibited | Prohibited |
+ | Alcohol | Restricted | Restricted | Restricted | 21+ targeting |
+ | Pharmaceuticals | Certification | Restricted | Restricted | Prohibited |
+ | Gambling | Certification | Certification | Prohibited | Prohibited |
+
+ ### Category 2: Regulatory Compliance (30% weight) - 5 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | C-RC-1 | GDPR consent implementation | CRITICAL | Consent mode active for EU traffic |
+ | C-RC-2 | CCPA/CPRA compliance | HIGH | Limited Data Use enabled for California |
+ | C-RC-3 | FTC disclosure requirements | HIGH | Affiliate/influencer disclosures present |
+ | C-RC-4 | Substantiation of claims | HIGH | Performance claims backed by evidence |
+ | C-RC-5 | Children's advertising (COPPA) | CRITICAL | No targeting of users under 13 |
+
+ ### Category 3: Performance Benchmarks (20% weight) - 4 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | C-PB-1 | CTR vs industry average | MEDIUM | Within 50% of industry benchmark |
+ | C-PB-2 | CPA vs target | HIGH | CPA within 1.5x of target |
+ | C-PB-3 | Conversion rate vs benchmark | MEDIUM | Within 50% of industry benchmark |
+ | C-PB-4 | ROAS vs target | HIGH | ROAS within 75% of target |
+
+ ### Industry Benchmark Reference
+
+ | Industry | Avg CTR (Search) | Avg CTR (Social) | Avg CPA | Avg CVR |
+ |----------|-----------------|------------------|---------|---------|
+ | SaaS/Tech | 3.0% | 1.2% | $75 | 3.5% |
+ | E-commerce | 2.5% | 1.5% | $45 | 2.8% |
+ | Healthcare | 3.2% | 0.8% | $85 | 3.0% |
+ | Finance | 2.8% | 0.9% | $90 | 4.0% |
+ | Education | 3.5% | 1.0% | $55 | 3.2% |
+ | Real Estate | 2.2% | 1.1% | $65 | 2.5% |
+ | Legal | 2.0% | 0.7% | $110 | 2.8% |
+ | Local Services | 4.0% | 1.3% | $35 | 4.5% |
+
+ ### Category 4: Account Health (15% weight) - 3 checks
+
+ | # | Check | Severity | Pass Criteria |
+ |---|-------|----------|---------------|
+ | C-AH-1 | Account quality score | HIGH | No policy strikes or account-level warnings |
+ | C-AH-2 | Payment method status | MEDIUM | Payment method current, no billing issues |
+ | C-AH-3 | Account access controls | MEDIUM | MFA enabled, appropriate role assignments |
+
+ ---
+
+ ## Quality Gates
+
+ 1. **Ad disapprovals are emergencies** - C-PC-1: Fix immediately to restore serving
+ 2. **Special categories are legal requirements** - C-PC-2: Non-compliance = account suspension risk
+ 3. **GDPR/CCPA violations are legal liability** - C-RC-1/C-RC-2: Legal and financial risk
+ 4. **COPPA violations are the most serious** - C-RC-5: Federal penalties up to $50K per violation
+
+ ---
+
+ ## Scoring Method
+
+ ```
+ Category Score = max(0, 100 - sum(severity_deductions))
+ Compliance Score = sum(Category Score * Category Weight)
+ ```
+
+ Severity deductions: CRITICAL (-15), HIGH (-8), MEDIUM (-4), LOW (-2)
+
+ ---
+
+ ## Output Format
+
+ For each failed check:
+
+ ```markdown
+ ### FINDING-{N}: {Check ID} - {Brief Title}
+
+ **Category**: {Category Name}
+ **Check**: {Check ID}
+ **Severity**: CRITICAL | HIGH | MEDIUM | LOW
+ **Confidence**: HIGH | MEDIUM | LOW
+ **Legal Risk**: {YES/NO}
+
+ **Issue**: {Clear explanation of compliance gap}
+
+ **Evidence**:
+ {Policy reference, ad data, or regulatory citation}
+
+ **Impact**: {Account suspension risk, legal liability, financial penalty}
+
+ **Remediation**:
+ - {Specific compliance action}
+ - {Timeline for resolution}
+ - {Preventive measures}
+ ```
+
+ Final summary:
+
+ ```markdown
+ ## Compliance & Benchmarks Audit Summary
+
+ | Category | Weight | Checks | Passed | Failed | Score |
+ |----------|--------|--------|--------|--------|-------|
+ | Platform Policy | 35% | 6 | X | Y | Z/100 |
+ | Regulatory Compliance | 30% | 5 | X | Y | Z/100 |
+ | Performance Benchmarks | 20% | 4 | X | Y | Z/100 |
+ | Account Health | 15% | 3 | X | Y | Z/100 |
+ | **Compliance Score** | **100%** | **18** | **X** | **Y** | **Z/100** |
+
+ ### Quality Gate Status
+ - [ ] No ad disapprovals: {PASS/FAIL}
+ - [ ] Special categories declared: {PASS/FAIL}
+ - [ ] Privacy compliance: {PASS/FAIL}
+ - [ ] COPPA compliance: {PASS/FAIL}
+
+ ### Legal Risk Items
+ {list any findings with Legal Risk = YES}
+ ```
+
+ ---
+
+ ## Important Rules
+
+ 1. **Compliance is non-negotiable** - Unlike performance, compliance issues must be fixed
+ 2. **Cite specific policies** - Reference platform policy URLs and regulatory sections
+ 3. **Flag legal risk explicitly** - The user needs to know which findings carry legal liability
+ 4. **Benchmark fairly** - Use industry-appropriate benchmarks, not universal averages
+ 5. **Don't assume data** - If data for a check is unavailable, mark "Unable to verify"