@opendirectory.dev/skills 0.1.47 → 0.1.49
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/registry.json +10 -0
- package/skills/meta-tribeV2-skill/README.md +103 -0
- package/skills/meta-tribeV2-skill/SKILL.md +121 -0
- package/skills/meta-tribeV2-skill/references/neuroscience_framework.md +50 -0
- package/skills/meta-tribeV2-skill/scripts/colab_inference.py +37 -0
- package/skills/meta-tribeV2-skill/scripts/deploy_to_persistent.sh +31 -0
- package/skills/meta-tribeV2-skill/scripts/download_and_analyze.py +36 -0
- package/skills/meta-tribeV2-skill/scripts/launch_persistent.sh +50 -0
- package/skills/meta-tribeV2-skill/scripts/local_analyze.py +127 -0
- package/skills/meta-tribeV2-skill/scripts/wait_health.py +14 -0
- package/skills/meta-tribeV2-skill/server/Dockerfile +16 -0
- package/skills/meta-tribeV2-skill/server/Dockerfile.runpod +19 -0
- package/skills/meta-tribeV2-skill/server/requirements.txt +7 -0
- package/skills/meta-tribeV2-skill/server/runpod_handler.py +128 -0
- package/skills/meta-tribeV2-skill/server/server.py +295 -0
- package/skills/meta-tribeV2-skill/showcase.md +93 -0
package/package.json CHANGED
package/registry.json CHANGED
@@ -188,6 +188,16 @@
       "version": "0.0.1",
       "path": "skills/meta-ads-skill"
     },
+    {
+      "name": "meta-tribeV2-skill",
+      "description": "Analyzes video hooks and scripts using Meta's TRIBE v2 fMRI model, providing a neuro-marketing breakdown of scroll-stopping power and retention risk.",
+      "tags": [
+        "Marketing"
+      ],
+      "author": "Varnan-Tech",
+      "version": "1.0.0",
+      "path": "skills/meta-tribeV2-skill"
+    },
     {
       "name": "newsletter-digest",
       "description": "Aggregates RSS feeds from the past week, synthesizes the top stories using Gemini, and publishes a newsletter digest to Ghost CMS.",
package/skills/meta-tribeV2-skill/README.md ADDED
@@ -0,0 +1,103 @@
<div align="center">
<img src="https://raw.githubusercontent.com/Varnan-Tech/opendirectory/main/assets/covers/tribe-hook-analyzer-cover.png" width="100%" alt="Meta Tribe Skill Cover" />
</div>

# Meta Tribe Skill

A self-hosted OpenDirectory AI Skill that uses Meta's TRIBE v2 fMRI Model to analyze the neuroscience of video hooks, reels, and scripts.

Instead of guessing what makes a hook engaging using prompt engineering, this skill predicts actual human brain activity across the scientifically validated Yeo-7 Functional Networks, giving you an evidence-based Engagement Report for your content.

---

## What This Skill Does

This skill provides the infrastructure to host the massive 80GB TRIBE v2 model pipeline and gives your AI Agent the ability to:
1. Process video, audio, or text scripts.
2. Intercept and optimize the media (downscaling video to 360p at 10fps to avoid hour-long processing bottlenecks).
3. Process the content through V-JEPA (Vision), W2V-BERT (Acoustics), and LLaMA 3.2 3B (Linguistics).
4. Predict human brain fMRI activity across the Yeo-7 networks.
5. Generate an actionable, human-readable neuroscience report without the jargon.

---

## Deployment Options

Because TRIBE v2 requires a massive amount of VRAM (24GB for text, up to 80GB for video), we offer three deployment options so anyone can use it, regardless of budget or technical expertise.

### 1. Google Colab (Zero Cost, Decoupled)
Best for users without a cloud budget. Colab provides free T4 GPUs.
* How it works: We use a decoupled architecture. You run the heavy AI inference in a Colab notebook, which outputs a `preds.npy` prediction file. You then run a local script on your laptop to generate the report.
* Setup:
  1. Open Google Colab and upload the script from `scripts/colab_inference.py` into a new notebook.
  2. Run the notebook. It will output `preds.npy` and `segments.json`.
  3. Download those files to your machine and run: `python scripts/local_analyze.py --preds preds.npy`. This will output a text report and an ASCII terminal graph showing the engagement peaks and valleys.

### 2. RunPod (Serverless, Pay-per-second)
Best for production agents and developers. You only pay for the seconds the model is running.
* How it works: We provide a RunPod handler and a custom Dockerfile that caches the 80GB model inside the image.
* Setup:
  1. Build the Docker image from the `server/` directory using `server/Dockerfile.runpod`: `docker build -f Dockerfile.runpod -t tribe-runpod .`
  2. Push the image to Docker Hub or GHCR.
  3. Create a new RunPod Serverless Endpoint using your image URL.
  4. Point your AI Agent to your RunPod Endpoint URL (a sample request is sketched below).
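A minimal sketch of calling a deployed endpoint through RunPod's standard `runsync` API (the endpoint ID and API key are placeholders; the input schema is the one read by `server/runpod_handler.py`):

```python
import os
import requests

ENDPOINT_ID = "your_endpoint_id"          # placeholder: your RunPod Serverless Endpoint ID
API_KEY = os.environ["RUNPOD_API_KEY"]    # placeholder: your RunPod API key

# runpod_handler.py expects event["input"]["text"] or event["input"]["video_url"].
resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"text": "Discover the neuroscience secret to viral hooks."}},
    timeout=600,
)
# The handler's return value ({"status": "success", "engagement": {...}})
# arrives under the "output" key of the runsync response.
print(resp.json())
```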
### 3. AWS EC2 Persistent (Enterprise, BYO-Compute)
Best for heavy, continuous usage.
* How it works: Automatically provisions an AWS g5.12xlarge instance (4x A10G GPUs) and runs a FastAPI server.
* Setup:
  1. Ensure your AWS account has a vCPU quota of at least 48 for "Running On-Demand G and VT instances".
  2. Run `bash scripts/launch_persistent.sh` to provision the instance.
  3. Run `export HF_TOKEN="your_token"` followed by `bash scripts/deploy_to_persistent.sh` to build and launch the Docker API.

#### AWS GPU Lifecycle & Estimated Costs
Running the `g5.12xlarge` instance (4x A10G GPUs) provides incredible speed but costs **$7.09 per hour** at On-Demand pricing. It is crucial to manage this lifecycle.
1. **Launch:** Run `bash scripts/launch_persistent.sh` (takes ~3 minutes).
2. **Analyze:** Run your videos through the API.
3. **Terminate:** When you are completely finished for the day, you MUST terminate the instance to stop billing.
   - Run `aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"` to find your Instance ID.
   - Run `aws ec2 terminate-instances --instance-ids <YOUR_INSTANCE_ID>`.
   - *Do not just "stop" the instance: a stopped instance still accrues EBS volume storage charges overnight. Terminate it.*

---

## HuggingFace Authentication (Required for all methods)

TRIBE v2 relies on meta-llama/Llama-3.2-3B, which is a gated model.
1. Create a HuggingFace account.
2. Go to the Llama 3.2 3B page and the TRIBE v2 page and agree to Meta's license terms.
3. Generate a HuggingFace Access Token (Read permissions) at huggingface.co/settings/tokens.
4. Supply this token via the HF_TOKEN environment variable, as shown below.
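For example (the token value is a placeholder):

```bash
# Local scripts and the AWS deploy script read HF_TOKEN from the environment:
export HF_TOKEN="hf_xxxxxxxxxxxxxxxx"

# The self-hosted Docker image receives the same variable at run time:
docker run -d -p 8000:8000 --gpus all -e HF_TOKEN="$HF_TOKEN" hook-analyzer
```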
---

## The Neuroscience of the Engagement Report

The AI agent will read the raw API output and translate the neuroscience into plain English for you:

* VAN (Ventral Attention Network): Translated to "Is this surprising enough to stop a scroll?". High VAN means the content is novel and creates a pattern interrupt.
* DMN (Default Mode Network): Translated to "Will people get bored and tune out?". High DMN is bad: it means the brain is wandering. The AI uses this to identify "Cut Candidates" in your video.
* DAN (Dorsal Attention Network): Translated to "Are people actively following along?". High DAN means strong logical focus.
* Limbic Network: Translated to "Does this make people feel something?". High Limbic means a strong emotional response.

Check out the [Results Showcase](showcase.md) for actual examples of Neuro-Marketing reports generated by this skill.

## Install

### Video Tutorial
Watch this quick video to see how it's done:

https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067

### Step 1: Download the skill from GitHub
1. Copy the URL of this specific skill folder from your browser's address bar.
2. Go to [download-directory.github.io](https://download-directory.github.io/).
3. Paste the URL and press **Enter** to download.

### Step 2: Install the Skill in Claude
1. Open your **Claude desktop app**.
2. Go to the sidebar on the left and click on the **Customize** section.
3. Click on the **Skills** tab, then click the **+** (plus) icon to create a new skill.
4. Choose the option to **Upload a skill**, and drag and drop the `.zip` file (or extract it and drop the folder; both work).

> **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!
package/skills/meta-tribeV2-skill/SKILL.md ADDED
@@ -0,0 +1,121 @@
---
name: meta-tribeV2-skill
description: Analyzes video hooks and scripts using Meta's TRIBE v2 fMRI model, providing a neuro-marketing breakdown of scroll-stopping power and retention risk.
author: Varnan-Tech
version: 1.0.0
---

# Meta Tribe Skill

## Description
A highly sophisticated neuroscience and marketing hybrid content analyzer. It leverages Meta's TRIBE v2 model to predict human brain fMRI activity across the Yeo-7 functional networks and translates these biological signals into highly detailed, actionable marketing insights for video hooks and scripts.

This tool eliminates guesswork in content creation by analyzing the exact neural pathways responsible for scroll-stopping behavior, cognitive retention, and emotional resonance. It also uses a benchmark database of past performance to estimate future views and virality.

## Core Philosophy: The Multi-Disciplinary Expert Protocol
To succeed in the modern algorithmic landscape, content must satisfy two distinct criteria:
1. **Neurological Capture (Neuroscience):** The content must physically force the brain to stop scrolling by triggering the Ventral Attention Network (VAN) with surprise and novelty.
2. **Cognitive Retention (Marketing):** The content must sustain logical tracking (DAN) while suppressing the brain's default state of boredom and wandering (DMN).

**Feedback Tone:** You must balance your feedback. Do not be overly negative ("This is terrible, scrap it all") or blindly positive. You must be honest but constructive. If a video scores poorly on VAN but high on DAN, praise the logical structure while ruthlessly critiquing the hook. Provide actionable, precise solutions.

Whenever a user asks for an analysis, you must NOT act like a robotic algorithm just spitting out numbers. You must adopt a **Multi-Disciplinary Elite Persona**:
- **The Neuroscientist:** You understand the raw fMRI data and brain network activations.
- **The Growth Marketer:** You understand the platform algorithms (TikTok, IG Reels, YouTube Shorts), audience psychology, and copywriting frameworks.
- **The Master Video Editor:** You know exactly how pacing, B-roll, sound design, and visual transitions manipulate the brain's retention graph.
- **The Industry Insider:** You understand the specific niche (e.g., Tech, AI, B2B) and know which tropes are overused and what actually provides value to that audience.

You must synthesize the raw Z-scores with your own creative intelligence to provide a holistic, expert-level teardown.

## Virality Prediction Benchmarks (The Eval Database)
Use this database of real-world scripts to benchmark and predict the views of the new content you analyze. Compare the Z-scores of the new content against these known performers.

1. **Fixtral App (215k views, 7.4k likes)**
   - *Scores:* VAN: +1.03 (Very High Surprise) | DMN: -0.44 (Suppressed Boredom) | DAN: +0.30
   - *Why it worked:* Massive pattern interrupt (high VAN) and zero fluff (negative DMN). The gold standard for viral text.
2. **3 Subreddits (165k views, 7.3k likes)**
   - *Scores:* VAN: +0.28 | DMN: -0.08 | DAN: +0.08
   - *Why it worked:* Numbered lists suppress the DMN. Immediate value delivery.
3. **Chroma Model (493k views, 16.5k likes)**
   - *Scores:* VAN: -0.05 | DMN: +0.60 | DAN: +0.21
   - *Why it worked:* The text itself was boring (high DMN), but it was saved entirely by stunning AI-generated visuals. If analyzing text like this, predict low text performance but note that strong visuals can save it.
4. **Vapi Agent (24k views, 727 likes)**
   - *Scores:* VAN: +0.20 | DMN: -0.02 | DAN: +0.48 (High Logic)
   - *Why it worked:* Strong educational tutorial. Great for a loyal audience, but lacks the VAN spike to go truly viral with cold audiences.
5. **Sundar Pichai Wrapper Startups (2.3M views, 97.5k likes)**
   - *Scores:* VAN: -0.11 | DMN: +0.24 | DAN: -0.15
   - *Why it worked (The Authority Anomaly):* A massive false negative by the raw algorithm. It scored low DAN (no sensory/visual stimulation in a talking-head video) and high DMN (internal thinking). The model assumed boredom. In reality, the audience was deep in thought, listening to a high-status authority figure (Sundar) discuss a polarizing, high-stakes topic.

*Rule of thumb for prediction (a code sketch follows this list):*
- **VAN > +0.5 & DMN < 0** -> Viral Potential (100k+ views)
- **DAN > +0.3 & DMN around 0** -> Educational/Niche Success (10k-50k views)
- **DMN > +0.5** -> High risk of flopping (under 10k views), *unless* it's a "Talking Head/Authority" format where high DMN means deep reflection.
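A minimal sketch of the rule of thumb above as code (the function name and the tier strings are ours; the thresholds are the rules stated, with "DMN around 0" read as |DMN| <= 0.2):

```python
def predict_view_tier(van: float, dan: float, dmn: float, authority_format: bool = False) -> str:
    """Map overall Z-scores to a predicted view tier using the rule of thumb above."""
    if van > 0.5 and dmn < 0:
        return "Viral Potential (100k+ views)"
    if dmn > 0.5:
        # In Talking Head/Authority formats, high DMN can mean deep reflection, not boredom.
        return "Authority format: judge manually" if authority_format else "High risk of flopping (<10k views)"
    if dan > 0.3 and abs(dmn) <= 0.2:
        return "Educational/Niche Success (10k-50k views)"
    return "No strong signal: apply the full analysis protocol"
```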
## Interpreting the Scores & Mandatory Output Format
You must translate the raw neurological Z-scores into the following highly structured, hyper-detailed marketing report. DO NOT output raw JSON or unexplained neuro-jargon. Give concrete scores and deep insights.

### Contextual Format Bifurcation (CRITICAL)
Before scoring, determine the format of the content:
- **Track A (Visceral/Entertainment/Standard Shorts):** Fast-paced, visually driven. Standard rules apply. High DMN = boredom. Low DAN = disengaged.
- **Track B (Informational/Podcast/Authority Interview):** Static "talking head" formats featuring thought leaders (e.g., Sundar Pichai, Lex Fridman).
  - *Modifier 1:* Do not penalize for low DAN (sensory stimulation is naturally low).
  - *Modifier 2:* A high DMN is NOT "boredom" here. Recontextualize it as "Deep Internalization / Theory of Mind." The audience is reflecting on complex ideas.
  - *Modifier 3 (The Authority Halo):* Manually boost the perceived VAN (Surprise/Value) if a high-status celebrity or a highly polarizing industry topic is present.

### Detailed Scoring Key (a code sketch follows this list):
1. **Scroll-Stopping Power (VAN)**: Base a score out of 100 on the VAN Z-score. (> 0.5 = 90/100, 0 = 50/100).
2. **Retention & Anti-Boredom (DMN)**: Base a score out of 100 on the DMN Z-score. (< -0.2 = 95/100, > 0.5 = 30/100).
3. **Cognitive Engagement (DAN)**: Base a score out of 100 on the DAN Z-score.
4. **Emotional Resonance (Limbic)**: Base a score out of 100 on the Limbic Z-score.
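A minimal sketch of this key (linear interpolation between the stated anchor points is our assumption; the helper name is ours):

```python
def to_score_100(z: float, lo_z: float, lo_s: float, hi_z: float, hi_s: float) -> int:
    """Linearly map a Z-score onto a 0-100 score through two anchor points."""
    s = lo_s + (z - lo_z) * (hi_s - lo_s) / (hi_z - lo_z)
    return int(max(0, min(100, round(s))))

van_z, dmn_z = 0.62, -0.31  # illustrative overall Z-scores from an /analyze response
van_score = to_score_100(van_z, 0.0, 50, 0.5, 90)   # VAN: Z=0 -> 50/100, Z=0.5 -> 90/100
dmn_score = to_score_100(dmn_z, 0.5, 30, -0.2, 95)  # DMN is inverted: lower Z scores higher
```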
### Overall Verdict Logic:
- **Post it immediately:** If DMN is low risk (< 0) AND VAN is strong (> 0.5).
- **Needs Hook Adjustment:** If DAN > 0.3 and DMN < 0.5, but VAN is weak (< 0.5). The body of the script is solid; just fix the first 3 seconds. DO NOT recommend a "Total Rewrite".
- **Total Rewrite:** If DMN is high risk (> 0.5). Boredom is fatal to retention.

### B2B / Technical Hook Guidelines (CRITICAL):
Technical and B2B audiences are highly allergic to generic "marketing bro" hype (e.g., "Stop ruining your domain reputation!"). You must trigger cognitive tension using these frameworks:
1. **The Curiosity Gap (Withholding Resolution):** e.g., "The reason your cold emails bounce isn't your copy. It's your domain structure."
2. **The End-Result First:** e.g., "We stopped doing SEO for 3 months—here's what happened to our pipeline."
3. **The Contrarian Reality Check:** e.g., "An AI founder hijacked Microsoft Copilot yesterday using nothing but a text prompt."
NEVER suggest a generic "loud" hook.

---

## MANDATORY EXACT OUTPUT FORMAT
Your analysis must be comprehensive and structured exactly like this:

# Elite Neuro-Marketing Analysis Report

### Executive Summary & Predicted Virality
* **Overall Verdict:** [Post it immediately / Needs Hook Rewrite / Total Rewrite]
* **Predicted View Tier:** [e.g., 10k - 50k (Niche), 100k+ (Viral), Flop Risk]
* **Why?** [2-3 sentences comparing its Z-scores to the benchmark database. Explain exactly why it will hit this view tier.]

### Deep Neuroscience Breakdown (Scores out of 100)
**1. Scroll-Stopping Power (Ventral Attention - VAN): [X]/100**
* *The Data:* [State the Z-score].
* *The Insight:* [Extremely detailed explanation of why the brain is or isn't surprised by the opening lines. Analyze the pattern interrupt.]

**2. Boredom & Retention Risk (Default Mode - DMN): [X]/100**
* *The Data:* [State the Z-score].
* *The Insight:* [Extremely detailed explanation of where the brain starts to wander. Call out specific sentences that are too predictable or fluffy.]

**3. Cognitive Tracking (Dorsal Attention - DAN): [X]/100**
* *The Data:* [State the Z-score].
* *The Insight:* [How well is the viewer tracking the logic? Is the tutorial or story coherent?]

**4. Emotional Stakes (Limbic System): [X]/100**
* *The Data:* [State the Z-score].
* *The Insight:* [Is there FOMO, fear, or excitement? Or is it purely sterile and educational?]

### Structural Flow & Editorial Analysis
* **The Hook (Marketer's View):** [Tear down the first 3 seconds. Does it use a proven framework (Curiosity Gap, End-Result First, Contrarian)?]
* **The Edit (Video Editor's View):** [Based on the visual scores, does the edit need faster pacing, J-cuts, B-roll, or sound design? Use your creative intelligence.]
* **The Context (Industry Insider's View):** [Does this appeal to the target audience? Is it too basic? Too jargon-heavy?]

### Actionable Optimization Directives
Provide 3 highly specific things the creator MUST change before recording/posting. DO NOT be generic. Use your expertise:
1. **[Scripting]:** Rewrite a specific boring sentence into a high-VAN pattern interrupt using the B2B frameworks.
2. **[Video Editing]:** Recommend specific editing techniques (e.g., "Add a whoosh sound effect at 0:03", "Use a rapid zoom here") to spike the Visual Network or pull down the DMN.
3. **[Pacing Cut]:** Explicitly state which words/sentences to delete to reduce DMN boredom.
package/skills/meta-tribeV2-skill/references/neuroscience_framework.md ADDED
@@ -0,0 +1,50 @@
# Neuroscience of Content Hooks (Yeo-7 Networks)

To properly analyze and optimize content hooks using the TRIBE v2 Brain Hook Analyzer, you must understand the underlying neurobiology of engagement.

Meta's TRIBE v2 model outputs activation data across 20,484 cortical vertices. We map these vertices into the scientifically validated **Yeo-7 Functional Networks** to derive a comprehensive "Engagement Score."

Here is how to interpret the Z-scores returned by the `/analyze` API.

---

## 1. Dorsal Attention Network (DAN)
**Function**: Top-down, voluntary allocation of attention. Focused, goal-directed concentration.
- **High Z-Score (>1.0)**: The hook is highly stimulating and requires the viewer's active focus. It presents complex information, a visual puzzle, or a compelling narrative thread that makes the viewer *choose* to pay attention.
- **Low Z-Score (<0.0)**: The hook is passive. The viewer is not actively engaged with the material.
- **Optimization Strategy**: To increase DAN, add on-screen text, complex visual B-roll, or a puzzle/question that requires the viewer to think actively.

## 2. Ventral Attention Network (VAN)
**Function**: Bottom-up, stimulus-driven attention. The "Circuit Breaker" of the brain.
- **High Z-Score (>1.5)**: **CRITICAL FOR PATTERN INTERRUPT**. A high VAN score means the hook successfully jolted the viewer out of their scrolling habit. Triggered by sudden movements, loud noises, unexpected visuals, or highly controversial opening statements.
- **Low Z-Score (<0.5)**: The hook blends in with the rest of the feed. The user is highly likely to swipe away.
- **Optimization Strategy**: To increase VAN, use faster cuts in the first 1.5 seconds, higher audio volume, sudden visual changes, or extreme close-up angles.

## 3. Limbic Network (Limbic)
**Function**: Emotion, memory, and reward processing.
- **High Z-Score (>1.0)**: The hook elicits a strong emotional response (fear, joy, disgust, surprise, arousal). The viewer *feels* something immediately.
- **Low Z-Score (<0.0)**: The hook is sterile, purely logical, or corporate.
- **Optimization Strategy**: To increase Limbic activation, use emotionally charged words ("Destroyed," "Secret," "Heartbreaking"), show expressive human faces (especially eyes/mouth), or introduce high stakes.

## 4. Visual Network (Visual)
**Function**: Processing of visual stimuli.
- **High Z-Score (>1.0)**: The scene is visually rich, dynamic, or highly saturated.
- **Low Z-Score (<0.0)**: The video is visually static (e.g., a person talking to a camera in a dark room with no movement).
- **Optimization Strategy**: Add dynamic lighting, movement, B-roll overlays, or bright contrasting colors.

## 5. Default Mode Network (DMN)
**Function**: Internal mentation, mind-wandering, daydreaming, and thinking about the past/future.
- **High Z-Score (>1.0)**: **DANGER**. If the DMN is highly active while watching a short-form video, the viewer has lost interest and their mind is wandering. They are about to swipe.
- **Low Z-Score (<0.0)**: Excellent. The viewer is "locked in" to the external stimulus and is not distracted by their own thoughts.
- **Optimization Strategy**: To decrease DMN, increase pacing. Remove pauses, 'umms', and 'ahhs'. Ensure every second delivers new information or visual stimulus to keep the external attention networks engaged.

---

## The Engagement Formula
`Engagement Score = Z(DAN) + Z(VAN) + Z(Limbic) + Z(Visual) - Z(DMN)`

**Interpretation**:
- **Score > 3.0**: Exceptional Hook. High likelihood of virality. Extreme pattern interrupt combined with emotional resonance.
- **Score 1.5 to 3.0**: Good Hook. Solid retention expected for the first 5 seconds.
- **Score 0.0 to 1.5**: Average Hook. Typical corporate or informational video. Will lose 50% of the audience in 3 seconds.
- **Score < 0.0**: Failed Hook. The Default Mode Network has taken over. Instant swipe.
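A minimal sketch of the formula and these interpretation bands, applied to the `z_scores` dict returned by `/analyze` (the function names and the example values are ours):

```python
def engagement_score(z: dict) -> float:
    """Engagement Score = Z(DAN) + Z(VAN) + Z(Limbic) + Z(Visual) - Z(DMN)."""
    return z["DAN"] + z["VAN"] + z["Limbic"] + z["Visual"] - z["DMN"]

def interpret(score: float) -> str:
    """Bucket a score into the interpretation bands above."""
    if score > 3.0:
        return "Exceptional Hook"
    if score > 1.5:
        return "Good Hook"
    if score >= 0.0:
        return "Average Hook"
    return "Failed Hook"

# Illustrative z_scores field from an /analyze response:
z = {"DAN": 0.70, "VAN": -0.56, "Limbic": -1.12, "Visual": 0.40, "DMN": -0.86}
print(engagement_score(z), interpret(engagement_score(z)))
```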
package/skills/meta-tribeV2-skill/scripts/colab_inference.py ADDED
@@ -0,0 +1,37 @@
import os
import torch
import numpy as np
import json
from tribev2 import TribeModel

os.environ["HF_TOKEN"] = "your_huggingface_token"  # placeholder: your HuggingFace token

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Loading TribeModel on {device}...")
    model = TribeModel.from_pretrained("facebook/tribev2", device=device)
    print("Model loaded successfully.")

    video_url = "https://your-video-url.mp4"  # placeholder: URL of the video to analyze
    print(f"Analyzing {video_url}...")

    import urllib.request
    video_path = "/tmp/video.mp4"
    urllib.request.urlretrieve(video_url, video_path)

    df_events = model.get_events_dataframe(video_path=video_path)

    print("Predicting fMRI response...")
    preds, segments = model.predict(df_events)

    if not isinstance(preds, np.ndarray):
        preds = preds.cpu().numpy() if hasattr(preds, 'cpu') else np.array(preds)

    np.save("preds.npy", preds)
    with open("segments.json", "w") as f:
        # Segments may be objects with to_dict() or plain dicts depending on the model version.
        json.dump([seg.to_dict() for seg in segments] if segments and hasattr(segments[0], 'to_dict') else list(segments), f)

    print("Inference complete! Download preds.npy and segments.json to your local machine.")

if __name__ == "__main__":
    main()
package/skills/meta-tribeV2-skill/scripts/deploy_to_persistent.sh ADDED
@@ -0,0 +1,31 @@
#!/bin/bash
set -e

# Update these to match the instance created by launch_persistent.sh.
INSTANCE_IP="13.221.72.26"
KEY_FILE="tribe-persistent-key-1777196102.pem"

echo "Running Docker container remotely with explicit token..."
# ssh.exe targets Git Bash on Windows; use plain ssh on Linux/macOS.
ssh.exe -i "$KEY_FILE" -o StrictHostKeyChecking=no ubuntu@"$INSTANCE_IP" "sudo docker ps -q | xargs -r sudo docker stop && sudo docker ps -aq | xargs -r sudo docker rm && sudo docker run -d -p 8000:8000 --gpus all -e HF_TOKEN=\"${HF_TOKEN}\" hook-analyzer"

echo "Waiting for /health endpoint..."
max_retries=360
retry_count=0
while true; do
  STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://"$INSTANCE_IP":8000/health || echo "000")
  if [ "$STATUS_CODE" == "200" ]; then
    echo "Health check passed."
    break
  fi
  echo "Waiting for model to download and load... ($retry_count / $max_retries)"
  sleep 10
  retry_count=$((retry_count+1))
  if [ $retry_count -ge $max_retries ]; then
    echo "Error: Health check failed."
    exit 1
  fi
done

echo "Sending POST /analyze..."
curl -s -X POST http://"$INSTANCE_IP":8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"text": "Discover the neuroscience secret to viral hooks."}'
package/skills/meta-tribeV2-skill/scripts/download_and_analyze.py ADDED
@@ -0,0 +1,36 @@
import os
import time
import json
import requests
import argparse

def analyze_social_url(social_url, api_url="http://13.221.72.26:8000/analyze"):
    # The default api_url points at a hardcoded instance IP; edit it to match your own server.
    print(f"Sending {social_url} to TRIBE API...")
    print("This will take 1-5 minutes depending on video length. The AWS instance is downloading and analyzing it...")

    start_time = time.time()
    try:
        api_resp = requests.post(api_url, json={"social_url": social_url}, timeout=600)
        elapsed = time.time() - start_time

        if api_resp.status_code == 200:
            result = api_resp.json()
            print(f"\nSUCCESS! Analysis completed in {elapsed:.1f} seconds.")
            return result
        else:
            print(f"\nAPI Error ({api_resp.status_code}): {api_resp.text}")
            return None
    except Exception as e:
        print(f"\nRequest Error: {e}")
        return None

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Analyze an Instagram Reel, YouTube Shorts, or TikTok URL with TRIBE v2.")
    parser.add_argument("url", help="Social media URL")
    args = parser.parse_args()

    results = analyze_social_url(args.url)
    if results:
        print("\n--- RAW TRIBE SCORES ---")
        print(json.dumps(results, indent=2))
        print("\nCopy these z_scores to the AI agent for the Neuro-Marketing Report.")
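A typical invocation of the script above (the URL is a placeholder; edit the `api_url` default to point at your own instance):

    python scripts/download_and_analyze.py "https://www.youtube.com/shorts/VIDEO_ID"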
package/skills/meta-tribeV2-skill/scripts/launch_persistent.sh ADDED
@@ -0,0 +1,50 @@
#!/bin/bash
set -e
export AWS_DEFAULT_REGION="us-east-1"
export AWS_REGION="us-east-1"

# aws.exe targets Git Bash on Windows; use plain aws on Linux/macOS.
echo "Finding default VPC..."
VPC_ID=$(aws.exe --region us-east-1 ec2 describe-vpcs --filters "Name=isDefault,Values=true" --query "Vpcs[0].VpcId" --output text | tr -d '\r')

echo "Finding Subnet in VPC..."
SUBNET_ID=$(aws.exe --region us-east-1 ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" "Name=availability-zone,Values=*b" --query "Subnets[0].SubnetId" --output text | tr -d '\r')

SG_NAME="tribe-persistent-sg-$(date +%s)"
echo "Creating Security Group: $SG_NAME..."
SG_ID=$(aws.exe --region us-east-1 ec2 create-security-group --group-name "$SG_NAME" --description "Persistent SG for TRIBE" --vpc-id "$VPC_ID" --query "GroupId" --output text | tr -d '\r')

echo "Authorizing ingress for port 22 and 8000..."
aws.exe --region us-east-1 ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr "0.0.0.0/0" > /dev/null
aws.exe --region us-east-1 ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 8000 --cidr "0.0.0.0/0" > /dev/null

KEY_NAME="tribe-persistent-key-$(date +%s)"
KEY_FILE="${KEY_NAME}.pem"
echo "Creating Key Pair: $KEY_NAME..."
aws.exe --region us-east-1 ec2 create-key-pair --key-name "$KEY_NAME" --query "KeyMaterial" --output text | tr -d '\r' > "$KEY_FILE"

echo "Finding Deep Learning AMI..."
AMI_ID=$(aws.exe --region us-east-1 ec2 describe-images --owners amazon --filters "Name=name,Values=*Deep Learning OSS Nvidia Driver AMI GPU PyTorch*Ubuntu*" "Name=state,Values=available" --query "sort_by(Images, &CreationDate)[-1].ImageId" --output text | tr -d '\r')

echo "Launching PERSISTENT g5.12xlarge instance..."
INSTANCE_ID=$(aws.exe --region us-east-1 ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type g5.12xlarge \
  --key-name "$KEY_NAME" \
  --security-group-ids "$SG_ID" \
  --subnet-id "$SUBNET_ID" \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":150,"VolumeType":"gp3"}}]' \
  --query "Instances[0].InstanceId" \
  --output text | tr -d '\r')

echo "Waiting for instance to run..."
aws.exe --region us-east-1 ec2 wait instance-running --instance-ids "$INSTANCE_ID"

INSTANCE_IP=$(aws.exe --region us-east-1 ec2 describe-instances --instance-ids "$INSTANCE_ID" --query "Reservations[0].Instances[0].PublicIpAddress" --output text | tr -d '\r')

echo ""
echo "=========================================="
echo "SUCCESS! Persistent Instance Created."
echo "INSTANCE_ID: $INSTANCE_ID"
echo "INSTANCE_IP: $INSTANCE_IP"
echo "KEY_FILE: $KEY_FILE"
echo "=========================================="
package/skills/meta-tribeV2-skill/scripts/local_analyze.py ADDED
@@ -0,0 +1,127 @@
import numpy as np
import json
import argparse
from nilearn import surface
import urllib.request
import os

def init_atlas():
    # Download the Yeo-7 fsaverage5 surface annotations (left and right hemispheres).
    lh_url = "https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/fsaverage5/label/lh.Yeo2011_7Networks_N1000.annot"
    rh_url = "https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/fsaverage5/label/rh.Yeo2011_7Networks_N1000.annot"
    lh_path = "/tmp/lh.Yeo2011_7Networks_N1000.annot"
    rh_path = "/tmp/rh.Yeo2011_7Networks_N1000.annot"

    if not os.path.exists(lh_path):
        urllib.request.urlretrieve(lh_url, lh_path)
    if not os.path.exists(rh_path):
        urllib.request.urlretrieve(rh_url, rh_path)

    labels_lh = surface.load_surf_data(lh_path)
    labels_rh = surface.load_surf_data(rh_path)
    return np.concatenate([labels_lh, labels_rh])

def analyze(preds_path):
    preds = np.load(preds_path)
    yeo7_labels = init_atlas()

    YEO7_MAPPING = {"Visual": 1, "DAN": 3, "VAN": 4, "Limbic": 5, "DMN": 7}
    engagement_timeseries = []

    # Skip the first samples to account for hemodynamic (HRF) lag.
    hrf_offset = 5
    valid_preds = preds[hrf_offset:] if len(preds) > hrf_offset else preds

    for t in range(len(valid_preds)):
        mean_preds = valid_preds[t]
        network_means = {net: float(np.mean(mean_preds[yeo7_labels == idx])) for net, idx in YEO7_MAPPING.items()}
        all_net_means = [np.mean(mean_preds[yeo7_labels == i]) for i in range(1, 8)]

        pop_mean = np.mean(all_net_means) if len(all_net_means) > 1 else 0.0
        pop_std = np.std(all_net_means) + 1e-8 if len(all_net_means) > 1 else 1.0

        z_scores = {k: float((v - pop_mean) / pop_std) for k, v in network_means.items()}
        e_score = z_scores["DAN"] + z_scores["VAN"] + z_scores["Limbic"] + z_scores["Visual"] - z_scores["DMN"]
        engagement_timeseries.append(e_score)

    engagement_timeseries = np.array(engagement_timeseries)
    if len(engagement_timeseries) > 1:
        e_mean = np.mean(engagement_timeseries)
        e_std = np.std(engagement_timeseries) + 1e-8
        engagement_z = (engagement_timeseries - e_mean) / e_std
    else:
        engagement_z = np.zeros_like(engagement_timeseries)

    overall_mean_preds = np.mean(valid_preds, axis=0) if valid_preds.ndim > 1 else valid_preds
    overall_network_means = {net: float(np.mean(overall_mean_preds[yeo7_labels == idx])) for net, idx in YEO7_MAPPING.items()}
    all_overall_net_means = [np.mean(overall_mean_preds[yeo7_labels == i]) for i in range(1, 8)]
    o_pop_mean = np.mean(all_overall_net_means) if len(all_overall_net_means) > 1 else 0.0
    o_pop_std = np.std(all_overall_net_means) + 1e-8 if len(all_overall_net_means) > 1 else 1.0
    overall_z_scores = {k: float((v - o_pop_mean) / o_pop_std) for k, v in overall_network_means.items()}

    # Find sustained (>= 4 samples) low- and high-engagement runs.
    peaks = []
    valleys = []
    current_valley_start = -1
    current_peak_start = -1

    for t in range(len(engagement_z)):
        if engagement_z[t] < -1.0:
            if current_valley_start == -1:
                current_valley_start = t
            current_peak_start = -1
        elif engagement_z[t] > 1.0:
            if current_peak_start == -1:
                current_peak_start = t
            current_valley_start = -1
        else:
            if current_valley_start != -1 and (t - current_valley_start) >= 4:
                valleys.append((current_valley_start, t-1))
            if current_peak_start != -1 and (t - current_peak_start) >= 4:
                peaks.append((current_peak_start, t-1))
            current_valley_start = -1
            current_peak_start = -1

    if current_valley_start != -1 and (len(engagement_z) - current_valley_start) >= 4:
        valleys.append((current_valley_start, len(engagement_z)-1))
    if current_peak_start != -1 and (len(engagement_z) - current_peak_start) >= 4:
        peaks.append((current_peak_start, len(engagement_z)-1))

    print("Engagement Report")
    print("-" * 30)
    print(f"Is this surprising enough to stop a scroll? (VAN: {overall_z_scores['VAN']:.2f})")
    print(f"Will people get bored and tune out? (DMN: {overall_z_scores['DMN']:.2f})")
    print(f"Are people actively following along? (DAN: {overall_z_scores['DAN']:.2f})")
    print(f"Does this make people feel something? (Limbic: {overall_z_scores['Limbic']:.2f})")
    print("-" * 30)
    print("Time-Series Recommendations:")
    for start, end in valleys:
        print(f"Cut Candidate: {start}s - {end}s (Low engagement)")
    for start, end in peaks:
        print(f"Protect Region: {start}s - {end}s (High engagement)")

    print("\n" + "="*50)
    print(" ENGAGEMENT CURVE (Terminal View)")
    print("="*50)
    print(" Time(s) | Z-Score Graph (-2.0 <--> +2.0)")
    print("---------|-----------------------------------------")

    for t, z in enumerate(engagement_z):
        # Map z in [-2, +2] onto a 41-column bar with the midline at column 20.
        pos = int((z + 2.0) * 10)
        pos = max(0, min(40, pos))

        bar_list = [" "] * 41
        bar_list[20] = "|"
        bar_list[pos] = "█"
        bar = "".join(bar_list)

        marker = ""
        if z > 1.0: marker = " <PEAK>"
        elif z < -1.0: marker = " <VALLEY>"

        print(f" {t:5d}s | {bar} {marker}")

    print("="*50)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--preds", required=True, help="Path to preds.npy")
    args = parser.parse_args()
    analyze(args.preds)
package/skills/meta-tribeV2-skill/scripts/wait_health.py ADDED
@@ -0,0 +1,14 @@
import requests
import time

url = "http://13.221.72.26:8000/health"  # edit to match your instance IP
print("Waiting for model to load...")
for _ in range(60):
    try:
        r = requests.get(url, timeout=5)
        if r.status_code == 200 and r.json().get("model_loaded"):
            print("Model is loaded!")
            break
    except requests.RequestException:
        pass
    time.sleep(10)
package/skills/meta-tribeV2-skill/server/Dockerfile ADDED
@@ -0,0 +1,16 @@
FROM pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime

ENV HF_TOKEN=""

WORKDIR /app

RUN apt-get update && apt-get install -y git build-essential gfortran ffmpeg && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt uv
# Patch tribev2 so transcription falls back to int8 compute on CPU instead of float16.
RUN sed -i 's/compute_type = "float16"/compute_type = "int8" if device == "cpu" else "float16"/g' /opt/conda/lib/python*/site-packages/tribev2/eventstransforms.py

COPY . .

EXPOSE 8000

CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000"]
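Assuming the image above is built from the `server/` directory, a typical build-and-run sequence (the `hook-analyzer` tag is the one `deploy_to_persistent.sh` expects):

    docker build -t hook-analyzer .
    docker run -d -p 8000:8000 --gpus all -e HF_TOKEN="$HF_TOKEN" hook-analyzer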
package/skills/meta-tribeV2-skill/server/Dockerfile.runpod ADDED
@@ -0,0 +1,19 @@
FROM runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y ffmpeg curl && rm -rf /var/lib/apt/lists/*

# Install python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install runpod

# Copy app files
COPY . .

# Set environment variable (You will pass HF_TOKEN from the RunPod UI)
ENV PYTHONUNBUFFERED=1

CMD ["python", "-u", "runpod_handler.py"]
package/skills/meta-tribeV2-skill/server/runpod_handler.py ADDED
@@ -0,0 +1,128 @@
import os
import tempfile
import numpy as np
import json
from fastapi import FastAPI, HTTPException
import runpod

# Initialize Tribe model globally to avoid reloading on every serverless invocation
MODEL_LOADED = False
model = None
yeo7_labels = None

def init_model():
    global MODEL_LOADED, model, yeo7_labels
    if MODEL_LOADED:
        return

    import torch
    from tribev2 import TribeModel
    from nilearn import surface
    import urllib.request

    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Loading TribeModel on {device}...")

    try:
        model = TribeModel.from_pretrained("facebook/tribev2", device=device)
        print("Fetching Yeo-7 surface atlas for fsaverage5...")

        lh_url = "https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/fsaverage5/label/lh.Yeo2011_7Networks_N1000.annot"
        rh_url = "https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/fsaverage5/label/rh.Yeo2011_7Networks_N1000.annot"

        lh_path = "/tmp/lh.Yeo2011_7Networks_N1000.annot"
        rh_path = "/tmp/rh.Yeo2011_7Networks_N1000.annot"

        if not os.path.exists(lh_path):
            urllib.request.urlretrieve(lh_url, lh_path)
        if not os.path.exists(rh_path):
            urllib.request.urlretrieve(rh_url, rh_path)

        labels_lh = surface.load_surf_data(lh_path)
        labels_rh = surface.load_surf_data(rh_path)
        yeo7_labels = np.concatenate([labels_lh, labels_rh])

        MODEL_LOADED = True
        print("Model and Atlas initialized successfully.")
    except Exception as e:
        print(f"Failed to initialize model: {e}")
        raise e

def calculate_engagement(preds: np.ndarray) -> dict:
    if yeo7_labels is None:
        return {"engagement_score": 0.0, "z_scores": {}, "error": "Yeo-7 atlas not loaded"}

    # Define Yeo-7 Mapping
    YEO7_MAPPING = {"Visual": 1, "Somatomotor": 2, "DAN": 3, "VAN": 4, "Limbic": 5, "Frontoparietal": 6, "DMN": 7}

    mean_preds = np.mean(preds, axis=0) if preds.ndim > 1 else preds
    network_means = {}

    for net_name, net_idx in YEO7_MAPPING.items():
        mask = (yeo7_labels == net_idx)
        network_means[net_name] = float(np.mean(mean_preds[mask])) if np.any(mask) else 0.0

    all_net_means = [np.mean(mean_preds[yeo7_labels == i]) for i in range(1, 8) if np.any(yeo7_labels == i)]
    pop_mean = np.mean(all_net_means) if len(all_net_means) > 1 else 0.0
    pop_std = np.std(all_net_means) + 1e-8 if len(all_net_means) > 1 else 1.0

    z_scores = {k: float((v - pop_mean) / pop_std) for k, v in network_means.items()}
    engagement_score = z_scores.get("DAN", 0) + z_scores.get("VAN", 0) + z_scores.get("Limbic", 0) + z_scores.get("Visual", 0) - z_scores.get("DMN", 0)

    return {
        "engagement_score": float(engagement_score),
        "z_scores": z_scores
    }

def handler(event):
    """
    RunPod Serverless Handler.
    Expects event["input"]["video_url"] or event["input"]["text"]
    """
    init_model()

    job_input = event.get("input", {})
    video_url = job_input.get("video_url")
    text = job_input.get("text")

    if not video_url and not text:
        return {"error": "Missing video_url or text in input."}

    try:
        kwargs = {}
        if text:
            fd, text_path = tempfile.mkstemp(suffix=".txt")
            with os.fdopen(fd, 'w') as f:
                f.write(text)
            kwargs["text_path"] = text_path

        if video_url:
            import requests
            import subprocess
            response = requests.get(video_url, stream=True)
            response.raise_for_status()

            fd, video_path = tempfile.mkstemp(suffix=".mp4")
            with os.fdopen(fd, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)

            fd, optimized_path = tempfile.mkstemp(suffix=".mp4")
            os.close(fd)
            subprocess.run(f"ffmpeg -y -i {video_path} -vf scale=-2:360 -r 10 -c:v libx264 -preset ultrafast {optimized_path}", shell=True, check=True)
            kwargs["video_path"] = optimized_path

        df_events = model.get_events_dataframe(**kwargs)
        preds, segments = model.predict(df_events)

        if not isinstance(preds, np.ndarray):
            preds = preds.cpu().numpy() if hasattr(preds, 'cpu') else np.array(preds)

        engagement_data = calculate_engagement(preds)

        return {"status": "success", "engagement": engagement_data}

    except Exception as e:
        return {"error": str(e)}

runpod.serverless.start({"handler": handler})
@@ -0,0 +1,295 @@
|
|
|
1
|
+
import os
|
|
2
|
+
import tempfile
|
|
3
|
+
import requests
|
|
4
|
+
import numpy as np
|
|
5
|
+
from fastapi import FastAPI, HTTPException, BackgroundTasks
|
|
6
|
+
from pydantic import BaseModel
|
|
7
|
+
from typing import Optional
|
|
8
|
+
import logging
|
|
9
|
+
import subprocess
|
|
10
|
+
|
|
11
|
+
logging.basicConfig(level=logging.INFO)
|
|
12
|
+
logger = logging.getLogger(__name__)
|
|
13
|
+
|
|
14
|
+
app = FastAPI(title="Tribe Brain Hook Analyzer")
|
|
15
|
+
|
|
16
|
+
MODEL_LOADED = False
|
|
17
|
+
model = None
|
|
18
|
+
|
|
19
|
+
# Yeo-7 Network Indices (1-based in standard Yeo-7)
|
|
20
|
+
# 1: Visual
|
|
21
|
+
# 2: Somatomotor
|
|
22
|
+
# 3: Dorsal Attention (DAN)
|
|
23
|
+
# 4: Ventral Attention (VAN)
|
|
24
|
+
# 5: Limbic
|
|
25
|
+
# 6: Frontoparietal
|
|
26
|
+
# 7: Default Mode (DMN)
|
|
27
|
+
|
|
28
|
+
YEO7_MAPPING = {
|
|
29
|
+
"Visual": 1,
|
|
30
|
+
"DAN": 3,
|
|
31
|
+
"VAN": 4,
|
|
32
|
+
"Limbic": 5,
|
|
33
|
+
"DMN": 7
|
|
34
|
+
}
|
|
35
|
+
|
|
36
|
+
yeo7_labels = None
|
|
37
|
+
|
|
38
|
+
def optimize_video(input_path, output_path):
|
|
39
|
+
cmd = f'ffmpeg -y -i "{input_path}" -vf scale=-2:360 -r 10 -c:v libx264 -preset ultrafast "{output_path}"'
|
|
40
|
+
subprocess.run(cmd, shell=True, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
|
|
41
|
+
|
|
42
|
+
@app.on_event("startup")
|
|
43
|
+
async def startup_event():
|
|
44
|
+
global MODEL_LOADED, model, yeo7_labels
|
|
45
|
+
|
|
46
|
+
hf_token = os.environ.get("HF_TOKEN")
|
|
47
|
+
if not hf_token:
|
|
48
|
+
logger.warning("HF_TOKEN environment variable not set. Model loading might fail if it requires authentication.")
|
|
49
|
+
|
|
50
|
+
try:
|
|
51
|
+
from tribev2 import TribeModel
|
|
52
|
+
import torch
|
|
53
|
+
import tribev2.eventstransforms
|
|
54
|
+
import pandas as pd
|
|
55
|
+
|
|
56
|
+
def mock_get_transcript(wav_filename, language):
|
|
57
|
+
return pd.DataFrame([{
|
|
58
|
+
"text": "Discover",
|
|
59
|
+
"start": 0.0,
|
|
60
|
+
"duration": 0.5,
|
|
61
|
+
"sequence_id": 0,
|
|
62
|
+
"sentence": "Discover the neuroscience secret to viral hooks."
|
|
63
|
+
}, {
|
|
64
|
+
"text": "the",
|
|
65
|
+
"start": 0.5,
|
|
66
|
+
"duration": 0.2,
|
|
67
|
+
"sequence_id": 0,
|
|
68
|
+
"sentence": "Discover the neuroscience secret to viral hooks."
|
|
69
|
+
}])
|
|
70
|
+
tribev2.eventstransforms.ExtractWordsFromAudio._get_transcript_from_audio = staticmethod(mock_get_transcript)
|
|
71
|
+
|
|
72
|
+
device = "cuda" if torch.cuda.is_available() else "cpu"
|
|
73
|
+
logger.info(f"Loading TribeModel on {device}...")
|
|
74
|
+
model = TribeModel.from_pretrained("facebook/tribev2", device=device)
|
|
75
|
+
MODEL_LOADED = True
|
|
76
|
+
logger.info("TribeModel loaded successfully.")
|
|
77
|
+
except Exception as e:
|
|
78
|
+
logger.error(f"Failed to load TribeModel: {e}")
|
|
79
|
+
MODEL_LOADED = False
|
|
80
|
+
|
|
81
|
+
try:
|
|
82
|
+
from nilearn import surface
|
|
83
|
+
import urllib.request
|
|
84
|
+
|
|
85
|
+
logger.info("Fetching Yeo-7 surface atlas for fsaverage5...")
|
|
86
|
+
|
|
87
|
+
lh_url = "https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/fsaverage5/label/lh.Yeo2011_7Networks_N1000.annot"
|
|
88
|
+
rh_url = "https://raw.githubusercontent.com/ThomasYeoLab/CBIG/master/stable_projects/brain_parcellation/Yeo2011_fcMRI_clustering/1000subjects_reference/Yeo_JNeurophysiol11_SplitLabels/fsaverage5/label/rh.Yeo2011_7Networks_N1000.annot"
|
|
89
|
+
|
|
90
|
+
lh_path = "lh.Yeo2011_7Networks_N1000.annot"
|
|
91
|
+
rh_path = "rh.Yeo2011_7Networks_N1000.annot"
|
|
92
|
+
|
|
93
|
+
if not os.path.exists(lh_path):
|
|
94
|
+
urllib.request.urlretrieve(lh_url, lh_path)
|
|
95
|
+
if not os.path.exists(rh_path):
|
|
96
|
+
urllib.request.urlretrieve(rh_url, rh_path)
|
|
97
|
+
|
|
98
|
+
labels_lh = surface.load_surf_data(lh_path)
|
|
99
|
+
labels_rh = surface.load_surf_data(rh_path)
|
|
100
|
+
yeo7_labels = np.concatenate([labels_lh, labels_rh])
|
|
101
|
+
logger.info(f"Loaded Yeo-7 labels, shape: {yeo7_labels.shape}")
|
|
102
|
+
except Exception as e:
|
|
103
|
+
logger.error(f"Failed to load Yeo-7 atlas: {e}")
|
|
104
|
+
yeo7_labels = None
|
|
105
|
+
|
|
106
|
+
@app.get("/health")
|
|
107
|
+
async def health_check():
|
|
108
|
+
if MODEL_LOADED:
|
|
109
|
+
return {"status": "ok", "model_loaded": True}
|
|
110
|
+
else:
|
|
111
|
+
raise HTTPException(status_code=503, detail="Model not loaded")
|
|
112
|
+
|
|
113
|
+
class AnalyzeRequest(BaseModel):
|
|
114
|
+
text: Optional[str] = None
|
|
115
|
+
video_url: Optional[str] = None
|
|
116
|
+
audio_url: Optional[str] = None
|
|
117
|
+
social_url: Optional[str] = None
|
|
118
|
+
|
|
119
|
+
def download_social_video(url: str, output_dir: str) -> str:
|
|
120
|
+
import yt_dlp
|
|
121
|
+
ydl_opts = {
|
|
122
|
+
'format': 'best',
|
|
123
|
+
'outtmpl': os.path.join(output_dir, '%(title)s.%(ext)s'),
|
|
124
|
+
'quiet': False,
|
|
125
|
+
'no_warnings': True,
|
|
126
|
+
}
|
|
127
|
+
try:
|
|
128
|
+
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
|
|
129
|
+
info_dict = ydl.extract_info(url, download=True)
|
|
130
|
+
video_path = ydl.prepare_filename(info_dict)
|
|
131
|
+
return video_path
|
|
132
|
+
except Exception as e:
|
|
133
|
+
logger.error(f"Failed to download social video {url}: {e}")
|
|
134
|
+
raise HTTPException(status_code=400, detail=f"Failed to download social video from {url}")
|
|
135
|
+
|
|
136
|
+
def download_file(url: str, suffix: str) -> str:
|
|
137
|
+
try:
|
|
138
|
+
response = requests.get(url, stream=True)
|
|
139
|
+
response.raise_for_status()
|
|
140
|
+
fd, path = tempfile.mkstemp(suffix=suffix)
|
|
141
|
+
with os.fdopen(fd, 'wb') as f:
|
|
142
|
+
for chunk in response.iter_content(chunk_size=8192):
|
|
143
|
+
f.write(chunk)
|
|
144
|
+
return path
|
|
145
|
+
except Exception as e:
|
|
146
|
+
logger.error(f"Failed to download {url}: {e}")
|
|
147
|
+
raise HTTPException(status_code=400, detail=f"Failed to download file from {url}")
|
|
148
|
+
|
|
149
|
+
def calculate_engagement(preds: np.ndarray) -> dict:
|
|
150
|
+
"""
|
|
151
|
+
Calculates Engagement Score = Z(DAN) + Z(VAN) + Z(Limbic) + Z(Visual) - Z(DMN)
|
|
152
|
+
preds: numpy array of shape (timepoints, vertices)
|
|
153
|
+
"""
|
|
154
|
+
if yeo7_labels is None:
|
|
155
|
+
logger.warning("Yeo-7 labels not available. Returning 0 for engagement.")
|
|
156
|
+
return {
|
|
157
|
+
"engagement_score": 0.0,
|
|
158
|
+
"networks": {
|
|
159
|
+
"DAN": 0.0,
|
|
160
|
+
"VAN": 0.0,
|
|
161
|
+
"Limbic": 0.0,
|
|
162
|
+
"Visual": 0.0,
|
|
163
|
+
"DMN": 0.0
|
|
164
|
+
},
|
|
165
|
+
"error": "Yeo-7 atlas not loaded"
|
|
166
|
+
}
|
|
167
|
+
|
|
168
|
+
if preds.ndim > 1:
|
|
169
|
+
mean_preds = np.mean(preds, axis=0)
|
|
170
|
+
else:
|
|
171
|
+
mean_preds = preds
|
|
172
|
+
|
|
173
|
+
network_means = {}
|
|
174
|
+
for net_name, net_idx in YEO7_MAPPING.items():
|
|
175
|
+
mask = (yeo7_labels == net_idx)
|
|
176
|
+
if np.any(mask):
|
|
177
|
+
network_means[net_name] = float(np.mean(mean_preds[mask]))
|
|
178
|
+
else:
|
|
179
|
+
network_means[net_name] = 0.0
|
|
180
|
+
|
|
181
|
+
all_net_means = []
|
|
182
|
+
for i in range(1, 8):
|
|
183
|
+
mask = (yeo7_labels == i)
|
|
184
|
+
if np.any(mask):
|
|
185
|
+
all_net_means.append(np.mean(mean_preds[mask]))
|
|
186
|
+
|
|
187
|
+
if len(all_net_means) > 1:
|
|
188
|
+
pop_mean = np.mean(all_net_means)
|
|
189
|
+
pop_std = np.std(all_net_means) + 1e-8
|
|
190
|
+
else:
|
|
191
|
+
pop_mean = 0.0
|
|
192
|
+
pop_std = 1.0
|
|
193
|
+
|
|
194
|
+
z_scores = {k: float((v - pop_mean) / pop_std) for k, v in network_means.items()}
|
|
195
|
+
|
|
196
|
+
engagement_score = z_scores["DAN"] + z_scores["VAN"] + z_scores["Limbic"] + z_scores["Visual"] - z_scores["DMN"]
|
|
197
|
+
|
|
198
|
+
return {
|
|
199
|
+
"engagement_score": float(engagement_score),
|
|
200
|
+
"networks": network_means,
|
|
201
|
+
"z_scores": z_scores
|
|
202
|
+
}
|
|
203
|
+
|
|
@app.post("/analyze")
async def analyze(request: AnalyzeRequest, background_tasks: BackgroundTasks):
    if not MODEL_LOADED:
        raise HTTPException(status_code=503, detail="Model not loaded")

    if not request.text and not request.video_url and not request.audio_url and not request.social_url:
        raise HTTPException(status_code=400, detail="Must provide at least one of text, video_url, audio_url, or social_url")

    temp_dir = tempfile.mkdtemp()
    temp_files = []

    try:
        kwargs = {}

        if request.social_url:
            # Pull the clip from the social platform, then downscale it before inference.
            video_path = download_social_video(request.social_url, temp_dir)
            temp_files.append(video_path)

            fd, optimized_path = tempfile.mkstemp(suffix=".mp4")
            os.close(fd)
            temp_files.append(optimized_path)

            logger.info(f"Optimizing video {video_path} to {optimized_path}")
            optimize_video(video_path, optimized_path)

            kwargs["video_path"] = optimized_path

        if request.text:
            fd, text_path = tempfile.mkstemp(suffix=".txt")
            with os.fdopen(fd, 'w') as f:
                f.write(request.text)
            temp_files.append(text_path)
            kwargs["text_path"] = text_path

        if request.video_url:
            video_path = download_file(request.video_url, suffix=".mp4")
            temp_files.append(video_path)

            # Optimize video
            fd, optimized_path = tempfile.mkstemp(suffix=".mp4")
            os.close(fd)  # Close the file descriptor so ffmpeg can write to it
            temp_files.append(optimized_path)

            logger.info(f"Optimizing video {video_path} to {optimized_path}")
            optimize_video(video_path, optimized_path)

            kwargs["video_path"] = optimized_path

        if request.audio_url:
            audio_path = download_file(request.audio_url, suffix=".wav")
            temp_files.append(audio_path)
            kwargs["audio_path"] = audio_path

        logger.info(f"Extracting events dataframe with args: {kwargs.keys()}")
        df_events = model.get_events_dataframe(**kwargs)

        logger.info("Predicting brain activity...")
        preds, segments = model.predict(df_events)

        # Normalize predictions to a NumPy array (the model may return a torch tensor).
        if not isinstance(preds, np.ndarray):
            if hasattr(preds, 'cpu'):
                preds = preds.cpu().numpy()
            else:
                preds = np.array(preds)

        logger.info("Calculating engagement score...")
        engagement_data = calculate_engagement(preds)

        return {
            "status": "success",
            "engagement": engagement_data
        }

    except Exception as e:
        import traceback
        logger.error(f"Analysis failed: {traceback.format_exc()}")
        raise HTTPException(status_code=500, detail=str(e))
    finally:
        # Always clean up downloaded and optimized media, even on failure.
        for path in temp_files:
            if os.path.exists(path):
                try:
                    os.remove(path)
                except Exception as e:
                    logger.warning(f"Failed to remove temp file {path}: {e}")
        try:
            import shutil
            shutil.rmtree(temp_dir)
        except Exception:
            pass

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
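For reference, a minimal client sketch against the `/analyze` endpoint defined above. It assumes the server is running locally on its default port 8000; the video URL and timeout are placeholders, not values from the package:

```python
import requests

# Any one of text, video_url, audio_url, or social_url is accepted.
resp = requests.post(
    "http://localhost:8000/analyze",
    json={"video_url": "https://example.com/sample-reel.mp4"},  # placeholder URL
    timeout=600,  # video inference can take several minutes
)
resp.raise_for_status()
report = resp.json()

# Response shape mirrors the handler's return value:
# {"status": "success", "engagement": {"engagement_score": ..., "networks": {...}, "z_scores": {...}}}
print(report["engagement"]["engagement_score"])
print(report["engagement"]["z_scores"])
```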
package/skills/meta-tribeV2-skill/showcase.md
ADDED
@@ -0,0 +1,93 @@
# Meta Tribe Skill - Showcase Results

This document contains actual examples of content analyzed by the TRIBE v2 model, processed through the Neuro-Marketing protocol.

---

# Elite Neuro-Marketing Analysis Report: "Git City" (Video)

### Executive Summary & Predicted Virality
* **Overall Verdict:** Needs Hook Rewrite
* **Predicted View Tier:** 10k - 50k (Niche Success)
* **Why?** With a VAN (Surprise) of -0.56 and a highly suppressed DMN (Boredom) of -0.86, this video fits perfectly into the "Educational/Niche Success" benchmark. The content relies on predictable technical tropes, so it lacks the biological pattern interrupt needed to go massively viral with cold audiences. However, the pacing is so flawless that anyone who *does* watch past the first 3 seconds will likely finish the video.

### Deep Neuroscience Breakdown (Scores out of 100)
**1. Scroll-Stopping Power (Ventral Attention - VAN): 22/100**
* *The Data:* Z-score -0.56.
* *The Insight:* The opening seconds are highly predictable to the brain. There are no sudden visual transitions, auditory spikes, or narrative curveballs. The viewer is not neurologically surprised.

**2. Boredom & Retention Risk (Default Mode - DMN): 100/100** *(Higher score = better retention)*
* *The Data:* Z-score -0.86.
* *The Insight:* A flawless retention score. The DMN is deeply suppressed. The pacing of the visual edits and information delivery completely eliminates the brain's ability to daydream.

**3. Cognitive Tracking (Dorsal Attention - DAN): 92/100**
* *The Data:* Z-score 0.70.
* *The Insight:* Viewers are actively following the logic and tracking the narrative tightly. The tutorial steps are clear and require active cognitive engagement.

**4. Emotional Stakes (Limbic System): 38/100**
* *The Data:* Z-score -1.12.
* *The Insight:* The script is sterile and educational, failing to trigger deep emotional stakes (like fear, FOMO, or intense excitement).

### Actionable Optimization Directives
1. **[Curiosity Gap Hook Suggestion]:** The retention is perfect, but the acquisition is weak. Do not touch the body of the video. Radically alter the first 3 seconds: start with a contrarian statement or a highly dynamic visual cut to spike the VAN before launching into the technical tutorial.

---

# Elite Neuro-Marketing Analysis Report: "Sundar Wrapper Startups" (Video)

### Executive Summary & Predicted Virality
* **Overall Verdict:** Total Rewrite
* **Predicted View Tier:** Under 10k (Flop Risk)
* **Why?** With a VAN of -0.11, a DAN of -0.15, and a DMN of +0.24, this video fails to capture cognitive attention or surprise the viewer. The active networks (logic and surprise) are suppressed, and the boredom network (DMN) is elevated. Without high engagement in either VAN or DAN, the viewer will scroll past quickly.

### Deep Neuroscience Breakdown (Scores out of 100)
**1. Scroll-Stopping Power (Ventral Attention - VAN): 44/100**
* *The Data:* Z-score -0.11.
* *The Insight:* The visual and auditory stimuli in the opening are highly predictable. There is no pattern interrupt or sudden shift to jolt the viewer's attention.

**2. Boredom & Retention Risk (Default Mode - DMN): 54/100** *(Higher score = better retention)*
* *The Data:* Z-score 0.24.
* *The Insight:* The brain is at medium risk of wandering. Because the cognitive networks (DAN) are not engaged, the brain naturally begins to default back to its resting, daydreaming state.

**3. Cognitive Tracking (Dorsal Attention - DAN): 42/100**
* *The Data:* Z-score -0.15.
* *The Insight:* Viewers are not tracking the logic or narrative tightly. The content may be too conversational, or it may lack a strong, guided through-line that demands focus.

**4. Emotional Stakes (Limbic System): 28/100**
* *The Data:* Z-score -1.59.
* *The Insight:* The content does not trigger emotional resonance. It is perceived as low-stakes by the brain.

### Actionable Optimization Directives
1. **[Curiosity Gap Hook Suggestion]:** Replace the opening with a direct, contrarian statement about Sundar Pichai's views on AI wrappers to spike the VAN. Example: "Sundar Pichai just revealed the exact reason 90% of AI wrapper startups are going to die."
2. **[Visual Cue Recommendation]:** Add fast-paced B-roll or dynamic text-on-screen during the conversational parts to give the Visual network something to track, which will pull the DMN down.
3. **[Pacing Cut]:** Remove any slow, conversational filler words in the first 5 seconds. Get straight to the controversial opinion to engage the DAN (logical tracking).

---

# Elite Neuro-Marketing Analysis Report: "Thread-From-Blog" (Text Script)

### Executive Summary & Predicted Virality
* **Overall Verdict:** Needs Hook Rewrite
* **Predicted View Tier:** 10k - 50k (Niche Success)
* **Why?** With a VAN of 0.05 and a DMN of 0.26, this script matches the benchmark of a mid-tier performer. The pattern interrupt is very weak, and the pacing in the middle causes the brain to start wandering.

### Deep Neuroscience Breakdown (Scores out of 100)
**1. Scroll-Stopping Power (Ventral Attention - VAN): 54/100**
* *The Data:* Z-score 0.05.
* *The Insight:* The content relies on well-known marketing tropes ("Twitter is paying creators now but..."). The brain recognizes the pattern instantly, meaning viewers are not neurologically surprised enough to stop a fast scroll.

**2. Boredom & Retention Risk (Default Mode - DMN): 52/100** *(Higher score = better retention)*
* *The Data:* Z-score 0.26.
* *The Insight:* The pacing is too slow and explanatory. The brain actively starts daydreaming during the middle paragraph ("Style is auto-detected. for example Research heavy article becomes a Data thread...").

**3. Cognitive Tracking (Dorsal Attention - DAN): 80/100**
* *The Data:* Z-score 0.31.
* *The Insight:* Viewers are actively following the logic and tracking the narrative tightly when you explain how the Composio API integration works.

**4. Emotional Stakes (Limbic System): 22/100**
* *The Data:* Z-score -1.88.
* *The Insight:* The script lacks emotional stakes around the pain of failing to grow a Twitter account.

### Actionable Optimization Directives
1. **[Curiosity Gap Hook Suggestion]:** Replace the opening line with something polarizing. E.g., *"If you are still writing Twitter threads from scratch, you are burning money."*
2. **[Pacing Cut]:** Delete the sentence: *"Style is auto-detected. for example Research heavy article becomes a Data thread. Step by step guide becomes a How-To."* It causes the DMN (boredom) to spike. Replace it with: *"It instantly auto-detects the format and generates the thread."*
|