@opendirectory.dev/skills 0.1.46 → 0.1.48

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@opendirectory.dev/skills",
- "version": "0.1.46",
+ "version": "0.1.48",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "bin": {
package/registry.json CHANGED
@@ -188,6 +188,14 @@
  "version": "0.0.1",
  "path": "skills/meta-ads-skill"
  },
+ {
+ "name": "meta-tribeV2-skill",
+ "description": "<div align=\"center\">",
+ "tags": [],
+ "author": "opendirectory",
+ "version": "0.0.1",
+ "path": "skills/meta-tribeV2-skill"
+ },
  {
  "name": "newsletter-digest",
  "description": "Aggregates RSS feeds from the past week, synthesizes the top stories using Gemini, and publishes a newsletter digest to Ghost CMS.",
@@ -4,6 +4,9 @@ Agent Skill that equips your AI agent with the ability to autonomously guess, en

  Instead of running Python scripts manually, this skill teaches your AI how to read your lead lists, discover corporate domains via the Clearbit API, generate standard email permutations, and securely verify them.

+ <img width="2752" height="1536" alt="cold-email-verifier-cover-image" src="https://github.com/user-attachments/assets/f033ec61-d2c1-4cee-a6d5-71e7cf9632a6" />
+
+
  ## Install

  ```bash
@@ -0,0 +1,103 @@
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/Varnan-Tech/opendirectory/main/assets/covers/tribe-hook-analyzer-cover.png" width="100%" alt="Meta Tribe Skill Cover" />
+ </div>
+
+ # Meta Tribe Skill
+
+ A self-hosted OpenDirectory AI Skill that uses Meta's TRIBE v2 fMRI model to analyze the neuroscience of video hooks, reels, and scripts.
+
+ Instead of guessing what makes a hook engaging through prompt engineering, this skill predicts actual human brain activity across the scientifically validated Yeo-7 functional networks, giving you an evidence-based Engagement Report for your content.
+
+ ---
+
+ ## What This Skill Does
+
+ This skill provides the infrastructure to host the massive 80GB TRIBE v2 model pipeline and gives your AI agent the ability to:
+ 1. Process video, audio, or text scripts.
+ 2. Intercept and optimize the media, downscaling video to 360p at 10fps to avoid hour-long processing bottlenecks (see the sketch after this list).
+ 3. Run the content through V-JEPA (vision), W2V-BERT (acoustics), and LLaMA 3.2 3B (linguistics).
+ 4. Predict human brain fMRI activity across the Yeo-7 networks.
+ 5. Generate an actionable, human-readable neuroscience report without the jargon.
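
For illustration, here is a minimal Python sketch of the downscaling step from item 2, assuming ffmpeg is on the PATH. The skill's actual preprocessing code is not part of this diff, so the function name and output path are hypothetical.

```python
import subprocess

def downscale_for_tribe(src: str, dst: str = "optimized.mp4") -> str:
    """Downscale a video to 360p at 10fps so TRIBE inference stays tractable."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-vf", "scale=-2:360",  # 360p height; width auto-scaled and kept even
            "-r", "10",             # drop the frame rate to 10fps
            dst,
        ],
        check=True,
    )
    return dst
```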
+
+ ---
+
+ ## Deployment Options
+
+ Because TRIBE v2 requires a massive amount of VRAM (24GB for text, up to 80GB for video), we offer three deployment options so anyone can use it, regardless of budget or technical expertise.
+
+ ### 1. Google Colab (Zero Cost, Decoupled)
+ Best for users without a cloud budget. Colab provides free T4 GPUs.
+ * How it works: We use a decoupled architecture. You run the heavy AI inference in a Colab notebook, which outputs a `preds.npy` prediction file. You then run a local script on your laptop to generate the report.
+ * Setup:
+   1. Open Google Colab and upload the script from `scripts/colab_inference.py` into a new notebook.
+   2. Run the notebook. It will output `preds.npy` and `segments.json`.
+   3. Download those files to your machine and run `python scripts/local_analyze.py --preds preds.npy`. This outputs a text report and an ASCII terminal graph showing the engagement peaks and valleys (a sketch of this step follows the list).
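
The `local_analyze.py` script itself is not included in this diff; the following is a rough sketch of what that local reporting step does, assuming `preds.npy` holds a (timepoints × networks) array of predicted fMRI responses (the shape is an assumption):

```python
import numpy as np

# Assumption: preds.npy holds a (timepoints, networks) array of
# predicted fMRI responses, one column per Yeo-7 network.
preds = np.load("preds.npy")

# Overall engagement proxy: mean predicted activity per timepoint.
engagement = preds.mean(axis=1)

# Crude ASCII graph of the peaks and valleys, one row per timepoint.
lo, hi = engagement.min(), engagement.max()
for t, value in enumerate(engagement):
    width = int(40 * (value - lo) / (hi - lo + 1e-9))
    print(f"t={t:4d} | {'#' * width}")
```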
+
+ ### 2. RunPod (Serverless, Pay-per-second)
+ Best for production agents and developers. You only pay for the seconds the model is running.
+ * How it works: We provide a RunPod handler and a custom Dockerfile that caches the 80GB model inside the image.
+ * Setup:
+   1. Build the Docker image using `server/Dockerfile.runpod`: `docker build -f server/Dockerfile.runpod -t tribe-runpod .`
+   2. Push the image to Docker Hub or GHCR.
+   3. Create a new RunPod Serverless Endpoint using your image URL.
+   4. Point your AI agent to your RunPod endpoint URL (see the test call sketched below).
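
Once the endpoint is live, you can smoke-test it by hand. Here is a minimal sketch using RunPod's standard `runsync` serverless API; the endpoint ID is a placeholder, and the `"video_url"` input field is hypothetical, since the handler's exact schema is not shown in this diff:

```python
import os
import requests

ENDPOINT_ID = "your-endpoint-id"        # placeholder
API_KEY = os.environ["RUNPOD_API_KEY"]  # your RunPod API key

# runsync blocks until the job finishes and returns the result inline.
resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"video_url": "https://example.com/hook.mp4"}},  # hypothetical schema
    timeout=600,
)
resp.raise_for_status()
print(resp.json())
```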
+
+ ### 3. AWS EC2 Persistent (Enterprise, BYO-Compute)
+ Best for heavy, continuous usage.
+ * How it works: Automatically provisions an AWS g5.12xlarge instance (4x A10G GPUs) and runs a FastAPI server.
+ * Setup:
+   1. Ensure your AWS account has a vCPU quota of at least 48 for "Running On-Demand G and VT instances".
+   2. Run `bash scripts/launch_persistent.sh` to provision the instance.
+   3. Run `export HF_TOKEN="your_token"` followed by `bash scripts/deploy_to_persistent.sh` to build and launch the Docker API. (A boto3 sketch of the provisioning step follows this list.)
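
For readers who prefer the SDK, this is roughly what the provisioning step amounts to. This is a minimal boto3 sketch, not the actual contents of `launch_persistent.sh` (which is not shown in this diff); the AMI ID, key pair, and region are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

result = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder, e.g. an AWS Deep Learning AMI
    InstanceType="g5.12xlarge",       # 4x A10G GPUs, 48 vCPUs
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = result["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; remember to terminate it when you are done.")
```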
+
+ #### AWS GPU Lifecycle & Estimated Costs
+ Running the `g5.12xlarge` instance (4x A10G GPUs) provides incredible speed but costs **$7.09 per hour** on On-Demand pricing. It is crucial to manage this lifecycle.
+ 1. **Launch:** Run `bash scripts/launch_persistent.sh` (takes ~3 minutes).
+ 2. **Analyze:** Run your videos through the API.
+ 3. **Terminate:** When you are completely finished for the day, you MUST terminate the instance to stop billing (a scripted version is sketched after this list).
+    - Run `aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"` to find your instance ID.
+    - Run `aws ec2 terminate-instances --instance-ids <YOUR_INSTANCE_ID>`.
+    - *Do not just "stop" the instance: stopped instances still accrue EBS volume storage charges overnight. Terminate it.*
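
The two CLI commands above can be folded into one boto3 script. A sketch, assuming every running g5.12xlarge in the region belongs to this skill:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Find running g5.12xlarge instances (mirrors the describe-instances call above).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "instance-type", "Values": ["g5.12xlarge"]},
    ]
)["Reservations"]

ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if ids:
    ec2.terminate_instances(InstanceIds=ids)  # terminate, not stop, to end billing
    print(f"Terminated: {ids}")
else:
    print("No running g5.12xlarge instances found.")
```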
+
+ ---
+
+ ## HuggingFace Authentication (Required for all methods)
+
+ TRIBE v2 relies on `meta-llama/Llama-3.2-3B`, which is a gated model.
+ 1. Create a HuggingFace account.
+ 2. Go to the Llama 3.2 3B and TRIBE v2 model pages and agree to Meta's license terms.
+ 3. Generate a HuggingFace access token (read permissions) at huggingface.co/settings/tokens.
+ 4. Supply this token via the `HF_TOKEN` environment variable (a quick sanity check is sketched below).
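
To verify the token works before kicking off an 80GB model download, here is a small sanity check using the `huggingface_hub` library (assumed to be installed alongside the skill's dependencies):

```python
import os
from huggingface_hub import login, whoami

# Log in with the token from the HF_TOKEN environment variable,
# then confirm which account it authenticates as.
login(token=os.environ["HF_TOKEN"])
print(f"Authenticated as: {whoami()['name']}")
```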
+
+ ---
+
+ ## The Neuroscience of the Engagement Report
+
+ The AI agent will read the raw API output and translate the neuroscience into plain English for you (a toy version of that translation is sketched after the list):
+
+ * VAN (Ventral Attention Network): translated to "Is this surprising enough to stop a scroll?". High VAN means the content is novel and creates a pattern interrupt.
+ * DMN (Default Mode Network): translated to "Will people get bored and tune out?". High DMN is bad: it means the brain is wandering. The AI uses this to identify "Cut Candidates" in your video.
+ * DAN (Dorsal Attention Network): translated to "Are people actively following along?". High DAN means strong logical focus.
+ * Limbic Network: translated to "Does this make people feel something?". High Limbic means a strong emotional response.
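
A toy sketch of that translation, assuming the `preds.npy` columns follow the standard Yeo-7 ordering; the actual column order used by the skill is not shown in this diff:

```python
import numpy as np

# Standard Yeo-7 ordering (an assumption about the prediction columns).
NETWORKS = ["Visual", "Somatomotor", "DAN", "VAN", "Limbic", "Frontoparietal", "DMN"]

preds = np.load("preds.npy")  # (timepoints, 7) assumed
scores = dict(zip(NETWORKS, preds.mean(axis=0)))

# Plain-English verdicts mirroring the bullets above.
if scores["VAN"] > scores["DMN"]:
    print("Scroll-stopper: novelty outweighs mind-wandering.")
else:
    print("Cut candidate: viewers' minds are likely to wander.")
print(f"Emotional pull (Limbic): {scores['Limbic']:.3f}")
```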
+
+ Check out the [Results Showcase](results_showcase.md) for actual examples of Neuro-Marketing reports generated by this skill.
+
+ ## Install
+
+ ### Video Tutorial
+ Watch this quick video to see how it's done:
+
+ https://github.com/user-attachments/assets/ee98a1b5-ebc4-452f-bbfb-c434f2935067
+
+ ### Step 1: Download the skill from GitHub
+ 1. Copy the URL of this specific skill folder from your browser's address bar.
+ 2. Go to [download-directory.github.io](https://download-directory.github.io/).
+ 3. Paste the URL and press **Enter** to download.
+
+ ### Step 2: Install the Skill in Claude
+ 1. Open your **Claude desktop app**.
+ 2. In the left sidebar, open the **Customize** section.
+ 3. Click the **Skills** tab, then click the **+** (plus) icon to create a new skill.
+ 4. Choose **Upload a skill**, then drag and drop the `.zip` file (or extract it and drop the folder; both work).
+
+ > **Note:** For some skills (like `position-me`), the `SKILL.md` file might be located inside a subfolder. Always make sure you are uploading the specific folder that contains the `SKILL.md` file!