opencode-skills-antigravity 1.0.39 → 1.0.41
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bundled-skills/.antigravity-install-manifest.json +10 -1
- package/bundled-skills/docs/integrations/jetski-cortex.md +3 -3
- package/bundled-skills/docs/integrations/jetski-gemini-loader/README.md +1 -1
- package/bundled-skills/docs/maintainers/repo-growth-seo.md +3 -3
- package/bundled-skills/docs/maintainers/security-findings-triage-2026-03-29-refresh.csv +34 -0
- package/bundled-skills/docs/maintainers/security-findings-triage-2026-03-29-refresh.md +2 -0
- package/bundled-skills/docs/maintainers/skills-update-guide.md +1 -1
- package/bundled-skills/docs/sources/sources.md +2 -2
- package/bundled-skills/docs/users/bundles.md +1 -1
- package/bundled-skills/docs/users/claude-code-skills.md +1 -1
- package/bundled-skills/docs/users/gemini-cli-skills.md +1 -1
- package/bundled-skills/docs/users/getting-started.md +1 -1
- package/bundled-skills/docs/users/kiro-integration.md +1 -1
- package/bundled-skills/docs/users/usage.md +4 -4
- package/bundled-skills/docs/users/visual-guide.md +4 -4
- package/bundled-skills/hugging-face-cli/SKILL.md +192 -195
- package/bundled-skills/hugging-face-community-evals/SKILL.md +213 -0
- package/bundled-skills/hugging-face-community-evals/examples/.env.example +3 -0
- package/bundled-skills/hugging-face-community-evals/examples/USAGE_EXAMPLES.md +101 -0
- package/bundled-skills/hugging-face-community-evals/scripts/inspect_eval_uv.py +104 -0
- package/bundled-skills/hugging-face-community-evals/scripts/inspect_vllm_uv.py +306 -0
- package/bundled-skills/hugging-face-community-evals/scripts/lighteval_vllm_uv.py +297 -0
- package/bundled-skills/hugging-face-dataset-viewer/SKILL.md +120 -120
- package/bundled-skills/hugging-face-gradio/SKILL.md +304 -0
- package/bundled-skills/hugging-face-gradio/examples.md +613 -0
- package/bundled-skills/hugging-face-jobs/SKILL.md +25 -18
- package/bundled-skills/hugging-face-jobs/index.html +216 -0
- package/bundled-skills/hugging-face-jobs/references/hardware_guide.md +336 -0
- package/bundled-skills/hugging-face-jobs/references/hub_saving.md +352 -0
- package/bundled-skills/hugging-face-jobs/references/token_usage.md +570 -0
- package/bundled-skills/hugging-face-jobs/references/troubleshooting.md +475 -0
- package/bundled-skills/hugging-face-jobs/scripts/cot-self-instruct.py +718 -0
- package/bundled-skills/hugging-face-jobs/scripts/finepdfs-stats.py +546 -0
- package/bundled-skills/hugging-face-jobs/scripts/generate-responses.py +587 -0
- package/bundled-skills/hugging-face-model-trainer/SKILL.md +11 -12
- package/bundled-skills/hugging-face-model-trainer/references/gguf_conversion.md +296 -0
- package/bundled-skills/hugging-face-model-trainer/references/hardware_guide.md +283 -0
- package/bundled-skills/hugging-face-model-trainer/references/hub_saving.md +364 -0
- package/bundled-skills/hugging-face-model-trainer/references/local_training_macos.md +231 -0
- package/bundled-skills/hugging-face-model-trainer/references/reliability_principles.md +371 -0
- package/bundled-skills/hugging-face-model-trainer/references/trackio_guide.md +189 -0
- package/bundled-skills/hugging-face-model-trainer/references/training_methods.md +150 -0
- package/bundled-skills/hugging-face-model-trainer/references/training_patterns.md +203 -0
- package/bundled-skills/hugging-face-model-trainer/references/troubleshooting.md +282 -0
- package/bundled-skills/hugging-face-model-trainer/references/unsloth.md +313 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/convert_to_gguf.py +424 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/dataset_inspector.py +417 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/estimate_cost.py +150 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_dpo_example.py +106 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_grpo_example.py +89 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/train_sft_example.py +122 -0
- package/bundled-skills/hugging-face-model-trainer/scripts/unsloth_sft_example.py +512 -0
- package/bundled-skills/hugging-face-paper-publisher/SKILL.md +11 -4
- package/bundled-skills/hugging-face-paper-publisher/examples/example_usage.md +326 -0
- package/bundled-skills/hugging-face-paper-publisher/references/quick_reference.md +216 -0
- package/bundled-skills/hugging-face-paper-publisher/scripts/paper_manager.py +606 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/arxiv.md +299 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/ml-report.md +358 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/modern.md +319 -0
- package/bundled-skills/hugging-face-paper-publisher/templates/standard.md +201 -0
- package/bundled-skills/hugging-face-papers/SKILL.md +241 -0
- package/bundled-skills/hugging-face-trackio/.claude-plugin/plugin.json +19 -0
- package/bundled-skills/hugging-face-trackio/SKILL.md +117 -0
- package/bundled-skills/hugging-face-trackio/references/alerts.md +196 -0
- package/bundled-skills/hugging-face-trackio/references/logging_metrics.md +206 -0
- package/bundled-skills/hugging-face-trackio/references/retrieving_metrics.md +251 -0
- package/bundled-skills/hugging-face-vision-trainer/SKILL.md +595 -0
- package/bundled-skills/hugging-face-vision-trainer/references/finetune_sam2_trainer.md +254 -0
- package/bundled-skills/hugging-face-vision-trainer/references/hub_saving.md +618 -0
- package/bundled-skills/hugging-face-vision-trainer/references/image_classification_training_notebook.md +279 -0
- package/bundled-skills/hugging-face-vision-trainer/references/object_detection_training_notebook.md +700 -0
- package/bundled-skills/hugging-face-vision-trainer/references/reliability_principles.md +310 -0
- package/bundled-skills/hugging-face-vision-trainer/references/timm_trainer.md +91 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/dataset_inspector.py +814 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/estimate_cost.py +217 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/image_classification_training.py +383 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/object_detection_training.py +710 -0
- package/bundled-skills/hugging-face-vision-trainer/scripts/sam_segmentation_training.py +382 -0
- package/bundled-skills/jq/SKILL.md +273 -0
- package/bundled-skills/odoo-edi-connector/SKILL.md +32 -10
- package/bundled-skills/odoo-woocommerce-bridge/SKILL.md +9 -5
- package/bundled-skills/tmux/SKILL.md +370 -0
- package/bundled-skills/transformers-js/SKILL.md +639 -0
- package/bundled-skills/transformers-js/references/CACHE.md +339 -0
- package/bundled-skills/transformers-js/references/CONFIGURATION.md +390 -0
- package/bundled-skills/transformers-js/references/EXAMPLES.md +605 -0
- package/bundled-skills/transformers-js/references/MODEL_ARCHITECTURES.md +167 -0
- package/bundled-skills/transformers-js/references/PIPELINE_OPTIONS.md +545 -0
- package/bundled-skills/transformers-js/references/TEXT_GENERATION.md +315 -0
- package/bundled-skills/viboscope/SKILL.md +64 -0
- package/package.json +1 -1
package/bundled-skills/transformers-js/references/TEXT_GENERATION.md (new file)

@@ -0,0 +1,315 @@

# Text Generation Guide

Guide to generating text with Transformers.js, including streaming and chat format.

## Table of Contents

1. [Basic Generation](#basic-generation)
2. [Streaming](#streaming)
3. [Chat Format](#chat-format)
4. [Generation Parameters](#generation-parameters)
5. [Model Selection](#model-selection)
6. [Best Practices](#best-practices)

## Basic Generation

```javascript
import { pipeline } from '@huggingface/transformers';

const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-0.5B-Instruct',
  { dtype: 'q4' }
);

const result = await generator('Once upon a time', {
  max_new_tokens: 100,
  temperature: 0.7,
});

console.log(result[0].generated_text);

// Clean up when done
await generator.dispose();
```

## Streaming

Stream tokens as they're generated for better UX. Once you understand streaming, you can combine it with other features like chat format.

### Node.js

```javascript
import { pipeline, TextStreamer } from '@huggingface/transformers';

const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-0.5B-Instruct',
  { dtype: 'q4' }
);

const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
  callback_function: (token) => {
    process.stdout.write(token);
  },
});

await generator('Tell me a story', {
  max_new_tokens: 200,
  temperature: 0.7,
  streamer,
});
```

### Browser

```html
<!DOCTYPE html>
<html>
<body>
  <textarea id="prompt" placeholder="Enter prompt..."></textarea>
  <button onclick="generate()">Generate</button>
  <div id="output"></div>

  <script type="module">
    import { pipeline, TextStreamer } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.8.1';

    const generator = await pipeline(
      'text-generation',
      'onnx-community/Qwen2.5-0.5B-Instruct',
      { dtype: 'q4' }
    );

    window.generate = async function () {
      const prompt = document.getElementById('prompt').value;
      const outputDiv = document.getElementById('output');
      outputDiv.textContent = '';

      const streamer = new TextStreamer(generator.tokenizer, {
        skip_prompt: true,
        skip_special_tokens: true,
        callback_function: (token) => {
          outputDiv.textContent += token;
        },
      });

      await generator(prompt, {
        max_new_tokens: 200,
        temperature: 0.7,
        streamer,
      });
    };
  </script>
</body>
</html>
```

### React

```jsx
import { useState, useRef, useEffect } from 'react';
import { pipeline, TextStreamer } from '@huggingface/transformers';

function StreamingGenerator() {
  const generatorRef = useRef(null);
  const [output, setOutput] = useState('');
  const [loading, setLoading] = useState(false);

  const handleGenerate = async (prompt) => {
    if (!prompt) return;

    setLoading(true);
    setOutput('');

    // Load model on first generate
    if (!generatorRef.current) {
      generatorRef.current = await pipeline(
        'text-generation',
        'onnx-community/Qwen2.5-0.5B-Instruct',
        { dtype: 'q4' }
      );
    }

    const streamer = new TextStreamer(generatorRef.current.tokenizer, {
      skip_prompt: true,
      skip_special_tokens: true,
      callback_function: (token) => {
        setOutput((prev) => prev + token);
      },
    });

    await generatorRef.current(prompt, {
      max_new_tokens: 200,
      temperature: 0.7,
      streamer,
    });

    setLoading(false);
  };

  // Cleanup on unmount
  useEffect(() => {
    return () => {
      if (generatorRef.current) {
        generatorRef.current.dispose();
      }
    };
  }, []);

  return (
    <div>
      <button onClick={() => handleGenerate('Tell me a story')} disabled={loading}>
        {loading ? 'Generating...' : 'Generate'}
      </button>
      <div>{output}</div>
    </div>
  );
}
```

## Chat Format

Use structured messages for conversations. This works with both basic generation and streaming (just add the `streamer` parameter).

### Single Turn

```javascript
import { pipeline } from '@huggingface/transformers';

const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-0.5B-Instruct',
  { dtype: 'q4' }
);

const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'How do I create an async function?' }
];

const result = await generator(messages, {
  max_new_tokens: 256,
  temperature: 0.7,
});

console.log(result[0].generated_text);
```

### Multi-turn Conversation

```javascript
const conversation = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is JavaScript?' },
  { role: 'assistant', content: 'JavaScript is a programming language...' },
  { role: 'user', content: 'Can you show an example?' }
];

const result = await generator(conversation, {
  max_new_tokens: 200,
  temperature: 0.7,
});

// To add streaming, just pass a streamer:
// streamer: new TextStreamer(generator.tokenizer, {...})
```
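To keep a multi-turn conversation going, each reply is appended to the message array before the next call. A minimal, library-independent sketch of that bookkeeping (the `addTurn` helper is illustrative, not a Transformers.js API; in real code the resulting array is what you pass to `generator(messages, ...)`):

```javascript
// Hypothetical helper: grow the chat history one turn at a time.
// Plain array bookkeeping, independent of Transformers.js.
function addTurn(history, role, content) {
  return [...history, { role, content }];
}

let history = [{ role: 'system', content: 'You are a helpful assistant.' }];
history = addTurn(history, 'user', 'What is JavaScript?');
history = addTurn(history, 'assistant', 'JavaScript is a programming language...');
history = addTurn(history, 'user', 'Can you show an example?');

console.log(history.length);  // 4 messages so far
console.log(history[3].role); // 'user'
```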
## Generation Parameters

### Common Parameters

```javascript
await generator(prompt, {
  // Token limits
  max_new_tokens: 512,     // Maximum tokens to generate
  min_new_tokens: 0,       // Minimum tokens to generate

  // Sampling
  temperature: 0.7,        // Randomness (0.0-2.0)
  top_k: 50,               // Consider only the top K tokens
  top_p: 0.95,             // Nucleus sampling
  do_sample: true,         // Random sampling (false = always pick the most likely token)

  // Repetition control
  repetition_penalty: 1.0, // Penalty for repeating (1.0 = no penalty)
  no_repeat_ngram_size: 0, // Prevent repeating n-grams

  // Streaming
  streamer: streamer,      // TextStreamer instance
});
```

### Parameter Effects

**Temperature:**

- Low (0.1-0.5): more focused and deterministic
- Medium (0.6-0.9): balanced creativity and coherence
- High (1.0-2.0): more creative and random

```javascript
// Focused output
await generator(prompt, { temperature: 0.3, max_new_tokens: 100 });

// Creative output
await generator(prompt, { temperature: 1.2, max_new_tokens: 100 });
```

**Sampling Methods:**

```javascript
// Greedy (deterministic)
await generator(prompt, {
  do_sample: false,
  max_new_tokens: 100
});

// Top-k sampling
await generator(prompt, {
  top_k: 50,
  temperature: 0.7,
  max_new_tokens: 100
});

// Top-p (nucleus) sampling
await generator(prompt, {
  top_p: 0.95,
  temperature: 0.7,
  max_new_tokens: 100
});
```
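What temperature and `top_k` actually do to the next-token distribution can be shown in plain JavaScript. This is an illustrative sketch of the concept (Transformers.js applies these transforms internally; these helper functions are not part of its API):

```javascript
// Temperature-scaled softmax: low temperature sharpens the distribution
// toward the highest logit, high temperature flattens it toward uniform.
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// top_k filtering: keep only the k highest-probability tokens, renormalized.
function topK(probs, k) {
  const kept = probs
    .map((p, i) => [p, i])
    .sort((a, b) => b[0] - a[0])
    .slice(0, k);
  const sum = kept.reduce((a, [p]) => a + p, 0);
  return kept.map(([p, i]) => ({ index: i, prob: p / sum }));
}

const logits = [2.0, 1.0, 0.5, -1.0]; // made-up logits for four tokens
const cold = softmaxWithTemperature(logits, 0.3);
const hot = softmaxWithTemperature(logits, 2.0);

console.log(cold[0] > hot[0]); // true: the top token dominates more when cold
console.log(topK(hot, 2).length); // 2
```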
## Model Selection

Browse available text generation models on the Hugging Face Hub:

**https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&sort=trending**

### Selection Tips

- **Small models (< 1B params)**: fast, browser-friendly, use `dtype: 'q4'`
- **Medium models (1-3B params)**: balanced quality/speed, use `dtype: 'q4'` or `'fp16'`
- **Large models (> 3B params)**: high quality, slower, best for Node.js with `dtype: 'fp16'`

Check model cards for:

- Parameter count and model size
- Supported languages
- Benchmark scores
- License restrictions

## Best Practices

1. **Model size**: Use quantized models (`q4`) in browsers, larger models (`fp16`) on servers
2. **Streaming**: Stream tokens for better UX; it shows progress and feels responsive
3. **Token limits**: Set `max_new_tokens` to prevent runaway generation
4. **Temperature**: Tune per use case (creative: 0.8-1.2, factual: 0.3-0.7)
5. **Memory**: Always call `dispose()` when done
6. **Caching**: Load the model once and reuse it for multiple requests
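The caching advice above can be sketched as a memoized loader: the first caller triggers the load, and every later caller reuses the same promise. `loadOnce` is a generic illustrative helper, not a Transformers.js API; the fake factory below stands in for a real `pipeline(...)` call so the pattern itself is runnable:

```javascript
// Create the pipeline once and reuse the same promise for every caller.
function loadOnce(factory) {
  let promise = null;
  return () => {
    if (!promise) promise = factory(); // first call triggers the load
    return promise;                    // later calls reuse the same promise
  };
}

// In real code the factory would be:
//   () => pipeline('text-generation', 'onnx-community/Qwen2.5-0.5B-Instruct', { dtype: 'q4' })
let loads = 0;
const getGenerator = loadOnce(async () => {
  loads += 1;
  return { name: 'fake-generator' }; // stand-in for the pipeline object
});

const p1 = getGenerator();
const p2 = getGenerator();
console.log(p1 === p2); // same promise: the model is only loaded once
```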
## Related Documentation

- [Pipeline Options](./PIPELINE_OPTIONS.md) - Configure pipeline loading
- [Configuration Reference](./CONFIGURATION.md) - Environment settings
- [Code Examples](./EXAMPLES.md) - More examples for different runtimes
- [Main Skill Guide](../SKILL.md) - Getting started guide
package/bundled-skills/viboscope/SKILL.md (new file)

@@ -0,0 +1,64 @@

---
name: viboscope
description: "Psychological compatibility matching — find cofounders, collaborators, and friends through validated psychometrics"
category: collaboration
risk: safe
source: community
date_added: "2026-03-29"
author: ivanschmidt
tags: [matching, psychology, compatibility, networking, collaboration]
tools: [claude, cursor, codex, gemini, windsurf]
---

# Viboscope

## Overview

Viboscope helps you find compatible people — cofounders, project partners, friends, romantic partners — through deep psychological compatibility matching. It builds a profile across 10 validated dimensions and computes mathematical compatibility with other users.

## When to Use This Skill

- Use when looking for a cofounder or project collaborator
- Use when you want to find people with a compatible work style and values
- Use when checking compatibility with a specific person via an invite link

## How It Works

### Step 1: Install

```bash
curl -fsS https://viboscope.com/api/v1/skill -o viboscope.md
```

Save the file to your platform's skills directory.

### Step 2: Build Profile

The skill guides a 5-minute onboarding that builds a psychological profile using:
- An AI assistant portrait (fastest — 2 min for a 90%+ complete profile)
- 5 validated questionnaires (Big Five, Values, Attachment, Conflict, Work Style)
- A context scan of workspace files

### Step 3: Search

Search across 7 contexts: business, romantic, friendship, professional, intellectual, hobby, general. Results include percentage scores and human-readable explanations of why you match.

## Examples

### Example 1: Find a Cofounder

Tell your AI agent: "Install Viboscope and find me a cofounder"

The agent will guide you through profiling, then search for business-compatible matches with aligned values and complementary work styles.

### Example 2: Check Compatibility

Share your invite link: `viboscope.com/match/@your_nick`

When someone opens it with their AI agent, both of you see a compatibility breakdown.

## Links

- Website: https://viboscope.com
- GitHub: https://github.com/ivankoriako/viboscope
- API: https://viboscope.com/api/v1