escribano 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +297 -0
- package/dist/0_types.js +279 -0
- package/dist/actions/classify-session.js +77 -0
- package/dist/actions/create-contexts.js +44 -0
- package/dist/actions/create-topic-blocks.js +68 -0
- package/dist/actions/extract-metadata.js +24 -0
- package/dist/actions/generate-artifact-v3.js +296 -0
- package/dist/actions/generate-artifact.js +61 -0
- package/dist/actions/generate-summary-v3.js +260 -0
- package/dist/actions/outline-index.js +204 -0
- package/dist/actions/process-recording-v2.js +494 -0
- package/dist/actions/process-recording-v3.js +412 -0
- package/dist/actions/process-session.js +183 -0
- package/dist/actions/publish-summary-v3.js +303 -0
- package/dist/actions/sync-to-outline.js +196 -0
- package/dist/adapters/audio.silero.adapter.js +69 -0
- package/dist/adapters/cap.adapter.js +94 -0
- package/dist/adapters/capture.cap.adapter.js +107 -0
- package/dist/adapters/capture.filesystem.adapter.js +124 -0
- package/dist/adapters/embedding.ollama.adapter.js +141 -0
- package/dist/adapters/intelligence.adapter.js +202 -0
- package/dist/adapters/intelligence.mlx.adapter.js +395 -0
- package/dist/adapters/intelligence.ollama.adapter.js +741 -0
- package/dist/adapters/publishing.outline.adapter.js +75 -0
- package/dist/adapters/storage.adapter.js +81 -0
- package/dist/adapters/storage.fs.adapter.js +83 -0
- package/dist/adapters/transcription.whisper.adapter.js +206 -0
- package/dist/adapters/video.ffmpeg.adapter.js +405 -0
- package/dist/adapters/whisper.adapter.js +168 -0
- package/dist/batch-context.js +329 -0
- package/dist/db/helpers.js +50 -0
- package/dist/db/index.js +95 -0
- package/dist/db/migrate.js +80 -0
- package/dist/db/repositories/artifact.sqlite.js +77 -0
- package/dist/db/repositories/cluster.sqlite.js +92 -0
- package/dist/db/repositories/context.sqlite.js +75 -0
- package/dist/db/repositories/index.js +10 -0
- package/dist/db/repositories/observation.sqlite.js +70 -0
- package/dist/db/repositories/recording.sqlite.js +56 -0
- package/dist/db/repositories/subject.sqlite.js +64 -0
- package/dist/db/repositories/topic-block.sqlite.js +45 -0
- package/dist/db/types.js +4 -0
- package/dist/domain/classification.js +60 -0
- package/dist/domain/context.js +97 -0
- package/dist/domain/index.js +2 -0
- package/dist/domain/observation.js +17 -0
- package/dist/domain/recording.js +41 -0
- package/dist/domain/segment.js +93 -0
- package/dist/domain/session.js +93 -0
- package/dist/domain/time-range.js +38 -0
- package/dist/domain/transcript.js +79 -0
- package/dist/index.js +173 -0
- package/dist/pipeline/context.js +162 -0
- package/dist/pipeline/events.js +2 -0
- package/dist/prerequisites.js +226 -0
- package/dist/scripts/rebuild-index.js +53 -0
- package/dist/scripts/seed-fixtures.js +290 -0
- package/dist/services/activity-segmentation.js +333 -0
- package/dist/services/activity-segmentation.test.js +191 -0
- package/dist/services/app-normalization.js +212 -0
- package/dist/services/cluster-merge.js +69 -0
- package/dist/services/clustering.js +237 -0
- package/dist/services/debug.js +58 -0
- package/dist/services/frame-sampling.js +318 -0
- package/dist/services/signal-extraction.js +106 -0
- package/dist/services/subject-grouping.js +342 -0
- package/dist/services/temporal-alignment.js +99 -0
- package/dist/services/vlm-enrichment.js +84 -0
- package/dist/services/vlm-service.js +130 -0
- package/dist/stats/index.js +3 -0
- package/dist/stats/observer.js +65 -0
- package/dist/stats/repository.js +36 -0
- package/dist/stats/resource-tracker.js +86 -0
- package/dist/stats/types.js +1 -0
- package/dist/test-classification-prompts.js +181 -0
- package/dist/tests/cap.adapter.test.js +75 -0
- package/dist/tests/capture.cap.adapter.test.js +69 -0
- package/dist/tests/classify-session.test.js +140 -0
- package/dist/tests/db/repositories.test.js +243 -0
- package/dist/tests/domain/time-range.test.js +31 -0
- package/dist/tests/integration.test.js +84 -0
- package/dist/tests/intelligence.adapter.test.js +102 -0
- package/dist/tests/intelligence.ollama.adapter.test.js +178 -0
- package/dist/tests/process-v2.test.js +90 -0
- package/dist/tests/services/clustering.test.js +112 -0
- package/dist/tests/services/frame-sampling.test.js +152 -0
- package/dist/tests/utils/ocr.test.js +76 -0
- package/dist/tests/utils/parallel.test.js +57 -0
- package/dist/tests/visual-observer.test.js +175 -0
- package/dist/utils/id-normalization.js +15 -0
- package/dist/utils/index.js +9 -0
- package/dist/utils/model-detector.js +154 -0
- package/dist/utils/ocr.js +80 -0
- package/dist/utils/parallel.js +32 -0
- package/migrations/001_initial.sql +109 -0
- package/migrations/002_clusters.sql +41 -0
- package/migrations/003_observations_vlm_fields.sql +14 -0
- package/migrations/004_observations_unique.sql +18 -0
- package/migrations/005_processing_stats.sql +29 -0
- package/migrations/006_vlm_raw_response.sql +6 -0
- package/migrations/007_subjects.sql +23 -0
- package/migrations/008_artifacts_recording.sql +6 -0
- package/migrations/009_artifact_subjects.sql +10 -0
- package/package.json +82 -0
- package/prompts/action-items.md +55 -0
- package/prompts/blog-draft.md +54 -0
- package/prompts/blog-research.md +87 -0
- package/prompts/card.md +54 -0
- package/prompts/classify-segment.md +38 -0
- package/prompts/classify.md +37 -0
- package/prompts/code-snippets.md +163 -0
- package/prompts/extract-metadata.md +149 -0
- package/prompts/notes.md +83 -0
- package/prompts/runbook.md +123 -0
- package/prompts/standup.md +50 -0
- package/prompts/step-by-step.md +125 -0
- package/prompts/subject-grouping.md +31 -0
- package/prompts/summary-v3.md +89 -0
- package/prompts/summary.md +77 -0
- package/prompts/topic-classifier.md +24 -0
- package/prompts/topic-extract.md +13 -0
- package/prompts/vlm-batch.md +21 -0
- package/prompts/vlm-single.md +19 -0
package/prompts/notes.md
ADDED
@@ -0,0 +1,83 @@
# Session Notes

You are a technical researcher/student taking detailed, research-backed notes from a session.

## Context
Metadata: {{METADATA}}
Visual Log: {{VISUAL_LOG}}
Detected Language: {{LANGUAGE}}

## Visual Integration Rule
Include screenshots to illustrate complex concepts or UI layouts using `[SCREENSHOT: timestamp]`.

## Language Rule
Use English for all headings, structural elements, and organizational labels. The technical content, step-by-step explanations, research findings, and core concepts must be written in the original language ({{LANGUAGE}}).

## Structure

### 1. Session Overview (Summary)
Write a concise 3-5 sentence summary synthesizing the key themes, objectives, and outcomes of this session. This is your "bottom of the page" Cornell-style summary—capture the essence in your own words.

### 2. Key Questions & Cues
List 5-8 questions or keywords that capture the main themes and can serve as recall triggers when reviewing these notes later. Frame them as questions where possible (e.g., "What is...", "Why does...", "How to...").

Example format:
- **What**: [core concept name]
- **Why**: [rationale/purpose]
- **How**: [implementation or process]
- **Key Terms**: [important technical terms]

### 3. Main Concepts & Atomic Ideas
For each significant concept discussed, create a focused section with:
- **Concept Name** (English heading)
- **Definition**: What it is, in original language ({{LANGUAGE}})
- **Why It Matters**: The purpose, problem solved, or relevance
- **Key Details**: Specific technical aspects, constraints, or considerations
- **Connections**: How this relates to other concepts in this session

Organize these as distinct "atomic notes"—one clear idea per section.

### 4. Technical Details & Examples
Capture specific technical information with context:

**Commands & Code**
- Code snippets or commands (in original language {{LANGUAGE}})
- Purpose/what it does
- Parameters and their meanings
- Example usage scenarios

**Problems & Solutions**
- Challenge or issue described
- Troubleshooting approach taken
- Resolution and why it worked
- Alternative approaches mentioned

**Architectural Decisions**
- Design choices made
- Trade-offs considered
- Rationale for final decision

### 5. References & Resources
List all references mentioned with context:
- Documentation links (with brief description of relevance)
- Tools, libraries, or frameworks (with version info if mentioned)
- Related reading or follow-up research
- Dependencies or prerequisites

### 6. Action Items & Next Steps
Capture concrete actions identified during the session:
- Tasks to complete
- Experiments to try
- Topics to research further
- Decisions requiring follow-up

## Guidelines for Effective Notes
- **Paraphrase, Don't Transcribe**: Rewrite ideas in your own words rather than copying verbatim.
- **Be Specific**: Include actual code, command output, or technical details rather than vague descriptions.
- **Capture the "Why"**: Always explain why something matters, not just what it is.
- **Use Examples**: Include concrete examples discussed or referenced.
- **Note Uncertainties**: Mark areas that were unclear or require further investigation.
- **Link Ideas**: When concepts relate, explicitly state the connection.

## Transcript
{{TRANSCRIPT_ALL}}
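The `{{…}}` placeholders in this template imply simple string substitution at render time. As a sketch only (the function name and behavior are assumptions, not escribano's actual API), a renderer might look like:

```javascript
// Fill {{NAME}} placeholders in a prompt template.
// Unknown placeholders are left intact so missing data stays visible.
function renderPrompt(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    Object.prototype.hasOwnProperty.call(vars, key) ? String(vars[key]) : match
  );
}

const rendered = renderPrompt('Detected Language: {{LANGUAGE}}\n{{UNKNOWN}}', {
  LANGUAGE: 'es',
});
// → "Detected Language: es\n{{UNKNOWN}}"
```

Leaving unresolved placeholders in place (rather than substituting an empty string) makes a missing variable obvious in the generated prompt.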
package/prompts/runbook.md
ADDED
@@ -0,0 +1,123 @@
# Debugging Runbook
You are a senior engineer documenting a troubleshooting session.

## Context
Metadata: {{METADATA}}
Visual Log: {{VISUAL_LOG}}
Detected Language: {{LANGUAGE}}

## Instructions

### Visual Integration Rule
You MUST illustrate the runbook by requesting screenshots at critical moments (e.g., when an error message appears, when a fix is verified). Use the tag `[SCREENSHOT: timestamp]` where timestamp is the exact seconds.

Example: "The console showed a 404 error [SCREENSHOT: 45.5]."

### Language Rule
Use English for all headings, structural elements, and section labels. All technical details, error messages, specific troubleshooting steps, resolution explanations, and code examples must remain in the original language ({{LANGUAGE}}).

### Blameless Documentation Principle
Focus on systems, processes, and contributing factors—not on individuals or teams. Assume everyone involved had good intentions and acted with the information available at the time. Document what happened, why it happened systemically, and how to prevent it—not who is to blame.

## Structure

### 1. Summary
**Provide a concise, high-level overview of the troubleshooting session.**
- What was broken or failing?
- What was the primary symptom observed?
- What was the final outcome?

### 2. Impact Assessment
**Document the effect of the issue.**
- What was affected? (e.g., users, services, features, data)
- How severe was the impact? (e.g., critical degradation, partial outage, localized issue)
- Any quantifiable metrics? (e.g., error rate, latency, affected users)

### 3. Detection
**How was the issue discovered?**
- What monitoring, alerting, or user report identified the problem?
- When was it first noticed?
- What triggered the investigation?

### 4. Timeline
**Chronological account of key events during troubleshooting.**
- Use timestamps where available from the transcript
- Include major actions taken and decisions made
- Note any shifts in investigation direction or hypothesis
- Format: `[Time/Sequence] — Actor/Context — Action/Observation`

### 5. Problem Description
**Detailed description of what was broken or failing.**
- Expected behavior vs. actual behavior
- Specific error messages from {{TECHNICAL_TERMS}}
- Symptoms observed (e.g., latency, errors, incorrect results)
- Scope of the issue (how widespread?)

### 6. Investigation Steps
**Document the path taken to identify the root cause.**
- What hypotheses were formed and tested?
- What diagnostic tools or approaches were used?
- Which paths were explored and ruled out?
- How did the investigation narrow down to the cause?

### 7. Root Cause(s)
**Identify the underlying issue(s) that caused the problem.**
- Primary root cause (most direct cause)
- Contributing factors (if applicable—e.g., configuration issues, system interactions, recent changes)
- Use "5 Whys" approach if helpful: trace back from symptom to deeper systemic cause

### 8. Trigger (if applicable)
**If the issue was triggered by a specific event, identify it.**
- What latent bug was activated?
- What change, event, or condition triggered the failure?
- Distinguish between the trigger (what activated it) and the root cause (the underlying flaw)

### 9. Resolution
**How the issue was fixed or the solution applied.**
- Specific steps taken to resolve the issue
- Immediate mitigation vs. long-term fix
- Any configuration changes, code changes, or workarounds

### 10. Verification
**How to verify the fix is working.**
- What tests or checks confirm the issue is resolved?
- What metrics or behaviors should return to normal?
- How to ensure no regressions?

### 11. Lessons Learned
**Reflect on what the session revealed about the system and process.**

**What Went Well:**
- What worked effectively during troubleshooting?
- What tools, processes, or approaches helped resolve the issue quickly?
- What should be replicated in future sessions?

**What Went Wrong:**
- What could have been done better or faster?
- What information was missing or delayed?
- What made investigation difficult?

**Where We Got Lucky (Near Misses):**
- What prevented this from being worse?
- What fortunate circumstances helped resolution?

### 12. Action Items
**Concrete follow-up items to prevent recurrence or improve future troubleshooting.**

| Action Item | Type | Owner | Status |
|-------------|------|-------|--------|
| [Specific action] | [Prevent/Mitigate/Improve] | [Responsible person/team] | [TODO/DONE/In Progress] |

**Types:**
- **Prevent**: Changes to eliminate this root cause
- **Mitigate**: Measures to reduce impact if it recurs
- **Improve**: Process/tooling improvements for faster troubleshooting

### 13. Supporting Evidence
**Links or references to additional context.**
- Logs, metrics, screenshots, or monitoring dashboards referenced
- Documentation, playbooks, or runbooks consulted
- Related bugs, issues, or pull requests

## Transcript
{{TRANSCRIPT_ALL}}
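The `[SCREENSHOT: timestamp]` tags requested above presumably get resolved to video frames in a later pass. A minimal sketch of extracting those timestamps from generated text, assuming the tag format shown in the example (this is not necessarily the package's implementation):

```javascript
// Find [SCREENSHOT: 45.5]-style tags and return their timestamps in seconds.
function extractScreenshotTimestamps(text) {
  const tags = text.matchAll(/\[SCREENSHOT:\s*([0-9]+(?:\.[0-9]+)?)\]/g);
  return Array.from(tags, (m) => Number(m[1]));
}

extractScreenshotTimestamps('The console showed a 404 error [SCREENSHOT: 45.5].');
// → [45.5]
```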
package/prompts/standup.md
ADDED
@@ -0,0 +1,50 @@
# Standup Format - Bullet-Point Status Update

You are generating a standup-style status update from a work session. Focus on what was accomplished and what's next.

## Session Metadata
- **Duration:** {{SESSION_DURATION}}
- **Date:** {{SESSION_DATE}}

## Work Done

{{WORK_SUBJECTS}}

---

## Instructions

Generate a concise standup update with three sections:

1. **What I did** - 3-5 bullet points of main activities
2. **Key outcomes** - 2-3 concrete results or progress
3. **Next steps** - 1-3 items for next session

**Format example:**

```markdown
## Standup - Feb 25, 2026

**What I did:**
- Optimized Escribano scene detection pipeline
- Fixed LLM truncation and database constraint errors
- Benchmarked MLX vs Ollama VLM models
- Reviewed competitor architecture (Screenpipe)

**Key outcomes:**
- Scene detection reduced from 6119s to 166s (20.6x speedup)
- VLM batch inference working with new skip-frame strategy
- Identified qwen3_next as candidate for inference improvements

**Next:**
- Merge perf/scene-detection-skip-keyframes branch
- Test qwen3_next model for inference improvements
- Add unit tests for mlx_bridge.py
```

**Rules:**
- Maximum 10-12 lines total
- Be specific, not generic
- Focus on accomplishments, not activities
- Skip personal content entirely
- Use present tense
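The three required sections and the line cap are mechanical enough to verify programmatically. A hypothetical validator for generated standup output (the function, section markers, and threshold are illustrative, not part of the package):

```javascript
// Check a generated standup against this prompt's mechanical rules:
// all three sections present, and the body within ~12 non-empty lines.
function validateStandup(markdown, maxLines = 12) {
  const problems = [];
  for (const section of ['**What I did:**', '**Key outcomes:**', '**Next:**']) {
    if (!markdown.includes(section)) problems.push(`missing section ${section}`);
  }
  const lineCount = markdown.split('\n').filter((l) => l.trim() !== '').length;
  if (lineCount > maxLines) problems.push(`too long: ${lineCount} lines`);
  return problems; // empty array means the draft passes
}
```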
package/prompts/step-by-step.md
ADDED
@@ -0,0 +1,125 @@
# Step-by-Step Guide
You are a technical writer creating a **how-to guide** (goal-oriented procedural documentation) from a demonstration session. This guide helps users who already know what they want to achieve by providing clear, actionable steps.

## Context
Metadata: {{METADATA}}
Visual Log: {{VISUAL_LOG}}
Detected Language: {{LANGUAGE}}

## Visual Integration Rule
You MUST illustrate each major step by requesting a screenshot. Use the tag `[SCREENSHOT: timestamp]` where timestamp is the seconds from the Metadata or Visual Log.

Example:
1. Open the project configuration [SCREENSHOT: 12.0].
2. Update the API endpoint in `.env`.

## Language Rule
Use English for headings, structural elements, and section labels. The actual instructions, technical explanations, command descriptions, and all procedural content must remain in the original language ({{LANGUAGE}}).

## Structure Requirements

### 1. Problem Statement (What This Guide Solves)
Begin with a clear statement of the problem or task this guide addresses. Answer: "What will the reader accomplish?"

### 2. Prerequisites
List requirements in a bulleted list. Include:
- Software, tools, or versions needed
- Access or permissions required
- Prior knowledge or skills assumed
- Files or resources to have ready

### 3. Step-by-Step Instructions
Follow these strict formatting rules:

**Introductory Sentence**: Provide context that isn't in the heading. Don't repeat the heading.

**Step Format**:
- Each step must start with an **imperative verb**
- Use **complete sentences**
- Maintain **parallel structure** (consistent verb form)
- **State the goal before the action** when it clarifies purpose
- **State the location/context before the action** (e.g., "In the terminal, run...")
- **State the action first, then the result** or justification

**Multi-Action Steps**: Combine small related actions using angle brackets: `Click **File > New > Document**`

**Sub-steps**:
- Use lowercase letters for sub-steps
- Use lowercase Roman numerals for sub-sub-steps
- End parent step with colon or period

**Optional Steps**: Prefix with "Optional:" (not "(Optional)")

**Single-Step Procedures**: Format as bullet list, not numbered

**Command Steps**: Follow this order:
1. Describe what the command does (imperative)
2. Show the command in code block
3. Explain placeholders (e.g., "Replace `NAME` with...")
4. Explain the command's function if necessary
5. Show expected output
6. Explain the result

**Example**:
```
1. Plan the Terraform deployment:

   terraform plan -out=NAME

   Replace `NAME` with the name of your Terraform plan.

   The `terraform plan` command creates an execution plan showing what resources will be added, changed, or destroyed.

   The output is similar to the following:

   Plan: 26 to add, 0 to change, 0 to destroy.

   This output shows what resources to add, change, or destroy.
```

### 4. Expected Result
Describe what success looks like after completing all steps. Include:
- What the reader should see or have
- How to verify the result
- What the reader can do next

### 5. Troubleshooting
Address common issues mentioned in the transcript. For each issue:
- State the problem clearly
- Provide the solution
- Explain why it occurred (briefly)

## Writing Principles (Anti-patterns to Avoid)

❌ **Don't** use directional language ("above", "below", "right-hand side")
❌ **Don't** say "please"
❌ **Don't** say "run the following command" (focus on what it does)
❌ **Don't** include keyboard shortcuts (just say what to do)
❌ **Don't** give alternate ways to complete a task (pick the best one)
❌ **Don't** over-explain or include unnecessary background (this is a how-to guide, not a tutorial or explanation)
❌ **Don't** repeat procedure headings in introductory sentences
❌ **Don't** make steps too long—split if needed

✅ **Do** focus on concrete, actionable steps
✅ **Do** provide visible results early and often
✅ **Do** maintain flow and rhythm between steps
✅ **Do** include exact expected output when helpful
✅ **Do** explain placeholders clearly
✅ **Do** ensure the guide works reliably every time

## Quality Checklist
- [ ] Each step starts with an imperative verb
- [ ] All steps use complete sentences
- [ ] Parallel structure is maintained
- [ ] Context/location appears before action
- [ ] Optional steps are marked "Optional:"
- [ ] No directional language used
- [ ] No "please" included
- [ ] Commands are explained, not introduced with "run"
- [ ] Expected output is shown for commands
- [ ] Problem statement is clear
- [ ] Prerequisites are complete
- [ ] Troubleshooting addresses common issues

## Transcript
{{TRANSCRIPT_ALL}}
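A few of the anti-patterns above ("please", "(Optional)", "run the following command") can be caught with a simple lint pass over generated guides. A hypothetical sketch, not part of the package:

```javascript
// Lint a how-to guide draft against a few mechanical rules from this prompt.
function lintGuide(markdown) {
  const findings = [];
  markdown.split('\n').forEach((line, i) => {
    if (/\bplease\b/i.test(line)) findings.push(`line ${i + 1}: contains "please"`);
    if (/\(Optional\)/.test(line)) findings.push(`line ${i + 1}: use "Optional:" prefix`);
    if (/run the following command/i.test(line))
      findings.push(`line ${i + 1}: describe what the command does instead`);
  });
  return findings; // empty array means no violations found
}
```

Rules like "each step starts with an imperative verb" need human or model review; only the purely textual rules fit a lint like this.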
package/prompts/subject-grouping.md
ADDED
@@ -0,0 +1,31 @@
You are analyzing a work session that has been divided into {{BLOCK_COUNT}} segments (TopicBlocks).

Your task is to group these segments into 1-6 coherent SUBJECTS. A subject represents a distinct thread of work (e.g., "Escribano pipeline optimization", "Personal time", "Email and admin", "Research on competitors").

GROUPING RULES:
1. Group segments that belong to the same work thread, even if they're not consecutive in time
2. Personal activities (WhatsApp, Instagram, social media, personal calls) should be grouped into a "Personal" subject
3. Email/calendar/admin is only its own group when email IS the primary activity — not just because an email app was open in the background
4. Deep work on the same project/codebase should be grouped together
5. Research sessions should be grouped separately from coding sessions unless clearly related

RULE PRIORITY (when in doubt):
- Classify by primary ACTIVITY TYPE and project context, not by which apps happened to be open
- If all segments are about the same project, one group is correct — do not invent artificial splits

SEGMENTS TO GROUP:
{{BLOCK_DESCRIPTIONS}}

For each group, output ONE line in this EXACT format:
Group 1: label: [Descriptive subject name] | blockIds: [uuid1, uuid2, uuid3]

Example output:
Group 1: label: Escribano VLM Integration | blockIds: [{{EXAMPLE_BLOCK_IDS}}]

CRITICAL REQUIREMENTS:
- Each group MUST have "label" and "blockIds"
- Block IDs are the UUIDs shown in each BLOCK above (copy them exactly)
- Include ALL {{BLOCK_COUNT}} block IDs across all groups (every block must be assigned exactly once)
- Create 1-6 groups (one group is fine if all work is the same project)
- Use clear, descriptive labels for each subject
- Output ONLY the group lines — no explanation, no preamble, no markdown
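Because the prompt pins down an exact line format, the consuming code presumably recovers groups with a matching pattern. A sketch of such a parser (illustrative; the package's real implementation may differ):

```javascript
// Parse "Group 1: label: Escribano VLM Integration | blockIds: [id1, id2]"
// lines into { label, blockIds } objects, skipping malformed lines.
function parseGroups(output) {
  const groups = [];
  for (const line of output.split('\n')) {
    const m = line.match(/^Group \d+:\s*label:\s*(.+?)\s*\|\s*blockIds:\s*\[(.*)\]/);
    if (!m) continue;
    groups.push({
      label: m[1],
      blockIds: m[2].split(',').map((id) => id.trim()).filter(Boolean),
    });
  }
  return groups;
}
```

A consumer could then check the "every block assigned exactly once" requirement by comparing the union of `blockIds` against the original block list.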
package/prompts/summary-v3.md
ADDED
@@ -0,0 +1,89 @@
You are a productivity assistant analyzing a developer's work session recording.

Generate a detailed narrative summary of this work session, organized by themes rather than strict chronology.

## Session Metadata
- **Duration:** {{SESSION_DURATION}} minutes
- **Date:** {{SESSION_DATE}}
- **Activities Identified:** {{ACTIVITY_COUNT}}

## Activity Timeline

{{ACTIVITY_TIMELINE}}

## Apps & Pages Used

### Applications
{{APPS_LIST}}

### Websites Visited
{{URLS_LIST}}

## Instructions

Write a comprehensive yet readable summary that:

1. **Groups activities by theme** — combine related work (e.g., all terminal work together, all research together)
2. **Describes the session as a work log** — what was being worked on, with transitions between themes
3. **Includes specifics** from the visual descriptions (file names, app names, error messages, URLs)
4. **Incorporates audio transcript quotes** when they add context (decisions made, explanations spoken)
5. **Uses markdown headers** for major thematic sections (not every activity change)
6. **Ends with structured outcomes** — what was accomplished, what's unresolved, what's next

Write 500-1500 words depending on session complexity. Be specific, not generic.

Do NOT include a section listing raw observations — synthesize them into narrative.
Do NOT use bullet points for narrative sections — organize into flowing paragraphs.
Write in work log style using **FIRST PERSON** present continuous tense:
- "Working on..." "Debugging..." "Reviewing..."
- "Editing the config file..." "Running tests..." "Checking the logs..."
- NOT: "The developer..." "The user was..." "They were..."

## Format Example

```markdown
# Session Summary: [Date]

## Overview
[Brief 2-3 sentence overview in first person: "Spent 3 hours optimizing the VLM pipeline, achieving a 4x speedup through scene detection and model quantization improvements."]

## Timeline
* **0:00** (27m): terminal
* **27:15** (45m): debugging
* **72:00** (30m): research
...

## Apps & Pages Used

### Applications
Terminal, Google Chrome, VS Code

### Websites Visited
- github.com/owner/repo
- docs.example.com/guide

## Terminal Work: Model Benchmarking (0:00–27:00)
Running benchmark scripts in the terminal to compare VLM model performance. Processing 342 frames through the pipeline and measuring inference speed. The qwen3-vl:4b model shows promising results with 115 tok/s throughput...

## Debugging & Optimization (27:00–72:00)
Encountering parsing errors in the benchmark script. The JSON output from the VLM is being truncated on later frames. Investigating the root cause by adding debug logging and adjusting the MAX_TOKENS parameter...

## Research & Documentation (72:00–102:00)
Researching alternative VLM implementations on Google Chrome. Found an arXiv paper comparing vision-language models on standardized benchmarks. Reviewing the GitHub repository for mlx-vlm examples...

## Key Outcomes

### ✅ Accomplished
- Achieved 4x speedup in the processing pipeline
- Fixed JSON parsing errors in benchmark script
- Documented performance metrics in HTML reports

### ⏳ Unresolved
- Need to test with larger model (InternVL-14B)
- Some frame descriptions still truncated at high batch sizes

### ➡️ Next Steps
- Integrate 4bit model into production pipeline
- Explore continuous batching for parallel processing
- Add unit tests for the new adapter
```
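The `{{ACTIVITY_TIMELINE}}` placeholder presumably expands to bullet lines like those in the format example (`* **0:00** (27m): terminal`). A sketch of producing such lines from activity records, where the field names are assumptions rather than the package's actual schema:

```javascript
// Render activity records as the timeline bullet lines shown in the
// format example, e.g. "* **27:15** (45m): debugging".
function formatTimeline(activities) {
  return activities
    .map(({ startSeconds, durationMinutes, label }) => {
      const min = Math.floor(startSeconds / 60);
      const sec = String(Math.round(startSeconds % 60)).padStart(2, '0');
      return `* **${min}:${sec}** (${durationMinutes}m): ${label}`;
    })
    .join('\n');
}

formatTimeline([{ startSeconds: 1635, durationMinutes: 45, label: 'debugging' }]);
// → "* **27:15** (45m): debugging"
```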
package/prompts/summary.md
ADDED
@@ -0,0 +1,77 @@
# Session Summary

You are an expert scribe specializing in creating comprehensive, actionable session documentation. Your task is to transform the following transcript into a professional session summary that stakeholders can reference for decision-making and follow-up.

## Context
**Metadata**: {{METADATA}}
**Visual Log**: {{VISUAL_LOG}}
**Detected Language**: {{LANGUAGE}}

## Visual Integration Rule
If the session involves visual demonstrations or screen-sharing, include screenshots of major moments using the tag `[SCREENSHOT: timestamp]`.

## Instructions

### Language Rule
- Use English for all headings, structure, and meta-analysis
- All actual discussion content, quotes, and explanations must remain in the original language ({{LANGUAGE}})

### Structure Requirements

Create a summary with the following sections:

#### 1. Session Overview
A concise 2-3 sentence summary answering:
- What was the primary purpose of this session?
- What was the main outcome or result?
- Who participated (if identifiable)?

#### 2. Attendees & Context
- **Participants**: List identified speakers/participants (use speaker labels from transcript if names unavailable)
- **Duration**: Note session length if available in metadata
- **Type**: Briefly characterize the session (e.g., planning meeting, technical review, brainstorming, 1-on-1)

#### 3. Key Discussion Points
Organize the main topics discussed. For each topic:
- **Topic heading** (English)
- Brief bullet points of key points covered (in original language)
- Include significant questions raised and responses given
- Reference timestamp ranges where relevant (e.g., `[12:34-18:45]`)

#### 4. Decisions Made
List clear conclusions or agreements reached. For each decision:
- What was decided (concise, actionable)
- Who made or agreed to the decision
- Approximate timestamp if referenced in discussion
- **Format**: Start with a verb (e.g., "Approved", "Decided", "Agreed to")

#### 5. Action Items
Critical: List all tasks or commitments made. For each action item:
- **Action**: Specific task description (what needs to be done)
- **Owner**: Who is responsible (person or role)
- **Due Date**: When it needs to be completed (if specified; otherwise note "TBD")
- **Priority**: High/Medium/Low (infer from context if not stated)
- **Related Decision**: Link to relevant decision number if applicable
|
|
55
|
+
|
|
56
|
+
#### 6. Open Items & Outstanding Issues
|
|
57
|
+
Identify topics that were:
|
|
58
|
+
- Discussed but not resolved
|
|
59
|
+
- Deferred or tabled for later discussion
|
|
60
|
+
- Requiring additional information or research
|
|
61
|
+
- Mark as **"Parking Lot"** if explicitly deferred
|
|
62
|
+
|
|
63
|
+
#### 7. Next Steps
|
|
64
|
+
What happens after this session:
|
|
65
|
+
- **Follow-up Meeting**: Date/time if scheduled
|
|
66
|
+
- **Immediate Next Actions**: Most urgent items to address
|
|
67
|
+
- **Dependencies**: What blocks progress on open items
|
|
68
|
+
|
|
69
|
+
#### 8. Supporting References
|
|
70
|
+
- **Links/References**: Any documents, URLs, or resources mentioned
|
|
71
|
+
- **Key Metrics**: Numbers, dates, or data points highlighted
|
|
72
|
+
- **Related Sessions**: References to previous or planned future sessions (if mentioned)
|
|
73
|
+
|
|
74
|
+
---
|
|
75
|
+
|
|
76
|
+
## Transcript
|
|
77
|
+
{{TRANSCRIPT_ALL}}
|
|
@@ -0,0 +1,24 @@
# Topic Classification Prompt

You are analyzing a cluster of observations from a screen recording session.

## Input
A list of observation summaries containing:
- OCR text from screenshots
- VLM descriptions of visual content
- Audio transcripts

## Task
Generate 1-3 specific, descriptive topic labels that capture what the user was doing.

## Rules
- Be specific: "debugging whisper hallucinations" not just "debugging"
- Be descriptive: "learning Ollama embeddings" not just "learning"
- Focus on the USER'S ACTIVITY, not just visible content
- Max 3 topics per cluster
- Output MUST be valid JSON

## Output Format
```json
{"topics": ["topic 1", "topic 2"]}
```
@@ -0,0 +1,13 @@
Analyze these observations from a screen recording session and generate 1-3 descriptive topic labels.

Observations:
{{OBSERVATIONS}}

Output ONLY a JSON object with this format:
{"topics": ["specific topic 1", "specific topic 2"]}

Rules:
- Be specific: "debugging TypeScript errors" not just "debugging"
- Be descriptive: "learning React hooks" not just "learning"
- Focus on what the user is DOING, not just what's visible
- Max 3 topics
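Even with "Output ONLY a JSON object" in the prompt, downstream code has to defend against models that wrap the JSON in prose or code fences. A hypothetical consumer-side validator — `parseTopics` is a sketch, not part of this package:

```typescript
// Hypothetical validator for the {"topics": [...]} response format.
function parseTopics(raw: string): string[] {
  // Models sometimes wrap JSON in code fences or prose; grab the first
  // brace-delimited span and parse just that.
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) throw new Error("no JSON object found in model output");
  const parsed = JSON.parse(match[0]) as { topics?: unknown };
  if (!Array.isArray(parsed.topics)) throw new Error("missing topics array");
  // Keep only strings and enforce the prompt's cap of 3 topics.
  return parsed.topics
    .filter((t): t is string => typeof t === "string")
    .slice(0, 3);
}
```

Throwing on malformed output (rather than returning an empty list) lets the caller decide whether to re-prompt.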
@@ -0,0 +1,21 @@
Analyze these {{FRAME_COUNT}} screenshots from a screen recording.

For each frame, output ONE line in this EXACT format:
Frame 1: description: [what user is doing + context/intent] | activity: [one word] | apps: [list] | topics: [list]

Activity MUST be one of: debugging coding review meeting research reading terminal other

Good descriptions capture WHAT the user is doing, WHAT they're working on, and WHY:
- "Fixing TypeScript type error in the fetch handler after a failed API integration test" (not just "debugging error")
- "Reading Qwen3-VL documentation to understand multimodal token format for the VLM adapter" (not just "reading docs")
- "Searching Stack Overflow for React useEffect cleanup patterns to fix a memory leak" (not just "browsing")
- "Reviewing PR #142 which adds batch processing to the MLX inference pipeline" (not just "reviewing PR")
- "Running database migrations in terminal to add the new observations table schema" (not just "in terminal")
- "Watching a YouTube tutorial on SQLite query optimization for the frame sampling service" (not just "watching video")

Example output:
Frame 1: description: Fixing TypeScript type error in the fetch handler after a failed API integration test | activity: debugging | apps: [VS Code, Chrome] | topics: [TypeScript, API]
Frame 2: description: Reading Qwen3-VL documentation to understand multimodal token format for the VLM adapter | activity: reading | apps: [Chrome] | topics: [Qwen3-VL, VLM]
Frame 3: description: Running database migrations in terminal to add the new observations table schema | activity: terminal | apps: [iTerm, VS Code] | topics: [SQLite, migrations]

Now analyze all {{FRAME_COUNT}} frames:
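The pipe-delimited line format requested above parses with a single regular expression. A sketch of what a consumer might do — the `FrameObservation` shape and function name are illustrative, not the package's actual schema:

```typescript
// Illustrative parsed shape for one frame line.
interface FrameObservation {
  description: string;
  activity: string;
  apps: string[];
  topics: string[];
}

// Parse one "description: ... | activity: ... | apps: [...] | topics: [...]"
// line. Returns null if the model deviated from the format, so the caller
// can skip or re-prompt for that frame.
function parseFrameLine(line: string): FrameObservation | null {
  const m = line.match(
    /description:\s*(.+?)\s*\|\s*activity:\s*(\S+)\s*\|\s*apps:\s*\[(.*?)\]\s*\|\s*topics:\s*\[(.*?)\]/,
  );
  if (m === null) return null;
  // Split the bracketed lists on commas and trim whitespace.
  const list = (s: string) => s.split(",").map((x) => x.trim()).filter(Boolean);
  return { description: m[1], activity: m[2], apps: list(m[3]), topics: list(m[4]) };
}
```

Anchoring on the field labels (rather than splitting the whole line on `|`) keeps the parse robust to pipes appearing inside the free-text description.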
@@ -0,0 +1,19 @@
Analyze this screenshot from a screen recording.

Output ONE line in this EXACT format:
description: [what user is doing + context/intent] | activity: [one word] | apps: [list] | topics: [list]

Activity MUST be one of: debugging coding review meeting research reading terminal other

Good descriptions capture WHAT the user is doing, WHAT they're working on, and WHY:
- "Fixing TypeScript type error in the fetch handler after a failed API integration test" (not just "debugging error")
- "Reading Qwen3-VL documentation to understand multimodal token format for the VLM adapter" (not just "reading docs")
- "Searching Stack Overflow for React useEffect cleanup patterns to fix a memory leak" (not just "browsing")
- "Reviewing PR #142 which adds batch processing to the MLX inference pipeline" (not just "reviewing PR")
- "Running database migrations in terminal to add the new observations table schema" (not just "in terminal")
- "Watching a YouTube tutorial on SQLite query optimization for the frame sampling service" (not just "watching video")

Example:
description: Fixing TypeScript type error in the fetch handler after a failed API integration test | activity: debugging | apps: [VS Code, Chrome] | topics: [TypeScript, API]

Now analyze the screenshot: