# @semiont/make-meaning

[![Tests](https://github.com/The-AI-Alliance/semiont/actions/workflows/package-tests.yml/badge.svg)](https://github.com/The-AI-Alliance/semiont/actions/workflows/package-tests.yml?query=branch%3Amain+is%3Asuccess+job%3A%22Test+make-meaning%22)

**Making meaning from resources through context assembly, pattern detection, and relationship reasoning.**

This package transforms raw resources into meaningful, interconnected knowledge. It provides the core capabilities for:

- **Context Assembly**: Gathering resource metadata, content, and annotations from distributed storage
- **Pattern Detection**: AI-powered discovery of semantic patterns (comments, highlights, assessments, tags)
- **Graph Reasoning**: Navigating relationships and connections between resources

## Philosophy

Resources don't exist in isolation. A document becomes meaningful when we understand its annotations, its relationships to other resources, and the patterns within its content. `@semiont/make-meaning` provides the infrastructure to assemble this context from event-sourced storage, detect semantic patterns using AI, and reason about resource relationships through graph traversal.

This is the "applied meaning-making" layer - it sits between low-level AI primitives ([@semiont/inference](../inference/)) and high-level application orchestration ([apps/backend](../../apps/backend/)).

## Installation

```bash
npm install @semiont/make-meaning
```

## Quick Start

### Assemble Resource Context

```typescript
import { ResourceContext } from '@semiont/make-meaning';
import type { EnvironmentConfig } from '@semiont/core';

// Get resource metadata from event-sourced view storage
const resource = await ResourceContext.getResourceMetadata(resourceId, config);

// List all resources with optional filtering
const resources = await ResourceContext.listResources(
  { createdAfter: '2024-01-01' },
  config
);

// Add content previews to resource descriptors
const withContent = await ResourceContext.addContentPreviews(resources, config);
```

### Work with Annotations

```typescript
import { AnnotationContext } from '@semiont/make-meaning';

// Get all annotations for a resource
const annotations = await AnnotationContext.getResourceAnnotations(resourceId, config);

// Build LLM context for an annotation (includes surrounding text)
const context = await AnnotationContext.buildLLMContext(
  annotationUri,
  resourceId,
  config,
  { contextLines: 5 }
);

// Generate AI summary of an annotation
const summary = await AnnotationContext.generateAnnotationSummary(
  annotationId,
  resourceId,
  config
);
```

### Detect Semantic Patterns

```typescript
import { AnnotationDetection } from '@semiont/make-meaning';

// AI-powered detection of passages that merit commentary
const comments = await AnnotationDetection.detectComments(
  resourceId,
  config,
  'Focus on technical explanations',
  'educational',
  0.7
);

// Detect passages that should be highlighted
const highlights = await AnnotationDetection.detectHighlights(
  resourceId,
  config,
  'Find key definitions and important concepts',
  0.5
);

// Detect passages that merit assessment/evaluation
const assessments = await AnnotationDetection.detectAssessments(
  resourceId,
  config,
  'Evaluate clarity and technical accuracy',
  'constructive',
  0.6
);

// Detect and extract structured tags from text using ontology schemas
const tags = await AnnotationDetection.detectTags(
  resourceId,
  config,
  'irac',  // Schema ID from @semiont/ontology
  'issue'  // Category within the schema
);
```

### Structured Tagging with Ontology Schemas

A powerful use case is **structured tagging** using tag schemas defined in [@semiont/ontology](../ontology/). For example, legal writing can be analyzed using the IRAC framework (Issue, Rule, Application, Conclusion):

```typescript
import { AnnotationDetection } from '@semiont/make-meaning';

// Analyze a legal brief using the IRAC schema
const categories = ['issue', 'rule', 'application', 'conclusion'];

for (const category of categories) {
  const tags = await AnnotationDetection.detectTags(
    resourceId,
    config,
    'irac',   // Tag schema from @semiont/ontology
    category  // Which category to detect
  );

  console.log(`Found ${tags.length} ${category} passages`);
}
```

**Why this matters:**

When you tag multiple documents with the same schema (e.g., IRAC for legal briefs, IMRAD for scientific papers), you create a **structured semantic layer** across your corpus:

- **Rich traversal**: Find all "issue" statements across 100 legal briefs
- **Cross-document analysis**: Compare how different authors structure their "application" sections
- **Context retrieval**: When reading one brief, see related "rule" passages from other cases
- **Graph-based reasoning**: Trace argument patterns across your entire document collection

This transforms a collection of unstructured documents into a queryable knowledge base organized by domain-specific rhetorical structures.

See [@semiont/ontology](../ontology/) for available tag schemas and how to define custom schemas.
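
Once several resources have been tagged with the same schema, these cross-document queries reduce to simple grouping over the detection results. A minimal, self-contained sketch of that grouping step (the `indexByCategory` helper is illustrative, not part of this package's API; the `TagMatch` shape mirrors the match types documented in the API reference):

```typescript
// Hypothetical sketch: index tag detections by category across a corpus.
// In practice the per-resource results would come from
// AnnotationDetection.detectTags; here they are plain data.
interface TagMatch {
  exact: string;
  start: number;
  end: number;
  category: string;
}

type CorpusTags = Map<string, Array<{ resourceId: string; exact: string }>>;

function indexByCategory(
  results: Array<{ resourceId: string; tags: TagMatch[] }>
): CorpusTags {
  const index: CorpusTags = new Map();
  for (const { resourceId, tags } of results) {
    for (const tag of tags) {
      // Group every detected passage under its schema category.
      const bucket = index.get(tag.category) ?? [];
      bucket.push({ resourceId, exact: tag.exact });
      index.set(tag.category, bucket);
    }
  }
  return index;
}

// Every "issue" passage across the corpus, regardless of source document:
// const issues = indexByCategory(allResults).get('issue') ?? [];
```

The same index is what a graph-backed store would let you query directly; this sketch just shows that the semantic layer is ordinary structured data once detection has run.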

### Navigate Resource Relationships

```typescript
import { GraphContext } from '@semiont/make-meaning';

// Find resources that link to this resource (backlinks)
const backlinks = await GraphContext.getBacklinks(resourceId, config);

// Find shortest path between two resources
const paths = await GraphContext.findPath(fromResourceId, toResourceId, config, 3);

// Get all connections for a resource
const connections = await GraphContext.getResourceConnections(resourceId, config);

// Full-text search across all resources
const results = await GraphContext.searchResources('neural networks', config, 10);
```

## Architecture

`@semiont/make-meaning` implements a **three-layer architecture**:

```
┌─────────────────────────────────────────────┐
│               apps/backend                  │
│   Job orchestration, progress tracking,     │
│   HTTP APIs, event emission                 │
└─────────────────────────────────────────────┘

┌─────────────────────────────────────────────┐
│            @semiont/make-meaning            │
│   Context assembly, pattern detection,      │
│   relationship reasoning                    │
└─────────────────────────────────────────────┘

┌─────────────────────────────────────────────┐
│             @semiont/inference              │
│   AI primitives: prompts, parsers,          │
│   generateText abstraction                  │
└─────────────────────────────────────────────┘
```

**Key principles:**

- **Event-sourced context**: Resources and annotations are assembled from event streams via view storage
- **Content-addressed storage**: Content retrieved using checksums, enabling deduplication and caching
- **Graph-backed relationships**: `@semiont/graph` provides graph traversal for backlinks, paths, and connections
- **Separation of concerns**: Detection logic (make-meaning) is separate from job orchestration (backend)

See [MAKE-MEANING-PACKAGE.md](../../MAKE-MEANING-PACKAGE.md) for complete architecture documentation.

## API Reference

### ResourceContext

Provides resource metadata and content assembly from event-sourced storage.

```typescript
class ResourceContext {
  /**
   * Get resource metadata from view storage
   * Implementation: packages/make-meaning/src/resource-context.ts:15-28
   */
  static async getResourceMetadata(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<ResourceDescriptor | null>

  /**
   * List resources with optional filtering
   * Implementation: packages/make-meaning/src/resource-context.ts:30-48
   */
  static async listResources(
    filters: ListResourcesFilters | undefined,
    config: EnvironmentConfig
  ): Promise<ResourceDescriptor[]>

  /**
   * Add content previews to resource descriptors
   * Implementation: packages/make-meaning/src/resource-context.ts:50-77
   */
  static async addContentPreviews(
    resources: ResourceDescriptor[],
    config: EnvironmentConfig
  ): Promise<Array<ResourceDescriptor & { content: string }>>
}
```

**Filters:**
```typescript
interface ListResourcesFilters {
  createdAfter?: string;
  createdBefore?: string;
  mimeType?: string;
  limit?: number;
}
```

### AnnotationContext

Consolidated annotation operations including queries, context building, and AI summarization.

```typescript
class AnnotationContext {
  /**
   * Build LLM context for an annotation (includes surrounding text)
   * Implementation: packages/make-meaning/src/annotation-context.ts:35-120
   */
  static async buildLLMContext(
    annotationUri: AnnotationUri,
    resourceId: ResourceId,
    config: EnvironmentConfig,
    options: BuildContextOptions
  ): Promise<AnnotationLLMContextResponse>

  /**
   * Get all annotations for a resource, organized by motivation
   * Implementation: packages/make-meaning/src/annotation-context.ts:122-172
   */
  static async getResourceAnnotations(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<ResourceAnnotations>

  /**
   * Get all annotations for a resource (flat list)
   * Implementation: packages/make-meaning/src/annotation-context.ts:174-187
   */
  static async getAllAnnotations(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<Annotation[]>

  /**
   * Get a specific annotation by ID
   * Implementation: packages/make-meaning/src/annotation-context.ts:189-202
   */
  static async getAnnotation(
    annotationId: AnnotationId,
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<Annotation | null>

  /**
   * List annotations with optional filtering
   * Implementation: packages/make-meaning/src/annotation-context.ts:204-225
   */
  static async listAnnotations(
    filters: { resourceId?: ResourceId; type?: AnnotationCategory } | undefined,
    config: EnvironmentConfig
  ): Promise<Annotation[]>

  /**
   * Check if a resource exists in view storage
   * Implementation: packages/make-meaning/src/annotation-context.ts:227-236
   */
  static async resourceExists(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<boolean>

  /**
   * Get resource statistics (version, last updated)
   * Implementation: packages/make-meaning/src/annotation-context.ts:238-254
   */
  static async getResourceStats(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<{
    resourceId: ResourceId;
    version: number;
    updatedAt: string;
  }>

  /**
   * Get annotation context (surrounding text)
   * Implementation: packages/make-meaning/src/annotation-context.ts:256-314
   */
  static async getAnnotationContext(
    annotationId: AnnotationId,
    resourceId: ResourceId,
    contextBefore: number,
    contextAfter: number,
    config: EnvironmentConfig
  ): Promise<AnnotationContextResponse>

  /**
   * Generate AI summary of an annotation
   * Implementation: packages/make-meaning/src/annotation-context.ts:316-381
   */
  static async generateAnnotationSummary(
    annotationId: AnnotationId,
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<ContextualSummaryResponse>
}
```

**Options:**
```typescript
interface BuildContextOptions {
  contextLines?: number;     // Lines of surrounding text (default: 5)
  includeMetadata?: boolean; // Include resource metadata (default: true)
}
```

### GraphContext

Provides graph database operations for traversing resource relationships. All operations are delegated to `@semiont/graph` (which may use Neo4j or other graph database implementations).

```typescript
class GraphContext {
  /**
   * Get all resources referencing this resource (backlinks)
   * Requires graph traversal - uses @semiont/graph
   * Implementation: packages/make-meaning/src/graph-context.ts:26-30
   */
  static async getBacklinks(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<Annotation[]>

  /**
   * Find shortest path between two resources
   * Requires graph traversal - uses @semiont/graph
   * Implementation: packages/make-meaning/src/graph-context.ts:36-44
   */
  static async findPath(
    fromResourceId: ResourceId,
    toResourceId: ResourceId,
    config: EnvironmentConfig,
    maxDepth?: number
  ): Promise<GraphPath[]>

  /**
   * Get resource connections (graph edges)
   * Requires graph traversal - uses @semiont/graph
   * Implementation: packages/make-meaning/src/graph-context.ts:50-53
   */
  static async getResourceConnections(
    resourceId: ResourceId,
    config: EnvironmentConfig
  ): Promise<GraphConnection[]>

  /**
   * Search resources by name (cross-resource query)
   * Requires full-text search - uses @semiont/graph
   * Implementation: packages/make-meaning/src/graph-context.ts:59-62
   */
  static async searchResources(
    query: string,
    config: EnvironmentConfig,
    limit?: number
  ): Promise<ResourceDescriptor[]>
}
```

### AnnotationDetection

AI-powered semantic pattern detection. Orchestrates the full pipeline: resource content → AI prompts → response parsing → validated matches.

```typescript
class AnnotationDetection {
  /**
   * Detect passages that merit commentary
   * Implementation: packages/make-meaning/src/annotation-detection.ts:27-65
   * Uses: MotivationPrompts.buildCommentPrompt, MotivationParsers.parseComments
   */
  static async detectComments(
    resourceId: ResourceId,
    config: EnvironmentConfig,
    instructions?: string,
    tone?: string,
    density?: number
  ): Promise<CommentMatch[]>

  /**
   * Detect passages that should be highlighted
   * Implementation: packages/make-meaning/src/annotation-detection.ts:67-101
   * Uses: MotivationPrompts.buildHighlightPrompt, MotivationParsers.parseHighlights
   */
  static async detectHighlights(
    resourceId: ResourceId,
    config: EnvironmentConfig,
    instructions?: string,
    density?: number
  ): Promise<HighlightMatch[]>

  /**
   * Detect passages that merit assessment/evaluation
   * Implementation: packages/make-meaning/src/annotation-detection.ts:103-141
   * Uses: MotivationPrompts.buildAssessmentPrompt, MotivationParsers.parseAssessments
   */
  static async detectAssessments(
    resourceId: ResourceId,
    config: EnvironmentConfig,
    instructions?: string,
    tone?: string,
    density?: number
  ): Promise<AssessmentMatch[]>

  /**
   * Detect and extract structured tags from text
   * Implementation: packages/make-meaning/src/annotation-detection.ts:143-197
   * Uses: MotivationPrompts.buildTagPrompt, MotivationParsers.parseTags
   */
  static async detectTags(
    resourceId: ResourceId,
    config: EnvironmentConfig,
    schemaId: string,
    category: string
  ): Promise<TagMatch[]>
}
```

**Match types:**
```typescript
// Re-exported from @semiont/inference for convenience
interface CommentMatch {
  exact: string;   // The exact text passage
  start: number;   // Character offset start
  end: number;     // Character offset end
  prefix?: string; // Context before (for fuzzy anchoring)
  suffix?: string; // Context after (for fuzzy anchoring)
  comment: string; // The AI-generated comment
}

interface HighlightMatch {
  exact: string;
  start: number;
  end: number;
  prefix?: string;
  suffix?: string;
}

interface AssessmentMatch {
  exact: string;
  start: number;
  end: number;
  prefix?: string;
  suffix?: string;
  assessment: string; // The AI-generated assessment
}

interface TagMatch {
  exact: string;
  start: number;
  end: number;
  prefix?: string;
  suffix?: string;
  category: string; // The tag category
}
```

**Detection parameters:**

- `instructions`: Custom guidance for the AI (e.g., "Focus on technical concepts")
- `tone`: Tone for comments/assessments (e.g., "educational", "constructive", "analytical")
- `density`: Target density 0.0-1.0 (0.5 = ~50% of passages should be detected)
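
The `prefix`/`suffix` fields exist so a match can be re-anchored when character offsets drift, for example after the content is edited. A minimal sketch of that idea; `reanchor` is a hypothetical helper written for illustration, not part of this package's API:

```typescript
// Hypothetical helper: recover a match's offsets in (possibly updated)
// content using the TextQuoteSelector-style fields (exact, prefix, suffix).
interface QuoteAnchor {
  exact: string;
  prefix?: string;
  suffix?: string;
}

function reanchor(
  content: string,
  anchor: QuoteAnchor
): { start: number; end: number } | null {
  // Prefer a match that includes the surrounding context...
  if (anchor.prefix !== undefined || anchor.suffix !== undefined) {
    const quoted = (anchor.prefix ?? '') + anchor.exact + (anchor.suffix ?? '');
    const hit = content.indexOf(quoted);
    if (hit !== -1) {
      const start = hit + (anchor.prefix ?? '').length;
      return { start, end: start + anchor.exact.length };
    }
  }
  // ...then fall back to the exact text alone.
  const at = content.indexOf(anchor.exact);
  return at === -1 ? null : { start: at, end: at + anchor.exact.length };
}
```

With ambiguous text, the context disambiguates: `reanchor(content, { exact: 'proof', prefix: 'the ' })` finds the occurrence of "proof" preceded by "the " rather than the first occurrence.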

## Examples

### Building Annotation Context for AI

```typescript
import { AnnotationContext } from '@semiont/make-meaning';

// Get the full context needed for AI to process an annotation
const context = await AnnotationContext.buildLLMContext(
  annotationUri,
  resourceId,
  config,
  { contextLines: 10 }
);

// context includes:
// - The annotation itself
// - Surrounding text (10 lines before/after)
// - Resource metadata
// - Related annotations in the vicinity
```

### Detecting Patterns and Creating Annotations

```typescript
import { AnnotationDetection } from '@semiont/make-meaning';
import { createEventStore } from '@semiont/event-sourcing';

// Detect highlights using AI
const highlights = await AnnotationDetection.detectHighlights(
  resourceId,
  config,
  'Find key definitions and important claims',
  0.6 // Medium density
);

// Create annotations for each detected highlight
const eventStore = await createEventStore(config);
for (const highlight of highlights) {
  const annotation = {
    '@context': 'http://www.w3.org/ns/anno.jsonld',
    type: 'Annotation',
    id: generateAnnotationId(),
    motivation: 'highlighting',
    target: {
      type: 'SpecificResource',
      source: resourceUri,
      selector: [
        {
          type: 'TextPositionSelector',
          start: highlight.start,
          end: highlight.end,
        },
        {
          type: 'TextQuoteSelector',
          exact: highlight.exact,
          prefix: highlight.prefix,
          suffix: highlight.suffix,
        },
      ],
    },
    body: [],
  };

  await eventStore.appendEvent({
    type: 'annotation.added',
    resourceId,
    userId,
    version: 1,
    payload: { annotation },
  });
}
```

### Navigating Resource Relationships

```typescript
import { GraphContext } from '@semiont/make-meaning';

// Find all resources that link to this one
const backlinks = await GraphContext.getBacklinks(resourceId, config);
console.log(`Found ${backlinks.length} resources linking here`);

// Find connection path between two resources
const paths = await GraphContext.findPath(sourceId, targetId, config, 3);
if (paths.length > 0) {
  console.log(`Shortest path has ${paths[0].nodes.length} nodes`);
}

// Get all connections for a resource
const connections = await GraphContext.getResourceConnections(resourceId, config);
// connections = [{ from: ResourceId, to: ResourceId, via: AnnotationId }, ...]
```

## Configuration

All methods require an `EnvironmentConfig` object with:

```typescript
interface EnvironmentConfig {
  services: {
    backend: {
      publicURL: string; // Base URL for resource URIs
    };
    openai?: {
      apiKey: string;       // Required for detection methods
      model?: string;       // Default: 'gpt-4o-mini'
      temperature?: number; // Default: 0.7
    };
  };
  storage: {
    base: string; // Base path for filesystem storage
  };
}
```

Example:
```typescript
const config: EnvironmentConfig = {
  services: {
    backend: {
      publicURL: 'http://localhost:3000',
    },
    openai: {
      apiKey: process.env.OPENAI_API_KEY!,
      model: 'gpt-4o-mini',
      temperature: 0.7,
    },
  },
  storage: {
    base: '/path/to/storage',
  },
};
```

## Dependencies

`@semiont/make-meaning` builds on several core packages:

- **[@semiont/core](../core/)**: Core types and utilities
- **[@semiont/api-client](../api-client/)**: OpenAPI-generated types
- **[@semiont/event-sourcing](../event-sourcing/)**: Event store and view storage
- **[@semiont/content](../content/)**: Content-addressed storage
- **[@semiont/graph](../graph/)**: Graph database client (e.g., Neo4j)
- **[@semiont/ontology](../ontology/)**: Schema definitions for tags
- **[@semiont/inference](../inference/)**: AI primitives (prompts, parsers, generateText)

## How Detection Works

See [packages/inference/README.md](../inference/README.md) for details on the AI pipeline.

**High-level flow:**

1. **Context Assembly**: `ResourceContext.getResourceMetadata()` retrieves resource content
2. **Prompt Building**: `MotivationPrompts.buildXPrompt()` creates an AI prompt with domain knowledge
3. **AI Inference**: `generateText()` calls the OpenAI API with the prompt
4. **Response Parsing**: `MotivationParsers.parseX()` extracts structured matches from the response
5. **Offset Validation**: The parser validates that `start`/`end` offsets match the `exact` text in the content

**Example for highlights:**

```typescript
// 1. Get content
const resource = await ResourceContext.getResourceMetadata(resourceId, config);
const content = await representationStore.retrieve(resource.contentId);

// 2. Build prompt
const prompt = MotivationPrompts.buildHighlightPrompt(
  content,
  'Find key definitions',
  0.6
);

// 3. Generate AI response
const response = await generateText(prompt, config);

// 4. Parse and validate
const highlights = MotivationParsers.parseHighlights(response, content);
// Returns: HighlightMatch[] with validated offsets
```

## Worker Integration

Detection jobs are orchestrated by workers in [apps/backend/src/jobs/workers/](../../apps/backend/src/jobs/workers/):

- [highlight-detection-worker.ts](../../apps/backend/src/jobs/workers/highlight-detection-worker.ts) - delegates to `AnnotationDetection.detectHighlights()`
- [comment-detection-worker.ts](../../apps/backend/src/jobs/workers/comment-detection-worker.ts) - delegates to `AnnotationDetection.detectComments()`
- [assessment-detection-worker.ts](../../apps/backend/src/jobs/workers/assessment-detection-worker.ts) - delegates to `AnnotationDetection.detectAssessments()`
- [tag-detection-worker.ts](../../apps/backend/src/jobs/workers/tag-detection-worker.ts) - delegates to `AnnotationDetection.detectTags()`

Workers handle:
- Job lifecycle (pending → running → completed/failed)
- Progress tracking and event emission
- Annotation creation via the event store
- Error handling and retries

All detection logic lives in `@semiont/make-meaning`, keeping workers focused on orchestration.
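
That division of labor can be sketched in miniature. Everything below is hypothetical scaffolding (the `Job` shape and `runDetectionJob` are invented for illustration); only the idea mirrors the real workers: the worker owns the lifecycle, and detection lives behind an injected call:

```typescript
// Hypothetical sketch of the worker/detection split. The detector function
// is injected, standing in for e.g. AnnotationDetection.detectHighlights.
type JobStatus = 'pending' | 'running' | 'completed' | 'failed';

interface Job<T> {
  status: JobStatus;
  result?: T;
  error?: string;
}

async function runDetectionJob<T>(detect: () => Promise<T>): Promise<Job<T>> {
  const job: Job<T> = { status: 'pending' };
  job.status = 'running'; // lifecycle: pending -> running
  try {
    job.result = await detect(); // all detection logic lives behind this call
    job.status = 'completed';
  } catch (err) {
    job.status = 'failed'; // orchestration owns error handling
    job.error = err instanceof Error ? err.message : String(err);
  }
  return job;
}
```

Swapping the injected `detect` is all it takes to reuse the same lifecycle for highlights, comments, assessments, or tags.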

## Future Direction

### Deterministic Reasoning

Future versions will add deterministic reasoning capabilities alongside AI-powered detection:

- **Rule-based pattern matching**: Detect annotations using regex, string matching, or custom predicates
- **Ontology-driven inference**: Apply OWL/RDFS reasoning over resource relationships
- **Compositional reasoning**: Combine multiple reasoning strategies (AI + rules + ontology)

Example (aspirational):

```typescript
// AI-powered detection
const aiHighlights = await AnnotationDetection.detectHighlights(resourceId, config);

// Rule-based detection
const ruleHighlights = await ResourceReasoning.findMatches(resourceId, {
  pattern: /\btheorem\b.*\bproof\b/gi,
  motivation: 'highlighting',
});

// Ontology-based reasoning
const inferences = await ResourceReasoning.inferRelationships(resourceId, {
  ontology: 'http://example.org/math-ontology',
  rules: ['transitive-proof-chain'],
});
```

This will enable hybrid approaches where AI handles semantic understanding and deterministic rules handle structural patterns.

### Enhanced Context Assembly

- **Multi-resource context**: Build context spanning multiple related resources
- **Temporal context**: Access historical versions of resources and annotations
- **Provenance tracking**: Track reasoning chains and decision paths

## License

MIT