flappa-doormal 2.6.1 → 2.6.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/AGENTS.md CHANGED
@@ -41,7 +41,7 @@ src/
  ├── match-utils.ts # Extracted match processing utilities
  ├── segmenter.test.ts # Core test suite (150+ tests including breakpoints)
  ├── segmenter.bukhari.test.ts # Real-world test cases
- ├── breakpoint-utils.test.ts # Breakpoint utility tests (42 tests)
+ ├── breakpoint-utils.test.ts # Breakpoint utility tests (55 tests)
  ├── rule-regex.test.ts # Rule regex builder tests
  ├── segmenter-utils.test.ts # Segmenter helper tests
  ├── tokens.test.ts # Token expansion tests
@@ -92,7 +92,7 @@ docs/
  - `buildExcludeSet()` - Create Set from PageRange[] for O(1) lookups
  - `createSegment()` - Create segment with optional to/meta fields
  - `expandBreakpoints()` - Expand patterns with pre-compiled regexes
- - `findActualEndPage()` - Search backwards for ending page by content
+ - `findActualEndPage()` - Search backwards for ending page using progressive prefix matching (handles mid-page splits)
  - `findBreakpointWindowEndPosition()` - Compute window boundary in content-space (robust to marker stripping)
  - `applyPageJoinerBetweenPages()` - Normalize page-boundary join in output segments (`space` vs `newline`)
  - `findBreakPosition()` - Find break position using breakpoint patterns
@@ -306,7 +306,7 @@ The original `segmentPages` had complexity 37 (max: 15). Extraction:
  1. **TypeScript strict mode** - No `any` types
  2. **Biome linting** - Max complexity 15 per function (some exceptions exist)
  3. **JSDoc comments** - All exported functions documented
- 4. **Test coverage** - 251 tests across 8 files
+ 4. **Test coverage** - 352 tests across 12 files

  ## Dependencies

@@ -360,6 +360,8 @@ bunx biome lint .

  9. **Auto-escaping improves DX significantly**: Users expect `(أ):` to match literal parentheses. Auto-escaping `()[]` in template patterns (but not `regex`) gives intuitive behavior while preserving power-user escape hatch.

+ 10. **Page boundary detection needs progressive prefixes**: When breakpoints split content mid-page, checking only the first N characters of a page to detect if the segment ends on that page can fail. Solution: try progressively shorter prefixes (`[80, 60, 40, 30, 20, 15, 12, 10, 8, 6]`) via `JOINER_PREFIX_LENGTHS`. The check uses `indexOf(...) > 0` (not `>= 0`) to avoid false positives when a page prefix appears at position 0 (which indicates the segment *starts* with that page, not *ends* on it).
+
  ### Architecture Insights

  - **Declarative > Imperative**: Users describe patterns, library handles regex
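To make the new lesson concrete, here is a minimal TypeScript sketch of the progressive-prefix check described in item 10. The real implementation is the `findActualEndPage()` change shown in the `dist/index.mjs` hunk further down; `PREFIX_LENGTHS` and `segmentEndsOnPage` are names invented for this sketch (mirroring `JOINER_PREFIX_LENGTHS`), and the page texts are illustrative only.

```ts
// Prefix lengths to try, longest first (mirrors JOINER_PREFIX_LENGTHS in breakpoint-utils.ts).
const PREFIX_LENGTHS = [80, 60, 40, 30, 20, 15, 12, 10, 8, 6] as const;

/** True if `segmentContent` appears to END on the page whose text is `pageContent`. */
const segmentEndsOnPage = (segmentContent: string, pageContent: string): boolean => {
    const trimmed = pageContent.trimStart();
    for (const len of PREFIX_LENGTHS) {
        const prefix = trimmed.slice(0, Math.min(len, trimmed.length)).trim();
        // `> 0`, not `>= 0`: a hit at index 0 means the segment STARTS with this page,
        // so this page cannot be where the segment ends.
        if (prefix.length > 0 && segmentContent.indexOf(prefix) > 0) {
            return true;
        }
    }
    return false;
};

// A segment that spills only a few words into the next page: the long fixed prefix
// misses, but the shorter fallbacks match.
const segment = 'باب صلاة الجماعة وفضلها\nحدثنا عبد الله';
segmentEndsOnPage(segment, 'حدثنا عبد الله بن يوسف قال أخبرنا مالك'); // → true (matched via the 15-char prefix)
segmentEndsOnPage(segment, 'باب صلاة الجماعة وفضلها'); // → false (every prefix is found only at index 0)
```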
package/README.md CHANGED
@@ -1,5 +1,20 @@
  # flappa-doormal

+ <p align="center">
+ <img src="icon.png" alt="flappa-doormal" width="128" height="128" />
+ </p>
+
+ <p align="center">
+ <strong>Declarative Arabic text segmentation library</strong><br/>
+ Split pages of content into logical segments using human-readable patterns.
+ </p>
+
+ <p align="center">
+ <a href="https://flappa-doormal.surge.sh">🚀 <strong>Live Demo</strong></a> •
+ <a href="https://www.npmjs.com/package/flappa-doormal">📦 npm</a> •
+ <a href="https://github.com/ragaeeb/flappa-doormal">📚 GitHub</a>
+ </p>
+
  [![wakatime](https://wakatime.com/badge/user/a0b906ce-b8e7-4463-8bce-383238df6d4b/project/384fa29d-72e8-4078-980f-45d363f10507.svg)](https://wakatime.com/badge/user/a0b906ce-b8e7-4463-8bce-383238df6d4b/project/384fa29d-72e8-4078-980f-45d363f10507)
  [![Node.js CI](https://github.com/ragaeeb/flappa-doormal/actions/workflows/build.yml/badge.svg)](https://github.com/ragaeeb/flappa-doormal/actions/workflows/build.yml) ![GitHub License](https://img.shields.io/github/license/ragaeeb/flappa-doormal)
  ![GitHub Release](https://img.shields.io/github/v/release/ragaeeb/flappa-doormal)
@@ -12,8 +27,6 @@
  [![codecov](https://codecov.io/gh/ragaeeb/flappa-doormal/graph/badge.svg?token=RQ2BV4M9IS)](https://codecov.io/gh/ragaeeb/flappa-doormal)
  [![npm version](https://badge.fury.io/js/flappa-doormal.svg)](https://badge.fury.io/js/flappa-doormal)

- **Declarative Arabic text segmentation library** - Split pages of content into logical segments using human-readable patterns.
-
  ## Why This Library?

  ### The Problem
@@ -939,7 +952,7 @@ Complex logic is intentionally split into small, independently testable modules:

  - `src/segmentation/match-utils.ts`: match filtering + capture extraction
  - `src/segmentation/rule-regex.ts`: SplitRule → compiled regex builder (`buildRuleRegex`, `processPattern`)
- - `src/segmentation/breakpoint-utils.ts`: breakpoint windowing/exclusion helpers + page boundary join normalization
+ - `src/segmentation/breakpoint-utils.ts`: breakpoint windowing/exclusion helpers, page boundary join normalization, and progressive prefix page detection for accurate `from`/`to` attribution
  - `src/segmentation/breakpoint-processor.ts`: breakpoint post-processing engine (applies breakpoints after structural segmentation)

  ## Performance Notes
@@ -986,6 +999,30 @@ See [AGENTS.md](./AGENTS.md) for:
  - Algorithm explanations
  - Lessons learned during development

+ ## Demo
+
+ An interactive demo is available at [flappa-doormal.surge.sh](https://flappa-doormal.surge.sh).
+
+ The demo source code is located in the `demo/` directory and includes:
+ - **Analysis**: Discover common line-start patterns in your text
+ - **Pattern Detection**: Auto-detect tokens in text and get template suggestions
+ - **Segmentation**: Apply rules and see segmented output with metadata
+
+ To run the demo locally:
+
+ ```bash
+ cd demo
+ bun install
+ bun run dev
+ ```
+
+ To deploy updates:
+
+ ```bash
+ cd demo
+ bun run deploy
+ ```
+
  ## License

  MIT
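For readers skimming this diff, the "human-readable patterns" the new README header refers to look like the shapes below. The rule and breakpoint literals are copied from JSDoc examples embedded in the library source; the surrounding `segmentPages()` call is not shown in this diff, so this is only a shape sketch, not a complete usage example.

```ts
// Split-rule shapes from the JSDoc in src/segmentation/tokens.ts:
// tokens like {{kitab}}, {{bab}}, {{raqms}}, {{dash}} expand to Arabic-aware regexes,
// and {{token:name}} becomes a named capture that lands in segment metadata.
const rules = [
    // Split at lines beginning with book/chapter markers, diacritic-insensitively.
    { lineStartsWith: ['{{kitab}}', '{{bab}}'], split: 'at', fuzzy: true },
    // Split after "hadith number + dash", capturing the number.
    { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' },
];

// Breakpoint shapes from src/segmentation/breakpoint-utils.ts: a plain string is
// shorthand for { pattern }, and an empty pattern means "break at a page boundary".
const breakpoints = [
    '\\n\\n',
    { pattern: '', exclude: [[3, 7]] }, // pages 3-7 are excluded from this breakpoint's range
];
```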
package/dist/index.mjs CHANGED
@@ -459,8 +459,10 @@ const findExclusionBreakPosition = (currentFromIdx, windowEndIdx, toIdx, pageIds
  const findActualEndPage = (pieceContent, currentFromIdx, toIdx, pageIds, normalizedPages) => {
  for (let pi = toIdx; pi > currentFromIdx; pi--) {
  const pageData = normalizedPages.get(pageIds[pi]);
- if (pageData) {
- const checkPortion = pageData.content.slice(0, Math.min(30, pageData.length));
+ if (!pageData) continue;
+ const trimmedContent = pageData.content.trimStart();
+ for (const len of JOINER_PREFIX_LENGTHS) {
+ const checkPortion = trimmedContent.slice(0, Math.min(len, trimmedContent.length)).trim();
  if (checkPortion.length > 0 && pieceContent.indexOf(checkPortion) > 0) return pi;
  }
  }
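A hedged usage sketch of the function changed above, using the `NormalizedPage` shape from `breakpoint-utils.ts`. The two-page data is invented, and `findActualEndPage` is assumed to be in scope here (it is an internal helper, not part of the package's documented public API); the point is why progressive prefixes matter when a segment spills only a few words into the next page.

```ts
const PAGE_101 = 'باب صلاة الجماعة وفضلها';
const PAGE_102 = 'حدثنا عبد الله بن يوسف قال أخبرنا مالك عن نافع';
const pageIds = [101, 102];

// NormalizedPage shape from breakpoint-utils.ts: Map of page id → { content, index, length }.
const normalizedPages = new Map([
    [101, { content: PAGE_101, index: 0, length: PAGE_101.length }],
    [102, { content: PAGE_102, index: 1, length: PAGE_102.length }],
]);

// A breakpoint piece that starts on page 101 and carries only the first words of page 102.
const pieceContent = `${PAGE_101} حدثنا عبد الله`;

// Old code: page 102's fixed 30-character prefix is longer than the spill-over, so nothing
// matches and the end page stays at index 0 (page 101).
// New code: the shorter 15-character prefix ('حدثنا عبد الله') is found at a position > 0,
// so the piece is correctly attributed as ending on index 1 (page 102).
findActualEndPage(pieceContent, 0, 1, pageIds, normalizedPages); // → 1
```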
@@ -1 +1 @@
1
- {"version":3,"file":"index.mjs","names":["EQUIV_GROUPS: string[][]","seg: Segment","processPattern","first: { index: number; length: number } | undefined","last: { index: number; length: number } | undefined","cumulativeOffsets: number[]","result: Segment[]","namedCaptures: Record<string, string>","BASE_TOKENS: Record<string, string>","COMPOSITE_TOKENS: Record<string, string>","TOKEN_PATTERNS: Record<string, string>","segments: TemplateSegment[]","match: RegExpExecArray | null","captureNames: string[]","names: string[]","s: {\n lineStartsWith?: string[];\n lineStartsAfter?: string[];\n lineEndsWith?: string[];\n template?: string;\n regex?: string;\n }","allCaptureNames: string[]","compiled: CompiledReplaceRule[]","combinableRules: { rule: SplitRule; prefix: string; index: number }[]","standaloneRules: SplitRule[]","fastFuzzyRules: FastFuzzyRule[]","boundaries: PageBoundary[]","pageBreaks: number[]","parts: string[]","initialSeg: Segment","namedCaptures: Record<string, string>","capturedContent: string | undefined","contentStartOffset: number | undefined","sp: SplitPoint","finalSplitPoints: SplitPoint[]","matches: MatchResult[]","result: MatchResult","result: number[]","createSegment","seg: Segment","result: Segment[]","segments: Segment[]","DEFAULT_OPTIONS: ResolvedLineStartAnalysisOptions","TOKEN_PRIORITY_ORDER: string[]","TOKEN_PRIORITY_ORDER","compiled: CompiledTokenRegex[]","best: { token: string; text: string } | null","firstWord","o: ResolvedLineStartAnalysisOptions","results: DetectedPattern[]","coveredRanges: Array<[number, number]>","match: RegExpExecArray | null"],"sources":["../src/segmentation/fuzzy.ts","../src/segmentation/breakpoint-utils.ts","../src/segmentation/breakpoint-processor.ts","../src/segmentation/match-utils.ts","../src/segmentation/tokens.ts","../src/segmentation/rule-regex.ts","../src/segmentation/replace.ts","../src/segmentation/fast-fuzzy-prefix.ts","../src/segmentation/segmenter-rule-utils.ts","../src/segmentation/textUtils.ts","../src/segmentation/segmenter.ts","../src/analysis.ts","../src/detection.ts"],"sourcesContent":["/**\n * Fuzzy matching utilities for Arabic text.\n *\n * Provides diacritic-insensitive and character-equivalence matching for Arabic text.\n * This allows matching text regardless of:\n * - Diacritical marks (harakat/tashkeel): فَتْحَة، ضَمَّة، كَسْرَة، سُكُون، شَدَّة، تَنْوين\n * - Character equivalences: ا↔آ↔أ↔إ, ة↔ه, ى↔ي\n *\n * @module fuzzy\n *\n * @example\n * // Make a pattern diacritic-insensitive\n * const pattern = makeDiacriticInsensitive('حدثنا');\n * new RegExp(pattern, 'u').test('حَدَّثَنَا') // → true\n */\n\n/**\n * Character class matching all Arabic diacritics (Tashkeel/Harakat).\n *\n * Includes the following diacritical marks:\n * - U+064B: ً (fathatan - double fatha)\n * - U+064C: ٌ (dammatan - double damma)\n * - U+064D: ٍ (kasratan - double kasra)\n * - U+064E: َ (fatha - short a)\n * - U+064F: ُ (damma - short u)\n * - U+0650: ِ (kasra - short i)\n * - U+0651: ّ (shadda - gemination)\n * - U+0652: ْ (sukun - no vowel)\n *\n * @internal\n */\nconst DIACRITICS_CLASS = '[\\u064B\\u064C\\u064D\\u064E\\u064F\\u0650\\u0651\\u0652]';\n\n/**\n * Groups of equivalent Arabic characters.\n *\n * Characters within the same group are considered equivalent for matching purposes.\n * This handles common variations in Arabic text where different characters are\n * used interchangeably or have the same underlying meaning.\n *\n * Equivalence groups:\n * - Alef variants: ا (bare), آ (with madda), أ (with hamza above), إ (with 
hamza below)\n * - Ta marbuta and Ha: ة ↔ ه (often interchangeable at word endings)\n * - Alef maqsura and Ya: ى ↔ ي (often interchangeable at word endings)\n *\n * @internal\n */\nconst EQUIV_GROUPS: string[][] = [\n ['\\u0627', '\\u0622', '\\u0623', '\\u0625'], // ا, آ, أ, إ\n ['\\u0629', '\\u0647'], // ة <-> ه\n ['\\u0649', '\\u064A'], // ى <-> ي\n];\n\n/**\n * Escapes a string for safe inclusion in a regular expression.\n *\n * Escapes all regex metacharacters: `.*+?^${}()|[\\]\\\\`\n *\n * @param s - Any string to escape\n * @returns String with regex metacharacters escaped\n *\n * @example\n * escapeRegex('hello.world') // → 'hello\\\\.world'\n * escapeRegex('[test]') // → '\\\\[test\\\\]'\n * escapeRegex('a+b*c?') // → 'a\\\\+b\\\\*c\\\\?'\n */\nexport const escapeRegex = (s: string): string => s.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&');\n\n/**\n * Returns a regex character class for all equivalents of a given character.\n *\n * If the character belongs to one of the predefined equivalence groups\n * (e.g., ا/آ/أ/إ), the returned class will match any member of that group.\n * Otherwise, the original character is simply escaped for safe regex inclusion.\n *\n * @param ch - A single character to expand into its equivalence class\n * @returns A RegExp-safe string representing the character and its equivalents\n *\n * @example\n * getEquivClass('ا') // → '[اآأإ]' (matches any alef variant)\n * getEquivClass('ب') // → 'ب' (no equivalents, just escaped)\n * getEquivClass('.') // → '\\\\.' (regex metachar escaped)\n *\n * @internal\n */\nconst getEquivClass = (ch: string): string => {\n for (const group of EQUIV_GROUPS) {\n if (group.includes(ch)) {\n // join the group's members into a character class\n return `[${group.map((c) => escapeRegex(c)).join('')}]`;\n }\n }\n // not in equivalence groups -> return escaped character\n return escapeRegex(ch);\n};\n\n/**\n * Performs light normalization on Arabic text for consistent matching.\n *\n * Normalization steps:\n * 1. NFC normalization (canonical decomposition then composition)\n * 2. Remove Zero-Width Joiner (U+200D) and Zero-Width Non-Joiner (U+200C)\n * 3. Collapse multiple whitespace characters to single space\n * 4. Trim leading and trailing whitespace\n *\n * This normalization preserves diacritics and letter forms while removing\n * invisible characters that could interfere with matching.\n *\n * @param str - Arabic text to normalize\n * @returns Normalized string\n *\n * @example\n * normalizeArabicLight('حَدَّثَنَا') // → 'حَدَّثَنَا' (diacritics preserved)\n * normalizeArabicLight('بسم الله') // → 'بسم الله' (spaces collapsed)\n * normalizeArabicLight(' text ') // → 'text' (trimmed)\n *\n * @internal\n */\nconst normalizeArabicLight = (str: string) => {\n return str\n .normalize('NFC')\n .replace(/[\\u200C\\u200D]/g, '') // remove ZWJ/ZWNJ\n .replace(/\\s+/g, ' ')\n .trim();\n};\n\n/**\n * Creates a diacritic-insensitive regex pattern for Arabic text matching.\n *\n * Transforms input text into a regex pattern that matches the text regardless\n * of diacritical marks (harakat) and character variations. Each character in\n * the input is:\n * 1. Expanded to its equivalence class (if applicable)\n * 2. 
Followed by an optional diacritics matcher\n *\n * This allows matching:\n * - `حدثنا` with `حَدَّثَنَا` (with full diacritics)\n * - `الإيمان` with `الايمان` (alef variants)\n * - `صلاة` with `صلاه` (ta marbuta ↔ ha)\n *\n * @param text - Input Arabic text to make diacritic-insensitive\n * @returns Regex pattern string that matches the text with or without diacritics\n *\n * @example\n * const pattern = makeDiacriticInsensitive('حدثنا');\n * // Each char gets equivalence class + optional diacritics\n * // Result matches: حدثنا, حَدَّثَنَا, حَدَثَنَا, etc.\n *\n * @example\n * const pattern = makeDiacriticInsensitive('باب');\n * new RegExp(pattern, 'u').test('بَابٌ') // → true\n * new RegExp(pattern, 'u').test('باب') // → true\n *\n * @example\n * // Using with split rules\n * {\n * lineStartsWith: ['باب'],\n * split: 'at',\n * fuzzy: true // Applies makeDiacriticInsensitive internally\n * }\n */\nexport const makeDiacriticInsensitive = (text: string) => {\n const diacriticsMatcher = `${DIACRITICS_CLASS}*`;\n const norm = normalizeArabicLight(text);\n // Use Array.from to iterate grapheme-safe over the string (works fine for Arabic letters)\n return Array.from(norm)\n .map((ch) => getEquivClass(ch) + diacriticsMatcher)\n .join('');\n};\n","/**\n * Utility functions for breakpoint processing in the segmentation engine.\n *\n * These functions handle breakpoint normalization, page exclusion checking,\n * and segment creation. Extracted for independent testing and reuse.\n *\n * @module breakpoint-utils\n */\n\nimport type { Breakpoint, BreakpointRule, PageRange, Segment } from './types.js';\n\nconst WINDOW_PREFIX_LENGTHS = [80, 60, 40, 30, 20, 15] as const;\n// For page-join normalization we need to handle cases where only the very beginning of the next page\n// is present in the current segment (e.g. the segment ends right before the next structural marker).\n// That can be as short as a few words, so we allow shorter prefixes here.\nconst JOINER_PREFIX_LENGTHS = [80, 60, 40, 30, 20, 15, 12, 10, 8, 6] as const;\n\n/**\n * Normalizes a breakpoint to the object form.\n * Strings are converted to { pattern: str } with no constraints.\n *\n * @param bp - Breakpoint as string or object\n * @returns Normalized BreakpointRule object\n *\n * @example\n * normalizeBreakpoint('\\\\n\\\\n')\n * // → { pattern: '\\\\n\\\\n' }\n *\n * normalizeBreakpoint({ pattern: '\\\\n', min: 10 })\n * // → { pattern: '\\\\n', min: 10 }\n */\nexport const normalizeBreakpoint = (bp: Breakpoint): BreakpointRule => (typeof bp === 'string' ? 
{ pattern: bp } : bp);\n\n/**\n * Checks if a page ID is in an excluded list (single pages or ranges).\n *\n * @param pageId - Page ID to check\n * @param excludeList - List of page IDs or [from, to] ranges to exclude\n * @returns True if page is excluded\n *\n * @example\n * isPageExcluded(5, [1, 5, 10])\n * // → true\n *\n * isPageExcluded(5, [[3, 7]])\n * // → true\n *\n * isPageExcluded(5, [[10, 20]])\n * // → false\n */\nexport const isPageExcluded = (pageId: number, excludeList: PageRange[] | undefined): boolean => {\n if (!excludeList || excludeList.length === 0) {\n return false;\n }\n for (const item of excludeList) {\n if (typeof item === 'number') {\n if (pageId === item) {\n return true;\n }\n } else {\n const [from, to] = item;\n if (pageId >= from && pageId <= to) {\n return true;\n }\n }\n }\n return false;\n};\n\n/**\n * Checks if a page ID is within a breakpoint's min/max range and not excluded.\n *\n * @param pageId - Page ID to check\n * @param rule - Breakpoint rule with optional min/max/exclude constraints\n * @returns True if page is within valid range\n *\n * @example\n * isInBreakpointRange(50, { pattern: '\\\\n', min: 10, max: 100 })\n * // → true\n *\n * isInBreakpointRange(5, { pattern: '\\\\n', min: 10 })\n * // → false (below min)\n */\nexport const isInBreakpointRange = (pageId: number, rule: BreakpointRule): boolean => {\n if (rule.min !== undefined && pageId < rule.min) {\n return false;\n }\n if (rule.max !== undefined && pageId > rule.max) {\n return false;\n }\n return !isPageExcluded(pageId, rule.exclude);\n};\n\n/**\n * Builds an exclude set from a PageRange array for O(1) lookups.\n *\n * @param excludeList - List of page IDs or [from, to] ranges\n * @returns Set of all excluded page IDs\n *\n * @remarks\n * This expands ranges into explicit page IDs for fast membership checks. For typical\n * book-scale inputs (thousands of pages), this is small and keeps downstream logic\n * simple and fast. 
If you expect extremely large ranges (e.g., millions of pages),\n * consider avoiding broad excludes or introducing a range-based membership structure.\n *\n * @example\n * buildExcludeSet([1, 5, [10, 12]])\n * // → Set { 1, 5, 10, 11, 12 }\n */\nexport const buildExcludeSet = (excludeList: PageRange[] | undefined): Set<number> => {\n const excludeSet = new Set<number>();\n for (const item of excludeList || []) {\n if (typeof item === 'number') {\n excludeSet.add(item);\n } else {\n for (let i = item[0]; i <= item[1]; i++) {\n excludeSet.add(i);\n }\n }\n }\n return excludeSet;\n};\n\n/**\n * Creates a segment with optional to and meta fields.\n * Returns null if content is empty after trimming.\n *\n * @param content - Segment content\n * @param fromPageId - Starting page ID\n * @param toPageId - Optional ending page ID (omitted if same as from)\n * @param meta - Optional metadata to attach\n * @returns Segment object or null if empty\n *\n * @example\n * createSegment('Hello world', 1, 3, { chapter: 1 })\n * // → { content: 'Hello world', from: 1, to: 3, meta: { chapter: 1 } }\n *\n * createSegment(' ', 1, undefined, undefined)\n * // → null (empty content)\n */\nexport const createSegment = (\n content: string,\n fromPageId: number,\n toPageId: number | undefined,\n meta: Record<string, unknown> | undefined,\n): Segment | null => {\n const trimmed = content.trim();\n if (!trimmed) {\n return null;\n }\n const seg: Segment = { content: trimmed, from: fromPageId };\n if (toPageId !== undefined && toPageId !== fromPageId) {\n seg.to = toPageId;\n }\n if (meta) {\n seg.meta = meta;\n }\n return seg;\n};\n\n/** Expanded breakpoint with pre-compiled regex and exclude set */\nexport type ExpandedBreakpoint = {\n rule: BreakpointRule;\n regex: RegExp | null;\n excludeSet: Set<number>;\n skipWhenRegex: RegExp | null;\n};\n\n/** Function type for pattern processing */\nexport type PatternProcessor = (pattern: string) => string;\n\n/**\n * Expands breakpoint patterns and pre-computes exclude sets.\n *\n * @param breakpoints - Array of breakpoint patterns or rules\n * @param processPattern - Function to expand tokens in patterns\n * @returns Array of expanded breakpoints with compiled regexes\n *\n * @remarks\n * This function compiles regex patterns dynamically. This can be a ReDoS vector\n * if patterns come from untrusted sources. In typical usage, breakpoint rules\n * are application configuration, not user input.\n */\nexport const expandBreakpoints = (breakpoints: Breakpoint[], processPattern: PatternProcessor): ExpandedBreakpoint[] =>\n breakpoints.map((bp) => {\n const rule = normalizeBreakpoint(bp);\n const excludeSet = buildExcludeSet(rule.exclude);\n const skipWhenRegex =\n rule.skipWhen !== undefined\n ? (() => {\n const expandedSkip = processPattern(rule.skipWhen);\n try {\n return new RegExp(expandedSkip, 'mu');\n } catch (error) {\n const message = error instanceof Error ? error.message : String(error);\n throw new Error(`Invalid breakpoint skipWhen regex: ${rule.skipWhen}\\n Cause: ${message}`);\n }\n })()\n : null;\n if (rule.pattern === '') {\n return { excludeSet, regex: null, rule, skipWhenRegex };\n }\n const expanded = processPattern(rule.pattern);\n try {\n return { excludeSet, regex: new RegExp(expanded, 'gmu'), rule, skipWhenRegex };\n } catch (error) {\n const message = error instanceof Error ? 
error.message : String(error);\n throw new Error(`Invalid breakpoint regex: ${rule.pattern}\\n Cause: ${message}`);\n }\n });\n\n/** Normalized page data for efficient lookups */\nexport type NormalizedPage = { content: string; length: number; index: number };\n\n/**\n * Applies a configured joiner at detected page boundaries within a multi-page content chunk.\n *\n * This is used for breakpoint-generated segments which don't have access to the original\n * `pageMap.pageBreaks` offsets. We detect page starts sequentially by searching for each page's\n * prefix after the previous boundary, then replace ONLY the single newline immediately before\n * that page start.\n *\n * This avoids converting real in-page newlines, while still normalizing page joins consistently.\n */\nexport const applyPageJoinerBetweenPages = (\n content: string,\n fromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n joiner: 'space' | 'newline',\n): string => {\n if (joiner === 'newline' || fromIdx >= toIdx || !content.includes('\\n')) {\n return content;\n }\n\n let updated = content;\n let searchFrom = 0;\n\n for (let pi = fromIdx + 1; pi <= toIdx; pi++) {\n const pageData = normalizedPages.get(pageIds[pi]);\n if (!pageData) {\n continue;\n }\n\n const found = findPrefixPositionInContent(updated, pageData.content.trimStart(), searchFrom);\n if (found > 0 && updated[found - 1] === '\\n') {\n updated = `${updated.slice(0, found - 1)} ${updated.slice(found)}`;\n }\n if (found > 0) {\n searchFrom = found;\n }\n }\n\n return updated;\n};\n\n/**\n * Finds the position of a page prefix in content, trying multiple prefix lengths.\n */\nconst findPrefixPositionInContent = (content: string, trimmedPageContent: string, searchFrom: number): number => {\n for (const len of JOINER_PREFIX_LENGTHS) {\n const prefix = trimmedPageContent.slice(0, Math.min(len, trimmedPageContent.length)).trim();\n if (!prefix) {\n continue;\n }\n const pos = content.indexOf(prefix, searchFrom);\n if (pos > 0) {\n return pos;\n }\n }\n return -1;\n};\n\n/**\n * Estimates how far into the current page `remainingContent` begins.\n *\n * During breakpoint processing, `remainingContent` can begin mid-page after a previous split.\n * When that happens, raw cumulative page offsets (computed from full page starts) can overestimate\n * expected boundary positions. This helper computes an approximate starting offset by matching\n * a short prefix of `remainingContent` inside the current page content.\n */\nexport const estimateStartOffsetInCurrentPage = (\n remainingContent: string,\n currentFromIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const currentPageData = normalizedPages.get(pageIds[currentFromIdx]);\n if (!currentPageData) {\n return 0;\n }\n\n const remStart = remainingContent.trimStart().slice(0, Math.min(60, remainingContent.length));\n const needle = remStart.slice(0, Math.min(30, remStart.length));\n if (!needle) {\n return 0;\n }\n\n const idx = currentPageData.content.indexOf(needle);\n return idx > 0 ? 
idx : 0;\n};\n\n/**\n * Attempts to find the start position of a target page within remainingContent,\n * anchored near an expected boundary position to reduce collisions.\n *\n * This is used to define breakpoint windows in terms of actual content being split, rather than\n * raw per-page offsets which can desync when structural rules strip markers.\n */\nexport const findPageStartNearExpectedBoundary = (\n remainingContent: string,\n _currentFromIdx: number, // unused but kept for API compatibility\n targetPageIdx: number,\n expectedBoundary: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const targetPageData = normalizedPages.get(pageIds[targetPageIdx]);\n if (!targetPageData) {\n return -1;\n }\n\n // Anchor search near the expected boundary to avoid matching repeated phrases earlier in content.\n const approx = Math.min(Math.max(0, expectedBoundary), remainingContent.length);\n const searchStart = Math.max(0, approx - 10_000);\n const searchEnd = Math.min(remainingContent.length, approx + 2_000);\n\n // The target page content might be truncated in the current segment due to structural split points\n // early in that page (e.g. headings). Use progressively shorter prefixes.\n const targetTrimmed = targetPageData.content.trimStart();\n for (const len of WINDOW_PREFIX_LENGTHS) {\n const prefix = targetTrimmed.slice(0, Math.min(len, targetTrimmed.length)).trim();\n if (!prefix) {\n continue;\n }\n\n let pos = remainingContent.indexOf(prefix, searchStart);\n while (pos !== -1 && pos <= searchEnd) {\n // Prefer matches that look like page boundaries (preceded by whitespace).\n if (pos > 0 && /\\s/.test(remainingContent[pos - 1] ?? '')) {\n return pos;\n }\n pos = remainingContent.indexOf(prefix, pos + 1);\n }\n\n // Fallback: take the last occurrence at or before approx (still anchored).\n const last = remainingContent.lastIndexOf(prefix, approx);\n if (last > 0) {\n return last;\n }\n }\n\n return -1;\n};\n\n/**\n * Finds the end position of a breakpoint window inside `remainingContent`.\n *\n * The window end is defined as the start of the page AFTER `windowEndIdx` (i.e. `windowEndIdx + 1`),\n * found within the actual `remainingContent` string being split. This avoids relying on raw page offsets\n * that can diverge when structural rules strip markers (e.g. `lineStartsAfter`).\n */\nexport const findBreakpointWindowEndPosition = (\n remainingContent: string,\n currentFromIdx: number,\n windowEndIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n cumulativeOffsets: number[],\n): number => {\n // If the window already reaches the end of the segment, the window is the remaining content.\n if (windowEndIdx >= toIdx) {\n return remainingContent.length;\n }\n\n const desiredNextIdx = windowEndIdx + 1;\n const minNextIdx = currentFromIdx + 1;\n const maxNextIdx = Math.min(desiredNextIdx, toIdx);\n\n const startOffsetInCurrentPage = estimateStartOffsetInCurrentPage(\n remainingContent,\n currentFromIdx,\n pageIds,\n normalizedPages,\n );\n\n // If we can't find the boundary for the desired next page, progressively fall back\n // to earlier page boundaries (smaller window), which is conservative but still correct.\n for (let nextIdx = maxNextIdx; nextIdx >= minNextIdx; nextIdx--) {\n const expectedBoundary =\n cumulativeOffsets[nextIdx] !== undefined && cumulativeOffsets[currentFromIdx] !== undefined\n ? 
Math.max(0, cumulativeOffsets[nextIdx] - cumulativeOffsets[currentFromIdx] - startOffsetInCurrentPage)\n : remainingContent.length;\n\n const pos = findPageStartNearExpectedBoundary(\n remainingContent,\n currentFromIdx,\n nextIdx,\n expectedBoundary,\n pageIds,\n normalizedPages,\n );\n if (pos > 0) {\n return pos;\n }\n }\n\n // As a last resort (should be rare), treat the entire remaining content as the window.\n // This may under-enforce maxPages if boundary detection fails, but avoids infinite loops.\n return remainingContent.length;\n};\n\n/**\n * Finds exclusion-based break position using raw cumulative offsets.\n *\n * This is used to ensure pages excluded by breakpoints are never merged into the same output segment.\n * Returns a break position relative to the start of `remainingContent` (i.e. the currentFromIdx start).\n */\nexport const findExclusionBreakPosition = (\n currentFromIdx: number,\n windowEndIdx: number,\n toIdx: number,\n pageIds: number[],\n expandedBreakpoints: Array<{ excludeSet: Set<number> }>,\n cumulativeOffsets: number[],\n): number => {\n const startingPageId = pageIds[currentFromIdx];\n const startingPageExcluded = expandedBreakpoints.some((bp) => bp.excludeSet.has(startingPageId));\n if (startingPageExcluded && currentFromIdx < toIdx) {\n // Output just this one page as a segment (break at next page boundary)\n return cumulativeOffsets[currentFromIdx + 1] - cumulativeOffsets[currentFromIdx];\n }\n\n // Find the first excluded page AFTER the starting page (within window) and split BEFORE it\n for (let pageIdx = currentFromIdx + 1; pageIdx <= windowEndIdx; pageIdx++) {\n const pageId = pageIds[pageIdx];\n const isExcluded = expandedBreakpoints.some((bp) => bp.excludeSet.has(pageId));\n if (isExcluded) {\n return cumulativeOffsets[pageIdx] - cumulativeOffsets[currentFromIdx];\n }\n }\n return -1;\n};\n\n/**\n * Finds the actual ending page index by searching backwards for page content prefix.\n * Used to determine which page a segment actually ends on based on content matching.\n *\n * @param pieceContent - Content of the segment piece\n * @param currentFromIdx - Current starting index in pageIds\n * @param toIdx - Maximum ending index to search\n * @param pageIds - Array of page IDs\n * @param normalizedPages - Map of page ID to normalized content\n * @returns The actual ending page index\n */\nexport const findActualEndPage = (\n pieceContent: string,\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n for (let pi = toIdx; pi > currentFromIdx; pi--) {\n const pageData = normalizedPages.get(pageIds[pi]);\n if (pageData) {\n const checkPortion = pageData.content.slice(0, Math.min(30, pageData.length));\n if (checkPortion.length > 0 && pieceContent.indexOf(checkPortion) > 0) {\n return pi;\n }\n }\n }\n return currentFromIdx;\n};\n\n/**\n * Finds the actual starting page index by searching forwards for page content prefix.\n * Used to determine which page content actually starts from based on content matching.\n *\n * This is the counterpart to findActualEndPage - it searches forward to find which\n * page the content starts on, rather than which page it ends on.\n *\n * @param pieceContent - Content of the segment piece\n * @param currentFromIdx - Current starting index in pageIds\n * @param toIdx - Maximum ending index to search\n * @param pageIds - Array of page IDs\n * @param normalizedPages - Map of page ID to normalized content\n * @returns The actual starting page index\n 
*/\nexport const findActualStartPage = (\n pieceContent: string,\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const trimmedPiece = pieceContent.trimStart();\n if (!trimmedPiece) {\n return currentFromIdx;\n }\n\n // Search forward from currentFromIdx to find which page the content starts on\n for (let pi = currentFromIdx; pi <= toIdx; pi++) {\n const pageData = normalizedPages.get(pageIds[pi]);\n if (pageData) {\n const pagePrefix = pageData.content.slice(0, Math.min(30, pageData.length)).trim();\n const piecePrefix = trimmedPiece.slice(0, Math.min(30, trimmedPiece.length));\n\n // Check both directions:\n // 1. pieceContent starts with page prefix (page content is longer)\n // 2. page content starts with pieceContent prefix (pieceContent is shorter)\n if (pagePrefix.length > 0) {\n if (trimmedPiece.startsWith(pagePrefix)) {\n return pi;\n }\n if (pageData.content.trimStart().startsWith(piecePrefix)) {\n return pi;\n }\n }\n }\n }\n return currentFromIdx;\n};\n\n/** Context required for finding break positions */\nexport type BreakpointContext = {\n pageIds: number[];\n normalizedPages: Map<number, NormalizedPage>;\n expandedBreakpoints: ExpandedBreakpoint[];\n prefer: 'longer' | 'shorter';\n};\n\n/**\n * Checks if any page in a range is excluded by the given exclude set.\n *\n * @param excludeSet - Set of excluded page IDs\n * @param pageIds - Array of page IDs\n * @param fromIdx - Start index (inclusive)\n * @param toIdx - End index (inclusive)\n * @returns True if any page in range is excluded\n */\nexport const hasExcludedPageInRange = (\n excludeSet: Set<number>,\n pageIds: number[],\n fromIdx: number,\n toIdx: number,\n): boolean => {\n if (excludeSet.size === 0) {\n return false;\n }\n for (let pageIdx = fromIdx; pageIdx <= toIdx; pageIdx++) {\n if (excludeSet.has(pageIds[pageIdx])) {\n return true;\n }\n }\n return false;\n};\n\n/**\n * Finds the position of the next page content within remaining content.\n * Returns -1 if not found.\n *\n * @param remainingContent - Content to search in\n * @param nextPageData - Normalized data for the next page\n * @returns Position of next page content, or -1 if not found\n */\nexport const findNextPagePosition = (remainingContent: string, nextPageData: NormalizedPage): number => {\n const searchPrefix = nextPageData.content.trim().slice(0, Math.min(30, nextPageData.length));\n if (searchPrefix.length === 0) {\n return -1;\n }\n const pos = remainingContent.indexOf(searchPrefix);\n return pos > 0 ? 
pos : -1;\n};\n\n/**\n * Finds matches within a window and returns the selected position based on preference.\n *\n * @param windowContent - Content to search\n * @param regex - Regex to match\n * @param prefer - 'longer' for last match, 'shorter' for first match\n * @returns Break position after the selected match, or -1 if no matches\n */\nexport const findPatternBreakPosition = (\n windowContent: string,\n regex: RegExp,\n prefer: 'longer' | 'shorter',\n): number => {\n // OPTIMIZATION: Stream matches instead of collecting all into an array.\n // Only track first and last match to avoid allocating large arrays for dense patterns.\n let first: { index: number; length: number } | undefined;\n let last: { index: number; length: number } | undefined;\n for (const m of windowContent.matchAll(regex)) {\n const match = { index: m.index, length: m[0].length };\n if (!first) {\n first = match;\n }\n last = match;\n }\n if (!first) {\n return -1;\n }\n const selected = prefer === 'longer' ? last! : first;\n return selected.index + selected.length;\n};\n\n/**\n * Handles page boundary breakpoint (empty pattern).\n * Returns break position or -1 if no valid position found.\n */\nconst handlePageBoundaryBreak = (\n remainingContent: string,\n windowEndIdx: number,\n windowEndPosition: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const nextPageIdx = windowEndIdx + 1;\n if (nextPageIdx <= toIdx) {\n const nextPageData = normalizedPages.get(pageIds[nextPageIdx]);\n if (nextPageData) {\n const pos = findNextPagePosition(remainingContent, nextPageData);\n if (pos > 0) {\n return Math.min(pos, windowEndPosition, remainingContent.length);\n }\n }\n }\n return Math.min(windowEndPosition, remainingContent.length);\n};\n\n/**\n * Tries to find a break position within the current window using breakpoint patterns.\n * Returns the break position or -1 if no suitable break was found.\n *\n * @param remainingContent - Content remaining to be segmented\n * @param currentFromIdx - Current starting page index\n * @param toIdx - Ending page index\n * @param windowEndIdx - Maximum window end index\n * @param ctx - Breakpoint context with page data and patterns\n * @returns Break position in the content, or -1 if no break found\n */\nexport const findBreakPosition = (\n remainingContent: string,\n currentFromIdx: number,\n toIdx: number,\n windowEndIdx: number,\n windowEndPosition: number,\n ctx: BreakpointContext,\n): number => {\n const { pageIds, normalizedPages, expandedBreakpoints, prefer } = ctx;\n\n for (const { rule, regex, excludeSet, skipWhenRegex } of expandedBreakpoints) {\n // Check if this breakpoint applies to the current segment's starting page\n if (!isInBreakpointRange(pageIds[currentFromIdx], rule)) {\n continue;\n }\n\n // Check if ANY page in the current WINDOW is excluded (not the entire segment)\n if (hasExcludedPageInRange(excludeSet, pageIds, currentFromIdx, windowEndIdx)) {\n continue;\n }\n\n // Check if content matches skipWhen pattern (pre-compiled)\n if (skipWhenRegex?.test(remainingContent)) {\n continue;\n }\n\n // Handle page boundary (empty pattern)\n if (regex === null) {\n return handlePageBoundaryBreak(\n remainingContent,\n windowEndIdx,\n windowEndPosition,\n toIdx,\n pageIds,\n normalizedPages,\n );\n }\n\n // Find matches within window\n const windowContent = remainingContent.slice(0, Math.min(windowEndPosition, remainingContent.length));\n const breakPos = findPatternBreakPosition(windowContent, regex, prefer);\n 
if (breakPos > 0) {\n return breakPos;\n }\n }\n\n return -1;\n};\n","/**\n * Breakpoint post-processing engine extracted from segmenter.ts.\n *\n * This module is intentionally split into small helpers to reduce cognitive complexity\n * and allow unit testing of tricky edge cases (window sizing, next-page advancement, etc.).\n */\n\nimport {\n applyPageJoinerBetweenPages,\n type BreakpointContext,\n createSegment,\n expandBreakpoints,\n findActualEndPage,\n findActualStartPage,\n findBreakPosition,\n findBreakpointWindowEndPosition,\n findExclusionBreakPosition,\n hasExcludedPageInRange,\n type NormalizedPage,\n} from './breakpoint-utils.js';\nimport type { Breakpoint, Logger, Page, Segment } from './types.js';\n\nexport type BreakpointPatternProcessor = (pattern: string) => string;\n\nconst buildPageIdToIndexMap = (pageIds: number[]) => new Map(pageIds.map((id, i) => [id, i]));\n\nconst buildNormalizedPagesMap = (pages: Page[], normalizedContent: string[]) => {\n const normalizedPages = new Map<number, NormalizedPage>();\n for (let i = 0; i < pages.length; i++) {\n const content = normalizedContent[i];\n normalizedPages.set(pages[i].id, { content, index: i, length: content.length });\n }\n return normalizedPages;\n};\n\nconst buildCumulativeOffsets = (pageIds: number[], normalizedPages: Map<number, NormalizedPage>) => {\n const cumulativeOffsets: number[] = [0];\n let totalOffset = 0;\n for (let i = 0; i < pageIds.length; i++) {\n const pageData = normalizedPages.get(pageIds[i]);\n totalOffset += pageData ? pageData.length : 0;\n if (i < pageIds.length - 1) {\n totalOffset += 1; // separator between pages\n }\n cumulativeOffsets.push(totalOffset);\n }\n return cumulativeOffsets;\n};\n\nconst hasAnyExclusionsInRange = (\n expandedBreakpoints: Array<{ excludeSet: Set<number> }>,\n pageIds: number[],\n fromIdx: number,\n toIdx: number,\n): boolean => expandedBreakpoints.some((bp) => hasExcludedPageInRange(bp.excludeSet, pageIds, fromIdx, toIdx));\n\nexport const computeWindowEndIdx = (\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n maxPages: number,\n): number => {\n const currentPageId = pageIds[currentFromIdx];\n const maxWindowPageId = currentPageId + maxPages;\n let windowEndIdx = currentFromIdx;\n for (let i = currentFromIdx; i <= toIdx; i++) {\n if (pageIds[i] <= maxWindowPageId) {\n windowEndIdx = i;\n } else {\n break;\n }\n }\n return windowEndIdx;\n};\n\nconst computeRemainingSpan = (currentFromIdx: number, toIdx: number, pageIds: number[]) =>\n pageIds[toIdx] - pageIds[currentFromIdx];\n\nconst createFinalSegment = (\n remainingContent: string,\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n meta: Segment['meta'] | undefined,\n includeMeta: boolean,\n) =>\n createSegment(\n remainingContent,\n pageIds[currentFromIdx],\n currentFromIdx !== toIdx ? pageIds[toIdx] : undefined,\n includeMeta ? meta : undefined,\n );\n\ntype PiecePages = { actualEndIdx: number; actualStartIdx: number };\n\nconst computePiecePages = (\n pieceContent: string,\n currentFromIdx: number,\n toIdx: number,\n windowEndIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): PiecePages => {\n const actualStartIdx = pieceContent\n ? findActualStartPage(pieceContent, currentFromIdx, toIdx, pageIds, normalizedPages)\n : currentFromIdx;\n const actualEndIdx = pieceContent\n ? 
findActualEndPage(pieceContent, actualStartIdx, windowEndIdx, pageIds, normalizedPages)\n : currentFromIdx;\n return { actualEndIdx, actualStartIdx };\n};\n\nexport const computeNextFromIdx = (\n remainingContent: string,\n actualEndIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n let nextFromIdx = actualEndIdx;\n if (remainingContent && actualEndIdx + 1 <= toIdx) {\n const nextPageData = normalizedPages.get(pageIds[actualEndIdx + 1]);\n if (nextPageData) {\n const nextPrefix = nextPageData.content.slice(0, Math.min(30, nextPageData.length));\n const remainingPrefix = remainingContent.trimStart().slice(0, Math.min(30, remainingContent.length));\n // Check both directions:\n // 1. remainingContent starts with page prefix (page is longer or equal)\n // 2. page content starts with remaining prefix (remaining is shorter)\n if (\n nextPrefix &&\n (remainingContent.startsWith(nextPrefix) || nextPageData.content.startsWith(remainingPrefix))\n ) {\n nextFromIdx = actualEndIdx + 1;\n }\n }\n }\n return nextFromIdx;\n};\n\nconst createPieceSegment = (\n pieceContent: string,\n actualStartIdx: number,\n actualEndIdx: number,\n pageIds: number[],\n meta: Segment['meta'] | undefined,\n includeMeta: boolean,\n): Segment | null =>\n createSegment(\n pieceContent,\n pageIds[actualStartIdx],\n actualEndIdx > actualStartIdx ? pageIds[actualEndIdx] : undefined,\n includeMeta ? meta : undefined,\n );\n\nconst processOversizedSegment = (\n segment: Segment,\n fromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n cumulativeOffsets: number[],\n expandedBreakpoints: ReturnType<typeof expandBreakpoints>,\n maxPages: number,\n prefer: 'longer' | 'shorter',\n logger?: Logger,\n): Segment[] => {\n const result: Segment[] = [];\n let remainingContent = segment.content;\n let currentFromIdx = fromIdx;\n let isFirstPiece = true;\n let iterationCount = 0;\n const maxIterations = 10000;\n\n while (currentFromIdx <= toIdx) {\n iterationCount++;\n if (iterationCount > maxIterations) {\n logger?.error?.('INFINITE LOOP DETECTED! 
Breaking out, you should report this bug', {\n iterationCount: maxIterations,\n });\n break;\n }\n\n const remainingHasExclusions = hasAnyExclusionsInRange(expandedBreakpoints, pageIds, currentFromIdx, toIdx);\n const remainingSpan = computeRemainingSpan(currentFromIdx, toIdx, pageIds);\n if (remainingSpan <= maxPages && !remainingHasExclusions) {\n const finalSeg = createFinalSegment(\n remainingContent,\n currentFromIdx,\n toIdx,\n pageIds,\n segment.meta,\n isFirstPiece,\n );\n if (finalSeg) {\n result.push(finalSeg);\n }\n break;\n }\n\n const windowEndIdx = computeWindowEndIdx(currentFromIdx, toIdx, pageIds, maxPages);\n logger?.debug?.(`[breakpoints] iteration=${iterationCount}`, {\n currentFromIdx,\n currentFromPageId: pageIds[currentFromIdx],\n remainingContentStart: remainingContent.slice(0, 50),\n remainingContentLength: remainingContent.length,\n remainingSpan,\n toIdx,\n toPageId: pageIds[toIdx],\n windowEndIdx,\n windowEndPageId: pageIds[windowEndIdx],\n });\n const windowEndPosition = findBreakpointWindowEndPosition(\n remainingContent,\n currentFromIdx,\n windowEndIdx,\n toIdx,\n pageIds,\n normalizedPages,\n cumulativeOffsets,\n );\n\n const windowHasExclusions = hasAnyExclusionsInRange(expandedBreakpoints, pageIds, currentFromIdx, windowEndIdx);\n let breakPosition = -1;\n\n if (windowHasExclusions) {\n breakPosition = findExclusionBreakPosition(\n currentFromIdx,\n windowEndIdx,\n toIdx,\n pageIds,\n expandedBreakpoints,\n cumulativeOffsets,\n );\n }\n\n if (breakPosition <= 0) {\n const breakpointCtx: BreakpointContext = { expandedBreakpoints, normalizedPages, pageIds, prefer };\n breakPosition = findBreakPosition(\n remainingContent,\n currentFromIdx,\n toIdx,\n windowEndIdx,\n windowEndPosition,\n breakpointCtx,\n );\n }\n\n if (breakPosition <= 0) {\n // No pattern matched: fall back to the window boundary.\n breakPosition = windowEndPosition;\n }\n\n const pieceContent = remainingContent.slice(0, breakPosition).trim();\n logger?.debug?.('[breakpoints] selectedBreak', {\n breakPosition,\n pieceContentEnd: pieceContent.slice(-50),\n pieceContentLength: pieceContent.length,\n windowEndPosition,\n });\n\n const { actualEndIdx, actualStartIdx } = computePiecePages(\n pieceContent,\n currentFromIdx,\n toIdx,\n windowEndIdx,\n pageIds,\n normalizedPages,\n );\n\n if (pieceContent) {\n const pieceSeg = createPieceSegment(\n pieceContent,\n actualStartIdx,\n actualEndIdx,\n pageIds,\n segment.meta,\n isFirstPiece,\n );\n if (pieceSeg) {\n result.push(pieceSeg);\n }\n }\n\n remainingContent = remainingContent.slice(breakPosition).trim();\n logger?.debug?.('[breakpoints] afterSlice', {\n actualEndIdx,\n remainingContentLength: remainingContent.length,\n remainingContentStart: remainingContent.slice(0, 60),\n });\n if (!remainingContent) {\n logger?.debug?.('[breakpoints] done: no remaining content');\n break;\n }\n\n currentFromIdx = computeNextFromIdx(remainingContent, actualEndIdx, toIdx, pageIds, normalizedPages);\n logger?.debug?.('[breakpoints] nextIteration', {\n currentFromIdx,\n currentFromPageId: pageIds[currentFromIdx],\n });\n isFirstPiece = false;\n }\n\n logger?.debug?.('[breakpoints] processOversizedSegmentDone', { resultCount: result.length });\n return result;\n};\n\n/**\n * Applies breakpoints to oversized segments.\n *\n * Note: This is an internal engine used by `segmentPages()`.\n */\nexport const applyBreakpoints = (\n segments: Segment[],\n pages: Page[],\n normalizedContent: string[],\n maxPages: number,\n breakpoints: Breakpoint[],\n prefer: 'longer' | 
'shorter',\n patternProcessor: BreakpointPatternProcessor,\n logger?: Logger,\n pageJoiner: 'space' | 'newline' = 'space',\n): Segment[] => {\n const pageIds = pages.map((p) => p.id);\n const pageIdToIndex = buildPageIdToIndexMap(pageIds);\n const normalizedPages = buildNormalizedPagesMap(pages, normalizedContent);\n const cumulativeOffsets = buildCumulativeOffsets(pageIds, normalizedPages);\n const expandedBreakpoints = expandBreakpoints(breakpoints, patternProcessor);\n\n const result: Segment[] = [];\n\n logger?.info?.('Starting breakpoint processing', { maxPages, segmentCount: segments.length });\n\n logger?.debug?.('[breakpoints] inputSegments', {\n segmentCount: segments.length,\n segments: segments.map((s) => ({ contentLength: s.content.length, from: s.from, to: s.to })),\n });\n\n for (const segment of segments) {\n const fromIdx = pageIdToIndex.get(segment.from) ?? -1;\n const toIdx = segment.to !== undefined ? (pageIdToIndex.get(segment.to) ?? fromIdx) : fromIdx;\n\n const segmentSpan = (segment.to ?? segment.from) - segment.from;\n const hasExclusions = hasAnyExclusionsInRange(expandedBreakpoints, pageIds, fromIdx, toIdx);\n\n if (segmentSpan <= maxPages && !hasExclusions) {\n result.push(segment);\n continue;\n }\n\n const broken = processOversizedSegment(\n segment,\n fromIdx,\n toIdx,\n pageIds,\n normalizedPages,\n cumulativeOffsets,\n expandedBreakpoints,\n maxPages,\n prefer,\n logger,\n );\n // Normalize page joins for breakpoint-created pieces\n result.push(\n ...broken.map((s) => {\n const segFromIdx = pageIdToIndex.get(s.from) ?? -1;\n const segToIdx = s.to !== undefined ? (pageIdToIndex.get(s.to) ?? segFromIdx) : segFromIdx;\n if (segFromIdx >= 0 && segToIdx > segFromIdx) {\n return {\n ...s,\n content: applyPageJoinerBetweenPages(\n s.content,\n segFromIdx,\n segToIdx,\n pageIds,\n normalizedPages,\n pageJoiner,\n ),\n };\n }\n return s;\n }),\n );\n }\n\n logger?.info?.('Breakpoint processing completed', { resultCount: result.length });\n return result;\n};\n","/**\n * Utility functions for regex matching and result processing.\n *\n * These functions were extracted from `segmenter.ts` to reduce complexity\n * and enable independent testing. 
They handle match filtering, capture\n * extraction, and occurrence-based selection.\n *\n * @module match-utils\n */\n\nimport { isPageExcluded } from './breakpoint-utils.js';\nimport type { SplitRule } from './types.js';\n\n/**\n * Result of a regex match with position and optional capture information.\n *\n * Represents a single match found by the segmentation engine, including\n * its position in the concatenated content and any captured values.\n */\nexport type MatchResult = {\n /**\n * Start offset (inclusive) of the match in the content string.\n */\n start: number;\n\n /**\n * End offset (exclusive) of the match in the content string.\n *\n * The matched text is `content.slice(start, end)`.\n */\n end: number;\n\n /**\n * Content captured by `lineStartsAfter` patterns.\n *\n * For patterns like `^٦٦٩٦ - (.*)`, this contains the text\n * matched by the `(.*)` group (the rest of the line after the marker).\n */\n captured?: string;\n\n /**\n * Named capture group values from `{{token:name}}` syntax.\n *\n * Keys are the capture names, values are the matched strings.\n *\n * @example\n * // For pattern '{{raqms:num}} {{dash}}'\n * { num: '٦٦٩٦' }\n */\n namedCaptures?: Record<string, string>;\n};\n\n/**\n * Extracts named capture groups from a regex match.\n *\n * Only includes groups that are in the `captureNames` list and have\n * defined values. This filters out positional captures and ensures\n * only explicitly requested named captures are returned.\n *\n * @param groups - The `match.groups` object from `RegExp.exec()`\n * @param captureNames - List of capture names to extract (from `{{token:name}}` syntax)\n * @returns Object with capture name → value pairs, or `undefined` if none found\n *\n * @example\n * const match = /(?<num>[٠-٩]+) -/.exec('٦٦٩٦ - text');\n * extractNamedCaptures(match.groups, ['num'])\n * // → { num: '٦٦٩٦' }\n *\n * @example\n * // No matching captures\n * extractNamedCaptures({}, ['num'])\n * // → undefined\n *\n * @example\n * // Undefined groups\n * extractNamedCaptures(undefined, ['num'])\n * // → undefined\n */\nexport const extractNamedCaptures = (\n groups: Record<string, string> | undefined,\n captureNames: string[],\n): Record<string, string> | undefined => {\n if (!groups || captureNames.length === 0) {\n return undefined;\n }\n\n const namedCaptures: Record<string, string> = {};\n for (const name of captureNames) {\n if (groups[name] !== undefined) {\n namedCaptures[name] = groups[name];\n }\n }\n\n return Object.keys(namedCaptures).length > 0 ? namedCaptures : undefined;\n};\n\n/**\n * Gets the last defined positional capture group from a match array.\n *\n * Used for `lineStartsAfter` patterns where the content capture (`.*`)\n * is always at the end of the pattern. 
Named captures may shift the\n * positional indices, so we iterate backward to find the actual content.\n *\n * @param match - RegExp exec result array\n * @returns The last defined capture group value, or `undefined` if none\n *\n * @example\n * // Pattern: ^(?:(?<num>[٠-٩]+) - )(.*)\n * // Match array: ['٦٦٩٦ - content', '٦٦٩٦', 'content']\n * getLastPositionalCapture(match)\n * // → 'content'\n *\n * @example\n * // No captures\n * getLastPositionalCapture(['full match'])\n * // → undefined\n */\nexport const getLastPositionalCapture = (match: RegExpExecArray): string | undefined => {\n if (match.length <= 1) {\n return undefined;\n }\n\n for (let i = match.length - 1; i >= 1; i--) {\n if (match[i] !== undefined) {\n return match[i];\n }\n }\n return undefined;\n};\n\n/**\n * Filters matches to only include those within page ID constraints.\n *\n * Applies the `min`, `max`, and `exclude` constraints from a rule to filter out\n * matches that occur on pages outside the allowed range or explicitly excluded.\n *\n * @param matches - Array of match results to filter\n * @param rule - Rule containing `min`, `max`, and/or `exclude` page constraints\n * @param getId - Function that returns the page ID for a given offset\n * @returns Filtered array containing only matches within constraints\n *\n * @example\n * const matches = [\n * { start: 0, end: 10 }, // Page 1\n * { start: 100, end: 110 }, // Page 5\n * { start: 200, end: 210 }, // Page 10\n * ];\n * filterByConstraints(matches, { min: 3, max: 8 }, getId)\n * // → [{ start: 100, end: 110 }] (only page 5 match)\n */\nexport const filterByConstraints = (\n matches: MatchResult[],\n rule: Pick<SplitRule, 'min' | 'max' | 'exclude'>,\n getId: (offset: number) => number,\n): MatchResult[] => {\n return matches.filter((m) => {\n const id = getId(m.start);\n if (rule.min !== undefined && id < rule.min) {\n return false;\n }\n if (rule.max !== undefined && id > rule.max) {\n return false;\n }\n if (isPageExcluded(id, rule.exclude)) {\n return false;\n }\n return true;\n });\n};\n\n/**\n * Filters matches based on occurrence setting (first, last, or all).\n *\n * Applies occurrence-based selection to a list of matches:\n * - `'all'` or `undefined`: Return all matches (default)\n * - `'first'`: Return only the first match\n * - `'last'`: Return only the last match\n *\n * @param matches - Array of match results to filter\n * @param occurrence - Which occurrence(s) to keep\n * @returns Filtered array based on occurrence setting\n *\n * @example\n * const matches = [{ start: 0 }, { start: 10 }, { start: 20 }];\n *\n * filterByOccurrence(matches, 'first')\n * // → [{ start: 0 }]\n *\n * filterByOccurrence(matches, 'last')\n * // → [{ start: 20 }]\n *\n * filterByOccurrence(matches, 'all')\n * // → [{ start: 0 }, { start: 10 }, { start: 20 }]\n *\n * filterByOccurrence(matches, undefined)\n * // → [{ start: 0 }, { start: 10 }, { start: 20 }] (default: all)\n */\nexport const filterByOccurrence = (matches: MatchResult[], occurrence?: 'first' | 'last' | 'all'): MatchResult[] => {\n if (!matches.length) {\n return [];\n }\n if (occurrence === 'first') {\n return [matches[0]];\n }\n if (occurrence === 'last') {\n return [matches[matches.length - 1]];\n }\n return matches;\n};\n\n/**\n * Groups matches using a sliding window approach based on page ID difference.\n *\n * Uses a lookahead algorithm where `maxSpan` is the maximum page ID difference\n * allowed when looking ahead for the next split point. 
This prefers longer\n * segments by looking as far ahead as allowed before selecting a match.\n *\n * Algorithm:\n * 1. Start from the first page in the pages list\n * 2. Look for matches within `maxSpan` page IDs ahead\n * 3. Apply occurrence filter (e.g., 'last') to select a match\n * 4. If match found, add it; move window to start from the next page after the match\n * 5. If no match in window, skip to the next page and repeat\n *\n * @param matches - Array of match results (must be sorted by start position)\n * @param maxSpan - Maximum page ID difference allowed when looking ahead\n * @param occurrence - Which occurrence(s) to keep within each window\n * @param getId - Function that returns the page ID for a given offset\n * @param pageIds - Sorted array of all page IDs in the content\n * @returns Filtered array with sliding window and occurrence filter applied\n *\n * @example\n * // Pages: [1, 2, 3], maxSpan=1, occurrence='last'\n * // Window from page 1: pages 1-2 (diff <= 1)\n * // Finds last match in pages 1-2, adds it\n * // Next window from page 3: just page 3\n * // Result: segments span pages 1-2 and page 3\n */\nexport const groupBySpanAndFilter = (\n matches: MatchResult[],\n maxSpan: number,\n occurrence: 'first' | 'last' | 'all' | undefined,\n getId: (offset: number) => number,\n pageIds?: number[],\n): MatchResult[] => {\n if (!matches.length) {\n return [];\n }\n\n // Precompute pageId per match once to avoid O(P×M) behavior for large inputs.\n const matchPageIds = matches.map((m) => getId(m.start));\n const uniquePageIds = pageIds ?? [...new Set(matchPageIds)].sort((a, b) => a - b);\n\n if (!uniquePageIds.length) {\n return filterByOccurrence(matches, occurrence);\n }\n\n return collectWindowMatches(matches, matchPageIds, uniquePageIds, maxSpan, occurrence);\n};\n\n/**\n * Collects matches from each sliding window, applying occurrence selection.\n */\nconst collectWindowMatches = (\n matches: MatchResult[],\n matchPageIds: number[],\n uniquePageIds: number[],\n maxSpan: number,\n occurrence: 'first' | 'last' | 'all' | undefined,\n): MatchResult[] => {\n const result: MatchResult[] = [];\n let windowStartIdx = 0;\n let matchIdx = 0;\n\n while (windowStartIdx < uniquePageIds.length) {\n const windowStartPageId = uniquePageIds[windowStartIdx];\n const windowEndPageId = windowStartPageId + maxSpan;\n\n // Advance matchIdx to first match in or after the window start page.\n matchIdx = advanceToWindowStart(matchPageIds, matchIdx, windowStartPageId);\n\n if (matchIdx >= matches.length) {\n break;\n }\n\n // Find range of matches that fall within [windowStartPageId, windowEndPageId]\n const windowMatchEnd = findWindowMatchEnd(matchPageIds, matchIdx, windowEndPageId);\n\n if (windowMatchEnd <= matchIdx) {\n windowStartIdx++;\n continue;\n }\n\n // Apply occurrence selection and add matches\n const { selectedStart, selectedEnd } = selectOccurrenceRange(matchIdx, windowMatchEnd, occurrence);\n for (let i = selectedStart; i < selectedEnd; i++) {\n result.push(matches[i]);\n }\n\n // Advance window past the last selected match's page\n const lastMatchPageId = matchPageIds[selectedEnd - 1];\n while (windowStartIdx < uniquePageIds.length && uniquePageIds[windowStartIdx] <= lastMatchPageId) {\n windowStartIdx++;\n }\n matchIdx = selectedEnd;\n }\n\n return result;\n};\n\n/** Advances matchIdx to first match in or after windowStartPageId. 
*/\nconst advanceToWindowStart = (matchPageIds: number[], matchIdx: number, windowStartPageId: number): number => {\n let idx = matchIdx;\n while (idx < matchPageIds.length && matchPageIds[idx] < windowStartPageId) {\n idx++;\n }\n return idx;\n};\n\n/** Finds the exclusive end index of matches within the window. */\nconst findWindowMatchEnd = (matchPageIds: number[], startIdx: number, windowEndPageId: number): number => {\n let endIdx = startIdx;\n while (endIdx < matchPageIds.length && matchPageIds[endIdx] <= windowEndPageId) {\n endIdx++;\n }\n return endIdx;\n};\n\n/**\n * Computes the range of indices to select based on occurrence setting.\n */\nconst selectOccurrenceRange = (\n start: number,\n endExclusive: number,\n occurrence: 'first' | 'last' | 'all' | undefined,\n): { selectedStart: number; selectedEnd: number } => {\n if (occurrence === 'first') {\n return { selectedEnd: start + 1, selectedStart: start };\n }\n if (occurrence === 'last') {\n return { selectedEnd: endExclusive, selectedStart: endExclusive - 1 };\n }\n return { selectedEnd: endExclusive, selectedStart: start };\n};\n\n/**\n * Checks if any rule in the list allows the given page ID.\n *\n * A rule allows an ID if it falls within the rule's `min`/`max` constraints.\n * Rules without constraints allow all page IDs.\n *\n * This is used to determine whether to create a segment for content\n * that appears before any split points (the \"first segment\").\n *\n * @param rules - Array of rules with optional `min` and `max` constraints\n * @param pageId - Page ID to check\n * @returns `true` if at least one rule allows the page ID\n *\n * @example\n * const rules = [\n * { min: 5, max: 10 }, // Allows pages 5-10\n * { min: 20 }, // Allows pages 20+\n * ];\n *\n * anyRuleAllowsId(rules, 7) // → true (first rule allows)\n * anyRuleAllowsId(rules, 3) // → false (no rule allows)\n * anyRuleAllowsId(rules, 25) // → true (second rule allows)\n *\n * @example\n * // Rules without constraints allow everything\n * anyRuleAllowsId([{}], 999) // → true\n */\nexport const anyRuleAllowsId = (rules: Pick<SplitRule, 'min' | 'max'>[], pageId: number): boolean => {\n return rules.some((r) => {\n const minOk = r.min === undefined || pageId >= r.min;\n const maxOk = r.max === undefined || pageId <= r.max;\n return minOk && maxOk;\n });\n};\n","/**\n * Token-based template system for Arabic text pattern matching.\n *\n * This module provides a human-readable way to define regex patterns using\n * `{{token}}` placeholders that expand to their regex equivalents. It supports\n * named capture groups for extracting matched values into metadata.\n *\n * @module tokens\n *\n * @example\n * // Simple token expansion\n * expandTokens('{{raqms}} {{dash}}')\n * // → '[\\\\u0660-\\\\u0669]+ [-–—ـ]'\n *\n * @example\n * // Named capture groups\n * expandTokensWithCaptures('{{raqms:num}} {{dash}}')\n * // → { pattern: '(?<num>[\\\\u0660-\\\\u0669]+) [-–—ـ]', captureNames: ['num'], hasCaptures: true }\n */\n\n/**\n * Token definitions mapping human-readable token names to regex patterns.\n *\n * Tokens are used in template strings with double-brace syntax:\n * - `{{token}}` - Expands to the pattern (non-capturing in context)\n * - `{{token:name}}` - Expands to a named capture group `(?<name>pattern)`\n * - `{{:name}}` - Captures any content with the given name `(?<name>.+)`\n *\n * @remarks\n * These patterns are designed for Arabic text matching. 
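A hedged sketch of the remarks above as a concrete options object: token patterns stay literal, and diacritic-insensitive matching is opted into per rule via `fuzzy: true`. Only the rule shapes come from the examples above; the `meta.type` labels are arbitrary.

```typescript
import type { SegmentationOptions } from './types.js';

const options: SegmentationOptions = {
    rules: [
        // Chapter/book headings: match كتاب / باب at line start even in fully vocalized text.
        { fuzzy: true, lineStartsWith: ['{{kitab}}', '{{bab}}'], meta: { type: 'heading' }, split: 'at' },
        // Numbered hadith: strip the number + dash marker and keep the number as a named capture.
        { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], meta: { type: 'hadith' }, split: 'at' },
    ],
};
```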
For diacritic-insensitive\n * matching of Arabic patterns, use the `fuzzy: true` option in split rules,\n * which applies `makeDiacriticInsensitive()` to the expanded patterns.\n *\n * @example\n * // Using tokens in a split rule\n * { lineStartsWith: ['{{kitab}}', '{{bab}}'], split: 'at', fuzzy: true }\n *\n * @example\n * // Using tokens with named captures\n * { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' }\n */\n// ─────────────────────────────────────────────────────────────\n// Auto-escaping for template patterns\n// ─────────────────────────────────────────────────────────────\n\n/**\n * Escapes regex metacharacters (parentheses and brackets) in template patterns,\n * but preserves content inside `{{...}}` token delimiters.\n *\n * This allows users to write intuitive patterns like `({{harf}}):` instead of\n * the verbose `\\\\({{harf}}\\\\):`. The escaping is applied BEFORE token expansion,\n * so tokens like `{{harf}}` which expand to `[أ-ي]` work correctly.\n *\n * @param pattern - Template pattern that may contain `()[]` and `{{tokens}}`\n * @returns Pattern with `()[]` escaped outside of `{{...}}` delimiters\n *\n * @example\n * escapeTemplateBrackets('({{harf}}): ')\n * // → '\\\\({{harf}}\\\\): '\n *\n * @example\n * escapeTemplateBrackets('[{{raqm}}] ')\n * // → '\\\\[{{raqm}}\\\\] '\n *\n * @example\n * escapeTemplateBrackets('{{harf}}')\n * // → '{{harf}}' (unchanged - no brackets outside tokens)\n */\nexport const escapeTemplateBrackets = (pattern: string): string => {\n // Match either a token ({{...}}) or a bracket character\n // Tokens are preserved as-is, brackets are escaped\n return pattern.replace(/(\\{\\{[^}]*\\}\\})|([()[\\]])/g, (_match, token, bracket) => {\n if (token) {\n return token; // Leave tokens intact\n }\n return `\\\\${bracket}`; // Escape the bracket\n });\n};\n\n// ─────────────────────────────────────────────────────────────\n// Base tokens - raw regex patterns (no template references)\n// ─────────────────────────────────────────────────────────────\n\n/**\n * Base token definitions mapping human-readable token names to regex patterns.\n *\n * These tokens contain raw regex patterns and do not reference other tokens.\n * For composite tokens that build on these, see `COMPOSITE_TOKENS`.\n *\n * @internal\n */\n// IMPORTANT:\n// - We include the Arabic-Indic digit `٤` as a rumuz code, but we do NOT match it when it's part of a larger number (e.g. \"٣٤\").\n// - We intentionally do NOT match ASCII `4`.\n// - For performance/clarity, the single-letter rumuz are represented as a character class.\nconst RUMUZ_SINGLE_LETTER = '[خرزيمنصسدفلتقع]';\nconst RUMUZ_FOUR = '(?<![\\\\u0660-\\\\u0669])٤(?![\\\\u0660-\\\\u0669])';\n// IMPORTANT: order matters. 
Put longer/more specific codes before shorter ones.\nconst RUMUZ_ATOMS: string[] = [\n // 2-letter codes\n 'خت',\n 'خغ',\n 'بخ',\n 'عخ',\n 'مق',\n 'مت',\n 'عس',\n 'سي',\n 'سن',\n 'كن',\n 'مد',\n 'قد',\n 'خد',\n 'فد',\n 'دل',\n 'كد',\n 'غد',\n 'صد',\n 'دت',\n 'دس',\n 'تم',\n 'فق',\n 'دق',\n // Single-letter codes (character class) + special digit atom\n RUMUZ_SINGLE_LETTER,\n RUMUZ_FOUR,\n];\n\nconst RUMUZ_ATOM = `(?:${RUMUZ_ATOMS.join('|')})`;\nconst RUMUZ_BLOCK = `${RUMUZ_ATOM}(?:\\\\s+${RUMUZ_ATOM})*`;\n\nconst BASE_TOKENS: Record<string, string> = {\n /**\n * Chapter marker - Arabic word for \"chapter\" (باب).\n *\n * Commonly used in hadith collections to mark chapter divisions.\n *\n * @example 'باب ما جاء في الصلاة' (Chapter on what came regarding prayer)\n */\n bab: 'باب',\n\n /**\n * Basmala pattern - Arabic invocation \"In the name of Allah\" (بسم الله).\n *\n * Matches the beginning of the basmala formula, commonly appearing\n * at the start of chapters, books, or documents.\n *\n * @example 'بسم الله الرحمن الرحيم' (In the name of Allah, the Most Gracious, the Most Merciful)\n */\n basmalah: ['بسم الله', '﷽'].join('|'),\n\n /**\n * Bullet point variants - common bullet characters.\n *\n * Character class matching: `•` (bullet), `*` (asterisk), `°` (degree).\n *\n * @example '• First item'\n */\n bullet: '[•*°]',\n\n /**\n * Dash variants - various dash and separator characters.\n *\n * Character class matching:\n * - `-` (hyphen-minus U+002D)\n * - `–` (en-dash U+2013)\n * - `—` (em-dash U+2014)\n * - `ـ` (tatweel U+0640, Arabic elongation character)\n *\n * @example '٦٦٩٦ - حدثنا' or '٦٦٩٦ ـ حدثنا'\n */\n dash: '[-–—ـ]',\n\n /**\n * Section marker - Arabic word for \"section/issue\".\n * Commonly used for fiqh books.\n */\n fasl: ['مسألة', 'فصل'].join('|'),\n\n /**\n * Single Arabic letter - matches any Arabic letter character.\n *\n * Character range from أ (alef with hamza) to ي (ya).\n * Does NOT include diacritics (harakat/tashkeel).\n *\n * @example '{{harf}}' matches 'ب' in 'باب'\n */\n harf: '[أ-ي]',\n\n /**\n * One or more Arabic letters separated by spaces - matches sequences like \"د ت س ي ق\".\n *\n * Useful for matching abbreviation *lists* that are encoded as single-letter tokens\n * separated by spaces.\n *\n * IMPORTANT:\n * - This token intentionally matches **single letters only** (with optional spacing).\n * - It does NOT match multi-letter rumuz like \"سي\" or \"خت\". 
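A hedged illustration of that distinction using `templateToRegex` (defined later in this module): `{{harfs}}` handles space-separated single-letter lists, while multi-letter codes need `{{rumuz}}`, as the next note says. The sample strings are taken from the `@example` lines.

```typescript
import { templateToRegex } from './tokens.js';

const harfsRule = templateToRegex('{{raqms:num}} {{harfs}}:');
harfsRule?.test('١١١٨ د ت س ي ق: حجاج'); // → true  (number + single-letter list + colon)

const rumuzRule = templateToRegex('{{rumuz}}');
rumuzRule?.exec('خت ٤')?.[0]; // → 'خت ٤'  (two codes form one rumuz block)
```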
For those, use `{{rumuz}}`.\n *\n * @example '{{harfs}}' matches 'د ت س ي ق' in '١١١٨ د ت س ي ق: حجاج'\n * @example '{{raqms:num}} {{harfs}}:' matches number + abbreviations + colon\n */\n // Example matches: \"د ت س ي ق\"\n // Example non-matches: \"وعلامة ...\", \"في\", \"لا\", \"سي\", \"خت\"\n harfs: '[أ-ي](?:\\\\s+[أ-ي])*',\n\n /**\n * Book marker - Arabic word for \"book\" (كتاب).\n *\n * Commonly used in hadith collections to mark major book divisions.\n *\n * @example 'كتاب الإيمان' (Book of Faith)\n */\n kitab: 'كتاب',\n\n /**\n * Naql (transmission) phrases - common hadith transmission phrases.\n *\n * Alternation of Arabic phrases used to indicate narration chains:\n * - حدثنا (he narrated to us)\n * - أخبرنا (he informed us)\n * - حدثني (he narrated to me)\n * - وحدثنا (and he narrated to us)\n * - أنبأنا (he reported to us)\n * - سمعت (I heard)\n *\n * @example '{{naql}}' matches any of the above phrases\n */\n naql: ['حدثني', 'وأخبرنا', 'حدثنا', 'سمعت', 'أنبأنا', 'وحدثنا', 'أخبرنا', 'وحدثني', 'وحدثنيه'].join('|'),\n\n /**\n * Single Arabic-Indic digit - matches one digit (٠-٩).\n *\n * Unicode range: U+0660 to U+0669 (Arabic-Indic digits).\n * Use `{{raqms}}` for one or more digits.\n *\n * @example '{{raqm}}' matches '٥' in '٥ - '\n */\n raqm: '[\\\\u0660-\\\\u0669]',\n\n /**\n * One or more Arabic-Indic digits - matches digit sequences (٠-٩)+.\n *\n * Unicode range: U+0660 to U+0669 (Arabic-Indic digits).\n * Commonly used for hadith numbers, verse numbers, etc.\n *\n * @example '{{raqms}}' matches '٦٦٩٦' in '٦٦٩٦ - حدثنا'\n */\n raqms: '[\\\\u0660-\\\\u0669]+',\n\n /**\n * Rumuz (source abbreviations) used in rijāl / takhrīj texts.\n *\n * This token matches the known abbreviation set used to denote sources like:\n * - All six books: (ع)\n * - The four Sunan: (٤)\n * - Bukhari: خ / خت / خغ / بخ / عخ / ز / ي\n * - Muslim: م / مق / مت\n * - Nasa'i: س / ن / ص / عس / سي / كن\n * - Abu Dawud: د / مد / قد / خد / ف / فد / ل / دل / كد / غد / صد\n * - Tirmidhi: ت / تم\n * - Ibn Majah: ق / فق\n *\n * Notes:\n * - Order matters: longer alternatives must come before shorter ones (e.g., \"خد\" before \"خ\")\n * - This token matches a rumuz *block*: one or more codes separated by whitespace\n * (e.g., \"خ سي\", \"خ فق\", \"خت ٤\", \"د ت سي ق\")\n */\n rumuz: RUMUZ_BLOCK,\n\n /**\n * Punctuation characters.\n * Use {{tarqim}} which is especially useful when splitting using split: 'after' on punctuation marks.\n */\n tarqim: '[.!?؟؛]',\n};\n\n// ─────────────────────────────────────────────────────────────\n// Composite tokens - templates that reference base tokens\n// These are pre-expanded at module load time for performance\n// ─────────────────────────────────────────────────────────────\n\n/**\n * Composite token definitions using template syntax.\n *\n * These tokens reference base tokens using `{{token}}` syntax and are\n * automatically expanded to their final regex patterns at module load time.\n *\n * This provides better abstraction - if base tokens change, composites\n * automatically update on the next build.\n *\n * @internal\n */\nconst COMPOSITE_TOKENS: Record<string, string> = {\n /**\n * Numbered hadith marker - common format for hadith numbering.\n *\n * Matches patterns like \"٢٢ - \" (number, space, dash, space).\n * This is the most common format in hadith collections.\n *\n * Use with `lineStartsAfter` to cleanly extract hadith content:\n * ```typescript\n * { lineStartsAfter: ['{{numbered}}'], split: 'at' }\n * ```\n *\n * For capturing the hadith number, use 
explicit capture syntax:\n * ```typescript\n * { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' }\n * ```\n *\n * @example '٢٢ - حدثنا' matches, content starts after '٢٢ - '\n * @example '٦٦٩٦ – أخبرنا' matches (with en-dash)\n */\n numbered: '{{raqms}} {{dash}} ',\n};\n\n/**\n * Expands any *composite* tokens (like `{{numbered}}`) into their underlying template form\n * (like `{{raqms}} {{dash}} `).\n *\n * This is useful when you want to take a signature produced by `analyzeCommonLineStarts()`\n * and turn it into an editable template where you can add named captures, e.g.:\n *\n * - `{{numbered}}` → `{{raqms}} {{dash}} `\n * - then: `{{raqms:num}} {{dash}} ` to capture the number\n *\n * Notes:\n * - This only expands the plain `{{token}}` form (not `{{token:name}}`).\n * - Expansion is repeated a few times to support nested composites.\n */\nexport const expandCompositeTokensInTemplate = (template: string): string => {\n let out = template;\n for (let i = 0; i < 10; i++) {\n const next = out.replace(/\\{\\{(\\w+)\\}\\}/g, (m, tokenName: string) => {\n const replacement = COMPOSITE_TOKENS[tokenName];\n return replacement ?? m;\n });\n if (next === out) {\n break;\n }\n out = next;\n }\n return out;\n};\n\n/**\n * Expands base tokens in a template string.\n * Used internally to pre-expand composite tokens.\n *\n * @param template - Template string with `{{token}}` placeholders\n * @returns Expanded pattern with base tokens replaced\n * @internal\n */\nconst expandBaseTokens = (template: string): string => {\n return template.replace(/\\{\\{(\\w+)\\}\\}/g, (_, tokenName) => {\n return BASE_TOKENS[tokenName] ?? `{{${tokenName}}}`;\n });\n};\n\n/**\n * Token definitions mapping human-readable token names to regex patterns.\n *\n * Tokens are used in template strings with double-brace syntax:\n * - `{{token}}` - Expands to the pattern (non-capturing in context)\n * - `{{token:name}}` - Expands to a named capture group `(?<name>pattern)`\n * - `{{:name}}` - Captures any content with the given name `(?<name>.+)`\n *\n * @remarks\n * These patterns are designed for Arabic text matching. 
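A hedged sketch of the workflow described above: expand the composite signature into its editable template form, then add a named capture by hand and expand it fully. Both expected outputs mirror the examples already given in this module.

```typescript
import { expandCompositeTokensInTemplate, expandTokensWithCaptures } from './tokens.js';

const editable = expandCompositeTokensInTemplate('{{numbered}}');
// → '{{raqms}} {{dash}} '

// After manually adding a capture name to the editable template:
const { pattern, captureNames } = expandTokensWithCaptures('{{raqms:num}} {{dash}} ');
// pattern      → '(?<num>[\\u0660-\\u0669]+) [-–—ـ] '
// captureNames → ['num']
```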
For diacritic-insensitive\n * matching of Arabic patterns, use the `fuzzy: true` option in split rules,\n * which applies `makeDiacriticInsensitive()` to the expanded patterns.\n *\n * @example\n * // Using tokens in a split rule\n * { lineStartsWith: ['{{kitab}}', '{{bab}}'], split: 'at', fuzzy: true }\n *\n * @example\n * // Using tokens with named captures\n * { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' }\n *\n * @example\n * // Using the numbered convenience token\n * { lineStartsAfter: ['{{numbered}}'], split: 'at' }\n */\nexport const TOKEN_PATTERNS: Record<string, string> = {\n ...BASE_TOKENS,\n // Pre-expand composite tokens at module load time\n ...Object.fromEntries(Object.entries(COMPOSITE_TOKENS).map(([k, v]) => [k, expandBaseTokens(v)])),\n};\n\n/**\n * Regex pattern for matching tokens with optional named capture syntax.\n *\n * Matches:\n * - `{{token}}` - Simple token (group 1 = token name, group 2 = empty)\n * - `{{token:name}}` - Token with capture (group 1 = token, group 2 = name)\n * - `{{:name}}` - Capture-only (group 1 = empty, group 2 = name)\n *\n * @internal\n */\nconst TOKEN_WITH_CAPTURE_REGEX = /\\{\\{(\\w*):?(\\w*)\\}\\}/g;\n\n/**\n * Regex pattern for simple token matching (no capture syntax).\n *\n * Matches only `{{token}}` format where token is one or more word characters.\n * Used by `containsTokens()` for quick detection.\n *\n * @internal\n */\nconst SIMPLE_TOKEN_REGEX = /\\{\\{(\\w+)\\}\\}/g;\n\n/**\n * Checks if a query string contains template tokens.\n *\n * Performs a quick test for `{{token}}` patterns without actually\n * expanding them. Useful for determining whether to apply token\n * expansion to a string.\n *\n * @param query - String to check for tokens\n * @returns `true` if the string contains at least one `{{token}}` pattern\n *\n * @example\n * containsTokens('{{raqms}} {{dash}}') // → true\n * containsTokens('plain text') // → false\n * containsTokens('[٠-٩]+ - ') // → false (raw regex, no tokens)\n */\nexport const containsTokens = (query: string): boolean => {\n SIMPLE_TOKEN_REGEX.lastIndex = 0;\n return SIMPLE_TOKEN_REGEX.test(query);\n};\n\n/**\n * Result from expanding tokens with capture information.\n *\n * Contains the expanded pattern string along with metadata about\n * any named capture groups that were created.\n */\nexport type ExpandResult = {\n /**\n * The expanded regex pattern string with all tokens replaced.\n *\n * Named captures use the `(?<name>pattern)` syntax.\n */\n pattern: string;\n\n /**\n * Names of captured groups extracted from `{{token:name}}` syntax.\n *\n * Empty array if no named captures were found.\n */\n captureNames: string[];\n\n /**\n * Whether the pattern has any named capturing groups.\n *\n * Equivalent to `captureNames.length > 0`.\n */\n hasCaptures: boolean;\n};\n\ntype TemplateSegment = { type: 'token' | 'text'; value: string };\n\nconst splitTemplateIntoSegments = (query: string): TemplateSegment[] => {\n const segments: TemplateSegment[] = [];\n let lastIndex = 0;\n TOKEN_WITH_CAPTURE_REGEX.lastIndex = 0;\n let match: RegExpExecArray | null;\n\n // biome-ignore lint/suspicious/noAssignInExpressions: standard regex exec loop pattern\n while ((match = TOKEN_WITH_CAPTURE_REGEX.exec(query)) !== null) {\n if (match.index > lastIndex) {\n segments.push({ type: 'text', value: query.slice(lastIndex, match.index) });\n }\n segments.push({ type: 'token', value: match[0] });\n lastIndex = match.index + match[0].length;\n }\n\n if (lastIndex < query.length) {\n segments.push({ 
type: 'text', value: query.slice(lastIndex) });\n }\n\n return segments;\n};\n\nconst maybeApplyFuzzyToText = (text: string, fuzzyTransform?: (pattern: string) => string): string => {\n if (fuzzyTransform && /[\\u0600-\\u06FF]/u.test(text)) {\n return fuzzyTransform(text);\n }\n return text;\n};\n\n// NOTE: This intentionally preserves the previous behavior:\n// it applies fuzzy per `|`-separated alternative (best-effort) to avoid corrupting regex metacharacters.\nconst maybeApplyFuzzyToTokenPattern = (tokenPattern: string, fuzzyTransform?: (pattern: string) => string): string => {\n if (!fuzzyTransform) {\n return tokenPattern;\n }\n return tokenPattern\n .split('|')\n .map((part) => (/[\\u0600-\\u06FF]/u.test(part) ? fuzzyTransform(part) : part))\n .join('|');\n};\n\nconst parseTokenLiteral = (literal: string): { tokenName: string; captureName: string } | null => {\n TOKEN_WITH_CAPTURE_REGEX.lastIndex = 0;\n const tokenMatch = TOKEN_WITH_CAPTURE_REGEX.exec(literal);\n if (!tokenMatch) {\n return null;\n }\n const [, tokenName, captureName] = tokenMatch;\n return { captureName, tokenName };\n};\n\nconst createCaptureRegistry = (capturePrefix?: string) => {\n const captureNames: string[] = [];\n const captureNameCounts = new Map<string, number>();\n\n const register = (baseName: string): string => {\n const count = captureNameCounts.get(baseName) ?? 0;\n captureNameCounts.set(baseName, count + 1);\n const uniqueName = count === 0 ? baseName : `${baseName}_${count + 1}`;\n const prefixedName = capturePrefix ? `${capturePrefix}${uniqueName}` : uniqueName;\n captureNames.push(prefixedName);\n return prefixedName;\n };\n\n return { captureNames, register };\n};\n\nconst expandTokenLiteral = (\n literal: string,\n opts: {\n fuzzyTransform?: (pattern: string) => string;\n registerCapture: (baseName: string) => string;\n capturePrefix?: string;\n },\n): string => {\n const parsed = parseTokenLiteral(literal);\n if (!parsed) {\n return literal;\n }\n\n const { tokenName, captureName } = parsed;\n\n // {{:name}} - capture anything with name\n if (!tokenName && captureName) {\n const name = opts.registerCapture(captureName);\n return `(?<${name}>.+)`;\n }\n\n let tokenPattern = TOKEN_PATTERNS[tokenName];\n if (!tokenPattern) {\n // Unknown token - leave as-is\n return literal;\n }\n\n tokenPattern = maybeApplyFuzzyToTokenPattern(tokenPattern, opts.fuzzyTransform);\n\n // {{token:name}} - capture with name\n if (captureName) {\n const name = opts.registerCapture(captureName);\n return `(?<${name}>${tokenPattern})`;\n }\n\n // {{token}} - no capture, just expand\n return tokenPattern;\n};\n\n/**\n * Expands template tokens with support for named captures.\n *\n * This is the primary token expansion function that handles all token syntax:\n * - `{{token}}` → Expands to the token's pattern (no capture group)\n * - `{{token:name}}` → Expands to `(?<name>pattern)` (named capture)\n * - `{{:name}}` → Expands to `(?<name>.+)` (capture anything)\n *\n * Unknown tokens are left as-is in the output, allowing for partial templates.\n *\n * @param query - The template string containing tokens\n * @param fuzzyTransform - Optional function to transform Arabic text for fuzzy matching.\n * Applied to both token patterns and plain Arabic text between tokens.\n * Typically `makeDiacriticInsensitive` from the fuzzy module.\n * @returns Object with expanded pattern, capture names, and capture flag\n *\n * @example\n * // Simple token expansion\n * expandTokensWithCaptures('{{raqms}} {{dash}}')\n * // → { pattern: 
'[\\\\u0660-\\\\u0669]+ [-–—ـ]', captureNames: [], hasCaptures: false }\n *\n * @example\n * // Named capture\n * expandTokensWithCaptures('{{raqms:num}} {{dash}}')\n * // → { pattern: '(?<num>[\\\\u0660-\\\\u0669]+) [-–—ـ]', captureNames: ['num'], hasCaptures: true }\n *\n * @example\n * // Capture-only token\n * expandTokensWithCaptures('{{raqms:num}} {{dash}} {{:content}}')\n * // → { pattern: '(?<num>[٠-٩]+) [-–—ـ] (?<content>.+)', captureNames: ['num', 'content'], hasCaptures: true }\n *\n * @example\n * // With fuzzy transform\n * expandTokensWithCaptures('{{bab}}', makeDiacriticInsensitive)\n * // → { pattern: 'بَ?ا?بٌ?', captureNames: [], hasCaptures: false }\n */\nexport const expandTokensWithCaptures = (\n query: string,\n fuzzyTransform?: (pattern: string) => string,\n capturePrefix?: string,\n): ExpandResult => {\n const segments = splitTemplateIntoSegments(query);\n const registry = createCaptureRegistry(capturePrefix);\n\n const processedParts = segments.map((segment) => {\n if (segment.type === 'text') {\n return maybeApplyFuzzyToText(segment.value, fuzzyTransform);\n }\n return expandTokenLiteral(segment.value, {\n capturePrefix,\n fuzzyTransform,\n registerCapture: registry.register,\n });\n });\n\n return {\n captureNames: registry.captureNames,\n hasCaptures: registry.captureNames.length > 0,\n pattern: processedParts.join(''),\n };\n};\n\n/**\n * Expands template tokens in a query string to their regex equivalents.\n *\n * This is the simple version without capture support. It returns only the\n * expanded pattern string, not capture metadata.\n *\n * Unknown tokens are left as-is, allowing for partial templates.\n *\n * @param query - Template string containing `{{token}}` placeholders\n * @returns Expanded regex pattern string\n *\n * @example\n * expandTokens('، {{raqms}}') // → '، [\\\\u0660-\\\\u0669]+'\n * expandTokens('{{raqm}}*') // → '[\\\\u0660-\\\\u0669]*'\n * expandTokens('{{dash}}{{raqm}}') // → '[-–—ـ][\\\\u0660-\\\\u0669]'\n * expandTokens('{{unknown}}') // → '{{unknown}}' (left as-is)\n *\n * @see expandTokensWithCaptures for full capture group support\n */\nexport const expandTokens = (query: string) => expandTokensWithCaptures(query).pattern;\n\n/**\n * Converts a template string to a compiled RegExp.\n *\n * Expands all tokens and attempts to compile the result as a RegExp\n * with Unicode flag. 
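Where templates come from user input (see the ReDoS remark that follows), the `null` return from `templateToRegex` gives a cheap validation hook. A hedged sketch; the `validateTemplate` helper is hypothetical, only `containsTokens` and `templateToRegex` are real exports of this module.

```typescript
import { containsTokens, templateToRegex } from './tokens.js';

// Hypothetical validator for user-supplied templates.
const validateTemplate = (template: string): { ok: boolean; usesTokens: boolean } => ({
    ok: templateToRegex(template) !== null, // null means the expanded pattern did not compile
    usesTokens: containsTokens(template),   // whether token expansion applies at all
});

validateTemplate('، {{raqms}}'); // → { ok: true, usesTokens: true }
validateTemplate('[٠-٩]+ - ');   // → { ok: true, usesTokens: false }  (raw regex passes through)
validateTemplate('(((');         // → { ok: false, usesTokens: false }
```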
Returns `null` if the resulting pattern is invalid.\n *\n * @remarks\n * This function dynamically compiles regular expressions from template strings.\n * If templates may come from untrusted sources, be aware of potential ReDoS\n * (Regular Expression Denial of Service) risks due to catastrophic backtracking.\n * Consider validating pattern complexity or applying execution timeouts when\n * running user-submitted patterns.\n *\n * @param template - Template string containing `{{token}}` placeholders\n * @returns Compiled RegExp with 'u' flag, or `null` if invalid\n *\n * @example\n * templateToRegex('، {{raqms}}') // → /، [٠-٩]+/u\n * templateToRegex('{{raqms}}+') // → /[٠-٩]++/u (might be invalid in some engines)\n * templateToRegex('(((') // → null (invalid regex)\n */\nexport const templateToRegex = (template: string) => {\n const expanded = expandTokens(template);\n try {\n return new RegExp(expanded, 'u');\n } catch {\n return null;\n }\n};\n\n/**\n * Lists all available token names defined in `TOKEN_PATTERNS`.\n *\n * Useful for documentation, validation, or building user interfaces\n * that show available tokens.\n *\n * @returns Array of token names (e.g., `['bab', 'basmala', 'bullet', ...]`)\n *\n * @example\n * getAvailableTokens()\n * // → ['bab', 'basmala', 'bullet', 'dash', 'harf', 'kitab', 'naql', 'raqm', 'raqms']\n */\nexport const getAvailableTokens = () => Object.keys(TOKEN_PATTERNS);\n\n/**\n * Gets the regex pattern for a specific token name.\n *\n * Returns the raw pattern string as defined in `TOKEN_PATTERNS`,\n * without any expansion or capture group wrapping.\n *\n * @param tokenName - The token name to look up (e.g., 'raqms', 'dash')\n * @returns The regex pattern string, or `undefined` if token doesn't exist\n *\n * @example\n * getTokenPattern('raqms') // → '[\\\\u0660-\\\\u0669]+'\n * getTokenPattern('dash') // → '[-–—ـ]'\n * getTokenPattern('unknown') // → undefined\n */\nexport const getTokenPattern = (tokenName: string): string | undefined => TOKEN_PATTERNS[tokenName];\n","/**\n * Split rule → compiled regex builder.\n *\n * Extracted from `segmenter.ts` to reduce cognitive complexity and enable\n * independent unit testing of regex compilation and token expansion behavior.\n */\n\nimport { makeDiacriticInsensitive } from './fuzzy.js';\nimport { escapeTemplateBrackets, expandTokensWithCaptures } from './tokens.js';\nimport type { SplitRule } from './types.js';\n\n/**\n * Result of processing a pattern with token expansion and optional fuzzy matching.\n */\nexport type ProcessedPattern = {\n /** The expanded regex pattern string (tokens replaced with regex) */\n pattern: string;\n /** Names of captured groups extracted from `{{token:name}}` syntax */\n captureNames: string[];\n};\n\n/**\n * Compiled regex and metadata for a split rule.\n */\nexport type RuleRegex = {\n /** Compiled RegExp with 'gmu' flags (global, multiline, unicode) */\n regex: RegExp;\n /** Whether the regex uses capturing groups for content extraction */\n usesCapture: boolean;\n /** Names of captured groups from `{{token:name}}` syntax */\n captureNames: string[];\n /** Whether this rule uses `lineStartsAfter` (content capture at end) */\n usesLineStartsAfter: boolean;\n};\n\n/**\n * Checks if a regex pattern contains standard (anonymous) capturing groups.\n *\n * Detects standard capturing groups `(...)` while excluding:\n * - Non-capturing groups `(?:...)`\n * - Lookahead assertions `(?=...)` and `(?!...)`\n * - Lookbehind assertions `(?<=...)` and `(?<!...)`\n * - Named groups 
`(?<name>...)` (start with `(?` so excluded here)\n *\n * NOTE: Named capture groups are still captures, but they're tracked via `captureNames`.\n */\nexport const hasCapturingGroup = (pattern: string): boolean => {\n // Match ( that is NOT followed by ? (excludes non-capturing and named groups)\n return /\\((?!\\?)/.test(pattern);\n};\n\n/**\n * Extracts named capture group names from a regex pattern.\n *\n * Parses patterns like `(?<num>[0-9]+)` and returns `['num']`.\n *\n * @example\n * extractNamedCaptureNames('^(?<num>[٠-٩]+)\\\\s+') // ['num']\n * extractNamedCaptureNames('^(?<a>\\\\d+)(?<b>\\\\w+)') // ['a', 'b']\n * extractNamedCaptureNames('^\\\\d+') // []\n */\nexport const extractNamedCaptureNames = (pattern: string): string[] => {\n const names: string[] = [];\n // Match (?<name> where name is the capture group name\n const namedGroupRegex = /\\(\\?<([^>]+)>/g;\n for (const match of pattern.matchAll(namedGroupRegex)) {\n names.push(match[1]);\n }\n return names;\n};\n\n/**\n * Safely compiles a regex pattern, throwing a helpful error if invalid.\n */\nexport const compileRuleRegex = (pattern: string): RegExp => {\n try {\n return new RegExp(pattern, 'gmu');\n } catch (error) {\n const message = error instanceof Error ? error.message : String(error);\n throw new Error(`Invalid regex pattern: ${pattern}\\n Cause: ${message}`);\n }\n};\n\n/**\n * Processes a pattern string by expanding tokens and optionally applying fuzzy matching.\n *\n * Brackets `()[]` outside `{{tokens}}` are auto-escaped.\n */\nexport const processPattern = (pattern: string, fuzzy: boolean, capturePrefix?: string): ProcessedPattern => {\n const escaped = escapeTemplateBrackets(pattern);\n const fuzzyTransform = fuzzy ? makeDiacriticInsensitive : undefined;\n const { pattern: expanded, captureNames } = expandTokensWithCaptures(escaped, fuzzyTransform, capturePrefix);\n return { captureNames, pattern: expanded };\n};\n\nexport const buildLineStartsAfterRegexSource = (\n patterns: string[],\n fuzzy: boolean,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const processed = patterns.map((p) => processPattern(p, fuzzy, capturePrefix));\n const union = processed.map((p) => p.pattern).join('|');\n const captureNames = processed.flatMap((p) => p.captureNames);\n // For lineStartsAfter, we need to capture the content.\n // If we have a prefix (combined-regex mode), we name the internal content capture so the caller\n // can compute marker length. IMPORTANT: this internal group is not a \"user capture\", so it must\n // NOT be included in `captureNames` (otherwise it leaks into segment.meta as `content`).\n const contentCapture = capturePrefix ? 
`(?<${capturePrefix}__content>.*)` : '(.*)';\n return { captureNames, regex: `^(?:${union})${contentCapture}` };\n};\n\nexport const buildLineStartsWithRegexSource = (\n patterns: string[],\n fuzzy: boolean,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const processed = patterns.map((p) => processPattern(p, fuzzy, capturePrefix));\n const union = processed.map((p) => p.pattern).join('|');\n const captureNames = processed.flatMap((p) => p.captureNames);\n return { captureNames, regex: `^(?:${union})` };\n};\n\nexport const buildLineEndsWithRegexSource = (\n patterns: string[],\n fuzzy: boolean,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const processed = patterns.map((p) => processPattern(p, fuzzy, capturePrefix));\n const union = processed.map((p) => p.pattern).join('|');\n const captureNames = processed.flatMap((p) => p.captureNames);\n return { captureNames, regex: `(?:${union})$` };\n};\n\nexport const buildTemplateRegexSource = (\n template: string,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const escaped = escapeTemplateBrackets(template);\n const { pattern, captureNames } = expandTokensWithCaptures(escaped, undefined, capturePrefix);\n return { captureNames, regex: pattern };\n};\n\nexport const determineUsesCapture = (regexSource: string, _captureNames: string[]): boolean =>\n hasCapturingGroup(regexSource);\n\n/**\n * Builds a compiled regex and metadata from a split rule.\n *\n * Behavior mirrors the previous implementation in `segmenter.ts`.\n */\nexport const buildRuleRegex = (rule: SplitRule, capturePrefix?: string): RuleRegex => {\n const s: {\n lineStartsWith?: string[];\n lineStartsAfter?: string[];\n lineEndsWith?: string[];\n template?: string;\n regex?: string;\n } = { ...rule };\n\n const fuzzy = (rule as { fuzzy?: boolean }).fuzzy ?? 
false;\n let allCaptureNames: string[] = [];\n\n // lineStartsAfter: creates a capturing group to exclude the marker from content\n if (s.lineStartsAfter?.length) {\n const { regex, captureNames } = buildLineStartsAfterRegexSource(s.lineStartsAfter, fuzzy, capturePrefix);\n allCaptureNames = captureNames;\n return {\n captureNames: allCaptureNames,\n regex: compileRuleRegex(regex),\n usesCapture: true,\n usesLineStartsAfter: true,\n };\n }\n\n if (s.lineStartsWith?.length) {\n const { regex, captureNames } = buildLineStartsWithRegexSource(s.lineStartsWith, fuzzy, capturePrefix);\n s.regex = regex;\n allCaptureNames = captureNames;\n }\n if (s.lineEndsWith?.length) {\n const { regex, captureNames } = buildLineEndsWithRegexSource(s.lineEndsWith, fuzzy, capturePrefix);\n s.regex = regex;\n allCaptureNames = captureNames;\n }\n if (s.template) {\n const { regex, captureNames } = buildTemplateRegexSource(s.template, capturePrefix);\n s.regex = regex;\n allCaptureNames = [...allCaptureNames, ...captureNames];\n }\n\n if (!s.regex) {\n throw new Error(\n 'Rule must specify exactly one pattern type: regex, template, lineStartsWith, lineStartsAfter, or lineEndsWith',\n );\n }\n\n // Extract named capture groups from raw regex patterns if not already populated\n if (allCaptureNames.length === 0) {\n allCaptureNames = extractNamedCaptureNames(s.regex);\n }\n\n const usesCapture = determineUsesCapture(s.regex, allCaptureNames);\n return {\n captureNames: allCaptureNames,\n regex: compileRuleRegex(s.regex),\n usesCapture,\n usesLineStartsAfter: false,\n };\n};\n","import type { Page, SegmentationOptions } from './types.js';\n\n/**\n * A single replacement rule applied by `applyReplacements()` / `SegmentationOptions.replace`.\n *\n * Notes:\n * - `regex` is a raw JavaScript regex source string (no token expansion).\n * - Default flags are `gu` (global + unicode).\n * - If `flags` is provided, it is validated and `g` + `u` are always enforced.\n * - If `pageIds` is omitted, the rule applies to all pages.\n * - If `pageIds` is `[]`, the rule applies to no pages (rule is skipped).\n */\nexport type ReplaceRule = NonNullable<SegmentationOptions['replace']>[number];\n\nconst DEFAULT_REPLACE_FLAGS = 'gu';\n\nconst normalizeReplaceFlags = (flags?: string): string => {\n if (!flags) {\n return DEFAULT_REPLACE_FLAGS;\n }\n // Validate and de-duplicate flags. Force include g + u.\n const allowed = new Set(['g', 'i', 'm', 's', 'u', 'y']);\n const set = new Set<string>();\n for (const ch of flags) {\n if (!allowed.has(ch)) {\n throw new Error(`Invalid replace regex flag: \"${ch}\" (allowed: gimsyu)`);\n }\n set.add(ch);\n }\n set.add('g');\n set.add('u');\n\n // Stable ordering for reproducibility\n const order = ['g', 'i', 'm', 's', 'y', 'u'];\n return order.filter((c) => set.has(c)).join('');\n};\n\ntype CompiledReplaceRule = {\n re: RegExp;\n replacement: string;\n pageIdSet?: ReadonlySet<number>;\n};\n\nconst compileReplaceRules = (rules: ReplaceRule[]): CompiledReplaceRule[] => {\n const compiled: CompiledReplaceRule[] = [];\n for (const r of rules) {\n if (r.pageIds && r.pageIds.length === 0) {\n // Empty list means \"apply to no pages\"\n continue;\n }\n const flags = normalizeReplaceFlags(r.flags);\n const re = new RegExp(r.regex, flags);\n compiled.push({\n pageIdSet: r.pageIds ? 
new Set(r.pageIds) : undefined,\n re,\n replacement: r.replacement,\n });\n }\n return compiled;\n};\n\n/**\n * Applies ordered regex replacements to page content (per page).\n *\n * - Replacement rules are applied in array order.\n * - Each rule is applied globally (flag `g` enforced) with unicode mode (flag `u` enforced).\n * - `pageIds` can scope a rule to specific pages. `pageIds: []` skips the rule entirely.\n *\n * This function is intentionally **pure**:\n * it returns a new pages array only when changes are needed, otherwise it returns the original pages.\n */\nexport const applyReplacements = (pages: Page[], rules?: ReplaceRule[]): Page[] => {\n if (!rules || rules.length === 0 || pages.length === 0) {\n return pages;\n }\n const compiled = compileReplaceRules(rules);\n if (compiled.length === 0) {\n return pages;\n }\n\n return pages.map((p) => {\n let content = p.content;\n for (const rule of compiled) {\n if (rule.pageIdSet && !rule.pageIdSet.has(p.id)) {\n continue;\n }\n content = content.replace(rule.re, rule.replacement);\n }\n if (content === p.content) {\n return p;\n }\n return { ...p, content };\n });\n};\n\n\n","/**\n * Fast-path fuzzy prefix matching for common Arabic line-start markers.\n *\n * This exists to avoid running expensive fuzzy-expanded regex alternations over\n * a giant concatenated string. Instead, we match only at known line-start\n * offsets and perform a small deterministic comparison:\n * - Skip Arabic diacritics in the CONTENT\n * - Treat common equivalence groups as equal (ا/آ/أ/إ, ة/ه, ى/ي)\n *\n * This module is intentionally conservative: it only supports \"literal\"\n * token patterns (plain text alternation via `|`), not general regex.\n */\n\nimport { getTokenPattern } from './tokens.js';\n\n// U+064B..U+0652 (tashkeel/harakat)\nconst isArabicDiacriticCode = (code: number): boolean => code >= 0x064b && code <= 0x0652;\n\n// Map a char to a representative equivalence class key.\n// Keep this in sync with EQUIV_GROUPS in fuzzy.ts.\nconst equivKey = (ch: string): string => {\n switch (ch) {\n case '\\u0622': // آ\n case '\\u0623': // أ\n case '\\u0625': // إ\n return '\\u0627'; // ا\n case '\\u0647': // ه\n return '\\u0629'; // ة\n case '\\u064a': // ي\n return '\\u0649'; // ى\n default:\n return ch;\n }\n};\n\n/**\n * Match a fuzzy literal prefix at a given offset.\n *\n * - Skips diacritics in the content\n * - Applies equivalence groups on both content and literal\n *\n * @returns endOffset (exclusive) in CONTENT if matched; otherwise null.\n */\nexport const matchFuzzyLiteralPrefixAt = (content: string, offset: number, literal: string): number | null => {\n let i = offset;\n // Skip leading diacritics in content (rare but possible)\n while (i < content.length && isArabicDiacriticCode(content.charCodeAt(i))) {\n i++;\n }\n\n for (let j = 0; j < literal.length; j++) {\n const litCh = literal[j];\n\n // In literal, we treat whitespace literally (no collapsing).\n // (Tokens like kitab/bab/fasl/naql/basmalah do not rely on fuzzy spaces.)\n // Skip diacritics in content before matching each char.\n while (i < content.length && isArabicDiacriticCode(content.charCodeAt(i))) {\n i++;\n }\n\n if (i >= content.length) {\n return null;\n }\n\n const cCh = content[i];\n if (equivKey(cCh) !== equivKey(litCh)) {\n return null;\n }\n i++;\n }\n\n // Allow trailing diacritics immediately after the matched prefix.\n while (i < content.length && isArabicDiacriticCode(content.charCodeAt(i))) {\n i++;\n }\n return i;\n};\n\nconst isLiteralOnly = (s: string): 
boolean => {\n // Reject anything that looks like regex syntax.\n // We allow only plain text (including Arabic, spaces) and the alternation separator `|`.\n // This intentionally rejects tokens like `tarqim: '[.!?؟؛]'`, which are not literal.\n return !/[\\\\[\\]{}()^$.*+?]/.test(s);\n};\n\nexport type CompiledLiteralAlternation = {\n alternatives: string[];\n};\n\nexport const compileLiteralAlternation = (pattern: string): CompiledLiteralAlternation | null => {\n if (!pattern) {\n return null;\n }\n if (!isLiteralOnly(pattern)) {\n return null;\n }\n const alternatives = pattern\n .split('|')\n .map((s) => s.trim())\n .filter(Boolean);\n if (!alternatives.length) {\n return null;\n }\n return { alternatives };\n};\n\nexport type FastFuzzyTokenRule = {\n token: string; // token name, e.g. 'kitab'\n alternatives: string[]; // resolved literal alternatives\n};\n\n/**\n * Attempt to compile a fast fuzzy rule from a single-token pattern like `{{kitab}}`.\n * Returns null if not eligible.\n */\nexport const compileFastFuzzyTokenRule = (tokenTemplate: string): FastFuzzyTokenRule | null => {\n const m = tokenTemplate.match(/^\\{\\{(\\w+)\\}\\}$/);\n if (!m) {\n return null;\n }\n const token = m[1];\n const tokenPattern = getTokenPattern(token);\n if (!tokenPattern) {\n return null;\n }\n const compiled = compileLiteralAlternation(tokenPattern);\n if (!compiled) {\n return null;\n }\n return { alternatives: compiled.alternatives, token };\n};\n\n/**\n * Try matching any alternative for a compiled token at a line-start offset.\n * Returns endOffset (exclusive) on match, else null.\n */\nexport const matchFastFuzzyTokenAt = (content: string, offset: number, compiled: FastFuzzyTokenRule): number | null => {\n for (const alt of compiled.alternatives) {\n const end = matchFuzzyLiteralPrefixAt(content, offset, alt);\n if (end !== null) {\n return end;\n }\n }\n return null;\n};\n","import { isPageExcluded } from './breakpoint-utils.js';\nimport { compileFastFuzzyTokenRule, type FastFuzzyTokenRule, matchFastFuzzyTokenAt } from './fast-fuzzy-prefix.js';\nimport { extractNamedCaptureNames, hasCapturingGroup, processPattern } from './rule-regex.js';\nimport type { PageMap, SplitPoint } from './segmenter-types.js';\nimport type { SplitRule } from './types.js';\n\nexport type FastFuzzyRule = {\n compiled: FastFuzzyTokenRule;\n rule: SplitRule;\n ruleIndex: number;\n kind: 'startsWith' | 'startsAfter';\n};\n\nexport type PartitionedRules = {\n combinableRules: Array<{ rule: SplitRule; prefix: string; index: number }>;\n standaloneRules: SplitRule[];\n fastFuzzyRules: FastFuzzyRule[];\n};\n\nexport const partitionRulesForMatching = (rules: SplitRule[]): PartitionedRules => {\n const combinableRules: { rule: SplitRule; prefix: string; index: number }[] = [];\n const standaloneRules: SplitRule[] = [];\n const fastFuzzyRules: FastFuzzyRule[] = [];\n\n // Separate rules into combinable, standalone, and fast-fuzzy\n rules.forEach((rule, index) => {\n // Fast-path: fuzzy + lineStartsWith + single token pattern like {{kitab}}\n if ((rule as { fuzzy?: boolean }).fuzzy && 'lineStartsWith' in rule) {\n const compiled =\n rule.lineStartsWith.length === 1 ? 
compileFastFuzzyTokenRule(rule.lineStartsWith[0]) : null;\n if (compiled) {\n fastFuzzyRules.push({ compiled, kind: 'startsWith', rule, ruleIndex: index });\n return; // handled by fast path\n }\n }\n\n // Fast-path: fuzzy + lineStartsAfter + single token pattern like {{naql}}\n if ((rule as { fuzzy?: boolean }).fuzzy && 'lineStartsAfter' in rule) {\n const compiled =\n rule.lineStartsAfter.length === 1 ? compileFastFuzzyTokenRule(rule.lineStartsAfter[0]) : null;\n if (compiled) {\n fastFuzzyRules.push({ compiled, kind: 'startsAfter', rule, ruleIndex: index });\n return; // handled by fast path\n }\n }\n\n let isCombinable = true;\n\n // Raw regex rules are combinable ONLY if they don't use named captures, backreferences, or anonymous captures\n if ('regex' in rule && rule.regex) {\n const hasNamedCaptures = extractNamedCaptureNames(rule.regex).length > 0;\n const hasBackreferences = /\\\\[1-9]/.test(rule.regex);\n const hasAnonymousCaptures = hasCapturingGroup(rule.regex);\n if (hasNamedCaptures || hasBackreferences || hasAnonymousCaptures) {\n isCombinable = false;\n }\n }\n\n if (isCombinable) {\n combinableRules.push({ index, prefix: `r${index}_`, rule });\n } else {\n standaloneRules.push(rule);\n }\n });\n\n return { combinableRules, fastFuzzyRules, standaloneRules };\n};\n\nexport type PageStartGuardChecker = (rule: SplitRule, ruleIndex: number, matchStart: number) => boolean;\n\nexport const createPageStartGuardChecker = (matchContent: string, pageMap: PageMap): PageStartGuardChecker => {\n const pageStartToBoundaryIndex = new Map<number, number>();\n for (let i = 0; i < pageMap.boundaries.length; i++) {\n pageStartToBoundaryIndex.set(pageMap.boundaries[i].start, i);\n }\n\n const compiledPageStartPrev = new Map<number, RegExp | null>();\n const getPageStartPrevRegex = (rule: SplitRule, ruleIndex: number): RegExp | null => {\n if (compiledPageStartPrev.has(ruleIndex)) {\n return compiledPageStartPrev.get(ruleIndex) ?? 
null;\n }\n const pattern = (rule as { pageStartGuard?: string }).pageStartGuard;\n if (!pattern) {\n compiledPageStartPrev.set(ruleIndex, null);\n return null;\n }\n const expanded = processPattern(pattern, false).pattern;\n const re = new RegExp(`(?:${expanded})$`, 'u');\n compiledPageStartPrev.set(ruleIndex, re);\n return re;\n };\n\n const getPrevPageLastNonWsChar = (boundaryIndex: number): string => {\n if (boundaryIndex <= 0) {\n return '';\n }\n const prevBoundary = pageMap.boundaries[boundaryIndex - 1];\n // prevBoundary.end points at the inserted page-break newline; the last content char is end-1.\n for (let i = prevBoundary.end - 1; i >= prevBoundary.start; i--) {\n const ch = matchContent[i];\n if (!ch) {\n continue;\n }\n if (/\\s/u.test(ch)) {\n continue;\n }\n return ch;\n }\n return '';\n };\n\n return (rule: SplitRule, ruleIndex: number, matchStart: number): boolean => {\n const boundaryIndex = pageStartToBoundaryIndex.get(matchStart);\n if (boundaryIndex === undefined || boundaryIndex === 0) {\n return true; // not a page start, or the very first page\n }\n const prevReq = getPageStartPrevRegex(rule, ruleIndex);\n if (!prevReq) {\n return true;\n }\n const lastChar = getPrevPageLastNonWsChar(boundaryIndex);\n if (!lastChar) {\n return false;\n }\n return prevReq.test(lastChar);\n };\n};\n\nexport const collectFastFuzzySplitPoints = (\n matchContent: string,\n pageMap: PageMap,\n fastFuzzyRules: FastFuzzyRule[],\n passesPageStartGuard: PageStartGuardChecker,\n): Map<number, SplitPoint[]> => {\n const splitPointsByRule = new Map<number, SplitPoint[]>();\n if (fastFuzzyRules.length === 0 || pageMap.boundaries.length === 0) {\n return splitPointsByRule;\n }\n\n // Stream page boundary cursor to avoid O(log n) getId calls in hot loop.\n let boundaryIdx = 0;\n let currentBoundary = pageMap.boundaries[boundaryIdx];\n const advanceBoundaryTo = (offset: number) => {\n while (currentBoundary && offset > currentBoundary.end && boundaryIdx < pageMap.boundaries.length - 1) {\n boundaryIdx++;\n currentBoundary = pageMap.boundaries[boundaryIdx];\n }\n };\n\n const recordSplitPoint = (ruleIndex: number, sp: SplitPoint) => {\n const arr = splitPointsByRule.get(ruleIndex);\n if (!arr) {\n splitPointsByRule.set(ruleIndex, [sp]);\n return;\n }\n arr.push(sp);\n };\n\n const isPageStart = (offset: number): boolean => offset === currentBoundary?.start;\n\n // Line starts are offset 0 and any char after '\\n'\n for (let lineStart = 0; lineStart <= matchContent.length; ) {\n advanceBoundaryTo(lineStart);\n const pageId = currentBoundary?.id ?? 0;\n\n if (lineStart >= matchContent.length) {\n break;\n }\n\n for (const { compiled, kind, rule, ruleIndex } of fastFuzzyRules) {\n const passesConstraints =\n (rule.min === undefined || pageId >= rule.min) &&\n (rule.max === undefined || pageId <= rule.max) &&\n !isPageExcluded(pageId, rule.exclude);\n if (!passesConstraints) {\n continue;\n }\n\n if (isPageStart(lineStart) && !passesPageStartGuard(rule, ruleIndex, lineStart)) {\n continue;\n }\n\n const end = matchFastFuzzyTokenAt(matchContent, lineStart, compiled);\n if (end === null) {\n continue;\n }\n\n const splitIndex = (rule.split ?? 'at') === 'at' ? lineStart : end;\n if (kind === 'startsWith') {\n recordSplitPoint(ruleIndex, { index: splitIndex, meta: rule.meta });\n } else {\n const markerLength = end - lineStart;\n recordSplitPoint(ruleIndex, {\n contentStartOffset: (rule.split ?? 'at') === 'at' ? 
markerLength : undefined,\n index: splitIndex,\n meta: rule.meta,\n });\n }\n }\n\n const nextNl = matchContent.indexOf('\\n', lineStart);\n if (nextNl === -1) {\n break;\n }\n lineStart = nextNl + 1;\n }\n\n return splitPointsByRule;\n};\n","/**\n * Normalizes line endings to Unix-style (`\\n`).\n *\n * Converts Windows (`\\r\\n`) and old Mac (`\\r`) line endings to Unix style\n * for consistent pattern matching across platforms.\n *\n * @param content - Raw content with potentially mixed line endings\n * @returns Content with all line endings normalized to `\\n`\n */\n// OPTIMIZATION: Fast-path when no \\r present (common case for Unix/Mac content)\nexport const normalizeLineEndings = (content: string) => {\n return content.includes('\\r') ? content.replace(/\\r\\n?/g, '\\n') : content;\n};\n","/**\n * Core segmentation engine for splitting Arabic text pages into logical segments.\n *\n * The segmenter takes an array of pages and applies pattern-based rules to\n * identify split points, producing segments with content, page references,\n * and optional metadata.\n *\n * @module segmenter\n */\n\nimport { applyBreakpoints } from './breakpoint-processor.js';\nimport { isPageExcluded } from './breakpoint-utils.js';\nimport {\n anyRuleAllowsId,\n extractNamedCaptures,\n filterByConstraints,\n getLastPositionalCapture,\n type MatchResult,\n} from './match-utils.js';\nimport { buildRuleRegex, processPattern } from './rule-regex.js';\nimport { applyReplacements } from './replace.js';\nimport {\n collectFastFuzzySplitPoints,\n createPageStartGuardChecker,\n partitionRulesForMatching,\n} from './segmenter-rule-utils.js';\nimport type { PageBoundary, PageMap, SplitPoint } from './segmenter-types.js';\nimport { normalizeLineEndings } from './textUtils.js';\nimport type { Page, Segment, SegmentationOptions, SplitRule } from './types.js';\n\n// buildRuleRegex + processPattern extracted to src/segmentation/rule-regex.ts\n\n/**\n * Builds a concatenated content string and page mapping from input pages.\n *\n * Pages are joined with newline characters, and a page map is created to\n * track which page each offset belongs to. 
This allows pattern matching\n * across page boundaries while preserving page reference information.\n *\n * @param pages - Array of input pages with id and content\n * @returns Concatenated content string and page mapping utilities\n *\n * @example\n * const pages = [\n * { id: 1, content: 'Page 1 text' },\n * { id: 2, content: 'Page 2 text' }\n * ];\n * const { content, pageMap } = buildPageMap(pages);\n * // content = 'Page 1 text\\nPage 2 text'\n * // pageMap.getId(0) = 1\n * // pageMap.getId(12) = 2\n */\nconst buildPageMap = (pages: Page[]): { content: string; normalizedPages: string[]; pageMap: PageMap } => {\n const boundaries: PageBoundary[] = [];\n const pageBreaks: number[] = []; // Sorted array for binary search\n let offset = 0;\n const parts: string[] = [];\n\n for (let i = 0; i < pages.length; i++) {\n const normalized = normalizeLineEndings(pages[i].content);\n boundaries.push({ end: offset + normalized.length, id: pages[i].id, start: offset });\n parts.push(normalized);\n if (i < pages.length - 1) {\n pageBreaks.push(offset + normalized.length); // Already in sorted order\n offset += normalized.length + 1;\n } else {\n offset += normalized.length;\n }\n }\n\n /**\n * Finds the page boundary containing the given offset using binary search.\n * O(log n) complexity for efficient lookup with many pages.\n *\n * @param off - Character offset to look up\n * @returns Page boundary or the last boundary as fallback\n */\n const findBoundary = (off: number): PageBoundary | undefined => {\n let lo = 0;\n let hi = boundaries.length - 1;\n\n while (lo <= hi) {\n const mid = (lo + hi) >>> 1; // Unsigned right shift for floor division\n const b = boundaries[mid];\n if (off < b.start) {\n hi = mid - 1;\n } else if (off > b.end) {\n lo = mid + 1;\n } else {\n return b;\n }\n }\n // Fallback to last boundary if not found\n return boundaries[boundaries.length - 1];\n };\n\n return {\n content: parts.join('\\n'),\n normalizedPages: parts, // OPTIMIZATION: Return already-normalized content for reuse\n pageMap: {\n boundaries,\n getId: (off: number) => findBoundary(off)?.id ?? 
0,\n pageBreaks,\n pageIds: boundaries.map((b) => b.id),\n },\n };\n};\n\n/**\n * Deduplicate split points by index, preferring ones with more information.\n *\n * Preference rules (when same index):\n * - Prefer a split with `contentStartOffset` (needed for `lineStartsAfter` marker stripping)\n * - Otherwise prefer a split with `meta` over one without\n */\nexport const dedupeSplitPoints = (splitPoints: SplitPoint[]): SplitPoint[] => {\n const byIndex = new Map<number, SplitPoint>();\n for (const p of splitPoints) {\n const existing = byIndex.get(p.index);\n if (!existing) {\n byIndex.set(p.index, p);\n continue;\n }\n const hasMoreInfo =\n (p.contentStartOffset !== undefined && existing.contentStartOffset === undefined) ||\n (p.meta !== undefined && existing.meta === undefined);\n if (hasMoreInfo) {\n byIndex.set(p.index, p);\n }\n }\n const unique = [...byIndex.values()];\n unique.sort((a, b) => a.index - b.index);\n return unique;\n};\n\n/**\n * If no structural rules produced segments, create a single segment spanning all pages.\n * This allows breakpoint processing to still run.\n */\nexport const ensureFallbackSegment = (\n segments: Segment[],\n pages: Page[],\n normalizedContent: string[],\n pageJoiner: 'space' | 'newline',\n): Segment[] => {\n if (segments.length > 0 || pages.length === 0) {\n return segments;\n }\n const firstPage = pages[0];\n const lastPage = pages[pages.length - 1];\n const joinChar = pageJoiner === 'newline' ? '\\n' : ' ';\n const allContent = normalizedContent.join(joinChar).trim();\n if (!allContent) {\n return segments;\n }\n const initialSeg: Segment = { content: allContent, from: firstPage.id };\n if (lastPage.id !== firstPage.id) {\n initialSeg.to = lastPage.id;\n }\n return [initialSeg];\n};\n\nconst collectSplitPointsFromRules = (rules: SplitRule[], matchContent: string, pageMap: PageMap): SplitPoint[] => {\n const passesPageStartGuard = createPageStartGuardChecker(matchContent, pageMap);\n const { combinableRules, fastFuzzyRules, standaloneRules } = partitionRulesForMatching(rules);\n\n // Store split points by rule index to apply occurrence filtering later.\n // Start with fast-fuzzy matches (if any) and then add regex-based matches.\n const splitPointsByRule = collectFastFuzzySplitPoints(matchContent, pageMap, fastFuzzyRules, passesPageStartGuard);\n\n // Process combinable rules in a single pass\n if (combinableRules.length > 0) {\n const ruleRegexes = combinableRules.map(({ rule, prefix }) => {\n const built = buildRuleRegex(rule, prefix);\n return {\n prefix,\n source: `(?<${prefix}>${built.regex.source})`,\n ...built,\n };\n });\n\n const combinedSource = ruleRegexes.map((r) => r.source).join('|');\n const combinedRegex = new RegExp(combinedSource, 'gm');\n\n combinedRegex.lastIndex = 0;\n let m = combinedRegex.exec(matchContent);\n\n while (m !== null) {\n // Find which rule matched by checking which prefix group is defined\n const matchedRuleIndex = combinableRules.findIndex(({ prefix }) => m?.groups?.[prefix] !== undefined);\n\n if (matchedRuleIndex !== -1) {\n const { rule, prefix, index: originalIndex } = combinableRules[matchedRuleIndex];\n const ruleInfo = ruleRegexes[matchedRuleIndex];\n\n // Extract named captures for this specific rule (stripping the prefix)\n const namedCaptures: Record<string, string> = {};\n if (m.groups) {\n for (const prefixedName of ruleInfo.captureNames) {\n if (m.groups[prefixedName] !== undefined) {\n const cleanName = prefixedName.slice(prefix.length);\n namedCaptures[cleanName] = m.groups[prefixedName];\n 
}\n }\n }\n\n // Handle lineStartsAfter content capture\n let capturedContent: string | undefined;\n let contentStartOffset: number | undefined;\n\n if (ruleInfo.usesLineStartsAfter) {\n // The internal content capture is named `${prefix}__content` (not a user capture).\n capturedContent = m.groups?.[`${prefix}__content`];\n if (capturedContent !== undefined) {\n // Calculate marker length: (full match length) - (content length)\n // Note: m[0] is the full match of the combined group\n const fullMatch = m.groups?.[prefix] || m[0];\n const markerLength = fullMatch.length - capturedContent.length;\n contentStartOffset = markerLength;\n }\n }\n\n // Check constraints\n const start = m.index;\n const end = m.index + m[0].length;\n const pageId = pageMap.getId(start);\n\n // Apply min/max/exclude page constraints\n const passesConstraints =\n (rule.min === undefined || pageId >= rule.min) &&\n (rule.max === undefined || pageId <= rule.max) &&\n !isPageExcluded(pageId, rule.exclude);\n\n if (passesConstraints) {\n if (!passesPageStartGuard(rule, originalIndex, start)) {\n // Skip false positives caused purely by page wrap.\n // Mid-page line starts are unaffected.\n continue;\n }\n const sp: SplitPoint = {\n capturedContent: undefined, // For combinable rules, we don't use captured content for the segment text\n contentStartOffset,\n index: (rule.split ?? 'at') === 'at' ? start : end,\n meta: rule.meta,\n namedCaptures: Object.keys(namedCaptures).length > 0 ? namedCaptures : undefined,\n };\n\n if (!splitPointsByRule.has(originalIndex)) {\n splitPointsByRule.set(originalIndex, []);\n }\n splitPointsByRule.get(originalIndex)!.push(sp);\n }\n }\n\n if (m[0].length === 0) {\n combinedRegex.lastIndex++;\n }\n m = combinedRegex.exec(matchContent);\n }\n }\n\n // Process standalone rules individually (legacy path)\n const collectSplitPointsFromRule = (rule: SplitRule, ruleIndex: number): void => {\n const { regex, usesCapture, captureNames, usesLineStartsAfter } = buildRuleRegex(rule);\n const allMatches = findMatches(matchContent, regex, usesCapture, captureNames);\n const constrainedMatches = filterByConstraints(allMatches, rule, pageMap.getId);\n const guarded = constrainedMatches.filter((m) => passesPageStartGuard(rule, ruleIndex, m.start));\n // We don't filter by occurrence here yet, we do it uniformly later\n // But wait, filterByConstraints returns MatchResult, we need SplitPoint\n\n const points = guarded.map((m) => {\n const isLineStartsAfter = usesLineStartsAfter && m.captured !== undefined;\n const markerLength = isLineStartsAfter ? m.end - m.captured!.length - m.start : 0;\n return {\n capturedContent: isLineStartsAfter ? undefined : m.captured,\n contentStartOffset: isLineStartsAfter ? markerLength : undefined,\n index: (rule.split ?? 'at') === 'at' ? 
m.start : m.end,\n meta: rule.meta,\n namedCaptures: m.namedCaptures,\n };\n });\n\n if (!splitPointsByRule.has(ruleIndex)) {\n splitPointsByRule.set(ruleIndex, []);\n }\n splitPointsByRule.get(ruleIndex)!.push(...points);\n };\n\n standaloneRules.forEach((rule) => {\n // Find original index\n const originalIndex = rules.indexOf(rule);\n collectSplitPointsFromRule(rule, originalIndex);\n });\n\n // Apply occurrence filtering and flatten\n const finalSplitPoints: SplitPoint[] = [];\n rules.forEach((rule, index) => {\n const points = splitPointsByRule.get(index);\n if (!points || points.length === 0) {\n return;\n }\n\n let filtered = points;\n if (rule.occurrence === 'first') {\n filtered = [points[0]];\n } else if (rule.occurrence === 'last') {\n filtered = [points[points.length - 1]];\n }\n\n finalSplitPoints.push(...filtered);\n });\n\n return finalSplitPoints;\n};\n\n/**\n * Executes a regex against content and extracts match results with capture information.\n *\n * @param content - Full content string to search\n * @param regex - Compiled regex with 'g' flag\n * @param usesCapture - Whether to extract captured content\n * @param captureNames - Names of expected named capture groups\n * @returns Array of match results with positions and captures\n */\nconst findMatches = (content: string, regex: RegExp, usesCapture: boolean, captureNames: string[]) => {\n const matches: MatchResult[] = [];\n regex.lastIndex = 0;\n let m = regex.exec(content);\n\n while (m !== null) {\n const result: MatchResult = { end: m.index + m[0].length, start: m.index };\n\n // Extract named captures if present\n result.namedCaptures = extractNamedCaptures(m.groups, captureNames);\n\n // For lineStartsAfter, get the last positional capture (the .* content)\n if (usesCapture) {\n result.captured = getLastPositionalCapture(m);\n }\n\n matches.push(result);\n\n if (m[0].length === 0) {\n regex.lastIndex++;\n }\n m = regex.exec(content);\n }\n\n return matches;\n};\n\n/**\n * Finds page breaks within a given offset range using binary search.\n * O(log n + k) where n = total breaks, k = breaks in range.\n *\n * @param startOffset - Start of range (inclusive)\n * @param endOffset - End of range (exclusive)\n * @param sortedBreaks - Sorted array of page break offsets\n * @returns Array of break offsets relative to startOffset\n */\nconst findBreaksInRange = (startOffset: number, endOffset: number, sortedBreaks: number[]) => {\n if (sortedBreaks.length === 0) {\n return [];\n }\n\n // Binary search for first break >= startOffset\n let lo = 0;\n let hi = sortedBreaks.length;\n while (lo < hi) {\n const mid = (lo + hi) >>> 1;\n if (sortedBreaks[mid] < startOffset) {\n lo = mid + 1;\n } else {\n hi = mid;\n }\n }\n\n // Collect breaks until we exceed endOffset\n const result: number[] = [];\n for (let i = lo; i < sortedBreaks.length && sortedBreaks[i] < endOffset; i++) {\n result.push(sortedBreaks[i] - startOffset);\n }\n return result;\n};\n\n/**\n * Converts page-break newlines to spaces in segment content.\n *\n * When a segment spans multiple pages, the newline characters that were\n * inserted as page separators during concatenation are converted to spaces\n * for more natural reading.\n *\n * Uses binary search for O(log n + k) lookup instead of O(n) iteration.\n *\n * @param content - Segment content string\n * @param startOffset - Starting offset of this content in concatenated string\n * @param pageBreaks - Sorted array of page break offsets\n * @returns Content with page-break newlines converted to spaces\n 
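`findBreaksInRange` is module-private, so it cannot be imported directly; the following standalone sketch re-states the binary-search lookup it documents, including the detail that returned offsets are relative to the segment start.

```ts
// Sketch only: not the library's internal implementation.
const breaksInRange = (start: number, end: number, sortedBreaks: number[]): number[] => {
    let lo = 0;
    let hi = sortedBreaks.length;
    while (lo < hi) {
        const mid = (lo + hi) >>> 1; // binary search for the first break >= start
        if (sortedBreaks[mid] < start) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    const out: number[] = [];
    for (let i = lo; i < sortedBreaks.length && sortedBreaks[i] < end; i++) {
        out.push(sortedBreaks[i] - start); // offsets relative to the segment start
    }
    return out;
};

console.log(breaksInRange(100, 200, [40, 120, 180, 250])); // → [20, 80]
```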
*/\nconst convertPageBreaks = (content: string, startOffset: number, pageBreaks: number[]): string => {\n // OPTIMIZATION: Fast-path for empty or no-newline content (common cases)\n if (!content || !content.includes('\\n')) {\n return content;\n }\n\n const endOffset = startOffset + content.length;\n const breaksInRange = findBreaksInRange(startOffset, endOffset, pageBreaks);\n\n // No page breaks in this segment - return as-is (most common case)\n if (breaksInRange.length === 0) {\n return content;\n }\n\n // Convert ONLY page-break newlines (the ones inserted during concatenation) to spaces.\n //\n // NOTE: Offsets from findBreaksInRange are string indices (code units). Using Array.from()\n // would index by Unicode code points and can desync indices if surrogate pairs appear.\n const breakSet = new Set(breaksInRange);\n return content.replace(/\\n/g, (match, offset: number) => (breakSet.has(offset) ? ' ' : match));\n};\n\n/**\n * Applies breakpoints to oversized segments.\n *\n * For each segment that spans more than maxPages, tries the breakpoint patterns\n * in order to find a suitable split point. Structural markers (from rules) are\n * always respected - segments are only broken within their boundaries.\n *\n * @param segments - Initial segments from rule processing\n * @param pages - Original pages for page lookup\n * @param maxPages - Maximum pages before breakpoints apply\n * @param breakpoints - Patterns to try in order (tokens supported)\n * @param prefer - 'longer' for last match, 'shorter' for first match\n * @returns Processed segments with oversized ones broken up\n */\n// applyBreakpoints implementation moved to breakpoint-processor.ts to reduce complexity in this module.\n\n/**\n * Segments pages of content based on pattern-matching rules.\n *\n * This is the main entry point for the segmentation engine. It takes an array\n * of pages and applies the provided rules to identify split points, producing\n * an array of segments with content, page references, and metadata.\n *\n * @param pages - Array of pages with id and content\n * @param options - Segmentation options including splitting rules\n * @returns Array of segments with content, from/to page references, and optional metadata\n *\n * @example\n * // Split markdown by headers\n * const segments = segmentPages(pages, {\n * rules: [\n * { lineStartsWith: ['## '], split: 'at', meta: { type: 'chapter' } }\n * ]\n * });\n *\n * @example\n * // Split Arabic hadith text with number extraction\n * const segments = segmentPages(pages, {\n * rules: [\n * {\n * lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '],\n * split: 'at',\n * fuzzy: true,\n * meta: { type: 'hadith' }\n * }\n * ]\n * });\n *\n * @example\n * // Multiple rules with page constraints\n * const segments = segmentPages(pages, {\n * rules: [\n * { lineStartsWith: ['{{kitab}}'], split: 'at', meta: { type: 'book' } },\n * { lineStartsWith: ['{{bab}}'], split: 'at', min: 10, meta: { type: 'chapter' } },\n * { regex: '^[٠-٩]+ - ', split: 'at', meta: { type: 'hadith' } }\n * ]\n * });\n */\nexport const segmentPages = (pages: Page[], options: SegmentationOptions): Segment[] => {\n const { rules = [], maxPages = 0, breakpoints = [], prefer = 'longer', pageJoiner = 'space', logger } = options;\n\n const processedPages = options.replace ? 
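The JSDoc examples above only exercise `rules`. A hedged sketch of the breakpoint-related options follows; the import specifier, the sample pages, and the choice of `'{{tarqim}}'` as a breakpoint pattern are illustrative assumptions.

```ts
import { segmentPages } from 'flappa-doormal'; // import path assumed

const pages = [
    { id: 1, content: 'باب الإيمان\nحديث طويل يمتد عبر صفحات متعددة ...' },
    { id: 2, content: '... تتمة الحديث. ثم كلام آخر.' },
];

// Break any segment spanning more than one page at a punctuation pattern,
// preferring the longest piece that still fits ('longer').
const segments = segmentPages(pages, {
    rules: [{ lineStartsWith: ['{{bab}}'], split: 'at', meta: { type: 'chapter' } }],
    maxPages: 1,
    breakpoints: ['{{tarqim}}'],
    prefer: 'longer',
    pageJoiner: 'space',
});
```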
applyReplacements(pages, options.replace) : pages;\n const { content: matchContent, normalizedPages: normalizedContent, pageMap } = buildPageMap(processedPages);\n const splitPoints = collectSplitPointsFromRules(rules, matchContent, pageMap);\n const unique = dedupeSplitPoints(splitPoints);\n\n // Build initial segments from structural rules\n let segments = buildSegments(unique, matchContent, pageMap, rules);\n\n segments = ensureFallbackSegment(segments, processedPages, normalizedContent, pageJoiner);\n\n // Apply breakpoints post-processing for oversized segments\n if (maxPages >= 0 && breakpoints.length) {\n const patternProcessor = (p: string) => processPattern(p, false).pattern;\n return applyBreakpoints(\n segments,\n processedPages,\n normalizedContent,\n maxPages,\n breakpoints,\n prefer,\n patternProcessor,\n logger,\n pageJoiner,\n );\n }\n\n return segments;\n};\n\n/**\n * Creates segment objects from split points.\n *\n * Handles segment creation including:\n * - Content extraction (with captured content for `lineStartsAfter`)\n * - Page break conversion to spaces\n * - From/to page reference calculation\n * - Metadata merging (static + named captures)\n *\n * @param splitPoints - Sorted, unique split points\n * @param content - Full concatenated content string\n * @param pageMap - Page mapping utilities\n * @param rules - Original rules (for constraint checking on first segment)\n * @returns Array of segment objects\n */\nconst buildSegments = (splitPoints: SplitPoint[], content: string, pageMap: PageMap, rules: SplitRule[]): Segment[] => {\n /**\n * Creates a single segment from a content range.\n */\n const createSegment = (\n start: number,\n end: number,\n meta?: Record<string, unknown>,\n capturedContent?: string,\n namedCaptures?: Record<string, string>,\n contentStartOffset?: number,\n ): Segment | null => {\n // For lineStartsAfter, skip the marker by using contentStartOffset\n const actualStart = start + (contentStartOffset ?? 0);\n // For lineStartsAfter (contentStartOffset set), trim leading whitespace after marker\n // For other rules, only trim trailing whitespace to preserve intentional leading spaces\n const sliced = content.slice(actualStart, end);\n let text = capturedContent?.trim() ?? (contentStartOffset ? sliced.trim() : sliced.replace(/[\\s\\n]+$/, ''));\n if (!text) {\n return null;\n }\n if (!capturedContent) {\n text = convertPageBreaks(text, actualStart, pageMap.pageBreaks);\n }\n const from = pageMap.getId(actualStart);\n const to = capturedContent ? pageMap.getId(end - 1) : pageMap.getId(actualStart + text.length - 1);\n const seg: Segment = { content: text, from };\n if (to !== from) {\n seg.to = to;\n }\n if (meta || namedCaptures) {\n seg.meta = { ...meta, ...namedCaptures };\n }\n return seg;\n };\n\n /**\n * Creates segments from an array of split points.\n */\n const createSegmentsFromSplitPoints = (): Segment[] => {\n const result: Segment[] = [];\n for (let i = 0; i < splitPoints.length; i++) {\n const sp = splitPoints[i];\n const end = i < splitPoints.length - 1 ? 
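How `createSegment` combines `contentStartOffset` stripping with named-capture metadata is easiest to see end to end. A hedged usage sketch; the import path and the expected output shape are assumptions read off the code above.

```ts
import { segmentPages } from 'flappa-doormal'; // import path assumed

const pages = [{ id: 1, content: '٣٤ - حدثنا عبد الله بن يوسف' }];

const segments = segmentPages(pages, {
    rules: [{ lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at', meta: { type: 'hadith' } }],
});

// Expected shape: the numeric marker is stripped via contentStartOffset and the
// named capture is merged into meta alongside the static meta.
// → [{ content: 'حدثنا عبد الله بن يوسف', from: 1, meta: { type: 'hadith', hadithNum: '٣٤' } }]
console.log(segments);
```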
splitPoints[i + 1].index : content.length;\n const s = createSegment(\n sp.index,\n end,\n sp.meta,\n sp.capturedContent,\n sp.namedCaptures,\n sp.contentStartOffset,\n );\n if (s) {\n result.push(s);\n }\n }\n return result;\n };\n\n const segments: Segment[] = [];\n\n // Handle case with no split points\n if (!splitPoints.length) {\n const firstId = pageMap.getId(0);\n if (anyRuleAllowsId(rules, firstId)) {\n const s = createSegment(0, content.length);\n if (s) {\n segments.push(s);\n }\n }\n return segments;\n }\n\n // Add first segment if there's content before first split\n if (splitPoints[0].index > 0) {\n const firstId = pageMap.getId(0);\n if (anyRuleAllowsId(rules, firstId)) {\n const s = createSegment(0, splitPoints[0].index);\n if (s) {\n segments.push(s);\n }\n }\n }\n\n // Create segments from split points using extracted utility\n return [...segments, ...createSegmentsFromSplitPoints()];\n};\n","import { normalizeLineEndings } from './segmentation/textUtils.js';\nimport { getAvailableTokens, TOKEN_PATTERNS } from './segmentation/tokens.js';\nimport type { Page } from './segmentation/types.js';\n\nexport type LineStartAnalysisOptions = {\n /** Return top K patterns (after filtering). Default: 20 */\n topK?: number;\n /** Only consider the first N characters of each trimmed line. Default: 60 */\n prefixChars?: number;\n /** Ignore lines shorter than this (after trimming). Default: 6 */\n minLineLength?: number;\n /** Only include patterns that appear at least this many times. Default: 3 */\n minCount?: number;\n /** Keep up to this many example lines per pattern. Default: 5 */\n maxExamples?: number;\n /**\n * If true, include a literal first word when no token match is found at the start.\n * Default: true\n */\n includeFirstWordFallback?: boolean;\n /**\n * If true, strip Arabic diacritics (harakat/tashkeel) for the purposes of matching tokens.\n * This helps patterns like `وأَخْبَرَنَا` match the `{{naql}}` token (`وأخبرنا`).\n *\n * Note: examples are still stored in their original (unstripped) form.\n *\n * Default: true\n */\n normalizeArabicDiacritics?: boolean;\n /**\n * How to sort patterns before applying `topK`.\n *\n * - `specificity` (default): prioritize more structured prefixes first (tokenCount, then literalLen), then count.\n * - `count`: prioritize highest-frequency patterns first, then specificity.\n */\n sortBy?: 'specificity' | 'count';\n /**\n * Optional filter to restrict which lines are analyzed.\n *\n * The `line` argument is the trimmed + whitespace-collapsed version of the line.\n * Return `true` to include it, `false` to skip it.\n *\n * @example\n * // Only analyze markdown H2 headings\n * { lineFilter: (line) => line.startsWith('## ') }\n */\n lineFilter?: (line: string, pageId: number) => boolean;\n /**\n * Optional list of prefix matchers to consume before tokenization.\n *\n * This is for \"syntactic\" prefixes that are common at line start but are not\n * meaningful as tokens by themselves (e.g. markdown headings like `##`).\n *\n * Each matcher is applied at the current position. 
If it matches, the matched\n * text is appended (escaped) to the signature and the scanner advances.\n *\n * @example\n * // Support markdown blockquotes and headings\n * { prefixMatchers: [/^>+/u, /^#+/u] }\n */\n prefixMatchers?: RegExp[];\n /**\n * How to represent whitespace in returned `pattern` signatures.\n *\n * - `regex` (default): use `\\\\s*` placeholders between tokens (useful if you paste patterns into regex-ish templates).\n * - `space`: use literal single spaces (`' '`) between tokens (safer if you don't want `\\\\s` to match newlines when reused as regex).\n */\n whitespace?: 'regex' | 'space';\n};\n\nexport type LineStartPatternExample = { line: string; pageId: number };\n\nexport type CommonLineStartPattern = {\n pattern: string;\n count: number;\n examples: LineStartPatternExample[];\n};\n\nconst countTokenMarkers = (pattern: string): number => (pattern.match(/\\{\\{/g) ?? []).length;\n\nconst stripWhitespacePlaceholders = (pattern: string): string =>\n // Remove both the regex placeholder and literal spaces/tabs since they are not meaningful \"constraints\"\n pattern.replace(/\\\\s\\*/g, '').replace(/[ \\t]+/g, '');\n\n// Heuristic: higher is \"more precise\".\n// - More tokens usually means more structured prefix\n// - More literal characters (after removing \\s*) indicates more constraints (e.g. \":\" or \"[\")\nconst computeSpecificity = (pattern: string): { literalLen: number; tokenCount: number } => {\n const tokenCount = countTokenMarkers(pattern);\n const literalLen = stripWhitespacePlaceholders(pattern).length;\n return { literalLen, tokenCount };\n};\n\ntype ResolvedLineStartAnalysisOptions = Required<Omit<LineStartAnalysisOptions, 'lineFilter' | 'prefixMatchers'>> & {\n lineFilter?: LineStartAnalysisOptions['lineFilter'];\n prefixMatchers: RegExp[];\n};\n\nconst DEFAULT_OPTIONS: ResolvedLineStartAnalysisOptions = {\n includeFirstWordFallback: true,\n lineFilter: undefined,\n maxExamples: 1,\n minCount: 3,\n minLineLength: 6,\n normalizeArabicDiacritics: true,\n prefixChars: 60,\n prefixMatchers: [/^#+/u],\n sortBy: 'specificity',\n topK: 40,\n whitespace: 'regex',\n};\n\n// For analysis signatures we avoid escaping ()[] because:\n// - These are commonly used literally in texts (e.g., \"(ح)\")\n// - When signatures are later used in template patterns, ()[] are auto-escaped there\n// We still escape other regex metacharacters to keep signatures safe if reused as templates.\nconst escapeSignatureLiteral = (s: string): string => s.replace(/[.*+?^${}|\\\\{}]/g, '\\\\$&');\n\n// Keep this intentionally focused on \"useful at line start\" tokens, avoiding overly-generic tokens like {{harf}}.\nconst TOKEN_PRIORITY_ORDER: string[] = [\n 'basmalah',\n 'kitab',\n 'bab',\n 'fasl',\n 'naql',\n 'rumuz',\n 'numbered',\n 'raqms',\n 'raqm',\n 'dash',\n 'bullet',\n 'tarqim',\n];\n\nconst buildTokenPriority = (): string[] => {\n const allTokens = new Set(getAvailableTokens());\n // IMPORTANT: We only use an explicit allow-list here.\n // Including \"all remaining tokens\" adds overly-generic tokens (e.g., harf) which makes signatures noisy.\n return TOKEN_PRIORITY_ORDER.filter((t) => allTokens.has(t));\n};\n\nconst collapseWhitespace = (s: string): string => s.replace(/\\s+/g, ' ').trim();\n\n// Arabic diacritics / tashkeel marks that commonly appear in Shamela texts.\n// This is intentionally conservative: remove combining marks but keep letters.\nconst stripArabicDiacritics = (s: string): string =>\n // harakat + common Quranic marks + tatweel\n 
s.replace(/[\\u064B-\\u065F\\u0670\\u06D6-\\u06ED\\u0640]/gu, '');\n\ntype CompiledTokenRegex = { token: string; re: RegExp };\n\nconst compileTokenRegexes = (tokenNames: string[]): CompiledTokenRegex[] => {\n const compiled: CompiledTokenRegex[] = [];\n for (const token of tokenNames) {\n const pat = TOKEN_PATTERNS[token];\n if (!pat) {\n continue;\n }\n try {\n compiled.push({ re: new RegExp(pat, 'uy'), token });\n } catch {\n // Ignore invalid patterns\n }\n }\n return compiled;\n};\n\nconst appendWs = (out: string, mode: 'regex' | 'space'): string => {\n if (!out) {\n return out;\n }\n if (mode === 'space') {\n return out.endsWith(' ') ? out : `${out} `;\n }\n return out.endsWith('\\\\s*') ? out : `${out}\\\\s*`;\n};\n\nconst consumeLeadingPrefixes = (\n s: string,\n pos: number,\n out: string,\n prefixMatchers: RegExp[],\n whitespace: 'regex' | 'space',\n): { matchedAny: boolean; out: string; pos: number } => {\n let matchedAny = false;\n let currentPos = pos;\n let currentOut = out;\n\n for (const re of prefixMatchers) {\n if (currentPos >= s.length) {\n break;\n }\n const m = re.exec(s.slice(currentPos));\n if (!m || m.index !== 0 || !m[0]) {\n continue;\n }\n\n currentOut += escapeSignatureLiteral(m[0]);\n currentPos += m[0].length;\n matchedAny = true;\n\n const wsAfter = /^[ \\t]+/u.exec(s.slice(currentPos));\n if (wsAfter) {\n currentPos += wsAfter[0].length;\n currentOut = appendWs(currentOut, whitespace);\n }\n }\n\n return { matchedAny, out: currentOut, pos: currentPos };\n};\n\nconst findBestTokenMatchAt = (\n s: string,\n pos: number,\n compiled: CompiledTokenRegex[],\n isArabicLetter: (ch: string) => boolean,\n): { token: string; text: string } | null => {\n let best: { token: string; text: string } | null = null;\n for (const { token, re } of compiled) {\n re.lastIndex = pos;\n const m = re.exec(s);\n if (!m || m.index !== pos) {\n continue;\n }\n if (!best || m[0].length > best.text.length) {\n best = { text: m[0], token };\n }\n }\n\n if (best?.token === 'rumuz') {\n const end = pos + best.text.length;\n const next = end < s.length ? s[end] : '';\n if (next && isArabicLetter(next) && !/\\s/u.test(next)) {\n return null;\n }\n }\n\n return best;\n};\n\nconst tokenizeLineStart = (\n line: string,\n tokenNames: string[],\n prefixChars: number,\n includeFirstWordFallback: boolean,\n normalizeArabicDiacritics: boolean,\n prefixMatchers: RegExp[],\n whitespace: 'regex' | 'space',\n): string | null => {\n const trimmed = collapseWhitespace(line);\n if (!trimmed) {\n return null;\n }\n\n const s = (normalizeArabicDiacritics ? 
stripArabicDiacritics(trimmed) : trimmed).slice(0, prefixChars);\n let pos = 0;\n let out = '';\n let matchedAny = false;\n let matchedToken = false;\n\n // Pre-compile regexes once per call (tokenNames is small); use sticky to match at position.\n const compiled = compileTokenRegexes(tokenNames);\n\n // IMPORTANT: do NOT treat all Arabic-block codepoints as \"letters\" (it includes punctuation like \"،\").\n // We only want to consider actual letters here for the rumuz boundary guard.\n const isArabicLetter = (ch: string): boolean => /\\p{Script=Arabic}/u.test(ch) && /\\p{L}/u.test(ch);\n const isCommonDelimiter = (ch: string): boolean => /[::\\-–—ـ،؛.?!؟()[\\]{}]/u.test(ch);\n\n {\n const consumed = consumeLeadingPrefixes(s, pos, out, prefixMatchers, whitespace);\n pos = consumed.pos;\n out = consumed.out;\n matchedAny = consumed.matchedAny;\n }\n\n // Scan forward at most a few *token* steps to avoid producing huge unique strings.\n // Whitespace and delimiters do not count toward the token step budget.\n let tokenSteps = 0;\n while (tokenSteps < 6 && pos < s.length) {\n // Skip whitespace and represent it as \\\\s*\n const wsMatch = /^[ \\t]+/u.exec(s.slice(pos));\n if (wsMatch) {\n pos += wsMatch[0].length;\n out = appendWs(out, whitespace);\n continue;\n }\n\n const best = findBestTokenMatchAt(s, pos, compiled, isArabicLetter);\n\n if (best) {\n if (out && !out.endsWith('\\\\s*')) {\n // If we have no whitespace but are concatenating tokens, keep it literal.\n }\n out += `{{${best.token}}}`;\n matchedAny = true;\n matchedToken = true;\n pos += best.text.length;\n tokenSteps++;\n continue;\n }\n\n // After matching tokens, allow common delimiters (like ':' in \"١١٢٨ ع:\") to become part of the signature.\n if (matchedAny) {\n const ch = s[pos];\n if (ch && isCommonDelimiter(ch)) {\n out += escapeSignatureLiteral(ch);\n pos += 1;\n continue;\n }\n }\n\n // If we already matched something token-y, stop at first unknown content to avoid overfitting.\n if (matchedAny) {\n // Exception: if we only matched a generic prefix (e.g., \"##\") and no tokens yet,\n // allow the first-word fallback to capture the next word to show heading variations.\n if (includeFirstWordFallback && !matchedToken) {\n const firstWord = (s.slice(pos).match(/^[^\\s:،؛.?!؟]+/u) ?? [])[0];\n if (!firstWord) {\n break;\n }\n out += escapeSignatureLiteral(firstWord);\n tokenSteps++;\n }\n break;\n }\n\n if (!includeFirstWordFallback) {\n return null;\n }\n\n // Fallback: include the first word as a literal (escaped), then stop.\n const firstWord = (s.slice(pos).match(/^[^\\s:،؛.?!؟]+/u) ?? 
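The token scan above relies on sticky (`y`-flagged) regexes so a pattern is only accepted when it matches exactly at the current position. A standalone sketch of that detail (the digit pattern and the sample string are made up):

```ts
// Sketch only: mirrors the lastIndex/sticky technique used by the scanner above.
const matchesAt = (re: RegExp, s: string, pos: number): string | null => {
    const sticky = new RegExp(re.source, 'uy'); // 'y' anchors the match at lastIndex
    sticky.lastIndex = pos;
    const m = sticky.exec(s);
    return m ? m[0] : null;
};

console.log(matchesAt(/[٠-٩]+/, '٣٤ - حدثنا', 0)); // → '٣٤' (digits start at position 0)
console.log(matchesAt(/[٠-٩]+/, '٣٤ - حدثنا', 5)); // → null (position 5 is a letter)
```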
[])[0];\n if (!firstWord) {\n return null;\n }\n out += escapeSignatureLiteral(firstWord);\n tokenSteps++;\n return out;\n }\n\n if (!matchedAny) {\n return null;\n }\n // Avoid trailing whitespace placeholder noise.\n if (whitespace === 'regex') {\n while (out.endsWith('\\\\s*')) {\n out = out.slice(0, -3);\n }\n } else {\n while (out.endsWith(' ')) {\n out = out.slice(0, -1);\n }\n }\n return out;\n};\n\n/**\n * Analyze pages and return the most common line-start patterns (top K).\n *\n * This is a pure algorithmic heuristic: it tokenizes common prefixes into a stable\n * template-ish string using the library tokens (e.g., `{{bab}}`, `{{raqms}}`, `{{rumuz}}`).\n */\nexport const analyzeCommonLineStarts = (\n pages: Page[],\n options: LineStartAnalysisOptions = {},\n): CommonLineStartPattern[] => {\n const o: ResolvedLineStartAnalysisOptions = {\n ...DEFAULT_OPTIONS,\n ...options,\n // Ensure defaults are kept if caller doesn't pass these (or passes undefined).\n lineFilter: options.lineFilter ?? DEFAULT_OPTIONS.lineFilter,\n prefixMatchers: options.prefixMatchers ?? DEFAULT_OPTIONS.prefixMatchers,\n whitespace: options.whitespace ?? DEFAULT_OPTIONS.whitespace,\n };\n const tokenPriority = buildTokenPriority();\n\n const counts = new Map<string, { count: number; examples: LineStartPatternExample[] }>();\n\n for (const page of pages) {\n const normalized = normalizeLineEndings(page.content ?? '');\n const lines = normalized.split('\\n');\n for (const line of lines) {\n const trimmed = collapseWhitespace(line);\n if (trimmed.length < o.minLineLength) {\n continue;\n }\n if (o.lineFilter && !o.lineFilter(trimmed, page.id)) {\n continue;\n }\n\n const sig = tokenizeLineStart(\n trimmed,\n tokenPriority,\n o.prefixChars,\n o.includeFirstWordFallback,\n o.normalizeArabicDiacritics,\n o.prefixMatchers,\n o.whitespace,\n );\n if (!sig) {\n continue;\n }\n\n const existing = counts.get(sig);\n if (!existing) {\n counts.set(sig, { count: 1, examples: [{ line: trimmed, pageId: page.id }] });\n } else {\n existing.count++;\n if (existing.examples.length < o.maxExamples) {\n existing.examples.push({ line: trimmed, pageId: page.id });\n }\n }\n }\n }\n\n const compareSpecificityThenCount = (a: CommonLineStartPattern, b: CommonLineStartPattern): number => {\n const sa = computeSpecificity(a.pattern);\n const sb = computeSpecificity(b.pattern);\n // Most precise first\n if (sb.tokenCount !== sa.tokenCount) {\n return sb.tokenCount - sa.tokenCount;\n }\n if (sb.literalLen !== sa.literalLen) {\n return sb.literalLen - sa.literalLen;\n }\n // Then by frequency\n if (b.count !== a.count) {\n return b.count - a.count;\n }\n return a.pattern.localeCompare(b.pattern);\n };\n\n const compareCountThenSpecificity = (a: CommonLineStartPattern, b: CommonLineStartPattern): number => {\n if (b.count !== a.count) {\n return b.count - a.count;\n }\n return compareSpecificityThenCount(a, b);\n };\n\n const sorted: CommonLineStartPattern[] = [...counts.entries()]\n .map(([pattern, v]) => ({ count: v.count, examples: v.examples, pattern }))\n .filter((p) => p.count >= o.minCount)\n .sort(o.sortBy === 'count' ? 
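Putting the analyzer together, a hedged usage sketch of `analyzeCommonLineStarts`; the import specifier, the tiny sample corpus, and the expected signature are illustrative (the exact token chosen depends on the token definitions).

```ts
import { analyzeCommonLineStarts } from 'flappa-doormal'; // import path assumed

const pages = [
    { id: 1, content: '١٢ - حدثنا أبو بكر عن النبي\n١٣ - حدثنا علي بن المديني\nوهذا سطر لا يبدأ برقم' },
    { id: 2, content: '١٤ - حدثنا محمد بن بشار' },
];

// minCount is lowered only because the sample is tiny (the default is 3).
const patterns = analyzeCommonLineStarts(pages, { minCount: 2, topK: 5, whitespace: 'space' });
console.log(patterns);
// Expected shape, roughly: [{ pattern: '{{raqms}} {{dash}} {{naql}}', count: 3, examples: [...] }]
```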
compareCountThenSpecificity : compareSpecificityThenCount);\n\n return sorted.slice(0, o.topK);\n};\n","/**\n * Pattern detection utilities for recognizing template tokens in Arabic text.\n * Used to auto-detect patterns from user-highlighted text in the segmentation dialog.\n *\n * @module pattern-detection\n */\n\nimport { getAvailableTokens, TOKEN_PATTERNS } from './segmentation/tokens.js';\n\n/**\n * Result of detecting a token pattern in text\n */\nexport type DetectedPattern = {\n /** Token name from TOKEN_PATTERNS (e.g., 'raqms', 'dash') */\n token: string;\n /** The matched text */\n match: string;\n /** Start index in the original text */\n index: number;\n /** End index (exclusive) */\n endIndex: number;\n};\n\n/**\n * Token detection order - more specific patterns first to avoid partial matches.\n * Example: 'raqms' before 'raqm' so \"٣٤\" matches 'raqms' not just the first digit.\n *\n * Tokens not in this list are appended in alphabetical order from TOKEN_PATTERNS.\n */\nconst TOKEN_PRIORITY_ORDER = [\n 'basmalah', // Most specific - full phrase\n 'kitab',\n 'bab',\n 'fasl',\n 'naql',\n 'rumuz', // Source abbreviations (e.g., \"خت\", \"خ سي\", \"٤\")\n 'numbered', // Composite: raqms + dash\n 'raqms', // Multiple digits before single digit\n 'raqm',\n 'tarqim',\n 'bullet',\n 'dash',\n 'harf',\n];\n\n/**\n * Gets the token detection priority order.\n * Returns tokens in priority order, with any TOKEN_PATTERNS not in the priority list appended.\n */\nconst getTokenPriority = () => {\n const allTokens = getAvailableTokens();\n const prioritized = TOKEN_PRIORITY_ORDER.filter((t) => allTokens.includes(t));\n const remaining = allTokens.filter((t) => !TOKEN_PRIORITY_ORDER.includes(t)).sort();\n return [...prioritized, ...remaining];\n};\n\nconst isRumuzStandalone = (text: string, startIndex: number, endIndex: number): boolean => {\n // We want rumuz to behave like a standalone marker (e.g. \"س:\" or \"خت ٤:\"),\n // not a substring match inside normal Arabic words (e.g. \"إِبْرَاهِيم\").\n const before = startIndex > 0 ? text[startIndex - 1] : '';\n const after = endIndex < text.length ? 
text[endIndex] : '';\n\n const isWhitespace = (ch: string): boolean => !!ch && /\\s/u.test(ch);\n const isOpenBracket = (ch: string): boolean => !!ch && /[([{]/u.test(ch);\n const isRightDelimiter = (ch: string): boolean => !!ch && /[::\\-–—ـ،؛.?!؟)\\]}]/u.test(ch);\n\n // Treat any Arabic-block codepoint (letters + diacritics + digits) as \"wordy\" context.\n // Unicode Script properties can classify some combining marks as \"Inherited\", so we avoid \\p{Script=Arabic}.\n const isArabicWordy = (ch: string): boolean => !!ch && /[\\u0600-\\u06FF]/u.test(ch);\n\n const leftOk = !before || isWhitespace(before) || isOpenBracket(before) || !isArabicWordy(before);\n const rightOk = !after || isWhitespace(after) || isRightDelimiter(after) || !isArabicWordy(after);\n\n return leftOk && rightOk;\n};\n\n/**\n * Analyzes text and returns all detected token patterns with their positions.\n * Patterns are detected in priority order to avoid partial matches.\n *\n * @param text - The text to analyze for token patterns\n * @returns Array of detected patterns sorted by position\n *\n * @example\n * detectTokenPatterns(\"٣٤ - حدثنا\")\n * // Returns: [\n * // { token: 'raqms', match: '٣٤', index: 0, endIndex: 2 },\n * // { token: 'dash', match: '-', index: 3, endIndex: 4 },\n * // { token: 'naql', match: 'حدثنا', index: 5, endIndex: 10 }\n * // ]\n */\nexport const detectTokenPatterns = (text: string) => {\n if (!text) {\n return [];\n }\n\n const results: DetectedPattern[] = [];\n const coveredRanges: Array<[number, number]> = [];\n\n // Check if a position is already covered by a detected pattern\n const isPositionCovered = (start: number, end: number): boolean => {\n return coveredRanges.some(\n ([s, e]) => (start >= s && start < e) || (end > s && end <= e) || (start <= s && end >= e),\n );\n };\n\n // Process tokens in priority order\n for (const tokenName of getTokenPriority()) {\n const pattern = TOKEN_PATTERNS[tokenName];\n if (!pattern) {\n continue;\n }\n\n try {\n // Create a global regex to find all matches\n const regex = new RegExp(`(${pattern})`, 'gu');\n let match: RegExpExecArray | null;\n\n // biome-ignore lint/suspicious/noAssignInExpressions: standard regex exec loop pattern\n while ((match = regex.exec(text)) !== null) {\n const startIndex = match.index;\n const endIndex = startIndex + match[0].length;\n\n if (tokenName === 'rumuz' && !isRumuzStandalone(text, startIndex, endIndex)) {\n continue;\n }\n\n // Skip if this range overlaps with an already detected pattern\n if (isPositionCovered(startIndex, endIndex)) {\n continue;\n }\n\n results.push({ endIndex, index: startIndex, match: match[0], token: tokenName });\n\n coveredRanges.push([startIndex, endIndex]);\n }\n } catch {}\n }\n\n return results.sort((a, b) => a.index - b.index);\n};\n\n/**\n * Generates a template pattern from text using detected tokens.\n * Replaces matched portions with {{token}} syntax.\n *\n * @param text - Original text\n * @param detected - Array of detected patterns from detectTokenPatterns\n * @returns Template string with tokens, e.g., \"{{raqms}} {{dash}} \"\n *\n * @example\n * const detected = detectTokenPatterns(\"٣٤ - \");\n * generateTemplateFromText(\"٣٤ - \", detected);\n * // Returns: \"{{raqms}} {{dash}} \"\n */\nexport const generateTemplateFromText = (text: string, detected: DetectedPattern[]) => {\n if (!text || detected.length === 0) {\n return text;\n }\n\n // Build template by replacing detected patterns with tokens\n // Process in reverse order to preserve indices\n let template = text;\n 
const sortedByIndexDesc = [...detected].sort((a, b) => b.index - a.index);\n\n for (const d of sortedByIndexDesc) {\n template = `${template.slice(0, d.index)}{{${d.token}}}${template.slice(d.endIndex)}`;\n }\n\n return template;\n};\n\n/**\n * Determines the best pattern type for auto-generated rules based on detected patterns.\n *\n * @param detected - Array of detected patterns\n * @returns Suggested pattern type and whether to use fuzzy matching\n */\nexport const suggestPatternConfig = (\n detected: DetectedPattern[],\n): { patternType: 'lineStartsWith' | 'lineStartsAfter'; fuzzy: boolean; metaType?: string } => {\n // Check if the detected patterns suggest a structural marker (chapter, book, etc.)\n const hasStructuralToken = detected.some((d) => ['basmalah', 'kitab', 'bab', 'fasl'].includes(d.token));\n\n // Check if the pattern is numbered (hadith-style)\n const hasNumberedPattern = detected.some((d) => ['raqms', 'raqm', 'numbered'].includes(d.token));\n\n // If it starts with a structural token, use lineStartsWith (keep marker in content)\n if (hasStructuralToken) {\n return {\n fuzzy: true,\n metaType: detected.find((d) => ['kitab', 'bab', 'fasl'].includes(d.token))?.token || 'chapter',\n patternType: 'lineStartsWith',\n };\n }\n\n // If it's a numbered pattern (like hadith numbers), use lineStartsAfter (strip prefix)\n if (hasNumberedPattern) {\n return { fuzzy: false, metaType: 'hadith', patternType: 'lineStartsAfter' };\n }\n\n // Default: use lineStartsAfter without fuzzy\n return { fuzzy: false, patternType: 'lineStartsAfter' };\n};\n\n/**\n * Analyzes text and generates a complete suggested rule configuration.\n *\n * @param text - Highlighted text from the page\n * @returns Suggested rule configuration or null if no patterns detected\n */\nexport const analyzeTextForRule = (\n text: string,\n): {\n template: string;\n patternType: 'lineStartsWith' | 'lineStartsAfter';\n fuzzy: boolean;\n metaType?: string;\n detected: DetectedPattern[];\n} | null => {\n const detected = detectTokenPatterns(text);\n\n if (detected.length === 0) {\n return null;\n }\n\n const template = generateTemplateFromText(text, detected);\n const config = suggestPatternConfig(detected);\n\n return { detected, template, ...config 
};\n"],"mappings":"…
AI,cAAc;AACvC,QAAO,QAAQ,QAAQ,QAAQ,OAAO,WAAoB,SAAS,IAAI,OAAO,GAAG,MAAM,MAAO;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AA6DlG,MAAa,gBAAgB,OAAe,YAA4C;CACpF,MAAM,EAAE,QAAQ,EAAE,EAAE,WAAW,GAAG,cAAc,EAAE,EAAE,SAAS,UAAU,aAAa,SAAS,WAAW;CAExG,MAAM,iBAAiB,QAAQ,UAAU,kBAAkB,OAAO,QAAQ,QAAQ,GAAG;CACrF,MAAM,EAAE,SAAS,cAAc,iBAAiB,mBAAmB,YAAY,aAAa,eAAe;CAK3G,IAAI,WAAW,cAHA,kBADK,4BAA4B,OAAO,cAAc,QAAQ,CAChC,EAGR,cAAc,SAAS,MAAM;AAElE,YAAW,sBAAsB,UAAU,gBAAgB,mBAAmB,WAAW;AAGzF,KAAI,YAAY,KAAK,YAAY,QAAQ;EACrC,MAAM,oBAAoB,MAAc,eAAe,GAAG,MAAM,CAAC;AACjE,SAAO,iBACH,UACA,gBACA,mBACA,UACA,aACA,QACA,kBACA,QACA,WACH;;AAGL,QAAO;;;;;;;;;;;;;;;;;AAkBX,MAAM,iBAAiB,aAA2B,SAAiB,SAAkB,UAAkC;;;;CAInH,MAAMC,mBACF,OACA,KACA,MACA,iBACA,eACA,uBACiB;EAEjB,MAAM,cAAc,SAAS,sBAAsB;EAGnD,MAAM,SAAS,QAAQ,MAAM,aAAa,IAAI;EAC9C,IAAI,OAAO,iBAAiB,MAAM,KAAK,qBAAqB,OAAO,MAAM,GAAG,OAAO,QAAQ,YAAY,GAAG;AAC1G,MAAI,CAAC,KACD,QAAO;AAEX,MAAI,CAAC,gBACD,QAAO,kBAAkB,MAAM,aAAa,QAAQ,WAAW;EAEnE,MAAM,OAAO,QAAQ,MAAM,YAAY;EACvC,MAAM,KAAK,kBAAkB,QAAQ,MAAM,MAAM,EAAE,GAAG,QAAQ,MAAM,cAAc,KAAK,SAAS,EAAE;EAClG,MAAMC,MAAe;GAAE,SAAS;GAAM;GAAM;AAC5C,MAAI,OAAO,KACP,KAAI,KAAK;AAEb,MAAI,QAAQ,cACR,KAAI,OAAO;GAAE,GAAG;GAAM,GAAG;GAAe;AAE5C,SAAO;;;;;CAMX,MAAM,sCAAiD;EACnD,MAAMC,SAAoB,EAAE;AAC5B,OAAK,IAAI,IAAI,GAAG,IAAI,YAAY,QAAQ,KAAK;GACzC,MAAM,KAAK,YAAY;GACvB,MAAM,MAAM,IAAI,YAAY,SAAS,IAAI,YAAY,IAAI,GAAG,QAAQ,QAAQ;GAC5E,MAAM,IAAIF,gBACN,GAAG,OACH,KACA,GAAG,MACH,GAAG,iBACH,GAAG,eACH,GAAG,mBACN;AACD,OAAI,EACA,QAAO,KAAK,EAAE;;AAGtB,SAAO;;CAGX,MAAMG,WAAsB,EAAE;AAG9B,KAAI,CAAC,YAAY,QAAQ;AAErB,MAAI,gBAAgB,OADJ,QAAQ,MAAM,EAAE,CACG,EAAE;GACjC,MAAM,IAAIH,gBAAc,GAAG,QAAQ,OAAO;AAC1C,OAAI,EACA,UAAS,KAAK,EAAE;;AAGxB,SAAO;;AAIX,KAAI,YAAY,GAAG,QAAQ,GAEvB;MAAI,gBAAgB,OADJ,QAAQ,MAAM,EAAE,CACG,EAAE;GACjC,MAAM,IAAIA,gBAAc,GAAG,YAAY,GAAG,MAAM;AAChD,OAAI,EACA,UAAS,KAAK,EAAE;;;AAM5B,QAAO,CAAC,GAAG,UAAU,GAAG,+BAA+B,CAAC;;;;;ACrhB5D,MAAM,qBAAqB,aAA6B,QAAQ,MAAM,QAAQ,IAAI,EAAE,EAAE;AAEtF,MAAM,+BAA+B,YAEjC,QAAQ,QAAQ,UAAU,GAAG,CAAC,QAAQ,WAAW,GAAG;AAKxD,MAAM,sBAAsB,YAAgE;CACxF,MAAM,aAAa,kBAAkB,QAAQ;AAE7C,QAAO;EAAE,YADU,4BAA4B,QAAQ,CAAC;EACnC;EAAY;;AAQrC,MAAMI,kBAAoD;CACtD,0BAA0B;CAC1B,YAAY;CACZ,aAAa;CACb,UAAU;CACV,eAAe;CACf,2BAA2B;CAC3B,aAAa;CACb,gBAAgB,CAAC,OAAO;CACxB,QAAQ;CACR,MAAM;CACN,YAAY;CACf;AAMD,MAAM,0BAA0B,MAAsB,EAAE,QAAQ,oBAAoB,OAAO;AAG3F,MAAMC,yBAAiC;CACnC;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACH;AAED,MAAM,2BAAqC;CACvC,MAAM,YAAY,IAAI,IAAI,oBAAoB,CAAC;AAG/C,QAAOC,uBAAqB,QAAQ,MAAM,UAAU,IAAI,EAAE,CAAC;;AAG/D,MAAM,sBAAsB,MAAsB,EAAE,QAAQ,QAAQ,IAAI,CAAC,MAAM;AAI/E,MAAM,yBAAyB,MAE3B,EAAE,QAAQ,8CAA8C,GAAG;AAI/D,MAAM,uBAAuB,eAA+C;CACxE,MAAMC,WAAiC,EAAE;AACzC,MAAK,MAAM,SAAS,YAAY;EAC5B,MAAM,MAAM,eAAe;AAC3B,MAAI,CAAC,IACD;AAEJ,MAAI;AACA,YAAS,KAAK;IAAE,IAAI,IAAI,OAAO,KAAK,KAAK;IAAE;IAAO,CAAC;UAC/C;;AAIZ,QAAO;;AAGX,MAAM,YAAY,KAAa,SAAoC;AAC/D,KAAI,CAAC,IACD,QAAO;AAEX,KAAI,SAAS,QACT,QAAO,IAAI,SAAS,IAAI,GAAG,MAAM,GAAG,IAAI;AAE5C,QAAO,IAAI,SAAS,OAAO,GAAG,MAAM,GAAG,IAAI;;AAG/C,MAAM,0BACF,GACA,KACA,KACA,gBACA,eACoD;CACpD,IAAI,aAAa;CACjB,IAAI,aAAa;CACjB,IAAI,aAAa;AAEjB,MAAK,MAAM,MAAM,gBAAgB;AAC7B,MAAI,cAAc,EAAE,OAChB;EAEJ,MAAM,IAAI,GAAG,KAAK,EAAE,MAAM,WAAW,CAAC;AACtC,MAAI,CAAC,KAAK,EAAE,UAAU,KAAK,CAAC,EAAE,GAC1B;AAGJ,gBAAc,uBAAuB,EAAE,GAAG;AAC1C,gBAAc,EAAE,GAAG;AACnB,eAAa;EAEb,MAAM,UAAU,WAAW,KAAK,EAAE,MAAM,WAAW,CAAC;AACpD,MAAI,SAAS;AACT,iBAAc,QAAQ,GAAG;AACzB,gBAAa,SAAS,YAAY,WAAW;;;AAIrD,QAAO;EAAE;EAAY,KAAK;EAAY,KAAK;EAAY;;AAG3D,MAAM,wBACF,GACA,KACA,UACA,mBACyC;CACzC,IAAIC,OAA+C;AACnD,MAAK,MAAM,EAAE,OAAO,QAAQ,UAAU;AAClC,KAAG,YAAY;EACf,MAAM,IAAI,GAAG,KAAK,EAAE;AACpB,MAAI,CAAC,KAAK,EA
AE,UAAU,IAClB;AAEJ,MAAI,CAAC,QAAQ,EAAE,GAAG,SAAS,KAAK,KAAK,OACjC,QAAO;GAAE,MAAM,EAAE;GAAI;GAAO;;AAIpC,KAAI,MAAM,UAAU,SAAS;EACzB,MAAM,MAAM,MAAM,KAAK,KAAK;EAC5B,MAAM,OAAO,MAAM,EAAE,SAAS,EAAE,OAAO;AACvC,MAAI,QAAQ,eAAe,KAAK,IAAI,CAAC,MAAM,KAAK,KAAK,CACjD,QAAO;;AAIf,QAAO;;AAGX,MAAM,qBACF,MACA,YACA,aACA,0BACA,2BACA,gBACA,eACgB;CAChB,MAAM,UAAU,mBAAmB,KAAK;AACxC,KAAI,CAAC,QACD,QAAO;CAGX,MAAM,KAAK,4BAA4B,sBAAsB,QAAQ,GAAG,SAAS,MAAM,GAAG,YAAY;CACtG,IAAI,MAAM;CACV,IAAI,MAAM;CACV,IAAI,aAAa;CACjB,IAAI,eAAe;CAGnB,MAAM,WAAW,oBAAoB,WAAW;CAIhD,MAAM,kBAAkB,OAAwB,qBAAqB,KAAK,GAAG,IAAI,SAAS,KAAK,GAAG;CAClG,MAAM,qBAAqB,OAAwB,0BAA0B,KAAK,GAAG;CAErF;EACI,MAAM,WAAW,uBAAuB,GAAG,KAAK,KAAK,gBAAgB,WAAW;AAChF,QAAM,SAAS;AACf,QAAM,SAAS;AACf,eAAa,SAAS;;CAK1B,IAAI,aAAa;AACjB,QAAO,aAAa,KAAK,MAAM,EAAE,QAAQ;EAErC,MAAM,UAAU,WAAW,KAAK,EAAE,MAAM,IAAI,CAAC;AAC7C,MAAI,SAAS;AACT,UAAO,QAAQ,GAAG;AAClB,SAAM,SAAS,KAAK,WAAW;AAC/B;;EAGJ,MAAM,OAAO,qBAAqB,GAAG,KAAK,UAAU,eAAe;AAEnE,MAAI,MAAM;AACN,OAAI,OAAO,CAAC,IAAI,SAAS,OAAO,EAAE;AAGlC,UAAO,KAAK,KAAK,MAAM;AACvB,gBAAa;AACb,kBAAe;AACf,UAAO,KAAK,KAAK;AACjB;AACA;;AAIJ,MAAI,YAAY;GACZ,MAAM,KAAK,EAAE;AACb,OAAI,MAAM,kBAAkB,GAAG,EAAE;AAC7B,WAAO,uBAAuB,GAAG;AACjC,WAAO;AACP;;;AAKR,MAAI,YAAY;AAGZ,OAAI,4BAA4B,CAAC,cAAc;IAC3C,MAAMC,eAAa,EAAE,MAAM,IAAI,CAAC,MAAM,kBAAkB,IAAI,EAAE,EAAE;AAChE,QAAI,CAACA,YACD;AAEJ,WAAO,uBAAuBA,YAAU;AACxC;;AAEJ;;AAGJ,MAAI,CAAC,yBACD,QAAO;EAIX,MAAM,aAAa,EAAE,MAAM,IAAI,CAAC,MAAM,kBAAkB,IAAI,EAAE,EAAE;AAChE,MAAI,CAAC,UACD,QAAO;AAEX,SAAO,uBAAuB,UAAU;AACxC;AACA,SAAO;;AAGX,KAAI,CAAC,WACD,QAAO;AAGX,KAAI,eAAe,QACf,QAAO,IAAI,SAAS,OAAO,CACvB,OAAM,IAAI,MAAM,GAAG,GAAG;KAG1B,QAAO,IAAI,SAAS,IAAI,CACpB,OAAM,IAAI,MAAM,GAAG,GAAG;AAG9B,QAAO;;;;;;;;AASX,MAAa,2BACT,OACA,UAAoC,EAAE,KACX;CAC3B,MAAMC,IAAsC;EACxC,GAAG;EACH,GAAG;EAEH,YAAY,QAAQ,cAAc,gBAAgB;EAClD,gBAAgB,QAAQ,kBAAkB,gBAAgB;EAC1D,YAAY,QAAQ,cAAc,gBAAgB;EACrD;CACD,MAAM,gBAAgB,oBAAoB;CAE1C,MAAM,yBAAS,IAAI,KAAqE;AAExF,MAAK,MAAM,QAAQ,OAAO;EAEtB,MAAM,QADa,qBAAqB,KAAK,WAAW,GAAG,CAClC,MAAM,KAAK;AACpC,OAAK,MAAM,QAAQ,OAAO;GACtB,MAAM,UAAU,mBAAmB,KAAK;AACxC,OAAI,QAAQ,SAAS,EAAE,cACnB;AAEJ,OAAI,EAAE,cAAc,CAAC,EAAE,WAAW,SAAS,KAAK,GAAG,CAC/C;GAGJ,MAAM,MAAM,kBACR,SACA,eACA,EAAE,aACF,EAAE,0BACF,EAAE,2BACF,EAAE,gBACF,EAAE,WACL;AACD,OAAI,CAAC,IACD;GAGJ,MAAM,WAAW,OAAO,IAAI,IAAI;AAChC,OAAI,CAAC,SACD,QAAO,IAAI,KAAK;IAAE,OAAO;IAAG,UAAU,CAAC;KAAE,MAAM;KAAS,QAAQ,KAAK;KAAI,CAAC;IAAE,CAAC;QAC1E;AACH,aAAS;AACT,QAAI,SAAS,SAAS,SAAS,EAAE,YAC7B,UAAS,SAAS,KAAK;KAAE,MAAM;KAAS,QAAQ,KAAK;KAAI,CAAC;;;;CAM1E,MAAM,+BAA+B,GAA2B,MAAsC;EAClG,MAAM,KAAK,mBAAmB,EAAE,QAAQ;EACxC,MAAM,KAAK,mBAAmB,EAAE,QAAQ;AAExC,MAAI,GAAG,eAAe,GAAG,WACrB,QAAO,GAAG,aAAa,GAAG;AAE9B,MAAI,GAAG,eAAe,GAAG,WACrB,QAAO,GAAG,aAAa,GAAG;AAG9B,MAAI,EAAE,UAAU,EAAE,MACd,QAAO,EAAE,QAAQ,EAAE;AAEvB,SAAO,EAAE,QAAQ,cAAc,EAAE,QAAQ;;CAG7C,MAAM,+BAA+B,GAA2B,MAAsC;AAClG,MAAI,EAAE,UAAU,EAAE,MACd,QAAO,EAAE,QAAQ,EAAE;AAEvB,SAAO,4BAA4B,GAAG,EAAE;;AAQ5C,QALyC,CAAC,GAAG,OAAO,SAAS,CAAC,CACzD,KAAK,CAAC,SAAS,QAAQ;EAAE,OAAO,EAAE;EAAO,UAAU,EAAE;EAAU;EAAS,EAAE,CAC1E,QAAQ,MAAM,EAAE,SAAS,EAAE,SAAS,CACpC,KAAK,EAAE,WAAW,UAAU,8BAA8B,4BAA4B,CAE7E,MAAM,GAAG,EAAE,KAAK;;;;;;;;;;;;;;;;;AC/ZlC,MAAM,uBAAuB;CACzB;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACH;;;;;AAMD,MAAM,yBAAyB;CAC3B,MAAM,YAAY,oBAAoB;CACtC,MAAM,cAAc,qBAAqB,QAAQ,MAAM,UAAU,SAAS,EAAE,CAAC;CAC7E,MAAM,YAAY,UAAU,QAAQ,MAAM,CAAC,qBAAqB,SAAS,EAAE,CAAC,CAAC,MAAM;AACnF,QAAO,CAAC,GAAG,aAAa,GAAG,UAAU;;AAGzC,MAAM,qBAAqB,MAAc,YAAoB,aAA8B;CAGvF,MAAM,SAAS,aAAa,IAAI,KAAK,aAAa,KAAK;CACvD,MAAM,QAAQ,WAAW,KAAK,SAAS,KAAK,YAAY;CAExD,MAAM,gBAAgB,OAAwB,CAAC,CAAC,MAAM,MAAM,KAAK,GAAG;CACpE,MA
AM,iBAAiB,OAAwB,CAAC,CAAC,MAAM,SAAS,KAAK,GAAG;CACxE,MAAM,oBAAoB,OAAwB,CAAC,CAAC,MAAM,uBAAuB,KAAK,GAAG;CAIzF,MAAM,iBAAiB,OAAwB,CAAC,CAAC,MAAM,mBAAmB,KAAK,GAAG;CAElF,MAAM,SAAS,CAAC,UAAU,aAAa,OAAO,IAAI,cAAc,OAAO,IAAI,CAAC,cAAc,OAAO;CACjG,MAAM,UAAU,CAAC,SAAS,aAAa,MAAM,IAAI,iBAAiB,MAAM,IAAI,CAAC,cAAc,MAAM;AAEjG,QAAO,UAAU;;;;;;;;;;;;;;;;;AAkBrB,MAAa,uBAAuB,SAAiB;AACjD,KAAI,CAAC,KACD,QAAO,EAAE;CAGb,MAAMC,UAA6B,EAAE;CACrC,MAAMC,gBAAyC,EAAE;CAGjD,MAAM,qBAAqB,OAAe,QAAyB;AAC/D,SAAO,cAAc,MAChB,CAAC,GAAG,OAAQ,SAAS,KAAK,QAAQ,KAAO,MAAM,KAAK,OAAO,KAAO,SAAS,KAAK,OAAO,EAC3F;;AAIL,MAAK,MAAM,aAAa,kBAAkB,EAAE;EACxC,MAAM,UAAU,eAAe;AAC/B,MAAI,CAAC,QACD;AAGJ,MAAI;GAEA,MAAM,QAAQ,IAAI,OAAO,IAAI,QAAQ,IAAI,KAAK;GAC9C,IAAIC;AAGJ,WAAQ,QAAQ,MAAM,KAAK,KAAK,MAAM,MAAM;IACxC,MAAM,aAAa,MAAM;IACzB,MAAM,WAAW,aAAa,MAAM,GAAG;AAEvC,QAAI,cAAc,WAAW,CAAC,kBAAkB,MAAM,YAAY,SAAS,CACvE;AAIJ,QAAI,kBAAkB,YAAY,SAAS,CACvC;AAGJ,YAAQ,KAAK;KAAE;KAAU,OAAO;KAAY,OAAO,MAAM;KAAI,OAAO;KAAW,CAAC;AAEhF,kBAAc,KAAK,CAAC,YAAY,SAAS,CAAC;;UAE1C;;AAGZ,QAAO,QAAQ,MAAM,GAAG,MAAM,EAAE,QAAQ,EAAE,MAAM;;;;;;;;;;;;;;;AAgBpD,MAAa,4BAA4B,MAAc,aAAgC;AACnF,KAAI,CAAC,QAAQ,SAAS,WAAW,EAC7B,QAAO;CAKX,IAAI,WAAW;CACf,MAAM,oBAAoB,CAAC,GAAG,SAAS,CAAC,MAAM,GAAG,MAAM,EAAE,QAAQ,EAAE,MAAM;AAEzE,MAAK,MAAM,KAAK,kBACZ,YAAW,GAAG,SAAS,MAAM,GAAG,EAAE,MAAM,CAAC,IAAI,EAAE,MAAM,IAAI,SAAS,MAAM,EAAE,SAAS;AAGvF,QAAO;;;;;;;;AASX,MAAa,wBACT,aAC2F;CAE3F,MAAM,qBAAqB,SAAS,MAAM,MAAM;EAAC;EAAY;EAAS;EAAO;EAAO,CAAC,SAAS,EAAE,MAAM,CAAC;CAGvG,MAAM,qBAAqB,SAAS,MAAM,MAAM;EAAC;EAAS;EAAQ;EAAW,CAAC,SAAS,EAAE,MAAM,CAAC;AAGhG,KAAI,mBACA,QAAO;EACH,OAAO;EACP,UAAU,SAAS,MAAM,MAAM;GAAC;GAAS;GAAO;GAAO,CAAC,SAAS,EAAE,MAAM,CAAC,EAAE,SAAS;EACrF,aAAa;EAChB;AAIL,KAAI,mBACA,QAAO;EAAE,OAAO;EAAO,UAAU;EAAU,aAAa;EAAmB;AAI/E,QAAO;EAAE,OAAO;EAAO,aAAa;EAAmB;;;;;;;;AAS3D,MAAa,sBACT,SAOQ;CACR,MAAM,WAAW,oBAAoB,KAAK;AAE1C,KAAI,SAAS,WAAW,EACpB,QAAO;AAMX,QAAO;EAAE;EAAU,UAHF,yBAAyB,MAAM,SAAS;EAG5B,GAFd,qBAAqB,SAAS;EAEL"}
1
+ {"version":3,"file":"index.mjs","names":["EQUIV_GROUPS: string[][]","seg: Segment","processPattern","first: { index: number; length: number } | undefined","last: { index: number; length: number } | undefined","cumulativeOffsets: number[]","result: Segment[]","namedCaptures: Record<string, string>","BASE_TOKENS: Record<string, string>","COMPOSITE_TOKENS: Record<string, string>","TOKEN_PATTERNS: Record<string, string>","segments: TemplateSegment[]","match: RegExpExecArray | null","captureNames: string[]","names: string[]","s: {\n lineStartsWith?: string[];\n lineStartsAfter?: string[];\n lineEndsWith?: string[];\n template?: string;\n regex?: string;\n }","allCaptureNames: string[]","compiled: CompiledReplaceRule[]","combinableRules: { rule: SplitRule; prefix: string; index: number }[]","standaloneRules: SplitRule[]","fastFuzzyRules: FastFuzzyRule[]","boundaries: PageBoundary[]","pageBreaks: number[]","parts: string[]","initialSeg: Segment","namedCaptures: Record<string, string>","capturedContent: string | undefined","contentStartOffset: number | undefined","sp: SplitPoint","finalSplitPoints: SplitPoint[]","matches: MatchResult[]","result: MatchResult","result: number[]","createSegment","seg: Segment","result: Segment[]","segments: Segment[]","DEFAULT_OPTIONS: ResolvedLineStartAnalysisOptions","TOKEN_PRIORITY_ORDER: string[]","TOKEN_PRIORITY_ORDER","compiled: CompiledTokenRegex[]","best: { token: string; text: string } | null","firstWord","o: ResolvedLineStartAnalysisOptions","results: DetectedPattern[]","coveredRanges: Array<[number, number]>","match: RegExpExecArray | null"],"sources":["../src/segmentation/fuzzy.ts","../src/segmentation/breakpoint-utils.ts","../src/segmentation/breakpoint-processor.ts","../src/segmentation/match-utils.ts","../src/segmentation/tokens.ts","../src/segmentation/rule-regex.ts","../src/segmentation/replace.ts","../src/segmentation/fast-fuzzy-prefix.ts","../src/segmentation/segmenter-rule-utils.ts","../src/segmentation/textUtils.ts","../src/segmentation/segmenter.ts","../src/analysis.ts","../src/detection.ts"],"sourcesContent":["/**\n * Fuzzy matching utilities for Arabic text.\n *\n * Provides diacritic-insensitive and character-equivalence matching for Arabic text.\n * This allows matching text regardless of:\n * - Diacritical marks (harakat/tashkeel): فَتْحَة، ضَمَّة، كَسْرَة، سُكُون، شَدَّة، تَنْوين\n * - Character equivalences: ا↔آ↔أ↔إ, ة↔ه, ى↔ي\n *\n * @module fuzzy\n *\n * @example\n * // Make a pattern diacritic-insensitive\n * const pattern = makeDiacriticInsensitive('حدثنا');\n * new RegExp(pattern, 'u').test('حَدَّثَنَا') // → true\n */\n\n/**\n * Character class matching all Arabic diacritics (Tashkeel/Harakat).\n *\n * Includes the following diacritical marks:\n * - U+064B: ً (fathatan - double fatha)\n * - U+064C: ٌ (dammatan - double damma)\n * - U+064D: ٍ (kasratan - double kasra)\n * - U+064E: َ (fatha - short a)\n * - U+064F: ُ (damma - short u)\n * - U+0650: ِ (kasra - short i)\n * - U+0651: ّ (shadda - gemination)\n * - U+0652: ْ (sukun - no vowel)\n *\n * @internal\n */\nconst DIACRITICS_CLASS = '[\\u064B\\u064C\\u064D\\u064E\\u064F\\u0650\\u0651\\u0652]';\n\n/**\n * Groups of equivalent Arabic characters.\n *\n * Characters within the same group are considered equivalent for matching purposes.\n * This handles common variations in Arabic text where different characters are\n * used interchangeably or have the same underlying meaning.\n *\n * Equivalence groups:\n * - Alef variants: ا (bare), آ (with madda), أ (with hamza above), إ (with 
hamza below)\n * - Ta marbuta and Ha: ة ↔ ه (often interchangeable at word endings)\n * - Alef maqsura and Ya: ى ↔ ي (often interchangeable at word endings)\n *\n * @internal\n */\nconst EQUIV_GROUPS: string[][] = [\n ['\\u0627', '\\u0622', '\\u0623', '\\u0625'], // ا, آ, أ, إ\n ['\\u0629', '\\u0647'], // ة <-> ه\n ['\\u0649', '\\u064A'], // ى <-> ي\n];\n\n/**\n * Escapes a string for safe inclusion in a regular expression.\n *\n * Escapes all regex metacharacters: `.*+?^${}()|[\\]\\\\`\n *\n * @param s - Any string to escape\n * @returns String with regex metacharacters escaped\n *\n * @example\n * escapeRegex('hello.world') // → 'hello\\\\.world'\n * escapeRegex('[test]') // → '\\\\[test\\\\]'\n * escapeRegex('a+b*c?') // → 'a\\\\+b\\\\*c\\\\?'\n */\nexport const escapeRegex = (s: string): string => s.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&');\n\n/**\n * Returns a regex character class for all equivalents of a given character.\n *\n * If the character belongs to one of the predefined equivalence groups\n * (e.g., ا/آ/أ/إ), the returned class will match any member of that group.\n * Otherwise, the original character is simply escaped for safe regex inclusion.\n *\n * @param ch - A single character to expand into its equivalence class\n * @returns A RegExp-safe string representing the character and its equivalents\n *\n * @example\n * getEquivClass('ا') // → '[اآأإ]' (matches any alef variant)\n * getEquivClass('ب') // → 'ب' (no equivalents, just escaped)\n * getEquivClass('.') // → '\\\\.' (regex metachar escaped)\n *\n * @internal\n */\nconst getEquivClass = (ch: string): string => {\n for (const group of EQUIV_GROUPS) {\n if (group.includes(ch)) {\n // join the group's members into a character class\n return `[${group.map((c) => escapeRegex(c)).join('')}]`;\n }\n }\n // not in equivalence groups -> return escaped character\n return escapeRegex(ch);\n};\n\n/**\n * Performs light normalization on Arabic text for consistent matching.\n *\n * Normalization steps:\n * 1. NFC normalization (canonical decomposition then composition)\n * 2. Remove Zero-Width Joiner (U+200D) and Zero-Width Non-Joiner (U+200C)\n * 3. Collapse multiple whitespace characters to single space\n * 4. Trim leading and trailing whitespace\n *\n * This normalization preserves diacritics and letter forms while removing\n * invisible characters that could interfere with matching.\n *\n * @param str - Arabic text to normalize\n * @returns Normalized string\n *\n * @example\n * normalizeArabicLight('حَدَّثَنَا') // → 'حَدَّثَنَا' (diacritics preserved)\n * normalizeArabicLight('بسم الله') // → 'بسم الله' (spaces collapsed)\n * normalizeArabicLight(' text ') // → 'text' (trimmed)\n *\n * @internal\n */\nconst normalizeArabicLight = (str: string) => {\n return str\n .normalize('NFC')\n .replace(/[\\u200C\\u200D]/g, '') // remove ZWJ/ZWNJ\n .replace(/\\s+/g, ' ')\n .trim();\n};\n\n/**\n * Creates a diacritic-insensitive regex pattern for Arabic text matching.\n *\n * Transforms input text into a regex pattern that matches the text regardless\n * of diacritical marks (harakat) and character variations. Each character in\n * the input is:\n * 1. Expanded to its equivalence class (if applicable)\n * 2. 
Followed by an optional diacritics matcher\n *\n * This allows matching:\n * - `حدثنا` with `حَدَّثَنَا` (with full diacritics)\n * - `الإيمان` with `الايمان` (alef variants)\n * - `صلاة` with `صلاه` (ta marbuta ↔ ha)\n *\n * @param text - Input Arabic text to make diacritic-insensitive\n * @returns Regex pattern string that matches the text with or without diacritics\n *\n * @example\n * const pattern = makeDiacriticInsensitive('حدثنا');\n * // Each char gets equivalence class + optional diacritics\n * // Result matches: حدثنا, حَدَّثَنَا, حَدَثَنَا, etc.\n *\n * @example\n * const pattern = makeDiacriticInsensitive('باب');\n * new RegExp(pattern, 'u').test('بَابٌ') // → true\n * new RegExp(pattern, 'u').test('باب') // → true\n *\n * @example\n * // Using with split rules\n * {\n * lineStartsWith: ['باب'],\n * split: 'at',\n * fuzzy: true // Applies makeDiacriticInsensitive internally\n * }\n */\nexport const makeDiacriticInsensitive = (text: string) => {\n const diacriticsMatcher = `${DIACRITICS_CLASS}*`;\n const norm = normalizeArabicLight(text);\n // Use Array.from to iterate grapheme-safe over the string (works fine for Arabic letters)\n return Array.from(norm)\n .map((ch) => getEquivClass(ch) + diacriticsMatcher)\n .join('');\n};\n","/**\n * Utility functions for breakpoint processing in the segmentation engine.\n *\n * These functions handle breakpoint normalization, page exclusion checking,\n * and segment creation. Extracted for independent testing and reuse.\n *\n * @module breakpoint-utils\n */\n\nimport type { Breakpoint, BreakpointRule, PageRange, Segment } from './types.js';\n\nconst WINDOW_PREFIX_LENGTHS = [80, 60, 40, 30, 20, 15] as const;\n// For page-join normalization we need to handle cases where only the very beginning of the next page\n// is present in the current segment (e.g. the segment ends right before the next structural marker).\n// That can be as short as a few words, so we allow shorter prefixes here.\nconst JOINER_PREFIX_LENGTHS = [80, 60, 40, 30, 20, 15, 12, 10, 8, 6] as const;\n\n/**\n * Normalizes a breakpoint to the object form.\n * Strings are converted to { pattern: str } with no constraints.\n *\n * @param bp - Breakpoint as string or object\n * @returns Normalized BreakpointRule object\n *\n * @example\n * normalizeBreakpoint('\\\\n\\\\n')\n * // → { pattern: '\\\\n\\\\n' }\n *\n * normalizeBreakpoint({ pattern: '\\\\n', min: 10 })\n * // → { pattern: '\\\\n', min: 10 }\n */\nexport const normalizeBreakpoint = (bp: Breakpoint): BreakpointRule => (typeof bp === 'string' ? 
{ pattern: bp } : bp);\n\n/**\n * Checks if a page ID is in an excluded list (single pages or ranges).\n *\n * @param pageId - Page ID to check\n * @param excludeList - List of page IDs or [from, to] ranges to exclude\n * @returns True if page is excluded\n *\n * @example\n * isPageExcluded(5, [1, 5, 10])\n * // → true\n *\n * isPageExcluded(5, [[3, 7]])\n * // → true\n *\n * isPageExcluded(5, [[10, 20]])\n * // → false\n */\nexport const isPageExcluded = (pageId: number, excludeList: PageRange[] | undefined): boolean => {\n if (!excludeList || excludeList.length === 0) {\n return false;\n }\n for (const item of excludeList) {\n if (typeof item === 'number') {\n if (pageId === item) {\n return true;\n }\n } else {\n const [from, to] = item;\n if (pageId >= from && pageId <= to) {\n return true;\n }\n }\n }\n return false;\n};\n\n/**\n * Checks if a page ID is within a breakpoint's min/max range and not excluded.\n *\n * @param pageId - Page ID to check\n * @param rule - Breakpoint rule with optional min/max/exclude constraints\n * @returns True if page is within valid range\n *\n * @example\n * isInBreakpointRange(50, { pattern: '\\\\n', min: 10, max: 100 })\n * // → true\n *\n * isInBreakpointRange(5, { pattern: '\\\\n', min: 10 })\n * // → false (below min)\n */\nexport const isInBreakpointRange = (pageId: number, rule: BreakpointRule): boolean => {\n if (rule.min !== undefined && pageId < rule.min) {\n return false;\n }\n if (rule.max !== undefined && pageId > rule.max) {\n return false;\n }\n return !isPageExcluded(pageId, rule.exclude);\n};\n\n/**\n * Builds an exclude set from a PageRange array for O(1) lookups.\n *\n * @param excludeList - List of page IDs or [from, to] ranges\n * @returns Set of all excluded page IDs\n *\n * @remarks\n * This expands ranges into explicit page IDs for fast membership checks. For typical\n * book-scale inputs (thousands of pages), this is small and keeps downstream logic\n * simple and fast. 
If you expect extremely large ranges (e.g., millions of pages),\n * consider avoiding broad excludes or introducing a range-based membership structure.\n *\n * @example\n * buildExcludeSet([1, 5, [10, 12]])\n * // → Set { 1, 5, 10, 11, 12 }\n */\nexport const buildExcludeSet = (excludeList: PageRange[] | undefined): Set<number> => {\n const excludeSet = new Set<number>();\n for (const item of excludeList || []) {\n if (typeof item === 'number') {\n excludeSet.add(item);\n } else {\n for (let i = item[0]; i <= item[1]; i++) {\n excludeSet.add(i);\n }\n }\n }\n return excludeSet;\n};\n\n/**\n * Creates a segment with optional to and meta fields.\n * Returns null if content is empty after trimming.\n *\n * @param content - Segment content\n * @param fromPageId - Starting page ID\n * @param toPageId - Optional ending page ID (omitted if same as from)\n * @param meta - Optional metadata to attach\n * @returns Segment object or null if empty\n *\n * @example\n * createSegment('Hello world', 1, 3, { chapter: 1 })\n * // → { content: 'Hello world', from: 1, to: 3, meta: { chapter: 1 } }\n *\n * createSegment(' ', 1, undefined, undefined)\n * // → null (empty content)\n */\nexport const createSegment = (\n content: string,\n fromPageId: number,\n toPageId: number | undefined,\n meta: Record<string, unknown> | undefined,\n): Segment | null => {\n const trimmed = content.trim();\n if (!trimmed) {\n return null;\n }\n const seg: Segment = { content: trimmed, from: fromPageId };\n if (toPageId !== undefined && toPageId !== fromPageId) {\n seg.to = toPageId;\n }\n if (meta) {\n seg.meta = meta;\n }\n return seg;\n};\n\n/** Expanded breakpoint with pre-compiled regex and exclude set */\nexport type ExpandedBreakpoint = {\n rule: BreakpointRule;\n regex: RegExp | null;\n excludeSet: Set<number>;\n skipWhenRegex: RegExp | null;\n};\n\n/** Function type for pattern processing */\nexport type PatternProcessor = (pattern: string) => string;\n\n/**\n * Expands breakpoint patterns and pre-computes exclude sets.\n *\n * @param breakpoints - Array of breakpoint patterns or rules\n * @param processPattern - Function to expand tokens in patterns\n * @returns Array of expanded breakpoints with compiled regexes\n *\n * @remarks\n * This function compiles regex patterns dynamically. This can be a ReDoS vector\n * if patterns come from untrusted sources. In typical usage, breakpoint rules\n * are application configuration, not user input.\n */\nexport const expandBreakpoints = (breakpoints: Breakpoint[], processPattern: PatternProcessor): ExpandedBreakpoint[] =>\n breakpoints.map((bp) => {\n const rule = normalizeBreakpoint(bp);\n const excludeSet = buildExcludeSet(rule.exclude);\n const skipWhenRegex =\n rule.skipWhen !== undefined\n ? (() => {\n const expandedSkip = processPattern(rule.skipWhen);\n try {\n return new RegExp(expandedSkip, 'mu');\n } catch (error) {\n const message = error instanceof Error ? error.message : String(error);\n throw new Error(`Invalid breakpoint skipWhen regex: ${rule.skipWhen}\\n Cause: ${message}`);\n }\n })()\n : null;\n if (rule.pattern === '') {\n return { excludeSet, regex: null, rule, skipWhenRegex };\n }\n const expanded = processPattern(rule.pattern);\n try {\n return { excludeSet, regex: new RegExp(expanded, 'gmu'), rule, skipWhenRegex };\n } catch (error) {\n const message = error instanceof Error ? 
error.message : String(error);\n throw new Error(`Invalid breakpoint regex: ${rule.pattern}\\n Cause: ${message}`);\n }\n });\n\n/** Normalized page data for efficient lookups */\nexport type NormalizedPage = { content: string; length: number; index: number };\n\n/**\n * Applies a configured joiner at detected page boundaries within a multi-page content chunk.\n *\n * This is used for breakpoint-generated segments which don't have access to the original\n * `pageMap.pageBreaks` offsets. We detect page starts sequentially by searching for each page's\n * prefix after the previous boundary, then replace ONLY the single newline immediately before\n * that page start.\n *\n * This avoids converting real in-page newlines, while still normalizing page joins consistently.\n */\nexport const applyPageJoinerBetweenPages = (\n content: string,\n fromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n joiner: 'space' | 'newline',\n): string => {\n if (joiner === 'newline' || fromIdx >= toIdx || !content.includes('\\n')) {\n return content;\n }\n\n let updated = content;\n let searchFrom = 0;\n\n for (let pi = fromIdx + 1; pi <= toIdx; pi++) {\n const pageData = normalizedPages.get(pageIds[pi]);\n if (!pageData) {\n continue;\n }\n\n const found = findPrefixPositionInContent(updated, pageData.content.trimStart(), searchFrom);\n if (found > 0 && updated[found - 1] === '\\n') {\n updated = `${updated.slice(0, found - 1)} ${updated.slice(found)}`;\n }\n if (found > 0) {\n searchFrom = found;\n }\n }\n\n return updated;\n};\n\n/**\n * Finds the position of a page prefix in content, trying multiple prefix lengths.\n */\nconst findPrefixPositionInContent = (content: string, trimmedPageContent: string, searchFrom: number): number => {\n for (const len of JOINER_PREFIX_LENGTHS) {\n const prefix = trimmedPageContent.slice(0, Math.min(len, trimmedPageContent.length)).trim();\n if (!prefix) {\n continue;\n }\n const pos = content.indexOf(prefix, searchFrom);\n if (pos > 0) {\n return pos;\n }\n }\n return -1;\n};\n\n/**\n * Estimates how far into the current page `remainingContent` begins.\n *\n * During breakpoint processing, `remainingContent` can begin mid-page after a previous split.\n * When that happens, raw cumulative page offsets (computed from full page starts) can overestimate\n * expected boundary positions. This helper computes an approximate starting offset by matching\n * a short prefix of `remainingContent` inside the current page content.\n */\nexport const estimateStartOffsetInCurrentPage = (\n remainingContent: string,\n currentFromIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const currentPageData = normalizedPages.get(pageIds[currentFromIdx]);\n if (!currentPageData) {\n return 0;\n }\n\n const remStart = remainingContent.trimStart().slice(0, Math.min(60, remainingContent.length));\n const needle = remStart.slice(0, Math.min(30, remStart.length));\n if (!needle) {\n return 0;\n }\n\n const idx = currentPageData.content.indexOf(needle);\n return idx > 0 ? 
idx : 0;\n};\n\n/**\n * Attempts to find the start position of a target page within remainingContent,\n * anchored near an expected boundary position to reduce collisions.\n *\n * This is used to define breakpoint windows in terms of actual content being split, rather than\n * raw per-page offsets which can desync when structural rules strip markers.\n */\nexport const findPageStartNearExpectedBoundary = (\n remainingContent: string,\n _currentFromIdx: number, // unused but kept for API compatibility\n targetPageIdx: number,\n expectedBoundary: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const targetPageData = normalizedPages.get(pageIds[targetPageIdx]);\n if (!targetPageData) {\n return -1;\n }\n\n // Anchor search near the expected boundary to avoid matching repeated phrases earlier in content.\n const approx = Math.min(Math.max(0, expectedBoundary), remainingContent.length);\n const searchStart = Math.max(0, approx - 10_000);\n const searchEnd = Math.min(remainingContent.length, approx + 2_000);\n\n // The target page content might be truncated in the current segment due to structural split points\n // early in that page (e.g. headings). Use progressively shorter prefixes.\n const targetTrimmed = targetPageData.content.trimStart();\n for (const len of WINDOW_PREFIX_LENGTHS) {\n const prefix = targetTrimmed.slice(0, Math.min(len, targetTrimmed.length)).trim();\n if (!prefix) {\n continue;\n }\n\n let pos = remainingContent.indexOf(prefix, searchStart);\n while (pos !== -1 && pos <= searchEnd) {\n // Prefer matches that look like page boundaries (preceded by whitespace).\n if (pos > 0 && /\\s/.test(remainingContent[pos - 1] ?? '')) {\n return pos;\n }\n pos = remainingContent.indexOf(prefix, pos + 1);\n }\n\n // Fallback: take the last occurrence at or before approx (still anchored).\n const last = remainingContent.lastIndexOf(prefix, approx);\n if (last > 0) {\n return last;\n }\n }\n\n return -1;\n};\n\n/**\n * Finds the end position of a breakpoint window inside `remainingContent`.\n *\n * The window end is defined as the start of the page AFTER `windowEndIdx` (i.e. `windowEndIdx + 1`),\n * found within the actual `remainingContent` string being split. This avoids relying on raw page offsets\n * that can diverge when structural rules strip markers (e.g. `lineStartsAfter`).\n */\nexport const findBreakpointWindowEndPosition = (\n remainingContent: string,\n currentFromIdx: number,\n windowEndIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n cumulativeOffsets: number[],\n): number => {\n // If the window already reaches the end of the segment, the window is the remaining content.\n if (windowEndIdx >= toIdx) {\n return remainingContent.length;\n }\n\n const desiredNextIdx = windowEndIdx + 1;\n const minNextIdx = currentFromIdx + 1;\n const maxNextIdx = Math.min(desiredNextIdx, toIdx);\n\n const startOffsetInCurrentPage = estimateStartOffsetInCurrentPage(\n remainingContent,\n currentFromIdx,\n pageIds,\n normalizedPages,\n );\n\n // If we can't find the boundary for the desired next page, progressively fall back\n // to earlier page boundaries (smaller window), which is conservative but still correct.\n for (let nextIdx = maxNextIdx; nextIdx >= minNextIdx; nextIdx--) {\n const expectedBoundary =\n cumulativeOffsets[nextIdx] !== undefined && cumulativeOffsets[currentFromIdx] !== undefined\n ? 
Math.max(0, cumulativeOffsets[nextIdx] - cumulativeOffsets[currentFromIdx] - startOffsetInCurrentPage)\n : remainingContent.length;\n\n const pos = findPageStartNearExpectedBoundary(\n remainingContent,\n currentFromIdx,\n nextIdx,\n expectedBoundary,\n pageIds,\n normalizedPages,\n );\n if (pos > 0) {\n return pos;\n }\n }\n\n // As a last resort (should be rare), treat the entire remaining content as the window.\n // This may under-enforce maxPages if boundary detection fails, but avoids infinite loops.\n return remainingContent.length;\n};\n\n/**\n * Finds exclusion-based break position using raw cumulative offsets.\n *\n * This is used to ensure pages excluded by breakpoints are never merged into the same output segment.\n * Returns a break position relative to the start of `remainingContent` (i.e. the currentFromIdx start).\n */\nexport const findExclusionBreakPosition = (\n currentFromIdx: number,\n windowEndIdx: number,\n toIdx: number,\n pageIds: number[],\n expandedBreakpoints: Array<{ excludeSet: Set<number> }>,\n cumulativeOffsets: number[],\n): number => {\n const startingPageId = pageIds[currentFromIdx];\n const startingPageExcluded = expandedBreakpoints.some((bp) => bp.excludeSet.has(startingPageId));\n if (startingPageExcluded && currentFromIdx < toIdx) {\n // Output just this one page as a segment (break at next page boundary)\n return cumulativeOffsets[currentFromIdx + 1] - cumulativeOffsets[currentFromIdx];\n }\n\n // Find the first excluded page AFTER the starting page (within window) and split BEFORE it\n for (let pageIdx = currentFromIdx + 1; pageIdx <= windowEndIdx; pageIdx++) {\n const pageId = pageIds[pageIdx];\n const isExcluded = expandedBreakpoints.some((bp) => bp.excludeSet.has(pageId));\n if (isExcluded) {\n return cumulativeOffsets[pageIdx] - cumulativeOffsets[currentFromIdx];\n }\n }\n return -1;\n};\n\n/**\n * Finds the actual ending page index by searching backwards for page content prefix.\n * Used to determine which page a segment actually ends on based on content matching.\n *\n * @param pieceContent - Content of the segment piece\n * @param currentFromIdx - Current starting index in pageIds\n * @param toIdx - Maximum ending index to search\n * @param pageIds - Array of page IDs\n * @param normalizedPages - Map of page ID to normalized content\n * @returns The actual ending page index\n */\nexport const findActualEndPage = (\n pieceContent: string,\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n for (let pi = toIdx; pi > currentFromIdx; pi--) {\n const pageData = normalizedPages.get(pageIds[pi]);\n if (!pageData) {\n continue;\n }\n\n const trimmedContent = pageData.content.trimStart();\n\n // Try progressively shorter prefixes to handle mid-page splits.\n // Uses JOINER_PREFIX_LENGTHS which is designed for cases where only\n // a small portion of a page is present in the segment.\n for (const len of JOINER_PREFIX_LENGTHS) {\n const checkPortion = trimmedContent.slice(0, Math.min(len, trimmedContent.length)).trim();\n // Note: We use `> 0` (not `>= 0`) intentionally.\n // If the page prefix appears at position 0, it means the piece STARTS with this page's\n // content, not that the piece ENDS on this page. 
We're looking for cases where the\n // page content appears AFTER earlier pages' content (position > 0).\n if (checkPortion.length > 0 && pieceContent.indexOf(checkPortion) > 0) {\n return pi;\n }\n }\n }\n return currentFromIdx;\n};\n\n/**\n * Finds the actual starting page index by searching forwards for page content prefix.\n * Used to determine which page content actually starts from based on content matching.\n *\n * This is the counterpart to findActualEndPage - it searches forward to find which\n * page the content starts on, rather than which page it ends on.\n *\n * @param pieceContent - Content of the segment piece\n * @param currentFromIdx - Current starting index in pageIds\n * @param toIdx - Maximum ending index to search\n * @param pageIds - Array of page IDs\n * @param normalizedPages - Map of page ID to normalized content\n * @returns The actual starting page index\n */\nexport const findActualStartPage = (\n pieceContent: string,\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const trimmedPiece = pieceContent.trimStart();\n if (!trimmedPiece) {\n return currentFromIdx;\n }\n\n // Search forward from currentFromIdx to find which page the content starts on\n for (let pi = currentFromIdx; pi <= toIdx; pi++) {\n const pageData = normalizedPages.get(pageIds[pi]);\n if (pageData) {\n const pagePrefix = pageData.content.slice(0, Math.min(30, pageData.length)).trim();\n const piecePrefix = trimmedPiece.slice(0, Math.min(30, trimmedPiece.length));\n\n // Check both directions:\n // 1. pieceContent starts with page prefix (page content is longer)\n // 2. page content starts with pieceContent prefix (pieceContent is shorter)\n if (pagePrefix.length > 0) {\n if (trimmedPiece.startsWith(pagePrefix)) {\n return pi;\n }\n if (pageData.content.trimStart().startsWith(piecePrefix)) {\n return pi;\n }\n }\n }\n }\n return currentFromIdx;\n};\n\n/** Context required for finding break positions */\nexport type BreakpointContext = {\n pageIds: number[];\n normalizedPages: Map<number, NormalizedPage>;\n expandedBreakpoints: ExpandedBreakpoint[];\n prefer: 'longer' | 'shorter';\n};\n\n/**\n * Checks if any page in a range is excluded by the given exclude set.\n *\n * @param excludeSet - Set of excluded page IDs\n * @param pageIds - Array of page IDs\n * @param fromIdx - Start index (inclusive)\n * @param toIdx - End index (inclusive)\n * @returns True if any page in range is excluded\n */\nexport const hasExcludedPageInRange = (\n excludeSet: Set<number>,\n pageIds: number[],\n fromIdx: number,\n toIdx: number,\n): boolean => {\n if (excludeSet.size === 0) {\n return false;\n }\n for (let pageIdx = fromIdx; pageIdx <= toIdx; pageIdx++) {\n if (excludeSet.has(pageIds[pageIdx])) {\n return true;\n }\n }\n return false;\n};\n\n/**\n * Finds the position of the next page content within remaining content.\n * Returns -1 if not found.\n *\n * @param remainingContent - Content to search in\n * @param nextPageData - Normalized data for the next page\n * @returns Position of next page content, or -1 if not found\n */\nexport const findNextPagePosition = (remainingContent: string, nextPageData: NormalizedPage): number => {\n const searchPrefix = nextPageData.content.trim().slice(0, Math.min(30, nextPageData.length));\n if (searchPrefix.length === 0) {\n return -1;\n }\n const pos = remainingContent.indexOf(searchPrefix);\n return pos > 0 ? 
pos : -1;\n};\n\n/**\n * Finds matches within a window and returns the selected position based on preference.\n *\n * @param windowContent - Content to search\n * @param regex - Regex to match\n * @param prefer - 'longer' for last match, 'shorter' for first match\n * @returns Break position after the selected match, or -1 if no matches\n */\nexport const findPatternBreakPosition = (\n windowContent: string,\n regex: RegExp,\n prefer: 'longer' | 'shorter',\n): number => {\n // OPTIMIZATION: Stream matches instead of collecting all into an array.\n // Only track first and last match to avoid allocating large arrays for dense patterns.\n let first: { index: number; length: number } | undefined;\n let last: { index: number; length: number } | undefined;\n for (const m of windowContent.matchAll(regex)) {\n const match = { index: m.index, length: m[0].length };\n if (!first) {\n first = match;\n }\n last = match;\n }\n if (!first) {\n return -1;\n }\n const selected = prefer === 'longer' ? last! : first;\n return selected.index + selected.length;\n};\n\n/**\n * Handles page boundary breakpoint (empty pattern).\n * Returns break position or -1 if no valid position found.\n */\nconst handlePageBoundaryBreak = (\n remainingContent: string,\n windowEndIdx: number,\n windowEndPosition: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n const nextPageIdx = windowEndIdx + 1;\n if (nextPageIdx <= toIdx) {\n const nextPageData = normalizedPages.get(pageIds[nextPageIdx]);\n if (nextPageData) {\n const pos = findNextPagePosition(remainingContent, nextPageData);\n if (pos > 0) {\n return Math.min(pos, windowEndPosition, remainingContent.length);\n }\n }\n }\n return Math.min(windowEndPosition, remainingContent.length);\n};\n\n/**\n * Tries to find a break position within the current window using breakpoint patterns.\n * Returns the break position or -1 if no suitable break was found.\n *\n * @param remainingContent - Content remaining to be segmented\n * @param currentFromIdx - Current starting page index\n * @param toIdx - Ending page index\n * @param windowEndIdx - Maximum window end index\n * @param ctx - Breakpoint context with page data and patterns\n * @returns Break position in the content, or -1 if no break found\n */\nexport const findBreakPosition = (\n remainingContent: string,\n currentFromIdx: number,\n toIdx: number,\n windowEndIdx: number,\n windowEndPosition: number,\n ctx: BreakpointContext,\n): number => {\n const { pageIds, normalizedPages, expandedBreakpoints, prefer } = ctx;\n\n for (const { rule, regex, excludeSet, skipWhenRegex } of expandedBreakpoints) {\n // Check if this breakpoint applies to the current segment's starting page\n if (!isInBreakpointRange(pageIds[currentFromIdx], rule)) {\n continue;\n }\n\n // Check if ANY page in the current WINDOW is excluded (not the entire segment)\n if (hasExcludedPageInRange(excludeSet, pageIds, currentFromIdx, windowEndIdx)) {\n continue;\n }\n\n // Check if content matches skipWhen pattern (pre-compiled)\n if (skipWhenRegex?.test(remainingContent)) {\n continue;\n }\n\n // Handle page boundary (empty pattern)\n if (regex === null) {\n return handlePageBoundaryBreak(\n remainingContent,\n windowEndIdx,\n windowEndPosition,\n toIdx,\n pageIds,\n normalizedPages,\n );\n }\n\n // Find matches within window\n const windowContent = remainingContent.slice(0, Math.min(windowEndPosition, remainingContent.length));\n const breakPos = findPatternBreakPosition(windowContent, regex, prefer);\n 
if (breakPos > 0) {\n return breakPos;\n }\n }\n\n return -1;\n};\n","/**\n * Breakpoint post-processing engine extracted from segmenter.ts.\n *\n * This module is intentionally split into small helpers to reduce cognitive complexity\n * and allow unit testing of tricky edge cases (window sizing, next-page advancement, etc.).\n */\n\nimport {\n applyPageJoinerBetweenPages,\n type BreakpointContext,\n createSegment,\n expandBreakpoints,\n findActualEndPage,\n findActualStartPage,\n findBreakPosition,\n findBreakpointWindowEndPosition,\n findExclusionBreakPosition,\n hasExcludedPageInRange,\n type NormalizedPage,\n} from './breakpoint-utils.js';\nimport type { Breakpoint, Logger, Page, Segment } from './types.js';\n\nexport type BreakpointPatternProcessor = (pattern: string) => string;\n\nconst buildPageIdToIndexMap = (pageIds: number[]) => new Map(pageIds.map((id, i) => [id, i]));\n\nconst buildNormalizedPagesMap = (pages: Page[], normalizedContent: string[]) => {\n const normalizedPages = new Map<number, NormalizedPage>();\n for (let i = 0; i < pages.length; i++) {\n const content = normalizedContent[i];\n normalizedPages.set(pages[i].id, { content, index: i, length: content.length });\n }\n return normalizedPages;\n};\n\nconst buildCumulativeOffsets = (pageIds: number[], normalizedPages: Map<number, NormalizedPage>) => {\n const cumulativeOffsets: number[] = [0];\n let totalOffset = 0;\n for (let i = 0; i < pageIds.length; i++) {\n const pageData = normalizedPages.get(pageIds[i]);\n totalOffset += pageData ? pageData.length : 0;\n if (i < pageIds.length - 1) {\n totalOffset += 1; // separator between pages\n }\n cumulativeOffsets.push(totalOffset);\n }\n return cumulativeOffsets;\n};\n\nconst hasAnyExclusionsInRange = (\n expandedBreakpoints: Array<{ excludeSet: Set<number> }>,\n pageIds: number[],\n fromIdx: number,\n toIdx: number,\n): boolean => expandedBreakpoints.some((bp) => hasExcludedPageInRange(bp.excludeSet, pageIds, fromIdx, toIdx));\n\nexport const computeWindowEndIdx = (\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n maxPages: number,\n): number => {\n const currentPageId = pageIds[currentFromIdx];\n const maxWindowPageId = currentPageId + maxPages;\n let windowEndIdx = currentFromIdx;\n for (let i = currentFromIdx; i <= toIdx; i++) {\n if (pageIds[i] <= maxWindowPageId) {\n windowEndIdx = i;\n } else {\n break;\n }\n }\n return windowEndIdx;\n};\n\nconst computeRemainingSpan = (currentFromIdx: number, toIdx: number, pageIds: number[]) =>\n pageIds[toIdx] - pageIds[currentFromIdx];\n\nconst createFinalSegment = (\n remainingContent: string,\n currentFromIdx: number,\n toIdx: number,\n pageIds: number[],\n meta: Segment['meta'] | undefined,\n includeMeta: boolean,\n) =>\n createSegment(\n remainingContent,\n pageIds[currentFromIdx],\n currentFromIdx !== toIdx ? pageIds[toIdx] : undefined,\n includeMeta ? meta : undefined,\n );\n\ntype PiecePages = { actualEndIdx: number; actualStartIdx: number };\n\nconst computePiecePages = (\n pieceContent: string,\n currentFromIdx: number,\n toIdx: number,\n windowEndIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): PiecePages => {\n const actualStartIdx = pieceContent\n ? findActualStartPage(pieceContent, currentFromIdx, toIdx, pageIds, normalizedPages)\n : currentFromIdx;\n const actualEndIdx = pieceContent\n ? 
findActualEndPage(pieceContent, actualStartIdx, windowEndIdx, pageIds, normalizedPages)\n : currentFromIdx;\n return { actualEndIdx, actualStartIdx };\n};\n\nexport const computeNextFromIdx = (\n remainingContent: string,\n actualEndIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n): number => {\n let nextFromIdx = actualEndIdx;\n if (remainingContent && actualEndIdx + 1 <= toIdx) {\n const nextPageData = normalizedPages.get(pageIds[actualEndIdx + 1]);\n if (nextPageData) {\n const nextPrefix = nextPageData.content.slice(0, Math.min(30, nextPageData.length));\n const remainingPrefix = remainingContent.trimStart().slice(0, Math.min(30, remainingContent.length));\n // Check both directions:\n // 1. remainingContent starts with page prefix (page is longer or equal)\n // 2. page content starts with remaining prefix (remaining is shorter)\n if (\n nextPrefix &&\n (remainingContent.startsWith(nextPrefix) || nextPageData.content.startsWith(remainingPrefix))\n ) {\n nextFromIdx = actualEndIdx + 1;\n }\n }\n }\n return nextFromIdx;\n};\n\nconst createPieceSegment = (\n pieceContent: string,\n actualStartIdx: number,\n actualEndIdx: number,\n pageIds: number[],\n meta: Segment['meta'] | undefined,\n includeMeta: boolean,\n): Segment | null =>\n createSegment(\n pieceContent,\n pageIds[actualStartIdx],\n actualEndIdx > actualStartIdx ? pageIds[actualEndIdx] : undefined,\n includeMeta ? meta : undefined,\n );\n\nconst processOversizedSegment = (\n segment: Segment,\n fromIdx: number,\n toIdx: number,\n pageIds: number[],\n normalizedPages: Map<number, NormalizedPage>,\n cumulativeOffsets: number[],\n expandedBreakpoints: ReturnType<typeof expandBreakpoints>,\n maxPages: number,\n prefer: 'longer' | 'shorter',\n logger?: Logger,\n): Segment[] => {\n const result: Segment[] = [];\n let remainingContent = segment.content;\n let currentFromIdx = fromIdx;\n let isFirstPiece = true;\n let iterationCount = 0;\n const maxIterations = 10000;\n\n while (currentFromIdx <= toIdx) {\n iterationCount++;\n if (iterationCount > maxIterations) {\n logger?.error?.('INFINITE LOOP DETECTED! 
Breaking out, you should report this bug', {\n iterationCount: maxIterations,\n });\n break;\n }\n\n const remainingHasExclusions = hasAnyExclusionsInRange(expandedBreakpoints, pageIds, currentFromIdx, toIdx);\n const remainingSpan = computeRemainingSpan(currentFromIdx, toIdx, pageIds);\n if (remainingSpan <= maxPages && !remainingHasExclusions) {\n const finalSeg = createFinalSegment(\n remainingContent,\n currentFromIdx,\n toIdx,\n pageIds,\n segment.meta,\n isFirstPiece,\n );\n if (finalSeg) {\n result.push(finalSeg);\n }\n break;\n }\n\n const windowEndIdx = computeWindowEndIdx(currentFromIdx, toIdx, pageIds, maxPages);\n logger?.debug?.(`[breakpoints] iteration=${iterationCount}`, {\n currentFromIdx,\n currentFromPageId: pageIds[currentFromIdx],\n remainingContentStart: remainingContent.slice(0, 50),\n remainingContentLength: remainingContent.length,\n remainingSpan,\n toIdx,\n toPageId: pageIds[toIdx],\n windowEndIdx,\n windowEndPageId: pageIds[windowEndIdx],\n });\n const windowEndPosition = findBreakpointWindowEndPosition(\n remainingContent,\n currentFromIdx,\n windowEndIdx,\n toIdx,\n pageIds,\n normalizedPages,\n cumulativeOffsets,\n );\n\n const windowHasExclusions = hasAnyExclusionsInRange(expandedBreakpoints, pageIds, currentFromIdx, windowEndIdx);\n let breakPosition = -1;\n\n if (windowHasExclusions) {\n breakPosition = findExclusionBreakPosition(\n currentFromIdx,\n windowEndIdx,\n toIdx,\n pageIds,\n expandedBreakpoints,\n cumulativeOffsets,\n );\n }\n\n if (breakPosition <= 0) {\n const breakpointCtx: BreakpointContext = { expandedBreakpoints, normalizedPages, pageIds, prefer };\n breakPosition = findBreakPosition(\n remainingContent,\n currentFromIdx,\n toIdx,\n windowEndIdx,\n windowEndPosition,\n breakpointCtx,\n );\n }\n\n if (breakPosition <= 0) {\n // No pattern matched: fall back to the window boundary.\n breakPosition = windowEndPosition;\n }\n\n const pieceContent = remainingContent.slice(0, breakPosition).trim();\n logger?.debug?.('[breakpoints] selectedBreak', {\n breakPosition,\n pieceContentEnd: pieceContent.slice(-50),\n pieceContentLength: pieceContent.length,\n windowEndPosition,\n });\n\n const { actualEndIdx, actualStartIdx } = computePiecePages(\n pieceContent,\n currentFromIdx,\n toIdx,\n windowEndIdx,\n pageIds,\n normalizedPages,\n );\n\n if (pieceContent) {\n const pieceSeg = createPieceSegment(\n pieceContent,\n actualStartIdx,\n actualEndIdx,\n pageIds,\n segment.meta,\n isFirstPiece,\n );\n if (pieceSeg) {\n result.push(pieceSeg);\n }\n }\n\n remainingContent = remainingContent.slice(breakPosition).trim();\n logger?.debug?.('[breakpoints] afterSlice', {\n actualEndIdx,\n remainingContentLength: remainingContent.length,\n remainingContentStart: remainingContent.slice(0, 60),\n });\n if (!remainingContent) {\n logger?.debug?.('[breakpoints] done: no remaining content');\n break;\n }\n\n currentFromIdx = computeNextFromIdx(remainingContent, actualEndIdx, toIdx, pageIds, normalizedPages);\n logger?.debug?.('[breakpoints] nextIteration', {\n currentFromIdx,\n currentFromPageId: pageIds[currentFromIdx],\n });\n isFirstPiece = false;\n }\n\n logger?.debug?.('[breakpoints] processOversizedSegmentDone', { resultCount: result.length });\n return result;\n};\n\n/**\n * Applies breakpoints to oversized segments.\n *\n * Note: This is an internal engine used by `segmentPages()`.\n */\nexport const applyBreakpoints = (\n segments: Segment[],\n pages: Page[],\n normalizedContent: string[],\n maxPages: number,\n breakpoints: Breakpoint[],\n prefer: 'longer' | 
'shorter',\n patternProcessor: BreakpointPatternProcessor,\n logger?: Logger,\n pageJoiner: 'space' | 'newline' = 'space',\n): Segment[] => {\n const pageIds = pages.map((p) => p.id);\n const pageIdToIndex = buildPageIdToIndexMap(pageIds);\n const normalizedPages = buildNormalizedPagesMap(pages, normalizedContent);\n const cumulativeOffsets = buildCumulativeOffsets(pageIds, normalizedPages);\n const expandedBreakpoints = expandBreakpoints(breakpoints, patternProcessor);\n\n const result: Segment[] = [];\n\n logger?.info?.('Starting breakpoint processing', { maxPages, segmentCount: segments.length });\n\n logger?.debug?.('[breakpoints] inputSegments', {\n segmentCount: segments.length,\n segments: segments.map((s) => ({ contentLength: s.content.length, from: s.from, to: s.to })),\n });\n\n for (const segment of segments) {\n const fromIdx = pageIdToIndex.get(segment.from) ?? -1;\n const toIdx = segment.to !== undefined ? (pageIdToIndex.get(segment.to) ?? fromIdx) : fromIdx;\n\n const segmentSpan = (segment.to ?? segment.from) - segment.from;\n const hasExclusions = hasAnyExclusionsInRange(expandedBreakpoints, pageIds, fromIdx, toIdx);\n\n if (segmentSpan <= maxPages && !hasExclusions) {\n result.push(segment);\n continue;\n }\n\n const broken = processOversizedSegment(\n segment,\n fromIdx,\n toIdx,\n pageIds,\n normalizedPages,\n cumulativeOffsets,\n expandedBreakpoints,\n maxPages,\n prefer,\n logger,\n );\n // Normalize page joins for breakpoint-created pieces\n result.push(\n ...broken.map((s) => {\n const segFromIdx = pageIdToIndex.get(s.from) ?? -1;\n const segToIdx = s.to !== undefined ? (pageIdToIndex.get(s.to) ?? segFromIdx) : segFromIdx;\n if (segFromIdx >= 0 && segToIdx > segFromIdx) {\n return {\n ...s,\n content: applyPageJoinerBetweenPages(\n s.content,\n segFromIdx,\n segToIdx,\n pageIds,\n normalizedPages,\n pageJoiner,\n ),\n };\n }\n return s;\n }),\n );\n }\n\n logger?.info?.('Breakpoint processing completed', { resultCount: result.length });\n return result;\n};\n","/**\n * Utility functions for regex matching and result processing.\n *\n * These functions were extracted from `segmenter.ts` to reduce complexity\n * and enable independent testing. 
They handle match filtering, capture\n * extraction, and occurrence-based selection.\n *\n * @module match-utils\n */\n\nimport { isPageExcluded } from './breakpoint-utils.js';\nimport type { SplitRule } from './types.js';\n\n/**\n * Result of a regex match with position and optional capture information.\n *\n * Represents a single match found by the segmentation engine, including\n * its position in the concatenated content and any captured values.\n */\nexport type MatchResult = {\n /**\n * Start offset (inclusive) of the match in the content string.\n */\n start: number;\n\n /**\n * End offset (exclusive) of the match in the content string.\n *\n * The matched text is `content.slice(start, end)`.\n */\n end: number;\n\n /**\n * Content captured by `lineStartsAfter` patterns.\n *\n * For patterns like `^٦٦٩٦ - (.*)`, this contains the text\n * matched by the `(.*)` group (the rest of the line after the marker).\n */\n captured?: string;\n\n /**\n * Named capture group values from `{{token:name}}` syntax.\n *\n * Keys are the capture names, values are the matched strings.\n *\n * @example\n * // For pattern '{{raqms:num}} {{dash}}'\n * { num: '٦٦٩٦' }\n */\n namedCaptures?: Record<string, string>;\n};\n\n/**\n * Extracts named capture groups from a regex match.\n *\n * Only includes groups that are in the `captureNames` list and have\n * defined values. This filters out positional captures and ensures\n * only explicitly requested named captures are returned.\n *\n * @param groups - The `match.groups` object from `RegExp.exec()`\n * @param captureNames - List of capture names to extract (from `{{token:name}}` syntax)\n * @returns Object with capture name → value pairs, or `undefined` if none found\n *\n * @example\n * const match = /(?<num>[٠-٩]+) -/.exec('٦٦٩٦ - text');\n * extractNamedCaptures(match.groups, ['num'])\n * // → { num: '٦٦٩٦' }\n *\n * @example\n * // No matching captures\n * extractNamedCaptures({}, ['num'])\n * // → undefined\n *\n * @example\n * // Undefined groups\n * extractNamedCaptures(undefined, ['num'])\n * // → undefined\n */\nexport const extractNamedCaptures = (\n groups: Record<string, string> | undefined,\n captureNames: string[],\n): Record<string, string> | undefined => {\n if (!groups || captureNames.length === 0) {\n return undefined;\n }\n\n const namedCaptures: Record<string, string> = {};\n for (const name of captureNames) {\n if (groups[name] !== undefined) {\n namedCaptures[name] = groups[name];\n }\n }\n\n return Object.keys(namedCaptures).length > 0 ? namedCaptures : undefined;\n};\n\n/**\n * Gets the last defined positional capture group from a match array.\n *\n * Used for `lineStartsAfter` patterns where the content capture (`.*`)\n * is always at the end of the pattern. 
Named captures may shift the\n * positional indices, so we iterate backward to find the actual content.\n *\n * @param match - RegExp exec result array\n * @returns The last defined capture group value, or `undefined` if none\n *\n * @example\n * // Pattern: ^(?:(?<num>[٠-٩]+) - )(.*)\n * // Match array: ['٦٦٩٦ - content', '٦٦٩٦', 'content']\n * getLastPositionalCapture(match)\n * // → 'content'\n *\n * @example\n * // No captures\n * getLastPositionalCapture(['full match'])\n * // → undefined\n */\nexport const getLastPositionalCapture = (match: RegExpExecArray): string | undefined => {\n if (match.length <= 1) {\n return undefined;\n }\n\n for (let i = match.length - 1; i >= 1; i--) {\n if (match[i] !== undefined) {\n return match[i];\n }\n }\n return undefined;\n};\n\n/**\n * Filters matches to only include those within page ID constraints.\n *\n * Applies the `min`, `max`, and `exclude` constraints from a rule to filter out\n * matches that occur on pages outside the allowed range or explicitly excluded.\n *\n * @param matches - Array of match results to filter\n * @param rule - Rule containing `min`, `max`, and/or `exclude` page constraints\n * @param getId - Function that returns the page ID for a given offset\n * @returns Filtered array containing only matches within constraints\n *\n * @example\n * const matches = [\n * { start: 0, end: 10 }, // Page 1\n * { start: 100, end: 110 }, // Page 5\n * { start: 200, end: 210 }, // Page 10\n * ];\n * filterByConstraints(matches, { min: 3, max: 8 }, getId)\n * // → [{ start: 100, end: 110 }] (only page 5 match)\n */\nexport const filterByConstraints = (\n matches: MatchResult[],\n rule: Pick<SplitRule, 'min' | 'max' | 'exclude'>,\n getId: (offset: number) => number,\n): MatchResult[] => {\n return matches.filter((m) => {\n const id = getId(m.start);\n if (rule.min !== undefined && id < rule.min) {\n return false;\n }\n if (rule.max !== undefined && id > rule.max) {\n return false;\n }\n if (isPageExcluded(id, rule.exclude)) {\n return false;\n }\n return true;\n });\n};\n\n/**\n * Filters matches based on occurrence setting (first, last, or all).\n *\n * Applies occurrence-based selection to a list of matches:\n * - `'all'` or `undefined`: Return all matches (default)\n * - `'first'`: Return only the first match\n * - `'last'`: Return only the last match\n *\n * @param matches - Array of match results to filter\n * @param occurrence - Which occurrence(s) to keep\n * @returns Filtered array based on occurrence setting\n *\n * @example\n * const matches = [{ start: 0 }, { start: 10 }, { start: 20 }];\n *\n * filterByOccurrence(matches, 'first')\n * // → [{ start: 0 }]\n *\n * filterByOccurrence(matches, 'last')\n * // → [{ start: 20 }]\n *\n * filterByOccurrence(matches, 'all')\n * // → [{ start: 0 }, { start: 10 }, { start: 20 }]\n *\n * filterByOccurrence(matches, undefined)\n * // → [{ start: 0 }, { start: 10 }, { start: 20 }] (default: all)\n */\nexport const filterByOccurrence = (matches: MatchResult[], occurrence?: 'first' | 'last' | 'all'): MatchResult[] => {\n if (!matches.length) {\n return [];\n }\n if (occurrence === 'first') {\n return [matches[0]];\n }\n if (occurrence === 'last') {\n return [matches[matches.length - 1]];\n }\n return matches;\n};\n\n/**\n * Groups matches using a sliding window approach based on page ID difference.\n *\n * Uses a lookahead algorithm where `maxSpan` is the maximum page ID difference\n * allowed when looking ahead for the next split point. 
This prefers longer\n * segments by looking as far ahead as allowed before selecting a match.\n *\n * Algorithm:\n * 1. Start from the first page in the pages list\n * 2. Look for matches within `maxSpan` page IDs ahead\n * 3. Apply occurrence filter (e.g., 'last') to select a match\n * 4. If match found, add it; move window to start from the next page after the match\n * 5. If no match in window, skip to the next page and repeat\n *\n * @param matches - Array of match results (must be sorted by start position)\n * @param maxSpan - Maximum page ID difference allowed when looking ahead\n * @param occurrence - Which occurrence(s) to keep within each window\n * @param getId - Function that returns the page ID for a given offset\n * @param pageIds - Sorted array of all page IDs in the content\n * @returns Filtered array with sliding window and occurrence filter applied\n *\n * @example\n * // Pages: [1, 2, 3], maxSpan=1, occurrence='last'\n * // Window from page 1: pages 1-2 (diff <= 1)\n * // Finds last match in pages 1-2, adds it\n * // Next window from page 3: just page 3\n * // Result: segments span pages 1-2 and page 3\n */\nexport const groupBySpanAndFilter = (\n matches: MatchResult[],\n maxSpan: number,\n occurrence: 'first' | 'last' | 'all' | undefined,\n getId: (offset: number) => number,\n pageIds?: number[],\n): MatchResult[] => {\n if (!matches.length) {\n return [];\n }\n\n // Precompute pageId per match once to avoid O(P×M) behavior for large inputs.\n const matchPageIds = matches.map((m) => getId(m.start));\n const uniquePageIds = pageIds ?? [...new Set(matchPageIds)].sort((a, b) => a - b);\n\n if (!uniquePageIds.length) {\n return filterByOccurrence(matches, occurrence);\n }\n\n return collectWindowMatches(matches, matchPageIds, uniquePageIds, maxSpan, occurrence);\n};\n\n/**\n * Collects matches from each sliding window, applying occurrence selection.\n */\nconst collectWindowMatches = (\n matches: MatchResult[],\n matchPageIds: number[],\n uniquePageIds: number[],\n maxSpan: number,\n occurrence: 'first' | 'last' | 'all' | undefined,\n): MatchResult[] => {\n const result: MatchResult[] = [];\n let windowStartIdx = 0;\n let matchIdx = 0;\n\n while (windowStartIdx < uniquePageIds.length) {\n const windowStartPageId = uniquePageIds[windowStartIdx];\n const windowEndPageId = windowStartPageId + maxSpan;\n\n // Advance matchIdx to first match in or after the window start page.\n matchIdx = advanceToWindowStart(matchPageIds, matchIdx, windowStartPageId);\n\n if (matchIdx >= matches.length) {\n break;\n }\n\n // Find range of matches that fall within [windowStartPageId, windowEndPageId]\n const windowMatchEnd = findWindowMatchEnd(matchPageIds, matchIdx, windowEndPageId);\n\n if (windowMatchEnd <= matchIdx) {\n windowStartIdx++;\n continue;\n }\n\n // Apply occurrence selection and add matches\n const { selectedStart, selectedEnd } = selectOccurrenceRange(matchIdx, windowMatchEnd, occurrence);\n for (let i = selectedStart; i < selectedEnd; i++) {\n result.push(matches[i]);\n }\n\n // Advance window past the last selected match's page\n const lastMatchPageId = matchPageIds[selectedEnd - 1];\n while (windowStartIdx < uniquePageIds.length && uniquePageIds[windowStartIdx] <= lastMatchPageId) {\n windowStartIdx++;\n }\n matchIdx = selectedEnd;\n }\n\n return result;\n};\n\n/** Advances matchIdx to first match in or after windowStartPageId. 
*/\nconst advanceToWindowStart = (matchPageIds: number[], matchIdx: number, windowStartPageId: number): number => {\n let idx = matchIdx;\n while (idx < matchPageIds.length && matchPageIds[idx] < windowStartPageId) {\n idx++;\n }\n return idx;\n};\n\n/** Finds the exclusive end index of matches within the window. */\nconst findWindowMatchEnd = (matchPageIds: number[], startIdx: number, windowEndPageId: number): number => {\n let endIdx = startIdx;\n while (endIdx < matchPageIds.length && matchPageIds[endIdx] <= windowEndPageId) {\n endIdx++;\n }\n return endIdx;\n};\n\n/**\n * Computes the range of indices to select based on occurrence setting.\n */\nconst selectOccurrenceRange = (\n start: number,\n endExclusive: number,\n occurrence: 'first' | 'last' | 'all' | undefined,\n): { selectedStart: number; selectedEnd: number } => {\n if (occurrence === 'first') {\n return { selectedEnd: start + 1, selectedStart: start };\n }\n if (occurrence === 'last') {\n return { selectedEnd: endExclusive, selectedStart: endExclusive - 1 };\n }\n return { selectedEnd: endExclusive, selectedStart: start };\n};\n\n/**\n * Checks if any rule in the list allows the given page ID.\n *\n * A rule allows an ID if it falls within the rule's `min`/`max` constraints.\n * Rules without constraints allow all page IDs.\n *\n * This is used to determine whether to create a segment for content\n * that appears before any split points (the \"first segment\").\n *\n * @param rules - Array of rules with optional `min` and `max` constraints\n * @param pageId - Page ID to check\n * @returns `true` if at least one rule allows the page ID\n *\n * @example\n * const rules = [\n * { min: 5, max: 10 }, // Allows pages 5-10\n * { min: 20 }, // Allows pages 20+\n * ];\n *\n * anyRuleAllowsId(rules, 7) // → true (first rule allows)\n * anyRuleAllowsId(rules, 3) // → false (no rule allows)\n * anyRuleAllowsId(rules, 25) // → true (second rule allows)\n *\n * @example\n * // Rules without constraints allow everything\n * anyRuleAllowsId([{}], 999) // → true\n */\nexport const anyRuleAllowsId = (rules: Pick<SplitRule, 'min' | 'max'>[], pageId: number): boolean => {\n return rules.some((r) => {\n const minOk = r.min === undefined || pageId >= r.min;\n const maxOk = r.max === undefined || pageId <= r.max;\n return minOk && maxOk;\n });\n};\n","/**\n * Token-based template system for Arabic text pattern matching.\n *\n * This module provides a human-readable way to define regex patterns using\n * `{{token}}` placeholders that expand to their regex equivalents. It supports\n * named capture groups for extracting matched values into metadata.\n *\n * @module tokens\n *\n * @example\n * // Simple token expansion\n * expandTokens('{{raqms}} {{dash}}')\n * // → '[\\\\u0660-\\\\u0669]+ [-–—ـ]'\n *\n * @example\n * // Named capture groups\n * expandTokensWithCaptures('{{raqms:num}} {{dash}}')\n * // → { pattern: '(?<num>[\\\\u0660-\\\\u0669]+) [-–—ـ]', captureNames: ['num'], hasCaptures: true }\n */\n\n/**\n * Token definitions mapping human-readable token names to regex patterns.\n *\n * Tokens are used in template strings with double-brace syntax:\n * - `{{token}}` - Expands to the pattern (non-capturing in context)\n * - `{{token:name}}` - Expands to a named capture group `(?<name>pattern)`\n * - `{{:name}}` - Captures any content with the given name `(?<name>.+)`\n *\n * @remarks\n * These patterns are designed for Arabic text matching. 
For diacritic-insensitive\n * matching of Arabic patterns, use the `fuzzy: true` option in split rules,\n * which applies `makeDiacriticInsensitive()` to the expanded patterns.\n *\n * @example\n * // Using tokens in a split rule\n * { lineStartsWith: ['{{kitab}}', '{{bab}}'], split: 'at', fuzzy: true }\n *\n * @example\n * // Using tokens with named captures\n * { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' }\n */\n// ─────────────────────────────────────────────────────────────\n// Auto-escaping for template patterns\n// ─────────────────────────────────────────────────────────────\n\n/**\n * Escapes regex metacharacters (parentheses and brackets) in template patterns,\n * but preserves content inside `{{...}}` token delimiters.\n *\n * This allows users to write intuitive patterns like `({{harf}}):` instead of\n * the verbose `\\\\({{harf}}\\\\):`. The escaping is applied BEFORE token expansion,\n * so tokens like `{{harf}}` which expand to `[أ-ي]` work correctly.\n *\n * @param pattern - Template pattern that may contain `()[]` and `{{tokens}}`\n * @returns Pattern with `()[]` escaped outside of `{{...}}` delimiters\n *\n * @example\n * escapeTemplateBrackets('({{harf}}): ')\n * // → '\\\\({{harf}}\\\\): '\n *\n * @example\n * escapeTemplateBrackets('[{{raqm}}] ')\n * // → '\\\\[{{raqm}}\\\\] '\n *\n * @example\n * escapeTemplateBrackets('{{harf}}')\n * // → '{{harf}}' (unchanged - no brackets outside tokens)\n */\nexport const escapeTemplateBrackets = (pattern: string): string => {\n // Match either a token ({{...}}) or a bracket character\n // Tokens are preserved as-is, brackets are escaped\n return pattern.replace(/(\\{\\{[^}]*\\}\\})|([()[\\]])/g, (_match, token, bracket) => {\n if (token) {\n return token; // Leave tokens intact\n }\n return `\\\\${bracket}`; // Escape the bracket\n });\n};\n\n// ─────────────────────────────────────────────────────────────\n// Base tokens - raw regex patterns (no template references)\n// ─────────────────────────────────────────────────────────────\n\n/**\n * Base token definitions mapping human-readable token names to regex patterns.\n *\n * These tokens contain raw regex patterns and do not reference other tokens.\n * For composite tokens that build on these, see `COMPOSITE_TOKENS`.\n *\n * @internal\n */\n// IMPORTANT:\n// - We include the Arabic-Indic digit `٤` as a rumuz code, but we do NOT match it when it's part of a larger number (e.g. \"٣٤\").\n// - We intentionally do NOT match ASCII `4`.\n// - For performance/clarity, the single-letter rumuz are represented as a character class.\nconst RUMUZ_SINGLE_LETTER = '[خرزيمنصسدفلتقع]';\nconst RUMUZ_FOUR = '(?<![\\\\u0660-\\\\u0669])٤(?![\\\\u0660-\\\\u0669])';\n// IMPORTANT: order matters. 
Put longer/more specific codes before shorter ones.\nconst RUMUZ_ATOMS: string[] = [\n // 2-letter codes\n 'خت',\n 'خغ',\n 'بخ',\n 'عخ',\n 'مق',\n 'مت',\n 'عس',\n 'سي',\n 'سن',\n 'كن',\n 'مد',\n 'قد',\n 'خد',\n 'فد',\n 'دل',\n 'كد',\n 'غد',\n 'صد',\n 'دت',\n 'دس',\n 'تم',\n 'فق',\n 'دق',\n // Single-letter codes (character class) + special digit atom\n RUMUZ_SINGLE_LETTER,\n RUMUZ_FOUR,\n];\n\nconst RUMUZ_ATOM = `(?:${RUMUZ_ATOMS.join('|')})`;\nconst RUMUZ_BLOCK = `${RUMUZ_ATOM}(?:\\\\s+${RUMUZ_ATOM})*`;\n\nconst BASE_TOKENS: Record<string, string> = {\n /**\n * Chapter marker - Arabic word for \"chapter\" (باب).\n *\n * Commonly used in hadith collections to mark chapter divisions.\n *\n * @example 'باب ما جاء في الصلاة' (Chapter on what came regarding prayer)\n */\n bab: 'باب',\n\n /**\n * Basmala pattern - Arabic invocation \"In the name of Allah\" (بسم الله).\n *\n * Matches the beginning of the basmala formula, commonly appearing\n * at the start of chapters, books, or documents.\n *\n * @example 'بسم الله الرحمن الرحيم' (In the name of Allah, the Most Gracious, the Most Merciful)\n */\n basmalah: ['بسم الله', '﷽'].join('|'),\n\n /**\n * Bullet point variants - common bullet characters.\n *\n * Character class matching: `•` (bullet), `*` (asterisk), `°` (degree).\n *\n * @example '• First item'\n */\n bullet: '[•*°]',\n\n /**\n * Dash variants - various dash and separator characters.\n *\n * Character class matching:\n * - `-` (hyphen-minus U+002D)\n * - `–` (en-dash U+2013)\n * - `—` (em-dash U+2014)\n * - `ـ` (tatweel U+0640, Arabic elongation character)\n *\n * @example '٦٦٩٦ - حدثنا' or '٦٦٩٦ ـ حدثنا'\n */\n dash: '[-–—ـ]',\n\n /**\n * Section marker - Arabic word for \"section/issue\".\n * Commonly used for fiqh books.\n */\n fasl: ['مسألة', 'فصل'].join('|'),\n\n /**\n * Single Arabic letter - matches any Arabic letter character.\n *\n * Character range from أ (alef with hamza) to ي (ya).\n * Does NOT include diacritics (harakat/tashkeel).\n *\n * @example '{{harf}}' matches 'ب' in 'باب'\n */\n harf: '[أ-ي]',\n\n /**\n * One or more Arabic letters separated by spaces - matches sequences like \"د ت س ي ق\".\n *\n * Useful for matching abbreviation *lists* that are encoded as single-letter tokens\n * separated by spaces.\n *\n * IMPORTANT:\n * - This token intentionally matches **single letters only** (with optional spacing).\n * - It does NOT match multi-letter rumuz like \"سي\" or \"خت\". 
For those, use `{{rumuz}}`.\n *\n * @example '{{harfs}}' matches 'د ت س ي ق' in '١١١٨ د ت س ي ق: حجاج'\n * @example '{{raqms:num}} {{harfs}}:' matches number + abbreviations + colon\n */\n // Example matches: \"د ت س ي ق\"\n // Example non-matches: \"وعلامة ...\", \"في\", \"لا\", \"سي\", \"خت\"\n harfs: '[أ-ي](?:\\\\s+[أ-ي])*',\n\n /**\n * Book marker - Arabic word for \"book\" (كتاب).\n *\n * Commonly used in hadith collections to mark major book divisions.\n *\n * @example 'كتاب الإيمان' (Book of Faith)\n */\n kitab: 'كتاب',\n\n /**\n * Naql (transmission) phrases - common hadith transmission phrases.\n *\n * Alternation of Arabic phrases used to indicate narration chains:\n * - حدثنا (he narrated to us)\n * - أخبرنا (he informed us)\n * - حدثني (he narrated to me)\n * - وحدثنا (and he narrated to us)\n * - أنبأنا (he reported to us)\n * - سمعت (I heard)\n *\n * @example '{{naql}}' matches any of the above phrases\n */\n naql: ['حدثني', 'وأخبرنا', 'حدثنا', 'سمعت', 'أنبأنا', 'وحدثنا', 'أخبرنا', 'وحدثني', 'وحدثنيه'].join('|'),\n\n /**\n * Single Arabic-Indic digit - matches one digit (٠-٩).\n *\n * Unicode range: U+0660 to U+0669 (Arabic-Indic digits).\n * Use `{{raqms}}` for one or more digits.\n *\n * @example '{{raqm}}' matches '٥' in '٥ - '\n */\n raqm: '[\\\\u0660-\\\\u0669]',\n\n /**\n * One or more Arabic-Indic digits - matches digit sequences (٠-٩)+.\n *\n * Unicode range: U+0660 to U+0669 (Arabic-Indic digits).\n * Commonly used for hadith numbers, verse numbers, etc.\n *\n * @example '{{raqms}}' matches '٦٦٩٦' in '٦٦٩٦ - حدثنا'\n */\n raqms: '[\\\\u0660-\\\\u0669]+',\n\n /**\n * Rumuz (source abbreviations) used in rijāl / takhrīj texts.\n *\n * This token matches the known abbreviation set used to denote sources like:\n * - All six books: (ع)\n * - The four Sunan: (٤)\n * - Bukhari: خ / خت / خغ / بخ / عخ / ز / ي\n * - Muslim: م / مق / مت\n * - Nasa'i: س / ن / ص / عس / سي / كن\n * - Abu Dawud: د / مد / قد / خد / ف / فد / ل / دل / كد / غد / صد\n * - Tirmidhi: ت / تم\n * - Ibn Majah: ق / فق\n *\n * Notes:\n * - Order matters: longer alternatives must come before shorter ones (e.g., \"خد\" before \"خ\")\n * - This token matches a rumuz *block*: one or more codes separated by whitespace\n * (e.g., \"خ سي\", \"خ فق\", \"خت ٤\", \"د ت سي ق\")\n */\n rumuz: RUMUZ_BLOCK,\n\n /**\n * Punctuation characters.\n * Use {{tarqim}} which is especially useful when splitting using split: 'after' on punctuation marks.\n */\n tarqim: '[.!?؟؛]',\n};\n\n// ─────────────────────────────────────────────────────────────\n// Composite tokens - templates that reference base tokens\n// These are pre-expanded at module load time for performance\n// ─────────────────────────────────────────────────────────────\n\n/**\n * Composite token definitions using template syntax.\n *\n * These tokens reference base tokens using `{{token}}` syntax and are\n * automatically expanded to their final regex patterns at module load time.\n *\n * This provides better abstraction - if base tokens change, composites\n * automatically update on the next build.\n *\n * @internal\n */\nconst COMPOSITE_TOKENS: Record<string, string> = {\n /**\n * Numbered hadith marker - common format for hadith numbering.\n *\n * Matches patterns like \"٢٢ - \" (number, space, dash, space).\n * This is the most common format in hadith collections.\n *\n * Use with `lineStartsAfter` to cleanly extract hadith content:\n * ```typescript\n * { lineStartsAfter: ['{{numbered}}'], split: 'at' }\n * ```\n *\n * For capturing the hadith number, use 
explicit capture syntax:\n * ```typescript\n * { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' }\n * ```\n *\n * @example '٢٢ - حدثنا' matches, content starts after '٢٢ - '\n * @example '٦٦٩٦ – أخبرنا' matches (with en-dash)\n */\n numbered: '{{raqms}} {{dash}} ',\n};\n\n/**\n * Expands any *composite* tokens (like `{{numbered}}`) into their underlying template form\n * (like `{{raqms}} {{dash}} `).\n *\n * This is useful when you want to take a signature produced by `analyzeCommonLineStarts()`\n * and turn it into an editable template where you can add named captures, e.g.:\n *\n * - `{{numbered}}` → `{{raqms}} {{dash}} `\n * - then: `{{raqms:num}} {{dash}} ` to capture the number\n *\n * Notes:\n * - This only expands the plain `{{token}}` form (not `{{token:name}}`).\n * - Expansion is repeated a few times to support nested composites.\n */\nexport const expandCompositeTokensInTemplate = (template: string): string => {\n let out = template;\n for (let i = 0; i < 10; i++) {\n const next = out.replace(/\\{\\{(\\w+)\\}\\}/g, (m, tokenName: string) => {\n const replacement = COMPOSITE_TOKENS[tokenName];\n return replacement ?? m;\n });\n if (next === out) {\n break;\n }\n out = next;\n }\n return out;\n};\n\n/**\n * Expands base tokens in a template string.\n * Used internally to pre-expand composite tokens.\n *\n * @param template - Template string with `{{token}}` placeholders\n * @returns Expanded pattern with base tokens replaced\n * @internal\n */\nconst expandBaseTokens = (template: string): string => {\n return template.replace(/\\{\\{(\\w+)\\}\\}/g, (_, tokenName) => {\n return BASE_TOKENS[tokenName] ?? `{{${tokenName}}}`;\n });\n};\n\n/**\n * Token definitions mapping human-readable token names to regex patterns.\n *\n * Tokens are used in template strings with double-brace syntax:\n * - `{{token}}` - Expands to the pattern (non-capturing in context)\n * - `{{token:name}}` - Expands to a named capture group `(?<name>pattern)`\n * - `{{:name}}` - Captures any content with the given name `(?<name>.+)`\n *\n * @remarks\n * These patterns are designed for Arabic text matching. 
For diacritic-insensitive\n * matching of Arabic patterns, use the `fuzzy: true` option in split rules,\n * which applies `makeDiacriticInsensitive()` to the expanded patterns.\n *\n * @example\n * // Using tokens in a split rule\n * { lineStartsWith: ['{{kitab}}', '{{bab}}'], split: 'at', fuzzy: true }\n *\n * @example\n * // Using tokens with named captures\n * { lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '], split: 'at' }\n *\n * @example\n * // Using the numbered convenience token\n * { lineStartsAfter: ['{{numbered}}'], split: 'at' }\n */\nexport const TOKEN_PATTERNS: Record<string, string> = {\n ...BASE_TOKENS,\n // Pre-expand composite tokens at module load time\n ...Object.fromEntries(Object.entries(COMPOSITE_TOKENS).map(([k, v]) => [k, expandBaseTokens(v)])),\n};\n\n/**\n * Regex pattern for matching tokens with optional named capture syntax.\n *\n * Matches:\n * - `{{token}}` - Simple token (group 1 = token name, group 2 = empty)\n * - `{{token:name}}` - Token with capture (group 1 = token, group 2 = name)\n * - `{{:name}}` - Capture-only (group 1 = empty, group 2 = name)\n *\n * @internal\n */\nconst TOKEN_WITH_CAPTURE_REGEX = /\\{\\{(\\w*):?(\\w*)\\}\\}/g;\n\n/**\n * Regex pattern for simple token matching (no capture syntax).\n *\n * Matches only `{{token}}` format where token is one or more word characters.\n * Used by `containsTokens()` for quick detection.\n *\n * @internal\n */\nconst SIMPLE_TOKEN_REGEX = /\\{\\{(\\w+)\\}\\}/g;\n\n/**\n * Checks if a query string contains template tokens.\n *\n * Performs a quick test for `{{token}}` patterns without actually\n * expanding them. Useful for determining whether to apply token\n * expansion to a string.\n *\n * @param query - String to check for tokens\n * @returns `true` if the string contains at least one `{{token}}` pattern\n *\n * @example\n * containsTokens('{{raqms}} {{dash}}') // → true\n * containsTokens('plain text') // → false\n * containsTokens('[٠-٩]+ - ') // → false (raw regex, no tokens)\n */\nexport const containsTokens = (query: string): boolean => {\n SIMPLE_TOKEN_REGEX.lastIndex = 0;\n return SIMPLE_TOKEN_REGEX.test(query);\n};\n\n/**\n * Result from expanding tokens with capture information.\n *\n * Contains the expanded pattern string along with metadata about\n * any named capture groups that were created.\n */\nexport type ExpandResult = {\n /**\n * The expanded regex pattern string with all tokens replaced.\n *\n * Named captures use the `(?<name>pattern)` syntax.\n */\n pattern: string;\n\n /**\n * Names of captured groups extracted from `{{token:name}}` syntax.\n *\n * Empty array if no named captures were found.\n */\n captureNames: string[];\n\n /**\n * Whether the pattern has any named capturing groups.\n *\n * Equivalent to `captureNames.length > 0`.\n */\n hasCaptures: boolean;\n};\n\ntype TemplateSegment = { type: 'token' | 'text'; value: string };\n\nconst splitTemplateIntoSegments = (query: string): TemplateSegment[] => {\n const segments: TemplateSegment[] = [];\n let lastIndex = 0;\n TOKEN_WITH_CAPTURE_REGEX.lastIndex = 0;\n let match: RegExpExecArray | null;\n\n // biome-ignore lint/suspicious/noAssignInExpressions: standard regex exec loop pattern\n while ((match = TOKEN_WITH_CAPTURE_REGEX.exec(query)) !== null) {\n if (match.index > lastIndex) {\n segments.push({ type: 'text', value: query.slice(lastIndex, match.index) });\n }\n segments.push({ type: 'token', value: match[0] });\n lastIndex = match.index + match[0].length;\n }\n\n if (lastIndex < query.length) {\n segments.push({ 
type: 'text', value: query.slice(lastIndex) });\n }\n\n return segments;\n};\n\nconst maybeApplyFuzzyToText = (text: string, fuzzyTransform?: (pattern: string) => string): string => {\n if (fuzzyTransform && /[\\u0600-\\u06FF]/u.test(text)) {\n return fuzzyTransform(text);\n }\n return text;\n};\n\n// NOTE: This intentionally preserves the previous behavior:\n// it applies fuzzy per `|`-separated alternative (best-effort) to avoid corrupting regex metacharacters.\nconst maybeApplyFuzzyToTokenPattern = (tokenPattern: string, fuzzyTransform?: (pattern: string) => string): string => {\n if (!fuzzyTransform) {\n return tokenPattern;\n }\n return tokenPattern\n .split('|')\n .map((part) => (/[\\u0600-\\u06FF]/u.test(part) ? fuzzyTransform(part) : part))\n .join('|');\n};\n\nconst parseTokenLiteral = (literal: string): { tokenName: string; captureName: string } | null => {\n TOKEN_WITH_CAPTURE_REGEX.lastIndex = 0;\n const tokenMatch = TOKEN_WITH_CAPTURE_REGEX.exec(literal);\n if (!tokenMatch) {\n return null;\n }\n const [, tokenName, captureName] = tokenMatch;\n return { captureName, tokenName };\n};\n\nconst createCaptureRegistry = (capturePrefix?: string) => {\n const captureNames: string[] = [];\n const captureNameCounts = new Map<string, number>();\n\n const register = (baseName: string): string => {\n const count = captureNameCounts.get(baseName) ?? 0;\n captureNameCounts.set(baseName, count + 1);\n const uniqueName = count === 0 ? baseName : `${baseName}_${count + 1}`;\n const prefixedName = capturePrefix ? `${capturePrefix}${uniqueName}` : uniqueName;\n captureNames.push(prefixedName);\n return prefixedName;\n };\n\n return { captureNames, register };\n};\n\nconst expandTokenLiteral = (\n literal: string,\n opts: {\n fuzzyTransform?: (pattern: string) => string;\n registerCapture: (baseName: string) => string;\n capturePrefix?: string;\n },\n): string => {\n const parsed = parseTokenLiteral(literal);\n if (!parsed) {\n return literal;\n }\n\n const { tokenName, captureName } = parsed;\n\n // {{:name}} - capture anything with name\n if (!tokenName && captureName) {\n const name = opts.registerCapture(captureName);\n return `(?<${name}>.+)`;\n }\n\n let tokenPattern = TOKEN_PATTERNS[tokenName];\n if (!tokenPattern) {\n // Unknown token - leave as-is\n return literal;\n }\n\n tokenPattern = maybeApplyFuzzyToTokenPattern(tokenPattern, opts.fuzzyTransform);\n\n // {{token:name}} - capture with name\n if (captureName) {\n const name = opts.registerCapture(captureName);\n return `(?<${name}>${tokenPattern})`;\n }\n\n // {{token}} - no capture, just expand\n return tokenPattern;\n};\n\n/**\n * Expands template tokens with support for named captures.\n *\n * This is the primary token expansion function that handles all token syntax:\n * - `{{token}}` → Expands to the token's pattern (no capture group)\n * - `{{token:name}}` → Expands to `(?<name>pattern)` (named capture)\n * - `{{:name}}` → Expands to `(?<name>.+)` (capture anything)\n *\n * Unknown tokens are left as-is in the output, allowing for partial templates.\n *\n * @param query - The template string containing tokens\n * @param fuzzyTransform - Optional function to transform Arabic text for fuzzy matching.\n * Applied to both token patterns and plain Arabic text between tokens.\n * Typically `makeDiacriticInsensitive` from the fuzzy module.\n * @returns Object with expanded pattern, capture names, and capture flag\n *\n * @example\n * // Simple token expansion\n * expandTokensWithCaptures('{{raqms}} {{dash}}')\n * // → { pattern: 
'[\\\\u0660-\\\\u0669]+ [-–—ـ]', captureNames: [], hasCaptures: false }\n *\n * @example\n * // Named capture\n * expandTokensWithCaptures('{{raqms:num}} {{dash}}')\n * // → { pattern: '(?<num>[\\\\u0660-\\\\u0669]+) [-–—ـ]', captureNames: ['num'], hasCaptures: true }\n *\n * @example\n * // Capture-only token\n * expandTokensWithCaptures('{{raqms:num}} {{dash}} {{:content}}')\n * // → { pattern: '(?<num>[٠-٩]+) [-–—ـ] (?<content>.+)', captureNames: ['num', 'content'], hasCaptures: true }\n *\n * @example\n * // With fuzzy transform\n * expandTokensWithCaptures('{{bab}}', makeDiacriticInsensitive)\n * // → { pattern: 'بَ?ا?بٌ?', captureNames: [], hasCaptures: false }\n */\nexport const expandTokensWithCaptures = (\n query: string,\n fuzzyTransform?: (pattern: string) => string,\n capturePrefix?: string,\n): ExpandResult => {\n const segments = splitTemplateIntoSegments(query);\n const registry = createCaptureRegistry(capturePrefix);\n\n const processedParts = segments.map((segment) => {\n if (segment.type === 'text') {\n return maybeApplyFuzzyToText(segment.value, fuzzyTransform);\n }\n return expandTokenLiteral(segment.value, {\n capturePrefix,\n fuzzyTransform,\n registerCapture: registry.register,\n });\n });\n\n return {\n captureNames: registry.captureNames,\n hasCaptures: registry.captureNames.length > 0,\n pattern: processedParts.join(''),\n };\n};\n\n/**\n * Expands template tokens in a query string to their regex equivalents.\n *\n * This is the simple version without capture support. It returns only the\n * expanded pattern string, not capture metadata.\n *\n * Unknown tokens are left as-is, allowing for partial templates.\n *\n * @param query - Template string containing `{{token}}` placeholders\n * @returns Expanded regex pattern string\n *\n * @example\n * expandTokens('، {{raqms}}') // → '، [\\\\u0660-\\\\u0669]+'\n * expandTokens('{{raqm}}*') // → '[\\\\u0660-\\\\u0669]*'\n * expandTokens('{{dash}}{{raqm}}') // → '[-–—ـ][\\\\u0660-\\\\u0669]'\n * expandTokens('{{unknown}}') // → '{{unknown}}' (left as-is)\n *\n * @see expandTokensWithCaptures for full capture group support\n */\nexport const expandTokens = (query: string) => expandTokensWithCaptures(query).pattern;\n\n/**\n * Converts a template string to a compiled RegExp.\n *\n * Expands all tokens and attempts to compile the result as a RegExp\n * with Unicode flag. 
Returns `null` if the resulting pattern is invalid.\n *\n * @remarks\n * This function dynamically compiles regular expressions from template strings.\n * If templates may come from untrusted sources, be aware of potential ReDoS\n * (Regular Expression Denial of Service) risks due to catastrophic backtracking.\n * Consider validating pattern complexity or applying execution timeouts when\n * running user-submitted patterns.\n *\n * @param template - Template string containing `{{token}}` placeholders\n * @returns Compiled RegExp with 'u' flag, or `null` if invalid\n *\n * @example\n * templateToRegex('، {{raqms}}') // → /، [٠-٩]+/u\n * templateToRegex('{{raqms}}+') // → /[٠-٩]++/u (might be invalid in some engines)\n * templateToRegex('(((') // → null (invalid regex)\n */\nexport const templateToRegex = (template: string) => {\n const expanded = expandTokens(template);\n try {\n return new RegExp(expanded, 'u');\n } catch {\n return null;\n }\n};\n\n/**\n * Lists all available token names defined in `TOKEN_PATTERNS`.\n *\n * Useful for documentation, validation, or building user interfaces\n * that show available tokens.\n *\n * @returns Array of token names (e.g., `['bab', 'basmala', 'bullet', ...]`)\n *\n * @example\n * getAvailableTokens()\n * // → ['bab', 'basmala', 'bullet', 'dash', 'harf', 'kitab', 'naql', 'raqm', 'raqms']\n */\nexport const getAvailableTokens = () => Object.keys(TOKEN_PATTERNS);\n\n/**\n * Gets the regex pattern for a specific token name.\n *\n * Returns the raw pattern string as defined in `TOKEN_PATTERNS`,\n * without any expansion or capture group wrapping.\n *\n * @param tokenName - The token name to look up (e.g., 'raqms', 'dash')\n * @returns The regex pattern string, or `undefined` if token doesn't exist\n *\n * @example\n * getTokenPattern('raqms') // → '[\\\\u0660-\\\\u0669]+'\n * getTokenPattern('dash') // → '[-–—ـ]'\n * getTokenPattern('unknown') // → undefined\n */\nexport const getTokenPattern = (tokenName: string): string | undefined => TOKEN_PATTERNS[tokenName];\n","/**\n * Split rule → compiled regex builder.\n *\n * Extracted from `segmenter.ts` to reduce cognitive complexity and enable\n * independent unit testing of regex compilation and token expansion behavior.\n */\n\nimport { makeDiacriticInsensitive } from './fuzzy.js';\nimport { escapeTemplateBrackets, expandTokensWithCaptures } from './tokens.js';\nimport type { SplitRule } from './types.js';\n\n/**\n * Result of processing a pattern with token expansion and optional fuzzy matching.\n */\nexport type ProcessedPattern = {\n /** The expanded regex pattern string (tokens replaced with regex) */\n pattern: string;\n /** Names of captured groups extracted from `{{token:name}}` syntax */\n captureNames: string[];\n};\n\n/**\n * Compiled regex and metadata for a split rule.\n */\nexport type RuleRegex = {\n /** Compiled RegExp with 'gmu' flags (global, multiline, unicode) */\n regex: RegExp;\n /** Whether the regex uses capturing groups for content extraction */\n usesCapture: boolean;\n /** Names of captured groups from `{{token:name}}` syntax */\n captureNames: string[];\n /** Whether this rule uses `lineStartsAfter` (content capture at end) */\n usesLineStartsAfter: boolean;\n};\n\n/**\n * Checks if a regex pattern contains standard (anonymous) capturing groups.\n *\n * Detects standard capturing groups `(...)` while excluding:\n * - Non-capturing groups `(?:...)`\n * - Lookahead assertions `(?=...)` and `(?!...)`\n * - Lookbehind assertions `(?<=...)` and `(?<!...)`\n * - Named groups 
`(?<name>...)` (start with `(?` so excluded here)\n *\n * NOTE: Named capture groups are still captures, but they're tracked via `captureNames`.\n */\nexport const hasCapturingGroup = (pattern: string): boolean => {\n // Match ( that is NOT followed by ? (excludes non-capturing and named groups)\n return /\\((?!\\?)/.test(pattern);\n};\n\n/**\n * Extracts named capture group names from a regex pattern.\n *\n * Parses patterns like `(?<num>[0-9]+)` and returns `['num']`.\n *\n * @example\n * extractNamedCaptureNames('^(?<num>[٠-٩]+)\\\\s+') // ['num']\n * extractNamedCaptureNames('^(?<a>\\\\d+)(?<b>\\\\w+)') // ['a', 'b']\n * extractNamedCaptureNames('^\\\\d+') // []\n */\nexport const extractNamedCaptureNames = (pattern: string): string[] => {\n const names: string[] = [];\n // Match (?<name> where name is the capture group name\n const namedGroupRegex = /\\(\\?<([^>]+)>/g;\n for (const match of pattern.matchAll(namedGroupRegex)) {\n names.push(match[1]);\n }\n return names;\n};\n\n/**\n * Safely compiles a regex pattern, throwing a helpful error if invalid.\n */\nexport const compileRuleRegex = (pattern: string): RegExp => {\n try {\n return new RegExp(pattern, 'gmu');\n } catch (error) {\n const message = error instanceof Error ? error.message : String(error);\n throw new Error(`Invalid regex pattern: ${pattern}\\n Cause: ${message}`);\n }\n};\n\n/**\n * Processes a pattern string by expanding tokens and optionally applying fuzzy matching.\n *\n * Brackets `()[]` outside `{{tokens}}` are auto-escaped.\n */\nexport const processPattern = (pattern: string, fuzzy: boolean, capturePrefix?: string): ProcessedPattern => {\n const escaped = escapeTemplateBrackets(pattern);\n const fuzzyTransform = fuzzy ? makeDiacriticInsensitive : undefined;\n const { pattern: expanded, captureNames } = expandTokensWithCaptures(escaped, fuzzyTransform, capturePrefix);\n return { captureNames, pattern: expanded };\n};\n\nexport const buildLineStartsAfterRegexSource = (\n patterns: string[],\n fuzzy: boolean,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const processed = patterns.map((p) => processPattern(p, fuzzy, capturePrefix));\n const union = processed.map((p) => p.pattern).join('|');\n const captureNames = processed.flatMap((p) => p.captureNames);\n // For lineStartsAfter, we need to capture the content.\n // If we have a prefix (combined-regex mode), we name the internal content capture so the caller\n // can compute marker length. IMPORTANT: this internal group is not a \"user capture\", so it must\n // NOT be included in `captureNames` (otherwise it leaks into segment.meta as `content`).\n const contentCapture = capturePrefix ? 
`(?<${capturePrefix}__content>.*)` : '(.*)';\n return { captureNames, regex: `^(?:${union})${contentCapture}` };\n};\n\nexport const buildLineStartsWithRegexSource = (\n patterns: string[],\n fuzzy: boolean,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const processed = patterns.map((p) => processPattern(p, fuzzy, capturePrefix));\n const union = processed.map((p) => p.pattern).join('|');\n const captureNames = processed.flatMap((p) => p.captureNames);\n return { captureNames, regex: `^(?:${union})` };\n};\n\nexport const buildLineEndsWithRegexSource = (\n patterns: string[],\n fuzzy: boolean,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const processed = patterns.map((p) => processPattern(p, fuzzy, capturePrefix));\n const union = processed.map((p) => p.pattern).join('|');\n const captureNames = processed.flatMap((p) => p.captureNames);\n return { captureNames, regex: `(?:${union})$` };\n};\n\nexport const buildTemplateRegexSource = (\n template: string,\n capturePrefix?: string,\n): { regex: string; captureNames: string[] } => {\n const escaped = escapeTemplateBrackets(template);\n const { pattern, captureNames } = expandTokensWithCaptures(escaped, undefined, capturePrefix);\n return { captureNames, regex: pattern };\n};\n\nexport const determineUsesCapture = (regexSource: string, _captureNames: string[]): boolean =>\n hasCapturingGroup(regexSource);\n\n/**\n * Builds a compiled regex and metadata from a split rule.\n *\n * Behavior mirrors the previous implementation in `segmenter.ts`.\n */\nexport const buildRuleRegex = (rule: SplitRule, capturePrefix?: string): RuleRegex => {\n const s: {\n lineStartsWith?: string[];\n lineStartsAfter?: string[];\n lineEndsWith?: string[];\n template?: string;\n regex?: string;\n } = { ...rule };\n\n const fuzzy = (rule as { fuzzy?: boolean }).fuzzy ?? 
false;\n let allCaptureNames: string[] = [];\n\n // lineStartsAfter: creates a capturing group to exclude the marker from content\n if (s.lineStartsAfter?.length) {\n const { regex, captureNames } = buildLineStartsAfterRegexSource(s.lineStartsAfter, fuzzy, capturePrefix);\n allCaptureNames = captureNames;\n return {\n captureNames: allCaptureNames,\n regex: compileRuleRegex(regex),\n usesCapture: true,\n usesLineStartsAfter: true,\n };\n }\n\n if (s.lineStartsWith?.length) {\n const { regex, captureNames } = buildLineStartsWithRegexSource(s.lineStartsWith, fuzzy, capturePrefix);\n s.regex = regex;\n allCaptureNames = captureNames;\n }\n if (s.lineEndsWith?.length) {\n const { regex, captureNames } = buildLineEndsWithRegexSource(s.lineEndsWith, fuzzy, capturePrefix);\n s.regex = regex;\n allCaptureNames = captureNames;\n }\n if (s.template) {\n const { regex, captureNames } = buildTemplateRegexSource(s.template, capturePrefix);\n s.regex = regex;\n allCaptureNames = [...allCaptureNames, ...captureNames];\n }\n\n if (!s.regex) {\n throw new Error(\n 'Rule must specify exactly one pattern type: regex, template, lineStartsWith, lineStartsAfter, or lineEndsWith',\n );\n }\n\n // Extract named capture groups from raw regex patterns if not already populated\n if (allCaptureNames.length === 0) {\n allCaptureNames = extractNamedCaptureNames(s.regex);\n }\n\n const usesCapture = determineUsesCapture(s.regex, allCaptureNames);\n return {\n captureNames: allCaptureNames,\n regex: compileRuleRegex(s.regex),\n usesCapture,\n usesLineStartsAfter: false,\n };\n};\n","import type { Page, SegmentationOptions } from './types.js';\n\n/**\n * A single replacement rule applied by `applyReplacements()` / `SegmentationOptions.replace`.\n *\n * Notes:\n * - `regex` is a raw JavaScript regex source string (no token expansion).\n * - Default flags are `gu` (global + unicode).\n * - If `flags` is provided, it is validated and `g` + `u` are always enforced.\n * - If `pageIds` is omitted, the rule applies to all pages.\n * - If `pageIds` is `[]`, the rule applies to no pages (rule is skipped).\n */\nexport type ReplaceRule = NonNullable<SegmentationOptions['replace']>[number];\n\nconst DEFAULT_REPLACE_FLAGS = 'gu';\n\nconst normalizeReplaceFlags = (flags?: string): string => {\n if (!flags) {\n return DEFAULT_REPLACE_FLAGS;\n }\n // Validate and de-duplicate flags. Force include g + u.\n const allowed = new Set(['g', 'i', 'm', 's', 'u', 'y']);\n const set = new Set<string>();\n for (const ch of flags) {\n if (!allowed.has(ch)) {\n throw new Error(`Invalid replace regex flag: \"${ch}\" (allowed: gimsyu)`);\n }\n set.add(ch);\n }\n set.add('g');\n set.add('u');\n\n // Stable ordering for reproducibility\n const order = ['g', 'i', 'm', 's', 'y', 'u'];\n return order.filter((c) => set.has(c)).join('');\n};\n\ntype CompiledReplaceRule = {\n re: RegExp;\n replacement: string;\n pageIdSet?: ReadonlySet<number>;\n};\n\nconst compileReplaceRules = (rules: ReplaceRule[]): CompiledReplaceRule[] => {\n const compiled: CompiledReplaceRule[] = [];\n for (const r of rules) {\n if (r.pageIds && r.pageIds.length === 0) {\n // Empty list means \"apply to no pages\"\n continue;\n }\n const flags = normalizeReplaceFlags(r.flags);\n const re = new RegExp(r.regex, flags);\n compiled.push({\n pageIdSet: r.pageIds ? 
new Set(r.pageIds) : undefined,\n re,\n replacement: r.replacement,\n });\n }\n return compiled;\n};\n\n/**\n * Applies ordered regex replacements to page content (per page).\n *\n * - Replacement rules are applied in array order.\n * - Each rule is applied globally (flag `g` enforced) with unicode mode (flag `u` enforced).\n * - `pageIds` can scope a rule to specific pages. `pageIds: []` skips the rule entirely.\n *\n * This function is intentionally **pure**:\n * it returns a new pages array only when changes are needed, otherwise it returns the original pages.\n */\nexport const applyReplacements = (pages: Page[], rules?: ReplaceRule[]): Page[] => {\n if (!rules || rules.length === 0 || pages.length === 0) {\n return pages;\n }\n const compiled = compileReplaceRules(rules);\n if (compiled.length === 0) {\n return pages;\n }\n\n return pages.map((p) => {\n let content = p.content;\n for (const rule of compiled) {\n if (rule.pageIdSet && !rule.pageIdSet.has(p.id)) {\n continue;\n }\n content = content.replace(rule.re, rule.replacement);\n }\n if (content === p.content) {\n return p;\n }\n return { ...p, content };\n });\n};\n\n\n","/**\n * Fast-path fuzzy prefix matching for common Arabic line-start markers.\n *\n * This exists to avoid running expensive fuzzy-expanded regex alternations over\n * a giant concatenated string. Instead, we match only at known line-start\n * offsets and perform a small deterministic comparison:\n * - Skip Arabic diacritics in the CONTENT\n * - Treat common equivalence groups as equal (ا/آ/أ/إ, ة/ه, ى/ي)\n *\n * This module is intentionally conservative: it only supports \"literal\"\n * token patterns (plain text alternation via `|`), not general regex.\n */\n\nimport { getTokenPattern } from './tokens.js';\n\n// U+064B..U+0652 (tashkeel/harakat)\nconst isArabicDiacriticCode = (code: number): boolean => code >= 0x064b && code <= 0x0652;\n\n// Map a char to a representative equivalence class key.\n// Keep this in sync with EQUIV_GROUPS in fuzzy.ts.\nconst equivKey = (ch: string): string => {\n switch (ch) {\n case '\\u0622': // آ\n case '\\u0623': // أ\n case '\\u0625': // إ\n return '\\u0627'; // ا\n case '\\u0647': // ه\n return '\\u0629'; // ة\n case '\\u064a': // ي\n return '\\u0649'; // ى\n default:\n return ch;\n }\n};\n\n/**\n * Match a fuzzy literal prefix at a given offset.\n *\n * - Skips diacritics in the content\n * - Applies equivalence groups on both content and literal\n *\n * @returns endOffset (exclusive) in CONTENT if matched; otherwise null.\n */\nexport const matchFuzzyLiteralPrefixAt = (content: string, offset: number, literal: string): number | null => {\n let i = offset;\n // Skip leading diacritics in content (rare but possible)\n while (i < content.length && isArabicDiacriticCode(content.charCodeAt(i))) {\n i++;\n }\n\n for (let j = 0; j < literal.length; j++) {\n const litCh = literal[j];\n\n // In literal, we treat whitespace literally (no collapsing).\n // (Tokens like kitab/bab/fasl/naql/basmalah do not rely on fuzzy spaces.)\n // Skip diacritics in content before matching each char.\n while (i < content.length && isArabicDiacriticCode(content.charCodeAt(i))) {\n i++;\n }\n\n if (i >= content.length) {\n return null;\n }\n\n const cCh = content[i];\n if (equivKey(cCh) !== equivKey(litCh)) {\n return null;\n }\n i++;\n }\n\n // Allow trailing diacritics immediately after the matched prefix.\n while (i < content.length && isArabicDiacriticCode(content.charCodeAt(i))) {\n i++;\n }\n return i;\n};\n\nconst isLiteralOnly = (s: string): 
boolean => {\n // Reject anything that looks like regex syntax.\n // We allow only plain text (including Arabic, spaces) and the alternation separator `|`.\n // This intentionally rejects tokens like `tarqim: '[.!?؟؛]'`, which are not literal.\n return !/[\\\\[\\]{}()^$.*+?]/.test(s);\n};\n\nexport type CompiledLiteralAlternation = {\n alternatives: string[];\n};\n\nexport const compileLiteralAlternation = (pattern: string): CompiledLiteralAlternation | null => {\n if (!pattern) {\n return null;\n }\n if (!isLiteralOnly(pattern)) {\n return null;\n }\n const alternatives = pattern\n .split('|')\n .map((s) => s.trim())\n .filter(Boolean);\n if (!alternatives.length) {\n return null;\n }\n return { alternatives };\n};\n\nexport type FastFuzzyTokenRule = {\n token: string; // token name, e.g. 'kitab'\n alternatives: string[]; // resolved literal alternatives\n};\n\n/**\n * Attempt to compile a fast fuzzy rule from a single-token pattern like `{{kitab}}`.\n * Returns null if not eligible.\n */\nexport const compileFastFuzzyTokenRule = (tokenTemplate: string): FastFuzzyTokenRule | null => {\n const m = tokenTemplate.match(/^\\{\\{(\\w+)\\}\\}$/);\n if (!m) {\n return null;\n }\n const token = m[1];\n const tokenPattern = getTokenPattern(token);\n if (!tokenPattern) {\n return null;\n }\n const compiled = compileLiteralAlternation(tokenPattern);\n if (!compiled) {\n return null;\n }\n return { alternatives: compiled.alternatives, token };\n};\n\n/**\n * Try matching any alternative for a compiled token at a line-start offset.\n * Returns endOffset (exclusive) on match, else null.\n */\nexport const matchFastFuzzyTokenAt = (content: string, offset: number, compiled: FastFuzzyTokenRule): number | null => {\n for (const alt of compiled.alternatives) {\n const end = matchFuzzyLiteralPrefixAt(content, offset, alt);\n if (end !== null) {\n return end;\n }\n }\n return null;\n};\n","import { isPageExcluded } from './breakpoint-utils.js';\nimport { compileFastFuzzyTokenRule, type FastFuzzyTokenRule, matchFastFuzzyTokenAt } from './fast-fuzzy-prefix.js';\nimport { extractNamedCaptureNames, hasCapturingGroup, processPattern } from './rule-regex.js';\nimport type { PageMap, SplitPoint } from './segmenter-types.js';\nimport type { SplitRule } from './types.js';\n\nexport type FastFuzzyRule = {\n compiled: FastFuzzyTokenRule;\n rule: SplitRule;\n ruleIndex: number;\n kind: 'startsWith' | 'startsAfter';\n};\n\nexport type PartitionedRules = {\n combinableRules: Array<{ rule: SplitRule; prefix: string; index: number }>;\n standaloneRules: SplitRule[];\n fastFuzzyRules: FastFuzzyRule[];\n};\n\nexport const partitionRulesForMatching = (rules: SplitRule[]): PartitionedRules => {\n const combinableRules: { rule: SplitRule; prefix: string; index: number }[] = [];\n const standaloneRules: SplitRule[] = [];\n const fastFuzzyRules: FastFuzzyRule[] = [];\n\n // Separate rules into combinable, standalone, and fast-fuzzy\n rules.forEach((rule, index) => {\n // Fast-path: fuzzy + lineStartsWith + single token pattern like {{kitab}}\n if ((rule as { fuzzy?: boolean }).fuzzy && 'lineStartsWith' in rule) {\n const compiled =\n rule.lineStartsWith.length === 1 ? 
compileFastFuzzyTokenRule(rule.lineStartsWith[0]) : null;\n if (compiled) {\n fastFuzzyRules.push({ compiled, kind: 'startsWith', rule, ruleIndex: index });\n return; // handled by fast path\n }\n }\n\n // Fast-path: fuzzy + lineStartsAfter + single token pattern like {{naql}}\n if ((rule as { fuzzy?: boolean }).fuzzy && 'lineStartsAfter' in rule) {\n const compiled =\n rule.lineStartsAfter.length === 1 ? compileFastFuzzyTokenRule(rule.lineStartsAfter[0]) : null;\n if (compiled) {\n fastFuzzyRules.push({ compiled, kind: 'startsAfter', rule, ruleIndex: index });\n return; // handled by fast path\n }\n }\n\n let isCombinable = true;\n\n // Raw regex rules are combinable ONLY if they don't use named captures, backreferences, or anonymous captures\n if ('regex' in rule && rule.regex) {\n const hasNamedCaptures = extractNamedCaptureNames(rule.regex).length > 0;\n const hasBackreferences = /\\\\[1-9]/.test(rule.regex);\n const hasAnonymousCaptures = hasCapturingGroup(rule.regex);\n if (hasNamedCaptures || hasBackreferences || hasAnonymousCaptures) {\n isCombinable = false;\n }\n }\n\n if (isCombinable) {\n combinableRules.push({ index, prefix: `r${index}_`, rule });\n } else {\n standaloneRules.push(rule);\n }\n });\n\n return { combinableRules, fastFuzzyRules, standaloneRules };\n};\n\nexport type PageStartGuardChecker = (rule: SplitRule, ruleIndex: number, matchStart: number) => boolean;\n\nexport const createPageStartGuardChecker = (matchContent: string, pageMap: PageMap): PageStartGuardChecker => {\n const pageStartToBoundaryIndex = new Map<number, number>();\n for (let i = 0; i < pageMap.boundaries.length; i++) {\n pageStartToBoundaryIndex.set(pageMap.boundaries[i].start, i);\n }\n\n const compiledPageStartPrev = new Map<number, RegExp | null>();\n const getPageStartPrevRegex = (rule: SplitRule, ruleIndex: number): RegExp | null => {\n if (compiledPageStartPrev.has(ruleIndex)) {\n return compiledPageStartPrev.get(ruleIndex) ?? 
null;\n }\n const pattern = (rule as { pageStartGuard?: string }).pageStartGuard;\n if (!pattern) {\n compiledPageStartPrev.set(ruleIndex, null);\n return null;\n }\n const expanded = processPattern(pattern, false).pattern;\n const re = new RegExp(`(?:${expanded})$`, 'u');\n compiledPageStartPrev.set(ruleIndex, re);\n return re;\n };\n\n const getPrevPageLastNonWsChar = (boundaryIndex: number): string => {\n if (boundaryIndex <= 0) {\n return '';\n }\n const prevBoundary = pageMap.boundaries[boundaryIndex - 1];\n // prevBoundary.end points at the inserted page-break newline; the last content char is end-1.\n for (let i = prevBoundary.end - 1; i >= prevBoundary.start; i--) {\n const ch = matchContent[i];\n if (!ch) {\n continue;\n }\n if (/\\s/u.test(ch)) {\n continue;\n }\n return ch;\n }\n return '';\n };\n\n return (rule: SplitRule, ruleIndex: number, matchStart: number): boolean => {\n const boundaryIndex = pageStartToBoundaryIndex.get(matchStart);\n if (boundaryIndex === undefined || boundaryIndex === 0) {\n return true; // not a page start, or the very first page\n }\n const prevReq = getPageStartPrevRegex(rule, ruleIndex);\n if (!prevReq) {\n return true;\n }\n const lastChar = getPrevPageLastNonWsChar(boundaryIndex);\n if (!lastChar) {\n return false;\n }\n return prevReq.test(lastChar);\n };\n};\n\nexport const collectFastFuzzySplitPoints = (\n matchContent: string,\n pageMap: PageMap,\n fastFuzzyRules: FastFuzzyRule[],\n passesPageStartGuard: PageStartGuardChecker,\n): Map<number, SplitPoint[]> => {\n const splitPointsByRule = new Map<number, SplitPoint[]>();\n if (fastFuzzyRules.length === 0 || pageMap.boundaries.length === 0) {\n return splitPointsByRule;\n }\n\n // Stream page boundary cursor to avoid O(log n) getId calls in hot loop.\n let boundaryIdx = 0;\n let currentBoundary = pageMap.boundaries[boundaryIdx];\n const advanceBoundaryTo = (offset: number) => {\n while (currentBoundary && offset > currentBoundary.end && boundaryIdx < pageMap.boundaries.length - 1) {\n boundaryIdx++;\n currentBoundary = pageMap.boundaries[boundaryIdx];\n }\n };\n\n const recordSplitPoint = (ruleIndex: number, sp: SplitPoint) => {\n const arr = splitPointsByRule.get(ruleIndex);\n if (!arr) {\n splitPointsByRule.set(ruleIndex, [sp]);\n return;\n }\n arr.push(sp);\n };\n\n const isPageStart = (offset: number): boolean => offset === currentBoundary?.start;\n\n // Line starts are offset 0 and any char after '\\n'\n for (let lineStart = 0; lineStart <= matchContent.length; ) {\n advanceBoundaryTo(lineStart);\n const pageId = currentBoundary?.id ?? 0;\n\n if (lineStart >= matchContent.length) {\n break;\n }\n\n for (const { compiled, kind, rule, ruleIndex } of fastFuzzyRules) {\n const passesConstraints =\n (rule.min === undefined || pageId >= rule.min) &&\n (rule.max === undefined || pageId <= rule.max) &&\n !isPageExcluded(pageId, rule.exclude);\n if (!passesConstraints) {\n continue;\n }\n\n if (isPageStart(lineStart) && !passesPageStartGuard(rule, ruleIndex, lineStart)) {\n continue;\n }\n\n const end = matchFastFuzzyTokenAt(matchContent, lineStart, compiled);\n if (end === null) {\n continue;\n }\n\n const splitIndex = (rule.split ?? 'at') === 'at' ? lineStart : end;\n if (kind === 'startsWith') {\n recordSplitPoint(ruleIndex, { index: splitIndex, meta: rule.meta });\n } else {\n const markerLength = end - lineStart;\n recordSplitPoint(ruleIndex, {\n contentStartOffset: (rule.split ?? 'at') === 'at' ? 
markerLength : undefined,\n index: splitIndex,\n meta: rule.meta,\n });\n }\n }\n\n const nextNl = matchContent.indexOf('\\n', lineStart);\n if (nextNl === -1) {\n break;\n }\n lineStart = nextNl + 1;\n }\n\n return splitPointsByRule;\n};\n","/**\n * Normalizes line endings to Unix-style (`\\n`).\n *\n * Converts Windows (`\\r\\n`) and old Mac (`\\r`) line endings to Unix style\n * for consistent pattern matching across platforms.\n *\n * @param content - Raw content with potentially mixed line endings\n * @returns Content with all line endings normalized to `\\n`\n */\n// OPTIMIZATION: Fast-path when no \\r present (common case for Unix/Mac content)\nexport const normalizeLineEndings = (content: string) => {\n return content.includes('\\r') ? content.replace(/\\r\\n?/g, '\\n') : content;\n};\n","/**\n * Core segmentation engine for splitting Arabic text pages into logical segments.\n *\n * The segmenter takes an array of pages and applies pattern-based rules to\n * identify split points, producing segments with content, page references,\n * and optional metadata.\n *\n * @module segmenter\n */\n\nimport { applyBreakpoints } from './breakpoint-processor.js';\nimport { isPageExcluded } from './breakpoint-utils.js';\nimport {\n anyRuleAllowsId,\n extractNamedCaptures,\n filterByConstraints,\n getLastPositionalCapture,\n type MatchResult,\n} from './match-utils.js';\nimport { buildRuleRegex, processPattern } from './rule-regex.js';\nimport { applyReplacements } from './replace.js';\nimport {\n collectFastFuzzySplitPoints,\n createPageStartGuardChecker,\n partitionRulesForMatching,\n} from './segmenter-rule-utils.js';\nimport type { PageBoundary, PageMap, SplitPoint } from './segmenter-types.js';\nimport { normalizeLineEndings } from './textUtils.js';\nimport type { Page, Segment, SegmentationOptions, SplitRule } from './types.js';\n\n// buildRuleRegex + processPattern extracted to src/segmentation/rule-regex.ts\n\n/**\n * Builds a concatenated content string and page mapping from input pages.\n *\n * Pages are joined with newline characters, and a page map is created to\n * track which page each offset belongs to. 
This allows pattern matching\n * across page boundaries while preserving page reference information.\n *\n * @param pages - Array of input pages with id and content\n * @returns Concatenated content string and page mapping utilities\n *\n * @example\n * const pages = [\n * { id: 1, content: 'Page 1 text' },\n * { id: 2, content: 'Page 2 text' }\n * ];\n * const { content, pageMap } = buildPageMap(pages);\n * // content = 'Page 1 text\\nPage 2 text'\n * // pageMap.getId(0) = 1\n * // pageMap.getId(12) = 2\n */\nconst buildPageMap = (pages: Page[]): { content: string; normalizedPages: string[]; pageMap: PageMap } => {\n const boundaries: PageBoundary[] = [];\n const pageBreaks: number[] = []; // Sorted array for binary search\n let offset = 0;\n const parts: string[] = [];\n\n for (let i = 0; i < pages.length; i++) {\n const normalized = normalizeLineEndings(pages[i].content);\n boundaries.push({ end: offset + normalized.length, id: pages[i].id, start: offset });\n parts.push(normalized);\n if (i < pages.length - 1) {\n pageBreaks.push(offset + normalized.length); // Already in sorted order\n offset += normalized.length + 1;\n } else {\n offset += normalized.length;\n }\n }\n\n /**\n * Finds the page boundary containing the given offset using binary search.\n * O(log n) complexity for efficient lookup with many pages.\n *\n * @param off - Character offset to look up\n * @returns Page boundary or the last boundary as fallback\n */\n const findBoundary = (off: number): PageBoundary | undefined => {\n let lo = 0;\n let hi = boundaries.length - 1;\n\n while (lo <= hi) {\n const mid = (lo + hi) >>> 1; // Unsigned right shift for floor division\n const b = boundaries[mid];\n if (off < b.start) {\n hi = mid - 1;\n } else if (off > b.end) {\n lo = mid + 1;\n } else {\n return b;\n }\n }\n // Fallback to last boundary if not found\n return boundaries[boundaries.length - 1];\n };\n\n return {\n content: parts.join('\\n'),\n normalizedPages: parts, // OPTIMIZATION: Return already-normalized content for reuse\n pageMap: {\n boundaries,\n getId: (off: number) => findBoundary(off)?.id ?? 
0,\n pageBreaks,\n pageIds: boundaries.map((b) => b.id),\n },\n };\n};\n\n/**\n * Deduplicate split points by index, preferring ones with more information.\n *\n * Preference rules (when same index):\n * - Prefer a split with `contentStartOffset` (needed for `lineStartsAfter` marker stripping)\n * - Otherwise prefer a split with `meta` over one without\n */\nexport const dedupeSplitPoints = (splitPoints: SplitPoint[]): SplitPoint[] => {\n const byIndex = new Map<number, SplitPoint>();\n for (const p of splitPoints) {\n const existing = byIndex.get(p.index);\n if (!existing) {\n byIndex.set(p.index, p);\n continue;\n }\n const hasMoreInfo =\n (p.contentStartOffset !== undefined && existing.contentStartOffset === undefined) ||\n (p.meta !== undefined && existing.meta === undefined);\n if (hasMoreInfo) {\n byIndex.set(p.index, p);\n }\n }\n const unique = [...byIndex.values()];\n unique.sort((a, b) => a.index - b.index);\n return unique;\n};\n\n/**\n * If no structural rules produced segments, create a single segment spanning all pages.\n * This allows breakpoint processing to still run.\n */\nexport const ensureFallbackSegment = (\n segments: Segment[],\n pages: Page[],\n normalizedContent: string[],\n pageJoiner: 'space' | 'newline',\n): Segment[] => {\n if (segments.length > 0 || pages.length === 0) {\n return segments;\n }\n const firstPage = pages[0];\n const lastPage = pages[pages.length - 1];\n const joinChar = pageJoiner === 'newline' ? '\\n' : ' ';\n const allContent = normalizedContent.join(joinChar).trim();\n if (!allContent) {\n return segments;\n }\n const initialSeg: Segment = { content: allContent, from: firstPage.id };\n if (lastPage.id !== firstPage.id) {\n initialSeg.to = lastPage.id;\n }\n return [initialSeg];\n};\n\nconst collectSplitPointsFromRules = (rules: SplitRule[], matchContent: string, pageMap: PageMap): SplitPoint[] => {\n const passesPageStartGuard = createPageStartGuardChecker(matchContent, pageMap);\n const { combinableRules, fastFuzzyRules, standaloneRules } = partitionRulesForMatching(rules);\n\n // Store split points by rule index to apply occurrence filtering later.\n // Start with fast-fuzzy matches (if any) and then add regex-based matches.\n const splitPointsByRule = collectFastFuzzySplitPoints(matchContent, pageMap, fastFuzzyRules, passesPageStartGuard);\n\n // Process combinable rules in a single pass\n if (combinableRules.length > 0) {\n const ruleRegexes = combinableRules.map(({ rule, prefix }) => {\n const built = buildRuleRegex(rule, prefix);\n return {\n prefix,\n source: `(?<${prefix}>${built.regex.source})`,\n ...built,\n };\n });\n\n const combinedSource = ruleRegexes.map((r) => r.source).join('|');\n const combinedRegex = new RegExp(combinedSource, 'gm');\n\n combinedRegex.lastIndex = 0;\n let m = combinedRegex.exec(matchContent);\n\n while (m !== null) {\n // Find which rule matched by checking which prefix group is defined\n const matchedRuleIndex = combinableRules.findIndex(({ prefix }) => m?.groups?.[prefix] !== undefined);\n\n if (matchedRuleIndex !== -1) {\n const { rule, prefix, index: originalIndex } = combinableRules[matchedRuleIndex];\n const ruleInfo = ruleRegexes[matchedRuleIndex];\n\n // Extract named captures for this specific rule (stripping the prefix)\n const namedCaptures: Record<string, string> = {};\n if (m.groups) {\n for (const prefixedName of ruleInfo.captureNames) {\n if (m.groups[prefixedName] !== undefined) {\n const cleanName = prefixedName.slice(prefix.length);\n namedCaptures[cleanName] = m.groups[prefixedName];\n 
}\n }\n }\n\n // Handle lineStartsAfter content capture\n let capturedContent: string | undefined;\n let contentStartOffset: number | undefined;\n\n if (ruleInfo.usesLineStartsAfter) {\n // The internal content capture is named `${prefix}__content` (not a user capture).\n capturedContent = m.groups?.[`${prefix}__content`];\n if (capturedContent !== undefined) {\n // Calculate marker length: (full match length) - (content length)\n // Note: m[0] is the full match of the combined group\n const fullMatch = m.groups?.[prefix] || m[0];\n const markerLength = fullMatch.length - capturedContent.length;\n contentStartOffset = markerLength;\n }\n }\n\n // Check constraints\n const start = m.index;\n const end = m.index + m[0].length;\n const pageId = pageMap.getId(start);\n\n // Apply min/max/exclude page constraints\n const passesConstraints =\n (rule.min === undefined || pageId >= rule.min) &&\n (rule.max === undefined || pageId <= rule.max) &&\n !isPageExcluded(pageId, rule.exclude);\n\n if (passesConstraints) {\n if (!passesPageStartGuard(rule, originalIndex, start)) {\n // Skip false positives caused purely by page wrap.\n // Mid-page line starts are unaffected.\n continue;\n }\n const sp: SplitPoint = {\n capturedContent: undefined, // For combinable rules, we don't use captured content for the segment text\n contentStartOffset,\n index: (rule.split ?? 'at') === 'at' ? start : end,\n meta: rule.meta,\n namedCaptures: Object.keys(namedCaptures).length > 0 ? namedCaptures : undefined,\n };\n\n if (!splitPointsByRule.has(originalIndex)) {\n splitPointsByRule.set(originalIndex, []);\n }\n splitPointsByRule.get(originalIndex)!.push(sp);\n }\n }\n\n if (m[0].length === 0) {\n combinedRegex.lastIndex++;\n }\n m = combinedRegex.exec(matchContent);\n }\n }\n\n // Process standalone rules individually (legacy path)\n const collectSplitPointsFromRule = (rule: SplitRule, ruleIndex: number): void => {\n const { regex, usesCapture, captureNames, usesLineStartsAfter } = buildRuleRegex(rule);\n const allMatches = findMatches(matchContent, regex, usesCapture, captureNames);\n const constrainedMatches = filterByConstraints(allMatches, rule, pageMap.getId);\n const guarded = constrainedMatches.filter((m) => passesPageStartGuard(rule, ruleIndex, m.start));\n // We don't filter by occurrence here yet, we do it uniformly later\n // But wait, filterByConstraints returns MatchResult, we need SplitPoint\n\n const points = guarded.map((m) => {\n const isLineStartsAfter = usesLineStartsAfter && m.captured !== undefined;\n const markerLength = isLineStartsAfter ? m.end - m.captured!.length - m.start : 0;\n return {\n capturedContent: isLineStartsAfter ? undefined : m.captured,\n contentStartOffset: isLineStartsAfter ? markerLength : undefined,\n index: (rule.split ?? 'at') === 'at' ? 
m.start : m.end,\n meta: rule.meta,\n namedCaptures: m.namedCaptures,\n };\n });\n\n if (!splitPointsByRule.has(ruleIndex)) {\n splitPointsByRule.set(ruleIndex, []);\n }\n splitPointsByRule.get(ruleIndex)!.push(...points);\n };\n\n standaloneRules.forEach((rule) => {\n // Find original index\n const originalIndex = rules.indexOf(rule);\n collectSplitPointsFromRule(rule, originalIndex);\n });\n\n // Apply occurrence filtering and flatten\n const finalSplitPoints: SplitPoint[] = [];\n rules.forEach((rule, index) => {\n const points = splitPointsByRule.get(index);\n if (!points || points.length === 0) {\n return;\n }\n\n let filtered = points;\n if (rule.occurrence === 'first') {\n filtered = [points[0]];\n } else if (rule.occurrence === 'last') {\n filtered = [points[points.length - 1]];\n }\n\n finalSplitPoints.push(...filtered);\n });\n\n return finalSplitPoints;\n};\n\n/**\n * Executes a regex against content and extracts match results with capture information.\n *\n * @param content - Full content string to search\n * @param regex - Compiled regex with 'g' flag\n * @param usesCapture - Whether to extract captured content\n * @param captureNames - Names of expected named capture groups\n * @returns Array of match results with positions and captures\n */\nconst findMatches = (content: string, regex: RegExp, usesCapture: boolean, captureNames: string[]) => {\n const matches: MatchResult[] = [];\n regex.lastIndex = 0;\n let m = regex.exec(content);\n\n while (m !== null) {\n const result: MatchResult = { end: m.index + m[0].length, start: m.index };\n\n // Extract named captures if present\n result.namedCaptures = extractNamedCaptures(m.groups, captureNames);\n\n // For lineStartsAfter, get the last positional capture (the .* content)\n if (usesCapture) {\n result.captured = getLastPositionalCapture(m);\n }\n\n matches.push(result);\n\n if (m[0].length === 0) {\n regex.lastIndex++;\n }\n m = regex.exec(content);\n }\n\n return matches;\n};\n\n/**\n * Finds page breaks within a given offset range using binary search.\n * O(log n + k) where n = total breaks, k = breaks in range.\n *\n * @param startOffset - Start of range (inclusive)\n * @param endOffset - End of range (exclusive)\n * @param sortedBreaks - Sorted array of page break offsets\n * @returns Array of break offsets relative to startOffset\n */\nconst findBreaksInRange = (startOffset: number, endOffset: number, sortedBreaks: number[]) => {\n if (sortedBreaks.length === 0) {\n return [];\n }\n\n // Binary search for first break >= startOffset\n let lo = 0;\n let hi = sortedBreaks.length;\n while (lo < hi) {\n const mid = (lo + hi) >>> 1;\n if (sortedBreaks[mid] < startOffset) {\n lo = mid + 1;\n } else {\n hi = mid;\n }\n }\n\n // Collect breaks until we exceed endOffset\n const result: number[] = [];\n for (let i = lo; i < sortedBreaks.length && sortedBreaks[i] < endOffset; i++) {\n result.push(sortedBreaks[i] - startOffset);\n }\n return result;\n};\n\n/**\n * Converts page-break newlines to spaces in segment content.\n *\n * When a segment spans multiple pages, the newline characters that were\n * inserted as page separators during concatenation are converted to spaces\n * for more natural reading.\n *\n * Uses binary search for O(log n + k) lookup instead of O(n) iteration.\n *\n * @param content - Segment content string\n * @param startOffset - Starting offset of this content in concatenated string\n * @param pageBreaks - Sorted array of page break offsets\n * @returns Content with page-break newlines converted to spaces\n 
*/\nconst convertPageBreaks = (content: string, startOffset: number, pageBreaks: number[]): string => {\n // OPTIMIZATION: Fast-path for empty or no-newline content (common cases)\n if (!content || !content.includes('\\n')) {\n return content;\n }\n\n const endOffset = startOffset + content.length;\n const breaksInRange = findBreaksInRange(startOffset, endOffset, pageBreaks);\n\n // No page breaks in this segment - return as-is (most common case)\n if (breaksInRange.length === 0) {\n return content;\n }\n\n // Convert ONLY page-break newlines (the ones inserted during concatenation) to spaces.\n //\n // NOTE: Offsets from findBreaksInRange are string indices (code units). Using Array.from()\n // would index by Unicode code points and can desync indices if surrogate pairs appear.\n const breakSet = new Set(breaksInRange);\n return content.replace(/\\n/g, (match, offset: number) => (breakSet.has(offset) ? ' ' : match));\n};\n\n/**\n * Applies breakpoints to oversized segments.\n *\n * For each segment that spans more than maxPages, tries the breakpoint patterns\n * in order to find a suitable split point. Structural markers (from rules) are\n * always respected - segments are only broken within their boundaries.\n *\n * @param segments - Initial segments from rule processing\n * @param pages - Original pages for page lookup\n * @param maxPages - Maximum pages before breakpoints apply\n * @param breakpoints - Patterns to try in order (tokens supported)\n * @param prefer - 'longer' for last match, 'shorter' for first match\n * @returns Processed segments with oversized ones broken up\n */\n// applyBreakpoints implementation moved to breakpoint-processor.ts to reduce complexity in this module.\n\n/**\n * Segments pages of content based on pattern-matching rules.\n *\n * This is the main entry point for the segmentation engine. It takes an array\n * of pages and applies the provided rules to identify split points, producing\n * an array of segments with content, page references, and metadata.\n *\n * @param pages - Array of pages with id and content\n * @param options - Segmentation options including splitting rules\n * @returns Array of segments with content, from/to page references, and optional metadata\n *\n * @example\n * // Split markdown by headers\n * const segments = segmentPages(pages, {\n * rules: [\n * { lineStartsWith: ['## '], split: 'at', meta: { type: 'chapter' } }\n * ]\n * });\n *\n * @example\n * // Split Arabic hadith text with number extraction\n * const segments = segmentPages(pages, {\n * rules: [\n * {\n * lineStartsAfter: ['{{raqms:hadithNum}} {{dash}} '],\n * split: 'at',\n * fuzzy: true,\n * meta: { type: 'hadith' }\n * }\n * ]\n * });\n *\n * @example\n * // Multiple rules with page constraints\n * const segments = segmentPages(pages, {\n * rules: [\n * { lineStartsWith: ['{{kitab}}'], split: 'at', meta: { type: 'book' } },\n * { lineStartsWith: ['{{bab}}'], split: 'at', min: 10, meta: { type: 'chapter' } },\n * { regex: '^[٠-٩]+ - ', split: 'at', meta: { type: 'hadith' } }\n * ]\n * });\n */\nexport const segmentPages = (pages: Page[], options: SegmentationOptions): Segment[] => {\n const { rules = [], maxPages = 0, breakpoints = [], prefer = 'longer', pageJoiner = 'space', logger } = options;\n\n const processedPages = options.replace ? 
applyReplacements(pages, options.replace) : pages;\n const { content: matchContent, normalizedPages: normalizedContent, pageMap } = buildPageMap(processedPages);\n const splitPoints = collectSplitPointsFromRules(rules, matchContent, pageMap);\n const unique = dedupeSplitPoints(splitPoints);\n\n // Build initial segments from structural rules\n let segments = buildSegments(unique, matchContent, pageMap, rules);\n\n segments = ensureFallbackSegment(segments, processedPages, normalizedContent, pageJoiner);\n\n // Apply breakpoints post-processing for oversized segments\n if (maxPages >= 0 && breakpoints.length) {\n const patternProcessor = (p: string) => processPattern(p, false).pattern;\n return applyBreakpoints(\n segments,\n processedPages,\n normalizedContent,\n maxPages,\n breakpoints,\n prefer,\n patternProcessor,\n logger,\n pageJoiner,\n );\n }\n\n return segments;\n};\n\n/**\n * Creates segment objects from split points.\n *\n * Handles segment creation including:\n * - Content extraction (with captured content for `lineStartsAfter`)\n * - Page break conversion to spaces\n * - From/to page reference calculation\n * - Metadata merging (static + named captures)\n *\n * @param splitPoints - Sorted, unique split points\n * @param content - Full concatenated content string\n * @param pageMap - Page mapping utilities\n * @param rules - Original rules (for constraint checking on first segment)\n * @returns Array of segment objects\n */\nconst buildSegments = (splitPoints: SplitPoint[], content: string, pageMap: PageMap, rules: SplitRule[]): Segment[] => {\n /**\n * Creates a single segment from a content range.\n */\n const createSegment = (\n start: number,\n end: number,\n meta?: Record<string, unknown>,\n capturedContent?: string,\n namedCaptures?: Record<string, string>,\n contentStartOffset?: number,\n ): Segment | null => {\n // For lineStartsAfter, skip the marker by using contentStartOffset\n const actualStart = start + (contentStartOffset ?? 0);\n // For lineStartsAfter (contentStartOffset set), trim leading whitespace after marker\n // For other rules, only trim trailing whitespace to preserve intentional leading spaces\n const sliced = content.slice(actualStart, end);\n let text = capturedContent?.trim() ?? (contentStartOffset ? sliced.trim() : sliced.replace(/[\\s\\n]+$/, ''));\n if (!text) {\n return null;\n }\n if (!capturedContent) {\n text = convertPageBreaks(text, actualStart, pageMap.pageBreaks);\n }\n const from = pageMap.getId(actualStart);\n const to = capturedContent ? pageMap.getId(end - 1) : pageMap.getId(actualStart + text.length - 1);\n const seg: Segment = { content: text, from };\n if (to !== from) {\n seg.to = to;\n }\n if (meta || namedCaptures) {\n seg.meta = { ...meta, ...namedCaptures };\n }\n return seg;\n };\n\n /**\n * Creates segments from an array of split points.\n */\n const createSegmentsFromSplitPoints = (): Segment[] => {\n const result: Segment[] = [];\n for (let i = 0; i < splitPoints.length; i++) {\n const sp = splitPoints[i];\n const end = i < splitPoints.length - 1 ? 
splitPoints[i + 1].index : content.length;\n const s = createSegment(\n sp.index,\n end,\n sp.meta,\n sp.capturedContent,\n sp.namedCaptures,\n sp.contentStartOffset,\n );\n if (s) {\n result.push(s);\n }\n }\n return result;\n };\n\n const segments: Segment[] = [];\n\n // Handle case with no split points\n if (!splitPoints.length) {\n const firstId = pageMap.getId(0);\n if (anyRuleAllowsId(rules, firstId)) {\n const s = createSegment(0, content.length);\n if (s) {\n segments.push(s);\n }\n }\n return segments;\n }\n\n // Add first segment if there's content before first split\n if (splitPoints[0].index > 0) {\n const firstId = pageMap.getId(0);\n if (anyRuleAllowsId(rules, firstId)) {\n const s = createSegment(0, splitPoints[0].index);\n if (s) {\n segments.push(s);\n }\n }\n }\n\n // Create segments from split points using extracted utility\n return [...segments, ...createSegmentsFromSplitPoints()];\n};\n","import { normalizeLineEndings } from './segmentation/textUtils.js';\nimport { getAvailableTokens, TOKEN_PATTERNS } from './segmentation/tokens.js';\nimport type { Page } from './segmentation/types.js';\n\nexport type LineStartAnalysisOptions = {\n /** Return top K patterns (after filtering). Default: 20 */\n topK?: number;\n /** Only consider the first N characters of each trimmed line. Default: 60 */\n prefixChars?: number;\n /** Ignore lines shorter than this (after trimming). Default: 6 */\n minLineLength?: number;\n /** Only include patterns that appear at least this many times. Default: 3 */\n minCount?: number;\n /** Keep up to this many example lines per pattern. Default: 5 */\n maxExamples?: number;\n /**\n * If true, include a literal first word when no token match is found at the start.\n * Default: true\n */\n includeFirstWordFallback?: boolean;\n /**\n * If true, strip Arabic diacritics (harakat/tashkeel) for the purposes of matching tokens.\n * This helps patterns like `وأَخْبَرَنَا` match the `{{naql}}` token (`وأخبرنا`).\n *\n * Note: examples are still stored in their original (unstripped) form.\n *\n * Default: true\n */\n normalizeArabicDiacritics?: boolean;\n /**\n * How to sort patterns before applying `topK`.\n *\n * - `specificity` (default): prioritize more structured prefixes first (tokenCount, then literalLen), then count.\n * - `count`: prioritize highest-frequency patterns first, then specificity.\n */\n sortBy?: 'specificity' | 'count';\n /**\n * Optional filter to restrict which lines are analyzed.\n *\n * The `line` argument is the trimmed + whitespace-collapsed version of the line.\n * Return `true` to include it, `false` to skip it.\n *\n * @example\n * // Only analyze markdown H2 headings\n * { lineFilter: (line) => line.startsWith('## ') }\n */\n lineFilter?: (line: string, pageId: number) => boolean;\n /**\n * Optional list of prefix matchers to consume before tokenization.\n *\n * This is for \"syntactic\" prefixes that are common at line start but are not\n * meaningful as tokens by themselves (e.g. markdown headings like `##`).\n *\n * Each matcher is applied at the current position. 
If it matches, the matched\n * text is appended (escaped) to the signature and the scanner advances.\n *\n * @example\n * // Support markdown blockquotes and headings\n * { prefixMatchers: [/^>+/u, /^#+/u] }\n */\n prefixMatchers?: RegExp[];\n /**\n * How to represent whitespace in returned `pattern` signatures.\n *\n * - `regex` (default): use `\\\\s*` placeholders between tokens (useful if you paste patterns into regex-ish templates).\n * - `space`: use literal single spaces (`' '`) between tokens (safer if you don't want `\\\\s` to match newlines when reused as regex).\n */\n whitespace?: 'regex' | 'space';\n};\n\nexport type LineStartPatternExample = { line: string; pageId: number };\n\nexport type CommonLineStartPattern = {\n pattern: string;\n count: number;\n examples: LineStartPatternExample[];\n};\n\nconst countTokenMarkers = (pattern: string): number => (pattern.match(/\\{\\{/g) ?? []).length;\n\nconst stripWhitespacePlaceholders = (pattern: string): string =>\n // Remove both the regex placeholder and literal spaces/tabs since they are not meaningful \"constraints\"\n pattern.replace(/\\\\s\\*/g, '').replace(/[ \\t]+/g, '');\n\n// Heuristic: higher is \"more precise\".\n// - More tokens usually means more structured prefix\n// - More literal characters (after removing \\s*) indicates more constraints (e.g. \":\" or \"[\")\nconst computeSpecificity = (pattern: string): { literalLen: number; tokenCount: number } => {\n const tokenCount = countTokenMarkers(pattern);\n const literalLen = stripWhitespacePlaceholders(pattern).length;\n return { literalLen, tokenCount };\n};\n\ntype ResolvedLineStartAnalysisOptions = Required<Omit<LineStartAnalysisOptions, 'lineFilter' | 'prefixMatchers'>> & {\n lineFilter?: LineStartAnalysisOptions['lineFilter'];\n prefixMatchers: RegExp[];\n};\n\nconst DEFAULT_OPTIONS: ResolvedLineStartAnalysisOptions = {\n includeFirstWordFallback: true,\n lineFilter: undefined,\n maxExamples: 1,\n minCount: 3,\n minLineLength: 6,\n normalizeArabicDiacritics: true,\n prefixChars: 60,\n prefixMatchers: [/^#+/u],\n sortBy: 'specificity',\n topK: 40,\n whitespace: 'regex',\n};\n\n// For analysis signatures we avoid escaping ()[] because:\n// - These are commonly used literally in texts (e.g., \"(ح)\")\n// - When signatures are later used in template patterns, ()[] are auto-escaped there\n// We still escape other regex metacharacters to keep signatures safe if reused as templates.\nconst escapeSignatureLiteral = (s: string): string => s.replace(/[.*+?^${}|\\\\{}]/g, '\\\\$&');\n\n// Keep this intentionally focused on \"useful at line start\" tokens, avoiding overly-generic tokens like {{harf}}.\nconst TOKEN_PRIORITY_ORDER: string[] = [\n 'basmalah',\n 'kitab',\n 'bab',\n 'fasl',\n 'naql',\n 'rumuz',\n 'numbered',\n 'raqms',\n 'raqm',\n 'dash',\n 'bullet',\n 'tarqim',\n];\n\nconst buildTokenPriority = (): string[] => {\n const allTokens = new Set(getAvailableTokens());\n // IMPORTANT: We only use an explicit allow-list here.\n // Including \"all remaining tokens\" adds overly-generic tokens (e.g., harf) which makes signatures noisy.\n return TOKEN_PRIORITY_ORDER.filter((t) => allTokens.has(t));\n};\n\nconst collapseWhitespace = (s: string): string => s.replace(/\\s+/g, ' ').trim();\n\n// Arabic diacritics / tashkeel marks that commonly appear in Shamela texts.\n// This is intentionally conservative: remove combining marks but keep letters.\nconst stripArabicDiacritics = (s: string): string =>\n // harakat + common Quranic marks + tatweel\n 
s.replace(/[\\u064B-\\u065F\\u0670\\u06D6-\\u06ED\\u0640]/gu, '');\n\ntype CompiledTokenRegex = { token: string; re: RegExp };\n\nconst compileTokenRegexes = (tokenNames: string[]): CompiledTokenRegex[] => {\n const compiled: CompiledTokenRegex[] = [];\n for (const token of tokenNames) {\n const pat = TOKEN_PATTERNS[token];\n if (!pat) {\n continue;\n }\n try {\n compiled.push({ re: new RegExp(pat, 'uy'), token });\n } catch {\n // Ignore invalid patterns\n }\n }\n return compiled;\n};\n\nconst appendWs = (out: string, mode: 'regex' | 'space'): string => {\n if (!out) {\n return out;\n }\n if (mode === 'space') {\n return out.endsWith(' ') ? out : `${out} `;\n }\n return out.endsWith('\\\\s*') ? out : `${out}\\\\s*`;\n};\n\nconst consumeLeadingPrefixes = (\n s: string,\n pos: number,\n out: string,\n prefixMatchers: RegExp[],\n whitespace: 'regex' | 'space',\n): { matchedAny: boolean; out: string; pos: number } => {\n let matchedAny = false;\n let currentPos = pos;\n let currentOut = out;\n\n for (const re of prefixMatchers) {\n if (currentPos >= s.length) {\n break;\n }\n const m = re.exec(s.slice(currentPos));\n if (!m || m.index !== 0 || !m[0]) {\n continue;\n }\n\n currentOut += escapeSignatureLiteral(m[0]);\n currentPos += m[0].length;\n matchedAny = true;\n\n const wsAfter = /^[ \\t]+/u.exec(s.slice(currentPos));\n if (wsAfter) {\n currentPos += wsAfter[0].length;\n currentOut = appendWs(currentOut, whitespace);\n }\n }\n\n return { matchedAny, out: currentOut, pos: currentPos };\n};\n\nconst findBestTokenMatchAt = (\n s: string,\n pos: number,\n compiled: CompiledTokenRegex[],\n isArabicLetter: (ch: string) => boolean,\n): { token: string; text: string } | null => {\n let best: { token: string; text: string } | null = null;\n for (const { token, re } of compiled) {\n re.lastIndex = pos;\n const m = re.exec(s);\n if (!m || m.index !== pos) {\n continue;\n }\n if (!best || m[0].length > best.text.length) {\n best = { text: m[0], token };\n }\n }\n\n if (best?.token === 'rumuz') {\n const end = pos + best.text.length;\n const next = end < s.length ? s[end] : '';\n if (next && isArabicLetter(next) && !/\\s/u.test(next)) {\n return null;\n }\n }\n\n return best;\n};\n\nconst tokenizeLineStart = (\n line: string,\n tokenNames: string[],\n prefixChars: number,\n includeFirstWordFallback: boolean,\n normalizeArabicDiacritics: boolean,\n prefixMatchers: RegExp[],\n whitespace: 'regex' | 'space',\n): string | null => {\n const trimmed = collapseWhitespace(line);\n if (!trimmed) {\n return null;\n }\n\n const s = (normalizeArabicDiacritics ? 
stripArabicDiacritics(trimmed) : trimmed).slice(0, prefixChars);\n let pos = 0;\n let out = '';\n let matchedAny = false;\n let matchedToken = false;\n\n // Pre-compile regexes once per call (tokenNames is small); use sticky to match at position.\n const compiled = compileTokenRegexes(tokenNames);\n\n // IMPORTANT: do NOT treat all Arabic-block codepoints as \"letters\" (it includes punctuation like \"،\").\n // We only want to consider actual letters here for the rumuz boundary guard.\n const isArabicLetter = (ch: string): boolean => /\\p{Script=Arabic}/u.test(ch) && /\\p{L}/u.test(ch);\n const isCommonDelimiter = (ch: string): boolean => /[::\\-–—ـ،؛.?!؟()[\\]{}]/u.test(ch);\n\n {\n const consumed = consumeLeadingPrefixes(s, pos, out, prefixMatchers, whitespace);\n pos = consumed.pos;\n out = consumed.out;\n matchedAny = consumed.matchedAny;\n }\n\n // Scan forward at most a few *token* steps to avoid producing huge unique strings.\n // Whitespace and delimiters do not count toward the token step budget.\n let tokenSteps = 0;\n while (tokenSteps < 6 && pos < s.length) {\n // Skip whitespace and represent it as \\\\s*\n const wsMatch = /^[ \\t]+/u.exec(s.slice(pos));\n if (wsMatch) {\n pos += wsMatch[0].length;\n out = appendWs(out, whitespace);\n continue;\n }\n\n const best = findBestTokenMatchAt(s, pos, compiled, isArabicLetter);\n\n if (best) {\n if (out && !out.endsWith('\\\\s*')) {\n // If we have no whitespace but are concatenating tokens, keep it literal.\n }\n out += `{{${best.token}}}`;\n matchedAny = true;\n matchedToken = true;\n pos += best.text.length;\n tokenSteps++;\n continue;\n }\n\n // After matching tokens, allow common delimiters (like ':' in \"١١٢٨ ع:\") to become part of the signature.\n if (matchedAny) {\n const ch = s[pos];\n if (ch && isCommonDelimiter(ch)) {\n out += escapeSignatureLiteral(ch);\n pos += 1;\n continue;\n }\n }\n\n // If we already matched something token-y, stop at first unknown content to avoid overfitting.\n if (matchedAny) {\n // Exception: if we only matched a generic prefix (e.g., \"##\") and no tokens yet,\n // allow the first-word fallback to capture the next word to show heading variations.\n if (includeFirstWordFallback && !matchedToken) {\n const firstWord = (s.slice(pos).match(/^[^\\s:،؛.?!؟]+/u) ?? [])[0];\n if (!firstWord) {\n break;\n }\n out += escapeSignatureLiteral(firstWord);\n tokenSteps++;\n }\n break;\n }\n\n if (!includeFirstWordFallback) {\n return null;\n }\n\n // Fallback: include the first word as a literal (escaped), then stop.\n const firstWord = (s.slice(pos).match(/^[^\\s:،؛.?!؟]+/u) ?? 
[])[0];\n if (!firstWord) {\n return null;\n }\n out += escapeSignatureLiteral(firstWord);\n tokenSteps++;\n return out;\n }\n\n if (!matchedAny) {\n return null;\n }\n // Avoid trailing whitespace placeholder noise.\n if (whitespace === 'regex') {\n while (out.endsWith('\\\\s*')) {\n out = out.slice(0, -3);\n }\n } else {\n while (out.endsWith(' ')) {\n out = out.slice(0, -1);\n }\n }\n return out;\n};\n\n/**\n * Analyze pages and return the most common line-start patterns (top K).\n *\n * This is a pure algorithmic heuristic: it tokenizes common prefixes into a stable\n * template-ish string using the library tokens (e.g., `{{bab}}`, `{{raqms}}`, `{{rumuz}}`).\n */\nexport const analyzeCommonLineStarts = (\n pages: Page[],\n options: LineStartAnalysisOptions = {},\n): CommonLineStartPattern[] => {\n const o: ResolvedLineStartAnalysisOptions = {\n ...DEFAULT_OPTIONS,\n ...options,\n // Ensure defaults are kept if caller doesn't pass these (or passes undefined).\n lineFilter: options.lineFilter ?? DEFAULT_OPTIONS.lineFilter,\n prefixMatchers: options.prefixMatchers ?? DEFAULT_OPTIONS.prefixMatchers,\n whitespace: options.whitespace ?? DEFAULT_OPTIONS.whitespace,\n };\n const tokenPriority = buildTokenPriority();\n\n const counts = new Map<string, { count: number; examples: LineStartPatternExample[] }>();\n\n for (const page of pages) {\n const normalized = normalizeLineEndings(page.content ?? '');\n const lines = normalized.split('\\n');\n for (const line of lines) {\n const trimmed = collapseWhitespace(line);\n if (trimmed.length < o.minLineLength) {\n continue;\n }\n if (o.lineFilter && !o.lineFilter(trimmed, page.id)) {\n continue;\n }\n\n const sig = tokenizeLineStart(\n trimmed,\n tokenPriority,\n o.prefixChars,\n o.includeFirstWordFallback,\n o.normalizeArabicDiacritics,\n o.prefixMatchers,\n o.whitespace,\n );\n if (!sig) {\n continue;\n }\n\n const existing = counts.get(sig);\n if (!existing) {\n counts.set(sig, { count: 1, examples: [{ line: trimmed, pageId: page.id }] });\n } else {\n existing.count++;\n if (existing.examples.length < o.maxExamples) {\n existing.examples.push({ line: trimmed, pageId: page.id });\n }\n }\n }\n }\n\n const compareSpecificityThenCount = (a: CommonLineStartPattern, b: CommonLineStartPattern): number => {\n const sa = computeSpecificity(a.pattern);\n const sb = computeSpecificity(b.pattern);\n // Most precise first\n if (sb.tokenCount !== sa.tokenCount) {\n return sb.tokenCount - sa.tokenCount;\n }\n if (sb.literalLen !== sa.literalLen) {\n return sb.literalLen - sa.literalLen;\n }\n // Then by frequency\n if (b.count !== a.count) {\n return b.count - a.count;\n }\n return a.pattern.localeCompare(b.pattern);\n };\n\n const compareCountThenSpecificity = (a: CommonLineStartPattern, b: CommonLineStartPattern): number => {\n if (b.count !== a.count) {\n return b.count - a.count;\n }\n return compareSpecificityThenCount(a, b);\n };\n\n const sorted: CommonLineStartPattern[] = [...counts.entries()]\n .map(([pattern, v]) => ({ count: v.count, examples: v.examples, pattern }))\n .filter((p) => p.count >= o.minCount)\n .sort(o.sortBy === 'count' ? 
compareCountThenSpecificity : compareSpecificityThenCount);\n\n return sorted.slice(0, o.topK);\n};\n","/**\n * Pattern detection utilities for recognizing template tokens in Arabic text.\n * Used to auto-detect patterns from user-highlighted text in the segmentation dialog.\n *\n * @module pattern-detection\n */\n\nimport { getAvailableTokens, TOKEN_PATTERNS } from './segmentation/tokens.js';\n\n/**\n * Result of detecting a token pattern in text\n */\nexport type DetectedPattern = {\n /** Token name from TOKEN_PATTERNS (e.g., 'raqms', 'dash') */\n token: string;\n /** The matched text */\n match: string;\n /** Start index in the original text */\n index: number;\n /** End index (exclusive) */\n endIndex: number;\n};\n\n/**\n * Token detection order - more specific patterns first to avoid partial matches.\n * Example: 'raqms' before 'raqm' so \"٣٤\" matches 'raqms' not just the first digit.\n *\n * Tokens not in this list are appended in alphabetical order from TOKEN_PATTERNS.\n */\nconst TOKEN_PRIORITY_ORDER = [\n 'basmalah', // Most specific - full phrase\n 'kitab',\n 'bab',\n 'fasl',\n 'naql',\n 'rumuz', // Source abbreviations (e.g., \"خت\", \"خ سي\", \"٤\")\n 'numbered', // Composite: raqms + dash\n 'raqms', // Multiple digits before single digit\n 'raqm',\n 'tarqim',\n 'bullet',\n 'dash',\n 'harf',\n];\n\n/**\n * Gets the token detection priority order.\n * Returns tokens in priority order, with any TOKEN_PATTERNS not in the priority list appended.\n */\nconst getTokenPriority = () => {\n const allTokens = getAvailableTokens();\n const prioritized = TOKEN_PRIORITY_ORDER.filter((t) => allTokens.includes(t));\n const remaining = allTokens.filter((t) => !TOKEN_PRIORITY_ORDER.includes(t)).sort();\n return [...prioritized, ...remaining];\n};\n\nconst isRumuzStandalone = (text: string, startIndex: number, endIndex: number): boolean => {\n // We want rumuz to behave like a standalone marker (e.g. \"س:\" or \"خت ٤:\"),\n // not a substring match inside normal Arabic words (e.g. \"إِبْرَاهِيم\").\n const before = startIndex > 0 ? text[startIndex - 1] : '';\n const after = endIndex < text.length ? 
text[endIndex] : '';\n\n const isWhitespace = (ch: string): boolean => !!ch && /\\s/u.test(ch);\n const isOpenBracket = (ch: string): boolean => !!ch && /[([{]/u.test(ch);\n const isRightDelimiter = (ch: string): boolean => !!ch && /[::\\-–—ـ،؛.?!؟)\\]}]/u.test(ch);\n\n // Treat any Arabic-block codepoint (letters + diacritics + digits) as \"wordy\" context.\n // Unicode Script properties can classify some combining marks as \"Inherited\", so we avoid \\p{Script=Arabic}.\n const isArabicWordy = (ch: string): boolean => !!ch && /[\\u0600-\\u06FF]/u.test(ch);\n\n const leftOk = !before || isWhitespace(before) || isOpenBracket(before) || !isArabicWordy(before);\n const rightOk = !after || isWhitespace(after) || isRightDelimiter(after) || !isArabicWordy(after);\n\n return leftOk && rightOk;\n};\n\n/**\n * Analyzes text and returns all detected token patterns with their positions.\n * Patterns are detected in priority order to avoid partial matches.\n *\n * @param text - The text to analyze for token patterns\n * @returns Array of detected patterns sorted by position\n *\n * @example\n * detectTokenPatterns(\"٣٤ - حدثنا\")\n * // Returns: [\n * // { token: 'raqms', match: '٣٤', index: 0, endIndex: 2 },\n * // { token: 'dash', match: '-', index: 3, endIndex: 4 },\n * // { token: 'naql', match: 'حدثنا', index: 5, endIndex: 10 }\n * // ]\n */\nexport const detectTokenPatterns = (text: string) => {\n if (!text) {\n return [];\n }\n\n const results: DetectedPattern[] = [];\n const coveredRanges: Array<[number, number]> = [];\n\n // Check if a position is already covered by a detected pattern\n const isPositionCovered = (start: number, end: number): boolean => {\n return coveredRanges.some(\n ([s, e]) => (start >= s && start < e) || (end > s && end <= e) || (start <= s && end >= e),\n );\n };\n\n // Process tokens in priority order\n for (const tokenName of getTokenPriority()) {\n const pattern = TOKEN_PATTERNS[tokenName];\n if (!pattern) {\n continue;\n }\n\n try {\n // Create a global regex to find all matches\n const regex = new RegExp(`(${pattern})`, 'gu');\n let match: RegExpExecArray | null;\n\n // biome-ignore lint/suspicious/noAssignInExpressions: standard regex exec loop pattern\n while ((match = regex.exec(text)) !== null) {\n const startIndex = match.index;\n const endIndex = startIndex + match[0].length;\n\n if (tokenName === 'rumuz' && !isRumuzStandalone(text, startIndex, endIndex)) {\n continue;\n }\n\n // Skip if this range overlaps with an already detected pattern\n if (isPositionCovered(startIndex, endIndex)) {\n continue;\n }\n\n results.push({ endIndex, index: startIndex, match: match[0], token: tokenName });\n\n coveredRanges.push([startIndex, endIndex]);\n }\n } catch {}\n }\n\n return results.sort((a, b) => a.index - b.index);\n};\n\n/**\n * Generates a template pattern from text using detected tokens.\n * Replaces matched portions with {{token}} syntax.\n *\n * @param text - Original text\n * @param detected - Array of detected patterns from detectTokenPatterns\n * @returns Template string with tokens, e.g., \"{{raqms}} {{dash}} \"\n *\n * @example\n * const detected = detectTokenPatterns(\"٣٤ - \");\n * generateTemplateFromText(\"٣٤ - \", detected);\n * // Returns: \"{{raqms}} {{dash}} \"\n */\nexport const generateTemplateFromText = (text: string, detected: DetectedPattern[]) => {\n if (!text || detected.length === 0) {\n return text;\n }\n\n // Build template by replacing detected patterns with tokens\n // Process in reverse order to preserve indices\n let template = text;\n 
const sortedByIndexDesc = [...detected].sort((a, b) => b.index - a.index);\n\n for (const d of sortedByIndexDesc) {\n template = `${template.slice(0, d.index)}{{${d.token}}}${template.slice(d.endIndex)}`;\n }\n\n return template;\n};\n\n/**\n * Determines the best pattern type for auto-generated rules based on detected patterns.\n *\n * @param detected - Array of detected patterns\n * @returns Suggested pattern type and whether to use fuzzy matching\n */\nexport const suggestPatternConfig = (\n detected: DetectedPattern[],\n): { patternType: 'lineStartsWith' | 'lineStartsAfter'; fuzzy: boolean; metaType?: string } => {\n // Check if the detected patterns suggest a structural marker (chapter, book, etc.)\n const hasStructuralToken = detected.some((d) => ['basmalah', 'kitab', 'bab', 'fasl'].includes(d.token));\n\n // Check if the pattern is numbered (hadith-style)\n const hasNumberedPattern = detected.some((d) => ['raqms', 'raqm', 'numbered'].includes(d.token));\n\n // If it starts with a structural token, use lineStartsWith (keep marker in content)\n if (hasStructuralToken) {\n return {\n fuzzy: true,\n metaType: detected.find((d) => ['kitab', 'bab', 'fasl'].includes(d.token))?.token || 'chapter',\n patternType: 'lineStartsWith',\n };\n }\n\n // If it's a numbered pattern (like hadith numbers), use lineStartsAfter (strip prefix)\n if (hasNumberedPattern) {\n return { fuzzy: false, metaType: 'hadith', patternType: 'lineStartsAfter' };\n }\n\n // Default: use lineStartsAfter without fuzzy\n return { fuzzy: false, patternType: 'lineStartsAfter' };\n};\n\n/**\n * Analyzes text and generates a complete suggested rule configuration.\n *\n * @param text - Highlighted text from the page\n * @returns Suggested rule configuration or null if no patterns detected\n */\nexport const analyzeTextForRule = (\n text: string,\n): {\n template: string;\n patternType: 'lineStartsWith' | 'lineStartsAfter';\n fuzzy: boolean;\n metaType?: string;\n detected: DetectedPattern[];\n} | null => {\n const detected = detectTokenPatterns(text);\n\n if (detected.length === 0) {\n return null;\n }\n\n const template = generateTemplateFromText(text, detected);\n const config = suggestPatternConfig(detected);\n\n return { detected, template, ...config 
};\n};\n"],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AA+BA,MAAM,mBAAmB;;;;;;;;;;;;;;;AAgBzB,MAAMA,eAA2B;CAC7B;EAAC;EAAU;EAAU;EAAU;EAAS;CACxC,CAAC,KAAU,IAAS;CACpB,CAAC,KAAU,IAAS;CACvB;;;;;;;;;;;;;;AAeD,MAAa,eAAe,MAAsB,EAAE,QAAQ,uBAAuB,OAAO;;;;;;;;;;;;;;;;;;AAmB1F,MAAM,iBAAiB,OAAuB;AAC1C,MAAK,MAAM,SAAS,aAChB,KAAI,MAAM,SAAS,GAAG,CAElB,QAAO,IAAI,MAAM,KAAK,MAAM,YAAY,EAAE,CAAC,CAAC,KAAK,GAAG,CAAC;AAI7D,QAAO,YAAY,GAAG;;;;;;;;;;;;;;;;;;;;;;;;AAyB1B,MAAM,wBAAwB,QAAgB;AAC1C,QAAO,IACF,UAAU,MAAM,CAChB,QAAQ,mBAAmB,GAAG,CAC9B,QAAQ,QAAQ,IAAI,CACpB,MAAM;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAsCf,MAAa,4BAA4B,SAAiB;CACtD,MAAM,oBAAoB,GAAG,iBAAiB;CAC9C,MAAM,OAAO,qBAAqB,KAAK;AAEvC,QAAO,MAAM,KAAK,KAAK,CAClB,KAAK,OAAO,cAAc,GAAG,GAAG,kBAAkB,CAClD,KAAK,GAAG;;;;;AC5JjB,MAAM,wBAAwB;CAAC;CAAI;CAAI;CAAI;CAAI;CAAI;CAAG;AAItD,MAAM,wBAAwB;CAAC;CAAI;CAAI;CAAI;CAAI;CAAI;CAAI;CAAI;CAAI;CAAG;CAAE;;;;;;;;;;;;;;;AAgBpE,MAAa,uBAAuB,OAAoC,OAAO,OAAO,WAAW,EAAE,SAAS,IAAI,GAAG;;;;;;;;;;;;;;;;;;AAmBnH,MAAa,kBAAkB,QAAgB,gBAAkD;AAC7F,KAAI,CAAC,eAAe,YAAY,WAAW,EACvC,QAAO;AAEX,MAAK,MAAM,QAAQ,YACf,KAAI,OAAO,SAAS,UAChB;MAAI,WAAW,KACX,QAAO;QAER;EACH,MAAM,CAAC,MAAM,MAAM;AACnB,MAAI,UAAU,QAAQ,UAAU,GAC5B,QAAO;;AAInB,QAAO;;;;;;;;;;;;;;;;AAiBX,MAAa,uBAAuB,QAAgB,SAAkC;AAClF,KAAI,KAAK,QAAQ,UAAa,SAAS,KAAK,IACxC,QAAO;AAEX,KAAI,KAAK,QAAQ,UAAa,SAAS,KAAK,IACxC,QAAO;AAEX,QAAO,CAAC,eAAe,QAAQ,KAAK,QAAQ;;;;;;;;;;;;;;;;;;AAmBhD,MAAa,mBAAmB,gBAAsD;CAClF,MAAM,6BAAa,IAAI,KAAa;AACpC,MAAK,MAAM,QAAQ,eAAe,EAAE,CAChC,KAAI,OAAO,SAAS,SAChB,YAAW,IAAI,KAAK;KAEpB,MAAK,IAAI,IAAI,KAAK,IAAI,KAAK,KAAK,IAAI,IAChC,YAAW,IAAI,EAAE;AAI7B,QAAO;;;;;;;;;;;;;;;;;;;AAoBX,MAAa,iBACT,SACA,YACA,UACA,SACiB;CACjB,MAAM,UAAU,QAAQ,MAAM;AAC9B,KAAI,CAAC,QACD,QAAO;CAEX,MAAMC,MAAe;EAAE,SAAS;EAAS,MAAM;EAAY;AAC3D,KAAI,aAAa,UAAa,aAAa,WACvC,KAAI,KAAK;AAEb,KAAI,KACA,KAAI,OAAO;AAEf,QAAO;;;;;;;;;;;;;;AA0BX,MAAa,qBAAqB,aAA2B,qBACzD,YAAY,KAAK,OAAO;CACpB,MAAM,OAAO,oBAAoB,GAAG;CACpC,MAAM,aAAa,gBAAgB,KAAK,QAAQ;CAChD,MAAM,gBACF,KAAK,aAAa,gBACL;EACH,MAAM,eAAeC,iBAAe,KAAK,SAAS;AAClD,MAAI;AACA,UAAO,IAAI,OAAO,cAAc,KAAK;WAChC,OAAO;GACZ,MAAM,UAAU,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM;AACtE,SAAM,IAAI,MAAM,sCAAsC,KAAK,SAAS,aAAa,UAAU;;KAE/F,GACJ;AACV,KAAI,KAAK,YAAY,GACjB,QAAO;EAAE;EAAY,OAAO;EAAM;EAAM;EAAe;CAE3D,MAAM,WAAWA,iBAAe,KAAK,QAAQ;AAC7C,KAAI;AACA,SAAO;GAAE;GAAY,OAAO,IAAI,OAAO,UAAU,MAAM;GAAE;GAAM;GAAe;UACzE,OAAO;EACZ,MAAM,UAAU,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM;AACtE,QAAM,IAAI,MAAM,6BAA6B,KAAK,QAAQ,aAAa,UAAU;;EAEvF;;;;;;;;;;;AAeN,MAAa,+BACT,SACA,SACA,OACA,SACA,iBACA,WACS;AACT,KAAI,WAAW,aAAa,WAAW,SAAS,CAAC,QAAQ,SAAS,KAAK,CACnE,QAAO;CAGX,IAAI,UAAU;CACd,IAAI,aAAa;AAEjB,MAAK,IAAI,KAAK,UAAU,GAAG,MAAM,OAAO,MAAM;EAC1C,MAAM,WAAW,gBAAgB,IAAI,QAAQ,IAAI;AACjD,MAAI,CAAC,SACD;EAGJ,MAAM,QAAQ,4BAA4B,SAAS,SAAS,QAAQ,WAAW,EAAE,WAAW;AAC5F,MAAI,QAAQ,KAAK,QAAQ,QAAQ,OAAO,KACpC,WAAU,GAAG,QAAQ,MAAM,GAAG,QAAQ,EAAE,CAAC,GAAG,QAAQ,MAAM,MAAM;AAEpE,MAAI,QAAQ,EACR,cAAa;;AAIrB,QAAO;;;;;AAMX,MAAM,+BAA+B,SAAiB,oBAA4B,eAA+B;AAC7G,MAAK,MAAM,OAAO,uBAAuB;EACrC,MAAM,SAAS,mBAAmB,MAAM,GAAG,KAAK,IAAI,KAAK,mBAAmB,OAAO,CAAC,CAAC,MAAM;AAC3F,MAAI,CAAC,OACD;EAEJ,MAAM,MAAM,QAAQ,QAAQ,QAAQ,WAAW;AAC/C,MAAI,MAAM,EACN,QAAO;;AAGf,QAAO;;;;;;;;;;AAWX,MAAa,oCACT,kBACA,gBACA,SACA,oBACS;CACT,MAAM,kBAAkB,gBAAgB,IAAI,QAAQ,gBAAgB;AACpE,KAAI,CAAC,gBACD,QAAO;CAGX,MAAM,WAAW,iBAAiB,WAAW,CAAC,MAAM,GAAG,KAAK,IAAI,IAAI,iBAAiB,OAAO,CAAC;CAC7F,MAAM,SAAS,SAAS,MAAM,GAAG,KAAK,IAAI,IAAI,SAAS,OAAO,CAAC;AAC/D,KAAI,CAAC,OACD,QAAO;CAGX,MAAM,MAAM,gBAAgB,QAAQ,QAAQ,OAAO;AACnD,QAAO,MAAM,IAAI,MAAM;;;;;;;;;AAU3B,MAAa,qCACT,kBACA,iBACA,eACA,kBACA,SACA,oBACS;CACT,MAAM,iBAAiB,gBAAgB,IAAI,QAAQ,eAAe;AAClE,KAAI
,CAAC,eACD,QAAO;CAIX,MAAM,SAAS,KAAK,IAAI,KAAK,IAAI,GAAG,iBAAiB,EAAE,iBAAiB,OAAO;CAC/E,MAAM,cAAc,KAAK,IAAI,GAAG,SAAS,IAAO;CAChD,MAAM,YAAY,KAAK,IAAI,iBAAiB,QAAQ,SAAS,IAAM;CAInE,MAAM,gBAAgB,eAAe,QAAQ,WAAW;AACxD,MAAK,MAAM,OAAO,uBAAuB;EACrC,MAAM,SAAS,cAAc,MAAM,GAAG,KAAK,IAAI,KAAK,cAAc,OAAO,CAAC,CAAC,MAAM;AACjF,MAAI,CAAC,OACD;EAGJ,IAAI,MAAM,iBAAiB,QAAQ,QAAQ,YAAY;AACvD,SAAO,QAAQ,MAAM,OAAO,WAAW;AAEnC,OAAI,MAAM,KAAK,KAAK,KAAK,iBAAiB,MAAM,MAAM,GAAG,CACrD,QAAO;AAEX,SAAM,iBAAiB,QAAQ,QAAQ,MAAM,EAAE;;EAInD,MAAM,OAAO,iBAAiB,YAAY,QAAQ,OAAO;AACzD,MAAI,OAAO,EACP,QAAO;;AAIf,QAAO;;;;;;;;;AAUX,MAAa,mCACT,kBACA,gBACA,cACA,OACA,SACA,iBACA,sBACS;AAET,KAAI,gBAAgB,MAChB,QAAO,iBAAiB;CAG5B,MAAM,iBAAiB,eAAe;CACtC,MAAM,aAAa,iBAAiB;CACpC,MAAM,aAAa,KAAK,IAAI,gBAAgB,MAAM;CAElD,MAAM,2BAA2B,iCAC7B,kBACA,gBACA,SACA,gBACH;AAID,MAAK,IAAI,UAAU,YAAY,WAAW,YAAY,WAAW;EAC7D,MAAM,mBACF,kBAAkB,aAAa,UAAa,kBAAkB,oBAAoB,SAC5E,KAAK,IAAI,GAAG,kBAAkB,WAAW,kBAAkB,kBAAkB,yBAAyB,GACtG,iBAAiB;EAE3B,MAAM,MAAM,kCACR,kBACA,gBACA,SACA,kBACA,SACA,gBACH;AACD,MAAI,MAAM,EACN,QAAO;;AAMf,QAAO,iBAAiB;;;;;;;;AAS5B,MAAa,8BACT,gBACA,cACA,OACA,SACA,qBACA,sBACS;CACT,MAAM,iBAAiB,QAAQ;AAE/B,KAD6B,oBAAoB,MAAM,OAAO,GAAG,WAAW,IAAI,eAAe,CAAC,IACpE,iBAAiB,MAEzC,QAAO,kBAAkB,iBAAiB,KAAK,kBAAkB;AAIrE,MAAK,IAAI,UAAU,iBAAiB,GAAG,WAAW,cAAc,WAAW;EACvE,MAAM,SAAS,QAAQ;AAEvB,MADmB,oBAAoB,MAAM,OAAO,GAAG,WAAW,IAAI,OAAO,CAAC,CAE1E,QAAO,kBAAkB,WAAW,kBAAkB;;AAG9D,QAAO;;;;;;;;;;;;;AAcX,MAAa,qBACT,cACA,gBACA,OACA,SACA,oBACS;AACT,MAAK,IAAI,KAAK,OAAO,KAAK,gBAAgB,MAAM;EAC5C,MAAM,WAAW,gBAAgB,IAAI,QAAQ,IAAI;AACjD,MAAI,CAAC,SACD;EAGJ,MAAM,iBAAiB,SAAS,QAAQ,WAAW;AAKnD,OAAK,MAAM,OAAO,uBAAuB;GACrC,MAAM,eAAe,eAAe,MAAM,GAAG,KAAK,IAAI,KAAK,eAAe,OAAO,CAAC,CAAC,MAAM;AAKzF,OAAI,aAAa,SAAS,KAAK,aAAa,QAAQ,aAAa,GAAG,EAChE,QAAO;;;AAInB,QAAO;;;;;;;;;;;;;;;;AAiBX,MAAa,uBACT,cACA,gBACA,OACA,SACA,oBACS;CACT,MAAM,eAAe,aAAa,WAAW;AAC7C,KAAI,CAAC,aACD,QAAO;AAIX,MAAK,IAAI,KAAK,gBAAgB,MAAM,OAAO,MAAM;EAC7C,MAAM,WAAW,gBAAgB,IAAI,QAAQ,IAAI;AACjD,MAAI,UAAU;GACV,MAAM,aAAa,SAAS,QAAQ,MAAM,GAAG,KAAK,IAAI,IAAI,SAAS,OAAO,CAAC,CAAC,MAAM;GAClF,MAAM,cAAc,aAAa,MAAM,GAAG,KAAK,IAAI,IAAI,aAAa,OAAO,CAAC;AAK5E,OAAI,WAAW,SAAS,GAAG;AACvB,QAAI,aAAa,WAAW,WAAW,CACnC,QAAO;AAEX,QAAI,SAAS,QAAQ,WAAW,CAAC,WAAW,YAAY,CACpD,QAAO;;;;AAKvB,QAAO;;;;;;;;;;;AAoBX,MAAa,0BACT,YACA,SACA,SACA,UACU;AACV,KAAI,WAAW,SAAS,EACpB,QAAO;AAEX,MAAK,IAAI,UAAU,SAAS,WAAW,OAAO,UAC1C,KAAI,WAAW,IAAI,QAAQ,SAAS,CAChC,QAAO;AAGf,QAAO;;;;;;;;;;AAWX,MAAa,wBAAwB,kBAA0B,iBAAyC;CACpG,MAAM,eAAe,aAAa,QAAQ,MAAM,CAAC,MAAM,GAAG,KAAK,IAAI,IAAI,aAAa,OAAO,CAAC;AAC5F,KAAI,aAAa,WAAW,EACxB,QAAO;CAEX,MAAM,MAAM,iBAAiB,QAAQ,aAAa;AAClD,QAAO,MAAM,IAAI,MAAM;;;;;;;;;;AAW3B,MAAa,4BACT,eACA,OACA,WACS;CAGT,IAAIC;CACJ,IAAIC;AACJ,MAAK,MAAM,KAAK,cAAc,SAAS,MAAM,EAAE;EAC3C,MAAM,QAAQ;GAAE,OAAO,EAAE;GAAO,QAAQ,EAAE,GAAG;GAAQ;AACrD,MAAI,CAAC,MACD,SAAQ;AAEZ,SAAO;;AAEX,KAAI,CAAC,MACD,QAAO;CAEX,MAAM,WAAW,WAAW,WAAW,OAAQ;AAC/C,QAAO,SAAS,QAAQ,SAAS;;;;;;AAOrC,MAAM,2BACF,kBACA,cACA,mBACA,OACA,SACA,oBACS;CACT,MAAM,cAAc,eAAe;AACnC,KAAI,eAAe,OAAO;EACtB,MAAM,eAAe,gBAAgB,IAAI,QAAQ,aAAa;AAC9D,MAAI,cAAc;GACd,MAAM,MAAM,qBAAqB,kBAAkB,aAAa;AAChE,OAAI,MAAM,EACN,QAAO,KAAK,IAAI,KAAK,mBAAmB,iBAAiB,OAAO;;;AAI5E,QAAO,KAAK,IAAI,mBAAmB,iBAAiB,OAAO;;;;;;;;;;;;;AAc/D,MAAa,qBACT,kBACA,gBACA,OACA,cACA,mBACA,QACS;CACT,MAAM,EAAE,SAAS,iBAAiB,qBAAqB,WAAW;AAElE,MAAK,MAAM,EAAE,MAAM,OAAO,YAAY,mBAAmB,qBAAqB;AAE1E,MAAI,CAAC,oBAAoB,QAAQ,iBAAiB,KAAK,CACnD;AAIJ,MAAI,uBAAuB,YAAY,SAAS,gBAAgB,aAAa,CACzE;AAIJ,MAAI,eAAe,KAAK,iBAAiB,CACrC;AAIJ,MAAI,UAAU,KACV,QAAO,wBACH,kBACA,cACA,mBACA,OACA,SACA,gBACH;EAKL,MAAM,WAAW,yBADK,iBAAiB,MAAM,GAAG,KAAK,IAAI,mBAAmB,i
BAAiB,OAAO,CAAC,EAC5C,OAAO,OAAO;AACvE,MAAI,WAAW,EACX,QAAO;;AAIf,QAAO;;;;;;;;;;;ACtqBX,MAAM,yBAAyB,YAAsB,IAAI,IAAI,QAAQ,KAAK,IAAI,MAAM,CAAC,IAAI,EAAE,CAAC,CAAC;AAE7F,MAAM,2BAA2B,OAAe,sBAAgC;CAC5E,MAAM,kCAAkB,IAAI,KAA6B;AACzD,MAAK,IAAI,IAAI,GAAG,IAAI,MAAM,QAAQ,KAAK;EACnC,MAAM,UAAU,kBAAkB;AAClC,kBAAgB,IAAI,MAAM,GAAG,IAAI;GAAE;GAAS,OAAO;GAAG,QAAQ,QAAQ;GAAQ,CAAC;;AAEnF,QAAO;;AAGX,MAAM,0BAA0B,SAAmB,oBAAiD;CAChG,MAAMC,oBAA8B,CAAC,EAAE;CACvC,IAAI,cAAc;AAClB,MAAK,IAAI,IAAI,GAAG,IAAI,QAAQ,QAAQ,KAAK;EACrC,MAAM,WAAW,gBAAgB,IAAI,QAAQ,GAAG;AAChD,iBAAe,WAAW,SAAS,SAAS;AAC5C,MAAI,IAAI,QAAQ,SAAS,EACrB,gBAAe;AAEnB,oBAAkB,KAAK,YAAY;;AAEvC,QAAO;;AAGX,MAAM,2BACF,qBACA,SACA,SACA,UACU,oBAAoB,MAAM,OAAO,uBAAuB,GAAG,YAAY,SAAS,SAAS,MAAM,CAAC;AAE9G,MAAa,uBACT,gBACA,OACA,SACA,aACS;CAET,MAAM,kBADgB,QAAQ,kBACU;CACxC,IAAI,eAAe;AACnB,MAAK,IAAI,IAAI,gBAAgB,KAAK,OAAO,IACrC,KAAI,QAAQ,MAAM,gBACd,gBAAe;KAEf;AAGR,QAAO;;AAGX,MAAM,wBAAwB,gBAAwB,OAAe,YACjE,QAAQ,SAAS,QAAQ;AAE7B,MAAM,sBACF,kBACA,gBACA,OACA,SACA,MACA,gBAEA,cACI,kBACA,QAAQ,iBACR,mBAAmB,QAAQ,QAAQ,SAAS,QAC5C,cAAc,OAAO,OACxB;AAIL,MAAM,qBACF,cACA,gBACA,OACA,cACA,SACA,oBACa;CACb,MAAM,iBAAiB,eACjB,oBAAoB,cAAc,gBAAgB,OAAO,SAAS,gBAAgB,GAClF;AAIN,QAAO;EAAE,cAHY,eACf,kBAAkB,cAAc,gBAAgB,cAAc,SAAS,gBAAgB,GACvF;EACiB;EAAgB;;AAG3C,MAAa,sBACT,kBACA,cACA,OACA,SACA,oBACS;CACT,IAAI,cAAc;AAClB,KAAI,oBAAoB,eAAe,KAAK,OAAO;EAC/C,MAAM,eAAe,gBAAgB,IAAI,QAAQ,eAAe,GAAG;AACnE,MAAI,cAAc;GACd,MAAM,aAAa,aAAa,QAAQ,MAAM,GAAG,KAAK,IAAI,IAAI,aAAa,OAAO,CAAC;GACnF,MAAM,kBAAkB,iBAAiB,WAAW,CAAC,MAAM,GAAG,KAAK,IAAI,IAAI,iBAAiB,OAAO,CAAC;AAIpG,OACI,eACC,iBAAiB,WAAW,WAAW,IAAI,aAAa,QAAQ,WAAW,gBAAgB,EAE5F,eAAc,eAAe;;;AAIzC,QAAO;;AAGX,MAAM,sBACF,cACA,gBACA,cACA,SACA,MACA,gBAEA,cACI,cACA,QAAQ,iBACR,eAAe,iBAAiB,QAAQ,gBAAgB,QACxD,cAAc,OAAO,OACxB;AAEL,MAAM,2BACF,SACA,SACA,OACA,SACA,iBACA,mBACA,qBACA,UACA,QACA,WACY;CACZ,MAAMC,SAAoB,EAAE;CAC5B,IAAI,mBAAmB,QAAQ;CAC/B,IAAI,iBAAiB;CACrB,IAAI,eAAe;CACnB,IAAI,iBAAiB;CACrB,MAAM,gBAAgB;AAEtB,QAAO,kBAAkB,OAAO;AAC5B;AACA,MAAI,iBAAiB,eAAe;AAChC,WAAQ,QAAQ,oEAAoE,EAChF,gBAAgB,eACnB,CAAC;AACF;;EAGJ,MAAM,yBAAyB,wBAAwB,qBAAqB,SAAS,gBAAgB,MAAM;EAC3G,MAAM,gBAAgB,qBAAqB,gBAAgB,OAAO,QAAQ;AAC1E,MAAI,iBAAiB,YAAY,CAAC,wBAAwB;GACtD,MAAM,WAAW,mBACb,kBACA,gBACA,OACA,SACA,QAAQ,MACR,aACH;AACD,OAAI,SACA,QAAO,KAAK,SAAS;AAEzB;;EAGJ,MAAM,eAAe,oBAAoB,gBAAgB,OAAO,SAAS,SAAS;AAClF,UAAQ,QAAQ,2BAA2B,kBAAkB;GACzD;GACA,mBAAmB,QAAQ;GAC3B,uBAAuB,iBAAiB,MAAM,GAAG,GAAG;GACpD,wBAAwB,iBAAiB;GACzC;GACA;GACA,UAAU,QAAQ;GAClB;GACA,iBAAiB,QAAQ;GAC5B,CAAC;EACF,MAAM,oBAAoB,gCACtB,kBACA,gBACA,cACA,OACA,SACA,iBACA,kBACH;EAED,MAAM,sBAAsB,wBAAwB,qBAAqB,SAAS,gBAAgB,aAAa;EAC/G,IAAI,gBAAgB;AAEpB,MAAI,oBACA,iBAAgB,2BACZ,gBACA,cACA,OACA,SACA,qBACA,kBACH;AAGL,MAAI,iBAAiB,EAEjB,iBAAgB,kBACZ,kBACA,gBACA,OACA,cACA,mBANqC;GAAE;GAAqB;GAAiB;GAAS;GAAQ,CAQjG;AAGL,MAAI,iBAAiB,EAEjB,iBAAgB;EAGpB,MAAM,eAAe,iBAAiB,MAAM,GAAG,cAAc,CAAC,MAAM;AACpE,UAAQ,QAAQ,+BAA+B;GAC3C;GACA,iBAAiB,aAAa,MAAM,IAAI;GACxC,oBAAoB,aAAa;GACjC;GACH,CAAC;EAEF,MAAM,EAAE,cAAc,mBAAmB,kBACrC,cACA,gBACA,OACA,cACA,SACA,gBACH;AAED,MAAI,cAAc;GACd,MAAM,WAAW,mBACb,cACA,gBACA,cACA,SACA,QAAQ,MACR,aACH;AACD,OAAI,SACA,QAAO,KAAK,SAAS;;AAI7B,qBAAmB,iBAAiB,MAAM,cAAc,CAAC,MAAM;AAC/D,UAAQ,QAAQ,4BAA4B;GACxC;GACA,wBAAwB,iBAAiB;GACzC,uBAAuB,iBAAiB,MAAM,GAAG,GAAG;GACvD,CAAC;AACF,MAAI,CAAC,kBAAkB;AACnB,WAAQ,QAAQ,2CAA2C;AAC3D;;AAGJ,mBAAiB,mBAAmB,kBAAkB,cAAc,OAAO,SAAS,gBAAgB;AACpG,UAAQ,QAAQ,+BAA+B;GAC3C;GACA,mBAAmB,QAAQ;GAC9B,CAAC;AACF,iBAAe;;AAGnB,SAAQ,QAAQ,6CAA6C,EAAE,aAAa,OAAO,QAAQ,CAAC;AAC5F,QAAO;;;;;;;AAQX,MAAa,oBACT,UACA,OACA,mBACA,UACA,aACA,QACA,kBACA,QACA,aAAkC,YACtB;CACZ,MAAM,UAAU,MAAM
,KAAK,MAAM,EAAE,GAAG;CACtC,MAAM,gBAAgB,sBAAsB,QAAQ;CACpD,MAAM,kBAAkB,wBAAwB,OAAO,kBAAkB;CACzE,MAAM,oBAAoB,uBAAuB,SAAS,gBAAgB;CAC1E,MAAM,sBAAsB,kBAAkB,aAAa,iBAAiB;CAE5E,MAAMA,SAAoB,EAAE;AAE5B,SAAQ,OAAO,kCAAkC;EAAE;EAAU,cAAc,SAAS;EAAQ,CAAC;AAE7F,SAAQ,QAAQ,+BAA+B;EAC3C,cAAc,SAAS;EACvB,UAAU,SAAS,KAAK,OAAO;GAAE,eAAe,EAAE,QAAQ;GAAQ,MAAM,EAAE;GAAM,IAAI,EAAE;GAAI,EAAE;EAC/F,CAAC;AAEF,MAAK,MAAM,WAAW,UAAU;EAC5B,MAAM,UAAU,cAAc,IAAI,QAAQ,KAAK,IAAI;EACnD,MAAM,QAAQ,QAAQ,OAAO,SAAa,cAAc,IAAI,QAAQ,GAAG,IAAI,UAAW;EAEtF,MAAM,eAAe,QAAQ,MAAM,QAAQ,QAAQ,QAAQ;EAC3D,MAAM,gBAAgB,wBAAwB,qBAAqB,SAAS,SAAS,MAAM;AAE3F,MAAI,eAAe,YAAY,CAAC,eAAe;AAC3C,UAAO,KAAK,QAAQ;AACpB;;EAGJ,MAAM,SAAS,wBACX,SACA,SACA,OACA,SACA,iBACA,mBACA,qBACA,UACA,QACA,OACH;AAED,SAAO,KACH,GAAG,OAAO,KAAK,MAAM;GACjB,MAAM,aAAa,cAAc,IAAI,EAAE,KAAK,IAAI;GAChD,MAAM,WAAW,EAAE,OAAO,SAAa,cAAc,IAAI,EAAE,GAAG,IAAI,aAAc;AAChF,OAAI,cAAc,KAAK,WAAW,WAC9B,QAAO;IACH,GAAG;IACH,SAAS,4BACL,EAAE,SACF,YACA,UACA,SACA,iBACA,WACH;IACJ;AAEL,UAAO;IACT,CACL;;AAGL,SAAQ,OAAO,mCAAmC,EAAE,aAAa,OAAO,QAAQ,CAAC;AACjF,QAAO;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;ACnTX,MAAa,wBACT,QACA,iBACqC;AACrC,KAAI,CAAC,UAAU,aAAa,WAAW,EACnC;CAGJ,MAAMC,gBAAwC,EAAE;AAChD,MAAK,MAAM,QAAQ,aACf,KAAI,OAAO,UAAU,OACjB,eAAc,QAAQ,OAAO;AAIrC,QAAO,OAAO,KAAK,cAAc,CAAC,SAAS,IAAI,gBAAgB;;;;;;;;;;;;;;;;;;;;;;;AAwBnE,MAAa,4BAA4B,UAA+C;AACpF,KAAI,MAAM,UAAU,EAChB;AAGJ,MAAK,IAAI,IAAI,MAAM,SAAS,GAAG,KAAK,GAAG,IACnC,KAAI,MAAM,OAAO,OACb,QAAO,MAAM;;;;;;;;;;;;;;;;;;;;;;AA0BzB,MAAa,uBACT,SACA,MACA,UACgB;AAChB,QAAO,QAAQ,QAAQ,MAAM;EACzB,MAAM,KAAK,MAAM,EAAE,MAAM;AACzB,MAAI,KAAK,QAAQ,UAAa,KAAK,KAAK,IACpC,QAAO;AAEX,MAAI,KAAK,QAAQ,UAAa,KAAK,KAAK,IACpC,QAAO;AAEX,MAAI,eAAe,IAAI,KAAK,QAAQ,CAChC,QAAO;AAEX,SAAO;GACT;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AA6MN,MAAa,mBAAmB,OAAyC,WAA4B;AACjG,QAAO,MAAM,MAAM,MAAM;EACrB,MAAM,QAAQ,EAAE,QAAQ,UAAa,UAAU,EAAE;EACjD,MAAM,QAAQ,EAAE,QAAQ,UAAa,UAAU,EAAE;AACjD,SAAO,SAAS;GAClB;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;ACrTN,MAAa,0BAA0B,YAA4B;AAG/D,QAAO,QAAQ,QAAQ,+BAA+B,QAAQ,OAAO,YAAY;AAC7E,MAAI,MACA,QAAO;AAEX,SAAO,KAAK;GACd;;AAoDN,MAAM,aAAa,MA9BW;CAE1B;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CA3BwB;CACT;CA8BlB,CAEoC,KAAK,IAAI,CAAC;AAC/C,MAAM,cAAc,GAAG,WAAW,SAAS,WAAW;AAEtD,MAAMC,cAAsC;CAQxC,KAAK;CAUL,UAAU,CAAC,YAAY,IAAI,CAAC,KAAK,IAAI;CASrC,QAAQ;CAaR,MAAM;CAMN,MAAM,CAAC,SAAS,MAAM,CAAC,KAAK,IAAI;CAUhC,MAAM;CAiBN,OAAO;CASP,OAAO;CAeP,MAAM;EAAC;EAAS;EAAW;EAAS;EAAQ;EAAU;EAAU;EAAU;EAAU;EAAU,CAAC,KAAK,IAAI;CAUxG,MAAM;CAUN,OAAO;CAoBP,OAAO;CAMP,QAAQ;CACX;;;;;;;;;;;;AAkBD,MAAMC,mBAA2C,EAoB7C,UAAU,uBACb;;;;;;;;;;;;;;;AAgBD,MAAa,mCAAmC,aAA6B;CACzE,IAAI,MAAM;AACV,MAAK,IAAI,IAAI,GAAG,IAAI,IAAI,KAAK;EACzB,MAAM,OAAO,IAAI,QAAQ,mBAAmB,GAAG,cAAsB;AAEjE,UADoB,iBAAiB,cACf;IACxB;AACF,MAAI,SAAS,IACT;AAEJ,QAAM;;AAEV,QAAO;;;;;;;;;;AAWX,MAAM,oBAAoB,aAA6B;AACnD,QAAO,SAAS,QAAQ,mBAAmB,GAAG,cAAc;AACxD,SAAO,YAAY,cAAc,KAAK,UAAU;GAClD;;;;;;;;;;;;;;;;;;;;;;;;;;;AA4BN,MAAaC,iBAAyC;CAClD,GAAG;CAEH,GAAG,OAAO,YAAY,OAAO,QAAQ,iBAAiB,CAAC,KAAK,CAAC,GAAG,OAAO,CAAC,GAAG,iBAAiB,EAAE,CAAC,CAAC,CAAC;CACpG;;;;;;;;;;;AAYD,MAAM,2BAA2B;;;;;;;;;AAUjC,MAAM,qBAAqB;;;;;;;;;;;;;;;;AAiB3B,MAAa,kBAAkB,UAA2B;AACtD,oBAAmB,YAAY;AAC/B,QAAO,mBAAmB,KAAK,MAAM;;AAkCzC,MAAM,6BAA6B,UAAqC;CACpE,MAAMC,WAA8B,EAAE;CACtC,IAAI,YAAY;AAChB,0BAAyB,YAAY;CACrC,IAAIC;AAGJ,SAAQ,QAAQ,yBAAyB,KAAK,MAAM,MAAM,MAAM;AAC5D,MAAI,MAAM,QAAQ,UACd,UAAS,KAAK;GAAE,MAAM;GAAQ,OAAO,MAAM,MAAM,WAAW,MAAM,MAAM;GAAE,CAAC;AAE/E,WAAS,KAAK;GAAE,MAAM;GAAS,OAAO,MAAM;GAAI,CAAC;AACjD,cAAY,MAAM,QAAQ,MAAM,GAAG;;AAGvC,KAAI,YAAY,MAAM,O
AClB,UAAS,KAAK;EAAE,MAAM;EAAQ,OAAO,MAAM,MAAM,UAAU;EAAE,CAAC;AAGlE,QAAO;;AAGX,MAAM,yBAAyB,MAAc,mBAAyD;AAClG,KAAI,kBAAkB,mBAAmB,KAAK,KAAK,CAC/C,QAAO,eAAe,KAAK;AAE/B,QAAO;;AAKX,MAAM,iCAAiC,cAAsB,mBAAyD;AAClH,KAAI,CAAC,eACD,QAAO;AAEX,QAAO,aACF,MAAM,IAAI,CACV,KAAK,SAAU,mBAAmB,KAAK,KAAK,GAAG,eAAe,KAAK,GAAG,KAAM,CAC5E,KAAK,IAAI;;AAGlB,MAAM,qBAAqB,YAAuE;AAC9F,0BAAyB,YAAY;CACrC,MAAM,aAAa,yBAAyB,KAAK,QAAQ;AACzD,KAAI,CAAC,WACD,QAAO;CAEX,MAAM,GAAG,WAAW,eAAe;AACnC,QAAO;EAAE;EAAa;EAAW;;AAGrC,MAAM,yBAAyB,kBAA2B;CACtD,MAAMC,eAAyB,EAAE;CACjC,MAAM,oCAAoB,IAAI,KAAqB;CAEnD,MAAM,YAAY,aAA6B;EAC3C,MAAM,QAAQ,kBAAkB,IAAI,SAAS,IAAI;AACjD,oBAAkB,IAAI,UAAU,QAAQ,EAAE;EAC1C,MAAM,aAAa,UAAU,IAAI,WAAW,GAAG,SAAS,GAAG,QAAQ;EACnE,MAAM,eAAe,gBAAgB,GAAG,gBAAgB,eAAe;AACvE,eAAa,KAAK,aAAa;AAC/B,SAAO;;AAGX,QAAO;EAAE;EAAc;EAAU;;AAGrC,MAAM,sBACF,SACA,SAKS;CACT,MAAM,SAAS,kBAAkB,QAAQ;AACzC,KAAI,CAAC,OACD,QAAO;CAGX,MAAM,EAAE,WAAW,gBAAgB;AAGnC,KAAI,CAAC,aAAa,YAEd,QAAO,MADM,KAAK,gBAAgB,YAAY,CAC5B;CAGtB,IAAI,eAAe,eAAe;AAClC,KAAI,CAAC,aAED,QAAO;AAGX,gBAAe,8BAA8B,cAAc,KAAK,eAAe;AAG/E,KAAI,YAEA,QAAO,MADM,KAAK,gBAAgB,YAAY,CAC5B,GAAG,aAAa;AAItC,QAAO;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAuCX,MAAa,4BACT,OACA,gBACA,kBACe;CACf,MAAM,WAAW,0BAA0B,MAAM;CACjD,MAAM,WAAW,sBAAsB,cAAc;CAErD,MAAM,iBAAiB,SAAS,KAAK,YAAY;AAC7C,MAAI,QAAQ,SAAS,OACjB,QAAO,sBAAsB,QAAQ,OAAO,eAAe;AAE/D,SAAO,mBAAmB,QAAQ,OAAO;GACrC;GACA;GACA,iBAAiB,SAAS;GAC7B,CAAC;GACJ;AAEF,QAAO;EACH,cAAc,SAAS;EACvB,aAAa,SAAS,aAAa,SAAS;EAC5C,SAAS,eAAe,KAAK,GAAG;EACnC;;;;;;;;;;;;;;;;;;;;;AAsBL,MAAa,gBAAgB,UAAkB,yBAAyB,MAAM,CAAC;;;;;;;;;;;;;;;;;;;;;;AAuB/E,MAAa,mBAAmB,aAAqB;CACjD,MAAM,WAAW,aAAa,SAAS;AACvC,KAAI;AACA,SAAO,IAAI,OAAO,UAAU,IAAI;SAC5B;AACJ,SAAO;;;;;;;;;;;;;;;AAgBf,MAAa,2BAA2B,OAAO,KAAK,eAAe;;;;;;;;;;;;;;;AAgBnE,MAAa,mBAAmB,cAA0C,eAAe;;;;;;;;;;;;;;;;;;;;;ACxpBzF,MAAa,qBAAqB,YAA6B;AAE3D,QAAO,WAAW,KAAK,QAAQ;;;;;;;;;;;;AAanC,MAAa,4BAA4B,YAA8B;CACnE,MAAMC,QAAkB,EAAE;AAG1B,MAAK,MAAM,SAAS,QAAQ,SADJ,iBAC6B,CACjD,OAAM,KAAK,MAAM,GAAG;AAExB,QAAO;;;;;AAMX,MAAa,oBAAoB,YAA4B;AACzD,KAAI;AACA,SAAO,IAAI,OAAO,SAAS,MAAM;UAC5B,OAAO;EACZ,MAAM,UAAU,iBAAiB,QAAQ,MAAM,UAAU,OAAO,MAAM;AACtE,QAAM,IAAI,MAAM,0BAA0B,QAAQ,aAAa,UAAU;;;;;;;;AASjF,MAAa,kBAAkB,SAAiB,OAAgB,kBAA6C;CAGzG,MAAM,EAAE,SAAS,UAAU,iBAAiB,yBAF5B,uBAAuB,QAAQ,EACxB,QAAQ,2BAA2B,QACoC,cAAc;AAC5G,QAAO;EAAE;EAAc,SAAS;EAAU;;AAG9C,MAAa,mCACT,UACA,OACA,kBAC4C;CAC5C,MAAM,YAAY,SAAS,KAAK,MAAM,eAAe,GAAG,OAAO,cAAc,CAAC;CAC9E,MAAM,QAAQ,UAAU,KAAK,MAAM,EAAE,QAAQ,CAAC,KAAK,IAAI;AAOvD,QAAO;EAAE,cANY,UAAU,SAAS,MAAM,EAAE,aAAa;EAMtC,OAAO,OAAO,MAAM,GADpB,gBAAgB,MAAM,cAAc,iBAAiB;EACZ;;AAGpE,MAAa,kCACT,UACA,OACA,kBAC4C;CAC5C,MAAM,YAAY,SAAS,KAAK,MAAM,eAAe,GAAG,OAAO,cAAc,CAAC;CAC9E,MAAM,QAAQ,UAAU,KAAK,MAAM,EAAE,QAAQ,CAAC,KAAK,IAAI;AAEvD,QAAO;EAAE,cADY,UAAU,SAAS,MAAM,EAAE,aAAa;EACtC,OAAO,OAAO,MAAM;EAAI;;AAGnD,MAAa,gCACT,UACA,OACA,kBAC4C;CAC5C,MAAM,YAAY,SAAS,KAAK,MAAM,eAAe,GAAG,OAAO,cAAc,CAAC;CAC9E,MAAM,QAAQ,UAAU,KAAK,MAAM,EAAE,QAAQ,CAAC,KAAK,IAAI;AAEvD,QAAO;EAAE,cADY,UAAU,SAAS,MAAM,EAAE,aAAa;EACtC,OAAO,MAAM,MAAM;EAAK;;AAGnD,MAAa,4BACT,UACA,kBAC4C;CAE5C,MAAM,EAAE,SAAS,iBAAiB,yBADlB,uBAAuB,SAAS,EACoB,QAAW,cAAc;AAC7F,QAAO;EAAE;EAAc,OAAO;EAAS;;AAG3C,MAAa,wBAAwB,aAAqB,kBACtD,kBAAkB,YAAY;;;;;;AAOlC,MAAa,kBAAkB,MAAiB,kBAAsC;CAClF,MAAMC,IAMF,EAAE,GAAG,MAAM;CAEf,MAAM,QAAS,KAA6B,SAAS;CACrD,IAAIC,kBAA4B,EAAE;AAGlC,KAAI,EAAE,iBAAiB,QAAQ;EAC3B,MAAM,EAAE,OAAO,iBAAiB,gCAAgC,EAAE,iBAAiB,OAAO,cAAc;AACxG,oBAAkB;AAClB,SAAO;GACH,cAAc;GACd,OAAO,iBAAiB,MAAM;GAC9B,aAAa;GACb,qBAAqB;GACxB;;AAGL,KAAI,EAAE,gBAAgB,QAAQ;EAC1B,MAAM,EAAE,OAAO,iBAAiB,+BAA+B,EAAE,gBAAgB,OAAO,cAAc;AACtG,IAAE,QAAQ;AACV,oBAAkB;;AAEtB,
KAAI,EAAE,cAAc,QAAQ;EACxB,MAAM,EAAE,OAAO,iBAAiB,6BAA6B,EAAE,cAAc,OAAO,cAAc;AAClG,IAAE,QAAQ;AACV,oBAAkB;;AAEtB,KAAI,EAAE,UAAU;EACZ,MAAM,EAAE,OAAO,iBAAiB,yBAAyB,EAAE,UAAU,cAAc;AACnF,IAAE,QAAQ;AACV,oBAAkB,CAAC,GAAG,iBAAiB,GAAG,aAAa;;AAG3D,KAAI,CAAC,EAAE,MACH,OAAM,IAAI,MACN,gHACH;AAIL,KAAI,gBAAgB,WAAW,EAC3B,mBAAkB,yBAAyB,EAAE,MAAM;CAGvD,MAAM,cAAc,qBAAqB,EAAE,OAAO,gBAAgB;AAClE,QAAO;EACH,cAAc;EACd,OAAO,iBAAiB,EAAE,MAAM;EAChC;EACA,qBAAqB;EACxB;;;;;ACjML,MAAM,wBAAwB;AAE9B,MAAM,yBAAyB,UAA2B;AACtD,KAAI,CAAC,MACD,QAAO;CAGX,MAAM,UAAU,IAAI,IAAI;EAAC;EAAK;EAAK;EAAK;EAAK;EAAK;EAAI,CAAC;CACvD,MAAM,sBAAM,IAAI,KAAa;AAC7B,MAAK,MAAM,MAAM,OAAO;AACpB,MAAI,CAAC,QAAQ,IAAI,GAAG,CAChB,OAAM,IAAI,MAAM,gCAAgC,GAAG,qBAAqB;AAE5E,MAAI,IAAI,GAAG;;AAEf,KAAI,IAAI,IAAI;AACZ,KAAI,IAAI,IAAI;AAIZ,QADc;EAAC;EAAK;EAAK;EAAK;EAAK;EAAK;EAAI,CAC/B,QAAQ,MAAM,IAAI,IAAI,EAAE,CAAC,CAAC,KAAK,GAAG;;AASnD,MAAM,uBAAuB,UAAgD;CACzE,MAAMC,WAAkC,EAAE;AAC1C,MAAK,MAAM,KAAK,OAAO;AACnB,MAAI,EAAE,WAAW,EAAE,QAAQ,WAAW,EAElC;EAEJ,MAAM,QAAQ,sBAAsB,EAAE,MAAM;EAC5C,MAAM,KAAK,IAAI,OAAO,EAAE,OAAO,MAAM;AACrC,WAAS,KAAK;GACV,WAAW,EAAE,UAAU,IAAI,IAAI,EAAE,QAAQ,GAAG;GAC5C;GACA,aAAa,EAAE;GAClB,CAAC;;AAEN,QAAO;;;;;;;;;;;;AAaX,MAAa,qBAAqB,OAAe,UAAkC;AAC/E,KAAI,CAAC,SAAS,MAAM,WAAW,KAAK,MAAM,WAAW,EACjD,QAAO;CAEX,MAAM,WAAW,oBAAoB,MAAM;AAC3C,KAAI,SAAS,WAAW,EACpB,QAAO;AAGX,QAAO,MAAM,KAAK,MAAM;EACpB,IAAI,UAAU,EAAE;AAChB,OAAK,MAAM,QAAQ,UAAU;AACzB,OAAI,KAAK,aAAa,CAAC,KAAK,UAAU,IAAI,EAAE,GAAG,CAC3C;AAEJ,aAAU,QAAQ,QAAQ,KAAK,IAAI,KAAK,YAAY;;AAExD,MAAI,YAAY,EAAE,QACd,QAAO;AAEX,SAAO;GAAE,GAAG;GAAG;GAAS;GAC1B;;;;;;;;;;;;;;;;;AC5EN,MAAM,yBAAyB,SAA0B,QAAQ,QAAU,QAAQ;AAInF,MAAM,YAAY,OAAuB;AACrC,SAAQ,IAAR;EACI,KAAK;EACL,KAAK;EACL,KAAK,IACD,QAAO;EACX,KAAK,IACD,QAAO;EACX,KAAK,IACD,QAAO;EACX,QACI,QAAO;;;;;;;;;;;AAYnB,MAAa,6BAA6B,SAAiB,QAAgB,YAAmC;CAC1G,IAAI,IAAI;AAER,QAAO,IAAI,QAAQ,UAAU,sBAAsB,QAAQ,WAAW,EAAE,CAAC,CACrE;AAGJ,MAAK,IAAI,IAAI,GAAG,IAAI,QAAQ,QAAQ,KAAK;EACrC,MAAM,QAAQ,QAAQ;AAKtB,SAAO,IAAI,QAAQ,UAAU,sBAAsB,QAAQ,WAAW,EAAE,CAAC,CACrE;AAGJ,MAAI,KAAK,QAAQ,OACb,QAAO;EAGX,MAAM,MAAM,QAAQ;AACpB,MAAI,SAAS,IAAI,KAAK,SAAS,MAAM,CACjC,QAAO;AAEX;;AAIJ,QAAO,IAAI,QAAQ,UAAU,sBAAsB,QAAQ,WAAW,EAAE,CAAC,CACrE;AAEJ,QAAO;;AAGX,MAAM,iBAAiB,MAAuB;AAI1C,QAAO,CAAC,oBAAoB,KAAK,EAAE;;AAOvC,MAAa,6BAA6B,YAAuD;AAC7F,KAAI,CAAC,QACD,QAAO;AAEX,KAAI,CAAC,cAAc,QAAQ,CACvB,QAAO;CAEX,MAAM,eAAe,QAChB,MAAM,IAAI,CACV,KAAK,MAAM,EAAE,MAAM,CAAC,CACpB,OAAO,QAAQ;AACpB,KAAI,CAAC,aAAa,OACd,QAAO;AAEX,QAAO,EAAE,cAAc;;;;;;AAY3B,MAAa,6BAA6B,kBAAqD;CAC3F,MAAM,IAAI,cAAc,MAAM,kBAAkB;AAChD,KAAI,CAAC,EACD,QAAO;CAEX,MAAM,QAAQ,EAAE;CAChB,MAAM,eAAe,gBAAgB,MAAM;AAC3C,KAAI,CAAC,aACD,QAAO;CAEX,MAAM,WAAW,0BAA0B,aAAa;AACxD,KAAI,CAAC,SACD,QAAO;AAEX,QAAO;EAAE,cAAc,SAAS;EAAc;EAAO;;;;;;AAOzD,MAAa,yBAAyB,SAAiB,QAAgB,aAAgD;AACnH,MAAK,MAAM,OAAO,SAAS,cAAc;EACrC,MAAM,MAAM,0BAA0B,SAAS,QAAQ,IAAI;AAC3D,MAAI,QAAQ,KACR,QAAO;;AAGf,QAAO;;;;;AC5HX,MAAa,6BAA6B,UAAyC;CAC/E,MAAMC,kBAAwE,EAAE;CAChF,MAAMC,kBAA+B,EAAE;CACvC,MAAMC,iBAAkC,EAAE;AAG1C,OAAM,SAAS,MAAM,UAAU;AAE3B,MAAK,KAA6B,SAAS,oBAAoB,MAAM;GACjE,MAAM,WACF,KAAK,eAAe,WAAW,IAAI,0BAA0B,KAAK,eAAe,GAAG,GAAG;AAC3F,OAAI,UAAU;AACV,mBAAe,KAAK;KAAE;KAAU,MAAM;KAAc;KAAM,WAAW;KAAO,CAAC;AAC7E;;;AAKR,MAAK,KAA6B,SAAS,qBAAqB,MAAM;GAClE,MAAM,WACF,KAAK,gBAAgB,WAAW,IAAI,0BAA0B,KAAK,gBAAgB,GAAG,GAAG;AAC7F,OAAI,UAAU;AACV,mBAAe,KAAK;KAAE;KAAU,MAAM;KAAe;KAAM,WAAW;KAAO,CAAC;AAC9E;;;EAIR,IAAI,eAAe;AAGnB,MAAI,WAAW,QAAQ,KAAK,OAAO;GAC/B,MAAM,mBAAmB,yBAAyB,KAAK,MAAM,CAAC,SAAS;GACvE,MAAM,oBAAoB,UAAU,KAAK,KAAK,MAAM;GACpD,MAAM,uBAAuB,kBAAkB,KAAK,MAAM;AAC1D,OAAI,oBAAoB,qBAAqB,qBACzC,gBAAe;;AAIvB,MAAI,aACA,iBAAgB,KAAK;GAAE;GAAO,QAAQ,IAAI,MA
AM;GAAI;GAAM,CAAC;MAE3D,iBAAgB,KAAK,KAAK;GAEhC;AAEF,QAAO;EAAE;EAAiB;EAAgB;EAAiB;;AAK/D,MAAa,+BAA+B,cAAsB,YAA4C;CAC1G,MAAM,2CAA2B,IAAI,KAAqB;AAC1D,MAAK,IAAI,IAAI,GAAG,IAAI,QAAQ,WAAW,QAAQ,IAC3C,0BAAyB,IAAI,QAAQ,WAAW,GAAG,OAAO,EAAE;CAGhE,MAAM,wCAAwB,IAAI,KAA4B;CAC9D,MAAM,yBAAyB,MAAiB,cAAqC;AACjF,MAAI,sBAAsB,IAAI,UAAU,CACpC,QAAO,sBAAsB,IAAI,UAAU,IAAI;EAEnD,MAAM,UAAW,KAAqC;AACtD,MAAI,CAAC,SAAS;AACV,yBAAsB,IAAI,WAAW,KAAK;AAC1C,UAAO;;EAEX,MAAM,WAAW,eAAe,SAAS,MAAM,CAAC;EAChD,MAAM,KAAK,IAAI,OAAO,MAAM,SAAS,KAAK,IAAI;AAC9C,wBAAsB,IAAI,WAAW,GAAG;AACxC,SAAO;;CAGX,MAAM,4BAA4B,kBAAkC;AAChE,MAAI,iBAAiB,EACjB,QAAO;EAEX,MAAM,eAAe,QAAQ,WAAW,gBAAgB;AAExD,OAAK,IAAI,IAAI,aAAa,MAAM,GAAG,KAAK,aAAa,OAAO,KAAK;GAC7D,MAAM,KAAK,aAAa;AACxB,OAAI,CAAC,GACD;AAEJ,OAAI,MAAM,KAAK,GAAG,CACd;AAEJ,UAAO;;AAEX,SAAO;;AAGX,SAAQ,MAAiB,WAAmB,eAAgC;EACxE,MAAM,gBAAgB,yBAAyB,IAAI,WAAW;AAC9D,MAAI,kBAAkB,UAAa,kBAAkB,EACjD,QAAO;EAEX,MAAM,UAAU,sBAAsB,MAAM,UAAU;AACtD,MAAI,CAAC,QACD,QAAO;EAEX,MAAM,WAAW,yBAAyB,cAAc;AACxD,MAAI,CAAC,SACD,QAAO;AAEX,SAAO,QAAQ,KAAK,SAAS;;;AAIrC,MAAa,+BACT,cACA,SACA,gBACA,yBAC4B;CAC5B,MAAM,oCAAoB,IAAI,KAA2B;AACzD,KAAI,eAAe,WAAW,KAAK,QAAQ,WAAW,WAAW,EAC7D,QAAO;CAIX,IAAI,cAAc;CAClB,IAAI,kBAAkB,QAAQ,WAAW;CACzC,MAAM,qBAAqB,WAAmB;AAC1C,SAAO,mBAAmB,SAAS,gBAAgB,OAAO,cAAc,QAAQ,WAAW,SAAS,GAAG;AACnG;AACA,qBAAkB,QAAQ,WAAW;;;CAI7C,MAAM,oBAAoB,WAAmB,OAAmB;EAC5D,MAAM,MAAM,kBAAkB,IAAI,UAAU;AAC5C,MAAI,CAAC,KAAK;AACN,qBAAkB,IAAI,WAAW,CAAC,GAAG,CAAC;AACtC;;AAEJ,MAAI,KAAK,GAAG;;CAGhB,MAAM,eAAe,WAA4B,WAAW,iBAAiB;AAG7E,MAAK,IAAI,YAAY,GAAG,aAAa,aAAa,SAAU;AACxD,oBAAkB,UAAU;EAC5B,MAAM,SAAS,iBAAiB,MAAM;AAEtC,MAAI,aAAa,aAAa,OAC1B;AAGJ,OAAK,MAAM,EAAE,UAAU,MAAM,MAAM,eAAe,gBAAgB;AAK9D,OAAI,GAHC,KAAK,QAAQ,UAAa,UAAU,KAAK,SACzC,KAAK,QAAQ,UAAa,UAAU,KAAK,QAC1C,CAAC,eAAe,QAAQ,KAAK,QAAQ,EAErC;AAGJ,OAAI,YAAY,UAAU,IAAI,CAAC,qBAAqB,MAAM,WAAW,UAAU,CAC3E;GAGJ,MAAM,MAAM,sBAAsB,cAAc,WAAW,SAAS;AACpE,OAAI,QAAQ,KACR;GAGJ,MAAM,cAAc,KAAK,SAAS,UAAU,OAAO,YAAY;AAC/D,OAAI,SAAS,aACT,kBAAiB,WAAW;IAAE,OAAO;IAAY,MAAM,KAAK;IAAM,CAAC;QAChE;IACH,MAAM,eAAe,MAAM;AAC3B,qBAAiB,WAAW;KACxB,qBAAqB,KAAK,SAAS,UAAU,OAAO,eAAe;KACnE,OAAO;KACP,MAAM,KAAK;KACd,CAAC;;;EAIV,MAAM,SAAS,aAAa,QAAQ,MAAM,UAAU;AACpD,MAAI,WAAW,GACX;AAEJ,cAAY,SAAS;;AAGzB,QAAO;;;;;;;;;;;;;;ACrMX,MAAa,wBAAwB,YAAoB;AACrD,QAAO,QAAQ,SAAS,KAAK,GAAG,QAAQ,QAAQ,UAAU,KAAK,GAAG;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;ACyCtE,MAAM,gBAAgB,UAAoF;CACtG,MAAMC,aAA6B,EAAE;CACrC,MAAMC,aAAuB,EAAE;CAC/B,IAAI,SAAS;CACb,MAAMC,QAAkB,EAAE;AAE1B,MAAK,IAAI,IAAI,GAAG,IAAI,MAAM,QAAQ,KAAK;EACnC,MAAM,aAAa,qBAAqB,MAAM,GAAG,QAAQ;AACzD,aAAW,KAAK;GAAE,KAAK,SAAS,WAAW;GAAQ,IAAI,MAAM,GAAG;GAAI,OAAO;GAAQ,CAAC;AACpF,QAAM,KAAK,WAAW;AACtB,MAAI,IAAI,MAAM,SAAS,GAAG;AACtB,cAAW,KAAK,SAAS,WAAW,OAAO;AAC3C,aAAU,WAAW,SAAS;QAE9B,WAAU,WAAW;;;;;;;;;CAW7B,MAAM,gBAAgB,QAA0C;EAC5D,IAAI,KAAK;EACT,IAAI,KAAK,WAAW,SAAS;AAE7B,SAAO,MAAM,IAAI;GACb,MAAM,MAAO,KAAK,OAAQ;GAC1B,MAAM,IAAI,WAAW;AACrB,OAAI,MAAM,EAAE,MACR,MAAK,MAAM;YACJ,MAAM,EAAE,IACf,MAAK,MAAM;OAEX,QAAO;;AAIf,SAAO,WAAW,WAAW,SAAS;;AAG1C,QAAO;EACH,SAAS,MAAM,KAAK,KAAK;EACzB,iBAAiB;EACjB,SAAS;GACL;GACA,QAAQ,QAAgB,aAAa,IAAI,EAAE,MAAM;GACjD;GACA,SAAS,WAAW,KAAK,MAAM,EAAE,GAAG;GACvC;EACJ;;;;;;;;;AAUL,MAAa,qBAAqB,gBAA4C;CAC1E,MAAM,0BAAU,IAAI,KAAyB;AAC7C,MAAK,MAAM,KAAK,aAAa;EACzB,MAAM,WAAW,QAAQ,IAAI,EAAE,MAAM;AACrC,MAAI,CAAC,UAAU;AACX,WAAQ,IAAI,EAAE,OAAO,EAAE;AACvB;;AAKJ,MAFK,EAAE,uBAAuB,UAAa,SAAS,uBAAuB,UACtE,EAAE,SAAS,UAAa,SAAS,SAAS,OAE3C,SAAQ,IAAI,EAAE,OAAO,EAAE;;CAG/B,MAAM,SAAS,CAAC,GAAG,QAAQ,QAAQ,CAAC;AACpC,QAAO,MAAM,GAAG,MAAM,EAAE,QAAQ,EAAE,MAAM;AACxC,QAAO;;;;;;AAOX,MAAa,yBACT,UACA,OACA,mBACA,eACY;AACZ,KAAI,SAAS,SAAS,KAAK,MAAM,WAAW,
EACxC,QAAO;CAEX,MAAM,YAAY,MAAM;CACxB,MAAM,WAAW,MAAM,MAAM,SAAS;CACtC,MAAM,WAAW,eAAe,YAAY,OAAO;CACnD,MAAM,aAAa,kBAAkB,KAAK,SAAS,CAAC,MAAM;AAC1D,KAAI,CAAC,WACD,QAAO;CAEX,MAAMC,aAAsB;EAAE,SAAS;EAAY,MAAM,UAAU;EAAI;AACvE,KAAI,SAAS,OAAO,UAAU,GAC1B,YAAW,KAAK,SAAS;AAE7B,QAAO,CAAC,WAAW;;AAGvB,MAAM,+BAA+B,OAAoB,cAAsB,YAAmC;CAC9G,MAAM,uBAAuB,4BAA4B,cAAc,QAAQ;CAC/E,MAAM,EAAE,iBAAiB,gBAAgB,oBAAoB,0BAA0B,MAAM;CAI7F,MAAM,oBAAoB,4BAA4B,cAAc,SAAS,gBAAgB,qBAAqB;AAGlH,KAAI,gBAAgB,SAAS,GAAG;EAC5B,MAAM,cAAc,gBAAgB,KAAK,EAAE,MAAM,aAAa;GAC1D,MAAM,QAAQ,eAAe,MAAM,OAAO;AAC1C,UAAO;IACH;IACA,QAAQ,MAAM,OAAO,GAAG,MAAM,MAAM,OAAO;IAC3C,GAAG;IACN;IACH;EAEF,MAAM,iBAAiB,YAAY,KAAK,MAAM,EAAE,OAAO,CAAC,KAAK,IAAI;EACjE,MAAM,gBAAgB,IAAI,OAAO,gBAAgB,KAAK;AAEtD,gBAAc,YAAY;EAC1B,IAAI,IAAI,cAAc,KAAK,aAAa;AAExC,SAAO,MAAM,MAAM;GAEf,MAAM,mBAAmB,gBAAgB,WAAW,EAAE,aAAa,GAAG,SAAS,YAAY,OAAU;AAErG,OAAI,qBAAqB,IAAI;IACzB,MAAM,EAAE,MAAM,QAAQ,OAAO,kBAAkB,gBAAgB;IAC/D,MAAM,WAAW,YAAY;IAG7B,MAAMC,gBAAwC,EAAE;AAChD,QAAI,EAAE,QACF;UAAK,MAAM,gBAAgB,SAAS,aAChC,KAAI,EAAE,OAAO,kBAAkB,QAAW;MACtC,MAAM,YAAY,aAAa,MAAM,OAAO,OAAO;AACnD,oBAAc,aAAa,EAAE,OAAO;;;IAMhD,IAAIC;IACJ,IAAIC;AAEJ,QAAI,SAAS,qBAAqB;AAE9B,uBAAkB,EAAE,SAAS,GAAG,OAAO;AACvC,SAAI,oBAAoB,OAKpB,uBAFkB,EAAE,SAAS,WAAW,EAAE,IACX,SAAS,gBAAgB;;IAMhE,MAAM,QAAQ,EAAE;IAChB,MAAM,MAAM,EAAE,QAAQ,EAAE,GAAG;IAC3B,MAAM,SAAS,QAAQ,MAAM,MAAM;AAQnC,SAJK,KAAK,QAAQ,UAAa,UAAU,KAAK,SACzC,KAAK,QAAQ,UAAa,UAAU,KAAK,QAC1C,CAAC,eAAe,QAAQ,KAAK,QAAQ,EAElB;AACnB,SAAI,CAAC,qBAAqB,MAAM,eAAe,MAAM,CAGjD;KAEJ,MAAMC,KAAiB;MACnB,iBAAiB;MACjB;MACA,QAAQ,KAAK,SAAS,UAAU,OAAO,QAAQ;MAC/C,MAAM,KAAK;MACX,eAAe,OAAO,KAAK,cAAc,CAAC,SAAS,IAAI,gBAAgB;MAC1E;AAED,SAAI,CAAC,kBAAkB,IAAI,cAAc,CACrC,mBAAkB,IAAI,eAAe,EAAE,CAAC;AAE5C,uBAAkB,IAAI,cAAc,CAAE,KAAK,GAAG;;;AAItD,OAAI,EAAE,GAAG,WAAW,EAChB,eAAc;AAElB,OAAI,cAAc,KAAK,aAAa;;;CAK5C,MAAM,8BAA8B,MAAiB,cAA4B;EAC7E,MAAM,EAAE,OAAO,aAAa,cAAc,wBAAwB,eAAe,KAAK;EAOtF,MAAM,SALqB,oBADR,YAAY,cAAc,OAAO,aAAa,aAAa,EACnB,MAAM,QAAQ,MAAM,CAC5C,QAAQ,MAAM,qBAAqB,MAAM,WAAW,EAAE,MAAM,CAAC,CAIzE,KAAK,MAAM;GAC9B,MAAM,oBAAoB,uBAAuB,EAAE,aAAa;GAChE,MAAM,eAAe,oBAAoB,EAAE,MAAM,EAAE,SAAU,SAAS,EAAE,QAAQ;AAChF,UAAO;IACH,iBAAiB,oBAAoB,SAAY,EAAE;IACnD,oBAAoB,oBAAoB,eAAe;IACvD,QAAQ,KAAK,SAAS,UAAU,OAAO,EAAE,QAAQ,EAAE;IACnD,MAAM,KAAK;IACX,eAAe,EAAE;IACpB;IACH;AAEF,MAAI,CAAC,kBAAkB,IAAI,UAAU,CACjC,mBAAkB,IAAI,WAAW,EAAE,CAAC;AAExC,oBAAkB,IAAI,UAAU,CAAE,KAAK,GAAG,OAAO;;AAGrD,iBAAgB,SAAS,SAAS;AAG9B,6BAA2B,MADL,MAAM,QAAQ,KAAK,CACM;GACjD;CAGF,MAAMC,mBAAiC,EAAE;AACzC,OAAM,SAAS,MAAM,UAAU;EAC3B,MAAM,SAAS,kBAAkB,IAAI,MAAM;AAC3C,MAAI,CAAC,UAAU,OAAO,WAAW,EAC7B;EAGJ,IAAI,WAAW;AACf,MAAI,KAAK,eAAe,QACpB,YAAW,CAAC,OAAO,GAAG;WACf,KAAK,eAAe,OAC3B,YAAW,CAAC,OAAO,OAAO,SAAS,GAAG;AAG1C,mBAAiB,KAAK,GAAG,SAAS;GACpC;AAEF,QAAO;;;;;;;;;;;AAYX,MAAM,eAAe,SAAiB,OAAe,aAAsB,iBAA2B;CAClG,MAAMC,UAAyB,EAAE;AACjC,OAAM,YAAY;CAClB,IAAI,IAAI,MAAM,KAAK,QAAQ;AAE3B,QAAO,MAAM,MAAM;EACf,MAAMC,SAAsB;GAAE,KAAK,EAAE,QAAQ,EAAE,GAAG;GAAQ,OAAO,EAAE;GAAO;AAG1E,SAAO,gBAAgB,qBAAqB,EAAE,QAAQ,aAAa;AAGnE,MAAI,YACA,QAAO,WAAW,yBAAyB,EAAE;AAGjD,UAAQ,KAAK,OAAO;AAEpB,MAAI,EAAE,GAAG,WAAW,EAChB,OAAM;AAEV,MAAI,MAAM,KAAK,QAAQ;;AAG3B,QAAO;;;;;;;;;;;AAYX,MAAM,qBAAqB,aAAqB,WAAmB,iBAA2B;AAC1F,KAAI,aAAa,WAAW,EACxB,QAAO,EAAE;CAIb,IAAI,KAAK;CACT,IAAI,KAAK,aAAa;AACtB,QAAO,KAAK,IAAI;EACZ,MAAM,MAAO,KAAK,OAAQ;AAC1B,MAAI,aAAa,OAAO,YACpB,MAAK,MAAM;MAEX,MAAK;;CAKb,MAAMC,SAAmB,EAAE;AAC3B,MAAK,IAAI,IAAI,IAAI,IAAI,aAAa,UAAU,aAAa,KAAK,WAAW,IACrE,QAAO,KAAK,aAAa,KAAK,YAAY;AAE9C,QAAO;;;;;;;;;;;;;;;;AAiBX,MAAM,qBAAqB,SAAiB,aAAqB,eAAiC;AAE9F,KAAI,CAAC,WAAW,CAAC,QAAQ,SAAS,KAAK,CACnC,QAAO;CAIX,MAAM,gBAAgB,kBAAkB,aADtB,cAAc
,QAAQ,QACwB,WAAW;AAG3E,KAAI,cAAc,WAAW,EACzB,QAAO;CAOX,MAAM,WAAW,IAAI,IAAI,cAAc;AACvC,QAAO,QAAQ,QAAQ,QAAQ,OAAO,WAAoB,SAAS,IAAI,OAAO,GAAG,MAAM,MAAO;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AA6DlG,MAAa,gBAAgB,OAAe,YAA4C;CACpF,MAAM,EAAE,QAAQ,EAAE,EAAE,WAAW,GAAG,cAAc,EAAE,EAAE,SAAS,UAAU,aAAa,SAAS,WAAW;CAExG,MAAM,iBAAiB,QAAQ,UAAU,kBAAkB,OAAO,QAAQ,QAAQ,GAAG;CACrF,MAAM,EAAE,SAAS,cAAc,iBAAiB,mBAAmB,YAAY,aAAa,eAAe;CAK3G,IAAI,WAAW,cAHA,kBADK,4BAA4B,OAAO,cAAc,QAAQ,CAChC,EAGR,cAAc,SAAS,MAAM;AAElE,YAAW,sBAAsB,UAAU,gBAAgB,mBAAmB,WAAW;AAGzF,KAAI,YAAY,KAAK,YAAY,QAAQ;EACrC,MAAM,oBAAoB,MAAc,eAAe,GAAG,MAAM,CAAC;AACjE,SAAO,iBACH,UACA,gBACA,mBACA,UACA,aACA,QACA,kBACA,QACA,WACH;;AAGL,QAAO;;;;;;;;;;;;;;;;;AAkBX,MAAM,iBAAiB,aAA2B,SAAiB,SAAkB,UAAkC;;;;CAInH,MAAMC,mBACF,OACA,KACA,MACA,iBACA,eACA,uBACiB;EAEjB,MAAM,cAAc,SAAS,sBAAsB;EAGnD,MAAM,SAAS,QAAQ,MAAM,aAAa,IAAI;EAC9C,IAAI,OAAO,iBAAiB,MAAM,KAAK,qBAAqB,OAAO,MAAM,GAAG,OAAO,QAAQ,YAAY,GAAG;AAC1G,MAAI,CAAC,KACD,QAAO;AAEX,MAAI,CAAC,gBACD,QAAO,kBAAkB,MAAM,aAAa,QAAQ,WAAW;EAEnE,MAAM,OAAO,QAAQ,MAAM,YAAY;EACvC,MAAM,KAAK,kBAAkB,QAAQ,MAAM,MAAM,EAAE,GAAG,QAAQ,MAAM,cAAc,KAAK,SAAS,EAAE;EAClG,MAAMC,MAAe;GAAE,SAAS;GAAM;GAAM;AAC5C,MAAI,OAAO,KACP,KAAI,KAAK;AAEb,MAAI,QAAQ,cACR,KAAI,OAAO;GAAE,GAAG;GAAM,GAAG;GAAe;AAE5C,SAAO;;;;;CAMX,MAAM,sCAAiD;EACnD,MAAMC,SAAoB,EAAE;AAC5B,OAAK,IAAI,IAAI,GAAG,IAAI,YAAY,QAAQ,KAAK;GACzC,MAAM,KAAK,YAAY;GACvB,MAAM,MAAM,IAAI,YAAY,SAAS,IAAI,YAAY,IAAI,GAAG,QAAQ,QAAQ;GAC5E,MAAM,IAAIF,gBACN,GAAG,OACH,KACA,GAAG,MACH,GAAG,iBACH,GAAG,eACH,GAAG,mBACN;AACD,OAAI,EACA,QAAO,KAAK,EAAE;;AAGtB,SAAO;;CAGX,MAAMG,WAAsB,EAAE;AAG9B,KAAI,CAAC,YAAY,QAAQ;AAErB,MAAI,gBAAgB,OADJ,QAAQ,MAAM,EAAE,CACG,EAAE;GACjC,MAAM,IAAIH,gBAAc,GAAG,QAAQ,OAAO;AAC1C,OAAI,EACA,UAAS,KAAK,EAAE;;AAGxB,SAAO;;AAIX,KAAI,YAAY,GAAG,QAAQ,GAEvB;MAAI,gBAAgB,OADJ,QAAQ,MAAM,EAAE,CACG,EAAE;GACjC,MAAM,IAAIA,gBAAc,GAAG,YAAY,GAAG,MAAM;AAChD,OAAI,EACA,UAAS,KAAK,EAAE;;;AAM5B,QAAO,CAAC,GAAG,UAAU,GAAG,+BAA+B,CAAC;;;;;ACrhB5D,MAAM,qBAAqB,aAA6B,QAAQ,MAAM,QAAQ,IAAI,EAAE,EAAE;AAEtF,MAAM,+BAA+B,YAEjC,QAAQ,QAAQ,UAAU,GAAG,CAAC,QAAQ,WAAW,GAAG;AAKxD,MAAM,sBAAsB,YAAgE;CACxF,MAAM,aAAa,kBAAkB,QAAQ;AAE7C,QAAO;EAAE,YADU,4BAA4B,QAAQ,CAAC;EACnC;EAAY;;AAQrC,MAAMI,kBAAoD;CACtD,0BAA0B;CAC1B,YAAY;CACZ,aAAa;CACb,UAAU;CACV,eAAe;CACf,2BAA2B;CAC3B,aAAa;CACb,gBAAgB,CAAC,OAAO;CACxB,QAAQ;CACR,MAAM;CACN,YAAY;CACf;AAMD,MAAM,0BAA0B,MAAsB,EAAE,QAAQ,oBAAoB,OAAO;AAG3F,MAAMC,yBAAiC;CACnC;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACH;AAED,MAAM,2BAAqC;CACvC,MAAM,YAAY,IAAI,IAAI,oBAAoB,CAAC;AAG/C,QAAOC,uBAAqB,QAAQ,MAAM,UAAU,IAAI,EAAE,CAAC;;AAG/D,MAAM,sBAAsB,MAAsB,EAAE,QAAQ,QAAQ,IAAI,CAAC,MAAM;AAI/E,MAAM,yBAAyB,MAE3B,EAAE,QAAQ,8CAA8C,GAAG;AAI/D,MAAM,uBAAuB,eAA+C;CACxE,MAAMC,WAAiC,EAAE;AACzC,MAAK,MAAM,SAAS,YAAY;EAC5B,MAAM,MAAM,eAAe;AAC3B,MAAI,CAAC,IACD;AAEJ,MAAI;AACA,YAAS,KAAK;IAAE,IAAI,IAAI,OAAO,KAAK,KAAK;IAAE;IAAO,CAAC;UAC/C;;AAIZ,QAAO;;AAGX,MAAM,YAAY,KAAa,SAAoC;AAC/D,KAAI,CAAC,IACD,QAAO;AAEX,KAAI,SAAS,QACT,QAAO,IAAI,SAAS,IAAI,GAAG,MAAM,GAAG,IAAI;AAE5C,QAAO,IAAI,SAAS,OAAO,GAAG,MAAM,GAAG,IAAI;;AAG/C,MAAM,0BACF,GACA,KACA,KACA,gBACA,eACoD;CACpD,IAAI,aAAa;CACjB,IAAI,aAAa;CACjB,IAAI,aAAa;AAEjB,MAAK,MAAM,MAAM,gBAAgB;AAC7B,MAAI,cAAc,EAAE,OAChB;EAEJ,MAAM,IAAI,GAAG,KAAK,EAAE,MAAM,WAAW,CAAC;AACtC,MAAI,CAAC,KAAK,EAAE,UAAU,KAAK,CAAC,EAAE,GAC1B;AAGJ,gBAAc,uBAAuB,EAAE,GAAG;AAC1C,gBAAc,EAAE,GAAG;AACnB,eAAa;EAEb,MAAM,UAAU,WAAW,KAAK,EAAE,MAAM,WAAW,CAAC;AACpD,MAAI,SAAS;AACT,iBAAc,QAAQ,GAAG;AACzB,gBAAa,SAAS,YAAY,WAAW;;;AAIrD,QAAO;EAAE;EAAY,KAAK;EAAY,KAAK;EAAY;;AAG3D,MAAM,wBACF,GACA,KACA,UACA,mBACyC;CACzC,IAAIC,OAA+C;AACnD,MAAK,MAAM,EAAE,OAAO,QAAQ,UAA
U;AAClC,KAAG,YAAY;EACf,MAAM,IAAI,GAAG,KAAK,EAAE;AACpB,MAAI,CAAC,KAAK,EAAE,UAAU,IAClB;AAEJ,MAAI,CAAC,QAAQ,EAAE,GAAG,SAAS,KAAK,KAAK,OACjC,QAAO;GAAE,MAAM,EAAE;GAAI;GAAO;;AAIpC,KAAI,MAAM,UAAU,SAAS;EACzB,MAAM,MAAM,MAAM,KAAK,KAAK;EAC5B,MAAM,OAAO,MAAM,EAAE,SAAS,EAAE,OAAO;AACvC,MAAI,QAAQ,eAAe,KAAK,IAAI,CAAC,MAAM,KAAK,KAAK,CACjD,QAAO;;AAIf,QAAO;;AAGX,MAAM,qBACF,MACA,YACA,aACA,0BACA,2BACA,gBACA,eACgB;CAChB,MAAM,UAAU,mBAAmB,KAAK;AACxC,KAAI,CAAC,QACD,QAAO;CAGX,MAAM,KAAK,4BAA4B,sBAAsB,QAAQ,GAAG,SAAS,MAAM,GAAG,YAAY;CACtG,IAAI,MAAM;CACV,IAAI,MAAM;CACV,IAAI,aAAa;CACjB,IAAI,eAAe;CAGnB,MAAM,WAAW,oBAAoB,WAAW;CAIhD,MAAM,kBAAkB,OAAwB,qBAAqB,KAAK,GAAG,IAAI,SAAS,KAAK,GAAG;CAClG,MAAM,qBAAqB,OAAwB,0BAA0B,KAAK,GAAG;CAErF;EACI,MAAM,WAAW,uBAAuB,GAAG,KAAK,KAAK,gBAAgB,WAAW;AAChF,QAAM,SAAS;AACf,QAAM,SAAS;AACf,eAAa,SAAS;;CAK1B,IAAI,aAAa;AACjB,QAAO,aAAa,KAAK,MAAM,EAAE,QAAQ;EAErC,MAAM,UAAU,WAAW,KAAK,EAAE,MAAM,IAAI,CAAC;AAC7C,MAAI,SAAS;AACT,UAAO,QAAQ,GAAG;AAClB,SAAM,SAAS,KAAK,WAAW;AAC/B;;EAGJ,MAAM,OAAO,qBAAqB,GAAG,KAAK,UAAU,eAAe;AAEnE,MAAI,MAAM;AACN,OAAI,OAAO,CAAC,IAAI,SAAS,OAAO,EAAE;AAGlC,UAAO,KAAK,KAAK,MAAM;AACvB,gBAAa;AACb,kBAAe;AACf,UAAO,KAAK,KAAK;AACjB;AACA;;AAIJ,MAAI,YAAY;GACZ,MAAM,KAAK,EAAE;AACb,OAAI,MAAM,kBAAkB,GAAG,EAAE;AAC7B,WAAO,uBAAuB,GAAG;AACjC,WAAO;AACP;;;AAKR,MAAI,YAAY;AAGZ,OAAI,4BAA4B,CAAC,cAAc;IAC3C,MAAMC,eAAa,EAAE,MAAM,IAAI,CAAC,MAAM,kBAAkB,IAAI,EAAE,EAAE;AAChE,QAAI,CAACA,YACD;AAEJ,WAAO,uBAAuBA,YAAU;AACxC;;AAEJ;;AAGJ,MAAI,CAAC,yBACD,QAAO;EAIX,MAAM,aAAa,EAAE,MAAM,IAAI,CAAC,MAAM,kBAAkB,IAAI,EAAE,EAAE;AAChE,MAAI,CAAC,UACD,QAAO;AAEX,SAAO,uBAAuB,UAAU;AACxC;AACA,SAAO;;AAGX,KAAI,CAAC,WACD,QAAO;AAGX,KAAI,eAAe,QACf,QAAO,IAAI,SAAS,OAAO,CACvB,OAAM,IAAI,MAAM,GAAG,GAAG;KAG1B,QAAO,IAAI,SAAS,IAAI,CACpB,OAAM,IAAI,MAAM,GAAG,GAAG;AAG9B,QAAO;;;;;;;;AASX,MAAa,2BACT,OACA,UAAoC,EAAE,KACX;CAC3B,MAAMC,IAAsC;EACxC,GAAG;EACH,GAAG;EAEH,YAAY,QAAQ,cAAc,gBAAgB;EAClD,gBAAgB,QAAQ,kBAAkB,gBAAgB;EAC1D,YAAY,QAAQ,cAAc,gBAAgB;EACrD;CACD,MAAM,gBAAgB,oBAAoB;CAE1C,MAAM,yBAAS,IAAI,KAAqE;AAExF,MAAK,MAAM,QAAQ,OAAO;EAEtB,MAAM,QADa,qBAAqB,KAAK,WAAW,GAAG,CAClC,MAAM,KAAK;AACpC,OAAK,MAAM,QAAQ,OAAO;GACtB,MAAM,UAAU,mBAAmB,KAAK;AACxC,OAAI,QAAQ,SAAS,EAAE,cACnB;AAEJ,OAAI,EAAE,cAAc,CAAC,EAAE,WAAW,SAAS,KAAK,GAAG,CAC/C;GAGJ,MAAM,MAAM,kBACR,SACA,eACA,EAAE,aACF,EAAE,0BACF,EAAE,2BACF,EAAE,gBACF,EAAE,WACL;AACD,OAAI,CAAC,IACD;GAGJ,MAAM,WAAW,OAAO,IAAI,IAAI;AAChC,OAAI,CAAC,SACD,QAAO,IAAI,KAAK;IAAE,OAAO;IAAG,UAAU,CAAC;KAAE,MAAM;KAAS,QAAQ,KAAK;KAAI,CAAC;IAAE,CAAC;QAC1E;AACH,aAAS;AACT,QAAI,SAAS,SAAS,SAAS,EAAE,YAC7B,UAAS,SAAS,KAAK;KAAE,MAAM;KAAS,QAAQ,KAAK;KAAI,CAAC;;;;CAM1E,MAAM,+BAA+B,GAA2B,MAAsC;EAClG,MAAM,KAAK,mBAAmB,EAAE,QAAQ;EACxC,MAAM,KAAK,mBAAmB,EAAE,QAAQ;AAExC,MAAI,GAAG,eAAe,GAAG,WACrB,QAAO,GAAG,aAAa,GAAG;AAE9B,MAAI,GAAG,eAAe,GAAG,WACrB,QAAO,GAAG,aAAa,GAAG;AAG9B,MAAI,EAAE,UAAU,EAAE,MACd,QAAO,EAAE,QAAQ,EAAE;AAEvB,SAAO,EAAE,QAAQ,cAAc,EAAE,QAAQ;;CAG7C,MAAM,+BAA+B,GAA2B,MAAsC;AAClG,MAAI,EAAE,UAAU,EAAE,MACd,QAAO,EAAE,QAAQ,EAAE;AAEvB,SAAO,4BAA4B,GAAG,EAAE;;AAQ5C,QALyC,CAAC,GAAG,OAAO,SAAS,CAAC,CACzD,KAAK,CAAC,SAAS,QAAQ;EAAE,OAAO,EAAE;EAAO,UAAU,EAAE;EAAU;EAAS,EAAE,CAC1E,QAAQ,MAAM,EAAE,SAAS,EAAE,SAAS,CACpC,KAAK,EAAE,WAAW,UAAU,8BAA8B,4BAA4B,CAE7E,MAAM,GAAG,EAAE,KAAK;;;;;;;;;;;;;;;;;AC/ZlC,MAAM,uBAAuB;CACzB;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACA;CACH;;;;;AAMD,MAAM,yBAAyB;CAC3B,MAAM,YAAY,oBAAoB;CACtC,MAAM,cAAc,qBAAqB,QAAQ,MAAM,UAAU,SAAS,EAAE,CAAC;CAC7E,MAAM,YAAY,UAAU,QAAQ,MAAM,CAAC,qBAAqB,SAAS,EAAE,CAAC,CAAC,MAAM;AACnF,QAAO,CAAC,GAAG,aAAa,GAAG,UAAU;;AAGzC,MAAM,qBAAqB,MAAc,YAAoB,aAA8B;CAGvF,MAAM,SAAS,aAAa,IAAI,KAAK,aAAa,KAAK;CACvD,MAAM,QAAQ,WAAW,KAAK,SAAS,K
AAK,YAAY;CAExD,MAAM,gBAAgB,OAAwB,CAAC,CAAC,MAAM,MAAM,KAAK,GAAG;CACpE,MAAM,iBAAiB,OAAwB,CAAC,CAAC,MAAM,SAAS,KAAK,GAAG;CACxE,MAAM,oBAAoB,OAAwB,CAAC,CAAC,MAAM,uBAAuB,KAAK,GAAG;CAIzF,MAAM,iBAAiB,OAAwB,CAAC,CAAC,MAAM,mBAAmB,KAAK,GAAG;CAElF,MAAM,SAAS,CAAC,UAAU,aAAa,OAAO,IAAI,cAAc,OAAO,IAAI,CAAC,cAAc,OAAO;CACjG,MAAM,UAAU,CAAC,SAAS,aAAa,MAAM,IAAI,iBAAiB,MAAM,IAAI,CAAC,cAAc,MAAM;AAEjG,QAAO,UAAU;;;;;;;;;;;;;;;;;AAkBrB,MAAa,uBAAuB,SAAiB;AACjD,KAAI,CAAC,KACD,QAAO,EAAE;CAGb,MAAMC,UAA6B,EAAE;CACrC,MAAMC,gBAAyC,EAAE;CAGjD,MAAM,qBAAqB,OAAe,QAAyB;AAC/D,SAAO,cAAc,MAChB,CAAC,GAAG,OAAQ,SAAS,KAAK,QAAQ,KAAO,MAAM,KAAK,OAAO,KAAO,SAAS,KAAK,OAAO,EAC3F;;AAIL,MAAK,MAAM,aAAa,kBAAkB,EAAE;EACxC,MAAM,UAAU,eAAe;AAC/B,MAAI,CAAC,QACD;AAGJ,MAAI;GAEA,MAAM,QAAQ,IAAI,OAAO,IAAI,QAAQ,IAAI,KAAK;GAC9C,IAAIC;AAGJ,WAAQ,QAAQ,MAAM,KAAK,KAAK,MAAM,MAAM;IACxC,MAAM,aAAa,MAAM;IACzB,MAAM,WAAW,aAAa,MAAM,GAAG;AAEvC,QAAI,cAAc,WAAW,CAAC,kBAAkB,MAAM,YAAY,SAAS,CACvE;AAIJ,QAAI,kBAAkB,YAAY,SAAS,CACvC;AAGJ,YAAQ,KAAK;KAAE;KAAU,OAAO;KAAY,OAAO,MAAM;KAAI,OAAO;KAAW,CAAC;AAEhF,kBAAc,KAAK,CAAC,YAAY,SAAS,CAAC;;UAE1C;;AAGZ,QAAO,QAAQ,MAAM,GAAG,MAAM,EAAE,QAAQ,EAAE,MAAM;;;;;;;;;;;;;;;AAgBpD,MAAa,4BAA4B,MAAc,aAAgC;AACnF,KAAI,CAAC,QAAQ,SAAS,WAAW,EAC7B,QAAO;CAKX,IAAI,WAAW;CACf,MAAM,oBAAoB,CAAC,GAAG,SAAS,CAAC,MAAM,GAAG,MAAM,EAAE,QAAQ,EAAE,MAAM;AAEzE,MAAK,MAAM,KAAK,kBACZ,YAAW,GAAG,SAAS,MAAM,GAAG,EAAE,MAAM,CAAC,IAAI,EAAE,MAAM,IAAI,SAAS,MAAM,EAAE,SAAS;AAGvF,QAAO;;;;;;;;AASX,MAAa,wBACT,aAC2F;CAE3F,MAAM,qBAAqB,SAAS,MAAM,MAAM;EAAC;EAAY;EAAS;EAAO;EAAO,CAAC,SAAS,EAAE,MAAM,CAAC;CAGvG,MAAM,qBAAqB,SAAS,MAAM,MAAM;EAAC;EAAS;EAAQ;EAAW,CAAC,SAAS,EAAE,MAAM,CAAC;AAGhG,KAAI,mBACA,QAAO;EACH,OAAO;EACP,UAAU,SAAS,MAAM,MAAM;GAAC;GAAS;GAAO;GAAO,CAAC,SAAS,EAAE,MAAM,CAAC,EAAE,SAAS;EACrF,aAAa;EAChB;AAIL,KAAI,mBACA,QAAO;EAAE,OAAO;EAAO,UAAU;EAAU,aAAa;EAAmB;AAI/E,QAAO;EAAE,OAAO;EAAO,aAAa;EAAmB;;;;;;;;AAS3D,MAAa,sBACT,SAOQ;CACR,MAAM,WAAW,oBAAoB,KAAK;AAE1C,KAAI,SAAS,WAAW,EACpB,QAAO;AAMX,QAAO;EAAE;EAAU,UAHF,yBAAyB,MAAM,SAAS;EAG5B,GAFd,qBAAqB,SAAS;EAEL"}
package/package.json CHANGED
@@ -7,8 +7,8 @@
7
7
  "devDependencies": {
8
8
  "@biomejs/biome": "2.3.10",
9
9
  "@types/bun": "^1.3.5",
10
- "shamela": "^1.4.1",
11
- "tsdown": "^0.18.2",
10
+ "shamela": "^1.4.2",
11
+ "tsdown": "^0.18.3",
12
12
  "typescript": "^5.9.3"
13
13
  },
14
14
  "engines": {
@@ -50,5 +50,5 @@
50
50
  },
51
51
  "type": "module",
52
52
  "types": "./dist/index.d.mts",
53
- "version": "2.6.1"
53
+ "version": "2.6.2"
54
54
  }