mask-privacy 2.0.0 → 3.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -87,6 +87,12 @@ Mask prevents the misidentification of real data as tokens by using universally
87
87
 
88
88
  This prefix-based approach ensures that the SDK does not inadvertently process valid PII as an existing token.
89
89
 
90
+ Additional collision-proof prefixes for international identifiers:
91
+ * Turkish TCID tokens use the `990000` prefix (no valid Kimlik number starts with `99`).
92
+ * Saudi NID tokens use the `100000` prefix (length-constrained to avoid overlap with real IDs).
93
+ * UAE Emirates ID tokens use the `784-0000-` prefix (zeroed sub-fields are structurally invalid).
94
+ * IBAN tokens zero the check digits (`XX00...`), which always fails ISO 7064 Mod-97 verification.
95
+
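The IBAN claim above can be verified with a few lines of standalone code. The helper below is written for illustration and is not part of the mask-privacy API; it implements the standard ISO 7064 Mod-97 check, under which zeroed check digits can never produce the required remainder of 1:

```typescript
// Illustrative ISO 7064 Mod-97 check, not part of the mask-privacy API.
function isValidIban(iban: string): boolean {
  // Move the country code and check digits to the end.
  const rearranged = iban.slice(4) + iban.slice(0, 4);
  // Expand letters: A=10, B=11, ..., Z=35.
  const expanded = rearranged.replace(/[A-Z]/g, (c) =>
    (c.charCodeAt(0) - 55).toString()
  );
  // Streaming mod-97 keeps the intermediate value small.
  let remainder = 0;
  for (const digit of expanded) {
    remainder = (remainder * 10 + Number(digit)) % 97;
  }
  return remainder === 1;
}

console.log(isValidIban("GB82WEST12345698765432")); // true  (well-formed IBAN)
console.log(isValidIban("GB00WEST12345698765432")); // false (zeroed check digits)
```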
90
96
  ### 4. Enterprise Async Support
91
97
  Mask is built from the ground up for high-concurrency Node.js environments. All core operations are asynchronous and promise-based. Calling `encode()`, `decode()`, or `scanAndTokenize()` keeps your event loop unblocked while PII tokenization runs.
92
98
 
@@ -127,19 +133,97 @@ Performance-sensitive deployments utilize the built-in `LocalTransformersScanner
127
133
  ### 7. Sub-string Detokenization
128
134
  Mask includes the ability to detokenize PII embedded within larger text blocks (like email bodies or chat messages). `detokenizeText()` uses high-performance regex to find and restore all tokens within a paragraph before they hit your tools.
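As a rough illustration of that pass (the token syntax and in-memory vault below are invented for the sketch; the real `detokenizeText()` resolves tokens against your configured vault backend):

```typescript
// Invented token format and vault, for illustration only.
const vault = new Map<string, string>([["<msk:a1b2>", "jane@acme.com"]]);

function detokenizeTextSketch(text: string): string {
  // A single regex pass restores every known token embedded in the text.
  return text.replace(/<msk:[0-9a-f]+>/g, (tok) => vault.get(tok) ?? tok);
}

console.log(detokenizeTextSketch("Please reply to <msk:a1b2> today."));
// → "Please reply to jane@acme.com today."
```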
129
135
 
130
- ## Installation and Setup
136
+ ## Multilingual PII Detection (Waterfall Pipeline)
137
+
138
+ Mask is built for the global enterprise. While many privacy tools are English-centric, the TypeScript SDK implements a **3-Tier Waterfall Detection** strategy that delivers high-performance PII detection across 8 major languages using local ONNX models.
139
+
140
+ ### Supported Language Matrix
141
+
142
+ Mask provides first-class support for the following languages:
143
+
144
+ | Language | Code | Tier 0 (DLP) | Tier 2 (NLP Engine) |
145
+ | :--- | :--- | :--- | :--- |
146
+ | **English** | `en` | ✅ Full | DistilBERT (Simple) |
147
+ | **Spanish** | `es` | ✅ Full | BERT Multilingual |
148
+ | **French** | `fr` | ✅ Full | BERT Multilingual |
149
+ | **German** | `de` | ✅ Full | BERT Multilingual |
150
+ | **Turkish** | `tr` | ✅ Full | BERT Multilingual |
151
+ | **Arabic** | `ar` | ✅ Full | BERT Multilingual |
152
+ | **Japanese** | `ja` | ✅ Full | BERT Multilingual |
153
+ | **Chinese** | `zh` | ✅ Full | BERT Multilingual |
154
+
155
+ ### How the Waterfall Works: The Excising Mechanism
156
+
157
+ To maintain high performance, the TypeScript SDK does not simply run three separate scans. It uses a **Sequential Mutation** strategy:
158
+
159
+ 1. **Tier 0 & 1 (The Scouts):** The SDK first runs the high-speed DLP and Regex engines synchronously in the main thread.
160
+ 2. **Immediate Tokenization:** Any PII found by these tiers is **immediately replaced** by a token in the string buffer.
161
+ 3. **Tier 2 (The Heavy Infantry):** The expensive NLP engine (Transformers.js) only scans the *remaining* text. Because the PII has already been "excised" (cut out and replaced with tokens), the NLP engine doesn't waste compute on data already identified.
162
+ 4. **Bypass Logic:** All tiers are "token-aware." If a scan encounters a string that is already a Mask token, it skips it entirely, preventing redundant processing or "double-tokenization."
163
+
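The four steps above can be sketched as a pipeline of buffer-mutating passes. The tiers below are toy stand-ins (the real tiers call `encode()` and run NER in worker threads, and the regexes and token strings here are invented), but the reduction over a shared buffer is the core idea:

```typescript
// Toy sketch of the sequential-mutation waterfall; patterns and token
// strings are invented for illustration, not the SDK's real ones.
type Tier = (text: string) => string;

const tier0Dlp: Tier = (t) => t.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[TOK_SSN]");
const tier1Regex: Tier = (t) => t.replace(/[\w.]+@[\w.]+\.\w+/g, "[TOK_EMAIL]");
const tier2Nlp: Tier = (t) => t; // expensive NER would run here, on the residue only

function waterfall(text: string): string {
  // Each tier sees only what earlier tiers left behind ("excising").
  return [tier0Dlp, tier1Regex, tier2Nlp].reduce((buf, tier) => tier(buf), text);
}

console.log(waterfall("Reach me at jane@acme.com, SSN 123-45-6789"));
// → "Reach me at [TOK_EMAIL], SSN [TOK_SSN]"
```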
164
+ ---
165
+
166
+ ### Configuration & Environment Variables
167
+
168
+ Configure your multilingual environment using standard environment variables. These are parsed at runtime when the `LocalTransformersScanner` is initialized.
169
+
170
+ | Variable | Default | Description |
171
+ | :--- | :--- | :--- |
172
+ | `MASK_LANGUAGES` | `en` | Comma-separated list of languages (e.g., `en,es,fr,ar`). |
173
+ | `MASK_NLP_MODEL` | *(varies)* | Override the default model (e.g., `Xenova/bert-base-multilingual-cased-ner-hrl`). |
174
+ | `MASK_MODEL_CACHE_DIR` | `~/.cache` | Local directory for storing serialized ONNX models. |
175
+ | `MASK_NLP_MAX_WORKERS` | `4` | Number of worker processes/threads for NLP analysis. |
176
+ | `MASK_NLP_TIMEOUT_SECONDS` | `60` | Max seconds for a scan before timing out. |
177
+
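A rough sketch of how such variables might be consumed at initialization, mirroring the documented defaults (the exact parsing inside `LocalTransformersScanner` may differ):

```typescript
// Hypothetical config reader matching the table's defaults; illustrative only.
function readMaskConfig(env: Record<string, string | undefined>) {
  return {
    // Comma-separated language list, defaulting to English.
    languages: (env.MASK_LANGUAGES ?? "en")
      .split(",")
      .map((l) => l.trim().toLowerCase())
      .filter(Boolean),
    cacheDir: env.MASK_MODEL_CACHE_DIR ?? "~/.cache",
    maxWorkers: Number(env.MASK_NLP_MAX_WORKERS ?? 4),
    timeoutSeconds: Number(env.MASK_NLP_TIMEOUT_SECONDS ?? 60),
  };
}

console.log(readMaskConfig({ MASK_LANGUAGES: "en, es ,fr" }).languages);
// → ["en", "es", "fr"]
```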
178
+ ---
179
+
180
+ ### Automatic Model Management
181
+
182
+ The TypeScript SDK manages AI models automatically via the **Transformers.js** runtime.
183
+
184
+ #### 1. Automatic Downloads
185
+ When you set `MASK_LANGUAGES` to include non-English languages, the scanner will automatically download the multilingual BERT model from Hugging Face on the first execution and cache it locally.
186
+
187
+ #### 2. Pre-Warming (Performance)
188
+ Upon initialization, the `LocalTransformersScanner` starts a worker pool and "pre-warms" the models. This ensures that the first real request doesn't suffer from high "cold-start" latency.
131
189
 
132
- Install the core SDK via npm:
190
+ #### 3. Air-Gapped / Offline Environments
191
+ For high-security environments, you can pre-cache models. Run this script in your build pipeline:
133
192
  ```bash
134
- npm install mask-privacy
193
+ # Set a custom cache directory
194
+ export MASK_MODEL_CACHE_DIR="./models"
195
+
196
+ # Run a dummy scan to trigger the download
197
+ node -e "require('mask-privacy').getScanner().scanAndTokenize('John Doe')"
198
+
199
+ # Bundle the './models' folder with your container
135
200
  ```
136
201
 
137
- Add peer dependencies depending on your infrastructure:
202
+ ---
203
+
204
+ ### Performance & Latency Benchmarks
205
+
206
+ *Measured on a 4-vCPU, 8 GB RAM instance (Node.js 20+)*
207
+
208
+ | Tier | Engine | Avg. Latency | Rationale |
209
+ | :--- | :--- | :--- | :--- |
210
+ | Tier 0 | DLP (Heuristic) | ~2ms | Main-thread synchronous regex |
211
+ | Tier 1 | Regex (Deterministic) | ~1ms | Main-thread synchronous regex |
212
+ | Tier 2 | Transformers (Local) | 300ms - 800ms | Offloaded to Worker Threads (Piscina) |
213
+
214
+ **Total Overhead:** Usually **< 400ms** for typical chat-message lengths. Mask's **Excising Mechanism** removes text already identified in Tiers 0/1 from the NLP buffer, significantly accelerating the heavy Transformer inference.
215
+
216
+ ---
217
+
218
+ ### Installing AI Models (Production Ready)
219
+ The TypeScript SDK manages AI models automatically via **Transformers.js**. For production air-gapped environments or to avoid "cold-start" latency, we recommend using the pre-caching CLI:
220
+
138
221
  ```bash
139
- npm install ioredis # For Redis vaults
140
- npm install @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb # For DynamoDB vaults
141
- npm install memjs # For Memcached vaults
142
- npm install @huggingface/transformers # Required for local NLP scanning
222
+ npm install @huggingface/transformers # Required extra
223
+
224
+ # Pre-cache models for your required languages
225
+ export MASK_LANGUAGES="en,es,fr"
226
+ npx mask-privacy cache-models
143
227
  ```
144
228
 
145
229
  ### Framework Support
@@ -152,6 +236,24 @@ Mask supports major AI frameworks via built-in hooks:
152
236
 
153
237
  Before running your agents, Mask requires an encryption key and a vault backend selection.
154
238
 
239
+ #### Where to set these?
240
+ Select the method that best fits your deployment:
241
+
242
+ 1. **In a `.env` file (Recommended)**: Create a file in your project root.
243
+ ```env
244
+ MASK_LANGUAGES="es,en"
245
+ MASK_ENCRYPTION_KEY="your-key"
246
+ ```
247
+ Then load it using a library like `dotenv`.
248
+ 2. **In your Terminal**:
249
+ * **Bash**: `export MASK_LANGUAGES="es,en"`
250
+ * **PowerShell**: `$env:MASK_LANGUAGES="es,en"`
251
+ 3. **Directly in TypeScript/Node.js**:
252
+ ```typescript
253
+ process.env.MASK_LANGUAGES = "es,en";
254
+ // Ensure this happens BEFORE initializing the MaskClient
255
+ ```
256
+
155
257
  #### 1. Configure Keys
156
258
  By default, Mask reads from environment variables.
157
259
  ```bash
@@ -181,15 +283,24 @@ export MASK_DYNAMODB_REGION=us-east-1
181
283
  export MASK_MEMCACHED_HOST=localhost
182
284
  export MASK_MEMCACHED_PORT=11211
183
285
 
184
- #### 4. Security Enforcement
185
- # Enable strict mode to refuse startup without MASK_ENCRYPTION_KEY
186
- export MASK_STRICT_PROD=true
286
+ #### 4. Security Guardrails: Fail-Shut by Default
187
287
 
188
- # Configure Blind Index Salt (Optional)
189
- export MASK_BLIND_INDEX_SALT="custom-salt-here"
288
+ To prevent accidental data leakage, Mask defaults to a **Fail-Shut** strategy. If the Vault or Key Provider is unreachable, the SDK will throw a `MaskVaultConnectionError`.
190
289
 
191
290
  > [!IMPORTANT]
192
- > **Security Warning:** In production, you **must** change the default `MASK_BLIND_INDEX_SALT`. Using the default salt makes your blind indices vulnerable to pre-computed hash (rainbow table) attacks across different SDK installations.
291
+ > **Environment Modes:**
292
+ > - **Production (Default):** Fail-Shut enabled. Strictly protects PII.
293
+ > - **Development:** Set `MASK_ENV=dev` to allow "Fail-Open" behavior (PII is returned as-is if the vault fails).
294
+
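The fail-shut versus fail-open split can be pictured with a small stub. Everything below is an illustrative stand-in for the SDK's internals; only the `MaskVaultConnectionError` name mirrors a real export:

```typescript
// Illustrative stand-ins showing Fail-Shut (prod) vs Fail-Open (dev).
class MaskVaultConnectionError extends Error {}

interface Vault {
  store(value: string): Promise<string>;
}

async function encodeWithGuardrails(
  vault: Vault | null,
  pii: string,
  env: "prod" | "dev" = "prod"
): Promise<string> {
  if (vault === null) {
    // Vault unreachable.
    if (env === "dev") return pii; // Fail-Open: raw value passes through (dev only)
    // Fail-Shut: refuse to continue rather than leak raw PII downstream.
    throw new MaskVaultConnectionError("vault unreachable; refusing to emit PII");
  }
  return vault.store(pii);
}
```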
295
+ #### 5. Model Pre-caching CLI
296
+
297
+ For production air-gapped environments or to avoid "cold-start" latency, use the model pre-caching tool:
298
+
299
+ ```bash
300
+ # Cache English and Spanish models
301
+ export MASK_MODEL_CACHE_DIR="./models"
302
+ npx ts-node src/cli.ts cache-models --languages en,es
303
+ ```
193
304
 
194
305
  # Configure MemoryVault cleanup aggressiveness (default: 0.01)
195
306
  export MASK_VAULT_CLEANUP_FREQUENCY=0.05
package/dist/index.d.mts CHANGED
@@ -61,6 +61,7 @@ declare class BaseScanner {
61
61
  protected static _luhnChecksum(ccNumber: string): boolean;
62
62
  /** Validate a US ABA routing number using the checksum algorithm. */
63
63
  protected static _abaChecksum(routingNumber: string): boolean;
64
+ protected _tier0Dlp(text: string, encodeFn: (val: string) => Promise<string>, confidenceThreshold: number): Promise<[string, any[]]>;
64
65
  protected _tier1Regex(text: string, encodeFn: (val: string) => Promise<string>, boostEntities: Set<string>, aggressive: boolean, confidenceThreshold: number): Promise<[string, any[]]>;
65
66
  protected _tier2Nlp(text: string, encodeFn: (val: string) => Promise<string>, boostEntities: Set<string>, aggressive: boolean, confidenceThreshold: number): Promise<[string, any[]]>;
66
67
  protected _resolveBoost(context?: string | null): Set<string>;
@@ -265,6 +266,182 @@ declare class MaskClient {
265
266
  close(): Promise<void>;
266
267
  }
267
268
 
269
+ /**
270
+ * Language Context Resolver — Unicode-block heuristic for multilingual DLP.
271
+ *
272
+ * Examines the character distribution of an input buffer to infer the
273
+ * dominant script / language. The resolved language tag is consumed by
274
+ * the DLPPatternRegistry to prioritise locale-specific regex groups.
275
+ *
276
+ * Supported language tags:
277
+ * en — English (default / Latin-only fallback)
278
+ * es — Spanish
279
+ * fr — French
280
+ * de — German
281
+ * tr — Turkish
282
+ * ar — Arabic
283
+ * zh — Chinese
284
+ * ja — Japanese
285
+ */
286
+ type LanguageTag = "en" | "es" | "fr" | "de" | "tr" | "ar" | "zh" | "ja";
287
+ interface LanguageBreakdown {
288
+ language: LanguageTag;
289
+ breakdown: Record<string, number>;
290
+ }
291
+ /**
292
+ * Determine the dominant language of a text buffer.
293
+ *
294
+ * The resolver is stateless and safe for concurrent use.
295
+ *
296
+ * @example
297
+ * ```ts
298
+ * const resolver = new LanguageContextResolver();
299
+ * const tag = resolver.resolve("Merhaba, TC Kimlik Numaram 12345678901");
300
+ * // tag === "tr"
301
+ * ```
302
+ */
303
+ declare class LanguageContextResolver {
304
+ /** Minimum number of script-specific characters required. */
305
+ private readonly charThreshold;
306
+ constructor(charThreshold?: number);
307
+ /** Return an ISO-639-1 language tag for the given text. Falls back to "en". */
308
+ resolve(text: string): LanguageTag;
309
+ /** Return the language tag together with per-script hit counts. */
310
+ resolveWithDetail(text: string): LanguageBreakdown;
311
+ }
312
+
313
+ /**
314
+ * DLP Pattern Registry — Centralised catalogue of 50+ sensitive-data signatures.
315
+ *
316
+ * Each entry bundles a compiled regex, a list of proximity keywords (used by
317
+ * the scorer for context boosting), a base leakage-risk probability, and an
318
+ * optional hard-validator tag that tells the DLPValidationEngine which
319
+ * checksum to run after the initial pattern match.
320
+ *
321
+ * Patterns are organised into SensitiveCategory groups so that callers can
322
+ * selectively load only the groups relevant to their compliance scope.
323
+ */
324
+
325
+ declare enum SensitiveCategory {
326
+ FINANCIAL = "FINANCIAL",
327
+ CONTACT = "CONTACT",
328
+ PERSONAL = "PERSONAL",
329
+ HEALTHCARE = "HEALTHCARE",
330
+ IDENTITY_US = "IDENTITY_US",
331
+ IDENTITY_INTL = "IDENTITY_INTL",
332
+ VEHICLE = "VEHICLE",
333
+ CORPORATE = "CORPORATE"
334
+ }
335
+ interface PatternDescriptor {
336
+ compiledRe: RegExp;
337
+ proximityTerms: ReadonlySet<string>;
338
+ baseRisk: number;
339
+ category: SensitiveCategory;
340
+ validatorTag: string | null;
341
+ }
342
+ /**
343
+ * Immutable catalogue of sensitive-data regex signatures.
344
+ *
345
+ * @example
346
+ * ```ts
347
+ * const reg = new DLPPatternRegistry(); // load everything
348
+ * const reg = new DLPPatternRegistry(new Set([SensitiveCategory.FINANCIAL]));
349
+ * ```
350
+ */
351
+ declare class DLPPatternRegistry {
352
+ private readonly catalogue;
353
+ constructor(loadGroups?: ReadonlySet<SensitiveCategory>);
354
+ get typeNames(): string[];
355
+ /** Yield [typeName, descriptor] pairs. */
356
+ iterDescriptors(): IterableIterator<[string, PatternDescriptor]>;
357
+ descriptorFor(typeName: string): PatternDescriptor | undefined;
358
+ /** Return locale-tuned name regexes, falling back to English. */
359
+ namePatternsFor(lang: LanguageTag | string): RegExp[];
360
+ /** Return locale-tuned address regexes, falling back to English. */
361
+ addressPatternsFor(lang: LanguageTag | string): RegExp[];
362
+ private buildCatalogue;
363
+ }
364
+
365
+ /**
366
+ * Run the appropriate hard-validator for a given validator tag.
367
+ *
368
+ * @example
369
+ * ```ts
370
+ * const engine = new DLPValidationEngine();
371
+ * const passed = engine.run("luhn", "4111111111111111");
372
+ * ```
373
+ */
374
+ declare class DLPValidationEngine {
375
+ /**
376
+ * Execute the validator identified by tag.
377
+ *
378
+ * @returns `true` — value passed checksum → confidence override.
379
+ * @returns `false` — value failed → confidence penalty.
380
+ * @returns `null` — no validator registered for tag.
381
+ */
382
+ run(tag: string | null | undefined, rawValue: string): boolean | null;
383
+ /** Return all registered validator tag names. */
384
+ static availableTags(): string[];
385
+ }
386
+
387
+ /**
388
+ * DLP Confidence Scorer — Proximity-weighted scoring for sensitive data matches.
389
+ *
390
+ * Combines three independent signals into a single confidence value:
391
+ * 1. Base risk — intrinsic leakage probability of the data type.
392
+ * 2. Proximity boost — logarithmic-decay bonus for each context keyword
393
+ * found near the match within a configurable window.
394
+ * 3. Validator override — hard-validator pass forces confidence to 0.99.
395
+ *
396
+ * The scorer is stateless and safe for concurrent use.
397
+ */
398
+ interface ScorerConfig {
399
+ contextWindow?: number;
400
+ keywordBoost?: number;
401
+ validatorOverride?: number;
402
+ maxConfidence?: number;
403
+ penaltyFactor?: number;
404
+ }
405
+ interface ScoreInput {
406
+ baseRisk: number;
407
+ matchStart: number;
408
+ matchEnd: number;
409
+ fullText: string;
410
+ proximityTerms: ReadonlySet<string>;
411
+ validatorPassed: boolean | null;
412
+ }
413
+ /**
414
+ * Calculate a weighted confidence score for a single regex hit.
415
+ *
416
+ * @example
417
+ * ```ts
418
+ * const scorer = new DLPConfidenceScorer();
419
+ * const score = scorer.score({
420
+ * baseRisk: 0.92,
421
+ * matchStart: 10,
422
+ * matchEnd: 21,
423
+ * fullText: "TC Kimlik No: 10000000146",
424
+ * proximityTerms: new Set(["kimlik", "tc"]),
425
+ * validatorPassed: true,
426
+ * });
427
+ * // score === 0.99 (validator override)
428
+ * ```
429
+ */
430
+ declare class DLPConfidenceScorer {
431
+ private readonly window;
432
+ private readonly kwBoost;
433
+ private readonly valOverride;
434
+ private readonly ceil;
435
+ private readonly penalty;
436
+ constructor(overrides?: ScorerConfig);
437
+ /**
438
+ * Compute the final confidence for one candidate match.
439
+ *
440
+ * @returns Confidence in [0.0, maxConfidence].
441
+ */
442
+ score(input: ScoreInput): number;
443
+ }
444
+
268
445
  /**
269
446
  * Mask Privacy SDK
270
447
  * Just-In-Time Privacy Middleware for AI Agents.
@@ -298,4 +475,4 @@ declare function ascanAndTokenize(text: string, options?: {
298
475
  */
299
476
  declare function secureTool(...args: any[]): any;
300
477
 
301
- export { BaseScanner, LocalTransformersScanner, MaskClient, MaskDecryptionError, MaskError, MaskNLPTimeout, MaskSecurityError, MaskVaultConnectionError, PresidioScanner, VERSION, adecode, adetokenizeText, aencode, ascanAndTokenize, decode, detectEntitiesWithConfidence, detokenizeText, encode, generateFPEToken, getScanner, getVault, looksLikeToken, resetMasterKey, secureTool };
478
+ export { BaseScanner, DLPConfidenceScorer, DLPPatternRegistry, DLPValidationEngine, LanguageContextResolver, LocalTransformersScanner, MaskClient, MaskDecryptionError, MaskError, MaskNLPTimeout, MaskSecurityError, MaskVaultConnectionError, PresidioScanner, SensitiveCategory, VERSION, adecode, adetokenizeText, aencode, ascanAndTokenize, decode, detectEntitiesWithConfidence, detokenizeText, encode, generateFPEToken, getScanner, getVault, looksLikeToken, resetMasterKey, secureTool };
package/dist/index.d.ts CHANGED
@@ -61,6 +61,7 @@ declare class BaseScanner {
61
61
  protected static _luhnChecksum(ccNumber: string): boolean;
62
62
  /** Validate a US ABA routing number using the checksum algorithm. */
63
63
  protected static _abaChecksum(routingNumber: string): boolean;
64
+ protected _tier0Dlp(text: string, encodeFn: (val: string) => Promise<string>, confidenceThreshold: number): Promise<[string, any[]]>;
64
65
  protected _tier1Regex(text: string, encodeFn: (val: string) => Promise<string>, boostEntities: Set<string>, aggressive: boolean, confidenceThreshold: number): Promise<[string, any[]]>;
65
66
  protected _tier2Nlp(text: string, encodeFn: (val: string) => Promise<string>, boostEntities: Set<string>, aggressive: boolean, confidenceThreshold: number): Promise<[string, any[]]>;
66
67
  protected _resolveBoost(context?: string | null): Set<string>;
@@ -265,6 +266,182 @@ declare class MaskClient {
265
266
  close(): Promise<void>;
266
267
  }
267
268
 
269
+ /**
270
+ * Language Context Resolver — Unicode-block heuristic for multilingual DLP.
271
+ *
272
+ * Examines the character distribution of an input buffer to infer the
273
+ * dominant script / language. The resolved language tag is consumed by
274
+ * the DLPPatternRegistry to prioritise locale-specific regex groups.
275
+ *
276
+ * Supported language tags:
277
+ * en — English (default / Latin-only fallback)
278
+ * es — Spanish
279
+ * fr — French
280
+ * de — German
281
+ * tr — Turkish
282
+ * ar — Arabic
283
+ * zh — Chinese
284
+ * ja — Japanese
285
+ */
286
+ type LanguageTag = "en" | "es" | "fr" | "de" | "tr" | "ar" | "zh" | "ja";
287
+ interface LanguageBreakdown {
288
+ language: LanguageTag;
289
+ breakdown: Record<string, number>;
290
+ }
291
+ /**
292
+ * Determine the dominant language of a text buffer.
293
+ *
294
+ * The resolver is stateless and safe for concurrent use.
295
+ *
296
+ * @example
297
+ * ```ts
298
+ * const resolver = new LanguageContextResolver();
299
+ * const tag = resolver.resolve("Merhaba, TC Kimlik Numaram 12345678901");
300
+ * // tag === "tr"
301
+ * ```
302
+ */
303
+ declare class LanguageContextResolver {
304
+ /** Minimum number of script-specific characters required. */
305
+ private readonly charThreshold;
306
+ constructor(charThreshold?: number);
307
+ /** Return an ISO-639-1 language tag for the given text. Falls back to "en". */
308
+ resolve(text: string): LanguageTag;
309
+ /** Return the language tag together with per-script hit counts. */
310
+ resolveWithDetail(text: string): LanguageBreakdown;
311
+ }
312
+
313
+ /**
314
+ * DLP Pattern Registry — Centralised catalogue of 50+ sensitive-data signatures.
315
+ *
316
+ * Each entry bundles a compiled regex, a list of proximity keywords (used by
317
+ * the scorer for context boosting), a base leakage-risk probability, and an
318
+ * optional hard-validator tag that tells the DLPValidationEngine which
319
+ * checksum to run after the initial pattern match.
320
+ *
321
+ * Patterns are organised into SensitiveCategory groups so that callers can
322
+ * selectively load only the groups relevant to their compliance scope.
323
+ */
324
+
325
+ declare enum SensitiveCategory {
326
+ FINANCIAL = "FINANCIAL",
327
+ CONTACT = "CONTACT",
328
+ PERSONAL = "PERSONAL",
329
+ HEALTHCARE = "HEALTHCARE",
330
+ IDENTITY_US = "IDENTITY_US",
331
+ IDENTITY_INTL = "IDENTITY_INTL",
332
+ VEHICLE = "VEHICLE",
333
+ CORPORATE = "CORPORATE"
334
+ }
335
+ interface PatternDescriptor {
336
+ compiledRe: RegExp;
337
+ proximityTerms: ReadonlySet<string>;
338
+ baseRisk: number;
339
+ category: SensitiveCategory;
340
+ validatorTag: string | null;
341
+ }
342
+ /**
343
+ * Immutable catalogue of sensitive-data regex signatures.
344
+ *
345
+ * @example
346
+ * ```ts
347
+ * const reg = new DLPPatternRegistry(); // load everything
348
+ * const reg = new DLPPatternRegistry(new Set([SensitiveCategory.FINANCIAL]));
349
+ * ```
350
+ */
351
+ declare class DLPPatternRegistry {
352
+ private readonly catalogue;
353
+ constructor(loadGroups?: ReadonlySet<SensitiveCategory>);
354
+ get typeNames(): string[];
355
+ /** Yield [typeName, descriptor] pairs. */
356
+ iterDescriptors(): IterableIterator<[string, PatternDescriptor]>;
357
+ descriptorFor(typeName: string): PatternDescriptor | undefined;
358
+ /** Return locale-tuned name regexes, falling back to English. */
359
+ namePatternsFor(lang: LanguageTag | string): RegExp[];
360
+ /** Return locale-tuned address regexes, falling back to English. */
361
+ addressPatternsFor(lang: LanguageTag | string): RegExp[];
362
+ private buildCatalogue;
363
+ }
364
+
365
+ /**
366
+ * Run the appropriate hard-validator for a given validator tag.
367
+ *
368
+ * @example
369
+ * ```ts
370
+ * const engine = new DLPValidationEngine();
371
+ * const passed = engine.run("luhn", "4111111111111111");
372
+ * ```
373
+ */
374
+ declare class DLPValidationEngine {
375
+ /**
376
+ * Execute the validator identified by tag.
377
+ *
378
+ * @returns `true` — value passed checksum → confidence override.
379
+ * @returns `false` — value failed → confidence penalty.
380
+ * @returns `null` — no validator registered for tag.
381
+ */
382
+ run(tag: string | null | undefined, rawValue: string): boolean | null;
383
+ /** Return all registered validator tag names. */
384
+ static availableTags(): string[];
385
+ }
386
+
387
+ /**
388
+ * DLP Confidence Scorer — Proximity-weighted scoring for sensitive data matches.
389
+ *
390
+ * Combines three independent signals into a single confidence value:
391
+ * 1. Base risk — intrinsic leakage probability of the data type.
392
+ * 2. Proximity boost — logarithmic-decay bonus for each context keyword
393
+ * found near the match within a configurable window.
394
+ * 3. Validator override — hard-validator pass forces confidence to 0.99.
395
+ *
396
+ * The scorer is stateless and safe for concurrent use.
397
+ */
398
+ interface ScorerConfig {
399
+ contextWindow?: number;
400
+ keywordBoost?: number;
401
+ validatorOverride?: number;
402
+ maxConfidence?: number;
403
+ penaltyFactor?: number;
404
+ }
405
+ interface ScoreInput {
406
+ baseRisk: number;
407
+ matchStart: number;
408
+ matchEnd: number;
409
+ fullText: string;
410
+ proximityTerms: ReadonlySet<string>;
411
+ validatorPassed: boolean | null;
412
+ }
413
+ /**
414
+ * Calculate a weighted confidence score for a single regex hit.
415
+ *
416
+ * @example
417
+ * ```ts
418
+ * const scorer = new DLPConfidenceScorer();
419
+ * const score = scorer.score({
420
+ * baseRisk: 0.92,
421
+ * matchStart: 10,
422
+ * matchEnd: 21,
423
+ * fullText: "TC Kimlik No: 10000000146",
424
+ * proximityTerms: new Set(["kimlik", "tc"]),
425
+ * validatorPassed: true,
426
+ * });
427
+ * // score === 0.99 (validator override)
428
+ * ```
429
+ */
430
+ declare class DLPConfidenceScorer {
431
+ private readonly window;
432
+ private readonly kwBoost;
433
+ private readonly valOverride;
434
+ private readonly ceil;
435
+ private readonly penalty;
436
+ constructor(overrides?: ScorerConfig);
437
+ /**
438
+ * Compute the final confidence for one candidate match.
439
+ *
440
+ * @returns Confidence in [0.0, maxConfidence].
441
+ */
442
+ score(input: ScoreInput): number;
443
+ }
444
+
268
445
  /**
269
446
  * Mask Privacy SDK
270
447
  * Just-In-Time Privacy Middleware for AI Agents.
@@ -298,4 +475,4 @@ declare function ascanAndTokenize(text: string, options?: {
298
475
  */
299
476
  declare function secureTool(...args: any[]): any;
300
477
 
301
- export { BaseScanner, LocalTransformersScanner, MaskClient, MaskDecryptionError, MaskError, MaskNLPTimeout, MaskSecurityError, MaskVaultConnectionError, PresidioScanner, VERSION, adecode, adetokenizeText, aencode, ascanAndTokenize, decode, detectEntitiesWithConfidence, detokenizeText, encode, generateFPEToken, getScanner, getVault, looksLikeToken, resetMasterKey, secureTool };
478
+ export { BaseScanner, DLPConfidenceScorer, DLPPatternRegistry, DLPValidationEngine, LanguageContextResolver, LocalTransformersScanner, MaskClient, MaskDecryptionError, MaskError, MaskNLPTimeout, MaskSecurityError, MaskVaultConnectionError, PresidioScanner, SensitiveCategory, VERSION, adecode, adetokenizeText, aencode, ascanAndTokenize, decode, detectEntitiesWithConfidence, detokenizeText, encode, generateFPEToken, getScanner, getVault, looksLikeToken, resetMasterKey, secureTool };