biometry-sdk 1.3.1 → 1.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -41,8 +41,25 @@ const sdk = new BiometrySDK('YOUR_API_KEY');
41
41
  ### Example
42
42
  You can find an example in the example/ directory. It demonstrates how you might integrate the BiometrySDK into a stateful React component.
43
43
 
44
- ### 1. Consents
45
- #### 1.1 Give Authorization Consent
44
+ ### 1. Sessions
45
+ A session groups related transactions together. Start a session, then pass the session ID to subsequent calls to link them within a unified group.
46
+ ```javascript
47
+ const response = await sdk.startSession();
48
+ const sessionId = response.data;
49
+
50
+ const voiceFile = new File([/* voice audio bytes */], 'voice.wav', { type: 'audio/wav' });
51
+ const faceFile = new File([/* face image bytes */], 'face.jpg', { type: 'image/jpeg' });
52
+
53
+ // Use the session ID to link transactions within a unified group
54
+ await sdk.giveStorageConsent(true, 'John Doe', { sessionId });
55
+ await sdk.enrollFace(faceFile, 'John Doe', { sessionId });
56
+ await sdk.enrollVoice(voiceFile, 'John Doe', { sessionId });
57
+
58
+ // Go to the Results page in your dashboard and see the transactions grouped by the session ID
59
+ ```
60
+
61
+ ### 2. Consents
62
+ #### 2.1 Give Authorization Consent
46
63
  You **must** obtain user authorization consent before performing any biometric operations (Face Recognition, Voice Recognition, etc.):
47
64
  ```javascript
48
65
  await sdk.giveAuthorizationConsent(true, 'John Doe');
@@ -54,7 +71,7 @@ You **must** obtain user authorization consent before performing any biometric o
54
71
  - The first argument (`true`) indicates that the user has granted consent.
55
72
  - The second argument is the user’s full name (used for record-keeping within Biometry).
56
73
 
57
- #### 1.2 Give Storage Consent
74
+ #### 2.2 Give Storage Consent
58
75
  You **must** obtain user consent before storing biometric data (Face Enrollment, Voice Enrollment):
59
76
  ```javascript
60
77
  await sdk.giveStorageConsent(true, 'John Doe');
@@ -66,7 +83,7 @@ You **must** obtain user consent before storing biometric data (Face Enrollment,
66
83
  - The first argument (`true`) indicates that the user has granted consent.
67
84
  - The second argument is the user’s full name (used for record-keeping within Biometry).
68
85
 
69
- ### 2. Face Enrollment
86
+ ### 3. Face Enrollment
70
87
  Enroll a user’s face for future recognition or matching:
71
88
  ```javascript
72
89
  const faceFile = new File([/* face image bytes */], 'face.jpg', { type: 'image/jpeg' });
@@ -76,7 +93,7 @@ Enroll a user’s face for future recognition or matching:
76
93
  console.log('Face Enrollment Response:', faceResponse);
77
94
  ```
78
95
 
79
- ### 3. Voice Enrollment
96
+ ### 4. Voice Enrollment
80
97
  Enroll a user’s voice for future authentication checks:
81
98
  ```javascript
82
99
  const voiceFile = new File([/* voice audio bytes */], 'voice.wav', { type: 'audio/wav' });
@@ -85,7 +102,7 @@ Enroll a user’s voice for future authentication checks:
85
102
  const voiceResponse = await sdk.enrollVoice(voiceFile, 'John Doe');
86
103
  console.log('Voice Enrollment Response:', voiceResponse);
87
104
  ```
88
- ### 4. Process Video
105
+ ### 5. Process Video
89
106
  Process a user’s video for liveness checks and identity authorization:
90
107
  ```javascript
91
108
  const videoFile = new File([/* file parts */], 'video.mp4', { type: 'video/mp4' });
@@ -97,28 +114,14 @@ Process a user’s video for liveness checks and identity authorization:
97
114
  try {
98
115
  const response = await sdk.processVideo(videoFile, phrase, userFullName);
99
116
  console.log('Process Video Response:', response);
100
-
101
- // Retrieve the processVideoRequestId from the *response headers* called x-request-id.
102
117
  } catch (error) {
103
118
  console.error('Error processing video:', error);
104
119
  }
105
120
  ```
106
- #### Additional
107
- - `processVideoRequestId`: After calling `sdk.processVideo()`, you typically receive a unique ID (`x-request-id`). You can pass this `processVideoRequestId` into subsequent calls (e.g., `faceMatch`) to reference the previously uploaded video frames.
108
- - `usePrefilledVideo`: When set to `true`, indicates that the SDK should reuse the video already on file from a previous `processVideo` call rather than requiring a new upload.
109
- ### 5. Face match
121
+
122
+ ### 6. Face match
110
123
  Use matchFaces to compare a reference image (e.g., a document or a captured selfie) with a face from a video:
111
124
  ```javascript
112
- /**
113
- * matchFaces(
114
- * image: File,
115
- * video?: File,
116
- * userFullName?: string,
117
- * processVideoRequestId?: string,
118
- * usePrefilledVideo?: boolean,
119
- * requestUserProvidedId?: string
120
- * ): Promise<FaceMatchResponse>
121
- */
122
125
  const faceFile = new File([/* face image bytes */], 'face.jpg', { type: 'image/jpeg' });
123
126
  const videoFile = new File([/* file parts */], 'video.mp4', { type: 'video/mp4' });
124
127
  const userFullName = 'John Doe';
@@ -128,34 +131,40 @@ Use matchFaces to compare a reference image (e.g., a document or a captured self
128
131
  videoFile,
129
132
  userFullName
130
133
  );
131
- // OR
134
+ ```
135
+
136
+ You can also reuse a video that was previously processed with the `processVideo` method by passing the same `sessionId`:
137
+ ```javascript
138
+ const sessionId = (await sdk.startSession()).data;
139
+
140
+ // First, process a video with a sessionId
141
+ const processVideoResponse = await sdk.processVideo(videoFile, phrase, userFullName, { sessionId });
142
+
143
+ // Later, reuse the same video for face matching by providing the sessionId
132
144
  const faceMatchResponse = await sdk.faceMatch(
133
- faceFile, // The image containing the user's face (doc or selfie)
134
- null, // No local video provided (we're reusing the old one)
135
- 'John Doe',
136
- processVideoRequestId, // From the /process-video response headers
137
- true // usePrefilledVideo
145
+ faceFile,
146
+ null, // No need to pass the video file again
147
+ userFullName,
148
+ true, // usePrefilledVideo
149
+ { sessionId }
138
150
  );
139
151
  ```
140
152
 
141
- ### 6. Sessions
142
- Session is a way to group transactions together. It is useful when you want to group transactions that are related to each other. For example, you can start a session and then use the session ID to link transactions within a unified group.
153
+ ### 7. DocAuth
154
+ DocAuth verifies the authenticity of a user's identity document. Use it when you need to confirm that a submitted document is genuine.
143
155
  ```javascript
144
156
  const sessionId = (await sdk.startSession()).data;
145
157
 
146
- const videoFile = new File([/* file parts */], 'video.mp4', { type: 'video/mp4' });
147
- const phrase = "one two three four five six";
158
+ const documentFile = new File([/* file parts */], 'document.jpg', { type: 'image/jpeg' });
148
159
  const userFullName = 'John Doe';
149
160
 
150
161
  await sdk.giveAuthorizationConsent(true, userFullName, { sessionId });
151
162
 
152
163
  try {
153
- const response = await sdk.processVideo(videoFile, phrase, userFullName, { sessionId });
154
- console.log('Process Video Response:', response);
155
-
156
- // Retrieve the processVideoRequestId from the *response headers* called x-request-id.
164
+ const response = await sdk.checkDocAuth(documentFile, userFullName, { sessionId });
165
+ console.log('DocAuth Response:', response);
157
166
  } catch (error) {
158
- console.error('Error processing video:', error);
167
+ console.error('Error checking document:', error);
159
168
  }
160
169
  ```
161
170
 
@@ -168,8 +177,8 @@ One common advanced scenario involves document authentication in enrollment face
168
177
 
169
178
  Below is a possible flow (method names in your SDK may vary slightly depending on your integration setup):
170
179
  ```javascript
171
- // 1. Acquire user consent
172
- await sdk.giveConsent(true, userFullName);
180
+ // 1. Acquire user storage consent
181
+ await sdk.giveStorageConsent(true, userFullName);
173
182
 
174
183
  // 2. Enroll or capture the user’s face
175
184
  // (Either using enrollFace or processVideo, depending on your user flow)
@@ -177,14 +186,17 @@ Below is a possible flow (method names in your SDK may vary slightly depending o
177
186
  const userVideoFile = new File([/* user selfie bytes */], 'video.mp4', { type: 'video/mp4' });
178
187
  const enrollResponse = await sdk.enrollFace(userFaceFile, userFullName);
179
188
 
180
- // 3. Face Match (Compare video face with user’s enrolled face)
189
+ // 3. Acquire user authorization consent. It is required before the enrolled face can be used in biometric operations.
190
+ await sdk.giveAuthorizationConsent(true, userFullName);
191
+
192
+ // 4. Face Match (Compare video face with user’s enrolled face)
181
193
  const faceMatchResponse = await sdk.faceMatch(
182
194
  userFaceFile,
183
195
  userVideoFile,
184
196
  userFullName
185
197
  );
186
198
 
187
- // 4. Evaluate the faceMatch result
199
+ // 5. Evaluate the faceMatch result
188
200
  if (faceMatchResponse.matchResult === 'match') {
189
201
  console.log('User video face matches user’s live face. Identity verified!');
190
202
  } else {
@@ -326,50 +338,6 @@ This project is licensed under the MIT License. See the [LICENSE](LICENSE) file
326
338
  ## More Information
327
339
  For more detailed information on Biometry’s API endpoints, parameters, and responses, visit the official [Biometry API Documentation](https://developer.biometrysolutions.com/overview/). If you have questions or need help, please reach out to our support team or create a GitHub issue.
328
340
 
329
- ## Quick Reference
330
- - **Install**:
331
- ```bash
332
- npm install biometry-sdk
333
- ```
334
- - **Consent**: (Required before enrollment/processing)
335
- ```javascript
336
- sdk.giveConsent(true, userFullName)
337
- ```
338
- - **Voice Enrollment**:
339
- ```javascript
340
- sdk.enrollVoice(file, userFullName)
341
- ```
342
- - **Face Enrollment**:
343
- ```javascript
344
- sdk.enrollFace(file, userFullName)
345
- ```
346
- - **Face match (basic):**
347
- ```javascript
348
- sdk.faceMatch(image, video, userFullName);
349
- ```
350
- - **Face match (advanced w/ reusing video or linking IDs):**
351
- ```javascript
352
- sdk.faceMatch(
353
- image, // Reference image file that contains user's face.
354
- video, // Video file that contains user's face.
355
- userFullName,
356
- processVideoRequestId, // ID from the response header of /process-video endpoint.
357
- usePrefilledVideo // Pass true to use the video from the process-video endpoint.
358
- );
359
- ```
360
- - **Process Video (basic):**
361
- ```javascript
362
- sdk.processVideo(file, phrase, userFullName);
363
- ```
364
- - **Process Video (advanced w/ reusing video or linking IDs):**
365
- ```javascript
366
- sdk.processVideo(
367
- video, // Video file that you want to process
368
- phrase,
369
- userFullName,
370
- requestUserProvidedId // An optional user-provided ID to link transactions within a unified group
371
- );
372
- ```
373
341
  - **UI Components:**
374
342
  - `<biometry-enrollment ...>` (face enrollment)
375
343
  - `<process-video ...>` (video enrollment)
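Taken together, the README changes above replace the old `processVideoRequestId` mechanism with session-based grouping. The following is a runnable sketch of how the session ID threads through the enrollment calls; it uses a local stub in place of the real `BiometrySDK` (the real client is async and talks to the Biometry API over the network), so the stub class and its call recording are illustrative assumptions, not package code.

```javascript
// Local stub standing in for BiometrySDK. It only models how the sessionId
// from startSession() is threaded through subsequent calls via `props`.
class StubSDK {
  constructor() { this.calls = []; }
  startSession() { return { data: 'session-123' }; }
  giveStorageConsent(given, name, props) { this.calls.push(['giveStorageConsent', props?.sessionId]); }
  enrollFace(file, name, props) { this.calls.push(['enrollFace', props?.sessionId]); }
  enrollVoice(file, name, props) { this.calls.push(['enrollVoice', props?.sessionId]); }
}

const sdk = new StubSDK();
// Real SDK equivalent: const sessionId = (await sdk.startSession()).data;
const sessionId = sdk.startSession().data;

sdk.giveStorageConsent(true, 'John Doe', { sessionId });
sdk.enrollFace('face.jpg', 'John Doe', { sessionId });
sdk.enrollVoice('voice.wav', 'John Doe', { sessionId });

// Every transaction in the group carries the same session ID,
// which is what groups them on the dashboard's Results page.
const grouped = sdk.calls.every(([, sid]) => sid === 'session-123');
console.log(grouped); // -> true
```

With the real client, each call above would be awaited; only the grouping mechanism is shown here.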
@@ -221,11 +221,13 @@ class BiometrySDK {
221
221
  * @returns {Promise<FaceMatchResponse>} - A promise resolving to the face match response.
222
222
  * @throws {Error} - If required parameters are missing or the request fails.
223
223
  */
224
- async matchFaces(image, video, userFullName, processVideoRequestId, usePrefilledVideo, props) {
224
+ async matchFaces(image, video, userFullName, usePrefilledVideo, props) {
225
225
  if (!image)
226
226
  throw new Error('Face image is required.');
227
- if ((!processVideoRequestId && !usePrefilledVideo) && !video)
227
+ if ((!usePrefilledVideo) && !video)
228
228
  throw new Error('Video is required.');
229
+ if (usePrefilledVideo && !(props === null || props === undefined ? undefined : props.sessionId))
230
+ throw new Error('Session ID is required to use a video from the process-video endpoint.');
229
231
  const formData = new FormData();
230
232
  if (video) {
231
233
  formData.append('video', video);
@@ -235,10 +237,7 @@ class BiometrySDK {
235
237
  if (userFullName) {
236
238
  headers['X-User-Fullname'] = userFullName;
237
239
  }
238
- if (processVideoRequestId) {
239
- headers['X-Request-Id'] = processVideoRequestId;
240
- }
241
- if (processVideoRequestId && usePrefilledVideo) {
240
+ if (usePrefilledVideo) {
242
241
  headers['X-Use-Prefilled-Video'] = 'true';
243
242
  }
244
243
  if (props === null || props === undefined ? undefined : props.sessionId) {
@@ -247,9 +246,6 @@ class BiometrySDK {
247
246
  if (props === null || props === undefined ? undefined : props.deviceInfo) {
248
247
  headers['X-Device-Info'] = JSON.stringify(props.deviceInfo);
249
248
  }
250
- if (props === null || props === undefined ? undefined : props.deviceInfo) {
251
- headers['X-Device-Info'] = JSON.stringify(props.deviceInfo);
252
- }
253
249
  return await this.request('/api-gateway/match-faces', 'POST', formData, headers);
254
250
  }
255
251
  /**
package/dist/sdk.d.ts CHANGED
@@ -116,7 +116,7 @@ export declare class BiometrySDK {
116
116
  * @returns {Promise<FaceMatchResponse>} - A promise resolving to the face match response.
117
117
  * @throws {Error} - If required parameters are missing or the request fails.
118
118
  */
119
- matchFaces(image: File, video?: File, userFullName?: string, processVideoRequestId?: string, usePrefilledVideo?: boolean, props?: {
119
+ matchFaces(image: File, video?: File, userFullName?: string, usePrefilledVideo?: boolean, props?: {
120
120
  sessionId?: string;
121
121
  deviceInfo?: object;
122
122
  }): Promise<ApiResponse<FaceMatchResponse>>;
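The new guard added to `matchFaces` in this release means that reusing a prefilled video now fails fast unless a session ID is supplied. Below is a minimal re-implementation of just that argument validation, in isolation, so the new failure mode can be exercised locally; the function name `validateMatchFacesArgs` is ours for illustration, and the real method also builds and sends the request.

```javascript
// Re-implementation of the 1.3.3 argument checks from matchFaces, in isolation.
function validateMatchFacesArgs(image, video, usePrefilledVideo, props) {
  if (!image) throw new Error('Face image is required.');
  if (!usePrefilledVideo && !video) throw new Error('Video is required.');
  if (usePrefilledVideo && !props?.sessionId)
    throw new Error('Session ID is required to use a video from the process-video endpoint.');
  return true;
}

// Reusing a processed video without a sessionId is now rejected:
let rejected = false;
try {
  validateMatchFacesArgs('face.jpg', null, true, {});
} catch (e) {
  rejected = true;
}
console.log(rejected); // -> true

// With a sessionId (as in the README's reuse example) the checks pass:
const ok = validateMatchFacesArgs('face.jpg', null, true, { sessionId: 'session-123' });
console.log(ok); // -> true
```

This mirrors why the README's reuse example passes `{ sessionId }` alongside `usePrefilledVideo: true`.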
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "biometry-sdk",
3
- "version": "1.3.1",
3
+ "version": "1.3.3",
4
4
  "main": "dist/biometry-sdk.esm.js",
5
5
  "types": "dist/index.d.ts",
6
6
  "module": "dist/biometry-sdk.esm.js",
@@ -1 +0,0 @@
1
- export {};
@@ -1,233 +0,0 @@
1
- import { BiometrySDK } from "../sdk.js";
2
- import { BiometryAttributes, BiometryOnboardingState } from "../types.js";
3
- class BiometryOnboarding extends HTMLElement {
4
- constructor() {
5
- super();
6
- this.videoElement = null;
7
- this.canvasElement = null;
8
- this.captureButton = null;
9
- this.shadow = this.attachShadow({ mode: "open" });
10
- this.sdk = null;
11
- this.toggleState = this.toggleState.bind(this);
12
- this.capturePhoto = this.capturePhoto.bind(this);
13
- }
14
- static get observedAttributes() {
15
- return Object.values(BiometryAttributes);
16
- }
17
- get apiKey() {
18
- return this.getAttribute("api-key");
19
- }
20
- set apiKey(value) {
21
- if (value) {
22
- this.setAttribute("api-key", value);
23
- }
24
- else {
25
- this.removeAttribute("api-key");
26
- }
27
- }
28
- get userFullname() {
29
- return this.getAttribute("user-fullname");
30
- }
31
- set userFullname(value) {
32
- if (value) {
33
- this.setAttribute("user-fullname", value);
34
- }
35
- else {
36
- this.removeAttribute("user-fullname");
37
- }
38
- }
39
- attributeChangedCallback(name, oldValue, newValue) {
40
- if (name === "api-key" || name === "user-fullname") {
41
- this.validateAttributes();
42
- }
43
- }
44
- connectedCallback() {
45
- this.validateAttributes();
46
- this.init();
47
- }
48
- disconnectedCallback() {
49
- this.cleanup();
50
- }
51
- validateAttributes() {
52
- if (!this.apiKey) {
53
- console.error("API key is required.");
54
- this.toggleState(BiometryOnboardingState.ErrorOther);
55
- return;
56
- }
57
- if (!this.userFullname) {
58
- console.error("User fullname is required.");
59
- this.toggleState(BiometryOnboardingState.ErrorOther);
60
- return;
61
- }
62
- }
63
- init() {
64
- this.shadow.innerHTML = `
65
- <style>
66
- .wrapper {
67
- position: relative;
68
- }
69
- video {
70
- transform: scaleX(-1); /* Flip video for preview */
71
- max-width: 100%;
72
- border-radius: var(--border-radius, 8px);
73
- }
74
- canvas {
75
- display: none;
76
- }
77
- </style>
78
- <div class="wrapper">
79
- <slot name="video">
80
- <video id="video" autoplay playsinline></video>
81
- </slot>
82
- <slot name="canvas">
83
- <canvas id="canvas" style="display: none;"></canvas>
84
- </slot>
85
- <slot name="button">
86
- <button id="button">Capture Photo</button>
87
- </slot>
88
- <div class="status">
89
- <slot name="loading" class="loading"></slot>
90
- <slot name="success" class="success"></slot>
91
- <slot name="error-no-face" class="error-no-face"></slot>
92
- <slot name="error-multiple-faces" class="error-multiple-faces"></slot>
93
- <slot name="error-not-centered" class="error-not-centered"></slot>
94
- <slot name="error-other" class="error-other"></slot>
95
- </div>
96
- </div>
97
- `;
98
- this.initializeSDK();
99
- this.attachSlotListeners();
100
- this.setupCamera();
101
- this.toggleState("");
102
- }
103
- cleanup() {
104
- var _a;
105
- if ((_a = this.videoElement) === null || _a === void 0 ? void 0 : _a.srcObject) {
106
- const tracks = this.videoElement.srcObject.getTracks();
107
- tracks.forEach((track) => track.stop());
108
- }
109
- if (this.videoElement) {
110
- this.videoElement.srcObject = null;
111
- }
112
- }
113
- initializeSDK() {
114
- if (this.apiKey) {
115
- this.sdk = new BiometrySDK(this.apiKey);
116
- }
117
- else {
118
- this.toggleState(BiometryOnboardingState.ErrorOther);
119
- console.error("API key is required to initialize the SDK.");
120
- }
121
- }
122
- toggleState(state) {
123
- const slots = [
124
- BiometryOnboardingState.Loading,
125
- BiometryOnboardingState.Success,
126
- BiometryOnboardingState.ErrorNoFace,
127
- BiometryOnboardingState.ErrorMultipleFaces,
128
- BiometryOnboardingState.ErrorNotCentered,
129
- BiometryOnboardingState.ErrorOther,
130
- ];
131
- slots.forEach((slotName) => {
132
- const slot = this.shadow.querySelector(`slot[name="${slotName}"]`);
133
- if (slot) {
134
- slot.style.display = slotName === state ? "block" : "none";
135
- }
136
- });
137
- }
138
- attachSlotListeners() {
139
- const videoSlot = this.shadow.querySelector('slot[name="video"]');
140
- const canvasSlot = this.shadow.querySelector('slot[name="canvas"]');
141
- const buttonSlot = this.shadow.querySelector('slot[name="button"]');
142
- const assignedVideoElements = videoSlot.assignedElements();
143
- this.videoElement = (assignedVideoElements.length > 0 ? assignedVideoElements[0] : null) || this.shadow.querySelector("#video");
144
- const assignedCanvasElements = canvasSlot.assignedElements();
145
- this.canvasElement = (assignedCanvasElements.length > 0 ? assignedCanvasElements[0] : null) || this.shadow.querySelector("#canvas");
146
- const assignedButtonElements = buttonSlot.assignedElements();
147
- this.captureButton = (assignedButtonElements.length > 0 ? assignedButtonElements[0] : null) || this.shadow.querySelector("#button");
148
- if (!this.videoElement) {
149
- console.error("Video element is missing.");
150
- return;
151
- }
152
- if (!this.captureButton) {
153
- console.error("Capture button is missing.");
154
- return;
155
- }
156
- else {
157
- this.captureButton.addEventListener("click", this.capturePhoto);
158
- }
159
- }
160
- setupCamera() {
161
- if (!this.videoElement) {
162
- console.error("Video element is missing.");
163
- return;
164
- }
165
- navigator.mediaDevices
166
- .getUserMedia({ video: true })
167
- .then((stream) => {
168
- this.videoElement.srcObject = stream;
169
- })
170
- .catch((error) => {
171
- console.error("Error accessing camera:", error);
172
- });
173
- }
174
- async capturePhoto() {
175
- try {
176
- if (!this.videoElement || !this.canvasElement || !this.sdk) {
177
- console.error("Essential elements or SDK are not initialized.");
178
- return;
179
- }
180
- const context = this.canvasElement.getContext("2d");
181
- this.canvasElement.width = this.videoElement.videoWidth;
182
- this.canvasElement.height = this.videoElement.videoHeight;
183
- context.drawImage(this.videoElement, 0, 0, this.canvasElement.width, this.canvasElement.height);
184
- this.toggleState("loading");
185
- this.canvasElement.toBlob(async (blob) => {
186
- try {
187
- if (!blob) {
188
- console.error("Failed to capture photo.");
189
- this.toggleState(BiometryOnboardingState.ErrorOther);
190
- return;
191
- }
192
- const file = new File([blob], "onboard-face.jpg", { type: "image/jpeg" });
193
- try {
194
- const response = await this.sdk.onboardFace(file, this.userFullname);
195
- const result = response.data.onboard_result;
196
- this.resultCode = result === null || result === void 0 ? void 0 : result.code;
197
- this.description = (result === null || result === void 0 ? void 0 : result.description) || "Unknown error occurred.";
198
- switch (this.resultCode) {
199
- case 0:
200
- this.toggleState(BiometryOnboardingState.Success);
201
- break;
202
- case 1:
203
- this.toggleState(BiometryOnboardingState.ErrorNoFace);
204
- break;
205
- case 2:
206
- this.toggleState(BiometryOnboardingState.ErrorMultipleFaces);
207
- break;
208
- case 3:
209
- this.toggleState(BiometryOnboardingState.ErrorNotCentered);
210
- break;
211
- default:
212
- this.toggleState(BiometryOnboardingState.ErrorOther);
213
- }
214
- console.log("Onboarding result:", result);
215
- }
216
- catch (error) {
217
- console.error("Error onboarding face:", error);
218
- this.toggleState(BiometryOnboardingState.ErrorOther);
219
- }
220
- }
221
- catch (error) {
222
- console.error("Error in toBlob callback:", error);
223
- this.toggleState(BiometryOnboardingState.ErrorOther);
224
- }
225
- }, "image/jpeg");
226
- }
227
- catch (error) {
228
- console.error("Error capturing photo:", error);
229
- this.toggleState(BiometryOnboardingState.ErrorOther);
230
- }
231
- }
232
- }
233
- customElements.define("biometry-onboarding", BiometryOnboarding);
package/dist/index.js DELETED
@@ -1,2 +0,0 @@
1
- export * from './sdk';
2
- export * from './types';
package/dist/sdk.js DELETED
@@ -1,179 +0,0 @@
1
- export class BiometrySDK {
2
- constructor(apiKey) {
3
- if (!apiKey) {
4
- throw new Error('API Key is required to initialize the SDK.');
5
- }
6
- this.apiKey = apiKey;
7
- }
8
- async request(path, method, body, headers) {
9
- const defaultHeaders = {
10
- Authorization: `Bearer ${this.apiKey}`,
11
- };
12
- const requestHeaders = Object.assign(Object.assign({}, defaultHeaders), headers);
13
- if (body && !(body instanceof FormData)) {
14
- requestHeaders['Content-Type'] = 'application/json';
15
- body = JSON.stringify(body);
16
- }
17
- const response = await fetch(`${BiometrySDK.BASE_URL}${path}`, {
18
- method,
19
- headers: requestHeaders,
20
- body,
21
- });
22
- if (!response.ok) {
23
- const errorData = await response.json().catch(() => ({}));
24
- const errorMessage = (errorData === null || errorData === void 0 ? void 0 : errorData.error) || (errorData === null || errorData === void 0 ? void 0 : errorData.message) || 'Unknown error occurred';
25
- throw new Error(`Error ${response.status}: ${errorMessage}`);
26
- }
27
- return await response.json();
28
- }
29
- /**
30
- * Submits consent for a user.
31
- *
32
- * @param {boolean} isConsentGiven - Indicates whether the user has given consent.
33
- * @param {string} userFullName - The full name of the user giving consent.
34
- * @returns {Promise<ConsentResponse>} A promise resolving to the consent response.
35
- * @throws {Error} - If the user's full name is not provided or if the request fails.
36
- */
37
- async giveConsent(isConsentGiven, userFullName) {
38
- if (!userFullName) {
39
- throw new Error('User Full Name is required to give consent.');
40
- }
41
- const body = {
42
- is_consent_given: isConsentGiven,
43
- user_fullname: userFullName,
44
- };
45
- const response = await this.request('/api-consent/consent', 'POST', body);
46
- return {
47
- is_consent_given: response.is_consent_given,
48
- user_fullname: response.user_fullname,
49
- };
50
- }
51
- /**
52
- * Onboards a user's voice for biometric authentication.
53
- *
54
- * @param {File} audio - The audio file containing the user's voice.
55
- * @param {string} userFullName - The full name of the user being onboarded.
56
- * @param {string} uniqueId - A unique identifier for the onboarding process.
57
- * @param {string} phrase - The phrase spoken in the audio file.
58
- * @param {string} [requestUserProvidedId] - An optional user-provided ID to link transactions within a unified group.
59
- * @returns {Promise<VoiceOnboardingResponse>} - A promise resolving to the voice onboarding response.
60
- * @throws {Error} - If required parameters are missing or the request fails.
61
- */
62
- async onboardVoice(audio, userFullName, uniqueId, phrase, requestUserProvidedId) {
63
- if (!userFullName)
64
- throw new Error('User fullname is required.');
65
- if (!uniqueId)
66
- throw new Error('Unique ID is required.');
67
- if (!phrase)
68
- throw new Error('Phrase is required.');
69
- if (!audio)
70
- throw new Error('Audio file is required.');
71
- const formData = new FormData();
72
- formData.append('unique_id', uniqueId);
73
- formData.append('phrase', phrase);
74
- formData.append('voice', audio);
75
- const headers = {
76
- 'X-User-Fullname': userFullName,
77
- };
78
- if (requestUserProvidedId) {
79
- headers['X-Request-User-Provided-ID'] = requestUserProvidedId;
80
- }
81
- return this.request('/api-gateway/onboard/voice', 'POST', formData, headers);
82
- }
83
- /**
84
- * Onboards a user's face for biometric authentication.
85
- *
86
- * @param {File} face - Image file that contains user's face.
87
- * @param {string} userFullName - The full name of the user being onboarded.
88
- * @param {string} isDocument - Indicates whether the image is a document.
89
- * @param {string} [requestUserProvidedId] - An optional user-provided ID to link transactions within a unified group.
90
- * @returns {Promise<FaceOnboardingResponse>} - A promise resolving to the voice onboarding response.
91
- * @throws {Error} - If required parameters are missing or the request fails.
92
- */
93
- async onboardFace(face, userFullName, isDocument, requestUserProvidedId) {
94
- if (!userFullName)
95
- throw new Error('User fullname is required.');
96
- if (!face)
97
- throw new Error('Face image is required.');
98
- const formData = new FormData();
99
- formData.append('face', face);
100
- if (isDocument) {
101
- formData.append('is_document', 'true');
102
- }
103
- const headers = {
104
- 'X-User-Fullname': userFullName,
105
- };
106
- if (requestUserProvidedId) {
107
- headers['X-Request-User-Provided-ID'] = requestUserProvidedId;
108
- }
109
- return this.request('/api-gateway/onboard/face', 'POST', formData, headers);
110
- }
111
- /**
112
- * Matches a user's face from video against a reference image.
113
- *
114
- * @param {File} image - Reference image file that contains user's face.
115
- * @param {string} video - Video file that contains user's face.
116
- * @param {string} userFullName - Pass the full name of end-user to process Voice and Face recognition services.
117
- * @param {string} processVideoRequestId - ID from the response header of /process-video endpoint.
118
- * @param {boolean} usePrefilledVideo - Pass true to use the video from the process-video endpoint.
119
- * @param {string} [requestUserProvidedId] - An optional user-provided ID to link transactions within a unified group.
120
- * @returns {Promise<FaceMatchResponse>} - A promise resolving to the voice onboarding response.
121
- * @throws {Error} - If required parameters are missing or the request fails.
122
- */
123
- async matchFaces(image, video, userFullName, processVideoRequestId, usePrefilledVideo, requestUserProvidedId) {
124
- if (!image)
125
- throw new Error('Face image is required.');
126
- if ((!processVideoRequestId && !usePrefilledVideo) && !video)
127
- throw new Error('Video is required.');
128
- const formData = new FormData();
129
- if (video) {
130
- formData.append('video', video);
131
- }
132
- formData.append('image', image);
133
- const headers = {};
134
- if (userFullName) {
135
- headers['X-User-Fullname'] = userFullName;
136
- }
137
- if (processVideoRequestId) {
138
- headers['X-Request-Id'] = processVideoRequestId;
139
- }
140
- if (processVideoRequestId && usePrefilledVideo) {
141
- headers['X-Use-Prefilled-Video'] = 'true';
142
- }
143
- if (requestUserProvidedId) {
144
- headers['X-Request-User-Provided-ID'] = requestUserProvidedId;
145
- }
146
- return this.request('/api-gateway/match-faces', 'POST', formData, headers);
147
- }
148
- /**
149
- * Process the video through Biometry services to check liveness and authorize user
150
- *
151
- * @param {File} video - Video file that you want to process.
152
- * @param {string} phrase - Set of numbers that user needs to say out loud in the video.
153
- * @param {string} userFullName - Pass the full name of end-user to process Voice and Face recognition services.
154
- * @param {string} requestUserProvidedId - An optional user-provided ID to link transactions within a unified group.
155
- * @param {object} deviceInfo - Pass the device information in JSON format to include in transaction.
156
- * @returns
157
- */
158
- async processVideo(video, phrase, userFullName, requestUserProvidedId, deviceInfo) {
159
- if (!video)
160
- throw new Error('Video is required.');
161
- if (!phrase)
162
- throw new Error('Phrase is required.');
163
- const formData = new FormData();
164
- formData.append('phrase', phrase);
165
- formData.append('video', video);
166
- const headers = {};
167
- if (userFullName) {
168
- headers['X-User-Fullname'] = userFullName;
169
- }
170
- if (requestUserProvidedId) {
171
- headers['X-Request-User-Provided-ID'] = requestUserProvidedId;
172
- }
173
- if (deviceInfo) {
174
- headers['X-Device-Info'] = JSON.stringify(deviceInfo);
175
- }
176
- return this.request('/api-gateway/process-video', 'POST', formData, headers);
177
- }
178
- }
179
- BiometrySDK.BASE_URL = 'https://api.biometrysolutions.com';
package/dist/sdk.test.js DELETED
@@ -1,255 +0,0 @@
- import { BiometrySDK } from './sdk';
- // Mock the fetch API globally
- global.fetch = jest.fn();
- describe('BiometrySDK', () => {
-     const apiKey = 'test-api-key';
-     const sdk = new BiometrySDK(apiKey);
-     afterEach(() => {
-         jest.clearAllMocks();
-     });
-     it('should throw an error if no API key is provided', () => {
-         expect(() => new BiometrySDK('')).toThrow('API Key is required to initialize the SDK.');
-     });
-     // CONSENT
-     it('should call fetch with correct headers and body when giving consent', async () => {
-         fetch.mockResolvedValueOnce({
-             ok: true,
-             json: async () => ({ is_consent_given: true, user_fullname: 'John Doe' }),
-         });
-         const result = await sdk.giveConsent(true, 'John Doe');
-         expect(fetch).toHaveBeenCalledWith('https://api.biometrysolutions.com/consent', expect.objectContaining({
-             method: 'POST',
-             headers: {
-                 Authorization: `Bearer ${apiKey}`,
-                 'Content-Type': 'application/json',
-             },
-             body: JSON.stringify({
-                 is_consent_given: true,
-                 user_fullname: 'John Doe',
-             }),
-         }));
-         expect(result).toEqual({ is_consent_given: true, user_fullname: 'John Doe', });
-     });
-     it('should throw an error if response is not ok', async () => {
-         fetch.mockResolvedValueOnce({
-             ok: false,
-             status: 400,
-             json: async () => ({ error: 'is_consent_given must be true' }),
-         });
-         await expect(sdk.giveConsent(true, 'John Doe')).rejects.toThrow('Error 400: undefined');
-     });
-     // VOICE ONBOARDING
-     it('should throw an error if user fullname is missing', async () => {
-         const audioFile = new File(['audio data'], 'audio.wav', { type: 'audio/wav' });
-         await expect(sdk.onboardVoice(audioFile, '', 'uniqueId', 'phrase')).rejects.toThrowError('User fullname is required.');
-     });
-     it('should throw an error if unique ID is missing', async () => {
-         const audioFile = new File(['audio data'], 'audio.wav', { type: 'audio/wav' });
-         await expect(sdk.onboardVoice(audioFile, 'User Name', '', 'phrase')).rejects.toThrowError('Unique ID is required.');
-     });
-     it('should throw an error if phrase is missing', async () => {
-         const audioFile = new File(['audio data'], 'audio.wav', { type: 'audio/wav' });
-         await expect(sdk.onboardVoice(audioFile, 'User Name', 'uniqueId', '')).rejects.toThrowError('Phrase is required.');
-     });
-     it('should throw an error if audio file is missing', async () => {
-         await expect(sdk.onboardVoice(null, 'User Name', 'uniqueId', 'phrase')).rejects.toThrowError('Audio file is required.');
-     });
-     it('should successfully onboard voice and return the response', async () => {
-         const mockResponse = {
-             status: 'good',
-         };
-         fetch.mockResolvedValueOnce({
-             ok: true,
-             json: async () => mockResponse,
-         });
-         const audioFile = new File(['audio data'], 'audio.wav', { type: 'audio/wav' });
-         const userFullName = 'User Name';
-         const uniqueId = 'uniqueId';
-         const phrase = 'phrase';
-         const formDataSpy = jest.spyOn(FormData.prototype, 'append');
-         const result = await sdk.onboardVoice(audioFile, userFullName, uniqueId, phrase);
-         expect(formDataSpy).toHaveBeenCalledWith('unique_id', uniqueId);
-         expect(formDataSpy).toHaveBeenCalledWith('phrase', phrase);
-         expect(formDataSpy).toHaveBeenCalledWith('voice', audioFile);
-         expect(fetch).toHaveBeenCalledWith('https://api.biometrysolutions.com/api-gateway/onboard/voice', expect.objectContaining({
-             method: 'POST',
-             headers: {
-                 'Authorization': `Bearer ${apiKey}`,
-                 'X-User-Fullname': userFullName,
-             },
-             body: expect.any(FormData)
-         }));
-         expect(result).toEqual(mockResponse);
-     });
-     // FACE ONBOARDING
-     it('should throw an error if user fullname is missing', async () => {
-         const imageFile = new File(['image data'], 'image.jpg', { type: 'image/jpeg' });
-         await expect(sdk.onboardFace(imageFile, '')).rejects.toThrowError('User fullname is required.');
-     });
-     it('should throw an error if image file is missing', async () => {
-         await expect(sdk.onboardFace(null, 'User Name')).rejects.toThrowError('Face image is required.');
-     });
-     it('should successfully onboard face and return the response', async () => {
-         const mockResponse = {
-             code: 200,
-             description: 'Face onboarded successfully',
-         };
-         fetch.mockResolvedValueOnce({
-             ok: true,
-             json: async () => mockResponse,
-         });
-         const imageFile = new File(['image data'], 'image.jpg', { type: 'image/jpeg' });
-         const userFullName = 'User Name';
-         const formDataSpy = jest.spyOn(FormData.prototype, 'append');
-         const result = await sdk.onboardFace(imageFile, userFullName);
-         expect(formDataSpy).toHaveBeenCalledWith('face', imageFile);
-         expect(fetch).toHaveBeenCalledWith('https://api.biometrysolutions.com/api-gateway/onboard/face', expect.objectContaining({
-             method: 'POST',
-             headers: {
-                 'Authorization': `Bearer ${apiKey}`,
-                 'X-User-Fullname': userFullName,
-             },
-             body: expect.any(FormData)
-         }));
-         expect(result).toEqual(mockResponse);
-     });
-     // FACE MATCH
-     it('should throw an error if face image is missing', async () => {
-         await expect(sdk.matchFaces(null, null)).rejects.toThrowError('Face image is required.');
-     });
-     it('should throw an error if video file is missing', async () => {
-         const imageFile = new File(['image data'], 'image.jpg', { type: 'image/jpeg' });
-         await expect(sdk.matchFaces(imageFile, null)).rejects.toThrowError('Video is required.');
-     });
-     it('should successfully match faces and return the response', async () => {
-         const mockResponse = {
-             code: 200,
-             result: 1,
-             description: 'Matched',
-             anchor: {
-                 code: 200,
-                 description: 'Anchor face',
-             },
-             target: {
-                 code: 200,
-                 description: 'Target face',
-             },
-         };
-         fetch.mockResolvedValueOnce({
-             ok: true,
-             json: async () => mockResponse,
-         });
-         const imageFile = new File(['image data'], 'image.jpg', { type: 'image/jpeg' });
-         const videoFile = new File(['video data'], 'video.mp4', { type: 'video/mp4' });
-         const formDataSpy = jest.spyOn(FormData.prototype, 'append');
-         const result = await sdk.matchFaces(imageFile, videoFile);
-         expect(formDataSpy).toHaveBeenCalledWith('video', videoFile);
-         expect(formDataSpy).toHaveBeenCalledWith('image', imageFile);
-         expect(fetch).toHaveBeenCalledWith('https://api.biometrysolutions.com/api-gateway/match-faces', expect.objectContaining({
-             method: 'POST',
-             headers: {
-                 'Authorization': `Bearer ${apiKey}`,
-             },
-             body: expect.any(FormData)
-         }));
-         expect(result).toEqual(mockResponse);
-     });
-     it('should successfully match faces if processVideoRequestId is provided', async () => {
-         const mockResponse = {
-             code: 200,
-             result: 1,
-             description: 'Matched',
-             anchor: {
-                 code: 200,
-                 description: 'Anchor face',
-             },
-             target: {
-                 code: 200,
-                 description: 'Target face',
-             },
-         };
-         fetch.mockResolvedValueOnce({
-             ok: true,
-             json: async () => mockResponse,
-         });
-         const imageFile = new File(['image data'], 'image.jpg', { type: 'image/jpeg' });
-         const formDataSpy = jest.spyOn(FormData.prototype, 'append');
-         const result = await sdk.matchFaces(imageFile, undefined, 'User Name', 'processVideoRequestId', true);
-         expect(formDataSpy).toHaveBeenCalledWith('image', imageFile);
-         expect(fetch).toHaveBeenCalledWith('https://api.biometrysolutions.com/api-gateway/match-faces', expect.objectContaining({
-             method: 'POST',
-             headers: {
-                 'Authorization': `Bearer ${apiKey}`,
-                 'X-User-Fullname': 'User Name',
-                 'X-Request-Id': 'processVideoRequestId',
-                 'X-Use-Prefilled-Video': 'true',
-             },
-             body: expect.any(FormData)
-         }));
-         expect(result).toEqual(mockResponse);
-     });
-     // PROCESS VIDEO
-     it('should throw an error if video file is missing', async () => {
-         await expect(sdk.processVideo(null, 'phrase')).rejects.toThrowError('Video is required.');
-     });
-     it('should throw an error if phrase is missing', async () => {
-         const videoFile = new File(['video data'], 'video.mp4', { type: 'video/mp4' });
-         await expect(sdk.processVideo(videoFile, '')).rejects.toThrowError('Phrase is required.');
-     });
-     it('should successfully process video and return the response', async () => {
-         const mockResponse = {
-             data: {
-                 'Active Speaker Detection': {
-                     code: 0,
-                     description: 'Successful check',
-                     result: 90.00,
-                 },
-                 'Face Liveness Detection': {
-                     code: 0,
-                     description: 'Successful check',
-                     result: true,
-                 },
-                 'Face Recognition': {
-                     code: 0,
-                     description: 'Successful check',
-                 },
-                 'Visual Speech Recognition': {
-                     code: 0,
-                     description: 'Successful check',
-                     result: 'ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT',
-                 },
-                 'Voice Recognition': {
-                     status: 'good',
-                     id: '123',
-                     score: 0.99,
-                     imposter_prob: 0.01,
-                     log_odds: '1.0',
-                 },
-             },
-             result_conditions: {
-                 failed_conditions: [],
-                 failed_refer_conditions: [],
-                 status: 'pass',
-             },
-             message: 'video processed successfully',
-         };
-         fetch.mockResolvedValueOnce({
-             ok: true,
-             json: async () => mockResponse,
-         });
-         const videoFile = new File(['video data'], 'video.mp4', { type: 'video/mp4' });
-         const phrase = 'ONE TWO THREE FOUR FIVE SIX SEVEN EIGHT';
-         const formDataSpy = jest.spyOn(FormData.prototype, 'append');
-         const result = await sdk.processVideo(videoFile, phrase);
-         expect(formDataSpy).toHaveBeenCalledWith('video', videoFile);
-         expect(formDataSpy).toHaveBeenCalledWith('phrase', phrase);
-         expect(fetch).toHaveBeenCalledWith('https://api.biometrysolutions.com/api-gateway/process-video', expect.objectContaining({
-             method: 'POST',
-             headers: {
-                 'Authorization': `Bearer ${apiKey}`,
-             },
-             body: expect.any(FormData)
-         }));
-         expect(result).toEqual(mockResponse);
-     });
- });
package/dist/types.d.ts DELETED
@@ -1,105 +0,0 @@
- export declare enum BiometryAttributes {
-     ApiKey = "api-key",
-     UserFullname = "user-fullname"
- }
- export declare enum BiometryOnboardingState {
-     Loading = "loading",
-     Success = "success",
-     ErrorNoFace = "error-no-face",
-     ErrorMultipleFaces = "error-multiple-faces",
-     ErrorNotCentered = "error-not-centered",
-     ErrorOther = "error-other"
- }
- export interface ConsentResponse {
-     is_consent_given: boolean;
-     user_fullname: string;
- }
- export interface VoiceOnboardingResponse {
-     status: "good" | "qafailed" | "enrolled";
- }
- type Base64String = string & {
-     readonly __brand: unique symbol;
- };
- export interface DocAuthInfo {
-     document_type: string;
-     country_code: string;
-     nationality_code: string;
-     nationality_name: string;
-     sex: string;
-     first_name: string;
-     father_name: string;
-     last_name: string;
-     expiry_date: string;
-     document_number: string;
-     birth_date: string;
-     portrait_photo: Base64String;
-     signature: Base64String;
-     document_category: string;
-     issuing_state: string;
-     front_document_type_id: string;
-     contains_rfid: boolean;
-     errors?: string[];
- }
- export interface FaceOnboardingResponse {
-     data: {
-         onboard_result: FaceOnboardingResult;
-         document_auth?: DocAuthInfo;
-     };
-     message?: string;
-     error?: string;
-     scoring_result?: string;
- }
- export interface FaceOnboardingResult {
-     code: number;
-     description: string;
- }
- export interface FaceMatchResponse {
-     code: number;
-     result: number;
-     description: string;
-     anchor: {
-         code: number;
-         description: string;
-     };
-     target: {
-         code: number;
-         description: string;
-     };
- }
- export interface ProcessVideoResponse {
-     data: {
-         "Active Speaker Detection": {
-             code: number;
-             description: string;
-             result: number;
-         };
-         "Face Liveness Detection": {
-             code: number;
-             description: string;
-             result: boolean;
-         };
-         "Face Recognition": {
-             code: number;
-             description: string;
-         };
-         "Visual Speech Recognition": {
-             code: number;
-             description: string;
-             result: string;
-         };
-         "Voice Recognition": {
-             status: string;
-             id: string;
-             score: number;
-             imposter_prob: number;
-             log_odds: string;
-         };
-     };
-     result_conditions: {
-         failed_conditions: any[];
-         failed_refer_conditions: any[];
-         status: string;
-     };
-     message: string;
- }
- export {};
package/dist/types.js DELETED
@@ -1,14 +0,0 @@
- export var BiometryAttributes;
- (function (BiometryAttributes) {
-     BiometryAttributes["ApiKey"] = "api-key";
-     BiometryAttributes["UserFullname"] = "user-fullname";
- })(BiometryAttributes || (BiometryAttributes = {}));
- export var BiometryOnboardingState;
- (function (BiometryOnboardingState) {
-     BiometryOnboardingState["Loading"] = "loading";
-     BiometryOnboardingState["Success"] = "success";
-     BiometryOnboardingState["ErrorNoFace"] = "error-no-face";
-     BiometryOnboardingState["ErrorMultipleFaces"] = "error-multiple-faces";
-     BiometryOnboardingState["ErrorNotCentered"] = "error-not-centered";
-     BiometryOnboardingState["ErrorOther"] = "error-other";
- })(BiometryOnboardingState || (BiometryOnboardingState = {}));