@azure/ai-form-recognizer 4.0.0-beta.3 → 4.0.0-beta.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +198 -151
- package/dist/index.js +859 -2056
- package/dist/index.js.map +1 -1
- package/dist-esm/src/constants.js +1 -1
- package/dist-esm/src/constants.js.map +1 -1
- package/dist-esm/src/documentAnalysisClient.js +35 -213
- package/dist-esm/src/documentAnalysisClient.js.map +1 -1
- package/dist-esm/src/documentModel.js +71 -0
- package/dist-esm/src/documentModel.js.map +1 -0
- package/dist-esm/src/documentModelAdministrationClient.js +71 -64
- package/dist-esm/src/documentModelAdministrationClient.js.map +1 -1
- package/dist-esm/src/generated/generatedClient.js +149 -63
- package/dist-esm/src/generated/generatedClient.js.map +1 -1
- package/dist-esm/src/generated/index.js +0 -1
- package/dist-esm/src/generated/index.js.map +1 -1
- package/dist-esm/src/generated/models/index.js +63 -1
- package/dist-esm/src/generated/models/index.js.map +1 -1
- package/dist-esm/src/generated/models/mappers.js +321 -129
- package/dist-esm/src/generated/models/mappers.js.map +1 -1
- package/dist-esm/src/generated/models/parameters.js +39 -4
- package/dist-esm/src/generated/models/parameters.js.map +1 -1
- package/dist-esm/src/index.js +1 -1
- package/dist-esm/src/index.js.map +1 -1
- package/dist-esm/src/lro/{training.js → administration.js} +1 -1
- package/dist-esm/src/lro/administration.js.map +1 -0
- package/dist-esm/src/lro/{analyze.js → analysis.js} +29 -37
- package/dist-esm/src/lro/analysis.js.map +1 -0
- package/dist-esm/src/lro/util/poller.js.map +1 -1
- package/dist-esm/src/{options/CopyModelOptions.js → models/documentElements.js} +1 -1
- package/dist-esm/src/models/documentElements.js.map +1 -0
- package/dist-esm/src/models/fields.js +3 -1
- package/dist-esm/src/models/fields.js.map +1 -1
- package/dist-esm/src/models/index.js.map +1 -1
- package/dist-esm/src/options/AnalyzeDocumentsOptions.js.map +1 -1
- package/dist-esm/src/options/{GetInfoOptions.js → BeginCopyModelOptions.js} +1 -1
- package/dist-esm/src/options/BeginCopyModelOptions.js.map +1 -0
- package/dist-esm/src/options/BuildModelOptions.js.map +1 -1
- package/dist-esm/src/options/FormRecognizerClientOptions.js +1 -1
- package/dist-esm/src/options/FormRecognizerClientOptions.js.map +1 -1
- package/dist-esm/src/{prebuilt/schema.js → options/GetResourceDetailsOptions.js} +1 -1
- package/dist-esm/src/options/GetResourceDetailsOptions.js.map +1 -0
- package/dist-esm/src/options/index.js.map +1 -1
- package/dist-esm/src/transforms/polygon.js +26 -0
- package/dist-esm/src/transforms/polygon.js.map +1 -0
- package/dist-esm/src/util.js.map +1 -1
- package/package.json +25 -17
- package/types/ai-form-recognizer.d.ts +423 -1847
- package/CHANGELOG.md +0 -235
- package/dist-esm/src/generated/generatedClientContext.js +0 -41
- package/dist-esm/src/generated/generatedClientContext.js.map +0 -1
- package/dist-esm/src/lro/analyze.js.map +0 -1
- package/dist-esm/src/lro/training.js.map +0 -1
- package/dist-esm/src/models/GeneralDocumentResult.js +0 -14
- package/dist-esm/src/models/GeneralDocumentResult.js.map +0 -1
- package/dist-esm/src/models/LayoutResult.js +0 -16
- package/dist-esm/src/models/LayoutResult.js.map +0 -1
- package/dist-esm/src/models/ReadResult.js +0 -19
- package/dist-esm/src/models/ReadResult.js.map +0 -1
- package/dist-esm/src/options/CopyModelOptions.js.map +0 -1
- package/dist-esm/src/options/GetInfoOptions.js.map +0 -1
- package/dist-esm/src/prebuilt/index.js +0 -8
- package/dist-esm/src/prebuilt/index.js.map +0 -1
- package/dist-esm/src/prebuilt/modelSchemas/businessCard.js +0 -119
- package/dist-esm/src/prebuilt/modelSchemas/businessCard.js.map +0 -1
- package/dist-esm/src/prebuilt/modelSchemas/idDocument.js +0 -133
- package/dist-esm/src/prebuilt/modelSchemas/idDocument.js.map +0 -1
- package/dist-esm/src/prebuilt/modelSchemas/invoice.js +0 -225
- package/dist-esm/src/prebuilt/modelSchemas/invoice.js.map +0 -1
- package/dist-esm/src/prebuilt/modelSchemas/receipt.js +0 -525
- package/dist-esm/src/prebuilt/modelSchemas/receipt.js.map +0 -1
- package/dist-esm/src/prebuilt/modelSchemas/w2.js +0 -237
- package/dist-esm/src/prebuilt/modelSchemas/w2.js.map +0 -1
- package/dist-esm/src/prebuilt/models.js +0 -116
- package/dist-esm/src/prebuilt/models.js.map +0 -1
- package/dist-esm/src/prebuilt/schema.js.map +0 -1
package/README.md
CHANGED
@@ -14,19 +14,19 @@ Azure Cognitive Services [Form Recognizer](https://azure.microsoft.com/services/
 [Product documentation](https://docs.microsoft.com/azure/cognitive-services/form-recognizer/) |
 [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer/samples)
 
-#### **_Breaking
+#### **_Breaking change advisory_ ⚠️**
 
-In version 4 (currently beta), this package introduces a full redesign of the Azure Form Recognizer client library. To leverage features of the newest Form Recognizer service API (version "2022-
+In version 4 (currently beta), this package introduces a full redesign of the Azure Form Recognizer client library. To leverage features of the newest Form Recognizer service API (version "2022-06-30-preview" and newer), the new SDK is required, and application code must be changed to use the new clients. Please see the [Migration Guide](https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/MIGRATION-v3_v4.md) for detailed instructions on how to update application code from version 3.x of the Form Recognizer SDK to the new version (4.x). Additionally, the [CHANGELOG](https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md) contains an outline of the changes since version 3.x. This package targets Azure Form Recognizer service API version `2022-06-30-preview` and newer. To continue to use Form Recognizer API version 2.1, please use major version 3 of the client package (`@azure/ai-form-recognizer@^3.2.0`).
 
-### Install the `@azure/ai-form-recognizer`
+### Install the `@azure/ai-form-recognizer` package
 
 Install the Azure Form Recognizer client library for JavaScript with `npm`:
 
 ```bash
-npm install @azure/ai-form-recognizer@4.0.0-beta.
+npm install @azure/ai-form-recognizer@4.0.0-beta.5
 ```
 
-## Getting
+## Getting started
 
 ```javascript
 const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
@@ -47,7 +47,7 @@ const poller = await client.beginAnalyzeDocument("<model ID>", file);
 const { pages, tables, styles, keyValuePairs, entities, documents } = await poller.pollUntilDone();
 ```
 
-### Currently
+### Currently supported environments
 
 - [LTS versions of Node.js](https://nodejs.org/about/releases/)
 - Latest versions of Safari, Chrome, Edge, and Firefox.
@@ -59,7 +59,7 @@ See our [support policy](https://github.com/Azure/azure-sdk-for-js/blob/main/SUP
 - An [Azure subscription](https://azure.microsoft.com/free/)
 - A [Cognitive Services or Form Recognizer resource][fr_or_cs_resource]. If you need to create the resource, you can use the [Azure Portal][azure_portal] or [Azure CLI][azure_cli].
 
-#### Create a Form Recognizer
+#### Create a Form Recognizer resource
 
 Form Recognizer supports both [multi-service and single-service access][multi_and_single_service]. Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource.
 
@@ -83,17 +83,17 @@ If you use the Azure CLI, replace `<your-resource-group-name>` and `<your-resour
 az cognitiveservices account create --kind FormRecognizer --resource-group <your-resource-group-name> --name <your-resource-name> --sku <your-sku-name> --location <your-location>
 ```
 
-### Create and
+### Create and authenticate a client
 
 In order to interact with the Form Recognizer service, you'll need to select either a `DocumentAnalysisClient` or a `DocumentModelAdministrationClient`, and create an instance of this type. In the following examples, we will use `DocumentAnalysisClient`. To create a client instance to access the Form Recognizer API, you will need the `endpoint` of your Form Recognizer resource and a `credential`. The Form Recognizer clients can use either an `AzureKeyCredential` with an API key of your resource or a `TokenCredential` that uses Azure Active Directory RBAC to authorize the client.
 
 You can find the endpoint for your Form Recognizer resource either in the [Azure Portal][azure_portal] or by using the [Azure CLI][azure_cli] snippet below:
 
 ```bash
-az cognitiveservices account show --name <your-resource-name> --resource-group <your-resource-group-name> --query "endpoint"
+az cognitiveservices account show --name <your-resource-name> --resource-group <your-resource-group-name> --query "properties.endpoint"
 ```
 
-####
+#### Use an API key
 
 Use the [Azure Portal][azure_portal] to browse to your Form Recognizer resource and retrieve an API key, or use the [Azure CLI][azure_cli] snippet below:
 
@@ -111,7 +111,7 @@ const { DocumentAnalysisClient, AzureKeyCredential } = require("@azure/ai-form-r
 const client = new DocumentAnalysisClient("<endpoint>", new AzureKeyCredential("<API key>"));
 ```
 
-####
+#### Use Azure Active Directory
 
 API key authorization is used in most of the examples, but you can also authenticate the client with Azure Active Directory using the [Azure Identity library][azure_identity]. To use the [DefaultAzureCredential][defaultazurecredential] provider shown below or other credential providers provided with the Azure SDK, please install the `@azure/identity` package:
 
@@ -130,16 +130,14 @@ const { DefaultAzureCredential } = require("@azure/identity");
 const client = new DocumentAnalysisClient("<endpoint>", new DefaultAzureCredential());
 ```
 
-## Key
+## Key concepts
 
 ### `DocumentAnalysisClient`
 
 `DocumentAnalysisClient` provides operations for analyzing input documents using custom and prebuilt models. It has three methods:
 
-- `beginAnalyzeDocument`, which extracts data from an input document using a custom or prebuilt model given by its model ID. For information about the prebuilt models supported in all resources and their model IDs/outputs, please see [the service's documentation of the models][fr-models].
-- `
-- `beginExtractGeneralDocument`, which uses the "prebuilt-document" model to extract key-value pairs and entities in addition to the properties of the prebuilt layout model. This method also provides a stronger TypeScript type for the general document result than the `beginAnalyzeDocument` method.
-- `beginReadDocument`, which uses the "prebuilt-read" model to extract textual elements, such as page words and lines in addition to text language information.
+- `beginAnalyzeDocument`, which extracts data from an input document file stream using a custom or prebuilt model given by its model ID. For information about the prebuilt models supported in all resources and their model IDs/outputs, please see [the service's documentation of the models][fr-models].
+- `beginAnalyzeDocumentFromUrl`, which performs the same function as `beginAnalyzeDocument`, but submits a publicly-accessible URL of a file instead of a file stream.
 
 ### `DocumentModelAdministrationClient`
 
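The two analysis entry points in the hunk above differ only in how the input is supplied. As a rough sketch of that dispatch (the `analyzeAny` helper is hypothetical and not part of the SDK; only `beginAnalyzeDocument` and `beginAnalyzeDocumentFromUrl` come from the diff above):

```javascript
// Hypothetical helper: dispatch to the appropriate analysis entry point.
// A string input is treated as a publicly-accessible URL; anything else
// (e.g. a readable stream or buffer) is submitted as a file body.
async function analyzeAny(client, modelId, input) {
  const poller =
    typeof input === "string"
      ? await client.beginAnalyzeDocumentFromUrl(modelId, input)
      : await client.beginAnalyzeDocument(modelId, input);
  return poller.pollUntilDone();
}
```

Both branches return the same kind of poller, so callers can poll the result identically regardless of the input form.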
@@ -147,16 +145,16 @@ const client = new DocumentAnalysisClient("<endpoint>", new DefaultAzureCredenti
 
 - `beginBuildModel` starts an operation to create a new document model from your own training data set. The created model can extract fields according to a custom schema. The training data are expected to be located in an Azure Storage container and organized according to a particular convention. See the [service's documentation on creating a training data set][fr-build-training-set] for a more detailed explanation of applying labels to a training data set.
 - `beginComposeModel` starts an operation to compose multiple models into a single model. When used for custom form recognition, the new composed model will first perform a classification of the input documents to determine which of its submodels is most appropriate.
-- `
-- `
+- `beginCopyModelTo` starts an operation to copy a custom model from one Form Recognizer resource to another (or even to the same Form Recognizer resource). It requires a `CopyAuthorization` from the target Form Recognizer resource, which can be generated using the `getCopyAuthorization` method.
+- `getResourceDetails` retrieves information about the Form Recognizer resource's limits, such as the number of custom models and the maximum number of models the resource can support.
 - `getModel`, `listModels`, and `deleteModel` enable managing models in the resource.
 - `getOperation` and `listOperations` enable viewing the status of model creation operations, even those operations that are ongoing or that have failed. Operations are retained for 24 hours.
 
-Please note that models can also be created using the Form Recognizer service's graphical user interface: [Form Recognizer
+Please note that models can also be created using the Form Recognizer service's graphical user interface: [Form Recognizer Studio (Preview)][fr-studio].
 
 Sample code snippets that illustrate the use of `DocumentModelAdministrationClient` to build a model can be found [below, in the "Build a Model" example section.](#build-a-model).
 
-### Long-
+### Long-running operations
 
 Long-running operations (LROs) are operations which consist of an initial request sent to the service to start an operation, followed by polling for a result at a certain interval to determine if the operation has completed and whether it failed or succeeded. Ultimately, the LRO will either fail with an error or produce a result.
 
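The model-copy flow introduced in the hunk above (`getCopyAuthorization` on the target, `beginCopyModelTo` on the source) can be sketched as follows. This is a minimal sketch: only the two method names come from the diff, and the exact parameter list of `beginCopyModelTo` is an assumption here.

```javascript
// Sketch: copy a custom model between two Form Recognizer resources.
// The *target* resource issues a CopyAuthorization; the *source* resource
// then uses it to start the long-running copy operation.
async function copyModel(sourceClient, targetClient, sourceModelId, targetModelId) {
  const authorization = await targetClient.getCopyAuthorization(targetModelId);
  const poller = await sourceClient.beginCopyModelTo(sourceModelId, authorization);
  return poller.pollUntilDone();
}
```

Because the authorization is minted by the target resource, the source resource never needs the target's API key.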
@@ -166,16 +164,17 @@ In Azure Form Recognizer, operations that create models (including copying and c
 
 The following section provides several JavaScript code snippets illustrating common patterns used in the Form Recognizer client libraries.
 
-- [Analyze a
-- [
-- [
-- [Use
-- [
-- [
+- [Analyze a document with a model ID](#analyze-a-document-with-a-model-id)
+- [Use prebuilt document models](#use-prebuilt-document-models)
+- [Use the "layout" prebuilt](#use-the-layout-prebuilt)
+- [Use the "document" prebuilt](#use-the-document-prebuilt)
+- [Use the "read" prebuilt](#use-the-read-prebuilt)
+- [Build a model](#build-a-model)
+- [Manage models](#manage-models)
 
-### Analyze a
+### Analyze a document with a model ID
 
-The `beginAnalyzeDocument` method can extract fields and table data from documents.
+The `beginAnalyzeDocument` method can extract fields and table data from documents. Analysis may use either a custom model, trained with your own data, or a prebuilt model provided by the service (see _[Use Prebuilt Models](#use-prebuilt-models)_ below). A custom model is tailored to your own documents, so it should only be used with documents of the same structure as one of the document types in the model (there may be multiple, such as in a composed model).
 
 ```javascript
 const { DocumentAnalysisClient, AzureKeyCredential } = require("@azure/ai-form-recognizer");
@@ -229,16 +228,116 @@ main().catch((err) => {
 });
 ```
 
-
+#### Analyze a document from a URL
 
-
+As an alternative to providing a readable stream, a publicly-accessible URL can be provided instead using the `beginAnalyzeDocumentFromUrl` method. "Publicly-accessible" means that URL sources must be accessible from the service's infrastructure (in other words, a private intranet URL, or URLs that use header- or certificate-based secrets, will not work, as the Form Recognizer service must be able to access the URL). However, the URL itself could encode a secret, such as an Azure Storage blob URL that contains a SAS token in the query parameters.
 
-
+### Use prebuilt document models
 
-
-const { DocumentAnalysisClient, AzureKeyCredential } = require("@azure/ai-form-recognizer");
+The `beginAnalyzeDocument` method also supports extracting fields from certain types of common documents such as receipts, invoices, business cards, identity documents, and more using prebuilt models provided by the Form Recognizer service. The prebuilt models may be provided either as model ID strings (the same as custom document models—see the _[other prebuilt models](#other-prebuilt-models)_ section below) or using a `DocumentModel` object. When using a `DocumentModel`, the Form Recognizer SDK for JavaScript provides a much stronger TypeScript type for the resulting extracted documents based on the model's schema, and it will be converted to use JavaScript naming conventions.
 
-
+<a id="prebuiltmodels-removed"></a>
+**Breaking Change Warning** ⚠️: In previous `4.0.0-beta` versions of the Azure Form Recognizer SDK for JavaScript, prebuilt `DocumentModel` objects were exported from the package through an object named `PrebuiltModels`. This object has been removed and replaced with the [`DocumentModel` samples][samples-prebuilt], which you may use as part of your own project. This change will enable us to continue to provide timely updates and ensure stability as the number of supported prebuilt models increases and as their capabilities are enhanced.
+
+Example `DocumentModel` objects for the current service API version (`2022-06-30-preview`) can be found in [the `prebuilt` samples directory][samples-prebuilt]. In the following example, we'll use the `PrebuiltReceiptModel` from the [`prebuilt-receipt.ts`] file in that directory.
+
+Since the main benefit of `DocumentModel`-based analysis is stronger TypeScript type constraints, the following sample is written in TypeScript using ECMAScript module syntax:
+
+```typescript
+import { DocumentAnalysisClient, AzureKeyCredential } from "@azure/ai-form-recognizer";
+
+// Copy the file from the above-linked sample directory so that it can be imported in this module
+import { PrebuiltReceiptModel } from "./prebuilt/prebuilt-receipt";
+
+import fs from "fs";
+
+async function main() {
+  const endpoint = "<cognitive services endpoint>";
+  const apiKey = "<api key>";
+  const path = "<path to your receipt document>"; // pdf/jpeg/png/tiff formats
+
+  const readStream = fs.createReadStream(path);
+
+  const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
+
+  // The PrebuiltReceiptModel `DocumentModel` instance encodes both the model ID and a stronger return type for the operation
+  const poller = await client.beginAnalyzeDocument(PrebuiltReceiptModel, readStream, {
+    onProgress: ({ status }) => {
+      console.log(`status: ${status}`);
+    },
+  });
+
+  const {
+    documents: [receiptDocument],
+  } = await poller.pollUntilDone();
+
+  // The fields of the document constitute the extracted receipt data.
+  const receipt = receiptDocument.fields;
+
+  if (receipt === undefined) {
+    throw new Error("Expected at least one receipt in analysis result.");
+  }
+
+  console.log(`Receipt data (${receiptDocument.docType})`);
+  console.log("  Merchant Name:", receipt.merchantName?.value);
+
+  // The items of the receipt are an example of a `DocumentArrayValue`
+  if (receipt.items !== undefined) {
+    console.log("Items:");
+    for (const { properties: item } of receipt.items.values) {
+      console.log("- Description:", item.description?.value);
+      console.log("  Total Price:", item.totalPrice?.value);
+    }
+  }
+
+  console.log("  Total:", receipt.total?.value);
+}
+
+main().catch((err) => {
+  console.error("The sample encountered an error:", err);
+});
+```
+
+Alternatively, as mentioned above, instead of using `PrebuiltReceiptModel`, which produces the stronger return type, the prebuilt receipt's model ID ("prebuilt-receipt") can be used, but the document fields will not be strongly typed in TypeScript, and the field names will generally be in "PascalCase" instead of "camelCase".
+
+#### **Other prebuilt models**
+
+You are not limited to receipts! There are a few prebuilt models to choose from, with more on the way. Each prebuilt model has its own set of supported fields:
+
+- Receipts, using [`PrebuiltReceiptModel`][samples-prebuilt-receipt] (as above) or the prebuilt receipt model ID `"prebuilt-receipt"`.
+- Business cards, using [`PrebuiltBusinessCardModel`][samples-prebuilt-businesscard] or its model ID `"prebuilt-businessCard"`.
+- Invoices, using [`PrebuiltInvoiceModel`][samples-prebuilt-invoice] or its model ID `"prebuilt-invoice"`.
+- Identity Documents (such as driver licenses and passports), using [`PrebuiltIdDocumentModel`][samples-prebuilt-iddocument] or its model ID `"prebuilt-idDocument"`.
+- W2 Tax Forms (United States), using [`PrebuiltTaxUsW2Model`][samples-prebuilt-tax.us.w2] or its model ID `"prebuilt-tax.us.w2"`.
+- Health Insurance Cards (United States), using [`PrebuiltHealthInsuranceCardUsModel`][samples-prebuilt-healthinsurancecard.us] or its model ID `"prebuilt-healthInsuranceCard.us"`.
+- Vaccination Cards (currently supports US COVID-19 vaccination cards), using [`PrebuiltVaccinationCardModel`][samples-prebuilt-vaccinationcard] or its model ID `"prebuilt-vaccinationCard"`.
+
+Each of the above prebuilt models produces `documents` (extracted instances of the model's field schema). There are also three prebuilt models that do not have field schemas and therefore do not produce `documents`. They are:
+
+- The prebuilt Layout model (see _[Use the "layout" prebuilt](#use-the-layout-prebuilt)_ below), which extracts information about basic layout (OCR) elements such as pages and tables.
+- The prebuilt General Document model (see _[Use the "document" prebuilt](#use-the-document-prebuilt)_ below), which adds key-value pairs (directed associations between page elements, such as labeled elements) to the information produced by the layout model.
+- The prebuilt Read model (see _[Use the "read" prebuilt](#use-the-read-prebuilt)_ below), which extracts only textual elements, such as page words and lines, along with information about the language of the document.
+
+For information about the fields of all of these models, see [the service's documentation of the available prebuilt models](https://aka.ms/azsdk/formrecognizer/models).
+
+The fields of all prebuilt models may also be accessed programmatically using the `getModel` method (by their model IDs) of `DocumentModelAdministrationClient` and inspecting the `docTypes` field in the result.
+
+### Use the "layout" prebuilt
+
+<a id="beginextractlayout-removed"></a>
+**Breaking Change Warning** ⚠️: In previous `4.0.0-beta` versions of the Azure Form Recognizer SDK for JavaScript, the prebuilt Layout model was provided by a custom method named `beginExtractLayout`. This method was removed and replaced with an example `DocumentModel` instance named [`PrebuiltLayoutModel`][samples-prebuilt-layout] for use with the same `beginAnalyzeDocument` method that is used to perform analysis with other prebuilt models. As previously, the model ID `"prebuilt-layout"` may still be used directly. This change will align the `prebuilt-layout` model with the other prebuilt models and enable us to continue to provide timely updates and ensure stability as the number of supported prebuilt models increases and as their capabilities are enhanced.
+
+The `"prebuilt-layout"` model extracts only the basic elements of the document, such as pages (which consist of text words/lines and selection marks), tables, and visual text styles along with their bounding regions and spans within the text content of the input documents. We provide a strongly-typed `DocumentModel` instance named [`PrebuiltLayoutModel`][samples-prebuilt-layout] that invokes this model, or as always its model ID `"prebuilt-layout"` may be used directly.
+
+Since the main benefit of `DocumentModel`-based analysis is stronger TypeScript type constraints, the following sample is written in TypeScript using ECMAScript module syntax:
+
+```typescript
+import { DocumentAnalysisClient, AzureKeyCredential } from "@azure/ai-form-recognizer";
+
+// Copy the above-linked `DocumentModel` file so that it may be imported in this module.
+import { PrebuiltLayoutModel } from "./prebuilt/prebuilt-layout";
+
+import fs from "fs";
 
 async function main() {
   const endpoint = "<cognitive services endpoint>";
@@ -248,14 +347,14 @@ async function main() {
   const readStream = fs.createReadStream(path);
 
   const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
-  const poller = await client.
+  const poller = await client.beginAnalyzeDocument(PrebuiltLayoutModel, readStream);
   const { pages, tables } = await poller.pollUntilDone();
 
-  for (const page of pages) {
+  for (const page of pages || []) {
     console.log(`- Page ${page.pageNumber}: (${page.width}x${page.height} ${page.unit})`);
   }
 
-  for (const table of tables) {
+  for (const table of tables || []) {
     console.log(`- Table (${table.columnCount}x${table.rowCount})`);
     for (const cell of table.cells) {
       console.log(`  cell [${cell.rowIndex},${cell.columnIndex}] "${cell.content}"`);
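The `|| []` guards added in this hunk reflect that result arrays such as `pages` and `tables` may be `undefined` when the analysis produced no elements of that kind. A minimal illustration of the same defensive pattern (the helper name is ours, not the SDK's):

```javascript
// Count table cells defensively: the `tables` array may be missing entirely
// from an analysis result, so iterate over a fallback empty array.
function countTableCells(result) {
  let count = 0;
  for (const table of result.tables || []) {
    count += table.cells.length;
  }
  return count;
}
```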
@@ -268,16 +367,22 @@ main().catch((err) => {
 });
 ```
 
-
+### Use the "document" prebuilt
 
-
+<a id="beginextractdocument-removed"></a>
+**Breaking Change Warning** ⚠️: In previous `4.0.0-beta` versions of the Azure Form Recognizer SDK for JavaScript, the prebuilt document model was provided by a custom method named `beginExtractGeneralDocument`. This method was removed and replaced with an example `DocumentModel` instance named [`PrebuiltDocumentModel`][samples-prebuilt-document] for use with the same `beginAnalyzeDocument` method that is used to perform analysis with other prebuilt models. As previously, the model ID `"prebuilt-document"` may still be used directly. This change will align the `prebuilt-document` model with the other prebuilt models and enable us to continue to provide timely updates and ensure stability as the number of supported prebuilt models increases and as their capabilities are enhanced.
 
-The `
+The `"prebuilt-document"` model extracts information about key-value pairs (directed associations between page elements, such as labeled fields) in addition to the properties produced by the layout extraction method. This prebuilt (general) document model provides similar functionality to the custom models trained without label information in previous iterations of the Form Recognizer service, but it is now provided as a prebuilt model that works with a wide variety of documents. We provide a strongly-typed `DocumentModel` instance named [`PrebuiltDocumentModel`][samples-prebuilt-document] that invokes this model, or as always its model ID `"prebuilt-document"` may be used directly.
 
-
-const { DocumentAnalysisClient, AzureKeyCredential } = require("@azure/ai-form-recognizer");
+Since the main benefit of `DocumentModel`-based analysis is stronger TypeScript type constraints, the following sample is written in TypeScript using ECMAScript module syntax:
 
-
+```typescript
+import { DocumentAnalysisClient, AzureKeyCredential } from "@azure/ai-form-recognizer";
+
+// Copy the above-linked `DocumentModel` file so that it may be imported in this module.
+import { PrebuiltDocumentModel } from "./prebuilt/prebuilt-document";
+
+import fs from "fs";
 
 async function main() {
   const endpoint = "<cognitive services endpoint>";
@@ -287,13 +392,13 @@ async function main() {
   const readStream = fs.createReadStream(path);
 
   const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
-  const poller = await client.
+  const poller = await client.beginAnalyzeDocument(PrebuiltDocumentModel, readStream);
 
   // `pages`, `tables` and `styles` are also available as in the "layout" example above, but for the sake of this
   // example we won't show them here.
-  const { keyValuePairs
+  const { keyValuePairs } = await poller.pollUntilDone();
 
-  if (keyValuePairs.length <= 0) {
+  if (!keyValuePairs || keyValuePairs.length <= 0) {
     console.log("No key-value pairs were extracted from the document.");
   } else {
     console.log("Key-Value Pairs:");
@@ -302,19 +407,6 @@ async function main() {
       console.log("  Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
     }
   }
-
-  if (entities.length <= 0) {
-    console.log("No entities were extracted from the document.");
-  } else {
-    console.log("Entities:");
-    for (const entity of entities) {
-      console.log(
-        `- "${entity.content}" ${entity.category} - ${entity.subCategory ?? "<none>"} (${
-          entity.confidence
-        })`
-      );
-    }
-  }
 }
 
 main().catch((err) => {
@@ -322,12 +414,25 @@ main().catch((err) => {
|
|
|
322
414
|
});
|
|
323
415
|
```
|
|
324
416
|
|
|
325
|
-
|
|
417
|
+
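Each entry in `keyValuePairs` associates a key element with an optional value element, each carrying a `content` string, along with a `confidence` score. As a small illustrative sketch (the interfaces below are simplified stand-ins of our own for the SDK's types, not the SDK types themselves), the extracted pairs can be flattened into a plain lookup object:

```typescript
// Simplified stand-ins for the SDK's key-value pair shapes (illustrative only;
// the real types also carry bounding regions and spans).
interface ContentElement {
  content: string;
}

interface KeyValuePair {
  key: ContentElement;
  value?: ContentElement;
  confidence: number;
}

// Flatten extracted key-value pairs into a plain object, keeping only pairs
// whose value is present and whose confidence meets a threshold.
function toFieldMap(pairs: KeyValuePair[], minConfidence = 0.5): Record<string, string> {
  const result: Record<string, string> = {};
  for (const { key, value, confidence } of pairs) {
    if (value !== undefined && confidence >= minConfidence) {
      result[key.content] = value.content;
    }
  }
  return result;
}
```

A real result from `pollUntilDone()` has richer elements, but the `content`/`confidence` access pattern shown in the sample above is the same.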
### Use the "read" prebuilt

<a id="beginextractdocument-removed"></a>

**Breaking Change Warning** ⚠️: In previous `4.0.0-beta` versions of the Azure Form Recognizer SDK for JavaScript, the prebuilt "read" model was provided by a custom method named `beginReadDocument`. That method has been removed and replaced with an example `DocumentModel` instance named [`PrebuiltReadModel`][samples-prebuilt-read] for use with the same `beginAnalyzeDocument` method that is used to perform analysis with other prebuilt models. As before, the model ID `"prebuilt-read"` may still be used directly. This change aligns the `prebuilt-read` model with the other prebuilt models and enables us to provide timely updates and ensure stability as the number of supported prebuilt models grows and their capabilities are enhanced.

The `"prebuilt-read"` model extracts textual information in a document, such as words and paragraphs, and analyzes the language and writing style (e.g. handwritten vs. typeset) of that text. We provide a strongly-typed `DocumentModel` instance named [`PrebuiltReadModel`][samples-prebuilt-read] that invokes this model, or, as always, its model ID `"prebuilt-read"` may be used directly.

Since the main benefit of `DocumentModel`-based analysis is stronger TypeScript type constraints, the following sample is written in TypeScript using ECMAScript module syntax:

```typescript
import { DocumentAnalysisClient, AzureKeyCredential } from "@azure/ai-form-recognizer";

// Copy the above-linked `DocumentModel` file so that it may be imported in this module.
import { PrebuiltReadModel } from "./prebuilt/prebuilt-read";

// See the samples directory for a definition of this helper function.
import { getTextOfSpans } from "./utils";

import fs from "fs";

async function main() {
  const endpoint = "<cognitive services endpoint>";
  const apiKey = "<api key>";
  const path = "<path to a document>"; // pdf/jpeg/png/tiff formats

  const readStream = fs.createReadStream(path);

  const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
  const poller = await client.beginAnalyzeDocument(PrebuiltReadModel, readStream);

  // The "prebuilt-read" model only extracts information about the textual content of the
  // document, such as page text elements, text styles, and information about the language of the text.
  const { content, pages, languages } = await poller.pollUntilDone();

  if (!pages || pages.length <= 0) {
    console.log("No pages were extracted from the document.");
  } else {
    console.log("Pages:");
    for (const page of pages) {
      console.log("- Page", page.pageNumber, `(unit: ${page.unit})`);
      console.log(`  ${page.width}x${page.height}, angle: ${page.angle}`);
      console.log(
        `  ${page.lines && page.lines.length} lines, ${page.words && page.words.length} words`
      );

      if (page.lines && page.lines.length > 0) {
        console.log("  Lines:");

        for (const line of page.lines) {
          console.log(`  - "${line.content}"`);
        }
      }
    }
  }

  if (!languages || languages.length <= 0) {
    console.log("No language spans were extracted from the document.");
  } else {
    console.log("Languages:");
    for (const languageEntry of languages) {
      console.log(
        `- Found language: ${languageEntry.locale} (confidence: ${languageEntry.confidence})`
      );

      for (const text of getTextOfSpans(content, languageEntry.spans)) {
        const escapedText = text.replace(/\r?\n/g, "\\n").replace(/"/g, '\\"');
        console.log(`  - "${escapedText}"`);
      }
    }
  }
}

main().catch((error) => {
  console.error("The sample encountered an error:", error);
});
```

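The `getTextOfSpans` helper imported above lives in the samples directory and is not part of the SDK's public API. A minimal sketch of such a helper, assuming spans are `{ offset, length }` character ranges into the full `content` string (the shape used by the service's span type), could look like this:

```typescript
// Assumed span shape: a character offset and length into the full `content`
// string returned by the analysis operation.
interface DocumentSpan {
  offset: number;
  length: number;
}

// Yield the substring of `content` covered by each span.
function* getTextOfSpans(content: string, spans: Iterable<DocumentSpan>): IterableIterator<string> {
  for (const span of spans) {
    yield content.slice(span.offset, span.offset + span.length);
  }
}
```

For example, the span `{ offset: 0, length: 5 }` over the content `"Hello world"` yields `"Hello"`.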
### Build a model
The SDK also supports creating models using the `DocumentModelAdministrationClient` class. Building a model from labeled training data creates a new model that is trained on your own documents, and the resulting model will be able to recognize values from the structures of those documents. The model building operation accepts a SAS-encoded URL to an Azure Storage Blob container that holds the training documents. The Form Recognizer service's infrastructure will read the files in the container and create a model based on their contents. For more details on how to create and structure a training data container, see the [Form Recognizer service's documentation for building a model][fr-build-model].
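Because the build operation takes a SAS-encoded container URL, a common failure mode is passing a plain container URL without the SAS token. As a quick local sanity check of our own (a heuristic, not an SDK or service API), a URL can be inspected for the `sig` query parameter that every SAS token carries:

```typescript
// Heuristic: Azure Storage SAS tokens always include a `sig` query parameter.
// This only catches obviously missing tokens; the service remains the
// authority on whether a URL actually grants sufficient access.
function looksLikeSasUrl(url: string): boolean {
  try {
    return new URL(url).searchParams.has("sig");
  } catch {
    // Not a parseable URL at all.
    return false;
  }
}
```

Such a check can fail fast with a clear message before the long-running build operation is started.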
While we provide these methods for programmatic model creation, the Form Recognizer service team has created an interactive web application, [Form Recognizer Studio (Preview)][fr-studio], that enables creating and managing models on the web.
For example, the following program builds a custom document model using a SAS-encoded URL to a pre-existing Azure Storage container:

```javascript
const { DocumentModelAdministrationClient, AzureKeyCredential } = require("@azure/ai-form-recognizer");

async function main() {
  const endpoint = "<cognitive services endpoint>";
  const apiKey = "<api key>";
  const containerSasUrl = "<SAS url to the blob container storing training documents>";

  const client = new DocumentModelAdministrationClient(endpoint, new AzureKeyCredential(apiKey));

  // You must provide the model ID. It can be any text that does not start with "prebuilt-".
  // For example, you could provide a randomly generated GUID using the "uuid" package.
  // The second parameter is the SAS-encoded URL to an Azure Storage container with the training documents.
  // The third parameter is the build mode: one of "template" (the only mode prior to 4.0.0-beta.3) or "neural".
  // See https://aka.ms/azsdk/formrecognizer/buildmode for more information about build modes.
  const poller = await client.beginBuildModel("<model ID>", containerSasUrl, "template", {
    // The model description is optional and can be any text.
    description: "This is my new model!",
    onProgress: ({ status }) => {
      console.log(`operation status: ${status}`);
    },
  });

  const model = await poller.pollUntilDone();

  console.log(`Model ID: ${model.modelId}`);
  console.log(`Description: ${model.description}`);
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
```

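The comment in the sample notes that a custom model ID must not start with `"prebuilt-"`. A tiny guard of our own (a local convention check, not part of the SDK) can catch this before the request is ever sent:

```typescript
// Model IDs beginning with "prebuilt-" are reserved for the service's own
// models, so reject them (and empty IDs) before calling the build operation.
// This is a local convention check; the service performs its own
// authoritative validation.
function isUsableCustomModelId(modelId: string): boolean {
  return modelId.length > 0 && !modelId.startsWith("prebuilt-");
}
```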
### Manage models
`DocumentModelAdministrationClient` also provides several methods for accessing and listing models. The following example shows how to iterate through the models in a Form Recognizer resource (this will include both custom models in the resource as well as prebuilt models that are common to all resources), get a model by ID, and delete a model.

```javascript
const {
  DocumentModelAdministrationClient,
  AzureKeyCredential,
} = require("@azure/ai-form-recognizer");

// ... (list the resource's models, then get and delete a model by its ID)
```

## Troubleshooting
### Form Recognizer errors
For information about the error messages and codes produced by the Form Recognizer service, please refer to [the service's error documentation][fr-errors].

[azure_portal_create_fr_resource]: https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer
[azure_cli_create_fr_resource]: https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows
[fr-labeling-tool]: https://aka.ms/azsdk/formrecognizer/labelingtool
[fr-studio]: https://formrecognizer.appliedai.azure.com/studio
[fr-build-training-set]: https://aka.ms/azsdk/formrecognizer/buildtrainingset
[fr-errors]: https://aka.ms/azsdk/formrecognizer/errors
[fr-models]: https://aka.ms/azsdk/formrecognizer/models
[samples-prebuilt]: https://github.com/azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/
[samples-prebuilt-businesscard]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-businessCard.ts
[samples-prebuilt-document]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-document.ts
[samples-prebuilt-healthinsurancecard]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-healthInsuranceCard.ts
[samples-prebuilt-iddocument]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-idDocument.ts
[samples-prebuilt-invoice]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-invoice.ts
[samples-prebuilt-layout]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-layout.ts
[samples-prebuilt-read]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-read.ts
[samples-prebuilt-receipt]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-receipt.ts
[samples-prebuilt-tax.us.w2]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-tax.us.w2.ts
[samples-prebuilt-vaccinationcard]: https://github.com/azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples-dev/prebuilt/prebuilt-vaccinationCard.ts
|