n8n-nodes-docx-filler 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +161 -0
- package/dist/DocxFiller/DocxFiller.node.d.ts +5 -0
- package/dist/DocxFiller/DocxFiller.node.js +467 -0
- package/dist/DocxFiller/docx.svg +5 -0
- package/package.json +63 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Rokodo

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,161 @@
# n8n-nodes-docx-filler

[npm](https://www.npmjs.com/package/n8n-nodes-docx-filler)
[MIT License](https://opensource.org/licenses/MIT)

n8n community node to automatically fill DOCX documents (French DC1, DC2, and AE administrative forms) with company data.

**Works as an AI Agent tool!**

## Features

- Fill DOCX templates with data from source documents
- Extract company data from filled documents
- Automatic field detection (SIRET, TVA, email, address, etc.)
- Checkbox state copying
- Compatible with n8n AI Agents (`usableAsTool: true`)

## Installation

### Via n8n Community Nodes (Recommended)

1. Go to **Settings > Community Nodes**
2. Select **Install**
3. Enter `n8n-nodes-docx-filler`
4. Click **Install**

### Via npm (self-hosted)

```bash
npm install n8n-nodes-docx-filler
```

### Enable AI Agent Tool Usage

Add this environment variable to your n8n instance:

```bash
N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
```

## Operations

### Fill Document

Fills a template document with data extracted from a source document.

| Parameter | Description |
|-----------|-------------|
| `sourceDocument` | Document containing company data (binary property or base64) |
| `templateDocument` | Empty template to fill (binary property or base64) |
| `outputProperty` | Binary property name for output (default: `data`) |
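
The filled document is returned under the binary property named by `outputProperty`, alongside a JSON summary of what was changed. A sketch of that summary, based on the node source (field values here are invented):

```json
{
  "success": true,
  "filledFields": ["siret: 123 456 789 00012", "email: contact@example.fr"],
  "modifiedCheckboxes": 2,
  "message": "Document rempli avec 2 champs et 2 checkboxes"
}
```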

### Extract Data

Extracts company data from a filled document.

| Parameter | Description |
|-----------|-------------|
| `document` | Document to analyze (binary property or base64) |
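
The output keys under `data` are the node's internal field names (`siret`, `tva_intra`, `email`, `telephone`, `adresse`, `nom_commercial`, `forme_juridique`, `capital`, `code_naf`), followed by a checkbox summary. A sketch with invented values:

```json
{
  "success": true,
  "data": {
    "siret": "123 456 789 00012",
    "email": "contact@example.fr",
    "forme_juridique": "SAS"
  },
  "checkboxes": {
    "total": 12,
    "checked": 3,
    "items": ["le candidat se présente seul", "..."]
  }
}
```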

### Analyze

Analyzes document structure for debugging.

| Parameter | Description |
|-----------|-------------|
| `document` | Document to analyze |
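
The result reports which known fields have an empty paragraph ready to be filled and previews the first 50 paragraphs. A sketch (paragraph text invented):

```json
{
  "success": true,
  "totalParagraphs": 180,
  "fillableFields": ["siret", "email", "adresse"],
  "structure": [
    { "index": 0, "text": "DC1 - Lettre de candidature", "hasCheckbox": false },
    { "index": 1, "text": "☐ Le candidat se présente seul", "hasCheckbox": true }
  ]
}
```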

## Supported Fields

The node automatically recognizes these fields in French administrative documents:

- **SIRET** - Company registration number
- **TVA Intracommunautaire** - EU VAT number
- **Email** - Email address
- **Téléphone** - Phone number
- **Adresse** - Postal address
- **Nom commercial** - Commercial name
- **Forme juridique** - Legal form
- **Capital social** - Share capital
- **Code NAF/APE** - Activity code

## Usage with AI Agent

This node is compatible with n8n AI Agents.

**Example agent prompt:**

```
You have access to the DOCX Filler tool to fill documents.

Use the "fill" operation with:
- sourceDocument: the document containing company data
- templateDocument: the empty form to fill

Execute directly without asking for confirmation.
```

## Example Workflow

```
[Read Binary File] → [DOCX Filler (Fill)] → [Write Binary File]
     (source)                                        (output)
                              ↑
                     [Read Binary File]
                         (template)
```

The node reads both documents from the same input item, so the source and template binaries must be merged onto one item (under distinct binary property names) or passed as base64 strings.
## Development

```bash
# Clone the repository
git clone https://github.com/rokodo-io/n8n-nodes-docx-filler.git
cd n8n-nodes-docx-filler

# Install dependencies
npm install

# Build
npm run build

# Watch mode
npm run dev

# Create package
npm pack
```

## Publishing to npm

```bash
# Login to npm
npm login

# Publish
npm publish
```

## Contributing

Contributions are welcome! Please open an issue or submit a pull request.

## License

[MIT](LICENSE)

## Author

**Rokodo** - [contact@rokodo.io](mailto:contact@rokodo.io)

## Links

- [n8n Community Nodes](https://docs.n8n.io/integrations/community-nodes/)
- [n8n Documentation](https://docs.n8n.io/)
- [GitHub Repository](https://github.com/rokodo-io/n8n-nodes-docx-filler)

---

Sources:

- [n8n Custom Node Documentation](https://docs.n8n.io/integrations/creating-nodes/overview/)
- [Creating Custom Node as AI Tool](https://community.n8n.io/t/creating-a-custom-node-tool/79651)
- [n8n Community Packages Allow Tool Usage](https://github.com/n8n-io/n8n/issues/12593)
package/dist/DocxFiller/DocxFiller.node.js
ADDED
@@ -0,0 +1,467 @@
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.DocxFiller = void 0;
const n8n_workflow_1 = require("n8n-workflow");
const pizzip_1 = __importDefault(require("pizzip"));
// Specific labels to look for in DC/AE documents
const LABEL_PATTERNS = [
    ['siret', ['numéro siret', 'n° siret', 'siret']],
    ['tva_intra', ['tva intracommunautaire', 'identification européen']],
    ['email', ['adresse électronique', 'courriel', 'e-mail']],
    ['telephone', ['numéros de téléphone', 'téléphone', 'télécopie']],
    ['adresse', ['adresse postale', 'siège social']],
    ['nom_commercial', ['nom commercial', 'dénomination sociale', 'raison sociale']],
    ['forme_juridique', ['forme juridique', 'statut juridique']],
    ['capital', ['capital social']],
    ['code_naf', ['code naf', 'code ape']],
];
const CHECKBOX_UNCHECKED = ['☐', '□', '▢'];
const CHECKBOX_CHECKED = ['☒', '☑', '▣'];
function normalize(text) {
    return text.replace(/^[■▪●○•\s▪]+/, '').toLowerCase().trim();
}
function hasCheckbox(text) {
    return [...CHECKBOX_UNCHECKED, ...CHECKBOX_CHECKED].some(c => text.includes(c));
}
function isChecked(text) {
    return CHECKBOX_CHECKED.some(c => text.includes(c));
}
function isLabel(text) {
    return text.startsWith('■') || text.startsWith('[') || text.startsWith('▪');
}
function isValue(text) {
    if (!text)
        return false;
    if (isLabel(text))
        return false;
    if (text.length > 200)
        return false;
    return true;
}
function replaceCheckboxState(text, checked) {
    let result = text;
    if (checked) {
        for (const c of CHECKBOX_UNCHECKED) {
            result = result.split(c).join('☒');
        }
    }
    else {
        for (const c of CHECKBOX_CHECKED) {
            result = result.split(c).join('☐');
        }
    }
    return result;
}
function extractParagraphs(xml) {
    const paragraphs = [];
    // Regex to extract the content of <w:p>...</w:p> paragraphs
    const pRegex = /<w:p[^>]*>([\s\S]*?)<\/w:p>/g;
    let match;
    while ((match = pRegex.exec(xml)) !== null) {
        // Collect all the text from the <w:t> nodes in this paragraph
        const pContent = match[1];
        const textParts = [];
        const tRegex = /<w:t[^>]*>([^<]*)<\/w:t>/g;
        let tMatch;
        while ((tMatch = tRegex.exec(pContent)) !== null) {
            textParts.push(tMatch[1]);
        }
        paragraphs.push(textParts.join(''));
    }
    return paragraphs;
}
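// Walk the known label patterns and, for each label paragraph found, take the first
// non-empty, non-label paragraph among the next four paragraphs as its value.
// Each paragraph index is consumed at most once, so two labels never claim the same value.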
function extractAllFields(paragraphs) {
    const extracted = new Map();
    const usedIndices = new Set();
    for (const [fieldName, patterns] of LABEL_PATTERNS) {
        if (extracted.has(fieldName))
            continue;
        for (let i = 0; i < paragraphs.length; i++) {
            if (usedIndices.has(i))
                continue;
            const textNorm = normalize(paragraphs[i]);
            const matched = patterns.some(pattern => textNorm.includes(pattern.toLowerCase()));
            if (matched) {
                for (let j = i + 1; j < Math.min(i + 5, paragraphs.length); j++) {
                    if (usedIndices.has(j))
                        continue;
                    const valueText = paragraphs[j].trim();
                    if (isValue(valueText)) {
                        extracted.set(fieldName, {
                            value: valueText,
                            labelIndex: i,
                            valueIndex: j,
                        });
                        usedIndices.add(j);
                        break;
                    }
                    else if (isLabel(valueText)) {
                        break;
                    }
                }
                usedIndices.add(i);
                break;
            }
        }
    }
    return extracted;
}
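// A checkbox paragraph's "signature" is its text with the checkbox glyphs stripped,
// lowercased and truncated to 60 characters; fillDocumentXml uses it to match the
// same checkbox between the source and template documents.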
function extractCheckboxes(paragraphs) {
    const checkboxes = [];
    for (let i = 0; i < paragraphs.length; i++) {
        const text = paragraphs[i].trim();
        if (hasCheckbox(text)) {
            const signature = text.replace(/[☐☒☑□▢▣]/g, '').trim().toLowerCase().slice(0, 60);
            checkboxes.push({
                index: i,
                text,
                signature,
                checked: isChecked(text),
            });
        }
    }
    return checkboxes;
}
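// Mirror of extractAllFields for the template side: locate each known label and the
// first empty paragraph that follows it, which is where the value will be written.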
function findFillablePositions(paragraphs) {
    const positions = new Map();
    const usedIndices = new Set();
    for (const [fieldName, patterns] of LABEL_PATTERNS) {
        if (positions.has(fieldName))
            continue;
        for (let i = 0; i < paragraphs.length; i++) {
            if (usedIndices.has(i))
                continue;
            const textNorm = normalize(paragraphs[i]);
            const matched = patterns.some(pattern => textNorm.includes(pattern.toLowerCase()));
            if (matched) {
                for (let j = i + 1; j < Math.min(i + 5, paragraphs.length); j++) {
                    if (usedIndices.has(j))
                        continue;
                    const valueText = paragraphs[j].trim();
                    if (!valueText) {
                        positions.set(fieldName, {
                            labelIndex: i,
                            fillIndex: j,
                            labelText: paragraphs[i].slice(0, 50),
                        });
                        usedIndices.add(j);
                        break;
                    }
                    else if (isLabel(valueText)) {
                        break;
                    }
                }
                usedIndices.add(i);
                break;
            }
        }
    }
    return positions;
}
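// Rewrite the template XML in place: inject each extracted value into the empty
// paragraph found for its field, keeping paragraph offsets in sync after every
// replacement, then copy checkbox states from the source for paragraphs whose
// text signature matches.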
function fillDocumentXml(templateXml, sourceData, templatePositions, sourceCheckboxes, templateCheckboxes) {
    let xml = templateXml;
    const filledFields = [];
    let modifiedCheckboxes = 0;
    // Fill the text fields:
    // the empty paragraphs have to be replaced with the values
    const pRegex = /<w:p[^>]*>([\s\S]*?)<\/w:p>/g;
    const paragraphs = [];
    let match;
    while ((match = pRegex.exec(templateXml)) !== null) {
        paragraphs.push({
            match: match[0],
            start: match.index,
            end: match.index + match[0].length,
        });
    }
    // For each position to fill
    for (const [fieldName, position] of templatePositions) {
        if (sourceData.has(fieldName)) {
            const value = sourceData.get(fieldName).value;
            const pIndex = position.fillIndex;
            if (pIndex < paragraphs.length) {
                const p = paragraphs[pIndex];
                // Build a new paragraph containing the value:
                // keep the original paragraph's style but replace its content
                const newP = p.match.replace(/(<w:p[^>]*>)([\s\S]*?)(<\/w:p>)/, `$1<w:r><w:t>${escapeXml(value)}</w:t></w:r>$3`);
                xml = xml.slice(0, p.start) + newP + xml.slice(p.end);
                // Adjust the offsets for the following replacements
                const diff = newP.length - p.match.length;
                for (let i = pIndex + 1; i < paragraphs.length; i++) {
                    paragraphs[i].start += diff;
                    paragraphs[i].end += diff;
                }
                paragraphs[pIndex].match = newP;
                paragraphs[pIndex].end = paragraphs[pIndex].start + newP.length;
                filledFields.push(`${fieldName}: ${value}`);
            }
        }
    }
    // Copy the checkbox states
    for (const templateCb of templateCheckboxes) {
        for (const sourceCb of sourceCheckboxes) {
            if (templateCb.signature === sourceCb.signature) {
                if (sourceCb.checked !== templateCb.checked) {
                    const newText = replaceCheckboxState(templateCb.text, sourceCb.checked);
                    // Replace in the XML
                    xml = xml.split(escapeXml(templateCb.text)).join(escapeXml(newText));
                    modifiedCheckboxes++;
                }
                break;
            }
        }
    }
    return { xml, filledFields, modifiedCheckboxes };
}
function escapeXml(text) {
    return text
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&apos;');
}
class DocxFiller {
    constructor() {
        this.description = {
            displayName: 'DOCX Filler',
            name: 'docxFiller',
            icon: 'file:docx.svg',
            group: ['transform'],
            version: 1,
            subtitle: '={{$parameter["operation"]}}',
            description: 'Remplit automatiquement des documents DOCX (DC1, DC2, AE) avec les données entreprise',
            defaults: {
                name: 'DOCX Filler',
            },
            // @ts-ignore - Required for AI agent tool usage
            usableAsTool: true,
            inputs: ['main'],
            outputs: ['main'],
            properties: [
                {
                    displayName: 'Operation',
                    name: 'operation',
                    type: 'options',
                    noDataExpression: true,
                    options: [
                        {
                            name: 'Fill Document',
                            value: 'fill',
                            description: 'Remplit un template avec les données d\'un document source',
                            action: 'Fill document',
                        },
                        {
                            name: 'Extract Data',
                            value: 'extract',
                            description: 'Extrait les données entreprise d\'un document rempli',
                            action: 'Extract data',
                        },
                        {
                            name: 'Analyze',
                            value: 'analyze',
                            description: 'Analyse la structure d\'un document',
                            action: 'Analyze document',
                        },
                    ],
                    default: 'fill',
                },
                // Fill operation
                {
                    displayName: 'Source Document',
                    name: 'sourceDocument',
                    type: 'string',
                    default: '',
                    required: true,
                    displayOptions: {
                        show: {
                            operation: ['fill'],
                        },
                    },
                    description: 'Document source contenant les données entreprise (binary property name ou base64)',
                },
                {
                    displayName: 'Template Document',
                    name: 'templateDocument',
                    type: 'string',
                    default: '',
                    required: true,
                    displayOptions: {
                        show: {
                            operation: ['fill'],
                        },
                    },
                    description: 'Document template vide à remplir (binary property name ou base64)',
                },
                {
                    displayName: 'Output Property',
                    name: 'outputProperty',
                    type: 'string',
                    default: 'data',
                    displayOptions: {
                        show: {
                            operation: ['fill'],
                        },
                    },
                    description: 'Nom de la propriété binary pour le document de sortie',
                },
                // Extract operation
                {
                    displayName: 'Document',
                    name: 'document',
                    type: 'string',
                    default: '',
                    required: true,
                    displayOptions: {
                        show: {
                            operation: ['extract', 'analyze'],
                        },
                    },
                    description: 'Document à analyser (binary property name ou base64)',
                },
            ],
        };
    }
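    // execute() dispatches on the selected operation for every incoming item.
    // Document parameters are resolved either as a binary property name on the
    // item or, failing that, decoded as a base64 string.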
    async execute() {
        var _a, _b, _c, _d;
        const items = this.getInputData();
        const returnData = [];
        const operation = this.getNodeParameter('operation', 0);
        for (let i = 0; i < items.length; i++) {
            try {
                if (operation === 'fill') {
                    const sourceDocParam = this.getNodeParameter('sourceDocument', i);
                    const templateDocParam = this.getNodeParameter('templateDocument', i);
                    const outputProperty = this.getNodeParameter('outputProperty', i);
                    // Load the documents (from binary or base64)
                    let sourceBuffer;
                    let templateBuffer;
                    // Source document
                    if (items[i].binary && items[i].binary[sourceDocParam]) {
                        sourceBuffer = await this.helpers.getBinaryDataBuffer(i, sourceDocParam);
                    }
                    else {
                        // Assume base64
                        sourceBuffer = Buffer.from(sourceDocParam, 'base64');
                    }
                    // Template document
                    if (items[i].binary && items[i].binary[templateDocParam]) {
                        templateBuffer = await this.helpers.getBinaryDataBuffer(i, templateDocParam);
                    }
                    else {
                        // Assume base64
                        templateBuffer = Buffer.from(templateDocParam, 'base64');
                    }
                    // Load the DOCX files with PizZip
                    const sourceZip = new pizzip_1.default(sourceBuffer);
                    const templateZip = new pizzip_1.default(templateBuffer);
                    const sourceXml = ((_a = sourceZip.file('word/document.xml')) === null || _a === void 0 ? void 0 : _a.asText()) || '';
                    const templateXml = ((_b = templateZip.file('word/document.xml')) === null || _b === void 0 ? void 0 : _b.asText()) || '';
                    // Extract the paragraphs
                    const sourceParagraphs = extractParagraphs(sourceXml);
                    const templateParagraphs = extractParagraphs(templateXml);
                    // Extract the data
                    const sourceData = extractAllFields(sourceParagraphs);
                    const sourceCheckboxes = extractCheckboxes(sourceParagraphs);
                    const templatePositions = findFillablePositions(templateParagraphs);
                    const templateCheckboxes = extractCheckboxes(templateParagraphs);
                    // Fill the document
                    const { xml: filledXml, filledFields, modifiedCheckboxes } = fillDocumentXml(templateXml, sourceData, templatePositions, sourceCheckboxes, templateCheckboxes);
                    // Update the ZIP
                    templateZip.file('word/document.xml', filledXml);
                    const outputBuffer = templateZip.generate({
                        type: 'nodebuffer',
                        compression: 'DEFLATE',
                    });
                    // Build the result
                    const binaryData = await this.helpers.prepareBinaryData(outputBuffer, 'document_rempli.docx', 'application/vnd.openxmlformats-officedocument.wordprocessingml.document');
                    returnData.push({
                        json: {
                            success: true,
                            filledFields,
                            modifiedCheckboxes,
                            message: `Document rempli avec ${filledFields.length} champs et ${modifiedCheckboxes} checkboxes`,
                        },
                        binary: {
                            [outputProperty]: binaryData,
                        },
                    });
                }
                else if (operation === 'extract') {
                    const documentParam = this.getNodeParameter('document', i);
                    let docBuffer;
                    if (items[i].binary && items[i].binary[documentParam]) {
                        docBuffer = await this.helpers.getBinaryDataBuffer(i, documentParam);
                    }
                    else {
                        docBuffer = Buffer.from(documentParam, 'base64');
                    }
                    const zip = new pizzip_1.default(docBuffer);
                    const xml = ((_c = zip.file('word/document.xml')) === null || _c === void 0 ? void 0 : _c.asText()) || '';
                    const paragraphs = extractParagraphs(xml);
                    const data = extractAllFields(paragraphs);
                    const checkboxes = extractCheckboxes(paragraphs);
                    const extractedData = {};
                    for (const [key, value] of data) {
                        extractedData[key] = value.value;
                    }
                    const checkedBoxes = checkboxes.filter(cb => cb.checked);
                    returnData.push({
                        json: {
                            success: true,
                            data: extractedData,
                            checkboxes: {
                                total: checkboxes.length,
                                checked: checkedBoxes.length,
                                items: checkedBoxes.map(cb => cb.signature),
                            },
                        },
                    });
                }
                else if (operation === 'analyze') {
                    const documentParam = this.getNodeParameter('document', i);
                    let docBuffer;
                    if (items[i].binary && items[i].binary[documentParam]) {
                        docBuffer = await this.helpers.getBinaryDataBuffer(i, documentParam);
                    }
                    else {
                        docBuffer = Buffer.from(documentParam, 'base64');
                    }
                    const zip = new pizzip_1.default(docBuffer);
                    const xml = ((_d = zip.file('word/document.xml')) === null || _d === void 0 ? void 0 : _d.asText()) || '';
                    const paragraphs = extractParagraphs(xml);
                    const positions = findFillablePositions(paragraphs);
                    returnData.push({
                        json: {
                            success: true,
                            totalParagraphs: paragraphs.length,
                            fillableFields: Array.from(positions.keys()),
                            structure: paragraphs.slice(0, 50).map((p, idx) => ({
                                index: idx,
                                text: p.slice(0, 100),
                                hasCheckbox: hasCheckbox(p),
                            })),
                        },
                    });
                }
            }
            catch (error) {
                if (this.continueOnFail()) {
                    returnData.push({
                        json: {
                            success: false,
                            error: error.message,
                        },
                    });
                    continue;
                }
                throw new n8n_workflow_1.NodeOperationError(this.getNode(), error, { itemIndex: i });
            }
        }
        return [returnData];
    }
}
exports.DocxFiller = DocxFiller;
package/dist/DocxFiller/docx.svg
ADDED
@@ -0,0 +1,5 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 48 48" width="48" height="48">
  <path fill="#2196F3" d="M6 6h28l8 8v28c0 1.1-.9 2-2 2H6c-1.1 0-2-.9-2-2V8c0-1.1.9-2 2-2z"/>
  <path fill="#1565C0" d="M34 6l8 8h-6c-1.1 0-2-.9-2-2V6z"/>
  <text x="24" y="32" text-anchor="middle" fill="white" font-family="Arial" font-size="10" font-weight="bold">DOCX</text>
</svg>
package/package.json
ADDED
@@ -0,0 +1,63 @@
{
  "name": "n8n-nodes-docx-filler",
  "version": "1.0.0",
  "description": "n8n node to automatically fill DOCX documents (French DC1, DC2, AE forms) with company data. Works as AI Agent tool.",
  "keywords": [
    "n8n-community-node-package",
    "n8n",
    "n8n-node",
    "docx",
    "document",
    "fill",
    "form",
    "DC1",
    "DC2",
    "AE",
    "french",
    "company",
    "ai-tool"
  ],
  "license": "MIT",
  "homepage": "https://github.com/rokodo-io/n8n-nodes-docx-filler",
  "author": {
    "name": "Rokodo",
    "email": "contact@rokodo.io"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/rokodo-io/n8n-nodes-docx-filler.git"
  },
  "bugs": {
    "url": "https://github.com/rokodo-io/n8n-nodes-docx-filler/issues"
  },
  "main": "index.js",
  "scripts": {
    "build": "tsc && gulp build:icons",
    "dev": "tsc --watch",
    "format": "prettier nodes --write",
    "lint": "eslint nodes --ext .ts --fix",
    "prepublishOnly": "npm run build"
  },
  "files": [
    "dist"
  ],
  "n8n": {
    "n8nNodesApiVersion": 1,
    "nodes": [
      "dist/DocxFiller/DocxFiller.node.js"
    ]
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "gulp": "^4.0.2",
    "n8n-workflow": "^1.0.0",
    "typescript": "^5.0.0"
  },
  "dependencies": {
    "pizzip": "^3.1.7",
    "docxtemplater": "^3.49.0"
  },
  "peerDependencies": {
    "n8n-workflow": "*"
  }
}