markdown-to-confluence 0.2.3__py3-none-any.whl → 0.2.5__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/METADATA +9 -1
- markdown_to_confluence-0.2.5.dist-info/RECORD +21 -0
- {markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/WHEEL +1 -1
- md2conf/__init__.py +1 -1
- md2conf/__main__.py +11 -0
- md2conf/api.py +9 -1
- md2conf/application.py +47 -15
- md2conf/converter.py +50 -23
- md2conf/emoji.py +8 -0
- md2conf/matcher.py +8 -0
- md2conf/mermaid.py +11 -4
- md2conf/processor.py +19 -8
- md2conf/properties.py +8 -0
- md2conf/util.py +8 -0
- markdown_to_confluence-0.2.3.dist-info/RECORD +0 -21
- {markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/LICENSE +0 -0
- {markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/entry_points.txt +0 -0
- {markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/top_level.txt +0 -0
- {markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/zip-safe +0 -0

{markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/METADATA
CHANGED

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: markdown-to-confluence
-Version: 0.2.3
+Version: 0.2.5
 Summary: Publish Markdown files to Confluence wiki
 Home-page: https://github.com/hunyadi/md2conf
 Author: Levente Hunyadi

@@ -26,6 +26,8 @@ Requires-Dist: types-lxml>=2024.8.7
 Requires-Dist: markdown>=3.6
 Requires-Dist: types-markdown>=3.6
 Requires-Dist: pymdown-extensions>=10.9
+Requires-Dist: pyyaml>=6.0
+Requires-Dist: types-PyYAML>=6.0
 Requires-Dist: requests>=2.32
 Requires-Dist: types-requests>=2.32

@@ -144,6 +146,12 @@ Provide generated-by prompt text in the Markdown file with a tag:

 Alternatively, use the `--generated-by GENERATED_BY` option. The tag takes precedence.

+### Publishing a directory
+
+*md2conf* allows you to convert and publish a directory of Markdown files rather than a single Markdown file if you pass a directory as `mdpath`. This will traverse the specified directory recursively, and synchronize each Markdown file.
+
+If a Markdown file doesn't yet pair up with a Confluence page, *md2conf* creates a new page and assigns a parent. Parent-child relationships are reflected in the navigation panel in Confluence. You can set a root page ID with the command-line option `-r`, which constitutes the topmost parent. (This could correspond to the landing page of your Confluence space. The Confluence page ID is always revealed when you edit a page.) Whenever a directory contains the file `index.md` or `README.md`, this page becomes the future parent page, and all Markdown files in this directory (and possibly nested directories) become its child pages (unless they already have a page ID). However, if an `index.md` or `README.md` file is subsequently found in one of the nested directories, it becomes the parent page of that directory, and any of its subdirectories.
+
 ### Ignoring files

 Skip files in a directory with rules defined in `.mdignore`. Each rule should occupy a single line. Rules follow the syntax of [fnmatch](https://docs.python.org/3/library/fnmatch.html#fnmatch.fnmatch). Specifically, `?` matches any single character, and `*` matches zero or more characters. For example, use `up-*.md` to exclude Markdown files that start with `up-`. Lines that start with `#` are treated as comments.
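
The `.mdignore` rules described in the README excerpt above follow `fnmatch` semantics. A minimal, standalone sketch of how such rules could be evaluated (the rule set and file names below are invented for illustration; this is not md2conf's actual `Matcher`):

```python
from fnmatch import fnmatch

# hypothetical .mdignore contents: one fnmatch rule per line, '#' starts a comment
MDIGNORE = """\
# work-in-progress notes
up-*.md
draft-?.md
"""

rules = [
    line.strip()
    for line in MDIGNORE.splitlines()
    if line.strip() and not line.lstrip().startswith("#")
]

def is_ignored(name: str) -> bool:
    "Returns True if any .mdignore rule matches the file name."
    return any(fnmatch(name, rule) for rule in rules)

print(is_ignored("up-migration.md"))  # True
print(is_ignored("draft-1.md"))       # True
print(is_ignored("index.md"))         # False
```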

markdown_to_confluence-0.2.5.dist-info/RECORD
ADDED

@@ -0,0 +1,21 @@
+md2conf/__init__.py,sha256=0eak9lvskuCqGJnGeno6SHoCiBFAX5IQLHVBx1LV0w8,402
+md2conf/__main__.py,sha256=6iOI28W_d71tlnCMFpZwvkBmBt5-HazlZsz69gS4Oak,6894
+md2conf/api.py,sha256=EZSHbuH5O9fPyW7iLAX0Fqw8njXmvd6sEbgseP-eUUc,16498
+md2conf/application.py,sha256=hmfLiofGulN8zUw2uXuueohCkDh978sqLkoUot928qM,8796
+md2conf/converter.py,sha256=8X8tNELqwAaZYSVvczJl_ZpJL9tu2ImCBXaQBQvGgeM,34413
+md2conf/emoji.py,sha256=w9oiOIxzObAE7HTo3f6aETT1_D3t3yZwr88ynU4ENm0,1924
+md2conf/entities.dtd,sha256=M6NzqL5N7dPs_eUA_6sDsiSLzDaAacrx9LdttiufvYU,30215
+md2conf/matcher.py,sha256=mYMltZOLypK4O-SJugLgicOwUMem67hiNLg_kPFoJkU,3583
+md2conf/mermaid.py,sha256=Tsibd1aOn4hRYv6emQg0hrZMPTkflIeXHVbZ7nQ5lSc,2108
+md2conf/processor.py,sha256=tUt5D4_D3uhofg2Bn23owBJmkVHj4tSll0zI95J6cdk,4243
+md2conf/properties.py,sha256=iVIc0h0XtS3Y2LCywX1C9cvmVQ0WljOMt8pl2MDMVCI,1990
+md2conf/puppeteer-config.json,sha256=-dMTAN_7kNTGbDlfXzApl0KJpAWna9YKZdwMKbpOb60,159
+md2conf/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+md2conf/util.py,sha256=ftf60MiW7S7rW45ipWX6efP_Sv2F2qpyIDHrGA0cBiw,743
+markdown_to_confluence-0.2.5.dist-info/LICENSE,sha256=Pv43so2bPfmKhmsrmXFyAvS7M30-1i1tzjz6-dfhyOo,1077
+markdown_to_confluence-0.2.5.dist-info/METADATA,sha256=E7j_aFJ7rT4SOpoUIa40G2QJL_7PjuXBA5JvdANRIdc,12764
+markdown_to_confluence-0.2.5.dist-info/WHEEL,sha256=P9jw-gEje8ByB7_hXoICnHtVCrEwMQh-630tKvQWehc,91
+markdown_to_confluence-0.2.5.dist-info/entry_points.txt,sha256=F1zxa1wtEObtbHS-qp46330WVFLHdMnV2wQ-ZorRmX0,50
+markdown_to_confluence-0.2.5.dist-info/top_level.txt,sha256=_FJfl_kHrHNidyjUOuS01ngu_jDsfc-ZjSocNRJnTzU,8
+markdown_to_confluence-0.2.5.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
+markdown_to_confluence-0.2.5.dist-info/RECORD,,
md2conf/__init__.py
CHANGED

@@ -5,7 +5,7 @@ Parses Markdown files, converts Markdown content into the Confluence Storage Format (XHTML), and invokes
 Confluence API endpoints to upload images and content.
 """

-__version__ = "0.2.3"
+__version__ = "0.2.5"
 __author__ = "Levente Hunyadi"
 __copyright__ = "Copyright 2022-2024, Levente Hunyadi"
 __license__ = "MIT"
md2conf/__main__.py
CHANGED

@@ -1,3 +1,14 @@
+"""
+Publish Markdown files to Confluence wiki.
+
+Parses Markdown files, converts Markdown content into the Confluence Storage Format (XHTML), and invokes
+Confluence API endpoints to upload images and content.
+
+Copyright 2022-2024, Levente Hunyadi
+
+:see: https://github.com/hunyadi/md2conf
+"""
+
 import argparse
 import logging
 import os.path
md2conf/api.py
CHANGED

@@ -1,3 +1,11 @@
+"""
+Publish Markdown files to Confluence wiki.
+
+Copyright 2022-2024, Levente Hunyadi
+
+:see: https://github.com/hunyadi/md2conf
+"""
+
 import io
 import json
 import logging

@@ -491,7 +499,7 @@ class ConfluenceSession:
         page_id = self.page_exists(title)

         if page_id is not None:
-            LOGGER.debug("Retrieving existing page: %…
+            LOGGER.debug("Retrieving existing page: %s", page_id)
             return self.get_page(page_id)
         else:
             LOGGER.debug("Creating new page with title: %s", title)
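
Several hunks in this release (here and in `application.py`, `converter.py`, `mermaid.py`, and `processor.py`) normalize log calls to `%`-style lazy formatting, where the logging module interpolates arguments only when the record is actually emitted. A minimal illustration with a made-up page ID:

```python
import logging

logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger("md2conf.example")

page_id = "123456"  # made-up value

# lazy: the format string is interpolated only if DEBUG is enabled for this logger
LOGGER.debug("Retrieving existing page: %s", page_id)

# eager: the f-string is formatted even though the message is discarded at INFO level
LOGGER.debug(f"Retrieving existing page: {page_id}")
```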
md2conf/application.py
CHANGED

@@ -1,8 +1,18 @@
+"""
+Publish Markdown files to Confluence wiki.
+
+Copyright 2022-2024, Levente Hunyadi
+
+:see: https://github.com/hunyadi/md2conf
+"""
+
 import logging
 import os.path
 from pathlib import Path
 from typing import Dict, List, Optional

+import yaml
+
 from .api import ConfluencePage, ConfluenceSession
 from .converter import (
     ConfluenceDocument,

@@ -10,6 +20,7 @@ from .converter import (
     ConfluencePageMetadata,
     ConfluenceQualifiedID,
     attachment_name,
+    extract_frontmatter,
     extract_qualified_id,
     read_qualified_id,
 )

@@ -33,6 +44,7 @@ class Application:
     def synchronize(self, path: Path) -> None:
         "Synchronizes a single Markdown page or a directory of Markdown pages."

+        path = path.resolve(True)
         if path.is_dir():
             self.synchronize_directory(path)
         elif path.is_file():

@@ -43,12 +55,14 @@
     def synchronize_page(self, page_path: Path) -> None:
         "Synchronizes a single Markdown page with Confluence."

+        page_path = page_path.resolve(True)
         self._synchronize_page(page_path, {})

     def synchronize_directory(self, local_dir: Path) -> None:
         "Synchronizes a directory of Markdown pages with Confluence."

-        LOGGER.info(…
+        LOGGER.info("Synchronizing directory: %s", local_dir)
+        local_dir = local_dir.resolve(True)

         # Step 1: build index of all page metadata
         page_metadata: Dict[Path, ConfluencePageMetadata] = {}

@@ -58,7 +72,7 @@
             else None
         )
         self._index_directory(local_dir, root_id, page_metadata)
-        LOGGER.info(…
+        LOGGER.info("Indexed %d page(s)", len(page_metadata))

         # Step 2: convert each page
         for page_path in page_metadata.keys():

@@ -71,7 +85,7 @@
     ) -> None:
         base_path = page_path.parent

-        LOGGER.info(…
+        LOGGER.info("Synchronizing page: %s", page_path)
         document = ConfluenceDocument(page_path, self.options, page_metadata)

         if document.id.space_key:

@@ -88,7 +102,7 @@
     ) -> None:
         "Indexes Markdown files in a directory recursively."

-        LOGGER.info(…
+        LOGGER.info("Indexing directory: %s", local_dir)

         matcher = Matcher(MatcherOptions(source=".mdignore", extension="md"), local_dir)
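
The `synchronize`, `synchronize_page`, and `synchronize_directory` entry points above now call `path.resolve(True)` up front, so relative paths are normalized and a missing input fails fast. A quick illustration (the file name is made up):

```python
from pathlib import Path

p = Path("docs/../docs/index.md")

# resolve() normalizes the ".." segment; passing strict=True (positionally here,
# as in the diff) makes it raise FileNotFoundError if the target does not exist
try:
    print(p.resolve(True))
except FileNotFoundError:
    print("no such file:", p)
```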
@@ -99,27 +113,35 @@
                 continue

             if entry.is_file():
-                files.append(…
+                files.append(Path(local_dir) / entry.name)
             elif entry.is_dir():
-                directories.append(…
+                directories.append(Path(local_dir) / entry.name)

         # make page act as parent node in Confluence
-        …
-        if "index.md" in files:
-            …
-        elif "README.md" in files:
-            …
+        parent_doc: Optional[Path] = None
+        if (Path(local_dir) / "index.md") in files:
+            parent_doc = Path(local_dir) / "index.md"
+        elif (Path(local_dir) / "README.md") in files:
+            parent_doc = Path(local_dir) / "README.md"
+
+        if parent_doc is not None:
+            files.remove(parent_doc)

-            …
+            metadata = self._get_or_create_page(parent_doc, root_id)
+            LOGGER.debug("Indexed parent %s with metadata: %s", parent_doc, metadata)
+            page_metadata[parent_doc] = metadata
+
+            parent_id = read_qualified_id(parent_doc) or root_id
+        else:
             parent_id = root_id

         for doc in files:
             metadata = self._get_or_create_page(doc, parent_id)
-            LOGGER.debug(…
+            LOGGER.debug("Indexed %s with metadata: %s", doc, metadata)
             page_metadata[doc] = metadata

         for directory in directories:
-            self._index_directory(…
+            self._index_directory(directory, parent_id, page_metadata)

     def _get_or_create_page(
         self,
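
The indexing hunk above designates `index.md`, or failing that `README.md`, as the parent page of its directory; every other Markdown file in that directory becomes a child. A condensed sketch of just that precedence rule, using a hypothetical file listing instead of md2conf's real directory scan:

```python
from pathlib import Path
from typing import List, Optional

def choose_parent_doc(local_dir: Path, files: List[Path]) -> Optional[Path]:
    "Picks the Markdown file that acts as the Confluence parent page for a directory."
    # index.md takes precedence over README.md, mirroring the hunk above
    for candidate in ("index.md", "README.md"):
        doc = local_dir / candidate
        if doc in files:
            return doc
    return None

docs = [Path("docs/index.md"), Path("docs/setup.md"), Path("docs/usage.md")]
print(choose_parent_doc(Path("docs"), docs))  # docs/index.md
```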
@@ -137,6 +159,8 @@
             document = f.read()

         qualified_id, document = extract_qualified_id(document)
+        frontmatter, document = extract_frontmatter(document)
+
         if qualified_id is not None:
             confluence_page = self.api.get_page(
                 qualified_id.page_id, space_key=qualified_id.space_key

@@ -147,6 +171,14 @@
                 f"expected: parent page ID for Markdown file with no linked Confluence page: {absolute_path}"
             )

+            # assign title from frontmatter if present
+            if title is None and frontmatter is not None:
+                properties = yaml.safe_load(frontmatter)
+                if isinstance(properties, dict):
+                    property_title = properties.get("title")
+                    if isinstance(property_title, str):
+                        title = property_title
+
             confluence_page = self._create_page(
                 absolute_path, document, title, parent_id
             )
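
With the change above, a page that has no explicit title can take one from YAML front matter, courtesy of the new `pyyaml` dependency. A small sketch of that fallback with a made-up front matter block (the real code obtains the block via `extract_frontmatter` and only accepts a string-valued `title`):

```python
import yaml

# made-up front matter block, as it would be captured between the "---" delimiters
frontmatter = "title: Deployment guide\ntags: [ops, how-to]\n"

title = None  # no title supplied by the caller
properties = yaml.safe_load(frontmatter)
if isinstance(properties, dict):
    property_title = properties.get("title")
    if isinstance(property_title, str):
        title = property_title

print(title)  # Deployment guide
```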
@@ -202,7 +234,7 @@
         )

         content = document.xhtml()
-        LOGGER.debug(…
+        LOGGER.debug("Generated Confluence Storage Format document:\n%s", content)
         self.api.update_page(document.id.page_id, content)

     def _update_markdown(
md2conf/converter.py
CHANGED

@@ -1,3 +1,11 @@
+"""
+Publish Markdown files to Confluence wiki.
+
+Copyright 2022-2024, Levente Hunyadi
+
+:see: https://github.com/hunyadi/md2conf
+"""
+
 # mypy: disable-error-code="dict-item"

 import hashlib

@@ -338,12 +346,12 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
         anchor.tail = heading.text
         heading.text = None

-    def _transform_link(self, anchor: ET._Element) -> …
+    def _transform_link(self, anchor: ET._Element) -> Optional[ET._Element]:
         url = anchor.attrib["href"]
         if is_absolute_url(url):
-            return
+            return None

-        LOGGER.debug(…
+        LOGGER.debug("Found link %s relative to %s", url, self.path)
         relative_url: ParseResult = urlparse(url)

         if (

@@ -353,9 +361,24 @@
             and not relative_url.params
             and not relative_url.query
         ):
-            LOGGER.debug(…
-            …
-            …
+            LOGGER.debug("Found local URL: %s", url)
+            if self.options.heading_anchors:
+                # <ac:link ac:anchor="anchor"><ac:link-body>...</ac:link-body></ac:link>
+                target = relative_url.fragment.lstrip("#")
+                link_body = AC("link-body", {}, *list(anchor))
+                link_body.text = anchor.text
+                link_wrapper = AC(
+                    "link",
+                    {
+                        ET.QName(namespaces["ac"], "anchor"): target,
+                    },
+                    link_body,
+                )
+                link_wrapper.tail = anchor.tail
+                return link_wrapper
+            else:
+                anchor.attrib["href"] = url
+                return None

         # convert the relative URL to absolute URL based on the base path value, then look up
         # the absolute path in the page metadata dictionary to discover the relative path
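
When the `heading_anchors` option is enabled, the hunk above rewrites a same-document link into an `<ac:link ac:anchor="...">` element wrapping an `<ac:link-body>`. A rough lxml sketch of that target markup; the namespace URI below is a stand-in chosen for illustration only (Confluence storage format uses the reserved `ac:` prefix, and md2conf keeps its own `namespaces` table):

```python
from lxml import etree as ET

AC = "urn:example:atlassian-content"  # stand-in namespace URI, for illustration only

# target shape: <ac:link ac:anchor="usage"><ac:link-body>Usage</ac:link-body></ac:link>
link = ET.Element(ET.QName(AC, "link"), nsmap={"ac": AC})
link.set(ET.QName(AC, "anchor"), "usage")
body = ET.SubElement(link, ET.QName(AC, "link-body"))
body.text = "Usage"

print(ET.tostring(link, pretty_print=True).decode())
```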
@@ -366,7 +389,7 @@
         if self.options.ignore_invalid_url:
             LOGGER.warning(msg)
             anchor.attrib.pop("href")
-            return
+            return None
         else:
             raise DocumentError(msg)

@@ -378,12 +401,12 @@
         if self.options.ignore_invalid_url:
             LOGGER.warning(msg)
             anchor.attrib.pop("href")
-            return
+            return None
         else:
             raise DocumentError(msg)

         LOGGER.debug(
-            …
+            "found link to page %s with metadata: %s", relative_path, link_metadata
         )
         self.links.append(url)

@@ -402,8 +425,9 @@
         )
         transformed_url = urlunparse(components)

-        LOGGER.debug(…
+        LOGGER.debug("Transformed relative URL: %s to URL: %s", url, transformed_url)
         anchor.attrib["href"] = transformed_url
+        return None

     def _transform_image(self, image: ET._Element) -> ET._Element:
         path: str = image.attrib["src"]

@@ -803,8 +827,7 @@

         # <a href="..."> ... </a>
         elif child.tag == "a":
-            self._transform_link(child)
-            return None
+            return self._transform_link(child)

         # <pre><code class="language-java"> ... </code></pre>
         elif child.tag == "pre" and len(child) == 1 and child[0].tag == "code":

@@ -829,16 +852,16 @@ class DocumentError(RuntimeError):
     pass


-def extract_value(pattern: str, …
+def extract_value(pattern: str, text: str) -> Tuple[Optional[str], str]:
     values: List[str] = []

     def _repl_func(matchobj: re.Match) -> str:
         values.append(matchobj.group(1))
         return ""

-    …
+    text = re.sub(pattern, _repl_func, text, 1, re.ASCII)
     value = values[0] if values else None
-    return value, …
+    return value, text

@@ -851,20 +874,24 @@ class ConfluenceQualifiedID:
         self.space_key = space_key


-def extract_qualified_id(…
+def extract_qualified_id(text: str) -> Tuple[Optional[ConfluenceQualifiedID], str]:
     "Extracts the Confluence page ID and space key from a Markdown document."

-    page_id, …
+    page_id, text = extract_value(r"<!--\s+confluence-page-id:\s*(\d+)\s+-->", text)

     if page_id is None:
-        return None, …
+        return None, text

     # extract Confluence space key
-    space_key, …
-    …
-    )
+    space_key, text = extract_value(r"<!--\s+confluence-space-key:\s*(\S+)\s+-->", text)
+
+    return ConfluenceQualifiedID(page_id, space_key), text
+
+
+def extract_frontmatter(text: str) -> Tuple[Optional[str], str]:
+    "Extracts the front matter from a Markdown document."

-    return …
+    return extract_value(r"(?ms)\A---$(.+?)^---$", text)


 def read_qualified_id(absolute_path: Path) -> Optional[ConfluenceQualifiedID]:
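
`extract_value` and the new `extract_frontmatter` shown above are small regex helpers: the first match of a pattern is captured and simultaneously stripped from the document. A self-contained re-implementation for illustration (the function names mirror the diff; the sample document is invented):

```python
import re
from typing import List, Optional, Tuple

def extract_value(pattern: str, text: str) -> Tuple[Optional[str], str]:
    "Returns the first capture of the pattern and the text with that match removed."
    values: List[str] = []

    def _repl_func(matchobj: re.Match) -> str:
        values.append(matchobj.group(1))
        return ""

    text = re.sub(pattern, _repl_func, text, count=1, flags=re.ASCII)
    return (values[0] if values else None), text

doc = "---\ntitle: Example page\n---\n<!-- confluence-page-id: 123456 -->\nBody text.\n"

frontmatter, doc = extract_value(r"(?ms)\A---$(.+?)^---$", doc)
page_id, doc = extract_value(r"<!--\s+confluence-page-id:\s*(\d+)\s+-->", doc)

print(repr(frontmatter))  # '\ntitle: Example page\n'
print(page_id)            # 123456
print(repr(doc))          # remaining Markdown body
```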
@@ -941,7 +968,7 @@ class ConfluenceDocument:
         )

         # extract frontmatter
-        frontmatter, text = …
+        frontmatter, text = extract_frontmatter(text)

         # convert to HTML
         html = markdown_to_html(text)
md2conf/emoji.py
CHANGED
md2conf/matcher.py
CHANGED
md2conf/mermaid.py
CHANGED

@@ -1,3 +1,11 @@
+"""
+Publish Markdown files to Confluence wiki.
+
+Copyright 2022-2024, Levente Hunyadi
+
+:see: https://github.com/hunyadi/md2conf
+"""
+
 import logging
 import os
 import os.path

@@ -49,11 +57,10 @@ def render(source: str, output_format: Literal["png", "svg"] = "png") -> bytes:
         "--outputFormat",
         output_format,
     ]
+    root = os.path.dirname(__file__)
     if is_docker():
-        cmd.extend(…
-        …
-        )
-    LOGGER.debug(f"Executing: {' '.join(cmd)}")
+        cmd.extend(["-p", os.path.join(root, "puppeteer-config.json")])
+    LOGGER.debug("Executing: %s", " ".join(cmd))
     try:
         proc = subprocess.Popen(
             cmd,
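
The second mermaid hunk appends a Puppeteer configuration flag only when running inside Docker. A sketch of how such a command list could be assembled under that assumption (the `mmdc` executable name and input path are illustrative; `-p` is mermaid-cli's puppeteer-config option):

```python
import os

def build_mermaid_command(output_format: str = "png", in_docker: bool = False) -> list:
    "Assembles a mermaid-cli invocation similar to the one in the diff."
    root = os.path.dirname(__file__)
    cmd = ["mmdc", "--input", "diagram.mmd", "--outputFormat", output_format]
    if in_docker:
        # the bundled puppeteer-config.json presumably passes container-friendly
        # Chromium options (such as disabling the sandbox) to the headless browser
        cmd.extend(["-p", os.path.join(root, "puppeteer-config.json")])
    return cmd

print(" ".join(build_mermaid_command("svg", in_docker=True)))
```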
md2conf/processor.py
CHANGED

@@ -1,3 +1,11 @@
+"""
+Publish Markdown files to Confluence wiki.
+
+Copyright 2022-2024, Levente Hunyadi
+
+:see: https://github.com/hunyadi/md2conf
+"""
+
 import hashlib
 import logging
 import os

@@ -30,6 +38,7 @@ class Processor:
     def process(self, path: Path) -> None:
         "Processes a single Markdown file or a directory of Markdown files."

+        path = path.resolve(True)
         if path.is_dir():
             self.process_directory(path)
         elif path.is_file():

@@ -40,12 +49,13 @@
     def process_directory(self, local_dir: Path) -> None:
         "Recursively scans a directory hierarchy for Markdown files."

-        LOGGER.info(…
+        LOGGER.info("Synchronizing directory: %s", local_dir)
+        local_dir = local_dir.resolve(True)

         # Step 1: build index of all page metadata
         page_metadata: Dict[Path, ConfluencePageMetadata] = {}
         self._index_directory(local_dir, page_metadata)
-        LOGGER.info(…
+        LOGGER.info("Indexed %d page(s)", len(page_metadata))

         # Step 2: convert each page
         for page_path in page_metadata.keys():

@@ -56,6 +66,7 @@
     ) -> None:
         "Processes a single Markdown file."

+        path = path.resolve(True)
         document = ConfluenceDocument(path, self.options, page_metadata)
         content = document.xhtml()
         with open(path.with_suffix(".csf"), "w", encoding="utf-8") as f:

@@ -68,7 +79,7 @@
     ) -> None:
         "Indexes Markdown files in a directory recursively."

-        LOGGER.info(…
+        LOGGER.info("Indexing directory: %s", local_dir)

         matcher = Matcher(MatcherOptions(source=".mdignore", extension="md"), local_dir)

@@ -79,17 +90,17 @@
                 continue

             if entry.is_file():
-                files.append(…
+                files.append(Path(local_dir) / entry.name)
             elif entry.is_dir():
-                directories.append(…
+                directories.append(Path(local_dir) / entry.name)

         for doc in files:
             metadata = self._get_page(doc)
-            LOGGER.debug(…
+            LOGGER.debug("Indexed %s with metadata: %s", doc, metadata)
             page_metadata[doc] = metadata

         for directory in directories:
-            self._index_directory(…
+            self._index_directory(directory, page_metadata)

     def _get_page(self, absolute_path: Path) -> ConfluencePageMetadata:
         "Extracts metadata from a Markdown file."

@@ -102,7 +113,7 @@
         if self.options.root_page_id is not None:
             hash = hashlib.md5(document.encode("utf-8"))
             digest = "".join(f"{c:x}" for c in hash.digest())
-            LOGGER.info(…
+            LOGGER.info("Identifier %s assigned to page: %s", digest, absolute_path)
             qualified_id = ConfluenceQualifiedID(digest)
         else:
             raise ValueError("required: page ID for local output")
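
For local output, the processor derives a deterministic page identifier by hashing the document, as in the last hunk above. A standalone sketch with made-up content; note that the `f"{c:x}"` formatting taken from the diff omits leading zeros per byte, so it is close to, but not always identical to, `hexdigest()`:

```python
import hashlib

document = "# Sample page\n\nBody text.\n"  # made-up Markdown content

page_hash = hashlib.md5(document.encode("utf-8"))
digest = "".join(f"{c:x}" for c in page_hash.digest())

print(digest)                 # deterministic identifier for identical content
print(page_hash.hexdigest())  # zero-padded variant, for comparison
```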
md2conf/properties.py
CHANGED
md2conf/util.py
CHANGED

markdown_to_confluence-0.2.3.dist-info/RECORD
DELETED

@@ -1,21 +0,0 @@
-md2conf/__init__.py,sha256=ypRfZF5ef0nZONGa1E9S2htodyslp3uPDgRUhUD8St4,402
-md2conf/__main__.py,sha256=_qUspNQmQdhpH4Myh9vXDcauPyUx_FyEzNtaW_c8ytY,6601
-md2conf/api.py,sha256=bP3Kp4PsGQrPyQMOs-MwE2Znl1ewuKNslMCv7AtXIT0,16366
-md2conf/application.py,sha256=GUMPZUe_jZTBszKDyh4y-jeOp83VKCR3b_EHmzcL5Qs,7778
-md2conf/converter.py,sha256=F75UtnCR3vxAE1W8JxZ5wmfzgtJLTeQvDN2jH49fNXU,33466
-md2conf/emoji.py,sha256=2vMZlLD4m2X6MB-Fjv_GDzEUelb_sg4UBtF463d_p90,1792
-md2conf/entities.dtd,sha256=M6NzqL5N7dPs_eUA_6sDsiSLzDaAacrx9LdttiufvYU,30215
-md2conf/matcher.py,sha256=bZMX_GTXuEeKqIPDES8KqAqTBiesKfSH9rwbNFkD25A,3451
-md2conf/mermaid.py,sha256=a7PVcd7kcFBOMw7Z2mOfvWC1JIVR4Q1EkkanLk1SLx0,1981
-md2conf/processor.py,sha256=qnoO7kTPF2y5uUATnqGSkgVP2DJJiR8DwkUqWavE6r4,4036
-md2conf/properties.py,sha256=2l1tW8HmnrEsXN4-Dtby2tYJQTG1MirRpM3H6ykjQ4c,1858
-md2conf/puppeteer-config.json,sha256=-dMTAN_7kNTGbDlfXzApl0KJpAWna9YKZdwMKbpOb60,159
-md2conf/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-md2conf/util.py,sha256=mghtBv5c0vOBHi5CxjBh4LZbjQ0Cu0h_vB30RN4N8Bk,611
-markdown_to_confluence-0.2.3.dist-info/LICENSE,sha256=Pv43so2bPfmKhmsrmXFyAvS7M30-1i1tzjz6-dfhyOo,1077
-markdown_to_confluence-0.2.3.dist-info/METADATA,sha256=Z7ts-W_aUJiau-mnFZIY6RPF5OdX_xCN081FCW4BNa8,11585
-markdown_to_confluence-0.2.3.dist-info/WHEEL,sha256=GV9aMThwP_4oNCtvEC2ec3qUYutgWeAzklro_0m4WJQ,91
-markdown_to_confluence-0.2.3.dist-info/entry_points.txt,sha256=F1zxa1wtEObtbHS-qp46330WVFLHdMnV2wQ-ZorRmX0,50
-markdown_to_confluence-0.2.3.dist-info/top_level.txt,sha256=_FJfl_kHrHNidyjUOuS01ngu_jDsfc-ZjSocNRJnTzU,8
-markdown_to_confluence-0.2.3.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
-markdown_to_confluence-0.2.3.dist-info/RECORD,,

{markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/LICENSE
RENAMED
File without changes
{markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/entry_points.txt
RENAMED
File without changes
{markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/top_level.txt
RENAMED
File without changes
{markdown_to_confluence-0.2.3.dist-info → markdown_to_confluence-0.2.5.dist-info}/zip-safe
RENAMED
File without changes