markdown-to-confluence 0.2.7__py3-none-any.whl → 0.3.0__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/METADATA +44 -7
- markdown_to_confluence-0.3.0.dist-info/RECORD +21 -0
- {markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/WHEEL +1 -1
- md2conf/__init__.py +2 -2
- md2conf/__main__.py +4 -4
- md2conf/api.py +177 -124
- md2conf/application.py +18 -19
- md2conf/converter.py +28 -28
- md2conf/emoji.py +1 -1
- md2conf/matcher.py +11 -6
- md2conf/mermaid.py +1 -1
- md2conf/processor.py +7 -7
- md2conf/properties.py +4 -4
- md2conf/util.py +1 -1
- markdown_to_confluence-0.2.7.dist-info/RECORD +0 -21
- {markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/LICENSE +0 -0
- {markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/entry_points.txt +0 -0
- {markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/top_level.txt +0 -0
- {markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/zip-safe +0 -0
{markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/METADATA
RENAMED
````diff
@@ -1,6 +1,6 @@
-Metadata-Version: 2.
+Metadata-Version: 2.2
 Name: markdown-to-confluence
-Version: 0.2.7
+Version: 0.3.0
 Summary: Publish Markdown files to Confluence wiki
 Home-page: https://github.com/hunyadi/md2conf
 Author: Levente Hunyadi
@@ -12,20 +12,20 @@ Classifier: Intended Audience :: End Users/Desktop
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.8
 Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: 3.10
 Classifier: Programming Language :: Python :: 3.11
 Classifier: Programming Language :: Python :: 3.12
+Classifier: Programming Language :: Python :: 3.13
 Classifier: Typing :: Typed
-Requires-Python: >=3.
+Requires-Python: >=3.9
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: lxml>=5.3
-Requires-Dist: types-lxml>=2024.
+Requires-Dist: types-lxml>=2024.12.13
 Requires-Dist: markdown>=3.7
 Requires-Dist: types-markdown>=3.7
-Requires-Dist: pymdown-extensions>=10.
+Requires-Dist: pymdown-extensions>=10.14
 Requires-Dist: pyyaml>=6.0
 Requires-Dist: types-PyYAML>=6.0
 Requires-Dist: requests>=2.32
@@ -57,6 +57,8 @@ This Python package
 * [Collapsed sections](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/organizing-information-with-collapsed-sections)
 * [Mermaid diagrams](https://mermaid.live/) in code blocks (converted to images)
 
+Whenever possible, the implementation uses [Confluence REST API v2](https://developer.atlassian.com/cloud/confluence/rest/v2/) to fetch space properties, and get, create or update page content.
+
 ## Installation
 
 Install the core package from PyPI:
@@ -160,6 +162,41 @@ First, *md2conf* builds an index of pages in the directory hierarchy. The index
 
 If a Markdown file doesn't yet pair up with a Confluence page, *md2conf* creates a new page and assigns a parent. Parent-child relationships are reflected in the navigation panel in Confluence. You can set a root page ID with the command-line option `-r`, which constitutes the topmost parent. (This could correspond to the landing page of your Confluence space. The Confluence page ID is always revealed when you edit a page.) Whenever a directory contains the file `index.md` or `README.md`, this page becomes the future parent page, and all Markdown files in this directory (and possibly nested directories) become its child pages (unless they already have a page ID). However, if an `index.md` or `README.md` file is subsequently found in one of the nested directories, it becomes the parent page of that directory, and any of its subdirectories.
 
+The concepts above are illustrated in the following sections.
+
+#### File-system directory hierarchy
+
+The title of each Markdown file (either the text of the first heading (`#`), or the title specified in front-matter) is shown next to the file name.
+
+```
+.
+├── computer-science
+│   ├── index.md: Introduction to computer science
+│   ├── algebra.md: Linear algebra
+│   └── algorithms.md: Theory of algorithms
+└── machine-learning
+    ├── README.md: AI and ML
+    ├── awareness.md: Consciousness and intelligence
+    └── statistics
+        ├── index.md: Introduction to statistics
+        └── median.md: Mean vs. median
+```
+
+#### Page hierarchy in Confluence
+
+Observe how `index.md` and `README.md` files have assumed parent (or ancestor) role for any Markdown files in the same directory (or below).
+
+```
+root
+├── Introduction to computer science
+│   ├── Linear algebra
+│   └── Theory of algorithms
+└── AI and ML
+    ├── Consciousness and intelligence
+    └── Introduction to statistics
+        └── Mean vs. median
+```
+
 ### Ignoring files
 
 Skip files in a directory with rules defined in `.mdignore`. Each rule should occupy a single line. Rules follow the syntax of [fnmatch](https://docs.python.org/3/library/fnmatch.html#fnmatch.fnmatch). Specifically, `?` matches any single character, and `*` matches zero or more characters. For example, use `up-*.md` to exclude Markdown files that start with `up-`. Lines that start with `#` are treated as comments.
@@ -214,7 +251,7 @@ options:
   --local               Write XHTML-based Confluence Storage Format files locally without invoking Confluence API.
   --headers [KEY=VALUE ...]
                         Apply custom headers to all Confluence API requests.
-  --webui-links         Enable Confluence Web UI links.
+  --webui-links         Enable Confluence Web UI links. (Typically required for on-prem versions of Confluence.)
 ```
 
 ### Using the Docker container
````
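The README section above explains that `.mdignore` exclusion rules use Python's `fnmatch` syntax (`?` for a single character, `*` for any run of characters, `#` for comments). The following is a minimal illustrative sketch of how such rules can be evaluated; it is not md2conf's actual implementation, and the file names and rule contents are made up.

```python
from fnmatch import fnmatch

# Hypothetical .mdignore contents: one fnmatch-style rule per line,
# lines starting with "#" are treated as comments (per the README above).
rules_text = """
# skip generated update notes
up-*.md
draft-?.md
"""

rules = [
    line.strip()
    for line in rules_text.splitlines()
    if line.strip() and not line.strip().startswith("#")
]

def is_excluded(name: str) -> bool:
    "True if the file name matches any exclusion rule."
    return any(fnmatch(name, rule) for rule in rules)

print(is_excluded("up-weekly.md"))  # True: matches "up-*.md"
print(is_excluded("draft-1.md"))    # True: "?" matches the single character "1"
print(is_excluded("index.md"))      # False: no rule matches
```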
markdown_to_confluence-0.3.0.dist-info/RECORD
ADDED
```diff
@@ -0,0 +1,21 @@
+md2conf/__init__.py,sha256=Lveuwj776s0e_lokulqzAtv64eStsiMROB96DimCLd0,402
+md2conf/__main__.py,sha256=Sga_W_b5E1YokJcBAXcmZVnYk-us8A0kkhfkmdHogsg,6883
+md2conf/api.py,sha256=KIZNwdsMGXYy9i4FHSe0XbbCjjVJzTTlnryGesxM-GI,19312
+md2conf/application.py,sha256=Udp79iB0bGc7sPmE3jD7PSK35yedS_XBhfTEVap91-o,8777
+md2conf/converter.py,sha256=BlS51hjiJd0lgrh5wJkun1_FZ9kAtmInKjEYhr5YOLA,36620
+md2conf/emoji.py,sha256=IZeguWqcboeOyJkGLTVONDMO4ZXfYXPgfkp56PTI-hE,1924
+md2conf/entities.dtd,sha256=M6NzqL5N7dPs_eUA_6sDsiSLzDaAacrx9LdttiufvYU,30215
+md2conf/matcher.py,sha256=FgMFPvGiOqGezCs8OyerfsVo-iIHFoI6LRMzdcjM5UY,3693
+md2conf/mermaid.py,sha256=82NGv6x_LNrN3c-VPx368KCBO87_Sv8-uz2ue40DzKg,2192
+md2conf/processor.py,sha256=G-MIh1jGq9jjgogHnlnRUSrNgiV6_xO6Fy7ct9alqgM,4769
+md2conf/properties.py,sha256=HqFveB-Wgg29e60tARHSk21l8b-SCk953eb_Mw-nI80,1984
+md2conf/puppeteer-config.json,sha256=-dMTAN_7kNTGbDlfXzApl0KJpAWna9YKZdwMKbpOb60,159
+md2conf/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+md2conf/util.py,sha256=Ct7T21oI2AUs6hIZVNljPhq5HbNscjnFXMpVCzRlRHw,743
+markdown_to_confluence-0.3.0.dist-info/LICENSE,sha256=Pv43so2bPfmKhmsrmXFyAvS7M30-1i1tzjz6-dfhyOo,1077
+markdown_to_confluence-0.3.0.dist-info/METADATA,sha256=oAWMGSnPXtLZZyDiwNdbb4QWwEP0uXPxzKiBqY9mfT0,14936
+markdown_to_confluence-0.3.0.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+markdown_to_confluence-0.3.0.dist-info/entry_points.txt,sha256=F1zxa1wtEObtbHS-qp46330WVFLHdMnV2wQ-ZorRmX0,50
+markdown_to_confluence-0.3.0.dist-info/top_level.txt,sha256=_FJfl_kHrHNidyjUOuS01ngu_jDsfc-ZjSocNRJnTzU,8
+markdown_to_confluence-0.3.0.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
+markdown_to_confluence-0.3.0.dist-info/RECORD,,
```
md2conf/__init__.py
CHANGED
```diff
@@ -5,9 +5,9 @@ Parses Markdown files, converts Markdown content into the Confluence Storage For
 Confluence API endpoints to upload images and content.
 """
 
-__version__ = "0.2.7"
+__version__ = "0.3.0"
 __author__ = "Levente Hunyadi"
-__copyright__ = "Copyright 2022-2024, Levente Hunyadi"
+__copyright__ = "Copyright 2022-2025, Levente Hunyadi"
 __license__ = "MIT"
 __maintainer__ = "Levente Hunyadi"
 __status__ = "Production"
```
md2conf/__main__.py
CHANGED
```diff
@@ -4,7 +4,7 @@ Publish Markdown files to Confluence wiki.
 Parses Markdown files, converts Markdown content into the Confluence Storage Format (XHTML), and invokes
 Confluence API endpoints to upload images and content.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
@@ -41,6 +41,8 @@ class Arguments(argparse.Namespace):
     generated_by: Optional[str]
     render_mermaid: bool
     diagram_output_format: Literal["png", "svg"]
+    local: bool
+    headers: dict[str, str]
     webui_links: bool
 
 
@@ -169,14 +171,12 @@ def main() -> None:
         "--webui-links",
         action="store_true",
         default=False,
-        help="Enable Confluence Web UI links.",
+        help="Enable Confluence Web UI links. (Typically required for on-prem versions of Confluence.)",
     )
 
     args = Arguments()
     parser.parse_args(namespace=args)
 
-    # NOTE: If we switch to modern type aware CLI tool like typer
-    # the following line won't be necessary
     args.mdpath = Path(args.mdpath)
 
     logging.basicConfig(
```
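The new `local` and `headers` fields on `Arguments` correspond to the `--local` and `--headers [KEY=VALUE ...]` options documented in the README hunk above. The sketch below shows one common way an option of that shape can be folded into a `dict[str, str]`; the option names come from the README, but the parsing strategy is an assumption for illustration, not md2conf's actual code.

```python
import argparse

# Illustrative only: "--local" and "--headers" are the documented option names;
# nargs="*" plus a post-processing step is an assumed parsing strategy.
parser = argparse.ArgumentParser()
parser.add_argument("--local", action="store_true", default=False)
parser.add_argument("--headers", nargs="*", default=[], metavar="KEY=VALUE")

args = parser.parse_args(["--local", "--headers", "X-Scope=md2conf", "X-Trace=1"])

# Fold KEY=VALUE pairs into the dict[str, str] shape declared on Arguments.
headers: dict[str, str] = dict(item.split("=", 1) for item in args.headers)
print(args.local)   # True
print(headers)      # {'X-Scope': 'md2conf', 'X-Trace': '1'}
```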
md2conf/api.py
CHANGED
```diff
@@ -1,21 +1,21 @@
 """
 Publish Markdown files to Confluence wiki.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
 
+import enum
 import io
 import json
 import logging
 import mimetypes
 import typing
-from contextlib import contextmanager
 from dataclasses import dataclass
 from pathlib import Path
 from types import TracebackType
-from typing import
+from typing import Optional, Union
 from urllib.parse import urlencode, urlparse, urlunparse
 
 import requests
@@ -31,12 +31,17 @@ JsonType = Union[
     int,
     float,
     str,
-
-
+    dict[str, "JsonType"],
+    list["JsonType"],
 ]
 
 
-
+class ConfluenceVersion(enum.Enum):
+    VERSION_1 = "rest/api"
+    VERSION_2 = "api/v2"
+
+
+def build_url(base_url: str, query: Optional[dict[str, str]] = None) -> str:
     "Builds a URL with scheme, host, port, path and query string parameters."
 
     scheme, netloc, path, params, query_str, fragment = urlparse(base_url)
@@ -66,7 +71,7 @@ class ConfluenceAttachment:
 @dataclass
 class ConfluencePage:
     id: str
-
+    space_id: str
     title: str
     version: int
     content: str
@@ -101,7 +106,7 @@ class ConfluenceAPI:
 
     def __exit__(
         self,
-        exc_type: Optional[
+        exc_type: Optional[type[BaseException]],
         exc_val: Optional[BaseException],
         exc_tb: Optional[TracebackType],
     ) -> None:
@@ -116,6 +121,9 @@ class ConfluenceSession:
     base_path: str
     space_key: str
 
+    _space_id_to_key: dict[str, str]
+    _space_key_to_id: dict[str, str]
+
     def __init__(
         self, session: requests.Session, domain: str, base_path: str, space_key: str
     ) -> None:
@@ -124,30 +132,46 @@ class ConfluenceSession:
         self.base_path = base_path
         self.space_key = space_key
 
+        self._space_id_to_key = {}
+        self._space_key_to_id = {}
+
     def close(self) -> None:
         self.session.close()
+        self.session = requests.Session()
 
-
-
-
-
-
-
-
-
+    def _build_url(
+        self,
+        version: ConfluenceVersion,
+        path: str,
+        query: Optional[dict[str, str]] = None,
+    ) -> str:
+        """
+        Builds a full URL for invoking the Confluence API.
 
-
-
+        :param prefix: A URL path prefix that depends on the Confluence API version.
+        :param path: Path of API endpoint to invoke.
+        :param query: Query parameters to pass to the API endpoint.
+        :returns: A full URL.
+        """
+
+        base_url = f"https://{self.domain}{self.base_path}{version.value}{path}"
         return build_url(base_url, query)
 
-    def _invoke(
-
+    def _invoke(
+        self,
+        version: ConfluenceVersion,
+        path: str,
+        query: Optional[dict[str, str]] = None,
+    ) -> JsonType:
+        "Execute an HTTP request via Confluence API."
+
+        url = self._build_url(version, path, query)
         response = self.session.get(url)
         response.raise_for_status()
         return response.json()
 
-    def _save(self, path: str, data: dict) -> None:
-        url = self._build_url(path)
+    def _save(self, version: ConfluenceVersion, path: str, data: dict) -> None:
+        url = self._build_url(version, path)
         response = self.session.put(
             url,
             data=json.dumps(data),
@@ -155,24 +179,68 @@ class ConfluenceSession:
         )
         response.raise_for_status()
 
+    def space_id_to_key(self, id: str) -> str:
+        "Finds the Confluence space key for a space ID."
+
+        key = self._space_id_to_key.get(id)
+        if key is None:
+            payload = self._invoke(
+                ConfluenceVersion.VERSION_2,
+                "/spaces",
+                {"ids": id, "type": "global", "status": "current"},
+            )
+            payload = typing.cast(dict[str, JsonType], payload)
+            results = typing.cast(list[JsonType], payload["results"])
+            if len(results) != 1:
+                raise ConfluenceError(f"unique space not found with id: {id}")
+
+            result = typing.cast(dict[str, JsonType], results[0])
+            key = typing.cast(str, result["key"])
+
+            self._space_id_to_key[id] = key
+
+        return key
+
+    def space_key_to_id(self, key: str) -> str:
+        "Finds the Confluence space ID for a space key."
+
+        id = self._space_key_to_id.get(key)
+        if id is None:
+            payload = self._invoke(
+                ConfluenceVersion.VERSION_2,
+                "/spaces",
+                {"keys": key, "type": "global", "status": "current"},
+            )
+            payload = typing.cast(dict[str, JsonType], payload)
+            results = typing.cast(list[JsonType], payload["results"])
+            if len(results) != 1:
+                raise ConfluenceError(f"unique space not found with key: {key}")
+
+            result = typing.cast(dict[str, JsonType], results[0])
+            id = typing.cast(str, result["id"])
+
+            self._space_key_to_id[key] = id
+
+        return id
+
     def get_attachment_by_name(
-        self, page_id: str, filename: str
+        self, page_id: str, filename: str
     ) -> ConfluenceAttachment:
-        path = f"/
-        query = {"
-        data = typing.cast(
+        path = f"/pages/{page_id}/attachments"
+        query = {"filename": filename}
+        data = typing.cast(
+            dict[str, JsonType], self._invoke(ConfluenceVersion.VERSION_2, path, query)
+        )
 
-        results = typing.cast(
+        results = typing.cast(list[JsonType], data["results"])
         if len(results) != 1:
             raise ConfluenceError(f"no such attachment on page {page_id}: {filename}")
-        result = typing.cast(
+        result = typing.cast(dict[str, JsonType], results[0])
 
         id = typing.cast(str, result["id"])
-
-
-
-        comment = extensions.get("comment", "")
-        comment = typing.cast(str, comment)
+        media_type = typing.cast(str, result["mediaType"])
+        file_size = typing.cast(int, result["fileSize"])
+        comment = typing.cast(str, result.get("comment", ""))
         return ConfluenceAttachment(id, media_type, file_size, comment)
 
     def upload_attachment(
@@ -205,9 +273,7 @@ class ConfluenceSession:
             raise ConfluenceError(f"file not found: {attachment_path}")
 
         try:
-            attachment = self.get_attachment_by_name(
-                page_id, attachment_name, space_key=space_key
-            )
+            attachment = self.get_attachment_by_name(page_id, attachment_name)
 
             if attachment_path is not None:
                 if not force and attachment.file_size == attachment_path.stat().st_size:
@@ -226,7 +292,7 @@ class ConfluenceSession:
         except ConfluenceError:
             path = f"/content/{page_id}/child/attachment"
 
-            url = self._build_url(path)
+            url = self._build_url(ConfluenceVersion.VERSION_1, path)
 
             if attachment_path is not None:
                 with open(attachment_path, "rb") as attachment_file:
@@ -304,7 +370,7 @@ class ConfluenceSession:
         }
 
         LOGGER.info("Updating attachment: %s", attachment_id)
-        self._save(path, data)
+        self._save(ConfluenceVersion.VERSION_1, path, data)
 
     def get_page_id_by_title(
         self,
@@ -321,82 +387,47 @@ class ConfluenceSession:
         """
 
         LOGGER.info("Looking up page with title: %s", title)
-        path = "/
-        query = {
-
+        path = "/pages"
+        query = {
+            "space-id": self.space_key_to_id(space_key or self.space_key),
+            "title": title,
+        }
+        payload = self._invoke(ConfluenceVersion.VERSION_2, path, query)
+        payload = typing.cast(dict[str, JsonType], payload)
 
-        results = typing.cast(
+        results = typing.cast(list[JsonType], payload["results"])
         if len(results) != 1:
-            raise ConfluenceError(f"page not found with title: {title}")
+            raise ConfluenceError(f"unique page not found with title: {title}")
 
-        result = typing.cast(
+        result = typing.cast(dict[str, JsonType], results[0])
         id = typing.cast(str, result["id"])
         return id
 
-    def get_page(
-        self, page_id: str, *, space_key: Optional[str] = None
-    ) -> ConfluencePage:
+    def get_page(self, page_id: str) -> ConfluencePage:
         """
         Retrieve Confluence wiki page details.
 
         :param page_id: The Confluence page ID.
-        :param space_key: The Confluence space key (unless the default space is to be used).
         :returns: Confluence page info.
         """
 
-        path = f"/
-        query = {
-
-
-
-
-
-        version = typing.cast(Dict[str, JsonType], data["version"])
-        body = typing.cast(Dict[str, JsonType], data["body"])
-        storage = typing.cast(Dict[str, JsonType], body["storage"])
+        path = f"/pages/{page_id}"
+        query = {"body-format": "storage"}
+        payload = self._invoke(ConfluenceVersion.VERSION_2, path, query)
+        data = typing.cast(dict[str, JsonType], payload)
+        version = typing.cast(dict[str, JsonType], data["version"])
+        body = typing.cast(dict[str, JsonType], data["body"])
+        storage = typing.cast(dict[str, JsonType], body["storage"])
 
         return ConfluencePage(
             id=page_id,
-
+            space_id=typing.cast(str, data["spaceId"]),
            title=typing.cast(str, data["title"]),
             version=typing.cast(int, version["number"]),
             content=typing.cast(str, storage["value"]),
         )
 
-    def
-        self, page_id: str, *, space_key: Optional[str] = None
-    ) -> Dict[str, str]:
-        """
-        Retrieve Confluence wiki page ancestors.
-
-        :param page_id: The Confluence page ID.
-        :param space_key: The Confluence space key (unless the default space is to be used).
-        :returns: Dictionary of ancestor page ID to title, with topmost ancestor first.
-        """
-
-        path = f"/content/{page_id}"
-        query = {
-            "spaceKey": space_key or self.space_key,
-            "expand": "ancestors",
-        }
-        data = typing.cast(Dict[str, JsonType], self._invoke(path, query))
-        ancestors = typing.cast(List[JsonType], data["ancestors"])
-
-        # from the JSON array of ancestors, extract the "id" and "title"
-        results: Dict[str, str] = {}
-        for node in ancestors:
-            ancestor = typing.cast(Dict[str, JsonType], node)
-            id = typing.cast(str, ancestor["id"])
-            title = typing.cast(str, ancestor["title"])
-            results[id] = title
-        return results
-
-    def get_page_version(
-        self,
-        page_id: str,
-        *,
-        space_key: Optional[str] = None,
-    ) -> int:
+    def get_page_version(self, page_id: str) -> int:
         """
         Retrieve a Confluence wiki page version.
 
@@ -405,13 +436,10 @@ class ConfluenceSession:
         :returns: Confluence page version.
         """
 
-        path = f"/
-
-
-
-        }
-        data = typing.cast(Dict[str, JsonType], self._invoke(path, query))
-        version = typing.cast(Dict[str, JsonType], data["version"])
+        path = f"/pages/{page_id}"
+        payload = self._invoke(ConfluenceVersion.VERSION_2, path)
+        data = typing.cast(dict[str, JsonType], payload)
+        version = typing.cast(dict[str, JsonType], data["version"])
         return typing.cast(int, version["number"])
 
     def update_page(
@@ -419,7 +447,6 @@ class ConfluenceSession:
         page_id: str,
         new_content: str,
         *,
-        space_key: Optional[str] = None,
         title: Optional[str] = None,
     ) -> None:
         """
@@ -431,7 +458,7 @@ class ConfluenceSession:
         :param title: New title to assign to the page. Needs to be unique within a space.
         """
 
-        page = self.get_page(page_id
+        page = self.get_page(page_id)
         new_title = title or page.title
 
         try:
@@ -442,18 +469,17 @@ class ConfluenceSession:
         except ParseError as exc:
             LOGGER.warning(exc)
 
-        path = f"/
+        path = f"/pages/{page_id}"
         data = {
             "id": page_id,
-            "
+            "status": "current",
             "title": new_title,
-            "space": {"key": space_key or self.space_key},
             "body": {"storage": {"value": new_content, "representation": "storage"}},
             "version": {"minorEdit": True, "number": page.version + 1},
         }
 
         LOGGER.info("Updating page: %s", page_id)
-        self._save(path, data)
+        self._save(ConfluenceVersion.VERSION_2, path, data)
 
     def create_page(
         self,
@@ -463,18 +489,22 @@ class ConfluenceSession:
         *,
         space_key: Optional[str] = None,
     ) -> ConfluencePage:
-
+        """
+        Create a new page via Confluence API.
+        """
+
+        path = "/pages/"
         query = {
-            "
+            "spaceId": self.space_key_to_id(space_key or self.space_key),
+            "status": "current",
             "title": title,
-            "
+            "parentId": parent_page_id,
             "body": {"storage": {"value": new_content, "representation": "storage"}},
-            "ancestors": [{"type": "page", "id": parent_page_id}],
         }
 
         LOGGER.info("Creating page: %s", title)
 
-        url = self._build_url(path)
+        url = self._build_url(ConfluenceVersion.VERSION_2, path)
         response = self.session.post(
             url,
             data=json.dumps(query),
@@ -482,43 +512,66 @@ class ConfluenceSession:
         )
         response.raise_for_status()
 
-        data = typing.cast(
-        version = typing.cast(
-        body = typing.cast(
-        storage = typing.cast(
+        data = typing.cast(dict[str, JsonType], response.json())
+        version = typing.cast(dict[str, JsonType], data["version"])
+        body = typing.cast(dict[str, JsonType], data["body"])
+        storage = typing.cast(dict[str, JsonType], body["storage"])
 
         return ConfluencePage(
             id=typing.cast(str, data["id"]),
-
+            space_id=typing.cast(str, data["spaceId"]),
             title=typing.cast(str, data["title"]),
             version=typing.cast(int, version["number"]),
             content=typing.cast(str, storage["value"]),
         )
 
+    def delete_page(self, page_id: str, *, purge: bool = False) -> None:
+        """
+        Delete a page via Confluence API.
+
+        :param page_id: The Confluence page ID.
+        :param purge: True to completely purge the page, False to move to trash only.
+        """
+
+        path = f"/pages/{page_id}"
+
+        # move to trash
+        url = self._build_url(ConfluenceVersion.VERSION_2, path)
+        LOGGER.info("Moving page to trash: %s", page_id)
+        response = self.session.delete(url)
+        response.raise_for_status()
+
+        if purge:
+            # purge from trash
+            query = {"purge": "true"}
+            url = self._build_url(ConfluenceVersion.VERSION_2, path, query)
+            LOGGER.info("Permanently deleting page: %s", page_id)
+            response = self.session.delete(url)
+            response.raise_for_status()
+
     def page_exists(
         self, title: str, *, space_key: Optional[str] = None
     ) -> Optional[str]:
-        path = "/
+        path = "/pages"
         query = {
-            "type": "page",
             "title": title,
-            "
+            "space-id": self.space_key_to_id(space_key or self.space_key),
         }
 
         LOGGER.info("Checking if page exists with title: %s", title)
 
-        url = self._build_url(path)
+        url = self._build_url(ConfluenceVersion.VERSION_2, path)
         response = self.session.get(
             url, params=query, headers={"Content-Type": "application/json"}
         )
         response.raise_for_status()
 
-        data = typing.cast(
-        results = typing.cast(
+        data = typing.cast(dict[str, JsonType], response.json())
+        results = typing.cast(list[JsonType], data["results"])
 
         if len(results) == 1:
-
-            return typing.cast(str,
+            result = typing.cast(dict[str, JsonType], results[0])
+            return typing.cast(str, result["id"])
         else:
             return None
 
```
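The central change in `api.py` is the new `ConfluenceVersion` enum: v1 endpoints live under `rest/api` and v2 endpoints under `api/v2`, and `_build_url` splices the enum value between the configured base path and the endpoint path. Below is a standalone sketch of that composition, using enum values and endpoint paths taken from the diff above; the domain and the `/wiki/` base path are made-up example values, not values from the package.

```python
import enum

class ConfluenceVersion(enum.Enum):
    # Values as introduced in the diff above.
    VERSION_1 = "rest/api"
    VERSION_2 = "api/v2"

def build_api_url(domain: str, base_path: str, version: ConfluenceVersion, path: str) -> str:
    "Mirrors the f-string composition used by the new _build_url method."
    return f"https://{domain}{base_path}{version.value}{path}"

# Hypothetical site; "/wiki/" is a typical Confluence Cloud base path, chosen for illustration.
print(build_api_url("example.atlassian.net", "/wiki/", ConfluenceVersion.VERSION_1,
                    "/content/12345/child/attachment"))
# https://example.atlassian.net/wiki/rest/api/content/12345/child/attachment
print(build_api_url("example.atlassian.net", "/wiki/", ConfluenceVersion.VERSION_2,
                    "/pages/12345"))
# https://example.atlassian.net/wiki/api/v2/pages/12345
```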
md2conf/application.py
CHANGED
```diff
@@ -1,7 +1,7 @@
 """
 Publish Markdown files to Confluence wiki.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
@@ -9,7 +9,7 @@ Copyright 2022-2024, Levente Hunyadi
 import logging
 import os.path
 from pathlib import Path
-from typing import
+from typing import Optional
 
 from .api import ConfluencePage, ConfluenceSession
 from .converter import (
@@ -77,7 +77,7 @@ class Application:
         LOGGER.info("Synchronizing directory: %s", local_dir)
 
         # Step 1: build index of all page metadata
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata] = {}
         root_id = (
             ConfluenceQualifiedID(self.options.root_page_id, self.api.space_key)
             if self.options.root_page_id
@@ -94,24 +94,19 @@ class Application:
         self,
         page_path: Path,
         root_dir: Path,
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata],
     ) -> None:
         base_path = page_path.parent
 
         LOGGER.info("Synchronizing page: %s", page_path)
         document = ConfluenceDocument(page_path, self.options, root_dir, page_metadata)
-
-        if document.id.space_key:
-            with self.api.switch_space(document.id.space_key):
-                self._update_document(document, base_path)
-        else:
-            self._update_document(document, base_path)
+        self._update_document(document, base_path)
 
     def _index_directory(
         self,
         local_dir: Path,
         root_id: Optional[ConfluenceQualifiedID],
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata],
     ) -> None:
         "Indexes Markdown files in a directory recursively."
 
@@ -119,8 +114,8 @@ class Application:
 
         matcher = Matcher(MatcherOptions(source=".mdignore", extension="md"), local_dir)
 
-        files:
-        directories:
+        files: list[Path] = []
+        directories: list[Path] = []
         for entry in os.scandir(local_dir):
             if matcher.is_excluded(entry.name, entry.is_dir()):
                 continue
@@ -175,9 +170,7 @@ class Application:
         frontmatter_title, _ = extract_frontmatter_title(document)
 
         if qualified_id is not None:
-            confluence_page = self.api.get_page(
-                qualified_id.page_id, space_key=qualified_id.space_key
-            )
+            confluence_page = self.api.get_page(qualified_id.page_id)
         else:
             if parent_id is None:
                 raise ValueError(
@@ -189,11 +182,17 @@ class Application:
                 absolute_path, document, title or frontmatter_title, parent_id
             )
 
+        space_key = (
+            self.api.space_id_to_key(confluence_page.space_id)
+            if confluence_page.space_id
+            else self.api.space_key
+        )
+
         return ConfluencePageMetadata(
             domain=self.api.domain,
             base_path=self.api.base_path,
             page_id=confluence_page.id,
-            space_key=
+            space_key=space_key,
             title=confluence_page.title or "",
         )
 
@@ -217,7 +216,7 @@ class Application:
             absolute_path,
             document,
             confluence_page.id,
-            confluence_page.
+            self.api.space_id_to_key(confluence_page.space_id),
         )
         return confluence_page
 
@@ -251,7 +250,7 @@ class Application:
     ) -> None:
         "Writes the Confluence page ID and space key at the beginning of the Markdown file."
 
-        content:
+        content: list[str] = []
 
         # check if the file has frontmatter
         index = 0
```
md2conf/converter.py
CHANGED
```diff
@@ -1,7 +1,7 @@
 """
 Publish Markdown files to Confluence wiki.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
@@ -18,7 +18,7 @@ import uuid
 import xml.etree.ElementTree
 from dataclasses import dataclass
 from pathlib import Path
-from typing import Any,
+from typing import Any, Literal, Optional, Union
 from urllib.parse import ParseResult, urlparse, urlunparse
 
 import lxml.etree as ET
@@ -46,7 +46,7 @@ class ParseError(RuntimeError):
     pass
 
 
-def starts_with_any(text: str, prefixes:
+def starts_with_any(text: str, prefixes: list[str]) -> bool:
     "True if text starts with any of the listed prefixes."
 
     for prefix in prefixes:
@@ -73,7 +73,7 @@ def emoji_generator(
     alt: str,
     title: Optional[str],
     category: Optional[str],
-    options:
+    options: dict[str, Any],
     md: markdown.Markdown,
 ) -> xml.etree.ElementTree.Element:
     name = (alias or shortname).strip(":")
@@ -107,7 +107,7 @@ def markdown_to_html(content: str) -> str:
     )
 
 
-def _elements_from_strings(dtd_path: Path, items: List[str]) -> ET._Element:
+def _elements_from_strings(dtd_path: Path, items: list[str]) -> ET._Element:
     """
     Creates a fragment of several XML nodes from their string representation wrapped in a root element.
 
@@ -141,7 +141,7 @@ def _elements_from_strings(dtd_path: Path, items: List[str]) -> ET._Element:
         raise ParseError(e)
 
 
-def elements_from_strings(items:
+def elements_from_strings(items: list[str]) -> ET._Element:
     "Creates a fragment of several XML nodes from their string representation wrapped in a root element."
 
     if sys.version_info >= (3, 9):
@@ -287,7 +287,7 @@ class ConfluenceConverterOptions:
         conversion rules for the identifier.
     :param render_mermaid: Whether to pre-render Mermaid diagrams into PNG/SVG images.
     :param diagram_output_format: Target image format for diagrams.
-    :param
+    :param webui_links: When true, convert relative URLs to Confluence Web UI links.
     """
 
     ignore_invalid_url: bool = False
@@ -304,17 +304,17 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
     path: Path
     base_dir: Path
     root_dir: Path
-    links:
-    images:
-    embedded_images:
-    page_metadata:
+    links: list[str]
+    images: list[Path]
+    embedded_images: dict[str, bytes]
+    page_metadata: dict[Path, ConfluencePageMetadata]
 
     def __init__(
         self,
         options: ConfluenceConverterOptions,
         path: Path,
         root_dir: Path,
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata],
     ) -> None:
         super().__init__()
         self.options = options
@@ -438,7 +438,7 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
         if not src:
             raise DocumentError("image lacks `src` attribute")
 
-        attributes:
+        attributes: dict[str, Any] = {
             ET.QName(namespaces["ac"], "align"): "center",
             ET.QName(namespaces["ac"], "layout"): "center",
         }
@@ -457,11 +457,11 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
             return self._transform_attached_image(Path(src), caption, attributes)
 
     def _transform_external_image(
-        self, url: str, caption: Optional[str], attributes:
+        self, url: str, caption: Optional[str], attributes: dict[str, Any]
     ) -> ET._Element:
         "Emits Confluence Storage Format XHTML for an external image."
 
-        elements:
+        elements: list[ET._Element] = []
         elements.append(
             RI(
                 "url",
@@ -475,7 +475,7 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
         return AC("image", attributes, *elements)
 
     def _transform_attached_image(
-        self, path: Path, caption: Optional[str], attributes:
+        self, path: Path, caption: Optional[str], attributes: dict[str, Any]
     ) -> ET._Element:
         "Emits Confluence Storage Format XHTML for an attached image."
 
@@ -487,7 +487,7 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
         self.images.append(path)
         image_name = attachment_name(path)
 
-        elements:
+        elements: list[ET._Element] = []
         elements.append(
             RI(
                 "attachment",
@@ -525,7 +525,7 @@ class ConfluenceStorageFormatConverter(NodeVisitor):
             AC(
                 "parameter",
                 {ET.QName(namespaces["ac"], "name"): "theme"},
-                "
+                "Default",
             ),
             AC(
                 "parameter",
@@ -899,8 +899,8 @@ class DocumentError(RuntimeError):
     pass
 
 
-def extract_value(pattern: str, text: str) ->
-    values:
+def extract_value(pattern: str, text: str) -> tuple[Optional[str], str]:
+    values: list[str] = []
 
     def _repl_func(matchobj: re.Match) -> str:
         values.append(matchobj.group(1))
@@ -921,7 +921,7 @@ class ConfluenceQualifiedID:
         self.space_key = space_key
 
 
-def extract_qualified_id(text: str) ->
+def extract_qualified_id(text: str) -> tuple[Optional[ConfluenceQualifiedID], str]:
     "Extracts the Confluence page ID and space key from a Markdown document."
 
     page_id, text = extract_value(r"<!--\s+confluence-page-id:\s*(\d+)\s+-->", text)
@@ -935,13 +935,13 @@ def extract_qualified_id(text: str) -> Tuple[Optional[ConfluenceQualifiedID], st
     return ConfluenceQualifiedID(page_id, space_key), text
 
 
-def extract_frontmatter(text: str) ->
+def extract_frontmatter(text: str) -> tuple[Optional[str], str]:
     "Extracts the front matter from a Markdown document."
 
     return extract_value(r"(?ms)\A---$(.+?)^---$", text)
 
 
-def extract_frontmatter_title(text: str) ->
+def extract_frontmatter_title(text: str) -> tuple[Optional[str], str]:
     frontmatter, text = extract_frontmatter(text)
 
     title: Optional[str] = None
@@ -974,8 +974,8 @@ class ConfluenceDocumentOptions:
         plain text; when false, raise an exception.
     :param heading_anchors: When true, emit a structured macro *anchor* for each section heading using GitHub
         conversion rules for the identifier.
-    :param generated_by: Text to use as the generated-by prompt.
-    :param
+    :param generated_by: Text to use as the generated-by prompt (or `None` to omit a prompt).
+    :param root_page_id: Confluence page to assume root page role for publishing a directory of Markdown files.
     :param render_mermaid: Whether to pre-render Mermaid diagrams into PNG/SVG images.
     :param diagram_output_format: Target image format for diagrams.
     :param webui_links: When true, convert relative URLs to Confluence Web UI links.
@@ -993,8 +993,8 @@ class ConfluenceDocumentOptions:
 class ConfluenceDocument:
     id: ConfluenceQualifiedID
     title: Optional[str]
-    links:
-    images:
+    links: list[str]
+    images: list[Path]
 
     options: ConfluenceDocumentOptions
     root: ET._Element
@@ -1004,7 +1004,7 @@ class ConfluenceDocument:
         path: Path,
         options: ConfluenceDocumentOptions,
         root_dir: Path,
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata],
     ) -> None:
         self.options = options
         path = path.resolve(True)
```
md2conf/emoji.py
CHANGED
md2conf/matcher.py
CHANGED
```diff
@@ -1,7 +1,7 @@
 """
 Publish Markdown files to Confluence wiki.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
@@ -10,12 +10,17 @@ import os.path
 from dataclasses import dataclass
 from fnmatch import fnmatch
 from pathlib import Path
-from typing import Iterable,
+from typing import Iterable, Optional
 
 
 @dataclass
 class Entry:
-    "
+    """
+    Represents a file or directory entry.
+
+    :param name: Name of the file-system entry.
+    :param is_dir: True if the entry is a directory.
+    """
 
     name: str
     is_dir: bool
@@ -42,7 +47,7 @@ class Matcher:
     "Compares file and directory names against a list of exclude/include patterns."
 
     options: MatcherOptions
-    rules:
+    rules: list[str]
 
     def __init__(self, options: MatcherOptions, directory: Path) -> None:
         self.options = options
@@ -92,7 +97,7 @@ class Matcher:
 
         return not self.is_excluded(name, is_dir)
 
-    def filter(self, items: Iterable[Entry]) ->
+    def filter(self, items: Iterable[Entry]) -> list[Entry]:
         """
         Returns only those elements from the input that don't match any of the exclusion rules.
 
@@ -102,7 +107,7 @@ class Matcher:
 
         return [item for item in items if self.is_included(item.name, item.is_dir)]
 
-    def scandir(self, path: Path) ->
+    def scandir(self, path: Path) -> list[Entry]:
         """
         Returns only those entries in a directory whose name doesn't match any of the exclusion rules.
 
```
md2conf/mermaid.py
CHANGED
md2conf/processor.py
CHANGED
```diff
@@ -1,7 +1,7 @@
 """
 Publish Markdown files to Confluence wiki.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
@@ -10,7 +10,7 @@ import hashlib
 import logging
 import os
 from pathlib import Path
-from typing import
+from typing import Optional
 
 from .converter import (
     ConfluenceDocument,
@@ -60,7 +60,7 @@ class Processor:
         LOGGER.info("Synchronizing directory: %s", local_dir)
 
         # Step 1: build index of all page metadata
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata] = {}
         self._index_directory(local_dir, page_metadata)
         LOGGER.info("Indexed %d page(s)", len(page_metadata))
 
@@ -83,7 +83,7 @@ class Processor:
         self,
         path: Path,
         root_dir: Path,
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata],
     ) -> None:
         "Processes a single Markdown file."
 
@@ -95,7 +95,7 @@ class Processor:
     def _index_directory(
         self,
         local_dir: Path,
-        page_metadata:
+        page_metadata: dict[Path, ConfluencePageMetadata],
     ) -> None:
         "Indexes Markdown files in a directory recursively."
 
@@ -103,8 +103,8 @@ class Processor:
 
         matcher = Matcher(MatcherOptions(source=".mdignore", extension="md"), local_dir)
 
-        files:
-        directories:
+        files: list[Path] = []
+        directories: list[Path] = []
         for entry in os.scandir(local_dir):
             if matcher.is_excluded(entry.name, entry.is_dir()):
                 continue
```
md2conf/properties.py
CHANGED
```diff
@@ -1,13 +1,13 @@
 """
 Publish Markdown files to Confluence wiki.
 
-Copyright 2022-2024, Levente Hunyadi
+Copyright 2022-2025, Levente Hunyadi
 
 :see: https://github.com/hunyadi/md2conf
 """
 
 import os
-from typing import
+from typing import Optional
 
 
 class ConfluenceError(RuntimeError):
@@ -20,7 +20,7 @@ class ConfluenceProperties:
     space_key: str
     user_name: Optional[str]
     api_key: str
-    headers: Optional[
+    headers: Optional[dict[str, str]]
 
     def __init__(
         self,
@@ -29,7 +29,7 @@ class ConfluenceProperties:
         user_name: Optional[str] = None,
         api_key: Optional[str] = None,
         space_key: Optional[str] = None,
-        headers: Optional[
+        headers: Optional[dict[str, str]] = None,
     ) -> None:
         opt_domain = domain or os.getenv("CONFLUENCE_DOMAIN")
         opt_base_path = base_path or os.getenv("CONFLUENCE_PATH")
```
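`ConfluenceProperties` now types the optional `headers` argument as `dict[str, str]`, the same shape the `--headers` command-line option produces. A minimal usage sketch follows; the keyword names are taken from the hunks above, the values are placeholders, and it assumes that anything not passed (such as the base path) falls back to environment variables like `CONFLUENCE_PATH` or a default, as the constructor suggests.

```python
from md2conf.properties import ConfluenceProperties

# Placeholder credentials for illustration only.
properties = ConfluenceProperties(
    domain="example.atlassian.net",
    user_name="user@example.com",
    api_key="<api-token>",
    space_key="DOCS",
    headers={"X-Forwarded-For": "10.0.0.1"},  # custom headers applied to Confluence API requests
)
```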
md2conf/util.py
CHANGED
markdown_to_confluence-0.2.7.dist-info/RECORD
REMOVED
```diff
@@ -1,21 +0,0 @@
-md2conf/__init__.py,sha256=U8zdop7-AIrfwCYzWiwKfhCEPF_1QEKPt4Zwq-38LlU,402
-md2conf/__main__.py,sha256=6iOI28W_d71tlnCMFpZwvkBmBt5-HazlZsz69gS4Oak,6894
-md2conf/api.py,sha256=NmAbNWTrTSi2ZDGYymy70Fw6HcgrmB-Ua4re4yLJvVc,17715
-md2conf/application.py,sha256=-kFpMRtSpQUU1hsiW5O73gL1X9McQWpvyAAEUxEnpuU,8869
-md2conf/converter.py,sha256=S8Kka35Y99w0J00CYi-DQwsKzlHAvBfaSCf10mb1FZk,36596
-md2conf/emoji.py,sha256=w9oiOIxzObAE7HTo3f6aETT1_D3t3yZwr88ynU4ENm0,1924
-md2conf/entities.dtd,sha256=M6NzqL5N7dPs_eUA_6sDsiSLzDaAacrx9LdttiufvYU,30215
-md2conf/matcher.py,sha256=mYMltZOLypK4O-SJugLgicOwUMem67hiNLg_kPFoJkU,3583
-md2conf/mermaid.py,sha256=gqA6Hg6WcPDdR7JOClezAgNZj2Gq4pXJSgmOUlUt6Dk,2192
-md2conf/processor.py,sha256=E-Na-a8tNp4CaoRPA5etcXdHXNRdgyMrf6bfKa9P7O4,4781
-md2conf/properties.py,sha256=iVIc0h0XtS3Y2LCywX1C9cvmVQ0WljOMt8pl2MDMVCI,1990
-md2conf/puppeteer-config.json,sha256=-dMTAN_7kNTGbDlfXzApl0KJpAWna9YKZdwMKbpOb60,159
-md2conf/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-md2conf/util.py,sha256=ftf60MiW7S7rW45ipWX6efP_Sv2F2qpyIDHrGA0cBiw,743
-markdown_to_confluence-0.2.7.dist-info/LICENSE,sha256=Pv43so2bPfmKhmsrmXFyAvS7M30-1i1tzjz6-dfhyOo,1077
-markdown_to_confluence-0.2.7.dist-info/METADATA,sha256=76K_O_5b__MnKT7FuLXgCHX6hR5dZio3mK6RWR4DyCA,13551
-markdown_to_confluence-0.2.7.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
-markdown_to_confluence-0.2.7.dist-info/entry_points.txt,sha256=F1zxa1wtEObtbHS-qp46330WVFLHdMnV2wQ-ZorRmX0,50
-markdown_to_confluence-0.2.7.dist-info/top_level.txt,sha256=_FJfl_kHrHNidyjUOuS01ngu_jDsfc-ZjSocNRJnTzU,8
-markdown_to_confluence-0.2.7.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
-markdown_to_confluence-0.2.7.dist-info/RECORD,,
```
{markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/LICENSE
RENAMED
File without changes
{markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/entry_points.txt
RENAMED
File without changes
{markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/top_level.txt
RENAMED
File without changes
{markdown_to_confluence-0.2.7.dist-info → markdown_to_confluence-0.3.0.dist-info}/zip-safe
RENAMED
File without changes