updates2mqtt 1.7.3__tar.gz → 1.8.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: updates2mqtt
3
- Version: 1.7.3
3
+ Version: 1.8.0
4
4
  Summary: System update and docker image notification and execution over MQTT
5
5
  Keywords: mqtt,docker,oci,container,updates,automation,home-assistant,homeassistant,selfhosting
6
6
  Author: jey burrows
@@ -72,7 +72,7 @@ Read the release notes, and optionally click *Update* to trigger a Docker *pull*
72
72
 
73
73
  Updates2MQTT periodically checks for new versions of components and publishes new version info to MQTT. Home Assistant auto discovery is supported, so all updates can be seen in the same place as Home Assistant's own components and add-ons.
74
74
 
75
- Currently only Docker containers are supported, either via an image registry check (using either v1 Docker APIs or the OCI v2 API), or a git repo for source (see [Local Builds](local_builds.md)), with specific handling for Docker, Github Container Registry, Gitlab, Codeberg, Microsoft Container Registry and LinuxServer Registry, with adaptive behaviour to cope with most
75
+ Currently only Docker containers are supported, either via an image registry check (using either the v1 Docker API or the OCI v2 API) or a git repo for source (see [Local Builds](local_builds.md)), with specific handling for Docker, GitHub Container Registry, GitLab, Codeberg, Microsoft Container Registry, Quay and LinuxServer Registry, with adaptive behaviour to cope with most
76
76
  others. The design is modular, so other update sources can be added, at least for notification. The next anticipated source is **apt** for Debian-based systems.
77
77
 
78
78
  Components can also be updated, either automatically or triggered via MQTT, for example by hitting the *Install* button in the HomeAssistant update dialog. Icons and release notes can be specified for a better HA experience. See [Home Assistant Integration](home_assistant.md) for details.
@@ -91,6 +91,9 @@ or without Docker, using [uv](https://docs.astral.sh/uv/)
91
91
  export MQTT_HOST=192.168.1.1;export MQTT_USER=user1;export MQTT_PASS=user1;uv run --with updates2mqtt python -m updates2mqtt
92
92
  ```
93
93
 
94
+ It also comes with a basic command line tool that will perform the update analysis for a single running container, or fetch
95
+ manifests, JSON blobs and lists of tags from remote registries (known to work with GitHub, GitLab, Codeberg, Quay, LSCR and Microsoft MCR).
96
+
94
97
  ## Release Support
95
98
 
96
99
  Presently only Docker containers are supported, although others are planned, probably with priority for `apt`.
@@ -128,7 +131,7 @@ restarter:
128
131
  While `updates2mqtt` will discover and monitor all containers running under the Docker daemon,
129
132
  there are some options you can set on those containers to tune how it works.
130
133
 
131
- These happen by adding environment variables to the containers, typically inside an `.env`
134
+ These happen by adding environment variables or Docker labels to the containers, typically inside an `.env`
132
135
  file, or as `environment` options inside `docker-compose.yaml`.
133
136
 
134
137
  ### Automated updates
@@ -148,45 +151,6 @@ restarter:
148
151
  Automated updates can also apply to local builds, where a `git_repo_path` has been defined - if there are remote
149
152
  commits available to pull, then a `git pull`, `docker compose build` and `docker compose up` will be executed.
150
153
 
151
- ### Environment Variables
152
-
153
- The following environment variables can be used to configure containers for `updates2mqtt`:
154
-
155
- | Env Var | Description | Default |
156
- |----------------------------|----------------------------------------------------------------------------------------------|-----------------|
157
- | `UPD2MQTT_UPDATE` | Update mode, either `Passive` or `Auto`. If `Auto`, updates will be installed automatically. | `Passive` |
158
- | `UPD2MQTT_PICTURE` | URL to an icon to use in Home Assistant. | Docker logo URL |
159
- | `UPD2MQTT_RELNOTES` | URL to release notes for the package. | |
160
- | `UPD2MQTT_GIT_REPO_PATH` | Relative path to a local git repo if the image is built locally. | |
161
- | `UPD2MQTT_IGNORE` | If set to `True`, the container will be ignored by Updates2MQTT. | False |
162
- | |
163
- | `UPD2MQTT_VERSION_POLICY` | Change how version derived from container label or image hash, `Version`,`Digest`,`Version_Digest` with default of `Auto`|
164
- | `UPD2MQTT_REGISTRY_TOKEN` | Access token for authentication to container distribution API, as alternative to making a call to `token` service |
165
-
166
- ### Docker Labels
167
-
168
- Alternatively, use Docker labels
169
-
170
- | Label | Env Var |
171
- |--------------------------------|----------------------------|
172
- | `updates2mqtt.update` | `UPD2MQTT_UPDATE` |
173
- | `updates2mqtt.picture` | `UPD2MQTT_PCITURE` |
174
- | `updates2mqtt.relnotes` | `UPD2MQTT_RELNOTES` |
175
- | `updates2mqtt.git_repo_path` | `UPD2MQTT_GIT_REPO_PATH` |
176
- | `updates2mqtt.ignore` | `UPD2MQTT_IGNORE` |
177
- | `updates2mqtt.version_policy` | `UPD2MQTT_VERSION_POLICY` |
178
- | `updates2mqtt.registry_token` | `UPD2MQTT_REGISTRY_TOKEN` |
179
-
180
-
181
-
182
- ```yaml title="Example Compose Snippet"
183
- restarter:
184
- image: docker:cli
185
- command: ["/bin/sh", "-c", "while true; do sleep 86400; docker restart mailserver; done"]
186
- labels:
187
- updates2mqtt.relnotes: https://component.my.com/release_notes
188
- ```
189
-
190
154
 
191
155
  ## Related Projects
192
156
 
@@ -196,7 +160,7 @@ Other apps useful for self-hosting with the help of MQTT:
196
160
 
197
161
  Find more at [awesome-mqtt](https://github.com/rhizomatics/awesome-mqtt)
198
162
 
199
- For a more powerful Docker update manager, try [What's Up Docker](https://getwud.github.io/wud/)
163
+ For a more powerful Docker-focussed update manager, try [What's Up Docker](https://getwud.github.io/wud/)
200
164
 
201
165
  ## Development
202
166
 
@@ -34,7 +34,7 @@ Read the release notes, and optionally click *Update* to trigger a Docker *pull*
34
34
 
35
35
  Updates2MQTT periodically checks for new versions of components and publishes new version info to MQTT. Home Assistant auto discovery is supported, so all updates can be seen in the same place as Home Assistant's own components and add-ons.
36
36
 
37
- Currently only Docker containers are supported, either via an image registry check (using either v1 Docker APIs or the OCI v2 API), or a git repo for source (see [Local Builds](local_builds.md)), with specific handling for Docker, Github Container Registry, Gitlab, Codeberg, Microsoft Container Registry and LinuxServer Registry, with adaptive behaviour to cope with most
37
+ Currently only Docker containers are supported, either via an image registry check (using either the v1 Docker API or the OCI v2 API) or a git repo for source (see [Local Builds](local_builds.md)), with specific handling for Docker, GitHub Container Registry, GitLab, Codeberg, Microsoft Container Registry, Quay and LinuxServer Registry, with adaptive behaviour to cope with most
38
38
  others. The design is modular, so other update sources can be added, at least for notification. The next anticipated source is **apt** for Debian-based systems.
39
39
 
40
40
  Components can also be updated, either automatically or triggered via MQTT, for example by hitting the *Install* button in the HomeAssistant update dialog. Icons and release notes can be specified for a better HA experience. See [Home Assistant Integration](home_assistant.md) for details.
@@ -53,6 +53,9 @@ or without Docker, using [uv](https://docs.astral.sh/uv/)
53
53
  export MQTT_HOST=192.168.1.1;export MQTT_USER=user1;export MQTT_PASS=user1;uv run --with updates2mqtt python -m updates2mqtt
54
54
  ```
55
55
 
56
+ It also comes with a basic command line tool that will perform the update analysis for a single running container, or fetch
57
+ manifests, JSON blobs and lists of tags from remote registries (known to work with GitHub, GitLab, Codeberg, Quay, LSCR and Microsoft MCR).
58
+
56
59
  ## Release Support
57
60
 
58
61
  Presently only Docker containers are supported, although others are planned, probably with priority for `apt`.
@@ -90,7 +93,7 @@ restarter:
90
93
  While `updates2mqtt` will discover and monitor all containers running under the Docker daemon,
91
94
  there are some options you can set on those containers to tune how it works.
92
95
 
93
- These happen by adding environment variables to the containers, typically inside an `.env`
96
+ These happen by adding environment variables or Docker labels to the containers, typically inside an `.env`
94
97
  file, or as `environment` options inside `docker-compose.yaml`.
95
98
 
96
99
  ### Automated updates
@@ -110,45 +113,6 @@ restarter:
110
113
  Automated updates can also apply to local builds, where a `git_repo_path` has been defined - if there are remote
111
114
  commits available to pull, then a `git pull`, `docker compose build` and `docker compose up` will be executed.
112
115
 
113
- ### Environment Variables
114
-
115
- The following environment variables can be used to configure containers for `updates2mqtt`:
116
-
117
- | Env Var | Description | Default |
118
- |----------------------------|----------------------------------------------------------------------------------------------|-----------------|
119
- | `UPD2MQTT_UPDATE` | Update mode, either `Passive` or `Auto`. If `Auto`, updates will be installed automatically. | `Passive` |
120
- | `UPD2MQTT_PICTURE` | URL to an icon to use in Home Assistant. | Docker logo URL |
121
- | `UPD2MQTT_RELNOTES` | URL to release notes for the package. | |
122
- | `UPD2MQTT_GIT_REPO_PATH` | Relative path to a local git repo if the image is built locally. | |
123
- | `UPD2MQTT_IGNORE` | If set to `True`, the container will be ignored by Updates2MQTT. | False |
124
- | |
125
- | `UPD2MQTT_VERSION_POLICY` | Change how version derived from container label or image hash, `Version`,`Digest`,`Version_Digest` with default of `Auto`|
126
- | `UPD2MQTT_REGISTRY_TOKEN` | Access token for authentication to container distribution API, as alternative to making a call to `token` service |
127
-
128
- ### Docker Labels
129
-
130
- Alternatively, use Docker labels
131
-
132
- | Label | Env Var |
133
- |--------------------------------|----------------------------|
134
- | `updates2mqtt.update` | `UPD2MQTT_UPDATE` |
135
- | `updates2mqtt.picture` | `UPD2MQTT_PCITURE` |
136
- | `updates2mqtt.relnotes` | `UPD2MQTT_RELNOTES` |
137
- | `updates2mqtt.git_repo_path` | `UPD2MQTT_GIT_REPO_PATH` |
138
- | `updates2mqtt.ignore` | `UPD2MQTT_IGNORE` |
139
- | `updates2mqtt.version_policy` | `UPD2MQTT_VERSION_POLICY` |
140
- | `updates2mqtt.registry_token` | `UPD2MQTT_REGISTRY_TOKEN` |
141
-
142
-
143
-
144
- ```yaml title="Example Compose Snippet"
145
- restarter:
146
- image: docker:cli
147
- command: ["/bin/sh", "-c", "while true; do sleep 86400; docker restart mailserver; done"]
148
- labels:
149
- updates2mqtt.relnotes: https://component.my.com/release_notes
150
- ```
151
-
152
116
 
153
117
  ## Related Projects
154
118
 
@@ -158,7 +122,7 @@ Other apps useful for self-hosting with the help of MQTT:
158
122
 
159
123
  Find more at [awesome-mqtt](https://github.com/rhizomatics/awesome-mqtt)
160
124
 
161
- For a more powerful Docker update manager, try [What's Up Docker](https://getwud.github.io/wud/)
125
+ For a more powerful Docker-focussed update manager, try [What's Up Docker](https://getwud.github.io/wud/)
162
126
 
163
127
  ## Development
164
128
 
@@ -7,7 +7,7 @@ authors = [
7
7
  ]
8
8
 
9
9
  requires-python = ">=3.13"
10
- version = "1.7.3"
10
+ version = "1.8.0"
11
11
  license="Apache-2.0"
12
12
  keywords=["mqtt", "docker", "oci","container","updates", "automation","home-assistant","homeassistant","selfhosting"]
13
13
 
@@ -60,7 +60,8 @@ dev = [
60
60
  "pytest-subprocess>=1.5.3",
61
61
  "coverage",
62
62
  "icdiff",
63
- "genbadge[all]"
63
+ "genbadge[all]",
64
+ "actionlint-py"
64
65
  ]
65
66
  docs=[
66
67
  "mkdocs",
@@ -74,7 +75,8 @@ docs=[
74
75
  "mkdocs-git-revision-date-localized-plugin",
75
76
  "mkdocs-meta-descriptions-plugin",
76
77
  "pngquant",
77
- "mkdocs-mermaid2-plugin"
78
+ "mkdocs-mermaid2-plugin",
79
+ "mkdocs-htmlproofer-plugin"
78
80
  ]
79
81
 
80
82
  [build-system]
@@ -56,7 +56,15 @@ class App:
56
56
  self.scan_count: int = 0
57
57
  self.last_scan: str | None = None
58
58
  if self.cfg.docker.enabled:
59
- self.scanners.append(DockerProvider(self.cfg.docker, self.cfg.node, self.self_bounce))
59
+ self.scanners.append(
60
+ DockerProvider(
61
+ self.cfg.docker,
62
+ self.cfg.node,
63
+ packages=self.cfg.packages,
64
+ github_cfg=self.cfg.github,
65
+ self_bounce=self.self_bounce,
66
+ )
67
+ )
60
68
  self.stopped = Event()
61
69
  self.healthcheck_topic = self.cfg.node.healthcheck.topic_template.format(node_name=self.cfg.node.name)
62
70
 
@@ -71,9 +79,6 @@ class App:
71
79
  session = uuid.uuid4().hex
72
80
  for scanner in self.scanners:
73
81
  slog = log.bind(source_type=scanner.source_type, session=session)
74
- slog.info("Cleaning topics before scan")
75
- if self.scan_count == 0:
76
- await self.publisher.clean_topics(scanner, None, force=True)
77
82
  if self.stopped.is_set():
78
83
  break
79
84
  slog.info("Scanning ...")
@@ -84,7 +89,7 @@ class App:
84
89
  if self.stopped.is_set():
85
90
  slog.debug("Breaking scan loop on stopped event")
86
91
  break
87
- await self.publisher.clean_topics(scanner, session, force=False)
92
+ await self.publisher.clean_topics(scanner)
88
93
  self.scan_count += 1
89
94
  slog.info(f"Scan #{self.scan_count} complete")
90
95
  self.last_scan_timestamp = datetime.now(UTC).isoformat()
@@ -4,7 +4,7 @@ import structlog
4
4
  from omegaconf import DictConfig, OmegaConf
5
5
  from rich import print_json
6
6
 
7
- from updates2mqtt.config import DockerConfig, NodeConfig, RegistryConfig
7
+ from updates2mqtt.config import DockerConfig, GitHubConfig, NodeConfig, RegistryConfig
8
8
  from updates2mqtt.helpers import Throttler
9
9
  from updates2mqtt.integrations.docker import DockerProvider
10
10
  from updates2mqtt.integrations.docker_enrich import (
@@ -24,18 +24,20 @@ log = structlog.get_logger()
24
24
  """
25
25
  Super simple CLI
26
26
 
27
- python updates2mqtt.cli container=frigate
27
+ Command can be `container`,`tags`,`manifest` or `blob`
28
28
 
29
- python updates2mqtt.cli container=frigate api=docker_client log_level=DEBUG
29
+ * `container=container-name`
30
+ * `container=hash`
31
+ * `tags=ghcr.io/`
32
+ * `manifest=mcr.microsoft.com/dotnet/sdk:latest`
33
+ * `tags=quay.io/linuxserver.io/babybuddy`
34
+ * `blob=ghcr.io/blakeblackshear/frigate@sha256:759c36ee869e3e60258350a2e221eae1a4ba1018613e0334f1bc84eb09c4bbbc`
30
35
 
31
- ython3 updates2mqtt/cli.py blob=ghcr.io/homarr-labs/homarr@sha256:af79a3339de5ed8ef7f5a0186ff3deb86f40b213ba75249291f2f68aef082a25 | jq '.config.Labels'
36
+ In addition, a `log_level=DEBUG` or other level can be added, `github_token` to supply a personal access
37
+ token for GitHub release info retrieval, or `api=docker_client` to use the older API (defaults to `api=OCI_V2`)
32
38
 
33
- python3 updates2mqtt/cli.py manifest=ghcr.io/blakeblackshear/frigate:stable
34
39
 
35
- python3 updates2mqtt/cli.py blob=ghcr.io/blakeblackshear/frigate@sha256:ef8d56a7d50b545af176e950ce328aec7f0b7bc5baebdca189fe661d97924980
36
-
37
- python3 updates2mqtt/cli.py manifest=ghcr.io/blakeblackshear/frigate@sha256:c68fd78fd3237c9ba81b5aa927f17b54f46705990f43b4b5d5596cfbbb626af4
38
- """ # noqa: E501
40
+ """
39
41
 
40
42
  OCI_MANIFEST_TYPES: list[str] = [
41
43
  "application/vnd.oci.image.manifest.v1+json",
@@ -91,7 +93,9 @@ ALL_OCI_MEDIA_TYPES: list[str] = (
91
93
  )
92
94
 
93
95
 
94
- def dump_url(doc_type: str, img_ref: str) -> None:
96
+ def dump_url(doc_type: str, img_ref: str, cli_conf: DictConfig) -> None:
97
+ structlog.configure(wrapper_class=structlog.make_filtering_bound_logger(cli_conf.get("log_level", "WARNING")))
98
+
95
99
  lookup = ContainerDistributionAPIVersionLookup(Throttler(), RegistryConfig())
96
100
  img_info = DockerImageInfo(img_ref)
97
101
  if not img_info.index_name or not img_info.name:
@@ -110,35 +114,49 @@ def dump_url(doc_type: str, img_ref: str) -> None:
110
114
  log.warning("No tag or digest found in %s", img_ref)
111
115
  return
112
116
  url = f"https://{api_host}/v2/{img_info.name}/manifests/{img_info.tag_or_digest}"
117
+ elif doc_type == "tags":
118
+ url = f"https://{api_host}/v2/{img_info.name}/tags/list"
113
119
  else:
114
120
  return
115
121
 
116
122
  token: str | None = lookup.fetch_token(img_info.index_name, img_info.name)
117
123
 
118
124
  response: Response | None = fetch_url(url, bearer_token=token, follow_redirects=True, response_type=ALL_OCI_MEDIA_TYPES)
119
- if response:
125
+ if response and response.is_error:
126
+ log.warning(f"{response.status_code}: {url}")
127
+ log.warning(response.text)
128
+ elif response and response.is_success:
120
129
  log.debug(f"{response.status_code}: {url}")
121
130
  log.debug("HEADERS")
122
131
  for k, v in response.headers.items():
123
132
  log.debug(f"{k}: {v}")
124
133
  log.debug("CONTENTS")
134
+
125
135
  print_json(response.text)
126
136
 
127
137
 
128
138
  def main() -> None:
129
139
  # will be a proper cli someday
130
140
  cli_conf: DictConfig = OmegaConf.from_cli()
131
- structlog.configure(wrapper_class=structlog.make_filtering_bound_logger(cli_conf.get("log_level", "WARNING")))
132
141
 
133
142
  if cli_conf.get("blob"):
134
- dump_url("blob", cli_conf.get("blob"))
143
+ dump_url("blob", cli_conf.get("blob"), cli_conf)
135
144
  elif cli_conf.get("manifest"):
136
- dump_url("manifest", cli_conf.get("manifest"))
145
+ dump_url("manifest", cli_conf.get("manifest"), cli_conf)
146
+ elif cli_conf.get("tags"):
147
+ dump_url("tags", cli_conf.get("tags"), cli_conf)
137
148
 
138
149
  else:
150
+ structlog.configure(wrapper_class=structlog.make_filtering_bound_logger(cli_conf.get("log_level", "INFO")))
151
+
139
152
  docker_scanner = DockerProvider(
140
- DockerConfig(registry=RegistryConfig(api=cli_conf.get("api", "OCI_V2"))), NodeConfig(), None
153
+ DockerConfig(registry=RegistryConfig(api=cli_conf.get("api", "OCI_V2"))),
154
+ NodeConfig(),
155
+ packages={},
156
+ github_cfg=GitHubConfig(access_token=cli_conf.get("github_token")),
157
+ self_bounce=None,
141
158
  )
159
+ docker_scanner.initialize()
142
160
  discovery: Discovery | None = docker_scanner.rescan(
143
161
  Discovery(docker_scanner, cli_conf.get("container", "frigate"), "cli", "manual")
144
162
  )
@@ -67,6 +67,11 @@ class MqttConfig:
67
67
  protocol: str = "${oc.env:MQTT_VERSION,3.11}"
68
68
 
69
69
 
70
+ @dataclass
71
+ class GitHubConfig:
72
+ access_token: str | None = None
73
+
74
+
70
75
  @dataclass
71
76
  class MetadataSourceConfig:
72
77
  enabled: bool = True
@@ -86,6 +91,24 @@ class VersionPolicy(StrEnum):
86
91
  VERSION_DIGEST = "VERSION_DIGEST"
87
92
 
88
93
 
94
+ @dataclass
95
+ class DockerPackageUpdateInfo:
96
+ image_name: str = MISSING # untagged image ref
97
+
98
+
99
+ @dataclass
100
+ class PackageUpdateInfo:
101
+ docker: DockerPackageUpdateInfo | None = field(default_factory=DockerPackageUpdateInfo)
102
+ logo_url: str | None = None
103
+ release_notes_url: str | None = None
104
+ source_repo_url: str | None = None
105
+
106
+
107
+ @dataclass
108
+ class UpdateInfoConfig:
109
+ common_packages: dict[str, PackageUpdateInfo] = field(default_factory=lambda: {})
110
+
111
+
89
112
  @dataclass
90
113
  class DockerConfig:
91
114
  enabled: bool = True
@@ -149,25 +172,9 @@ class Config:
149
172
  mqtt: MqttConfig = field(default_factory=MqttConfig) # pyright: ignore[reportArgumentType, reportCallIssue]
150
173
  homeassistant: HomeAssistantConfig = field(default_factory=HomeAssistantConfig)
151
174
  docker: DockerConfig = field(default_factory=DockerConfig)
175
+ github: GitHubConfig = field(default_factory=GitHubConfig)
152
176
  scan_interval: int = 60 * 60 * 3
153
-
154
-
155
- @dataclass
156
- class DockerPackageUpdateInfo:
157
- image_name: str = MISSING # untagged image ref
158
-
159
-
160
- @dataclass
161
- class PackageUpdateInfo:
162
- docker: DockerPackageUpdateInfo | None = field(default_factory=DockerPackageUpdateInfo)
163
- logo_url: str | None = None
164
- release_notes_url: str | None = None
165
- source_repo_url: str | None = None
166
-
167
-
168
- @dataclass
169
- class UpdateInfoConfig:
170
- common_packages: dict[str, PackageUpdateInfo] = field(default_factory=lambda: {})
177
+ packages: dict[str, PackageUpdateInfo] = field(default_factory=dict)
171
178
 
172
179
 
173
180
  class IncompleteConfigException(BaseException):
@@ -188,13 +195,13 @@ def load_app_config(conf_file_path: Path, return_invalid: bool = False) -> Confi
188
195
  try:
189
196
  log.debug(f"Creating config directory {conf_file_path.parent} if not already present")
190
197
  conf_file_path.parent.mkdir(parents=True, exist_ok=True)
191
- except Exception:
192
- log.warning("Unable to create config directory", path=conf_file_path.parent)
198
+ except Exception as e:
199
+ log.warning("Unable to create config directory: %s", e, path=conf_file_path.parent)
193
200
  try:
194
201
  conf_file_path.write_text(OmegaConf.to_yaml(base_cfg))
195
202
  log.info(f"Auto-generated a new config file at {conf_file_path}")
196
- except Exception:
197
- log.warning("Unable to write config file", path=conf_file_path)
203
+ except Exception as e:
204
+ log.warning("Unable to write config file: %s", e, path=conf_file_path)
198
205
  cfg = base_cfg
199
206
  else:
200
207
  cfg = base_cfg
@@ -145,8 +145,8 @@ class APIStats:
145
145
  """Log line friendly string summary"""
146
146
  return (
147
147
  f"fetches: {self.fetches}, cache ratio: {self.hit_ratio():.2%}, revalidated: {self.revalidated}, "
148
- + f"errors: {', '.join(f'{status_code}:{fails}' for status_code, fails in self.failed.items())}, "
149
- + f"oldest cache hit: {self.max_cache_age:.2f}, avg elapsed: {self.average_elapsed()}"
148
+ + f"errors: {', '.join(f'{status_code}:{fails}' for status_code, fails in self.failed.items()) or '0'}, "
149
+ + f"oldest cache hit: {self.max_cache_age:.2f}s, avg elapsed: {self.average_elapsed()}s"
150
150
  )
151
151
 
152
152
 
@@ -196,8 +196,9 @@ def fetch_url(
196
196
  allow_stale=allow_stale,
197
197
  )
198
198
  )
199
+ log_headers: list[tuple[str, str]] = [h for h in headers if len(h) > 1 and h[0] != "Authorization"]
199
200
  with SyncCacheClient(headers=headers, follow_redirects=follow_redirects, policy=cache_policy) as client:
200
- log.debug(f"Fetching URL {url}, redirects={follow_redirects}, headers={headers}, cache_ttl={cache_ttl}")
201
+ log.debug(f"Fetching URL {url}, redirects={follow_redirects}, headers={log_headers}, cache_ttl={cache_ttl}")
201
202
  response: Response = client.request(method=method, url=url, extensions={"hishel_ttl": cache_ttl})
202
203
  cache_metadata: CacheMetadata = CacheMetadata(response)
203
204
  if not response.is_success:
@@ -221,6 +222,45 @@ def fetch_url(
221
222
  return None
222
223
 
223
224
 
224
- def validate_url(url: str, cache_ttl: int = 300) -> bool:
225
+ def validate_url(url: str, cache_ttl: int = 1500) -> bool:
225
226
  response: Response | None = fetch_url(url, method="HEAD", cache_ttl=cache_ttl, follow_redirects=True)
226
227
  return response is not None and response.status_code != 404
228
+
229
+
230
+ def sanitize_name(name: str, replacement: str = "_", max_len: int = 64) -> str:
231
+ """Strict sanitization that removes/replaces common problematic characters for MQTT or HA
232
+
233
+ - Replaces spaces with underscores
234
+ - Removes control characters
235
+ - Ensures alphanumeric safety for broader compatibility
236
+
237
+ Args:
238
+ name: The topic component string to sanitize
239
+ replacement: Character to replace invalid characters with (default: "_")
240
+ max_len: Largest acceptable name size
241
+
242
+ Returns:
243
+ Sanitized topic string safe for most MQTT brokers
244
+
245
+ """
246
+ if not name:
247
+ raise ValueError("Name cannot be empty")
248
+ orig_name: str = name
249
+ name = re.sub(r"[^A-Za-z0-9_\-\.]+", replacement, name)
250
+
251
+ # Replace multiple consecutive replacement chars with single one
252
+ if replacement:
253
+ pattern = re.escape(replacement) + "+"
254
+ name = re.sub(pattern, replacement, name)
255
+
256
+ # Trim to max length
257
+ topic_bytes = name.encode("utf-8")
258
+ if len(topic_bytes) > max_len:
259
+ name = topic_bytes[:max_len].decode("utf-8", errors="ignore")
260
+
261
+ if not name:
262
+ raise ValueError("Topic became empty after sanitization")
263
+ if name != orig_name:
264
+ log.info("Component name %s changed to %s for MQTT/HA compatibility", orig_name, name)
265
+
266
+ return name
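For illustration, the sanitization logic added in the hunk above can be exercised standalone. This is a sketch, not the packaged code: it uses only the stdlib `re` module and omits the diff's structlog call and `orig_name` comparison, but follows the same replace/collapse/byte-trim steps:

```python
import re

def sanitize_name(name: str, replacement: str = "_", max_len: int = 64) -> str:
    """Sketch of the 1.8.0 sanitizer: keep [A-Za-z0-9_.-], collapse runs, trim bytes."""
    if not name:
        raise ValueError("Name cannot be empty")
    # Replace every run of disallowed characters with the replacement character
    name = re.sub(r"[^A-Za-z0-9_\-\.]+", replacement, name)
    # Collapse consecutive replacement characters into a single one
    if replacement:
        name = re.sub(re.escape(replacement) + "+", replacement, name)
    # Trim to max_len bytes without splitting a multi-byte UTF-8 sequence
    encoded = name.encode("utf-8")
    if len(encoded) > max_len:
        name = encoded[:max_len].decode("utf-8", errors="ignore")
    if not name:
        raise ValueError("Topic became empty after sanitization")
    return name

print(sanitize_name("my app/v2"))     # my_app_v2
print(sanitize_name("héllo wörld"))   # h_llo_w_rld
```

Note that pre-existing underscores adjacent to replaced characters also collapse, e.g. `a__b` becomes `a_b`.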
@@ -19,6 +19,7 @@ from updates2mqtt.config import (
19
19
  UNKNOWN_VERSION,
20
20
  VERSION_RE,
21
21
  DockerConfig,
22
+ GitHubConfig,
22
23
  NodeConfig,
23
24
  PackageUpdateInfo,
24
25
  PublishPolicy,
@@ -127,6 +128,8 @@ class DockerProvider(ReleaseProvider):
127
128
  self,
128
129
  cfg: DockerConfig,
129
130
  node_cfg: NodeConfig,
131
+ packages: dict[str, PackageUpdateInfo] | None = None,
132
+ github_cfg: GitHubConfig | None = None,
130
133
  self_bounce: Event | None = None,
131
134
  ) -> None:
132
135
  super().__init__(node_cfg, "docker")
@@ -136,8 +139,9 @@ class DockerProvider(ReleaseProvider):
136
139
  # TODO: refresh discovered packages periodically
137
140
  self.throttler = Throttler(self.cfg.default_api_backoff, self.log, self.stopped)
138
141
  self.self_bounce: Event | None = self_bounce
142
+
139
143
  self.pkg_enrichers: list[PackageEnricher] = [
140
- CommonPackageEnricher(self.cfg),
144
+ CommonPackageEnricher(self.cfg, packages),
141
145
  LinuxServerIOPackageEnricher(self.cfg),
142
146
  DefaultPackageEnricher(self.cfg),
143
147
  ]
@@ -145,12 +149,13 @@ class DockerProvider(ReleaseProvider):
145
149
  self.client, self.throttler, self.cfg.registry, self.cfg.default_api_backoff
146
150
  )
147
151
  self.registry_image_lookup = ContainerDistributionAPIVersionLookup(self.throttler, self.cfg.registry)
148
- self.release_enricher = SourceReleaseEnricher()
152
+ self.release_enricher = SourceReleaseEnricher(github_cfg)
149
153
  self.local_info_builder = LocalContainerInfo()
150
154
 
151
155
  def initialize(self) -> None:
152
156
  for enricher in self.pkg_enrichers:
153
157
  enricher.initialize()
158
+ self.log.debug("Docker provider initialized")
154
159
 
155
160
  def update(self, discovery: Discovery) -> bool:
156
161
  logger: Any = self.log.bind(container=discovery.name, action="update")
@@ -267,11 +272,11 @@ class DockerProvider(ReleaseProvider):
267
272
  )
268
273
 
269
274
  def rescan(self, discovery: Discovery) -> Discovery | None:
270
- logger = self.log.bind(container=discovery.name, action="rescan")
275
+ logger: Any = self.log.bind(container=discovery.name, action="rescan")
271
276
  try:
272
277
  c: Container = self.client.containers.get(discovery.name)
273
278
  if c:
274
- rediscovery = self.analyze(c, discovery.session, previous_discovery=discovery)
279
+ rediscovery: Discovery | None = self.analyze(c, discovery.session, previous_discovery=discovery)
275
280
  if rediscovery and not rediscovery.throttled:
276
281
  self.discoveries[rediscovery.name] = rediscovery
277
282
  return rediscovery
@@ -23,21 +23,32 @@ from updates2mqtt.config import (
23
23
  PKG_INFO_FILE,
24
24
  DockerConfig,
25
25
  DockerPackageUpdateInfo,
26
+ GitHubConfig,
26
27
  PackageUpdateInfo,
27
28
  RegistryConfig,
28
29
  UpdateInfoConfig,
29
30
  )
30
31
 
31
- log = structlog.get_logger()
32
+ log: Any = structlog.get_logger()
32
33
 
33
34
  SOURCE_PLATFORM_GITHUB = "GitHub"
34
35
  SOURCE_PLATFORM_CODEBERG = "CodeBerg"
35
- SOURCE_PLATFORMS = {SOURCE_PLATFORM_GITHUB: r"https://github.com/.*"}
36
+ SOURCE_PLATFORM_GITLAB = "GitLab"
37
+ SOURCE_PLATFORMS = {
38
+ SOURCE_PLATFORM_GITHUB: r"https://github.com/.*",
39
+ SOURCE_PLATFORM_GITLAB: r"https://gitlab.com/.*",
40
+ SOURCE_PLATFORM_CODEBERG: r"https://codeberg.org/.*",
41
+ }
36
42
  DIFF_URL_TEMPLATES = {
37
43
  SOURCE_PLATFORM_GITHUB: "{repo}/commit/{revision}",
38
44
  }
39
- RELEASE_URL_TEMPLATES = {SOURCE_PLATFORM_GITHUB: "{repo}/releases/tag/{version}"}
40
- UNKNOWN_RELEASE_URL_TEMPLATES = {SOURCE_PLATFORM_GITHUB: "{repo}/releases"}
45
+ RELEASE_URL_TEMPLATES = {
46
+ SOURCE_PLATFORM_GITHUB: "{repo}/releases/tag/{version}",
47
+ }
48
+ UNKNOWN_RELEASE_URL_TEMPLATES = {
49
+ SOURCE_PLATFORM_GITHUB: "{repo}/releases",
50
+ SOURCE_PLATFORM_GITLAB: "{repo}/container_registry",
51
+ }
41
52
  MISSING_VAL = "**MISSING**"
42
53
  UNKNOWN_REGISTRY = "**UNKNOWN_REGISTRY**"
43
54
 
@@ -45,18 +56,27 @@ HEADER_DOCKER_DIGEST = "docker-content-digest"
45
56
  HEADER_DOCKER_API = "docker-distribution-api-version"
46
57
 
47
58
  TOKEN_URL_TEMPLATE = "https://{auth_host}/token?scope=repository:{image_name}:pull&service={service}" # noqa: S105 # nosec
48
- REGISTRIES = {
49
- # registry: (auth_host, api_host, service, url_template)
50
- "docker.io": ("auth.docker.io", "registry-1.docker.io", "registry.docker.io", TOKEN_URL_TEMPLATE),
51
- "mcr.microsoft.com": (None, "mcr.microsoft.com", "mcr.microsoft.com", TOKEN_URL_TEMPLATE),
52
- "ghcr.io": ("ghcr.io", "ghcr.io", "ghcr.io", TOKEN_URL_TEMPLATE),
53
- "lscr.io": ("ghcr.io", "lscr.io", "ghcr.io", TOKEN_URL_TEMPLATE),
54
- "codeberg.org": ("codeberg.org", "codeberg.org", "container_registry", TOKEN_URL_TEMPLATE),
59
+
60
+ REGISTRIES: dict[str, tuple[str | None, str, str, str | None, str | None]] = {
61
+ # registry: (auth_host, api_host, service, url_template, repo_template)
62
+ "docker.io": ("auth.docker.io", "registry-1.docker.io", "registry.docker.io", TOKEN_URL_TEMPLATE, None),
63
+ "mcr.microsoft.com": (None, "mcr.microsoft.com", "mcr.microsoft.com", None, None),
64
+ "quay.io": (None, "quay.io", "quay.io", TOKEN_URL_TEMPLATE, None),
65
+ "ghcr.io": ("ghcr.io", "ghcr.io", "ghcr.io", TOKEN_URL_TEMPLATE, "https://github.com/{image_name}"),
66
+ "lscr.io": ("ghcr.io", "lscr.io", "ghcr.io", TOKEN_URL_TEMPLATE, None),
67
+ "codeberg.org": (
68
+ "codeberg.org",
69
+ "codeberg.org",
70
+ "container_registry",
71
+ TOKEN_URL_TEMPLATE,
72
+ "https://codeberg.org/{image_name}",
73
+ ),
55
74
  "registry.gitlab.com": (
56
75
  "www.gitlab.com",
57
76
  "registry.gitlab.com",
58
77
  "container_registry",
59
78
  "https://{auth_host}/jwt/auth?service={service}&scope=repository:{image_name}:pull&offline_token=true&client_id=docker",
79
+ "https://gitlab.com/{image_name}",
60
80
  ),
61
81
  }
62
82
 
@@ -190,7 +210,8 @@ class DockerImageInfo(DiscoveryArtefactDetail):
190
210
  digest = digest.split(":")[1] if ":" in digest else digest # remove digest type prefix
191
211
  return digest[0:12]
192
212
  return digest
193
- except Exception:
213
+ except Exception as e:
214
+ log.warning("Unable to condense digest %s: %s", digest, e)
194
215
  return None
195
216
 
196
217
  def reuse(self) -> "DockerImageInfo":
@@ -328,8 +349,8 @@ class LocalContainerInfo:


  class PackageEnricher:
-     def __init__(self, docker_cfg: DockerConfig) -> None:
-         self.pkgs: dict[str, PackageUpdateInfo] = {}
+     def __init__(self, docker_cfg: DockerConfig, packages: dict[str, PackageUpdateInfo] | None = None) -> None:
+         self.pkgs: dict[str, PackageUpdateInfo] = packages or {}
          self.cfg: DockerConfig = docker_cfg
          self.log: Any = structlog.get_logger().bind(integration="docker")

@@ -371,18 +392,18 @@ class DefaultPackageEnricher(PackageEnricher):
  class CommonPackageEnricher(PackageEnricher):
      def initialize(self) -> None:
          if PKG_INFO_FILE.exists():
-             log.debug("Loading common package update info", path=PKG_INFO_FILE)
+             self.log.debug("Loading common package update info", path=PKG_INFO_FILE)
              cfg = OmegaConf.load(PKG_INFO_FILE)
          else:
-             log.warn("No common package update info found", path=PKG_INFO_FILE)
+             self.log.warn("No common package update info found", path=PKG_INFO_FILE)
              cfg = OmegaConf.structured(UpdateInfoConfig)
          try:
              # omegaconf broken-ness on optional fields and converting to dataclasses
              self.pkgs: dict[str, PackageUpdateInfo] = {
-                 pkg: PackageUpdateInfo(**pkg_cfg) for pkg, pkg_cfg in cfg.common_packages.items()
+                 pkg: PackageUpdateInfo(**pkg_cfg) for pkg, pkg_cfg in cfg.common_packages.items() if pkg not in self.pkgs
              }
          except (MissingMandatoryValue, ValidationError) as e:
-             log.error("Configuration error %s", e, path=PKG_INFO_FILE.as_posix())
+             self.log.error("Configuration error %s", e, path=PKG_INFO_FILE.as_posix())
              raise

 
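Taken together, the two hunks above introduce a precedence rule: packages seeded into the enricher's constructor win over entries loaded from the common package-info file (`if pkg not in self.pkgs`). A hypothetical `merge_packages` helper illustrating that rule in isolation:

```python
from typing import Any


def merge_packages(seeded: dict[str, Any], common: dict[str, Any]) -> dict[str, Any]:
    """Seeded entries take precedence; common-file entries only fill the gaps."""
    merged = dict(seeded)
    merged.update({pkg: info for pkg, info in common.items() if pkg not in seeded})
    return merged
```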
@@ -392,7 +413,7 @@ class LinuxServerIOPackageEnricher(PackageEnricher):
          if cfg is None or not cfg.enabled:
              return

-         log.debug(f"Fetching linuxserver.io metadata from API, cache_ttl={cfg.cache_ttl}")
+         self.log.debug(f"Fetching linuxserver.io metadata from API, cache_ttl={cfg.cache_ttl}")
          response: Response | None = fetch_url(
              "https://api.linuxserver.io/api/v1/images?include_config=false&include_deprecated=false",
              cache_ttl=cfg.cache_ttl,
@@ -413,26 +434,38 @@
                  release_notes_url=f"{repo['github_url']}/releases",
              )
              added += 1
-             log.debug("Added linuxserver.io package", pkg=image_name)
-         log.info(f"Added {added} linuxserver.io package details")
+         self.log.info(f"Added {added} linuxserver.io package details")


  class SourceReleaseEnricher:
-     def __init__(self) -> None:
+     def __init__(self, gh_cfg: GitHubConfig | None = None) -> None:
          self.log: Any = structlog.get_logger().bind(integration="docker")
+         self.gh_cfg: GitHubConfig | None = gh_cfg

      def enrich(
          self, registry_info: DockerImageInfo, source_repo_url: str | None = None, notes_url: str | None = None
      ) -> ReleaseDetail | None:
-         if not registry_info.annotations and not source_repo_url and not notes_url:
-             return None
-
          detail = ReleaseDetail()

          detail.notes_url = notes_url
          detail.version = registry_info.annotations.get("org.opencontainers.image.version")
          detail.revision = registry_info.annotations.get("org.opencontainers.image.revision")
-         detail.source_url = registry_info.annotations.get("org.opencontainers.image.source") or source_repo_url
+         # explicit source_repo_url overrides container, e.g. where container source is only the docker wrapper
+         detail.source_url = source_repo_url or registry_info.annotations.get("org.opencontainers.image.source")
+
+         if detail.source_url is None and registry_info is not None and registry_info.index_name is not None:
+             registry_config: tuple[str | None, str, str, str | None, str | None] | None = REGISTRIES.get(
+                 registry_info.index_name
+             )
+             repo_template: str | None = registry_config[4] if registry_config else None
+             if repo_template:
+                 source_url = repo_template.format(image_name=registry_info.name)
+                 if validate_url(source_url, cache_ttl=86400):
+                     detail.source_url = source_url
+                     self.log.info("Implied source from registry: %s", detail.source_url)
+
+         if detail.source_url is None and detail.notes_url is None and detail.revision is None and detail.version is None:
+             return None

          if detail.source_url and "#" in detail.source_url:
              detail.source_repo_url = detail.source_url.split("#", 1)[0]
@@ -452,24 +485,60 @@
              "source": detail.source_url or MISSING_VAL,
          }

-         diff_url: str | None = DIFF_URL_TEMPLATES[detail.source_platform].format(**template_vars)
-         if diff_url and MISSING_VAL not in diff_url and validate_url(diff_url):
+         diff_url_template: str | None = DIFF_URL_TEMPLATES.get(detail.source_platform)
+         diff_url: str | None = diff_url_template.format(**template_vars) if diff_url_template else None
+         if diff_url and MISSING_VAL not in diff_url and validate_url(diff_url, cache_ttl=3600):
              detail.diff_url = diff_url
          else:
              diff_url = None

-         if detail.notes_url is None:
-             detail.notes_url = RELEASE_URL_TEMPLATES[detail.source_platform].format(**template_vars)
-
-         if MISSING_VAL in detail.notes_url or not validate_url(detail.notes_url):
-             detail.notes_url = UNKNOWN_RELEASE_URL_TEMPLATES[detail.source_platform].format(**template_vars)
-             if MISSING_VAL in detail.notes_url or not validate_url(detail.notes_url):
-                 detail.notes_url = None
-
-         if detail.source_platform == SOURCE_PLATFORM_GITHUB and detail.source_repo_url:
+         if detail.notes_url is None and detail.source_platform in RELEASE_URL_TEMPLATES:
+             platform_notes_url: str | None = RELEASE_URL_TEMPLATES[detail.source_platform].format(**template_vars)
+             if (
+                 platform_notes_url
+                 and MISSING_VAL not in platform_notes_url
+                 and validate_url(platform_notes_url, cache_ttl=86400)
+             ):
+                 self.log.debug("Setting default known release notes url: %s", platform_notes_url)
+                 detail.notes_url = platform_notes_url
+
+         if detail.notes_url is None and detail.source_platform in UNKNOWN_RELEASE_URL_TEMPLATES:
+             platform_notes_url = UNKNOWN_RELEASE_URL_TEMPLATES[detail.source_platform].format(**template_vars)
+             if (
+                 platform_notes_url
+                 and MISSING_VAL not in platform_notes_url
+                 and validate_url(platform_notes_url, cache_ttl=86400)
+             ):
+                 self.log.debug("Setting default unknown release notes url: %s", platform_notes_url)
+                 detail.notes_url = platform_notes_url
+
+         if detail.source_platform == SOURCE_PLATFORM_GITHUB and detail.source_repo_url and detail.version is not None:
+             access_token: str | None = self.gh_cfg.access_token if self.gh_cfg else None
+             if access_token:
+                 self.log.debug("Using configured bearer token (%s chars) for GitHub API", len(access_token))
              base_api = detail.source_repo_url.replace("https://github.com", "https://api.github.com/repos")

-             api_response: Response | None = fetch_url(f"{base_api}/releases/tags/{detail.version}")
+             api_response: Response | None = fetch_url(
+                 f"{base_api}/releases/tags/{detail.version}", bearer_token=access_token, allow_stale=True
+             )
+             if api_response and api_response.status_code == 404:
+                 # possible that source version doesn't match release tag
+                 alt_api_response: Response | None = fetch_url(f"{base_api}/releases/latest", bearer_token=access_token)
+                 if alt_api_response and alt_api_response.is_success:
+                     alt_api_results = httpx_json_content(alt_api_response, {})
+                     if alt_api_results and re.fullmatch(f"(V|v|r|R)?{detail.version}", alt_api_results.get("tag_name")):
+                         self.log.info(
+                             f"Matched {registry_info.name} {detail.version} to latest release {alt_api_results['tag_name']}"
+                         )
+                         api_response = alt_api_response
+                     elif alt_api_results:
+                         self.log.debug(
+                             "Failed to match latest release for %s, found tag %s for name %s",
+                             detail.version,
+                             alt_api_results.get("tag_name"),
+                             alt_api_results.get("name"),
+                         )
+
          if api_response and api_response.is_success:
              api_results: Any = httpx_json_content(api_response, {})
              detail.summary = api_results.get("body")  # ty:ignore[possibly-missing-attribute]
@@ -523,14 +592,18 @@ class ContainerDistributionAPIVersionLookup(VersionLookup):
          self.api_stats = APIStatsCounter()

      def fetch_token(self, registry: str, image_name: str) -> str | None:
-         default_host: tuple[str, str, str, str] = (registry, registry, registry, TOKEN_URL_TEMPLATE)
+         default_host: tuple[str, str, str, str, None] = (registry, registry, registry, TOKEN_URL_TEMPLATE, None)
          auth_host: str | None = REGISTRIES.get(registry, default_host)[0]
          if auth_host is None:
              return None

          service: str = REGISTRIES.get(registry, default_host)[2]
-         url_template: str = REGISTRIES.get(registry, default_host)[3]
-         auth_url: str = url_template.format(auth_host=auth_host, image_name=image_name, service=service)
+         url_template: str | None = REGISTRIES.get(registry, default_host)[3]
+         auth_url: str | None = (
+             url_template.format(auth_host=auth_host, image_name=image_name, service=service) if url_template else None
+         )
+         if auth_url is None:
+             return None
          response: Response | None = fetch_url(
              auth_url, cache_ttl=self.cfg.token_cache_ttl, follow_redirects=True, api_stats_counter=self.api_stats
          )
@@ -737,7 +810,7 @@ class ContainerDistributionAPIVersionLookup(VersionLookup):
          if index_digest:
              result.image_digest = index_digest
              result.short_digest = result.condense_digest(index_digest)
-             log.debug("Setting %s image digest %s", result.name, result.short_digest)
+             self.log.debug("Setting %s image digest %s", result.name, result.short_digest)

          digest: str | None = m.get("digest")
          media_type = m.get("mediaType")
@@ -757,7 +830,7 @@ class ContainerDistributionAPIVersionLookup(VersionLookup):
              self.log.warning("Empty digest for %s %s %s", api_host, digest, media_type)
          else:
              result.repo_digest = result.condense_digest(digest, short=False)
-             log.debug("Setting %s repo digest: %s", result.name, result.repo_digest)
+             self.log.debug("Setting %s repo digest: %s", result.name, result.repo_digest)

          if manifest.get("annotations"):
              result.annotations.update(manifest.get("annotations", {}))
@@ -8,7 +8,7 @@ from typing import Any
  import structlog

  from updates2mqtt.config import NodeConfig, PublishPolicy, UpdatePolicy, VersionPolicy
- from updates2mqtt.helpers import timestamp
+ from updates2mqtt.helpers import sanitize_name, timestamp


  class DiscoveryArtefactDetail:
@@ -59,6 +59,10 @@ class ReleaseDetail:
              "net_score": str(self.net_score) if self.net_score is not None else None,
          }

+     def __str__(self) -> str:
+         """Log friendly"""
+         return ",".join(f"{k}:{v}" for k, v in self.as_dict().items())
+

  class Discovery:
      """Discovered component from a scan"""
@@ -94,7 +98,7 @@ class Discovery:
          self.provider: ReleaseProvider = provider
          self.source_type: str = provider.source_type
          self.session: str = session
-         self.name: str = name
+         self.name: str = sanitize_name(name)
          self.node: str = node
          self.entity_picture_url: str | None = entity_picture_url
          self.current_version: str | None = current_version
@@ -1,5 +1,6 @@
  import asyncio
  import json
+ import re
  import time
  from collections.abc import Callable
  from dataclasses import dataclass, field
@@ -21,6 +22,8 @@ from .hass_formatter import hass_format_config, hass_format_state

  log = structlog.get_logger()

+ MQTT_NAME = r"[A-Za-z0-9_\-\.]+"
+

  @dataclass
  class LocalMessage:
@@ -34,6 +37,7 @@ class MqttPublisher:
          self.node_cfg: NodeConfig = node_cfg
          self.hass_cfg: HomeAssistantConfig = hass_cfg
          self.providers_by_topic: dict[str, ReleaseProvider] = {}
+         self.providers_by_type: dict[str, ReleaseProvider] = {}
          self.event_loop: asyncio.AbstractEventLoop | None = None
          self.client: mqtt.Client | None = None
          self.fatal_failure = Event()
@@ -123,69 +127,60 @@
          else:
              self.log.warning("Disconnect failure from broker", result_code=rc)

-     async def clean_topics(
-         self, provider: ReleaseProvider, last_scan_session: str | None, wait_time: int = 5, force: bool = False
-     ) -> None:
+     async def clean_topics(self, provider: ReleaseProvider, wait_time: int = 5, max_time: int = 120) -> None:
          logger = self.log.bind(action="clean")
          if self.fatal_failure.is_set():
              return
-         logger.info("Starting clean cycle")
-         cleaner = mqtt.Client(
-             callback_api_version=CallbackAPIVersion.VERSION1,
-             client_id=f"updates2mqtt_clean_{self.node_cfg.name}",
-             clean_session=True,
-         )
-         results = {"cleaned": 0, "handled": 0, "discovered": 0, "last_timestamp": time.time()}
-         cleaner.username_pw_set(self.cfg.user, password=self.cfg.password)
-         cleaner.connect(host=self.cfg.host, port=self.cfg.port, keepalive=60)
-         prefixes = [
-             f"{self.hass_cfg.discovery.prefix}/update/{self.node_cfg.name}_{provider.source_type}_",
-             f"{self.cfg.topic_root}/{self.node_cfg.name}/{provider.source_type}/",
-         ]
-
-         def cleanup(_client: mqtt.Client, _userdata: Any, msg: mqtt.MQTTMessage) -> None:
-             if msg.retain and any(msg.topic.startswith(prefix) for prefix in prefixes):
-                 session = None
+         try:
+             logger.info("Starting clean cycle, max time: %s", max_time)
+             cutoff_time: float = time.time() + max_time
+             cleaner = mqtt.Client(
+                 callback_api_version=CallbackAPIVersion.VERSION1,
+                 client_id=f"updates2mqtt_clean_{self.node_cfg.name}",
+                 clean_session=True,
+             )
+             results = {"cleaned": 0, "matched": 0, "discovered": 0, "last_timestamp": time.time()}
+             cleaner.username_pw_set(self.cfg.user, password=self.cfg.password)
+             cleaner.connect(host=self.cfg.host, port=self.cfg.port, keepalive=60)
+
+             def cleanup(_client: mqtt.Client, _userdata: Any, msg: mqtt.MQTTMessage) -> None:
+                 discovery: Discovery | None = None
+                 if msg.topic.startswith(
+                     f"{self.hass_cfg.discovery.prefix}/update/{self.node_cfg.name}_{provider.source_type}_"
+                 ):
+                     discovery = self.reverse_config_topic(msg.topic, provider.source_type)
+                 elif msg.topic.startswith(
+                     f"{self.cfg.topic_root}/{self.node_cfg.name}/{provider.source_type}/"
+                 ) and msg.topic.endswith("/state"):
+                     discovery = self.reverse_state_topic(msg.topic, provider.source_type)
+                 elif msg.topic.startswith(f"{self.cfg.topic_root}/{self.node_cfg.name}/{provider.source_type}/"):
+                     discovery = self.reverse_general_topic(msg.topic, provider.source_type)
+                 else:
+                     logger.debug("Ignoring other topic", topic=msg.topic)
+                     return
+
                  results["discovered"] += 1
-                 try:
-                     payload = self.safe_json_decode(msg.payload)
-                     session = payload.get("source_session")
-                 except Exception as e:
-                     log.warn(
-                         "Unable to handle payload for %s: %s",
-                         msg.topic,
-                         e,
-                         exc_info=1,
-                     )
-                 results["handled"] += 1
+                 if discovery is not None:
+                     results["matched"] += 1
                  results["last_timestamp"] = time.time()
-                 if session is not None and last_scan_session is not None and session != last_scan_session:
-                     log.debug("Removing stale msg", topic=msg.topic, session=session)
-                     cleaner.publish(msg.topic, "", retain=True)
-                     results["cleaned"] += 1
-                 elif session is None and force:
-                     log.debug("Removing untrackable msg", topic=msg.topic)
+                 if discovery is None:
+                     logger.debug("Removing unknown discovery", topic=msg.topic)
                      cleaner.publish(msg.topic, "", retain=True)
                      results["cleaned"] += 1
-                 else:
-                     log.debug(
-                         "Retaining topic with current session: %s",
-                         msg.topic,
-                     )
-             else:
-                 log.debug("Skipping clean of %s", msg.topic)

-         cleaner.on_message = cleanup
-         options = paho.mqtt.subscribeoptions.SubscribeOptions(noLocal=True)
-         cleaner.subscribe(f"{self.hass_cfg.discovery.prefix}/update/#", options=options)
-         cleaner.subscribe(f"{self.cfg.topic_root}/{self.node_cfg.name}/{provider.source_type}/#", options=options)
+             cleaner.on_message = cleanup
+             options = paho.mqtt.subscribeoptions.SubscribeOptions(noLocal=True)
+             cleaner.subscribe(f"{self.hass_cfg.discovery.prefix}/update/#", options=options)
+             cleaner.subscribe(f"{self.cfg.topic_root}/{self.node_cfg.name}/{provider.source_type}/#", options=options)

-         while time.time() - results["last_timestamp"] <= wait_time:
-             cleaner.loop(0.5)
+             while time.time() - results["last_timestamp"] <= wait_time and time.time() <= cutoff_time:
+                 cleaner.loop(0.5)

-         log.info(
-             f"Clean completed, discovered:{results['discovered']}, handled:{results['handled']}, cleaned:{results['cleaned']}"
-         )
+             logger.info(
+                 f"Cleaned - discovered:{results['discovered']}, matched:{results['matched']}, cleaned:{results['cleaned']}"
+             )
+         except Exception as e:
+             logger.error("Cleaning topics of stale entries failed: %s", e)

      def safe_json_decode(self, jsonish: str | bytes | None) -> dict:
          if jsonish is None:
@@ -233,7 +228,7 @@
                  source_type,
                  comp_name,
              )
-             updated = provider.command(comp_name, command, on_update_start, on_update_end)
+             updated: bool = provider.command(comp_name, command, on_update_start, on_update_end)
              discovery = provider.resolve(comp_name)
              if updated and discovery:
                  if discovery.publish_policy == PublishPolicy.HOMEASSISTANT and self.hass_cfg.discovery.enabled:
@@ -305,12 +300,48 @@
          prefix = self.hass_cfg.discovery.prefix
          return f"{prefix}/update/{self.node_cfg.name}_{discovery.source_type}_{discovery.name}/update/config"

+     def reverse_config_topic(self, topic: str, source_type: str) -> Discovery | None:
+         match = re.fullmatch(
+             f"{self.hass_cfg.discovery.prefix}/update/{self.node_cfg.name}_{source_type}_({MQTT_NAME})/update/config",
+             topic,
+         )
+         if match and len(match.groups()) == 1:
+             discovery_name: str = match.group(1)
+             if discovery_name in self.providers_by_type[source_type].discoveries:
+                 return self.providers_by_type[source_type].discoveries[discovery_name]
+
+         self.log.debug("MQTT CONFIG no match for %s", topic)
+         return None
+
      def state_topic(self, discovery: Discovery) -> str:
          return f"{self.cfg.topic_root}/{self.node_cfg.name}/{discovery.source_type}/{discovery.name}/state"

+     def reverse_state_topic(self, topic: str, source_type: str) -> Discovery | None:
+         match = re.fullmatch(
+             f"{self.cfg.topic_root}/{self.node_cfg.name}/{source_type}/({MQTT_NAME})/state",
+             topic,
+         )
+         if match and len(match.groups()) == 1:
+             discovery_name: str = match.group(1)
+             if discovery_name in self.providers_by_type[source_type].discoveries:
+                 return self.providers_by_type[source_type].discoveries[discovery_name]
+
+         self.log.debug("MQTT STATE no match for %s", topic)
+         return None
+
      def general_topic(self, discovery: Discovery) -> str:
          return f"{self.cfg.topic_root}/{self.node_cfg.name}/{discovery.source_type}/{discovery.name}"

+     def reverse_general_topic(self, topic: str, source_type: str) -> Discovery | None:
+         match = re.fullmatch(f"{self.cfg.topic_root}/{self.node_cfg.name}/{source_type}/({MQTT_NAME})", topic)
+         if match and len(match.groups()) == 1:
+             discovery_name: str = match.group(1)
+             if discovery_name in self.providers_by_type[source_type].discoveries:
+                 return self.providers_by_type[source_type].discoveries[discovery_name]
+
+         self.log.debug("MQTT ATTR no match for %s", topic)
+         return None
+
      def command_topic(self, provider: ReleaseProvider) -> str:
          return f"{self.cfg.topic_root}/{self.node_cfg.name}/{provider.source_type}"

 
@@ -363,6 +394,7 @@
          else:
              self.log.info("Handler subscribing", topic=topic)
              self.providers_by_topic[topic] = provider
+             self.providers_by_type[provider.source_type] = provider
              self.client.subscribe(topic)
          return topic