SnowSignal 0.1.1__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- snowsignal-0.1.1/.github/workflows/publish-to-test-pypi.yml +169 -0
- snowsignal-0.1.1/.gitignore +181 -0
- snowsignal-0.1.1/.gitlab/.gitlab-ci.yml +88 -0
- snowsignal-0.1.1/LICENSE +13 -0
- snowsignal-0.1.1/PKG-INFO +157 -0
- snowsignal-0.1.1/README.md +128 -0
- snowsignal-0.1.1/_version.py +16 -0
- snowsignal-0.1.1/docker-compose.dev.yml +28 -0
- snowsignal-0.1.1/docker-compose.local-dev.yml +50 -0
- snowsignal-0.1.1/docker-compose.yml +17 -0
- snowsignal-0.1.1/docs/local_dev.md +53 -0
- snowsignal-0.1.1/docs/local_dev_example.gif +0 -0
- snowsignal-0.1.1/docs/pvacess_communication_example.png +0 -0
- snowsignal-0.1.1/docs/socat_test_broadcast.md +1 -0
- snowsignal-0.1.1/docs/swarm_setup.md +25 -0
- snowsignal-0.1.1/pyproject.toml +75 -0
- snowsignal-0.1.1/snowsignal/__init__.py +0 -0
- snowsignal-0.1.1/snowsignal/__main__.py +12 -0
- snowsignal-0.1.1/snowsignal/configure.py +106 -0
- snowsignal-0.1.1/snowsignal/dockerfile +9 -0
- snowsignal-0.1.1/snowsignal/netutils.py +137 -0
- snowsignal-0.1.1/snowsignal/packet.py +174 -0
- snowsignal-0.1.1/snowsignal/snowsignal.py +130 -0
- snowsignal-0.1.1/snowsignal/udp_relay_receive.py +196 -0
- snowsignal-0.1.1/snowsignal/udp_relay_transmit.py +221 -0
- snowsignal-0.1.1/tests/dockerfile +5 -0
- snowsignal-0.1.1/tests/unit/__init__.py +0 -0
- snowsignal-0.1.1/tests/unit/test_netutils.py +83 -0
- snowsignal-0.1.1/tests/unit/test_packet.py +129 -0
- snowsignal-0.1.1/tests/unit/test_snowsignal.py +102 -0
- snowsignal-0.1.1/tests/unit/test_udp_relay_receive.py +74 -0
- snowsignal-0.1.1/tests/unit/test_udp_relay_transmit.py +138 -0
snowsignal-0.1.1/.github/workflows/publish-to-test-pypi.yml
ADDED
@@ -0,0 +1,169 @@
name: Publish Python 🐍 distribution 📦 to PyPI and TestPyPI

on: push

jobs:
  lint:
    name: Use Ruff to perform linting, formatting, and other code quality tests
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-python@v5
      with:
        python-version: "3.x"
    - run: pip install .[test]
    - run: |
        ruff check --no-fix
        ruff format --diff

  # There's an oddity here I've only been able to resolve inelegantly.
  # The tests need to open raw sockets but don't have permission to do so,
  # so they must run under sudo. That affects the pip install as well,
  # so we have to manually identify the pip and python executables to use.
  test:
    name: Run tests on multiple Python versions
    needs:
    - lint
    strategy:
      matrix:
        os: [ubuntu-latest] #, mac-latest]
        python-version: ["3.11", "3.12"]
    runs-on: ${{ matrix.os }}
    continue-on-error: true
    steps:
    - uses: actions/checkout@v4
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        pip install .[test]
        PIP_PATH=$(which pip)
        sudo $PIP_PATH install .[test]
    - name: Run tests
      run: |
        PYTHON_PATH=$(which python)
        sudo $PYTHON_PATH -m coverage run --source=. -m unittest discover tests/
    - name: Gather coverage statistics
      if: ${{ always() }}
      run: |
        coverage report -m
        coverage xml
    # Use always() to always run this step and publish test results even when there are test failures
    - name: Upload pytest test results
      if: ${{ always() }}
      uses: actions/upload-artifact@v4
      with:
        name: coverage-results-${{ matrix.os }}-${{ matrix.python-version }}
        path: coverage.xml

  build:
    name: Build distribution 📦
    needs:
    - test
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0 # Needed to fetch the tags
    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: "3.x"
    - name: Install build tools
      run: >-
        pip install .[dist]
    - name: Build a binary wheel and a source tarball
      run: python -m build
    - name: Store the distribution packages
      uses: actions/upload-artifact@v4
      with:
        name: python-package-distributions
        path: dist/

  publish-to-pypi:
    name: >-
      Publish Python 🐍 distribution 📦 to PyPI
    if: startsWith(github.ref, 'refs/tags/') # only publish to PyPI on tag pushes
    needs:
    - build
    runs-on: ubuntu-latest
    environment:
      name: pypi
      url: https://pypi.org/p/SnowSignal
    permissions:
      id-token: write # IMPORTANT: mandatory for trusted publishing
    steps:
    - name: Download all the dists
      uses: actions/download-artifact@v4
      with:
        name: python-package-distributions
        path: dist/
    - name: Publish distribution 📦 to PyPI
      uses: pypa/gh-action-pypi-publish@release/v1

  publish-to-testpypi:
    name: Publish Python 🐍 distribution 📦 to TestPyPI
    needs:
    - build
    runs-on: ubuntu-latest
    environment:
      name: testpypi
      url: https://test.pypi.org/p/SnowSignal
    permissions:
      id-token: write # IMPORTANT: mandatory for trusted publishing
    steps:
    - name: Download all the dists
      uses: actions/download-artifact@v4
      with:
        name: python-package-distributions
        path: dist/
    - name: Publish distribution 📦 to TestPyPI
      uses: pypa/gh-action-pypi-publish@release/v1
      with:
        repository-url: https://test.pypi.org/legacy/

  github-release:
    name: >-
      Sign the Python 🐍 distribution 📦 with Sigstore
      and upload them to GitHub Release
    needs:
    - publish-to-pypi
    runs-on: ubuntu-latest

    permissions:
      contents: write # IMPORTANT: mandatory for making GitHub Releases
      id-token: write # IMPORTANT: mandatory for sigstore

    steps:
    - name: Download all the dists
      uses: actions/download-artifact@v4
      with:
        name: python-package-distributions
        path: dist/
    - name: Sign the dists with Sigstore
      uses: sigstore/gh-action-sigstore-python@v2.1.1
      with:
        inputs: >-
          ./dist/*.tar.gz
          ./dist/*.whl
    - name: Create GitHub Release
      env:
        GITHUB_TOKEN: ${{ github.token }}
      run: >-
        gh release create
        '${{ github.ref_name }}'
        --repo '${{ github.repository }}'
        --notes ""
    - name: Upload artifact signatures to GitHub Release
      env:
        GITHUB_TOKEN: ${{ github.token }}
      # Upload to GitHub Release using the `gh` CLI.
      # `dist/` contains the built packages, and the
      # sigstore-produced signatures and certificates.
      run: >-
        gh release upload
        '${{ github.ref_name }}' dist/**
        --repo '${{ github.repository }}'
snowsignal-0.1.1/.gitignore
ADDED
@@ -0,0 +1,181 @@
# Ruff
.ruff_cache

# Hatch-VCS
_version.py

# VSCode files
.vscode/
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets

# Local History for Visual Studio Code
.history/

# Built Visual Studio Code Extensions
*.vsix

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
snowsignal-0.1.1/.gitlab/.gitlab-ci.yml
ADDED
@@ -0,0 +1,88 @@
stages:
  - format-lint
  - test
  - pypi
  - build
  - deploy

.before_script_template: &before_script_template
  before_script:
    - echo $NAME
    - python -m pip install --upgrade pip
    - pip install .[test]
    - python -V

format:
  <<: *before_script_template
  image: python:latest
  stage: format-lint
  script:
    - ruff check --no-fix
    - ruff format --diff

.test_job_template: &test_job_template
  <<: *before_script_template
  stage: test
  image: "python:$VERSION"
  parallel:
    matrix:
      - VERSION: ['3.11', '3.12']

Run unittests:
  <<: *test_job_template
  script:
    - python -m coverage run --source=. -m unittest discover tests/
    - coverage report -m
    - coverage xml
  coverage: '/TOTAL.*\s+(\d+\%)/'
  artifacts:
    when: always
    reports:
      coverage_report:
        coverage_format: cobertura
        path: ./coverage.xml

Publish:
  <<: *before_script_template
  stage: pypi
  when: on_success
  image: python:latest
  script:
    - pip install .[dist]
    - python -m build
    - TWINE_PASSWORD=${CI_JOB_TOKEN} TWINE_USERNAME=gitlab-ci-token python -m twine upload --repository-url ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi dist/*

Rebuild Image:
  stage: build
  image: docker
  variables:
    DOCKER_TAG: $CI_COMMIT_REF_NAME
  services:
    - docker:dind
  script:
    - echo $NAME
    - docker build -t snowsignal snowsignal
    - docker login https://harbor.stfc.ac.uk -u $DOCKER_REG_NAME --password $DOCKER_REG_TOKEN
    - echo Build Identifiers - ${NAME}:$DOCKER_TAG
    - docker build -t harbor.stfc.ac.uk/isis-accelerator-controls/snowsignal:$DOCKER_TAG snowsignal
    - docker push harbor.stfc.ac.uk/isis-accelerator-controls/snowsignal:$DOCKER_TAG

Deploy Development Image:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "dev"
      when: on_success
  image: docker
  script:
    - apk add curl
    - curl -X POST $PORTAINER_WEBHOOK_DEV

Deploy Production Image:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success
  image: docker
  script:
    - apk add curl
    - curl -X POST $PORTAINER_WEBHOOK_PROD
snowsignal-0.1.1/LICENSE
ADDED
@@ -0,0 +1,13 @@
BSD 3-Clause License

Copyright (c) 2024 Science and Technology Facilities Council (STFC), UK

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
snowsignal-0.1.1/PKG-INFO
ADDED
@@ -0,0 +1,157 @@
Metadata-Version: 2.3
Name: SnowSignal
Version: 0.1.1
Summary: UDP Broadcast Relay
Project-URL: Repository, https://github.com/ISISNeutronMuon/SnowSignal
Author-email: Ivan Finch <ivan.finch@stfc.ac.uk>
Maintainer-email: Ivan Finch <ivan.finch@stfc.ac.uk>
License-File: LICENSE
Keywords: UDP,UDP broadcast,docker swarm,epics,pvaccess
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Topic :: System :: Networking
Classifier: Typing :: Typed
Requires-Python: >=3.11
Requires-Dist: configargparse>=1.7
Requires-Dist: psutil>=5.9
Provides-Extra: dist
Requires-Dist: build>=1.2; extra == 'dist'
Requires-Dist: twine>=5.1; extra == 'dist'
Provides-Extra: test
Requires-Dist: coverage>=7.6; extra == 'test'
Requires-Dist: ruff>0.6; extra == 'test'
Requires-Dist: scapy~=2.0; extra == 'test'
Description-Content-Type: text/markdown

# SnowSignal
SnowSignal is designed to create a mesh network between instances of the program that listen for UDP broadcasts received on one node of the network and rebroadcast them on all other nodes.

[](https://gitlab.stfc.ac.uk/isis-accelerator-controls/playground/ivan/infrastructure/snowsignal/-/commits/main)
[](https://gitlab.stfc.ac.uk/isis-accelerator-controls/playground/ivan/infrastructure/snowsignal/-/commits/main)

[[_TOC_]]

## Usage
### General
Options may be set on the command line or via environment variables; the latter are shown in square brackets like `[env var: THIS]`. In general, command-line values override environment variables, which override defaults.
```
usage: snowsignal.py [-h] [-t TARGET_INTERFACE] [-b BROADCAST_PORT] [-m MESH_PORT]
                     [--other-relays OTHER_RELAYS [OTHER_RELAYS ...]]
                     [-ll {debug,info,warning,error,critical}]
```
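As an aside, the precedence just described (command line over environment variable over default) is what the ConfigArgParse dependency provides out of the box. Below is a minimal sketch of how these options could be declared; the option names, env vars and defaults are taken from this README, and the package's actual `configure.py` may differ:
```python
# Illustrative sketch only, not the package's configure.py.
import configargparse

def build_parser() -> configargparse.ArgParser:
    parser = configargparse.ArgParser(description="UDP broadcast relay")
    parser.add_argument("-t", "--target-interface", env_var="TARGET_INTERFACE",
                        default="eth0", help="Target network interface")
    parser.add_argument("-b", "--broadcast-port", env_var="BDCAST_PORT",
                        type=int, default=5076,
                        help="Port on which to receive and transmit UDP broadcasts")
    parser.add_argument("-m", "--mesh-port", env_var="MESH_PORT",
                        type=int, default=7124,
                        help="Port used to communicate with other relays via UDP unicast")
    parser.add_argument("--other-relays", nargs="+", default=[],
                        help="Manually select other relays")
    parser.add_argument("-ll", "--log-level", env_var="LOGLEVEL", default="info",
                        choices=["debug", "info", "warning", "error", "critical"])
    return parser

if __name__ == "__main__":
    # Command-line values override env vars, which override the defaults above.
    print(build_parser().parse_args())
```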
#### Target Interface
```
-t TARGET_INTERFACE, --target-interface TARGET_INTERFACE
                     Target network interface [env var: TARGET_INTERFACE]
```
At this time SnowSignal only supports using a single network interface for receiving UDP broadcasts, sending to other relays, and rebroadcasting UDP messages received from other relays.
Defaults to `eth0`.

#### Broadcast Port
```
-b BROADCAST_PORT, --broadcast-port BROADCAST_PORT
                     Port on which to receive and transmit UDP broadcasts [env var: BDCAST_PORT]
```
SnowSignal listens for UDP broadcasts on a single port and rebroadcasts messages received from other SnowSignal instances on the same port. Defaults to port 5076.

#### Mesh Port
```
-m MESH_PORT, --mesh-port MESH_PORT
                     Port on which this instance will communicate with others via UDP unicast [env var:
                     MESH_PORT]
```
UDP port on which to listen for messages from other SnowSignal instances. Defaults to port 7124.

#### Other relays
```
--other-relays OTHER_RELAYS [OTHER_RELAYS ...]
                     Manually select other relays to transmit received UDP broadcasts to
```
Manually set a list of other SnowSignal instances with which to communicate. In Docker Swarm, SnowSignal is capable of auto-discovering instances if the `SERVICENAME` environment variable is set; see "Mesh Network" below. If no other relays are defined by any of these means then SnowSignal will communicate with itself for testing purposes. Default is an empty list.

#### Log Level
```
-ll {debug,info,warning,error,critical}, --log-level {debug,info,warning,error,critical}
                     Logging level [env var: LOGLEVEL]
```
Set the logging level.

### Docker Swarm
If run in a Docker Swarm then the default configuration should work well with PVAccess.

There is an additional requirement that the environment variable SERVICENAME be set to the Swarm service's name, e.g.
```
environment:
  SERVICENAME: '{{.Service.Name}}'
```

This allows each node in the service to automatically locate and connect to the other nodes. The mesh will automatically heal as members enter and leave.

### Limitations
At this time this code has only been tested in Linux containers.

The `UDPRelayTransmit` class requires a raw socket to operate, as it needs to
1. Filter out UDP broadcasts with an Ethernet source originating from the local relay. The Ethernet source is rewritten to allow this filtering while the IP source is left alone.
2. Differentiate UDP broadcast from UDP unicast messages, ignoring the latter.

These require access at network layers 1 and 2, and thus raw sockets. As the Python socket package does not support such access on Windows, it has not been possible to make this tool compatible with that OS. (An earlier version using Scapy was compatible.)
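For illustration, a minimal sketch of the kind of Linux-only socket this implies (assuming root or `CAP_NET_RAW`; this is not the package's exact code):
```python
# Illustrative only: an AF_PACKET raw socket delivers whole Ethernet frames,
# exposing the link-layer header that ordinary UDP sockets hide.
# socket.AF_PACKET does not exist on Windows, hence the Linux-only limitation.
import socket

ETH_P_ALL = 0x0003  # pseudo-protocol meaning "every EtherType"

with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL)) as sock:
    sock.bind(("eth0", 0))  # bind to the target interface
    frame, _ = sock.recvfrom(65535)
    print("Ethernet source MAC:", frame[6:12].hex(":"))  # bytes 6-12 of the frame
```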

## The Problem
The EPICS PVAccess protocol uses a mixture of UDP broadcast, UDP unicast and TCP (in roughly that order) to establish communication between a client and a server. In the case relevant to this package, a client makes a query for a PV and its value (or some other field), e.g. a pvget, while the server holds the requested PV.

The image below gives an example of the communication between a client and server and is taken from the [PVAccess Protocol Specification](https://epics-controls.org/wp-content/uploads/2018/10/pvAccess-Protocol-Specification.pdf).

![Example of PVAccess communication between a client and a server](docs/pvacess_communication_example.png)

The relevant part for this problem is the initial searchRequest, a UDP broadcast / multicast. (Although the specification requires multicast support, at this time I have only ever seen broadcast used.) When a pvget (or equivalent) is performed, the first step is a UDP broadcast search request, i.e. a cry to local machines asking if they have the requested PV. If any do, they reply to the requesting process with a UDP unicast and establish a TCP connection to exchange information.

UDP broadcasts are restricted to the network segment of the network interface conducting the broadcast. This means that search requests will not reach machines that are not on the same network segment. Alternative means such as a PVA Gateway or `EPICS_PVA_NAME_SERVERS` must be used in such circumstances. Note that
- PVA Gateway allows communication between isolated network segments, but all subsequent communications must pass through the Gateway, i.e. a many-to-one-to-many topology is implicitly created.
- `EPICS_PVA_NAME_SERVERS` requires only that TCP communication between server and client be possible, but requires servers to be specified in advance.

However, if unicast communication between two network segments is possible then we can simply relay the UDP broadcasts between the two networks, allowing UDP unicast and TCP communication to proceed as usual.

This is the purpose of SnowSignal. It relays UDP broadcasts received on a specified port to other instances of SnowSignal (i.e. forming a mesh network), which then rebroadcast those UDP broadcasts on their own network segments.

**Note**: PVAccess server UDP beacon messages also use UDP broadcast and will be relayed by SnowSignal. Their purpose and consequences are not explored further here.

### Docker Swarm and Docker Networks
A Docker Swarm network may be created which crosses transparently between the nodes in the swarm. However, at the time of writing, Docker Swarm networks do not support UDP multicast or broadcast.

For PVAccess this means that search requests are isolated to individual nodes in the swarm. A pvget to a server-container on the same node will succeed, while one to a server-container on another node will fail. (Assuming that a PVA Gateway or `EPICS_PVA_NAME_SERVERS` is not used to overcome this limitation.)

## For Developers
See the details on using the [local dev setup](docs/local_dev.md), as well as the discussion below.

## Implementation
SnowSignal is implemented in base Python, except for the libraries [ConfigArgParse](https://pypi.org/project/ConfigArgParse/) and [psutil](https://pypi.org/project/psutil/). The [scapy](https://scapy.readthedocs.io/en/latest/) library is used in integration and unit tests to create, send, receive, and manipulate UDP packets.

The SnowSignal code is in two main parts:

### 1. udp_relay_transmit
The `UDPRelayTransmit` class uses a raw socket to monitor for UDP broadcasts on the specified UDP port and local interface. A set of filter functions is used to filter out broadcasts originating either from local interfaces' MAC addresses or from local IP addresses. This prevents us from reacting to our own UDP broadcasts.

If a UDP broadcast packet passes the required filters then the whole packet is sent to the other SnowSignal instances in the mesh network, which will subsequently rebroadcast it.
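A sketch of that source-MAC filtering idea follows; it is illustrative only, and the package's actual filter functions and frame handling may differ (the broadcast test here is deliberately crude):
```python
# Sketch of the filtering described above, not the package's actual code.
# Reject frames sent from one of our own MAC addresses, then keep only IPv4
# UDP broadcasts addressed to the relay's broadcast port.
import psutil

def local_mac_addresses() -> set[str]:
    """MAC addresses of every local interface, via psutil."""
    return {
        addr.address.lower().replace("-", ":")
        for addrs in psutil.net_if_addrs().values()
        for addr in addrs
        if addr.family == psutil.AF_LINK
    }

LOCAL_MACS = local_mac_addresses()

def passes_filters(frame: bytes, broadcast_port: int) -> bool:
    if frame[6:12].hex(":") in LOCAL_MACS:
        return False                 # our own (re)broadcast: avoid a packet storm
    if frame[12:14] != b"\x08\x00":  # EtherType: keep IPv4 only
        return False
    if frame[23] != 17:              # IP protocol field: 17 = UDP
        return False
    if frame[33] != 0xFF:            # crude broadcast test: x.x.x.255 destination
        return False
    ihl = (frame[14] & 0x0F) * 4     # IP header length in bytes
    dst_port = int.from_bytes(frame[14 + ihl + 2 : 14 + ihl + 4], "big")
    return dst_port == broadcast_port
```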

### 2. udp_relay_receive
The `UDPRelayReceive` class listens for UDP unicast messages received on a specified port and broadcasts those messages on a specified local interface. The class is an implementation of the asyncio [DatagramProtocol](https://docs.python.org/3/library/asyncio-protocol.html#datagram-protocols), run by using [loop.create_datagram_endpoint()](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.create_datagram_endpoint).

When a UDP message is received from another SnowSignal, its payload is turned into a UDP broadcast packet. We change only the Ethernet source MAC address of the packet, setting it to that of the interface that will be used to send it. This means that it can be filtered out by `udp_relay_transmit`, and we do not create packet storms.
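A minimal, self-contained sketch of this receive-side pattern (not the package's actual class; the real code rewrites the Ethernet source MAC, which needs a raw socket, whereas a plain `SO_BROADCAST` socket stands in here to keep the sketch short):
```python
# Illustrative asyncio DatagramProtocol that rebroadcasts each received payload.
import asyncio
import socket

class RelayReceiveSketch(asyncio.DatagramProtocol):
    def __init__(self, broadcast_port: int) -> None:
        self._broadcast_port = broadcast_port
        self._bcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._bcast.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    def datagram_received(self, data: bytes, addr) -> None:
        # Payload arrived by unicast from another relay; send it back out
        # as a local broadcast on the configured port.
        self._bcast.sendto(data, ("255.255.255.255", self._broadcast_port))

async def main() -> None:
    loop = asyncio.get_running_loop()
    transport, _protocol = await loop.create_datagram_endpoint(
        lambda: RelayReceiveSketch(broadcast_port=5076),
        local_addr=("0.0.0.0", 7124),  # the default mesh port
    )
    try:
        await asyncio.sleep(3600)  # serve for a while
    finally:
        transport.close()

if __name__ == "__main__":
    asyncio.run(main())
```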

### Mesh Network
The SnowSignal mesh network may be manually specified.

However, in a Docker Swarm environment we can identify the other instances by using the DNS entries for `tasks.{{.Service.Name}}`. We then need only remove this node's IP address from that list to get the other nodes in the mesh. We update the list of mesh nodes from this source every 10 seconds, which allows us to accommodate container restarts or migrations and even, in theory, nodes entering and leaving the swarm.
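A sketch of that discovery step (illustrative only, not the package's code; Docker's internal DNS returns one A record per task of a service, and subtracting our own addresses leaves the peers):
```python
# Resolve the swarm service's task addresses and drop our own.
import os
import socket

def discover_peers(servicename: str) -> set[str]:
    tasks = socket.getaddrinfo(f"tasks.{servicename}", None, family=socket.AF_INET)
    local = socket.getaddrinfo(socket.gethostname(), None, family=socket.AF_INET)
    return {t[4][0] for t in tasks} - {a[4][0] for a in local}

if __name__ == "__main__":
    # Re-run every 10 seconds in the real service so the mesh self-heals.
    print(discover_peers(os.environ["SERVICENAME"]))
```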

### Observations and Lessons Learned
A number of issues arose as I was developing this utility:
- I originally attempted to be clever about preventing a UDP broadcast storm by taking hashes of the UDP packets broadcast by a node and then rejecting broadcast messages with matching hashes subsequently received by the same node. (More specifically, a time-to-live dictionary, so that packets weren't banned forever.) This proved overly complex, and the current implementation simply filters out UDP broadcasts whose source MAC address matches the local node's.
- A PVAccess search request includes the IP address and ephemeral port that the unicast UDP reply should use. Experience shows that implementations ignore this in favour of the packet's UDP source IP and port. This is why it's ultimately simpler to copy the whole packet and alter it, rather than send the payload and construct a new packet around it.

## Origin of Name
A sensible name for this program would be UDP Broadcast Relay, e.g. UBrR. And "brr" is the sound of being cold. Hence, with some help from a name generator, the name SnowSignal.