atex 0.7.tar.gz → 0.8.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {atex-0.7 → atex-0.8}/PKG-INFO +97 -2
- {atex-0.7 → atex-0.8}/README.md +95 -0
- {atex-0.7 → atex-0.8}/TODO +47 -1
- atex-0.8/aggrtest.py +74 -0
- atex-0.8/atex/cli/fmf.py +93 -0
- {atex-0.7 → atex-0.8}/atex/cli/testingfarm.py +23 -13
- {atex-0.7 → atex-0.8}/atex/connection/__init__.py +0 -8
- {atex-0.7 → atex-0.8}/atex/connection/ssh.py +3 -19
- {atex-0.7/atex/minitmt → atex-0.8/atex/executor}/README.md +8 -7
- {atex-0.7/atex/minitmt → atex-0.8/atex/executor}/RESULTS.md +32 -39
- atex-0.8/atex/executor/__init__.py +2 -0
- atex-0.8/atex/executor/duration.py +60 -0
- atex-0.8/atex/executor/executor.py +378 -0
- atex-0.8/atex/executor/reporter.py +106 -0
- {atex-0.7/atex/minitmt → atex-0.8/atex/executor}/scripts.py +30 -24
- {atex-0.7/atex/minitmt → atex-0.8/atex/executor}/testcontrol.py +16 -17
- {atex-0.7/atex/minitmt → atex-0.8/atex}/fmf.py +49 -34
- atex-0.8/atex/orchestrator/__init__.py +2 -0
- atex-0.8/atex/orchestrator/aggregator.py +106 -0
- atex-0.8/atex/orchestrator/orchestrator.py +324 -0
- atex-0.8/atex/provision/__init__.py +124 -0
- atex-0.8/atex/provision/testingfarm/__init__.py +2 -0
- {atex-0.7 → atex-0.8}/atex/provision/testingfarm/api.py +55 -40
- atex-0.8/atex/provision/testingfarm/testingfarm.py +236 -0
- {atex-0.7 → atex-0.8}/atex/util/__init__.py +1 -6
- {atex-0.7 → atex-0.8}/atex/util/log.py +8 -0
- atex-0.8/atex/util/path.py +16 -0
- atex-0.8/atex/util/ssh_keygen.py +14 -0
- atex-0.8/atex/util/threads.py +55 -0
- atex-0.8/orch.py +38 -0
- atex-0.8/provtest.py +51 -0
- {atex-0.7 → atex-0.8}/pyproject.toml +4 -2
- atex-0.8/runtest.py +73 -0
- atex-0.8/tmt_tests/reserve/main.fmf +11 -0
- atex-0.8/tmt_tests/reserve/repos/centos-aws +71 -0
- atex-0.8/tmt_tests/reserve/test.sh +58 -0
- atex-0.7/atex/cli/minitmt.py +0 -175
- atex-0.7/atex/minitmt/__init__.py +0 -23
- atex-0.7/atex/minitmt/executor.py +0 -348
- atex-0.7/atex/orchestrator/__init__.py +0 -59
- atex-0.7/atex/orchestrator/aggregator.py +0 -163
- atex-0.7/atex/provision/__init__.py +0 -155
- atex-0.7/atex/provision/nspawn/README +0 -74
- atex-0.7/atex/provision/testingfarm/__init__.py +0 -29
- atex-0.7/atex/provision/testingfarm/foo.py +0 -1
- atex-0.7/tmt_tests/reserve/main.fmf +0 -5
- atex-0.7/tmt_tests/reserve/test.sh +0 -72
- {atex-0.7 → atex-0.8}/.editorconfig +0 -0
- {atex-0.7 → atex-0.8}/.gitignore +0 -0
- {atex-0.7 → atex-0.8}/COPYING.txt +0 -0
- {atex-0.7 → atex-0.8}/atex/__init__.py +0 -0
- {atex-0.7 → atex-0.8}/atex/cli/__init__.py +0 -0
- {atex-0.7/atex/minitmt → atex-0.8/atex/executor}/TEST_CONTROL.md +0 -0
- {atex-0.7 → atex-0.8}/atex/provision/libvirt/VM_PROVISION +0 -0
- {atex-0.7 → atex-0.8}/atex/provision/libvirt/__init__.py +0 -0
- {atex-0.7 → atex-0.8}/atex/provision/libvirt/setup-libvirt.sh +0 -0
- {atex-0.7 → atex-0.8}/atex/provision/podman/README +0 -0
- {atex-0.7 → atex-0.8}/atex/provision/podman/host_container.sh +0 -0
- {atex-0.7 → atex-0.8}/atex/util/README.md +0 -0
- {atex-0.7 → atex-0.8}/atex/util/dedent.py +0 -0
- {atex-0.7 → atex-0.8}/atex/util/subprocess.py +0 -0
- {atex-0.7 → atex-0.8}/logtest.py +0 -0
- {atex-0.7 → atex-0.8}/reporter.py +0 -0
- {atex-0.7 → atex-0.8}/ssh.py +0 -0
- {atex-0.7 → atex-0.8}/tests/PYTEST.md +0 -0
- {atex-0.7 → atex-0.8}/tests/conftest.py +0 -0
- {atex-0.7 → atex-0.8}/tests/test_another.py +0 -0
- {atex-0.7 → atex-0.8}/tests/test_foobar.py +0 -0
- {atex-0.7 → atex-0.8}/tf.py +0 -0
- {atex-0.7 → atex-0.8}/tmt_tests/.fmf/version +0 -0
- {atex-0.7 → atex-0.8}/tmt_tests/plans/reserve.fmf +0 -0
{atex-0.7 → atex-0.8}/PKG-INFO
RENAMED

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: atex
-Version: 0.
+Version: 0.8
 Summary: Ad-hoc Test EXecutor
 Project-URL: Homepage, https://github.com/RHSecurityCompliance/atex
 License-Expression: GPL-3.0-or-later
@@ -8,7 +8,7 @@ License-File: COPYING.txt
 Classifier: Operating System :: POSIX :: Linux
 Classifier: Programming Language :: Python :: 3
 Classifier: Topic :: Software Development :: Testing
-Requires-Python: >=3.
+Requires-Python: >=3.11
 Requires-Dist: fmf>=1.6
 Requires-Dist: urllib3<3,>=2
 Description-Content-Type: text/markdown
@@ -45,8 +45,103 @@ BREAK. DO NOT USE IT (for now).
 Unless specified otherwise, any content within this repository is distributed
 under the GNU GPLv3 license, see the [COPYING.txt](COPYING.txt) file for more.
 
+## Parallelism and cleanup
+
+There are effectively 3 methods of running things in parallel in Python:
+
+- `threading.Thread` (and related `concurrent.futures` classes)
+- `multiprocessing.Process` (and related `concurrent.futures` classes)
+- `asyncio`
+
+and there is no clear winner (in terms of cleanup on `SIGTERM` or Ctrl-C):
+
+- `Thread` has signal handlers only in the main thread and is unable to
+  interrupt any running threads without super ugly workarounds like `sleep(1)`
+  in every thread, checking some "pls exit" variable
+- `Process` is too heavyweight and makes sharing native Python objects hard,
+  but it does handle signals in each process individually
+- `asyncio` handles interrupting perfectly (every `try`/`except`/`finally`
+  completes just fine, `KeyboardInterrupt` is raised in every async context),
+  but async python is still (3.14) too weird and unsupported
+- `asyncio` effectively re-implements `subprocess` with a slightly different
+  API, same with `asyncio.Transport` and derivatives reimplementing `socket`
+- 3rd party libraries like `requests` or `urllib3` don't support it, one needs
+  to resort to spawning these in separate threads anyway
+- same with `os.*` functions and syscalls
+- every thing exposed via API needs to have 2 copies - async and non-async,
+  making it unbearable
+- other stdlib bugs, ie. "large" reads returning BlockingIOError sometimes
+
+The approach chosen by this project was to use `threading.Thread`, and
+implement thread safety for classes and their functions that need it.
+For example:
+
+```python
+class MachineReserver:
+    def __init__(self):
+        self.lock = threading.RLock()
+        self.job = None
+        self.proc = None
+
+    def reserve(self, ...):
+        try:
+            ...
+            job = schedule_new_job_on_external_service()
+            with self.lock:
+                self.job = job
+            ...
+            while not reserved(self.job):
+                time.sleep(60)
+            ...
+            with self.lock:
+                self.proc = subprocess.Popen(["ssh", f"{user}@{host}", ...)
+            ...
+            return machine
+        except Exception:
+            self.abort()
+            raise
+
+    def abort(self):
+        with self.lock:
+            if self.job:
+                cancel_external_service(self.job)
+                self.job = None
+            if self.proc:
+                self.proc.kill()
+                self.proc = None
+```
+
+Here, it is expected for `.reserve()` to be called in a long-running thread that
+provisions a new machine on some external service, waits for it to be installed
+and reserved, connects an ssh session to it and returns it back.
+
+But equally, `.abort()` can be called from an external thread and clean up any
+non-pythonic resources (external jobs, processes, temporary files, etc.) at
+which point **we don't care what happens to .reserve()**, it will probably fail
+with some exception, but doesn't do any harm.
+
+Here is where `daemon=True` threads come in handy - we can simply call `.abort()`
+from a `KeyboardInterrupt` (or `SIGTERM`) handle in the main thread, and just
+exit, automatically killing any leftover threads that are uselessly sleeping.
+(Realistically, we might want to spawn new threads to run many `.abort()`s in
+parallel, but the main thread can wait for those just fine.)
+
+It is not perfect, but it's probably the best Python can do.
+
+Note that races can still occur between a resource being reserved and written
+to `self.*` for `.abort()` to free, so resource de-allocation is not 100%
+guaranteed, but single-threaded interrupting has the same issue.
+Do have fallbacks (ie. max reserve times on the external service).
+
+Also note that `.reserve()` and `.abort()` could be also called by a context
+manager as `__enter__` and `__exit__`, ie. by a non-threaded caller (running
+everything in the main thread).
+
 ## Unsorted notes
 
+TODO: codestyle from contest
+
 ```
 - this is not tmt, the goal is to make a python toolbox *for* making runcontest
 style tools easily, not to replace those tools with tmt-style CLI syntax
````
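The `MachineReserver` snippet in the README text added above is pseudocode; the pattern it describes (a blocking `.reserve()` in a daemon thread, an idempotent lock-guarded `.abort()` callable from any other thread) can be exercised with a runnable toy. All names here — `Reserver`, the fake `"job-1"` handle — are hypothetical stand-ins, not atex APIs:

```python
import threading
import time


class Reserver:
    """Toy stand-in for the MachineReserver pattern (hypothetical names)."""

    def __init__(self):
        self.lock = threading.RLock()
        self.job = None  # handle to a pretend "external service" job

    def reserve(self):
        try:
            with self.lock:
                self.job = "job-1"  # pretend an external job was scheduled
            while True:             # pretend to poll the external service forever
                time.sleep(0.01)
        except Exception:
            self.abort()
            raise

    def abort(self):
        # safe to call from any thread; idempotent thanks to the lock
        with self.lock:
            if self.job:
                self.job = None     # "cancel" the external job


r = Reserver()
# daemon=True: a leftover sleeping thread dies automatically on interpreter exit
t = threading.Thread(target=r.reserve, daemon=True)
t.start()
while r.job is None:    # wait until the thread has "scheduled" its job
    time.sleep(0.001)
r.abort()               # e.g. from a KeyboardInterrupt / SIGTERM handler
print(r.job)            # the external resource was freed
```

The main thread never joins `t`; it just frees the external resource and exits, which is exactly the cleanup trade-off the README argues for.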
{atex-0.7 → atex-0.8}/README.md
RENAMED

````diff
@@ -30,8 +30,103 @@ BREAK. DO NOT USE IT (for now).
 Unless specified otherwise, any content within this repository is distributed
 under the GNU GPLv3 license, see the [COPYING.txt](COPYING.txt) file for more.
 
+## Parallelism and cleanup
+
+There are effectively 3 methods of running things in parallel in Python:
+
+- `threading.Thread` (and related `concurrent.futures` classes)
+- `multiprocessing.Process` (and related `concurrent.futures` classes)
+- `asyncio`
+
+and there is no clear winner (in terms of cleanup on `SIGTERM` or Ctrl-C):
+
+- `Thread` has signal handlers only in the main thread and is unable to
+  interrupt any running threads without super ugly workarounds like `sleep(1)`
+  in every thread, checking some "pls exit" variable
+- `Process` is too heavyweight and makes sharing native Python objects hard,
+  but it does handle signals in each process individually
+- `asyncio` handles interrupting perfectly (every `try`/`except`/`finally`
+  completes just fine, `KeyboardInterrupt` is raised in every async context),
+  but async python is still (3.14) too weird and unsupported
+- `asyncio` effectively re-implements `subprocess` with a slightly different
+  API, same with `asyncio.Transport` and derivatives reimplementing `socket`
+- 3rd party libraries like `requests` or `urllib3` don't support it, one needs
+  to resort to spawning these in separate threads anyway
+- same with `os.*` functions and syscalls
+- every thing exposed via API needs to have 2 copies - async and non-async,
+  making it unbearable
+- other stdlib bugs, ie. "large" reads returning BlockingIOError sometimes
+
+The approach chosen by this project was to use `threading.Thread`, and
+implement thread safety for classes and their functions that need it.
+For example:
+
+```python
+class MachineReserver:
+    def __init__(self):
+        self.lock = threading.RLock()
+        self.job = None
+        self.proc = None
+
+    def reserve(self, ...):
+        try:
+            ...
+            job = schedule_new_job_on_external_service()
+            with self.lock:
+                self.job = job
+            ...
+            while not reserved(self.job):
+                time.sleep(60)
+            ...
+            with self.lock:
+                self.proc = subprocess.Popen(["ssh", f"{user}@{host}", ...)
+            ...
+            return machine
+        except Exception:
+            self.abort()
+            raise
+
+    def abort(self):
+        with self.lock:
+            if self.job:
+                cancel_external_service(self.job)
+                self.job = None
+            if self.proc:
+                self.proc.kill()
+                self.proc = None
+```
+
+Here, it is expected for `.reserve()` to be called in a long-running thread that
+provisions a new machine on some external service, waits for it to be installed
+and reserved, connects an ssh session to it and returns it back.
+
+But equally, `.abort()` can be called from an external thread and clean up any
+non-pythonic resources (external jobs, processes, temporary files, etc.) at
+which point **we don't care what happens to .reserve()**, it will probably fail
+with some exception, but doesn't do any harm.
+
+Here is where `daemon=True` threads come in handy - we can simply call `.abort()`
+from a `KeyboardInterrupt` (or `SIGTERM`) handle in the main thread, and just
+exit, automatically killing any leftover threads that are uselessly sleeping.
+(Realistically, we might want to spawn new threads to run many `.abort()`s in
+parallel, but the main thread can wait for those just fine.)
+
+It is not perfect, but it's probably the best Python can do.
+
+Note that races can still occur between a resource being reserved and written
+to `self.*` for `.abort()` to free, so resource de-allocation is not 100%
+guaranteed, but single-threaded interrupting has the same issue.
+Do have fallbacks (ie. max reserve times on the external service).
+
+Also note that `.reserve()` and `.abort()` could be also called by a context
+manager as `__enter__` and `__exit__`, ie. by a non-threaded caller (running
+everything in the main thread).
+
 ## Unsorted notes
 
+TODO: codestyle from contest
+
 ```
 - this is not tmt, the goal is to make a python toolbox *for* making runcontest
 style tools easily, not to replace those tools with tmt-style CLI syntax
````
{atex-0.7 → atex-0.8}/TODO
RENAMED

````diff
@@ -1,4 +1,50 @@
--
+- Orchestrator
+- about platforms
+- platform in Orchestrator is distro+arch when applied to a Provisioner class
+- it is specific to that Provisioner, ie. CentOS-Stream-9 @ x86_64 on a TF Provisioner
+- getting a new machine
+- get a Remote from a Provisioner
+- upload tests to it
+- instantiate Executor for it
+- run plan setup
+- put it in the pool of Remotes to run tests on
+- removing machines
+- when all tests for a given platform have finished successfully
+  (all reruns also concluded)
+- maintaining machines
+- maintain an dict() of set()s of Remotes, indexed by platform (namedtuple?)
+- instantiate a ResultAggregator for each platform
+- probably a dict(), indexed by platform (namedtuple?)
+- to-be-run tests
+- discovered from fmf, one FMFTests instance for each platform
+- algorithm?
+- get platforms from user (ie. CentOS-Stream-9@x86_64@TestingFarmProvisioner)
+- these MUST be distro/arch/provisioner, not random strings)
+- discover tests using context built from the platform (distro/arch),
+  index the FMFTests in a dict() by platform name (or namedtuple?)
+- instantiate a Provisioner for each platform, index them in a dict()
+- may need some translation, ie. "latest-9" to a specific compose,
+  probably done by Provisioner functions
+- while True
+- for each provisioner instance, try to get a Remote
+- if we get a Remote:
+- run setup on it (see above), possibly in a separate thread
+- go over Remote instances in some "Remotes waiting to be set up" list()
+- if an instance is ready, find a to-be-run test, start executing it
+  and put the Remote into another "Remotes running tests" list()
+- go over Remotes in the "Remotes running tests" list()
+- if a Remote has finished,
+
+- CLI
+- atex orch \
+  platform -n 9.6@x86_64 -c distro=rhel-9.6 -c arch=x86_64 -p "TestingFarmProvisioner:RHEL-9.6.0-Nightly,x86_64" \
+  platform ... \
+  tests --repo tests_repo -p /plans/someplan -t /some/test -t /another/test -e SOME=ENV \
+  content --repo content_repo \
+  ...
+
+
+- concept of a RemoteSlot for Orchestrator ; basically, Orchestrator can
 instantiate Provisioner instances in two ways:
 - directly from given via pre-configured Provisioner classes (factories)
 - indirectly from a list of RemoteSlot instances (classes?)
````
atex-0.8/aggrtest.py
ADDED

````diff
@@ -0,0 +1,74 @@
+#!/usr/bin/python3
+
+import sys
+import logging
+from pathlib import Path
+import shutil
+import tempfile
+import concurrent.futures
+
+#from atex.provision.testingfarm import TestingFarmProvisioner
+
+from atex import executor, connection, fmf, orchestrator
+
+
+logging.basicConfig(
+    level=logging.DEBUG,
+    stream=sys.stderr,
+    format="%(asctime)s %(name)s: %(message)s",
+    datefmt="%Y-%m-%d %H:%M:%S",
+)
+
+fmf_tests = fmf.FMFTests(
+    "/home/jjaburek/gitit/tmt-experiments",
+    "/plans/friday-demo",
+    context=None,
+)
+
+shutil.rmtree("/tmp/testme")
+Path("/tmp/testme").mkdir()
+
+ssh_options = {
+    "User": "root",
+    "Hostname": "18.119.100.84",
+    "IdentityFile": "/tmp/tmpbq4yl7es/key_rsa",
+}
+
+print("\n\n------------------\n\n")
+
+with connection.ssh.ManagedSSHConn(options=ssh_options) as conn:
+    conn.cmd(["mkdir", "/var/myatex"])
+    with executor.Executor(fmf_tests, conn, state_dir="/var/myatex") as ex:
+        ex.upload_tests("/home/jjaburek/gitit/tmt-experiments")
+        ex.setup_plan()
+
+aggr = orchestrator.CSVAggregator("/tmp/csv_file", "/tmp/storage_dir")
+aggr.open()
+print(f"\n\n====== {aggr.csv_writer} =====\n\n")
+
+def run_one(num):
+    with connection.ssh.ManagedSSHConn(options=ssh_options) as conn:
+        with executor.Executor(fmf_tests, conn, state_dir="/var/myatex") as ex:
+            for test_name in fmf_tests.tests:
+                safe_test_name = test_name.strip("/").replace("/","-")
+                # TODO: actually delete them if test passed (or leave them if some DEBUG was set)
+                tmpdir = tempfile.TemporaryDirectory(prefix=f"{safe_test_name}-", delete=False, dir="/tmp/testme")
+                files_dir = Path(tmpdir.name) / "files"
+                json_file = Path(tmpdir.name) / "json"
+                ex.run_test(test_name, json_file, files_dir)
+                aggr.ingest(f"platform-{num}", test_name, json_file, files_dir)
+
+print("\n\n------------------\n\n")
+
+run_one(1)
+n = 2
+
+with concurrent.futures.ThreadPoolExecutor(max_workers=9) as ex:
+    for _ in range(9):
+        ex.submit(run_one, n)
+        n += 1
+
+aggr.close()
+
+with connection.ssh.ManagedSSHConn(options=ssh_options) as conn:
+    conn.cmd(["rm", "-rf", "/var/myatex"])
````
atex-0.8/atex/cli/fmf.py
ADDED

````diff
@@ -0,0 +1,93 @@
+import sys
+import pprint
+
+from .. import fmf
+
+
+def _fatal(msg):
+    print(msg, file=sys.stderr)
+    sys.exit(1)
+
+
+def _get_context(args):
+    context = {}
+    if args.context:
+        for c in args.context:
+            key, value = c.split("=", 1)
+            context[key] = value
+    return context or None
+
+
+def discover(args):
+    result = fmf.FMFTests(args.root, args.plan, context=_get_context(args))
+    for name in result.tests:
+        print(name)
+
+
+def show(args):
+    result = fmf.FMFTests(args.root, args.plan, context=_get_context(args))
+    if tests := list(result.match(args.test)):
+        for test in tests:
+            print(f"\n--- {test.name} ---")
+            pprint.pprint(test.data)
+    else:
+        _fatal(f"Not reachable via {args.plan} discovery: {args.test}")
+
+
+def prepare(args):
+    result = fmf.FMFTests(args.root, args.plan, context=_get_context(args))
+    print("--- fmf root ---")
+    print(str(result.root))
+    print("--- prepare packages ---")
+    print("\n".join(result.prepare_pkgs))
+    print("--- plan environment ---")
+    print("\n".join("{k}={v}" for k,v in result.plan_env))
+    for script in result.prepare_scripts:
+        print("--- prepare script ---")
+        print(script)
+        print("----------------------")
+
+
+def parse_args(parser):
+    parser.add_argument("--root", help="path to directory with fmf tests", default=".")
+    parser.add_argument("--context", "-c", help="tmt style key=value context", action="append")
+    cmds = parser.add_subparsers(
+        dest="_cmd", help="executor feature", metavar="<cmd>", required=True,
+    )
+
+    cmd = cmds.add_parser(
+        "discover", aliases=("di",),
+        help="list tests, post-processed by tmt plans",
+    )
+    cmd.add_argument("plan", help="tmt plan to use for discovery")
+
+    cmd = cmds.add_parser(
+        "show",
+        help="show fmf data of a test",
+    )
+    cmd.add_argument("plan", help="tmt plan to use for discovery")
+    cmd.add_argument("test", help="fmf style test regex")
+
+    cmd = cmds.add_parser(
+        "prepare",
+        help="show prepare-related FMFTests details",
+    )
+    cmd.add_argument("plan", help="tmt plan to parse")
+
+
+def main(args):
+    if args._cmd in ("discover", "di"):
+        discover(args)
+    elif args._cmd == "show":
+        show(args)
+    elif args._cmd == "prepare":
+        prepare(args)
+    else:
+        raise RuntimeError(f"unknown args: {args}")
+
+
+CLI_SPEC = {
+    "help": "simple CLI interface to atex.fmf",
+    "args": parse_args,
+    "main": main,
+}
````
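The new `atex fmf` CLI above collects repeated `-c key=value` options (argparse `action="append"`) and turns them into a context dict via `split("=", 1)`. A standalone re-implementation of just that parsing step, for illustration only (the `get_context` name is hypothetical, not imported from atex):

```python
def get_context(values):
    """Parse tmt-style key=value strings, as collected by argparse action="append"."""
    context = {}
    for c in values or ():
        # split only on the first "=", so values may themselves contain "="
        key, value = c.split("=", 1)
        context[key] = value
    return context or None  # None signals "no context given"


print(get_context(["distro=rhel-9.6", "arch=x86_64"]))
print(get_context(None))
```

Returning `None` rather than an empty dict mirrors `_get_context()` in the diff, which lets `FMFTests(..., context=None)` mean "no context" explicitly.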
{atex-0.7 → atex-0.8}/atex/cli/testingfarm.py
RENAMED

````diff
@@ -1,4 +1,5 @@
 import sys
+import json
 import pprint
 
 from .. import util
@@ -49,6 +50,8 @@ def search_requests(args):
     reply = api.search_requests(
         state=args.state,
         mine=not args.all,
+        user_id=args.user_id,
+        token_id=args.token_id,
         ranch=args.ranch,
         created_before=args.before,
         created_after=args.after,
@@ -56,20 +59,24 @@
     if not reply:
         return
 
-
-
-
+    if args.json:
+        for req in sorted(reply, key=lambda x: x["created"]):
+            print(json.dumps(req))
+    else:
+        for req in sorted(reply, key=lambda x: x["created"]):
+            req_id = req["id"]
+            created = req["created"].partition(".")[0]
 
-
-
-
-
-
-
-
+            envs = []
+            for env in req["environments_requested"]:
+                if "os" in env and env["os"] and "compose" in env["os"]:
+                    compose = env["os"]["compose"]
+                    arch = env["arch"]
+                    if compose and arch:
+                        envs.append(f"{compose}@{arch}")
+            envs_str = ", ".join(envs)
 
-
+            print(f"{created} {req_id} : {envs_str}")
 
 
 def reserve(args):
@@ -177,9 +184,12 @@ def parse_args(parser):
     )
     cmd.add_argument("--state", help="request state (running, etc.)", required=True)
    cmd.add_argument("--all", help="all requests, not just owned by token", action="store_true")
-    cmd.add_argument("--ranch", help="Testing Farm ranch")
+    cmd.add_argument("--ranch", help="Testing Farm ranch (detected from token)")
+    cmd.add_argument("--user-id", help="'user_id' request field (detected from token)")
+    cmd.add_argument("--token-id", help="'token_id' request field (detected from token)")
     cmd.add_argument("--before", help="only requests created before ISO8601")
     cmd.add_argument("--after", help="only requests created after ISO8601")
+    cmd.add_argument("--json", help="full details, one request per line", action="store_true")
 
     cmd = cmds.add_parser(
         "reserve",
````
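The new non-JSON output path in `search_requests()` condenses each request's environments into `compose@arch` strings, skipping environments without both fields. That filtering can be sketched standalone (the `format_envs` name and the sample data are made up for illustration, not a real Testing Farm API reply):

```python
def format_envs(environments_requested):
    # keep only environments that specify both an OS compose and an arch,
    # mirroring the loop added to search_requests() in the diff above
    envs = []
    for env in environments_requested:
        if "os" in env and env["os"] and "compose" in env["os"]:
            compose = env["os"]["compose"]
            arch = env["arch"]
            if compose and arch:
                envs.append(f"{compose}@{arch}")
    return ", ".join(envs)


sample = [
    {"os": {"compose": "CentOS-Stream-9"}, "arch": "x86_64"},
    {"os": None, "arch": "aarch64"},  # no OS requested -> skipped
]
print(format_envs(sample))  # CentOS-Stream-9@x86_64
```

Note the `env["os"]` truthiness check: the API can return `"os": null`, which `"os" in env` alone would not catch.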
{atex-0.7 → atex-0.8}/atex/connection/__init__.py
RENAMED

````diff
@@ -65,14 +65,6 @@ class Connection:
         """
         raise NotImplementedError(f"'disconnect' not implemented for {self.__class__.__name__}")
 
-    # TODO: is this needed? .. we probably want Remote.alive() instead
-    #def alive(self):
-    #    """
-    #    Return True if the connection was established and is active,
-    #    False otherwise.
-    #    """
-    #    raise NotImplementedError(f"'alive' not implemented for {self.__class__.__name__}")
-
     def cmd(self, command, func=_util.subprocess_run, **func_args):
         """
         Execute a single command on the remote, using subprocess-like semantics.
````
{atex-0.7 → atex-0.8}/atex/connection/ssh.py
RENAMED

````diff
@@ -68,7 +68,7 @@ def _shell_cmd(command, sudo=None):
     """
     Make a command line for running 'command' on the target system.
     """
-    quoted_args = (shlex.quote(arg) for arg in command)
+    quoted_args = (shlex.quote(str(arg)) for arg in command)
     if sudo:
         return " ".join((
             "exec", "sudo", "--no-update", "--non-interactive", "--user", sudo, "--", *quoted_args,
@@ -167,15 +167,12 @@ class StatelessSSHConn(Connection):
         Optional, .cmd() and .rsync() work without it, but it is provided here
         for compatibility with the Connection API.
         """
-        # TODO: just wait until .cmd(['true']) starts responding
+        # TODO: just wait until .cmd(['true']) starts responding ?
         pass
 
     def disconnect(self):
         pass
 
-    # def alive(self):
-    #     return True
-
     # have options as kwarg to be compatible with other functions here
     def cmd(self, command, options=None, func=util.subprocess_run, **func_args):
         unified_options = self.options.copy()
@@ -231,7 +228,7 @@ class ManagedSSHConn(Connection):
     to manage this complexity.
     """
 
-    # TODO: thread safety and locking via self.lock
+    # TODO: thread safety and locking via self.lock ?
 
     def __init__(self, options, *, password=None, sudo=None):
         """
@@ -251,12 +248,6 @@
         self._tmpdir = None
         self._master_proc = None
 
-    # def __copy__(self):
-    #     return type(self)(self.options, password=self.password)
-    #
-    # def copy(self):
-    #     return self.__copy__()
-
     def assert_master(self):
         proc = self._master_proc
         if not proc:
@@ -272,13 +263,6 @@
             f"SSH ControlMaster on {self._tmpdir} exited with {code}{out}",
         )
 
-    # def alive(self):
-    #     try:
-    #         self.assert_master()
-    #         return True
-    #     except (NotConnectedError, DisconnectedError):
-    #         return False
-
     def disconnect(self):
         proc = self._master_proc
         if not proc:
````
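The one functional change to ssh.py above is `shlex.quote(str(arg))`. A quick standalone check of why the `str()` matters: `shlex.quote()` only accepts strings, so non-string command arguments such as `pathlib.Path` previously raised `TypeError` (the command list here is made up for illustration):

```python
import shlex
from pathlib import Path

command = ["cat", Path("/tmp/some file")]

# shlex.quote() runs a regex search over its argument, so a Path raises TypeError
try:
    " ".join(shlex.quote(arg) for arg in command)
except TypeError as e:
    print("old behaviour:", e)

# str() first, as in the 0.8 change, quotes any os.PathLike cleanly
quoted = " ".join(shlex.quote(str(arg)) for arg in command)
print(quoted)  # cat '/tmp/some file'
```

This lets callers pass `Path` objects (or ints, etc.) straight into `conn.cmd([...])` without converting each argument themselves.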
{atex-0.7/atex/minitmt → atex-0.8/atex/executor}/README.md
RENAMED

````diff
@@ -1,4 +1,4 @@
-#
+# Executor
 
 This is a minimalistic re-implementation of some of the features of
 [tmt](https://github.com/teemtee/tmt), without re-inventing the test metadata
@@ -60,9 +60,9 @@ pieces of fmf metadata, than to deal with all of the above.
 
 ## Compatibility
 
-
-the idea is that you should be able to write tests that **work with
-easily.
+This implementation is designed to be mostly-compatible with tmt in most simple
+use cases, the idea is that you should be able to write tests that **work with
+both**, easily.
 
 Our main problem with the ecosystem around tmt is that it is heavily
 Beakerlib-inspired, with tools relying on a small subset of tmt functionality
@@ -76,10 +76,9 @@ So the goal here is to write tests that
 - having no additional logs, letting tmt use `output.txt` as test output,
   renamed to `testout.log` by Testing Farm
 - not trying to be fancy
-- run under
+- run under atex in a more "wild" mode, without those limitations
   - tens of millions of results
   - logs with full paths
-  - cross-test result reporting
   - etc.
 
 Hopefully running well under Testing Farm / OSCI / etc., while being more
@@ -106,6 +105,8 @@ Everything supported by fmf should work, incl.
 - `exclude` support (custom `re`-based filter, not in fmf)
 - No remote git repo (aside from what fmf supports natively), no `check`,
   no `modified-only`, no `adjust-tests`, etc.
+- Tests from multiple `discover` sections are added together, eg. any order
+  of the `discover` sections in the fmf is (currently) not honored.
 - `provision`
   - Ignored (custom provisioning logic used)
 - `prepare`
@@ -154,7 +155,7 @@ Everything supported by fmf should work, incl.
   - Ignored
 - `result`
   - Ignored, intentionally, see [RESULTS.md](RESULTS.md) below
-  - The intention is for you to be able to use **both** tmt and
+  - The intention is for you to be able to use **both** tmt and atex
     reporting if you want to, so `result` is for when you want full tmt
 - `restart`
   - Ignored, restart how many times you want until `duration`
````