xe-outsource 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,243 @@
1
+ Metadata-Version: 2.1
2
+ Name: xe-outsource
3
+ Version: 0.4.0
4
+ Summary: Job submission of reprocessing
5
+ Home-page: https://github.com/XENONnT/outsource
6
+ Author: Outsource contributors, the XENON collaboration
7
+ Requires-Python: >=3.8,<3.13
8
+ Classifier: Development Status :: 4 - Beta
9
+ Classifier: Intended Audience :: Science/Research
10
+ Classifier: License :: OSI Approved :: BSD License
11
+ Classifier: Natural Language :: English
12
+ Classifier: Programming Language :: Python :: 3
13
+ Classifier: Programming Language :: Python :: 3.8
14
+ Classifier: Programming Language :: Python :: 3.9
15
+ Classifier: Programming Language :: Python :: 3.10
16
+ Classifier: Programming Language :: Python :: 3.11
17
+ Classifier: Programming Language :: Python :: 3.12
18
+ Classifier: Programming Language :: Python :: Implementation :: CPython
19
+ Classifier: Topic :: Scientific/Engineering :: Physics
20
+ Requires-Dist: numpy
21
+ Requires-Dist: tqdm
22
+ Requires-Dist: utilix (>=0.9.1)
23
+ Requires-Dist: xe-admix
24
+ Project-URL: Repository, https://github.com/XENONnT/outsource
25
+ Description-Content-Type: text/markdown
26
+
27
+ # outsource
28
+
29
+ [![pre-commit.ci status](https://results.pre-commit.ci/badge/github/XENONnT/outsource/master.svg)](https://results.pre-commit.ci/latest/github/XENONnT/outsource/master)
30
+ [![PyPI version shields.io](https://img.shields.io/pypi/v/xe-outsource.svg)](https://pypi.python.org/pypi/xe-outsource/)
31
+
32
+ Job submission code for XENONnT.
33
+
34
+ Outsource submits XENON processing jobs to the [Open Science Grid](https://osg-htc.org). It is essentially a wrapper around [Pegasus](https://pegasus.isi.edu), which is itself something of a wrapper around [HTCondor](https://htcondor.readthedocs.io/en/latest/).
35
+
36
+
37
+ ## Prerequisites
38
+ Those running outsource need to be production users of XENON (i.e. computing experts). You will therefore need certain permissions:
39
+ - Access to the XENON OSG login node named ap23 (and a CI account). For more details see [here](https://xe1t-wiki.lngs.infn.it/doku.php?id=xenon:xenonnt:analysis:guide#for_ci-connect_osg_service).
40
+ - Production-user credentials for the RunDB API.
41
+ - A grid certificate with access to the `production` rucio account.
42
+
43
+ #### Grid certificates
44
+ Production users will need their own grid certificate to transfer data on the grid. You can get a certificate via [CILogon](https://www.cilogon.org/home).
45
+ After you get your certificate, you will need to use it to join the XENON VO. For instructions, see this (outdated but still useful) [wiki page](https://xe1t-wiki.lngs.infn.it/doku.php?id=xenon:xenon1t:sim:grid).
46
+
47
+ Additionally, you will need to add the DN of this certificate to the production rucio account. For this, please ask on Slack and tag Judith or Pascal.
48
+ After you have these credentials set up, you are ready to use outsource and submit processing jobs to OSG.
49
+
50
+ #### Environment
51
+ Please use the XENONnT environment. On the OSG submit hosts (assuming you are on AP23), it can be set up by sourcing:
52
+
53
+ ```
54
+ #!/bin/bash
55
+
56
+ . /cvmfs/xenon.opensciencegrid.org/releases/nT/development/setup.sh
57
+ export XENON_CONFIG=$HOME/.xenon_config
58
+ export RUCIO_ACCOUNT=production
59
+ export X509_USER_PROXY=$HOME/.xenon_service_proxy
60
+ export PYTHONPATH=`pegasus-config --python`:$PYTHONPATH
61
+ ```
62
+
63
+ #### Proxy
64
+ Please make sure you create a key at least 2048 bits long. Example:
65
+
66
+ ```
67
+ voms-proxy-init -voms xenon.biggrid.nl -bits 2048 -hours 168 -out ~/.xenon_service_proxy
68
+ ```
69
+
70
+ At the moment, outsource assumes that your certificate proxy is located at `X509_USER_PROXY`.
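Before submitting anything, it can save time to sanity-check that the proxy actually exists at that location. Below is a small sketch of such a check (the `voms-proxy-info` call is optional and assumes the VOMS tools are on your `PATH`; adjust the default path if you keep your proxy elsewhere):

```shell
#!/bin/bash
# Sanity-check the proxy location before submitting workflows.
check_proxy() {
    local proxy="${X509_USER_PROXY:-$HOME/.xenon_service_proxy}"
    if [ ! -f "$proxy" ]; then
        echo "No proxy found at $proxy -- run voms-proxy-init first" >&2
        return 1
    fi
    # If the VOMS tools are on PATH, also print the remaining lifetime in seconds.
    if command -v voms-proxy-info >/dev/null 2>&1; then
        voms-proxy-info -file "$proxy" -timeleft
    fi
    echo "Proxy OK: $proxy"
}

check_proxy || true
```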
71
+
72
+ ## Installation
73
+ Since outsource is used by production users, it is currently not installed from PyPI (people often want to change the source code locally anyway). To install, first clone the repository from GitHub into a directory of your choice in your home directory on the XENON OSG login node.
74
+
75
+ ```
76
+ git clone https://github.com/XENONnT/outsource.git
77
+ ```
78
+
79
+ Outsource depends on several packages in the XENON base environment, so we recommend installing from within one of those environments. We cannot use containers because we rely on the host system's HTCondor installation for job submission.
80
+ We therefore recommend using the standard CVMFS installation of the XENON environments, e.g.
81
+
82
+ ```
83
+ . /cvmfs/xenon.opensciencegrid.org/releases/nT/development/setup.sh
84
+ ```
85
+
86
+ Then you can install using pip. We recommend a normal (albeit `--user`) install, because an executable script does not get installed properly in development mode.
87
+
88
+ ```
89
+ cd outsource
90
+ pip install ./ --user
91
+ ```
92
+
93
+ Note that if you change anything in the source code, you will need to reinstall each time. If you want to install in development mode instead (or additionally), do
94
+
95
+ ```
96
+ pip install -e ./ --user
97
+ ```
98
+
99
+ but note that the `outsource` executable might not pick up all your changes if you go this route.
100
+
101
+
102
+ ## Configuration file
103
+
104
+ Just like [utilix](https://github.com/XENONnT/utilix), this tool expects a configuration file at the path given by the environment variable `XENON_CONFIG`. You will need to create your own config like the one below, filling in the values.
105
+
106
+ In particular, it uses the options in the config section with the header `[Outsource]`; see below.
107
+
108
+ **Note: you will need to fill in the `<ask teamA>` placeholders**.
109
+
110
+ ```
111
+ [basic]
112
+ # DEBUG is helpful for debugging but produces a lot of messages
113
+ logging_level=DEBUG
114
+
115
+ [RunDB]
116
+ rundb_api_url = <ask teamA>
117
+ rundb_api_user = xenon-admin
118
+ rundb_api_password = <ask teamA>
119
+ xent_user = nt_analysis
120
+ xent_password = <ask teamA>
121
+ xent_database = xenonnt
122
+ xe1t_url = <ask teamA>
123
+ xe1t_user = 1t_bookkeeping
124
+ xe1t_password = <ask teamA>
125
+ xe1t_database = run
126
+
127
+ [Outsource]
128
+ work_dir = /scratch/$USER/workflows
129
+ # sites to exclude (GLIDEIN_Site), comma-separated list
130
+ exclude_sites = SU-ITS, NotreDame, UConn-HPC, Purdue Geddes, Chameleon, WSU-GRID, SIUE-CC-production, Lancium
131
+ # data types to process
132
+ include_data_types = peaklets, hitlets_nv, events_nv, events_mv, event_info_double, afterpulses, led_calibration
133
+ exclude_modes = tpc_noise, tpc_rn_8pmts, tpc_commissioning_pmtgain, tpc_rn_6pmts, tpc_rn_12_pmts, nVeto_LED_calibration,tpc_rn_12pmts, nVeto_LED_calibration_2
134
+ us_only = False
135
+ hs06_test_run = False
136
+ raw_records_rse = UC_OSG_USERDISK
137
+ records_rse = UC_MIDWAY_USERDISK
138
+ peaklets_rse = UC_OSG_USERDISK
139
+ events_rse = UC_MIDWAY_USERDISK
140
+ min_run_number = 1
141
+ max_run_number = 999999
142
+ max_daily = 2000
143
+ chunks_per_job = 10
144
+ combine_memory = 60000
145
+ combine_disk = 120000
146
+ peaklets_memory = 14500
147
+ peaklets_disk = 50000
148
+ events_memory = 60000
149
+ events_disk = 120000
150
+ dagman_retry = 0
151
+ dagman_maxidle = 5000
152
+ dagman_maxjobs = 300
153
+ ```
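As a rough illustration of how such a file can be read, here is a stdlib `configparser` sketch (the real code goes through utilix, so the names and the embedded excerpt below are illustrative only):

```python
import configparser

# A minimal excerpt of the config above (illustrative values only).
EXAMPLE = """
[Outsource]
min_run_number = 1
max_run_number = 999999
chunks_per_job = 10
include_data_types = peaklets, hitlets_nv, events_nv
"""

config = configparser.ConfigParser()
config.read_string(EXAMPLE)  # with a real file: config.read(os.environ["XENON_CONFIG"])

outsource = config["Outsource"]
min_run = outsource.getint("min_run_number")
# Comma-separated options need splitting by hand.
data_types = [d.strip() for d in outsource["include_data_types"].split(",")]
print(min_run, data_types)
```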
154
+
155
+ ## Add a setup script
156
+ For convenience, we recommend writing a simple bash script that sets up the outsource environment. Below is an example called `setup_outsource.sh`; note you will need to change things like usernames and the container.
157
+
158
+ ```
159
+ #!/bin/bash
160
+
161
+ . /cvmfs/xenon.opensciencegrid.org/releases/nT/development/setup.sh
162
+ export RUCIO_ACCOUNT=production
163
+ export X509_USER_PROXY=$HOME/.xenon_service_proxy
164
+ export PATH=/opt/pegasus/current/bin:$PATH
165
+ export PYTHONPATH=`pegasus-config --python`:$PYTHONPATH
166
+ ```
167
+
168
+ What this script does:
169
+ - Sources a CVMFS environment for the particular environment we are using (this will change depending on what data you want to process)
170
+ - Sets the Rucio account to `production`
171
+ - Sets the X509 user proxy location via an environment variable
172
+ - Prepends the Pegasus `bin` directory to `PATH` and adds the Pegasus Python libraries to `PYTHONPATH`, so the locally installed outsource executable and the Pegasus tools are found
173
+
174
+ Then, every time you want to submit jobs, you can set up your environment with
175
+
176
+ ```
177
+ . setup_outsource.sh
178
+ ```
179
+
180
+ ## Submitting workflows
181
+ After installation and setting up the environment, it is time to submit jobs. The easiest way to do this is with the `outsource` executable. You can see what it takes as input via `outsource --help`, which returns
182
+
183
+ ```
184
+ [whoami@ap23 ~]$ outsource --help
185
+ usage: Outsource [-h] --context CONTEXT --xedocs_version XEDOCS_VERSION [--image IMAGE]
186
+ [--detector {all,tpc,muon_veto,neutron_veto}] [--workflow_id WORKFLOW_ID] [--ignore_processed]
187
+ [--debug] [--from NUMBER_FROM] [--to NUMBER_TO] [--run [RUN ...]] [--runlist RUNLIST] [--rucio_upload]
188
+ [--rundb_update]
189
+
190
+ optional arguments:
191
+ -h, --help show this help message and exit
192
+ --context CONTEXT Name of context, imported from cutax.
193
+ --xedocs_version XEDOCS_VERSION
194
+ global version, an argument for context.
195
+ --image IMAGE Singularity image. Accepts either a full path or a single name and assumes a format like this:
196
+ /cvmfs/singularity.opensciencegrid.org/xenonnt/base-environment:{image}
197
+ --detector {all,tpc,muon_veto,neutron_veto}
198
+ Detector to focus on. If 'all' (default) will consider all three detectors. Otherwise pass a
199
+ single one of 'tpc', 'neutron_veto', 'muon_veto'. Pairs of detectors not yet supported.
200
+ --workflow_id WORKFLOW_ID
201
+ Custom workflow_id of workflow. If not passed, inferred from today's date.
202
+ --ignore_processed Ignore runs that have already been processed
203
+ --debug Debug mode. Does not automatically submit the workflow, and jobs do not update RunDB nor upload
204
+ to rucio.
205
+ --from NUMBER_FROM Run number to start with
206
+ --to NUMBER_TO Run number to end with
207
+ --run [RUN ...] Space separated specific run number(s) to process
208
+ --runlist RUNLIST Path to a runlist file
209
+ --rucio_upload Upload data to rucio after processing
210
+ --rundb_update Update RunDB after processing
211
+ ```
212
+
213
+ This script requires, at minimum, the name of a context (which must reside in the cutax version installed in your current environment). If no other arguments are passed, it will try to find all data that can be processed, and process it. Some inputs from the configuration file pointed to by `XENON_CONFIG` are also used, specifically:
214
+ - The minimum run number considered
215
+ - The total number of runs to process at one time
216
+ - What data types to process
217
+ - The exclusion of different run modes
218
+ - etc.
219
+
220
+ As a first try, pass the `--debug` flag to see how many runs outsource would try to process. It may produce a lot of output, as it also prints the list of runs and the location of the workflow.
221
+
222
+ ```
223
+ outsource --debug
224
+ ```
225
+
226
+ If you want to narrow down the list of runs to process, you can do one of several things:
227
+ - Pass a run or a small list of runs with the `--run` flag
228
+ - Pass, with the `--runlist` flag, the path to a text file containing a newline-separated runlist that you made in some other way
229
+ - Use the `--from` and/or `--to` flags to consider a range of run numbers
230
+ - Specify `--detector`; the source and run mode can also be controlled in the file indicated by `XENON_CONFIG`, via options like `include_modes` and `exclude_sources`
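For the `--runlist` option, the expected file is simply one run number per line. A sketch of generating one (the file name and run selection here are arbitrary examples):

```python
from pathlib import Path

# Hypothetical selection of runs to reprocess.
runs = [10000, 10001, 10007]

# --runlist expects a newline-separated list of run numbers.
runlist = Path("runlist.txt")
runlist.write_text("\n".join(str(r) for r in runs) + "\n")

print(runlist.read_text())
```

You would then submit with `outsource --runlist runlist.txt --debug`.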
231
+
232
+ **One important thing to keep in mind: you also specify the singularity image the jobs run in**. This adds significant room for mistakes, because the environment you submit jobs from (and thus use for the query that determines what needs processing) is not necessarily the same as the one that actually does the processing.
233
+ It is therefore essential that the image you pass with the `--image` flag corresponds to the same base_environment tag as the CVMFS environment you are in. Otherwise, you might run into problems with datatype hashes not matching and/or the context name not existing in the cutax version you are using.
234
+ A nice feature would be to get the tag automatically from the environment itself, so this problem cannot arise (TODO: can someone work on this?).
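One possible approach for that TODO, purely as a sketch: derive the tag from the path of the sourced CVMFS setup script. The helper name and the assumed path layout (`.../releases/nT/<tag>/setup.sh`, matching the examples above) are hypothetical, not part of outsource:

```python
def guess_image_tag(setup_path: str) -> str:
    """Guess the base_environment tag from a CVMFS setup path like
    /cvmfs/xenon.opensciencegrid.org/releases/nT/<tag>/setup.sh
    (a hypothetical helper, not part of outsource)."""
    parts = setup_path.strip("/").split("/")
    # The tag sits between the "nT" directory and "setup.sh".
    return parts[parts.index("nT") + 1]

tag = guess_image_tag(
    "/cvmfs/xenon.opensciencegrid.org/releases/nT/development/setup.sh"
)
print(tag)  # development
```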
235
+
236
+ If it is your very first time submitting a workflow, try submitting a single run in debug mode:
237
+
238
+ ```
239
+ outsource --run {run_number} --debug
240
+ ```
241
+
242
+ This will create a Pegasus workflow, which you then submit yourself using `pegasus-run`. Keep in mind that in debug mode it will NOT upload results to rucio or update RunDB. The results will instead be copied to your scratch folder on ap23 (`/scratch/$USER/...`).
243
+
@@ -0,0 +1 @@
1
+ __version__ = "0.4.0"