and can be used by defining the variable CI_MODE, e.g.:
```
variables:
  CI_MODE: "Batch"
```
### Slurm-Batched -> Default
The default mode (Slurm-Batched) is active if the CI_MODE variable is not set.
All commands are executed via sbatch; the output is parsed in parallel and redirected to GitLab.
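A minimal sketch of a job running in the default mode (job name and script are illustrative):
```
default-job:
  stage: test
  script:
    - ./some_computation.exe # executed via sbatch by the runner
```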
### Slurm-Srun -> Slurm
All commands will be executed via srun within an interactive batch job.
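As with the other modes, it is selected via CI_MODE, e.g.:
```
variables:
  CI_MODE: "Slurm"
```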
### Singularity-Srun -> Singularity
The Singularity mode spawns a Singularity container within an interactive Slurm job and runs all commands inside the container. Locally defined containers are still an experimental feature; the RWTH cluster currently only supports globally defined containers (via the module file). The container must be defined with the variable CONTAINER within the CI definition file.
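A minimal sketch selecting this mode (the container name is a placeholder; on the RWTH cluster it must refer to a globally defined container):
```
variables:
  CI_MODE: "Singularity"
  CONTAINER: "my_container" # placeholder name
```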
### Singularity-Batched -> Singularity_Batch
The batched Singularity mode is only available when downscoping is activated. Similar to the Singularity srun mode, the container must be defined with the variable CONTAINER within the CI definition file.
The batched Singularity mode starts the container in parallel via sbatch; consequently, everything within the script is executed once per task. This mode should therefore be limited to the execution of MPI applications within a containerized environment.
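A sketch for the batched mode; the container name and the SLURM_PARAM suffix are placeholders:
```
variables:
  CI_MODE: "Singularity_Batch"
  CONTAINER: "my_mpi_container" # placeholder name
  SLURM_PARAM_NTASKS: "-n 4"    # placeholder flag: the script is executed once per task
```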
### Sbatch-Batched -> Sbatch
The sbatch mode allows the execution of a predefined batch script. This batch script will be executed via sbatch and the output will be piped to GitLab, similar to the batched Slurm mode.
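A sketch, assuming the script path is provided via BATCH_SCRIPT as in the Batch mode below (the path is illustrative):
```
variables:
  CI_MODE: "Sbatch"
  BATCH_SCRIPT: "ci/job.sh" # assumed: path relative to the repository, as in the Batch mode
```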
### Sbatch-Srun -> Batch
The batch mode allows the execution of a predefined batch script. The script is interpreted by the runner to extract the Slurm parameters, and is then run as an interactive batch job. The script must be defined via the variable BATCH_SCRIPT, which takes the path to your script relative to your repository. **Do NOT specify a Slurm output file**; this will lead to a faulty pipeline.
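For example (the script path is illustrative):
```
variables:
  CI_MODE: "Batch"
  BATCH_SCRIPT: "ci/job.sh" # path relative to the repository root; the script must not set a Slurm output file
```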
### Slurm Job Across Stages
This mode allows you to share resources across multiple stages, so that data can be reused from main memory instead of uploading and downloading artifacts between stages. The start and end of a shared chain are defined via the variables BEGIN_SINGLE_SLURM_JOB and END_SINGLE_SLURM_JOB, as sketched below. The Slurm parameters used for the first job define the maximum values for all later jobs.
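A sketch of a shared chain, assuming the variables act as markers on the first and last job of the chain (the exact expected values are an assumption; stage and job names are illustrative):
```
build-job:
  stage: build
  variables:
    BEGIN_SINGLE_SLURM_JOB: "1"            # assumed marker value: start of the shared chain
    SLURM_PARAM_CPUS: "--cpus-per-task 24" # the first job's parameters cap all later jobs
  script:
    - ./build.sh

test-job:
  stage: test
  variables:
    END_SINGLE_SLURM_JOB: "1" # assumed marker value: end of the chain
  script:
    - ./test.sh
```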
## Variables
To ease the Slurm integration, a set of predefined variables can be used to run a job in your pipeline under specific conditions. These parameters are provided via variables with the prefix `SLURM_PARAM`.
The content of these variables is the set of flags to be used for the corresponding Slurm execution, just as with a regular Slurm batch job.
In the following, we see a batch script with a set of parameters:
* The job should use 24 cores
* The job should run under the account test1234
* The job should be limited to 15 minutes of runtime
Batch-Script:
```
#!/bin/bash
#SBATCH --cpus-per-task 24
#SBATCH -A test1234
#SBATCH -t 15:00
./some_computation.exe
```
CI-Script:
```
stages:
  - test

lint-test-job: # CI-Job name
  stage: test # It can run at the same time as unit-test-job (in parallel).