Verified Commit 7e2cb986 authored by Jannis Klinkenberg

more content for job scripts

parent 34bfb032
# Generic Slurm Job Script Examples
This folder contains common job script examples and best practices.
## Asynchronous jobs
The following table lists examples for asynchronous jobs. Each script contains both:
- The allocation requests for your job, e.g., in the form of `#SBATCH` flags in your batch script
- The actual task or instructions that your job needs to perform.
You can submit such jobs to the Slurm batch system via `sbatch <parameters> <script-name>` (detailed documentation [here](https://slurm.schedmd.com/sbatch.html)). Typically, these jobs are queued and started by the workload manager as soon as the requested resources are free and it is your turn to compute. (Remember: many people might want to use those hardware resources, so Slurm needs to find a fair compromise.) A minimal example script is sketched below the table.
| File/Folder | Description |
|--------|-------------|
| [mpi_job_basic.sh](mpi_job_basic.sh) | A basic MPI job script, useful for testing and learning MPI-based job submission. |
| [mpi_job_1node.sh](mpi_job_1node.sh) | Runs an MPI job on a single node, demonstrating intra-node parallel processing with multiple processes per node. |
| [mpi_job_2nodes.sh](mpi_job_2nodes.sh) | Runs an MPI job spanning 2 full compute nodes, demonstrating inter-node parallelism and distributed computing across multiple machines. |
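For illustration, here is a minimal sketch of what such a job script could look like. Note the assumptions: the partition name `c23ms` and the 96-core count are borrowed from the interactive examples below and may differ on your cluster, and `my_openmp_program` is a hypothetical placeholder for your own executable.

```zsh
#!/usr/bin/zsh

### 1) Allocation requests for the job (adjust values to your needs)
#SBATCH --job-name=openmp_example
#SBATCH --time=00:15:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=96
#SBATCH --partition=c23ms

### 2) The actual task the job needs to perform
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
./my_openmp_program   # hypothetical placeholder for your executable
```

You would then submit this script via `sbatch openmp_example.sh` and can check its status with `squeue --me`.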
## Interactive jobs
Sometimes you are still in the testing/debugging phase, or you do not yet know exactly what your job script instructions should look like. In such cases, an *interactive job* might be what you want.
An interactive job allows users to run commands in real-time on an HPC cluster, making it useful for debugging, testing scripts, or exploring data interactively. Unlike asynchronous batch jobs, which are submitted to a queue and executed without user interaction, interactive jobs provide immediate feedback and enable on-the-fly adjustments. This is especially valuable when developing or fine-tuning workflows before submitting long-running batch jobs.
In such a case, you only define your resource requirements and boundary conditions with `salloc` (detailed documentation [here](https://slurm.schedmd.com/salloc.html)). After the job has been scheduled by Slurm, the system provides a regular shell for interactive work. Here are a few examples:
### Example: Interactive job on CPU resources for OpenMP (full node)
```zsh
salloc --time=00:15:00 --nodes=1 --ntasks-per-node=1 --cpus-per-task=96 --partition=c23ms
```
### Example: Interactive job on CPU resources for MPI (2 full nodes)
```zsh
salloc --time=00:15:00 --nodes=2 --ntasks-per-node=96 --partition=c23ms
```
### Example: Interactive job on CPU resources for hybrid MPI+OpenMP (2 full nodes)
```zsh
salloc --time=00:15:00 --nodes=2 --ntasks-per-node=4 --cpus-per-task=24 --partition=c23ms
```
### Example: Interactive job on GPU resources (using 1 GPU)
```zsh
salloc --time=00:15:00 --nodes=1 --ntasks-per-node=1 --cpus-per-task=24 --gres=gpu:1 --partition=c23g
```
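Once Slurm grants the allocation, you can work in the resulting shell as usual. A small sketch of a typical first interaction (`./my_mpi_program` is a hypothetical placeholder for your own executable):

```zsh
# quick sanity check: prints one line per allocated task
srun hostname

# launch your program on the allocated resources
srun ./my_mpi_program

# leave the shell and release the allocation when you are done
exit
```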
The same commit also extends a comment in one of the GPU job scripts (the hunk context shows `nvidia-smi`):

```zsh
# Example: Only a single GPU is used. However, due to billing
# settings, 24 CPU cores can be requested and used
# in conjunction with that GPU. That also enables
# multi-threaded preprocessing on the CPU side.
```