Commit af2fa2e0 authored by Jannis Klinkenberg's avatar Jannis Klinkenberg

added serial job and tried to preserve content from help.itc example pages for job allocation

parent 40897bc5
Showing with 68 additions and 13 deletions
@@ -2,7 +2,7 @@
This folder contains common job script examples and best practices.
-## 1. Asychronous jobs
+## 1. Asynchronous jobs
The following table lists examples of asynchronous jobs that contain both:
- The allocation requests for your job, e.g. in the form of `#SBATCH` flags in your batch script
@@ -23,6 +23,7 @@ You can submit such jobs to the Slurm batch system via `sbatch <parameters> <scr
| [mpi_job_1node.sh](mpi_job_1node.sh) | Runs an MPI job on a single node, demonstrating intra-node parallel processing with multiple processes per node. |
| [mpi_job_2nodes.sh](mpi_job_2nodes.sh) | Runs an MPI job spanning 2 full compute nodes, demonstrating inter-node parallelism and distributed computing across multiple machines. |
| [openmp_multi-threading_job.sh](openmp_multi-threading_job.sh) | Runs a multi-threaded (e.g. OpenMP) job, demonstrating intra-node shared-memory parallelism. |
| [serial_job.sh](serial_job.sh) | A minimal job script that runs a serial job, requesting only a single CPU core. |
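For illustration, such scripts could be submitted and chained as follows. This is a sketch that assumes the script names from the table and requires a Slurm installation; `--parsable` makes `sbatch` print only the job ID.

```shell
# submit the serial job and capture its job ID
jobid=$(sbatch --parsable serial_job.sh)

# start the OpenMP job only after the serial job has completed successfully
sbatch --dependency=afterok:${jobid} openmp_multi-threading_job.sh

# inspect your pending and running jobs
squeue -u $USER
```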
## 2. Interactive jobs
@@ -3,7 +3,7 @@
### Slurm flags
############################################################
-# request Beeond
+# request BeeOND
#SBATCH --beeond
# specify other Slurm commands
@@ -13,10 +13,10 @@
### Execution / Commands
############################################################
-# copy files to Beeond
+# copy files to BeeOND mount
cp -r $WORK/yourfiles $BEEOND
-# navigate to Beeond
+# navigate to BeeOND
cd $BEEOND/yourfiles
# perform your job, which has high I/O metadata and bandwidth demands
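Since BeeOND provides a job-temporary file system that is torn down when the job ends, results usually need to be staged out again. A minimal sketch (the `results` directory name is illustrative):

```shell
# copy results back to permanent storage before the job ends,
# because the BeeOND mount is deleted at job end
cp -r $BEEOND/yourfiles/results $WORK/yourfiles/
```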
@@ -31,7 +31,12 @@ nvidia-smi
### Execution / Commands
############################################################
# Optional: Load desired modules for GPU usage, such as CUDA
# module load CUDA
# Example: Only a single GPU is used. However, due to billing
# settings, 24 CPU cores can be requested and used
# in conjunction with that GPU. That also enables
-# multi-threaded preprocessing on the CPU side.
\ No newline at end of file
+# multi-threaded pre-processing on the CPU side.
<prog> <params>
\ No newline at end of file
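A minimal sketch of using those extra CPU cores for parallel pre-processing; the file names and the `xargs` fan-out are illustrative, not part of the original script:

```shell
# number of parallel workers: the allocated cores, or 4 outside of Slurm
NCORES=${SLURM_CPUS_PER_TASK:-4}

# fan hypothetical input files out to NCORES parallel pre-processing workers
printf '%s\n' sample1.dat sample2.dat sample3.dat \
  | xargs -P "${NCORES}" -I {} echo "pre-processing {}"
```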
@@ -31,6 +31,11 @@ nvidia-smi
### Execution / Commands
############################################################
# Optional: Load desired modules for GPU usage, such as CUDA
# module load CUDA
# Example: 1:2 mapping between MPI processes and GPUs
-# Process intened to use both GPUs
+# Process intended to use both GPUs. If your code is based on CUDA,
+# you might internally need to call cudaSetDevice to target the individual GPUs.
<prog> <params>
@@ -31,6 +31,10 @@ nvidia-smi
### Execution / Commands
############################################################
# Optional: Load desired modules for GPU usage, such as CUDA
# module load CUDA
# Example: 1:1 mapping between MPI processes and GPUs
# Each process is intended to use 1 GPU
srun <prog> <params>
\ No newline at end of file
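One way to realize this 1:1 mapping without changing the application is a small wrapper script that restricts each task to one GPU via `CUDA_VISIBLE_DEVICES`; the wrapper name and the 2-GPUs-per-node value are hypothetical:

```shell
#!/usr/bin/zsh
# gpu_wrapper.sh - hypothetical helper; launch via: srun ./gpu_wrapper.sh <prog> <params>
GPUS_PER_NODE=2   # assumed number of GPUs per node

# each task only sees the GPU matching its node-local rank
export CUDA_VISIBLE_DEVICES=$(( ${SLURM_LOCALID:-0} % GPUS_PER_NODE ))
echo "task ${SLURM_LOCALID:-0} -> GPU ${CUDA_VISIBLE_DEVICES}"

# replace the wrapper process with the actual program
exec "$@"
```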
@@ -31,6 +31,11 @@ nvidia-smi
### Execution / Commands
############################################################
# Optional: Load desired modules for GPU usage, such as CUDA
# module load CUDA
# Example: 1:1 mapping between MPI processes and GPUs
# Each process is intended to use 1 GPU
srun <prog> <params>
@@ -31,7 +31,11 @@ nvidia-smi
### Execution / Commands
############################################################
# Optional: Load desired modules for GPU usage, such as CUDA
# module load CUDA
# Example: 1:1 mapping between MPI processes and GPUs
# Each process is intended to use 1 GPU.
# 2 full compute nodes are used.
srun <prog> <params>
\ No newline at end of file
@@ -27,8 +27,10 @@ echo "Current machine: $(hostname)"
# Example: Hybrid MPI + OpenMP execution
-# set number of threads
+# set number of OpenMP threads to be used
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK} # usually automatically set by SLURM
# Note: you can also use fewer OpenMP threads per process or experiment with a different number of OpenMP threads in the same job by manually setting OMP_NUM_THREADS such as:
# export OMP_NUM_THREADS=4
# enable thread binding to physical CPU cores
export OMP_PLACES=cores
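As a sketch, the two settings above can be combined with an explicit thread count; `OMP_PROC_BIND` is an additional setting not in the original script:

```shell
# explicitly run 4 OpenMP threads, pinned to physical cores
export OMP_NUM_THREADS=4
export OMP_PLACES=cores
# bind threads and place them close to the master thread
export OMP_PROC_BIND=close
echo "OpenMP: ${OMP_NUM_THREADS} threads, places=${OMP_PLACES}, bind=${OMP_PROC_BIND}"
```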
@@ -22,3 +22,4 @@ echo "Current machine: $(hostname)"
### Execution / Commands
############################################################
srun hostname
# srun <prog> <params>
\ No newline at end of file
@@ -22,3 +22,4 @@ echo "Current machine: $(hostname)"
### Execution / Commands
############################################################
srun hostname
# srun <prog> <params>
\ No newline at end of file
@@ -18,3 +18,4 @@
# also be placed on different nodes.
srun hostname
# srun <prog> <params>
\ No newline at end of file
@@ -28,7 +28,8 @@ echo "Current machine: $(hostname)"
# set number of OpenMP threads to be used
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK} # usually automatically set by SLURM
-# Note: you can also use less cores/threads or experiment with different number of cores/threads in the same job
+# Note: you can also use fewer OpenMP threads per process or experiment with a different number of OpenMP threads in the same job by manually setting OMP_NUM_THREADS such as:
+# export OMP_NUM_THREADS=4
# enable thread binding to physical CPU cores
export OMP_PLACES=cores
......
#!/usr/bin/zsh
############################################################
### Slurm flags
############################################################
# Note: If you do not specify any requirements, your job will request 1 CPU core only
#SBATCH --time=00:15:00 # max. run time of the job
#SBATCH --job-name=example_job_ser # set the job name
#SBATCH --output=stdout_%j.txt # redirects stdout and stderr to stdout_<jobid>.txt
#SBATCH --account=<project-id> # insert your project-id or delete this line
############################################################
### Parameters and Settings
############################################################
# print some information about current system
echo "Current machine: $(hostname)"
############################################################
### Execution / Commands
############################################################
# execute your program (utilizing 1 CPU core)
<prog> <params>
\ No newline at end of file