This folder contains common job script examples and best practices. You can submit jobs to the Slurm batch system via `sbatch <script-name>.sh`.
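For example, a typical workflow is to submit one of the scripts below and then check its state in the queue. This is a minimal sketch using standard Slurm commands; the script name is one of the examples in this folder:

```bash
# Submit the single-GPU example script to the Slurm batch system
sbatch gpu_job_1gpu.sh

# List your own pending and running jobs
squeue -u $USER
```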
## What can you find here?
| File/Folder | Description |
|--------|-------------|
| [beeond_job.sh](beeond_job.sh) | Job script for setting up and using BeeOND (BeeGFS On Demand) in an HPC environment. |
| [gpu_job_1gpu.sh](gpu_job_1gpu.sh) | Runs a job with 1 GPU and a single process (a minimal sketch of such a script is shown below the table). |
| [gpu_job_2gpus-1proc.sh](gpu_job_2gpus-1proc.sh) | Runs a job with 2 GPUs and a single process. Useful for tasks that require multi-GPU acceleration but not multi-processing. |
| [gpu_job_2gpus-2procs.sh](gpu_job_2gpus-2procs.sh) | Runs a job with 2 GPUs and 2 separate processes. Commonly used for parallel deep learning training. |
| [gpu_job_4gpus-4procs.sh](gpu_job_4gpus-4procs.sh) | Runs a job with 4 GPUs and 4 separate processes (full node with 4x H100). Commonly used for parallel deep learning training. |
| [gpu_job_8gpus-8procs.sh](gpu_job_8gpus-8procs.sh) | Runs a job with 8 GPUs and 8 separate processes (2 full nodes with 4x H100). Commonly used for parallel deep learning training. |
| [hybrid_mpi_openmp_job.sh](hybrid_mpi_openmp_job.sh) | Hybrid job script combining MPI (distributed computing) with OpenMP (shared-memory parallelism). Ideal for hybrid HPC workloads. |
| [mpi_job_basic.sh](mpi_job_basic.sh) | A basic MPI job script, useful for testing and learning MPI-based job submission. |
| [mpi_job_1node.sh](mpi_job_1node.sh) | Runs an MPI job on a single node, demonstrating intra-node parallel processing with multiple processes per node. |
| [mpi_job_2nodes.sh](mpi_job_2nodes.sh) | Runs an MPI job spanning 2 full compute nodes, demonstrating inter-node parallelism and distributed computing across multiple machines. |
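
For orientation, the block below is a minimal sketch of what a single-GPU script such as `gpu_job_1gpu.sh` typically looks like; it is not a copy of that file. The partition name, module name, and application (`gpu`, `CUDA`, `train.py`) are placeholders you must adapt to your site:

```bash
#!/bin/bash
#SBATCH --job-name=single-gpu-example   # job name shown in squeue
#SBATCH --partition=gpu                 # partition name is a site-specific placeholder
#SBATCH --nodes=1                       # run on a single node
#SBATCH --ntasks=1                      # one process
#SBATCH --cpus-per-task=8               # CPU cores for that process (adjust to your node)
#SBATCH --gres=gpu:1                    # request one GPU
#SBATCH --time=01:00:00                 # walltime limit (hh:mm:ss)
#SBATCH --output=%x-%j.out              # output file named after job name and job ID

# Load the software environment (module names are site-specific)
module load CUDA

# Launch the application on the allocated resources
srun python train.py
```

The multi-GPU and MPI examples in the table follow the same structure and mainly differ in the `--nodes`, `--ntasks`, and `--gres` requests and in how the application is launched via `srun`.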