diff --git a/generic-job-scripts/README.md b/generic-job-scripts/README.md
index 9a1ab62e2662258362badf804a7f92e04b0d1a23..5374c5261915a10f3c43db8a721c647c9176736f 100644
--- a/generic-job-scripts/README.md
+++ b/generic-job-scripts/README.md
@@ -2,7 +2,7 @@
 
 This folder contains common job script examples and best practices.
 
-## Asychronous jobs
+## 1. Asynchronous jobs
 
 The following table illustrates examples for asynchronous jobs that contain both:
 - The allocation requests for your job, e.g. in form of `#SBATCH` flags in your batch script
@@ -23,13 +23,13 @@ You can submit such jobs to the Slurm batch system via `sbatch <parameters> <scr
 | [mpi_job_1node.sh](mpi_job_1node.sh) | Runs an MPI job on a single node, demonstrating intra-node parallel processing with multiple processes per node. |
 | [mpi_job_2nodes.sh](mpi_job_2nodes.sh) | Runs an MPI job spanning 2 full compute nodes, demonstrating inter-node parallelism and distributed computing across multiple machines. |
 
-## Interactive jobs
+## 2. Interactive jobs
 
 Sometimes, you are still in the testing/debugging phase or do not yet completely know how your job script instruction should correctly look like. In such cases, an *interactive job* might be what you want.
 
 An interactive job allows users to run commands in real-time on an HPC cluster, making it useful for debugging, testing scripts, or exploring data interactively. Unlike asynchronous batch jobs, which are submitted to a queue and executed without user interaction, interactive jobs provide immediate feedback and enable on-the-fly adjustments. This is especially valuable when developing or fine-tuning workflows before submitting long-running batch jobs.
 
-In such a case, you only define your resource requirements and boundary conditions with `salloc` (detailed documentation [here](https://slurm.schedmd.com/salloc.html)). After the jobs has been scheduled by Slurm, the system will provide a regular shell for interactive work. Here are a few examples:
+In such a case, you only define your resource requirements and boundary conditions with `salloc` (detailed documentation [here](https://slurm.schedmd.com/salloc.html)). After the job has been scheduled by Slurm, the system will provide a regular shell for interactive work. Here are a few examples:
 
 ### Example: Interactive job on CPU resources for OpenMP (full node)
 ```zsh