diff --git a/machine-and-deep-learning/ollama/README.md b/machine-and-deep-learning/ollama/README.md
index 393646d391b470eca49701f8db31cdd2c6049c73..dc430350240d0c1ceceacd528e02b0101a47f408 100644
--- a/machine-and-deep-learning/ollama/README.md
+++ b/machine-and-deep-learning/ollama/README.md
@@ -34,7 +34,7 @@ pip install ollama
 > [!NOTE]
 > Examples here run `ollama serve` and `ollama run` in the background to enable concise demonstrations from a single script or shell. However, official examples also show that these commands can be run in separate shells on the same node instead.
 
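 For reference, the background pattern described in the note might be sketched as follows. This is a minimal illustration, not the official example: the model name `llama3` and the readiness check via `curl` against the default port 11434 are assumptions.
 
 ```bash
 # Start the Ollama server in the background and remember its PID
 ollama serve &
 SERVE_PID=$!
 
 # Poll the default endpoint until the server responds (assumed port 11434)
 until curl -s http://localhost:11434 > /dev/null; do
     sleep 1
 done
 
 # Run a model non-interactively from the same shell ("llama3" is an example)
 ollama run llama3 "Why is the sky blue?"
 
 # Shut the server down when finished
 kill $SERVE_PID
 ```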
-## 1.1. Running Ollama with the official container (recommended)
+### 1.1. Running Ollama with the official container (recommended)
 
 An Ollama container will be centrally provided on our HPC system **very soon**. For now, however, let's assume we have created one with the following command:
 ```bash
@@ -56,7 +56,7 @@ zsh submit_job_container.sh
 sbatch submit_job_container.sh
 ```
 
-## 1.2. Downloading and running Ollama manually
+### 1.2. Downloading and running Ollama manually
 
 Before being able to execute Ollama and run the examples, you need to download Ollama and make it available to the subsequent workflow steps.