# Running temporary Large Language Models (LLMs) with Ollama
This directory outlines two distinct scenarios and approaches, differing in the method of running the base Ollama server and the LLM:
1. An approach utilizing the official Ollama container image, which encompasses the entire software stack and necessary binaries to operate Ollama.
...
...
Please find more information about Ollama in the following links:
- https://github.com/ollama/ollama
- https://github.com/ollama/ollama-python
## 1. Running Ollama with the official container
... follows soon ...
## 2. Downloading and running Ollama manually
Before being able to execute Ollama and run the examples, you need to download Ollama and make it available to the upcoming workflow steps. Additionally, we use a Python virtual environment to demonstrate how Ollama can be used via the `ollama-python` library.
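These preparation steps can be sketched roughly as follows. This is a non-authoritative sketch: the download URL targets Linux/amd64, and the extraction directory and the example model `llama3.2` are assumptions you should adapt to your environment.

```shell
# Download and unpack the official Ollama release tarball
# (Linux/amd64 URL; adjust for your platform)
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tgz -o ollama.tgz
mkdir -p ollama && tar -xzf ollama.tgz -C ollama

# Create a Python virtual environment and install the client library
python3 -m venv .venv
source .venv/bin/activate
pip install ollama

# Start the Ollama server in the background, then pull an example model
./ollama/bin/ollama serve &
./ollama/bin/ollama pull llama3.2
```

After these steps, the server listens on its default address (`http://localhost:11434`), and the `ollama` Python package is available inside the activated virtual environment for the subsequent examples.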