diff --git a/machine-and-deep-learning/ollama/README.md b/machine-and-deep-learning/ollama/README.md
index b0d1fe73ce19d2bc75824109face6d69e37997e0..f3f999514839e64038808f983bb38f69f1747fbe 100644
--- a/machine-and-deep-learning/ollama/README.md
+++ b/machine-and-deep-learning/ollama/README.md
@@ -1,4 +1,4 @@
-# Ollama - Running temporary Large Language Models (LLMs)
+# Running temporary Large Language Models (LLMs) with Ollama
 
 This directory outlines two distinct scenarios and approaches, differing in the method of running the base Ollama server and the LLM:
 1. An approach utilizing the official Ollama container image, which encompasses the entire software stack and necessary binaries to operate Ollama.
@@ -12,11 +12,11 @@ Please find more information about Ollama in the following links:
 - https://github.com/ollama/ollama
 - https://github.com/ollama/ollama-python
 
-## 1. Running Ollama with the official Container (recommended)
+## 1. Running Ollama with the official container
 
 ... follows soon ...
 
-## 2. Downloading and Running Ollama manually
+## 2. Downloading and running Ollama manually
 
 Before being able to execute Ollama and run the examples, you need to download Ollama and make it available to the upcoming workflow steps. Additionally, we use a Python virtual environment to demonstrate how Ollama can be used via the `ollama-python` library.
 
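
As a quick illustration of the second scenario described in the diff, below is a minimal sketch of how the manually downloaded Ollama server could be exercised from the Python virtual environment via `ollama-python`. This is only a sketch under assumptions not stated in the README: it presumes the `ollama` package has been installed into the virtual environment (e.g. `pip install ollama`) and that an Ollama server is already running on its default local port; the model name `llama3.2` is merely a placeholder example.

```python
# Minimal sketch: talk to a locally running Ollama server via the
# `ollama-python` library. Assumes `pip install ollama` was run inside the
# virtual environment and the Ollama server is listening on localhost:11434.
import ollama

# The model name is only an example; any tag available in the Ollama
# model library can be substituted here.
MODEL = "llama3.2"

# Pull the model first so the chat call does not fail on a missing model.
ollama.pull(MODEL)

# Send a single chat message and print the model's reply.
response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```

For longer generations, `ollama.chat(..., stream=True)` yields the reply in chunks instead of returning one complete response.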