From 942e63f0f7ebf5fdda97ea6592aa90c8f9fcdb29 Mon Sep 17 00:00:00 2001
From: Jannis Klinkenberg <j.klinkenberg@itc.rwth-aachen.de>
Date: Fri, 23 May 2025 11:52:03 +0200
Subject: [PATCH] updated README.md

---
 machine-and-deep-learning/ollama/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/machine-and-deep-learning/ollama/README.md b/machine-and-deep-learning/ollama/README.md
index b0d1fe7..f3f9995 100644
--- a/machine-and-deep-learning/ollama/README.md
+++ b/machine-and-deep-learning/ollama/README.md
@@ -1,4 +1,4 @@
-# Ollama - Running temporary Large Language Models (LLMs)
+# Running temporary Large Language Models (LLMs) with Ollama
 
 This directory outlines two distinct scenarios and approaches, differing in the method of running the base Ollama server and the LLM:
 1. An approach utilizing the official Ollama container image, which encompasses the entire software stack and necessary binaries to operate Ollama.
@@ -12,11 +12,11 @@ Please find more information to Ollama in the following links:
 - https://github.com/ollama/ollama
 - https://github.com/ollama/ollama-python
 
-## 1. Running Ollama with the official Container (recommended)
+## 1. Running Ollama with the official container
 
 ... follows soon ...
 
-## 2. Downloading and Running Ollama manually
+## 2. Downloading and running Ollama manually
 
 Before beeing able to execute Ollama and run the exaples, you need to download Ollama and make it available to the upcoming workflow steps. Additionally, we use a Python virtual environment, to demonstrate how Ollama can be used via the `ollama-python` library.
 
--
GitLab
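
For orientation, the README section touched by this patch points to the `ollama-python` library for its manual-setup approach. The sketch below is not part of the patch; it only illustrates how that library is typically called against a locally running Ollama server inside the Python virtual environment the README mentions. The model name `llama3` and the prompt are placeholder assumptions, and the snippet assumes the model has already been pulled and the server is listening on its default port.

```python
# Minimal sketch, assuming `pip install ollama` inside the virtual environment,
# a running Ollama server on its default address (http://localhost:11434),
# and a model already pulled, e.g. `ollama pull llama3` (model name is an assumption).
import ollama

response = ollama.chat(
    model="llama3",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize what Ollama does in one sentence."}],
)

# The chat response exposes the generated text under message/content.
print(response["message"]["content"])
```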