Commit 942e63f0 authored by Jannis Klinkenberg

updated README.md
# Running temporary Large Language Models (LLMs) with Ollama
This directory outlines two distinct scenarios and approaches, which differ in how the base Ollama server and the LLM are run:
1. An approach using the official Ollama container image, which contains the entire software stack and all binaries required to operate Ollama.
2. An approach where Ollama is downloaded and run manually.

Please find more information about Ollama in the following links:
- https://github.com/ollama/ollama
- https://github.com/ollama/ollama-python
## 1. Running Ollama with the official container
... follows soon ...
## 2. Downloading and running Ollama manually
Before being able to execute Ollama and run the examples, you need to download Ollama and make it available to the subsequent workflow steps. Additionally, we use a Python virtual environment to demonstrate how Ollama can be used via the `ollama-python` library.
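The steps above can be sketched as follows. Note that the download URL, archive layout, and environment name are assumptions based on the upstream Ollama project, not taken from this repository; adjust them to your platform and version:

```shell
# Download the standalone Ollama release for Linux (network step; URL is an
# assumption from the upstream project, adjust for your platform/version):
#   curl -L https://ollama.com/download/ollama-linux-amd64.tgz | tar -xz
# Start the Ollama server in the background so the examples can connect to it:
#   ./bin/ollama serve &

# Create and activate a Python virtual environment for the examples
python3 -m venv ollama-venv
. ollama-venv/bin/activate

# Install the ollama-python library into the environment (network step):
#   pip install ollama

# Verify that the virtual environment provides the active interpreter
python -c "import sys; print(sys.prefix.endswith('ollama-venv'))"
```

Once the server is running and the environment is active, scripts using `ollama-python` can reach the server on its default port, 11434.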