# Getting Started with LLM Inference Using Ollama
This directory outlines two distinct approaches, which differ in how the base Ollama server and the Large Language Model (LLM) are run:
1. An approach utilizing the official Ollama container image, which encompasses the entire software stack and necessary binaries to operate Ollama.
2. An approach involving manual setup of Ollama within your user directories, requiring you to download binaries and modify paths accordingly.
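Once the Ollama server is running via either approach, you can interact with it over its HTTP API. The following is a minimal sketch, assuming the server listens on the default address `http://localhost:11434` and that a model such as `llama3` has already been pulled; adjust the host, port, and model name to your setup.

```python
import requests

# Assumption: the Ollama server (started via either approach above) is reachable here.
OLLAMA_URL = "http://localhost:11434"

# Send a single prompt to the /api/generate endpoint (non-streaming).
response = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={
        "model": "llama3",          # assumed model name; must already be pulled
        "prompt": "Why is the sky blue?",
        "stream": False,            # return the full answer as one JSON object
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```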
# Getting Started with LLM Inference Using vLLM
This directory outlines how to run Large Language Models (LLMs) and perform inference via vLLM, either with a predefined Apptainer container image or with a virtual environment where vLLM is installed. Interaction with LLMs happens through the `vllm` Python package.
You can find additional information and examples on vLLM at https://docs.vllm.ai/en/latest/
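As a starting point, the sketch below shows minimal offline inference with the `vllm` Python package inside such a container or virtual environment. It is only an illustration, assuming a GPU is available and using `facebook/opt-125m` as a small placeholder model; replace it with the model you actually want to run.

```python
from vllm import LLM, SamplingParams

# Assumption: executed inside the Apptainer container or virtual environment
# where the vllm package is installed and a GPU is visible.
prompts = [
    "The capital of France is",
    "Explain message passing in HPC in one sentence:",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the (placeholder) model and generate completions for all prompts.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```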