From 872d5156a161c21564b3f650918422c62e4ce40d Mon Sep 17 00:00:00 2001
From: Jannis Klinkenberg <j.klinkenberg@itc.rwth-aachen.de>
Date: Mon, 26 May 2025 08:59:05 +0200
Subject: [PATCH] changed header line

---
 machine-and-deep-learning/ollama/README.md | 4 ++--
 machine-and-deep-learning/vllm/README.md   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/machine-and-deep-learning/ollama/README.md b/machine-and-deep-learning/ollama/README.md
index eaec1b6..8e489e9 100644
--- a/machine-and-deep-learning/ollama/README.md
+++ b/machine-and-deep-learning/ollama/README.md
@@ -1,6 +1,6 @@
-# Running temporary Large Language Models (LLMs) with Ollama
+# Getting Started with LLM Inference Using Ollama
 
-This directory outlines two distinct scenarios and approaches, differing in the method of running the base Ollama server and the LLM:
+This directory outlines two distinct scenarios and approaches, differing in the method of running the base Ollama server and the Large Language Model (LLM):
 1. An approach utilizing the official Ollama container image, which encompasses the entire software stack and necessary binaries to operate Ollama.
 2. An approach involving manual setup of Ollama within your user directories, requiring you to download binaries and modify paths accordingly.
 
diff --git a/machine-and-deep-learning/vllm/README.md b/machine-and-deep-learning/vllm/README.md
index fba8867..6a39176 100644
--- a/machine-and-deep-learning/vllm/README.md
+++ b/machine-and-deep-learning/vllm/README.md
@@ -1,6 +1,6 @@
-# Running temporary Large Language Models (LLMs) with vLLM
+# Getting Started with LLM Inference Using vLLM
 
-This directory outlines how to run LLMs via vLLM, either with a predefined Apptainer container image or with a virtual environment where vLLM is installed. Interaction with LLMs happens through the `vllm` Python package.
+This directory outlines how to run Large Language Models (LLMs) and perform inference via vLLM, either with a predefined Apptainer container image or with a virtual environment where vLLM is installed. Interaction with LLMs happens through the `vllm` Python package.
 
 You can find additional information and examples on vLLM under https://docs.vllm.ai/en/latest/
 
-- 
GitLab