From b6081b63db48443067e31cae63f4f801b58b51b9 Mon Sep 17 00:00:00 2001
From: fdai0114 <alexander.gepperth@informatik.hs-fulda.de>
Date: Thu, 23 Nov 2023 12:09:28 +0100
Subject: [PATCH] ..

---
 iclr2024_conference.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/iclr2024_conference.tex b/iclr2024_conference.tex
index 529e94d..0fe5e7c 100644
--- a/iclr2024_conference.tex
+++ b/iclr2024_conference.tex
@@ -183,7 +183,7 @@
 \subsection{Adiabatic replay (AR)}
 % TODO: "as a generator as well as a feature generator for the solver" could be somewhat unclear!
 In contrast to conventional replay, where a scholar is composed of a generator and a solver network, see \cref{fig:genrep}, AR proposes scholars where a single network acts as a generator as well as a feature generator for the solver.
-Assuming a suitable scholar (see below), the high-level logic of AR is shown in \cref{fig:var}: Each sample from a new task is used to \textit{query} the scholar, which generates a similar, known sample. Mixing new and generated samples in a defined, constant proportion creates the training data for the current task.
+Assuming a suitable scholar (see below), the high-level logic of AR is shown in \cref{fig:var}: Each sample from a new task is used to \textit{query} the scholar, which generates a similar, known sample. Mixing new and generated samples in a defined, constant proportion creates the training data for the current task (see \cref{alg:two} for pseudocode).
 A new sample will cause adaptation of the scholar in a localized region of data space. Variants generated by that sample will, due to their similarity, cause adaptation in the same region. Knowledge in the overlap region will therefore be adapted to represent both, while dissimilar regions stay unaffected (see \cref{fig:var} for a visual impression).
 None of these requirements are fulfilled by DNNs, which is why we implement the scholar by a \enquote{flat} GMM layer (generator/feature encoder) followed by a linear classifier (solver). Both are independently trained via SGD according to \cite{gepperth2021gradient}.
 Extensions to deep convolutional GMMs (DCGMMs) \cite{gepperth2021new} for higher sampling capacity can be incorporated as drop-in replacements for the generator.
--
GitLab
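
For orientation, the high-level AR logic described in the changed paragraph can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's \cref{alg:two}: scholar.query, scholar.encode, scholar.fit_sgd, solver.predict and solver.fit_sgd are hypothetical stand-ins for the GMM generator/feature encoder and the linear-classifier solver.

import numpy as np

def adiabatic_replay_task(scholar, solver, new_x, new_y, gen_per_new=1):
    """Train on one new task via adiabatic replay (high-level logic only).

    Hypothetical sketch: `scholar` and `solver` expose assumed methods,
    not the authors' actual API.
    """
    # Query the scholar with each new sample; it generates a similar,
    # already-known sample from a nearby region of data space.
    gen_x = np.stack([scholar.query(x)
                      for _ in range(gen_per_new) for x in new_x])
    # Label the generated samples with the current solver.
    gen_y = solver.predict(scholar.encode(gen_x))

    # Mix new and generated samples in a defined, constant proportion
    # (here 1 : gen_per_new) to form the training data for this task.
    train_x = np.concatenate([new_x, gen_x])
    train_y = np.concatenate([new_y, gen_y])

    # Generator (GMM layer) and solver are trained independently via SGD.
    scholar.fit_sgd(train_x)
    solver.fit_sgd(scholar.encode(train_x), train_y)

Because the generated variants are similar to the new samples that queried them, training on this mixture adapts the scholar only in the affected region of data space, which is the localized-adaptation property the paragraph describes.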