\subsection{Adiabatic replay (AR)}
In contrast to conventional replay, where a scholar is composed of a generator and a solver network (see \cref{fig:genrep}), AR proposes scholars in which a single network acts both as a generator and as a feature encoder for the solver.
Assuming a suitable scholar (see below), the high-level logic of AR is shown in \cref{fig:var}: each sample from a new task is used to \textit{query} the scholar, which generates a similar, known sample. Mixing new and generated samples in a fixed, constant proportion creates the training data for the current task (see \cref{alg:two} for pseudocode).
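To make this concrete, the following Python sketch assembles the training data for a single task along the lines described above. The \texttt{scholar.query()} interface is a hypothetical stand-in for the GMM-based variant generation detailed below, and the sketch generates one variant per query.
\begin{verbatim}
import numpy as np

def ar_training_data(new_x, scholar, ratio=1.0, rng=None):
    """Sketch: build the training set for the current task by mixing
    new samples with scholar-generated variants in a fixed proportion."""
    rng = rng or np.random.default_rng(0)
    # One query per new sample: the scholar returns a similar,
    # already-known sample generated near the query.
    gen_x = np.stack([scholar.query(x) for x in new_x])
    # Fixed mixing proportion (ratio = generated : new); this sketch
    # produces at most one generated sample per new sample.
    n_gen = min(int(ratio * len(new_x)), len(gen_x))
    mixed = np.concatenate([new_x, gen_x[:n_gen]], axis=0)
    rng.shuffle(mixed)  # in-place shuffle along the first axis
    return mixed
\end{verbatim}
With the default \texttt{ratio=1.0}, new and generated samples are balanced, realizing the constant mixing proportion mentioned above.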
A new sample causes adaptation of the scholar in a localized region of data space. Due to their similarity, the variants generated from that sample cause adaptation in the same region. Knowledge in the overlapping region is therefore adapted to represent both new and generated samples, while dissimilar regions remain unaffected (see \cref{fig:var} for a visual impression).
None of these requirements are fulfilled by DNNs, which is why we implement the scholar as a \enquote{flat} GMM layer (generator/feature encoder) followed by a linear classifier (solver). Both are trained independently via SGD according to \cite{gepperth2021gradient}. Extensions to deep convolutional GMMs (DCGMMs) \cite{gepperth2021new}, which offer higher sampling capacity, can be incorporated as drop-in replacements for the generator.
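For illustration, a minimal NumPy sketch of such a scholar is given below. It assumes diagonal covariances and implements querying by selecting the best-matching component and sampling from it; the SGD training of GMM and solver follows \cite{gepperth2021gradient} and is omitted here.
\begin{verbatim}
import numpy as np

class GMMScholar:
    """Sketch of an AR scholar: a flat GMM acting as generator and
    feature encoder, plus a linear read-out as solver."""

    def __init__(self, K, D, n_classes, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.mu = self.rng.normal(size=(K, D))  # component centroids
        self.log_sigma = np.zeros((K, D))       # log std devs (diagonal)
        self.W = np.zeros((n_classes, K))       # linear solver weights

    def responsibilities(self, x):
        # Per-component log-likelihood of x (constants cancel below).
        var = np.exp(2.0 * self.log_sigma)
        ll = -0.5 * np.sum((x - self.mu) ** 2 / var
                           + 2.0 * self.log_sigma, axis=1)
        p = np.exp(ll - ll.max())
        return p / p.sum()  # normalized; these are the solver features

    def query(self, x):
        # Variant generation: pick the best-matching component for the
        # query and draw one sample from it -> a similar, known sample.
        k = int(np.argmax(self.responsibilities(x)))
        eps = self.rng.normal(size=self.mu.shape[1])
        return self.mu[k] + np.exp(self.log_sigma[k]) * eps

    def solve(self, x):
        # Linear classifier on the GMM responsibilities.
        return int(np.argmax(self.W @ self.responsibilities(x)))
\end{verbatim}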