Commit d4ad274f authored by Alexander Gepperth

..

parent 46a7ce80
@@ -221,11 +221,10 @@
% Machine description
All experiments are run on a cluster of 30 machines, each equipped with a single RTX3070Ti GPU.
% General experimental setup -> ML domain
Replay is investigated in a supervised CIL scenario, assuming known task boundaries and disjoint classes. All of the following details apply to all investigated CL algorithms, namely AR, ER and DGR with VAEs.
% Balancing of Tasks/Classes
Tasks $T_{i}$ contain all samples of the corresponding classes defining them, see \cref{tab:slts} for details.
It is assumed that data from all tasks occurs with equal probability. Some datasets are slightly unbalanced, for example Fruits and SVHN classes 1 and 2, which may render certain sub-task settings more difficult.
% Initial/Replay
Training consists of an (initial) run on $T_1$, followed by a sequence of independent (replay) runs on $T_{i>1}$.
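For concreteness, this run structure can be sketched as follows; the scholar object and the \texttt{train}/\texttt{evaluate} callables are illustrative placeholders and not part of our implementation:
\begin{verbatim}
# Illustrative sketch of the run structure only; train(), evaluate() and the
# scholar object are hypothetical placeholders, not the actual code.
def run_experiment(tasks, scholar, train, evaluate):
    train(scholar, tasks[0])                    # initial run on T_1
    accuracies = [evaluate(scholar, tasks[:1])]
    for i in range(1, len(tasks)):              # replay runs on T_{i>1}
        train(scholar, tasks[i])                # replay handled by the CL method
        accuracies.append(evaluate(scholar, tasks[:i + 1]))
    return accuracies
\end{verbatim}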
% Averaged over runs & baseline experiments
@@ -275,7 +274,7 @@
It is worth noting that classes will, in general, \textit{not} be balanced in the merged generated/real data at $T_i$, and that it is not required to store the statistics of previously encountered class instances/labels.
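A minimal sketch of this merging step (illustrative only; array names are hypothetical) shows that generated and new real samples are simply concatenated, without re-balancing classes or consulting stored label statistics:
\begin{verbatim}
# Illustrative sketch: merge generated (replay) data with the new real data
# of task T_i by plain concatenation; no class balancing, no stored statistics.
import numpy as np

def merge_replay_and_new(x_gen, y_gen, x_new, y_new):
    x = np.concatenate([x_gen, x_new], axis=0)
    y = np.concatenate([y_gen, y_new], axis=0)
    perm = np.random.permutation(len(x))   # shuffle before training
    return x[perm], y[perm]
\end{verbatim}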
%-------------------------------------------------------------------------
\subsection{Selective replay functionality}
%
\begin{figure}[h!]
\centering
@@ -288,7 +287,7 @@
\caption{\label{fig:vargen} An example for variant generation in AR, see \cref{sec:approach} and \cref{fig:var} for details. Left: centroids of the current GMM scholar trained on MNIST classes 0, 4 and 6. Middle: query samples of MNIST class 9. Right: variants generated in response to the query. Component weights and variances are not shown.
}
\end{figure}
First, we demonstrate the ability of a trained GMM to query its internal representation through data samples and to selectively generate artificial data that \enquote{best match} those that define the query. To illustrate this, we train a GMM layer of $K=25$ components on MNIST classes 0, 4 and 6 for 50 epochs using the best-practice rules described in \cref{app:ar}. Then, we query the trained GMM exclusively with samples from class 9, as described in \cref{sec:gmm}. The resulting samples are all from class 4, since this is the class that is \enquote{most similar} to the query class. These results are visualized in \cref{fig:vargen}. Variant generation results for deep convolutional extensions of GMMs can be found in \cite{gepperth2021new}, emphasizing that the AR approach can be scaled to more complex problems.
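The following sketch reproduces the spirit of this experiment with an off-the-shelf EM-trained GMM from scikit-learn as a stand-in for the SGD-trained GMM layer used here; it illustrates selective sampling via component responsibilities and is not our implementation (data loading, subsampling and hyperparameters are assumptions):
\begin{verbatim}
# Illustrative sketch using sklearn's EM-trained GMM as a stand-in for the
# SGD-trained GMM layer of AR (an assumption, not the paper's implementation).
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.mixture import GaussianMixture

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
y = y.astype(int)
X = X / 255.0
X_train = X[np.isin(y, [0, 4, 6])][:5000]   # classes the GMM is trained on
X_query = X[y == 9][:16]                    # query samples from an unseen class

gmm = GaussianMixture(n_components=25, covariance_type="diag")
gmm.fit(X_train)

# Query: pick the best-matching component (highest responsibility) per query
# sample and draw one variant from that component only.
resp = gmm.predict_proba(X_query)           # responsibilities, shape (n_query, 25)
best = resp.argmax(axis=1)
variants = np.stack([
    np.random.normal(gmm.means_[k], np.sqrt(gmm.covariances_[k]))
    for k in best
])
\end{verbatim}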
%-------------------------------------------------------------------------
\subsection{Comparison: AR, ER and DGR-VAE}
% BASELINE FOR RAW PIXEL/DATA INPUT
...