From d4ad274f41bfc7c7d5e4b8c6d5f7abf09e3cf1da Mon Sep 17 00:00:00 2001
From: fdai0114 <alexander.gepperth@informatik.hs-fulda.de>
Date: Wed, 27 Sep 2023 15:59:29 +0200
Subject: [PATCH] ..

---
 iclr2024_conference.tex | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/iclr2024_conference.tex b/iclr2024_conference.tex
index 7c1455a..6140f8d 100644
--- a/iclr2024_conference.tex
+++ b/iclr2024_conference.tex
@@ -221,11 +221,10 @@
 	% Machine description
 	All experiments are run on a cluster of 30 machines equipped with single RTX3070Ti GPUs.
 	% General experimental setup -> ML domain
-	Replay is investigated in a supervised CIL-scenario, assuming known task-boundaries and disjoint classes.
+	Replay is investigated in a supervised CIL-scenario, assuming known task-boundaries and disjoint classes. All of the following details apply to all investigated CL algorithms, namely AR, ER and DGR with VAEs.
 	% Balancing of Tasks/Classes
 	Tasks $T_{i}$ contain all samples of the corresponding classes defining them, see \cref{tab:slts} for details. 
-	% TODO: OK ???
-	It is assumed that data from all tasks occurs with equal probability, however, it is not ensured that the amount/variability of samples per class is balanced, see e.g., SVHN classes 1 \& 2, which may render certain sub-task settings as more difficult.
+	It is assumed that data from all tasks occurs with equal probability. Some datasets are slightly unbalanced, for example Fruits and SVHN classes 1 and 2, which may render certain sub-task settings more difficult.
 	% Initial/Replay
 	Training consists of an (initial) run on $T_1$, followed by a sequence of independent (replay) runs on $T_{i>1}$.
 	% Averaged over runs & baseline experiments
@@ -275,7 +274,7 @@
 	
 	It is worth noting that classes will, in general, \textit{not} be balanced in the merged generated/real data at $T_i$, and that it is not required to store the statistics of previously encountered class instances/labels.
 	%-------------------------------------------------------------------------
-	\subsection{Variant generation with GMMs}
+	\subsection{Selective replay functionality}
 	%
 	\begin{figure}[h!]
 		\centering
@@ -288,7 +287,7 @@
 		\caption{\label{fig:vargen} An example of variant generation in AR, see \cref{sec:approach} and \cref{fig:var} for details. Left: centroids of the current GMM scholar trained on MNIST classes 0, 4 and 6. Middle: query samples of MNIST class 9. Right: variants generated in response to the query. Component weights and variances are not shown.
 		}
 	\end{figure}
-	First, we demonstrate the ability of a GMM layer $L_{(G)}$ to query its internal representation through data samples and selectively generate artificial data that \enquote{best match} those that define the query. To illustrate this, we train a GMM layer of $K=25$ components on MNIST classes 0,4 and 6 for 50 epochs using the best-practice rules described in \cref{app:ar}. Then, we query the trained GMM with samples from class 9 uniquely, as described in \cref{sec:gmm}. The resulting samples are all from class 4, since it is the class that is \enquote{most similar} to the query class. These results are visualized in \cref{fig:var}. Variant generation results for deep convolutional extensions of GMMs can be found in \cite{gepperth2021new}, emphasizing that the AR approach can be scaled to more complex problems.
+	First, we demonstrate the ability of a trained GMM to query its internal representation with data samples and to selectively generate artificial samples that \enquote{best match} those defining the query. To illustrate this, we train a GMM layer of $K=25$ components on MNIST classes 0, 4 and 6 for 50 epochs using the best-practice rules described in \cref{app:ar}. Then, we query the trained GMM exclusively with samples from class 9, as described in \cref{sec:gmm}. The resulting samples are all from class 4, since this is the class \enquote{most similar} to the query class. These results are visualized in \cref{fig:vargen}. Variant generation results for deep convolutional extensions of GMMs can be found in \cite{gepperth2021new}, emphasizing that the AR approach scales to more complex problems.
 	%-------------------------------------------------------------------------
 	\subsection{Comparison: AR, ER and DGR-VAE}
 	% BASELINE FOR RAW PIXEL/DATA INPUT
-- 
GitLab
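
The selective variant generation described in the patched paragraph can be sketched in code. The following is a minimal illustration only, not the paper's AR implementation: it substitutes scikit-learn's `GaussianMixture` for the GMM layer $L_{(G)}$, synthetic 2-D blobs for MNIST classes, and an (assumed) top-3 responsibility cut-off for the querying rule of \cref{sec:gmm}.

```python
# Hypothetical sketch: query a trained GMM with samples from an unseen class
# and generate variants only from the best-matching components.
# scikit-learn's GaussianMixture stands in for the paper's GMM layer.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in training data: three "classes" as 2-D Gaussian blobs.
train = np.concatenate([
    rng.normal(loc=c, scale=0.3, size=(200, 2))
    for c in ([0.0, 0.0], [4.0, 4.0], [6.0, 0.0])
])

# Train the scholar GMM (K components, no class labels needed).
gmm = GaussianMixture(n_components=9, random_state=0).fit(train)

# Query with samples from an unseen "class" near one of the trained blobs.
query = rng.normal(loc=[4.0, 4.5], scale=0.3, size=(50, 2))

# Mean posterior responsibilities indicate which components best match the query.
resp = gmm.predict_proba(query).mean(axis=0)   # shape (K,)
top = np.argsort(resp)[::-1][:3]               # indices of best-matching components

# Generate variants by sampling only from those components.
variants = np.concatenate([
    rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], size=20)
    for k in top
])
print(variants.shape)  # (60, 2)
```

Because sampling is restricted to the components with the highest query responsibility, the generated variants resemble the trained class closest to the query, mirroring the class-4 responses to class-9 queries reported in the patched text.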