\includegraphics[scale=0.35]{pics/occlusions_1.png}
\end{center}
\section{Various thoughts}

\begin{itemize}

\item The whole process can be envisioned as natural selection of
  quizzes in the representation landscape of GPTs. There is probably
  a subtle relation between the sampling temperature, which plays the
  role of a mutation rate, and the number of models used to validate
  with the ``all but one'' criterion, which acts as survival
  selection; see the sketch after this list.

\item The setup does not push toward any specific invariance or
  property in the generated quizzes; their consistency is entirely
  due to the statistics of the ``world quizzes'' that remain in the
  training set, and to the GPTs' inductive biases.

\item The GPTs obviously get a sense of objectness and 2d topology
  early on, since they rapidly increase the number of birds and
  ``discover'' occlusion even though it never appears in the world
  quizzes.

\item There may not be many problems that can be cast as pairs of
  patterns, each a deterministic function of the other, which is
  probably critical here.

\item This overall process probably fights the ``simplicity bias'':
  if a model lacks a ``cue'' that the others have, quizzes that
  require this cue will rapidly appear, be added to the training
  data, and that model will catch up.

\item The randomness of the process probably allows going beyond
  merely synchronizing the abilities of the models: some additional
  complexification of the quizzes may get accepted by chance.

\item The current process to generate new quizzes, which simply
  samples them at random, is very rudimentary and probably not
  sufficient in a real-data setup. It could probably be supplemented
  with an MCTS-type search; a speculative sketch of this idea also
  appears below.

\end{itemize}

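To make the first point concrete, the following is a minimal sketch
of the selection loop, assuming two hypothetical helpers that are not
part of the actual implementation: \texttt{generate\_quiz}, which
samples a candidate quiz at a given temperature, and \texttt{solves},
which checks whether a model answers a quiz correctly.

\begin{verbatim}
def select_quizzes(models, generate_quiz, solves,
                   nb_wanted, temperature):
    # Sample candidate quizzes until enough of them survive.
    kept = []
    while len(kept) < nb_wanted:
        # The temperature plays the role of a mutation rate.
        quiz = generate_quiz(temperature)
        nb_correct = sum(solves(m, quiz) for m in models)
        # "All but one" survival: a quiz is kept if at most one
        # model fails it, so that it is hard enough to be
        # informative but consensual enough to likely be consistent.
        if nb_correct >= len(models) - 1:
            kept.append(quiz)
    return kept
\end{verbatim}

Raising the temperature diversifies the candidates, while adding
models makes survival stricter, which is the relation alluded to
above.
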
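Regarding the last point, here is a speculative sketch of what an
MCTS-type search over quiz mutations could look like. The helpers
\texttt{mutate}, assumed to apply a small random edit to a quiz, and
\texttt{score}, assumed to return a value in $[0, 1]$ such as the
fraction of models solving the quiz, are hypothetical placeholders;
this is one possible instantiation, not a description of the actual
system.

\begin{verbatim}
import math, random

class Node:
    def __init__(self, quiz, parent=None):
        self.quiz, self.parent = quiz, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    # Unvisited children are explored first.
    if node.visits == 0:
        return float("inf")
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits)
                            / node.visits))

def mcts_quiz_search(root_quiz, mutate, score,
                     nb_iterations=1000, nb_children=4,
                     nb_rollout_steps=3):
    root = Node(root_quiz)
    for _ in range(nb_iterations):
        # Selection: descend along maximal-UCB children.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # Expansion: once a leaf has been visited, add
        # mutated variants of its quiz as children.
        if node.visits > 0:
            node.children = [Node(mutate(node.quiz), parent=node)
                             for _ in range(nb_children)]
            node = random.choice(node.children)
        # Rollout: a few more random mutations, then scoring.
        quiz = node.quiz
        for _ in range(nb_rollout_steps):
            quiz = mutate(quiz)
        value = score(quiz)
        # Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    # Return the most visited variant, a standard robust choice.
    return max(root.children, key=lambda n: n.visits).quiz
\end{verbatim}
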
\section*{Appendix}
The code is available at