X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?a=blobdiff_plain;f=report%2Fculture.tex;h=59899c0b5dec1bc74cd58ac512b45d7060905478;hb=a06227e98fcef1960b8706c9a1ce72d10bd068c3;hp=fbeb1b495e700453f6289deaefbc8938650158a8;hpb=3889371f1793aa448bfc83d57a613be98e89f411;p=culture.git
diff --git a/report/culture.tex b/report/culture.tex
index fbeb1b4..59899c0 100644
--- a/report/culture.tex
+++ b/report/culture.tex
@@ -190,6 +190,60 @@ present in the original quizzes:
 \includegraphics[scale=0.35]{pics/occlusions_1.png}
 \end{center}
 
+\section{Various thoughts}
+
+\begin{itemize}
+
+\item The whole process can be envisioned as natural selection of
+  quizzes in the representation landscape of GPTs. There is probably a
+  subtle relation between the temperature (mutation rate) and the
+  number of models used for validation with the ``all but one''
+  criterion (survival criterion).
+
+\item The ``all but one'' criterion could be ``all but $K$'', and
+  there may be an information-theoretic trade-off where the goal is to
+  maximize mutual information: $K=N$ is total randomness, hence high
+  entropy but no structure, and $K=0$ is total determinism, hence no
+  information to share.
+
+\item The setup does not push toward any specific invariance or
+  property in the generated quizzes; their consistency is entirely due
+  to the statistics of the ``world quizzes'' that remain in the
+  training set, and to the GPTs' inductive biases.
+
+\item The GPTs obviously get a sense of objectness and 2d topology
+  early on, since they rapidly increase the number of birds and
+  ``discover'' occlusion even though it never appeared in the world
+  quizzes.
+
+\item There may not be many problems that can be cast as pairs of
+  patterns that are each a deterministic function of the other, which
+  is probably critical here.
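+\item The ``all but $K$'' survival criterion above can be sketched in
+  a few lines of Python. This is a minimal illustration with our own
+  function and variable names, not the released code, and the exact
+  acceptance rule used in the experiments may differ:

```python
def accepted(n_correct: int, n_models: int, k: int) -> bool:
    """A quiz survives if at least n_models - k models solve it.

    k = 0 is the deterministic limit (every model must solve it),
    k = n_models accepts everything (pure randomness).
    """
    return n_correct >= n_models - k

def select_quizzes(results, k):
    """results: list of (quiz, answers) pairs, one bool per model.

    Returns the quizzes that pass the 'all but k' criterion.
    """
    return [quiz for quiz, answers in results
            if accepted(sum(answers), len(answers), k)]

# Hypothetical example: 5 models, k = 1, so at least 4 must succeed.
results = [("quiz_a", [True, True, True, True, False]),
           ("quiz_b", [True, False, False, True, True])]
print(select_quizzes(results, k=1))  # → ['quiz_a']
```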
+
+\item This overall process probably fights the ``simplicity bias'':
+  if a model lacks a ``cue'' that the others have, quizzes that
+  require this cue will rapidly appear, they will be added to the
+  training data, and that model will catch up.
+
+\item The randomness of the process probably allows going beyond
+  merely synchronizing the abilities of the models: some additional
+  complexification of the quizzes may get accepted by chance.
+
+\item The process can be parallelized by dispatching the GPTs across
+  multiple nodes, and a quadratic cost can be avoided by validating
+  each quiz with only a subset of the models.
+
+\item The current process for generating new quizzes, which simply
+  samples them at random, is very rudimentary and probably not
+  sufficient in a real-data setup. It could be supplemented with an
+  MCTS-type search.
+
+\item There may already be structure in the generated quizzes that
+  \emph{we} do not pick up (e.g.\ certain color or motion patterns).
+
+\end{itemize}
+
 \section*{Appendix}
 
 The code is available at