Given a random world state, and the state after two iterations of
the birds moving, a "quiz" consists of predicting the second frame
given the first, or the other way around.

My home-baked GPT-37M trained with 250k quizzes solves this with
~99% success.
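A minimal sketch of what such a quiz looks like, assuming the world is a
binary grid and that some `step` function (hypothetical here; the actual
bird dynamics are not specified in this note) advances it by one
iteration:

```python
import numpy as np

def make_quiz(step, height=6, width=8, density=0.25, rng=None):
    """Build one quiz: a random frame and the frame two steps later.

    `step` is a hypothetical callback mapping a grid to the next world
    state; it stands in for the real bird dynamics.
    """
    rng = np.random.default_rng() if rng is None else rng
    frame0 = (rng.random((height, width)) < density).astype(np.int8)
    frame2 = step(step(frame0))  # two iterations of the birds moving
    # The quiz can be asked in either time direction:
    #   forward:  given frame0, predict frame2
    #   backward: given frame2, predict frame0
    return frame0, frame2
```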
At every iteration, we select the GPT with the lowest test accuracy
and run one epoch of training on it. If its test accuracy then exceeds
97.5%, it creates new quizzes. To do so, it generates a large number
of frame pairs and keeps those quizzes that are hard but not too hard,
which means

[THIS IS THE IMPORTANT BIT]:

each such quiz can be solved, in both time directions, by all the
other GPTs **but one**.

Requiring both time directions avoids a trivial type of quiz that
amounts to merely dealing with noise in the first frame.
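The acceptance criterion can be sketched as follows, assuming a
hypothetical predicate `solves(model, quiz, reverse)` that is True when
`model` predicts the target frame of `quiz` correctly, in the forward
(`reverse=False`) or backward (`reverse=True`) time direction:

```python
def keep_quiz(quiz, other_models, solves):
    """Hard-but-not-too-hard test for a candidate quiz.

    `solves` is a hypothetical callback; demanding success in both
    time directions rules out quizzes that reduce to denoising the
    first frame.
    """
    n_both = sum(
        solves(m, quiz, reverse=False) and solves(m, quiz, reverse=True)
        for m in other_models
    )
    # Keep the quiz iff all the other GPTs but exactly one solve it.
    return n_both == len(other_models) - 1
```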

The GPT generates 1000 such quizzes, which are added to the
"culture", i.e. the training set.

Then training resumes.

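Putting the steps above together, one iteration of the loop might look
like this sketch, where `train_epoch`, `test_accuracy`,
`generate_candidates`, and `keep_quiz` are all hypothetical callbacks
standing in for the real implementation:

```python
def culture_loop(models, culture, train_epoch, test_accuracy,
                 generate_candidates, keep_quiz,
                 n_new=1000, threshold=0.975):
    """One iteration: train the weakest GPT, and if it passes the
    bar, let it contribute new quizzes to the shared culture."""
    # Select the GPT with the lowest test accuracy; train one epoch.
    weakest = min(models, key=test_accuracy)
    train_epoch(weakest, culture)
    # Above the threshold, it generates candidates and keeps the
    # hard-but-not-too-hard ones, judged against the other GPTs.
    if test_accuracy(weakest) > threshold:
        others = [m for m in models if m is not weakest]
        new_quizzes = []
        for quiz in generate_candidates(weakest):
            if keep_quiz(quiz, others):
                new_quizzes.append(quiz)
                if len(new_quizzes) == n_new:
                    break
        culture.extend(new_quizzes)  # the training set grows
    return culture
```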
The hope is that interesting concepts emerge (connectivity, symmetry,
interior/exterior, shape vocabulary, etc.).