Trying to make GPTs build their own "culture".

Francois Fleuret
Jun 21st, 2024

* Motivation

The original motivation of this experiment is the hypothesis that

* Setup

There are 5 competing GPTs.
-The "world" is a 6x8 grid with one or two "birds" moving in a straight
-line and bouncing on the world's borders. The colors correspond to a
-fixed "z-buffer order". It could be another "world", but this one has
-objectness, occlusion, and motion.
+The "world" is a 7x9 grid with three "birds" moving in a straight line
+and bouncing on the world's borders. It could be another "world", but
+this one has objectness and motion.
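
To make the dynamics concrete, here is a minimal Python sketch of
such a world. All the names, and the exact set of directions a bird
can take, are assumptions for illustration, not the project's actual
code.

  import random

  HEIGHT, WIDTH, NB_BIRDS = 7, 9, 3

  def new_birds():
      # one bird = [row, col, row speed, col speed]; it starts at a
      # random position with a random non-zero direction
      directions = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)
                    if (u, v) != (0, 0)]
      return [[random.randrange(HEIGHT), random.randrange(WIDTH),
               *random.choice(directions)] for _ in range(NB_BIRDS)]

  def step(birds):
      # move every bird one cell along its direction, reflecting its
      # trajectory on the world's borders
      for b in birds:
          for i, n in ((0, HEIGHT), (1, WIDTH)):
              b[i] += b[i + 2]
              if b[i] < 0 or b[i] >= n:
                  b[i + 2] = -b[i + 2]
                  b[i] += 2 * b[i + 2]

  def render(birds):
      # one character per cell: "." for empty, the bird's index otherwise
      grid = [["."] * WIDTH for _ in range(HEIGHT)]
      for k, (r, c, _, _) in enumerate(birds):
          grid[r][c] = str(k)
      return "\n".join("".join(row) for row in grid)
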
Given a random world state, and the state after two iterations of
birds moving, a "quiz" is to predict the second frame, given the
first, or the opposite. The starting and ending states are chosen, by
rejection, so that there is no occlusion.
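
Reusing the helpers from the sketch above, the rejection could look
as follows. Whether occlusions are also forbidden in the intermediate
state is an assumption; here they are allowed.

  def no_occlusion(birds):
      # true when no two birds occupy the same cell
      return len({(r, c) for r, c, _, _ in birds}) == NB_BIRDS

  def new_quiz():
      # rejection sampling: re-draw the initial state until neither
      # the first nor the second frame of the quiz has two birds on
      # the same cell
      while True:
          birds = new_birds()
          ok = no_occlusion(birds)
          first = render(birds)
          step(birds)
          step(birds)
          if ok and no_occlusion(birds):
              return first, render(birds)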

My home-baked GPT-37M trained with 250k quizzes solves this with ~99%
success [to be verified with the new setup].

At every iteration, we select the GPT with the lowest test accuracy
and run one epoch.

If its test accuracy gets higher than 97.5%, it creates new quizzes.
To do so, it generates a large number of pairs of frames, and checks
which of these quizzes are hard but not too hard, which means
[THIS IS THE IMPORTANT BIT]:
The GPT generates 1000 such quizzes, which are added to the
"culture", i.e. the training set.

We update the test accuracy of all the GPTs, and then we go to the
next iteration.
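
Putting the pieces together, the overall process could look like the
sketch below. The GPT interface (train_one_epoch, generate_quiz,
evaluate) and the predicate hard_but_not_too_hard are placeholders,
the latter standing for whatever criterion defines "hard but not too
hard".

  ACCURACY_TO_MAKE_QUIZZES = 0.975
  NB_NEW_QUIZZES = 1000

  def hard_but_not_too_hard(quiz, gpts):
      # placeholder for the validation criterion discussed above
      raise NotImplementedError

  def main_loop(gpts, culture, test_set):
      while True:
          # train the currently weakest model for one epoch
          weakest = min(gpts, key=lambda g: g.test_accuracy)
          weakest.train_one_epoch(culture)
          weakest.test_accuracy = weakest.evaluate(test_set)

          # a model that got good enough enriches the culture
          if weakest.test_accuracy > ACCURACY_TO_MAKE_QUIZZES:
              new_quizzes = []
              while len(new_quizzes) < NB_NEW_QUIZZES:
                  quiz = weakest.generate_quiz()
                  if hard_but_not_too_hard(quiz, gpts):
                      new_quizzes.append(quiz)
              culture += new_quizzes

          # refresh every model's test accuracy before iterating
          for g in gpts:
              g.test_accuracy = g.evaluate(test_set)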

The hope is that interesting concepts emerge (connectivity, symmetry,
interior/exterior, shape vocabulary, etc.).