X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?a=blobdiff_plain;f=README.txt;h=b34b5ea6617210c28a440fc49e414ff274ca7be1;hb=5c5668b0e52e2ae579d49ba8a44fafe2339ad8c0;hp=af96ee9d73173cc05be312e3e4d0d95882a8f8ea;hpb=f87a57354a1e575181e760fdaedbb2c2d5cf9fa0;p=culture.git

diff --git a/README.txt b/README.txt
index af96ee9..b34b5ea 100644
--- a/README.txt
+++ b/README.txt
@@ -1,6 +1,11 @@
+[This file may describe an older version than the current code]
+
 Trying to make GPTs build their own "culture".
 
+Francois Fleuret
+Jun 21st, 2024
+
 * Motivation
 
 The original motivation of this experiment is the hypothesis that
@@ -20,27 +25,31 @@ be solved but not by everybody.
 
 There are 5 competing GPTs.
 
-The "world" is a 6x8 grid with one or two "birds" moving in a straight
-line and bouncing on the world's borders. The colors correspond to a
-fixed "z-buffer order". It could be another "world", but this one has
-objectness, occlusion, and motion.
+The "world" is a 6x8 grid with three "birds" moving in a straight line
+and bouncing off the world's borders. It could be another "world", but
+this one has objectness and motion. There are ten colors and 4
+directions of motion, so roughly (6x8x4x10)**3 ~ 7e9 states.
 
 Given a random world state and the state after two iterations of
 birds moving, a "quiz" is to predict the second frame given the
-first, or the opposite.
+first, or the opposite. The starting and ending states are chosen, by
+rejection sampling, so that there is no occlusion.
 
-My home-baked GPT-37M trained with 250k solves this with ~99% success.
+My home-baked GPT-37M trained on 250k quizzes solves this with ~99%
+success [to be verified with the new setup].
 
 At every iteration, we select the GPT with the lowest test accuracy,
-and run one epoch. If its test accuracy got higher than 97.5%, it will
-create new quizzes. To do so, it generates a large number of pairs of
-frames, and checks which ones of these quizzes are hard but not too
-hard, which means
+and run one epoch.
+
+* Creating new quizzes
 
-[THIS IS THE IMPORTANT BIT]:
+If its test accuracy gets higher than 97.5%, it creates new
+quizzes. To do so, it generates a large number of pairs of frames and
+checks which of these quizzes are hard but not too hard, which means
+[THIS IS THE IMPORTANT BIT]:
 
-it can be solved, in both time directions, by all the other GPTs **but
-one**
+  it can be solved, in both time directions, by all the other GPTs
+  **but one**
 
 Requiring both time directions avoids a trivial type of quiz that
 amounts to simply dealing with noise in the first frame.
@@ -48,7 +57,8 @@ amounts to simply dealing with noise in the first frame.
 
 The GPT generates 1000 such quizzes, which are added to the
 "culture", i.e. the training set.
 
-Then training resumes.
+We then update the test accuracy of all the GPTs and move on to the
+next iteration.
 
 The hope is that interesting concepts emerge (connectivity, symmetry,
 interior/exterior, shape vocabulary, etc.)
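
To make the procedure concrete, here is a rough Python sketch of one
iteration and of the quiz-selection rule described above. All the names
used (solves, train_one_epoch, generate_raw_quizzes, test_accuracy,
update_test_accuracy, and the quiz attributes first/second) are
hypothetical placeholders for illustration, not the actual API of the
code in culture.git:

  def keep_quiz(quiz, gpts, creator):
      # All the competing GPTs except the one that generated the quiz.
      others = [gpt for gpt in gpts if gpt is not creator]

      # A GPT solves the quiz only if it predicts correctly in both
      # time directions: the second frame from the first, and the
      # first frame from the second.
      n_solved = sum(
          gpt.solves(quiz.first, quiz.second)
          and gpt.solves(quiz.second, quiz.first)
          for gpt in others
      )

      # Hard but not too hard: solved by all the other GPTs but one.
      return n_solved == len(others) - 1

  def one_iteration(gpts, culture):
      # Train the GPT with the lowest test accuracy for one epoch.
      weakest = min(gpts, key=lambda gpt: gpt.test_accuracy)
      weakest.train_one_epoch(culture)

      # Once it is accurate enough, it contributes 1000 new quizzes
      # to the culture, i.e. the shared training set.
      if weakest.test_accuracy > 0.975:
          candidates = weakest.generate_raw_quizzes()
          accepted = [quiz for quiz in candidates
                      if keep_quiz(quiz, gpts, creator=weakest)]
          culture.extend(accepted[:1000])

      # Re-evaluate all the GPTs before the next iteration.
      for gpt in gpts:
          gpt.update_test_accuracy()

With 5 GPTs, a quiz has 4 non-creator judges, so under this sketch an
accepted quiz must be solved, in both time directions, by exactly 3 of
them.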