X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?a=blobdiff_plain;f=README.txt;h=b34b5ea6617210c28a440fc49e414ff274ca7be1;hb=239a52ec7face6fcd4515916e80813702fbdf49b;hp=bf6180a1c5b3edbacc5da38493528b921bdcd871;hpb=45f841397dd6f8fb163f2c5b0793cdc250072c73;p=culture.git

diff --git a/README.txt b/README.txt
index bf6180a..b34b5ea 100644
--- a/README.txt
+++ b/README.txt
@@ -1,16 +1,64 @@
-For the stack experiment:
+[This file may describe an older version than the current code]

-./main.py --task=stack
+Trying to make GPTs build their own "culture".

-Takes ~1h10min on a 4090.
+Francois Fleuret
+Jun 21st, 2024

-For the arithmetic expressions experiments
+* Motivation

-# 38M parameters / 250k samples
+The original motivation of this experiment is the hypothesis that
+high-level cognition emerges from the competition among humans in the
+space of language and ideas.

-./main.py --task=expr
+More precisely, communicating agents try to outdo their competitors
+by creating things that are smart but doable, i.e. that some of the
+other agents can grasp, but not all. That smart thing is then added
+to the "culture", they all learn it and come to understand it, and
+the process repeats.

-# 352M parameters / 2.5M samples
+* Setup

-./main.py --task=expr --nb_blocks=48 --result_dir=results_expr_48b_d1024_2.5M --dim_model=1024 --nb_train_samples=2500000 --learning_rate_schedule="1:2e-5,3:4e-6"
+The agents start with a "world model" acquired before they
+communicate, and from there they try to "be smart" by proposing
+quizzes that can be solved, but not by everybody.
+
+There are 5 competing GPTs.
+
+The "world" is a 6x8 grid with three "birds" moving in a straight
+line and bouncing off the world's borders. It could be another
+"world", but this one has objectness and motion. There are ten colors
+and 4 directions of motion, so roughly (6x8x4x10)**3 ~ 7e9 states.
+
+Given a random world state and the state after two iterations of the
+birds moving, a "quiz" is to predict the second frame given the
+first, or the first given the second. The starting and ending states
+are chosen, by rejection sampling, so that there is no occlusion.
+
+My home-baked GPT-37M, trained with 250k samples, solves this with
+~99% success [to be verified with the new setup].
+
+At every iteration, we select the GPT with the lowest test accuracy
+and train it for one epoch.
+
+* Creating new quizzes
+
+If its test accuracy then exceeds 97.5%, it creates new quizzes. To
+do so, it generates a large number of pairs of frames and checks
+which of these quizzes are hard but not too hard, which means [THIS
+IS THE IMPORTANT BIT]:
+
+  it can be solved, in both time directions, by all the other GPTs
+  **but one**
+
+Requiring both time directions avoids a trivial kind of quiz that
+amounts to simply dealing with noise in the first frame.
+
+The GPT generates 1000 such quizzes, which are added to the
+"culture", i.e. the training set.
+
+We then update the test accuracy of all the GPTs and move on to the
+next iteration.
+
+The hope is that interesting concepts emerge (connectivity, symmetry,
+interior/exterior, a shape vocabulary, etc.).
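
To make the "world" and the quiz construction of the Setup section
concrete, here is a minimal Python sketch of one possible
implementation: point-sized birds on a 6x8 grid, each with a color
and an axis-aligned direction, moving one cell per step, bouncing off
the borders, with quiz pairs drawn by rejection until neither frame
has two birds on the same cell. All names (random_state, step,
make_quiz) and the point-sized birds are assumptions for
illustration, not the actual code of main.py.

import random

HEIGHT, WIDTH = 6, 8
NB_BIRDS, NB_COLORS = 3, 10
DIRECTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def random_state():
    # One bird = (row, col, color, direction index)
    return [
        (random.randrange(HEIGHT), random.randrange(WIDTH),
         random.randrange(1, NB_COLORS + 1),
         random.randrange(len(DIRECTIONS)))
        for _ in range(NB_BIRDS)
    ]

def step(state):
    # Move every bird one cell; reverse its direction when it would
    # leave the grid, so it bounces off the borders
    next_state = []
    for row, col, color, d in state:
        dr, dc = DIRECTIONS[d]
        if not 0 <= row + dr < HEIGHT:
            dr = -dr
        if not 0 <= col + dc < WIDTH:
            dc = -dc
        next_state.append((row + dr, col + dc, color,
                           DIRECTIONS.index((dr, dc))))
    return next_state

def has_occlusion(state):
    # True if two birds occupy the same cell
    positions = [(row, col) for row, col, _, _ in state]
    return len(set(positions)) < len(positions)

def make_quiz():
    # Rejection sampling: redraw until neither frame has an occlusion
    while True:
        first = random_state()
        second = step(step(first))  # two iterations of bird motion
        if not (has_occlusion(first) or has_occlusion(second)):
            return first, second

With 48 cells, ten colors and 4 directions per bird, three
independent birds indeed give roughly (6x8x4x10)**3 ~ 7e9 states, as
stated in the Setup section.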
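The selection, training, and quiz-filtering procedure of the last two
sections can be summarized by the following sketch. The method names
(train_one_epoch, update_test_accuracy, generate_quiz_candidates,
solves) are hypothetical placeholders, and reading the criterion as
"exactly one of the other GPTs fails, in either time direction" is an
interpretation of the text above, not the repository's actual API or
logic.

ACCURACY_THRESHOLD = 0.975
NB_NEW_QUIZZES = 1000

def culture_iteration(gpts, culture):
    # gpts: the 5 competing models; culture: the shared training set.
    # Train the currently weakest model for one epoch on the culture.
    learner = min(gpts, key=lambda gpt: gpt.test_accuracy)
    learner.train_one_epoch(culture)
    learner.update_test_accuracy()

    # If it got good enough, let it propose new quizzes
    if learner.test_accuracy > ACCURACY_THRESHOLD:
        others = [gpt for gpt in gpts if gpt is not learner]
        new_quizzes = []
        for quiz in learner.generate_quiz_candidates():
            # A GPT "solves" a quiz only if it predicts correctly in
            # both time directions (second frame from the first, and
            # first frame from the second)
            nb_failing = sum(
                not (gpt.solves(quiz, forward=True) and
                     gpt.solves(quiz, forward=False))
                for gpt in others
            )
            # "Hard but not too hard": exactly one other GPT fails
            if nb_failing == 1:
                new_quizzes.append(quiz)
            if len(new_quizzes) == NB_NEW_QUIZZES:
                break
        culture.extend(new_quizzes)

    # Refresh every model's test accuracy before the next iteration
    for gpt in gpts:
        gpt.update_test_accuracy()

Under this reading, a candidate quiz that every other GPT solves is
too easy, and one that several of them fail is too hard; only quizzes
at the edge of the group's current competence enter the culture.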