[This file may describe an older version than the current code]

Trying to make GPTs build their own "culture".

Francois Fleuret
Jun 21st, 2024

* Motivation

The original motivation of this experiment is the hypothesis that
high-level cognition emerges from the competition among humans in the
space of language and ideas.

More precisely, communicating agents try to outdo their competitors
by creating things that are smart yet doable, i.e. some other agents
get them, but not all. That smart thing is then added to the
"culture", all the agents learn it and come to understand it, and the
process repeats.

* Setup

There are 5 competing GPTs. Each starts with a "world model" acquired
before any communication, and from there they try to "be smart" by
proposing quizzes that can be solved, but not by everybody.

The "world" is a 6x8 grid with three "birds" moving in a straight
line and bouncing on the world's borders. It could be another
"world", but this one has objectness and motion. There are ten colors
and 4 directions of motion, so each bird has 6x8x4x10 possible
states, and the world has roughly (6x8x4x10)**3 ~ 7e9 states.

Given a random world state and the state obtained after two
iterations of bird motion, a "quiz" consists of predicting the second
frame given the first, or the opposite. The starting and ending
states are chosen, by rejection sampling, so that there is no
occlusion.

My home-baked GPT-37M, trained on 250k samples, solves this with ~99%
success [to be verified with the new setup].

At every iteration, we select the GPT with the lowest test accuracy
and train it for one epoch.

* Creating new quizzes

If its test accuracy then exceeds 97.5%, the selected GPT creates new
quizzes. To do so, it generates a large number of pairs of frames and
checks which of these candidate quizzes are hard but not too hard,
which means [THIS IS THE IMPORTANT BIT]:

  a quiz can be solved, in both time directions, by all the other
  GPTs **but one**

Requiring both time directions rules out a degenerate type of quiz
that consists simply of dealing with noise in the first frame.

The GPT generates 1000 such quizzes, which are added to the
"culture", i.e. the training set.

We then update the test accuracy of all the GPTs and move on to the
next iteration.
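Schematically, the whole procedure looks like the toy Python sketch
below. Everything in it is a stand-in invented for illustration (the
ToyGPT class, the scalar "skill", the difficulty-based quizzes, the
candidate budget); it is not the actual code of main.py. Only the
protocol follows the description above: pick the weakest GPT, train
it for one epoch, and once it is good enough, keep candidate quizzes
that are solved, in both time directions, by all the other GPTs but
exactly one.

  import random

  random.seed(0)

  NB_GPTS = 5
  ACCURACY_TO_MAKE_QUIZZES = 0.975
  NB_NEW_QUIZZES = 1000
  NB_ITERATIONS = 20

  class ToyGPT:
      """Stand-in for a competing model: competence is a scalar."""

      def __init__(self):
          self.skill = random.uniform(0.1, 0.3)

      def solves(self, quiz, reverse_time=False):
          # A quiz is modeled as a pair of difficulties, one per
          # time direction.
          return self.skill >= quiz[1 if reverse_time else 0]

      def test_accuracy(self, culture):
          return sum(
              self.solves(q) and self.solves(q, reverse_time=True)
              for q in culture
          ) / len(culture)

      def train_one_epoch(self, culture):
          self.skill += 0.05  # training makes the model stronger

      def generate_quiz(self):
          # Candidates cluster around the generator's own level.
          d = random.uniform(0.0, self.skill)
          return (d, min(1.0, d + random.uniform(0.0, 0.02)))

  def hard_but_not_too_hard(quiz, others):
      # THE IMPORTANT BIT: solvable, in both time directions, by
      # all the other GPTs but exactly one.
      nb_solvers = sum(
          g.solves(quiz) and g.solves(quiz, reverse_time=True)
          for g in others
      )
      return nb_solvers == len(others) - 1

  gpts = [ToyGPT() for _ in range(NB_GPTS)]

  # Initial "culture": quizzes from the world model alone.
  culture = [
      (d, d) for d in (random.uniform(0.0, 0.2) for _ in range(1000))
  ]

  for it in range(NB_ITERATIONS):
      weakest = min(gpts, key=lambda g: g.test_accuracy(culture))
      weakest.train_one_epoch(culture)

      if weakest.test_accuracy(culture) >= ACCURACY_TO_MAKE_QUIZZES:
          others = [g for g in gpts if g is not weakest]
          new_quizzes, nb_attempts = [], 0
          while (len(new_quizzes) < NB_NEW_QUIZZES
                 and nb_attempts < 100 * NB_NEW_QUIZZES):
              quiz = weakest.generate_quiz()
              nb_attempts += 1
              if hard_but_not_too_hard(quiz, others):
                  new_quizzes.append(quiz)
          culture += new_quizzes

      print(f"iteration {it}: culture size {len(culture)}")

In this toy version a quiz qualifies when its difficulty falls
between the skills of the weakest and second-weakest of the other
GPTs, which is exactly the "hard but not too hard" band the selection
rule is meant to carve out.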
The hope is that interesting concepts emerge (connectivity, symmetry,
interior/exterior, shape vocabulary, etc.).