X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?a=blobdiff_plain;f=README.md;h=d0ec215a1767bd07bfc53b0e0758ce1891c39bd1;hb=77153ad6f6acb94a5132e9930722500cd93a6960;hp=a774d628c37ca6706d4cb269b552b57ac8edcd55;hpb=94548cd44f585308c13511b82320a04d96a0731c;p=pysvrt.git

diff --git a/README.md b/README.md
index a774d62..d0ec215 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,44 @@
# Introduction #

-This is the port of the Synthetic Visual Reasoning Test to the pytorch
-framework.
+This is a port of the Synthetic Visual Reasoning Test problems to the
+pytorch framework, with an implementation of two convolutional
+networks to solve them.

-The main function is
+# Installation and test #
+
+Executing
+
+```
+make -j -k
+./test-svrt.py
+```
+
+should generate an image example.png in the current directory.
+
+Note that the image generation does not take advantage of GPUs or
+multi-core, and can be as fast as 10,000 vignettes per second and as
+slow as 40 on a 4GHz i7-6700K.
+
+# Vignette generation and compression #
+
+## Vignette sets ##
+
+The file svrtset.py implements the classes `VignetteSet` and
+`CompressedVignetteSet` with the following constructor
+
+```
+__init__(problem_number, nb_samples, batch_size, cuda = False, logger = None)
+```
+
+and the following method to return one batch
+
+```
+(torch.FloatTensor, torch.LongTensor) get_batch(b)
+```
+
+## Low-level functions ##
+
+The main function for generating vignettes is

```
torch.ByteTensor svrt.generate_vignettes(int problem_number, torch.LongTensor labels)
```
@@ -20,7 +55,7 @@ The returned ByteTensor has three dimensions:
* Pixel row
* Pixel col

-Two functions additional functions
+The two additional functions

```
torch.ByteStorage svrt.compress(torch.ByteStorage x)
```
@@ -32,27 +67,28 @@ and

torch.ByteStorage svrt.uncompress(torch.ByteStorage x)
```

-provides a lossless compression scheme adapted to the ByteStorage of
+provide a lossless compression scheme adapted to the ByteStorage of
the vignette ByteTensor 
(i.e. expecting a lot of 255s, a few 0s, and no other value).

-They allow to reduce the memory footprint by a factor ~50, and may be
-usefull to deal with very large data-sets and avoid re-generating
-images at every batch.
+This compression reduces the memory footprint by a factor of ~50, and
+may be useful to deal with very large data sets and avoid regenerating
+images at every batch. It induces a small overhead for decompression
+and for moving data from CPU to GPU memory.

See vignette_set.py for a class CompressedVignetteSet using it.

-# Installation and test #
+# Testing convolutional networks #

-Executing
+The file

```
-make -j -k
-./test-svrt.py
+cnn-svrt.py
```

-should generate an image example.png in the current directory.
+provides the implementation of two deep networks, and uses the
+compressed vignette code to allow training with several million
+vignettes on a PC with 16GB of RAM and a GPU with 8GB of memory.

-Note that the image generation does not take advantage of GPUs or
-multi-core, and can be as fast as 10,000 vignettes per second and as
-slow as 40 on a 4GHz i7-6700K.
+The networks were designed by Afroze Baqapuri during an internship at
+Idiap.
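
The diff does not show how `svrt.compress`/`svrt.uncompress` encode the
data, but the comment about buffers made almost entirely of 255s suggests
why a simple run-length code already achieves a large reduction. The
following pure-Python sketch is only an illustration of that idea: the
`compress`/`uncompress` names mirror the svrt bindings, but the actual
encoding used by svrt may differ.

```python
# Illustrative run-length coding for byte buffers that are mostly 255s
# with a few 0s, in the spirit of svrt.compress / svrt.uncompress.
# This is NOT the scheme used by svrt, just a sketch of the idea.

def compress(data: bytes) -> bytes:
    """Encode data as (value, run_length) byte pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        v = data[i]
        j = i
        # Extend the run while the value repeats, up to 255 bytes.
        while j < len(data) and data[j] == v and j - i < 255:
            j += 1
        out.append(v)
        out.append(j - i)
        i = j
    return bytes(out)

def uncompress(data: bytes) -> bytes:
    """Invert compress(): expand each (value, run_length) pair."""
    out = bytearray()
    for k in range(0, len(data), 2):
        out.extend(bytes([data[k]]) * data[k + 1])
    return bytes(out)
```

On a synthetic buffer that is almost entirely 255s with a short run of
0s, the round trip is lossless and the encoded form is a small fraction
of the original, consistent in spirit with the factor of ~50 quoted
above for real vignettes.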