automatic commit
author     Francois Fleuret <fleuret@moose.fleuret.org>
Sun, 12 Oct 2008 12:45:05 +0000 (14:45 +0200)
committer  Francois Fleuret <fleuret@moose.fleuret.org>
Sun, 12 Oct 2008 12:45:05 +0000 (14:45 +0200)
README.txt
global.cc
global.h
materials.cc
materials.h
pose_cell_hierarchy.cc
run.sh

index 3bca7f0..c322ce7 100644 (file)
--- a/README.txt
+++ b/README.txt
@@ -1,31 +1,27 @@
 
 
-######################################################################
-## INTRODUCTION
+I. INTRODUCTION
 
 
-  This is the C++ implementation of the folded hierarchy of
-  classifiers for cat detection described in
+  This is the open-source C++ implementation of the folded hierarchy
+  of classifiers for cat detection described in
 
      F. Fleuret and D. Geman, "Stationary Features and Cat Detection",
      Journal of Machine Learning Research (JMLR), 2008, to appear.
 
 
-  Please cite this paper when referring to this software.
+  Please use that citation when referring to this software.
 
 
-######################################################################
-## INSTALLATION
+  Contact Francois Fleuret at fleuret@idiap.ch for comments and bug
+  reports.
 
 
-  This program was developed on Debian GNU/Linux computers with the
-  following main tool versions
-
-   * GNU bash, version 3.2.39
-   * g++ 4.3.2
-   * gnuplot 4.2 patchlevel 4
+II. INSTALLATION
 
   If you have installed the RateMyKitten images provided on
 
 
-    http://www.idiap.ch/folded-ctf
+     http://www.idiap.ch/folded-ctf
 
   in the source directory, everything should work seamlessly by
-  invoking the ./run.sh script. It will
+  invoking the ./run.sh script.
+
+  It will
 
    * Compile the source code entirely
 
    * Run 20 rounds of training / test (ten rounds for each of HB and
      H+B detectors with different random seeds)
 
-  You can also run the full thing with the following commands if you
-  have wget installed
+  You can run the full thing with the following commands if you have
+  wget installed
 
 
-   > wget http://www.idiap.ch/folded-ctf/not-public-yet/data/folding-gpl.tgz
+   > wget http://www.idiap.ch/folded-ctf/data/folding-gpl.tgz
    > tar zxvf folding-gpl.tgz
    > cd folding
-   > wget http://www.idiap.ch/folded-ctf/not-public-yet/data/rmk.tgz
+   > wget http://www.idiap.ch/folded-ctf/data/rmk.tgz
    > tar zxvf rmk.tgz
    > ./run.sh
 
   Note that every one of the twenty rounds of training/testing takes
   more than three days on a powerful PC. However, the script detects
   already running computations by looking at the presence of the
-  corresponding result directory. Hence, it can be run in parallel on
-  several machines as long as they see the same result directory.
+  corresponding result directories. Hence, it can be run in parallel
+  on several machines as long as they see the same result directory.
 
   When all or some of the experimental rounds are over, you can
-  generate the ROC curves by invoking the ./graph.sh script.
+  generate the ROC curves by invoking the ./graph.sh script. You need
+  a fairly recent version of Gnuplot.
+
+  This program was developed on Debian GNU/Linux computers with the
+  following main tool versions
+
+   * GNU bash, version 3.2.39
+   * g++ 4.3.2
+   * gnuplot 4.2 patchlevel 4
 
 
-  You are welcome to send bug reports and comments to fleuret@idiap.ch
+   Due to approximations in the optimized arithmetic operations with
+   g++, results may vary with different versions of the compiler
+   and/or different levels of optimization.
 
 
-######################################################################
-## PARAMETERS
+III. PARAMETERS
 
 
-  To set the value of a parameter during an experiment, just add an
-  argument of the form --parameter-name=value before the commands that
-  should take into account that value.
+  To set the value of a parameter, just add an argument of the form
+  --parameter-name=value before the commands that should take it into
+  account.
 
   For every parameter below, the default value is given between
   parenthesis.
 
   * pool-name (no default)
 
-    Where are the data to use
+    The scene pool file name.
 
   * test-pool-name (no default)
 
-    Should we use a separate pool file, and ignore proportion-for-test
-    then.
+    Should we use a separate test pool file. If none is given, then
+    the test scenes are taken at random from the main pool file
+    according to proportion-for-test.
 
   * detector-name ("default.det")
 
   * tree-depth-max (1)
 
     Maximum depth of the decision trees used as weak learners in the
-    classifier. The default value corresponds to stumps.
+    classifier. The default value of 1 corresponds to stumps.
 
   * proportion-negative-cells-for-training (0.025)
 
     Overall proportion of negative cells to use during learning (we
-    sample among them)
+    sample among them for boosting).
 
   * nb-negative-samples-per-positive (10)
 
 
   * nb-features-for-boosting-optimization (10000)
 
-    How many pose-indexed features to use at every step of boosting.
+    How many pose-indexed features to look at for optimization at
+    every step of boosting.
 
   * force-head-belly-independence ("no")
 
     Should we force the independence between the two levels of the
     detector (i.e. make an H+B detector)
 
-  * nb-weak-learners-per-classifier (10)
+  * nb-weak-learners-per-classifier (100)
 
 
-    This parameter corresponds to the value U in the JMLR paper, and
-    should be set to 100.
+    This parameter corresponds to the value U in the article.
 
   * nb-classifiers-per-level (25)
 
-    This parameter corresponds to the value B in the JMLR paper.
+    This parameter corresponds to the value B in the article.
 
 
-  * nb-levels (1)
+  * nb-levels (2)
 
 
-    How many levels in the hierarchy. This should be 2 for the JMLR
-    paper experiments.
+    How many levels in the hierarchy.
 
 
-  * proportion-for-train (0.5)
+  * proportion-for-train (0.75)
 
     The proportion of scenes from the pool to use for training.
 
   * write-parse-images ("no")
 
     Should we save one image for every test scene with the resulting
-    alarms.
+    alarms. This option generates a lot of images for every round and
+    is switched off by default. Switch it on to produce images such as
+    the full page of results in the paper.
 
   * write-tag-images ("no")
 
     Should we save the (very large) tag images when saving the
     materials.
 
-  * wanted-true-positive-rate (0.5)
+  * wanted-true-positive-rate (0.75)
 
     What is the target true positive rate. Note that this is the rate
     without post-processing and without pose tolerance in the
 
   * progress-bar ("yes")
 
-    Should we display a progress bar.
+    Should we display a progress bar during long computations.
 
 
-######################################################################
-## COMMANDS
+IV. COMMANDS
 
    * open-pool
 
 
    * compute-thresholds
 
-     Compute the thresholds of the detector classifiers to obtain the
-     required wanted-true-positive-rate
+     Compute the thresholds of the detector classifiers from the
+     validation set to obtain the required wanted-true-positive-rate.
 
    * test-detector
 
 
      Visit nb-wanted-true-positive-rates rates between 0 and
      wanted-true-positive-rate, for each compute the detector
-     thresholds on the validation set, estimate the error rate on the
-     test set.
+     thresholds on the validation set and estimate the error rate on
+     the test set.
 
    * write-detector
 
 
    * write-pool-images
 
-     Write PNG images of the scenes in the pool.
+     For each of the first nb-images of the pool, save one PNG image
+     with the ground truth, one with the corresponding referential at
+     the reference scale, and one with the feature material-feature-nb
+     from the detector. This last image is not saved if either no
+     detector has been read/trained or if no feature number has been
+     specified.
 
 --
 Francois Fleuret
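
  As a minimal sketch of the --parameter-name=value convention
  described in the README above, a typical invocation combines the
  parameter overrides with the commands to run (the pool file name
  and result path below are placeholders, not files shipped with the
  archive):

   > ./folding \
       --pool-name=my-scenes.pool \
       --result-path=./results/hb-0 \
       --nb-levels=2 \
       --wanted-true-positive-rate=0.75 \
       open-pool train-detector compute-thresholds write-detector
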
index 8016c41..de53962 100644 (file)
--- a/global.cc
+++ b/global.cc
@@ -32,7 +32,7 @@ Global::~Global() {
 
 void Global::init_parser(ParamParser *parser) {
   // The nice level of the process
-  parser->add_association("niceness", "5", false);
+  parser->add_association("niceness", "15", false);
 
   // Seed to initialize the random generator
   parser->add_association("random-seed", "0", false);
@@ -54,6 +54,8 @@ void Global::init_parser(ParamParser *parser) {
 
   // How many images to produce/process
   parser->add_association("nb-images", "-1", false);
+  // What is the number of the feature to show in the images
+  parser->add_association("material-feature-nb", "-1", false);
 
   // What is the maximum tree depth
   parser->add_association("tree-depth-max", "1", false);
@@ -66,11 +68,11 @@ void Global::init_parser(ParamParser *parser) {
   // Do we allow head-belly registration
   parser->add_association("force-head-belly-independence", "no", false);
   // How many weak-learners in every classifier
-  parser->add_association("nb-weak-learners-per-classifier", "10", false);
+  parser->add_association("nb-weak-learners-per-classifier", "100", false);
   // How many classifiers per level
   parser->add_association("nb-classifiers-per-level", "25", false);
   // How many levels
-  parser->add_association("nb-levels", "1", false);
+  parser->add_association("nb-levels", "2", false);
 
   // Proportion of images from the pool to use for training
   parser->add_association("proportion-for-train", "0.5", false);
@@ -90,7 +92,7 @@ void Global::init_parser(ParamParser *parser) {
   parser->add_association("write-tag-images", "no", false);
 
   // What is the wanted true overall positive rate
-  parser->add_association("wanted-true-positive-rate", "0.5", false);
+  parser->add_association("wanted-true-positive-rate", "0.75", false);
   // How many rates to try for the sequence of tests
   parser->add_association("nb-wanted-true-positive-rates", "10", false);
 
@@ -143,6 +145,7 @@ void Global::read_parser(ParamParser *parser) {
   }
 
   nb_images = parser->get_association_int("nb-images");
+  material_feature_nb = parser->get_association_int("material-feature-nb");
   tree_depth_max = parser->get_association_int("tree-depth-max");
   nb_weak_learners_per_classifier = parser->get_association_int("nb-weak-learners-per-classifier");
   nb_classifiers_per_level = parser->get_association_int("nb-classifiers-per-level");
index fae2d81..afe9dd2 100644 (file)
--- a/global.h
+++ b/global.h
@@ -43,12 +43,10 @@ public:
   char detector_name[buffer_size];
   char result_path[buffer_size];
 
-  char materials_image_numbers[buffer_size];
-  char materials_pf_numbers[buffer_size];
-
   int loss_type;
 
   int nb_images;
+  int material_feature_nb;
 
   int tree_depth_max;
 
index 64729db..ad79830 100644 (file)
--- a/materials.cc
+++ b/materials.cc
@@ -75,26 +75,7 @@ void write_referential_png(char *filename,
     referential->draw(&result_sp, level);
   }
 
-  (*global.log_stream) << "Writing " << filename << endl;
-  result_sp.write_png(filename);
-}
-
-void write_one_pi_feature_png(char *filename,
-                              LabelledImage *image,
-                              PoseCellHierarchy *hierarchy,
-                              int nb_target,
-                              int level,
-                              PiFeature *pf) {
-
-  PoseCell target_cell;
-  hierarchy->get_containing_cell(image, level,
-                                 image->get_target_pose(nb_target), &target_cell);
-  PiReferential referential(&target_cell);
-  RGBImage result(image->width(), image->height());
-  image->to_rgb(&result);
-  RGBImageSubpixel result_sp(&result);
-  referential.draw(&result_sp, level);
-  //   pf->draw(&result_sp, 255, 255, 0, &referential);
+  cout << "Writing " << filename << endl;
   result_sp.write_png(filename);
 }
 
@@ -109,33 +90,35 @@ void write_pool_images_with_poses_and_referentials(LabelledImagePool *pool,
 
   PoseCellHierarchy *hierarchy = new PoseCellHierarchy(pool);
 
-  for(int i = 0; i < min(global.nb_images, pool->nb_images()); i++) {
-    image = pool->grab_image(i);
-    RGBImage result(image->width(), image->height());
-    image->to_rgb(&result);
-    RGBImageSubpixel result_sp(&result);
-
-    if(global.pictures_for_article) {
-      for(int t = 0; t < image->nb_targets(); t++) {
-        image->get_target_pose(t)->draw(8, 255, 255, 255,
-                                        hierarchy->nb_levels() - 1, &result_sp);
-
+  if(global.material_feature_nb < 0) {
+    for(int i = 0; i < min(global.nb_images, pool->nb_images()); i++) {
+      image = pool->grab_image(i);
+      RGBImage result(image->width(), image->height());
+      image->to_rgb(&result);
+      RGBImageSubpixel result_sp(&result);
+
+      if(global.pictures_for_article) {
+        for(int t = 0; t < image->nb_targets(); t++) {
+          image->get_target_pose(t)->draw(8, 255, 255, 255,
+                                          hierarchy->nb_levels() - 1, &result_sp);
+
+        }
+        for(int t = 0; t < image->nb_targets(); t++) {
+          image->get_target_pose(t)->draw(4, 0, 0, 0,
+                                          hierarchy->nb_levels() - 1, &result_sp);
+        }
+      } else {
+        for(int t = 0; t < image->nb_targets(); t++) {
+          image->get_target_pose(t)->draw(4, 255, 128, 0,
+                                          hierarchy->nb_levels() - 1, &result_sp);
+        }
       }
-      for(int t = 0; t < image->nb_targets(); t++) {
-        image->get_target_pose(t)->draw(4, 0, 0, 0,
-                                        hierarchy->nb_levels() - 1, &result_sp);
-      }
-    } else {
-      for(int t = 0; t < image->nb_targets(); t++) {
-        image->get_target_pose(t)->draw(4, 255, 128, 0,
-                                        hierarchy->nb_levels() - 1, &result_sp);
-      }
-    }
 
 
-    sprintf(buffer, "/tmp/truth-%05d.png", i);
-    cout << "Writing " << buffer << endl;
-    result_sp.write_png(buffer);
-    pool->release_image(i);
+      sprintf(buffer, "/tmp/truth-%05d.png", i);
+      cout << "Writing " << buffer << endl;
+      result_sp.write_png(buffer);
+      pool->release_image(i);
+    }
   }
 
   for(int i = 0; i < min(global.nb_images, pool->nb_images()); i++) {
@@ -145,8 +128,6 @@ void write_pool_images_with_poses_and_referentials(LabelledImagePool *pool,
     image->to_rgb(&result);
     RGBImageSubpixel result_sp(&result);
 
-    int u = 0;
-
     // image->compute_rich_structure();
 
     for(int t = 0; t < image->nb_targets(); t++) {
@@ -164,32 +145,26 @@ void write_pool_images_with_poses_and_referentials(LabelledImagePool *pool,
 
       PiReferential referential(&target_cell);
 
-      sprintf(buffer, "/tmp/referential-%05d-%02d.png", i, u);
       image->compute_rich_structure();
-      write_referential_png(buffer, hierarchy->nb_levels() - 1, image, &referential, 0);
-
-      if(detector) {
-        int nb_features = 100;
-        for(int f = 0; f < nb_features; f++) 
-          if(f == 0 || f == 50 || f  == 53) {
-            int n_family, n_feature;
-            if(f < nb_features/2) {
-              n_family = 0;
-              n_feature = f;
-            } else {
-              n_family = detector->_nb_classifiers_per_level;
-              n_feature = f - nb_features/2;
-            }
-            pf = detector->_pi_feature_families[n_family]->get_feature(n_feature);
-            sprintf(buffer, "/tmp/pf-%05d-%02d-%03d.png", i, u, f);
-            write_referential_png(buffer,
-                                  hierarchy->nb_levels() - 1,
-                                  image,
-                                  &referential,
-                                  pf);
-          }
+
+      if(global.material_feature_nb < 0) {
+        sprintf(buffer, "/tmp/referential-%05d-%02d.png", i, t);
+        write_referential_png(buffer, hierarchy->nb_levels() - 1, image, &referential, 0);
+      } else if(detector) {
+        int n_family = 0;
+        int n_feature = global.material_feature_nb;
+        while(n_feature > detector->_pi_feature_families[n_family]->nb_features()) {
+          n_family++;
+          n_feature -= detector->_pi_feature_families[n_family]->nb_features();
+        }
+        pf = detector->_pi_feature_families[n_family]->get_feature(n_feature);
+        sprintf(buffer, "/tmp/pf-%05d-%02d-%05d.png", i, t, global.material_feature_nb);
+        write_referential_png(buffer,
+                              hierarchy->nb_levels() - 1,
+                              image,
+                              &referential,
+                              pf);
       }
-      u++;
     }
 
     pool->release_image(i);
@@ -233,7 +208,6 @@ void write_image_with_detections(const char *filename,
     }
   }
 
-  (*global.log_stream) << "Writing " << filename << endl;
-
+  cout << "Writing " << filename << endl;
   result_sp.write_png(filename);
 }
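
  A minimal sketch of how the new material-feature-nb parameter used
  above is meant to be exercised from the shell, mirroring the pics
  branch of run.sh further down (the pool file name is a placeholder,
  and the detector must come from an already completed round):

   > ./folding \
       --pool-name=my-scenes.pool \
       --result-path=./results/hb-0 \
       --detector-name=./results/hb-0/default.det \
       --nb-images=1 \
       --material-feature-nb=2501 \
       open-pool read-detector write-pool-images

  With material-feature-nb left at its default of -1, write-pool-images
  only produces the ground-truth and referential images.
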
index 4d6b337..a271a22 100644 (file)
--- a/materials.h
+++ b/materials.h
 void write_pool_images_with_poses_and_referentials(LabelledImagePool *pool,
                                                    Detector *detector);
 
-void write_one_pi_feature_png(char *filename,
-                              LabelledImage *image,
-                              PoseCellHierarchy *hierarchy,
-                              int nb_target,
-                              int level,
-                              PiFeature *pf);
-
 void write_image_with_detections(const char *filename,
                                  LabelledImage *image,
                                  PoseCellSet *detections,
index 85a843a..b8c5eff 100644 (file)
--- a/pose_cell_hierarchy.cc
+++ b/pose_cell_hierarchy.cc
@@ -92,14 +92,6 @@ PoseCellHierarchy::PoseCellHierarchy(LabelledImagePool *train_pool) {
   scalar_t belly_ryc_min = belly_resolution * floor(belly_ryc.min / belly_resolution);
   int nb_belly_ryc = int(ceil((belly_ryc.max - belly_ryc_min) / belly_resolution));
 
-  (*global.log_stream) << "belly_rxc = " << belly_rxc << endl
-                       << "belly_rxc_min = " << belly_rxc_min << endl
-                       << "belly_rxc_min + nb_belly_rxc * belly_resolution = " << belly_rxc_min + nb_belly_rxc * belly_resolution << endl
-                       << endl
-                       << "belly_ryc = " << belly_ryc << endl
-                       << "belly_ryc_min = " << belly_ryc_min << endl
-                       << "belly_ryc_min + nb_belly_ryc * belly_resolution = " << belly_ryc_min + nb_belly_ryc * belly_resolution << endl;
-
   int used[nb_belly_rxc * nb_belly_rxc];
 
   for(int k = 0; k < nb_belly_rxc * nb_belly_ryc; k++) {
@@ -150,22 +142,6 @@ PoseCellHierarchy::PoseCellHierarchy(LabelledImagePool *train_pool) {
 
   _belly_cells = new RelativeBellyPoseCell[_nb_belly_cells];
 
-  for(int j = 0; j < nb_belly_ryc; j++) {
-    for(int i = 0; i < nb_belly_rxc; i++) {
-      if(used[i + nb_belly_rxc * j]) {
-        if(sq(scalar_t(i) * belly_resolution + belly_resolution/2 + belly_rxc_min) +
-           sq(scalar_t(j) * belly_resolution + belly_resolution/2 + belly_ryc_min) <= 1) {
-          (*global.log_stream) << "*";
-        } else {
-          (*global.log_stream) << "X";
-        }
-      } else {
-        (*global.log_stream) << ".";
-      }
-    }
-    (*global.log_stream) << endl;
-  }
-
   int k = 0;
   for(int j = 0; j < nb_belly_ryc; j++) {
     for(int i = 0; i < nb_belly_rxc; i++) {
@@ -184,8 +160,6 @@ PoseCellHierarchy::PoseCellHierarchy(LabelledImagePool *train_pool) {
       }
     }
   }
-
-  (*global.log_stream) << _nb_belly_cells << " belly cells." << endl;
 }
 
 PoseCellHierarchy::~PoseCellHierarchy() {
diff --git a/run.sh b/run.sh
index 13f3a8e..42dc58b 100755 (executable)
--- a/run.sh
+++ b/run.sh
@@ -19,6 +19,7 @@
 
 MAIN_URL="http://www.idiap.ch/folded-ctf"
 
+######################################################################
 # Compiling
 
 make -j -k
@@ -81,53 +82,85 @@ fi
 
 RESULT_DIR=./results
 
-if [[ ! -d ${RESULT_DIR} ]]; then
-    mkdir ${RESULT_DIR}
-fi
+case $1 in
 
 
-for SEED in {0..9}; do
+    pics)
 
 
-    for MODE in hb h+b; do
+        SEED=0
 
 
-        EXPERIMENT_RESULT_DIR="${RESULT_DIR}/${MODE}-${SEED}"
+        EXPERIMENT_RESULT_DIR="${RESULT_DIR}/hb-${SEED}"
 
 
-        mkdir ${EXPERIMENT_RESULT_DIR} 2> /dev/null
+        if [[ -d "${EXPERIMENT_RESULT_DIR}" ]]; then
 
 
-        if [[ $? == 0 ]]; then
+            for n in -1 0 2501 2504; do
 
 
-            OPTS="--random-seed=${SEED} --wanted-true-positive-rate=0.75"
-
-            if [[ $MODE == "h+b" ]]; then
-                OPTS="${OPTS} --force-head-belly-independence=yes"
-            fi
-
-            if [[ $1 == "valgrind" ]]; then
-                OPTS="${OPTS} --nb-classifiers-per-level=1 --nb-weak-learners-per-classifier=10"
-                OPTS="${OPTS} --proportion-for-train=0.1 --proportion-for-validation=0.025 --proportion-for-test=0.01"
-                OPTS="${OPTS} --wanted-true-positive-rate=0.1"
-                DEBUGGER="valgrind --db-attach=yes --leak-check=full --show-reachable=yes"
-            fi
-
-            ${DEBUGGER} ./folding \
-                --niceness=15 \
-                --pool-name=${POOL_NAME} \
-                --nb-levels=2 \
-                --nb-classifiers-per-level=25 --nb-weak-learners-per-classifier=100 \
-                --result-path=${EXPERIMENT_RESULT_DIR} \
-                --detector-name=${EXPERIMENT_RESULT_DIR}/default.det \
-                ${OPTS} \
-                open-pool \
-                train-detector \
-                compute-thresholds \
-                write-detector \
-                sequence-test-detector | tee -a ${EXPERIMENT_RESULT_DIR}/stdout
+                ./folding --random-seed=${SEED} \
+                    --pool-name=${POOL_NAME} \
+                    --result-path=${EXPERIMENT_RESULT_DIR} \
+                    --detector-name=${EXPERIMENT_RESULT_DIR}/default.det \
+                    --nb-images=1 \
+                    --material-feature-nb=${n} \
+                    open-pool \
+                    read-detector \
+                    write-pool-images
+
+            done
 
         else
+            echo "You have to run at least the first round completely to be able" >&2
+            echo "to generate the pictures." >&2
+            exit 1
+        fi
+
+        ;;
 
 
-            echo "${EXPERIMENT_RESULT_DIR} exists, aborting experiment."
+    valgrind|"")
 
 
+        if [[ ! -d ${RESULT_DIR} ]]; then
+            mkdir ${RESULT_DIR}
         fi
 
         fi
 
-    done
+        for SEED in {0..9}; do
+
+            for MODE in hb h+b; do
+
+                EXPERIMENT_RESULT_DIR="${RESULT_DIR}/${MODE}-${SEED}"
+
+                mkdir ${EXPERIMENT_RESULT_DIR} 2> /dev/null
+
+                if [[ $? == 0 ]]; then
+
+                    if [[ $MODE == "h+b" ]]; then
+                        OPTS="${OPTS} --force-head-belly-independence=yes"
+                    fi
+
+                    if [[ $1 == "valgrind" ]]; then
+                        OPTS="${OPTS} --nb-classifiers-per-level=1 --nb-weak-learners-per-classifier=10"
+                        OPTS="${OPTS} --proportion-for-train=0.1 --proportion-for-validation=0.025 --proportion-for-test=0.01"
+                        OPTS="${OPTS} --wanted-true-positive-rate=0.1"
+                        DEBUGGER="valgrind --db-attach=yes --leak-check=full --show-reachable=yes"
+                    fi
+
+                    ${DEBUGGER} ./folding \
+                        --random-seed=${SEED} \
+                        --pool-name=${POOL_NAME} \
+                        --result-path=${EXPERIMENT_RESULT_DIR} \
+                        --detector-name=${EXPERIMENT_RESULT_DIR}/default.det \
+                        ${OPTS} \
+                        open-pool \
+                        train-detector \
+                        compute-thresholds \
+                        write-detector \
+                        sequence-test-detector | tee -a ${EXPERIMENT_RESULT_DIR}/stdout
+
+                else
+
+                    echo "${EXPERIMENT_RESULT_DIR} exists, aborting experiment."
+
+                fi
+
+            done
+
+        done
 
 
-done
+esac
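
  To summarize the entry points of run.sh as modified above (a
  sketch; the pics mode assumes the first hb-0 round has completed,
  so that results/hb-0/default.det exists):

   > ./run.sh            # run the twenty hb / h+b training/test rounds
   > ./run.sh valgrind   # reduced configuration run under valgrind
   > ./run.sh pics       # regenerate the material images from results/hb-0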