This is a simple profiler to estimate processing time per module per function.
### profiler.decorate(model, [functionsToDecorate]) ###
This function should be called before starting the computation. It replaces the functions specified in functionsToDecorate with instrumented versions that keep track of computation times. If functionsToDecorate is not provided, it decorates updateOutput and backward by default.
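As a rough illustration, decoration might look like the sketch below. The `require` path for the profiler and the exact form of functionsToDecorate (here a table of function names) are assumptions, not confirmed by this README.

```lua
-- Hypothetical require path; adjust to wherever the profiler module lives.
local profiler = require 'profiler'
local nn = require 'nn'

-- A small example model.
local model = nn.Sequential()
model:add(nn.Linear(100, 50))
model:add(nn.Tanh())
model:add(nn.Linear(50, 10))

-- Instrument the default functions (updateOutput and backward) ...
profiler.decorate(model)

-- ... or, assuming functionsToDecorate is a list of function names,
-- explicitly choose which functions to instrument:
-- profiler.decorate(model, {'updateOutput', 'updateGradInput'})
```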
### profiler.print(model, [nbSamples]) ###
Prints the measured processing times. If nbSamples is provided, the time per sample will also be printed.
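Continuing the sketch above, one would run the computation on the decorated model and then print the timings. The loop, batch size, and tensor shapes below are illustrative only.

```lua
local nbSamples = 0
for i = 1, 100 do
   local input = torch.randn(32, 100)
   local gradOutput = torch.randn(32, 10)
   model:forward(input)               -- goes through the instrumented updateOutput
   model:backward(input, gradOutput)  -- goes through the instrumented backward
   nbSamples = nbSamples + input:size(1)
end

-- Print total times per module per function, plus per-sample times
-- since nbSamples is given.
profiler.print(model, nbSamples)
```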
It seems to work okay, but it has not been heavily tested so far.