X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?p=profiler-torch.git;a=blobdiff_plain;f=README.md;h=2adb72c321a7a7a3b6199bbf26643a91cc8b930c;hp=193434571dfdf3dbcc8ce353b1dbd59c004e6fe1;hb=e927faab65fb190dc01959236c07df46f3d28946;hpb=6b672dde68f8a5755221f5126dc350154fa785ad

diff --git a/README.md b/README.md
index 1934345..2adb72c 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,16 @@
 This is a simple profiler to estimate processing time per module per function.
+It seems to work okay, but there was no heavy testing so far.
 
 See test-profiler.lua for a short example.
 
+
 ### profiler.decorate(model, [functionsToDecorate]) ###
 
 This function should be called before starting the computation.
-t replaces functions specified in functionsToDecorate by instrumented versions which keep track of computation times. If functionsToDecorate is not provided, it decorates by default updateOutput and backward.
-
-### profiler.print(model, [nbSamples]) ###
+It replaces functions specified in functionsToDecorate by instrumented versions which keep track of computation times. If functionsToDecorate is not provided, it decorates by default updateOutput and backward.
 
-Prints the measured processing times. If nbSamples is provided, the time per samples will also be printed.
+It also resets the accumulated timings to zero.
 
-It seems to work okay, but there was no heavy testing so far.
+### profiler.print(model, [nbSamples], [totalTime]) ###
 
---
-Francois Fleuret
-Dec 5th, 2016
+Prints the measured processing times. If nbSamples is provided, the time per samples will also be printed. If totalTime is provided, the percentages will also be printed.
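
To show how the two documented calls fit together, here is a minimal usage sketch in Lua/Torch based only on the README text above. The module name passed to require ('profiler'), the toy nn model, and the tensor sizes are assumptions for illustration; test-profiler.lua in the repository is the authoritative short example.

```lua
require 'torch'
require 'nn'

local profiler = require 'profiler'  -- assumed module name for this sketch

-- A toy network to profile (purely illustrative sizes).
local model = nn.Sequential()
   :add(nn.Linear(100, 50))
   :add(nn.ReLU())
   :add(nn.Linear(50, 10))

-- Instrument the default functions (updateOutput and backward) before
-- starting the computation; per the README this also resets the timings.
profiler.decorate(model)

local nbSamples = 1000
local input = torch.Tensor(nbSamples, 100):uniform()
local gradOutput = torch.Tensor(nbSamples, 10):uniform()

local timer = torch.Timer()
local output = model:forward(input)
model:backward(input, gradOutput)
local totalTime = timer:time().real

-- Prints per-module timings, plus time per sample (since nbSamples is
-- given) and percentages of totalTime (since totalTime is given).
profiler.print(model, nbSamples, totalTime)
```

Note that profiler.decorate must run before the forward/backward pass, since only the instrumented versions of the functions record computation times.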