Hi @vstinner, thanks for the PyPerf project. Presumably thanks to its sophisticated architecture, a command line such as pyperf timeit -s "...define my_func..." "my_func()" prints a human-readable result to the terminal, such as "Mean +- std dev: 38.3 us +- 2.8 us". I also love that its std dev is noticeably smaller than that of some other benchmark tools, and that its mean is more consistent.
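As far as I can tell from the documentation, one way to get at the numbers is to have the command line write its result to a JSON file with -o and then load that file back through the Python API. A minimal sketch of what I mean (bench.json is just a placeholder name, and I may be misreading the Benchmark.load() docs):

# Assumed workflow: first run, e.g.,
#   pyperf timeit -s "...define my_func..." "my_func()" -o bench.json
# then read the result back in a separate script:
import pyperf

bench = pyperf.Benchmark.load("bench.json")  # bench.json was written by the -o option
print(bench.mean())   # hopefully the same mean that was printed to the terminal
print(bench.stdev())

But that feels like a detour through the filesystem rather than getting the number directly in the benchmarking script itself.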
Now, how do I programmatically get that reliable mean value? I tried the following experiments, but could not get what I want.
- Intuitively/pythonically, I expected PyPerf's timeit() to mimic Python's function of the same name, timeit.timeit(), and return the elapsed time (preferably a mean), but that is not the case: PyPerf's timeit() returns None.
- The alternative bench_func() is supposed to return a benchmark object, but the following attempt does not work (my best guess at a fix is sketched right after the snippet).
import pyperf

runner = pyperf.Runner()

return_value = runner.timeit("Times a function", stmt="locals()")
print(return_value)  # This is always None

benchmark = runner.bench_func("bench_func", locals)
print(benchmark)
if benchmark:  # This check is somehow necessary, probably due to the multiprocess architecture
    print(benchmark.get_values())
    # It is still unclear how to get benchmark.mean():
    # it throws statistics.StatisticsError: mean requires at least one data point
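My best guess at how to avoid that exception is the sketch below; the placeholder workload and the extra guards are my own assumptions about how the multiprocess architecture behaves, not something I found spelled out in the docs, and I have not verified that this is the intended usage:

import pyperf

def my_func():
    sum(range(1000))  # placeholder workload standing in for my real function

runner = pyperf.Runner()
benchmark = runner.bench_func("my_func", my_func)

# Guess: guard against None and against objects that do not carry enough
# values yet (which seems to be what triggers the StatisticsError above),
# presumably because the script gets re-executed in the spawned worker processes.
if benchmark is not None and len(benchmark.get_values()) > 1:
    print("mean:", benchmark.mean())
    print("std dev:", benchmark.stdev())

If there is a more direct, documented way to get the mean out of bench_func() or timeit(), that is really all I am asking for.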
BTW, I suspect #165 was about this same use case.