BUG: change default timer to timeit.default_timer to fix resolution on Windows #780
Conversation
h-vetinari
left a comment
Haven't had time to test it yet, but LGTM aside from two things I don't understand.
benchmarks/step_detect.py (Outdated)

```diff
 def time_solve_potts_approx(self):
-    step_detect.solve_potts_approx(self.y, 0.3, p=1)
+    step_detect.solve_potts_approx(self.y, [0.3] * len(self.y), gamma=1)
```
I don't understand how that change relates to the PR.
This is actually #779, which came about from my wanting to use asv's own suite to demonstrate this change - but the one benchmark in the vicinity of 15.6 ms was broken due to a typo. It's necessary to reproduce the results above, but I will remove it prior to merge.
Looks ok, modulo comments above.
issues on Windows
Force-pushed from 826e58e to c5ee784
@pv All changes implemented. As far as testing, a simple regression test could look at the `samples` output from a test like:

```python
def test_pass():
    pass
```

For the baseline, we get:

And for this PR:

Simply looking for zeros (or values < clock resolution) seems like it would be sufficient (albeit stochastic, and much less reliable on slower machines like cloud VMs).
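A standalone sketch of what such a check could look like (this is not asv's actual test harness; the workload, sample count, and threshold are assumptions for illustration). It times a trivial statement with both `time.time()` and `timeit.default_timer()` and counts how many samples fall below the clock's reported resolution - on a coarse clock, most samples quantize down to 0.0:

```python
import time
import timeit

def collect_samples(timer, n=200):
    """Time a trivial workload n times with the given timer function."""
    samples = []
    for _ in range(n):
        start = timer()
        sum(range(100))  # far quicker than the 15.6 ms Windows tick
        samples.append(timer() - start)
    return samples

def frac_below_resolution(samples, clock_name):
    """Fraction of samples smaller than the clock's reported resolution."""
    res = time.get_clock_info(clock_name).resolution
    return sum(s < res for s in samples) / len(samples)

wall_samples = collect_samples(time.time)
perf_samples = collect_samples(timeit.default_timer)  # time.perf_counter on Python 3

wall_frac = frac_below_resolution(wall_samples, "time")
perf_frac = frac_below_resolution(perf_samples, "perf_counter")
```

On Windows, `time.get_clock_info("time").resolution` typically reports roughly the 15.6 ms tick, so comparing against it catches the quantized zero samples described above; on Linux both clocks are fine-grained, which matches the caveat that such a test is stochastic and machine-dependent.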
This PR closes #775 in two different ways:
1. The number of runs needed to target `sample_time` is currently determined with `time.time()`, which has a resolution of 15.6 ms on Windows. This leads to a lot of error in the proper scaling, and is clearly better off being a wall time measurement.
2. Changes `Benchmark.timer` to `timeit.default_timer()`. This fixes the poor benchmark resolution on Windows.

To see proof of the latter, we can run the following:
When testing this change, be careful not to use `asv run` to iterate through commits, as it uses `asv/benchmark.py` from HEAD. Instead, step through with `git` and the above command.

For our baseline, we see the following output in asv's own benchmarks:

As you can see, the quantization of `process_time()` on Windows is very clear.

And now with this PR:
In both cases, we see that the standard deviations have decreased - in the latter case, they're reduced by 25x. This effect would be even more pronounced if the mean didn't happen to be close to the `process_time()` quantum.