It's hard to speed up code without proper profiling tools. Existing solutions that I'm aware of:
- The built-in `profile`/`cProfile` modules. I find these to be cumbersome, particularly because they can track evaluation time for functions that aren't being called or defined in the relevant scope. Perhaps there is a better way to use these.
- `kernprof.py`, AKA the line profiler. Really terrific as far as isolating and inspecting one scope goes. Not great for routine profiling.
- IDE (e.g. PyCharm) profilers. In my brief stint with PyCharm I was unable to get this tool running; it is also unsuitable for routine profiling.
- Calling `time` via the command line, or other tools like `pytest`. Only useful for isolated code, and subject to a lot of variability.
- Tools like the `timeit` module. Again, only useful for isolated code, but it does a few things to temper evaluation-to-evaluation variability (see the snippet right after this list).
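For reference, typical `timeit` usage looks something like this; the statement being timed here is just a placeholder, not code from this project:

```python
import timeit

# Total wall-clock time for 10,000 evaluations of a throwaway statement.
elapsed = timeit.timeit("sum(range(1000))", number=10_000)
print(f"{elapsed:.4f}s for 10,000 evaluations")
```

The command-line form (`python -m timeit "sum(range(1000))"`) is equally handy for one-off checks, but neither helps with code embedded in a larger program.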
Custom solutions I've used in the past:
- Peppering code with `time.time()` calls to collect the evaluation time associated with blocks of code. Quick and easy, but potentially ugly and non-specific.
- Writing a decorator that wraps a function and accumulates information about how long it takes to run. Can be largely transparent. Requires functionalization of the interesting bits. (A rough sketch follows this list.)
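A minimal sketch of what I have in mind for the decorator approach, assuming all we want is per-function call counts and total wall-clock time (the names `profiled`, `report`, and `_timings` are placeholders, not existing code):

```python
import functools
import time
from collections import defaultdict

# Accumulated stats, keyed by function name.
_timings = defaultdict(lambda: {"calls": 0, "total": 0.0})

def profiled(func):
    """Wrap func and accumulate its call count and total run time."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            stats = _timings[func.__qualname__]
            stats["calls"] += 1
            stats["total"] += time.perf_counter() - start
    return wrapper

def report():
    """Print accumulated timings, slowest first."""
    for name, stats in sorted(_timings.items(), key=lambda kv: -kv[1]["total"]):
        print(f"{name}: {stats['calls']} calls, {stats['total']:.4f}s total")
```

Usage would just be decorating the functions we care about with `@profiled` and calling `report()` at the end of a run, which is why the interesting bits have to be broken out into functions first.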
I'm inclined to go with the latter; it will require us to break off more pieces and test them independently, but I think that's healthy anyway.
As far as routine profiling goes - I think this is desirable. I find it to be a useful way to check the health of code, as well as a way to make sure that anticipated performance hits do indeed have the anticipated effect (sanity check). It's unclear to me where this ought to go. It can't really be a unit test, since we have no expected run time that is going to be consistent across hardware (and run time is variable regardless).