I did some quick performance checks using timeit with the following arguments:
timeit.py -s 'import pytz' 'tzinfo = pytz.timezone("Australia/Perth")'
Before:
100000 loops, best of 3: 8.85 usec per loop
After:
100000 loops, best of 3: 2.22 usec per loop
Of course, in both cases a lot less work is being done on subsequent loops. I think my branch is quicker here because it caches the previously used tzinfo objects in a dictionary rather than importing the corresponding Python module each time. That said, this modification could be made without the runtime tzfile loading to get a similar speed-up.
On a single loop importing every time zone in pytz.all_timezones, the times are fairly similar when the Python source or the binary tz data is already in the filesystem cache. This is a difficult case to profile, but the new code doesn't seem noticeably faster or slower than the old code.