Profiling Agilo

In Python there are two major profiling libraries: cProfile and hotshot. hotshot has much less overhead (roughly 50% faster than cProfile) but is marked as deprecated in Python's documentation, so I decided to go with cProfile.

Using cProfile to profile isolated areas

If you just want to profile some isolated areas, you can use some code like this:

import cProfile

prof = cProfile.Profile()
# runcall executes the function with the given parameters and records timing data
result = prof.runcall(function_to_profile, parameters)
prof.dump_stats('/your/path/agilo.profile')

Afterwards the file 'agilo.profile' contains the collected profiling data. The script trunk/scripts/profiling/print_stats.py (usage: 'print_stats.py <filename>') can analyze this data. You should see something like this:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   333706    3.682    0.000   16.593    0.000 .../config.py:271(get)
   333662    3.194    0.000    7.481    0.000 .../ConfigParser.py:495(get)
     7057    2.143    0.000   22.961    0.003 .../config.py:128(get_options)
   667329    1.925    0.000    2.868    0.000 .../ConfigParser.py:335(optionxform)
    22835    1.899    0.000    1.899    0.000 .../sqlite_backend.py:48(_rollback_on_error)
    62281    1.773    0.000    4.701    0.000 .../logging/__init__.py:216(__init__)

The most interesting columns are 'ncalls' (the number of calls) and 'tottime' (the total time spent in the given function, excluding time spent in calls to sub-functions).
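If you do not want to use print_stats.py, the standard-library pstats module can read the dump directly. A minimal round trip (profile, dump, load, print) looks roughly like this; work() is just a stand-in for your own code:

```python
import cProfile
import pstats

def work():
    # stand-in for the code you actually want to profile
    return sum(i * i for i in range(1000))

prof = cProfile.Profile()
prof.runcall(work)
prof.dump_stats('agilo.profile')

# load the dump and print the five most expensive functions by total time
stats = pstats.Stats('agilo.profile')
stats.sort_stats('tottime').print_stats(5)
```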

Profiling Agilo with multiple requests

repoze.profile was the most promising tool for me (WSGIProfile did not work because base.html was missing from the tar.gz and the source repository was not active). You need a custom WSGI start script for it; an example is trunk/scripts/profiling/run_wsgi_profiler.py (set the environment variable TRAC_ENV to your Trac environment directory).

At http://localhost:8001/profile you get a web view of the profiling data. The data is also written to agilo.profile so you can work with it later.
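To get an idea of what such a middleware does internally, here is a minimal stdlib-only sketch (this is not repoze.profile's actual code; the class name and dump path are made up): it runs every request under cProfile, accumulating data across requests, and dumps the stats after each one.

```python
import cProfile

class ProfilingMiddleware:
    """Sketch of a profiling WSGI middleware. repoze.profile works along
    these lines but adds a web view, configuration options, etc."""

    def __init__(self, app, dump_path='agilo.profile'):
        self.app = app
        self.dump_path = dump_path
        # one Profile instance so data accumulates across requests
        self.profiler = cProfile.Profile()

    def __call__(self, environ, start_response):
        # profile the wrapped application for this request
        result = self.profiler.runcall(self.app, environ, start_response)
        # write the accumulated data after every request
        self.profiler.dump_stats(self.dump_path)
        return result
```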

Fancy call graphs

There are several cachegrind viewing tools that give you a graphical overview. KCachegrind seems to be the best known, though I find its usability a bit annoying (the package name in Fedora is kdesdk; you should also install graphviz for nicer graphs). On Windows there is WinCacheGrind.

Before you can use the cProfile data with KCachegrind you need to convert it. There are several scripts for that floating around the web; the newest version seems to be pyprof2calltree on PyPI (e.g. 'pyprof2calltree -i agilo.profile -k' converts the data and launches KCachegrind).

Measuring the Time to Deliver the Backlog Page

I wrote two scripts specifically tailored for measuring the time to deliver the backlog page:
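The scripts themselves are not reproduced here, but the basic idea can be sketched as follows (measure() and the URL are my own illustration, not the actual scripts): fetch the page repeatedly and report the fastest and average wall-clock time.

```python
import time
import urllib.request

def measure(fetch, runs=5):
    """Call fetch() 'runs' times; return (fastest, average) in seconds."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()
        durations.append(time.perf_counter() - start)
    return min(durations), sum(durations) / len(durations)

# Usage (adjust the URL to your own tracd/WSGI setup):
# fastest, average = measure(
#     lambda: urllib.request.urlopen('http://localhost:8001/backlog').read())
# print('fastest: %.3fs, average: %.3fs' % (fastest, average))
```

Reporting the fastest run alongside the average helps separate the page's intrinsic cost from noise such as cold caches on the first request.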

Last modified on 02/02/2009 11:08:35 AM

© 2008-2016 Agilo Software, all rights reserved