Evaluate memory consumption in the Evolution mailer

Difficulty: medium.

Evolution's mailer uses a lot of memory. Before we can fix it, we need to know how it uses that memory. Your task is to build the infrastructure to get breakdowns of how the memory gets used, and if possible, fix some of the problems related to memory consumption.


  1. Get a breakdown of how memory is used. Is it for message summaries? Indexing data? Random scattered strings? Bookkeeping data? See how the breakdown can be generated easily ("push a button and it generates a pretty chart"), and how it can be presented so that it can be understood easily (treemaps, pie charts, bar charts).
  2. Find out how the memory usage changes over time. What operations cause it to grow a lot, and how does it grow? How about visiting a big mail folder? Visiting every mail in that folder? Importing a lot of mail? Fetching a lot of mail from a POP or IMAP server? Composing mails? Forwarding mails? Plot this change over time. If different parts of the breakdown change at different rates, you may want to write a tool to generate a little animation of the charts from step (1).
  3. See if replacing the implementation of EMemChunk with GSlice makes a difference (see this for instructions).
  4. Once you have the breakdown and some idea of how memory usage changes over time, instrument the code to see the actual data that is in memory. If lots of strings are used, see if there is redundancy in them. Mailing lists have many mails with the same Subject: see if those strings can be shared with a reference-counting scheme (they probably are already).
  5. Write a report on what you did: the methods you followed, the patches you added to Evolution, the tools you wrote to measure memory consumption, etc.
  6. You may find genuine memory leaks. Identify them; bonus points for fixing them.


  • Valgrind's Massif gives you time-space plots with a breakdown of how memory is used.

  • Valgrind's Memcheck with the --show-reachable option gives you a fine-grained list of where memory was allocated. You can correlate this with the Massif output.

  • FIXME: Alex Larsson wrote a framework to instrument GObjects in order to monitor their memory consumption. Camel could be made to have a similar infrastructure.


  • Tinymail (mailing list) is a project to write a mail client using Evolution's Camel library, for machines with low resources. You may be able to apply some of the techniques used there to improve Evolution's own memory consumption.

  • FIXME: go-evolution.org has several pages with memory analyses. Find them, link to them from here.

Finish the Evolution disk-summary branch

Difficulty: hard.

There is an unfinished "disk-summary" branch in the CVS repository for Evolution, which aims to keep most of the summary data on disk while Evolution runs instead of holding it all in RAM. You may want to collaborate with someone working on the task "Evaluate memory consumption in the Evolution mailer" to see how this unfinished branch could be used. Your task is to evaluate how finishing this branch would affect memory consumption, and to finish the code in the branch so that it can be merged into the mainline.

Note: Someone needs to figure out how much performance would be lost both on a single-user desktop and in a setup closer to the City of Largo's, which serves Evolution to hundreds of users on the same machine. A happy medium of disk/memory usage may need to be found.


  1. Examine the state of the disk-summary branch and assess how far it is from being finished. Write a report on this.
  2. Measure how much this code improves memory consumption.
  3. Get the same kind of time-space evaluation as in the previous task.


  • FIXME: go-evolution.org describes some of this; find the links and put them here.
  • FIXME: Michael Zucchi had some blog entries or mails describing the status and the work to be done; find the links and put them here.

Make it faster to switch components in Evolution

Difficulty: medium.

Even on a fast machine, Evolution takes slightly more than 1 second to switch between its components (Mailer, Calendar, Addressbook, Tasks, Memos). Your task is to profile the action of switching between components, and fix this so that the switch happens in under 0.2 seconds.


  1. Automate the task of switching between components: "while (1) { switch_to_mailer (); switch_to_calendar (); }"
  2. Modify the code so that it is easy to tell when the switch is really complete, i.e. when all the widgets have finished repainting.
  3. Use a global profiler such as Sysprof or OProfile to see which processes are involved in the switch, and to identify the parts that take the most time. The slowness can be in Evolution, evolution-data-server, the X server, GTK+ itself, etc.
  4. Fix the most important problematic parts, and go back to step 3 until the goal of switching in 0.2 seconds is achieved.
  5. Write a short report with your findings. If you uncover major problems, this should include the details of what you think needs fixing.


  • Sysprof is by far the easiest global profiler.

  • OProfile can extract more data about the system, but its output is not as easy to understand.
  • GtkWidgetProfiler is an unfinished tool in GTK+ which lets you know when a widget is finished repainting. Get GTK+ from CVS and look at gtk+/perf/gtkwidgetprofiler.h.

Reduce the startup time of gnome-panel and its applets

Difficulty: easy to medium.

Gnome-panel and its applets take a substantial amount of time to start up. This is the time from the panel being exec()ed to all the applets being fully loaded and painted on the screen. Your task is to identify the slow parts of this process, and fix them.


  1. Ensure that your system has an up-to-date icon cache, and that you are using a preloading scheme such as SUSE's "preload" or Fedora's "readahead" to populate the buffer cache before login.
  2. Instrument the panel and its applets so that you can plot timelines of the startup process. You'll want to know things such as:

    • When the panel starts its basic initialization, and when it finishes.
    • When the panel starts loading applets.
    • When the panel finishes loading applets.
    • When individual applets get started, initialized, and painted.
    • Whether applets make each other wait to be loaded, or whether it is more or less all simultaneous.
  3. Fix the problematic parts of the panel and applets.
  4. Write a report with the details of your findings.

Add support for batched requests to GConf

Difficulty: easy to medium.

Internally, GConf uses a CORBA API to communicate between clients and the GConf daemon. Clients can only fetch a single key/value pair at a time, so programs which must load many configuration options at startup (e.g. Metacity) need to do a large number of round-trips to the GConf daemon. This causes many context switches, especially at login time, when every program fetches a bunch of configuration keys.

There is an unimplemented API in the GConf daemon to do batch requests: you could say "give me the values for this set of keys: {/foo/bar/key1, /foo/bar/key2, /foo/bar/key3}", and this would happen with a single round-trip.

Your task is to finish this batch API, and make the precaching feature of gconf_client_add_dir() use this batch API.


  1. See if the existing unimplemented batch API is all that is needed (gconf/gconf/GConfX.idl:ConfigDatabase::batch_lookup()).
  2. Finish implementing the batch API (gconf/gconf/gconf-database.c:impl_ConfigDatabase_batch_lookup()).
  3. Make the client-side code use the batch API for its precaching functions (gconf_client_add_dir()).
  4. Probably expose a new API in GConfClient to use the batch API directly from client apps.
  5. See if this helps performance when starting up applications. This can be done with a small benchmark that fetches a bunch of keys one by one (many round-trips) and compares that to a version that uses the batch API to fetch the same keys in one round-trip.

Outreach/SummerOfCode/2006/Performance (last edited 2013-12-03 18:31:49 by WilliamJonMcCann)