• Experience the slowness: boot your Linux box with a "mem=64M" argument to the kernel. Try to run Gnome and a few apps.
  • Run some enlightening exercises. Start up Firefox. Note down how much memory your X server uses. Go to a web page heavy on graphics. See how much the X server uses now. Kill Firefox, and check the X server memory again.
  • Get a classification of where the allocations on startup are coming from. Memprof is nice for this, as it lets you see the stack trace at each allocation.
  • See how many resources in the X server we use. Use xrestop for this.
  • Might be worth integrating the gdb libmmalloc package somewhere more visible, like glib. It lets you allocate multiple heaps in one app, a feature Win32 developers have had forever. This can help with fragmentation: if you know you are about to do lots of allocations of varying sizes, you can set up a new heap for them, fragment it, then release it all in one go. That way you never get a block sitting at the top of the heap preventing the heap size from going down.
  • Figure out a "reference desktop configuration" (which panel applets, which apps, which documents) that can be used to get reproducible numbers during testing. This would also allow us to plot our progress over time. LuisVilla4: At least in theory (we'll see when this actually happens) I'm working on setting up tinderbox + LDTP to do reproducible testing of core GNOME apps. This could presumably be tied in to memory/performance testing if someone wanted to do the work.

  • Debug and re-enable saving of profiles in Memprof.
  • Write tutorials on how to profile and fix example apps. A good example is Callum's Massif/Valgrind tutorial.

  • Figure out how the "fat" applications use their memory (Evolution, Firefox, OpenOffice.org). Many apps have data caches that never evict any data, so the caches grow unboundedly.

  • Do we need a desktop-wide mechanism to indicate memory pressure? Something would notify apps that memory is running low; upon receiving that notification, they would purge their caches or compact their data.
    • MikeHearn: Windows has something like this, but it's extremely obscure (some window message). Actually I can't even find it right now. I doubt it'd have much impact except at the toolkit level, though if glib provided some cache abstraction that knew how to respond to low memory conditions it might help.

    • BenMaurer: There is a newer Win32 API for this: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/memory/base/querymemoryresourcenotification.asp But it is only available for XP/2003 Server, so it is not being used on the Windows desktop today. But I know .NET will try to do a gen2 (full) GC when it senses memory is running low. Not sure how it does that.

    • MikeHearn: maybe Linux could expose an equivalent API that returns a file descriptor which can be put into the main loop. Then you could connect a signal handler to some object to get notifications when memory is low/high.

  • Firefox bugs:
  • Try to reduce the number of shared library dependencies of the core libraries. For example on FC3 libgnomeui-2 links to libgnomevfs-2 which in turn links to libk5crypto, libgssapi_krb5, libcrypto, libssl and libkrb5. So each binary that links to libgnomeui-2 will link about 1.5MB of text and 100KB of writable mappings (non-shareable)! Most binaries won't use the above libraries at all. Can some libraries be dlopened?
    • MikeHearn: libraries can be trivially dlopened using relaytool weak linking, which makes it automatic (just a build system modification), but would this actually make a real difference? Linking happens on demand for C libraries, so the amount of text mapped shouldn't be an issue (it won't be linked or faulted in if not used).

      • Maps have to be created, so they have a non-zero cost. Extra libraries bring in extra symbols that need to be relocated and also increase the cost of the current relocations (see section 1.5.2 in Ulrich Drepper's dsohowto.pdf).
        • Yes, but an mmap is just rearranging the kernel's internal data structures; it's really not expensive. Extra symbols in the search scope would be the main slowdown, as relocation is done on demand for C libraries.

A list of hacks

This is a list of possible hacks that one could do to help with reducing memory usage. It is organized first by area (toolkit, desktop, application) and then by difficulty (easy, medium, master-hackers-only).



  • Take a look at all the strdup's in the type system. Many of these are duplications of constant strings. It shouldn't be all that hard to avoid them. GTK+ addresses this with the G_PARAM_STATIC_NAME, G_PARAM_STATIC_NICK and G_PARAM_STATIC_BLURB flags for properties, and by using string interning with g_intern_static_string() (since glib 2.10).
  • Another function that can save a few unnecessary strdups is gdk_atom_intern_static_string()
  • /MakeBonoboNotLoadLocaleDotAlias


  • Add code that lets people look at the usage of list and hashtable nodes. It should be able to answer
    • Who allocates nodes?
    • Are we getting a lot of temporary allocations, followed by lots of frees?
  • Add code that looks at memory usage as the toolkit starts up.
  • Add a GMemPool. This allows a bunch of variously sized allocations to be put in one pool. It could be used to reduce fragmentation for, e.g., the type system. It can avoid the overhead of many small malloc calls.
  • The type system may contain bunches of small copied strings. Can we batch some of them into appropriate GStringChunks to avoid the malloc() overhead?
  • Investigate who is doing so many mallocs in a Hello World. Why are there so many temporary allocations? These take up time during startup.
  • Look at objdump -x foo.so and grep for .data. This data is mmap'd read-write at runtime. Many of the instances are const data structures that have strings. We should use techniques to do relocationless structures (FIXME: add more). As a simple case, strings declared as "static const gchar *foo" end up in .data, while strings declared as "static const gchar foo[]" go to .rodata. http://bugzilla.gnome.org/show_bug.cgi?id=75754 is a GTK+ bug related to this topic.

  • Collect desktop-wide statistics about icons. Which icons are commonly used, and in what sizes? What icons are used in sizes that are not available in the icon theme? Making sure that icon themes provide all the commonly used sizes of an icon reduces desktop-wide memory usage, since it means that the icons can be used directly out of the icon cache (which is shared between all applications), and don't have to be copied and scaled in each process. The relevant function to hook into for this kind of statistics is icon_info_ensure_scale_and_pixbuf in gtkicontheme.c

Master Hackers only

  • Look at toolkits at a very low level, such as fontconfig and freetype. For `Hello World' these seem to take up about 150 KB of mallocs, combined. Much of that data looks mmap-able. Note that memory consumption of freetype and fontconfig will be addressed in freetype 2.2 and fontconfig 2.4.



  • Get a list of processes that start up with GNOME and how much memory they use.


Master Hackers only

  • Could something be done to reduce applet memory consumption? Having a separate executable for, e.g., the clock is going to duplicate a lot of memory.
  • Look into some of the messy daemon processes to see what kind of memory they are using and how it can be reduced.



  • File reproducible bugs for when applications start eating up memory.


  • Find a reproducible way to create a "standard desktop" and look at its memory usage.
  • Profile memory usage of major applications, e.g., Evolution.

Master Hackers only

  • Create an automated tool that could build from CVS and measure the desktop wide footprint.
  • Set up automated tests that look at long term memory usage of the desktop.

Initiatives/MemoryReduction/Tasks (last edited 2013-11-22 21:16:20 by WilliamJonMcCann)