The CamelDataCache in the disksummary branch has been enhanced slightly over the version in the main branch.

Now it works using a rendezvous method, in the same manner as the Evolution/Camel.Object#Camel.ObjectBag code does, so that multiple threads can properly arbitrate access to the same object without any extra overhead. It also implements transaction semantics, so that the calling code doesn't have to do it itself.

Base class

The base class has a single virtual method:

 char *(*path)(CamelDataCache *cmc, const char *path, const char *key);

This is used to create the full pathname of the final resting place of the object identified by key. This allows sub-classes to vary the layout of the cache.

By default, a hash of the key is used as a path component, followed by a munged version of the key, from which characters that are not filesystem-safe have been removed.


Creating a cache is simple: you give it a location. Currently there are no flags defined, so pass 0.

 CamelDataCache *camel_data_cache_new(const char *path, guint32 flags, CamelException *ex);

Then there are functions to control its behaviour. Basically you can set a minimum limit on how long items will remain in the cache, based either on their creation time or their last access time. Note that this is only a minimum; objects may last much longer depending on the cache usage, and size is never tracked.

 void camel_data_cache_set_expire_age(CamelDataCache *cache, time_t when);
 void camel_data_cache_set_expire_access(CamelDataCache *cdc, time_t when);
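The cut-off decision an expiry sweep has to make can be modelled in a few lines: an item is stale once its creation time or last access time is older than the configured age. This sketch uses plain stat(2) fields; the function name, the use of (time_t)-1 to mean "disabled", and the sweep itself are assumptions for illustration, not Camel internals.

```c
#include <sys/stat.h>
#include <time.h>

/* Return non-zero if the cache file should be expired.
 * expire_age   - maximum age since creation (st_mtime), or (time_t)-1
 * expire_access - maximum age since last access (st_atime), or (time_t)-1 */
int cache_item_expired(const char *filename, time_t expire_age,
                       time_t expire_access, time_t now)
{
    struct stat st;

    if (stat(filename, &st) == -1)
        return 1;               /* missing counts as already gone */
    if (expire_age != (time_t)-1 && now - st.st_mtime > expire_age)
        return 1;
    if (expire_access != (time_t)-1 && now - st.st_atime > expire_access)
        return 1;
    return 0;
}
```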

A helper function for sub-classes to find the file name of a given item.

 char *camel_data_cache_path(CamelDataCache *cache, const char *path, const char *key);

The get method is the heart of the cache. It will resolve the item and look it up in the committed results. If it exists, a new O_RDONLY stream is returned from which the object can be read. If not, it will then check whether any other thread is currently creating this object; if so, it will wait until that thread has finished its job.

If, after all of that, the object still doesn't exist, and reserve is supplied, then it will create a new writable stream, store it in reserve, and return NULL. The reserved stream can then be used to create the cache entry, and committed or aborted as appropriate.

 CamelStream *camel_data_cache_get(CamelDataCache *cdc, const char *path, const char *key, CamelStream **reserve, CamelException *ex);
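The get/reserve protocol can be modelled with plain file descriptors instead of CamelStreams: try the committed location first; if the object is missing and the caller asked to reserve, open a writable temporary alongside it. The function and path names here are hypothetical, and the rendezvous/waiting step is omitted for brevity.

```c
#include <fcntl.h>
#include <unistd.h>

/* Model of the get/reserve calling convention:
 * - returns a read-only fd if the committed object exists;
 * - otherwise, if reserve_fd is non-NULL, opens a writable temporary
 *   into *reserve_fd and returns -1 (caller must fill then
 *   commit or abort);
 * - otherwise just returns -1. */
int model_cache_get(const char *final_path, const char *tmp_path,
                    int *reserve_fd)
{
    int fd = open(final_path, O_RDONLY);
    if (fd != -1)
        return fd;              /* committed copy exists */
    if (reserve_fd) {
        *reserve_fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        return -1;              /* caller is now the creator */
    }
    return -1;
}
```

Passing NULL for the reserve argument gives a pure read-only probe, exactly as with camel_data_cache_get.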

The commit or abort functions MUST be called on the reserve stream if one was created. This will either save or discard the object. The stream MUST NOT have any additional references; it cannot be used again after these calls, nor can it be unreffed elsewhere.

 void camel_data_cache_commit(CamelDataCache *cdc, CamelStream *stream, CamelException *ex);
 void camel_data_cache_abort(CamelDataCache *cdc, CamelStream *stream);

This will unconditionally remove an item from the cache. If another thread is currently creating the item, then it will be lost, regardless of whether or not that thread commits it.

 void camel_data_cache_remove(CamelDataCache *cache, const char *path, const char *key);


Internally newly created items are stored in a "tmp" path, much like the way Maildir operates. The client code then writes to this as necessary.

Only once the item is committed is it moved to the proper location - the one looked at first when getting from the cache.
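The commit and abort halves of this Maildir-style scheme reduce to two system calls: rename(2), which is atomic on POSIX filesystems, publishes the object under its final name, and unlink(2) discards a partial one. Readers therefore never observe a half-written object. The function names are illustrative, not Camel API.

```c
#include <stdio.h>
#include <unistd.h>

/* Commit: atomically move the finished item from its "tmp" name to its
 * final, visible name.  Returns 0 on success, -1 on error. */
int model_cache_commit(const char *tmp_path, const char *final_path)
{
    return rename(tmp_path, final_path);
}

/* Abort: discard the partially-written item. */
int model_cache_abort(const char *tmp_path)
{
    return unlink(tmp_path);
}
```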

So this implements a simple, reliable, lockless persistent cache of filesystem objects.


This code is much more useful and reliable than the original version which didn't do much more than create a path and expire old files.

Apps/Evolution/CamelDS.DataCache (last edited 2013-08-08 22:50:07 by WilliamJonMcCann)