Smarter caching of Django QuerySets

Let me start by saying I love the Django ORM.  It does an excellent job of “just working” and hiding all the dirty SQL business required to get what you need out of your database.  There are times, however, when it doesn't behave exactly like you'd want.  One of these that I ran into recently was the behavior of caching (or, more specifically, pickling) QuerySets.  By default Django evaluates QuerySets fully whenever they are pickled.  I often find myself displaying the first few dozen rows from QuerySets that contain tens of thousands of rows.  Each request only fetches the rows it needs, but we have to redo the query on each page load.  Caching the QuerySet and sharing it between requests would solve this problem, but if we only need the first few rows, caching all 10,000 doesn't make sense.

What we need is a QuerySet that can be partially evaluated, cached, and shared between requests. There are a few technical challenges to overcome, so let’s get started by understanding how QuerySets work.
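
To set the stage, here is roughly the pattern we are aiming for. This is only a sketch; the helper, the 'all_users' cache key, and the page size are hypothetical, and the plumbing that makes the last line behave sensibly is what the rest of this post builds.

from itertools import islice

from django.contrib.auth.models import User
from django.core.cache import cache

def first_users(count=25):
    #Hypothetical helper: reuse a partially evaluated QuerySet across requests.
    qs = cache.get('all_users')
    if qs is None:
        qs = User.objects.all()
    rows = list(islice(iter(qs), count))  #iterating fills qs._result_cache as we go
    cache.set('all_users', qs)            #with a stock QuerySet, pickling here evaluates everything
    return rows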

The User model is a convenient one to work with, so let’s create some users. We need enough that they won’t all be pulled down in one chunk.

>>> from django.contrib.auth.models import User
>>> for i in range(1000):
...     User.objects.create(username="test%d" % i)
... 
<User: test0>
<User: test1>
<User: test2>
...
<User: test999>
>>>

Now that we have some users to work with, let's see what happens with the QuerySet and its _result_cache. We can use connection.queries to see what queries are hitting the database (note: DEBUG must be set to True).

Let’s see what happens when we try to get a specific record:

>>> from django.db import connection
>>> connection.queries = [] #Clear user creation queries from before
>>> qs = User.objects.all()
>>> qs[50]
<User: test50>
>>> connection.queries
[{'time': '0.001', 'sql': u'SELECT `auth_user`.`id`, `auth_user`.`username`, `auth_user`.`first_name`, `auth_user`.`last_name`, `auth_user`.`email`, `auth_user`.`password`, `auth_user`.`is_staff`, `auth_user`.`is_active`, `auth_user`.`is_superuser`, `auth_user`.`last_login`, `auth_user`.`date_joined` FROM `auth_user` LIMIT 1 OFFSET 50'}]
>>> qs._result_cache is None
True

We see that Django executed a query limited to exactly the row we asked for and didn't populate _result_cache at all. Now let's see what happens when we start iterating over a QuerySet:

>>> connection.queries = [] #Clear the queries again.
>>> for i in qs:
...     break
... 
>>> connection.queries
[{'time': '0.038', 'sql': u'SELECT `auth_user`.`id`, `auth_user`.`username`, `auth_user`.`first_name`, `auth_user`.`last_name`, `auth_user`.`email`, `auth_user`.`password`, `auth_user`.`is_staff`, `auth_user`.`is_active`, `auth_user`.`is_superuser`, `auth_user`.`last_login`, `auth_user`.`date_joined` FROM `auth_user`'}]
>>> len(qs._result_cache)
100

This time there's nothing in the SQL limiting how many results we get, yet only 100 records ended up in the result cache even though we know there are 1000. Let's see what's actually going on:

class QuerySet(object):
    ...
    def __iter__(self):
        if self._result_cache is None:
            self._iter = self.iterator()
            self._result_cache = []
        if self._iter:
            return self._result_iter()
        # Python's list iterator is better than our version when we're just
        # iterating over the cache.
        return iter(self._result_cache)

    def _result_iter(self):
        pos = 0
        while 1:
            upper = len(self._result_cache)
            while pos < upper:
                yield self._result_cache[pos]
                pos = pos + 1
            if not self._iter:
                raise StopIteration
            if len(self._result_cache) <= pos:
                self._fill_cache()

Since the cache has not yet been populated, __iter__ sets self._iter to self.iterator() (this is the method that pulls data from the database and converts each row into a model object; more on it below). _result_iter then calls self._fill_cache() to populate _result_cache, loops back around, and yields objects from the cache it just filled. Let's take a look at what _fill_cache does:

    ...
    def _fill_cache(self, num=None):
        """
        Fills the result cache with 'num' more entries (or until the results
        iterator is exhausted).
        """
        if self._iter:
            try:
                for i in range(num or ITER_CHUNK_SIZE):
                    self._result_cache.append(self._iter.next())
            except StopIteration:
                self._iter = None

QuerySet._fill_cache fills up self._result_cache from self._iter (which as we saw earlier was set to self.iterator()). We’re finally getting close to actually hitting the database but we’re not quite there yet.

A lot goes on in QuerySet.iterator, but suffice it to say it is the mechanism whereby database rows are converted into model instances:

class QuerySet(object):
    ...
    def iterator(self):
        ...
        compiler = self.query.get_compiler(using=self.db)
        for row in compiler.results_iter():
            ...
            yield obj

The QuerySet class has a Query object which in turn has a SQLCompiler object. SQLCompiler.results_iter returns a generator that spits out database rows it gets from execute_sql.

class SQLCompiler(object):
    ...
    def results_iter(self):
        ...
        for rows in self.execute_sql(MULTI):
            for row in rows:
                ....
                yield row

SQLCompiler.execute_sql returns an iterator that makes successive calls to cursor.fetchmany() to pull data in chunks of GET_ITERATOR_CHUNK_SIZE (100 by default):

    ....
    def execute_sql(self, result_type=MULTI):    
        ...
        result = iter((lambda:
                  cursor.fetchmany(GET_ITERATOR_CHUNK_SIZE)), 
                  self.connection.features.empty_fetchmany_value)

If we iterate past the objects available in the cache, more rows are yielded from cursor.fetchmany() and converted to model objects in QuerySet.iterator().

>>> connection.queries = []
>>> for i, o in enumerate(qs):
...     if i == 101:
...         break
... 
>>> connection.queries
[]
>>> len(qs._result_cache)
200

Note that it doesn't need to issue any more queries; instead it pulls another 100 rows with fetchmany, converts them into model instances, and appends them to _result_cache.

Note: QuerySets have special handling for slicing. If instead of iterating up to 101 we had sliced up to 101, _result_cache would only have been filled to the point necessary to fulfill the slice request.
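
A quick illustration of that point (a sketch only; the exact cache sizes depend on ITER_CHUNK_SIZE and your Django version, so the numbers are approximate):

from django.contrib.auth.models import User

qs = User.objects.all()
for obj in qs:
    break                         #the first chunk (about 100 rows) is now cached
rows = qs[:150]                   #__getitem__ tops the cache up just far enough
print(len(qs._result_cache))      #roughly 150, nowhere near the full 1000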

Things are looking good at this point. The QuerySet already handles partial evaluation and evaluates more rows as required. We just need to share this partially evaluated QuerySet between requests, which means putting it either in the session or in the cache. Either way, the QuerySet will have to be pickled, so let's try that and see what happens.

>>> import cPickle
>>> pickle_str = cPickle.dumps(qs)
>>> qs = cPickle.loads(pickle_str)
>>> len(qs._result_cache)
1000

Unfortunately, as I mentioned earlier, Django fully evaluates the QuerySet when we pickle it:

class QuerySet(object):
    ...
    def __getstate__(self):
        """
        Allows the QuerySet to be pickled.
        """
        # Force the cache to be fully populated.
        len(self)

        obj_dict = self.__dict__.copy()
        obj_dict['_iter'] = None
        return obj_dict

There are two reasons why it's tricky to pickle a partially evaluated QuerySet. First, you cannot pickle generators, and the QuerySet actually relies on two of them (one in the QuerySet itself and one in the SQLCompiler). Second, even if you could, it still wouldn't work, because fetching more rows assumes the database connection is still open and still remembers which row we were on. To avoid these issues Django fully evaluates the QuerySet and stops iterating when it is pickled. What we have to do instead is stop iterating with an only partially populated cache and fetch further records with a second query rather than pulling more rows from the original one.
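
The first point is easy to demonstrate on its own (the exact error message varies between Python versions):

import cPickle

def gen():
    yield 1

try:
    cPickle.dumps(gen())
except TypeError as e:
    print(e)   #something along the lines of "can't pickle generator objects"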

This requires the QuerySet to be in a somewhat unnatural state. Normally QuerySets fall into one of three states:

  • Completely unevaluated (i.e. no cache and not iterating)
  • Partially cached and currently iterating
  • Fully cached and not iterating

What we want is for it to be partially cached and not iterating. There are a bunch of places where this causes problems. An easy way around them is to make the QuerySet start iterating again as soon as it is unpickled. This confuses Django less and doesn't change how the QuerySet behaves. The behavior we want is that iterating over the QuerySet retrieved from the cache starts from the beginning, pulling from the precached values and only going back to the database when it needs to. The default behavior is already very close to this: if the QuerySet has enough cached data to fulfill the request it returns that; if it doesn't, it fills up its cache a bit more in _fill_cache. The only difference is that a QuerySet retrieved from the cache cannot have any active generators, nor can it remember where the SQLCompiler's generator left off in fetching rows. Instead we can call set_limits on the QuerySet's query. set_limits would normally be used for slicing, but it serves this purpose just fine. The one thing we need to worry about when using set_limits is count(), since a query with limits applied reports the count of the limited slice rather than of the full result set. For convenience I decided to cache the count before pickling and return that instead of trying to get fancy.
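
To make the set_limits trick concrete, here is a rough sketch (the exact LIMIT/OFFSET syntax in the generated SQL depends on the database backend):

from django.contrib.auth.models import User

qs = User.objects.all()
qs.query.set_limits(low=100)   #skip the rows we would already have cached
print(qs.query)                #SELECT ... FROM auth_user ... OFFSET 100 (backend-dependent)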

After all that preamble, here’s the money-shot:

from django.db.models.query import QuerySet
from django.db.models import Q

class SmartCachingQuerySet(QuerySet):
    """
    This works pretty much exactly like a regular QuerySet
    except that when you pickle it, it does not fully evaluate
    itself; instead it evaluates only DEFAULT_PREFETCH_COUNT
    records.
    """
    DEFAULT_PREFETCH_COUNT = 100
    def __init__(self, *args, **kwargs):
        self.cached_count = None
        super(SmartCachingQuerySet, self).__init__(*args, **kwargs)
        
    def __getstate__(self):
        """
        Override the default behavior of the Django QuerySet
        to only partially evaluate before pickling.
        """
        if not self._result_cache:
            self.prefetch()

        #We need to cache the count. Note, we might be
        #able to be smarter about this but why bother?
        self.count()
        
        obj_dict = self.__dict__.copy()
        obj_dict['_iter'] = None
        return obj_dict

    def __setstate__(self, in_dict):
        """
        Restore the iterator and set the limits so that further
        fetches pick up where the cached results leave off.
        """
        self.__dict__ = in_dict
        self.__dict__['_iter'] = self.iterator()
        self.__dict__['query'].set_limits(low=len(self.__dict__['_result_cache']))

    def prefetch(self, num_to_prefetch=None):
        """
        Use Django's built in functionality to prefetch
        a number of content models from the db
        """
        self.__iter__()
        self._fill_cache(num_to_prefetch or self.DEFAULT_PREFETCH_COUNT)

    def count(self):
        #Use "is not None" so a cached count of zero is honored too.
        if self.cached_count is not None:
            return self.cached_count

        self.cached_count = self.query.get_count(using=self.db)
        return self.cached_count

    def _filter_or_exclude(self, negate, *args, **kwargs):
        """
        Override the normal behavior of _filter_or_exclude, which
        would see that limits have been set and raise an error.
        Since we're handling this issue in _clone, we don't
        need to do the check.
        """
        clone = self._clone()
        if negate:
            clone.query.add_q(~Q(*args, **kwargs))
        else:
            clone.query.add_q(Q(*args, **kwargs))
        
        return clone
            
    def order_by(self, *field_names):
        """
        Override order_by for the same reason we overrode _filter_or_exclude.
        """
        qs = self._clone()
        return super(SmartCachingQuerySet, qs).order_by(*field_names)

    def _clone(self, klass=None, setup=False, **kwargs):
        #Store these values and clear them so we can safely clone.
        query_limits = (self.query.low_mark, self.query.high_mark)
        result_cache = self._result_cache
        self.query.clear_limits()
        self._result_cache = None
        cloned_qs = super(SmartCachingQuerySet, self)._clone(klass, setup, **kwargs)

        #Restore stuff.
        self.query.set_limits(*query_limits)
        self._result_cache = result_cache
        return cloned_qs

Let’s take a quick look at how this works:

>>> from django.db import connection
>>> from django.core.cache import cache
>>> from django.contrib.auth.models import User
>>> qs = SmartCachingQuerySet(model=User)
>>> cache.set('test',qs)
>>> connection.queries
[{'time': '0.026', 'sql': u'SELECT `auth_user`.`id`, `auth_user`.`username`, `auth_user`.`first_name`, `auth_user`.`last_name`, `auth_user`.`email`, `auth_user`.`password`, `auth_user`.`is_staff`, `auth_user`.`is_active`, `auth_user`.`is_superuser`, `auth_user`.`last_login`, `auth_user`.`date_joined` FROM `auth_user`'}, {'time': '0.000', 'sql': u'SELECT COUNT(*) FROM `auth_user`'}]
>>> connection.queries = [] #Clear the queries
>>> cached_qs = cache.get('test')
>>> len(cached_qs._result_cache)
100
>>> cached_qs.count()
1000
>>> connection.queries
[]
>>> for i,v in enumerate(cached_qs):
...     if i == 101:
...        break
... 
>>> connection.queries
[{'time': '0.016', 'sql': u'SELECT `auth_user`.`id`, `auth_user`.`username`, `auth_user`.`first_name`, `auth_user`.`last_name`, `auth_user`.`email`, `auth_user`.`password`, `auth_user`.`is_staff`, `auth_user`.`is_active`, `auth_user`.`is_superuser`, `auth_user`.`last_login`, `auth_user`.`date_joined` FROM `auth_user` LIMIT 18446744073709551615 OFFSET 100'}]
>>> len(cached_qs._result_cache)
200

We can see that the SmartCachingQuerySet provides the same behavior as a normal QuerySet, but what happens behind the scenes is different. After pulling cached_qs from the cache and iterating past the number of items in _result_cache, it will hit the database again to get the next 100 objects using OFFSET 100 to avoid fetching the same items it already has.

The last thing to verify is that we can filter, exclude, and order cached QuerySets and that they behave appropriately. What we want is for each of these operations to return a new QuerySet with no cache.

>>> filtered_qs = cached_qs.filter(id__gt=10)
>>> filtered_qs._result_cache is None
True
>>> ordered_qs = cached_qs.order_by("-id")
>>> ordered_qs._result_cache is None
True
>>> len(cached_qs._result_cache)
200

This effect is achieved by overriding _filter_or_exclude and order_by to avoid the normal check that would prevent you from cloning a Query that had limits set on it. This does not affect the _result_cache on the original QuerySet.
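
For contrast, a stock QuerySet refuses to do this once limits have been set on its query (a quick illustration; the assertion message may vary by Django version):

from django.contrib.auth.models import User

plain = User.objects.all()[:10]   #slicing sets limits on the underlying query
plain.filter(id__gt=10)           #raises AssertionError: Cannot filter a query once a slice has been taken.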

So there you have it: smarter behavior for caching QuerySets. Obviously, the caveats that apply to caching in general apply here as well. If the underlying data changes, a SmartCachingQuerySet pulled from the cache will not only be stale but may return duplicate results or miss results that a fresh query would have returned. Apply invalidation appropriately.
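
One simple way to keep that in check is to give the cached entry a short timeout and a key derived from the query, so stale partial results age out on their own. This is only a sketch; the helper, key scheme, and timeout are illustrative and not part of the class above.

import hashlib

from django.core.cache import cache

def cache_queryset(qs, timeout=60):
    #Hypothetical helper: key the cache entry off the generated SQL and expire it quickly.
    key = 'smartqs:' + hashlib.md5(str(qs.query)).hexdigest()
    cache.set(key, qs, timeout)
    return key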

I'd like to give a shout-out to my friend Michaux for his assistance. I haven't done so yet, but I plan on putting together a package with a sample app, etc., and making it available. I welcome any feedback!
