
An Introduction to NuoDB's Cache

We get a lot of questions about the caching part of our architecture, and about whether NuoDB is an “in-memory” database. After all, there are many different types of caches, and caching is a key element of how a system performs. This is the first of a few entries that will explain what caching really means in NuoDB, how it’s different from other systems and what that means to you as a user.


What the cache is

Recall that a NuoDB database is made up of a set of running processes. There are Transaction Engines responsible for the transactional logic and Storage Managers that handle durability. While there are several caches in our system, the focus for this discussion is the object cache in the TEs.

When a running transaction needs some object, it looks first in the local cache. So the first thing the cache provides is fast access to previously-requested data. If the object isn’t available locally but is in memory at some other TE, the transaction can get the object from that TE. This means that the cache is hierarchical, providing options other than just going to disk on a cache miss.

If no TE has the object in memory, the transaction will request the object from an SM. Likewise, when a TE’s local cache is filling up, it can drop objects (using a Least Recently Used algorithm) knowing that the committed versions are already durable at the SMs. In other words, the cache does not need to keep the entire database in memory, and failure of a given TE has no effect on the availability, durability or consistency of the database.
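The lookup hierarchy described above can be sketched as follows. This is a minimal illustration, not NuoDB’s actual implementation: the TECache class, its peers list and the plain dict standing in for a Storage Manager are all hypothetical names.

```python
from collections import OrderedDict

class TECache:
    """Sketch of the lookup hierarchy: local cache -> peer TEs -> SM."""

    def __init__(self, capacity, peers, storage_manager):
        self.capacity = capacity
        self.peers = peers            # other TECache instances
        self.sm = storage_manager     # dict standing in for a durable SM
        self.cache = OrderedDict()    # insertion order tracks recency

    def get(self, key):
        # 1. Fast path: the object is already in this TE's local cache.
        if key in self.cache:
            self.cache.move_to_end(key)   # mark as most recently used
            return self.cache[key]
        # 2. Ask peer TEs before going anywhere near a disk.
        for peer in self.peers:
            if key in peer.cache:
                return self._admit(key, peer.cache[key])
        # 3. Fall back to a Storage Manager.
        return self._admit(key, self.sm[key])

    def _admit(self, key, value):
        # Evict least-recently-used entries; this is safe because the
        # committed versions are already durable at the SMs.
        while len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)
        self.cache[key] = value
        return value
```

The key point is step 2: another TE’s memory is consulted before any disk access, and eviction in `_admit` never threatens durability because the SM already holds the committed version.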

Because the TE works entirely on in-memory data, and is decoupled from the durability part of the system, the internal representation of the data is also decoupled from the on-disk format. In other words, this is a hybrid architecture that works like an in-memory database but provides true ACID properties. I’ll leave the details of this to a follow-on post.

There are a few other things to note here. First, we’re talking about an on-demand, shared-nothing cache. There’s no explicit k-safe replication scheme or any other attempt to maintain a minimum number of replicated copies. If you page in some object at more than one TE that means it may be more quickly available, but there’s no explicit cross-talk to maintain some number of replicas and the reliability of the system doesn’t depend on how objects are cached.

Second, each TE is maintaining an independent cache. That means that as you bring more TEs into the database you’re not only increasing throughput and availability but also your aggregate cache size. You get to choose how much you want to keep in-memory simply by deciding how much memory to buy.
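As a back-of-the-envelope illustration of that last point (the numbers here are made up, not NuoDB sizing guidance), aggregate cache capacity simply adds up across TEs:

```python
def aggregate_cache_gb(per_te_gb, num_tes):
    # Each TE maintains an independent cache, so total in-memory
    # capacity grows linearly as you add TEs to the database.
    return per_te_gb * num_tes

print(aggregate_cache_gb(48, 1))   # 48
print(aggregate_cache_gb(48, 4))   # 192
```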

Finally, “the cache” is not an add-on to our system. It is core to the system, and it’s something you get for free. There’s nothing additional to configure, nothing explicit to tune and no new processes to monitor. It’s just part of what the TE does to process transactions. I think that’s very cool. It means that you can just assume a good caching behavior and get on with developing your database applications.


What the cache is not

Now that I’ve walked you through what our caching component provides, let’s talk a little bit more about what the cache is not trying to do.

I’ve already said that it’s not an explicitly replicated cache. Safety and availability don’t come from trying to maintain multiple copies, or from replicating every object in every cache. Products like Coherence or GemFire can be configured to do this, but doing so requires explicit configuration and programming. They also give you the option of running as a distributed, partitioned cache.

That’s another thing that NuoDB doesn’t make you think about. There is no explicit sharding required to scale the system, and there is no need to define some kind of partitioning strategy. You can easily add or remove processes without any re-balancing or re-assignment of keys.

Because the architecture is already designed with a tiered approach, the caching component isn’t trying to keep all objects in memory and therefore isn’t limited by the available RAM in your cluster. This differentiates NuoDB from true in-memory systems like MemSQL, or products like TimesTen that can be connected to a specific backing database but are designed for datasets that fit in memory.

Another advantage of having the caching layer built into the architecture is that TEs understand how to cache as efficiently as possible, and in a native format. Compare that to a separate, layered cache like memcached, which is very good at storing simple key-value pairs for write-through but requires the application to know how to identify these objects and invalidate them when appropriate.
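To make that contrast concrete, here is a sketch of the cache-aside pattern an application typically has to implement around a layered cache. Everything here is hypothetical: the class, the "user:{id}" key scheme, and the plain dicts standing in for memcached and the backing database.

```python
class AppManagedCache:
    """Sketch of a layered cache: the application chooses the keys
    and must remember to invalidate them on every write."""

    def __init__(self):
        self.kv = {}    # stands in for memcached
        self.db = {}    # stands in for the backing database

    def read_user(self, user_id):
        key = f"user:{user_id}"        # app-chosen key scheme
        if key in self.kv:
            return self.kv[key]
        value = self.db.get(user_id)   # cache miss: hit the database
        self.kv[key] = value
        return value

    def update_user(self, user_id, value):
        self.db[user_id] = value
        # The application, not the cache, is responsible for
        # invalidation; forgetting this line serves stale data.
        self.kv.pop(f"user:{user_id}", None)
```

With a cache built into the database engine, the invalidation step in `update_user` is exactly the kind of application-level bookkeeping that disappears.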


So, to summarize

NuoDB uses a built-in, on-demand caching scheme. All transactional operations are done on in-memory data to give you good performance and reliability guarantees, while durability and data availability are backed by always-consistent durable stores. Perhaps most importantly, it’s just a piece of the architecture you get out of the box, with no additional configuration or application tweaking.

That’s all well and good, but it’s still pretty high-level. You probably want to know more about what’s really going on, what the performance implications are and how this compares to some similar products. I’m going to stop talking now and tag-off to Trek. Trek: tag, you’re on for the next installment!
