Our Approach to Database Backup

If you’ve used any piece of software that stores data then you’ve thought about how to back up that data. If you’re reading this, then I’m sure that you fall into this group and won’t be surprised that we get a lot of questions about how we support backup. This entry is going to answer some of those questions, but it’s also going to talk about the philosophy we bring to data backup and what that means for the future of the product.

Before going any further I’ll state something that may be obvious: part of being in the “NewSQL” camp is applying new approaches to old problems. That’s really what the “new” part of the category means. We’ve talked a lot on this blog about how our architecture is a new approach. We’ve also shown how a new architecture lets you tackle old problems in new ways.

When you step back and try to re-imagine solutions it’s not always the architecture that changes, or if it does then that change enables you to re-think other problems as well. That’s the case when it comes to the question of backup in NuoDB. We want to support traditional models for backup but we’d also like to push the simplicity and scaling curves. In the case of backup that means we’re offering some new, non-intrusive backup models that we think are just another part of solving the overall scaling challenge.

Manual backup in NuoDB

Out of the box today NuoDB supports a couple of models for doing data backup. First, we’re a standard SQL database that works with Hibernate, Entity Framework, PDO, Rails and a bunch of other common frameworks, so any backup utility you’re already familiar with that’s built on these interfaces is going to be pretty easy to use. Obviously this is less efficient when you’re doing bulk backups of large datasets, but it’s a good starting point.
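To make that concrete, here’s a minimal sketch of the export-through-a-standard-interface pattern. It uses Python’s DB-API; sqlite3 stands in for a real NuoDB driver only so the sketch is self-contained, and the table mirrors the great_teas example below. Names and paths are illustrative.

```python
import csv
import os
import sqlite3
import tempfile

def export_table_csv(conn, query, out_path):
    """Run a query and write the full result set to a CSV file."""
    cur = conn.execute(query)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        # Header row taken from the cursor's column metadata.
        writer.writerow([col[0] for col in cur.description])
        writer.writerows(cur)

# sqlite3 stands in for a real driver; the pattern is driver-agnostic.
conn = sqlite3.connect(":memory:")
conn.execute("create table great_teas (name text, style text)")
conn.executemany("insert into great_teas values (?, ?)",
                 [("tieguanyin", "oolong"), ("biluochun", "green"),
                  ("longjing", "green")])
out_path = os.path.join(tempfile.gettempdir(), "great_teas.csv")
export_table_csv(conn, "select * from great_teas", out_path)
```

The same few lines work against any driver that exposes the standard connect/cursor calls, which is exactly why framework-based backup utilities port over so easily.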

Building on those interfaces we provide a bulk data tool called nuoloader. As the name implies it can load data into a NuoDB database, but it can also do bulk exports. For instance, let’s say you start up a new database and then create a new table:

SQL> use example;
SQL> create table great_teas (name string, style string);
SQL> insert into great_teas (name,style) values ('tieguanyin', 'oolong');
SQL> insert into great_teas (name,style) values ('biluochun', 'green');
SQL> insert into great_teas (name,style) values ('longjing', 'green');

You can do an export of your table like this:

nuoloader tea@localhost --user dba --password oolong --schema example --export "select * from great_teas" --to /tmp/great_teas.csv

If you go look at the output file, what you’ll see is the CSV for the table:

tieguanyin,oolong
biluochun,green
longjing,green
Nothing fancy, but enough to give a database-neutral version of the data in a text form that is easy to save, script and use as the basis for whatever backup automation you want to build out. The rest of this entry is going to talk about the approaches that we’ll automate for you; I started with the simplest piece of our backup story because a lot of people don’t want to know about all the fancy, automated features if they can’t roll their own when they really need to.
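If you do want to roll your own, a thin wrapper around nuoloader is all it takes to get scheduled, timestamped exports. A sketch, assuming nuoloader is on your PATH; the database name, credentials and output directory are illustrative:

```python
import datetime
import subprocess

def build_export_cmd(db, schema, query, out_path,
                     user="dba", password="oolong"):
    """Assemble the argv list for the nuoloader export shown above."""
    return ["nuoloader", db,
            "--user", user, "--password", password,
            "--schema", schema,
            "--export", query,
            "--to", out_path]

def run_timestamped_export(db, schema, query, out_dir="/tmp"):
    """Run an export with a timestamped filename so repeated runs
    accumulate point-in-time exports. Requires nuoloader on PATH."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    out_path = "%s/export-%s.csv" % (out_dir, stamp)
    subprocess.run(build_export_cmd(db, schema, query, out_path),
                   check=True)
    return out_path
```

Drop a call to run_timestamped_export into cron or any scheduler you like and you have periodic logical backups with no extra tooling.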

Seamless backup in NuoDB

Having a scriptable, manual backup tool is great in terms of flexibility but it doesn’t make backup easy out of the box. For that, we’ve taken a different strategy.

Recall that a NuoDB database is a collection of processes: Transaction Engines (caching components that handle SQL requests) and Storage Managers (durable endpoints that manage complete database archives). The minimal viable database has a single SM running. This configuration gives you durability, but for higher availability we suggest you always run at least two SMs.

When you’re running with more than one Storage Manager you’re getting automated replication of your database. Because client interaction is with Transaction Engines, however, the database remains always consistent and always active no matter where a transaction is running. One of the many nice architectural elements of an SM is that it synchronizes automatically on startup, so you can shut one down and restart it with no consistency or durability concerns, and you can add a new SM to a running database at any point with no manual pre-setup. Because we have a tunable commit protocol you can even run remote SMs without impacting the running performance of a database.

The three stages of adding a Storage Manager: starting the process, automatic archive synchronization and active participation.

By running a second Storage Manager you’ve created a second copy of your database. If you’re storing the archive on a filesystem then what you’ve actually done is create a complete on-disk backup of your database, essentially for free. To automate “backup” of your database all you need now is a consistent snapshot of that on-disk data. Our tools and APIs let you automate process management, and our architecture lets you safely take down a redundant Storage Manager at any time, so to get a consistent snapshot all you need to do is:

  1. Make sure your database is running with at least 2 Storage Managers. In the case of our previous 1-TE 1-SM example, just add a second SM:
    nuodb [domain/tea]> start process sm host localhost archive /tmp/tea-backup initialize true
    Started: [SM] freisa.local/ [ pid = 9021 ] ACTIVE
    nuodb [domain/tea]> show database processes
    [SM] freisa.local/ [ pid = 9015 ] RUNNING
    [TE] freisa.local/ [ pid = 9017 ] RUNNING
    [SM] freisa.local/ [ pid = 9021 ] RUNNING
  2. Tell one SM to do a clean shutdown through the management layer:
    nuodb [domain/tea]> shutdown process host localhost pid 9021 graceful true
  3. Copy the local archive somewhere:
    In this example, copy the contents of /tmp/tea-backup to a snapshot directory.
  4. Optionally restart the SM through the management layer if you want it synchronized and available for another future snapshot. Be sure to restart it against the same archive path, this time without initializing your on-disk data:
    start process sm host localhost archive /tmp/tea-backup initialize false

At step 4 the SM will automatically synchronize with any changes it missed and then start participating in the database again. If you’re running on a filesystem that lets you do snapshots or similar optimized synchronization operations, the whole process will take on the order of seconds. If you’re running with at least 3 SMs, or choose to periodically start an SM for the explicit purpose of backup, then step 2 has no effect on availability.
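The copy in step 3 is the only part of the sequence that isn’t a management command, so it’s the natural piece to script. A minimal sketch, assuming the SM was already shut down cleanly in step 2 and that a plain recursive copy is acceptable (a filesystem snapshot would be faster); the directory names are illustrative:

```python
import datetime
import os
import shutil

def snapshot_archive(archive_dir, snapshot_root):
    """Step 3: copy a stopped SM's archive to a timestamped directory.

    The SM must already have been shut down cleanly (step 2); copying a
    live archive would not yield a consistent snapshot.
    """
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(snapshot_root, "tea-" + stamp)
    shutil.copytree(archive_dir, dest)
    return dest
```

Wrap this between the management-layer shutdown and restart from steps 2 and 4 and you have the whole snapshot cycle in one script.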

Of course, you don’t have to be using the local filesystem to do backup. For instance, you might be using S3 as your durable store. In that case, you can use all the tools Amazon gives you to manage your database. Not only do you get an option for automatable backup, but your durable store is already redundant.

Backing up the database is just a matter of taking a redundant SM offline, capturing the archive and then starting a new SM process against the original archive.

The bottom line here is that in three automatable steps you’ve just taken a complete, guaranteed-consistent backup of your database with no downtime, no hit in performance and no special replication to configure. I think that’s pretty neat. This is the kind of “new” stuff that a “NewSQL” solution should give you.

Note that if you’re running with journaling enabled (and if you ask anyone on our engineering team they’ll tell you that you should) there’s probably one more thing you should do. Before step 3, run the nuochk tool over your archive. This will verify that the data is valid, and it will compress your data by removing old versions and other data that is no longer needed but hasn’t been collected yet. There’s no harm per se in skipping this step, but it will improve backup efficiency, so I say go for it. In either case, when you copy your archive you do not need to copy the journal directory; the archive by itself is enough to act as a snapshot.

When a backup is more than a backup

What can you do with this backup? Obviously, you can use it to restore your database in the case that everything running fails and takes all active archives with it (unlikely, but definitely possible). You’ve got a true, full backup of your database. If you’re doing periodic archive backups then you’re also taking point-in-time snapshots that you can pick between and use to roll back in the case of application errors. Restoring in any of these cases is just a matter of using the same syntax from step 4 above against whichever archive copy you want to use.

Umm, yeah, let me just say that again: restoring from a backup is just a matter of shutting down your running database processes and restarting at least one SM and one TE, with the SM pointed at your backup archive. That works for any backing store you use. It’s always painful to have to roll your application back to a previous state, but at least we’ll make the data management part of it easy.

If you choose to follow the pattern of starting a new SM each time you want to do a backup, or want to bring new SMs online regularly, you can do it with no intervention. An SM automatically synchronizes on startup, and uses all available peers to make that synchronization more efficient. In the case of a large database, however, that could obviously be slow over the network, so you can also use a backed-up archive to seed a new SM process. Point the new SM at a copy (preferably as up-to-date as available) of your database archive to make its automated synchronization run a lot faster. This is especially useful when you want to get multi-region databases bootstrapped quickly.

Something else an archive backup gives you is simple data provisioning. Once you’ve taken a backup of an archive you can use it to seed as many new databases as you’d like. Want to start every database with some baseline set of data? Simply snapshot your running database and start any new database’s first SM pointed at a copy of the snapshot. In practice we find this makes development and testing a lot more productive. It also makes it way easier to share the active datasets that we’re using for our own tasks.

Backup never rests

What I showed in this discussion is that NuoDB gives you the tools to manage specific data backups and to easily automate the process of doing complete, consistent, non-intrusive backups with no downtime. We’ll keep improving on both these paths, adding better automation for point-in-time backup and integrating with some great backing stores that make snapshot even easier.

I’ve also talked a little about the architecture, and how that’s the enabler for our approach to data backup. What I haven’t talked about is what else that architecture lets us do to support more traditional backup models as well as some pretty cool new ways of working with your data. That’s all coming. Check back on this blog over the course of the summer as we keep rolling out some pretty cool stuff.

I get really animated when people ask about our backup story today, partly because I think we already make backup really easy and powerful and partly because our architecture gives us a lot of flexibility to provide backup solutions that you just can’t get from a traditional database. The question now is, what do you want your backup solution to give you? I want to know what your most important use-cases and pain-points are and how NuoDB can make those simpler and make you more productive.
