
Benchmarking Google Cloud Spanner, CockroachDB, and NuoDB

Interested in NuoDB's distributed SQL database and want to know how it all works? Download our white paper on NuoDB's database architecture.

Looking for the next-generation database

As a NuoDB solution architect, I constantly talk to people about what they’re looking for in a database. More and more often, architects and CTOs tell us that they’re building their next-generation data center – often using containers or cloud infrastructure – and they need a database that fits this model.

In their ideal world, they want a familiar relational database – but they want one that can deliver elastic capacity, maintain consistent transactions across multiple data centers, and span multiple public clouds at once. They’re asking for a database of the future.

This is typically why they’re talking to us at NuoDB. As a distributed SQL database that can scale in and out by just adding and deleting nodes, NuoDB can run across multiple deployment environments and data centers, while still maintaining strict transactional consistency and supporting ANSI SQL. We think this makes it an ideal database for modern deployment needs.

A new kind of cloud database

But actually, we aren’t the only ones thinking along these lines. There’s a whole new category of database – the distributed SQL database – that has emerged. This type of database gives us a preview today of what databases will look like in five years. In addition to NuoDB, there are:

  • Google Cloud Spanner. Unlike Google’s other cloud SQL databases (specifically Google Cloud SQL), Cloud Spanner is built as a distributed, scale-out SQL database. It has been used for many years to power the “AdWords” applications – the revenue engine for Google.
  • CockroachDB. Developed by Cockroach Labs as the “database that survives,” CockroachDB is essentially an open-source adaptation of Spanner. Cockroach Labs shipped the first production release of the database earlier this year.

I am an admirer of all of these technologies as they represent a new way of thinking about databases. Yet, after exploring multiple papers and presentations, I still felt unfulfilled. All these products look promising on paper, but how do they really perform?

To answer this question, a couple of my colleagues and I decided to take these products out for a spin and compare them directly. Not a formal, rigorous evaluation – just a wet finger in the air.

YCSB evaluation: Environments and configuration

For simplicity we decided to use the Yahoo! Cloud Serving Benchmark (YCSB). Brian Cooper, the author of YCSB, joined Google and wrote the YCSB driver for Spanner. CockroachDB and NuoDB both support JDBC drivers, allowing out-of-the-box use of YCSB. So spinning up the YCSB tests was straightforward.

The test environments were configured similarly. A single multi-threaded YCSB application connects to up to three database servers running on separate hosts.

YCSB Application

The beauty of a distributed SQL database is that the application does not care how many servers make up the database or where the servers physically reside. The application always operates against a single consistent logical database.

For our exercise we ran the YCSB Spanner tests on Google Cloud for Cloud Spanner, while the benchmarks for CockroachDB and NuoDB ran on bare metal in our lab, using SuperMicro servers with 32GB of RAM, 4 cores, and a 10GbE network.

If you aren’t familiar with YCSB tests, they consist of a number of workloads named A through F. In short, the workloads can be described as follows:

A – Heavy update (50% read, 50% update)
B – Mostly read (95% read, 5% update)
C – Read-only (100% read)
D – Read the latest inserted (90% read, 10% insert)
E – Scan the latest inserted (90% scan, 10% insert)
F – Read-modify-write (50% read, 50% update)

Read full descriptions of YCSB workloads.
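As a rough illustration of what these mixes mean in practice, here is a small Python sketch (not part of YCSB itself – the function name and mix dictionary are ours) that generates a Workload A-style operation stream of 50% reads and 50% updates:

```python
import random

def generate_operations(n, mix):
    """Sample n operations from a weighted mix, e.g. {"read": 0.5, "update": 0.5}."""
    ops, weights = zip(*mix.items())
    return random.choices(ops, weights=weights, k=n)

# Workload A-style mix: 50% reads, 50% updates.
random.seed(42)
stream = generate_operations(100_000, {"read": 0.5, "update": 0.5})
read_fraction = stream.count("read") / len(stream)
print(f"read fraction: {read_fraction:.3f}")  # close to 0.50
```

The other workloads are just different weightings of the same idea (e.g. 0.95/0.05 for Workload B).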

YCSB results for Google Cloud Spanner, CockroachDB, and NuoDB

For each of the workloads, we varied the application load (the number of threads run by the YCSB application) and the database capacity (between 1 and 3 nodes). We executed multiple runs, then took the best throughput numbers across all runs for each database and plotted them in the chart below:

[Chart: Peak throughput – 3 nodes]

This graph measures the number of transactions per second for each database during peak throughput. In all five of the tested workloads, NuoDB significantly outperforms both Cloud Spanner and CockroachDB.

You’ll notice that we did not complete Workload E, as it would require a minor change to the YCSB application in order to run correctly with NuoDB. To preserve the integrity of the test, we wanted to run YCSB without any changes, so we excluded Workload E from our testing. For those interested in running YCSB Workload E on NuoDB, you can contact us or comment below for details on what you would need to change.

As you can see from the graph, NuoDB outperforms other distributed SQL databases with much higher transactional throughput across all workloads. However, the most striking difference is with Workload C - read-only tests. This behavior is expected and due to NuoDB’s memory-centric architecture.

In addition to observing throughput results, we also wanted to understand latency – a critical concern with a distributed database. As you can see in the chart below, this is another area where NuoDB’s memory-centric architecture delivers benefits – in this case in the form of low-latency data access. NuoDB’s latency numbers are significantly lower than latency for Spanner and CockroachDB.

[Chart: Read latencies at peak throughput]

This graph measures average latency experienced by the application for READs during periods of peak throughput. Minimal latency is ideal for the best user experience.

To be fair, some of Spanner’s sluggishness can be attributed to the network latency of Google Cloud when compared with native LAN speeds. We chose to run both CockroachDB and NuoDB on our own hardware so that we could more easily understand any anomalies between test runs – something that would have been much more difficult to do in the cloud.

YCSB also collects “update” latency numbers, which are captured below. These results are much more closely aligned across all three databases, as all of them are gated by I/O performance.

[Chart: Update latencies at peak throughput]

This graph measures average latency experienced by the application for UPDATEs and INSERTs during periods of peak throughput. Minimal latency is ideal for the best user experience. Note that Workload C is not represented here as it is a READ-only workload.

Noticeably, the insert latency (Workload D) for Google Spanner is very low. Our tests, run repeatedly, exhibited the same behavior. It is not clear whether this is a testing anomaly or whether Spanner is simply well optimized for local insert latencies.

Summary: Three Options for the Modern Data Center

In summary, our hands-on experiment gave us a pretty good sense for performance ranges. We also made a few general observations:

  • Google Cloud Spanner is extremely easy to manage. There is only one configuration parameter – the number and location of Spanner nodes. The rest of the heavy lifting is done behind the curtain. This approach sets a high bar for management ease-of-use.
  • CockroachDB is very easy to set up and start using. It is packaged as a single executable that you drop on a host and pass a few configuration parameters to. And it is an open-source distribution, for those committed to an open source-based infrastructure.
  • NuoDB is flexible and fast. It can achieve superior performance for throughput and volume, but it requires awareness of its architecture and best practices to do so.

While we tried to make the benchmark tests as fair as possible, we admit that as experts in NuoDB, our knowledge has probably biased the results at least a little – for instance, we ran a configuration that enabled the entire data set to fit within memory. But we ran the same configuration for CockroachDB as well. That said, we don’t think the drastic differences can be explained by that alone.

NuoDB has been generally available since January 2013, so we’ve spent years of hard work improving our product to perform in hard-hitting, real-world customer experiences. We’re excited to welcome these new kids on the block and see what they bring to the table. I think we have a lot we can learn from each other.

And in general, I think all three of these “distributed SQL” databases are pretty promising options that meet the needs of the modern data center. The question is – what do you think? Would you take these out for a spin?

Provide your thoughts below, or download our product and take it for a spin – I’ve even written a self-evaluation guide to help you get started.



As VP of Technology, Boris Bulanov works with strategic customers on designing and deploying next-generation, cloud-based enterprise systems, and guides the adoption of NuoDB’s breakthrough database technology in key accounts.

Boris’s 20+ years of experience span a spectrum of disciplines, including enterprise applications and systems in Financial Services and Telecommunications. His expertise includes designing and implementing high-performance applications, architecting large-scale systems, and evolving enterprise architectures. 

(This post was originally published in 2017 and has been updated.)


Looks promising. However, there is a lack of information about the amount of data used, an example schema, and resource utilization on the database side.
Do you have any data about scalability?
For example: we had 5 nodes, latency was 10ms, and throughput was 50 operations per second. We doubled the number of nodes, and latency dropped to 8 ms while throughput rose to 100 op/sec.

The database size we used for the test is 14GB. This is the size of a default YCSB table with 12M rows. The table schema follows the standard YCSB defaults – 10 columns with 100 bytes of randomly generated data each.
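As a quick sanity check, the raw payload size follows directly from those defaults; the gap between ~12GB of field data and the 14GB on-disk figure would be accounted for by keys, indexes, and storage overhead:

```python
# Standard YCSB table defaults (per the answer above).
rows = 12_000_000
fields_per_row = 10      # 10 columns of generated data
bytes_per_field = 100    # 100 bytes of random data per field

payload_bytes = rows * fields_per_row * bytes_per_field
payload_gb = payload_bytes / 10**9
print(f"raw payload: {payload_gb:.1f} GB")  # 12.0 GB of field data
```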

Your scalability question is pretty broad. Our high-level observation is that there is a capacity tipping point for each product after which performance starts to degrade – initially latency and then the throughput. This is the expected behavior.

Each product we tested has a different “tipping point” depending on the type of workload mix and its “intensity”.
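One way to read such a tipping point off a throughput-vs-threads curve is to find the first load level where adding threads no longer yields a meaningful throughput gain. A hypothetical Python sketch (the function, threshold, and all numbers below are ours, for illustration only):

```python
def find_tipping_point(curve, min_gain=0.05):
    """curve: list of (threads, throughput) pairs sorted by threads.
    Returns the thread count after which the fractional throughput
    gain falls below min_gain - i.e. the capacity tipping point."""
    for (t_prev, x_prev), (t_next, x_next) in zip(curve, curve[1:]):
        if (x_next - x_prev) / x_prev < min_gain:
            return t_prev
    return curve[-1][0]  # no tipping point observed in range

# Illustrative (made-up) numbers: throughput flattens after 32 threads.
curve = [(4, 2000), (8, 3800), (16, 6500), (32, 8000), (64, 8100)]
print(find_tipping_point(curve))  # -> 32
```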

For NuoDB, read latency stays below 0.5 milliseconds for YCSB loads of up to 50 threads.

Writes are more difficult to isolate as they include both concurrency and I/O overhead. For a single threaded load (no conflicts), update latency stays around 1 millisecond per transaction. For 35 threads it goes up to 5 milliseconds per transaction.
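If we assume the YCSB client behaves as a closed system, Little’s law gives a back-of-the-envelope consistency check on those numbers: throughput ≈ concurrency / latency. This is only an approximation (it ignores client-side think time and queueing effects), not a measured result:

```python
def estimated_throughput(threads, latency_seconds):
    """Little's law for a closed system: X ~ N / R."""
    return threads / latency_seconds

# 35 threads at ~5 ms per update transaction:
print(f"{estimated_throughput(35, 0.005):.0f} ops/sec")  # 7000 ops/sec
```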

Hope this gives you a bit more clarity. Let me know if you have additional questions.

In which unit are the latency results expressed?


Which versions of each are you using in these tests? Are any optimizations being done? In your licensing structure, how many machines/nodes do the Professional and Enterprise licenses allow for scaling and redundancy?

>> Which versions of each are you using in these tests?
  • CockroachDB – v1.0
  • Google Spanner – August 2017
  • NuoDB – v2.6.5
>> Are any optimizations being done?
  • No optimizations for any of the products.
>> In your licensing structure, how many machines/nodes do the Professional and Enterprise licenses allow for scaling and redundancy?
  • For NuoDB a 4 engine configuration provides “minimal redundancy” - no Single Point of Failure.
  • For mission-critical apps, customers generally configure 8 engines – 2 data centers with 4 engines each.
  • More engines generally translate into more capacity, but this also depends on the workload mix.

From NuoDB Architecture (the Technical White Paper), it appears that one logical database must reside on a single host. If I have a large database, for example 1TB of data, will NuoDB automatically split the data across multiple hosts like CockroachDB does?

One logical NuoDB database can span multiple hosts. This is not done automatically, but through a database design process using the Table Partitioning SQL API. This way, DBAs and system architects stay in control of data distribution.

Have you tried using Jepsen to see if NuoDB is truly consistent?

We have used Jepsen in the past and also have a series of internally designed tools to check for and ensure consistency.

>> To be fair, some of Spanner’s sluggishness can be attributed to network latency of the Google Cloud when compared with native LAN speeds.
To account for network latency and ensure a uniform environment, I think it's better if the databases under consideration reside in the same environment (for example Google Cloud/GKE).

Another factor is ensuring strict global consistency, especially for highly concurrent traffic/requests. Someone mentioned Jepsen earlier. It would be good to have results for this aspect as well.
