What is the Elastic SQL approach?

Editor’s Note: This is an excerpt of a larger white paper written by Craig Mullins that explores database scaling options. You can download the full white paper here.

Elastic SQL: An Alternative, Services-Based Approach

So far we have examined the status quo: the various architectural alternatives for achieving improved elasticity and scalability using existing solutions. But each of these methods has significant drawbacks: shared-disk requires expensive hardware and software, shared-nothing requires partitioning the data in ways that may not match the requirements of all applications, and NoSQL eliminates ACID and, often, SQL. Clearly an alternative approach is needed, one that does not merely work around the shortcomings of the existing solutions.

Many organizations desire a database platform that maintains industry-standard SQL and ACID for data integrity, but with an architecture that offers the flexibility and agility to elastically scale up and down as needed. Fortunately, an elastic SQL database system delivers a compelling, services-based alternative to traditional approaches that were spun out of a client-server mindset. There are a number of different approaches to elastic SQL, from a variation on synchronous replication that relies on global timestamps to a two-tiered approach that uses multi-version concurrency control. Since the former relies heavily on minimal latency to operate, we'll focus primarily on the latter.

There are several aspects of an elastic SQL approach that enable it to deliver scalability while eliminating many of the drawbacks of the traditional approaches.

An Example of Elastic SQL Architecture

Consider, for example, a two-tier architecture that splits the transactional tier from the storage tier. This is not dissimilar to what you might find in Hadoop, where compute processing is independent from storage. The transactional tier provides an in-memory, on-demand cache distributed across multiple servers and potentially even geographically dispersed data centers. The storage tier uses a set of peer-to-peer coordination messages to manage commit processing and to access data when it is not available in the transactional cache.

This two-tier architecture delivers ACID compliance but separates data durability (the “D” in ACID) from the transactional processing (which handles the “ACI” in ACID). Such a scalable architecture is neither as sensitive to disk throughput as shared-disk, nor does it require explicit partitioning and sharding as shared-nothing does.

Because transaction processes are deployed separately from storage processes, each can be scaled independently of the other. When throughput increases, additional transaction processes can be started very quickly, even in new geographies. Upon adding another transaction process, the system authenticates it for processing and balances the workload. This allows for seamlessly scaling to use the additional transaction processing capability with no outage. Natural data affinity builds in the caches based on usage patterns, speeding performance. If workload decreases, transaction processes can be stopped, and the system will rebalance the workload across the remaining operational transaction peers.
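The elasticity described above can be illustrated with a small sketch. This is purely hypothetical code, not any vendor's actual API: it models a pool of transaction peers to which client sessions are deterministically routed, with the routing rebalancing automatically as peers join or leave.

```python
# Hypothetical sketch: distributing client sessions across transaction
# peers, rebalancing as peers are added or removed. Illustrative only.

class TransactionTier:
    def __init__(self):
        self.peers = []

    def add_peer(self, name):
        # A newly authenticated peer immediately joins the rotation.
        self.peers.append(name)

    def remove_peer(self, name):
        # A stopped peer drops out; the remaining peers absorb its share.
        self.peers.remove(name)

    def route(self, session_id):
        # Deterministically map a session to one of the live peers.
        if not self.peers:
            raise RuntimeError("no transaction peers available")
        return self.peers[hash(session_id) % len(self.peers)]

tier = TransactionTier()
tier.add_peer("tx-1")
tier.add_peer("tx-2")
before = {s: tier.route(s) for s in ("s1", "s2", "s3", "s4")}

tier.add_peer("tx-3")      # scale out: the new peer takes a share of work
tier.remove_peer("tx-1")   # scale in: sessions rebalance with no outage
after = {s: tier.route(s) for s in ("s1", "s2", "s3", "s4")}
```

A production system would also migrate each session's cache affinity along with its routing, but the essential point survives even in this toy: no session is ever left without a peer to serve it.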
 


Similarly, more than one storage process can be started. Doing so results in multiple, independent, durable copies of the database. When a new storage process is started, the system automatically synchronizes it with the active database. In some deployments of elastic SQL, disk I/O can be managed to better support intensive modification requirements. By determining how much of the data set is controlled by each durability peer (i.e. a subset of the database rather than the entirety) and how many data copies are managed, you can improve disk I/O in a manner invisible to the application.
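As a rough illustration of that last point, the following hypothetical sketch assigns tables to storage (durability) peers with a configurable number of copies. The function and peer names are invented for illustration; no elastic SQL product exposes this exact interface.

```python
# Hypothetical sketch: mapping portions of the database onto durability
# (storage) peers with a configurable copy count. Illustrative only.

def assign_storage(tables, peers, copies):
    """Map each table to `copies` distinct storage peers, round-robin."""
    if copies > len(peers):
        raise ValueError("more copies requested than storage peers")
    placement = {}
    for i, table in enumerate(tables):
        placement[table] = [peers[(i + c) % len(peers)] for c in range(copies)]
    return placement

placement = assign_storage(
    tables=["orders", "customers", "inventory"],
    peers=["sm-1", "sm-2", "sm-3"],
    copies=2,  # two independent, durable copies of each table
)
# Each storage peer now holds only a subset of the database, so disk I/O
# is spread across peers in a way that is invisible to the application.
```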

Figure 3. The Elastic SQL architecture. 

Refer to Figure 3 for an overview of the elastic SQL architecture.

Since data can be cached in one or more locations, data needs to be self-replicating to ensure consistency across all of the independent transaction and storage tiers. If a peer needs data that is not in its own cache, it requests it from the nearest peer that has it. Disk I/O is only required when no other peer has the needed data or during an update, insert, or delete. Each peer knows the other locations of each piece of data stored in its own cache and can therefore coordinate consistency.
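The read path just described can be sketched in a few lines. This is a simplified model under stated assumptions (peers expose their caches directly, and "nearest" is simply list order), not vendor code:

```python
# Hypothetical sketch of the read path: local cache first, then the
# nearest peer that has the data, and disk only as a last resort.

class Peer:
    def __init__(self, name, cache=None):
        self.name = name
        self.cache = dict(cache or {})

def read(key, me, other_peers, disk):
    # 1. Local in-memory cache: no network traffic, no disk I/O.
    if key in me.cache:
        return me.cache[key], "local-cache"
    # 2. Nearest peer that holds the data (peers know who caches what).
    for peer in other_peers:
        if key in peer.cache:
            me.cache[key] = peer.cache[key]  # data is now cached here too
            return me.cache[key], f"peer:{peer.name}"
    # 3. Fall back to durable storage only when no peer has the data.
    me.cache[key] = disk[key]
    return me.cache[key], "disk"

disk = {"row:1": "Alice", "row:2": "Bob"}
a = Peer("A", {"row:1": "Alice"})
b = Peer("B")

print(read("row:1", b, [a], disk))  # served by peer A, no disk I/O
print(read("row:2", b, [a], disk))  # no peer has it: disk read
print(read("row:2", b, [a], disk))  # now served from B's own cache
```

Note how data affinity builds naturally: once B serves a row, subsequent reads of that row are satisfied locally.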

In such an architecture, database performance, latency, redundancy, and availability can be adjusted by bringing peers online and offline as needed, with no downtime. No single peer is wholly responsible for a given task or piece of data, so any peer can fail or be purposefully shut down without impacting service availability.

Coordination of read/write access to the system is enabled using MVCC, or Multi-Version Concurrency Control. MVCC works by treating all data as versioned: every modification operation creates a new version of the data. Each transaction process caches multiple versions of any given object, with new versions pending until the associated transaction commits. Because data is never changed in place, updates can be handled optimistically; a rollback simply drops the pending modification from the cache. By handling lock management at the row (or group-of-rows) level, the database can reduce conflict and interference. And because the lock manager is not fixed in place, it can shift location as access patterns change.
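A minimal MVCC sketch makes the commit/rollback behavior concrete. This is a generic illustration of the versioning idea, with invented names, not any product's implementation:

```python
# Minimal MVCC sketch: every write creates a new pending version; commit
# publishes it, and rollback simply discards it. Illustrative only.

class MVCCStore:
    def __init__(self):
        self.committed = {}  # key -> list of committed versions
        self.pending = {}    # txn_id -> {key: pending value}

    def begin(self, txn_id):
        self.pending[txn_id] = {}

    def write(self, txn_id, key, value):
        # Data is never changed in place; the new version stays pending.
        self.pending[txn_id][key] = value

    def read(self, key, txn_id=None):
        # A transaction sees its own pending writes; everyone else sees
        # the latest committed version.
        if txn_id is not None and key in self.pending.get(txn_id, {}):
            return self.pending[txn_id][key]
        versions = self.committed.get(key)
        return versions[-1] if versions else None

    def commit(self, txn_id):
        for key, value in self.pending.pop(txn_id).items():
            self.committed.setdefault(key, []).append(value)

    def rollback(self, txn_id):
        # Optimistic handling: dropping the pending versions is enough.
        self.pending.pop(txn_id)

store = MVCCStore()
store.begin("t1")
store.write("t1", "balance", 100)
store.commit("t1")

store.begin("t2")
store.write("t2", "balance", 40)
print(store.read("balance"))  # 100: others still see the committed version
store.rollback("t2")
print(store.read("balance"))  # 100: the pending write was simply dropped
```

Because the aborted transaction never touched the committed version list, no undo work is required; that is the sense in which updates are handled optimistically.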

The biggest benefit of the elastic SQL approach is that it delivers transactional consistency and durability with industry-standard SQL and elastic scale-out simplicity. Although there are many benefits to an elastic SQL architecture, there are, of course, drawbacks. The strength of elastic SQL systems today is in OLTP workloads. That said, it is possible to use elastic SQL for Hybrid Transactional/Analytical Processing (HTAP); that is, to simultaneously perform both OLTP and real-time analytic queries without interfering with each other by dedicating separate transaction processes to each. Since there are different architectural approaches to achieving elastic SQL, additional limitations are vendor-dependent.

NuoDB is an example of an elastic SQL database system. Others include Google Cloud Spanner and CockroachDB, both of which rely on global timestamps for consistency coordination. When evaluating elastic SQL database systems, be sure to compare and contrast the SQL capabilities (not all provide the same coverage of SQL) and evaluate successful customer use cases that match your needs.

Table 1 offers a summary of the capabilities of shared-disk versus shared-nothing clustering versus elastic SQL, and outlines the basic pros and cons of the three architectural approaches discussed in this white paper.

| Shared-Disk | Shared-Nothing | Elastic SQL |
| --- | --- | --- |
| Adaptability to changing workloads; load balancing | Can exploit simpler, cheaper hardware | Uses simpler, cheaper hardware to adapt to changing workloads; automated load balancing |
| High availability; failover | High scalability; more downtime | Continuous availability; resilience to failure; native support for active-active deployment; rolling upgrades; high scalability |
| Simple ability to scale-in | Data partitioning incompatible with scale-in | Simple ability to scale-in |
| Performs well in a heavy read environment | Works well in a high-volume, read-write environment | Works well in both heavy read and mixed read-write environments |
| Data need not be partitioned | Data must be partitioned across the cluster | Data need not be partitioned, but can be grouped for improved I/O throughput |

Table 1. Shared-Disk versus Shared-Nothing versus Elastic SQL

Next week on the TechBlog, Craig will summarize his findings on Database Scalability. 

---

Craig Mullins is president & principal consultant of Mullins Consulting, Inc., a consultant for Datavail, and the publisher/editor of The Database Site. His experience spans multiple industries (banking, utilities, software development, research, and consulting).

Craig writes for many computer/IT publications, having authored articles for popular IT and database journals and websites including Database Trends and Applications, TDAN, TechTarget, and others.
