
By Popular Demand: The Rise of Elastic SQL

Barry Morris of NuoDB and Eric Kavanagh of Bloor Group explain the emergence of Elastic SQL database vendors, and how NuoDB is addressing a dynamic market.

Video transcript: 

ERIC:     OK, ladies and gentlemen, hello and welcome back once again. Thank you so much for your time and attention. We're going to talk about databases and database survival. This is a special research webcast today with Barry Morris of NuoDB. We're very happy to have Barry on the show today. He's frankly a visionary in this space, so we're going to be talking about what those folks are doing and what this whole concept of HTAP, hybrid transactional/analytical processing, is all about. It's not completely new, but it's certainly taking off these days, and we're going to talk about what that means, what the space really is, why we got here, and what it means going forward.

So I'm just going to share a few slides from Dr. Bloor's presentation of a couple of weeks ago on database disruption. You know, I actually remember, and Barry, maybe you can comment on this in the Q&A section at the end of the show, back in 2005 I did an interview with Michael Stonebraker, of course the database visionary from years and years ago. I remember he was pushing Vertica back then and he had this whole pitch around one size does not fit all. He said that what had happened in the marketplace, and he had some interesting theories on this, was that the relational database dominated and was filling all the gaps and all the voids and serving all things to all people, and his point was that the time had come for that to not be the case anymore. His argument was that there are use cases where a traditional relational database model is simply not going to be performant enough to do what end users want it to do. And obviously the cloud had a lot to do with that, but also just use cases at large organizations that wanted to do heavy analysis on data. His argument back then, of course, was that a column-oriented database would be much more purpose-built for those kinds of solutions for a variety of reasons, one of which simply being that columns compress a lot more easily than rows do because they have all the same kind of data in them. And when push comes to shove, let's face it, everything in the database world is going to be about performance. You want your database to perform the way it needs to in order to solve your business needs.

And I think that dating back to 2005, and just before, we were really starting to see that trend evolve where organizations realized they wanted to analyze their data and that traditional approaches were not quite delivering what they wanted. This is back when I worked at the Data Warehousing Institute, and of course the whole data warehouse evolved out of a need to do analysis on data. People realized they couldn't be querying these ERP solutions. They had to pull that data out, load it into a data warehouse, then do their analysis in that environment. Well, you know, what goes around comes around, right? In a way we're kind of going back to the old way of doing things now with these hybrid databases and with some of the solutions on the market. Specifically I keep thinking about SAP HANA and their whole interesting approach. Things are changing, but boy, was Dr. Stonebraker correct. He said that we were in the middle of a transformation. That was really the beginning of the evolution of this whole movement around NoSQL, or "not only SQL," or NewSQL as some folks like to call it. So a lot of the innovation came from the big web giants.

Let's talk just for a second here about data governance, and I'm really just trying to tee up Barry for his presentation, but I mention data governance because of a couple of things. One, we've got this GDPR coming down the pike, the General Data Protection Regulation out of the EU, and the whole concept of the right to be forgotten, which I find absolutely fascinating. I mean, my very candid, cynical perspective is that it's an unachievable goal, quite frankly, and it's largely just going to be used as a cudgel by the EU to cow companies into line. But there are good reasons for it. You want to have some privacy. Certainly in the EU they're very respectful of that. Not so much in the United States; I think we all understand that. There are cookies everywhere, people are tracking your every move. We just kind of accept that here, but that's just one reason why data governance is important.

Of course there's also hacking, and just knowing what's going on. We keep hearing of all these hacks everywhere. So security, compliance, all sorts of issues come into play with respect to governance. You have to keep that in mind with your database, obviously. So just a couple of slides here from Robin's presentation. You can see some information about SQL. You know, it's interesting: we had this movement away from SQL, that's where you got the NoSQL from, and typically what happened is the innovators, like what came out of Facebook and some of these other web giants, what did they want to do? They wanted higher performance for a certain part of the database stack, essentially for certain use cases. So typically, when we talk about ACID compliance, what we gave up was consistency. And you had this movement toward what's called eventual consistency.

And there was even some interesting stuff that came out of Kafka where before Confluent spun out as a company and hardened Kafka, back when it was just the engine that drove LinkedIn, it didn't even have eventual consistency. You could actually lose transactions because it didn't really matter. And this kind of speaks to the whole heart of why things are changing in the database space. It's all about the business and the business needs. What are we trying to accomplish, what do we need to achieve, how can we get that performance we need effectively. And the answer is pretty simple. What you do is you give certain things up. So the NoSQL movement came around, everybody got excited about that, there were lots of NoSQL databases. In fact, I'll throw up a slide here with just some basic information about that.

So a lot of excitement around NoSQL because of the performance, but then what happened? All the NoSQL vendors started strapping SQL engines onto their NoSQL databases, right? Because people realized, for a variety of reasons, SQL is here to stay. SQL is the standard. SQL is very important for a variety of reasons, one of which is that lots of people use SQL and lots of people know SQL. One of the challenges that we have in the whole big data space, and Barry and I were chatting about this before we hit the record button, is that for the major players, your Cloudera, Hortonworks, MapR, the reality on the ground is that when push comes to shove you need a lot of expertise in house or through some consulting firm to stand these solutions up. Well, guess what skill set is all over the place in just about every country on the planet? SQL. Lots of people know SQL. So if you adhere to that, if you respect the fact that SQL is the standard, you're going to understand why SQL is coming back with a vengeance these days.

So lots of different database types. We've talked about horses for courses, as my old buddy Jim Erickson used to say on DM radio, and these days you really have to think about your use case. What is your business model, what are you going to need? And we talked again before the show about the whole concept of scalability and elasticity. I think this is one of the major drivers for innovation and it's why NuoDB is doing pretty well these days, because they saw this years ago as they began their journey and then they took a pivot a few years ago, I think when they realized exactly the magnitude of what was coming down the pike. If you think about, especially if you're in the marketing world, think about when something goes viral. I saw this just the other day with a promotion around an article that I had written on LinkedIn. And doing some promotion, according to Bitly it went viral. It went from 50 hits to 5,000 hits in like a matter of minutes.

Now, personally I think there were probably some bots in there; something weird happened. You never know, and this is one of the downsides of the modern web-enabled world: you can't really get behind a lot of these engines to understand what's happening. But the point is it went viral. Now, imagine you're some brick-and-mortar company selling widgets online, and once that goes viral, if you do not have the ability to scale out, to essentially leverage elasticity in your infrastructure, you're going to have a lot of unhappy prospects and you're going to lose that opportunity. And I think that's the key. The opportunity cost is massive if you're not able to scale out as needed. And something can go viral at any point in time. So this is why distributed databases are such an attractive option these days for companies in the modern web-enabled world. If you have elasticity and you have ACID compliance, if you have durability and elasticity all in one, you're positioned for success. And with that, I'm going to hand things over to Barry Morris of NuoDB to share what he has. And Barry, I've just now given you the keys to the WebEx. Take it away and show us what you've got.

BARRY MORRIS:  Thanks very much, Eric, and hello to everybody. So I just want to basically take you through some background and then we'll have some chat about it afterwards and welcome your questions. The core of what I have to say really is about the -- exactly Eric’s comments on elasticity and so we need to jump into that. It really starts with, and you have in front of you I think here, a depiction of the database decision maker. And what's been the case now for some years has been this person having to decide whether or not they care about elasticity more than they care about strong data guarantees and powerful server-side processing. And in general the answer has been that the former is where people have gone. And so hence a lot of the conversation about NoSQL, which a lot of the time has been as much about no transactions, people have sort of taken the right-hand path and said, “OK, we're going to have to give up on some of the things that we've been used to over a period of some decades in order to get the things that we really want on the right-hand road.”

What are they giving up? Well, Oracle is a very powerful system. So are SQL Server and other relational databases. These are very high-performance systems, very reliable systems, very secure systems that give you lots of guarantees about the data. And these are things that -- well, it's how we run the world, really. You know, this is a $40 billion database market of which almost all is SQL and based on those kinds of technologies. And so that's a big thing for us to be giving up. And really, I wanted to talk about these two roads and then talk about where we're going with it. So let me see. So basically the question is why elasticity, and some of this you'll be quite used to, but imagine what you can do with an elastic database. It starts with: you're not going fast enough, add a node. That's not what you can do with a traditional database, of course. You can't just add a node to Oracle or to SQL Server or something. With an elastic database, if you're not going fast enough, you add a node.

Same thing for latency. You know, latency is actually much more important than people realize when it comes to databases. People don't like waiting for web pages. Often when your machine starts getting overloaded, latency is the first thing that takes a hit. Guess what the answer is? Add a node. The same thing if you've got too much data. Your database is getting too big, what do you need to do about it? You need to split it into multiple stores. If you want to split it into multiple stores, an elastic database allows you to just add a node. Same thing for users. I think you're getting the pattern. It's Black Friday or whatever it is, you've got several million more users, what are you going to do? Elasticity is all about just adding nodes, and the same is true for redundancy. If your system needs to be resilient to the loss of a machine or the loss of a disk drive or whatever, the simple answer in an elastic database is: add a node. And that's really sort of the point: add a node is the answer to almost any question when you're talking about elasticity.

Same with new (inaudible). Here's an example of somebody who wanted to come in and do some analytics. Maybe it's a monthly report or whatever it is, and you don't want to run that on your existing resources because they're running transactional workloads. No problem, add a node. Add a big node if you want. Add a node with a terabyte of DRAM, run your reports there, and switch it off when you're done. That's what elasticity is about. Equally, you can delete nodes. Most database servers, as you know, run at like five or 10% utilization because they're provisioned for the 500-year flood. In an elastic database system you don't have to worry about that. You can shut down nodes at any time. You don't lose any data, you don't lose any transactions, and users don't see any loss of service.

Even upgrades. You want to do a rolling upgrade, you want to upgrade machines or upgrade operating systems, upgrade the database software itself, that's no problem. Just add some new nodes and delete the old nodes and keep going. So elasticity is much more than just capacity. In fact, in certain elastic databases, and we'll talk about this if we have a moment later, you can add your nodes wherever you want. So if you need low latency in London and in Tokyo and in New York at the same time just add your nodes in those data centers and keep going. You obviously can't do that with a single server database system.

There are a bunch of other things. Automated management. You know, hooking into (inaudible) and microservices and all those kinds of things. These are all part of what an elastic system can do that a dedicated server system can't, but the bottom line is dollars. What we're talking about now is systems that run on virtual infrastructure, systems where you're paying by the drink, systems where you're moving from capital cost to operating expenses. And so, going back to the original question: of course you want elasticity. That one's not negotiable. The questions are really all about the other two pieces, so let's jump into that.

Sometimes when I hear people talk about ACID transactions you get this kind of strange comment that, well, you don't really need ACID transactions except for very, very few workloads. And that kind of misses the point. ACID transactions are not about that. They're about the fact that they change a lot about how we build applications. For one thing, they give you guarantees of durability, and you definitely want that. What this means is that if you told a database system to store something, it's stored, and, you know, absent a meteor blowing up planet Earth, that data is there. That's something that you want. I've heard people say, "Well, it was only a picture of my goldfish or my poodle on Facebook." But even then you'd rather it stays there than it doesn't. So durability is a great guarantee to have, and you definitely want it if you can have it.

The same is true of consistency. You want to know that the data that you're looking at is consistent in a strong consistency model and that in that sense is the truth, that you're not looking at a product that was sold to a nonexistent customer or whatever other consistency constraints are -- need to be in place. If you look at Atomicity, which is the third of the guarantees that you get from ACID, that's all about knowing what happened. Did the changes that I make all happen, did they all not happen, when did they happen? All that stuff, that's about Atomicity. Guarantees of isolation are about you and I are editing the same Microsoft Word document, I don’t want you to just overwrite my version of it without having some kind of structured model around that. But all of those kind of pale into insignificance compared to the most important thing about ACID which is transactional systems allow much simpler applications. And simpler applications are more reliable, they're cheaper, you can build them with lower skilled teams. The abstraction of ACID transactions is what allows us to build the many, many thousands of business applications that we have without having to have PhDs writing every line of code.

And the same is true for database administration. It's extremely powerful for a database administrator to be able to update all of the zip codes of all of the users in one go and know that it all happened at the same time or didn't happen. Those kinds of things, simplification of database administration, hugely powerful as well. So the answer with ACID is you definitely want it if you can have it. It guarantees your data in ways that systems without it don’t guarantee your data. It simplifies your applications and in fact it also provides a sort of a recovery model in the case of disaster which is much richer and much more powerful than systems that don’t have ACID transactions. So if you look at that as a kind of an objective you'll see that is it as important as elasticity? A lot of people don’t think so, but you definitely want it if you can have it.
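The all-or-nothing zip-code update Barry describes can be made concrete with a minimal sketch using Python's built-in sqlite3 module. The table, column names, and zip codes here are invented for illustration; the point is that the update either applies to every matching row or, if anything fails mid-transaction, to none of them:

```python
import sqlite3

# Hypothetical schema: a "users" table with a zip_code column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, zip_code TEXT)")
conn.executemany("INSERT INTO users (zip_code) VALUES (?)",
                 [("02139",), ("02139",), ("10001",)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE users SET zip_code = '02142' "
                     "WHERE zip_code = '02139'")
        # If anything in this block raised, NO rows would be updated.
        # That all-or-nothing guarantee is atomicity.
except sqlite3.Error:
    pass

rows = [r[0] for r in conn.execute("SELECT zip_code FROM users ORDER BY id")]
print(rows)  # ['02142', '02142', '10001'] -- both matching rows changed together
```

The same pattern scales to the multi-statement administrative changes Barry mentions: wrap them in one transaction and the database, not the application, guarantees the outcome.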

SQL is the third piece, and I know it's sort of -- Eric was saying, at the end of the day, SQL is about server-side computation, and not having a server-side computational model, which by the way many of these so-called NoSQL systems don't, is a very big problem. It creates application complexity. It requires you to get all the data across to the application side and do the equivalents of your aggregates or joins or whatever it is you want to do on the application side, which is prone to error, which makes applications much more complex, which takes longer to build and makes them more complex to maintain, and so on. Similarly with performance. If you've got to somehow join 300 million Americans with the 20,000 towns that they live in, doing that on the client side requires you to get all the data across to the client side. So even aside from application complexity you've also got big performance issues that are a consequence of not having server-side processing. There are also reliability issues that come with that. So that's fairly obvious.

I think one of the things people miss with SQL is just how powerful it is. If you look at the context of big data and analytics, you know, Hadoop came out and people thought SQL was over for that kind of processing, and it turns out there has been a massive move back to SQL. Why is that? It's because of the power of the language, largely. It's the ability to walk up to petabytes of data and do very powerful transformations and selections and everything else directly with sort of one-line commands. SQL is an extremely high-level and powerful language. But most importantly, I think, SQL is standardized, at least somewhat standardized -- enough that there are lots of practitioners out there who know how to write a SQL application, know how to manage SQL data, know how to back it up or load it into warehouses or whatever they need to do. You can hire those people readily. The language is good enough to do those things. And so that's one of the reasons the analytics and big data world has been moving back to SQL, and we're now starting to see it with operational databases like NuoDB.

The last comment is a little bit more subtle, but if you're an enterprise guy you know what I'm talking about. Enterprises don't want data to be captive to applications. They want the data to be independent of the applications because it is an asset, because you do want to be able to back it up and analyze it and audit it and ETL it and everything else, and SQL is the way that you do that because SQL is an application-independent language. So what you end up describing is basically SQL: you need a server-side language. It might as well be a set-based declarative language. It definitely needs to be something mature enough to (inaudible) high performance, and pretty much you're describing SQL. Maybe somebody will tell me that SQL is not very pretty. Maybe someone will come up with a better thing, but frankly, right now it's good enough.

So you end up with this kind of conclusion from the original question, which is that elasticity is a necessary thing. If you can possibly have ACID, you should have it. If you can possibly have SQL, you should have it. The reason we haven't is because it's been very hard, and so people have been left with this choice of going down the track of not having data guarantees and not having powerful server-side processing models. And what we're really talking about now is the question of: can we have both? Tough, but is it impossible?

So elastic SQL is what we're talking about. It's something which Jim and I kind of talked about when we founded the company some years back. These are some of the design objectives and you'll see they're pretty much the things that we've been talking about. It's about can I add nodes to a running database, can I take nodes away from a running database, can I get the benefits of that to include that it runs continuously, that it can handle capacity on demand, that it can do automated load balancing, and it can run in multiple data centers at the same time? Those were the goals of the system and just to look at it from a kind of a market perspective, you'll see we're talking about, on the upper layer, really traditional relational database systems, very powerful as databases, more modern systems on the bottom which are very powerful as cloud systems but less powerful as databases, and really this kind of requirement for can you put those two things together?

That is what we've done. It's also what Google's done. So you'll note that at the bottom right here we're talking about something called Google Cloud Spanner. Google, of course, being the sort of progenitor of NoSQL, came out in their F1 white paper -- I recommend that you read it if you're interested -- saying that in order to run AdWords and now multiple other applications, they couldn't do it on their NoSQL systems and they couldn't do it on their MySQL systems, so they had to build what is in effect an elastic SQL system. They call that system F1. It's built on technology called Spanner, and that technology is now available to you or me as a service on the Google cloud, where it's referred to as Google Cloud Spanner, which is an elastic SQL database. It does all the things we've been talking about. There are others, and I would expect to see more and more people coming out with systems designed to deliver this combination: the elasticity, which as I say is a requirement, but with it the power of SQL and the guarantees of ACID transactions. There's another company here called Cockroach Labs, for example, founded by some folks from Google who brought some of the technology to bear.

So just a little bit of technology, and I think there are some technologists listening, so I'll dive into this a bit. How can you build one of these things? Some of the historical approaches are very limited. The most obvious one is what you call shared-disk architectures. In effect you've got a tightly coupled cluster, like the old VMS clusters, the Oracle RAC clusters, or IBM pureScale. These are systems where a small number of identical machines are tightly coupled on a sort of high-speed interconnect, accessing a single, sort of master, data storage system. And they're OK. You know, they definitely give you some benefits over single-server systems. But they're fragile. They're not what you would call elastic. You sometimes can add nodes and things, but it's a very limited kind of elasticity. And in fact, many of the biggest databases in the world are running this way, so I don't mean to suggest that they don't work. They do; they just don't give us the elasticity we're talking about.

A lot of systems, typically at large internet companies, use sharded systems where you basically divide the database into multiple databases, and either the application figures out which database server to go to or you put some kind of middleware layer over the top and try to automate the sharding and the load balancing and so on. It's challenging because obviously you can't really get around the issue of multi-node queries and transactions, but you also have load-balancing challenges: typically these systems can end up with one or two of the nodes maxed out and the others sitting waiting. And so you have to move data around, and it's kind of a complex thing, complex to back up, complex to manage. As my co-founder Jim would say, sharding is a great idea if you don't have any other ideas. I'd probably agree with that.
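The application-side sharding Barry describes can be sketched in a few lines. This is a hedged illustration, not any particular vendor's implementation; the shard names and hashing scheme are placeholders:

```python
import hashlib

# Placeholder names for three separate database servers ("shards").
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(key: str) -> str:
    """Route a record key to a shard by stable hashing, so the same
    key always lands on the same database server."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

print(shard_for("customer:1042"))  # always the same shard for this key
```

The pain points Barry mentions fall straight out of this picture: a query spanning keys on different shards needs application-level fan-out and merging, and rebalancing a hot shard means rehashing and physically moving data.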

The third one here is actually what Google is doing. This is kind of synchronous replication. It's basically two-phase commit, for those of you who are familiar with it, plus they're also using a sort of quorum technology in their clusters. You should think of this as a kind of brute-force approach. They have GPS devices on every server, they have atomic clocks on every server, to support something called TrueTime, which they use for global serialization. It's quite a heavy-duty thing, but it works, and it really does work. They're running it at incredible scale with incredible performance. Every time you go and do a Google query and there are kind of AdWords coming up, those AdWords are going through this system. And to be clear, it is absolutely a distributed SQL database system operating on a global scale.
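Two-phase commit, which Barry names as the basis of this approach, can be sketched as a toy in a few lines: a coordinator asks every participant to prepare (phase one), and only if all vote yes does it tell them to commit, otherwise everyone aborts (phase two). This sketch omits the write-ahead logging, timeouts, and quorum voting a real system such as Spanner layers on top:

```python
class Participant:
    """One node holding part of a distributed transaction."""
    def __init__(self, name: str, can_commit: bool = True):
        self.name = name
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self) -> bool:
        # Phase 1: vote on whether this node can commit its piece.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit: bool) -> None:
        # Phase 2: apply the coordinator's global decision.
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants) -> bool:
    votes = [p.prepare() for p in participants]  # phase 1: collect votes
    decision = all(votes)                        # unanimous yes required
    for p in participants:                       # phase 2: broadcast decision
        p.finish(decision)
    return decision

nodes = [Participant("a"), Participant("b"), Participant("c", can_commit=False)]
print(two_phase_commit(nodes))  # False -- one 'no' vote aborts everyone
```

The "heavy-duty" feel Barry mentions comes from what's missing here: real coordinators must survive crashes between the two phases, which is exactly where the logging and quorum machinery comes in.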

The thing that -- on the right-hand side is what we're doing. It's called a durable distributed cache and it is what that says. It's basically a cache-like system which does dynamic and on demand loading of our data objects in a cache-like way with all sorts of typical cache style management of that data. It's distributed so there are many nodes that might have any given piece of data and it's durable. There are some nodes that take responsibility for maintaining backing stores. And so from the outside it supports full SQL style database services, no different in principle from an Oracle or a SQL Server or a DB2, but it's an upside-down database system. It's really a sort of an in-memory system that happens to be maintaining transactional durability in the way I've described.

Just hold on a minute. Just trying to get the next slide here. A little bit more detail on this. I don't have a lot of time to go into it, but I'm happy to talk about it in the Q&A. Basically our nodes, which are these transaction engines and storage managers, are peer-to-peer nodes that have specialized roles. Some of them maintain the backing stores; those are the storage managers. Some of them take client connections and run SQL transactions; those are the transaction engines. Underlying all of this there are basically container objects, and these container objects might be your user data, like your tables and so on, but they might be indexes and they might be our system data. It doesn't really matter. They're just objects being passed around the system in a container-like fashion, and in some sense it's a container management system. A lot of transactional behavior is actually delegated to the objects in a sort of actor pattern, so that any given object knows how to serialize itself to disk or serialize itself to the network or maintain replicas of itself in a consistent fashion. The system, I should mention, is a SQL system because layered on top of this distributed object system is a SQL engine, but we could just as well layer a JSON engine or a graph engine or anything else on top of it.
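The "objects that know how to serialize themselves" idea can be illustrated with a small, hypothetical sketch. To be clear, this is not NuoDB's actual implementation; the class, the wire format, and the object ID are all invented to show the delegation pattern Barry describes:

```python
import json

class ContainerObject:
    """A data object that owns its own serialization, actor-pattern style:
    the system passes objects around without knowing their internals."""
    def __init__(self, object_id: str, payload: dict):
        self.object_id = object_id
        self.payload = payload  # could be table rows, index pages, system data

    def to_network(self) -> bytes:
        # The object, not the transport layer, decides its wire format.
        return json.dumps({"id": self.object_id, "data": self.payload}).encode()

    @classmethod
    def from_network(cls, blob: bytes) -> "ContainerObject":
        d = json.loads(blob)
        return cls(d["id"], d["data"])

# Round-trip an object between two hypothetical peer nodes.
obj = ContainerObject("table:users:chunk-17", {"rows": 3})
copy = ContainerObject.from_network(obj.to_network())
print(copy.object_id)  # table:users:chunk-17
```

The design payoff is the one Barry names: because each object handles its own disk, network, and replica behavior, the surrounding system can treat everything uniformly as containers being passed around.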

But just to look at some of the systems in the market -- we talked about the Google stuff and about ourselves. You know, when you look at the SQL capabilities, and I apologize, (inaudible) on the fly, but the SQL capabilities and the elastic capabilities, we went to great lengths to make sure this is a very rich SQL implementation. We've got customers out there doing thousands of interactions per transaction. We've got customers running 13-way joins. This is a very serious SQL engine. At the same time it's extremely elastic, and it's a system where adding and deleting nodes is a matter of seconds. It's a very, very simple system to be able, in an automated way, to start and stop nodes, add nodes, add capacity, delete capacity, and so on. That kind of dynamism we think is the future, and it's very much part of the design of the system.

In terms of customers, I'd love to go into some of these in greater depth and maybe we'll get a chance. What I can tell you is that, for example, let's pick the London Stock Exchange. You know, we went in there and they said -- I won't mention the name of the vendor, but they were running a traditional relational database system, had hit the limits of it, and had a requirement through the MiFID regulations to get to 10X that kind of performance. We rolled out 10 nodes and we hit the performance requirements instantly. That's the kind of thing you see when you can scale out a SQL database. We're doing all of that fully transactionally, it's a full SQL engine, and we're running at 10 times the speed they were getting on a traditional SQL database. Lots more to be said about those.

So Eric, I did want to make sure that we've got enough time to talk. The main point I wanted to make, going back to the initial comments, is that as a database decision maker, whether you're an application developer or a CIO or anybody else, you have in recent years been faced with this kind of choice: do I really want to be next generation, do I want cloud and containers and microservices and so forth, or do I need to just hang onto my trustworthy database systems that run SQL and do transactions and fit in with all of my business processes and for which I can hire people relatively easily, and so on. And I think what we're talking about here with elastic SQL is that those two things are coming together. Just as has happened in the analytics and big data world, there's a resurgence of SQL in the context of cloud, and these kinds of cloud-native databases are emerging and are in fact succeeding in very large deployments worldwide.

ERIC:     Yeah, that's great news and this -- I'm going to push this slide here again with some of these fantastic brands. I mean the London Stock Exchange, you want to talk about mission critical, that's some serious stuff. Can you talk about how a company would migrate to using NuoDB from a traditional system? What does that process look like of populating the database and, you know, how long does it take? I'm sure what you want to do is be able to run concurrently for a period of time to make sure you didn't miss anything. Like, you know, how painful is it to migrate off of a traditional relational database if you are an organization like the London Stock Exchange?

BARRY MORRIS:  Very good question, obviously. There's bound to be an 80-20 rule. The vast majority of database systems we're actually able to move across quite quickly. We've moved DB2 applications, big DB2 applications, in about four days. So it can be done very quickly. And of course there are circumstances under which people are using some sort of exotic things that are harder to do. The steps in the process are pretty much what you said: you've got to basically suck the data out of the existing database system and put it into ours. Our APIs are standard APIs. It's JDBC and ODBC and all those kinds of standard things. Our SQL is extremely standard ANSI SQL and quite rich. And so moving the data is relatively straightforward. A lot of the time is actually spent more on the testing and, you know, performance characterization and so forth than on the work of actually moving the data or moving the workloads. But again, it's an 80-20 rule. There are always going to be some things that we do one way and somebody else does a different way, and we work our way through those.
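The extract-and-load step Barry outlines can be sketched generically. Here two sqlite3 databases stand in for the source system and the target, and the trades table is invented; a real migration would go through the source's JDBC/ODBC drivers and the target's bulk-load tooling:

```python
import sqlite3

# Stand-in for the legacy source database.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER);
    INSERT INTO trades VALUES (1, 'ABC', 100), (2, 'XYZ', 250);
""")

# Stand-in for the target database, with the same (already-translated) schema.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")

# Stream rows across in batches rather than loading everything into memory.
cur = src.execute("SELECT id, symbol, qty FROM trades")
while batch := cur.fetchmany(1000):
    dst.executemany("INSERT INTO trades VALUES (?, ?, ?)", batch)
dst.commit()

count = dst.execute("SELECT COUNT(*) FROM trades").fetchone()[0]
print(count)  # 2
```

As Barry notes, the mechanical copy is the easy 80%; the time goes into schema translation for the exotic cases, plus the testing and performance characterization afterward.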

ERIC:     OK, and in terms of elasticity let’s talk about adding nodes because you mentioned a few comments about that early on in the webcast. Let’s say you get that spike of activity, it's the Christmas season, some advertisement went out early, you didn't expect that much activity, so you've got a spike in traffic. How dynamically can those nodes be provisioned and how do you actually manage that? In other words, from a management console do you give the system the capability to dynamically scale up X number of nodes or is that a manual process? How does that happen?

BARRY MORRIS:  So let's just get back to the point that there really are sort of two tiers of these nodes, the lower tier being the guys that are maintaining backing stores, what we call storage managers. And the two tiers scale out independently of each other. Very important point, OK? There are times when you want to add more storage managers because you're wanting to partition the data in some way, or you want more redundancy, or you want to support a, you know, backup in a different data center or something like that. You can scale out the storage manager tier independently. But the question you asked really, which is about more users and more throughput, typically that's about scaling the in-memory tier, and that's these transaction engines that we referred to. They don't touch the disk, they're really only in-memory, they basically run as an in-memory database. And here's what happens.

You simply -- either you can do it manually, you can have it automated, you can run it -- we have demos in which we have Kubernetes just adding nodes when needed, when it detects some kind of threshold that's been crossed. And the new node comes in, it hands over some security credentials, it joins into the database, and at some point the load balancer starts pointing connections at it because it notices it isn't doing anything yet. And at that point it just starts loading data and doing things. How long does that take? The startup is milliseconds. Sometimes, depending on how much data needs to be loaded, that may be seconds, but it's not many minutes and it's certainly not hours. Node startup time is typically dominated much more by how long it takes to fire up an Amazon node, for example, than by how long it takes to start up the database servers.

And of course, deleting them is similar. I mean you can crash them if you want, but shutting them down, they shut down within, you know, within milliseconds. So yeah, so the scale out and scale in, some people do want the scale out to be automated, but quite often they want to have it driven by some other business driver. They know that markets are opening and there's a FOREX spike at the beginning of the day, so you might as well just scale up to be ready for that, and that's not something that's driven off of kind of monitoring data.

ERIC:     That leads to the question I was going to ask you, which is monitoring the system. When you're talking about especially the spikes, or the peaks and the valleys, what kind of visibility does the end user have through the NuoDB platform to see where things are, to better understand yes, we need to start adding some nodes, or we can start deleting some nodes? How does that visibility occur?

BARRY MORRIS:  Right. Well, so we're huge believers, first of all, in transparency, so we're publishing a huge amount of stuff that people can track. And secondly, we're huge believers in integrating with the kinds of systems that people are running. And so what that boils down to is providing that information to Kubernetes or Mesos or whatever it is that you're using to run your cloud or your environment, or indeed, if you're just running it locally, making that information available. Now, that information includes transactional throughput, it includes latency, but it also includes things like, you know, sort of network data, it includes our object loading and ejection data, it includes lots and lots of things. And so typically, for this kind of scale out and scale in, you can do it simply based on system level information like, you know, processor utilization or network utilization or something like that. But we're providing all of this information, and what we're finding is that the right thing to observe is not identical from customer to customer.

ERIC:     Yeah, OK, good. We have an attendee asking a question about enabling auto scaling. How would an administrator enable that kind of thing in NuoDB?

BARRY MORRIS:  So it's a good question. The system does not implement an auto scaling algorithm itself. What it does, for example, on Amazon, is hand over to the Amazon auto scaling tools. We tell Amazon what to do when it gets a signal from us, and it goes ahead and scales out the Amazon infrastructure and fires us up. That's typically our goal, to be integrated into whatever you're trying to do. On a single application basis it's actually very simple, because starting up a node is literally a, you know, a Linux command or a REST interface call, and so you can start that up very trivially and you can monitor what we're doing very trivially. You can do that in a shell script or any script you want. At scale, let's say, people will be handing that over to some cloud monitor or some kind of, you know, orchestration system.
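
The pattern Barry describes -- watch a metric, and when a threshold is crossed, invoke whatever command or REST call starts or stops a node -- can be sketched in a few lines. This is a hypothetical illustration: the threshold values and the `decide_scale` helper are assumptions made for the sketch, not NuoDB's actual tooling or API.

```python
# Hypothetical sketch of a threshold-driven scaling script of the kind
# described above: sample a metric, map it to a scaling action, and let the
# action branch invoke whatever actually starts or stops a node.
# Thresholds and names here are illustrative assumptions.

def decide_scale(cpu_utilization: float,
                 high_water: float = 0.75,
                 low_water: float = 0.25) -> str:
    """Map one utilization sample to a scaling action."""
    if cpu_utilization > high_water:
        return "scale_out"   # e.g. shell out to start another transaction engine
    if cpu_utilization < low_water:
        return "scale_in"    # e.g. shut an idle engine down
    return "hold"

if __name__ == "__main__":
    # A few samples from an imaginary monitoring feed.
    for sample in (0.90, 0.50, 0.10):
        print(sample, "->", decide_scale(sample))
```

In practice the two action branches would shell out to the node start/stop command, hit a REST endpoint, or hand a signal to the cloud provider's auto scaling service, as described above.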

ERIC:     OK, no, that makes sense. And I'm looking at this Elastic SQL design approaches slide here and, you know, when you get to number three, synchronous replication, I can't help but think about all the excitement around the Hadoop movement, let's say circa seven, eight years ago, and how disruptive that was to planning going forward, and the fact that, you know, of course Hadoop comes built in with a replication factor of three, so every time you store data it's stored in three different places, but that's a lot of overhead, right? And so when nodes go down it causes some disruption and it's just frankly not that -- let's just say it's not built for speed. I think what you guys did is you designed a solution, again to get back to the mantra here, that would be scalable and durable. So you had to put a lot of thought into understanding how you're going to accomplish that, and I think one of the key points you made here is this two-tier architecture, right?

BARRY MORRIS:  Yeah. So let me sort of make a couple of distinctions, because you're touching on some important topics. Often in the database world people refer to replication, and what they mean is what I'm going to call post-commit replication. In other words, data gets committed somewhere and then it gets replicated, which is by definition not transactional replication. And that is how a lot of systems work. You know, basically you've got a database system sitting on one node or one data center, and it does a commit of whatever the transaction is, and then, by monitoring the log or something, that transaction or those deltas get shipped to another database system and it gets updated. So it's following, it's asynchronous, it's not up-to-date, it's not transactionally committed.
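
The post-commit replication being distinguished here -- commit locally first, ship the log to the replica afterwards -- can be sketched as a toy. The class names and the log-shipping mechanics are illustrative assumptions, not any particular product's design:

```python
# Toy illustration of post-commit replication: the primary's commit completes
# locally, and the change only reaches the replica when the log is shipped
# later, so a replica read in between sees stale data.

class Primary:
    def __init__(self):
        self.data = {}
        self.log = []        # committed changes awaiting shipment

    def commit(self, key, value):
        self.data[key] = value          # commit completes here...
        self.log.append((key, value))   # ...replication happens afterwards

class Replica:
    def __init__(self):
        self.data = {}

    def apply_log(self, log):
        for key, value in log:
            self.data[key] = value

primary, replica = Primary(), Replica()
primary.commit("balance", 100)
stale = replica.data.get("balance")   # replica hasn't seen the commit yet
replica.apply_log(primary.log)        # asynchronous log shipping catches up
fresh = replica.data.get("balance")   # now up-to-date
```

The window between `stale` and `fresh` is exactly the "following, asynchronous, not up-to-date" behavior described above.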

The Hadoop kind of model is something similar to that, but we didn't go that route. What we've got going here is that when we talk about synchronous replication, the way to think about it is that these systems -- let's take Google Cloud Spanner -- have both replicas and partitions of your entire data set sitting on different nodes, and in fact different clusters with different nodes, and in order to be truly transactional, if I change data item A in one place it needs to transactionally be changed in every other place at the same time, before that transaction completes. So I talked about post-commit replication. This is intra-transaction replication.

ERIC:     Right.

BARRY MORRIS:  It's making sure that everything is up-to-date before I return to the user and say that it's committed. In order to do that, fairly obviously, you need coordination between machines. In order to do that, fairly obviously, you're going to have to go across a network, and in order to do that, fairly obviously, you're operating at something like three or four orders of magnitude slower than if you were doing it on a single machine.

ERIC:     Right.

BARRY MORRIS:  And so that's the fundamental issue with this model, the kind of synchronous replication or two-phase commit style of model: by definition, in order to get that kind of coordination between nodes, you are going to have to send messages, multiple messages by the way, back and forth between the nodes, and that slows you down. That's the slow path. And what Jim has designed in the DDC model is something which is much, much more sophisticated. It's much finer grained, it's got an optimistic sort of asynchrony about how it's updating things, and it's heavily using something that Jim invented 30 years ago which is called multi-version concurrency control, which you can think of as version based concurrency as opposed to lock based concurrency. So what Jim did here was a sort of a turbocharged version of MVCC. And when you boil all of that goop down to the detail, what it really says is that NuoDB is very low latency, in-memory database class of latency, and although it's distributed, its communications, rather than being synchronous, which is what the Google design and the two-phase commit design are, are asynchronous, which allows it to run in multiple data centers. So you know, apologies for getting into the gory details of it, but it turns out that this slide that you're looking at really tells the story in a very interesting way.

ERIC:     Yeah, this is good stuff. We've got a very good question from another attendee here and I'm guessing the answer is yes, but I'll throw it over to you as it came in. The attendee asks does this architecture support data warehousing slash marts with real time capabilities to mirror from OLTP systems?

BARRY MORRIS:  Oh, wow, OK. So yes, but with some qualifications. Let me say this: the core architecture could be taken very far down that track. We could go down the track of building, you know, a full-on competitor to Vertica or Greenplum or something like that, and achieve that. We haven't done that. This system today is optimized for operational database use. A general-purpose database. Think of it as Oracle for the cloud, basically. That's what it's designed for. Now, having said that, it is extremely good at something, Eric, you mentioned earlier, which is HTAP, hybrid transactional analytical processing. It's very good, in other words, for doing analytics on your actual operational data. That doesn't mean that we're going to manage multiple schemas and star schemas and things like that. You're doing the analytics on the standard operational schema of the database. The reason it can do that comes down to two things.

One is that whereas lock based concurrency is something which typically causes readers and writers to trip over each other, this multi-version concurrency control that we talked about, and this is true of all MVCC databases, allows analytics to effectively be done on a snapshot while everyone else continues to mutate the database. That's part one. Part two is, and I think that your question kind of alluded to this, that you can add a node in real time, have it do whatever analytics you want on the database, and then shut that node down again, and the performance impact of that on the rest of the system is quite minimal. And so for these kinds of workloads, what we call HTAP workloads, the system is extremely good, but I wouldn't put us up against, you know, the sort of traditional warehouse engines. As of today that's not what we're trying to do.
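
The snapshot behavior attributed here to multi-version concurrency control can be sketched with a minimal versioned store: each write creates a new version stamped with a transaction id, and a reader sees only versions that existed when its snapshot was taken, so an analytics query keeps a stable view while writers continue to commit. This is a generic MVCC illustration, not NuoDB's internal design:

```python
# Minimal MVCC sketch: writes append versions instead of overwriting, and
# reads are filtered by a snapshot id, so readers never block writers and
# an analytics query sees a consistent point-in-time view.

class MVCCStore:
    def __init__(self):
        self.versions = {}    # key -> list of (txn_id, value), append-only
        self.txn_counter = 0  # monotonically increasing commit stamp

    def write(self, key, value):
        self.txn_counter += 1
        self.versions.setdefault(key, []).append((self.txn_counter, value))

    def snapshot(self):
        return self.txn_counter  # remember the latest committed txn

    def read(self, key, snapshot_id):
        # Newest version visible at the snapshot; later writes are ignored.
        visible = [v for txn, v in self.versions.get(key, []) if txn <= snapshot_id]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("orders", 10)
snap = store.snapshot()        # analytics query begins here
store.write("orders", 99)      # a concurrent writer keeps mutating the data
analytic_view = store.read("orders", snap)             # stable snapshot: 10
current_view = store.read("orders", store.snapshot())  # latest state: 99
```

The analytics reader holds onto `snap` for its whole run and never takes a lock, which is the reader/writer independence described above.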

ERIC:     I see. OK, good. We have more really good questions coming in here so let me throw this one at you. An attendee is asking can you explain how micro services can take advantage of NuoDB in a Docker cloud architecture?

BARRY MORRIS:  Absolutely. One of my favorite topics. So, as you know, Docker and containers are fantastic. I think they're a huge breakthrough. I think we still haven't really figured out how much of an impact they're going to have. But they're particularly appropriate for stateless kinds of systems, and that makes it easy to deploy these Docker systems sort of arbitrarily. The issue always comes up, well, what do you do about state? And in particular around micro services, which goes a step further. What you want for state is something which is easily, dynamically relocatable, which is what we are. So our transaction engines, which are the in-memory systems, you can fire up anywhere there's an IP address that's got access to the rest of the system. And so you can start those up on the same nodes as your Docker instances. Of course we run in Docker ourselves. Dockerizing us is a natural thing. And so you get very low latency, good access, etc.

Two more points. On the storage element of it, recall that I said we've got multiple storage managers with redundant storage, which is how we recommend that you run it. Obviously you don't have to have it redundant, but we'd recommend that you have two or three separate copies of the data. You can add these nodes any time you want. I can walk up to a running NuoDB system and say here's an empty storage manager, and it will populate itself, join in, and be a peer of the preexisting ones, which I can switch off at that point. So the system has, even at the storage layer, the ability to move the data around, and you can do that with partitions of the data as well. The last point I was going to make, as it relates to micro services, is that one of the big questions people have is this: in the micro services philosophy the idea is very independent services that are not dependent on each other in terms of deployment, release trains and so on, and the interdependencies are API based.

But at the same time you also want that data to be somehow centralized, because you're wanting to be able to do things like process-wide analytics and so forth. And so a distributed SQL database is particularly good at this. Each service can maintain its data independently while that data sits in the same effective logical repository, and you can do anything you want analytically. So we love the Docker environment, we love containers, we love micro services, and we think that there's nothing on the planet that's as well suited to those environments as ourselves.

ERIC:     OK, good. We've got some more really good questions coming in here. One of the questions is pretty simple. Could you take a minute to explain HTAP? So the full concept of hybrid transactional/analytical processing. This kind of goes back to what we were talking about at the top of the hour, and if you would, if possible, could you kind of compare and contrast what you're doing with SAP's whole movement towards this in-memory architecture with HANA? Because they're attacking some of the same stuff, but how do you differentiate?

BARRY MORRIS:  Yeah, absolutely. So HTAP, just to make some further comments on that, a lot of HTAP is about applications that include analytics, as opposed to, if you like, research analytics or batch analytics or stuff like that, which are very valuable things. And we talked earlier about AdWords. In the time that you do a Google query, the main query of course does a big index lookup to go and get you whatever your pages of links are. In parallel, AdWords rushes off and goes and does a market optimization for what ads are appropriate, which ones have been paid for, which ones are within the right time boundary or geographic boundary or whatever, optimizes all that stuff, does a bunch of kind of transactional stuff on the back end, and still within the sort of half second or so of the page coming back, it displays those ads for you, right?

That is an analytical undertaking. It's doing analytics, right? And that's a typical HTAP kind of a situation. There are lots of others, but that gives you a sort of a picture of the kinds of things that you're doing. And you're doing it at gigantic scale. So HTAP is about this idea of being able to do analytics on your operational data in real time, and that data is absolutely fresh, it doesn't have to be moved into some back-end system or anything else. Once again, there are circumstances under which you do want to move it into back-end systems and do all sorts of other things, and that's fine. We're just saying that's what HTAP is all about.

ERIC:     Yeah, that's good. Yeah, go ahead.

BARRY MORRIS:  Yeah, sorry. So Eric, you had a sort of a secondary part of that question?

ERIC:     Yeah, well, we're just kind of asking how you’d compare and contrast with what SAP is doing with HANA. I mean of course they've got some pretty specific use cases they're building around and I think that you -- I'm just guessing here -- that you're a bit more flexible in terms of the use cases that you can empower whereas I'm guessing HANA is of course focused on those heavy industrial clients, Fortune 500 companies with global presence, that kind of thing. But can you talk about that for a second?

BARRY MORRIS:  Yeah. I mean, you know, we know a lot about HANA and I'm not keen on talking too much about other people's technology. I think it's a very interesting approach. What they're essentially saying is we're going to load the entire database into memory, and we're going to have, if you like, both column oriented storage and row oriented storage in that system, and you can pick and choose, and we're going to be able to handle different kinds of workloads as a consequence.

Our model is very different. Remember I said that at the core of this -- and it's actually much more important than it sounds -- are these intelligent objects, these kind of data containers that we internally call atoms, and these data containers actually do all the work. At a deep level the system doesn't really know about the storage formats, whether it's column oriented or row oriented or anything else. These are just objects, and the objects are intelligent. You can tell them to do things, they do things. And so for us, questions of whether it's row oriented or column oriented don't really matter. What matters is our ability to load these objects and have them efficiently maintain, you know, a consistent state across potentially quite large clusters of machines.

ERIC:     Yeah, that's cool stuff. And you know, we talked a bit about Docker here and containers and the whole issue around statelessness, and there's another component I'd like to throw into that conversation, which is application design. Because what we're seeing here is the foundation shifting and becoming more flexible, frankly, and that changes how application developers think about what they create, right? Can you kind of talk about how developers need to start rethinking the architecture of the foundation as being more nimble, as being more flexible, and how that changes the way they design applications?

BARRY MORRIS:  Yeah, although maybe I'm going to say something slightly different from what you might expect. And my analogy is we've got a requirement for a new type of train service and there's an existing set of railway lines out there. One approach is to say let’s rip up all the railway lines and re-lay them so we can deliver the new kind of service. And a different approach is to say can't we design a smarter train to run on the existing infrastructure? And in this sense the existing infrastructure is SQL and transactions and we're saying you already know how to build those applications, you've got the tool sets, you've got the ecosystem, you've got your standard business practices, you've got your training, you've got, you know, everybody -- you've got your backup strategy. That's the railway line, that's the infrastructure, and our job is to make it so that you don’t really have to do anything differently.

So let me give you some concrete examples. From our perspective, you know, it's JDBC. That's the same thing you use to talk to Oracle, right? It's a single logical database even though you've got all these kind of funny things going on under the covers and objects being loaded and sort of however many nodes involved and things dynamically moving around. At the end of the day the application developer simply sees a traditional single logical database. It's got some tables and it's got some rows and it's got some indexes and the rest of it and you don’t see all that stuff going on under the covers. And that's part of our point. What we want is for application developers to be able to hand over all of that operational decision making about performance and about which machines it's running today and all of that is somebody else’s challenge. It doesn't come into the application developer’s mindset. They are simply building a SQL application, period.

ERIC:     OK, good. We've got a couple more good questions. I'll throw one over to you and I'll give you a very technical question that came in. You talk about migrating systems, right? So this is always one of the big questions for a vendor such as yourself. Are you going to go out and find net new business, meaning net new use cases, or are you going to be used to supplant existing technologies? And it seems to me we're going to see a good bit of both those categories moving forward, simply as companies recognize that the foundation is changing and that if they want to remain nimble and agile, and I think that's going to be a hallmark of success going forward, they need to have a more robust infrastructure underneath them. But you've got one of these lines here in your matrix, migrate existing SQL applications. Well, that's something that you really have to think about if you're going to move over, and you handle that. Can you kind of talk about why you're able to facilitate that specific use case? Because I'm guessing that would usually be one of the red flags that users are going to throw up: hey, I don't want to change the app I'm using.

BARRY MORRIS:  Yeah. Well, and there are a couple of questions in that so let me sort of take it quickly. First of all, we've got a mix of people that are building new applications and people that are moving existing applications. There are many existing applications that people are just going to put on the cloud running on a single server traditional database system and they're fine with it. And that's fine with us. But there are many applications, they're often strategic applications, where people really need the elasticity. Either that's about performance or it's about up time or it's about kind of cloud enablement or something, and so we do both and we're very happy to do both. What we say is that -- why is that check mark there? The answer is that you've got to have rich SQL support to do this. The kinds of things people are doing with our system, both in terms of kind of SQL capabilities and in terms of performance, in some cases are quite extreme.

They put us up against traditional relational databases and they say, "We want to see you on similar hardware being at least as fast, and then we want to add more hardware and see you go much faster." Stuff like that. The reason that there are not check marks elsewhere -- for example, if you look at Google's offering, their SQL is not pure SQL, it's not rich SQL. In fact, the recommended way to write things for their database is to not use SQL at all. And we're pure SQL. We're pure ANSI SQL and we're rich ANSI SQL, and that's what people need. We also have things which people think of as more exotic capabilities, including cursors and stored procedures and other things that are deeper in the SQL stack. But that's what that's all about. And yes, many of our biggest customers have not only moved applications, but we've got big customers that have literally bet the farm on us. We're talking about multibillion dollar companies that have simply said, "We're ripping out existing SQL solutions and replacing you across the board."

ERIC:     That's the kind of evidence you're looking for when you're out there talking to prospects, right? When you've got those major league industrial use cases and you've stepped in and hit home runs. That's what people want to hear. Here’s a very good technical question and I'm curious to know the answer myself. The question is if you replicate data in memory on demand isn't that the same as synchronous replication? Can you kind of talk about that for a second?

BARRY MORRIS:  Wow. That's going to take longer than a second. There's a patent on this and you're very welcome to go and read it if you understand patents. The answer is no. That would require us to do essentially what Google's doing, which is to have, you know, synchronous replication between the nodes. I would remind you that under the covers, and apologies to people that are not as technical, what's going on is a transactional system, and so if you think about the messages that are going between these nodes, a transaction gets started, a whole lot of mutations happen that get essentially published around and subscribed to, and then at some point there's the end of that transaction, which takes the form of either a rollback or a commit. And that window of different things happening allows us to do all sorts of interesting things. So I don't have time to go into it right now, but no, we are not effectively trying to maintain a distributed shared memory grid, you know, which is synchronously replicated. That would require much tighter coupling. The system is close to 100% asynchronous because of what I just said, but Eric, it's a much longer conversation, and we'd be very happy to talk to people about how that works.

ERIC:     OK, that sounds great. Well, let's see, folks, we've burned through an hour and some change here. I think there's one more question lingering in the back of my mind, but it doesn't come to me at the moment. But you know, this is just fascinating stuff. I guess one question I'd throw out to you is cloud infrastructure, right? So you'd mentioned Amazon Web Services of course, Azure is out there, and there's a relatively new company that we've actually been talking to called Wasabi which is promising to be something like five times faster than AWS at 20% of the cost. As those new offerings come down the pike I'm guessing that you guys are again pretty well suited to leverage that sort of thing, right? I mean I always worry a little bit about the dominance of Amazon Web Services, and the irony of Microsoft saving the day. I think we're going to see some more players step into that market for purpose built type solutions. But you guys don't care, right?

BARRY MORRIS:  Yeah, I mean we're cloud independent. We've gone to great lengths to make sure that we don't make assumptions about specialized hardware of any sort. So we're running on commodity hardware, be it machines or networking or anything else. I would remind you that our ability to run in multiple data centers simultaneously also includes, naturally, the ability to run in multiple clouds simultaneously. So if you chose to, you could run some of our nodes in AWS and some of them on Wasabi in the same database at the same time.

ERIC:     That's good stuff. And you know, one of the things that the Wasabi folks talk about is they're kind of throwing down the gauntlet on standardizing APIs. So what they've done is they've mimicked the Amazon Web Services API such that you could just shift everything over from one place to the next. Do you see that as a potential standard coming down the pike, establishing some standards around APIs for cloud storage?

BARRY MORRIS:  You know, our view on it is that there's obviously some kind of de facto standards happening at various levels of the stack and that's fine. But for the most part we keep out of the kind of cloud war stuff. And so we happily run on AWS, we happily run on Azure, we'll happily run on Google. I don’t know that we've tried running on Wasabi. We'll certainly run on IBM SoftLayer. And we just simply try to keep out of that. A lot of our customers are wanting cloud independence. One of the reasons that they would choose us is so that they can decide on Monday morning to move to a different cloud or in some cases run on one cloud in Europe and another in the US for whatever reasons. That's the sort of customers, you know, that we're dealing with.

ERIC:     That's great stuff, folks. Thanks for some fantastic questions out there. For our attendee friends, we do archive all these webcasts for later viewing. Feel free to share them with your friends and colleagues, and a big thanks to Barry Morris and his team over at NuoDB for sponsoring our database research this year. Good stuff. Things are changing. There are lots of options out there and, like you say, Barry, I love the whole capacity to move from one environment to another. Nobody wants lock-in these days. With that we're going to bid you farewell. Thanks again, folks. We'll talk to you tomorrow at 1:00 eastern for a new series now streaming. Don't miss that. My good buddy Damian Black is going to be back in the hot seat, building an application in minutes for IoT analytics. Talk to you then. Take care, folks. Bye bye.

BARRY MORRIS:  Thanks, bye bye.