Webinar: How to Evaluate an Elastic SQL Database

Listen to this on-demand webinar for a technical discussion about how to evaluate an elastic SQL database, why it’s different from evaluating a traditional database, and what to consider during your evaluation.

Video transcript: 

JEFF BOEHM:

Hello, and welcome to today’s webinar on how to evaluate an elastic SQL database. My name is Jeff Boehm. I am the chief marketing officer at NuoDB and I am joined today by Tim Tadeo, a solutions architect with NuoDB. I’m going to start today by introducing the elastic SQL database, and why and how you should be thinking differently about databases as you move to a modern cloud architecture. Tim will then do a live demonstration showing how to evaluate key aspects of an elastic SQL database. And I’ll wrap up with a pointer to resources available for you to conduct your own evaluation. At the end of the presentation, we’ll be taking live questions from the audience.

At any time during the webinar, you can enter your questions using the question box on the right side of the screen, and we’ll address them at the end. We are also recording today’s webinar and will share the replay link with you, should you wish to revisit it or share it with your colleagues. So with that, let’s get started by exploring the database landscape, starting with today’s traditional databases and some of their inherent challenges.

Our world has moved from the mainframe, to client server, to distributed cloud computing. Advancements in everything from hardware to processing power, application architectures to development processes, have enabled organizations to take advantage of cloud benefits, such as agility, elasticity, and scale out. One key element, the database tier, has remained stubbornly unchanged. Several decades ago, the database market underwent a fundamental shift, as the world moved from mainframe to client server, ushering in a new group of databases, such as Oracle, Informix, Ingres, and Sybase.

A similar transition is happening today with the shift to a distributed cloud architecture, as the traditional relational databases were not built for this distributed world. Today’s business challenges are fundamentally different, as you need to meet requirements of real-time access to data and handling highly dynamic workloads with reasonable resource and budget constraints. Databases can accelerate or constrain your business because your choice of database technology affects everything from downtime, to application performance, to your ability to scale, to the overall return you can achieve with your investments. The database could be considered the lifeblood of the modern application. If you trust your data, you can trust the application.

So what’s needed to truly be successful? Organizations can’t risk everything by starting completely fresh. One of the reasons why traditional relational databases are so pervasive and successful today is because they do some things really, really well. They provide consistency and durability for systems of record. Financial institutions count on databases to perform transactions that the world can trust. And when data and transactions are compromised, there are huge, devastating consequences. And everyone knows how to talk to and access data in databases today. It’s easy to find developers who know SQL and are accustomed to the database handling core data management logic. While traditional relational databases do have their limitations, as we’ll explore, they bring huge advantages, advantages developers and operators don’t want to lose.

But at the same time, as companies transition to a modern cloud architecture, they need a database architecture that is built for this new world, a database that can easily scale out and back in on commodity hardware to meet demand, a database that brings deployment flexibility to run on premises, in containers, in the cloud, or in a hybrid architecture, wherever you need the data to reside, and a database that never goes down. Thus, what you need is a traditional SQL database that can scale out to the cloud elastically.

But today’s relational database systems can’t do this. So developers are left with a few options or compromises. One option is to look at NoSQL technology that brings the scale-out elasticity, but compromises many of the benefits of traditional relational database architectures. The other is to look at recent advancements in relational database architectures. Recognizing the need to scale out, traditional relational databases now have options including replication, shared disk, or sharding. Read replicas, not shown here, simply allow you to scale out read workloads by replicating data to multiple databases for read-only transactions. For read-mostly applications, this could be a suitable option, but really does not address mixed workloads.

Traditional vendors have then looked at various scale-out approaches, using either shared everything or shared nothing, also known as ‘sharding,’ approaches. These approaches have distinct advantages and disadvantages, compared to each other, which I won’t go into here. But as those of you who have implemented either approach can attest, both have significant drawbacks. In addition to requiring the purchase and implementation of complicated and expensive add-ons, they typically add significant complexity for both the application developer and database operator, and they really don’t provide the true elasticity that companies are looking for.

Instead of these compromises, what if you could have a database that truly brings together these worlds of traditional relational capabilities and elastic scalability, a database that could easily scale out and back by simply adding a node, a database that never needs to be shut down, providing continuous availability for your application, a database that’s tolerant to hardware and software failures, failures that are much more common with commodity cloud hardware today, a database that automatically handles archiving and fault tolerance, without complex add-ons or the need to provision hardware that sits idle 99% of the time, waiting for a disaster to occur, and a database that can automatically balance loads without changes to the application tier?

This is what Fortune Magazine referred to as the “holy grail for database technology.” This is what we call Elastic SQL. Elastic SQL databases provide the scale-out simplicity, elasticity, and continuous availability that cloud applications require, without foregoing the transactional consistency, durability, and SQL interface that databases of record demand. This is an emerging class of database technologies that promises to bring these two worlds together.

So if we look at the landscape today, we see a mixture of options out there. We see traditional relational databases that are excellent at providing a business database of record, with key capabilities around SQL and ACID properties, but as we discussed, really can’t scale out very effectively. NoSQL technologies can scale out elastically, but forgo strict ACID compliance and full ANSI SQL support, making them less suitable for business-critical applications.

New products, such as Google Cloud Platform Spanner and CockroachDB, promise the combination of these areas, with traditional relational designs that scale out elastically. These first-generation products today provide limited SQL support, making it harder to migrate existing SQL workloads to them, and rely on sophisticated clock synchronization that limits or constrains deployment flexibility. But they hold the promise of running business-critical applications in a cloud architecture, and are examples of what we would consider to be elastic SQL architectures.

NuoDB was first introduced four years ago and has been proven in many production implementations. It truly brings these worlds together with rich SQL support for both read and write workloads, ACID compliance, and a full scale-out architecture that can be deployed on-prem, across cloud providers, and even in a hybrid cloud architecture, without sophisticated clock synchronization requirements. NuoDB has been built from the ground up to be an operational database that scales out for cloud deployments. NuoDB appears as a single logical SQL database to the application, allowing developers to focus on building great applications versus dealing with scale-out complexities. Under the hood, NuoDB has a peer-to-peer, two-layer distributed architecture that can be deployed across multiple data centers and is optimized for in-memory speeds, continuous availability, and elastic scale-out.

The transaction layer consists of in-memory process nodes called transaction engines. These handle requests from applications, cache data for fast access, and coordinate transactions with other process nodes in both the transaction and storage layers. As an application makes requests of NuoDB, the transaction engines will naturally build in-memory caches with affinity for that application’s data, allowing NuoDB to maintain high performance.

The storage layer consists of process nodes called storage managers. The storage managers ensure durability of data by writing it to disk. They manage the data on disk, they handle requests from transaction engines, and they send asynchronous messages to other storage managers to commit data to disk and to maintain copies of data in memory. These process nodes provide ACID guarantees, data redundancy, and data persistence. Within both layers, NuoDB can elastically scale out and back without any interruption to application service, simply by adding and removing TEs and SMs. This means developers can design applications to access a single logical database and not worry about handling scale-out complexity related to dynamic application workloads. Database operators can scale out the database to accommodate these workloads and not worry about adverse consequences to the application. The result is that developers and operators can truly focus on maximizing performance of both the application and the database.

Looking back at our options for scaling out a relational database, then, we see some of these newer options. Synchronous replication is an option that allows you to scale out read-write workloads across multiple databases, but requires synchronizing all transactions across nodes with strict clock synchronization between the servers to ensure consistency, placing constraints on the deployment architecture, performance, and the application itself.

By contrast, NuoDB uses what we call a durable distributed cache. As we saw, NuoDB is a distributed peer-to-peer architecture, made up of transaction processing and storage nodes, running on different hosts, even across different data centers. These nodes can be independently scaled to increase performance, throughput, and redundancy. Each of these nodes has an in-memory cache, holding the working set of data, improving performance and throughput. Each peer manages coordination and cache consistency with its peers, ensuring data consistency.

And finally, the storage manager peers ensure durability by writing data to disk at one or more nodes. In this way NuoDB was really built from the ground up to be a distributed system that provides data consistency and durability, versus a database that is being retrofitted to somewhat work in a distributed model. In practice, then, we have a deployment with one or more transaction engines and storage managers, deployed on premises, in the cloud, or in a hybrid deployment. If any transaction engines or storage managers fail, or are taken offline, the application automatically reconnects to a surviving node, without disruption to the application service. In the same way, additional TEs or SMs can be added to increase performance or resiliency. And throughout this expansion and contraction, the application itself views NuoDB as a single logical database, and no coding or changes are needed at the application level as you scale out or in.

Now that we’ve talked about the need for a new class of database, let’s have a look under the covers at how you can go about validating these claims and evaluating elastic SQL databases. I’m going to pass things over to Tim at this point, who will provide a live demonstration, where he tests out some of the key capabilities of NuoDB, including our ANSI standard SQL support, how easy it is to scale out, and our resiliency to failure. With that, let me pass things over to Tim and I will make Tim the presenter and allow him to take control and show us a demonstration of NuoDB. Tim?

TIM TADEO:

Well, thank you very much, Jeff. Thank you for the presentation. And good afternoon, good morning, or good evening, depending on where you are on the planet today, geographic-wise. So, again, my name is Tim Tadeo. I’m a solutions architect, here at NuoDB. And we’re going to discuss and, in real terms, show you what is important to evaluate, why you’re evaluating certain areas that Jeff had covered this morning.

So let’s start off for a moment about SQL. As you saw in the presentation, you know, evaluating a database isn’t so much about performance and running benchmarks -- it’s that, because it is a relational database, you want to be able to preserve and consider, right, the training and the experience you have with SQL, and the compatibility of existing applications today that you are considering porting or migrating. So those things make SQL very important overall. Now consider, as well, we’ve heard a lot about distributed databases, and one of the key capabilities that you have to evaluate again is SQL. For instance, we’ve heard about Google Cloud Spanner.

If you take a look a little bit deeper, there’s really no DML right now. Everything is API-driven. There are no delete statements. There are no insert statements. If you look at CockroachDB, there are no joins in there. NoSQL databases -- some of them have a limited SQL interface, but the other things to consider are ACID transactions, if I need those, and do I have to consider coding transaction integrity for data within my application. So these are very, very important areas.

So let’s get started here. What you see in front here is I’m just using the DbVisualizer tool. And for a lot of our audience out there that are developers, DBAs, architects, I mean, these are tools you’re very familiar with, a simple JDBC tool. Inside of NuoDB -- and we’ll see that today, as well -- we do have a SQL interface that I can run from a terminal. It should be no different than what you’ve seen in other database products, such as Oracle’s SQL*Plus, the DB2 command line interface, and so on.

So I’m going to take you through and show you the ANSI SQL compliance that NuoDB has here, so we’ll be running some different types of joins: inner joins, outer joins, multiple joins. We’re going to be doing inserts through a few stored procedures and triggers. So let’s take a look here. What we have stood up is our hockey database. And that comes with our self-evaluation guide. Jeff will touch on that later. But let’s just execute this. So I’m simply executing from my hockey database, joining on a couple of tables here. And so let’s execute that.

So the inner join, as you can see here, executes like any other SQL that you’ve seen before, OK? We’re going to come down here and do this left outer join now. And before that, we have to do some setup. I’ve already taken care of that. I simply need to update a player ID, set it to blank for a particular player ID. So when we execute the outer join, we get the proper results. And what I want to point out here, as well, about SQL compliance, is, you know, you’ll see, like, a concatenation string here I’m doing as correlations. So all of that should be very, very familiar to you. So let’s execute this. We execute our left outer join, and what we’ll return, right, is all the rows from that table, as well as the related values that I’m bringing in on that join. So, very simple.
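To make the pattern concrete, the joins in this part of the demo look roughly like the sketch below. The table and column names are illustrative guesses at the sample hockey schema, not necessarily the exact ones in the evaluation guide:

```sql
-- Inner join: only players that have matching scoring rows
SELECT p.playerid, p.firstname, p.lastname, s.goals
  FROM players p
  INNER JOIN scoring s ON p.playerid = s.playerid;

-- Left outer join: every player, with NULL scoring columns where no match exists;
-- the || operator is the ANSI string concatenation mentioned in the demo
SELECT p.playerid, p.firstname || ' ' || p.lastname AS fullname, s.goals
  FROM players p
  LEFT OUTER JOIN scoring s ON p.playerid = s.playerid;
```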

Multiple joins here, in this example. A little more complexity to this, so let’s run that. Execute. And there’s our result: 2,500 rows. And I can also limit that, as well. As you saw here, I did this on purpose to show you, you know, functionality, here. I can do limitations on how many rows I want to retrieve. Now what’s important for developers, as well as DBAs, is I have to understand in any database environment -- relational database environment, you know, how do I tune this? How do I know my access path? How much do I know about the cost of a particular SQL statement that I’m going to execute in my application? So I’ll run that. So there’s our explain output that you see in front of you, and it will tell you the columns that you’re accessing, how the joins are occurring, what the costs are. So very, very important. I’m using an index. So we have all those tools that you’re familiar with.
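The tuning check he runs is just the statement prefixed with EXPLAIN; a minimal sketch, again using guessed table names:

```sql
-- EXPLAIN reports the access path: join order, index usage, and estimated cost
EXPLAIN
SELECT p.lastname, s.goals
  FROM players p
  JOIN scoring s ON p.playerid = s.playerid
 WHERE p.lastname = 'Bergeron';
```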

So let’s proceed on -- and take a look at how we do updates through a view. I’ve already created a view here with a particular constraint on players’ first names that start with S-P. So let’s go in and take a look here at our table before we execute. So we’re accessing through the view. We see player IDs, first name, and last name. So what I’m going to do is I’m going to insert into this view a value here with a player ID that starts with S-P, first name, and a last name. So let’s take that and let’s insert that. So we insert this through our view, into the table, successful. Let’s go have a look, see if we have that. Sure enough, there’s our player ID that we inserted with the values, as you can see, up in my insert statement, down here inside the table. So we can delete through a view, as well, as in any other database that’s SQL-compliant. We’ll execute that. We’ll come back in, take a look at our table through the view, make sure we’ve got that correct. And there we go. The table has been put back.
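The view exercise can be sketched like this. The view name, columns, and the S-P filter are reconstructed from the narration, not copied from the guide:

```sql
-- A view over players whose IDs start with 'SP'
CREATE VIEW sp_players AS
  SELECT playerid, firstname, lastname
    FROM players
   WHERE playerid LIKE 'SP%';

-- Because the view is updatable, inserts and deletes pass through to the base table
INSERT INTO sp_players (playerid, firstname, lastname)
  VALUES ('spdemo01', 'Demo', 'Player');

DELETE FROM sp_players WHERE playerid = 'spdemo01';
```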

All right. So what’s important to developers, particularly, in the ease of how you develop SQL applications are common components, common functionalities, like triggers, like stored procedures. So what I’m showing you here today, you know, is how we’ve built a trigger. So the first thing I did is I needed a table. It’s called a change table, something common we see in our applications, where maybe when I fire off a trigger, I want to capture those changes. So I’ve created something called a change player table here. And I have already prebuilt the trigger for time’s sake. So let’s take a look on our system tables here. Oh, I’m sorry. Let’s come down and take a look at our trigger first, all right?

So this is how our trigger was defined, as you can see here. So on an insert, we’re going to capture some information, we’re going to take those values that we’re changing, and we’re going to put them in the change player table. And as you can see here, we use things like date functions, current time -- oh, let’s take a look, see where that trigger is defined. So as you can see, I’m accessing a system table. So we do have a system catalogue that you’ll be familiar with, with any database. So there it is. It’s called USER -- called SCHEMA. It’s built on the players table, and the trigger’s called TRG HOCKEY PLAYERS.

So we need some data. Take a look at our trigger here. So let’s go in here and select a name that we want to work with. All right. So we’ve got this alphabetically loaded inside our database. Somebody called Aalto. His height is currently 73 inches. We found out that he’s actually an inch taller than that, so we’re going to do an update statement that will fire off that trigger. We execute it. Now let’s go back in and look at the change players table here, make sure we capture, through the trigger, our update into the proper table. And there it is. Gives us our time when it was updated, the date, his name, and the operation that took place.
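A trigger of the shape he describes might look like the sketch below. The change-table columns are my guesses, and the trigger syntax follows the general form of NuoDB’s SQL dialect (AS … END_TRIGGER, OLD/NEW row references); check the NuoDB SQL reference for the exact keywords in your version:

```sql
-- Audit table that captures each change to PLAYERS (columns are illustrative)
CREATE TABLE change_players (
    change_date DATE,
    change_time TIME,
    playerid    VARCHAR(10),
    operation   VARCHAR(10)
);

-- After every update on PLAYERS, record what changed and when
CREATE TRIGGER trg_hockey_players FOR players
AFTER UPDATE FOR EACH ROW AS
    INSERT INTO change_players
    VALUES (CURRENT_DATE, CURRENT_TIME, OLD.playerid, 'UPDATE');
END_TRIGGER;
```

The height correction that fires it is then just an ordinary statement along the lines of `UPDATE players SET height = 74 WHERE lastname = 'Aalto';`.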

OK. Let’s move on to stored procedures. So in our stored procedure example that we have here, we’re just creating a procedure. We’re going to accept as input an integer. We’re going to have an output string. We will throw an exception if the value that we’ve entered is out of range -- the procedure we executed will give us an error -- or, if we give it the proper input value, it’ll give us the correct information.

So what I’m going to do here is show you the SQL interface that we have, and then I’ll execute that stored procedure. So we have something called nuosql. Like I said, it’s very similar to SQL*Plus. And if you like -- you know, if you don’t have a visual database tool or developer tool, you can do it all through the command line. So let me just come over here and copy that execute procedure string here. So the first time through, what I’m going to show is we’re going to put in a valid value that’s between 0 and 99. Let me expand my screen here so we can see that. So we’ll execute that. So it goes into our player table, takes a look, it finds who has number 37. That would be Patrice Bergeron. So let’s change that to show you how it throws an exception. So I’m going to go out of bounds on my parameter and say 100, the procedure executes, and it tells me you need to have this between 0 and 99.
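The stored procedure he runs follows this outline. The name, parameter names, and lookup are my reconstruction from the narration, and the control-flow keywords (IF … END_IF, THROW, END_PROCEDURE) follow the general shape of NuoDB’s procedure language -- treat this as a sketch, not the exact body from the guide:

```sql
CREATE PROCEDURE get_player_by_number (IN jersey INTEGER, OUT fullname STRING)
AS
    -- Reject input outside the valid jersey-number range
    IF (jersey < 0 OR jersey > 99)
        THROW 'input value must be between 0 and 99';
    END_IF;
    -- Otherwise look up the player (in the demo, 37 returns Patrice Bergeron)
    fullname = (SELECT firstname || ' ' || lastname
                  FROM players
                 WHERE number = jersey);
END_PROCEDURE;
```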

So what have I shown you here, particularly in the SQL section? It’s not to show you that NuoDB executes SQL better than anybody else. The intent is that, in the context of doing an evaluation of databases, we have to consider SQL very highly on our priority list.

So next, I want to show you some scale-out. Jeff showed you some slides about how NuoDB looks and things you’ve got to consider during an evaluation about scale-out. But why do we have to consider scale-out? Well, we need to think about how easy it is to implement, and traditional databases really do require putting in an underlying replication layer. An example that comes to mind for me is something like Oracle with GoldenGate. With SQL Server, you need SQL Server replication. Some other things you have to consider: You know, how is the administration? How do you handle all that? You know, the care and feeding, right? Is it easy to provision, OK? And, you know, how quickly can I come online? Do I have restrictions where something can be read-only? You know, will I have true scale-out?

So what I’m going to do is show you how we’re scaling out here. And before we go onto that, I want to quickly show you our environment. We’re running up on AWS, EC2 cloud. And what I simply have here is we’ve got an app server, and we have NuoDB processes running on two different instances, a NuoDB 01 and a NuoDB 02. Sort of small, medium type of instances, quickly spun up here on AWS. And what we’re going to do today is show you failover and scale-out. So let’s proceed here.

So the first thing we want to look at is our NuoDB system. We have something called our NuoDB Manager. And what that allows you to do is look inside NuoDB. The first thing we want to do here is I wanted to show you the domain and what that looks like. Sorry. Wrong instance. OK. So let’s go on the NuoDB Manager. Let’s take a look and see how we’re configured here. So what we see, and we saw this earlier illustrated, is that we have two instances of NuoDB, right? Separate instances. And you can tell that by the distinct IP addresses there. So on the first one, we have a storage manager, which is where we persist data. That’s our durability layer. And then we have the transaction engine. That’s where the actual execution takes place inside there. And this is how we’re going to get scale-out. And then we see on our second instance, again, a transaction engine and a storage manager.
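For reference, this inspection happens in the nuodbmgr shell that shipped with NuoDB at the time; a session looked roughly like the sketch below. The command spellings are from memory and the password is a placeholder, so verify against your version’s documentation:

```
$ nuodbmgr --broker localhost --password <domain-password>
nuodb [domain] > show domain summary    # hosts in the domain, and the TEs and SMs on each
```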

So what does that look like when we scale out? Well, we need to have a workload running. We have something simple that we’ve created. You have this in the evaluation -- self-evaluation guide that we have. And what this does is simply put a load on the system. So to quickly explain, this is a Java program here. You’ll have access to it. If you wish, you can highly customize this. This is out on GitHub, on the NuoDB GitHub. And you can do a lot with it. So just to quickly explain what we’re going to do here, I’m going to run it for 60 seconds. I am trying to achieve a transaction rate of 2,200 transactions per second, OK? So bear in mind, we’ll see this again. I’m only running two transaction engines, OK, in two different instances. All right. So this will run for about 60 seconds, and this’ll be a little bit more clear to you as we run this.

So I’m going to start that up. It’s going to put a load on our system. And then I’m going to go over to this tool we call Grafana, right? Our professional services people put this together. We use this to measure performance of our system to display for customers. So what’s actually happening here that you see in front of you is, this is a period of the last five minutes. Again, remember I’m trying to achieve a transaction-per-second rate of 2,200. Now why am I doing this, right? Well, as I’m evaluating, I have to understand how a database can easily scale, right, how I can scale out. Do I need to pre-provision hardware? Do I have to do a lot of configuration and deal with complexity? And the answer is those are things you definitely have to consider. So that’s going to run for 60 seconds. We’re almost done here. But as you can see, I’m having a hard time getting it up to that 2,200 transactions per second, right? The run will finish in just about another few seconds. There we go.

So I’m going to start this again. However, I want to work on scale-out. That’s what we’re trying to do here. So what I’m going to do is start, on each instance, an additional transaction engine. And what that’s going to provide us is the scale-out. So as you saw, I had a NuoDB 1 and a 2. So I’m going to start a transaction engine. There. It’s coming up. I’m going to start one over on NuoDB 01. So we’re doubling. There we go. And let’s show again what our host looks like, or hosts, I should say. And there we go. As you can see, I scaled out two more transaction engines.

Let’s start up our transactions again. So nothing’s changed here. Again, what’s important here is I’m trying to achieve a rate of 2,200 transactions per second. So I’m going to come back over here in Grafana. It takes us a moment or two here to catch up. We’ve got four transaction engines. And as you can see from this line here, the gold line, down here -- these are my TEs, and I’m scaling up to four. But like I said, it takes a moment or two here. And there we go. I’m already climbing up above my target rate. And as you can see, I’ve hit 2,300 transactions per second. I’ve achieved that rate, OK? So that’s an example of scale-out, right?

Now if I need to scale back in -- and I can do this on the fly -- I can simply remove those processes that I started. So I’m going to scale back in. So I’ve shown you scale-out. Let’s look at scale-in. I’m just going to bring down the nodes that I created. The first one gets evicted from the system. I’ll remove the next one, which was 24. We’ll remove that. And then we’ll continue on. As you can see, there’s no effect on the execution here -- the work carries on inside the surviving engines, right, inside our processes. OK. So what’s the next step here in the scalability? Well, in that scalability factor, I’ve shown you how to get scale-out, and I can scale in. What I conceivably could’ve done, as well -- we won’t do that in this demo today -- is I could have started this NuoDB 4 down here, which I’ve got stopped, or NuoDB 3. I have that running. I’m not using it. I could simply start up NuoDB within that instance. Simply add the transaction engines and the storage managers -- an additional way to scale out, either across multiple servers again, right, adding additional servers, or I can be doing this across data centers.
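The scale-out and scale-in steps above map to nuodbmgr commands along these lines. The host names, database name, and credentials are placeholders, and the exact option syntax may differ by release:

```
# Scale out: start one additional TE on each host
nuodb [domain] > start process te host nuodb01 database hockey \
    options '--dba-user dba --dba-password <password>'
nuodb [domain] > start process te host nuodb02 database hockey \
    options '--dba-user dba --dba-password <password>'

# Scale back in: shut down a TE by the node id shown in the domain listing
nuodb [domain] > shutdown process <node-id>
```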

So let’s move on here. The next step we really want to cover in the last part is about availability and why availability’s important, right? Why do I want to evaluate that? Well, you know, I need to understand the databases I’m evaluating, how -- again, how easy is it to implement? What are the capabilities? Can I only go read-only? Can I do it between data centers? Do I have to have a cold, warm, hot scenario? Do I have to decide that? Does it have to be single purpose, right, for read-only, or can I have dual purpose? And that -- really what I’m getting at relates to, you know, number one, availability, continuous availability, but also relates to costs, right? There are certain costs associated with that.

So let’s go back and quickly review what we have here for an environment running. Let’s go up to our domain. And let’s take a look one more time here, what we have. We remove those processes, as you can see. Let’s show that. OK. And there we are, back to our original configuration.

Now because we’re running on commodity hardware -- and this is no news to anybody -- it’s meant to fail, right? So I have to have the ability to have redundancy. So let’s go back up and let’s do this. Let’s start our application again. Our transactions are running, all right? And down in the bowels of the data center, something happens to a disk array, right? So in this process, what I’m going to do here is I’m going to remove -- the most important part of this, right, is the data, right, where that data’s sitting. So I’m going to remove node ID 3, all right? That’s a complete copy of our database, all right? Now as it’s running, it’s evicted. Take a look. It should be gone now. And it is. We’re running on one storage manager.

OK, so what’s the first important thing we have to do? I don’t have any redundancy right now, OK? So what I want to do is I want to start a storage manager, maybe on a different file system, OK, within our instance, to make sure, number one, I’ve got data redundancy here. So I’ve got my scripts here, running here. So what I’m going to do here is I am going to start, right? I’m going to start it brand new, all right? I’ve already created the underlying file system. And we’ll play a little pretend here. It’s sitting on a different disk array. So let’s go back to NuoDB.

So the first thing I want to do is protect that redundancy, right? So I’m going to start a storage manager on the local host. Here -- I’m sorry. I’m going to start -- yes. I’m going to start this on the local host, right? We’re having a failure on 36, as you can see out here -- that’s where I took a storage manager down, the dot 36 guy. So let’s start this up. OK. There we go. An interesting thing that’s happening right away. You’ll see over here to the far right, OK. I started a storage manager for durability and redundancy, and it’s syncing, all right? So I’ll come back in here, and let’s make sure we’ve got everything back where we need it. Ah, there we are. Now I’ve got two storage managers running, right, on our surviving system where I’ve got confidence. And now you can see it’s running. It’s already synchronized, OK?
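Restoring redundancy the way Tim describes amounts to starting a fresh storage manager against a new, empty archive, which then synchronizes from the surviving SM. Sketched in nuodbmgr terms (the archive path and host name are placeholders, and option spellings may differ by release):

```
# Start a new SM on the surviving host against a fresh archive directory;
# "initialize true" creates an empty archive that then syncs from its peers
nuodb [domain] > start process sm host nuodb02 database hockey \
    archive /data2/nuodb/hockey-archive initialize true
```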

Our application was continuing to run here. You could check the timestamps. I went a little longer than 60 seconds to do that, but I’ll start this up again anyway because it’s part of the next step. So what happens now, right -- something’s wrong with that server over there, and we’re going to simulate here for a moment -- we’re going to remove a transaction engine. So that transaction engine is still serving up work. But let’s simulate that we lose it, OK? So the process -- there, it sits up there as node ID number 4. That’s where the first storage manager failed. So there we go. I bring that one down to simulate that. I’ll take a look again. Oh, there. We’ve only got one instance running, right? We simulated that.

So just to show you that we’re doing this live here -- OK, that application’s still running. Let’s come here. Let’s go over to AWS. Now I’m going to kill this guy. Take the action here for that instance. We’re going to bring it down into a stop state. I’m going to say yes. I’m going to come back to my application, see if I did this faster than 60 seconds. I didn’t, but to demonstrate, I’ll run this again. We’re going to go back to Grafana real quickly. We’re down to one transaction engine, all right? And obviously, right, we’ve got some degradation here, right? It’s not going to push our transactions into that 2,200.
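The continuous-availability behavior Tim is demonstrating -- the application keeps running while a transaction engine disappears -- depends on the client reconnecting and retrying in-flight work. Here is a minimal, hypothetical Python sketch of that client-side pattern; every name in it is invented for illustration and none of it comes from NuoDB’s actual driver API:

```python
import time

def run_with_retry(op, retries=3, backoff=0.05):
    """Retry a database operation across transient node failures.

    `op` is any callable that raises on a dropped connection. A real
    client would re-acquire a connection to a surviving transaction
    engine before retrying; here we only model the retry loop.
    """
    for attempt in range(retries):
        try:
            return op()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # all retries exhausted; surface the failure
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Simulate losing a node mid-query: the first call fails, the retry succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("transaction engine lost")
    return "ok"

result = run_with_retry(flaky_query)  # succeeds on the second attempt
```

With this pattern in place, a window of a few seconds (like the 60-second loop in the demo) is enough to ride out the loss of a single node without surfacing errors to the user.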

I’ve shown you now -- for review here, I’ve shown you scale-out and why that’s important to consider, and availability -- continuous availability, high availability -- and what’s important to evaluate there. And at the top, we talked about SQL, right? In the evaluation we’ve got to consider its compatibility, right, with existing applications, and its strengths.

So Jeff, with that, I’m going to turn that back over to you.

BOEHM:

Thank you. Thank you, Tim. Thank you for that demonstration. As you saw, Tim was able to walk through some key tests step by step. And again, there are a lot of different areas that you can evaluate in an elastic SQL database. But he showed you running SQL queries, whether those be simple select statements, more complex joins, inserts, updates, stored procedures, proving that an elastic SQL database truly should be able to handle the range of SQL that you want to throw at it. He also showed how you can scale out, and how easy it is to simply add a node and see your performance improve, see your latency decrease. He also showed how NuoDB handled outages, how you can intentionally scale back in, but also how you can handle it if nodes or entire systems go down, providing continuous availability.

So that was a good demonstration. Before I wrap up, I want to remind you that we will be handling -- I see a couple questions have come in already. We will be handling all those questions at the conclusion, so feel free to type any questions you have on the demonstration or the presentation so far into the question box on your right.

So summarizing the discussion, then, and the presentation, and the demonstration, Elastic SQL is really about no compromises, right? It’s about an architecture that’s built for the cloud and for elasticity, a database that can be easily scaled out and in on commodity hardware, that can be deployed on premises, in private or public clouds, or in a hybrid deployment, and that provides continuous availability withstanding both intentional and unintentional disruptions.

But it’s also a database that can run your business-critical workloads with a familiar standard SQL API and full ACID compliance, a database that fully manages all the data management logic in the database, limiting application complexity and making it easier to migrate applications from Oracle, SQL Server, MySQL, etc.
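The "standard SQL API and full ACID compliance" point can be illustrated with any ANSI-leaning database. The sketch below uses Python’s built-in sqlite3 purely as a stand-in (a real NuoDB deployment would connect through its own drivers, and the `accounts` schema is invented for illustration) to show the atomicity the transcript is describing: a transaction either commits in full or rolls back in full:

```python
import sqlite3

# sqlite3 here is just a convenient stand-in for any ANSI-ish SQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # one atomic transaction: both updates apply, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
        new_balance = conn.execute(
            "SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
        if new_balance < 0:
            raise ValueError("insufficient funds")  # abort the transfer
except ValueError:
    pass  # the `with` block rolled the transaction back for us

# The failed transfer left no partial state behind.
balances = [r[0] for r in conn.execute(
    "SELECT balance FROM accounts ORDER BY id")]
```

Because the transfer would have overdrawn account 1, the whole transaction is rolled back and both balances are unchanged -- the application never has to clean up a half-applied update, which is exactly the "data management logic stays in the database" argument.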

Just as a wrap-up, as a company, NuoDB was founded by database industry pioneers about seven years ago and has a patented elastic SQL architecture. The product has been deployed by leading software development organizations from Gashouse Systems to the London Stock Exchange, and consistently receives high marks from Gartner and other analyst firms.

We’ve recently published a comprehensive guide to evaluating NuoDB on your own with our free Community Edition or an enterprise trial version of the product. This substantial document walks you through many of the tests that Tim ran today, from running simple through complex SQL queries, to testing and validating scale-out capabilities and the associated performance gains, to validating high availability and active-active operations across a hybrid environment.

If you’d like to learn more about NuoDB, and try it for yourself, or download this evaluation guide, there’s a few URLs here on the screen. You can have a look at a full, recorded demo of the product at nuodb.com/full-demo. You can also download the free community edition at nuodb.com/download, or download that evaluation guide I mentioned at nuodb.com/eval-guide.

So at this point, we’re going to move into Q&A. We do have several questions here, so let me start with one for Tim, based on some of the stuff that he (inaudible). The question is, how easy is it to migrate SQL from SQL Server to NuoDB?

TADEO:

Well, that’s a good question, Jeff. So given that we’re an ANSI SQL database, a lot of the database objects you would have, such as tables and indexes, and the data definition language that you would use to build those objects, are almost certainly very alike, all right, with very little effort to bring those in. The same thing would exist, as I showed you today, with things like triggers, and I talked about stored procedures. Obviously each database management system, albeit ANSI SQL standard, can have a little bit of a different dialect inside those types of objects you’re going to build. However, we have several customers that have migrated successfully from Oracle, from SQL Server, from MySQL. So it’s a pretty straightforward process.

BOEHM:

Yeah, and in fact, I actually had the pleasure, about a month ago now, of sitting down with one of our customers, Tim [Gauge?], from Alpha Systems. There’s a recording of this on our website, but Tim talked about how Alpha, which is a software company in the leasing -- asset finance -- software space, was able to migrate their application that had been built for MySQL. They were able to migrate that application using our Community Edition product within a couple weeks, all by themselves. It’s a very easy migration. And that’s certainly very possible.

Another -- you may have addressed this. Another specific question that came in around the SQL was, you know, you showed some stored procedures. Can you migrate stored procedures from Oracle or Microsoft?

TADEO:

You certainly can, again. You know, those stored procedures -- when we say migrate, there’s not a particular, you know, tool -- for that matter, any database will use some third-party product, right, that’s going to take that exact stored procedure and bring it in. Again, as I talked about, those stored procedures, you’re going to have to bring them in. In a lot of instances, you can use them as they are, and just create a stored procedure or a trigger, and they’ll work fine. In other instances, like I talked about, certain SQL dialects might differ slightly. You’re not going to rewrite the whole stored procedure, but you may have to change some of that certain dialect.
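As a concrete illustration of the "slightly different dialect" point, here is a toy sketch of one mechanical rewrite a SQL Server migration might involve: translating `SELECT TOP n` into ANSI SQL’s `FETCH FIRST n ROWS ONLY`. The function is purely hypothetical -- it is not a NuoDB migration tool, and a real migration would cover many more cases (`GETDATE()` vs. `CURRENT_TIMESTAMP`, `ISNULL` vs. `COALESCE`, and so on) plus hand review of procedural logic:

```python
import re

def top_to_fetch_first(sql: str) -> str:
    """Rewrite SQL Server's SELECT TOP n into the ANSI FETCH FIRST form.

    A toy, single-case transform: statements that don't start with
    SELECT TOP are assumed to already be ANSI and pass through untouched.
    """
    m = re.match(r"SELECT\s+TOP\s+(\d+)\s+(.*)", sql,
                 re.IGNORECASE | re.DOTALL)
    if not m:
        return sql  # no T-SQL TOP clause: leave the statement alone
    n, rest = m.groups()
    return f"SELECT {rest} FETCH FIRST {n} ROWS ONLY"

ansi = top_to_fetch_first("SELECT TOP 5 name FROM customers ORDER BY name")
```

Most statements in a well-behaved application need no rewrite at all, which is why the dialect cleanup tends to be a small, targeted effort rather than a full rewrite.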

BOEHM:

Got it. OK. Another question here around active-active: We talked about deploying out across multiple machines. Does NuoDB support active-active write operations? Can I be writing in an active-active mode to multiple nodes at the same time?

TADEO:

That’s a great question, Jeff, and we have a lot of customers that ask that same question. And yes, we can do writes. It doesn’t matter. I was trying to show that here, but I brought down the wrong instance in AWS. But that question gets brought up quite a bit. And that is a very important point you bring up. In a lot of instances, what we see with customers for an active-active scenario, what they’ll actually do -- because they don’t want to have to code transaction integrity into their application, right? They don’t want to have to code in data integrity, so they’ll say, “OK, that’s our warm standby site in an active-active, but that’s going to be read-only.” And they may use it for workload balancing. But, yeah, to reinforce: absolutely, in our active-active deployment model, you can do writes.

BOEHM:

Good. OK. Another question here about performance. I don’t know that we’re going to have an exact answer here, but it says, will a single-machine NuoDB instance perform the same or better than a MySQL database on another single machine with identical resources, with a mix of OLTP and OLAP-type operations? And I’m guessing the answer is, mileage is going to vary quite a bit, obviously, based on the specific types of operations you’re running. Obviously, it’s possible to scale out with NuoDB in a way that you can’t with MySQL, adding nodes and resources. I don’t know if you have any specific answer around the apples-to-apples comparison of us versus MySQL.

TADEO:

Sure. MySQL’s a fine database. But really, MySQL is made to be a vertical-scaling type of database. So a single instance of NuoDB against another single machine with identical resources running MySQL -- as Jeff says, mileage will vary. However, this is important to understand: NuoDB is a very, very different database inside, OK? Even on a single machine of NuoDB -- you saw my example here about latency and average transaction times -- I simply added another transaction engine process, right? So it’s a little bit of an apples-to-oranges comparison. But a great thing would be for the customer asking this question to download the Community Edition and give it a try.

As far as a mix of OLTP and OLAP, that’s something that Gartner has now defined as HTAP, hybrid transactional/analytical processing. That’s not our primary focus. You could do it, depending on what size it is. But just so our audience understands, our focus is cloud-scale, right, online transaction processing. So I hope that helped answer the question.

BOEHM:

Yep. Another question, which I can take, and which you actually touched on a little bit: people are asking about the Community Edition and its limitations.

The Community Edition is a fully functional version of NuoDB that is freely available at our download site. The only limitation on the Community Edition is how far you can scale it out. We touched on this idea of transaction engines and storage managers. The Community Edition is limited to a two-by-two configuration, so two transaction engines and two storage managers, which, again, is great for testing or development. But as soon as you want to scale that out or run it in a production mode with full support from NuoDB, you’ll want to upgrade to our Professional or Enterprise Edition, and more details are available on our website.

There are a lot of questions here. We’re not going to get to all of them today, unfortunately, because we are running short on time. I am going to cover one -- actually, I’ll cover two other questions here. The first one is, what is the top reason organizations move from a traditional database to NuoDB?

I will say -- and I’ve been out to many of our customers -- what I consistently hear is they have built an application for a SQL database, for an Oracle, or a Microsoft, or a MySQL, even, and they need to scale it out. They really want that scale-out capability for an existing SQL database. And they want to do so at a fraction of the cost that it might otherwise be if, as Tim said, they were to look at adding complex add-ons such as Oracle Data Guard, or RAC, or GoldenGate. So it’s really that ability to maintain my SQL capabilities and scale out more effectively.

And then the last question, which you’ve touched on a little bit, but I want to cover again, which is, what work effort is required when you migrate from SQL server to NuoDB? How much recoding is required around stored procedures, functions, triggers, etc.?

You know, again, I think what we’ve seen from our customers is, depending on the complexity of the application, and depending on how much you’ve used sort of the custom extensions of SQL that Microsoft, or Oracle, or other databases use, that may make it more difficult to migrate to any other database. But to the extent that you’ve stuck with fairly ANSI-standard SQL for any of, you know, create, read, update, delete transactions, or stored procedures, those should be able to be migrated fairly easily. Again, we’ve had customers report that they migrated their application in a number of days, fairly complex applications. I don’t know. Again, I think you’ve touched on this, Tim. I don’t know if there’s anything else you’d add to that comment.

TADEO:

Yeah, those are good points, Jeff. So, to our audience -- and that’s a great question; we get that all the time. We’ve got a lot of experience at NuoDB moving -- porting -- applications over. One thing that has to be considered -- and I’m sure the audience, and the person who asked the question, has considered this as well -- there are tradeoffs, right? So you have to think about, you know, what am I trying to provide, what type of environment do I have to provide to my customers -- am I an ISV? You know, I’ve got to consider cost. And again, I can’t stress it enough: ANSI SQL-compliant, right? That’s definitely going to make the job much easier. But to be fair to the question here, stored procedures, functions, triggers -- you know, it really depends on how far outside the bounds you’ve gone away from an ANSI standard, using particular dialect that’s only pertinent to that database.

BOEHM:

Yep. And again, I think one of the best ways is to test it out yourself, using our Community Edition.

Again, I see there are other questions here. Unfortunately, we are up against our time. We will follow up with people individually, or you can certainly reach out and contact us. But with that, I am going to thank Tim for joining me today and for the excellent demonstration. And hopefully this was helpful to the audience in exposing you to some of the ways that you can evaluate a modern elastic SQL database.

Again, the webinar has been recorded and will be available on demand, and we thank you for your time today. Thank you very much. Goodbye.