Cloud Database Migration Made Easy: Migrating NuoDB

Senior Product Manager Joe Leslie showcases how simple it is to move from a traditional relational database to NuoDB's elastic SQL database, and talks about how this compares to the complexity of moving to a NoSQL database.

Slides available here

Video transcript: 

LORITA: Hi, everyone. Thanks for joining our broadcast. We've got a couple of people who are just getting on the line, so I'm just going to wait one or two more minutes, and then we'll begin the broadcast. Thank you. Hello everyone, and welcome to our webinar, Cloud Database Migration Made Easy. My name's Lorita, and I'll be moderating today's webinar. I'm joined by Joe Leslie, our Senior Product Manager here at NuoDB. Hi, Joe.

JOE: Hello, Lorita.

LORITA: Today, we'll be talking about a topic that we get a lot of questions about. Just how easy is it to migrate your existing SQL database to NuoDB? But we're not just going to be talking about it. Joe's actually going to run a little demonstration for you so that you can actually see it in action. At the end, we'll open it up for Q and A. Before we get started on today's presentation, I'd like to review a few logistics. Our webinar is scheduled to last about 45 minutes: about 30 minutes of content, and about 10 or 15 minutes at the end for Q and A. You'll see a questions panel in your GoToWebinar control panel. Feel free at any point in time to go ahead and submit your questions there. We'll address them either in the midst of the presentation or, more likely, at the end. The webinar will be recorded and made available for replay. We will have both the script and the presentation slides available following the webinar as well, and we'll send them out with the recording. And then, as I said, we'll continue to answer questions as they come in and as time permits at the end. So with that, I'm going to go ahead and turn it over to Joe so that we can get started. Joe.

JOE: Great. Thank you, Lorita. And I would like to take an opportunity to welcome you all to our webinar today. I truly hope you find it informative and maybe even a little fun as we get into the demo. So let's go ahead and get started. So right now, many companies are considering moving to the cloud. And in that process, they're rethinking virtually everything. It's quite a task, and there are many pieces to it. For example, will we run our applications in microservices and container environments, or will we run in host or virtualized host environments? And what sort of purchasing patterns might we see when we move to the cloud, for either services or software? It's a completely new paradigm and deployment model. As we move our applications, we may also be considering different methods of developing our applications, considering agile methodology. So there's quite a bit of new process to consider. Yet at the same time, we're still faced with some of the common challenges that exist in an on-prem environment today, like growing data workloads and volumes and, of course, developer scarcity. Right? We're always trying to do more with less. And then we can't forget data security. It's practically the backdrop for every decision along the way, because data security is so important. So with all of these challenges, you know, how do we actually transform our business in this service- and customer-oriented world, to take advantage of the cloud? And so we're going to talk about that today, and even what it means to access your data, get your data to the cloud, and store your data in the cloud.

So with that, we will need to consider just what cloud database requirements you may have in order to move your applications to a cloud deployment paradigm. And when we look at the requirements in that space, they really fall into these two general categories: SQL-related requirements and elastic scale-out types of requirements. Now, when we stack up the common traditional relational databases along with the NoSQL vendors, we start to see and get a feel for how well they accommodate and meet these requirements. So naturally, the traditional relational database does quite well with the SQL types of requirements commonly found to run your business database of record. They're ANSI SQL compliant. They adhere to ACID transactional properties. Of course, it's easy to migrate data and SQL apps to those types of systems. And you also benefit from in-memory performance. But where they, you know, lack is really in that elastic nature. Relational databases run well on single-host architectures. Whenever we try to run them across platforms, this is where the challenges can arise. Often clustering software is required, more expensive hardware. It's a very expensive proposition. But then, when we look at the NoSQL vendors, we see, well, they actually do quite well in the area of elasticity, and they can scale out quite well. But, of course, they give up the SQL semantics, which are so important to our OLTP SQL transactional types of applications. And then we have some somewhat new entries, like the cloud provider databases, which tend to fall right into the traditional RDBMS style of meeting those requirements. They meet the SQL requirements, but typically do not scale well.

Then, if we look at a new category of tools, and this is where NuoDB fits, it's elastic SQL. So for example, we have some new entrants in this marketplace. There's the Google Spanner product. We can start to see that these products are now covering requirements in both sections, both SQL and elasticity. We see Spanner does quite well in the area of SQL, but it falls short on the existing-SQL-app requirement, because it's very difficult to run existing SQL against Spanner; it doesn't support all of your SQL, specifically DML-style SQL, like inserts, deletes, updates, and so on. And as far as elasticity, it does well. But, you know, you are locked into a single cloud platform. Which brings us to NuoDB, which, as we can see from the chart, does very well in covering both sides: the SQL semantics of an ANSI-standard, in-memory database, easily allowing existing SQL apps to run against NuoDB, and also the elastic capabilities, scaling out and scaling in, adapting an environment to its application workload.

So as we define this elastic SQL term: really, NuoDB combines the scale-out simplicity, elasticity, and continuous availability that our cloud SQL applications require, but at the same time provides the SQL transactional consistency and durability that our databases of record demand.

And well, how does NuoDB do all this? Let's just take a quick run-through of the elastic SQL database architecture. We'll explain what we've done here that's so unique. We take a legacy RDBMS architecture, which is somewhat stacked and runs well in a single-host environment or architecture. What NuoDB has done, simply, is it's broken that architecture into two major processing components. There's a transactional component, which we call a TE. This is the transaction engine. And it's a fast, in-memory copy of the database. It's processing the application's SQL requests and also keeping a cache of data, so it's running in-memory very quickly. But when the SQL application needs to run some DML-type statements, like inserts, deletes, and updates, it's going to then pass that transaction off to the other component, which we call our storage manager. And the storage manager is the piece that's responsible for making that transaction durable. Meaning that should a system ever sustain any failure or slowdown, etcetera, you know that transaction is durable after it's been committed. You can reliably retrieve that data. Now, the key piece of the architecture, as we can see from the diagram, is that it allows the scale-out. So the transaction engines will easily scale out to meet an application workload by adding more transaction engines. Likewise, you can extend and expand the durability of your database by adding more storage managers, making the system minimally redundant in both areas, as well as, based on application requirements, increasing that redundancy as may be needed, potentially creating active-active-active type environments to support all the flexible deployment models that you may consider.

So with that, and now that we understand why we would consider moving our data to NuoDB, let's start getting more into the details about, you know, how easy is it to move our data to NuoDB? Right, there may be some natural questions, right? I've got a running application. I've got a rich existing set of SQLs. Do I need to modify my SQLs? What's the work involved in order to get to NuoDB? And what we want to show today is just how easy it is to migrate the data, as well as run those SQL statements just as you have them today. So there are four easy steps. The first step is really what I like to refer to as the setup. Before we can start the migration, we're going to need to capture a few critical pieces of information, right? And in today's demo, we're going to migrate the MySQL Employees sample database. Now, this database holds, oh, approximately three million rows. It's a standard sample demo database that's easily downloadable. Many of you may even be familiar with it. But it's a nice sample set for us to sort of play with today and move some data. But in this setup, we need to know a few things. We need to know, first, the JDBC driver class, right, because we're going to need that for our native NuoDB migration utility to start running the migration. We also need to be able to find that database, and that's going to be specified to NuoDB using a database URL. So we'll have a specific database URL that's going to allow us to connect to MySQL and get that schema information, in order to create a replica schema within NuoDB.
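
**EDITOR'S NOTE: For reference, here is a minimal sketch of the two setup pieces Joe describes, written as shell variables. The driver class and URL shown are the standard MySQL Connector/J 5.x values for the demo's localhost setup; your values will differ for other source databases or hosts.**

```bash
# The JDBC driver class for the source database (MySQL Connector/J 5.x here).
SOURCE_DRIVER="com.mysql.jdbc.Driver"

# The JDBC URL that tells the migrator where to find the source database:
# MySQL on localhost, port 3306, database "employees", encryption off.
SOURCE_URL="jdbc:mysql://localhost:3306/employees?useSSL=false"
```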

Then the next two, and final, steps, the dump data and the load data, these are simply data movers. By this point, all the difficult work has been done. The dump data step is going to dump the MySQL database to the file system, and then the load data utility will pick up those files and load them into NuoDB. And with that, that's our setup. We can actually go ahead and move on to our demo. So I'm going to go ahead and switch over to our demo environment. And here, just let's take a second here. I'm switching over. OK. In a moment, you should be able to see a new screen. And on this screen, here, let's go ahead and look at something interesting. So as I mentioned, we're going to migrate the MySQL employees database to NuoDB. And today, we're using DbVisualizer. This is just one of the graphical SQL tools that are available in the marketplace. I'm using this one today; I think it demonstrates well, in a graphical presentation, what we plan to show. And what I'm showing here already is something many of you are familiar with. This is an entity-relationship diagram. And it shows us the details of the MySQL employees schema that we're going to migrate. We see tables, and we see columns and their data types and their sizes. There's column-constraint information. There's primary and foreign key information. There's sequence data. There's lots of information tucked away in the schema, and it's important, as we migrate, that we capture all this information and create an exact replica schema in NuoDB to then receive that data.
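
**EDITOR'S NOTE: The steps Joe walks through map onto three subcommands of the NuoDB Migrator tool. A bird's-eye sketch; fully worked versions of each command appear later in this transcript alongside the demo scripts.**

```bash
# Step 1: read the source schema and generate matching NuoDB DDL.
nuodb-migrator schema --source.driver=... --source.url=... --output.path=...

# Step 2: extract the source data to portable files on disk.
nuodb-migrator dump --source.driver=... --source.url=... --output.type=csv --output.path=...

# Step 3: load those files into the target NuoDB database.
nuodb-migrator load --target.url=... --target.schema=... --input.path=...
```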

So we look at the data that we're, or at least, let's look at some of the count information. This can give us an idea of what we're moving. And we can see we have nine department rows. There's about 300,000 department employee rows, and 24 managers, and a whole bunch of employees. We've got 300,000 employees, and they've had a lot of salary actions. So we see here, there's 2.8, approximately 2.8 million salary rows. That's going to be the bulk of the data we're going to move. But people's titles have also changed over time. And we can see there's about 400,000 titles. So we have lots of good data to work with here. And let's go over to our receiving schema over here. So I'm going to open this up, and we're going to see that it's empty. Right? We haven't moved any data yet. So we have no tables. There's no entity-relationship diagram. We're starting with a clean slate. So let's go ahead and have some fun. Let's go ahead and migrate some data. And I am going to project that we can migrate all this data in a single breath. Now how exactly are we going to do that, right? Well, we all know a single breath, right. If you hold your breath, that's still a single breath. So here's the audience participation part. If anyone out there would like to try to hold their breath, we're going to migrate this data in a single breath, OK. So I don't know if there's any sort of hand raising or something they can do after this, to know who participated. But I'm going to take a guess that this data migration will happen in about a minute. So if you think you can hold your breath for a minute, you can play along at home, alright.

So we are going to start our migration, OK, and we're going to time it. That is, I'm going to use the Linux time command, and we're going to run the migration. So if you're ready out there, here we go. We're going to start the migration. OK. We have started. We have already completed the first step, right. That's really the most important step, the one that's going to get all the interesting information from the MySQL database: the table and column information, the indexes, primary and foreign key columns, data types, column constraints. All that information is used to create the DDL statements to then run against the NuoDB database and create that replica target location. And then, you can see it's already started and completed the second step, which was extracting the data to the file system. It puts the data into CSV files, which are then loaded. And that's the step we're in right now. They are loading into NuoDB. This is the last and final step, and we are going to be just about done. And look at that. For those who are holding your breath, you managed to, with us, migrate 2.8 million rows in a single breath. It actually took 55 seconds for us to do that on the processing power that we have here today. Of course, you may have more processing power than I was using; I allocated 4 CPUs on a single-disk system. But it demonstrates for you just how quickly we can move data.
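
**EDITOR'S NOTE: The "Linux timing command" is the standard `time` utility. Assuming the combined migration script is called `migrate.sh` (the actual script name is not given in the transcript), the invocation would look like:**

```bash
# Wraps the migration script and reports real/user/sys elapsed time when it finishes.
time ./migrate.sh
```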

Now, let's go back to our graphical environment, OK, and reconnect to the database. Alright. And lo and behold, within the table section that was previously blank, we now have the six tables, the six tables that we are already familiar with, starting with the departments table. And how many rows do we have? We have nine. Why? Because that's exactly how many were in MySQL. Likewise, the dept_emp table had the 300,000 or so; we moved those over. We moved over all the department managers. We had about 300,000 employees; here they are. Of course, salaries was the big number: there are the 2.8 million salary records. And all of our titles have moved over as well. Now, you might remember that entity-relationship diagram that we showed earlier for MySQL. If we click on the references tab, to no great surprise, it looks exactly like what we had in MySQL. Why? Because we easily were able to read the source database schema and replicate all the tables and the columns and their constraints and data types and all the details in NuoDB, as our receiving database, and move that data into the system.

Now that the data is loaded in the system, well, let's go ahead and run some SQL. So I'm going to move over to a SQL window. And here's a select statement. It's a basic select statement, but it's going to join all the tables together. And it's going to filter on some complicated date logic, along with filtering on a department name, the marketing department. We're going to go out there and gather some of those employees. And then we're going to order our data. But before we run the SQL against NuoDB, let's run it against MySQL, because our intent here is not to modify any of the SQL. We're going to take the SQL statement that's very happy to run against MySQL. Go ahead and run the statement. It goes out. And we can see that it's already done. And it retrieved, if we look here at the bottom of the screen, 134,094 rows. OK.

Now, what we're going to do is we're not going to change any SQL, but we're just going to change our connection over to NuoDB, to the new employees schema that we just created. So we're going to run the SQL statement, again unchanged, against this brand new, fresh data set, OK. So we're going to go out, run this same SQL. And I can see it's already gathering the data. And it's done. And as we would expect, it has returned the same 134,094 rows. Same exact data. Why? Because everything is exactly the same. So part of the demonstration here is to show not only how easy it is to move the data into NuoDB, but to have all of the same functionality, all that investment in your SQL. It's going to run just as it does in your MySQL environment, or whatever other relational database system you were migrating from. Sometimes, I like to describe it as: NuoDB supports migrating from the big five, the big five other mature database products: Oracle, SQL Server, MySQL, Postgres, and DB2.
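
**EDITOR'S NOTE: The exact demo query is not shown in the transcript. The following is a hypothetical query in the same spirit, joining the standard MySQL Employees tables, filtering on the Marketing department plus some date logic, and ordering the results. It is run here unchanged against NuoDB through its `nuosql` command line; user, password, and database name are placeholders.**

```bash
nuosql test@localhost --user <user> --password <password> --schema employees <<'SQL'
SELECT e.emp_no, e.first_name, e.last_name, t.title, s.salary
  FROM employees e
  JOIN dept_emp de   ON de.emp_no = e.emp_no
  JOIN departments d ON d.dept_no = de.dept_no
  JOIN titles t      ON t.emp_no  = e.emp_no
  JOIN salaries s    ON s.emp_no  = e.emp_no
 WHERE d.dept_name = 'Marketing'
   AND s.from_date <= t.to_date   -- only salary rows overlapping the title period
   AND t.from_date <= s.to_date
 ORDER BY e.emp_no, s.from_date;
SQL
```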

Probably the most significant piece of the data migration, as I was alluding to earlier, is that get schema piece. That's where all the smarts are, because it's going to read whichever of the five source databases you have, and it's going to map their data types to the receiving NuoDB data types. And it does that automatically. Of course, it also gives you the ability to override any of those data types, as well. And I can start to show... Let's take a little bit of a look to see what those migration commands look like. And as Lorita was mentioning earlier, after the presentation, you're going to receive links to the audio and the presentation materials. And at the end of those presentation materials are all of the syntax and code that I'm actually going to show you now, so we can see how all this works so easily and the little parts that made it happen.

Let's first look at that get schema step. So we do a more on the get schema script. This is the little script that ran. And I want to show you the secret sauce. The secret sauce is right here in these two lines. This is really the part that the migrator needed in order to make that connection to MySQL. Now, it's going to be a little different for Oracle. It's going to be a little different for, you know, Postgres. But all of these are documented in the NuoDB documentation. And that's a link that Lorita's going to provide you in your kit after the presentation.

But we see here, again, this source driver. So this is the source JDBC driver class that needs to get loaded. And then it also needs to know which database to connect to. And that's the second line, the source URL. And that's how we connected to the MySQL database on localhost, port 3306. And we connected to the employees database. And oh, I decided to pass along useSSL=false; I didn't have encryption turned on.

The other pieces of information are the easy parts: the user name, the password, what kind of quoting you might want to do. The NuoDB migrator allows you to move your tables over and, you know, keep the table names either in lowercase or in uppercase, whatever you decide.

If you want, you can migrate, you know, different portions, some tables individually at a given time. You can migrate an entire schema, as we did. In this case, and this is what I recommend, I dropped the DDL script file to disk. That's indicated here: schema.sql. That allows you to go review the schema, potentially, before you create it in NuoDB. I think that's a good best practice. I would recommend it. As you can see here, I just used nuosql, which is much like, if you're familiar with, the psql or mysql or Oracle SQL*Plus programs. NuoDB has its own command line to connect to the database, nuosql, through which I then ran the schema file, which is indicated here. OK. But let's go ahead and type the file to the screen, because again, that's where really all the smarts are. We generated this file here. And this is the one that took all the schema declarations that were in MySQL and rewrote the statements into DDL, to then run and create that schema in NuoDB. So there are all the statements in all their glory.
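
**EDITOR'S NOTE: A sketch of the get schema step as Joe describes it, using the NuoDB Migrator's `schema` command and then applying the generated DDL through `nuosql`. Option names follow the NuoDB Migrator documentation of the period; credentials are placeholders, so verify against your version.**

```bash
# Step 1: read the MySQL schema and write equivalent NuoDB DDL to disk,
# so it can be reviewed before it is run (the best practice Joe recommends).
nuodb-migrator schema \
    --source.driver=com.mysql.jdbc.Driver \
    --source.url="jdbc:mysql://localhost:3306/employees?useSSL=false" \
    --source.username=<user> --source.password=<password> \
    --output.path=/tmp/schema.sql

# Review /tmp/schema.sql, then create the replica schema in NuoDB.
nuosql test@localhost --user <user> --password <password> --schema employees < /tmp/schema.sql
```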

And of course, the next step is dumping the data to disk. And that's done with our little dump script. And all my dump script did was call the NuoDB migrator command, and I passed it the dump switch, the dump command. And then, you know, again, there's our secret sauce on how to connect to the MySQL database. And then, in this script, I'm just stating that I'm going to output CSV files, because I said the type of file that I want to write is a CSV file. Again, this is a great format because it's very portable. Right, these CSV files, once you dump your data, are very portable and can be moved to other environments to then load up your data, you know, on-prem, in the cloud, wherever you want to load your data, OK.
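
**EDITOR'S NOTE: A sketch of the dump step: the same source "secret sauce" connection settings, plus the CSV output type and an output path under /tmp, as in the demo. The dump-catalog file name follows the migrator documentation's examples; treat the details as assumptions to check against your version.**

```bash
# Step 2: extract the MySQL data to portable CSV files under /tmp.
# The migrator writes a dump catalog plus per-table data files.
nuodb-migrator dump \
    --source.driver=com.mysql.jdbc.Driver \
    --source.url="jdbc:mysql://localhost:3306/employees?useSSL=false" \
    --source.username=<user> --source.password=<password> \
    --output.type=csv \
    --output.path=/tmp/dump.cat
```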

My output directory was in the temp directory. And then finally, our last step is to load the data. So let's take a look at our load script. OK. And here... Now this one's slightly different, right, because the secret sauce now, instead of connecting to MySQL, is for loading the data into NuoDB. So of course, we need to specify the information we need to connect to NuoDB and know where to put that data. So what we have is a target database URL that specifies the JDBC driver and database location of my database, which is called test, which we see here. OK. And then I connect to a schema called employees. And I log on. And I also show where my input path is, as well, reading from that temp directory. So those are the three steps. And I chose today to put them all together in a single command. That was the one I showed you, where some of you were able to hold your breath. Nice job out there, everyone, for those of you that were able to do that. And we see it's just a very simple little script that calls the get schema, then the dump, and then finally, the load. And then it was done.
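
**EDITOR'S NOTE: A sketch of the load step and of the single wrapper that strings all three steps together (the "single breath" command). The target URL, database name `test`, and schema `employees` come straight from the demo; credentials and script names are placeholders.**

```bash
# Step 3: load the dumped files into the NuoDB database "test", schema "employees".
nuodb-migrator load \
    --target.url="jdbc:com.nuodb://localhost/test" \
    --target.schema=employees \
    --target.username=<user> --target.password=<password> \
    --input.path=/tmp/dump.cat

# The combined "single breath" script is then just the three steps in order:
#   ./getschema.sh && ./dump.sh && ./load.sh
```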

And that's all there was to the migration. So let's go ahead and review. If we go back to our presentation material, we will review one more time the important steps of the setup, which, again, were the pieces that I called the secret sauce. You have to make sure that you gather the correct information during setup to specify your JDBC driver class name and the database connection URLs. Again, this is all documented clearly in the NuoDB documentation for each of the five databases that I mentioned. And then there's the setting of the classpath. Right, as soon as we know our JDBC driver, we want to make sure that it's found in our classpath. So you're going to set your CLASSPATH environment variable. I gave an example here. This is something that I did just right here; we see export CLASSPATH. I did this on my Red Hat and CentOS servers. And that's really all we need to do. After that, it was the simple stuff, right: user names and passwords and so on. Then, we ran through our three simple steps, starting with get schema. Right, that's the one that captured the source schema, the tables, the columns, the data types, their constraints, and all that important information, so we had a proper replica schema to receive our data in NuoDB.
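
**EDITOR'S NOTE: A sketch of the classpath setup Joe mentions. The jar path and version are examples only; point it at wherever your source database's JDBC driver jar actually lives.**

```bash
# Make the MySQL JDBC driver visible to the migrator (Red Hat/CentOS example).
# The jar location and version below are illustrative, not prescriptive.
export CLASSPATH=$CLASSPATH:/opt/drivers/mysql-connector-java-5.1.44-bin.jar
```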

And the next two were what I called, you know, the simple data movers, right. One's going to dump the data to the file system. And lastly, the third is going to load that data into NuoDB. And we were done.

Also, we just want to mention, this is really just a part of the whole migration effort, right. What we covered today were the details on how to convert your schema and your data, and how to take the same SQLs that work against your applications today and run them against a NuoDB database unchanged. But there are other steps as well, right, to the migration. There are the planning steps: identifying exactly which applications you may migrate, and conducting a proper feature-parity analysis of that application, to ensure that all capabilities map over well into the new target application and receiving database. And migrating your application data access layers. And then how to do all this in a zero-downtime fashion.

So for that, we say and recommend: stay tuned for our December webinar with NayaTech, to hear more about these other steps. And here are some helpful and useful links I mentioned earlier, on how you can get started when you're ready to give this a try. Hopefully, we've shown today how easy it is to migrate that data, and where you can get your migration scripts and watch the demo, and also where to download your Community Edition. The Community Edition is a full-featured working version of NuoDB; however, it supports just three transaction engines and one storage manager. Those were the processes that we were discussing earlier in the architectural piece that allow the database to run. The transaction engines process the SQL connections, and the storage managers then write the data to disk and make it durable.

So hopefully, you're ready to give it a try. With that, we're going to conclude my portion, and we're going to turn this back over to Lorita. We probably have some questions that we can start to take, and go from there.

LORITA: Great. Thanks, Joe. So yes, as a reminder, there is a questions panel in your GoToWebinar control panel. Feel free to go ahead and submit your questions there. We do have a few already. The first is a clarification. So the question is: Is NuoDB a database that runs in a cloud-hosted environment, or on servers in the NuoDB data center?

JOE: So, the new... We do not host these ourselves. This is actually for you to go ahead and deploy. You know, really, one of the extra benefits of a NuoDB environment is the deployment flexibility. You can run NuoDB on-prem. You can run it in hybrid cloud environments. You can run it in the private cloud. And you can do all that at the same time. So NuoDB is not hosting these environments. You can run it in Google Cloud. You can run it in Amazon. You can run it on-prem. You can run it in containers. You can run it on virtual host environments. That is one of the superpowers of NuoDB. You get to choose your deployment model.

LORITA: Great. Similarly, I think, the next question is, Where does that extracted data reside? Is it on local disks, or where? I think...

JOE: Great question. Yes. So that data extraction is going to land on disk, OK. One of the benefits of that is it's now portable. You can now move that to wherever you like. So you can actually extract in one location and load in another. It extracts them in the common CSV, comma-delimited file format. So yes, it does write them to disk.

LORITA: And to be clear, again, with Joe's comment earlier, the deployment flexibility of NuoDB means that that disk can reside in any variety of places. And because NuoDB is natively active-active, it can in fact, you know, reside both in the cloud and on-prem, should you want to have a, you know, disaster-recovery strategy or hybrid-cloud strategy. The next question is: Can you run an incremental migration, or is it a full load only?

JOE: Great question. When you have an opportunity to go ahead and review the documentation link, please review the schema options. There are lots of options that allow you to choose exactly how you want to migrate. You can choose to migrate an entire schema at a time. That's what I chose to do today; I was just using that small schema with six tables. But for larger schemas, there is an option that allows you to migrate a table at a time, or groups of tables, by comma-delimiting the tables or other objects that you want to migrate. It's completely your choice how you want to break up your migration. In fact, you can even control the data ordering. Now by default, as NuoDB extracts from the source database, it extracts that data in order of its primary key. That's a good place to start. And it will load that data into NuoDB by primary key. But, for whatever purposes your application may require, you may actually want to load that data in a different ordering. And that's an important feature that NuoDB supports: you can specify the ordering of that data as well. If your SQLs typically access the data by a particular foreign key, you can also order and lay your data down on disk contiguously by that key. So lots of options. You can review them all within the migrator schema section of our documentation.
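
**EDITOR'S NOTE: A sketch of a partial migration, dumping only selected tables. The comma-delimited `--table` option is what Joe is describing here; the row-ordering controls he mentions are additional options covered in the migrator's schema and dump documentation, so check there for the exact spellings.**

```bash
# Dump only the employees and salaries tables, rather than the whole schema.
nuodb-migrator dump \
    --source.driver=com.mysql.jdbc.Driver \
    --source.url="jdbc:mysql://localhost:3306/employees?useSSL=false" \
    --source.username=<user> --source.password=<password> \
    --table=employees,salaries \
    --output.type=csv --output.path=/tmp/partial.cat
```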

LORITA: The next question is: Will the migration process handle CLOB/BLOB data?

JOE: Yes, it will. So NuoDB supports a very rich set of data types. Again, those are also available right in our online documentation, for you to see the different ones we support. You know, the CLOBs and the BLOBs, the character and binary large objects. By default, we look at the source database CLOB or BLOB and convert it automatically to the NuoDB equivalent data type.

LORITA: Great. The next question is, What other migration formats do you support?

JOE: What other migration formats. Would that question be the file format that we're writing to? I don't --

LORITA: So if that, yeah. If that person can just clarify exactly what they mean by migration formats, that would be great. In the meantime, another question is, So how closely does NuoDB conform to the ANSI SQL standard?

JOE: Great question. So we know that the ANSI standard is very rich, and there have been many editions through the years: SQL-89 and, you know, SQL-92, which really introduced SQL2. And then, in '99, SQL3 came out. And there have been many variants, many additional SQL language editions since then. For NuoDB, this is an important part of our product, to make sure that we remain ANSI SQL compliant. We even document our ANSI SQL compliance down to the actual keywords within our documentation. So you can always look to see, you know, if a particular keyword is supported. So yes, you can be sure that NuoDB's SQL semantics and compliance are very strong. In fact, I'll add one more piece there, because this tends to be where it's very important. Not only is it the SQL language with the different join syntaxes, which we support them all, even going back to SQL-89, with the old, you know, Oracle-style open-paren-plus-close-paren, (+), outer join syntax. But we support all the newer join syntax: natural joins and JOIN ON, you know, specifying INNER JOIN by name and LEFT JOIN by name. All of that's supported. But what I wanted to add is the rich set of functions that we support: the SQL functions, the mathematical functions, the string functions, the date functions. What we've done is we've taken a careful survey of the most common functions that are supported in the more mature database products, and we make sure they work in NuoDB.
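
**EDITOR'S NOTE: A short illustration of the two join generations Joe refers to, using the demo's employees tables. Both forms express the same inner join; per Joe's comments, NuoDB, like the mature databases, accepts both. Credentials are placeholders.**

```bash
nuosql test@localhost --user <user> --password <password> --schema employees <<'SQL'
-- SQL-89 style: tables comma-separated, join conditions in the WHERE clause.
SELECT e.last_name, d.dept_name
  FROM employees e, dept_emp de, departments d
 WHERE e.emp_no = de.emp_no AND de.dept_no = d.dept_no;

-- SQL-92 style: the same join with explicit JOIN ... ON syntax.
SELECT e.last_name, d.dept_name
  FROM employees e
  JOIN dept_emp de   ON e.emp_no = de.emp_no
  JOIN departments d ON de.dept_no = d.dept_no;
SQL
```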

LORITA: Great. So I got clarification on that migration question. You mentioned the CSV files, and the person was wondering, if your data's got complex strings and suchlike, can you support any other file formats besides CSV?

JOE: So we do support BSON. And I would need to check the documentation myself; I believe XML may also be supported. But certainly the CSV and BSON formats. And, you know, again, it's an easy check on what's supported, because it's right there in our documentation for that migrator dump command. So look for the third one being XML. I'd have to verify that myself.

**UPDATE: XML format is also supported.**

LORITA: Great.

JOE: I'll just say, CSV has been the most popular. And you'll notice, when I did my migration, I used the tilde character as my delimiter. But you can choose your delimiter. It can be anything you want. So choose a character that works well for your applications.
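
**EDITOR'S NOTE: Joe used a tilde rather than a comma as the CSV delimiter. The attribute spelling below is an assumption on our part; confirm the exact CSV output attributes in the migrator dump documentation.**

```bash
# Same dump command as shown earlier, with a custom CSV delimiter added.
# NOTE: "--output.csv.delimiter" is an assumed attribute name; check the docs.
nuodb-migrator dump --source.driver=... --source.url=... \
    --output.type=csv --output.csv.delimiter="~" --output.path=...
```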

LORITA: So one of the other questions is about, you know, the active-active capability we had talked about. They were wondering if we can write to the same data from multiple nodes in different locations.

JOE: Absolutely. So this is an important piece of NuoDB and the architecture, as we spoke about earlier: how we've sort of taken a database and broken it into these two key pieces. There's a transactional process and a storage manager process. So the applications can connect to any number of nodes throughout the distributed domain environment and process transactions. Each one of those transactions opens and then has a consistent view of the database, where it starts and runs its transaction. If that transaction should then require an update to the database, it will communicate with a storage manager and commit that transaction, where that transaction is then sent from storage manager to storage manager in a reliable manner to ensure that the entire database remains consistent, durable, and up-to-date.

LORITA: Great. All right. So I think we're right at that 45 minutes, so I'm afraid that we don't have time for any more questions. But certainly, if you have any further questions, please feel free to send them in or, you know, shoot them over. I will be sending out to everybody, probably tomorrow, this recording and the link to the scripts, as we had indicated before. I also want to encourage people who are interested in our customers', you know, examples. There's an interview on our website. You can get to it from the home page, if you just click the link on the Alfa logo on the home page. And that contains an interview with one of our customers, talking about their migration process and how they actually had done, you know, the port in about 10 days, to get it up and running. So that will give you kind of a real-world view of how easy it is to migrate to NuoDB from, you know, from different traditional relational databases.

In the meantime, I want to thank Joe, certainly, for the great presentation and demonstration and the Q and A after. And thanks to all of you, our audience, for joining us today. We hope that you found the webinar interesting and informative. And feel free to follow up with us if you have any additional questions. Thanks for attending, and this concludes today's webinar.