
NuoDB 3.0: Getting Started with Community Edition

Watch this webinar for help on getting started with NuoDB. Solutions Architect Tim Tadeo walks through installing and creating a NuoDB database.

Slides available here

Video transcript: 

Jeff Boehm: Good morning, and good afternoon everyone. My name is Jeff Boehm, I am the Chief Marketing Officer at NuoDB, and I am here today with Tim Tadeo, a solutions architect with NuoDB, to cover the NuoDB CE 3.0 webinar. Today we will be introducing version 3.0 of our NuoDB Community Edition. I'll provide some context up front, and then Tim will jump into a live demonstration of NuoDB CE 3.0. Hopefully the demo gods will be with us and everything will work properly. This is a live webinar, and everyone is on mute. You can submit questions in the Q&A box on the right-hand side; you should see a control panel there where you can submit questions, and Tim and I will take them at the end. We are also recording the webinar, and we will send that out to everybody and make it available for replay, so you can watch it again or share it with your colleagues.

So with that, we’re going to get started. Again today we are introducing our 3.0 version of NuoDB Community Edition. We officially announced this last week, and it is generally available on our download site, freely available. We’ll provide that link again at the end of the broadcast if you’re not already a Community Edition user, or not familiar with where to find it.

So, by way of quick introduction for those of you who are not familiar with NuoDB, or with the NuoDB Community Edition: overall, what we talk about with NuoDB is the emergence of a new form of database, something we call "Elastic SQL," and we really view Elastic SQL as bringing together the best of two different database worlds. For decades now, organizations have run operational applications and trusted their business to traditional databases like Oracle, Microsoft SQL Server, and IBM DB2. They provide strong data consistency, or more generally the ACID properties. They provide an abstraction layer through the SQL language, and they also provide strong data management capabilities in the database itself.

But as companies increasingly turned toward deploying their applications in the cloud, a new form of database sprang up: the NoSQL database, represented by the logos shown on the screen here. These provide very strong scale-out capabilities, the ability to run across virtualized and commodity environments, and better reliability, without being stuck with the scale-up architecture that traditional databases typically limit you to.

What we believe is that Elastic SQL really brings the benefits of these two together, by marrying strong data consistency and a SQL interface with that elastic scale-out and the ability to run across different hardware environments. And in fact, we see multiple companies introducing products that solve this same general need of bringing the two worlds together: not only NuoDB, which has been in production now for many years, but also, just earlier this year, Google with its Cloud Spanner product on Google Cloud Platform, and Cockroach Labs with CockroachDB. Both of those came out of beta earlier this year, and they attempt to solve a similar class of problem to NuoDB.

We won’t go into tons of differences today between us and the other vendors; there are materials on our website to help you with that, but for today we’re going to dig into this Elastic SQL concept a little bit more. And in essence what we say is that we combine that scale-out simplicity and elasticity and continuous availability that cloud applications require, and that you want as you’re going through data center and application modernization efforts, without sacrificing the transactional consistency and durability that your databases of record demand. That is what we call an Elastic SQL database.

At a high level, the NuoDB architecture has a couple of core concepts that enable Elastic SQL. The first is that it splits query processing and storage into separate nodes in a peer-to-peer architecture. Traditionally, databases tightly coupled query processing and storage, so if you had to handle more storage or more query processing, you had to scale up to a larger machine for the overall database. The NuoDB architecture splits out what we call transaction engines, which are in-memory transaction processing nodes, from storage managers, which provide durable storage and connect to your physical storage wherever that may be: on-prem, in the cloud, in virtual environments, or in containers. This allows you to very easily and independently scale both transaction processing and storage management, yet still present to the application as a single logical database. The application itself does not realize that there are multiple nodes, or that these processing capabilities are separated; it appears as a single logical database with a standard SQL API. This allows you to deploy across different environments: on-prem, in containers, in different clouds. It also gives you continuous availability through planned or unplanned outages, since if any node goes down, the workload is immediately picked up by other nodes and the application never notices a difference.
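Tim will show this live later, but the split Jeff describes can be sketched in a few lines. Everything below (class names, the random routing, a key/value store standing in for SQL) is an illustrative assumption, not NuoDB's actual implementation; the point is only the shape: many in-memory engines, an independent durable tier, one logical database facade.

```python
import random

class StorageManager:
    """Durable storage tier (illustrative key/value store)."""
    def __init__(self):
        self._disk = {}

    def write(self, key, value):
        self._disk[key] = value

    def read(self, key):
        return self._disk.get(key)

class TransactionEngine:
    """In-memory processing tier: caches reads, forwards writes for durability."""
    def __init__(self, storage):
        self._storage = storage
        self._cache = {}

    def execute(self, op, key, value=None):
        if op == "write":
            self._cache[key] = value
            self._storage.write(key, value)   # durability comes from the SM
            return value
        if key not in self._cache:            # cache miss: fetch from the SM
            self._cache[key] = self._storage.read(key)
        return self._cache[key]

class LogicalDatabase:
    """The application sees one database; requests fan out to any engine."""
    def __init__(self, n_engines=3):
        self._sm = StorageManager()
        self._tes = [TransactionEngine(self._sm) for _ in range(n_engines)]

    def execute(self, op, key, value=None):
        te = random.choice(self._tes)         # any engine can serve the request
        return te.execute(op, key, value)

db = LogicalDatabase(n_engines=3)
db.execute("write", "team", "Bruins")
print(db.execute("read", "team"))             # -> Bruins
```

Scaling transaction processing in this toy model is just `n_engines=4`; the application-facing `execute` call never changes, which is the property the real architecture is after.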

So with 3.0, we have built on this architecture and this capability with a number of key enhancements, and Tim will walk through some of these in his demonstration. The first is that we've extended our support for hybrid cloud environments, both by adding more environments that we can run in and by formalizing our partnership with Red Hat, gaining additional product certification for the Red Hat OpenShift and Red Hat Enterprise Linux environments.

We've extended our support for distributed environments; we had already supported Active-Active operation across two data centers, across two availability zones. We have now extended this to Active-Active-Active across three availability zones. We can also integrate with additional environments; most recently we had a customer using a message queueing application who wanted to integrate with NuoDB as an XA resource, so we added that capability in.

And finally, we’ve dramatically improved our performance capabilities, as we do with many of our releases, by automating some of the transaction performance optimization across those distributed environments, and providing targeted performance improvements especially for write-intensive OLTP workloads.

Digging into this in a little more depth, on the new environments we support: we had already been certified for both AWS and Docker environments, and customers were already running in other environments, but in this release we have added support for Microsoft Azure as well as Google Cloud Platform, and in fact you can run across multiple environments. We have a customer today that is running NuoDB across three different cloud providers to provide high availability and reliability in the face of any single cloud provider outage.

We've also added, and been certified by Red Hat for, integration with Red Hat OpenShift, as well as Red Hat Enterprise Linux and Red Hat JBoss Enterprise Application Platform. This really rounds out our story, and Tim will touch again in his demonstration on the different environments that you can deploy and easily use NuoDB in.

As I mentioned, we have also extended support for distributed environments. I will note that due to some of the limitations in the NuoDB Community Edition, Active-Active-Active is only available in the Professional and Enterprise editions, so if you are a Community Edition user and wish to test the Active-Active-Active capability across three availability zones, you would need to contact somebody from NuoDB and look to upgrade to either the Professional or Enterprise edition.

And then finally, on the performance side, there have been a number of important enhancements that I'm not going to get into in full detail, but as I mentioned, one is automated workload management across distributed environments, doing something that we call "chairman migration" as workloads move from one data center to another. We've also made significant improvements in SQL query performance and in write-intensive workloads.

And in fact, we had one of our customers run some benchmarks on a release candidate build, and they found, across a variety of workloads, that their transactions-per-second tests showed anywhere from 10-20% improvements up to 100% or more, a doubling of their transactions per second, especially in more write-intensive workloads. In the mixed workloads measured here, they achieved over 90% performance improvement across the board, and again, this was between the 2.6 production version that's out there and a release candidate of 3.0.

So with that, that's a quick introduction to NuoDB and NuoDB CE 3.0. I'm going to turn it over to Tim to introduce and talk a little bit about what he's going to be showing today. So, Tim?

Tim Tadeo: Well good afternoon, everybody, wherever you’re located.

So my name is Tim Tadeo, I'm the Senior Solutions Architect here at NuoDB. What I'd like to bring you through this afternoon is some ways you can download and use our Community Edition product. One of the things we want to help you answer is: once I bring it down, what can I do with it? How can I do it? Now what? We want to make that very easy for you.

So we're going to look at how you can deploy it and what types of platforms you can run on, and we're also going to take a look at the fastest way to understand some operational aspects. What I mean by that is, I'm sure we have a varied audience out there; if you're an architect, a DBA, or a developer, you're obviously going to have different ways that you want to understand NuoDB: where it fits in your environment, whether it can support an application you're designing. So some of the things you come up with are: well, I don't have an app, I don't have data; or maybe I do, and I'm going to bring my own and test my application against my data, my schema. So we're going to show you some ways you can accomplish those two cases.

And then also, we have a broad set of tools that run against NuoDB. All you need is our JDBC driver, and you can use things like DbVisualizer and all these different types of modeling tools. That makes life a lot easier as you work through your testing with the Community Edition, and lets you move pretty smoothly.

And then we'll just talk about some cool stuff you can do, and some next steps.

OK, so as Jeff talked about earlier, we can run on a very broad set of platforms. It could be standalone on Red Hat Enterprise Linux; it could be CentOS or Ubuntu; that would be your choice. It can run in different types of virtualized environments: you can run on VMware, or on containers, as far as virtualization goes. And then my favorite these days has been the cloud environments, and you'll also see that we have Red Hat OpenShift. Now, for those of you that may be designing, working with, or considering microservices architectures, I'm sure you have a big interest in that, because it's a way to orchestrate and deploy applications that run in Docker, so we're going to look at a couple of those today.

All right, so you download it; depending on what your responsibilities are, how do you get started? I want to get this done fast; I want to understand NuoDB. And as I said, you want to understand some of those operational aspects, the developer aspects, depending on your role.

We have the hands-on self-evaluation guide that we've written; it's rather lengthy, but it's a very comprehensive, simple, easy-to-follow, step-by-step guide, so it can get you up and running quickly if you don't have an application readily available to test, or you don't have data. We can bring you right through that self-evaluation guide to get started very quickly.

Another thing we're finding with some of our customers who use Community Edition: they're not necessarily developers, they're not Java experts, but they want to be able to run some applications, and they've got a number of ways to do that. We use something we call the "simple driver," which is accessible through the hands-on self-evaluation guide; we have it posted up on GitHub. It's actual source code, and you really don't have to be an expert Java programmer; you can hack that source code and use it against your own data, your own schema, very easily. If you want something a little simpler than that, we give you these small client programs, and as you can see from the code there, I'm just simply going to connect to my Community Edition and execute some SQL statements. So again, even if you do not have your own application, it's very easy to customize.
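The pattern behind those small client programs is just: open a connection, create or use a schema, execute SQL. Since a running NuoDB domain isn't assumed here, this sketch uses Python's built-in sqlite3 as a stand-in; with NuoDB you would make the same sequence of calls through its JDBC (or other) driver against your own database, and the table and data below are made up for illustration.

```python
import sqlite3

# Stand-in connection; with NuoDB this would be a driver connection
# to your running database rather than an in-memory SQLite one.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a tiny schema and load a couple of rows...
cur.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO players (id, name) VALUES (?, ?)",
                [(1, "Orr"), (2, "Bourque")])
conn.commit()

# ...then execute SQL against it, exactly as the simple driver does.
cur.execute("SELECT name FROM players ORDER BY id")
print([row[0] for row in cur.fetchall()])   # -> ['Orr', 'Bourque']
```

Swapping in your own schema is a matter of changing the DDL and the statements; the connect/execute/fetch skeleton stays the same.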

OK, and then lastly, maybe you have an application that you want to test, and you want to bring your schema in. So what you're looking at here is our migrator, and you'll see some more of the scripts we have designed for this. Just a word of caution: this migrator migrates your schema. It will not convert stored procedures or SQL statements for you, so I want to be very clear on that. But it's very useful if you want to migrate your schemas from MySQL, SQL Server, Oracle, DB2, or Postgres; it's been tested on all of those platforms.

OK, so one of the first deployment methods I want to show you is OpenShift. I don't have it live and up-and-running right now, so this is obviously a screenshot, but we actually used OpenShift, with Kubernetes, in this Red Hat environment for containers. We did a full-scale demo of how we launch containers very easily and how we can scale those up by adding pods, and the diagram in your bottom left is the actual architecture that we ran. We ran it both in pure cloud and hybrid, adding and scaling NuoDB.

So I'm going to take control of the screen. OK, so let's take a look at a couple of the environments here. I've got this running up on the Google Cloud Platform. For those of you familiar with these cloud environments, the look and feel are pretty much the same, and one reason this is one of my favorite environments is that with NuoDB, even using Community Edition, you're able to stand up an environment and use any of the tools in that particular cloud: load balancing and all of that, so you're not restricted, and Jeff talked about that. So you can get some hardcore testing done. We run up in the Google Cloud Platform; we run up in Microsoft Azure, as you can see here, and that's what I'm going to be using this afternoon on 3.0; it has a very nice interface, and we'll come back to this screen. I just want to demonstrate how you have the ability to run that. And we can also run up on EC2, the AWS environment. So I'm doing quite a bit of testing here. Again, I like the whole concept that I can use these other cloud tools.

So, let's talk about NuoDB Community Edition and how we get started. I've got it running on several platforms, and we'll come take a look at that this afternoon. One of the nice features in 3.0, if you really want to get up and running very quickly, is that we now have an admin home page that you can use. In this example, I have it on a standalone environment; it's running CentOS 7 on a little server back in my home. If I want to get started very quickly, I can simply create a database here, and what it's going to do is go out, connect to my server, and quickly build our "HOCKEY" DB schema, which has actual tables in it, and so on and so forth. So there, it's done here.

So let's just quickly jump in and take a look at what that looks like; I'll open up my panel here. If we come in and take a look at our environment, what it'll do is simply build up the HOCKEY DB domain and get you started. So it's very easy to use, and right away you can jump to our evaluation guide and get going. In this instance, I've created the objects that I want out here, this demo DB, and you can see that up at the top. One reason I'm showing you this is that it handles multi-tenancy very well. So I've got Community Edition running in here, but the point I'm trying to get across is that it's very easy to get stood up and running very, very quickly. Now the environment we're looking at here is Docker, out on AWS. You can see the Docker instances running; I'm just going to quickly run a process command here, and what you'll see are the actual NuoDB containers running in Docker. So you've got some very good flexibility: I can take Community Edition and stand it up standalone; I can put it inside Docker and run it there; I can use OpenShift to take care of my orchestration, management, and deployment of CE. So I have the ability to do some very robust testing here.
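The commands behind this look like a standard Docker session; the image name below is an assumption (search Docker Hub for NuoDB's current repository and tag), and the filter is just one way to narrow `docker ps` output.

```
$ docker pull nuodb/nuodb-ce          # image name assumed; check Docker Hub
$ docker ps --filter "name=nuodb"     # list only the NuoDB containers
```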

So why don't we move on; I talked about cloud environments in particular. One thing I do like about the cloud is that I've got tools; I have monitoring tools that I can use. Sometimes it's not so easy, especially as a developer, to go to a SysAdmin when you want some metrics around how your database or your application is performing; you normally don't have a tool. So, I'm going to kill two birds with one stone.

What I'm running here is something called "SimpleDriver," which comes with Community Edition; it's in our samples directory. And again, what it allows you to do is quickly crank up an application and run it. So I'm just going to start this, and, killing two birds with one stone, talk about this little program that I run here. It'll produce some workloads, and again, it's highly customizable. If I come in here and take a look at this script, I can do this two ways: I can parameterize it and tell it how many threads I want and how long I want it to run. So I can get quite a bit of workload in there and get some statistics.

So what I'm going to do is pop back over to our Azure environment, and I talked about having some tooling. What we'll be able to see here is that I ran that quick SimpleDriver program; not much CPU, but I can see the types of operations going on in here and get some metrics around them. So it's very easy to get up and running in the cloud environments, whether I'm using Docker, Azure, or what have you. It's a very good way to do it.

Now, I also talked about tooling here. So let's take a look at DbVisualizer, a tool I have set up out here. I'm actually connected to NuoDB Community Edition, up in my Azure environment. It's very easy to use; I'm sure everybody has been around plenty of tools like this, but it's a very nice tool if I want to execute SQL, other than trying to do it from a prompt, which you can with NuoDB's SQL prompt. It's a relational database, and I just have this little inner join that I run here. So it's a very nice tool for a developer to test SQL, that type of thing.
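The kind of two-table inner join Tim runs here is plain ANSI SQL, so it can be tried anywhere; this sketch uses sqlite3 as a stand-in engine, and the `teams`/`players` tables are made up (the demo HOCKEY schema will have its own names). The join statement itself is what you would paste into DbVisualizer or the SQL prompt.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE teams (id INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT,
                          team_id INTEGER REFERENCES teams(id));
    INSERT INTO teams VALUES (1, 'Boston'), (2, 'Montreal');
    INSERT INTO players VALUES (1, 'Orr', 1), (2, 'Richard', 2);
""")

# A plain inner join, the same statement you would run from a SQL tool.
cur.execute("""
    SELECT p.name, t.city
    FROM players p
    INNER JOIN teams t ON p.team_id = t.id
    ORDER BY p.name
""")
print(cur.fetchall())   # -> [('Orr', 'Boston'), ('Richard', 'Montreal')]
```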

Now, from a DBA side, if I'm trying to look at performance metrics and how my database is performing, I've got two ways here. I can run this either from this tool or from a SQL prompt, and we can take a look at how I'm executing within the product. But the tools make it very, very easy. And again, I want to reiterate: whatever type of tools you're using, you just need a database connection to build your schemas out and test, be it with some type of IDE for a developer, a tool like this for a DBA, or, if I'm creating models, something like erwin. So this is a very quick way to get up and running in this environment.

OK, so Jeff, I'm going to turn this back to you. I've covered the topics I needed to, and I'm looking forward to some questions.

Jeff Boehm: OK. Thank you Tim, that was a great, quick walkthrough of some of the different environments, and I think it tied in well with the NuoDB 3.0 announcement and some of the new environments. Tim was able to show you some of the places you can run NuoDB; as we mentioned, we have customers running us across Azure, and we've tested it with GCP, AWS, VMware, Red Hat, etc., so good mixed environments. Obviously, Tim wasn't necessarily doing performance tests here, but again, 3.0 brought some important improvements in performance, and also in the distributed environments and the ability to do that.

As Tim mentioned, we do have a number of tools available to help you with your evaluation or usage of NuoDB. The first is actually the simplest, which is just a quick recorded demo; if you go to http://www.nuodb.com/full-demo, you'll see that OpenShift integration demo that Tim referenced at one point, with us running across a mixed environment, including OpenShift, both on-prem and in AWS. If you want to give NuoDB Community Edition a try yourself, or upgrade now to the 3.0 version, you can go to NuoDB.com/download. And the evaluation guide that Tim referenced, which is a great step-by-step guide to using the product and checking out some of the different capabilities, including performance testing, is at http://www.nuodb.com/eval-guide.

So with that, I'm going to open it up to see if we've got any questions. I see lots of people on the line, and there have been a few questions here. The first one I'm going to throw to you, Tim: you showed some of these different environments; can I run across two cloud providers at once? Can I actually have NuoDB running in two different environments at once?

Tim Tadeo: Yeah, that's a great question, Jeff. As you talked about earlier with Active-Active-Active, you're able to run across three platforms in the cloud: you can run across Azure, Google, and AWS. I've personally done it quite a few times in tests, and it works very well. And it's important to note for our audience out there that you're able to do that with Community Edition.

The other point I'll make is that it's not only the cloud environment; you mentioned something earlier about hybrid. Not every project is going to be pure cloud; some will require on-prem alongside the cloud, so you can run this across an availability zone up in Azure or AWS, and also run it on a server behind your firewall.

Jeff Boehm: Thank you, Tim, that's a great answer. The second question is in relation to the Active-Active environment: is that read-only across two nodes, or is it full read/write Active-Active?

Tim Tadeo: That's another great question from the audience. This is where NuoDB's architecture differs. It's not simply an Active-Active in read-only mode, because we're doing that replication underneath, and we've got to be able to take transactions coming in and keep those transactions consistent. So, it is full Active-Active: I can be reading and writing to both availability zones.

Jeff Boehm: OK. A couple more questions. There was a question about performance, since we talked about the performance improvements: do we have any benchmark numbers? I'll address this unless you have more to add here. We have not published performance benchmarks per se, but we did do some tests; Tim himself did some really interesting tests comparing some of the different Elastic SQL providers that we alluded to earlier, and we are publishing a blog very shortly on some of Tim's findings, so I think that'll be quite interesting to watch in this space, and you'll see some of the performance numbers that we saw.

I guess on a related question, how is this different from Google Cloud Spanner? I can certainly share my thoughts, or, I don't know, Tim, if you want to jump in --

Tim Tadeo: Sure, I'll jump in. On some of the tests we did, and Jeff brings up a great point about benchmarks: we used the Yahoo! Cloud Serving Benchmark, so that's easy enough for folks out there to download and run out of the box.

So as we compare to something like Cloud Spanner, we're a different architecture. And you have to remember one thing about Cloud Spanner: how they are replicating data from node to node underneath. They're using brute force; they're using atomic clocks to keep things in synchronization, and they claim to be ACID; we've seen some interesting things in there as far as ACID goes. Another interesting aspect right now, and I'm sure they'll build it up at some point, is that you're able to do SQL select statements, but it doesn't support any type of manipulative SQL right now; that has to be built into your application using Google Cloud's API environment, which obviously makes it more challenging. You talked about some of the migration capabilities, and a lot of our customers have migrated applications from MySQL or Microsoft SQL Server; it would obviously be a lot more difficult if you had to rewrite all your DML against a custom API as opposed to continuing to use that SQL. That's a key differentiator: we have a fully compliant ANSI SQL layer. So porting existing applications to Cloud Spanner might present some challenges; there's going to be a lot of rewriting of code.

Another interesting aspect, as I see it, is that I think Google Spanner competes more with the NoSQL environments than being a true, true competitor for NuoDB.

Jeff Boehm: Another question, from somebody I guess not familiar with the Community Edition; I can take this one: what are the limitations of Community Edition? Community Edition is freely available from the link on the site here, and there are really only two limitations. The first is that, from a scale-out perspective, we limit you to three transaction engines and one storage manager, so you're able to test the scale-out at the transaction tier, but you're not going to get full redundancy at the storage manager, and it can only be run in one datacenter. The second limitation is one of support; we have a full enterprise support capability for customers who purchase our Professional or Enterprise editions, and for Community Edition we have a forum where you can certainly get your questions answered or look up information. But as customers look to deploy across broader environments and get full production support, that's typically when they go to the Professional or Enterprise edition.

Another question I'll hand off to you, Tim; it's a fairly vague question, but it just says: can you talk about concurrency a little bit?

Tim Tadeo: Sure, certainly. So we are an ACID-compliant relational database system. How do we do that? Well, what we use is not your traditional type of locking technology: row locking, page locking, column locking. Because we are distributed across a large area, potentially across availability zones if you can picture that, we do something called multiversion concurrency control. That has really been around for a very long time, and it's been discussed at length.

So I want to make a distinction here. When we use multiversion concurrency control, it does not equate to some type of eventual consistency. It's simple: think about MVCC, without getting too much into the weeds here, as readers don't block writers and writers don't block readers, and the application is guaranteed to see the correct version of the data, OK?
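A toy version of what Tim describes, where writers append new versions instead of taking locks and each reader sees a consistent snapshot, might look like the following. It is a teaching sketch of the general MVCC idea, not NuoDB's implementation; real systems also handle commit visibility, conflicts, and garbage collection of old versions.

```python
import itertools

class MVCCStore:
    """Toy multiversion store: every write appends a (txn_id, value) version."""
    def __init__(self):
        self._versions = {}                   # key -> list of (txn_id, value)
        self._txn_counter = itertools.count(1)

    def begin(self):
        """A 'snapshot' is simply the highest txn id visible right now."""
        return next(self._txn_counter)

    def write(self, txn, key, value):
        # Writers never block readers: they only append a new version.
        self._versions.setdefault(key, []).append((txn, value))

    def read(self, snapshot, key):
        # Readers never block writers: they scan for the newest version
        # written at or before their snapshot.
        for txn, value in reversed(self._versions.get(key, [])):
            if txn <= snapshot:
                return value
        return None

store = MVCCStore()
t1 = store.begin()                        # txn 1 writes the initial value
store.write(t1, "score", 0)
reader = store.begin()                    # txn 2 takes a snapshot
t3 = store.begin()                        # txn 3 updates the key afterwards
store.write(t3, "score", 5)
print(store.read(reader, "score"))        # -> 0  (the reader's snapshot)
print(store.read(store.begin(), "score")) # -> 5  (a fresh snapshot)
```

The key property is visible at the end: the later write never disturbed the earlier reader, yet a new transaction sees the latest committed value. That is consistent snapshot reading, not eventual consistency.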

Jeff Boehm: Two questions here about sort of modern architecture; the first one's simple and I'll address it, but I'll let you talk about the second one. The first one is simply: is NuoDB available off the Docker registry? The answer is absolutely yes; you can do a pull, and you will find NuoDB there, Community Edition, that is. And the second one, more generally for you Tim, is: how does NuoDB fit in a microservices architecture? We kind of alluded to that a little bit in the presentation, but do you want to touch on that a little more?

Tim Tadeo: Yeah, that's a really good question. In the cloud environment today, that's what we call ourselves: "cloud-ready," a cloud-ready relational database, for a lot of reasons. But let's get back to the microservices architecture. In today's modern cloud environment, with cloud modernization, quite a few enterprises are moving to that model, right? And what does a microservices architecture look like? Number one, the design principle is services that are very loosely coupled. Number two, it takes advantage of containerization, so I can isolate my microservices from other sets of microservices, and I'm isolating myself from the operating system, so I can stay up and running. And they're meant to crash as well.

Now, let's contrast that, or not so much contrast it, but consider what NuoDB's architecture is. You have an illustration there showing our transaction engines and storage managers. So we are a loosely coupled architecture; in fact, we've decoupled the transaction engines from the storage managers: two different processes that run as peers. So what does that gain you in the end? Why would NuoDB fit nicely there? Well, if I have a microservice that I want to separate out, I can easily put a transaction engine and a storage manager with it. I can quickly start up a container so I can scale with the microservices architecture, and really, the broader point here, as you alluded to, is that we can run in Docker containers.

So I'll take it up a level: if you're running some container management system, be it OpenShift, the container management up in Azure, or the same thing in AWS, now I've got a database that can scale with those microservices architectures; I can keep them loosely coupled, decoupled.

Jeff Boehm: Good, all right, that's excellent. Actually, one last question here, then we'll probably wrap up; it just came in: does all of the data have to be in all of the storage managers? Once again, for Community Edition users, this is not as relevant a question, because Community Edition only supports one storage manager, but as you upgrade to the Professional or Enterprise edition and have multiple storage managers, the question is: is all the data always in all the storage managers, or can I separate it out and have some data in one storage manager and some in another?

Tim Tadeo: Good point, Jeff, and just to reiterate, with Community Edition and its single storage manager you won't be able to do what I'm about to discuss. We have something in the product in 3.0 called table partitioning and storage groups. What that allows you to do, as our audience member asked, is to break that database up. I wouldn't exactly call it sharding; well, I guess it is. It gives me the ability to do a few things. I can partition the data beyond just running in availability zones, and implement my application so I've got application locality, data locality. By that I mean, say I'm in US West and US East. The exact same application is running in East and in West; however, in East, I just want to keep my transactional customer data for United States East in that region, so I create a table partition and storage group there, and do the same in US West. But it also gives me the ability, should US West go down, to still have a complete copy of that data and use the US East data from that application.
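In DDL terms, the East/West split Tim describes would be declared on the table itself. The sketch below is illustrative only: the table, column, partition, and storage group names are made up, and the exact partitioning clause syntax should be checked against the NuoDB 3.0 documentation before use.

```sql
-- Illustrative sketch; verify exact syntax against the NuoDB docs.
-- Rows for each region land in that region's storage group.
CREATE TABLE customers (
    id     BIGINT PRIMARY KEY,
    region STRING NOT NULL,
    name   STRING
)
PARTITION BY LIST (region) (
    PARTITION p_east VALUES ('US-EAST') STORE IN sg_east,
    PARTITION p_west VALUES ('US-WEST') STORE IN sg_west
);
```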

Jeff Boehm: Excellent. OK, all right, well with that, I think we've addressed all the questions that have come in online. If there are other questions, certainly feel free to reach out to us. Again, this webinar has been recorded; we will be posting the recording on our website and sending out a link to everybody who attended. Thank you for your attention today, thank you Tim for the great demonstration and for addressing the questions, and we will see you on another upcoming webinar. Thank you very much, everyone.

Tim Tadeo: Thank you.