In this webinar, CMO Jeff Boehm and VP of Products Ariff Kassam discuss the emergence of the elastic SQL database that forgoes compromises and delivers a distributed database built for today’s modern applications.
Jeff Boehm: Good afternoon, and thank you everyone for joining us. My name is Jeff Boehm, I am the chief marketing officer for NuoDB. And I am joined here today by Ariff Kassam, who is the vice president of products from NuoDB. Welcome to our webinar on top reasons to deploy an elastic SQL database. Today, we will be introducing the drivers for a new class of operational database, an elastic SQL database, and the key reasons you may want to consider this. We will have a live Q&A after the presentation. You can type in questions on the right-hand side, you should see a question panel, and you can type in questions at any time, and we will be addressing them at the end of the webinar. So with that, we will get started.
The software market today has fundamentally changed. Whether you are an independent software vendor that is developing and commercializing software for customers, or a software development team within a larger enterprise, expectations are no longer that you simply develop and deliver high quality software. It’s now about the entire experience. Enterprises used to be responsible for running and managing the software that they acquired, providing the security, the uptime guarantees, and ensuring consistent performance. But now, that responsibility has fallen on the application development organization. Rather than simply being a development team, ISVs and software development organizations are now full service organizations, and they must ensure that they are delivering amazing experiences to their customers, both inside and outside their organization. And this change is not an easy one. In a world that is more global, mobile, social, and data-driven, software development teams have to adapt or die. In a recent study, market research firm IDC predicted that by 2020, 30% of software vendors will fail to make the shift to the new service-based reality. Those software organizations that fail will simply not be able to reach new markets or respond to shifting customer demands, and hence will not remain competitive in today’s dynamic marketplace.
To accomplish this shift, software organizations are rethinking everything about their computing environment: infrastructure, application development, operations, and their business terms. Data workloads are growing everywhere, and storage and access for these large data sets are a growing need as well. Developers are becoming a scarce commodity, and the role that developers play is changing. Applications are being asked to do more and meet more demanding requirements, while keeping costs to a minimum. And workloads are being broken down and delivered as collections of loosely coupled services. To accommodate these needs and transform into a service-oriented, customer-oriented business, software organizations are increasingly moving their applications to the cloud. And purchasing patterns and models for licensing this software are changing as well. As part of this modernization transformation, organizations must consider how cloud applications will handle and store data.
But moving to the cloud is not as simple as just running your existing application on a cloud provider. Software companies embracing this journey are facing significant data modernization challenges: ensuring that workloads are handled responsibly and that data is accurate and kept secure; planning and resourcing for transactional workloads at scale and under dynamic loading; delivering new software and updates to market quickly; and maximizing the return on investments in database resources, infrastructure, skill sets, and existing application code, all while minimizing risks and costs.
In addition to the web and application tiers of cloud-based software as a service, the database plays a significant role, and contributes to your overall success. As you look to appeal to a broader market with your cloud-based offerings, the database can either become a significant bottleneck, or allow you to quickly support new tiers of customers in your rapidly growing customer base. The last thing you want to do is be forced to rearchitect your application, because your database is holding you back. As one of our customers said, database divorce is hard. If you architect your service with the right database technology from the start, you’ll be able to quickly expand your market reach, and support a higher volume of customers without rearchitecting. Your choice in database can also give you deployment flexibility. The cloud onramp, as many people call it, is often not a linear path, with many companies gradually shifting workloads to the cloud, and often replatforming some workloads back on premises. Picking a database that works across environments, on prem, private cloud, public cloud, gives you that flexibility and agility to respond to market, business, and competitive pressures. Just as companies were loath to succumb to vendor lock-in in the past, most want to avoid the modern version of that being peddled by the major cloud providers. As you begin offering your products as a service, you have the opportunity to tailor different versions of that application to different market segments, gaining new revenue streams. Again, does your database support this model? Or does it burden you with a high total cost of ownership that makes going after a commodity market prohibitive?
And as you think about the costs of your database, and its impact on your bottom line, the actual database licensing costs are just one piece. Do you have to rewrite significant aspects of your existing application to accommodate the database? Do you have to retrain your staff, or hire new staff, to move to an unfamiliar database interface, or API? Does your database address common service-based requirements of continuous availability? And does it maximize your hardware investment?
Moving to the cloud and service-oriented architectures gives you the opportunity to fundamentally transform your application strategy and your business. Don’t underestimate the role the database plays in this transformation. To talk a little bit more about the role the database plays, and the evolution we’ve been seeing in the database market space, let me turn to Ariff. Ariff?
Ariff Kassam: Thanks Jeff. Before we talk about the evolution of the database, let’s take a quick look at the evolution of hardware and networking infrastructure over the years. Over the last 30 years, we’ve come a long way in technology. We’ve moved from mainframe-centric systems, through client-server, through web scale out, and now we’re squarely in this new cloud architectural space. The advances in computing power, as well as networking speeds, have resulted in most modern-day applications being based on loosely coupled services that leverage scale out for performance gains. However, as you can see in this diagram, the database tier has fundamentally and stubbornly remained locked in the client-server architectural world. It is still a monolithic process that’s hard to make continuously available, and that requires scale-up hardware for performance improvements.
As business grows, so do your challenges in meeting requirements for real-time access to data, and handling highly dynamic operational workloads within reasonable resource and budget constraints. Databases can either accelerate or constrain your business, because your choice of database technology affects everything from downtime to application performance, and your ability to scale your overall application. The database can be considered the lifeblood of the modern application. To be successful in today’s dynamic world, organizations must modernize their databases.
So what’s required for cloud success? Our customers are saying that they want, quote unquote, “elasticity” without losing the benefits of SQL. Elasticity, to our customers, means a number of things. It could be everything from virtualization, to operating on premises, in the cloud, or on commodity systems in a hybrid approach. It also means dynamically scaling out to meet application performance demands, and scaling back in when the workload drops off after peak demand. It also means full continuous availability, both within the data center, as well as across multiple data centers for DR protection. Our customers are also saying that they want SQL. They’ve looked at NoSQL architectures, they’ve looked at other alternatives, but have found that the power of SQL and the ability to keep SQL reduces their risk in application migrations. That includes keeping the ACID properties, or consistency properties; being able to leverage existing SQL skills and code; and keeping the database abstraction: managing the data in the database tier rather than coding that into the application. Ultimately, what customers are saying is they want to elastically scale their current SQL database into the cloud.
So we’re introducing this new concept of elastic SQL databases. An elastic SQL database combines the scale out, simplicity, and elasticity, with continuous availability, that cloud applications require, while maintaining the transactional consistency and durability that databases of record demand. So now we get to the top reasons for elastic SQL. Let’s take the elasticity part of this equation. We talked about the main drivers for elasticity on a previous slide. So let’s talk about the first one: easy read-write scale out on commodity hardware. Traditional databases can do read replicas, so you can scale out your read workload across multiple replicas. But in today’s world, we need read-write scalability across multiple servers, not just read replica servers. You want to be able to have your full application workload scale out across multiple servers. We also want the dynamic capability to scale back in, to reduce overprovisioned hardware. You could have applications with various cycles, at the end of the month or the beginning of the month, where you know your workload is going to be greater than during the rest of the month. So you want to be able to scale out to meet those demands, and scale back in to reduce costs for the rest of the month. Customers also want flexibility, both in terms of hardware types (containers, physical hardware, virtualized environments) and deployment models (on premises, in the cloud, or across both in hybrid models). And finally, elasticity also means continuous availability: the idea of a single system that provides an easy-to-use way to manage and maintain availability of an application, either within a data center or across data centers, without complex add-ons or additional products layered on top of the database service.
On the other side of the equation is the SQL standard. Customers are asking for a SQL interface that is easy to use, and that makes it easy to migrate existing applications. It’s a standards-based interface that almost everybody is familiar with, and there’s a lot of tooling and an existing ecosystem built around SQL applications and SQL databases. Again, there’s the ability to reduce risk by leveraging your existing application code and moving it to a modern database that supports that application, and the ability to reuse existing code and skills without learning new ones. One of the benefits of traditional SQL databases is the ability to do data management within the database, rather than hardcoding data management, query syntax, and query paths into the application. An elastic SQL database allows just that: it still keeps the data management logic within the database, and doesn’t force you to build that into your application, like NoSQL databases do. And again, because the logic is in the database, it minimizes application complexity.
So we’ve talked a lot about the elastic SQL benefits. At a high level here, let’s take a look at the options you have available for your applications’ database systems. We can split these into traditional SQL databases, purely cloud-based databases, NoSQL systems, and elastic SQL systems. The first category is read-write scale out and scale in. This is one of the main downfalls of traditional databases: they are basically inflexible in terms of how they provide scale. You have to scale up the hardware to get performance improvements in traditional databases. Sticking with traditional databases, you can deploy them both on premises and in cloud systems. However, it’s impossible to split a deployment and go across multiple environments, both on premises and cloud. That’s what we mean by the checkmark in brackets: typically yes, but there are certain cases of deployment flexibility that traditional databases don’t support.
Continuous availability: traditional databases typically have a fairly good solution for availability within a data center, but to get availability across data centers, you typically need a database add-on to get continuous availability for DR purposes. And of course, traditional databases have a full SQL interface, applications are already based on them, and they handle data management. The cloud vendors are very similar to the traditional vendors, in that the cloud databases are basically instances of traditional databases in a cloud service. They typically provide a database-as-a-service environment that makes it easier to manage, so you don’t really have to worry about physical installation and management of the database. So they typically have the same advantages and disadvantages as traditional databases. However, cloud databases are locked into their cloud, so deployment flexibility is limited, and they have the same limitations around continuous availability and DR protection.
The NoSQL databases were obviously born out of frustrations with traditional databases in terms of read-write scale out, deployment flexibility, and continuous availability. However, in building those systems, they gave up a number of things: the standard SQL interface, the ease of use and migration of current applications, and they force users to build a lot of logic into the application to manage the data. Elastic SQL databases are the best of both worlds. We provide the scale out, simplicity, and flexibility of NoSQL databases, while maintaining the traditional SQL interface, in-database data management, and application migration capabilities.
So, we want to introduce NuoDB, which we classify as an elastic SQL database. The main architectural advantage of NuoDB that enables this elasticity is the splitting of the traditional database into two service tiers: an in-memory transaction tier for processing transactions, and a storage management tier for maintaining durability of the data. Each layer in the NuoDB architecture is independent and can be scaled independently of the other. So you can scale out the transaction processing layer to account for additional transaction processing and scale requirements from the application. Or you can scale out your durability layer, either to keep multiple copies of the data for safety, or to scale out your write I/O. It provides a standard ANSI SQL interface, and it allows flexible configuration in either on-premises or cloud deployments. From an application perspective, all these processes, the transaction processes as well as the storage processes, look like a single logical database. The application still connects to a single environment, which distributes the workload across the transaction processing engines, and makes sure the data is stored durably across all storage managers.
Because there’s a single logical database, and this is a peer-to-peer architecture, any of these processes can be located either locally or in a different data center, to provide active deployment across multiple data centers. And again, because this is a peer-to-peer architecture of independent processes, we can survive the failure of any one of these processes without affecting the application. It also supports performing rolling upgrades without application outages.
One of the key advantages and features that enables a lot of these capabilities is what we call the durable distributed cache. If you recall, the transaction engines are fully in-memory processing systems. Putting the cache local to the transaction engine, close to the application, allows for fast data access and processing at the transaction engine layer. There is no requirement to store on disk at that layer, and it allows us to optimize the code paths for in-memory use. The cache is also distributed across multiple transaction engines. Each transaction engine doesn’t have to have a full copy of the data; it can have a partial copy of the data that is being required by the application talking to that particular transaction engine. It is also dynamic, in that the cache gets built up based on the usage that goes through that transaction engine. Each peer coordinates with the others to provide a single logical view of the entire database. And then finally there is the durability. Durability provides safety of the data in case of failures. So we can guarantee that any committed transactions are durably on disk, and stored for resiliency in the case of either process or data center failures.
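The demand-driven, write-through behavior described here can be sketched in miniature. This is an illustrative model only, not NuoDB's actual implementation; the class and method names are invented for the example:

```python
from collections import OrderedDict

class DurableCacheSketch:
    """Toy model of a demand-populated, write-through cache.

    Reads populate the cache lazily, so each node holds only the
    working set its clients touch; writes go to the cache AND to a
    durable store, so losing the cache never loses committed data.
    """
    def __init__(self, durable_store, capacity=4):
        self.store = durable_store          # stands in for a storage manager
        self.cache = OrderedDict()          # partial, usage-driven cache
        self.capacity = capacity

    def read(self, key):
        if key in self.cache:               # cache hit: fast in-memory path
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.store[key]             # miss: fetch from the durable tier
        self._insert(key, value)
        return value

    def write(self, key, value):
        self.store[key] = value             # durability first
        self._insert(key, value)            # then keep a hot copy in memory

    def _insert(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity: # evict the least recently used entry
            self.cache.popitem(last=False)

store = {f"row{i}": i for i in range(10)}
node = DurableCacheSketch(store)
print(node.read("row3"))   # 3: populated into the cache on demand
node.write("row3", 99)
print(store["row3"])       # 99: the durable copy was updated too
```

The real system coordinates many such peers into one consistent view; this sketch only shows the single-node idea of a partial, usage-built cache backed by durable storage.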
In practice, this is what a typical deployment looks like. Just to orient you: there are three application servers, three transaction engines, and two storage managers. So you have three transaction engines processing application workload, and two storage managers storing the data on disk for durability, giving you a factor of two for availability on your storage side. So let’s take the example where the middle transaction engine fails; that server goes away for whatever reason. Because this is a single logical database entity, the application can connect up to any transaction engine and continue with its processing, without any sort of application outage from the client perspective. You could also expand this configuration by adding new TEs or new SMs dynamically, without having to take an application outage or undertake any sort of large administration effort.
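The client-side behavior in that failure scenario can be sketched generically: try another peer when one engine is unreachable. The endpoint names and the `connect` callable are hypothetical stand-ins; real database drivers typically handle this through the connection string rather than application code:

```python
import random

def connect_with_failover(endpoints, connect):
    """Try each endpoint in random order until one accepts the connection.

    `endpoints` is a list of host strings; `connect` is whatever driver
    call opens a session (a stand-in here). Raises only if every
    endpoint is down.
    """
    attempts = endpoints[:]
    random.shuffle(attempts)          # spread clients across the engines
    last_error = ConnectionError("no endpoints configured")
    for host in attempts:
        try:
            return connect(host)
        except ConnectionError as exc:
            last_error = exc          # engine down: fall through to the next peer
    raise last_error

# Simulate three transaction engines with the middle one failed.
UP = {"te1", "te3"}

def fake_connect(host):
    if host not in UP:
        raise ConnectionError(f"{host} is unreachable")
    return f"session@{host}"

session = connect_with_failover(["te1", "te2", "te3"], fake_connect)
print(session)   # a live session on te1 or te3; te2's failure is invisible
```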
Again, because this is a single logical database, the systems can be distributed across multiple data centers to provide active read write scale out across multiple data centers.
Jeff Boehm: Thank you, Ariff. So, what I’d like to do now is actually just show a quick glimpse into NuoDB. I’ve got a short, roughly one-minute snippet here. What you’ll be seeing is a logical diagram showing those transaction engines and storage managers that are actually running, in this case, in a Red Hat OpenShift environment, running both on premises as well as in Amazon Web Services, and you will see some throughput graphs at the top, showing you the output of these engines. So what we’ll be showing you is, again, the elastic SQL database providing these always-available, always-active hybrid cloud environments. So on the left, you can see that we have a single transaction engine, and we simply add another transaction engine by scaling that up within the OpenShift environment. Across the top, you’ll see that reflected: we now have a second transaction engine, and the transactions per second jump up to reflect that it’s got a second processing engine. We can continue to add nodes, and in this case, we’ll scale this all the way up to 30-some transaction engines, hitting well over two million transactions per second. Those transaction engines can be spread across the public cloud as well as on-premises data centers, handling workload in an active/active environment, such that even if you suffer an outage in your on-premises data center, the public cloud continues to give you the service that you need, and continues to provide capabilities for servicing your application. What this gives you is a much better application experience, giving you zero downtime, allowing you to perform things like rolling upgrades, handling server outages, and ensuring much better performance by giving you the ability to easily scale up and scale down your transaction engines as needed.
And complete automated redundancy and disaster recovery, again, thanks to the architecture that Ariff described of multiple transaction engines and multiple storage managers. All in all, this should help you as a business with your customer satisfaction, and your customer retention.
Because this is based on a standard ANSI SQL API, with standard SQL capabilities, it’s much easier to reuse your existing SQL logic and skills. You don’t have to learn a new language, retrain your staff, or hire new people in order to develop your application. And ultimately, you can trust the database to handle data management logic. You don’t have to build data management logic, sharding logic, etc., into your application tier itself.
This architecture also provides better agility for you. If you need to handle increased workload, if you need increased performance, if you need better resiliency, you simply add a node. There’s no application redesign or rearchitecture; it’s very easy to simply add a node, or remove a node, as you need. And as I mentioned, that can be deployed across any environment, or across a hybrid environment. You can also modify those applications much more quickly, again, with a consistent SQL API. All of this ultimately leads to a reduction in total cost of ownership. Not only is our licensing model significantly less expensive than the traditional relational databases, it provides much better server utilization through an active/active architecture, where you don’t have servers sitting idle, waiting for a failover to happen. Instead, those servers are being utilized, and can natively handle resiliency and continuous availability. And because you’re reusing your existing SQL code and skill sets, you don’t have to go through retraining or rehiring in order to develop or expand your applications.
Before we wrap up and take some of the questions, I see some questions have been asked already in the question box. If you have other questions, feel free to type those in there. But before we get to those questions, I want to share a couple customer examples, and wrap up with just a little bit about NuoDB, the company. The first customer example I want to share is Dassault Systemes. For those of you not familiar with Dassault, they’re a $3 billion French-based software company that provides manufacturing software solutions in the CAD/CAM space, in the product life cycle management, or PLM space, to the world’s leading manufacturing organizations, all around the world. As with many software markets, the engineering software market is in transition to the cloud. The majority of PLM users still use on premises solutions, but Dassault recognizes that the future looks different. The future is cloud-based, it’s service-based, and it’s all about integrating their portfolio into an integrated suite of services they’re calling the 3D Experience. Something where the data tier actually becomes a competitive advantage for these services they offer. Dassault’s been working with NuoDB for several years, and has deployed NuoDB as the relational database backend to their 3D Experience cloud offering. They selected NuoDB due to the elastic SQL properties we described in this presentation. Their applications were originally designed for a relational database backend, thus maintaining a SQL interface, and the traditional relational database properties was important. But unlike traditional relational databases, NuoDB is designed for the cloud. And they’re finding it easy to scale it out as demands increase.
A second example, in a quite different space, is the London Stock Exchange. And in particular, a division of the London Stock Exchange called UnaVista. UnaVista is a software development organization within the London Stock Exchange that develops and markets a trade reconciliation platform that helps LSE and its clients reduce regulatory risk, and ensure compliance with stricter regulations. With financial regulatory changes set to take place in Europe in 2018, notably a shift in the MiFID, or Markets In Financial Instruments Directive, becoming a regulation, thus MiFID is becoming MiFIR, UnaVista needed a platform that could more easily scale out to accommodate increasing demand, and anticipated increases in data volumes. Their existing database solution really could not easily scale out to address the need, but UnaVista didn’t want to abandon the SQL interface and database consistency and capabilities they had come to rely on. They also had staff well trained in SQL databases, and didn’t want to have to retrain or hire new staff. They can’t afford any application downtime, and they particularly liked the separation of transaction processing from storage that Ariff described, giving them deployment flexibility and elasticity. Overall, they’re experiencing significantly higher performance and throughput with NuoDB, and an elastic SQL database, while also significantly reducing their overall costs.
The final example I’ll touch on is Alfa Systems. I actually had the pleasure of sitting down with Tim Gage from Alfa Systems a couple of weeks ago to discuss their move to the cloud; a video of that interview is available on our website if you’d like to learn more about Alfa’s deployment. Alfa is a leading asset finance software platform. Some of the largest asset finance organizations, from Mercedes Benz Financial Services to Toyota Financial Services, Siemens Financial Services, and the Commonwealth Bank of Australia, rely on Alfa Systems. Like many software providers, they have traditionally offered their product as an on-premises application, but are increasingly seeing demand for cloud-based offerings. As they move to the cloud, they sought an offering that they could easily migrate their existing SQL-based applications to, and gain new cloud-based efficiencies. In my discussion with him a couple weeks ago, he described how they downloaded the free NuoDB community edition first, and actually migrated their existing applications on their own, in just a couple of weeks, to the NuoDB community edition. The community edition is a fully functional, freely available version of NuoDB that hundreds of organizations have built and deployed applications with. As those companies look to scale their applications out to handle higher volumes, or achieve storage redundancy, they upgrade to the professional or enterprise version of NuoDB, which Alfa has now done. Overall, Alfa believes that NuoDB will provide significant cost savings over traditional database offerings, value that both Alfa and their end clients will benefit from, and will allow Alfa to more easily and seamlessly move to a modern cloud-based deployment model.
As a final slide, and then we’ll get started on these questions that I see here: NuoDB as a company was founded about seven years ago, and we’ve been pioneering, developing, and commercializing this new generation of operational databases, what we call elastic SQL. We have a management team and an original development team comprised of database industry veterans and pioneers, and hundreds of organizations have benefited from our patented, novel approach that brings together the best of both worlds: the traditional relational database reliability and guarantees, with modern cloud flexibility. With that, I do see several questions that have come across the transom here, so we’re going to start with one. We’ve actually had a couple questions along the same lines, talking about migration. One asked how easy it is to migrate an existing application, and another asked whether it is possible to migrate from Microsoft SQL Server 2008 to NuoDB. Ariff, do you want to take that?
Ariff Kassam: Yeah, so it’s relatively simple to migrate. Jeff actually talked about one of our customers, the London Stock Exchange, that migrated from SQL Server to NuoDB. As with all database migrations, there are sometimes specific SQL dialects that need to be handled. We generally cover most of them, since we are ANSI SQL-compliant, but there are certain SQL extensions that each database vendor provides that we need to add, and we did a couple for the London Stock Exchange. But generally the migration is relatively simple.
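As an illustration of the kind of dialect difference such a migration touches, here is the same "first 10 rows" query in SQL Server's dialect and in ANSI SQL. The table and column names are invented for the example:

```python
# SQL Server's proprietary spelling uses TOP...
tsql = "SELECT TOP 10 * FROM orders ORDER BY created_at"

# ...while ANSI SQL (since SQL:2008) uses FETCH FIRST.
ansi = ("SELECT * FROM orders ORDER BY created_at "
        "FETCH FIRST 10 ROWS ONLY")

print(tsql)
print(ansi)
```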
Jeff Boehm: Yeah, and again, thinking back to our conversation with Tim Gage a couple weeks ago, they had actually built their application to work with both Oracle and MySQL, from recollection. And as I mentioned a minute ago, they were able to migrate that application quite quickly from those standards-based ANSI SQL interfaces to NuoDB.
Ariff Kassam: Yeah.
Jeff Boehm: Another question here about sharding, and how what we’re doing differs from sharding.
Ariff Kassam: Right. So sharding is an application strategy where you take the data that the application uses, and partition it into multiple smaller databases, allowing the application to have access to multiple smaller databases rather than one large database. It’s a strategy that applications use to enable scale out, but it requires a lot of changes at the application layer, and management of shards can become pretty time consuming as your number of shards grows. With NuoDB, from the application’s perspective, it doesn’t look like multiple databases or multiple processes. Again, it’s a single logical database, and we’re doing the partitioning automagically within the database, and within the transaction engines, based on our architecture.
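The application-level sharding being contrasted here looks roughly like this in practice. This is a minimal hash-routing sketch; the shard count, routing key, and in-memory dicts standing in for databases are all arbitrary choices for the example:

```python
NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]   # stand-ins for 4 smaller databases

def shard_for(customer_id):
    # The application itself must route every read and write to the
    # right shard, based on some partitioning key.
    return shards[hash(customer_id) % NUM_SHARDS]

def save_order(customer_id, order):
    shard_for(customer_id)[order["id"]] = order

def get_order(customer_id, order_id):
    return shard_for(customer_id).get(order_id)

save_order("acme", {"id": 17, "total": 120})
print(get_order("acme", 17))
# Cross-shard queries, rebalancing when NUM_SHARDS changes, and
# multi-shard transactions all become application problems. A single
# logical database keeps that routing inside the database tier instead.
```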
Jeff Boehm: Got it, OK. Another question here about portability. Actually there are two related questions here. One asks what environments can I run NuoDB in, and another asks, is NuoDB portable between cloud providers? For example, AWS or Azure.
Ariff Kassam: Yeah.
Jeff Boehm: Can you talk about some of the environments that we touched on?
Ariff Kassam: Yeah. So, our customers span multiple environments. We have customers on premises, on standard physical Linux servers, virtualized Linux servers, or even container environments on on-premises systems. We also have customers running in cloud environments, AWS and Cloud Scale. We haven’t certified on Azure as of yet, but it is on our road map to support Azure as well as Google Cloud Platform. I don’t expect there to be any technical challenges; it’s more of a certification process.
Jeff Boehm: Got it. OK. Another question about some of the products out there; there are sort of two related questions here. One is, where would you recommend using elastic SQL compared to NoSQL databases? Another one asked, what makes NuoDB cloud friendly compared to document databases, which I think is probably a shortcut for NoSQL-type systems. How would you compare the use cases of an elastic SQL system versus a NoSQL system?
Ariff Kassam: Right. So databases like DocumentDB, or any other NoSQL database, are great databases for specific use cases -- use cases where applications don’t require strict consistency, are fine with key-value APIs, or want to embed additional logic in the application layer. So there are a lot of applications out there that are well suited to NoSQL databases such as DocumentDB. Those provide great availability and great scale out, typically across multiple data centers, so they’re great options. Where those options fall short is when applications actually need strict consistency, transactional semantics, and the ability to leverage their existing SQL logic and SQL systems to get to an elastic, scale-out environment across data centers. And so the need we’re seeing is for those classes of standard relational, transactional applications that can’t migrate, or cannot easily move, to a NoSQL system.
Jeff Boehm: Yeah. And in fact, there’s another case study we published, from a smaller company called CauseSquare that provides an online portal for charities and nonprofit organizations to build communities. They had originally built their application on MySQL and were looking for a better scale-out alternative. They evaluated NoSQL solutions and, as you said, found that the needs they had -- especially being able to migrate a MySQL app, but also their consistency and transactional requirements -- really were not appropriate for a NoSQL system. Another question here asks about cost. I’m not going to go through the whole cost table here; we’re actually fairly transparent about this, so you can go to the NuoDB website -- I believe it’s under the product area of our menus -- and you will see our full cost structure there. I would also point you toward the NuoDB Community Edition, as I referenced with Alfa. Many of our customers actually start with the Community Edition and are able to develop applications and use it for free, with no time limitations. The only limitation is that the Community Edition is limited to three transaction engines and one storage manager. So as your data volume needs grow, or as you want full backup and data redundancy, you’re going to want to upgrade to either the Professional or Enterprise Edition outlined on the website.
I see one additional question here that I’m going to answer. If there are additional questions we haven’t gotten to, we will follow up with you -- and if there are any other questions, feel free to type them in. The last question, again for Ariff, asks what connectivity standards are available for J2EE or Spring-framework-based environments.
Ariff Kassam: Yeah. So we support standard SQL connectivity clients such as JDBC, ODBC, and .NET. We’ve also got some community-based connectivity through Python, Node.js, and other types of application frameworks. But generally, we support the standard interfaces: JDBC, ODBC, and .NET.
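[Editor's note: for a J2EE or Spring environment, the JDBC connectivity Ariff mentions looks like any other JDBC data source. The sketch below assumes NuoDB's published `jdbc:com.nuodb://host/database` URL scheme; the host, database name, and credentials are placeholders, and the connection code is shown commented out since it requires a running NuoDB domain.]

```java
// Minimal sketch of connecting to NuoDB through the standard JDBC interface.
public class NuoDbJdbcExample {

    // Build a NuoDB JDBC URL from a broker host and database name.
    static String connectionUrl(String host, String database) {
        return "jdbc:com.nuodb://" + host + "/" + database;
    }

    public static void main(String[] args) {
        String url = connectionUrl("localhost", "testdb");

        // With the NuoDB JDBC driver on the classpath and a database running,
        // a connection is opened like any other JDBC source:
        // try (java.sql.Connection conn =
        //          java.sql.DriverManager.getConnection(url, "dba", "secret");
        //      java.sql.Statement stmt = conn.createStatement();
        //      java.sql.ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
        //     while (rs.next()) System.out.println(rs.getInt(1));
        // }

        System.out.println(url);
    }
}
```

Because the interface is plain JDBC, the same URL can be handed to a Spring `DataSource` or a J2EE connection pool without framework-specific adapters.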
Jeff Boehm: Great. OK. Well, thank you Ariff for joining me, and thank you all for joining us today. Hopefully this has been informative and has given you a sense of a new class of database, what we call the elastic SQL database: really providing that best of both worlds between the new cloud architectures you want for scaling out and the traditional benefits of relational databases around SQL and ACID properties. Certainly, if you have additional questions or want additional information, I encourage you to visit the NuoDB website. We will be running additional webinars later this month outlining additional capabilities and benefits of elastic SQL. Again, thank you all for joining, and we wish you a good day. Bye-bye.