
Competitive Advantage through Microservices for Retail Banking

Watch this session with Red Hat and NuoDB technologists to understand how a microservices architecture directly supports your response to banking meta trends.

Video transcript: 

Alan Shimel: Good day, everyone. This is Alan Shimel, and you're listening to another DevOps.com webinar. Thanks for joining. Today's webinar looks like it's gonna be a good one. It's "Transforming Retail Banking: Competitive Edge Through Microservices." Our webinar today is sponsored by Red Hat and NuoDB.

Let me introduce you to our panel members today.

So, joining us from NuoDB is Boris Bulanov.  Boris is the Vice President of Technology at NuoDB.  Boris, welcome to DevOps.com webinars.

Boris Bulanov: Thank you, Alan.

AS: Fantastic, you sound great. And joining us from Red Hat, Technical Consulting Architect, Justin Goldsmith. Justin, welcome to DevOps.com webinars.

Justin Goldsmith: Thank you, Alan.

AS: Okay, sound check’s out of the way.  Boris, I’m going to hand things over to you.  Let’s have a great webinar.

BB: Fantastic.  Thank you so much, Alan, and thank you everyone for participating and taking time with us.

So, just in terms of maybe reintroducing ourselves, both Justin and myself are practitioners, technical practitioners who spend time for the most part in financial companies, specifically in banks worldwide. In our roles, which are pretty unique, we get to see a cross-section of multiple organizations, and therefore we can draw certain conclusions and observations. When we decided to put this webinar together, we really wanted to share these observations from a techie perspective, if you will, and frame the high-level picture of why it appears to us that the banking industry is in a very interesting transition, sometimes referred to as digital transformation, sometimes digital banking. Things are happening, and we wanted to share a couple of thoughts with you in the front end of the session, which should frame the discussion further quite well.

And then, Justin will jump into a little bit more detail on Red Hat's offering with OpenShift and how it fits into this whole notion of banking transformation. And after that, I'll spend a few minutes describing the technical background and rationale behind why distributed databases, and specifically NuoDB, are a great fit for these types of activities and are actually being used quite extensively across banks.

So with that, to step back a little bit: when we think about the financial industry in general, or banking specifically, we generally think about cutting-edge technology, lots of investment, new types of hardware, new types of applications, which is absolutely the case. However, another interesting observation, if we step a little bit further back and look at banking, is that banking is quite slow in terms of changing itself.

So talking to experts in banking who spend their entire lifetimes working in the banking arena, people generally refer to something like the introduction of ATMs, automated teller machines, as really the last big leap forward for how banking services are delivered, right? Ever since then, there hasn't really been a fundamental innovation in terms of how people interact with banks. And if you think about it, ATMs were introduced almost 50 years ago. So here it is, half a century since the last time the banking industry really reinvented itself; it's a very long time.

But interestingly enough, in the last number of years, that is changing, right? So clearly, the banking industry is starting to change, and it's actually much more significant than that. The banking industry is being forced to change and reinvent itself, and it's really a great time to be a technologist in this time and space; a lot of interesting things are happening. But the question is, what is happening now that is different from a number of years ago? From an outsider perspective, it's actually quite simple. Historically, banks really competed with each other in terms of how you gain customers, how you produce more revenues, and that type of competition really is incremental; you invest in better technology, you invest in better systems, but it doesn't really force you to do something entirely different. And that has changed dramatically in the last few years with what you would describe as sort of hyper-competition, if you will, from competitors which historically are not a part of the banking industry itself.

So if we look at this particular chart, there are a couple of points I wanted to make. If you look at a bank, a bank is generally comprised of several lines of business, right? And lines of business include things like core banking. Core banking is pretty much opening a checking account or a savings account, the fundamental banking operations. But then there are additional services around that, around payments, around lending, around currency exchanges, and the idea of how banks really operate today is that you attract customers to have accounts with you in core banking, but then you cross-sell and upsell other services to them to make real money. So all those different lines of business have different amounts of revenue associated with them.

So interestingly enough, external actors in the ecosystem are starting to carve out bits and pieces of the core banking revenues in a very significant way, right? There are a couple of areas where this is more apparent than others.

So, one example is, sometimes pure internet players like PayPal or like Quicken start carving out certain parts of the banking business, because they are much better at doing one thing and one thing only, and they perfect that particular line of business, and they’re able to attract a lot of customers to do mortgages, or to do payments with them.  So that’s one area where there is a lot of pressure that is coming from outside of the banking world.

Another very significant pressure is from large tech platform players, and there are specifically a number of these players which analysts in the banking industry often refer to as GAFA. GAFA stands for Google, Apple, Facebook, and Amazon. Those are the platform players which are adopting some of the core banking processes to be incorporated into their platforms. Why is this interesting and important to the banking industry? Well, those companies that I mentioned have a tremendous track record and experience in terms of creating very compelling websites, very compelling user experiences. They do not necessarily have to look for new customers in terms of providing banking services, but they can just extend their existing platforms with new services, and therefore just add the ability to do payments or lending and other core banking operations, to their installed base, to their users, to their providers, to their merchants. And therefore, they're extremely well-positioned to take advantage of the valuable banking services which otherwise belong in core banking. And if you think about this, a couple of observations: these platform companies are really after the types of banking services which are most lucrative. In a sense, payments is an incredibly large industry, right? Worldwide payment fees add up to multiple trillions, that is, trillions of dollars per year. So imagine that somebody with a global presence like Amazon or Google or Facebook would start incorporating payments at a higher pace into their offerings; that truly represents a very significant threat to core banking.

Another interesting point I wanted to make is around really breakthrough innovations in the space which are not necessarily in the path of core banking operations today, but very well may be, and those are the concepts of blockchain, or technologies rather than concepts at this point, blockchain and cryptocurrencies. So while still a little bit away in terms of actual impact, these are technologies which can truly redefine how banking is both delivered and consumed by the ecosystem, right? Specifically, just a quick example I can give you: if you look back a few years at a company called Napster, some of you may remember those guys, right, they truly disrupted the music industry because they provided the ability for users to exchange music and music collections outside of the core industry norms, if you will, right? And that was very disruptive. So in the same way, blockchain and blockchain-derived technologies present a very different view of banking, where banking services can be delivered peer-to-peer, between individuals, between organizations, without really having a trusted third party sitting at the center of conducting business transactions. And what this really means is that technologies which are blockchain-based have the potential of being sort of an existential threat to the banking industry as a whole, and therefore banks today, while not necessarily trying to provide a counter-solution with blockchain, are incorporating elements of cryptocurrency and blockchain into their offerings, trying to stay at the same pace as those environments.

So the net is that the next couple of years will be very exciting for those of us who are on both the technology and the business side in the banking arena, because the whole market segment is really in a fundamental transition, reinventing itself, if you want to look at it from that perspective.

So how do you change yourself? And that's where this notion of digital transformation comes in. And the best way to compete with somebody is to adopt best practices, or those successful practices that are being used by your competition. And if you look at the banking industry, what they need to be able to do is to start moving much faster. From a top-down view, business agility, or the ability to roll out new services, new offerings faster, is absolutely critical, right? That's what defines companies like Google and Apple and Amazon. Being able to produce a customer experience, a user experience, which is unparalleled is very important for the banks. The ability to be easy to do business with is critical.

So, I just wanted to give you a quick example of probably one of the most standard business processes in banking, something called new customer onboarding, or application processing, right? When somebody decides that they want to either open a credit card or sign up for a bank account, you have to go through a number of steps before a bank makes you a customer, right? And sometimes that involves talking to a customer rep for a few minutes on the phone to open a credit card. Sometimes you need to do something more substantial, but each line of business within a bank today is very likely to have its own onboarding process, right? And therefore, both the implementation of the process on the technical side and the customer-facing view of it are very different, and sometimes it's not easy to deal with.

As a matter of fact, I wanted to give you an example: last year I was opening up what is called a dependent account for my son, which is sort of like a part of my personal account, and I've been a customer of this particular bank, which is one of the top three banks in the U.S.; I will not name them. But it took, first of all, literally making an appointment at a branch to sit down with a customer rep for an hour to open up that application. So I spent a lot of time, a lot of planning, for one reason only: to open up an account for my son. And it's clearly not a very fast, very rewarding experience, if you will. That needs to change.

On the flipside, I wanted to share another personal experience: a few months ago I decided to open up a cryptocurrency account at an internet bank, and that was a radically different experience. All I had to do was provide that institution my social security number. Then I took a mobile app and took a picture of my passport, and I took a picture of myself. Those are the three things which I had to provide to them interactively, and the system thought for a little while, and then in a minute or so, it came back, essentially approving me as a customer. So within the next five minutes, I was able to trade cryptocurrencies, right?

So this is an example of a different type of experience, user experience, where the front-end interaction is extremely streamlined. You can imagine that the back end behind this kind of action is very complex. They have to validate my identity; they have to check my credit score. They probably have to go to something like an anti-money-laundering registry, make sure that I'm a legitimate person to deal with, and so forth, right? But the idea that this type of onboarding process and risk assessment can be done in real time is really a very high bar to jump over for traditional banks. But that's what needs to happen.

Right, so let's again remind ourselves: business agility and customer experience are where the banks need to get to. But at the same time, banks are banks, and we have certain expectations for them. They have to be completely trusted; they have to be always available. The analogy is, whenever you pick up the phone, you have to be able to hear a dial tone. Right, with money, it's the same thing. You have to be 100 percent guaranteed that your money is in a safe place; you can always access it. You can always check the balance. And therefore, the banks have a very high bar of achieving this kind of business agility and superior customer experience to be able to compete both with each other, but also with very predatory, very experienced internet companies. But at the same time, they have to retain the basics, and this notion of high availability, being always on, and being trusted are the cornerstones of that.

So switching to the next topic: now we know that the banks need to change, they need to transform. We know that they need to, or they know that they need to, gain agility and customer experience. How do they do that? And what is that transformation? What does it really entail? And here, the picture is quite simple, especially for most of us on the phone who are practitioners of this; the picture is pretty straightforward. But generally, this type of transition involves several components: it involves change in process, right, the development process, the software development life cycle needs to be different. It involves people, right, skills are different; people can absorb new skills, but it's always good to have something that you've done before so you're not doing it for the first time. So skills are important. And lastly, technology is important, because technology that fits one type of application deployment will not fit another, right?

So there's this notion of moving from classical enterprise IT, and one simple way of viewing it or thinking about this is that it's like a layered cake, right? In a bank today, you have a number of organizations which are responsible for their own part of the software development life cycle. So for instance, you have somebody who is responsible for architecture and design of the application. You have somebody who is responsible for development. There is an organization which does test and QA. Then somebody will operate the software, and DBAs will operate the databases. So there are many different organizations, and a single project needs to be able to span all of them. Well, the outcome is something that we are already used to expecting: the process is very expensive, it's very slow, and the number of changes that this kind of process can produce is very, very limited. As a matter of fact, generally, it's the norm for a bank to have two, three, four changes a year per application, which is clearly not acceptable.

And that's why this process is changing, right? So this forum today is all about DevOps, right, the ability to transition into a very different environment which, rather than having this layered cake, has individual teams which are responsible for the entire software development life cycle of a slice of an application, a slice of the project. That's how leading internet companies implement their infrastructure; that's how banking today is transforming itself, maybe not yet in the purest form of DevOps, but it's definitely moving in that direction.

And actually, in the next section of this session, Justin will jump in and cover a little bit of what is involved in DevOps and how Red Hat infrastructure, both infrastructure and technology as well as processes support that kind of a transition.

Just to mention a couple of other things. Banks are well-known for their broad use of mainframes, which is a fantastic technology. Unfortunately, it's very expensive, and it's also not very well aligned with online processing, which is essential for supporting banking operations today with all the devices, mobile devices, internet devices. The number of inquiries, sort of like the workload the banks are experiencing, is dramatically changing, right, from being batch-oriented, processing lots of information at once, to always-available, interactive loads. Mainframes are not very good for that. Therefore, applications are migrating from mainframes, or offloading some of the processing from mainframes, to open platforms like containers and clouds.

Applications, as we've mentioned, in their current incarnation take an amazing amount of effort both to develop and maintain. That's not very efficient, so the new approach is to break them up into microservices, and have slices of applications or projects make up certain services, certain functionality. Those are much more efficient ways to organize the software development life cycle.

And lastly, databases need to change. Databases are a large part of the application stack, and databases that have been designed for a centralized environment don't work that well anymore in a distributed environment, in a container environment, and in a cloud environment. And we'll touch on that a little bit more in a later part of the session.

So with that, I wanted to pass the mic to Justin, and he’ll give us a little bit more information about how OpenShift fits into these kinds of trends in banking transformation.  Justin?

JG: Thank you, Boris. Let me just quickly reintroduce myself. I'm Justin Goldsmith. I'm an Architect in our Financial Services Consulting Team at Red Hat. So I've actually worked on a bunch of the applications that Boris mentioned in the first half. I've worked with some of the largest banks in the U.S. on their payments platforms, even some on their KYC, or Know Your Customer, processes. So hopefully, in the future, that process gets fixed, and your son can get onboarded at the bank faster.

Okay, so first, digital transformation.  I’m going to talk a little bit about how Red Hat thinks we should do this, and how OpenShift is at the core of that.

So first, we need to change how we build applications and how we deliver them: CI/CD, basically the whole process from building through deployment to production. We need to work on our platform; existing infrastructure isn't exactly built to deal with this, so we want to change how we create our platforms to enable apps to be deployed faster. And then there's the process of how these teams work together. Isolated, siloed teams can't really do this that well, so DevOps is obviously an integral part of digital transformation.

So, at Red Hat, we think containers are an integral part of this process. So first, what are containers? They are isolated Linux processes on a shared kernel. They are much more lightweight than traditional VMs, and they're also portable across environments. From an application point of view, what does that mean? We bundle our application and all of its dependencies in an immutable package. That package can be deployed anywhere from my laptop to our production environment, or any number of environments in between. Also, we have access to shared images so that we can leverage other people's technologies, including NuoDB's, to run a database without having to worry about how we build that.

Okay, so I'm going to really harp on portability here. So, containers provide application portability due to the standard image format, and the fact that they are only dependent on Linux. This gives teams better control over the infrastructure where they deploy applications. A container host provides a common ground for running containers on any infrastructure, from your own laptop to bare metal, virtualization, private and public clouds. In order to guarantee that portability across those environments, it is best practice to run the same version and distribution of Linux in all environments. Containers still run on Linux, so it's not necessarily a good idea to, for example, build your container on Ubuntu and then run it on RHEL in production. We want the same standard Linux across all environments where we use our containers to guarantee that they work everywhere.

So at the foundation of Red Hat's digital transformation work is OpenShift, which is our container platform. OpenShift is built on top of the industry-standard Red Hat Enterprise Linux, which is the foundation for running Enterprise-class Linux containers.

On top of that, we have Kubernetes, and Red Hat provides as part of OpenShift an Enterprise version of Kubernetes, so that deals with container orchestration, scheduling, persistent storage, life cycle management, operational management, and a bunch of other infrastructure services required to be able to run containers at scale and in production. Sometimes this is referred to as container-as-a-service, or CaaS.

On top of that, OpenShift does not limit itself to being a CaaS. It also provides application services, such as lightweight application platforms, message brokers, single sign-on, and other middleware. It also provides build automation and CI/CD pipelines so that developers can take advantage of using these containers, basically using the same tools they use today. This layer is often referred to as "PaaS", or platform-as-a-service. And with OpenShift, you don't have to choose between the two. We let the teams decide what they want to do. If they want to leverage the features that OpenShift provides and use it more as a PaaS, that's great, and we fully support that and recommend that. If they just want to treat it more as a container-as-a-service and not use all those other features, that's great too. OpenShift supports both ways of working.

All of these components constructed together using open industry standards enable you to quickly and easily create, edit, and deploy nearly any application on virtually any type of infrastructure. On top of all that is a large portfolio of middleware solutions that can provide superior business automation, integration services, data and storage, and of course, also mobile. Red Hat provides a holistic solution across the entire stack that is supported by a constant stream of updates, and as you can see here, it doesn't really matter what type of application you build on OpenShift. Yes, we would love everyone to modernize all their old applications and make everything microservices, and hopefully you do, and all new applications, greenfield applications, can be built with that in mind, and OpenShift provides a lot of great features that enable you to do that. But, given that a container is just a Linux process, it can also run existing staple applications, and we provide means to support that too.

So how does CI/CD work in OpenShift? It starts with a developer submitting code. Once you submit code, a CI/CD engine (Jenkins, or really anything, but Red Hat provides a supported image of Jenkins with OpenShift) enables you to build your application. From there, we're going to want to create our container with that. Sometimes that can be done in one step with what is known as source-to-image, which enables you to basically just provide your source code, and this process will take your source, build it, and create an image for you. Or you can create your application package yourself, and put that into your image build after that.

Once you have that container, the great part about containers is that they can be deployed anywhere.  You can run OpenShift on physical hardware, or on virtual hardware, in private clouds, in public clouds, in multiple public clouds at the same time, and even on your developer laptop, so you’re not locked into any one place to run this and run OpenShift and run your applications; they can be run anywhere, and that portability is provided by Linux containers.

And OpenShift is a true polyglot platform.  All of these technologies that are listed here are either supported or certified to run on OpenShift.  So there’s a ton of stuff here that you can do and you’re not limited to any one language, library, application server or anything.  One of those supported -- sorry, one of those certified things to run on OpenShift is NuoDB, so I’ll hand it back to Boris to talk about that.

BB: Justin, thank you very much. Great background. I just want to reinforce and connect what Justin just described with the earlier, bigger picture: why would banks embrace these kinds of approaches? And it's pretty clear that really the goal is, how do you organize yourself in such a fashion that you can produce applications faster and better, and evolve them over time? So this organization around DevOps, around the infrastructure which supports rapid testing, rapid development, and agile deployment independent of other components, is critical. So it's interesting to look at technology purely from the technology perspective on how things happen, but I find it's always useful to pull back up to the top-down picture: why would a bank do something like that? Why would they invest in such a dramatic shift in the way that they do their internal development and deployment and conduct their business? Clearly this is the way of the future, and that's one of the reasons. As Justin and myself and all our colleagues travel around the world and see all these different banks, it's not one bank or another bank that is doing this. What's pretty amazing about our experiences is that every bank is in a different phase of this digital transformation, and if they're not, that means they're probably going to be too late, because it's not something that happens overnight.

But, Justin, thank you very much, and again, connecting business to technology is always very useful for me.

So switching gears a little bit, and stepping back, looking at the history a few years back: in general, the computing industry goes through phases and evolves over time, and if we look far back enough, it's amazing to see how architectures have changed, right? We started with centralized mainframes, large beautiful computers which were very fast, very scalable, very reliable. But what started to happen over the years is that we slowly moved towards decentralization, or distributed computing, right? And there are multiple steps in between where we are now and where we're going and what happened in the past, like client-server approaches, like scale-out architectures which were pretty dominant for delivering applications as a service a number of years ago. But what's important not to lose is the fact that, you know, while you gain something, and you gain a lot with distributed computing in terms of scale and cost and the ability to operate in a commodity environment and standardize things, you give something up, right? And what you give up is reliability, right? So as you create more and more distributed systems, they become inherently less reliable; that's a fact of life, that's a fact of the design. And therefore, everything that resides on top of those distributed systems needs to be approached very differently. It has to incorporate the ability to respond correctly to failures, in such a way that failure is not something unusual but a part of the normal operation of the application or system. And therefore, what's quite important is that the entire application or system stack has to be redundant. It has to be fault-tolerant, and those two words, if nothing else resonates from today's presentation, if you can just keep in mind those two words, these are the key to applications of the future. They have to be redundant, and they have to be fault-tolerant. And that puts a certain requirement on the entire set of components which go into making an application; it's true for the application servers, it's true for web servers, it's true for the database, it's true for containers, it's true for the system which operates containers, something that Justin referred to a minute ago. So the entire stack, the entire operational environment, has to be designed and operated in such a way that it can sustain failures.

And that brings us to the next topic, so let's spend a few minutes on databases. There are traditional databases which were designed a very long time ago, and they're fantastic. Oracle and DB2 and SQL Server and MySQL, these are phenomenal pieces of technology. They are probably some of the most popular software that has been written, and they have been very successful as relational databases. But they were designed for a particular architecture and time, for a centralized architecture. Now times are different. Now you have to come up with distributed databases which are fault-tolerant, which are redundant. And we have, again, a great number of choices. There are graph databases, there are key-value databases, there are document databases, there are relational SQL databases, and depending on the project, the application development team generally decides to pick a particular database technology first before they actually go to designing the application. And each application can benefit from a different type of database. There's no one choice for everybody.

On the NuoDB side, we're sort of fortunate because our offering, our product, which is a distributed SQL database, happens to be a great choice for a lot of enterprises, especially for banks which are undergoing this kind of digital transformation, for many reasons. In the next slide, we'll talk a little bit more about the architecture and how NuoDB is different from traditional databases. But the reason why NuoDB is a great choice for banks is because it's a well-known entity; it's a SQL database. Banks have incredible skills in both designing applications against SQL and operating SQL databases. It's a well-known entity. But at the same time, NuoDB is a unique offering in the relational space in that it offers the ability to be, remember those two words I mentioned before, redundant and fault-tolerant. So NuoDB is a SQL database which is fault-tolerant, which is very unique, and therefore it presents a very interesting option for the banks to base their next generation of development on. Because again, going back to the digital transformation, while you have to be agile, while you have to have a great user experience, at the same time, your service has to be always available; it has to be always on.

So with that, let me spend a few minutes on the architecture of NuoDB, and I will not do it justice, just putting a little bit in front of you. But then if you're interested, we can continue this discussion. If I were to describe NuoDB in the shortest possible form: NuoDB is a pure SQL database, right, a pure relational database, and what this means is that it supports ANSI SQL in its full glory, and it also supports transactions, so you can have multiple writers, you can have multiple readers, you can update data in multiple places at once. NuoDB guarantees consistency of the data, right? We have ACID transactions in their proper form.

On the other hand, NuoDB is not a monolithic database; it's a distributed database. What this means is that, in the back end of NuoDB, we have a number of servers, a number of processes, working together in a peer-to-peer fashion, delivering a single logical database representation to applications, right? And if you think about it, this is a very tall order to make happen. It's not easy; we've been working on this for a long time, and we've pretty much perfected this kind of model, delivering mission-critical relational database services on top of our technology.

So from the application perspective on this diagram, these are the gray bubbles on the top, applications don't know that NuoDB is really a distributed database. All the application does is grab a JDBC driver or ODBC driver, provide a connection string, and connect to a server, and that's all the application knows. And then it can issue SQL queries and get the result sets back. It does the normal relational interaction with the database.
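
For illustration, here is a minimal sketch of what that application-side interaction might look like from Java. The connection URL, credentials, and the accounts table are hypothetical placeholders, not taken from the session; the point is simply that the application speaks ordinary JDBC and SQL, with no awareness of the distributed back end.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BalanceLookup {
        public static void main(String[] args) throws Exception {
            // Hypothetical NuoDB JDBC URL and credentials; adjust for your deployment.
            String url = "jdbc:com.nuodb://db.example.internal/retailbank";

            try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT balance FROM accounts WHERE account_id = ?")) {
                ps.setLong(1, 1001L);                      // ordinary parameterized SQL
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Balance: " + rs.getBigDecimal("balance"));
                    }
                }
            }
        }
    }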

But on the back end, a couple of interesting things happen. So first of all, going back to this notion of redundancy and fault tolerance: it really doesn't matter which server the application connects to; it will always see the same consistent, transactional view of the database. The individual engines can fail. What happens to the application if the engine to which it connected fails? Well, it's very simple. The connection drops, the application catches the exception and reconnects, and transparently, under the covers, the application reconnects to a different server. So as far as the application is concerned, if one of the engines fails, not a problem: the transaction is rolled back, you retry the transaction, you succeed the next time, and the application goes on.
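
As a sketch of that catch-and-retry pattern, here is one way an application might handle it in Java. The table names, connection URL, and retry policy are illustrative assumptions rather than a prescribed NuoDB client API; the idea is simply that a transaction which never committed is rolled back, so it is safe to reconnect and run it again.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferWithRetry {

        // Hypothetical connection string; adjust for your own deployment.
        private static final String URL = "jdbc:com.nuodb://db.example.internal/retailbank";

        public static void transfer(long fromId, long toId, BigDecimal amount) throws SQLException {
            int attempts = 0;
            while (true) {
                attempts++;
                boolean committed = false;
                // Open a fresh connection for each attempt; if the engine behind the
                // previous connection has failed, we end up on a surviving one.
                try (Connection conn = DriverManager.getConnection(URL, "app_user", "secret")) {
                    conn.setAutoCommit(false);
                    try (PreparedStatement debit = conn.prepareStatement(
                                 "UPDATE accounts SET balance = balance - ? WHERE account_id = ?");
                         PreparedStatement credit = conn.prepareStatement(
                                 "UPDATE accounts SET balance = balance + ? WHERE account_id = ?")) {
                        debit.setBigDecimal(1, amount);
                        debit.setLong(2, fromId);
                        debit.executeUpdate();
                        credit.setBigDecimal(1, amount);
                        credit.setLong(2, toId);
                        credit.executeUpdate();
                        conn.commit();       // once this returns, the transfer is durable
                        committed = true;
                        return;
                    }
                } catch (SQLException e) {
                    // An uncommitted transaction is rolled back, so retrying is safe;
                    // in practice you would also check for a transient/connection error.
                    if (committed || attempts >= 3) {
                        throw e;             // never retry after a successful commit
                    }
                }
            }
        }
    }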

So there's this notion of fault tolerance, and the whole system is designed so that it doesn't have a single point of failure, no matter what fails: a server fails, fine; a process fails, fine; a collection of processes fails, fine; a network fails, fine; a data center fails, fine. The database service will be available, if configured properly, even if those failures occur, whether single failures or multiple failures at once. So that is a very important property of the database.

Let me just quickly touch on a couple of other points which are interesting. The architecture is very elegant in that we have two types of servers, or engines, which comprise the NuoDB logical database. Some engines are called transaction processors, and they are responsible for the things that you expect a relational database to have, such as SQL query parsing, optimizing queries based on indexes and statistics, and getting data from other places. So that's the responsibility of the transaction servers, or as we'll call them, transaction engines.

On the flipside, we have another type of role, if you will, that the server plays, and we call it a storage engine, or a storage manager. Those are the servers which are responsible for writing data to disk. So essentially, as far as the application is concerned, when the application commits a transaction, it's guaranteed that the data is stored on disk somewhere, okay?

However, an interesting side effect -- not a side effect, but really part of the core architecture -- is that if you have two storage engines, it essentially means that the data is stored in two places at once. Right, so it can be stored in two places at once, or in four places at once; it really is a part of the configuration that you'd like to have in your system. You can configure more redundancy. As a matter of fact, in banking, the general topology we would have for a midrange application would be two data centers, each one of the data centers has two availability zones, and each one of the availability zones has one transaction engine and one storage manager. Right, and therefore you have quadruple redundancy. If an availability zone fails, nothing happens; if a data center fails, the system continues to operate, and so forth.

So this notion of seamless replication of data, which doesn't slow down the overall performance of the system but results in data being present in multiple places at once, so that if components fail you'll be able to recover, is a very important overall outcome of this architectural approach.

One last thing I wanted to touch on, which I find to be fascinating, is another aspect of this architecture: you've probably heard of memory-centric databases, or in-memory databases. The architecture of NuoDB is such that it lends itself to the same kind of memory-centric patterns, right, which are critical in the banking arena because a lot of the paths have to occur at in-memory speeds, right? For instance, those requests for account balances, or for transfers, and so forth, have to happen really quickly, at online interactive speeds, at [maps?], at volume, right? And in NuoDB, because of this layer of engines which are responsible for the actual interaction with the application, the data resides very close to applications, and therefore you can easily get what are called "in-memory speeds," right, which are comparable to, as good as, or sometimes better than in-memory databases. So NuoDB can actually sustain both very high interactive speeds with low latency, as well as higher write transaction volumes, which result in data being written to disk in multiple locations. So it's a very unique architecture, but if we abstract back, the goal is: how do you deliver fault tolerance and redundancy in the context of a generally unreliable distributed system? And that's exactly what NuoDB does.

So quickly transitioning to the next concept: from what Justin has described, it becomes almost intuitive, I think, for most of you to sense that there is a great fit between this multi-process architecture of NuoDB and the container architecture of OpenShift, right? So in OpenShift, as Justin mentioned, we can run databases as PaaS services, but at the same time, we can containerize them.

And that's where the NuoDB architecture is very compelling, right? Why is it compelling? Well, because when we containerize NuoDB, the database service is not delivered by a single container, right? And if you step back for a second, if we're using traditional databases, like Oracle or MySQL, the way they fit into containers is that you have one container which contains the entire database, right, and then probably you want to have a standby container which will become active in case the primary goes down, right? That's not quite the model you'd expect for a database in this environment; in general, from what we see today, traditional databases are run outside of OpenShift, and they're just used as a database service.

Well, NuoDB doesn't work that way, right? You can fully containerize it; it can be, you know, fault-tolerant, and highly available, and redundant, as a collection of containers working together to deliver a single logical database service. And therefore, there are a number of benefits that are derived from it.

So first of all, clearly, the notion of, as I mentioned, redundancy and fault tolerance is important, but so is the ability to actually move containers around and to bring new containers up very quickly; containers with transaction engines can come up and be active and productive in a matter of seconds. Storage managers, which manage the persistent state of the data, are a little bit slower to react, but they're self-managing, they're self-replicating, so you literally can start storage managers in different parts of your cluster, and the data will be sucked into a different place, and it can be moved around. So the notion of a fully-containerized database under single control is extremely compelling. And truly, it technically is the right solution for using a database with containers and with OpenShift.

So, to complete this view of NuoDB as a distributed database which is containerized with OpenShift, a couple of final observations. As OpenShift has a number of controllers and monitors and other facilities to manage containers, application containers, across the datacenter or across multiple datacenters, NuoDB becomes essentially part of the same managed set of components, right? There is no separation between application and database. The database becomes a part of the same managed entity, and that is very significant; it can react to failures from a single controller perspective, which is critical. It can change workload and adjust to new requirements dynamically, elastically, just as the rest of the application does. This means, for instance, that if application load increases, not just the application but also the database will react appropriately; it can also reduce capacity if needed, right? Why spend cycles where they're not needed? And one of the fundamental premises is to be able to move the actual working components around the datacenter, and with NuoDB, just like with the application that is managed in containers by external controllers, the database function can be moved around to parts of the cluster which are underutilized, which is very significant.

And probably the most significant strategic aspect of having the database as a part of your managed entity, if you will, is this: the promise of OpenShift, and really the strategic vision of OpenShift, is to arrive at some point in time at the concept of a totally automated datacenter, sort of like a self-driving datacenter. And that's where the notion of the database being a part of that datacenter is pretty critical, right? If you have a database which resides outside, it means you cannot fully automate the entire datacenter operation; you'll have to make exceptions, and that just doesn't work.

So, again, NuoDB fits right into the containers. It's a part of the same managed components which are managed by the same controllers, and that's really the high-level view of why it's critical to have the database as a part of the platform rather than residing outside of the platform, and NuoDB is integrated and is capable of doing so.

So, let me step back now, and just quickly come back and remind everybody what Justin and I attempted to communicate and share with you. First of all, the high-level observation is that the banking industry as a whole is in a process of transition. It's not something that they chose to do; they're clearly being forced to change the way they do business, and they have to reinvent themselves, and that creates a lot of interesting dynamics for us and for you as technology providers and technology practitioners. In general, those changes involve changes to processes, infrastructure, databases, skills, all the ingredients of what it takes to reinvent yourself. And Red Hat has an incredible set of assets which enable banks and companies to do so in a very effective manner. NuoDB is starting to be a part of that portfolio; we're very excited about that. And jointly, we're looking forward to working with everybody to deliver those types of solutions.

But also, if we step back, these types of systems do require a fresh look at technology, because they are inherently distributed systems, and therefore certain components of those systems will fail. And therefore, the databases, the applications, the controllers, the entire ecosystem has to be fault-tolerant; it has to be redundant. And you have to be asking these kinds of questions when you choose technologies, when you design applications, when you deploy systems. OpenShift is clearly designed and operates in such a manner, and NuoDB is also designed from scratch to be this kind of highly-available, redundant database to satisfy the requirements of systems that are needed for digital transformation.

So with that, let me stop here.  We have a few minutes to answer your questions, but if we don’t necessarily address your specific question, we’ll try to do it after the session online, so we hope to get to all the questions and answer them. 

With that, Alan, let me pass the mic to you and see if we have any questions from the audience.

AS: Boris, thank you.  We do have questions.  I have to apologize up front, guys, I have a little bit of a cough, so if you hear me coughing, it’s me, I apologize.

All right, first question comes from Oliver. Seeing Atomic Host in the Enterprise container host building block, how does CoreOS or Container OS fit into the big picture?

BB: Justin, I believe this is a question for you?

JG: Yeah.  So unfortunately, I don’t even really know too much about the acquisition yet, and how we’re treating that.  There should be more information released in the next couple of months, particularly around Red Hat Summit timeframe which is in May about what we’re planning to do there, but I can’t really give you any answers on CoreOS today.

AS: Okay.  Next comes from Aswani.  What is the memory and disk footprint of NuoDB?  Boris, that’s for you.

BB: Excellent question. So, in general, capacity planning is a part of the application development life cycle, and we have a lot of practices and know-how on how to structure applications properly. But to be a little bit more concrete: because we are a database which benefits from having a lot of memory, the first rule of thumb is that it's very beneficial for your working dataset, or the majority of the working dataset, to fit into the memory of a process, right? That's the simplest way to describe it. So for instance, if your total database is 100 gigabytes, as an example, but at any given time you're really operating within a 10 gigabyte range, it's useful to have 10 gigabytes of memory available to the process. You can operate with smaller sizes as well, but it's all a function of how much performance you're trying to gain versus how many resources you would use. But generally, on the smaller side, we start with 10 gigabytes of memory available for the engines. Our normal size is around 30-60 gigabytes, and as we scale databases to be more large-scale, traditional databases, then you may end up using 120 gigabytes as the memory available to the machines.

AS: Follow-on question to that, Boris, is, is the database reactive to changes, triggers for instance that can call external processes?

BB: Great question. So, because NuoDB is a true relational database, we support pretty much all of the features you'll find in every relational database: triggers, stored procedures, schema changes, online upgrades, and so forth. So yes, all of the traditional features of a relational database are supported; specifically, triggers can call out to things which reside outside of the database itself. And those could be transactional, for instance.

AS: Excellent.  Next question, also a NuoDB question, lots of action for you, Boris.  How does NuoDB support structural changes?  Like, for instance in Oracle that would need downtime, do you need downtime as well here?

BB: Again, a good question from the audience, thank you very much. So, one of the things which is critical, and forgive me, I'll take a step back, but one of the fundamental requirements for distributed databases is to be always available, which means there is no downtime, and we're designing every aspect of the database to be always on. So things like schema changes can happen online without taking the service down. Functions like incremental backups and all the other operational things can happen while the database is up and running. Upgrades, and some of the downgrades, may happen while the database is up and running. So all of the things that you would operationally require a database to support throughout its life cycle can be done in a fashion that doesn't require the database service to be brought down.
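
As a small illustration of what that looks like from the application side, the sketch below issues a schema change over an ordinary connection while the service stays up; the connection URL, table, and column names are hypothetical, and a real rollout would of course follow your own change-management process.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class OnlineSchemaChange {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string; adjust for your own deployment.
            String url = "jdbc:com.nuodb://db.example.internal/retailbank";

            try (Connection conn = DriverManager.getConnection(url, "dba_user", "secret");
                 Statement stmt = conn.createStatement()) {
                // Standard DDL issued against the running database service;
                // no maintenance window is scheduled for this statement.
                stmt.execute("ALTER TABLE accounts ADD COLUMN loyalty_tier VARCHAR(16)");
            }
        }
    }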

AS: Excellent.  Last question that we have so far is from [Sujeet?] and it’s -- actually, this can probably go to both of you.  How does a container-native database facilitate DR?  Can Kubernetes nodes be deployed across multiple datacenters?  Really two questions in there.

JG: So let me start with answering that question, and then you can take the NuoDB aspect of it. So typically, at least with Kubernetes, it's not a single cluster across multiple datacenters. What you would do is you'd have two unique OpenShift clusters, one in each datacenter. But that doesn't mean that the application layer can't support running in multiple datacenters, so I actually spoke to Boris about this a couple days ago when I was talking with him about NuoDB, and as far as I know, it doesn't really matter that it's separate clusters; NuoDB can still run in multiple datacenters, and I'll let Boris add on to that.

BB: Justin, that's precisely it. So, this is something we haven't talked about, but NuoDB is an active-active database, meaning that each server within the NuoDB cluster, or each container which makes up NuoDB, is an active server; it can run within one datacenter, or the servers can run across datacenters. It really doesn't matter; it's a function of how you configure the deployment. And therefore -- go ahead.

JG: Yeah, and that’s immaterial whether that’s one OpenShift cluster or two; it’s still the same database.

BB: That's correct. And therefore DR is, you know, somewhat of a legacy concept; in the case of NuoDB, it's active-active across datacenters, so you don't really need a DR site which is not active, right, that just sits there waiting until action needs to take place; it's constantly active.

AS: Excellent.  Guys, we might have one other one here, if you don’t -- well, we’ve got a minute, we’ve just got a minute, so quickly, what extra features does OpenShift provide apart from Kubernetes clusters?

JG: Sure. So we spoke about this briefly on some of the other slides, but it basically is an enterprise Kubernetes cluster with a lot more features for application development. So it provides an out-of-the-box routing layer, for example, for connections from outside the cluster to inside the cluster. It provides a supported Jenkins, for example, to enable you to do CI/CD. It has that source-to-image process I was talking about, which is a different process for building containers. There are a lot more features that also come with OpenShift, including all of the images that Red Hat provides and supports, such as a supported Tomcat, supported integration tier, business rules, things like that. So it's mostly the application components on top of Kubernetes that turn it from more of just a container-as-a-service into what could be considered a platform-as-a-service.

AS: Yup, excellent. Guys, with that, we're at the top of the hour. We need to call it a day. I think we've gotten to all of the questions, which is always good. Gentlemen, thank you for a fantastic webinar. Thank you so much to Red Hat and NuoDB for sponsoring today's webinar as well. And thank you all, everyone who stuck around here until the bitter end for the last questions, and everyone else who's listening to this on YouTube, DevOps.com, or anywhere else.

This is Alan Shimel of DevOps.com.  You’ve just listened to another great DevOps.com webinar, and we hope to see you soon on another one.  Have a great day, everyone.