
LTE Webinar: Building Next-Gen Application Services for Mobile Telecoms

Examine the challenges facing mobile telcos and their software providers as new technologies, new services, and new service usage patterns in turn create new demands on operators.

Video transcript: 

(Iain Gillott): -- was the design and capability, the mobile networks in the background have also been changing.  And actually, the bigger changes have yet to come.

So we’ll start with this very quick introduction, but really talking about the operator realities today, the main challenges that the mobile operators around the world face.  And it really is a global phenomenon we’re talking about here; it’s not just a US- or European-centric industry anymore.  We’ll then talk about the LTE architecture.  LTE, of course, is Long Term Evolution, which a lot of people know as 4G networks today, and it will actually be the basis of the 5G networks in 2020, which is the next bullet point there.  Really, what do we need to build here, ready for that next generation of network?  The industry is really looking at 2020 as the next step in the evolution of those networks.  As for how those networks evolve, NFV and SDN will be familiar to many people who know telecom networks in general, or certainly IT networks, enterprise networks.  But for the mobile industry, this is an evolution that’s really happening right now.

And then I’ll just go to one summary slide, kind of put it all together, and then I’ll be handing over to [Guy?] to tell you more about NuoDB solutions.

So the main operator driver here is, we’ve got to monetize some of these new services.  And if you look back over the last 20 years, and actually back even to 3G, when we were a voice-centric industry, every additional minute of voice we used on our phones resulted in an additional dollar of revenue, or euro or pound, or whatever you want to use.  So the more minutes you used, the more you paid.  Then texting came along, and the more texts you sent, the more you paid.  When we got into 3G data, we started getting some of these unlimited buckets introduced, which were really a way to encourage people to get out there and use more data.  And at the time, of course, what you could do with a mobile phone was actually fairly limited in terms of bandwidth.  But then along came 4G, LTE, where the bandwidth available to the user was vastly higher, and continues to increase.  So today, watching video on a smartphone is perfectly reasonable, and many people do it, my kids included; tethering your tablet to your smartphone through WiFi and then using the LTE connection to download a movie is a perfectly reasonable thing to do.

So what’s happened now is the bandwidth, of course, is increasing almost exponentially.  Just as an example, in the US over the next four years, the amount of bandwidth used on mobile data networks -- and that’s not including WiFi, that’s just the LTE and 3G networks -- will go up about nine times.  In other parts of the world, the growth is even higher.  So everybody’s using more data.  It’s not a few people using more data, it’s everybody.  But at the same time, the red line there, the revenues, are not increasing at the same rate.  So while the bandwidth is going up nine times in the next four years, we will not be paying nine times as much every month.  We’ll be paying a little bit more every month, but certainly not nine times.  So with this kind of evolution, what you end up with is this gap building between usage and revenue.
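
A quick sanity check on the growth figure above: nine times the traffic over four years works out to roughly 73% compound annual growth.  The 9x and four-year numbers come from the talk; the rest is arithmetic, sketched here for illustration.

```python
# Back-of-the-envelope check on the figures above: 9x total traffic
# growth over 4 years implies roughly 73% compound annual growth.
# The 9x multiple and 4-year horizon are from the talk; the function
# name is purely illustrative.

def implied_annual_growth(total_multiple: float, years: int) -> float:
    """Return the compound annual growth rate implied by a total multiple."""
    return total_multiple ** (1 / years) - 1

rate = implied_annual_growth(9.0, 4)
print(f"Implied annual traffic growth: {rate:.0%}")  # roughly 73% per year
```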

Now when we get to 5G around 2020, some of the things that are being discussed are increases in bandwidth of 10 times what we have on LTE today, 10 times as many devices on the networks.  So the gap is going to get even bigger.  So the challenge for the operator here is, how do I close that gap, how do I minimize that gap?  Or if I can’t get my revenues up, then how do I make sure that my costs stay more in line with the revenue that I do have?  And obviously, I’ll need to reduce the cost of operations here.  So that’s the main challenge.

Now what’s interesting about this chart as well is, if you took this chart to any operator in the US, they’d nod, Latin America they would agree with you, Europe they would agree with you, Japan, Asia -- we have a global trend here and a global problem, if you like, from a mobile operator perspective.  Nobody is immune to this as we look around the world.

So now let’s look at the LTE architecture, which is really what is being deployed today.  Not every operator in the world has LTE yet, but everybody will.  In most of the developed markets, at least one or two operators in each market have it; some of the implementations are still very early, and some of them are extremely mature.  So if you look at the United States, we’ve got some very big LTE networks, the same as you do in China and Japan, and also in Europe as well.  But this diagram -- I kind of joke with people that if there’s one thing we love more than bandwidth in the mobile industry, it’s acronyms, and we’re great at reinventing acronyms, so if you don’t understand an acronym, just wait six months or a year, and we’ll come up with something new to replace it.  So this chart is a little bit busy in terms of the acronyms and the naming.

But essentially, what we’ve got here -- and I’ll kind of run through this quickly -- on the left hand side is the RAN, the Radio Access Network.  The little hexagon here is actually the end user device, so a smartphone or a tablet.  That’s connected to what’s called an eNodeB, which more commonly we know as a cell tower, the base station.  That then connects back into the network, and this solid purple line here is where your traffic goes into a serving gateway, and then (inaudible) [it goes into?] a PDN gateway.  So think of the serving gateway as the local router where your traffic goes, and then it’s directed into the rest of the network.  Serving gateways -- you would typically have one or two per major city, whereas the PDN gateway is the rollup; multiple serving gateways connect into the PDN gateway.  So in US terms, you would have one PDN gateway per state.  In European terms, you’d probably have one PDN gateway per European country, maybe two, something like that.  That’s kind of the scale.

Then, of course, your traffic goes back into the internet, it goes into company intranets, and of course that’s where you get your content from.  I’ll talk about IMS in a second here.  This area at the top -- the HSS, the AAA and the PCRF -- is our subscriber control functions.  So the HSS is actually a big database that basically says, Iain is a subscriber, he has access to services; he is allowed to use these services on our network.  The AAA is an authentication, authorization and accounting function, very similar to what you see on other networks, and the PCRF is actually the policy control function.  And this is the entity that says, you’ve reached your limit on the data you’ve used this month, we’re going to cut you off, or you need to top up your prepaid account, you’ve used all your data allocation.  And if you’ll notice, it’s got control functions down here to the rest of the network, and that’s how it does that.  The MME is the Mobility Management Entity, and that’s actually connected to all of the base stations, so that is what actually controls your connection as you move through the network and move from base station to base station; a very powerful entity, that one, as you can imagine.

Now the one I’ve got highlighted here is IMS, which is the IP Multimedia Subsystem -- again, a great acronym -- and it also operates the services up here.  So what you find in this area are basically the IP services that the operator can offer on the network.  It could be, for example, a push-to-talk function, it could be Voice over LTE, it could be a messaging application.  These would sit in an operator data center, and of course provide services to all the subscribers through the network.  So obviously, this is an area where we see a lot more development in terms of new services; a lot of vendors offer different solutions in that space, and can market to the operator and deploy as an IP service.

So if the radio access network is this part, this is what we call the Evolved Packet Core, and this is basically the services part of the data network here -- so really, three distinct pieces.  And just to complete the picture, this bottom area is actually 3G.  The example I’ve got here is actually a CDMA network, so this would be an EV-DO provider, obviously offering LTE as well.  But over time, of course, this is going to diminish, and the investment today is really going into the LTE part of the network.

So what do we need for our next generation of network, now we’ve got this basic architecture we’re deploying?  Well, firstly, we’re going to support about 50% more devices.  Everybody in the developed markets has got a smartphone.  Many people have got tablets.  A lot of people are getting connected cars.  We’ll have more of those.  But what will really drive device growth on the network is things like the internet of things -- I’m sure you’ve heard the term before -- machine-to-machine communications.  So what we’ll see more of in the future, and we’re already starting to see it in many markets, are things like monitoring of home alarm systems, video cameras for the home, security cameras for the outside, control of things like sprinkler systems or irrigation, and so on; you can go on and on.  Thermostats, of course, have been in the news a lot.  And while they’re certainly controlled by WiFi today, you can see that there’ll be an LTE component to some of these things as we get down the road.  But certainly more connected cars, more applications for long-haul trucking -- these types of applications are very popular.

Lower latencies -- the latencies in the network for LTE are around 50 milliseconds.  And the discussion for 2020 is that that comes down, in some discussions, to around one millisecond for the network to respond, which is extremely fast.  And we need this for messaging, for Voice over LTE, but especially for the connected car, actually.  And I’ll give you a very quick statistic: at 60 miles an hour, in one millisecond the car moves about an inch.  So if you’re doing a connected-car safety application, monitoring vehicle-to-vehicle communications, in the time it takes for the network to respond to the car’s request -- maybe information about what else is around it -- that car will have moved an inch.  At 50 milliseconds, it has moved well over four feet, more than a meter.  That’s plenty of room for an accident.  So latency is very, very important here; the speed with which the network can respond is becoming critical.
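
The latency figures above can be worked out explicitly: how far a car at 60 mph travels during one network round-trip.  The speeds and latencies are from the talk; the conversion factors are standard, and the function name is illustrative.

```python
# How far a car at 60 mph travels while the network round-trip completes.
# The 60 mph speed and the 1 ms / 50 ms latencies come from the talk.

MPH_TO_MPS = 1609.344 / 3600  # miles per hour -> metres per second

def distance_travelled_m(speed_mph: float, latency_s: float) -> float:
    """Distance covered (metres) during one network round-trip."""
    return speed_mph * MPH_TO_MPS * latency_s

# 5G target latency (~1 ms): about an inch of travel.
print(f"1 ms:  {distance_travelled_m(60, 0.001) * 100:.1f} cm")
# Typical LTE latency (~50 ms): well over a metre -- room for an accident.
print(f"50 ms: {distance_travelled_m(60, 0.050):.2f} m")
```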

Yes, we want more bandwidth; we want more video, we want more stuff.  There is something called LTE Broadcast, which we won’t get into today, but it makes use of the architecture to deliver large amounts of data very efficiently -- the point being, we know there’s going to be more bandwidth.  An HD movie uses about twice as much bandwidth as a standard definition movie.  And now, of course, the industry’s starting to talk about 4K, which is going to push the bandwidth up even further.

Seamless integration with WiFi -- handoff, policy, control.  WiFi networks are, of course, extremely popular in the home, in the office and outside, so we need tighter integration with that.  And then caching the content as close to the device as possible -- if we’re going to have more video and more content, having to go all the way through the network to get that content becomes very inefficient and very expensive.  So there’s a lot of discussion in the industry now about pushing the content out to the edge of the network; it could be in the data center next to the base station, it could be a little bit removed from that, it could be actually on the base station itself.  And there are different models that have been proposed for this.  So basically, the goal here is to reduce the transit for the major content, make it more responsive, improve the customer experience and ultimately reduce the cost.

And that brings us to the last one here, which is lower the operating costs -- and I mean much lower.  So today, for the average smartphone in the US, the average revenue per user is around $60, $65.  Adding a connected car to the network, you’re not going to be spending $65 a month.  It’s going to be a couple of dollars a month.  So how can you as an operator make money on that type of application, where the revenues are a few dollars a month, not tens or twenties of dollars a month?  And that’s the challenge; really bringing down the cost here is what everybody’s on about.

So now let’s look at our LTE architecture again, and you’ll notice I’ve added a couple of boxes here, which I’m calling a Mobile Content Delivery Server.  It’s really part of a mobile content delivery network.  Some operators have started to do this already, but this is essentially what I was saying about moving that content to the edge.  So today, if I’m out here and I want to get a piece of video that’s out on the internet, I’m going to go all the way through the network, all the way out the back end, to get it and bring it all the way back, which obviously is very inefficient and adds to the time taken.  By moving the content towards the edge, down at the base station here, or putting it down by the serving gateway in the data center, that content is cached locally, and it’s much faster for me to get it, and more efficient.

Now, content delivery networks are very common in wired networks -- obviously a very common part of the internet structure itself -- but until recently they have not been part of mobile networks like this.  And this is what we’re starting to see now.  So it’s this type of application that we’re talking about.  That content delivery server, of course, is going to have a database, it’s going to have storage, it’s going to get refreshed, it’s going to have to keep up with consumer needs and wants.  And of course it’s going to be a very dynamic environment in terms of the content on there, what’s going to be stored.
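
The edge-caching idea behind that content delivery server can be sketched in a few lines: keep recently requested content near the base station so repeat requests avoid the trip through the packet core.  The LRU eviction policy and all the names here are illustrative assumptions, not any vendor’s actual design.

```python
from collections import OrderedDict

# A minimal sketch of the edge-caching idea: serve repeat requests from
# a store near the base station instead of repeating the backhaul trip.
# The LRU policy, class name and capacity are illustrative only.

class EdgeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()   # url -> content, oldest first
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch_from_core):
        if url in self._store:
            self.hits += 1
            self._store.move_to_end(url)       # mark as recently used
            return self._store[url]
        self.misses += 1
        content = fetch_from_core(url)         # expensive trip through the core
        self._store[url] = content
        if len(self._store) > self.capacity:   # evict least recently used
            self._store.popitem(last=False)
        return content

cache = EdgeCache(capacity=2)
fetch = lambda url: b"video-bytes-for-" + url.encode()
cache.get("movie-1", fetch)      # miss: goes through the core
cache.get("movie-1", fetch)      # hit: served from the edge
print(cache.hits, cache.misses)  # 1 1
```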

So now let’s talk about virtualization.  Virtualization from a mobile perspective can be a little bit different from what you’ve seen in the enterprise IT world.  We kind of joke in the mobile industry that we will get to the major technology trends eventually, we just take a little bit longer than some other parts of the IT industry.  So while virtualization in an enterprise IT data center has been very common for some time, really the big mobile operators are only now starting to get to this.  There is application virtualization, where the applications are separated from other apps and services actually running on the device.  That’s starting to happen a little bit, but the main focus for virtualization -- and if you talk to an operator about virtualization, this is what they’re going to talk about -- is network virtualization, where the packet core, the EPC, could be virtualized and run in a data center on off-the-shelf hardware.  A couple of companies have actually shown the mobile radio access network, the base station, in a virtualized environment, where they’ve split the conventional base station and run it on regular hardware.

So if we look at this, what we start to see now, of course, is a lot more Software Defined Networking, SDN, and Network Functions Virtualization, NFV.  Really, what we’re talking about here is separating the hardware from the actual software.  Instead of a dedicated piece of hardware like a traditional class 5 switch, which we always used to have in telecom networks many years ago and which was replaced with soft switches, you replace that with something that sits on standardized hardware -- it could be standardized routing hardware or server hardware -- and the software becomes separate from that.  And you get to a virtualized network.  The point of this, of course, is you can have a more distributed network.  You can obviously buy hardware from multiple vendors, and hopefully reduce the cost of implementation.  And because you’re distributed now, we can put the hardware where we actually need it, move functions around, and then start to reduce some of the operating costs.  So the benefits we see from SDN and NFV in a mobile environment here are very, very similar to what you see in an enterprise IT shop.  And for the same reasons they went that route, you’re going to see the mobile operators do this.  This is already happening today.  The large operators around the world have already started implementations.  It’s been a discussion for the last few years, but actual investment and work is happening now with some of the big operators around the world.  And certainly in the next three or four years, you’re going to see much more of this type of discussion, and much more investment.

So here’s the functional view of virtualization, and it looks like a nice colorful chart, obviously.  But for those of you not familiar with mobile networks who do know about virtualization, you’re going to look at this and go, hang on a minute, this looks exactly like what I see in an enterprise IT shop.  And that’s the point.  So the hardware on the bottom here could be our routing function, our serving gateway/PDN gateway equivalent.  It could be the radio access network down here, the radio itself on the tower.  Then we have the virtual infrastructure manager, the network function manager, and of course the orchestrator, which keeps everybody in tune here.  We have other virtual hardware, and then we’ve got these virtual network functions.  And of course, we have the physical network functions as well.  And then OSS and BSS, the Operations Support Systems and Business Support Systems -- so billing, those types of things -- sitting on top of that.

So again, a very traditional view of a virtualized environment.  If I look in a large insurance company today, I can probably draw a similar picture.  And this is what’s really being put into the mobile networks, so what the mobile networks are evolving to, and the point being, it looks like everybody else in many respects.

Now let me look at this a little bit differently.  This is the same diagram of LTE I had earlier, but I’ve drawn it a little bit differently.  So what I’ve got here, again on the left hand side, is the radio access networks: remote radio heads, macro cell sites -- the big towers -- and small cells here.  And you can see hundreds of thousands of these being deployed in different countries around the world, of course.  A large operator in the U.S. will have around 50,000 cell towers.  In China, half a million upwards, so it’s a different scale.  These all connect back into the baseband units here -- the other half of the base station -- and then there’s the serving gateway.  And this will be in a local data center, or what we used to call a central office.  There are distance limitations on how far the radio can be from that data center.  And we’ll typically have thousands of these in a network.  So think of the traditional central office: one per neighborhood, one per small town, that type of thing, and that’s what we’re looking at here.  Lots of old central offices are actually getting repurposed for this.

Then we get into the metro data center, and now we’re starting to see the packet core: the P-gateway, the PCRF and some of the other functions would be sitting in there -- AAA perhaps, and parts of the HSS.  And there’ll be hundreds of these, so maybe two per state in the United States, something like that.  And then a national data center will have the rest of the IP core, the rest of the EPC, some billing and business support solutions, and there will probably be four or five of these on a nationwide basis.  The point being, when we go to virtualize, in the short term we’re starting in this area here.  These look like big data centers running telecom hardware, so that’s an area where it’s relatively easy to start the virtualization process.  In the medium term, we get into the metro areas and also start to look at these local data centers.  The longer term is the RAN; it’s a little harder to do, a little bit more involved.

So now if I look at my original diagram and look at the ease of virtualization here, what I start to see is that, again, those IP services -- that IMS area where we’re offering all those different services on the network -- are one of the first areas to be virtualized.  And the reality is, if you’re an application or service provider to the mobile operators today, you’re going to have to offer a virtualized version of what you do, it’s as simple as that.  The next area we’ll see is the subscriber control area.  We’re already seeing that; as I said, the HSS is a large database, and obviously you’ve got those other servers and functions in there, which lend themselves very well to virtualization.  The MME is also one of the areas they’re looking at.  A little bit harder are the serving gateway and the P-gateway.  Yes, we’ll see those virtualized, but the problem there, of course, is you’ve got a lot of traffic going through them, so they are very, very critical to the operator’s revenue stream -- certainly not an area you want to get wrong.  And then finally, the base station, the eNodeB: this will be one of the last areas to get done, not because we can’t, but because there are so many of them.  It’s such a huge process to go through all those hundreds of thousands of sites, as we said, and move over to new hardware.  So what we’ll see over time is that some areas, some geographic areas, will have been virtualized and some will not.

OK, so let’s put all this together.  Network virtualization is a great opportunity.  It’s also a great challenge for the vendors.  From a vendor perspective, it’s a great opportunity to get in with the operators: new solutions are needed, new architectures are being deployed.  The threat is that as a vendor, if you don’t move quickly enough, somebody else will, as simple as that.  Virtualization is already here, as I said.  Operators are working on this type of architecture around the world.  And the price wars here in the US actually hasten the virtualization -- we need to get that operating cost down.  In other parts of the world, of course, there have been pricing pressures for some time; maybe it’s a growth pressure, but the concept of doing more with less is now very real.  So virtualization and SDN are really generational shifts here; we’re moving to a next generation of architecture.  And of course with SDN, we do have challenges with performance, interoperability and coexistence with the legacy.  We can’t just rip out all the old legacy stuff and put in new; it has to coexist.  It’s an evolution and a handoff that has to happen over time.  And the final thing is the challenge of evolving this network while maintaining the services revenues -- you can’t just turn off the mobile network and turn on a new one.  And certainly we’re not going to be in a situation where we can afford an interruption of service from a consumer point of view.  Competition is massive in the mobile industry, and just one slip-up that loses a lot of customers to your competitor is absolutely something you cannot afford.

So that’s a very quick view of a lot that’s happening in the industry right now.  So with that, I’ll hand over to Guy, and he can talk about NuoDB solutions.  Thank you.

(Guy): Thanks very much, Iain.  OK, so I want to pick up on a couple of the threads that Iain’s been exploring there, and drill down on them, and specifically talk about them in the context of NuoDB, the scale-out SQL database, and how we’re seeing, through our customers, that begin to play out in this whole landscape that Iain’s been mapping out over -- well, not quite a decade, but close to it.  Thank you, [Knox?], I’ve got it.

Now Iain’s taken a very forward-looking, visionary perspective in what he’s been talking about today, so what I thought I’d do, slightly in contrast with that, is focus back in on what we can do now, what is happening now.  So my topics are: I want to look at what we’re learning from our customers and how that maps onto Iain’s vision, and then specifically what it is about our solution, about our products, that we think is most valuable to most customers, and which we’re focusing on to deliver more to those customers in future.  And I’ll finish by taking a brief peek under the covers, a quick look at the architecture of what is a fairly unusual product.

So to go back to Iain’s picture: he talked about all the different areas of the network, and all the different components and all the different opportunities in there.  And he identified this area here, IMS within services and the packet data network, as being the key area for early virtualization.  That’s certainly what we’re seeing in our customer base in this area, and also with companies who are providing services to consumers over the network as internet apps in general, apps that download onto phones, for example.  These are the two areas where I’ve got most to say about what’s happening now.  We’ve got some other activity elsewhere on this picture, but I’m not going to dwell on it much today -- one, because I really can’t say too much about what’s going on, and two, it’s relatively early days, in most cases.

So let’s look at a couple of things I can talk about.  Our first case study is a customer that is a US software company.  They have a very successful product; they deploy it on dedicated equipment in the providers’ data centers, those central national data centers.  But they have a couple of problems with that.  One is, it’s very complex to manage.  They need multiple sites to provide disaster recovery.  But with the database they’re using, which is a traditional relational database -- it’s Oracle with GoldenGate for replication -- managing that disaster recovery, and upgrading, is extremely complex.  It’s very expensive in terms of deploying two stacks, although only one of them is actually earning the money at any one time, as it were, and very expensive in terms of the complexity of the administration and the cost of the DBA time to do that.  They really wanted to address that problem.

And the other problem was that their customers, in line with what Iain was saying, are increasingly pushing them towards a more virtual environment.  They are less willing -- and this is a trend we’ve seen elsewhere; I’ll talk about it in our second case study, and it’s certainly something I’ve seen elsewhere as well -- there is reluctance to take these appliance-like or dedicated hardware solutions.  And so there is pressure on vendors in that space to provide cloud solutions, as well as, of course, advantages in doing so.  And that was what this particular vendor was looking for: the ease of management I talked about, continuous availability -- not just high availability, but continuous availability -- and active-active distribution.  They were operating one site with a second for DR; in order to roll this service out, and to make the revenue on it that they really feel is there to be made, they have to deploy across multiple data centers, and that’s what we’re hoping to deliver to them.  Well, what we are delivering, as I’ve described, is active-active-active, where the third active means not just two data centers with active-active (inaudible) transactions updating in both places, but three, four and more data centers.  That’s something we’re actually rolling out to them, and they’re expecting to take advantage of it this year.

But what was key to them was to get multi-data center operation and rolling upgrades, to get rid of that tortuous upgrade process.  And the interesting thing was another one, which was ease of migration -- in line with what Iain was saying about the need to retain consistency with the legacy and not be disruptive.  This played out for them in two forms.  One is that because this is a SQL database, they could migrate their existing SQL applications.  And equally, because this is a SQL database, the skills they’ve got, the tooling they’ve got, the processes they’ve got are all geared to developing, maintaining and extending SQL applications.  So for them, SQL compatibility is a big advantage -- quite apart from the fact that, of course, some of their applications, and other people’s applications, need that transactional consistency from a SQL ACID database.

The second case study I’d like to look at is a European company, and they’ve rolled out a pretty cool mobile commerce application.  But they’ve been piloting it in what I’ve called here emerging markets, i.e., small ones.  And they need this application for their corporate success; they need to roll it out in multiple geographies, and specifically in the major markets.  Now, they deploy it as an appliance.  And that appliance will not economically give them the performance they need to handle the workload required to roll this out to the major markets.  And without that, they cannot make a success of their product.  So they came to us because, one, they needed the cloud deployment -- they were getting that same push-back I talked about before against deploying appliances in the provider’s data center.  And secondly, they needed to scale out that performance; pure performance scale-out is what they needed.  They’re interested in multi-data center; they’re interested in active-active and all of that.  But the key thing for them was just a bigger cluster that could handle more transactions, handle more connections, and handle them economically -- and specifically handle them elastically, because like most mobile applications, the workload is very peaky, and you don’t want to be deploying an application stack that is provisioned for the maximum, where 90% of the time you’re wasting capability and effectively paying for it.  So what we offered them was this ability to migrate their appliance application quite easily onto a cloud-architected database and deliver that scale-out; and, as in the previous case, the ease of migration was critical for them, and critical, of course, for continuity with the existing uses of that application.
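
The peaky-workload point can be made concrete with a rough comparison of provisioning for peak around the clock versus scaling elastically with load.  The hourly demand profile and the per-server-hour cost below are invented purely for illustration.

```python
# A rough illustration of the peaky-workload point: paying for peak
# capacity 24 hours a day versus paying only for what each hour needs.
# The demand profile and unit cost are made-up illustrative numbers.

# Servers needed in each hour of a day (quiet overnight, busy evenings).
hourly_demand = [2, 2, 2, 2, 2, 3, 4, 6, 8, 8, 8, 9,
                 9, 9, 10, 10, 12, 16, 20, 20, 16, 10, 6, 3]
COST_PER_SERVER_HOUR = 1.0  # arbitrary cost unit

peak_provisioned = max(hourly_demand) * len(hourly_demand) * COST_PER_SERVER_HOUR
elastic = sum(hourly_demand) * COST_PER_SERVER_HOUR

print(f"Provision for peak:     {peak_provisioned:.0f} units/day")
print(f"Scale elastically:      {elastic:.0f} units/day")
print(f"Idle capacity paid for: {1 - elastic / peak_provisioned:.0%}")
```

With this made-up profile, a peak-provisioned stack pays for idle capacity most of the day, which is exactly the waste the speaker describes.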

So very quickly, that’s what our customers are telling us: they need to go distributed for a number of reasons -- sometimes it’s scale-out, just getting bigger more cheaply; sometimes it’s because they need to be in multiple data centers, and usually global operations are (inaudible) part of the equation; and they need to be cloud-ready, with the easy ability to go global and the elasticity of the pricing model that goes with the cloud generally.  But at the same time, they were not willing to give up the transactional capability that they have in existing SQL applications, and equally they’re not willing to give up the skills, processes, tools and experience that they’ve got, which re-architecting or restarting in a different technology would cost them, as I think Iain pointed out.  If you were willing to re-architect, yes, you could escape the strictures of scale-up traditional relational databases and move to scale-out standardized hardware with a number of architectures.  But what’s important for our customers, what they’re telling us, is this coexistence with legacy, this need to manage risk, and that’s what SQL and the skillset around it deliver for them: more rapid time to market, less disruption, less risk.

So I’ve described the challenges that we’re trying to meet, and are meeting, with our customers.  Let’s just drill down a little into this NuoDB that I’m talking about.  It’s a distributed, transactional SQL database, and it’s engineered for the cloud.  Distributed because, as I’ve described in the use cases, and as Iain’s talked about in some of the trends that are going to make this inevitable, a single data center is not sufficient.  If a provider has five national data centers in a country, then you need to be able to deliver across all five of those, unless you’re going to have a very difficult infrastructure to deal with.  So it’s not just active-active, it’s active-active-active.  And it’s becoming (inaudible), I’d say no way to go without it, because ACID is vital, and it will remain vital for many critical use cases.  And the cost of simulating transactional consistency in NoSQL databases is very high, and can lead you to a requirement for very rare skills and a brittle architecture, even so.

So scale-out, elastic, continuously available, low administration -- these are the characteristics of a genuine cloud application.  And if my slides would move on, I’d be a happy bunny!

As I said, these are the key requirements for a cloud application.  And what you see, incidentally, in the middle, that purple chart on scale-out performance, which you can see the shape of but little more, is a screen grab from one of our nightly performance tests.  We run them every night on the build to check whether we are imperiling in any way the performance increments that we get as we add more servers.  You can deduce that from the chart: the horizontal axis is adding more servers, and the vertical axis is the throughput you’re achieving with them.  That steep line, those steep steps, are what we want to see, and we consistently look for that.  But that’s something we need for our internal testing; externally, it’s just vendor benchmarks which, I’m sure, most people take with a pinch of salt, and so they should.  So what we do is encourage our customers to build their own real-world benchmarks.  We encourage them to benchmark their applications in use, and to feed those results back to us.  And if they don’t get the results that we’ve predicted and they expect, then we’ve got a problem.  It doesn’t matter what our internal numbers say; we’re going to have to deal with the real world and our customers.  Fortunately, that’s not been a showstopper for us anyway, yet.  And the other thing is what we call no-knobs administration, which means high levels of automation.  If you’re going to deploy in the cloud, and deploy in multiple data centers, you do not want to have highly expensive, highly distributed, heavily loaded DBA teams.  You need to keep that to a minimum, or else your costs are going to go through the ceiling.
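The kind of scale-out measurement described here, adding servers on the horizontal axis and recording throughput on the vertical, can be sketched as a small benchmark harness.  This is purely illustrative, not NuoDB’s actual test suite; the worker pool stands in for added servers, and all names are hypothetical:

```python
import time
import concurrent.futures

def run_query():
    """Stand-in for a real benchmark query against the database."""
    total = 0
    for _ in range(1000):  # simulate a small fixed cost per query
        total += 1
    return total

def measure_throughput(num_workers, duration=0.2):
    """Run queries in parallel for `duration` seconds; return queries/sec."""
    completed = 0
    deadline = time.monotonic() + duration
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as pool:
        while time.monotonic() < deadline:
            futures = [pool.submit(run_query) for _ in range(num_workers)]
            completed += sum(1 for f in futures if f.result() is not None)
    return completed / duration

# Record throughput as "servers" (workers) are added, as in the nightly chart.
results = {n: measure_throughput(n) for n in (1, 2, 4)}
```

Plotting `results` would give the shape of the chart Guy describes; a real-world benchmark would of course run genuine application queries against a live cluster.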

Why is NuoDB valuable?  Well, I think, I hope, I’ve answered my own question here.  Basically, take a conventional application based in a single data center, on dedicated servers, on a scale-up stack, provisioned for maximum workload, usually on dedicated, high-cost proprietary hardware, which is a high-capex, high-running-cost situation.  Stack that up against a cloud application: geo-distributed, elastic provisioning, low capex, and pay for what you eat.  Basically, it’s a no-brainer that that’s where you want to be.  And if you go back even 10 years and look at web applications: the web tier, well, that’s no problem, it scales out; we wouldn’t have an internet if it didn’t.  Application servers in the middle tier, serving up business logic: well established, well used.  Storage servers at the back: tremendous advances in hardware and storage management in the last 10, 15 years.  But right in the middle of that is what I’ve called OldSQL, the traditional relational database servers that are the workhorses of the applications we want to move into the cloud.  They don’t scale out; they scale up.  And they are often dependent on highly specialized and highly expensive hardware and software to do that.  That’s the problem that we’re cracking with our customers: the ability to preserve transactions, preserve SQL for all of the good reasons for preserving SQL, but to make that cloud-ready and distributable.  So I hope that was a fair summary of what I’ve said in the previous 10 minutes describing our customers.

I could wrap at this point and go to questions, but I’m just going to take a couple of minutes, Knox, if you don’t mind, to talk about how it works, because, you know, what I’ve just described sounds a little bit like magic.  Arthur C. Clarke, the sci-fi author, once said that any sufficiently advanced technology is indistinguishable from magic.  And my experience is that software engineers don’t believe in magic.  So although I can’t prove to you here that NuoDB has solved the problem of the distributed relational database, what I want to do is give you a couple of images, a couple of little notes, that will maybe make you think: OK, this is interesting; it’s worth investigating.  And if you want to take that conversation further, I’ll be more than happy to take it.  These are just a few points up here to frame that conversation.

So what we’ve got is a multi-tiered architecture: management, transaction and storage.  In the management layer, the brokers and agents connect the nodes together, and they connect clients to the transaction engines in the transaction layer.  What the transaction engines do is conduct all of the operational workload for the clients in a shared cache, a cache that’s shared between all the transaction engines and all the storage managers.  This is not an in-memory database, but it is a very memory-oriented database; it’s what we call a distributed, durable cache.  There’s a clue at the bottom of the picture there, with the database archives.  But it behaves to the client as though it were in-memory; the client thinks they’re dealing with an in-memory database.  What’s actually happening is, every time there is a request for data, the transaction engine that gets the request seeks that data from another transaction engine if it’s already there, or from a storage manager; maybe the first time, when the cache is warming up, it needs to go and get it that way.  And every time a client transaction updates data, it updates in the cache of its local transaction engine, and that is then shared out with the storage managers and the other transaction engines.  But whilst those other transaction engines may be sharing that data with other clients, the storage managers don’t connect to clients at all.  They have one job in life, which is to make sure that whatever arrives in the transaction engines’ caches, they make durable.  That’s what they’re doing, managing the durability -- correction, they have another job, they serve up data for queries, but their critical job for the operation and continuity of the system is that they make that data durable, stored in the archives.
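The read and write paths just described, check the local cache, seek from a peer engine, fall back to a storage manager, can be modeled in a few lines.  This is a toy sketch of the concept, not NuoDB’s implementation; the classes, a dict standing in for the durable archive, and the pull-on-demand peer lookup are all simplifying assumptions (the real system also propagates updates between engines):

```python
class StorageManager:
    """Durability tier: its one critical job is keeping data safe in the
    archive (here, a dict standing in for durable storage)."""
    def __init__(self):
        self.archive = {}

    def load(self, key):
        return self.archive.get(key)

    def persist(self, key, value):
        self.archive[key] = value


class TransactionEngine:
    """In-memory tier: serves clients from a cache shared with peer engines."""
    def __init__(self, storage):
        self.cache = {}
        self.peers = []        # other transaction engines
        self.storage = storage

    def read(self, key):
        if key in self.cache:              # 1. local cache hit
            return self.cache[key]
        for peer in self.peers:            # 2. seek from a peer engine's cache
            if key in peer.cache:
                self.cache[key] = peer.cache[key]
                return self.cache[key]
        value = self.storage.load(key)     # 3. cold cache: go to the archive
        if value is not None:
            self.cache[key] = value
        return value

    def write(self, key, value):
        # Update the local cache; the storage manager makes it durable.
        self.cache[key] = value
        self.storage.persist(key, value)


sm = StorageManager()
te1, te2 = TransactionEngine(sm), TransactionEngine(sm)
te1.peers, te2.peers = [te2], [te1]
te1.write("balance", 100)
assert te2.read("balance") == 100    # served from te1's cache, not the archive
assert sm.archive["balance"] == 100  # and the storage manager made it durable
```

The point of the sketch is the division of labor: clients only ever talk to transaction engines, and the storage managers sit behind them, quietly making everything durable.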

And of course, this whole thing has to be elastic.  So as the workload increases, you just spin up more transaction engines, and they support more transactions for more customers, more connections, more users.  And indeed, if you have pre-provisioned infrastructure, then you can tell NuoDB to monitor its own service levels.  If certain benchmark queries, for example, are running too slowly, it will spin up additional transaction engines, and the brokers and agents automatically load-balance between all of the available transaction engines so that the service levels are quickly restored.  And then, when the additional transaction engines are no longer needed, they are de-provisioned, so you’re not paying for what you’re not using.
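That monitor-and-provision loop can be sketched as a simple control function.  This is a hypothetical illustration of the idea, not NuoDB’s actual policy; the SLO threshold, the 50% headroom rule, and the engine names are all made up for the example:

```python
def autoscale(engines, latency_ms, slo_ms=50, max_engines=8, min_engines=1):
    """Toy control loop: spin up a transaction engine when a benchmark query
    breaches the SLO, de-provision one when there is comfortable headroom,
    staying within fixed bounds."""
    if latency_ms > slo_ms and len(engines) < max_engines:
        engines.append(f"te-{len(engines) + 1}")   # spin up an engine
    elif latency_ms < slo_ms * 0.5 and len(engines) > min_engines:
        engines.pop()                              # de-provision an idle engine
    return engines

engines = ["te-1"]
engines = autoscale(engines, latency_ms=120)  # queries too slow: scale out
engines = autoscale(engines, latency_ms=10)   # plenty of headroom: scale back
```

In the real system the brokers and agents also rebalance client connections across whatever engines exist after each step, which this sketch leaves out.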

So there’s the elasticity of a distributed system.  But of course, it is a distributed system, so although I drew that picture of transaction engines and storage managers without any virtual split, you would normally expect this to be split at least once between two data centers.  But as far as the client’s concerned, it’s simply talking to a transaction engine, and that transaction engine will serve up its requirements.  You may see some additional latency from the network, but operationally, the distribution of this across multiple data centers is completely transparent to the user.  And of course, because this is a distributed database system running in the cloud on commodity hardware, we’re going to expect to get failures.  If we lose a transaction engine, then every transaction that’s currently in flight on that transaction engine will be backed out and restarted on another transaction engine; the brokers and agents will load-balance that across the rest.  What you might see at that stage is momentarily a drop in the service levels if you’re overloading the other transaction engines, in which case the elastic provisioning that I talked about before will kick in and deal with that.  If you lose a storage manager, then so long as you’ve got storage manager redundancy -- and what I’m showing here is minimal redundancy; if you’ve got two data centers, you’d expect to have at least four storage managers -- all that happens is, the other storage managers will soldier on, and when the failed server comes back, then in the background, whilst the system still runs, the archives will be synchronized.  Again, it’s transparent to the customers.
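The failover behavior for a lost transaction engine, back out the in-flight transaction and restart it on another engine, can be sketched like this.  Again this is a conceptual illustration, not NuoDB code; the engine records, the `ConnectionError` signal, and the broker function are all hypothetical:

```python
def run_transaction(engine, txn):
    """Stand-in for executing a transaction; raises if the engine has failed."""
    if engine["failed"]:
        raise ConnectionError(f"{engine['name']} is down")
    return f"{txn} committed on {engine['name']}"

def execute_with_failover(engines, txn):
    """Broker-style load balancing: when an engine fails, the in-flight
    transaction is backed out and restarted on another available engine."""
    for engine in engines:
        try:
            return run_transaction(engine, txn)
        except ConnectionError:
            continue  # in-flight work is backed out; retry on the next engine
    raise RuntimeError("no transaction engines available")

engines = [{"name": "te-1", "failed": True},
           {"name": "te-2", "failed": False}]
result = execute_with_failover(engines, "txn-42")
```

From the client’s point of view the transaction simply completes, which is the transparency Guy describes; only the service-level dip (and any compensating elastic provisioning) hints that a node was lost.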

So I’m going to stop now to give us some time for questions.  All I wanted to cover today was, you know, that little box in the top right-hand corner of Iain’s slide: where are people now feeling the pressure to virtualize, and what is happening in the NuoDB world to meet that challenge?  So I’m going to stop and hand it back to you.  Do we have any questions?

(M3): Yeah, thanks, Guy, we have a few minutes left.  So I think the first question is for you.  And that is, can I install NuoDB on my own hardware?

(Guy): Oh yes; sorry, I’ve been talking about scale-out, multi-data center -- yes, I actually run it on my laptop.  You can do that, and indeed we do have customers who run it on-prem, and we do have customers who use it in a hybrid cloud, where they provision on-premises dedicated, albeit virtualized, hardware, and when they have peak loads, they provision from a cloud provider just to cover the peak.  They do that for economic reasons; it just makes sense to spend that money that way.

(M3): OK.  Thank you.  Iain, I think the next question is for you.  Which aspect of your functional view of virtualization do you see experiencing the most disruption within the next few years?

(Iain Gillott): So I think (inaudible), really the way to think about this is in terms of the peripheral -- I call them peripheral functions of the network.  So there are a lot of customer databases.  Those servers, the PCRF, the AAA, the HSS -- those types of functions are obviously very critical, but if you were to see them in the data center, they look like very traditional IT-type applications.  Those areas are certainly getting a lot of attention right now.  The IP services, as we said, and as Guy pointed out, are certainly getting a lot of attention too.  Those areas you’ll see first.  Then look at the big data centers on a nationwide basis: they’ve got some big routers in there, those PDN gateways.  Certainly the billing support systems and operational support systems are getting attention.  And then really work your way down through the network; the last area is really going to be the radio access networks, purely because there are so many of them.  It’s as simple as that.

(M3): Thank you.  And then Guy, I think the last question is for you.  You only talked about transactional applications.  Can I use NuoDB for analytics?

(Guy): Yes, you can.  Our customer base so far is mostly operational transactional systems, because that’s the thing that we do so much better than most other solutions.  But we do have customers using what we call -- I think it was Gartner who coined the phrase -- HTAP, Hybrid Transactional and Analytic Processing, which typically means relatively short, predictable analytic queries running in parallel with the transactional workload.  We’re doing that now, and we do have in our roadmap -- I don’t have time to talk about it now -- some improvements in the architecture to make it easier to partition part of the network to support your analytic processing, and part of it to support your operational processing.  You can separate that out now, but we’re going to do more in the future to make that easier, and to help you scale up for deeper queries in the analytic processing.

(M3): OK, thank you.  So I’m afraid we’ve run out of time.  I’d like to extend a special thanks to Iain and Guy for leading our discussion on application services for mobile telecoms.  And thanks to our audience for attending our session.  We hope that you found today’s conversation informative and useful.  Thank you for attending.  This concludes today’s webinar.

(Guy): Thanks, Knox.  Thanks Iain.  Thanks, everyone.

(M3): Thank you.  Thanks, Guy.