NuoDB on Red Hat OpenShift

A few years ago, Red Hat introduced OpenShift, a Platform as a Service (PaaS). There are a number of flavors of OpenShift, from Origin, an open-source community project, to a fully managed Online service. OpenShift allows developers to focus on incremental application changes without having to administer the infrastructure. OpenShift provides a continuous integration framework: it slurps your GitHub repo into a Docker container with the proper language engine and lets Kubernetes orchestrate the deployment of the container pods across host nodes. That said, NuoDB is not an application that can be deployed via the OpenShift web console, but it can be deployed as part of the PaaS itself. In this blog, I’ll walk you through standing up your own OpenShift Origin cluster on CentOS and pulling in NuoDB containers as part of the platform.

OpenShift Configuration

For this blog, I’m only launching a single OpenShift node on an AWS EC2 instance. This node runs both the “Master” and “Node” services. There are a number of ways to deploy an OpenShift cluster, but I found the steps in this link, OpenShift Quickstart, to be the simplest and quickest way to get a cluster up and running. Use the first option, “oc cluster up”. This deploys the necessary OpenShift containers, creates a default oc user account, and creates a new project.
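One prerequisite worth calling out from the quickstart: the Docker daemon must trust OpenShift’s internal registry subnet before “oc cluster up” will succeed. The snippet below is a sketch of that setting in /etc/docker/daemon.json, using 172.30.0.0/16, Origin’s default service network range:

```
{
  "insecure-registries": ["172.30.0.0/16"]
}
```

After editing the file, restart Docker (systemctl restart docker) before running “oc cluster up”.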

When you run the cluster up command, you should see output like the following:

[root@ip-172-31-17-249 ~]# oc cluster up
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.4.1 image ... 
   Pulling image openshift/origin:v1.4.1
   Pulled 0/3 layers, 3% complete
   Pulled 1/3 layers, 34% complete
   Pulled 2/3 layers, 79% complete
   Pulled 3/3 layers, 100% complete
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using as the server IP
-- Starting OpenShift container ... 
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
   You are logged in as:
       User:     developer
       Password: developer
   To login as administrator:
       oc login -u system:admin

OpenShift should have launched your necessary containers. You can verify this by running the Docker command “docker ps”, which should list the following running containers. If any of them have failed, you can view the log output using “docker logs <container_id>” or restart the container with “docker start <container_id>”.
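As a quick illustration of spotting a failed container, here’s a sketch run against a captured sample of “docker ps -a” output (the container IDs are the hypothetical ones from above; on a real host you would pipe the live command instead of the variable):

```shell
# Sample `docker ps -a` output (hypothetical IDs).
sample_ps_a='CONTAINER ID  IMAGE                           STATUS
08e31f593cbd  origin:v1.4.1                   Up 2 minutes
22139909dd41  origin-docker-registry:v1.4.1   Exited (1) 1 minute ago'

# Print the ID of any container that has exited, ready to pass to
# `docker logs` or `docker start`.
exited_id=$(printf '%s\n' "$sample_ps_a" | awk '/Exited/ {print $1}')
echo "$exited_id"   # → 22139909dd41
```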

[root@ip-172-31-17-249 ~]# docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED   STATUS     PORTS   NAMES
22139909dd41        origin-docker-registry:v1.4.1   "/bin/sh -c 'DOCKER_R"   2 mins    up 2 mins          k8s_registry...
8a9cf9785d91        origin-haproxy-router:v1.4.1    "/usr/bin/openshift-r"   2 mins    up 2 mins          k8s_router...
a24548fc86ff        origin-pod:v1.4.1               "/pod"                   2 mins    up 2 mins          k8s_POD...
3c2e9b053ed9        origin-pod:v1.4.1               "/pod"                   2 mins    up 2 mins          k8s_POD...
08e31f593cbd        origin:v1.4.1                   "/usr/bin/openshift s"   2 mins    up 2 mins          origin

Log into OpenShift

The “master” Docker container runs the OpenShift web interface. You can log into the interface by browsing to the host’s IP address on port 8443, for example https://<host_IPAddress>:8443.

The username and password that “cluster up” created are ‘developer/developer’. To deploy NuoDB, we’ll need to log in through the CLI. This can be done locally or from another box running the oc client tools. The following example logs in locally:

[root@ip-172-31-17-249 ~]# oc login -u developer -p developer
Login successful.
You have one project on this server: "myproject"
Using project "myproject".

As I mentioned before, the “oc cluster up” command has already created a new project for you. Checking the status of our project shows that no applications have been created yet.

[root@ip-172-31-17-249 ~]# oc status
In project My Project (myproject) on server
You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.

We are going to remove the default project and create our own:

[root@ip-172-31-17-249 ~]# oc delete project myproject
project "myproject" deleted
[root@ip-172-31-17-249 ~]# oc new-project nuodb-demo
Now using project "nuodb-demo" on server "".
You can add applications to this project with the 'new-app' command. For example, try:
    oc new-app centos/ruby-22-centos7~
to build a new example application in Ruby.

We need to make one more OpenShift system configuration change. NuoDB runs as the root user within our Docker containers, so we’ll need a policy that allows containers to run as root. First, elevate your oc permissions:

[root@ip-172-31-17-249 ~]# oc login -u system:admin
Logged into "" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
  * nuodb-demo
Using project "nuodb-demo".

Then set the policy to allow containers to run as any user:

[root@ip-172-31-17-249 ~]# oc adm policy add-scc-to-user anyuid -z default

We are now ready to deploy NuoDB containers.

NuoDB Deployment

Before we deploy our NuoDB containers, we need to disable transparent hugepage (THP) so the NuoDB storage manager operates correctly. Save the following as a shell script on the server and run it as root:

THP_BASE=/sys/kernel/mm/transparent_hugepage
THP_ENABLED=${THP_BASE}/enabled
THP_DEFRAG=${THP_BASE}/defrag

die() {
    echo "$1" >&2
    exit 1
}

disable_thp() {
    echo "Checking to see if THP needs to be disabled..."
    if [ ! -f ${THP_ENABLED} ]; then
        die "unable to find THP enable file"
    fi
    if grep -q "\[always\]" ${THP_ENABLED}; then
        echo "THP is enabled on this machine - this script will temporarily disable it"
        echo madvise > ${THP_ENABLED} || \
            die "Cannot automatically disable THP on a read-only file system"
        echo madvise > ${THP_DEFRAG}
    else
        echo "THP is not enabled on this machine - No action taken"
    fi
}

disable_thp
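To see what the script is keying on, here’s a self-contained sketch of the check: the kernel wraps the active THP mode in brackets inside the enabled file. A temp file simulates it here so nothing on the host is touched:

```shell
# Simulate /sys/kernel/mm/transparent_hugepage/enabled; the kernel marks
# the active mode with brackets, which is what the script greps for.
thp_file=$(mktemp)
echo "[always] madvise never" > "$thp_file"

if grep -q '\[always\]' "$thp_file"; then
    echo "THP enabled"   # → THP enabled
else
    echo "THP disabled"
fi
```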

I’ve made deploying NuoDB as simple as possible. The following command will create a temporary OpenShift application. This deployment container will pull a NuoDB CE image from the registry and deploy a Broker, an SM (storage manager), and a TE (transaction engine).

oc new-app <nuodb_deployer_image> \
     --name nuodb-deployer \
     -e "OC_ADDRESS=<host_IPAddress>" \
     -e "USERNAME=developer" \
     -e "PASSWORD=developer"

The environment variables are required for the deployment container to establish a CLI session with OpenShift. OC_ADDRESS is the IP address of your OpenShift instance.

In the OpenShift web console, you should now see three applications under “Overview”. The console allows you to scale your pods up or down to match load demands. This version of NuoDB is the Community Edition, which has some scaling restrictions but will be sufficient for testing your applications against.

You now have NuoDB up and running on OpenShift. The first database, called ‘testdb’, has been created for you with a username and password of ‘dba/dba’. Your application’s connection string will point to the broker’s IP address on port 48004. To get the IP address of your broker, use the following command:

oc describe pod broker | grep IP: | awk '{print $2}'
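As a sketch of what that pipeline extracts, here it is run against a hypothetical excerpt of “oc describe pod broker” output (the IP below is made up), with the result fed into a JDBC-style connection string for ‘testdb’:

```shell
# Hypothetical excerpt of `oc describe pod broker` output.
describe_output='Name:      broker
Namespace: nuodb-demo
IP:        172.17.0.5'

# Same grep/awk pipeline as above, run against the sample text.
broker_ip=$(printf '%s\n' "$describe_output" | grep 'IP:' | awk '{print $2}')

# NuoDB JDBC URL format: jdbc:com.nuodb://<broker>:<port>/<database>
echo "jdbc:com.nuodb://${broker_ip}:48004/testdb"
# → jdbc:com.nuodb://172.17.0.5:48004/testdb
```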

For additional information on how to access and import data into your NuoDB database, see here.


A reader asked: how do I make this highly available? Also, is it just in-memory or will the records be written to disk? And why is the example not using any persistent volume?

1. How do I make this highly available?

This blog describes the shortest path to deploying NuoDB on OpenShift. By design, the command "oc cluster up" creates only a single-node cluster. I spent some time deploying multi-node clusters using openshift-ansible. Once you've configured two or more nodes, you can increase the number of NuoDB Transaction Engines (TEs) to give your application highly available endpoints to connect to.

NuoDB also has the ability to run across multiple data centers in an active-active configuration, providing continuous availability even in the event of a data center outage.

2. Is it just in-memory or will the records be written to disk? Why is the example not using a persistent volume?

NuoDB's architecture consists of two services: Transaction Engines (TEs) and Storage Managers (SMs). The SM provides data durability by writing data to disk. In this blog, both services ran in containers; specifically, the SM container for the demo uses container-based volumes. In this configuration, all data written to disk is automatically deleted when the container is shut down.

For production use, the container can be configured to use persistent volumes, in which case the data is not deleted when the container is shut down. You can enable persistent volumes by configuring Docker storage before installing OpenShift. Here's the document on how to do this:
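For illustration, a persistent claim for the SM's data directory might look like the following. This is a minimal sketch: the claim name and size are assumptions, and the SM's deployment config would still need to mount the claim as a volume.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nuodb-sm-data        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # one SM pod writing at a time
  resources:
    requests:
      storage: 10Gi          # size is an assumption; pick per workload
```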

One thing to note: this blog uses NuoDB's Community Edition, which is great for developers and smaller projects but only features transaction scale-out, not storage scale-out. To evaluate a full version of NuoDB that includes further compute scale-out and redundant storage capabilities, you can ask for a trial at

We’ve got a couple other resources on container-based deployments that you may find interesting (some of which discuss the persistent storage question):
