Moonshot Hands-On

Over the past few weeks we have been doing a lot of testing on the HP Moonshot system. With some help from the HP Discovery Lab we were able to get access to a fully loaded 45-server Moonshot System. The Discovery Lab provides on-site testing of current and future HP Moonshot infrastructure, letting customers test and size their applications. They not only got us on an Atom-based Moonshot System but also set us up on a Calxeda system for load testing.

In addition to testing out our ideas around database density, we learned a bunch about the day-to-day configuration, management, and use of a Moonshot system.

Low-Level Management

The HP Moonshot System is equipped with an Integrated Lights-Out (iLO) chassis manager. In fact, management is broken down into four zones, each with a dedicated manager. Three of the zones contain the actual Moonshot servers, while the fourth management zone handles system-wide resources such as the power supplies and fans. All of the managers can be accessed over SSH, which makes it easy to connect and issue management commands. While many of the commands are what you would expect (power on, power off, and so on), some are particularly interesting in the context of the Moonshot system. In particular, I found the live per-server power usage information great to have. The management interfaces also make it easy to re-image individual servers, providing a centralized point from which to initiate a PXE boot and monitor progress through the virtual serial port interface.
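
To give a flavor of what that looks like, here is a rough sketch of a session against the chassis manager. The hostname is made up, and the exact prompt and command syntax vary with the iLO Chassis Manager firmware, so treat these lines as approximations of the kinds of commands available rather than a reference.

local$ ssh Administrator@moonshot-cm.example.com

hpiLO-> show node list
hpiLO-> show node power c1n1
hpiLO-> set node power on c1n1
hpiLO-> set node boot pxe c1n1
hpiLO-> connect node vsp c1n1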

Up and Running

With the Moonshot servers powered up and running an operating system (Ubuntu 12.10 in our case), the challenge becomes effectively managing such a dense system. I found that ClusterSSH (available for Linux and OS X) was an indispensable tool for quickly and easily interacting with the Moonshot system. ClusterSSH (cssh) is very straightforward to use: it opens a number of SSH sessions in parallel and lets you execute commands on some or all of them simultaneously. That doesn’t sound like a big deal, but being able to run software or change configuration settings in an identical manner across all 45 servers at once makes a huge difference. To illustrate, here is how I would deploy, configure, and launch NuoDB on the whole Moonshot system.
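
To make opening all of those sessions painless, the servers can be defined as a named cluster in ClusterSSH's clusters file. The hostnames below extrapolate the hdl-r04-g1-c1n1 naming from the Discovery Lab to the other 44 cartridges, which is an assumption on my part; adjust the list to whatever your servers are actually called.

# /etc/clusters (or ~/.clusterssh/clusters) -- a tag followed by its hosts
moonshot hdl-r04-g1-c1n1 hdl-r04-g1-c2n1 ... hdl-r04-g1-c45n1

# open one window per server, with keystrokes sent to all of them at once
local$ cssh -l hdl moonshot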

Copy the NuoDB .deb installer to the first server

local$ scp ~/nuodb-1.0.2.linux.x64.deb hdl@hdl-r04-g1-c1n1:~

Use cssh to copy the installer from the first server to all of the other servers (note that you will likely need to increase the number of simultaneous SSH connections the first server allows, as sketched below)

server[2-45]$ scp hdl@hdl-r04-g1-c1n1:~/nuodb-1.0.2.linux.x64.deb .
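
By default OpenSSH on the first server will start refusing connections well before 44 of them arrive at once. A minimal tweak, assuming a stock Ubuntu sshd_config, is to raise MaxStartups (the cap on concurrent connections that have not yet authenticated) and restart the SSH service:

server1$ sudo vim /etc/ssh/sshd_config

MaxStartups 100

server1$ sudo service ssh restart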

Use cssh to install NuoDB on all servers

server[1-45]$ sudo dpkg -i nuodb-1.0.2.linux.x64.deb

Edit the NuoDB configuration on the first server, which will act as the broker

server1$ sudo vim /opt/nuodb/etc/default.properties

broker = true

Use cssh to edit the NuoDB configuration on the remaining servers, which will run as agents (a non-interactive alternative is sketched after this step)

server[2-45]$ sudo vim /opt/nuodb/etc/default.properties

broker = false
peer = hdl-r04-g1-c1n1
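
Keeping 44 interactive vim sessions in sync through cssh can be fiddly, so a non-interactive alternative is to push the edit through sed instead. This is a sketch that assumes the broker and peer properties already appear in default.properties (commented out or not); double-check the file on one server first.

server[2-45]$ sudo sed -i -e 's/^#\{0,1\} *broker *=.*/broker = false/' \
    -e 's/^#\{0,1\} *peer *=.*/peer = hdl-r04-g1-c1n1/' \
    /opt/nuodb/etc/default.properties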

Restart the NuoDB Broker service

server1$ sudo service nuoagent restart

Use cssh to restart the NuoDB Agents

server[2-45]$ sudo service nuoagent restart

And with that you have NuoDB deployed, configured, and running on all 45 servers, with about as much work as it would have taken to run it on two. It is worth noting that while this approach works well for a single Moonshot system, it would be difficult to scale much beyond that. Since cssh displays every terminal at once, screen real estate quickly becomes an issue. In that case something like Opscode Chef would probably be a better choice for production configuration and deployment.
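
As a final sanity check, one more pass through cssh confirms the agent service actually came up everywhere before you start creating databases (the status action assumes the init script supports it; inspecting a process listing works just as well):

server[1-45]$ sudo service nuoagent status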
