Automating the Datacenter

In every industry, there is an exception that proves the rule. Many builders, for example, are incredibly fastidious about the materials and disciplines they apply to making your home as energy-efficient, waterproof and aesthetically pleasing as possible. And yet, when you visit their own homes, it’s like entering an exhibition of half-finished projects. There’ll be semi-completed plastering jobs, exposed cables and all kinds of other open wounds they’ve never had time to dress.

If my own experience is anything to go by, doctors and nurses don’t always exemplify the healthy living standards that they urge the rest of us to adhere to.

Similarly, in my UK experience, the telco/comms industry employs some of the worst communicators on the entire human spectrum.

The data center industry lectures the rest of us about automation, the efficient allocation of resources and the fine-tuning of systems, and yet many aspects of the business of data are still incredibly labor-intensive. Ironically, virtualization is only exacerbating the problem. I’m reminded of the old adage about mainframes: if you have a bad system in place, a computer simply gives you computerized chaos.

Not that virtualization has created chaos. But it has increased the workload, so that IT bosses are spending more, not less, time managing the business of data. The cause is that not all the data arrives in a format they can use. The business of data has changed, but the format people like to work with hasn’t. Yet.

Take, for example, the divvying up of resources. Every department in any organization has regular jobs that need to be allocated computing resources, and it’s the job of the department head to exaggerate their requirements as much as possible in order to fight their corner. Otherwise, in the cutthroat arena of office politics, they could disappear without a trace. So even though a set of research queries doesn’t have to offer instant gratification, and could happily run in the wee small hours, the marketing department running them will demand the best possible resources. Meanwhile, a retail system, which does need to run in real time and whose success hinges on instant responses, might suffer because the boss in charge isn’t pushy enough in demanding available processing power and memory from the cloud.
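If the scheduler itself knew which workloads were latency-sensitive and which could wait for the wee small hours, the politics would matter less. Here’s a minimal sketch in Python of that idea; the job fields, the capacity figure and the placement rule are all illustrative assumptions, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    owner: str
    latency_sensitive: bool  # True for interactive systems like retail
    cpu_cores: int

def place(jobs, peak_capacity_cores):
    """Give peak-hours capacity to latency-sensitive work first;
    defer batch jobs (e.g. overnight research queries) to off-peak."""
    peak, off_peak = [], []
    free = peak_capacity_cores
    # Interactive workloads are considered first, largest first.
    for job in sorted(jobs, key=lambda j: (not j.latency_sensitive, -j.cpu_cores)):
        if job.latency_sensitive and job.cpu_cores <= free:
            peak.append(job)
            free -= job.cpu_cores
        else:
            off_peak.append(job)  # batch work runs in the wee small hours
    return peak, off_peak

jobs = [
    Job("retail-frontend", "retail", True, 16),
    Job("market-research-queries", "marketing", False, 32),
]
peak, off_peak = place(jobs, peak_capacity_cores=24)
print([j.name for j in peak], [j.name for j in off_peak])
```

However loudly the marketing department shouts, its batch queries end up in the off-peak pool, because the policy is encoded in the system rather than negotiated by email.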

It’s often up to the CIO to weigh up all these demands. The tragedy, in many companies, is that the system for gathering all this information is incredibly laborious. IT admin staff are asked to email, phone or write to each department head and get them to itemize the requirements for their system. These are compiled manually in a spreadsheet that, eventually, the CIO will sanity-check before allocating time slots and computing resources to each job. Surely this is a job everyone should automate.
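The collection step is the easiest part to mechanize. A minimal sketch, assuming each department publishes its request as a small structured record rather than an email (every field name here is hypothetical):

```python
import json

# Hypothetical: each department drops a JSON request in a shared location
# instead of emailing a spreadsheet. Field names are illustrative.
requests = [
    json.loads(s) for s in [
        '{"dept": "marketing", "cpu_cores": 32, "memory_gb": 128, "window": "off-peak"}',
        '{"dept": "retail",    "cpu_cores": 16, "memory_gb": 64,  "window": "peak"}',
    ]
]

CAPACITY = {"cpu_cores": 40, "memory_gb": 256}  # illustrative cluster limits

def summarize(reqs):
    """Roll up demand per time window, so the CIO reviews one report
    instead of a chain of emails and a hand-built spreadsheet."""
    totals = {}
    for r in reqs:
        w = totals.setdefault(r["window"], {"cpu_cores": 0, "memory_gb": 0})
        w["cpu_cores"] += r["cpu_cores"]
        w["memory_gb"] += r["memory_gb"]
    return totals

for window, demand in summarize(requests).items():
    over = [k for k in CAPACITY if demand[k] > CAPACITY[k]]
    flag = " OVERCOMMITTED: " + ", ".join(over) if over else ""
    print(f"{window}: {demand}{flag}")
```

The sanity check the CIO performs by eye becomes a flag the report raises automatically.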

Spreadsheets also underpin another of the processes of virtualization – the security configuration of servers. Every department now has a range of virtual servers, each of which has to be defined in great detail if firewalls are to meet compliance standards. Given that virtual servers can be spun up in minutes, there is a tendency for them to multiply like Japanese knotweed. Pity the poor security officer who has to manually compile the port settings, the access rights and all the other details that make the crucial difference between compliance and culpability. Again, this is a process that has only recently begun to be automated. And in most corporations there are thousands of business processes still coordinated by spreadsheets.
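The same pattern would rescue the security officer: express the policy as data and sweep the estate automatically, rather than maintaining a spreadsheet by hand. A minimal sketch, assuming each virtual server’s open ports can be exported as a simple record (the allow-list and field names are invented for illustration):

```python
# Hypothetical compliance sweep: flag virtual servers whose open ports
# fall outside an allow-list, instead of auditing a spreadsheet by hand.
ALLOWED_PORTS = {22, 443}  # illustrative policy, not a real standard

servers = [
    {"name": "vm-retail-01",   "open_ports": {22, 443}},
    {"name": "vm-research-07", "open_ports": {22, 443, 8080}},  # spun up last week
]

def audit(servers, allowed):
    """Return the servers that would fail the firewall compliance check."""
    findings = []
    for s in servers:
        rogue = s["open_ports"] - allowed
        if rogue:
            findings.append((s["name"], sorted(rogue)))
    return findings

for name, ports in audit(servers, ALLOWED_PORTS):
    print(f"NON-COMPLIANT: {name} has unexpected open ports {ports}")
```

A sweep like this runs as fast as the servers multiply, which is precisely what a hand-maintained spreadsheet can never do.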

Cloud applications could provide the solution to all these manual challenges, and that, in turn, calls for more fluid database types. The automation of the data center could be the foundation for evolving the business of data to a much higher level of performance. But we’re going to need a bigger, better database.