by Chris Uttenweiler, System Architect, DLT Solutions
Contrary to a sea of white papers and hours of slick video clips, the point when your back is against the wall is NOT the time to start looking to “the cloud” to save you. Instead, you need to start with a controllable situation of your own choosing, one that is as far from the fire as you can get, but as close to reality as possible.
The IT battlefield is littered with mangled reputations and dead careers from those who picked up the shiny new technology-of-the-month and went tilting at windmills while hordes of very real IT problems overpowered them in a cruel game of attrition. Cloud technologies can help even the odds in many situations; however, they very rarely can go straight into production without a proper evaluation. After all, you’re not just playing with technology here, but with workflow & mindset as well.
Compared to other types of proof of concept (POC) work, giving the cloud a spin can be considerably cheaper than what you might be used to. In many cases, you can deploy redundant structures into the cloud and directly compare their performance & operational profiles in real time with little risk and few unknowns.
There are a number of things that you can do to make POC in the cloud more successful; however, some of them may be a little counter-intuitive:
1) Private doesn’t mean easier or less embarrassing: It seems logical that a private cloud floating in your own familiar data center should be easier to deploy, right? Think again. While there are a few technologies that help you apply the basics of the cloud methodology to the confederated mass of IT resources in your server room, most of them have tradeoffs in features and maturity when compared to going with a cloud service provider (CSP). If you decide to go private first, see if you can’t get a small amount of capital to at least attempt a public cloud trial side-by-side at the same time. In many cases, you will be surprised at how much easier it is to get an environment up in a public cloud than it is to build a private one.
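For a sense of scale, standing up a throwaway environment with a public provider can be a single scripted step. Here is a minimal sketch, assuming an AWS account and the boto3 Python SDK; the region, AMI ID, and instance size are illustrative placeholders rather than recommendations:

    import boto3

    # Minimal sketch: launch a single throwaway instance for a side-by-side trial.
    # The region, AMI ID, and instance type are placeholders; substitute your own.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t3.micro",           # small, cheap trial size
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Purpose", "Value": "cloud-poc-trial"}],
        }],
    )
    print("Launched trial instance:", instances[0].id)

The equivalent first step in a freshly built private cloud usually involves procurement, racking, and platform installation before you can even get this far.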
2) Dynamic, not lethargic: The cloud is a reactive platform: it responds to stimulus, almost like a living organism. If you take on an application that is mostly static, you will not exercise the design elements and operational tools/constructs that make the cloud any different from virtualization or co-location. In essence, you’re not going to learn much, and you may set yourself up for an unexpected future fall by allowing hubris to lead you to the conclusion that “this cloud stuff is easy.”
If you choose an application that is already in service or possibly in need of an overhaul, pick one that has a dynamic load/feature profile because you will get the best bang for your buck and, along the way, learn more about the cloud models and design patterns. Doing so will help you better understand the benefits of cloud including pay-as-you-go and just-in-time provisioning.
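To make that concrete, the pay-as-you-go and just-in-time side of a dynamic workload is usually expressed as a scaling policy rather than a purchase order. Here is a minimal sketch using an AWS Auto Scaling target-tracking policy via boto3; the group name and the 50% CPU target are illustrative assumptions, not prescriptions:

    import boto3

    # Minimal sketch: a target-tracking policy so capacity follows load instead of
    # being bought up front. The group name and 50% CPU target are illustrative.
    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="poc-app-asg",      # hypothetical Auto Scaling group
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,   # add/remove instances to hold roughly 50% CPU
        },
    )

A mostly static application would never trigger a policy like this, which is exactly why it teaches you so little about the model.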
3) Clouds are not fluffy and soft: Security is security, and you’d better pay close attention to how it works and how it doesn’t. Even though the name seems to imply that the cloud is less secure than previous methodologies, constructs in the cloud are often locked down a lot tighter than in your local data center. You can quickly find yourself locked out of your hosts and, even worse, in situations where something should work but, very stubbornly, doesn’t. The abstraction of cloud services and their security controls can make parsing through a traditional, multi-layer firewall profile seem like child’s play. Take no shortcuts here – learn the security ins and outs of your particular CSP and put it to paper before you start your infrastructure modeling.
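One small, concrete example of how tightly things start out locked down: a brand-new security group in most public clouds denies all inbound traffic, so every path your application needs has to be opened deliberately. Here is a minimal sketch assuming AWS and boto3; the VPC ID and admin network CIDR are placeholders:

    import boto3

    # Minimal sketch: a new security group allows no inbound traffic at all;
    # each permitted path must be added explicitly. VPC ID and CIDR are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    sg = ec2.create_security_group(
        GroupName="poc-app-sg",
        Description="POC app - explicit allow rules only",
        VpcId="vpc-0123456789abcdef0",
    )

    # Without a rule like this, you cannot even SSH to your own hosts.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # your admin network only
        }],
    )

Forget a rule like the one above and you will experience the “should work but stubbornly doesn’t” problem firsthand.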
4) ‘Apple’ is to ‘Orange’ as ‘Hand Grenade’ is to ‘Land Mine’: Anything you move to the cloud will need to be rationalized against its relationship (or in cloud speak, ‘affinity’) to the other infrastructure or software systems that your business relies on. Understanding the types of systems and business processes that work with, feed, or are fed by the systems you move into the cloud is vital to making sure that you don’t upset the ecosystem that is your IT & business environment. Discover these affinities by using software tools or good old-fashioned IT detective work, and then document them formally (a lightweight sketch of such a record follows below). Next, make sure that you’ve got it right by meeting with your application teams AND your business partners. Frame the interactions carefully: you are verifying the relationships between the systems, not asking for permission to move your target application. It’s a fine line to walk, so tread carefully. Here be Dragons – don’t get crispy.
Ideally, you should choose an application that is open and well understood. The number of hooks into other systems/processes matters less than the clarity of the relationships and your understanding of them.
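One lightweight way to document these affinities formally is to capture each one as a structured record that you can walk through with the application teams and business partners. Here is a minimal sketch in Python; every system name in it is hypothetical:

    # Minimal sketch of an affinity record; every system name here is hypothetical.
    affinities = [
        {
            "system": "order-portal",        # the candidate application for the POC
            "depends_on": "customer-db",     # what it reads from
            "feeds": "billing-batch",        # what consumes its output
            "interface": "nightly SFTP drop",
            "owner": "Finance IT",
            "verified_with_business": False, # flip to True after the review meeting
        },
    ]

    for a in affinities:
        status = "verified" if a["verified_with_business"] else "NEEDS REVIEW"
        print(f'{a["system"]} -> {a["feeds"]} via {a["interface"]}: {status}')

The format matters far less than having something written down that both IT and the business have signed off on.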
5) Hard now isn’t hard forever: Some things that work really well in the cloud are so difficult in the private data center that they never cross the minds of the professionals who are considering their first cloud projects/POCs. Here are a couple of places to start that you might not have considered:
Log Management – The biggest logistical problem with deploying a log management and analysis system is the storage aspect. It’s expensive to get off the ground and exponentially difficult to keep up with. You almost always pay for more storage than you need on day one, and by the next budget cycle you’ve got to find the funds to feed the beast that you’ve built, regardless of how useful the system is. This is where the pay-per-drink model of cloud storage is a real life saver. By coupling it with expandable compute capacity to handle unexpected spikes in traffic and deep analysis queries, you’ve now made it a lot easier to have a functional, healthy log management system that can scale with your needs – both up and down.
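Here is a minimal sketch of the pay-per-drink storage side, assuming AWS S3 and boto3; the bucket name and log path are placeholders. You pay only for the bytes you actually store, and older logs can be pushed to cheaper tiers:

    import boto3
    from datetime import datetime, timezone

    # Minimal sketch: ship a day's logs to object storage you never had to pre-buy.
    # Bucket name and local log path are placeholders.
    s3 = boto3.client("s3")

    today = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    s3.upload_file(
        Filename="/var/log/app/app.log",
        Bucket="my-org-log-archive",
        Key=f"app-logs/{today}/app.log",
        ExtraArgs={"StorageClass": "STANDARD_IA"},  # cheaper tier for older archives
    )

Compare that to estimating three years of log growth and buying the disk shelves up front.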
Information Assurance & Disaster Recovery (DR)/Continuity of Operations (COOP) – Everyone wants DR and strong data backup, but anyone who has ever been charged with designing it understands that the costs of doing it right can be downright prohibitive. The devil is in the details, but this is another big area where the cloud can save you a considerable amount of headache and money. There are quite a few solutions out there that allow you to take snapshots of your data, or even of whole virtual machines, and place them into the cloud to pull back when needed. There are also complete infrastructure designs at your fingertips that could allow you to build out an inexpensive DR environment that, 99% of the time, is sized only to handle data synchronization. However, within 30 minutes of a disruptive failure of your primary systems, that same environment can be scaled up to full strength to run your business in the cloud while you pick up the pieces and rebuild your primary site.
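Here is a minimal sketch of the snapshot side of that approach, assuming AWS EBS and boto3; the volume ID is a placeholder. The snapshot lands in durable cloud storage and can be restored into a scaled-up DR environment when you actually need it:

    import boto3

    # Minimal sketch: snapshot a data volume into the cloud for later restore.
    # The volume ID is a placeholder.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly DR snapshot for the POC application",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "Purpose", "Value": "dr-coop"}],
        }],
    )
    print("Snapshot started:", snapshot["SnapshotId"])

Scheduling a job like this nightly costs a fraction of maintaining a fully built-out standby site.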
At the end of your efforts, the scaling abilities of a CSP can allow your successful POC to become a production-ready system in a few clicks of the mouse. If the outcome isn’t what you expected, the POC can be terminated just as easily. In either case, you absolutely must think through the interdependencies of your cloud-deployed application and make sure that you properly stress the system. Remember, you’re evaluating not only a technology, but a methodology.
Chris Uttenweiler is a System Architect with DLT Solutions. He specializes in application, architecture, and migration/operational concerns for the DLT Cloud Advisory Group. Chris has a diverse background in IT Operations & Infrastructure Design and has supported a broad range of IT consumers, from the Federal Government to digital media companies.