Cloud may be the future, but it ain't all sunshine and rainbows

Learning lessons the hard way so you don't have to

Yes, cloud might be the future, but what truths lie hidden beneath this rock of certainty? You've heard the hype, the pros and the cons, but there's plenty the average cloud user may not have considered in the clamour to get up there. Our company recently heeded the cloud's call, and this is what we discovered.

Where cloud excels is auto-scaling infrastructure, but the fly in the ointment is that most small-to-medium environments are still built around a set of virtual machines rather than around the services those machines provide.

Re-engineering such a platform to fit the cloud paradigm isn't something many vendors discuss. Rather, it's frequently kicked down the road with buzzwords like "tri-modal".

For those admins and engineers who want to attempt the re-engineering, most applications can't be moved into the cloud without a significant amount of work. The real problem is working out the dependencies and how they would map onto any cloud environment.

One example that immediately springs to mind is an application stack with an AS/400 as part of it. Such non-Wintel/Linux items in a configuration are a stumbling block, as you won't find the AS/400 supported by the main cloud providers. To be clear, there are specialist AS/400 cloud providers, but trying to pair them with a cloud environment still in its infancy is going to cause technical issues.

Another area no one really talks about, and this can be make or break, is support costs. These can run anywhere from 10 per cent of the monthly bill for smaller installations down to around 3 per cent for customers spending a million per month. That is a lot of money by anyone's standards.

Everyone expects to pay for support, but these prices don't cover anything non-AWS, so it's not a case of replacing one set of support costs with another. Rather, it's an additional cost on top of the AWS bill. What makes the situation even worse is that, unlike with traditional vendors, there is no real price negotiation for all but the biggest customers. Wise managers will need to budget for additional support costs for that environment.
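As a back-of-the-envelope sketch of what that looks like in practice, assume support is billed as a flat percentage of the monthly bill; the exact percentages and any tiered breakpoints vary by provider and plan, so treat these figures as illustrative:

# Rough annual cost of percentage-based cloud support.
# The rates echo the ballpark figures above; the exact percentages and any
# tiered breakpoints are assumptions that vary by provider and plan.
def annual_support_cost(monthly_spend: float, rate: float) -> float:
    """Support billed as a flat percentage of the monthly cloud bill."""
    return monthly_spend * rate * 12

print(f"${annual_support_cost(20_000, 0.10):,.0f}/year support on a $20k/month bill at 10%")
print(f"${annual_support_cost(1_000_000, 0.03):,.0f}/year support on a $1m/month bill at 3%")

Even at the friendlier 3 per cent rate, that works out to roughly $360,000 a year before any non-AWS support contracts are added on top.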

Let's talk control – or the illusion thereof. If a cloud service goes down, you are potentially one of hundreds of thousands of affected customers. Sometimes, even with the best design in the world, there is going to be some form of outage. At that point, anyone not spending significant amounts of money per month is going to be put to the back of the queue when a major disaster hits.

Contingency is the answer, and it's up to the designers of cloud-based services to ensure resilience. The most critical cloud systems use not only a different data centre but a different provider.

A real-world example of where the cloud didn't perform for me personally is disaster recovery (DR). Yes, it fills a lot of gaps, but it also has its own pitfalls. When doing on-premises DR, customers had become accustomed to sub-10-minute failovers. Everyone was happy. The next stage was DR into the cloud. This is where our major issue reared its ugly head.

Due to the way our cloud providers had engineered their storage, our 10-minute failover potentially became 16 hours. The response from management? "Well, it is still within SLA." While technically correct, it was the rank and file who were potentially left to share that message.

And, looping back to cost, something that is not often realised is that every byte of data leaving the cloud provider's data centre is billed as egress. Depending on how the backup is set up, that could be quite a hefty sum. It's not exactly a hidden cost, but it is something a business has to factor in.
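For a rough sense of scale, here is a minimal sketch of how nightly off-site backups rack up egress charges; the per-gigabyte rate is an assumption for illustration, so substitute your provider's actual data-transfer-out pricing:

# Rough monthly egress bill for pulling backups out of the cloud each night.
# The $0.09/GB figure is an assumed, illustrative data-transfer-out rate;
# check current provider pricing and any free allowance before relying on it.
EGRESS_PRICE_PER_GB = 0.09

def monthly_egress_cost(nightly_backup_gb: float, nights: int = 30) -> float:
    """Cost of shipping a nightly backup out of the provider's data centre."""
    return nightly_backup_gb * nights * EGRESS_PRICE_PER_GB

for size_gb in (50, 500, 5_000):
    print(f"{size_gb:>5} GB/night -> ~${monthly_egress_cost(size_gb):,.0f}/month in egress")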

Another, more opaque issue is resource constraints. It is critical for the designer to understand the limitations of cloud infrastructure. A simple but common example is a system that uses S3 buckets for cheap storage. That is all well and good until the data needs to be pulled into the Amazon compute infrastructure.

There is a hard limit of 20MB/s transfer speed into the compute layer per disk, so moving a large amount of data can take many hours. The net result is unhappy customers who are used to sub-20-minute operations on on-premises infrastructure. It is possible to design around this; essentially, it comes down to verifying that what you are planning is considered best practice. Once you are off the beaten track, it can be difficult to find your way back in a cloud environment.
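To put that per-disk figure in context, here is a quick, purely illustrative sketch of how long a pull takes as data volumes grow, and how striping the transfer across several volumes – one common way to design around the limit – changes the picture:

# How long it takes to pull data into compute at ~20MB/s per disk.
# Purely illustrative; the per-disk throughput is the limit cited above.
PER_DISK_MB_PER_S = 20

def transfer_hours(data_gb: float, disks: int = 1) -> float:
    """Hours to move data_gb gigabytes when each disk sustains PER_DISK_MB_PER_S."""
    seconds = (data_gb * 1024) / (PER_DISK_MB_PER_S * disks)
    return seconds / 3600

for data_gb in (100, 1_000, 10_000):
    print(f"{data_gb:>6} GB: {transfer_hours(data_gb):6.1f}h on one disk, "
          f"{transfer_hours(data_gb, disks=8):5.1f}h striped across eight")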

Last but not least is, arguably, the most contentious subject: code quality. In the cloud, more than ever, the quality of your code counts for a lot. As mentioned earlier, scaling is the big feature of cloud deployments. Auto-scaling will happily scale out poorly optimised code, and it scales up the code's inefficiencies right along with it. Poorly written code consumes more CPU cycles, more RAM and, ultimately, more instances. That inefficiency translates directly into bottom-line costs.
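As a simple illustration of how that hits the bill, consider a horizontally scaled service where the instance count is driven by how many requests each instance can handle; the throughput and price figures below are made up for the example:

# Made-up illustration of how per-instance inefficiency multiplies the
# hosting bill in an auto-scaled deployment.
import math

INSTANCE_PRICE_PER_HOUR = 0.10    # assumed on-demand price
PEAK_REQUESTS_PER_SECOND = 2_000  # assumed peak load

def monthly_compute_cost(requests_per_instance: float) -> float:
    """Instances needed at peak, times an hourly price, over ~730 hours."""
    instances = math.ceil(PEAK_REQUESTS_PER_SECOND / requests_per_instance)
    return instances * INSTANCE_PRICE_PER_HOUR * 730

print(f"Well-optimised code (200 req/s per instance): ${monthly_compute_cost(200):,.0f}/month")
print(f"Sloppy code (50 req/s per instance): ${monthly_compute_cost(50):,.0f}/month")

The same traffic served by code that is four times less efficient costs roughly four times as much, month after month.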

Cloud is an excellent choice for a new generation of service-orientated architecture, where the focus is on the service being provided rather than on specific individual virtual machines.

Not all applications are anywhere near that yet, but the landscape is changing fast. Going cloud requires a lot of design work and due diligence to avoid stumbling into hidden traps and unforeseen complications.

My big takeaway? If you don't have the skills in-house, buy them – at least initially. Having experienced cloud administrators and programmers on board will not only help you pre-empt problems and protect against them, they'll also help you lay the foundation for some solid, future-proof IT infrastructure. ®
