The Evolving Enterprise Calculus Of Public Cloud Versus Private Infrastructure

POST WRITTEN BY
Rhett Dillingham

There is clear enterprise consensus that hybrid cloud is the appropriate long-term strategic approach to best leverage cloud computing. There has been much less consensus in decision-making on the underlying use of public cloud versus private infrastructure for individual applications. I find that as an enterprise improves its recognition of the differences between infrastructures in feature capabilities and control of security and compliance, decision-making across the application portfolio matures into greater focus on cost efficiency for the breadth of applications it can operate on either type of infrastructure. Enterprises working from an oversimplified view of their options risk missing out on substantial opportunities for cost optimization.

Capabilities, control, and cost are the key factors in enterprise decisions on where to develop new applications and whether to migrate some of their existing applications in either direction between public cloud and private infrastructure. Over the first decade of cloud computing, the pendulum of adoption began with over-conservatism towards the public cloud, based mainly on concerns of control and steady-state workload cost. In the middle of the decade, the pendulum swung toward an aggressive public "cloud-first" approach for many early-majority enterprise cloud adopters who aimed to maximize their use of public cloud while shrinking their private infrastructure footprint. High-profile early innovators like GE and Netflix fueled this perspective, touting capability benefits, including features and elasticity for operational agility, coupled with expected long-term cost advantages and improvements to control. The vast majority of enterprises have now settled into a middle-ground approach with hybrid cloud based on their expectation of sustained control and cost benefits from private infrastructure for some of their application portfolio.


Seeing this market dynamic, the private infrastructure providers have steadily sharpened their messaging — acknowledging the value of public cloud for applications requiring specific capabilities or public cloud scale elasticity, while recognizing application profiles for which private cloud can deliver a cost advantage. Meanwhile, the public cloud providers are rapidly delivering features that pull application development towards their platforms for capabilities and control not available in private cloud platforms.

The top cloud providers and professional services firms have converged on a common pattern for helping an enterprise triage its application portfolio into public cloud versus private infrastructure use. I still find that many late-majority hybrid cloud adopters struggle through the process and tend to oversimplify the analysis. A common oversimplification is that private infrastructure is better for control, while public cloud is better for capabilities and cost benefit. What the earlier adopters recognize is how often control needs are satisfied on public cloud and how readily cost savings are achieved on private infrastructure.

I’ll walk through the typical application portfolio triage to highlight the top decision drivers. While there are some variations, it typically starts with full discovery of the organization’s application footprint, identification of applications to end of life, and sorting out applications that should transition to a SaaS-based model. What remains is the application footprint to segment into public cloud versus private infrastructure — or in many cases the options of either or both (i.e. hybrid cloud).

Decisions focused on cloud infrastructure platform capabilities

For new application development, capabilities only available on a single infrastructure platform increasingly favor allocation of applications to public cloud. Private cloud infrastructure platforms (e.g. OpenStack and VMware) have become easier to deploy and use. They have also closed the gap in ease of rapid application development as cloud management and container platforms have advanced in offering orchestration and CI/CD capabilities competitive with proprietary public cloud services. However, public cloud platforms are offering proprietary application services that provide compelling functional value for firms willing to sacrifice long-term application portability for short-term quicker development and less operational workload. These services include proprietary data stores for ease of rapid development and scalable operation (e.g. AWS Redshift, Microsoft CosmosDB, and Google Spanner). Other examples are AI services — from pre-trained vision, translation, and speech recognition, to services that ease custom machine learning model training and deployment (e.g. Amazon SageMaker and Microsoft Azure Batch AI).

Specific hardware infrastructure capabilities can drive private infrastructure preference. Examples include GPUs, FPGAs, and Infiniband networking for HPC applications, as well as traditional/legacy enterprise storage arrays for legacy and commercial off-the-shelf applications. However, these hardware capabilities are increasingly available from at least one hyperscale public cloud provider (e.g. Infiniband from Microsoft Azure) and often many of them (e.g. GPUs from AWS, Google, Microsoft, IBM, and more). The public clouds are addressing these hardware capabilities to remove barriers to public cloud adoption. This results in more applications that enterprises can run on either public cloud or private infrastructure.

Decisions focused on control within a cloud infrastructure platform

The need for control leads enterprises to keep many applications on private infrastructure for security, compliance, and/or auditing reasons. However, AWS, Google, and Microsoft are delivering ever-increasing options for controlling infrastructure platform security, short of the ability to physically access or view the infrastructure. In fact, cloud providers can make a strong case that the security customers enjoy on their infrastructure platforms is better than what those customers can achieve in their own data centers. In other words, while many enterprise applications are still deemed to require private infrastructure for control, this category is set to shrink as perceptions catch up to functional progress in public cloud. This enables more applications to run on either public cloud or private infrastructure.

Decisions focused on cost optimization

What results is decision-making increasingly driven by cost optimization for applications where capability and control requirements allow flexibility of infrastructure choice. In the early years of cloud computing, there was a push in enterprise towards public cloud for cost efficiency enabled by the elasticity of infrastructure resources for highly variable workloads. Then there was a push back towards private infrastructure in reaction to the undisciplined use of public cloud (e.g. the over-provisioning of compute and storage resources, the under-utilization of compute, stranded dev/test resources, and more).
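The elasticity argument behind that early push can be made concrete with a toy cost comparison. Everything below is a hypothetical assumption (the load profile and both hourly rates were chosen only to illustrate why highly variable workloads favor pay-per-use pricing, while the same arithmetic flips for steady loads):

```python
# Illustrative comparison: pay-per-use elastic capacity vs. private capacity
# sized for peak demand. The load profile and both rates are hypothetical.

hourly_demand = [10, 10, 10, 100, 10, 10, 10, 10]  # servers needed per hour
elastic_rate = 0.50   # $/server-hour on pay-per-use infrastructure
fixed_rate = 0.20     # $/server-hour, but capacity must cover the peak

elastic_cost = sum(d * elastic_rate for d in hourly_demand)
fixed_cost = max(hourly_demand) * fixed_rate * len(hourly_demand)

print(f"elastic: ${elastic_cost:.2f}  peak-sized fixed: ${fixed_cost:.2f}")
# With this spiky profile, the elastic bill ($85.00) undercuts the
# peak-sized fixed bill ($160.00) despite a higher unit rate.
```

Flatten the demand profile (or leave the over-provisioned fixed capacity idle) and the comparison reverses, which is exactly the dynamic the paragraph above describes.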

As enterprises have matured in their cloud use, they have clarified their cost profile per infrastructure type to better inform which applications should run on public cloud versus private infrastructure. A recent IDC survey showed over 80% of enterprises migrated applications from public cloud to private infrastructure in 2017. The key input to considering and planning cost-driven migrations is the enterprise's expectation of private infrastructure cost versus public cloud cost. This is where private infrastructure providers are stepping in with their refined messaging, recognizing the value of public cloud for high-elasticity scenarios while advising strong consideration of private infrastructure for workloads with more predictable capacity needs.

HPE recently published an example of the potential cost-efficiency advantages of enterprise on-premises infrastructure, comparing its ProLiant servers to AWS EC2 virtual machines for running a Hadoop analytics workload.

Source: HPE On-Prem Price-Performance vs. Amazon Web Services (AWS) Technical White Paper

While one can always question whether such comparisons are completely apples-to-apples, HPE provides a detailed view that addresses the key comparison factors. For example, it counts on-premises operational costs as part of the HPE ProLiant costs and uses the maximum-discount, all-upfront EC2 Reserved Instance pricing (significant, since that pricing is up to 75% lower than EC2's general on-demand pricing). Even against the fully discounted AWS pricing, HPE shows significant cost savings for a Hadoop cluster, as well as a further advantage when considering overall price-performance. Every organization's mileage will vary, of course, depending on the specific applications considered.
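To see why the Reserved Instance discount matters so much to any such comparison, here is a back-of-envelope sketch. The on-demand hourly rate is a hypothetical round number; only the up-to-75% discount figure comes from the discussion above:

```python
# Back-of-envelope effect of an all-upfront Reserved Instance discount on a
# steady-state workload. The $1.00/hour on-demand rate is hypothetical.

on_demand_rate = 1.00        # $/hour, hypothetical instance price
ri_discount = 0.75           # all-upfront RI up to 75% below on-demand
hours_per_year = 24 * 365    # 8,760 steady-state hours

on_demand_annual = on_demand_rate * hours_per_year
ri_annual = on_demand_rate * (1 - ri_discount) * hours_per_year

print(f"on-demand: ${on_demand_annual:,.0f}/yr  reserved: ${ri_annual:,.0f}/yr")
# A steady workload pays 4x more at on-demand rates, which is why an honest
# on-premises comparison must benchmark against the discounted price.
```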

The lack of predictability in compute capacity needs has long been a top factor encouraging enterprise public cloud use, given historical cycles of weeks to months to add capacity in private environments. Private infrastructure providers have addressed this with flexible capacity consumption models. HPE's GreenLake Flex Capacity, for example, is designed to absorb immediate growth on top of substantial existing capacity (i.e., swings of roughly 10-30% per month, though not a doubling). It does so by pre-positioning additional capacity and billing for it only once it is utilized. HPE actively manages the installed capacity to keep additional headroom continuously available.
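A toy model of such a consumption plan is sketched below. The baseline, the 30% buffer, and the usage figures are purely illustrative assumptions, not GreenLake's actual billing mechanics:

```python
# Toy model of a flexible-capacity plan: the provider pre-positions headroom
# above a contracted baseline and bills only what is consumed each month.
# All figures are illustrative assumptions.

baseline = 100                              # contracted server count
capacity = int(baseline * 1.30)             # pre-positioned 30% headroom
monthly_usage = [100, 112, 125, 118, 130]   # servers consumed per month

monthly_bill = []
for used in monthly_usage:
    if used > capacity:
        raise RuntimeError("demand outran pre-positioned capacity")
    monthly_bill.append(used)               # pay only for utilized servers

print(f"capacity on floor: {capacity}, billed per month: {monthly_bill}")
```

The design trade-off is visible in the model: usage can grow within the buffer without a procurement cycle, but a sudden doubling of demand would exceed the pre-positioned capacity, which is why such plans target moderate monthly swings.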

Closing thoughts

As an enterprise identifies more and more applications it can run on either public cloud or private infrastructure, it should be careful not to oversimplify its cost analysis; doing so risks leaving substantial cost-saving opportunities on the table. The key to the analysis is determining, first, how well one can profile and project infrastructure capacity needs, and, second, how efficiently one can operate private infrastructure at scale, compared to the numbers a private infrastructure provider like HPE uses in its recommended analysis.

Disclosure: Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including Advanced Micro Devices, Apstra, ARM Holdings, Bitfusion, Cisco Systems, Dell EMC, Diablo Technologies, Echelon, Ericsson, Frame, Gen Z Consortium, Glue Networks, GlobalFoundries, Google (Nest), HP Inc., Hewlett Packard Enterprise, Huawei Technologies, IBM, Jabil Circuit, Intel, Interdigital, Konica Minolta, Lenovo, Linux Foundation, MACOM (Applied Micro), MapBox, Mavenir, Mesosphere, Microsoft, National Instruments, NOKIA (Alcatel Lucent), Nortek, NVIDIA, ONUG, OpenStack Foundation, Peraso, Portworx, Protequus, Pure Storage, Qualcomm, Rackspace, Rambus, Red Hat, Samsung Technologies, Silver Peak, SONY, Springpath, Sprint, Stratus Technologies, TensTorrent, Tobii Technology, Synaptics, Verizon Communications, Vidyo, Xilinx, Zebra, which may be cited in this article.
