Public cloud services can cost substantially less than implementing and maintaining a private cloud infrastructure. But for some applications, especially for mission-critical database applications that require both high availability and high performance, using the public cloud might instead serve only to bust the budget.

The reason is the need for redundant compute, storage and networking resources. These additional standby resources cost money, potentially much more than an equivalent configuration in a private cloud.

There are, of course, ways to make a public cloud service as cost-effective as, if not more cost-effective than, a high-availability, high-performance private cloud. Achieving the best results requires carefully managing how the public cloud services are utilized by each application. This article offers 10 suggestions for utilizing public cloud services more cost-effectively while maintaining appropriate service levels for all applications.

The objective here should not be to minimize cost, but rather to optimize price/performance for each application. In other words, it is appropriate to pay a higher price when provisioning resources for those applications that require higher uptime and throughput performance.

It is also important to note that, for some applications, a hybrid cloud—with only some applications running in the public cloud—might be the best way to achieve optimal price/performance.

#1 – Understand the cloud service providers’ business and pricing models.

Because it is impossible to utilize public cloud services cost-effectively without a thorough understanding of what these services cost, this research is the logical starting point. And because all cloud service providers (CSPs) have different business and pricing models, the discussion here must be generalized.

In general, the most expensive resources involve large amounts of compute and data movement, which can together account for 80 percent or more of the total costs. Indeed, provisioning virtual machines (VMs) with lots of CPU cores and memory that run constantly with applications that transfer lots of data back to the enterprise is guaranteed to bust the budget.

All CSPs publish their pricing, and unless locked in by a service agreement, that pricing changes constantly. All hardware-based resources carry at least some direct or indirect cost, including physical and virtual compute, storage and networking, and the charges are based in part on the space, power and cooling these resources consume.

For software, open source is usually free, but use of all commercial operating system and application software incurs licensing fees. And be forewarned that some software licensing models can be quite complex, so study them carefully.

While researching costs, be sure to also look at all available discounts and capitalize on any that apply. For example, making service commitments, pre-paying for services and/or relocating applications to a different region can together achieve savings of up to 50 percent.
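Because stacked discounts typically compound multiplicatively rather than add up, it helps to model them explicitly. The sketch below illustrates the arithmetic; the discount rates are hypothetical examples, not any CSP's actual terms.

```python
def effective_price(list_price: float, discounts: list[float]) -> float:
    """Apply a series of fractional discounts multiplicatively.

    The rates are illustrative only; real CSP discounts vary by
    provider, service, commitment term and region.
    """
    price = list_price
    for rate in discounts:
        price *= (1.0 - rate)
    return round(price, 2)

# Hypothetical example: a service commitment (30% off), pre-payment
# (10% off) and a cheaper region (15% off) on a $1,000 list price.
print(effective_price(1000.00, [0.30, 0.10, 0.15]))  # 535.5, i.e. ~46% total savings
```

Note that three discounts of 30, 10 and 15 percent yield roughly 46 percent off in total, not 55, which is why the "up to 50 percent" figure should be verified against the actual contract math.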

#2 – Be sure to understand any “devils” that might be hidden in the pricing details.

In addition to the basic charges for hardware and software, there are potential à la carte costs for various value-added services, including security, load balancing, and RAID or other data protection provisions. There may also be “hidden” costs for I/O to storage or among distributed microservices, or for peak utilization that occurs only rarely during “bursts.”

As mentioned above, data movement back to the enterprise—whether to a private cloud in a data center or directly to users—can be among the more costly services. Data movement will also likely incur separate charges in the WAN that are not included in the bill from the CSP.

#3 – Right-size the most costly resources right away.

Willie Sutton was purportedly asked why he robbed banks, to which he replied, “Because that’s where the money is.” In the public cloud, the money is in compute, so right-sizing compute resources becomes the priority for optimizing price/performance.

For new applications, especially those designed with a microservices model, start with minimal configurations for compute resources, adding CPU cores and/or memory only as needed to achieve satisfactory performance. VMs for existing applications should all be right-sized, beginning with those that cost the most.

Reduce resources gradually while monitoring performance continuously, stopping at the point of diminishing returns. And do give serious consideration to re-architecting those applications that have simply become resource hogs, a potentially cost-justifiable effort even in the private cloud.

#4 – Look next at all less expensive resources to “tweak” utilization.

Among the least costly resources in public clouds, again in general, are storage and networking within the CSP’s multi-zone infrastructure.

The use of solid state drives (SSDs) is generally more expensive than traditional spinning media on a per-terabyte basis, but SSDs also deliver superior performance, so the price/performance may be comparable or even better. And while moving data back to the enterprise can be expensive, moving data in the other direction (from the enterprise to the public cloud) can usually be done cost-free, notwithstanding the separate WAN charges. So once there, keep as much data as possible within the public cloud.

Be careful using cheap storage, however, because I/O might be a separate—and costly—“hidden” expense with some services. If so, try to make use of performance-enhancing technologies like caching, solid state storage and/or in-memory databases, where available, to optimize utilization of compute and storage resources.

#5 – Look for ways to minimize software licensing fees.

Software licenses can be a significant expense in both private and public clouds. For this reason, many organizations are migrating from Windows to Linux, and from SQL Server to other commercial or open source databases. But for those applications that must continue to operate with “premium” software, check various CSPs to see which pricing models might afford some savings, and compare these costs to licensing the same software in the private cloud.

Consider SQL Server, for example. SQL Server remains the dominant choice for mission-critical database applications. The Enterprise Edition offers truly carrier-class high-availability features, but it also costs considerably more to license than the Standard Edition. Similarly, the license-free Linux operating system lacks an equivalent of the robust Windows Server Failover Clustering feature found in Windows Server, which does carry a licensing fee. (See suggestion #9 for more information on this important topic.)

#6 – Don’t forget the obvious.

It may seem rather obvious, but in the spirit of being thorough, it must be mentioned that applications should be run only when needed. Although there may be only minimal savings in tearing down and spinning up resources as needed to run those applications that do not require 24x7 operation, it’s worth checking.

Because CSPs also benefit from such “on demand” agility, which frees up available resources to meet other needs, many now make it possible to automate the scheduling of applications, enabling each to be run only when needed, whether daily, weekly, quarterly or annually. This potential cost-savings measure might also be beneficial in the private cloud based on its ability to utilize the same resources for different tasks at different times.
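The "run only when needed" policy described above amounts to a schedule lookup before resources are spun up. The sketch below illustrates the idea with a purely hypothetical schedule table; real CSPs offer native schedulers and instance start/stop automation for this.

```python
from datetime import datetime

# Hypothetical policy: each application maps to the days and hours
# during which its resources should exist at all.
SCHEDULES = {
    # Weekday batch job that only needs to exist from 01:00 to 02:59.
    "nightly-etl": {"days": {0, 1, 2, 3, 4}, "hours": range(1, 3)},
}

def should_run(app: str, now: datetime) -> bool:
    """True if the app's schedule says it should be up at this moment."""
    sched = SCHEDULES.get(app)
    if sched is None:
        return False  # unknown applications stay off by default
    return now.weekday() in sched["days"] and now.hour in sched["hours"]

# A Monday at 01:30 falls inside the nightly-etl window; a Saturday does not.
print(should_run("nightly-etl", datetime(2024, 1, 1, 1, 30)))  # True (Monday)
print(should_run("nightly-etl", datetime(2024, 1, 6, 1, 30)))  # False (Saturday)
```

A real implementation would call the CSP's start/stop APIs based on this decision, so that compute charges accrue only inside each application's window.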

#7 – Create and enforce cost containment controls.

Self-provisioning in the cloud might be a desirable capability to offer users, but without appropriate controls, it can make it too easy to over-utilize resources, some of which are potentially very expensive.

Begin by taking full advantage of the many monitoring and management tools all CSPs offer. There might be a learning curve involved because these tools may be quite different from and potentially more sophisticated than those used in a private cloud. But the effort is necessary to gain full control over resource utilization.

For cost containment, one of the more useful tools is resource tagging. Tags consist of key/value pairs and metadata associated with individual resources, and some can be quite granular. For example, each VM, along with the CPU, memory, I/O and other billable resources it uses, might all carry tags. Collectively, the tags can account for the total resource utilization reflected in the bill.

Also essential to controlling costs is information about which applications are in a production, development or other environment, along with the cost center showing which department owns each, whether this data is available in a tag or via some other means.

Organizations that make extensive use of public cloud services might be well served to create a script to load information from all available monitoring, management and tagging tools into a spreadsheet or other application for detailed analyses and other uses, such as compliance, chargeback and trending/budgeting.
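The kind of script suggested above can be quite small: read the per-resource billing lines, roll costs up by a tag such as cost center, and emit CSV for a spreadsheet. The field names and sample rows below are hypothetical and do not reflect any particular CSP's export format.

```python
import csv
import io
from collections import defaultdict

# Hypothetical billing export rows, already joined with each resource's
# cost-center tag (real exports differ by CSP).
billing_rows = [
    {"resource": "vm-001", "cost_center": "finance",  "cost": 412.50},
    {"resource": "vm-002", "cost_center": "finance",  "cost": 87.25},
    {"resource": "db-001", "cost_center": "research", "cost": 990.00},
]

def cost_by_center(rows):
    """Aggregate cost per cost-center tag for chargeback reporting."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["cost_center"]] += row["cost"]
    return dict(totals)

def to_csv(totals):
    """Render the aggregated totals as CSV, ready for a spreadsheet."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["cost_center", "total_cost"])
    for center, total in sorted(totals.items()):
        writer.writerow([center, f"{total:.2f}"])
    return buf.getvalue()

print(cost_by_center(billing_rows))  # {'finance': 499.75, 'research': 990.0}
```

The same aggregation keyed on an environment tag instead of cost center would separate production spend from development spend for trending and budgeting.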

Ideally, information from all CSPs and the private cloud would all be normalized for inclusion in a holistic view to enable optimizing price/performance for all applications running in the hybrid cloud.

#8 – Consider creating a cloud service catalog.

A comprehensive and consistent set of tags can serve another useful purpose: creating a cloud service catalog. Such a catalog can be used to enforce tagging and resource utilization policies for many if not most applications.

One particularly powerful policy is to require tags for the application environment (e.g. production or development) and cost center. Another policy might place limits or other restrictions on the self-provisioning of certain resources, which require management authorization to override. A service catalog limits the potential for budget-busting surprises when each bill arrives.
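Both policies described above reduce to a pre-provisioning check: reject requests that lack the required tags, and flag oversized requests that lack management approval. The tag keys and the self-service limit below are illustrative assumptions, not a real catalog schema.

```python
# Hypothetical catalog policy: required tag keys and a self-service cap.
REQUIRED_TAGS = {"environment", "cost_center"}
MAX_SELF_SERVICE_VCPUS = 16

def validate_request(tags: dict, vcpus: int, approved: bool = False):
    """Return a list of policy violations; an empty list means allowed."""
    violations = []
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    if vcpus > MAX_SELF_SERVICE_VCPUS and not approved:
        violations.append("vCPU count exceeds self-service limit; approval required")
    return violations

# A request missing its cost-center tag and exceeding the limit
# without approval accumulates two violations.
print(validate_request({"environment": "dev"}, vcpus=32))
```

Run at provisioning time, a check like this is what keeps the untagged, oversized VM from ever appearing on the bill in the first place.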

#9 – Test your talents by taking on the worst-case use case.

The worst-case use case in the public cloud is often the mission-critical database application, as these are generally the most costly for a variety of reasons. They may need to run 24x7. They require redundancy, which involves replicating the data and provisioning standby server instances.

Data replication requires data movement, potentially in both the LAN and WAN. On-demand compute resource reservations that are sufficient to meet peak demand might be expensive. And for SQL Server, Always On Availability Groups are only available in the more expensive Enterprise Edition.

In addition, the three major cloud service providers—Google, Microsoft and Amazon—all have at least some HA-related limitations. Examples include failovers normally being triggered only by patching or zone outages and not by many other common failures; master instances only being able to create a single failover replica; and the use of event logs to replicate data, which creates a “replication lag” that results in temporary outages during a failover.

Of course, none of these limitations is insurmountable—with a sufficiently large budget. The challenge, therefore, is to find one approach capable of working cost-effectively across public, private and hybrid clouds.

Getting optimal price/performance for HA applications in a hybrid cloud environment will likely require implementing a “universal” failover clustering solution. Among the most versatile and affordable of such solutions is the SANless failover cluster.

These HA solutions are implemented entirely in software that is purpose-built to create, as implied by the name, a shared-nothing cluster of servers and storage with automatic failover to assure high availability at the application level. Most of these solutions provide a combination of real-time block-level data replication, continuous application monitoring, and configurable failover/failback recovery policies.

Some of the more robust SANless failover clusters also offer advanced capabilities, such as ease of configuration and operation, support for the less expensive Standard Edition of SQL Server, WAN optimization to maximize performance and minimize bandwidth utilization, manual switchover of primary and secondary server assignments for planned maintenance, and performing regular backups without disruption to the application.

#10 – Maintain the proper perspective while making ongoing changes.

Try always to keep the monthly CSP bill in its proper perspective. With the public cloud, every cost appears on a single invoice. By contrast, the total cost to operate a private cloud is rarely presented in such a comprehensive and consolidated fashion. And if it were, that total cost might also cause consternation.

A useful exercise, therefore, might be to understand the all-in cost of operating the private cloud—taking nothing for granted—as if it were a standalone business like that of a CSP. Then those bills from the CSP might not seem so high after all.
