Why High Availability Is Important for Your Business

Today’s businesses are addicted to IT. Every part of the business now needs Internet connectivity to function, and not just for cat videos during breaks: from communication via email, instant messaging, and VoIP, to back-office ERP and CRM, not to mention the importance of digital marketing channels and ecommerce.

Applications, and even consumer-facing services, are now shifting to cloud and hybrid models as companies realise that in-house IT, once tasked with running an Exchange server and maintaining desktops, lacks the tools and specialist expertise to keep these systems running.

Our heavy dependence on the Internet to get business done also exposes a threat: downtime has the potential to turn the competitive advantages of doing business in the cloud into a business killer.

Downtime costs a company with roughly 10,000 employees about $896,000 per week. Dun & Bradstreet reports that more than half of Fortune 500 companies experience a minimum of 1.6 hours of downtime every week. The direct losses are substantial, but downtime also carries a heavy risk of losing customer trust. That risk is much harder to quantify, yet it presents a real challenge to customer retention and a significant barrier to top-line growth. In short, any form of unplanned application downtime is toxic to business success, regardless of the role that particular application plays within the business.

The best way to prevent downtime and eliminate these losses is to adopt a series of best practices that help you achieve high availability for your service or application. High availability (HA) methodologies aim to maintain uninterrupted service for as long as possible – typically allowing downtime of only 0.001% (roughly 5 minutes per year, often described as “five nines”, or 99.999% uptime).

Consider that a regular hosting provider may only be able to offer 99% service availability, meaning 87.6 hours (about 3.65 days) of downtime per year. Even a promise of 99.9% uptime allows for roughly nine hours of downtime per year. Although an improvement, a business can still suffer significant productivity and customer losses in that time, especially if the downtime hits during peak periods.
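The arithmetic behind these figures is worth checking for yourself. A minimal sketch (the SLA percentages are the ones discussed above):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year

def downtime_per_year(availability_pct):
    """Return the annual downtime (in minutes) permitted by an availability SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99, 99.999):
    minutes = downtime_per_year(sla)
    print(f"{sla}% uptime -> {minutes / 60:.1f} hours ({minutes:.1f} minutes) downtime/year")
```

Running this shows 99% availability permits 87.6 hours of downtime a year, while five nines permits only about 5.3 minutes.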

It’s easy to dismiss these staggering revenue losses as only relevant for mega-corps such as Amazon and Facebook, but the issue affects businesses of all sizes. Whilst a 30-minute Amazon.com outage in 2013 reportedly cost the company nearly $2 million ($66,240 per minute), IDC estimates that for 20% of SMBs an IT-related outage of just one hour could cost over £50,000.

Have you calculated the cost of downtime to your business?
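A rough back-of-envelope model can get you started. The figures below are purely illustrative assumptions – substitute your own hourly revenue and staff costs:

```python
def downtime_cost(hourly_revenue, hourly_productivity_cost, outage_hours):
    """Rough estimate: lost revenue plus idle-staff cost for one outage.

    All inputs are illustrative assumptions, not benchmarks.
    """
    return (hourly_revenue + hourly_productivity_cost) * outage_hours

# Hypothetical example: £5,000/hour of online revenue, £2,000/hour of
# staff productivity at risk, and a 3-hour outage.
print(downtime_cost(5000, 2000, 3))  # 21000
```

Even this crude model ignores the harder-to-quantify costs mentioned earlier, such as lost customer trust, so treat its output as a lower bound.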

Why high availability is essential for business

As shown above, the primary reason your business needs a high availability solution is simple economics. Without a high availability strategy you are likely to be offline more often and for longer, and the cost of downtime to your business – however large or small – is almost certainly greater than you realise.

However, there’s more to it than cold, hard numbers. Here are a few other reasons why high availability is so crucial to the continuation of your business:

  • Your reputation improves as your brand becomes known for reliability compared with your competitors
  • Some high availability implementations may improve application performance – e.g. geo-distribution of users to their nearest datacentre – offering its own productivity and sales conversion rate benefits
  • Reduced risk of data loss: up to 70% of businesses that experience data loss cease trading within a year
  • Reduced customer impact during planned maintenance – in many cases you can completely avoid service disruption during planned maintenance events
  • Minimise production impact during backup windows; data replication used as part of a high availability implementation can also improve your data backup strategies

A multi-faceted approach to high availability

A complete highly-available infrastructure cannot be achieved by adopting a single policy or solution; you need a multi-faceted approach involving a number of best practices and solutions that revolve around redundancy and the protection of existing infrastructure.

Here are a few steps to achieving just that:

  • Use a scalable system such as Layershift’s Jelastic PaaS to run your business applications. A highly-scalable solution means you don’t need to worry about high fixed costs when usage is low, and you can be confident the service will manage spikes or increases in traffic as your usage grows. An elastic solution properly fits the size of your organisation, and automatically evolves with you as your needs change over time.
  • Use load balancing, such as that provided by Layershift’s Jelastic PaaS or Traffic Guard service, to distribute your traffic between multiple web or application servers. This removes the application server as a single point of failure, and provides a scalable architecture that can handle very large volumes of user requests. The Traffic Guard service also provides DDoS protection and an application firewall, ensuring that you can continue to serve your customers and employees even when faced with malicious attacks.
  • Use as much redundancy as possible for your particular application and budget, through replication and failover servers.
  • Pay attention to overall solution complexity. As you increase the number of moving parts in the system, you also increase the risk of failure: make use of built-in mechanisms instead of re-inventing the wheel through overly complex implementations.
  • Consider opportunities to dovetail your backup / disaster recovery strategy with your high availability implementation; for example, can you use a high availability replica to improve backup performance or restore speed in the event of a disaster?
  • Avoid the temptation to consider replication as backup. They are different and must always be treated as such. Certain scenarios require recovery from historical backup snapshots; data replication usually replicates your mistakes as well!
  • Test your solution: this applies equally to redundancy / failover and to capacity planning via load testing. An overloaded system is inherently an unreliable system.
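To make the load-balancing and failover ideas above concrete, here is a toy sketch of round-robin distribution that skips unhealthy servers. It is illustrative only – a real load balancer (such as the PaaS services mentioned above) handles health checks, connection draining, and retries for you, and the server names are invented:

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin balancer that skips servers marked unhealthy."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)      # start by assuming all are up
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        """Return the next healthy server in rotation, or raise if all are down."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["app1", "app2", "app3"])
lb.mark_down("app2")  # simulate a failed node
print([lb.next_server() for _ in range(4)])  # app2 is skipped over
```

Note how removing a node from the healthy set changes routing without interrupting service – the essence of removing single points of failure.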

By applying a high availability strategy, you can serve your customers through thick and thin, sending a message that you value their business. A highly available infrastructure also mitigates the negative impact of outages on revenue and productivity. Fortunately, with scalable cloud-based services at your disposal, high availability doesn’t have to come at a steep cost.

About the Author

Daan is a cloud computing and web security expert, and a blogger for hire. His current interests include enterprise automation and cloud-based security solutions.