When disaster strikes
Generally speaking, data centres enjoy more stringent protection against the natural elements than most structures, given the value of the information they hold. However, not even multinational IT giants can hold back the full force of Mother Nature at all times.
Every second counts
"The demands of being always-on have risen exponentially in recent years. The digitisation of enterprise business processes means that even the smallest system fault can have a profound knock-on effect. An intelligent availability solution helps minimise the impact of unscheduled interruptions," says Olivier.
According to the latest Veeam Data Centre Availability Report, 82 percent of CIOs say they cannot meet their business's availability needs. Every data loss and every second of downtime costs money and reputation. To limit the damage of unexpected downtime, the interval between backups, and therefore the maximum window of data that can be lost, should be no longer than 15 minutes.
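To make that guideline concrete, here is a minimal sketch of a monitoring check, assuming a script that knows the timestamp of the newest backup (the names and the threshold are illustrative, not any vendor's API):

```python
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=15)  # maximum tolerable gap between restore points

def rpo_breached(last_backup: datetime) -> bool:
    """Return True if the newest backup is older than the allowed RPO."""
    return datetime.now(timezone.utc) - last_backup > RPO

# A backup taken 22 minutes ago breaches a 15-minute RPO:
last = datetime.now(timezone.utc) - timedelta(minutes=22)
print(rpo_breached(last))  # True: up to 22 minutes of data is at risk
```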
But it is not just about speed. The quality and accessibility of the backups themselves are also critical. For mission-critical data that must be on hand instantly, most enterprises trust onsite storage, yet this method is not foolproof. In fact, the survey shows that more than 90 percent of CIOs are under pressure both to recover data faster, reducing the financial impact of unplanned downtime, and to back up data more often, reducing the risk of data loss.
3-2-1
"This is where the 3-2-1 rule can near-guarantee the availability of data when one route to data or services is disabled. It states that a comprehensive availability strategy consists of at least three copies of data, which are stored on at least two different media, one of which is stored externally," he says.
But no matter how fast a workload can be restored from a backup and made available in an emergency, that backup must also be up to date. If an enterprise is to minimise the risk of data loss, it needs very short recovery point objectives (RPOs). The same applies to offsite replication: the data transmitted to a cloud service or external data centre must reflect the most recent backup state.
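In the same spirit, a short sketch of a replication-freshness check, assuming each site can report the timestamp of its newest restore point (the timestamps below are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

MAX_LAG = timedelta(minutes=15)  # tolerated gap between site and offsite replica

def replica_current(local_newest: datetime, replica_newest: datetime) -> bool:
    """True if the offsite copy trails the newest local backup by at most MAX_LAG."""
    return local_newest - replica_newest <= MAX_LAG

local = datetime(2016, 5, 1, 12, 30, tzinfo=timezone.utc)
replica = datetime(2016, 5, 1, 12, 20, tzinfo=timezone.utc)
print(replica_current(local, replica))  # True: the replica is 10 minutes behind
```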
Embrace regular testing
"One of the most effective ways to ensure this is to conduct frequent tests on what would happen to your systems in the event of a disaster. Decision-makers want to have peace of mind that when bad things happen, the business can carry on as usual."
This is especially important given the Veeam survey finding that one in six backup recoveries fails. At 13 incidents of application downtime per year, that failure rate means data will be permanently lost at least twice (13 ÷ 6 ≈ 2.2 failed recoveries). And with the financial impact of this lost data estimated to be at least R9 million annually, companies cannot afford the risk.
Olivier says that if companies partner with the right service provider, testing should be a seamless experience and one that can be done daily. "Imagine you tested your backups seven days ago and everything was recoverable. Then something goes wrong, and that is your last known valid restore point: you have just lost a week's worth of data. However, there are still perceptions out there that effective testing is difficult to conduct. Other misconceptions to overcome are that testing takes a lot of time and resources and costs money. In a virtualised environment with a trusted partner, the reality could not be further from this," says Olivier.
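As a sketch of the principle behind such automated tests, assume a nightly job that restores a sample file and compares it with the original (the restore step itself is a placeholder; real products expose their own mechanisms):

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Fingerprint a file so the restored copy can be compared with the source."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_verified(source: Path, restored: Path) -> bool:
    """A restore only counts as valid if the recovered data matches the original."""
    return checksum(source) == checksum(restored)

# Scheduled daily (for example via cron), a check like this turns
# "we assume the backup works" into "yesterday's restore point was
# proven recoverable".
```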
So consider embracing testing on an ongoing basis. After all, says Olivier, a false sense of security about availability and backup is worse than no strategy at all.