Continuity and disaster recovery

With an alarmingly high failure rate, tape-based back-up should be seriously reconsidered as a primary back-up method.
Anton Jacobsz

Companies still using tape back-ups should prepare themselves for possible data loss. With a failure rate of up to 65%, and with tapes generally archived off site in warehouses where data must be located manually, tape back-ups are a risky place to keep business-critical data. In the event of a disaster, when recovery time is essential for business survival, enterprises need technologies that give them virtually instant access to their most critical information and applications so they can get up and running again.

Yet a frighteningly large number of companies still rely on outdated, unreliable manual processes to back up and store their data. With new legislation such as the Protection of Personal Information (POPI) Act and the exponential growth of data that must be stored and managed, recovery from traditional back-up technologies becomes massively problematic - even more so if a crucial data tape has been sitting in storage for five years. These technologies might have enjoyed a better reputation had they gone out along with vinyl records.

In an era where data is the lifeblood of the competitive enterprise, you need to know how quickly you can recover your critical data in the event of a disaster. Historically, full disaster recovery could take days or even weeks, but times have changed. Enterprises have become so reliant on technology to automate their processes that they can barely trade without it. In the event of a disaster, they not only lose revenue and suffer reputational damage; they can also lose data that is important from an auditing perspective, and risk losing legal documents, leaving them non-compliant.

Fortunately, advanced new solutions offer complete and effective disaster recovery and continuity that make traditional methods look as dated as the vinyl record.
Effective, advanced data recovery and continuity depend on five key principles:

  • Know what you want to protect
  • An effective data recovery plan begins with an understanding of the interdependencies between systems. You need a single-pane-of-glass view of which applications run on which servers and how traffic moves between them. You must know which systems you need to protect to meet your recovery point objectives, and which systems need to be recovered and made operational to meet your recovery time objectives. By automating the process of mapping transactions to their underlying infrastructure, accurate application definitions and interdependencies can be determined, allowing you to better plan and prioritise your disaster recovery and business continuity practices.
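Purely as an illustration of this kind of dependency mapping, the sketch below builds a small, hypothetical inventory (the application names, servers and recovery targets are invented, not drawn from any specific tool) and orders recovery by recovery time objective:

```python
# Illustrative sketch: a hypothetical inventory mapping applications to the
# servers they depend on, with recovery targets used to prioritise protection.
# All names and numbers below are invented examples.

applications = {
    "order-processing": {"servers": ["db-01", "app-03"], "rto_hours": 1, "rpo_minutes": 15},
    "payroll":          {"servers": ["db-02"],           "rto_hours": 24, "rpo_minutes": 240},
    "intranet":         {"servers": ["web-05"],          "rto_hours": 72, "rpo_minutes": 1440},
}

# Recover the most time-critical systems first: sort by recovery time objective.
recovery_order = sorted(applications.items(), key=lambda item: item[1]["rto_hours"])

for name, spec in recovery_order:
    print(f"{name}: restore servers {spec['servers']} within {spec['rto_hours']}h "
          f"(max data loss {spec['rpo_minutes']} min)")
```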

  • Simplify branch office protection to accelerate disaster recovery
  • Enterprises with branch offices spread across a number of geographic locations face added complexity in disaster recovery and business continuity. To overcome this challenge and significantly reduce the IT costs associated with numerous branch offices, enterprises should back up branch office data directly to the data centre. Riverbed Technology's customers have centralised over 47 petabytes of branch data and typically see a 30%-50% reduction in their total cost of ownership, as well as massively improved recovery times.
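To make the 30%-50% figure concrete, here is a back-of-the-envelope calculation using hypothetical branch counts and costs (not vendor data):

```python
# Hypothetical example: annual cost of running back-up infrastructure in 20 branches.
branches = 20
cost_per_branch = 15_000          # assumed annual cost (hardware, tapes, admin) per branch
current_tco = branches * cost_per_branch

# Applying the 30%-50% reduction range quoted above for centralised branch back-up.
for reduction in (0.30, 0.50):
    savings = current_tco * reduction
    print(f"{reduction:.0%} reduction on {current_tco:,} = {savings:,.0f} saved per year")
```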

  • Optimise replication between data centres
  • As organisations consolidate data centres, centralise branch servers and storage, and move from tape-based back-up to wide area network (WAN)-based data transfer and replication, the result is more data in fewer data centres, with large data sets transferred between them. Fast, reliable performance between data centres can be undermined by high latency, packet loss, limited bandwidth, congestion, and competition among applications. By optimising and accelerating replication workloads between data centres, data is better protected and WAN bandwidth better utilised.
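One reason latency matters as much as raw bandwidth is the single-stream TCP throughput bound, roughly the window size divided by the round-trip time. The sketch below applies that rule of thumb to assumed link and window figures to show why an un-tuned connection between distant data centres may never fill the pipe:

```python
# Rough illustration of why latency limits replication throughput on a single
# TCP stream: throughput is capped at window_size / round_trip_time.
# Link and window figures below are assumptions for the example.

window_bytes = 64 * 1024          # a default 64 KB TCP window
link_mbps = 1000                  # a 1 Gbps link between data centres

for rtt_ms in (1, 20, 100):       # local, regional, intercontinental round trips
    max_throughput_mbps = (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000
    usable = min(max_throughput_mbps, link_mbps)
    print(f"RTT {rtt_ms:>3} ms: at most {usable:,.1f} Mbps of the {link_mbps} Mbps link")
```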

  • Reduce costs with cloud storage
  • As storage and data volumes grow, the cost of protecting that data on premises rises, while cloud storage prices keep dropping. To take advantage of these cloud economics, IT organisations must integrate with cloud storage offerings, ensure security requirements are met, and solve the bandwidth challenge of uploading to the cloud.
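That bandwidth challenge is easy to quantify: upload time is simply data volume divided by effective throughput. The figures below are assumptions chosen purely for illustration:

```python
# Hypothetical example: how long a full back-up set takes to upload to cloud storage.
backup_tb = 10                              # assumed size of the back-up set
effective_mbps = 200                        # assumed usable uplink after overheads

backup_bits = backup_tb * 1_000_000_000_000 * 8
seconds = backup_bits / (effective_mbps * 1_000_000)
print(f"{backup_tb} TB at {effective_mbps} Mbps takes about {seconds / 3600:.1f} hours")
```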

  • Make applications resilient against site failures
  • With the average cost of data centre downtime estimated at around $7,900 per minute in 2013[1], it is crucial to architect key applications to run seamlessly from secondary or cloud data centres. Through a coordinated application delivery fabric spanning multiple, geographically distributed data centres and cloud-based environments, combined with a virtual application delivery controller, enterprises can reduce the capital costs associated with building redundant application environments.
    [1]"2013 Cost of Data Center Outages," Ponemon Institute, December 2013

About Anton Jacobsz

Anton Jacobsz is the Managing Director of Riverbed distributor Networks Unlimited.