
Remote Data Replication

DQI Bureau

In today's dynamic business environment, business continuity requirements are changing by the day. While addressing these needs, enterprises must also respond to new business drivers. The challenge is to reduce risk and increase business resilience while simultaneously cutting costs and improving efficiency.


While the need for business continuity is universal for all organisations, the degree of resilience required depends upon various aspects. For instance, in many industries and geographies, government regulations require companies to have effective business continuity plans that enable them to protect information assets and maintain their service capabilities in spite of a local or regional disaster. The regulated industries most likely to adopt out-of-region strategies worldwide include telecom, transportation, banking and other financial services, government, utilities, healthcare, and e-commerce.

Other factors crucial to charting out business continuity plans include data replication with guaranteed integrity and consistency, a clear definition of the scope of data that needs replication, and better RTO (Recovery Time Objective) and RPO (Recovery Point Objective) targets.
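The difference between the two objectives can be made concrete with a small sketch. The timestamps below are purely hypothetical: RPO bounds how much data may be lost (the gap between the last replicated write and the failure), while RTO bounds how long recovery may take.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for illustration only.
last_replicated_write = datetime(2006, 5, 1, 11, 58)  # last update safely at the remote site
failure_time = datetime(2006, 5, 1, 12, 0)            # primary site goes down
service_restored = datetime(2006, 5, 1, 12, 30)       # applications back online

# Achieved RPO: the window of transactions at risk of loss.
achieved_rpo = failure_time - last_replicated_write   # 0:02:00
# Achieved RTO: the window of downtime before service resumes.
achieved_rto = service_restored - failure_time        # 0:30:00
```

A plan meets its targets only if the achieved values stay at or below the stated objectives, e.g. `achieved_rpo <= timedelta(minutes=5)`.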

“Remote replication processes are considered the most acceptable form of protecting organizational data”



-Lim Beng Lay, product manager, Asia-South, Hitachi Data Systems


And companies quite often have to meet these requirements under the constraints of cost reduction and increased efficiency. The most commonly followed method of doing so is storage consolidation. A consolidated platform requires a greater degree of data protection and disaster resilience. Most data replication and business continuity solutions can use remote replication capabilities for data protection, but these replication solutions may themselves consume scarce resources that could affect the performance of applications.

The introduction of disk-based replication systems has given remote replication a positive fillip by significantly improving RTO and RPO. Currently, there are two types of replication strategies adopted by organisations: synchronous replication for local disasters and asynchronous replication for regional disasters.
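The trade-off between the two strategies can be sketched in a few lines of Python (a toy model, not any vendor's implementation): a synchronous write is acknowledged only after the remote copy commits, giving zero data loss at the cost of link latency on every write, while an asynchronous write is acknowledged immediately and shipped later, hiding latency but leaving queued updates at risk.

```python
import queue
import time

remote_log = []   # stands in for the remote storage array


def remote_write(block):
    time.sleep(0.01)          # simulated replication-link latency
    remote_log.append(block)


def sync_write(block):
    # Synchronous: acknowledge only after the remote copy commits,
    # so RPO is zero but every host write pays the link latency.
    remote_write(block)
    return "ack"


pending = queue.Queue()       # updates awaiting transfer


def async_write(block):
    # Asynchronous: acknowledge immediately and queue the update,
    # hiding latency but risking loss of in-flight data.
    pending.put(block)
    return "ack"


def drain():
    # Background transfer of queued updates to the remote site.
    while not pending.empty():
        remote_write(pending.get())
```

This is why the article pairs synchronous replication with local disasters (short links, tolerable latency) and asynchronous replication with regional ones (long links, where per-write latency would be prohibitive).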

Remote replication processes can place significant demands on bandwidth, leading to momentary link failures and, in turn, to painstaking and costly recovery processes. While both synchronous and asynchronous remote replication can co-exist within an organisation, existing solutions require storage for multiple copies of the data, as well as complex management and scripting.


In such a situation, a replication strategy that uses disk-based journaling and a pull-based replication engine to reduce resource consumption and costs, while increasing performance and operational resilience, may turn out to be the best bet.

Coupled with a disk-based journaling strategy, the use of a pull-based replication process can create one of the most effective remote replication solutions. In the traditional method, the primary storage system dedicates resources to pushing data across the replication link; in this approach, a remote replication engine instead pulls the data from the primary storage system's journal volume across the link and writes it to the journal volume at the receiving site.
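A minimal sketch of this pull model (the class names are hypothetical, not Hitachi's actual implementation) shows the division of labour: the primary array only appends updates to its disk journal, and the remote engine drives the transfer at its own pace.

```python
from collections import deque


class PrimaryArray:
    """Primary storage: each write is recorded in a disk-based journal volume."""

    def __init__(self):
        self.journal = deque()   # FIFO of (sequence, data) updates on disk
        self.sequence = 0

    def write(self, data):
        # Journaled to disk instead of being held in scarce array cache.
        self.sequence += 1
        self.journal.append((self.sequence, data))


class RemoteReplicationEngine:
    """Remote site pulls journal entries across the link at its own pace."""

    def __init__(self, primary):
        self.primary = primary
        self.remote_journal = []

    def pull(self, batch=8):
        # The remote engine, not the primary, drives the transfer, so the
        # primary spends no cycles pushing data when the link is congested.
        while self.primary.journal and batch:
            self.remote_journal.append(self.primary.journal.popleft())
            batch -= 1
```

Because the remote engine sets the pace, a slow or congested link simply lets the disk journal grow for a while instead of consuming primary cache or stalling production writes.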

There are many benefits to be gained from such an approach to remote replication. By using local disk-based journaling and a pull-based remote replication engine, the solution releases critical resources at the primary site that are consumed by other asynchronous replication approaches, such as disk array cache in storage-based solutions or server memory in host-based software approaches. This kind of solution improves cache utilization, lowering costs and improving the performance of production transaction applications. It also maximizes the use of bandwidth by better handling variations in the replication network resources.

Lim Beng Lay, product manager

Asia-South, Hitachi Data Systems
