Byte Level Data Replication to Remote Server

The following was suggested in the thread "Re: Byte Level Replication under Linux".


After quite a bit of research and product trials, I narrowed things down to a few solutions.

One option was not replicating the data on our RHEL server at all, but instead doing normal byte-level replication inside the Windows hosts that we run under VMware on the RHEL box. That proved costly: software like DoubleTake isn't cheap, and we would also need a pair of licenses for every Windows host we wanted to replicate.

I looked into another product called NetVault Replicator from BakBone. It is truly a great product, but quite pricey: it would do everything I needed, but to the tune of $10,000 per pair of HA servers.

In the end I settled on experimenting with DRBD, Heartbeat, and Open-iSCSI.

Here’s what I did:

I set up two machines under VMware as a test, installed Fedora 7 on each, and updated to the latest kernels. I then downloaded DRBD and configured it on both machines. I gave DRBD a physical disk to manage and copied a bunch of files onto the disk on the primary server; that data was replicated at the block level over to the secondary server. I then promoted the secondary to primary with DRBD, mounted the volume, and all my files were there. Excellent.
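For reference, a minimal DRBD resource definition along these lines might look like the following. This is a sketch, not my exact config: the hostnames, disk device, and addresses are placeholders you would replace with your own.

```
# /etc/drbd.conf -- minimal single-resource sketch (DRBD 8.x syntax)
resource r0 {
    protocol C;                       # synchronous replication

    on node1 {
        device    /dev/drbd0;         # replicated block device
        disk      /dev/sdb1;          # physical disk DRBD manages
        address   192.168.1.10:7788;  # replication link
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.11:7788;
        meta-disk internal;
    }
}
```

Promoting the secondary by hand is then just `drbdadm primary r0` on that node, after which /dev/drbd0 can be mounted normally.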

Once DRBD was fully tested I downloaded and installed Heartbeat V2. After a couple hours of reading and configuration, I had Heartbeat managing DRBD and the mount/dismount of the volumes. If I failed the primary server, Heartbeat would automatically promote the backup server to primary and mount the drive accordingly. Excellent. I now had automated failover between the two servers.
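Heartbeat v2 can use either its newer CRM/XML configuration or the simpler v1-style files; a sketch of the latter follows. Node names, interface, and the virtual IP are placeholders, not my actual values.

```
# /etc/ha.d/ha.cf -- minimal heartbeat settings
keepalive 2            # seconds between heartbeats
deadtime 30            # seconds before a node is declared dead
bcast eth0             # interface to send heartbeats on
auto_failback on       # fail back when the primary returns
node node1 node2

# /etc/ha.d/haresources -- v1-style resource line
# node1 is the preferred primary; on failover Heartbeat promotes the
# DRBD resource, mounts the filesystem, and takes over the virtual IP.
node1 drbddisk::r0 Filesystem::/dev/drbd0::/mnt/array::ext3 192.168.1.100
```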

I tested that for a bit to make sure it was working as planned and attempted various methods of “breaking” the replication to make sure it was stable and would recover from failure in a predictable manner.

When all my testing was done, I went ahead and installed an iSCSI target on the box and set up fileio block volumes on the DRBD-managed drive. I mounted the /dev/drbd0 drive under /mnt/array/ and created block volumes at /mnt/array/iscsi-target0.img, /mnt/array/iscsi-target1.img, and /mnt/array/iscsi-target2.img. Once the iSCSI targets were created, I used the wonderful (heh, right) Microsoft iSCSI Initiator 2.04 on my XP machine and mounted the targets. Once mounted, I was able to copy a bunch of files from a local drive to one of the iSCSI drives just as you would to a local drive.
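Assuming the iSCSI Enterprise Target (ietd), which was the common Linux target at the time, the fileio volumes above would be exported with a config along these lines (the IQNs are made-up examples):

```
# /etc/ietd.conf -- fileio LUNs backed by files on the DRBD volume
Target iqn.2007-10.local.example:storage.target0
    Lun 0 Path=/mnt/array/iscsi-target0.img,Type=fileio
Target iqn.2007-10.local.example:storage.target1
    Lun 0 Path=/mnt/array/iscsi-target1.img,Type=fileio
Target iqn.2007-10.local.example:storage.target2
    Lun 0 Path=/mnt/array/iscsi-target2.img,Type=fileio
```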

My final failover testing consisted of taking the primary offline and verifying that Heartbeat properly mounted the /dev/drbd0 drive on the secondary and then fired up the iSCSI daemon to export the targets. All worked well. Bringing the primary back on-line automatically triggered a reverse replication from the secondary back to the primary before stopping iSCSI and unmounting the drives on the secondary and then mounting the drives and starting iSCSI on the newly recovered primary.
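The takeover that Heartbeat automates is roughly equivalent to running the following by hand on the surviving node. This is a sketch using the resource and mount names from above, with a placeholder init script name for the target daemon, not my exact scripts.

```shell
# On the surviving secondary, take over the primary role:
drbdadm primary r0               # promote the DRBD resource
mount /dev/drbd0 /mnt/array      # mount the replicated filesystem
/etc/init.d/iscsi-target start   # export the fileio targets again

# When the failed node returns, DRBD resyncs the changed blocks back
# to it before Heartbeat reverses these steps and fails back.
```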

The only issue I have now is getting the Microsoft iSCSI Initiator to look at the virtual IP address that Heartbeat tosses back and forth between the servers rather than the physical IP addresses of the two boxes. It appears to just be a Microsoft iSCSI implementation issue, but I need to do a bit more testing to confirm that.

If anyone is in a position where they need to build extremely low-cost, high-availability file servers, this is a great way to go. I'd gladly share my config files and any tweaks I had to perform to get this running.