HyperIP Series – You asked about Backup…
Posted by DaveHuhne
Backing up your data to a remote site is a business necessity. The design of your backup solution will depend on your requirements and on whose system you own. Backup window, WAN bandwidth, de-duplication, distance, and incrementals all come into play when making purchase decisions for a backup solution. So why would I need a WAN Accelerator? My storage backup system de-dupes and compresses the data before sending it to its remote DR site. If I install a WAN Accelerator, will it provide any additional value? Will the data be reduced further after de-dupe, and how much will my application throughput increase?
These are great questions, but first you have to determine whether there is a bottleneck in your network. My backups don’t complete on time. Why? My backup application throughput is low. Why? I can only back up certain servers per night, or my backup fills the window and new servers are arriving as we speak. I can’t keep up, so what should I do to solve the issue?
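One way to put numbers on those questions is a quick back-of-the-envelope check: how much average throughput would it take to move your nightly data inside the window? The figures below (2 TB nightly, an 8-hour window) are hypothetical placeholders, not from the post; substitute your own and compare the result against the throughput your backup application actually reports.

```python
# Feasibility check: does the nightly backup fit the window?
# The data size and window below are hypothetical -- use your own numbers.

def required_mbps(data_gb: float, window_hours: float) -> float:
    """Average throughput (megabits/s) needed to move data_gb in window_hours."""
    bits = data_gb * 8 * 1000**3            # decimal gigabytes -> bits
    return bits / (window_hours * 3600) / 1e6

needed = required_mbps(data_gb=2000, window_hours=8)   # 2 TB nightly, 8 h window
print(f"Required average throughput: {needed:.0f} Mbps")
```

If your backup application sustains far less than this over the WAN even though the link is rated higher, the network, not the backup system, is the likely bottleneck.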
I can buy more bandwidth, increase the buffers in my switches, or make the TCP windows bigger. Some of these remedies are expensive, some take time, and sometimes I simply can’t make changes to the network. A plausible alternative is to test a WAN Accelerator.
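To see why “make the TCP windows bigger” is on that list: a single TCP stream can never move faster than its window divided by the round-trip time, regardless of link speed. A rough sketch, using a hypothetical 100 Mbps link with 50 ms RTT (not figures from the post):

```python
# A single TCP stream is capped at window / RTT.
# Link speed and RTT below are hypothetical examples.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP stream's throughput given its window and RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

def bdp_bytes(link_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: the window needed to keep the pipe full."""
    return int(link_mbps * 1e6 / 8 * (rtt_ms / 1000))

print(max_tcp_throughput_mbps(64 * 1024, 50))   # ~10.5 Mbps with a 64 KB window
print(bdp_bytes(100, 50))                       # ~625 KB window to fill 100 Mbps
```

With an unscaled 64 KB window and 50 ms of latency, the stream tops out around 10 Mbps no matter how fat the pipe is, which is exactly the kind of ceiling a WAN Accelerator or window tuning is meant to remove.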
The HyperIP WAN Optimization virtual appliance helps alleviate many network issues that cause poor application performance and throughput over WANs. The software doesn’t care that your backup data has already been de-duped and/or compressed. It uses an adaptive compression algorithm and will attempt to further reduce de-duped data where possible. Compression is only one of the features that improve application performance. TCP transfers are also affected by any number of network issues, including congestion, jitter, latency, and packet loss. Even a minimal amount of packet loss can cut effective throughput in half. The resulting retransmits consume additional bandwidth, making you believe your WAN utilization is high when you are really only moving a fraction of the real data. HyperIP shields TCP applications from these network issues, allowing them to achieve maximum throughput.
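The claim that a little packet loss can halve throughput can be illustrated with the well-known Mathis approximation for loss-limited TCP: rate ≤ (MSS / RTT) · (C / √p), with C ≈ 1.22 for standard TCP. The MSS, RTT, and loss rates below are illustrative assumptions, not measurements from the post:

```python
import math

# Rough ceiling on loss-limited standard-TCP throughput (Mathis model):
#   rate <= (MSS / RTT) * (C / sqrt(p)),  C ~= 1.22
# MSS, RTT, and loss rates here are hypothetical examples.

def mathis_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    """Approximate throughput ceiling in Mbps for a given packet-loss rate."""
    return (mss_bytes * 8 / (rtt_ms / 1000)) * 1.22 / math.sqrt(loss) / 1e6

for p in (0.0001, 0.0004):          # 0.01% vs 0.04% packet loss
    print(f"loss={p:.2%}: ~{mathis_mbps(1460, 50, p):.0f} Mbps ceiling")
```

Because throughput scales with 1/√p, quadrupling the loss rate, still only 0.04%, halves the ceiling. That is why seemingly negligible loss on a long WAN link shows up as a dramatic drop in backup throughput.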
So, back to the question: “What data reduction will I get with HyperIP?” The answer is that HyperIP manages the network so that maximum throughput is achieved, as long as the application can deliver enough data to fill the pipe. As an example, we have customers whose backup windows have gone from 24+ hours down to single digits. A recent Veeam customer reduced their backup window from 15 hours to 3 hours with HyperIP. CLICK HERE to see the Veeam / HyperIP success story.
Obviously every backup environment is different, but downloading and testing HyperIP for yourself is quick and easy, and it could save you a lot of time.