HyperIP by NetEx Blog

HyperIP Series – You Asked About WAN Acceleration of Encrypted Data…

Posted by Marketing

A customer recently asked a question during a webcast, “How does HyperIP accelerate encrypted data?” The answer is, it depends.

In cases where the data was encrypted when it was written to disk, as is required in most financial institutions, encryption poses problems for WAN optimization controllers, which need to inspect the data to perform their optimization techniques:

1. Compression and deduplication on the network can no longer be applied to a secured/encrypted packet, so data reduction algorithms are moot.

2. Data security is paramount, so movement of that data over the IP network requires the datagram to remain intact; decrypting and then re-encrypting it in transit would put data security at risk. That also means data pattern caching in disk or memory is no longer applicable.

3. Payloads in the encrypted data block can be quite large, requiring a data streaming technology to meet window requirements and adhere to aggressive RTOs.
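The compressibility point in item 1 is easy to demonstrate: well-encrypted data is statistically indistinguishable from random bytes, so standard compressors gain nothing on it. A minimal Python sketch, using `os.urandom` as a stand-in for ciphertext:

```python
import os
import zlib

# Repetitive plaintext, typical of backup or replication streams
plaintext = b"customer_record,2024-01-01,ACME Corp;" * 1000

# os.urandom stands in for a well-encrypted payload: to a compressor,
# good ciphertext looks like random bytes
ciphertext_like = os.urandom(len(plaintext))

plain_ratio = len(zlib.compress(plaintext)) / len(plaintext)
cipher_ratio = len(zlib.compress(ciphertext_like)) / len(ciphertext_like)

print(f"plaintext compresses to  {plain_ratio:.1%} of original size")
print(f"ciphertext compresses to {cipher_ratio:.1%} of original size")
```

The repetitive plaintext shrinks to a few percent of its size, while the random stand-in does not shrink at all; the compressor's framing overhead typically makes it slightly larger.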

So how can HyperIP WAN Optimization Virtual Appliance from NetEx accelerate encrypted data?

If the data is encrypted before it reaches HyperIP, compression won't be possible, but HyperIP will still be able to mitigate the network issues that degrade WAN performance. SSL data payloads, certificates, and keys all pass through HyperIP's accelerated transport at or near wire speed. No matter the distance or latency, the packet loss on the WAN, or the amount of network congestion and out-of-order delivery, HyperIP will maximize the throughput of the application. This allows for complete data security: the SSL-encrypted block of data is never modified, so the integrity of the payload is preserved, and the acceleration is transparent to both the application and the encryption.

If the traffic is encrypted with a Taclane KG encryptor, HyperIP takes the unencrypted data from the source, optimizes the transport of that data to near wire speed, compresses the data blocks to reduce traffic on the WAN, and then hands the data to the encryption appliance. This is the preferred solution in most Department of Defense implementations, where specific encryption gear is required. It allows for complete WAN acceleration of the block of data before it is encrypted. Global replication and backup of data can now leverage HyperIP's value while maintaining complete data security with government-approved encryption on the WAN links.
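The ordering described above, compress first and then encrypt, is what makes the difference. Here is an illustrative Python sketch; `toy_encrypt` is a hypothetical XOR-keystream stand-in, not real cryptography, used only to show why the same bytes stop compressing once scrambled:

```python
import hashlib
import zlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Illustrative XOR keystream scrambler (a stand-in, NOT real crypto):
    it randomizes the byte stream enough to defeat a compressor, as a
    real cipher would."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

data = b"block of replication data with lots of repetition " * 500
key = b"shared-secret"

# HyperIP-style ordering: compress first, then hand off to the encryptor
compress_then_encrypt = toy_encrypt(zlib.compress(data), key)

# Reverse ordering: the scrambled stream no longer compresses
encrypt_then_compress = zlib.compress(toy_encrypt(data, key))

print(len(data), len(compress_then_encrypt), len(encrypt_then_compress))
```

Compressing before encrypting yields a small payload to put on the WAN; encrypting first leaves the compressor nothing to work with.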

See a success story about HyperIP in a DoD implementation:

Whether you are moving your secured data to a cloud storage provider, your own private cloud facility, a centralized data repository from remote offices, or an in-house DR facility, HyperIP can significantly improve the performance of your applications.


This entry was posted in HyperIP.

HyperIP Series – You Asked About TSM Testing with HyperIP…

Posted by DaveHuhne

We recently had an opportunity to test IBM Tivoli Storage Manager (TSM) Client to a TSM Server in our HyperIP lab. When doing any kind of application verification or performance testing it is important to first determine the overall limits of the native application with and without WAN acceleration.

Lab testing in an emulated environment is a good way to test applications because you can mimic specific network topologies and characteristics. In our case the HyperIP lab consists of two HyperIP WAN Optimization virtual appliances, two Windows servers, and a distance simulator for the WAN. The simulator can inject packet loss, network latency, and other network conditions that degrade application performance over various bandwidths.

The main objective of any test is to validate whether HyperIP can accelerate the application over various distances with varying latency and packet loss scenarios. Every application has its own performance characteristics and limitations. The same is true for WAN networks; they are about as unique as fingerprints.

Like many backup applications, TSM was designed for the data center and performs very well when moving data short distances. Since we are truly becoming a global society, it is important to be able to move data over longer distances, which is clearly a requirement of cloud storage environments.

In the case of IBM TSM, we started testing with a simple delay of 10 ms round-trip time (RTT). At this relatively short distance TSM slowed by 80% compared to its native performance. This is typical application degradation, due primarily to the inefficiencies of the TCP transport and not necessarily the fault of the TSM application. When HyperIP was added to the configuration, the TSM application was able to achieve throughput equivalent to its native, no-delay performance. In fact, HyperIP helped TSM achieve near-native performance rates at distances represented by 40 ms, 80 ms, and 320 ms RTT, all the way up to a 1-second RTT. This is a testament to how well TSM and HyperIP interoperate.
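The RTT sensitivity seen in this test is consistent with the well-known Mathis approximation for a single standard TCP stream, throughput ≈ MSS / (RTT × √p): the throughput ceiling falls linearly as round-trip time grows. A quick sketch, with an illustrative loss rate of 0.01% assumed:

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes=1460, rtt_ms=10.0, loss=0.0001):
    """Mathis et al. approximation for one standard TCP stream:
    throughput ceiling = MSS / (RTT * sqrt(p)). Illustrative only."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6  # megabits/sec

# Same RTT values exercised in the lab test
for rtt in (10, 40, 80, 320, 1000):
    print(f"{rtt:>5} ms RTT -> ~{tcp_throughput_mbps(rtt_ms=rtt):8.1f} Mb/s ceiling")
```

With these assumed numbers, native TCP's ceiling drops a hundredfold between 10 ms and 1 second of RTT, which is why an unassisted transfer degrades so sharply while a transport that masks the RTT does not.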

Many applications have internal limitations such as outstanding operations, queue size, or queue depth that artificially restrict their ability to maximize throughput. That was not the case with TSM, which can certainly pump data over the network when it is not encumbered by TCP performance issues. Operating together, TSM and HyperIP can sustain the same throughput rates whether running across town, across the ocean, or around the world. That was very impressive. TSM over HyperIP brings LAN-like performance to WAN-based remote backups.


HyperIP Series – You Asked About Enabling Centralized Remote Backup

Posted by Marketing

A successful remote backup and recovery process depends on the right backup applications, the right management of those apps, and the network to support it. How can HyperIP WAN optimization virtual appliance enable this? Let's look at a typical remote backup solution consisting of remote servers residing in a branch and a central repository of backup data residing in a data center. These servers, virtualized in most cases, require remote backups to occur within a given backup window for each server. These backups are at the mercy of the WAN bandwidth to/from the branch. To reduce the backup windows, or at worst meet them, the WAN overhead has to be eliminated.

Typically, TCP overhead limits actual application throughput over these WAN links. The table below shows anticipated application throughput with HyperIP. Compare this to what you get now and the value proposition of HyperIP becomes evident.

HyperIP mitigates the effects of packet loss, latency, and out-of-order packets to drive the WAN link closer to wire speed (~95% utilization). Then, if needed, block-level compression, a feature of HyperIP, is applied to further reduce the amount of data traveling over the WAN link, dramatically increasing application throughput. This effectively gives the WAN LAN-like performance from the backup client to the backup server destination.
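As a rough illustration of how utilization and compression translate into backup windows, here is some back-of-the-envelope arithmetic; the 500 GB data set and 100 Mb/s link are hypothetical, and the utilization figures are representative rather than measured:

```python
def backup_hours(data_gb, link_mbps, utilization, compression_ratio=1.0):
    """Hours to move data_gb over a link, given achieved utilization (0-1)
    and an optional compression ratio (2.0 halves the bytes on the wire).
    Illustrative arithmetic only; real results vary by workload."""
    effective_mbps = link_mbps * utilization * compression_ratio
    return (data_gb * 8 * 1000) / effective_mbps / 3600

# Hypothetical nightly backup: 500 GB over a 100 Mb/s WAN link
print(f"degraded TCP (~30% util):  {backup_hours(500, 100, 0.30):.1f} h")
print(f"~95% utilization:          {backup_hours(500, 100, 0.95):.1f} h")
print(f"~95% util + 2:1 compress:  {backup_hours(500, 100, 0.95, 2.0):.1f} h")
```

Under these assumptions the same job goes from roughly a day and a half to well inside a single overnight window, which is the difference between meeting the backup window and blowing it.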

Sound interesting? Want to try HyperIP with your backup application? Go to our website at www.netex.com and click the big orange download box. This will get you started on the right track. Our SE team at NetEx will be glad to help size your bandwidth requirements for remote backup. Feel free to CONTACT US.


HyperIP Series – You asked about Backup…

Posted by DaveHuhne

Backing up your data to a remote site is a business necessity. The method or design of your backup solution will depend on your requirements and which system you own. Things like the backup window, the WAN, dedupe, distance, and incrementals all come into play when making purchase decisions for a backup solution. So why do I need a WAN Accelerator? My storage backup system dedupes and compresses the data before sending it to its remote DR site. If I install a WAN Accelerator, will it provide any additional value? Will data be further reduced after dedupe, and how much will my application throughput increase?

These are great questions, but first you will have to determine whether there is a bottleneck in your network. My backups don't complete on time; why? My backup application throughput is low; why? I can only back up certain servers per night, or my backups fill the window and new servers are arriving as we speak. I can't keep up, so what should I do to solve the issue?

I could buy more bandwidth, increase the buffers in my switches, or make the TCP windows bigger. Some of these remedies are expensive or take time, and maybe I can't make changes to the network at all. A plausible alternative is to test a WAN Accelerator.

HyperIP WAN Optimization virtual appliance helps alleviate many network issues that cause poor application performance and throughput over WANs. The software doesn't care that your backup system's data has already been deduped and/or compressed. It uses an adaptive compression algorithm and will attempt to further reduce deduped data where possible. Compression is only one feature that improves application performance. TCP transfers are also affected by any number of network issues, including congestion, jitter, latency, and packet loss. A minimal amount of packet loss can cut effective throughput in half, and the resulting retransmits further consume your bandwidth, making you believe your WAN utilization is high when in fact you are only moving a fraction of the real data. HyperIP shields TCP applications from these network issues, allowing them to maximize throughput.

So back to the question, "What data reduction will I get with HyperIP?" The answer is that HyperIP will manage the network so that maximum throughput is achieved as long as the application can deliver enough data to fill the pipe. As an example, we have customers whose backup windows have gone from 24+ hours down to single digits. A recent Veeam customer reduced their backup window from 15 hours to 3 hours with HyperIP. CLICK HERE to see the Veeam / HyperIP success story.

Obviously every backup environment is different, but downloading and testing HyperIP for yourself is quick and easy and could save you a lot of time.


HyperIP WAN Optimization Virtual Appliance Solves Key Network Issues for Dell EqualLogic Replication

Posted by Marketing

A customer was recently experiencing slow replication of their databases using Dell EqualLogic Replication from their iSCSI SAN. Concerned about replication windows, they decided to research performance on the Dell website, where they found the following document:

Using Dell EqualLogic Auto Replication

This document explains use-case scenarios and various configuration issues for Dell EqualLogic Replication in a typical DR scenario. Two of the main network considerations when implementing Dell EqualLogic Auto Replication are the bandwidth and latency of the circuit. Over high-latency connections, replication will still work, but you should allow more time for it to complete. This is sound advice. From our experience, packet loss can also have a huge impact on application throughput.

HyperIP WAN Optimization virtual appliance, when configured with EqualLogic Replication, can reduce bandwidth requirements for replication by over 50% while maintaining high throughput. This brings replication windows down to minutes instead of hours, allowing for easy testing of the recovery plan, as well as recovery of a node, server, cluster, or the entire site's data volumes after a true disaster.

Simply put, your benchmark or replication assessment must take into consideration that the average available bandwidth of a link with any packet loss or latency can be reduced to 40%-60%. HyperIP accelerates applications over WANs, at any time of day, to nearly 95% utilization. With a combination of block-level compression and latency and packet loss mitigation, HyperIP can improve EqualLogic replication throughput by 3X-6X.
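The 3X-6X figure follows from the numbers above: raising link utilization from the 40%-60% range to ~95% and shrinking the bytes on the wire with compression multiplies effective throughput. A quick sketch, where the 2:1 compression ratio is an assumption for illustration:

```python
def speedup(baseline_util, hyperip_util=0.95, compression=2.0):
    """Effective throughput multiplier: utilization gain times wire-data
    reduction. The ~95% utilization figure comes from the text; baseline
    utilizations are the 40%-60% range cited above, and the 2:1
    compression ratio is an illustrative assumption."""
    return (hyperip_util * compression) / baseline_util

for base in (0.60, 0.50, 0.40):
    print(f"baseline {base:.0%} util -> ~{speedup(base):.1f}x faster replication")
```

With a worse baseline link or a more compressible data set, the multiplier lands toward the upper end of the quoted range.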

Check out this TechValidate case study from one of our customers who uses Dell EqualLogic Replication with HyperIP. Click here to see the case study.
