HyperIP by NetEx Blog

2016 NetEx/IP-HyperIP Security Enhancement Update

Posted by Marketing

NetEx/IP® and HyperIP® Today

Network Executive Software, Inc. (NESi) brings high performance file transfer technology to the industry-standard IP environment with its NetEx/IP and HyperIP software products.

NetEx/IP is many times faster than TCP over long distances, which makes it the ideal solution for moving massive amounts of mission- or time-critical data across the country or across the globe.  As proven by our long-term users, NetEx/IP has the highest throughput rates over long distances with no degradation of performance because of its efficient bandwidth utilization and mitigation of the effects of packet loss and latency. In fact, NetEx/IP and its predecessor product NetEx/HC (HyperChannel) have provided solutions for moving data for global corporations and US state and government agencies for more than 30 years.

For existing TCP applications, the premier solution is NESi’s low-cost HyperIP. HyperIP transparently implements NetEx/IP in the data path to provide all the NetEx/IP improvements, plus data compression, on a virtual machine, without modifying existing applications or operating procedures.

The Challenge: Securing the Data

With an increase in hacking and breaches of sensitive databases in recent years, many corporations (especially those in the financial, government, or health sectors) and US government agencies are looking at ways to better protect data transiting between sites and the databases themselves.  Data security is also of utmost importance for those customers utilizing shared/public networks.

NetEx/IP Security Enhancements

NESi recognizes this concern for data protection and is therefore planning to enhance NetEx/IP and HyperIP over the next year with standards-based security technology like Transport Layer Security (TLS) to significantly increase the security of the data being moved across the computer room, the country, or globally.

TLS is a cryptographic protocol that secures data as it is transmitted, focusing on authentication, data integrity, and data confidentiality. With TLS, keys are generated uniquely for each connection and are based on a shared secret negotiated at the start of a session, providing security between two applications using NetEx/IP or HyperIP.  Adding TLS to our NetEx/IP products will also provide improved security for our BFX & PFX utilities, which interface upward to customer applications.
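As a rough illustration of the kind of standards-based TLS channel described above (a sketch using Python's ssl module, not NetEx/IP's actual implementation; the hostname and port are hypothetical), an ordinary TCP socket can be upgraded in place, leaving the application above the transport unchanged:

```python
import socket
import ssl

# Client-side TLS context: authenticate the peer, require certificate
# validation, and let the handshake negotiate a fresh session key for
# each connection -- the per-connection keying described above.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Wrapping the raw socket upgrades it in place; the application sends and
# receives as before (hostname and port below are illustrative only):
# with socket.create_connection(("transfer.example.com", 9876)) as raw_sock:
#     with context.wrap_socket(raw_sock, server_hostname="transfer.example.com") as tls:
#         tls.sendall(b"payload")

print(context.verify_mode == ssl.CERT_REQUIRED)  # default context verifies peers
```

Because the wrap happens at the transport layer, this is the same property that lets HyperIP add security without changes to existing applications or operating procedures.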

In addition to data security, adaptive block compression of data will also be added to NetEx/IP, thus decreasing WAN bandwidth usage and effectively increasing the application data throughput over the network.
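The idea behind adaptive block compression can be sketched simply: compress each block, but send it uncompressed whenever compression buys nothing. This is a simplified illustration using zlib, not NESi's actual algorithm:

```python
import zlib

def adaptive_compress(block: bytes, min_gain: float = 0.9) -> tuple[bytes, bool]:
    """Compress one block, but pass it through unchanged unless compression
    shrinks it below min_gain of the original size. The per-block decision
    is the 'adaptive' part: incompressible blocks cost nothing extra."""
    packed = zlib.compress(block, level=6)
    if len(packed) < len(block) * min_gain:
        return packed, True
    return block, False

text_block = b"highly repetitive payload " * 100  # lots of redundancy
dense_block = bytes(range(256))                   # no repetition to exploit

out1, did1 = adaptive_compress(text_block)
out2, did2 = adaptive_compress(dense_block)
print(did1, did2)  # the repetitive block is compressed; the dense one passes through
```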


Continuation of TSM 6.3 Replication testing over HyperIP

Posted by Marketing

We recently had an opportunity to test IBM Tivoli Storage Manager (TSM) release 6.3 replication in our HyperIP lab. IBM just released this feature as part of their TSM 6.3 release in November. As stated in our previous Blog entry about TSM Backup testing, http://www.netex.com/blog/?p=175, it is important to first determine the overall limits of the native application before WAN acceleration.

Our test configuration included two HyperIP WAN Optimization virtual appliances, two Windows servers running TSM 6.3, and a distance simulator for the WAN. The WAN simulator can inject packet loss, network latency, and other network conditions over various bandwidths that degrade replication performance.

Like many other applications, replication is designed for the datacenter-to-datacenter movement of corporate data. Most replication applications perform very well when moving data over short distances or in a metro environment. Customers running TSM Replication will, in many cases, need the remote site to be extended over the WAN to an internal DR site, DR service provider, or cloud storage provider. Whenever distance is involved, network conditions such as latency and packet loss can significantly degrade application performance, with a huge impact on throughput and application efficiency.

In our lab, when latency and packet loss were introduced, TSM native replication performance slowed by more than 80%, due to the typical inefficiencies of the TCP transport and not necessarily the fault of the TSM application. When HyperIP was added to the configuration, TSM Replication achieved throughput equivalent to native performance with no delay. In fact, HyperIP helped TSM Replication achieve near-native line speeds at distances represented by 40 ms, 80 ms, and 320 ms RTT, all the way up to a 1 second RTT. TSM Replication over HyperIP performed quite well at any distance, even with a significant amount of packet loss. In some cases HyperIP will accelerate TSM Replication by 6X; if 2:1 compression is possible, the TSM acceleration with HyperIP may approach 12X. Check it out for yourself: download HyperIP by clicking on the big orange box above.

Want more information about TSM performance with HyperIP? Send an email to info@netex.com.

Links to our Product information and Best Practices are found here:
HyperIP product info: http://www.netex.com/hyperip
TSM Best Practices with HyperIP: http://www.netex.com/index.php/download_file/view/301
Become a HyperIP reseller: http://www.netex.com/partners/register
IBM PartnerWorld Link: HyperIP Virtual WAN Optimization
IBM Tivoli Storage Blog Link: NetEx HyperIP Accelerates TSM Replication



HyperIP Series – You Asked About TSM Testing with HyperIP…

Posted by DaveHuhne

We recently had an opportunity to test IBM Tivoli Storage Manager (TSM) Client to a TSM Server in our HyperIP lab. When doing any kind of application verification or performance testing it is important to first determine the overall limits of the native application with and without WAN acceleration.

Lab testing in an emulated environment is a good way to test applications because you can mimic specific network topologies and characteristics. In our case, the HyperIP lab consists of two HyperIP WAN Optimization virtual appliances, two Windows servers, and a distance simulator for the WAN. The simulator can inject packet loss, network latency, and other network conditions over various bandwidths that degrade application performance.

The main objective of any test is to validate whether HyperIP can accelerate the application over various distances under varying latency and packet-loss scenarios. Every application has its own performance characteristics and limitations. The same is true for WAN networks; they are about as unique as fingerprints.

Like many backup applications, TSM was designed for the data center and performs very well when moving data over short distances. As we become a truly global society, it is important to be able to move data over longer distances, which is clearly a requirement of cloud storage environments.

In the case of IBM TSM, we started testing with a simple delay of 10 ms round-trip time (RTT). At this relatively short distance, TSM slowed by 80% compared to its native performance. This is typical application degradation, due primarily to the inefficiencies of the TCP transport and not necessarily the fault of the TSM application. When HyperIP was added to the configuration, the TSM application achieved throughput equivalent to its native, no-delay performance. In fact, HyperIP helped TSM achieve near-native performance rates at distances represented by 40 ms, 80 ms, and 320 ms RTT, all the way up to a 1 second RTT. This is a testament to how well TSM and HyperIP interoperate.
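The degradation pattern we observed is consistent with the well-known Mathis model, which bounds a single TCP flow's steady-state throughput at roughly (MSS/RTT) × 1.22/√p. A back-of-the-envelope illustration follows; the 0.1% loss rate is an assumption for the sake of the example, not our lab's measured value:

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Mathis-model upper bound on steady-state TCP throughput:
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22 for periodic loss."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1e6

# One flow, 1460-byte segments, 0.1% packet loss, at the RTTs from the test matrix
for rtt_ms in (10, 40, 80, 320, 1000):
    rate = tcp_throughput_mbps(1460, rtt_ms / 1000, 0.001)
    print(f"{rtt_ms:5d} ms RTT -> <= {rate:8.1f} Mb/s")
```

Note that the bound falls in direct proportion to RTT, which is why native TCP collapses at distance regardless of the link's raw bandwidth, and why a transport that sidesteps this behavior can hold throughput flat from 10 ms to a full second of RTT.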

Many applications have internal limitations such as outstanding operations, queue size, or queue depth that artificially restrict the application’s ability to maximize throughput. That was certainly not the case with TSM. TSM can certainly pump data over the network when it is not encumbered with TCP performance issues. When operating TSM with HyperIP, the two combined can sustain the same throughput rates whether running across town, across the ocean, or around the world. That was very impressive. TSM over HyperIP brings LAN-like performance to WAN-based remote backups.



When gambling, many times the River Card does not help…

Posted by Marketing

An enterprise online gaming company uses HyperIP WAN Optimization virtual appliance for global replication acceleration. The company started off using the HyperIP appliance, liked it so much that they migrated to the virtual version of HyperIP which in their environment runs on VMware ESXi. For them, the HyperIP WAN Optimization virtual appliance solution is very cost effective, very easy to implement and provides the ability to scale with software as transfer requirements increase. Everybody likes a little investment protection, right?

So what problem was this company trying to solve? Like many other global enterprises, they were challenged by their disaster recovery processes. They used the public internet to move terabytes of data during replication but found it increasingly difficult to meet the recovery time objectives mandated by their disaster recovery plans. The public internet was much less expensive than dedicated circuits but was hampered by latency, packet loss, and out-of-order delivery. The company also wanted to reduce their transfer windows while making more efficient use of existing WAN resources and controlling bandwidth costs.

The customer uses EMC SRDF/A between sites and added Oracle DataGuard as a second replication application between sites. They tested Oracle DataGuard without informing anyone from NetEx and as expected, HyperIP worked like a charm. The point is, it is pretty easy to add additional applications to operate with HyperIP.

Did the customer try any other WAN Optimization solutions? Yes, they tried Riverbed Steelhead appliances but decided to keep using HyperIP because of its significant performance advantage and the cost effectiveness of the software solution.

At the end of the day, HyperIP helped this online gaming customer reduce replication, backup, and migration time frames by as much as 60%. The fact that HyperIP is a VMware Ready solution is extremely important to this customer. With the HyperIP WAN Optimization virtual appliance solution, the customer is happy with the ease of deployment, low cost, ease of support and maintenance, and ease of integration into their existing virtual environment, including the speed of deploying newly created virtual machines.

This customer is very satisfied with their HyperIP WAN Optimization virtual appliance solution.

Portions of this case study are sourced from:
TechValidate Survey of a Large Enterprise Hospitality Company
http://www.techvalidate.com/product-research/netex-hyperip/case-studies/AD1-EFB-F91


HyperIP Series – You Asked About Enabling Centralized Remote Backup

Posted by Marketing

A successful remote backup and recovery process depends on the right backup applications, the right management of those apps, and a network that can support them. How can the HyperIP WAN optimization virtual appliance enable this? Let’s look at a typical remote backup solution consisting of remote servers residing in a branch and a central repository for the backups residing in a data center. These servers, virtualized in most cases, require remote backups to complete within a given backup window for each server. These backups are constrained by the WAN bandwidth to/from the branch. To reduce the backup windows, or at worst meet them, the WAN overhead has to be eliminated.

Typically, TCP overhead limits actual application throughput over these WAN links. The table below shows anticipated application throughput with HyperIP. Compare this to what you get now and the value proposition of HyperIP becomes evident.

HyperIP mitigates the effects of packet loss, latency, and out-of-order packets to drive the WAN link at near wire speed (~95%). Then, if needed, block-level compression, a feature of HyperIP, is applied to further reduce the amount of data traveling over the WAN link, dramatically increasing application throughput. This effectively gives the WAN LAN-like performance from the backup client to the backup server destination.
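To see how link efficiency and compression multiply, here is a back-of-the-envelope backup-window calculation. The data size, link speed, 35% native-TCP efficiency, and 2:1 compression ratio are all illustrative assumptions, not measured values:

```python
def backup_window_hours(data_gb: float, link_mbps: float,
                        efficiency: float, compression: float) -> float:
    """Hours to move data_gb over a link, given the fraction of wire speed
    actually achieved and an average compression ratio (2.0 means 2:1)."""
    effective_mbps = link_mbps * efficiency * compression
    return (data_gb * 8 * 1000) / effective_mbps / 3600  # 1 GB = 8000 Mb

# Hypothetical 500 GB nightly backup over a 100 Mb/s WAN link
native = backup_window_hours(500, 100, 0.35, 1.0)  # TCP limping at 35% of wire speed
tuned  = backup_window_hours(500, 100, 0.95, 2.0)  # ~95% of wire speed plus 2:1 compression
print(f"native: {native:.1f} h, optimized: {tuned:.1f} h")
```

Under these assumptions the same backup drops from roughly a day and a half to single-digit hours over the same link, which is the value proposition in a nutshell.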

Sound interesting? Want to try HyperIP with your backup application? Go to our website at www.netex.com and click on the big orange download box. This will get you started on the right track. Our SE team at NetEx will be glad to help size your bandwidth requirements for remote backup. Feel free to CONTACT US.


HyperIP Series – You asked about Backup…

Posted by DaveHuhne

Backing up your data to a remote site is a business necessity. The method or design of your backup solution will depend on your requirements and on whose system you own. Things like the backup window, the WAN, de-dupe, distance, and incrementals all come into play when making purchase decisions for a backup solution. So why do I need a WAN accelerator? My storage backup system de-dupes and compresses the data before sending it to its remote DR site. If I install a WAN accelerator, will it provide any additional value? Will data be further reduced after de-dupe, and how much will my application throughput increase?

These are great questions, but first you will have to determine whether there is a bottleneck in your network. My backups don’t complete on time; why? My backup application throughput is low; why? I can only back up certain servers per night, or my backup fills the window and I have new servers arriving as we speak. I can’t keep up, so what should I do to solve the issue?

I can buy more bandwidth, increase the buffers in my switches, or make the TCP windows bigger. Some of these remedies are expensive or take time, and perhaps I can’t make changes to the network at all. A plausible alternative is to test a WAN accelerator.

The HyperIP WAN Optimization virtual appliance helps alleviate many network issues that cause poor application performance and throughput over WANs. The software doesn’t care that your backup data has already been de-duped and/or compressed. It uses an adaptive compression algorithm and will attempt to further reduce de-duped data if at all possible. Compression is only one feature of the software that improves application performance. TCP transfers are also affected by any number of network issues, including congestion, jitter, latency, and packet loss. A minimal amount of packet loss can cut effective throughput in half, and any resulting retransmits further consume your bandwidth, making you believe your WAN utilization is high when in fact you are moving only a fraction of the real data. HyperIP shields TCP applications from network issues, allowing them to maximize throughput.
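The "if at all possible" caveat matters: data that has already been compressed (or encrypted) has little redundancy left, which is exactly why the compression must be adaptive rather than unconditional. A quick illustration with zlib as a stand-in compressor:

```python
import os
import zlib

plain = b"nightly incremental, lots of repeated records\n" * 200
already_packed = zlib.compress(plain)          # what a dedupe/compress backup system emits
random_like = os.urandom(len(already_packed))  # stand-in for encrypted data

def ratio(data: bytes) -> float:
    """Size of zlib output relative to input; below 1.0 means real savings."""
    return len(zlib.compress(data)) / len(data)

print(f"plain text:     {ratio(plain):.2f}")
print(f"pre-compressed: {ratio(already_packed):.2f}")
print(f"random-like:    {ratio(random_like):.2f}")
```

The plain text shrinks dramatically, while the pre-compressed and random-like blocks do not; an adaptive scheme detects this per block and passes such data through rather than wasting cycles and bytes on it.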

So back to the question: “What data reduction will I get with HyperIP?” The answer is that HyperIP will manage the network so that maximum throughput is achieved as long as the application can deliver enough data to fill the pipe. As an example, we have customers whose backup windows have gone from 24+ hours down to single digits. A recent Veeam customer reduced their backup window from 15 hours to 3 hours with HyperIP. CLICK HERE to see the Veeam / HyperIP success story.

Obviously every backup environment is different but downloading and testing HyperIP for yourself is quick and easy and could save you a lot of time.


On the Road to a Cloudy World

Posted by Marketing

Recently we wrote about WAN optimizers becoming indispensable for cloud applications like backup/replication and disaster recovery.

In the past year we’ve watched a significant number of companies emerge to provide cloud services for applications that vary in scope and nature. More and more cloud users expect quick access to their mission-critical data from remote network architectures, including the ability to replicate and restore data when needed. This is not always possible, because of the same network issues that can slow the recovery of secondary data: bandwidth restrictions, network latency, jitter, packet loss, bit errors, poor line quality, and network errors.

Yes, clouds offer many benefits, including theoretically limitless capacity and scalability, elimination of hardware acquisition and infrastructure expansion costs, the ability to budget for future growth, and even the conversion of capital expenses into operating expenses. But for cloud applications to reach their true potential, they need to deal with network latency to deliver on throughput and performance. This is especially true for bandwidth-intensive applications. In other words, data needs to be in the right place at the right time for clouds to be effective.

We’d like to hear from companies using cloud services on their existing IP networks that are willing to evaluate HyperIP in their environment. You will see firsthand the performance improvements HyperIP delivers and how compelling they are to your business.
