Traffic Engineering for HCX Enterprise
I had the opportunity to write about the traffic engineering technologies in the VMware blog a couple of weeks ago. Here is an excerpt:
TCP Flow Conditioning
VMware customers use the HCX Network Extension service for turn-key connectivity between their legacy, public, and private clouds. This service allows Distributed Virtual Switch, NSX, or third-party virtual switch based networks to be bridged to the target environment without hassle. The process is a simple right-click operation with three parameters. With a Network Extension in place, the applications moved to the target can communicate with the original network without any awareness of HCX, which is silently encapsulating and forwarding Extended Network-bound traffic.
Any two communicating virtual machines use their interface MTU setting to negotiate the maximum size of any data segment. The virtual machines use their own interface MTU as a basis for this negotiation, but have no awareness of northbound MTU values, or of any encapsulation or tunneling operations along the way.
An example negotiation is depicted below. During the TCP handshake (using the TCP Header Options), each virtual machine proposes the maximum amount of data they want to receive in any individual packet/segment. The lowest proposed value is used.
For simplification, let’s assume the following scenario:
During the TCP handshake, the virtual machines negotiate a maximum segment value of 1500. The transmitting virtual machine TX-VM needs to send 9000 bytes of data to its target RX-VM.
(A real life negotiated MSS would not be round, due to header overhead)
Without TCP Flow Conditioning:
- 9000 bytes are carved and transmitted as six 1500-byte segments.
- Network Extension encapsulation adds 150 bytes, creating 1650-byte packets.
- Each 1650-byte packet is fragmented into two packets (1500 + 150 bytes) to respect the uplink MTU.
- 12 packets are sent over the network, averaging 750 bytes of data per packet.
With TCP Flow Conditioning enabled:
- HCX Network Extension intercepts the TCP handshake and adjusts the maximum segment size dynamically. In this scenario, the maximum segment is set to 1350 instead of 1500.
- 9000 bytes are carved and transmitted as six 1350-byte segments and one 900-byte segment.
- Network Extension encapsulation adds 150 bytes, creating packets of at most 1500 bytes.
- Each 1500-byte packet fits within the 1500-byte uplink MTU and is not fragmented.
- 7 packets are sent over the network, averaging roughly 1285 bytes of data per packet.
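The packet arithmetic above can be sketched in a few lines. This is a minimal model, not HCX's implementation: the 150-byte overhead and the two-fragment split are the illustrative values from the scenario.

```python
import math

MTU = 1500       # uplink MTU in bytes
OVERHEAD = 150   # illustrative Network Extension encapsulation overhead
DATA = 9000      # bytes TX-VM sends to RX-VM

def packets_without_conditioning(data, mss, mtu, overhead):
    """Segment at the negotiated MSS; any encapsulated packet that
    exceeds the uplink MTU is fragmented into two packets."""
    segments = math.ceil(data / mss)
    fragments_per_segment = 2 if mss + overhead > mtu else 1
    return segments * fragments_per_segment

def packets_with_conditioning(data, mtu, overhead):
    """Flow conditioning clamps the MSS so segment + overhead fits the MTU."""
    adjusted_mss = mtu - overhead  # 1350 in this scenario
    return math.ceil(data / adjusted_mss)

without = packets_without_conditioning(DATA, 1500, MTU, OVERHEAD)
with_fc = packets_with_conditioning(DATA, MTU, OVERHEAD)
print(without, DATA // without)  # 12 packets, 750 data bytes per packet
print(with_fc, DATA // with_fc)  # 7 packets, 1285 data bytes per packet
```

Running it reproduces the 12-versus-7 packet counts from the scenario, which is the whole benefit: fewer, fuller packets and no uplink fragmentation.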
TCP Flow Conditioning dynamically optimizes the segment size for all traffic traversing the Network Extension path.
HCX Application Path Resiliency
Today’s networks are often built to be resilient through the use of aggregated switched paths (e.g. LACP) and Equal-Cost Multi-Path-based Dynamic Routing paths.
Applications do not need visibility into these resilient multi-paths as long as the source address can reach the target address. The switched path uses a load-balanced hashing mechanism to pin each flow to a path and minimize the need to reorder transmitted packets.
The HCX Interconnect and Network Extension transport in HCX Enterprise take advantage of these existing multi-path hashing behaviors to achieve a powerful improvement. HCX Application Path Resiliency technology creates multiple FOU tunnels between each source and destination Uplink IP pair for improved performance, resiliency, and path diversity.
HCX Application Path Resiliency probes the multiple possible paths, avoids blackholed and suboptimal paths, and selects the best one.
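The idea can be sketched with a toy flow-hashing model. Everything here is illustrative, not the HCX or switch-vendor algorithm: real devices use their own hash functions, and the tunnel port number below is made up. The point is that varying one field of the flow tuple (the UDP source port of each FOU tunnel) lands flows in different equal-cost paths.

```python
import hashlib

def ecmp_path(flow_tuple, num_paths):
    """Pin a flow to one member of an ECMP/LACP group by hashing its
    5-tuple. md5 stands in for a vendor hash function here."""
    digest = hashlib.md5(repr(flow_tuple).encode()).digest()
    return digest[0] % num_paths

# Each FOU tunnel uses a different UDP source port, so its flows can
# hash to a different equal-cost path. Port numbers are hypothetical.
src_ip, dst_ip, tunnel_port = "10.0.0.1", "192.0.2.1", 4444
paths = {ecmp_path((src_ip, sport, dst_ip, tunnel_port, "udp"), 4)
         for sport in range(40000, 40008)}
print(sorted(paths))  # the distinct path indices these tunnels land on
```

A single tunnel would pin all traffic to one bucket; a pool of tunnels spreads traffic across the group and gives the transport alternate paths to fail over to when one is degraded or blackholed.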
Enhancements to Compute Profile and Service Mesh Environment Validation
HCX deployments always span multiple distinct environments. This feature helps promote awareness of the state of the infrastructure as it is being selected during HCX configuration. Warnings are now given during Compute Profile and Service Mesh creation when the selected inventory objects are reporting a degraded state.
"message": "Datastore: nfs0-1 (id - datastore-#) is in critical (red) state."
OS Assisted Migration Enhancements
OSAM allows customers to use HCX for non-vSphere to vSphere migrations. With the OSAM technology, HCX performs a “fixup” that ensures compatibility once the underlying platform changes. If the concept is new to you, start with this blog.
With the R134 update, OSAM added support for the following Operating Systems:
Hyper-V (Windows 2008 R2 64-bit)
For more detailed HCX OSAM requirements, see this page.
And that is it!
P.S. Outside of work I’ve been getting back into the run groove 🤘🏼, here is how things are shaping up: