The first minor release for HCX 4 has arrived! It brings exciting advancements to the product’s VM mobility technologies. By this time, all subscribed systems will see the option to upgrade to the HCX 4.1 software. I want to describe some of those updates in a bit more detail, beginning with the MON feature.
Read the official HCX 4.1 release notes here.
Read the HCX 4.1 launch blog here.
General Availability for Mobility Optimized Networking!
Mobility Optimized Networking is now fully supported for VMware Cloud Foundation, Google Cloud, Microsoft Azure, and private datacenter HCX deployments, in addition to the existing support for VMware Cloud on AWS.
This one is a long time coming!
Our users love HCX Network Extension because it delivers turnkey connectivity for the application being migrated (literally a right-click operation in VC, with minimal parameters). HCX abstracts away the complexity of connectivity between distinct locations, all while the original VM network and the path upstream remain undisturbed. The existing first-hop gateway and policies continue to apply. There is a bit of magic in the simplicity.
The underbelly of the unicorn is the tromboning effect: the increased latency observed when migrated virtual machines take the scenic path back to their first-hop gateway.
Mobility Optimized Networking (MON) builds on HCX Network Extension by solving the tromboning problem for migrated virtual machines. It allows those virtual machines to reach routed destinations efficiently, before the subnet routing function is fully migrated. MON gives an additional level of egress control in the cloud.
MON saves the day when there is a hard requirement to migrate without re-IP and the workloads and applications cannot tolerate the increased latency that results from tromboning. It can also reduce site-to-site egress, which may help with cost, network capacity, and performance by reducing the load on the site-to-site connection.

Fundamentally, MON provides east-west optimization within the cloud’s NSX Tier 1 router – this is true anywhere the feature is implemented, and it requires nothing more than enabling the feature.
MON can also optimize other ingress/egress scenarios beyond the cloud’s Tier 1 router – the MON policy allows the user to dictate how routing will happen for that traffic.
Network conditions and requirements vary from deployment to deployment (according to customer design decisions and the features available in the various public/private cloud platforms), so what ultimately needs to be configured to achieve the desired connectivity becomes more nuanced.
Take Internet egress for virtual machines migrated to the cloud as an example.
I’ve worked with users who design for internet egress in the cloud and with users who have security requirements for internet egress on-prem. Both use cases are legitimate, but each requires a specific MON policy configuration.
Internet egress on-prem
When this requirement exists, it is very likely that there is a private connection to the cloud announcing a 0.0.0.0/0 default route (to meet the requirement for native cloud VMs).
The default MON policy includes RFC-1918 private subnets as ALLOW.
This policy forwards any RFC-1918 traffic to the original source router; other traffic (like internet-bound traffic) is sent to the Tier 1 router in the cloud.
The default configuration will result in traffic using the private connection’s default route.
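To make the forward path concrete, here is a minimal sketch of that egress decision, assuming a hypothetical Python representation of the policy. The entries, names, and function are mine for illustration only; this is not an HCX API, and the real evaluation happens inside NSX.

```python
import ipaddress

# Hypothetical representation of the default MON policy (illustration only):
# destinations matching an ALLOW entry are forwarded to the original on-prem
# router over the extended network; everything else is routed by the cloud
# NSX Tier 1.
DEFAULT_POLICY_ALLOW = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def egress_path(destination: str, allow_entries=DEFAULT_POLICY_ALLOW) -> str:
    """Return the first routed hop a MON-enabled VM's traffic takes."""
    dst = ipaddress.ip_address(destination)
    if any(dst in net for net in allow_entries):
        return "on-prem source router (via the extended network)"
    return "cloud NSX Tier 1 router"

print(egress_path("172.16.10.5"))  # -> on-prem source router (via the extended network)
print(egress_path("8.8.8.8"))      # -> cloud NSX Tier 1 router
```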
For the reverse path of this internet bound flow, there are two common possibilities:
- The original router at the source site has learned about the migrated VM in the form of a /32 host route, advertised from the cloud side of the private connection because of the MON feature. In this case, the traffic returns symmetrically across the private connection and everything works (the longest-prefix match behind this is illustrated in the sketch after this list).
- The original router at the source site has not learned about the migrated VM, because host routes are not supported by the cloud provider, are suppressed, or are not configured to be advertised by the user. In this case, the traffic returns asymmetrically via the HCX L2 extension and stateful flows are disrupted.
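The difference between the two return paths comes down to longest-prefix matching at the original router. Here is a small sketch of that comparison, again with hypothetical prefixes and names of my own rather than anything HCX-specific:

```python
import ipaddress

# Hypothetical routes known to the original on-prem router: the extended
# subnet is still locally attached, and (when MON host routes are advertised
# and accepted) a /32 for the migrated VM is learned over the private connection.
routes = {
    ipaddress.ip_network("192.168.50.0/24"): "local segment (extended network)",
    ipaddress.ip_network("192.168.50.25/32"): "private connection to the cloud",
}

def return_path(destination: str, table) -> str:
    """Pick the longest-prefix match, as a router would."""
    dst = ipaddress.ip_address(destination)
    best = max((net for net in table if dst in net), key=lambda net: net.prefixlen)
    return table[best]

print(return_path("192.168.50.25", routes))
# -> private connection to the cloud  (symmetric with the forward path)

# Remove the /32 and the /24 connected route wins: the reply is switched onto
# the local segment and rides the HCX L2 extension back to the cloud, which is
# the asymmetric case that breaks stateful flows.
del routes[ipaddress.ip_network("192.168.50.25/32")]
print(return_path("192.168.50.25", routes))
# -> local segment (extended network)
```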
The asymmetric case is solved by configuring a 0.0.0.0/0 ALLOW entry in the MON policy. The traffic pattern will then meet the requirement and remain symmetric.
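Reusing the hypothetical egress_path sketch from earlier (again, not actual HCX configuration syntax), the on-prem egress requirement amounts to appending that 0.0.0.0/0 ALLOW entry, after which internet-bound traffic also hands off to the source router and the flow stays symmetric:

```python
# Hypothetical policy for the on-prem internet egress requirement: adding a
# 0.0.0.0/0 ALLOW entry sends all routed traffic back to the source router.
ONPREM_EGRESS_POLICY = DEFAULT_POLICY_ALLOW + [ipaddress.ip_network("0.0.0.0/0")]

print(egress_path("8.8.8.8", ONPREM_EGRESS_POLICY))
# -> on-prem source router (via the extended network)
```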
Digressing
I’m excited the feature is now available for all users; I believe many future migration projects will benefit from what it brings. It’s extremely easy to enable (just like the HCX NE service), but please do not forget to design. The specifics of your environment should influence how MON is configured beyond the basic functionality.
I’ll end with some things to consider as you design for using this feature:
- Do I have noisy applications that communicate across subnet boundaries? (MON helps contain this traffic for VMs as they are migrated.)
- How will migrated MON-enabled VMs reach the internet? (This affects the MON policy configuration.)
- Is my cloud environment advertising static routes? (The ability to do this depends on the cloud platform.)
- Can the original router learn advertised /32 host routes? (This allows the “proximity routing”-like route-locally design.)
- Do I have cloud storage or services that need to be accessed directly? (These may exist as exception DENY entries in your MON policy.)
- Does my design use multiple Tier 1 routers in the same cloud? (MON works well with multiple Tier 1s that are fully isolated from each other.)
I look forward to sharing additional MON guidance in the context of new provider scenarios – please stay tuned!
—
Gabe