What’s New in HCX 4.2 (Part 2 of 2)

Last week in What’s New in HCX Part I, we covered the addition of Real Time migration estimates for HCX vMotion and RAV migrations, Predictive Estimations for RAV migrations and the ability to migrate a virtual machine’s Custom Attributes.

There’s a lot to cover on this one but I will try my damnedest to keep it brief.

For Part II, I’m going to cover these topics:

  • OS Assisted Migration for VMware Cloud on AWS and VMware Cloud on Dell EMC.
  • Integration with Cloud Director.
  • Updates to the “Network Underlay Requirements” (this is one of my favorite things).

OS Assisted Migration is now available for VMware Cloud (VMC/Dell EMC)

The HCX OSAM migration technology was launched in August 2019, but for #reasons it was not available for VMware Cloud based deployments. With HCX 4.2, the OSAM capability is available for any deployment running the HCX Enterprise feature set.

To recap OSAM: the OSAM launch blog describes it very well. The list of supported operating systems is larger today; you can see an updated list here.

When the HCX OSAM service is enabled in the service mesh, the OSAM (Sentinel) agent can be downloaded from the HCX UI (there are Windows and Linux versions of the agent):

Sentinel Mgmt in the Service Mesh UI when OSAM is enabled

Once the agent is running, the rest is business as usual: HCX allows OS Assisted to be selected as an alternative to the vSphere-based site pairing, and virtual machines running the OSAM agent will show up in the HCX migration inventory. The rest of the workflow is the same.

Some important things to keep in mind regarding OSAM:

  • The virtual machine operating system matters. It has to be explicitly supported.
  • The virtual machine operating system MUST be capable of a tools installation on the destination side. If the virtual machine cannot run tools because a requirement was not met, the migration may fail.
  • The virtual machine communicates directly with the HCX Sentinel Gateway (HCX-SGW) using TCP-443.
  • The Compute Profile needs to be updated to include the “HCX Guest Network”; this is the interface on the SGW that agents use to dial in.
  • While HCX vMotion and RAV are hot/live (no downtime) and Bulk Migration is warm (roughly a reboot), OSAM requires some additional conversion downtime. Quiescing and fixing up the virtual machine will generally add 15-45 minutes of downtime (or more, depending on the activity).
  • OSAM conversions should be performed using adequate maintenance windows.
  • Software that interacts with the filesystem, like OS security agents, should be disabled. An OS Assisted Migration generally works continuously toward the switchover; software and services generating heavy activity can delay the conversion time by hours.
  • OSAM is a one-way migration option: the source VM is powered off, and there is no reverse migration from vSphere back to the non-vSphere hypervisor.
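Since the agent has to reach the HCX Sentinel Gateway directly over TCP-443, it can save a maintenance window to verify that path before kicking off a migration. Here is a minimal pre-flight sketch; the hostname `hcx-sgw.example.com` is a placeholder for whatever your HCX-SGW address actually is:

```python
import socket

def sgw_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname -- substitute your real HCX-SGW address):
# sgw_reachable("hcx-sgw.example.com")
```

Run this from the guest that will carry the Sentinel agent; a `False` result usually means a firewall or routing issue between the guest network and the SGW.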

HCX with VMware Cloud Director (& NSX-T)

Prior to HCX 4.2, HCX for vCloud Director installations with NSX-v was in limited availability (restricted to very specific parties).

With HCX 4.2 and going forward, HCX installations with VMware Cloud Director (using NSX-T) are generally available.

In this model, the tenant is able to connect their vSphere datacenter to a Cloud Director based organization.

Tenancy boundaries are enforced (the tenant is only able to perform operations in the context of their resources, and will not be exposed to infrastructure and resources carved out for co-tenants in the Cloud Director environment).

Example tenant org Coke

The functionality requires the following software: HCX 4.2+, NSX-T 3.1.1+, and Cloud Director 10.2.1 or 10.2.2 (10.2.0 is not compatible; 10.2.3 is not yet supported).

At the beginning of the install, Cloud Director must be selected:


This changes the remaining appliance management configuration to include the VCD constructs. You can find more information about what is different in a Cloud Director install here.


  • This capability is for Cloud Director providers; it is not for VCD-to-VCD migrations.
  • Some caveats apply, read the release notes!
  • This general availability is for VMware Cloud Director installations with a single vCenter Server; multi-VC VCD installations are not supported (rather, one VC must be selected).
  • OSAM & MON are not available with VMware Cloud Director installations.

Network Underlay Requirements Revision

On the surface, this is a basic revision to the User Manual: there is now a “Network Underlay Minimum Requirements” section. Behind the scenes, what happened was a series of efforts to fine-tune operational thresholds for deploying and operating HCX services as they relate to the network underlay (the connectivity underpinning between two environments running HCX).

What HCX users have gained through this is the ability to use HCX with a variety of underlays by meeting the now-generalized underlay requirements. So what does this actually mean? Let’s take two basic scenarios:

1. [hcx-src] <--> 250 Mbps WAN w/VPN <--> [hcx-dst]

2. [hcx-src] <--> 250 Mbps MPLS <--> [hcx-dst]

Scenarios 1 & 2 may meet basic bandwidth requirements, but scenario 1 was not officially supported because of the VPN.

With HCX 4.2, we provide additional requirements for parameters like MTU, latency, and loss, and generalized the requirement to any underlay (Internet, private, VPN, SD-WAN, etc.), allowing both underlay scenarios (and many others) to be supported without the need to specifically qualify vendor-specific variations. As long as HCX is running version 4.2 and the new requirements are satisfied, operations are supported.
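In practice, qualifying an underlay becomes a matter of measuring its characteristics and comparing them against the published thresholds. A small sketch of that check, where every threshold value below is an illustrative placeholder (the authoritative numbers live in the “Network Underlay Minimum Requirements” documentation, not here):

```python
# Placeholder thresholds for illustration only -- consult the official HCX
# "Network Underlay Minimum Requirements" docs for the real values.
REQUIREMENTS = {
    "min_bandwidth_mbps": 100,  # assumed bandwidth floor
    "max_latency_ms": 150,      # assumed round-trip latency ceiling
    "max_loss_pct": 0.1,        # assumed packet-loss ceiling
    "min_mtu": 1150,            # assumed minimum path MTU
}

def underlay_ok(bandwidth_mbps, latency_ms, loss_pct, mtu, req=REQUIREMENTS):
    """Compare measured underlay characteristics against the thresholds.

    Returns (passed, list_of_failing_parameters).
    """
    failures = []
    if bandwidth_mbps < req["min_bandwidth_mbps"]:
        failures.append("bandwidth")
    if latency_ms > req["max_latency_ms"]:
        failures.append("latency")
    if loss_pct > req["max_loss_pct"]:
        failures.append("loss")
    if mtu < req["min_mtu"]:
        failures.append("mtu")
    return (len(failures) == 0, failures)
```

The point of the generalization is exactly this: the underlay type (VPN, MPLS, SD-WAN, Internet) drops out of the equation, and only the measured numbers matter.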

We have some documentation in the works with even more granular details.

This is HCX 4.2!

  • Updated estimation capabilities.
  • Support for Cloud Director migration.
  • Support for any network underlay that meets the updated requirements.

If you’re deploying or upgrading to HCX 4.2 and using the new capabilities, I’d love to hear about it.

Goodbye! Happy Thursday!



  1. “RAV migrations now retain the VM Disk UUIDs.” Is this a tested statement? I am using the latest 4.3.0 RAV migration, but the VM disk UUIDs are changing, and I don’t see this statement in any official documentation. I am looking for a workaround to retain the VM Disk UUID and VM Instance ID. Can you please advise?


    • Hi Nirupan, I may have added this in error! You’re right that this is not in our official docs (I even have another blog post mentioning how a new UUID is generated for RAV). I’ll confirm; for now I will remove this. Thanks!

