When the source and target HCX environments are connected over a private path (e.g. Direct Connect, private lines, or MPLS, rather than the internet), and that path supports a jumbo MTU (anything beyond 1500, typically ~9000 when configured), it makes sense to take advantage of the larger MTU in HCX as well. Larger segments reduce the number of packets on the network, which can reduce fragmentation and the CPU cycles spent processing packets. This can translate to improved HCX migration rates and better performance for VM-to-VM communications over the HCX L2 extension path.
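As a rough illustration of the packet-count reduction, consider moving 1 GiB of payload at each MTU. The numbers below assume TCP over IPv4 with 40 bytes of IP/TCP headers per packet and ignore the HCX tunnel overhead, so they are illustrative only:

```shell
# Approximate packets needed to move 1 GiB of payload at each MTU,
# assuming a 40-byte IP/TCP header per packet (illustrative only)
BYTES=$((1024 * 1024 * 1024))
echo "MTU 1500: $((BYTES / 1460)) packets"   # -> 735439 packets
echo "MTU 9000: $((BYTES / 8960)) packets"   # -> 119837 packets
```

That is roughly a 6x reduction in packets for the same payload, which is where the fragmentation and CPU savings come from.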
Verify the default vNIC MTU on the HCX migration (HCX-IX) and extension (HCX-NE) appliances using the Central CLI (CCLI). See Getting Started with the HCX Central CLI for access details.
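A minimal CCLI session sketch for checking the vNIC MTU; the appliance index (`0`) is an example and will vary in your environment:

```shell
# From an SSH session to the HCX Manager (admin user), enter the Central CLI
ccli

# List the deployed service mesh appliances and note the index of the
# HCX-IX or HCX-NE appliance you want to inspect
list

# Select an appliance by index, then open a shell on it
go 0
ssh

# On the appliance, check the MTU currently set on each vNIC
ifconfig | grep -i mtu
```

The use of `ifconfig` is illustrative; `ip link show` on the appliance shell surfaces the same MTU values.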
MTU adjustment in the Network Profile UI:
Resynchronize the service mesh to apply the Network Profile/Compute Profile change:
The HCX appliances (HCX-IX and HCX-NE) will reflect the updated MTU in their vNIC configuration; for this type of adjustment, the appliances are not rebooted.
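One hedged way to confirm the new MTU end to end is a do-not-fragment ping from a shell on the HCX appliance toward its peer. The peer IP below is a placeholder, and the 8972-byte payload assumes a 9000 MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header):

```shell
# From a shell on the HCX appliance:
# 8972-byte payload + 20-byte IP header + 8-byte ICMP header = 9000 bytes.
# If this fails while smaller sizes succeed, a hop in the path is not
# passing jumbo frames.
ping -M do -s 8972 -c 3 <peer-uplink-ip>
```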