What’s New 2020 R141 – R146

Happy holidays, migration admins! I hope everyone out there (especially those reading this post) is prospering and staying healthy during “hindsight” year. I’ve been itching for an opportunity to share updates for HCX over the last quarter and change… and at last the time arrives (WAY later than intended), so this post is a mouthful (I tried to add lots of pictures to make it easier to consume).

As always – the latest and greatest is available in raw text form in the HCX Release Notes. Feel free to consume that and bring questions here. Speaking of the release notes: a handy little navigator was added on the right side so you can jump right to any historical release; you can also subscribe to Release Note updates via RSS.

On to the updates…

OS Assisted Migration (OSAM) Updates

OSAM enables the conversion and migration of KVM and Hyper-V virtual machines into a vSphere environment using the HCX platform and workflows.

HCX R140 adds support for migrating:

  • CentOS 7.8 and RHEL 7.8 virtual machines from Hyper-V environments.
  • Ubuntu 16.04 LTS and 18.04 LTS virtual machines from Hyper-V environments.

See Understanding OS Assisted Migration in the VMware docs.

Service Mesh Preview

Benefits

On-screen confirmation of changes at the end of the Service Mesh wizard. This feature lets users manage the Service Mesh with more confidence by showing the potential impact of configuration changes before they are applied.

Network Extension UI 2.0

UI Updates

  • The Network Extension UI has been facelifted to use VMware’s Clarity framework and Angular 2.
  • Active configurations are presented in the context of Site Pairs and Service Meshes.
  • The zero-configuration interface is consistent with the Profile and Mesh UI.

Functionality Updates

  • Smart filters for ineligible networks, empty VM networks, and previously extended VM networks.
  • Batch operations on networks (previously limited to single-network operations).
  • Updated launch point and UI for Mobility Optimized Networking and Policy Routes.

Initial Availability of HCX for vSphere 7.0 
(Restrictions for VM Hardware version 17 resolved in R144)

With HCX R143, we announced initial availability of HCX for vSphere 7.0 deployments: the ability to deploy and use HCX with vSphere 7.0-based destinations.

VMware Cloud Foundation 4.0 became a supported destination environment.

HCX Support for Multi-Edge SDDC Deployments in VMC

The Multi-Edge capability in VMC allows customers to increase their network capacity by dedicating network capacity selectively to traffic groups. R142 allows an HCX Configuration in VMC to consume Multi-Edge Traffic Groups as uplink networks.

Note: Multi-Edge SDDCs require Transit Connect (Transit Virtual Interfaces).

Configuring SDDC Traffic Groups in HCX

HCX Service Mesh Scale Out for Network Extension

Benefits

  • Use a dedicated Network Extension service mesh with existing Compute Profile cluster pairs.
  • This allows users to separate Network Extension and migration traffic.
  • Users can define distinct migration and extension uplinks and deployment resources.

vRealize Operations Management Pack 5.1 for HCX

The Management Pack for HCX extends the operations management capabilities of vRealize Operations to HCX Hybrid Mobility, Interconnect Management, and Data Center and Cloud Migrations.

How to get the Management Pack

Note: Requires vRealize Operations 8.2

General Availability of HCX for vSphere 7 

The R144 update removes the limitation for hardware version 17 at the source, allowing users to fully consume HCX live migration features with vSphere 7 based Cloud to Cloud deployments.

Note: HCX Cloud deployments with vSphere 7 also require NSX-T version 3.0.1+

HCX Enterprise features available for VMware Cloud on AWS

With R145, HCX Enterprise was unleashed in VMC. Read the announcement here.

VMC users now have access to Replication Assisted vMotion, Mobility Groups (Migration Management interface), Application Path Resiliency, TCP Flow Conditioning and Mobility Optimized Networking.
Note: HCX OSAM and the HCX SRM Integration features are excluded.

Replication Assisted vMotion (RAV)

The RAV migration type combines the resilient, parallel nature of replication during the transfer with the vMotion protocol, allowing live completion of the migrations.

Mobility Group (Migration Management)

This feature enables structuring of migration waves based on business requirements.

It simplifies the selection of large groups of VMs when building migration waves: VMs are added in a shopping-cart fashion, and additional filters help capture larger groups.

Intended virtual machine migrations can be added to the interface as drafts and validated prior to execution.

A consolidated migration report is also provided.

Mobility Optimized Networking

Mobility Optimized Networking (MON) enables migrated VMs to reach other VMs and networks optimally, without tromboning or hair-pinning, by routing traffic through the cloud-side NSX T1 gateway instead of the source environment’s router.

API level integration between HCX and NSX-T automatically configures / reconfigures networks when VMs move.

Note: Because this feature allows VMC virtual machines to send internet traffic over the HCX Network Extension path, new MON configurations should be designed and tested carefully to avoid service disruption.
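One way to picture MON’s effect is as host-route injection: a /32 route for the migrated VM appears at the cloud-side T1, so longest-prefix matching routes its traffic locally instead of back across the extension. A minimal sketch (the addresses and next-hop names are made up for illustration):

```python
from ipaddress import ip_address, ip_network

def next_hop(routes, dst):
    """Longest-prefix-match lookup over (prefix, next_hop) pairs."""
    dst = ip_address(dst)
    matches = [(net, hop) for net, hop in routes if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Without MON: the extended segment routes via the on-prem gateway,
# so a migrated VM's traffic hairpins back over the extension.
routes = [(ip_network("10.10.0.0/24"), "onprem-gateway")]

# With MON (conceptually): a host route for the migrated VM is
# injected at the cloud-side T1, so its traffic is routed locally.
routes.append((ip_network("10.10.0.5/32"), "cloud-t1"))

print(next_hop(routes, "10.10.0.5"))   # cloud-t1
print(next_hop(routes, "10.10.0.99"))  # onprem-gateway
```

This is only a conceptual model of the routing outcome; the actual mechanism is the API-level HCX/NSX-T integration described above.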

Traffic Engineering Features (Application Path Resiliency & TCP Flow Conditioning)

I’ve discussed the traffic engineering features in this blog.

In a nutshell, Application Path Resiliency improves on-premises-to-VMC path resiliency by exploiting ECMP / hashed-path behaviors: multiple tunnels (8 per uplink) are created to achieve path diversity.
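The path-diversity idea can be sketched as follows: ECMP-capable routers pick among equal-cost paths by hashing the flow 5-tuple, so tunnels that differ only in source port can land on different physical paths. This is an illustrative model only; the hash below stands in for vendor-specific ECMP hashes, and the IPs and ports are made up:

```python
import hashlib

def ecmp_path(src_ip: str, dst_ip: str, src_port: int,
              dst_port: int, n_paths: int) -> int:
    """Pick one of n equal-cost paths by hashing the flow 5-tuple,
    as an ECMP-capable router would (hash function is illustrative)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|udp".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % n_paths

# One tunnel = one 5-tuple = one path. Eight tunnels that differ only
# in source port can hash onto different paths, giving path diversity.
paths = {ecmp_path("10.0.0.1", "203.0.113.1", 4500 + i, 4500, 4)
         for i in range(8)}
print(sorted(paths))
```

The design point is simply that more distinct 5-tuples mean more chances to spread tunnels across the available paths, so the loss of any one path affects only a fraction of the tunnels.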

TCP Flow Conditioning dynamically manages the TCP segment size for traffic traversing the Network Extension path; the result is reduced fragmentation, fewer total packets, and increased average packet size.
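The effect resembles MSS clamping, sketched below. Note the 150-byte encapsulation overhead is an assumed figure for illustration, not a documented HCX value:

```python
def clamp_mss(advertised_mss: int, path_mtu: int,
              ip_header: int = 20, tcp_header: int = 20,
              tunnel_overhead: int = 150) -> int:
    """Clamp the MSS advertised in a TCP SYN so that a full segment,
    plus tunnel encapsulation, still fits within the path MTU."""
    max_mss = path_mtu - tunnel_overhead - ip_header - tcp_header
    return min(advertised_mss, max_mss)

# A 1460-byte MSS negotiated over a 1500-byte MTU would fragment once
# tunnel headers are added; clamping at SYN time avoids that entirely.
print(clamp_mss(1460, 1500))  # 1310
```

Adjusting the segment size once, at connection setup, is cheaper than fragmenting and reassembling every oversized packet in flight.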

The latest R146 release is a maintenance release with no new features. It is important to note the deprecation notice.

2 comments

  1. Hello,

    I’m new to HCX and your blog is great! I’ve learned so much. One item I’m confused on is scaling HCX data migration over multiple links. Let’s say I have 3 low bandwidth links connecting my two DCs and not much time for migrations. Can I utilize links 1-2 for replication traffic to increase bandwidth and link 3 for the remaining HCX features?


    • Hi Trent, to some extent you can. The HCX migration concurrency side of things scales out when you have clusters available: the current product allows you to scale IX node pairs to N (N = number of unique source/destination cluster pairs). Even when there are no additional unique cluster pairs (let’s say there is one source cluster and one destination cluster), you could still create an additional Service Mesh that uses your 3rd link, and run only Network Extension services on that service mesh. I probably should do a post on scaling options!

