VM Migration Series: HCX & IP Address Overlap

Lá Fhéile Pádraig sona duit! Happy belated feast of St. Patrick!

Today I want to talk about overlapping IP address space. Whether it is encountered unintentionally or included by design, it has real implications for workload migration, and by extension for VMware HCX.

What is HCX?

HCX is a workload mobility platform that brings techniques and technologies to accomplish data center evacuation, cloud adoption, hardware lifecycle-based lift & shifts, and even non-vSphere workload conversions, with a high degree of compatibility, while abstracting away many of the complexities of these sorts of projects.

What do I mean by IP Address Overlap?

Simply put, scenarios where the network subnets managed by one customer are not unique. [RFC5684] has a great summary of why this happens:

The Internet was originally designed to use a single, global 32-bit IP address space to uniquely identify hosts on the network, allowing applications on one host to address and initiate communications with applications on any other host regardless of the respective host's topological locations or administrative domains.  

For a variety of pragmatic reasons, however, the Internet has gradually drifted away from strict conformance to this ideal of a single flat global address space, and towards a hierarchy of smaller "private" address spaces [RFC1918] clustered around a large central "public" address space.

[RFC5684] Srisuresh, P. and B. Ford, "Unintended Consequences of NAT Deployments with Overlapping Address Space", RFC 5684, February 2010. http://www.ietf.org/rfc/rfc5684.txt

In my world of infrastructure & workload migration there are some IP overlap themes that stand out:

A. Planned IP Address Re-utilization

With the A scenarios, I am referring to subnet re-utilization that has been planned as a means to combat the exhaustion of managed or administered private IP address space (traditionally the IP address space described in [RFC1918]).

A1 Non-Routed Infrastructure or Protected Subnets

We encounter this kind of IP re-utilization in very large scale vSphere deployments. The ESXi cluster vMotion and storage networks, for example, can be treated as contained/protected networks that do not need to be accessed from outside. This strategy is used to “stamp out” infrastructure in a way that conserves IP space and simplifies automation for the overlapped networks.

A2 NAT ‘Tenant’ Design

This is when IP address space is re-utilized by virtual machines or applications that are scoped to operate within a defined infrastructure boundary, with NAT leveraged when egressing that boundary (similar to the way home networks are treated by home network device vendors and ISPs).
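The A1 and A2 patterns above both come down to the same thing: the same RFC 1918 subnet deliberately living in more than one scope at once. A minimal sketch with Python's stdlib ipaddress module illustrates the idea; the tenant names and subnets here are made up for illustration:

```python
# Sketch of planned RFC 1918 subnet re-use across tenant scopes.
# Tenant names and CIDRs are hypothetical examples.
from ipaddress import ip_network

tenant_a = ip_network("10.0.0.0/24")   # tenant A's workload subnet
tenant_b = ip_network("10.0.0.0/24")   # tenant B re-uses the same space

# Inside each tenant scope the subnet is perfectly valid; the clash only
# matters where the scopes meet (e.g. at a shared egress, hence NAT).
print(tenant_a.overlaps(tenant_b))     # True
print(tenant_a.is_private)             # True: RFC 1918 space
```

The overlap is harmless as long as each copy of the subnet stays inside its own scope; it is the moment traffic (or a migration) has to cross scopes that it becomes a problem.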

B. Unplanned Overlap

B1 Acquired Subnets

Every IP address is globally unique and ideally planned, until an M&A event changes everything.

B2 Unmanaged Provider Subnets

Every IP address is globally unique and ideally planned, until we adopt a cloud vendor that does not customize ESXi IP addressing, and their space collides with our own.

The nature of vSphere is such that site-to-site hypervisor communications are not normally required, so these overlaps can lie dormant. Data migration changes that: suddenly the hypervisor networks at two sites need to reach each other.

What about HCX?

In the A1 scenario, the subnet design uses planned overlapping space and contains the vMotion network. In the B1 & B2 scenarios, you’ve inherited subnets through M&A, or you are using a provider that does not delegate control of this space, potentially causing overlapping infrastructure addressing.

Without HCX:
Protected vMotion subnets must be made unprotected and routed.
Overlapping vMotion subnets must be modified.

With HCX:
A multi-arm HCX service can establish a migration path on behalf of every cluster. Configurations do not need to be changed.
Protected vMotion subnets can stay protected.
Non-routed vMotion subnets can stay non-routed.
Overlapping vMotion subnets are supported and do not need to be changed.

The A2 (planned NAT) and B1 (unplanned M&A) scenarios deal with overlapping workload subnets. Let’s say the intention is to consolidate the overlapping subnets at a new site while addressing the overlap:

Without HCX:
Workload Re-IP project prior to the consolidation/migration.

With HCX:
Use HCX Network Extension with multiple Tier-1 routers for tenant separation of the overlapping networks. Workloads can be consolidated without address clashes and without a separate re-IP project.

It’s important to create/select unique NSX Tier-1 routers for this scenario. The HCX extension operation checks the NSX Tier-1’s interface subnets to avoid overlap and will attempt to re-attach if a matching subnet is found.
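To make the Tier-1 selection concrete, here is an illustrative pre-flight check in the spirit of the behavior described above: before extending a network onto a Tier-1, compare the extension subnet against that Tier-1’s existing interface subnets. This is a sketch, not the actual HCX implementation; the Tier-1 name and CIDRs are hypothetical, and real values would come from the NSX API.

```python
# Hypothetical pre-flight overlap check for a network extension target.
from ipaddress import ip_network

def find_overlaps(extension_cidr, tier1_interface_cidrs):
    """Return the Tier-1 interface subnets that overlap the extension subnet."""
    ext = ip_network(extension_cidr)
    return [c for c in tier1_interface_cidrs
            if ip_network(c).overlaps(ext)]

# Say Tier-1 "tenant-blue" already fronts these subnets (made-up data):
blue_interfaces = ["192.168.10.0/24", "192.168.20.0/24"]

print(find_overlaps("192.168.20.0/24", blue_interfaces))  # ['192.168.20.0/24']
print(find_overlaps("192.168.30.0/24", blue_interfaces))  # []
```

A non-empty result means that Tier-1 already carries a clashing subnet, and a different (or new) Tier-1 should be selected for that tenant’s extensions.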

Use HCX Bulk Migration
If server re-IP is a mandated strategy, use HCX Bulk Migration’s Guest Customization options to re-IP workloads in flight. You can also fix DNS and run additional customizations via scripted OS commands as automatic post-migration steps.
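For the re-IP route, the input to a guest-customization step is ultimately a mapping from old addresses to new ones. A minimal sketch of one common mapping scheme, assuming a one-to-one subnet swap that preserves each host’s offset within the subnet (the CIDRs here are hypothetical):

```python
# Sketch: translate an address from an old subnet to the same host
# offset in a new subnet. Subnets and addresses are illustrative.
from ipaddress import ip_address, ip_network

def remap_ip(old_ip, old_cidr, new_cidr):
    """Return old_ip's counterpart at the same host offset in new_cidr."""
    old_net, new_net = ip_network(old_cidr), ip_network(new_cidr)
    offset = int(ip_address(old_ip)) - int(old_net.network_address)
    return str(ip_address(int(new_net.network_address) + offset))

print(remap_ip("10.0.0.25", "10.0.0.0/24", "172.16.5.0/24"))  # 172.16.5.25
```

Keeping the host offset stable makes the per-VM customization entries easy to generate and easy to audit against the old addressing plan.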

& That is it.

We discussed several planned and unplanned scenarios that are common themes (at least in my sphere of work), and I explained how HCX is able to provide compatibility for these cases. I hope you enjoyed reading as much as I enjoyed writing.

✌🏼🥸 Peace to you and yours on this 🌴☀️ day!

Gabe & #6 of 6.
