HCX uses Network Profiles (NPs) to abstract service mesh network configurations, and to define HCX's participation in those networks in a way that is flexible and scalable.
The laborious pre-NP approach was to specify the network segment and IP address details for each interface (Uplink, Management, vMotion, Replication) during the deployment of every appliance. (Imagine having to manually configure the IP on each device connecting to your Wi-Fi network.)
The NPs work like a mini multi-homed DHCP for HCX service deployments. An NP effectively answers these questions:
When the HCX Service Mesh appliances are deployed, how should they connect? How will HCX connect to the cluster's Management, Replication, and vMotion networks? How will HCX connect to the peer/remote appliances?
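To make the "mini multi-homed DHCP" analogy concrete, here is a minimal sketch of the idea in Python. This is illustrative only, not the HCX API: the class, field names, and addresses are all hypothetical, but the behavior (an admin defines a segment plus an IP pool once, and each newly deployed appliance interface draws the next free address automatically) mirrors what an NP does.

```python
from dataclasses import dataclass, field
from ipaddress import IPv4Address

@dataclass
class NetworkProfile:
    """Hypothetical model of an HCX Network Profile: a backing network
    segment plus an IP pool, prefix length, and gateway that HCX draws
    from when deploying Service Mesh appliances."""
    name: str
    segment: str                 # backing portgroup/segment name
    prefix_len: int
    gateway: str
    ip_pool: list = field(default_factory=list)  # addresses HCX may hand out

    def allocate(self) -> IPv4Address:
        # Like a mini DHCP: hand out the next free address to a new
        # appliance interface, instead of an admin typing it in manually.
        if not self.ip_pool:
            raise RuntimeError(f"{self.name}: IP pool exhausted")
        return IPv4Address(self.ip_pool.pop(0))

# One NP per traffic type; appliance interfaces draw addresses automatically.
mgmt_np = NetworkProfile("MGMT-NP", "dvpg-mgmt", 24, "10.0.10.1",
                         ["10.0.10.50", "10.0.10.51"])
ix_mgmt_ip = mgmt_np.allocate()   # first appliance interface on this NP
```

The point of the abstraction: you size the pool once per network, and scale-out deployments simply consume from it rather than requiring per-appliance IP entry.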
If the deployment scales out, the existing NPs are reused; NPs are portable configurations. If you're using multiple Compute Profiles (CPs), each distinct CP can use dedicated NPs or can reuse existing NPs! 🤪🤪🤪 … ok, let me give you an example for that last one:
Let's say you have Cluster A (maybe PROD), which has MGMT and VMO VMkernel networks that are not reachable by Cluster B (maybe DEV), and vice versa.
Let's say you also have a dedicated private migration circuit (dedicated migration bandwidth) that needs to be utilized by both clusters. The Network Profiles can be configured this way:
Cluster A Compute Profile
  Cluster A MGMT-NP
  Cluster A VMO-NP
  Shared Migration Uplink-NP
Cluster B Compute Profile
  Cluster B MGMT-NP
  Cluster B VMO-NP
  Shared Migration Uplink-NP
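The layout above can be sketched in a few lines of Python. Again, this is a hypothetical model (the names are made up, and real CPs and NPs are configured in the HCX UI/API, not as dictionaries); the thing to notice is that each CP references its own MGMT and vMotion NPs, while both CPs reference the very same uplink NP for the shared migration circuit.

```python
# Illustrative only: Compute Profiles referencing Network Profiles by name.
mgmt_a, vmo_a = "Cluster-A-MGMT-NP", "Cluster-A-VMO-NP"
mgmt_b, vmo_b = "Cluster-B-MGMT-NP", "Cluster-B-VMO-NP"
shared_uplink = "Migration-Uplink-NP"   # one NP, reused by both CPs

compute_profiles = {
    "Cluster-A-CP": {"mgmt": mgmt_a, "vmotion": vmo_a, "uplink": shared_uplink},
    "Cluster-B-CP": {"mgmt": mgmt_b, "vmotion": vmo_b, "uplink": shared_uplink},
}

# Dedicated networks stay per-cluster; migration traffic from both
# clusters rides the same dedicated circuit via the shared uplink NP.
assert (compute_profiles["Cluster-A-CP"]["uplink"]
        == compute_profiles["Cluster-B-CP"]["uplink"])
```

Because the NP is a standalone, portable object, attaching it to a second CP doesn't require re-entering any segment or IP pool details.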
In any case, I've created several diagrams to illustrate these Network Profile concepts, and to supplement some documentation that is still in progress (the Network Profile drawings, however, are done). I wanted to go ahead and share these here. Enjoy! 🍻
[…] pre-deployment/design-phase publication for HCX). This blog post, including the Network Profiles content I shared, is an excerpt from that publication. Said document will be live sometime […]