HCX Compute Profiles

I’ve been working on a Getting Started document (a pre-deployment/design-phase publication for HCX). This blog post is an excerpt of that content; the Network Profiles content I shared earlier is an excerpt from the same publication.

Said document will be live sometime in the next day or so! For those following here – enjoy the preview!

Characteristics of Compute Profiles

  • An HCX Manager system must have one Compute Profile.
  • A Compute Profile references clusters and inventory within the vCenter Server that is registered with HCX Manager (other vCenter Servers require their own HCX Manager).
  • Creating a Compute Profile does not deploy the HCX appliances (Compute Profiles can be created and not used).
  • Creating a Service Mesh deploys appliances using the settings defined in the source and destination Compute Profiles.
  • A Compute Profile is considered “in use” when it is referenced in a Service Mesh configuration.
  • Changes to a Compute Profile do not take effect in the Service Mesh until a Service Mesh Re-Sync action is triggered.
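
The characteristics above can be sketched as a small Python model. This is purely illustrative (not the HCX API or its object model) and the class/field names are my own invention; it just captures the rule that a Compute Profile is inert until a Service Mesh references it:

```python
# Illustrative model of the Compute Profile / Service Mesh relationship.
# Not the HCX API — names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ComputeProfile:
    name: str
    service_clusters: list[str]
    deployment_cluster: str

@dataclass
class ServiceMesh:
    # Appliances are deployed using the settings in BOTH Compute Profiles.
    source_cp: ComputeProfile
    destination_cp: ComputeProfile

def in_use(cp: ComputeProfile, meshes: list[ServiceMesh]) -> bool:
    """A Compute Profile is 'in use' only when a Service Mesh references it."""
    return any(cp in (m.source_cp, m.destination_cp) for m in meshes)

cp1 = ComputeProfile("CP-1", ["Cluster-1"], "Cluster-1")
cp2 = ComputeProfile("CP-2", ["Cluster-2"], "Cluster-2")
mesh = ServiceMesh(cp1, ComputeProfile("CP-remote", ["C-A"], "C-A"))

print(in_use(cp1, [mesh]))  # True  — referenced by a Service Mesh
print(in_use(cp2, [mesh]))  # False — created but unused; nothing is deployed
```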

The examples that follow depict the configuration flexibility when using Compute Profiles to design HCX Service Mesh deployments. Each example is depicted in the context of inventory within a single vCenter Server connected to HCX. The configuration variations are decision points that can be applied uniquely to each environment.

~

CP Config 1 – Single Cluster

In the illustrated example below, Cluster-1 is both the Deployment Cluster and Service Cluster.

  • Single cluster deployments will use a single Compute Profile (CP).
  • In the CP, the one cluster is designated as a Service Cluster and as the Deployment Cluster.

~

CP Config 2 – Multi-Cluster Simple

In the illustrated example below, Cluster-1 is the Deployment Cluster. Both Cluster-1 and Cluster-2 are Service Clusters.

  • In this CP configuration, one cluster is designated as the Deployment Cluster, and all clusters (including the Deployment Cluster) are designated as Service Clusters.
  • All of the Service Clusters must be similarly connected (i.e., the same vMotion/replication networks).
  • When the Service Mesh is instantiated, one HCX-IX is deployed for all clusters.
  • In larger deployments where clusters may change, a Datacenter can be used (instead of individual clusters) so HCX will automatically manage the Service Clusters.
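
The last point is worth a quick sketch. Assuming a made-up inventory shape (this is not HCX code), selecting the Datacenter container means clusters added later are picked up automatically, while an explicit cluster list goes stale:

```python
# Hypothetical inventory: a Datacenter container and its clusters.
datacenter = {"name": "DC-1", "clusters": ["Cluster-1", "Cluster-2"]}

def resolve_service_clusters(selection, datacenter):
    """A CP selection can be explicit clusters or the whole Datacenter."""
    if selection == "datacenter":
        return list(datacenter["clusters"])  # resolved at read time
    return list(selection)                   # fixed at configuration time

explicit = ["Cluster-1", "Cluster-2"]

# A new cluster is added to the vCenter inventory later:
datacenter["clusters"].append("Cluster-3")

print(resolve_service_clusters(explicit, datacenter))      # stale list
print(resolve_service_clusters("datacenter", datacenter))  # includes Cluster-3
```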

~

CP Config 3 – Multi-Cluster Dedicated Deployment Cluster

In the illustrated example below, Cluster-1 is the Deployment Cluster and Cluster-2 is the Service Cluster.

  • In this CP configuration, one cluster is designated as the Deployment Cluster and is not a Service Cluster. All other clusters are designated as Service Clusters:
    • This CP configuration can be used to dedicate resources to the HCX functions.
    • This CP configuration can be used to control site to site migration egress traffic.
    • This CP configuration can be used to provide a limited scope vSphere Distributed Switch in environments that heavily leverage the vSphere Standard Switch.
  • For HCX migrations, this CP configuration requires the Service Cluster vmkernel networks to be reachable from the Deployment Cluster, where the HCX-IX will be deployed.
  • For HCX extension, this CP configuration requires the Deployment Cluster hosts to be within workload networks’ broadcast domain (Service Cluster workload networks must be available in the Deployment Cluster Distributed Switch).
  • When the Service Mesh is instantiated, one HCX-IX is deployed for all clusters.
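
The two requirements for a dedicated Deployment Cluster (vmkernel reachability for migrations, workload networks present on the Deployment Cluster's Distributed Switch for extension) can be expressed as a pre-flight check. This is a hedged sketch with invented network names and data shapes, not an HCX validation routine:

```python
# Illustrative pre-flight check for CP Config 3. Field names are hypothetical.
def validate_dedicated_deployment(deployment, service_clusters):
    problems = []
    for svc in service_clusters:
        # Migrations: the HCX-IX on the Deployment Cluster must reach each
        # Service Cluster's vmkernel (vMotion/replication) networks.
        unreachable = set(svc["vmkernel_networks"]) - set(deployment["reachable_networks"])
        if unreachable:
            problems.append(f"{svc['name']}: vmkernel nets unreachable: {sorted(unreachable)}")
        # Network Extension: the Service Cluster's workload networks must be
        # available on the Deployment Cluster's Distributed Switch.
        missing = set(svc["workload_networks"]) - set(deployment["dvs_portgroups"])
        if missing:
            problems.append(f"{svc['name']}: workload nets missing on DVS: {sorted(missing)}")
    return problems

deployment = {
    "name": "Cluster-1",
    "reachable_networks": {"vmotion-1", "replication-1"},
    "dvs_portgroups": {"app-100", "web-101"},
}
service = {
    "name": "Cluster-2",
    "vmkernel_networks": {"vmotion-1", "replication-1"},
    "workload_networks": {"app-100", "web-101"},
}
print(validate_dedicated_deployment(deployment, [service]))  # [] — both checks pass
```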

~

CP Config 4 – Cluster Exclusions

In the illustrated example below, Cluster-2 is not included as a Service Cluster.

  • In this CP configuration, one or more clusters have been excluded from the Service Cluster configuration.
  • This can be used to prevent portions of infrastructure from being eligible for HCX services. Virtual machines in clusters that are not designated as a Service Cluster cannot be migrated using HCX (migrations will fail).

~

CP Config 5 – Multi-Cluster Multi-CP (Optional, for Scale)

In the illustrated example below, Compute Profile (CP) 1 has been created for Cluster-1 and CP-2 has been created for Cluster-2.

In the illustrated example, the vmkernel networks are the same. Creating additional CPs is optional (for scaling purposes).

  • In this CP configuration, Service Clusters are ‘carved’ into separate Compute Profiles.
  • Every Compute Profile requires a Deployment Cluster, resulting in a dedicated Service Mesh configuration for each Compute Profile.
  • As an expanded example, if there were 5 clusters in a vCenter Server, you could have Service Clusters carved out as follows:
    • CP-1: 1 Service Cluster, CP-2: 4 Service Clusters
    • CP-1: 2 Service Clusters, CP-2: 3 Service Clusters
    • CP-1: 1 Service Cluster, CP-2: 2 Service Clusters, CP-3: 2 Service Clusters
    • CP-1: 1 Service Cluster, CP-2: 1 Service Cluster, CP-3: 1 Service Cluster, CP-4: 1 Service Cluster, CP-5: 1 Service Cluster
  • It is worthwhile noting that the distinct Compute Profile configurations can leverage the same Network Profiles for ease of configuration.
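
The carve-out idea above can be sketched as a helper that splits a cluster list into Compute Profiles. This is an illustration of the partitioning logic only (the deployment-cluster choice here — reusing the first member — is just one option; CP Config 3 shows a dedicated one):

```python
# Illustrative 'carving' of Service Clusters into Compute Profiles (CP Config 5).
# Each CP needs its own Deployment Cluster and yields its own Service Mesh.
def carve(clusters, sizes):
    """Split a cluster list into Compute Profiles of the given sizes."""
    assert sum(sizes) == len(clusters), "every Service Cluster must land in a CP"
    profiles, i = [], 0
    for n, size in enumerate(sizes, start=1):
        members = clusters[i:i + size]
        profiles.append({
            "name": f"CP-{n}",
            "service_clusters": members,
            # Simplest choice: the first member doubles as the Deployment Cluster.
            "deployment_cluster": members[0],
        })
        i += size
    return profiles

clusters = [f"Cluster-{k}" for k in range(1, 6)]  # the 5-cluster example
for cp in carve(clusters, [1, 4]):                # CP-1: 1, CP-2: 4
    print(cp["name"], cp["service_clusters"])
```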

~

CP Config 6 – Multi-Cluster Multi-CP (Required, with Dedicated NP)

In the illustrated example below, Cluster-1 uses vMotion and Mgmt network 1. Cluster-2 uses vMotion and Mgmt network 2.

In the illustrated example, the vmkernel networks are different and isolated from each other. Creating dedicated Network Profiles (NPs) and dedicated Compute Profiles (CPs) is required.

  • In this CP configuration, the Service Clusters are ‘carved up’ into distinct Compute Profiles. The Compute Profiles reference cluster-specific Network Profiles.
  • Because the Service Mesh HCX-IX appliance connects directly to the cluster vMotion network, whenever the cluster networks for replication and vMotion differ, cluster-specific Network Profiles should be created and assigned to cluster-specific Compute Profiles, each of which is instantiated using its own cluster-specific Service Mesh.
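
The decision rule behind Configs 5 and 6 boils down to grouping clusters by their vmkernel networks: clusters that share networks *may* share a CP (Config 5, optional); clusters with isolated networks *must* get their own NP and CP (Config 6, required). A hedged sketch with made-up network names:

```python
# Illustrative grouping rule for CP Config 6. Cluster/network data is invented.
from collections import defaultdict

clusters = {
    "Cluster-1": {"vmotion": "vmotion-net-1", "mgmt": "mgmt-net-1"},
    "Cluster-2": {"vmotion": "vmotion-net-2", "mgmt": "mgmt-net-2"},
}

def group_by_networks(clusters):
    """Clusters sharing vmkernel networks can share one CP; isolated ones cannot."""
    groups = defaultdict(list)
    for name, nets in clusters.items():
        groups[(nets["vmotion"], nets["mgmt"])].append(name)
    return list(groups.values())

groups = group_by_networks(clusters)
print(f"{len(groups)} dedicated NP/CP pair(s) required: {groups}")
```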


Gabe 🖖🏼

2 comments

  1. Hello Gabe,

    Good day!

    Thanks for the article. All the configurations mentioned above are based on a single-VDS design where all the clusters are attached. I have a question about a multi-VDS cluster design in the same source VC.

    If there are 2 clusters in the VC, each cluster having its own VDS, how are the Service and Deployment Clusters placed in the CP? What are the best practices to make migration and network extension possible?

    Waiting for your kind reply.

    Thanks


    • Hi Amit,

      Config #6 has an image “typo”; it is meant to be depicted with 2 DVS. The best practice is to carve out a CP for each DVS. The primary reason is that the deployment resource (cluster) dictates the possible hosts for the HCX appliances. The server hosting the HCX-NE appliance must also be connected to the DVS that it will service.

