Sharing some recent diagramming work (they let me play with my crayons sometimes 😋).
This is an 11-page PDF deck. The diagrams highlight several modes of operation for the HCX Network Extension appliances; I hope they help if you're looking for a visual to discuss and understand the related concepts. Some light explanations follow below.
The first picture is a basic topology describing HCX extension. Key ideas depicted:
- Encrypted site-to-site extension of a network at the source to a private destination (e.g., Cloud Foundation, VVD SDDC, or a custom vSphere+NSX infrastructure) or one of the HCX-enabled public cloud offerings (like Google Cloud VMware Engine, Microsoft's Azure VMware Solution, or VMware Cloud on AWS/Dell EMC).
- Automated data plane. You define (in the Service Mesh configuration) whether Network Extension will be enabled and, if so, how many appliances are needed. The rest happens automatically, hands off (see the sketch after this list):
– NE nodes are instantiated and auto-configured (you never deploy individual appliances or configure settings like IPsec by hand).
– Source NE nodes automatically find their peer nodes (if they are reachable over UDP-4500) and configure multiple encrypted tunnels without additional user intervention.
- VDS VLANs, NSX Overlays, and VLANs can be extended. HCX will always create or connect to NSX (N-VDS or VDS7) Overlays at the destination.
– It is a design principle that HCX extension operations are implemented as overlays at the destination.
- HCX Network Extension is for making virtual machine networks (not cluster networks) available at the destination.
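To make the hands-off Service Mesh idea above a little more concrete, here is a minimal sketch of what driving it programmatically could look like. The /hybridity/api/sessions login flow exists on HCX Manager, but the Service Mesh endpoint path and every payload field name below are illustrative assumptions rather than the documented schema; only the concepts (enable Network Extension, choose an appliance count) come from the text above.

```python
import requests

HCX_MANAGER = "https://hcx-manager.example.local"  # hypothetical address

# Authenticate to HCX Manager; the login returns an x-hm-authorization
# token. Treat the exact request/response details as an assumption.
login_resp = requests.post(
    f"{HCX_MANAGER}/hybridity/api/sessions",
    json={"username": "admin@example.local", "password": "********"},
    verify=False,  # lab only; use real certificates in production
)
token = login_resp.headers.get("x-hm-authorization")

# Illustrative Service Mesh intent: enable Network Extension and set
# the appliance count. Field names are hypothetical placeholders; the
# orchestration (NE deployment, peering, IPsec) happens automatically.
service_mesh_intent = {
    "name": "sm-site-a-to-site-b",
    "services": [{"name": "networkExtension", "enabled": True}],
    "networkExtensionApplianceCount": 2,
}

resp = requests.post(
    f"{HCX_MANAGER}/hybridity/api/interconnect/serviceMesh",  # hypothetical path
    headers={"x-hm-authorization": token},
    json=service_mesh_intent,
    verify=False,
)
print(resp.status_code)
```

The point of the sketch is that this intent is the entire user-facing surface: appliance deployment, peer discovery over UDP-4500, and tunnel configuration all happen behind it.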
The remaining images build from there; they all relate to availability and resiliency operations for the appliances. I used a lot of this material for the HCX Availability Guide. Some key ideas from these slides:
- NE Without HA:
– HCX relies on vSphere HA to respond to node failures.
– In-Service upgrade (ISSU) minimizes the impact of HCX NE upgrades but requires additional IPs.
– Non-ISSU (standard) upgrades are available (they re-use existing IPs) but result in some impact (rough IP math is sketched after this list).
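As a back-of-the-napkin illustration of the IP trade-off between the two upgrade modes (the function and the one-IP-per-appliance assumption below are mine, not an HCX-documented formula):

```python
# Illustrative only: rough IP math for NE upgrade planning.
# Assumption: ISSU deploys a replacement appliance alongside each
# running one (one extra IP apiece while the upgrade is in flight);
# standard upgrades re-use the existing IPs in place.

def ips_needed_during_upgrade(appliance_count: int, issu: bool) -> int:
    """Peak IP count while an NE upgrade is in flight (sketch)."""
    return appliance_count * 2 if issu else appliance_count

print(ips_needed_during_upgrade(4, issu=True))   # 8 IPs at peak, minimal impact
print(ips_needed_during_upgrade(4, issu=False))  # 4 IPs, but with some impact
```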
- NE with HA:
– HCX can tolerate appliance-level failures.
– Redeploy & upgrades orchestrated based on the HA group model to minimize impact. - NE Site to Site Uplink connectivity
- NE Site-to-Site Uplink connectivity:
– Refers to the connectivity between the HCX appliances (the tunneling based on the HCX Uplink configuration). Uplink resiliency is critical for a highly available deployment.
– Configure multiple uplinks when there are multiple network paths available.
– In multi-uplink configurations, each uplink should be a distinct portgroup backed by a distinct connectivity provider.
– Application Path Resiliency is a single-vNIC multi-tunneling option that leverages ECMP/switch IP-hash placement to improve HCX NE resiliency (a toy placement sketch follows below).
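To illustrate the flow-hash idea behind Application Path Resiliency, here is a toy model; the tunnel count, the hash function, and the naming are all illustrative assumptions, not HCX's actual placement algorithm. The takeaway is that hashing each flow onto one of several tunnels spreads traffic the way ECMP/switch IP-hash spreads flows across links, so a single failed path affects only the flows hashed onto it.

```python
# Toy model of flow-hash placement across multiple encrypted tunnels,
# in the spirit of ECMP / switch IP-hash. Purely illustrative.

import zlib

TUNNELS = ["tunnel-0", "tunnel-1", "tunnel-2", "tunnel-3"]  # assumed count

def pick_tunnel(src_ip: str, dst_ip: str) -> str:
    """Deterministically map a flow onto one tunnel via an IP hash."""
    key = f"{src_ip}->{dst_ip}".encode()
    return TUNNELS[zlib.crc32(key) % len(TUNNELS)]

# Flows spread across tunnels; losing one tunnel affects only its flows.
for dst in ("10.0.0.5", "10.0.0.6", "10.0.0.7", "10.0.0.8"):
    print(dst, "->", pick_tunnel("192.168.1.10", dst))
```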
Application Path Resiliency & Network Extension High Availability are HCX Enterprise features for VMware HCX.
May the odds be ever in your favor!
—
Gabe