This post originally appeared on Clouds, etc. by Daniel Paluszek.
I’ve been rebuilding my vCloud Director (vCD) lab and running through a few connectivity scenarios. Along the way, I wanted to write up and share my findings on connecting an on-prem NSX environment to a vCD/Provider environment using vCloud Director Extender (VXLAN to VXLAN). In vCD Extender, this is also known as a DC Extension.
To back up, let’s talk about how NSX provides a flexible architecture as it relates to Provider scalability and connectivity. My esteemed colleagues wrote some great papers on this for our vCloud Architecture Toolkit (vCAT).
Before I get started, I also think this is a good guide for planning out VXLAN-to-VXLAN VPN connectivity.
Let’s look at my lab design from a network / logical perspective –
As you can see above, I have my Acme-Cloud organization available on the left with a single VM in the 172.16.20.x/24 network that’s running on NSX/VXLAN.
On the right, we have “Acme DC” that’s also using NSX and has a logical switch named “Acme-Tier” with the same network subnet.
The orange “Extender Deployed L2VPN Client” is the standalone Edge that vCD Extender deploys when the tunnel is created. We’re going to walk through how Extender builds this L2VPN tunnel within an on-prem NSX environment.
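To make the topology concrete, here is a minimal sketch in Python of the two sides of the stretch. The 172.16.20.0/24 subnet and the Acme-Cloud / Acme-Tier names come from the diagram; the data-structure shape and everything else is purely illustrative.

```python
from ipaddress import ip_network

# Minimal sketch of the lab topology described above. Only the subnet and the
# Acme-Cloud / Acme-Tier names come from the post; the rest is illustrative.
provider_side = {
    "org": "Acme-Cloud",
    "org_vdc_network": "Acme-Cloud-Tier",    # backed by NSX/VXLAN
    "subnet": ip_network("172.16.20.0/24"),
    "l2vpn_role": "server",                   # NSX Edge acting as L2VPN server
}

tenant_side = {
    "site": "Acme DC",
    "logical_switch": "Acme-Tier",            # on-prem NSX logical switch
    "subnet": ip_network("172.16.20.0/24"),
    "l2vpn_role": "client",                   # standalone Edge deployed by vCD Extender
}

# A DC Extension stretches the same L2 segment across both sites,
# so the subnet must be identical on each end of the tunnel.
assert provider_side["subnet"] == tenant_side["subnet"]
print("Stretched segment:", provider_side["subnet"])
```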
Provider
- This is very similar to my warm migration setup, so I’m going to try not to duplicate material.
- I have my Acme-Cloud-Tier Org VDC network, which was converted to a subinterface inside of vCD:
- In the Edge Services view, we can see that my L2VPN server has been set up with a default site configuration. However, vCD Extender creates its own site configuration –
- Extender generates a unique ID for the username and password when the tenant executes the DC Extension. I also set the egress optimization gateway address for local traffic. (A sketch of how to inspect this configuration via the API follows below.)
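If you want to peek at that site configuration outside of the UI, a rough sketch like the one below works against the provider-side Edge. The `/api/4.0/edges/{edgeId}/l2vpn/config` path reflects my understanding of the NSX-v REST API, and the manager address, Edge ID, and credentials are placeholders, so verify the details against the API guide for your version.

```python
import requests

# Hypothetical values; substitute your own NSX Manager address, credentials,
# and the provider-side Edge ID that hosts the L2VPN server.
NSX_MGR = "https://nsx-mgr.provider.local"
EDGE_ID = "edge-1"
AUTH = ("admin", "********")

# NSX-v exposes an Edge's L2VPN configuration as XML. The path below is my
# assumption based on the NSX-v REST API; check the API guide for your version.
resp = requests.get(
    f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/l2vpn/config",
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()

# The returned XML includes the server listener settings and the per-site
# entries, which is where the Extender-generated site (with its unique
# username/password and egress optimization gateway) shows up.
print(resp.text)
```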
Tenant – vCD Extender Setup
- Before we can create a Data Center Extension, we need to provide two sets of required information for NSX deployments.
- First, we need to supply the information required to deploy a standalone Edge that will run our L2VPN client service: the uplink network along with an IP address, gateway, and VMware-specific host/storage placement details.
- Second, we need to provide the required NSX Manager information. This is presumably used to make the API calls required to deploy the Edge device(s) to the specified vCenter (see the sketch after this list).
- Once the DC Extension has been created, we will see a new Edge device under Networking & Security.
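Here is a rough sketch of those two sets of inputs, plus one way to confirm the new Edge is registered with NSX Manager afterwards. All names, addresses, and credentials are placeholders, and the `/api/4.0/edges` path is my assumption based on the NSX-v REST API, so confirm it for your version.

```python
import requests

# Placeholder values for the standalone Edge that will run the L2VPN client.
standalone_edge_params = {
    "uplink_network": "VM-Uplink-PG",    # port group for the Edge uplink
    "uplink_ip": "192.168.10.50/24",
    "gateway": "192.168.10.1",
    "datastore": "vsanDatastore",
    "cluster": "Compute-Cluster-A",
}

# Placeholder NSX Manager details used by vCD Extender for its API calls.
nsx_manager_params = {
    "address": "https://nsx-mgr.acme.local",
    "username": "admin",
    "password": "********",
}

# After the DC Extension is created, the new Edge should be visible under
# Networking & Security. One way to double-check is to list the Edges via the
# NSX-v API (path is my assumption; see the API guide for your version).
resp = requests.get(
    f"{nsx_manager_params['address']}/api/4.0/edges",
    auth=(nsx_manager_params["username"], nsx_manager_params["password"]),
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.text)  # XML list of Edges, including the Extender-deployed one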
Tenant – DC Extension (L2VPN) Execution
- So what happens when we attempt to create a new DC Extension (or L2VPN Connection)? A few things:
- Creation of our trunk port for our specified subinterface
- Deployment of the new Edge device that will act as the L2VPN Client
- Reconfiguration of the trunk port (uses mcxt-tpg-l2vpn-vxlan-** prefix)
- Allowing NSX to do its magic along with L2VPN
- We can see within my task console what happened –
- Voilà, we have a connected L2VPN tunnel. As you can see, the blue “E” indicates that a local egress IP is set. I did this because I wanted traffic to route out its local interface for egress optimization.
- So, what happens in the background? Well, let’s take a look at the Edge device. We can see the trunk interface was created, with the subinterface configured to point to my logical switch “Acme-Tier.”
- Last of all, the L2VPN configuration was completed with this Edge established as the client. We can also see that the tunnel is up (one way to check this from the API side is sketched after this list).
- From the main vCD Extender page, we can also see traffic utilization over the tunnel. Pretty nice graph!
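For completeness, here is a hedged sketch of confirming the same things from the API side: the trunk vNIC and subinterface on the Extender-deployed Edge, and the L2VPN client configuration and status. The Edge ID, manager address, and credentials are placeholders, and the `/vnics` and `/l2vpn/config` paths (plus the statistics variant) are my assumptions from the NSX-v REST API, so check them against your version’s API guide.

```python
import requests

# Hypothetical values for the on-prem NSX Manager and the Extender-deployed
# standalone Edge acting as the L2VPN client.
NSX_MGR = "https://nsx-mgr.acme.local"
CLIENT_EDGE_ID = "edge-5"
AUTH = ("admin", "********")

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab only

# Interfaces of the Edge: this is where the trunk vNIC and the subinterface
# bound to the "Acme-Tier" logical switch should appear. The "/vnics" path is
# based on my reading of the NSX-v API; confirm it for your version.
vnics = session.get(f"{NSX_MGR}/api/4.0/edges/{CLIENT_EDGE_ID}/vnics")
vnics.raise_for_status()
print(vnics.text)

# L2VPN configuration and (my assumption) runtime statistics, which is one way
# to confirm the client tunnel is up outside of the UI.
for path in ("l2vpn/config", "l2vpn/config/statistics"):
    resp = session.get(f"{NSX_MGR}/api/4.0/edges/{CLIENT_EDGE_ID}/{path}")
    print(path, "->", resp.status_code)
    print(resp.text)
```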
A quick ping test confirms it: WebVM can reach WebVM2.
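For reference, the test was nothing fancier than the sketch below, run from WebVM. The target address is a placeholder on the stretched 172.16.20.0/24 segment, since the post doesn’t list the VMs’ exact IPs.

```python
import subprocess

# Placeholder address for WebVM2 on the stretched 172.16.20.0/24 segment.
target = "172.16.20.11"

# Send four ICMP echo requests across the L2VPN tunnel (Linux-style ping flags).
result = subprocess.run(["ping", "-c", "4", target], capture_output=True, text=True)
print(result.stdout)
print("reachable" if result.returncode == 0 else "unreachable")
```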
In summary, NSX-to-NSX DC Extensions within vCD Extender work very similarly to the Provider/VXLAN to on-prem/VLAN scenario. The key difference is that the on-prem vCD Extender deploys its embedded standalone Edge to vCenter.
Stay tuned to the VMware Cloud Provider Blog for future updates, and be sure to follow @cloudhappens on Twitter and ‘like’ us on Facebook.