Create NSX-T Edge cluster using PowerVCF

Here we will install an NSX-T Edge cluster using VMware Cloud Foundation (VCF) version 4. This can be done with VCF on vSAN ready nodes such as Dell EMC PowerEdge; in this post I will create the NSX-T edge cluster using VCF on VxRail.

Download and install PowerVCF using powershell.

> Install-Module -Name PowerVCF

Make a connection to get an authentication token from VCF.

> Connect-VCFManager -fqdn a1-vcf01.converged.local -username administrator@vsphere.local -password <yourpassword>

I like to put wrappers around my scripts to make it easier for me to go from one script to the next.

We need to get the cluster id and put it in our json file. Note: make sure you choose the correct cluster id.

The command Get-VCFCluster will list all cluster ids.

Copy the cluster id from the correct cluster. In my instance I want to create the edge cluster on my A2 (secondary) cluster, so I will copy the value fe81….d6ef into my json file as seen below. Note: the cluster id is listed twice in the json file, so make sure you update both occurrences.

Once you have updated your json file, you are ready to run a validation. The command is New-VCFEdgeCluster, and make sure you include the ‘-validate’ option. Otherwise, you’ll just end up deploying an edge cluster with no validation.

Once the executionStatus is COMPLETED, check resultStatus and make sure it says SUCCEEDED. If it doesn’t complete, or the status is not SUCCEEDED, do not proceed until you have fixed your json file.

Once you are satisfied with your validation, run the same exact command without the ‘-validate’ option to deploy the NSX-T edge cluster.
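If you want to see what PowerVCF is doing under the covers, the same validate-then-deploy pattern maps onto two VCF REST endpoints. Here is a minimal Python sketch (the FQDN is the one from this post, the token and spec are placeholders, and only the request construction is shown):

```python
import json
import urllib.request

VCF = "https://a1-vcf01.converged.local"  # SDDC Manager FQDN from this post

def build_edge_cluster_request(spec, token, validate):
    """Build the POST used by the validate-then-deploy flow.

    validate=True  -> POST /v1/edge-clusters/validations (dry run)
    validate=False -> POST /v1/edge-clusters             (real deployment)
    """
    path = "/v1/edge-clusters/validations" if validate else "/v1/edge-clusters"
    return urllib.request.Request(
        VCF + path,
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )

# Always validate first; reuse the exact same spec for the real run
# only after the validation reports SUCCEEDED.
req = build_edge_cluster_request({"edgeClusterName": "b1-m01-ec01"}, "<token>", validate=True)
```

The only difference between the dry run and the real deployment is the URL path, which is why forgetting ‘-validate’ deploys straight away.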


Postman NSX-T edge cluster on VCF 4 VxRail 7

Have you ever wanted to save a little time by using the REST API in VMware Cloud Foundation to create an NSX-T edge cluster? Yes, we could do this using the GUI. However, I often have to create and re-create the NSX Edge cluster for various reasons. To date, I’ve been using the GUI, copying and pasting the various settings I have documented in Notepad++ into the VCF GUI, but that’s a lot of copying and pasting and I want a way to automate this in a repeatable fashion. Here, we will do the NSX-T edge cluster deployment using Postman.

To give some context, one logs into the VCF GUI, selects the domain, and clicks on the three dots next to the domain name and selects “Add Edge Cluster” as shown below.

If you were to click through the GUI, you’ll find a lot of details that must be filled out to validate and deploy the NSX-T edge cluster. We will cover these details to some degree, but you must already have a good understanding of the requirements that are specific to your setup. Once you have defined the values, you will be able to take those values and automate via Postman. This guide will help you do that.

Install Postman. reference:

Create a new collection in Postman.

Right-click the new collection and make a new request, or click Add request. Change the request type from GET to POST and enter the FQDN or IP of your VCF instance as seen below. Click on the Headers tab and fill out the Key/Value pair Content-Type and application/json. If you start typing, it will auto-suggest completions.

Note: For the Authorization type, just leave the default of inherit auth from parent.

We will actually create a bearer token via the Body tab. This tripped me up for a bit when just trying to authenticate. Click the Body tab; this is where we will plug in our credentials to create the bearer token. You can read more about this in the Developer Center in the VCF UI.

Inside Postman, click on Body and insert your “admin” username, typically administrator@vsphere.local, and your password. I assumed I should use the Basic Auth type in Postman’s Authorization tab, but no. Like the curl command in the Developer Center, you need to create a json body and put it in the Body of the request like below.

code sample:

  {
      "username" : "administrator@vsphere.local",
      "password" : "VxR@il123"
  }

OK. We have changed the request type to POST and given it a valid URL: https://<FQDN or IP>/v1/tokens. We have also put our credentials in the Body of the request. Save that request and click Send.

We should get a status 200 OK back and see our fancy new accessToken like below.
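Outside Postman, the same token exchange is just a small JSON round trip. This Python sketch shows the body you POST and how the accessToken comes back (credentials and the token value are placeholders; the real 200 OK reply also carries a refreshToken):

```python
import json

# Body of POST https://<FQDN or IP>/v1/tokens -- the same json you paste
# into the Postman Body tab (credentials below are placeholders).
token_body = json.dumps({
    "username": "administrator@vsphere.local",
    "password": "<yourpassword>",
})

# Trimmed-down shape of the 200 OK reply (token value is made up).
sample_reply = json.loads('{"accessToken": "eyJhbGciOi<snip>"}')
access_token = sample_reply["accessToken"]

# Every subsequent request carries the token as a standard Bearer header.
auth_header = {"Authorization": "Bearer " + access_token}
```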

Starting at line 2 of the response Body, make note of everything in the quotes. We copy that string into our next request. Save the request if you have not already. You can either copy it or make a new request, but make it a new request in the Postman collection. I like to duplicate and then make adjustments so I don’t have to type the URL again.

The new request will be a GET request. If you copied the first POST request, make sure the new request is a GET request. Since I copied the request, it will already have the header key/value Content-Type application/json like below. Update the URL to /v1/clusters instead of /v1/tokens.

Click on the Authorization tab. Change the authorization type to Bearer Token like below, and copy and paste the token we created in the POST request. We should have two requests in the collection at this point: one POST request and this GET request. Save this request and click Send.

At this point, we should see something in the output body. With any luck, the above will give us a 200 OK response and look like below, after clicking Send.

So far this is excellent progress. We are able to authenticate using a bearer token and retrieve the cluster ID. Above you can see my ID on line 4 is “89978ee5-fe01-4f08-85da-5544c4a9e22d”.

One could also do a GET https://<FQDN or IP>/v1/domains request to get the ID, but the above gives us the information required to create and modify our NSX-T edge cluster json file.
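Scripting the same lookup is straightforward once you know the reply shape. A sketch of pulling the cluster id out of the GET /v1/clusters body (the structure is based on the screenshot above; fields other than "id" are trimmed and the "name" value is made up):

```python
import json

# Rough shape of the GET /v1/clusters reply: a top-level "elements" list
# where each entry carries an "id".
clusters_reply = json.loads("""
{
  "elements": [
    {"id": "89978ee5-fe01-4f08-85da-5544c4a9e22d", "name": "my-cluster"}
  ]
}
""")

# Pull out every cluster id so we can drop the right one into the
# edge cluster json file.
cluster_ids = [c["id"] for c in clusters_reply["elements"]]
```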

OK. Now for the hard part: we need to make the edge cluster json file. Luckily, we have a few tips on how to proceed. I’m not gonna lie, I spent a long time trying to figure this out using tips from another blog, whose author I must give credit. reference:

If you followed that link, it’s very helpful for figuring out the same thing, but using curl and a shell script instead of Postman. Make note of the json file on that site, or copy and paste my json below. The format and key fields are important, but your values will be completely different than the values noted below. Yes, it will take you a long time to figure out what these values should be based on your infrastructure. The first time I deployed one it took me all day, and the second time about an hour or so. Then I just started copying and pasting all the key/value pairs from the GUI into Notepad++, so that I at least had all the values for the next time I deployed an NSX-T edge cluster.

        {
            "edgeClusterName": "b1-m01-ec01",
            "edgeClusterType": "NSX-T",
            "edgeRootPassword": "VxR@il1234!!",
            "edgeAdminPassword": "VxR@il1234!!",
            "edgeAuditPassword": "VxR@il1234!!",
            "edgeFormFactor": "LARGE",
            "tier0ServicesHighAvailability": "ACTIVE_ACTIVE",
            "mtu": 9000,
            "asn": 65011,
            "tier0RoutingType": "EBGP",
            "tier0Name": "b1-m01-ec01-t0-gw01",
            "tier1Name": "b1-m01-ec01-t1-gw01",
            "edgeClusterProfileType": "DEFAULT",
            "edgeNodeSpecs": [{
                "edgeNodeName": "b1-m01-en01.converged.local",
                "managementIP": "",
                "managementGateway": "",
                "edgeTepGateway": "",
                "edgeTep1IP": "",
                "edgeTep2IP": "",
                "edgeTepVlan": 2619,
                "clusterId": "89978ee5-fe01-4f08-85da-5544c4a9e22d",
                "interRackCluster": "false",
                "uplinkNetwork": [{
                    "uplinkVlan": 2620,
                    "uplinkInterfaceIP": "",
                    "peerIP": "",
                    "asnPeer": 65010,
                    "bgpPeerPassword": "VMw@re1!"
                }, {
                    "uplinkVlan": 2621,
                    "uplinkInterfaceIP": "",
                    "peerIP": "",
                    "asnPeer": 65010,
                    "bgpPeerPassword": "VMw@re1!"
                }]
            }, {
                "edgeNodeName": "b1-m01-en02.converged.local",
                "managementIP": "",
                "managementGateway": "",
                "edgeTepGateway": "",
                "edgeTep1IP": "",
                "edgeTep2IP": "",
                "edgeTepVlan": 2619,
                "clusterId": "89978ee5-fe01-4f08-85da-5544c4a9e22d",
                "interRackCluster": "false",
                "uplinkNetwork": [{
                    "uplinkVlan": 2620,
                    "uplinkInterfaceIP": "",
                    "peerIP": "",
                    "asnPeer": 65010,
                    "bgpPeerPassword": "VMw@re1!"
                }, {
                    "uplinkVlan": 2621,
                    "uplinkInterfaceIP": "",
                    "peerIP": "",
                    "asnPeer": 65010,
                    "bgpPeerPassword": "VMw@re1!"
                }]
            }]
        }

OK. The above json is kinda daunting, but luckily it is self-explanatory for the most part. One other note: the above json syntax is documented in VCF > Developer Center, in section 2.27 Edge Clusters.
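Because this file is long and hand-edited, a tiny pre-flight check can save a failed validation round trip. A sketch (the required-key list is a subset based on the sample above, not the full VCF schema):

```python
# A few sanity checks worth running on the edge cluster json before
# POSTing it to /v1/edge-clusters/validations.
REQUIRED_TOP_LEVEL = {
    "edgeClusterName", "edgeClusterType", "edgeFormFactor",
    "tier0Name", "tier1Name", "edgeNodeSpecs",
}

def preflight(spec):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = ["missing key: " + k
                for k in sorted(REQUIRED_TOP_LEVEL - spec.keys())]
    # The cluster id appears once per edge node spec -- updating only
    # one of the two occurrences is an easy mistake to make.
    ids = {node.get("clusterId") for node in spec.get("edgeNodeSpecs", [])}
    if len(ids) > 1:
        problems.append("edge nodes point at different clusterIds: %s" % ids)
    return problems
```

Run `preflight(json.load(open(...)))` on your file and only move on to the validation request when it returns an empty list.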

So, at this point, you should have an idea on what values to use in your json file to match your environment.

Let’s make a new POST request in Postman; we first want to make a validation request to make sure the json passes a sniff test. This will be a validation POST request to https://<FQDN or IP>/v1/edge-clusters/validations as seen below.

The response status is 202 Accepted, with a json body that looks interesting. Scroll down the response and check it out. One thing you’ll see is “executionStatus” is “IN_PROGRESS”. Well, how do we follow up on that validation request? I’m glad you asked. Above on line 2, we have a validation ID of “59da9f21-4f67-427f-89ca-3df6fa9baa04”. We now need to make a new GET request in Postman to check on the validation after about 60 seconds have passed, to give it time to process the values in the json.

In Postman, make a new GET request to retrieve the results of the validation. The URL is https://<FQDN or IP>/v1/edge-clusters/validations/<validation ID>, where the validation ID is the string copied from line 2 above.

Below you will see my NSX-T edge validation, but you will also notice my UUID is different. That’s because I got excited, clicked Send twice, and thereby created two different validations. So, the above screenshot includes my first validation ID, while the screenshot below shows the validation from the second time I clicked Send. Sorry about that; the next time I create a cluster, I’ll have to remember to come back and update this blog.

With any luck, at this point, you should see a “resultStatus” as “SUCCEEDED” for all validation checks. If something doesn’t validate successfully, you’ll need to fiddle with your values a bit more.
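If you script this instead of clicking, the 60-second wait becomes a polling loop on executionStatus. A sketch; the `fetch` callable (anything that returns the parsed json of that GET) is injected so the logic itself stays testable offline:

```python
import time

def wait_for_validation(fetch, validation_id, poll_seconds=10, max_polls=30):
    """Poll GET /v1/edge-clusters/validations/<id> until it finishes.

    Keeps polling while executionStatus is IN_PROGRESS, then returns the
    final resultStatus (e.g. "SUCCEEDED" or "FAILED").
    """
    for _ in range(max_polls):
        body = fetch(validation_id)
        if body.get("executionStatus") != "IN_PROGRESS":
            return body.get("resultStatus")
        time.sleep(poll_seconds)
    raise TimeoutError("validation %s still IN_PROGRESS" % validation_id)
```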

Now, onto the home stretch. We will make one more POST request in Postman. This will actually kick off the deployment workflow for the json file we created and validated successfully. In Postman, duplicate the previously created POST validation request and name it “deploy nsx-t edge cluster” or something meaningful. This is a POST to the URL https://<FQDN or IP>/v1/edge-clusters as seen below.

At this point, we have successfully kicked off the deployment operation in Postman to create the NSX-T Edge Cluster. If we navigate to the VCF UI, we will see a new task as seen below.

You should be giddy with joy and excitement at this point as you figured out how to deploy a very complicated procedure. What has taken many people hours, or days, or weeks, you can now do in just a few minutes in a repeatable fashion. Yes, you did have to do a little homework to figure out the values for your environment, but you were gonna have to do that anyways.

Some might ask: why didn’t I just do this during the initial bring-up of VCF by selecting Yes to the AVN network and filling this info out in the Cloud Builder spreadsheet? Well, now you can repeat this procedure for any workload domain. My reasoning is that I want to deploy a “Consolidated” architecture and use the management domain for Tanzu. More around this topic can be found at: or my previous build at

Thanks for your time.

Create a Kubernetes namespace in vCenter with the Postman REST API

I have previously stood up VMware Cloud Foundation 4 on DellEMC VxRail 7. After deploying VCF on VxRail, I then created the NSX-T edge cluster and installed the solution for Kubernetes Workload Management. More information on this can be found at VMware:

Download and install Postman from

Open Postman and create an environment profile.

Create the values you’ll use to authenticate to your setup like below.

Click Update to save the environment settings and close that window.

Next, create a collection.

Click ‘Add request’ or right-click your collection and create a new request.

Give your request a name and click Save.

Change it from a GET request to a POST request. Change the Authorization type to Basic Auth and fill in the username and password. In this example, we will log in to the environment using the following POST request:


Click Save, then click Send. The response should be Status: 200 OK, and you will see the following information.

One can use Basic Auth as well as inherit auth from parent; with Basic Auth you’ll have to type in the credentials for each request you make and save. Below are similar results using Basic Auth instead of inherit auth from parent.

We are now authenticated and we can begin to list and query information to make the new namespace.
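For the curious, Postman’s Basic Auth type isn’t magic; it just base64-encodes `username:password` into the Authorization header. A minimal sketch (the credentials are placeholders):

```python
import base64

def basic_auth_header(username, password):
    """What Postman's Basic Auth type puts on the wire: 'username:password'
    base64-encoded into the Authorization header."""
    token = base64.b64encode((username + ":" + password).encode()).decode()
    return {"Authorization": "Basic " + token}

# e.g. reused on GET https://{{vc}}/api/vcenter/namespaces/instances
headers = basic_auth_header("administrator@vsphere.local", "<yourpassword>")
```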

Make a new request and give it a meaningful name like Namespace list.


Click Send and you should see something like below.

Notice we have a namespace called demo1 that I previously created. Let’s see if we can get more information on this namespace. Let’s make a new query on demo1.

I’m going to make a new Get request called namespace demo1 details.

Insert the following for this Get request.


Let’s save the Get request to the collection and click Send. We should see something similar to below.

Let’s take a look in vCenter and see what it looks like there. As we can tell from the REST API and from the vCenter GUI, no user has been assigned to this namespace and no storage policy has been configured.

I’m going to add a user and storage policy to this namespace and review the REST API again. First I’ll need to create a user for this namespace.

Now that I have a user, I can add it to the demo1 namespace.

I will also assign a storage policy while I’m here.

The result looks like below.

OK. Now let’s flip back over to Postman and run the namespace demo1 details GET request again. Notice we have more information, including the user demouser, who has the edit role, and a storage policy UUID string.

Let’s create a new namespace called demo2.

This will be a POST request, and we will have to change the Body to raw and add the following:

    {
        "cluster": "domain-c10",
        "namespace": "demo2"
    }

We got an error…

We forgot to add the Content-Type key value and set to application/json. Update the Headers to include the following header.

Click Send after updating the header. After clicking Send we should see a Status 204 No Content message.
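Expressed in code, the working request looks like the following sketch (vc.example.local is a placeholder hostname, and the header dict is exactly where the Content-Type key that caused the earlier error goes):

```python
import json
import urllib.request

def create_namespace_request(vc, cluster, namespace, auth_header):
    """POST /api/vcenter/namespaces/instances -- including the
    Content-Type header that the first (failed) attempt was missing."""
    return urllib.request.Request(
        "https://" + vc + "/api/vcenter/namespaces/instances",
        data=json.dumps({"cluster": cluster, "namespace": namespace}).encode(),
        headers={"Content-Type": "application/json", **auth_header},
        method="POST",
    )

req = create_namespace_request("vc.example.local", "domain-c10", "demo2",
                               {"Authorization": "Basic <redacted>"})
```

A 204 No Content reply means the namespace was created.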

Looking back into vCenter we see the new namespace demo2.

Let’s run the Get Namespace List instances request again. We can see both namespaces listed.


We will now set the user and storage policy for the demo2 namespace. I’m going to create a new request and call it set-permission-storage-policy-demo2. This will be a PUT request. We will also need to add the header key/value Content-Type application/json.

Add the API URL: https://{{vc}}/api/vcenter/namespaces/instances/demo2

Also click on Body and set it to raw format. Using the information from the demo1 output, we can add the same storage policy to demo2.

    {
        "access_list": [{
            "domain": "vsphere.local",
            "role": "EDIT",
            "subject": "demouser",
            "subject_type": "USER"
        }],
        "description": "",
        "storage_specs": [{
            "limit": 0,
            "policy": "aa6d5a82-1c88-45da-85d3-3d74b91a5bad"
        }]
    }
Save and click Send.

This should return Status 204 No Content.
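If you end up assigning the same role and policy to many namespaces, it is handy to generate the PUT body instead of hand-editing it. A sketch mirroring the values copied from the demo1 output:

```python
import json

def namespace_update_payload(user, policy_id):
    """Body of the PUT .../namespaces/instances/<namespace> request:
    grant `user` the EDIT role and attach one storage policy."""
    return json.dumps({
        "access_list": [{
            "domain": "vsphere.local",
            "role": "EDIT",
            "subject": user,
            "subject_type": "USER",
        }],
        "description": "",
        "storage_specs": [{
            "limit": 0,
            "policy": policy_id,
        }],
    })

body = namespace_update_payload("demouser", "aa6d5a82-1c88-45da-85d3-3d74b91a5bad")
```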

Let’s flip back over into vCenter and check.

We have successfully created and set a user to a namespace as well as assigned a storage policy to the namespace.



Protect with PowerProtect Data Manager

Let’s create a Protection Policy to back up something. Login to Power Protect Data Manager (PPDM), and navigate to Protection > Protection Policies and click Add.

Give the policy a name and select the type of resource to back up. I will select Virtual Machines and click Next.

Select the type of protection you want.

Crash Consistent: Select this option to capture all of the virtual machine disks at the same time and back up the data to storage targets, creating a transactionally consistent backup for VMware virtual machines. Use this option for Windows and Linux virtual machines, and for guest operating systems with applications other than SQL Server.

Application Aware: Select this option to perform a virtual machine image backup that will create an application-consistent SQL Server backup with transaction log protection and truncation. Enabling virtual machine application aware snapshots also enables self-service recovery of SQL Server data from this backup by using the SQL Server Management Studio and Dell EMC ItemPoint. This option requires vSphere version 6.5 or 6.7 and VMware Tools version 10.1 or later.

Exclusion: Select this option to exclude assets in this group from protection activities and protection rule assignment.

Select the VMs to add to the Protection Policy and click Next.

We now need to create a schedule for the backup policy. Click Backup.

There are quite a few choices to make.

I will choose a daily backup with a retention of 30 days and a full backup once a week. Below, I click on the circled ‘i’ to make sure we get a full understanding of what’s going on. Click OK.

We now can review the schedule details.

We could make adjustments if needed, or perhaps create an SLA for the backup, set a retention lock, or change the vNIC to be used for the backup policy.

If we did create an SLA, it would look like the following screen. But I will cancel out of the SLA and just click Next.

We can select optimization options if we like; perhaps Capacity or Performance is more important. We could also exclude swap files, or disable quiesce for the VM’s file system. But generally speaking, these options in their default state look fine to me. Below, I’ll go into more detail on each option.

Optimize for Performance. This sounds like a great default option. Performance! yes!…

Optimize for Capacity. Perhaps I’m low on storage space on my DDVE and willing to make a trade off from Performance. And honestly, if my job depended on it, I’d rather have a slow back up than no back up. But that’s on me to make sure I’ve planned accordingly.

Exclude swap info. I can’t think of a reason why I’d want to back up swap.

Info on Quiesce System. This option is always a good idea.

Changing the settings back to default, as generally speaking that’s best: I want performance, I don’t care about swap, and I want anything in memory to be written or synced to disk. If this were an in-memory database, I would instead have chosen the Purpose Type of Application Aware, versus Crash Consistent.

Review all details and if we like it, click Finish. We can always click edit or back to make adjustments.

Since I am backing up VMs that are not running VMware Tools, I also add the VM root credentials.

I could also install VMware Tools and do it that way.

After installing VMware Tools, we can see it created the snapshot in vCenter.

We are getting a compression factor of 21x in DDVE.

In PPDM, we can also do a restore or view copies.

If I click View Copies, I can do a file level restore versus an entire VM or individual virtual disk (vmdk).

Below is Restore option.

If I click View Copies instead, I have more choices.

Select DD

From here, we can do a file level restore if desired.

We can restore to the original VM or to somewhere else.

DellEMC: PPDM integration with DDVE

Log into Power Protect Data Manager (PPDM) GUI and from dashboard expand Infrastructure and select Storage. In this setup, I’m connecting to a Data Domain Virtual Edition (DDVE).

Click Add to add the Data Domain.

Select Data Domain System. Give this instance a name and add the IP address or FQDN of your DataDomain. For Host Credentials, select Add Credentials.

Give this set of credentials a name and add the username previously created in the setup procedure of DataDomain. Enter the password of that user and then Save.

Select Verify for certificate.

Review certificate and select Accept.

Select Save to store the information and complete the connection from PPDM to DDVE.

The connection begins.

It’s thinking about it.

After a moment, I clicked on Cloud DR and then selected Protection Storage to refresh the page.

It failed. I forgot to make the ddboost user I created in DDVE an admin user, or limited-admin user. 🙁

Log in to DataDomain and modify the ddboost user we created. Since it doesn’t appear to want me to make it a full-on admin, I’m going to try to make it a limited-admin.

To update DataDomain, we will need to create a new user (I’m calling it ppuser) and make it the owner of the Storage Unit, under Protocols > DD Boost.

Under Administration, make a new user. I’m making ppuser an admin for now.

After creating a temporary user, select Protocols > DD Boost and select Storage Units and click the edit button for this storage unit.

Change the storage unit’s user to ppuser. This frees up the ddboost user we previously created so we can adjust its credentials.

After changing the user for the storage unit to ppuser, we can then go back to Administration and modify the ddboost user to either admin or limited-admin. I am going to select limited-admin.

Once those user rights have been corrected, go back to the Storage Unit section under Protocols > DD Boost and make the ddboost user the owner of that storage unit again.

To clean up, we can delete the temporary user we created as a placeholder for the Storage Unit owner.

Now, let’s flip back over to PPDM UI and rerun the discovery process.

Select the Manage Storage Units link and verify we can see it.

Here, we can see the previously created Storage Unit within DDVE.

To recap, we’ve deployed DDVE and PPDM. We have added the VxRail vCenter to PPDM. Now we can create Protection Policies for our workload.


PPDM Plugin and Discovery for vCenter 7

I will add a vCenter server running on VCF 4 with VxRail 7.

Log into Power Protect Data Manager and add a vCenter server as an asset source by expanding Infrastructure > Asset Sources, or, from the Dashboard, click Add under VMware vCenter.

Click Add and complete the vCenter details.

I am going to add host credentials for vCenter to PPDM.

Click Save.

Fill out FQDN and select the host credentials we just created. Click Verify.

Verify and Accept the certificate.

Now we can click Save.

It will start an automatic discovery.

Discovery has completed.


DellEMC PPDM: Power Protect Data Manager 19.5 OVA

Deploy the Power Protect Data Manager OVA.

Right-click your vCenter cluster, select Deploy OVF Template, and select your OVA from a local file.

Give the VM a name and click Next.

Select a compute resource and click Next.

Review details and click Next.

Select datastore and click Next.

Select your data protection network and click Next.

Configure a static IP, netmask, and gateway, and set a DNS IP and hostname. Don’t forget to update your DNS server with the FQDN.

Review details and click Finish.

Power on the VM.

Once the VM powers on, wait for it to get a DHCP IP and open a browser and go to that IP. Select New Install and click Next.

Input a license if you have it, otherwise continue with a 90 day evaluation license.

Now we will choose a password for Power Protect Data Manager.

Select your timezone and add a NTP server if you have one.

If you have a mail server, put that info into the next screen. Otherwise, toggle the Configure Email and AutoSupport button to disable email setup and click Next.

Review the summary screen and click Done. There should be information on the following screen if you filled it out.

Once setup completes, we can log into it.

Once we log in, we can configure the connection to the Data Domain.


Deploy and initial config DDVE

I will be deploying DellEMC Data Domain Virtual Edition (DDVE). In the next post, I’ll deploy Power Protect Data Manager 19.5 to connect to DDVE.

Login to vCenter and deploy the DDVE OVA filename: ddve-

Give DDVE VM a name.

Select your compute resource.

Review details and click Next.

Select the amount of resources to give the VM. I am going to give it 4 vCPUs with 16 GB of memory.

Select the datastore for the VM. I have a Datastore cluster I will pick.

Put the VM nics on your data protection network if you have one. I will also leave the IP allocation to Static for now. During the initial configuration, one interface will be DHCP and one will be Static.

Review details and click Finish.

Once the DDVE VM has been deployed, we will want to add a storage device to it. I like to give the data drive its own secondary SCSI controller, so I will add a new paravirtual SCSI controller and a 500GB vmdk.

Right click the VM after it’s been deployed and select edit VM settings.

Add the SCSI controller and change its type to VMware Paravirtual to match SCSI controller 0.

VMware Paravirtual SCSI controller.

Add hard drive (vmdk).

Select the hard drive and change the size to 500 GB and also change the connection to the new SCSI Controller.

After the new VMDK hard drive has been added it will look like below. Adding the hard drive is needed as it will be used for the data to be stored.

Power on the DDVE VM.

Once it is powered on, open a browser and go to the given DHCP IP on port 443: https://<ip_address>:443

Log in as sysadmin with the default password of changeme.

It will ask us to apply our license or use the pre-installed eval license. I will accept the Eval license and click Apply.

Read and accept the EULA.

Configure the network by selecting Yes. I want to set up one of the interfaces to be a static IP.

Update the hostname / domain name and gateway as desired.

Set the static IP and netmask as desired and click Next.

One interface must be DHCP as we see on the next screen. Click Next to continue.

Review details and click submit.

It will apply the updates and let you know if there are any issues.

If you changed the IP you were connected to, you can reconnect on the other (DHCP) IP to reach the GUI; either the static IP or the DHCP IP will work from here. I am going to disconnect and continue on the static IP. That brings me back to the Apply Your License screen, but that’s OK; I’ll just accept and continue to pick up the config wizard where I left off. This time I’ll say No to configuring the network, as we just did that. We could say Yes if we want to review, but I’ll say No and continue.

Select Yes to configure the File System.

Select the 500GB virtual drive we added to the VM and select Add to Tier.

Click Next once you’ve added the disk to the Active Tier.

I will not be configuring a cloud tier at this time. I will click on Next to continue with no Cloud Tier.

Select a protocol for backup. I will choose DD Boost. It will run an assessment to ensure there’s enough throughput / IOPS to meet the performance requirement.

Review the storage performance assessment and click Next.

If one clicks on the “View Assessment Qualification Requirements” link, you’ll see more details. If you were expecting different numbers, perhaps make some adjustments and click Rerun Deployment Assessment Tool; that might include a Storage vMotion of the VM to something faster. I’m going to click Close and then Next to continue.

Enable the file system and click Submit. It will create and enable the file system on the storage device we added.

Click OK after file system has been created and enabled.

Do we want to configure system settings? I don’t know. Let’s click Yes.

Change the default password and set the administrator email address. You can enable or disable email alerts once you put an email address in.

Update the mail server IP or FQDN. Give DDVE a location if desired.

Review details of system summary and click Submit.

Once you click submit go ahead and click OK on the following screen.

Next it will ask me to configure DD Boost Protocol. Sure, let’s click Yes.

I am going to add a user for the storage unit called ddboost. Note: when making the connection to PowerProtect Data Manager, it failed with non-admin rights; I later adjusted the rights of the ddboost user I create here to a limited-admin role.

I’m going to create a user called ddboost and give it a password and select Next.

Select Submit to accept.

DD Boost has been configured.

We could configure CIFS or NFS, but I’ve already selected DD Boost and that’s good enough for me at the moment. I am going to say No to the additional configuration steps for CIFS and NFS.

We now have Data Domain Virtual Edition up and running.


Later on, when making the connection to PPDM, I came back into DDVE and adjusted the credentials of my ddboost user to a limited-admin role. In order to do this, I had to create a new user, which I’m calling ppuser, as a holding username for the Storage Unit owner. I flipped the Storage Unit owner from ddboost to ppuser, adjusted the rights of ddboost, then went back to Storage Units and changed the owner back to ddboost.

Under Administration > Access > Local Users, we can create and modify users.


How to deploy VMware HCX: Workload Mobility, Hybrid Cloud

My on-premises datacenter is running VMware VCF 3.9.1 on VxRail 4.7.410. We want to deploy the VMware HCX OVA into the on-premises datacenter and then connect to a remote cloud instance of HCX. There will be two instances of HCX: one in the cloud and one in your datacenter.

Below is the on-prem datacenter.

The first thing we will need to do for the on-premises site is get the OVA for the Enterprise (Connector) site. If you log into the cloud site using port 443, it can be seen as below. Click on “Request Download Link” right below “Pair your remote data center with VMware HCX”. Depending on your cloud provider this step might vary, but get the Enterprise OVA.

If you clicked the Request link, you’ll see the following; select the down arrow next to VMware HCX. On inspection of Copy Link and Download VMware HCX, it’s pulling from So, that’s good.

The “Enterprise” or Connector site OVA looks like “VMware-HCX-Enterprise-3.5.2-15665455.ova”. While writing and double-checking this, I see there is actually a slightly newer OVA available, “VMware-HCX-Connector-3.5.3-16460208.ova”. I am using an OVA I previously downloaded; I may have to update this or create a new blog post to reflect newer content, but there’s only so many hours in the day.

The “Cloud” site OVA looks like “VMware-HCX-Installer-3.5.3-15914833.ova”. Notice the name difference.

Deploy the Enterprise OVA by downloading it from or by downloading it from the “cloud” site.

Select the compute resource. I am putting it in my edge resource group for now.

Review OVA product details from publisher.

Accept the EULA.

Select a datastore.

I will put this on my vCenter management network port group.

Give the admin account and the root account a password. Fill in the Network Properties, DNS settings, and NTP section as well. I also checked the Enable SSH checkbox, for fun, but I probably would not leave it checked in production; you can enable or disable the SSH service later as needed. None of this should be new information: just fill it out for static IP and network settings. If you leave it blank, you’ll get whatever is available from a DHCP server, if one exists.
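If you deploy this OVA often, the same wizard fields can be passed unattended with `ovftool` instead of clicking through the GUI. The sketch below just renders the command; the `--prop:` keys are hypothetical placeholders (the real property names vary by OVA, so dump them first with `ovftool <the-ova>` and substitute), and the datastore, network, and vi:// locator values are assumptions for my lab.

```python
# Render an unattended ovftool command for the HCX Connector OVA.
# --prop: keys below are HYPOTHETICAL; inspect the OVA for the real ones.

def build_ovftool_cmd(ova: str, vi_target: str, props: dict) -> str:
    flags = [
        "ovftool",
        "--acceptAllEulas",            # same EULA step as the GUI wizard
        "--powerOn",                   # power on after deployment
        "--name=a1-hcx01",             # assumed VM name
        "--datastore=VxRail-Virtual-SAN-Datastore",
        '--net:"VM Network"="vCenter-Mgmt-PG"',  # map to the mgmt port group
    ]
    flags += [f"--prop:{k}={v}" for k, v in sorted(props.items())]
    flags += [ova, vi_target]
    return " \\\n  ".join(flags)

cmd = build_ovftool_cmd(
    "VMware-HCX-Enterprise-3.5.2-15665455.ova",
    "vi://administrator@vsphere.local@a1-vcsa01.converged.local/DC/host/Cluster",
    {
        "mgr_ip_0": "10.0.0.50",       # hypothetical property names
        "mgr_prefix_ip_0": "24",
        "mgr_gateway_0": "10.0.0.1",
        "mgr_dns_list": "10.0.0.10",
        "admin_password": "VMware123!",
        "root_password": "VMware123!",
    },
)
print(cmd)
```

Leaving the network properties out, as in the DHCP example below, works the same way unattended: omit the corresponding `--prop:` entries.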

Verify your subnet info and DNS info.

Enable SSH

Here’s an example where I only filled out the passwords; the rest will be picked up from DHCP. I usually prefer to use static IPs, but to each their own.

After your appliance is deployed, power it on.

Once the HCX Enterprise / Connector OVA has been deployed, we will need to browse to the appliance IP on port 9443 to run the initial setup. Log in as admin with the password you specified during OVA deployment.

On the first login attempt it will prompt you to enter your NSX Enterprise Plus license key. Note: this is not your HCX Enterprise key. That key, if you have it, is optional, and you would put it on the next license key screen. Yes, there are two license screens back to back.

  • License screen 1: NSX Enterprise Plus License
  • License screen 2: HCX Advanced License

Once you log in as the admin user, you will see the screen below. You will need to put your NSX Enterprise Plus license key in this field. Also, notice it says Enterprise in the top right. If this were the “cloud” site, instead of Type: Enterprise it would say Type: Cloud in the top right. License screen 1.

If you enter an incorrect key, it will tell you with an error like the one below. In this sample, I tried to put in my “HCX Enterprise” key, but the first license page is actually asking for my NSX Enterprise Plus key. One can also select Activate Later if you like, but you will not get very far with actually using the product.

The next license screen is for the HCX Enterprise Upgrade Key. This is the HCX Enterprise key, which is optional. License screen 2.

You will also see this called the HCX Advanced Key on the port 9443 Configuration tab > Licensing page.

From here, you may see an update screen. If so, just let it run and check back later. I’ve seen time frames vary from one to several hours.

The next screen will ask you to select the city where your gear is located. Round Rock wasn’t a choice, but Austin is listed.

Give it a system name. I like making the system name the same as the short name of the FQDN.

HCX is not completely installed just yet. We still need to make it available as a plugin in vCenter.

Click Yes, Continue to add the vCenter server and optional NSX URL. I add my vCenter URL, giving it administrator@vsphere.local with my administrator password. I am not going to connect my on-prem NSX instance at this time. I will be stretching my local vCenter port group to the cloud, but I don’t have a need to stretch port groups from the cloud to the on-prem vCenter at this time. Once the L2 segment has been stretched, one can create VMs in the cloud on that stretched segment if you like.

The next screen will prompt you to put in your SSO URL or Platform Services Controller URL. Note: if you have VCF deployed, it deploys a secondary PSC, so I just use the first PSC shown below for my SSO / PSC URL in the HCX config. If you do not have an external PSC, your embedded PSC will work; just point the SSO / PSC URL to the vCenter URL again. It’s always a good idea to verify whether your vCenter is using the embedded PSC or an external PSC. If unsure, log into vCenter on port 5480 to double check. As I write this, I am aware that as of vSphere 7, external Platform Services Controllers are deprecated and no longer used.

After filling out the vCenter URL and SSO URL, it will ask if you want to restart now or later. Either way works, but now is good. It will restart the Web Service and Application Service, which you will find on the Appliance Summary page.

Now’s a good time to take a break while the Web Service and Application Service restart, as it will take several minutes.

There are other tabs up top we can review, like Configuration and Administration, but after the services restart, you can exit this UI on port 9443.

After it’s all said and done, your Dashboard will look like mine. Select the admin username in the top right and log out of the UI.

After several minutes have passed and the services have restarted in HCX, we can log in to vCenter and the new HCX plugin will be installed. If you are already in vCenter, log out and log back in.

You should see VMware HCX in the menu.


How to: VMware VCF NSX backup to external destination

By default, VCF backs up the NSX configuration to itself. However, we should point it to an external location.

This is a file-based backup of SDDC Manager and NSX Manager. When you first log into VCF SDDC Manager, you will see an orange banner with information about reconfiguring it to point to an external file-based backup destination for SDDC Manager / NSX Manager. First we will need to create a new user in vCenter, with a new group, for the job.

Log into vCenter, navigate to Administration > Single Sign On > Users and Groups, and create a new user with a password on the vsphere.local domain, not the localos domain.

Create a group and put that user in it.

The user we create in vCenter will be used to perform the authentication from SDDC Manager.

Scroll down to see that field; you will also need to remember an Encryption Passphrase. I made it the same as my NSX-T enable passphrase, but you do you.

Once we click Save, it will either accept the configuration or tell us something went wrong that we need to correct before it can be submitted and enabled. You should see it configuring the backup on successful form completion.
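The same form can also be driven through the SDDC Manager API (the VCF 4.x `/v1/system/backup-configuration` endpoint), which is handy when rebuilding a lab. The sketch below only builds the request body; the field names follow the schema as I recall it and the host, user, and passphrase values are made up for illustration, so confirm everything against your SDDC Manager’s API reference before sending.

```python
import json

# Build a backup-configuration body for SDDC Manager's API. Field names are
# assumptions from the VCF 4.x schema -- verify before use.

def backup_config_payload(server, username, password, passphrase,
                          directory="/backups", port=22, fingerprint=""):
    return {
        "encryption": {"passphrase": passphrase},   # the Encryption Passphrase from the form
        "backupLocations": [{
            "server": server,                       # external SFTP host
            "port": port,
            "protocol": "SFTP",
            "username": username,
            "password": password,
            "directoryPath": directory,
            "sshFingerprint": fingerprint,          # SFTP host key fingerprint
        }],
    }

# Hypothetical lab values
payload = backup_config_payload(
    "sftp01.converged.local", "svc-vcf-backup", "VMware123!", "MyEnablePassphrase!"
)
print(json.dumps(payload, indent=2))  # body you would send with a VCF API token
```

This is the same information the GUI form collects; the API route just makes it repeatable.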

It starts a backup configuration job that we can expand in SDDC Manager.

We can dig into the task by expanding the Tasks view and double-clicking the backup task to explore the sub-tasks as they progress.

I like how in SDDC Manager we can make the Tasks screen large and click on a task to view its sub-tasks. After a minute or two, the backup configuration task will complete.

If we check our external SFTP destination, we will see a backup file.

We have reconfigured SDDC Manager and NSX-T backups to an external location using SFTP.