Monday, 4 April 2016

Unable To Register VDP to vCenter in the vdp-configure Page

After deploying the OVF template for a new VDP appliance, you have to go to the vdp-configure page to register the appliance with vCenter.

In the vCenter registration page, after entering the username and vCenter details and clicking Test Connection, you run into the following error:

"Unable to verify vCenter listening on specified HTTP Port. Please re-check values and try again"



In my case, I was trying to configure the VDP appliance with port 80 for HTTP and 443 for HTTPS.

However, the vCenter was running on custom ports, 81 and 444.

In vCenter 6.0, log in and select the Administration tab > vCenter Server Settings > Advanced Settings.

Two parameters here define your vCenter ports:

config.vpxd.rhttpproxy.httpsport 443
config.vpxd.rhttpproxy.httpport 80

443 and 80 are the default ports. If yours are different, you are using custom ports and need to open them on the VDP appliance firewall.

You can use telnet to check the connection between the appliance and vCenter.
Run this command from an SSH session on the appliance:
telnet <server_IP> <port_number>
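If telnet is not installed on the appliance, the same reachability check can be sketched in plain shell using bash's /dev/tcp redirection. The IP address below is hypothetical; the ports are the example custom ports from this post:

```shell
#!/bin/sh
# Check whether a TCP port on a remote host is reachable.
# Assumes bash and coreutils 'timeout' are available on the appliance.
check_port() {
    host=$1
    port=$2
    # bash's /dev/tcp pseudo-device opens a TCP connection on redirect
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Example (hypothetical vCenter IP, custom ports from this post):
check_port 192.168.1.10 81
check_port 192.168.1.10 444
```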

To perform this:
1. Open an SSH session to the VDP appliance.
2. Change to the directory:
#: cd /etc/
3. Open the file "firewall.base" in vi.
4. Locate the line:
exec_rule -A OUTPUT -p tcp -m multiport --dport 53,80
5. Add your custom HTTP and HTTPS port values to this list and save the file.
6. Restart the firewall service using the following commands:
#: service avfirewall stop
#: service avfirewall start
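The file edit in steps 2-6 can be sketched with sed. For safety, this sketch edits a simulated copy of the rule line rather than the live file; the ports 81 and 444 are the custom ports used in this post:

```shell
#!/bin/sh
# Sketch: append custom vCenter ports (81 and 444 here) to the outbound
# multiport firewall rule. A temporary copy is edited in this sketch; on
# the appliance you would run the same sed against /etc/firewall.base.
WORKCOPY=$(mktemp)

# Simulate the relevant line from /etc/firewall.base:
echo 'exec_rule -A OUTPUT -p tcp -m multiport --dport 53,80' > "$WORKCOPY"

# Append the custom HTTP and HTTPS ports to the --dport list:
sed -i 's/--dport 53,80/--dport 53,80,81,444/' "$WORKCOPY"

cat "$WORKCOPY"

# On the appliance itself, the equivalent would be:
#   sed -i 's/--dport 53,80/--dport 53,80,81,444/' /etc/firewall.base
#   service avfirewall stop
#   service avfirewall start
```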

Register the appliance again, making sure you enter the custom ports in the HTTP and HTTPS fields during configuration.

That's it!

Saturday, 2 April 2016

Migrate Networking From Distributed Switch To Standard Switch

Written by Suhas Savkoor



In the previous article here, we saw how to migrate ESXi networking from a Standard Switch to a Distributed Switch. In this one, we will perform the reverse.

Step 1:
This is the setup that I have for my vDS after I had it migrated.


Here I have two port groups, one for my virtual machines and one for the vmk management network, both connected to two uplinks, vmnic0 and vmnic1.

Step 2:
Before creating a standard switch, I will remove one of the vmnics (physical adapters) from the vDS, as I do not have any free uplinks to give the standard switch. Select Manage Physical Adapters and remove the required uplink.


Step 3:
Now let's go ahead and create a new standard switch. Select vSphere Standard Switch and click Add Networking.


Step 4:
Choose Virtual Machine as the port-group type.


Step 5:
Select the available uplink that needs to be connected to this standard switch and click Next


Step 6:
Provide a Network Label to the virtual machine port-group on the standard switch.


Review the settings and complete the wizard. You now have one standard switch with one virtual machine port group connected to an uplink. It's time to begin the migration.
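For reference, the standard switch built in steps 3-6 could also be created from the ESXi shell with esxcli. This is a dry-run sketch: the commands are built and echoed rather than executed, and the names (vSwitch1, "VSS VM Portgroup", vmnic0) are the examples from this walkthrough, so substitute your own:

```shell
#!/bin/sh
# Dry-run sketch of creating a standard switch, attaching an uplink, and
# adding a VM port group via esxcli. Remove the 'echo' wrappers to run
# the commands for real on the ESXi host.
VSWITCH="vSwitch1"
PORTGROUP="VSS VM Portgroup"
UPLINK="vmnic0"

CMD_SWITCH="esxcli network vswitch standard add --vswitch-name=$VSWITCH"
CMD_UPLINK="esxcli network vswitch standard uplink add --vswitch-name=$VSWITCH --uplink-name=$UPLINK"
CMD_PORTGROUP="esxcli network vswitch standard portgroup add --vswitch-name=$VSWITCH --portgroup-name=\"$PORTGROUP\""

echo "$CMD_SWITCH"
echo "$CMD_UPLINK"
echo "$CMD_PORTGROUP"
```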


Step 7:
Go back to the distributed switch section and select Manage Virtual Adapters.


Step 8:
Select the required vmk and click Migrate


Step 9:
Select the vSwitch to which you want to migrate this port group.


Step 10:
Provide a network label for this vmk port group on the standard switch. If the vDS port group for this vmk uses a VLAN, specify the same VLAN ID here to replicate it on the standard switch; otherwise, the migration fails.


Review and complete, and you have the management vmk migrated off the distributed switch to the standard switch.


Step 11:
To migrate the virtual machines' networking, go to Home > Networking, right-click the vDS, and select Migrate Virtual Machine Networking.


Step 12:
The source is the VM port group on the vDS, in my case dvPortGroup, and the destination is the standard switch port group we created earlier, VSS VM Portgroup.


Step 13:
Select the virtual machines that you want to migrate.


Review and finish, and once the migration completes, check the standard switch configuration to verify everything migrated successfully.



Well, that's it!

Migrate Networking From Standard Switch To Distributed Switch

In this article, let's see how to migrate your ESXi host networking from vSphere Standard Switch to vSphere Distributed Switch.

Step 1:
Here is the current configuration of my standard switch on one of my ESXi hosts.


I have one standard switch with two port groups: one for virtual machines and one for the management network. I have simplified the networking by eliminating additional VMkernel port groups for vMotion, FT, and iSCSI, as the process to move them is the same. This standard switch has one uplink, vmnic1.

Step 2:
Let's go ahead and create a distributed switch. Go to Home and select Networking. Right click the required Datacenter and click New vSphere Distributed Switch.


Step 3:
Select the version of the distributed switch that you are going to create


Step 4:
Provide a name for this distributed switch; if you want to alter the number of uplink ports, you can do so in the Number of Uplink Ports section.


Step 5:
I am going to add hosts later, as I like to review and make sure I have the setup right before I start moving anything off my standard switch.


Review your settings and Finish in the Ready to Complete section. 

Step 6:
Navigate back to the Networking section; you can now see your distributed switch under the specified datacenter. Right-click this switch and select Add Host.


Step 7:
Select the host that you want to add. You can see that I have two uplinks available for this host: vmnic0 and vmnic1. Make sure you have one free uplink when you add the host to the distributed switch; otherwise, when you migrate your port groups off the standard switch and the vDS has no uplinks, your networks will be disconnected.
Here, I will choose the free unused adapter, vmnic0, to add to the vDS.


Step 8:
As seen in the standard switch configuration, I have one VMkernel port group, vmk0. I am not migrating it at this stage, though you could do it here by using the drop-down under Destination Port Group and selecting the vDS port group to which your management network should migrate.


Step 9:
I am not moving any virtual machine networking either, as I will be doing both of these steps later. Review your settings and complete adding the host to the vDS.


Step 10:
Now we will migrate the VMkernel port from the standard switch to the vDS. Select the host, click the Configuration tab, and browse to Networking > vSphere Distributed Switch. Click Manage Virtual Adapters.


Step 11:
Click Add to check the required vmk.


Step 12:
Select Migrate existing virtual adapter as we already have the vmk in the standard switch.


Step 13:
Select the vmk to migrate and, under the Port Group section, choose the destination port group on the vDS.


Review the settings and complete the migration; it takes a couple of seconds to finish. You can run a continuous ping to the host to check network connectivity. Once migrated, review your vDS diagram.


Step 14:
Next, we will migrate the virtual machine networking from standard switch to the vDS. Go back to Home and select Networking. Right click the respective distributed switch and select Migrate Virtual Machine Networking.


Step 15:
The source network is your standard switch networking; from the drop-down, select the port group on the standard switch where the virtual machines reside. In my case, that port group is called VM Network. The destination port group is on the vDS, and I want to migrate the VMs to a port group called dvPortGroup.


Step 16:
Select the virtual machines you want to migrate to this port group in the next section.


Review the changes and finish the migration. Once migrated, go back to your distributed switch under the Hosts and Clusters section and review the final configuration.


That's pretty much it. If you have additional port groups, you will have to repeat the process. If your port groups have VLAN IDs, you will have to create port groups on the vDS with the same VLAN IDs, else the migration will fail.
If you are migrating iSCSI with port binding, remove the port binding first, migrate the iSCSI vmk, and then reconfigure port binding after migration.
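To see which VLAN IDs your standard-switch port groups carry before recreating them on the vDS, the port-group list can be pulled with esxcli. It is built and echoed here as a dry run; on the ESXi host you would run the command directly:

```shell
#!/bin/sh
# Dry-run sketch: list standard-switch port groups and their VLAN IDs so
# matching VLANs can be configured on the vDS port groups beforehand.
CMD="esxcli network vswitch standard portgroup list"
echo "$CMD"
# The output includes a VLAN ID column for each port group.
```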

Friday, 1 April 2016

Re-deploy VDP with existing storage

Written by Suhas Savkoor



You will come across instances where your VDP appliance has gone corrupt or will not boot no matter what fix you implement. In such cases, the simple and easy resolution is to redeploy the appliance. When you redeploy a VDP appliance, you have the option to attach existing storage to it.

A VDP appliance comprises a number of disks, with hard disk 1 always being the OS disk and the remaining disks created to store backup data. When you redeploy the appliance, the OS disk is replaced completely, which means the backup jobs and email configuration for the VDP appliance are lost; once the appliance is redeployed, you will have to recreate your backup jobs. The backup data, however, remains intact because it resides on the storage disks.

What you need to do:

Here, before discarding my old appliance, you can see the backed-up virtual machine available under the Restore tab of VDP.


Make a note of the data disks for the VDP appliance from the Edit Settings of the virtual machine.


Here, Hard Disk 1 is the OS disk hosting the backup jobs and the SMTP settings for VDP, and disks 2/3/4 are the storage disks where the backup data is stored.

Power OFF the old VDP appliance and remove it from the inventory.

Follow the OVF deployment procedure to deploy the new VDP appliance. Once the virtual machine is deployed, power it on and wait for the boot to be complete. Then browse to the vdp-configure page:
https://<VDP_IP>:8543/vdp-configure

Go through the initial configuration steps until you come to Create Storage. In this wizard, select Attach Existing VDP Storage and click Next.


Browse to the location where the existing data disks for VDP reside. Select each disk one by one and mount them. There is no option to select all VMDKs at once and perform the mount; this has to be done per disk, and each disk is validated as it is mounted.


Once all the disks are mounted, proceed with Next and the set of disks is validated.


Proceed to complete; each disk is imported and attached to the new VDP appliance, and a mount point is created for each. Reboot the machine once completed, and a VDP: Configure task runs for about 15-20 minutes.


Once the appliance is configured, log in to the Web Client and connect to the appliance. Go to the Restore tab and you can see the backed-up data still available.


You will just have to recreate your backup jobs for the virtual machines.

Monday, 28 March 2016

Deploying External Proxy for VDP

Written by Suhas Savkoor



With VDP, you get 8 internal proxies by default, which lets you back up to 8 VMs concurrently.
The moment you configure an external proxy for VDP, the internal proxies are disabled. You can deploy up to 8 external proxy VMs, and each external proxy again supports up to 8 concurrent backups; with 2 external proxies, for example, you can run 16 concurrent backups. This does not mean you should deploy all 8 external proxies and run 64 backups concurrently; that would have a huge performance impact on your environment. Choose the number of external proxies based on what your environment requires.

To deploy an external proxy:

1. Login to your VDP management page.
https://vdp-IP:8543/vdp-configure

2. Click the Gear icon on the Proxy row and select Add External Proxy.


3. Provide the host where this proxy virtual machine should reside, along with the storage and the network. Provide a Standard vSwitch port group and make sure the underlying ESXi host has 4 CPU cores. If the proxy is deployed on a DVS or on a host with fewer than 4 CPU cores, you will run into the error "VDP: Failed to find CIM service on VM Image Proxy"


4. Enter the network configuration details for the proxy virtual machine.


5. Enter a name for the proxy virtual machine.


6. Click Finish and wait for the deployment to complete. You will see the following message on a successful deployment.


Click Close and now under the proxy list, you will see the external proxy in a working status. 

To delete an external proxy:

1. Click the gear icon again and select Manage Proxies. Review the information and proceed to remove the proxy.


Once the external proxy is removed, the internal proxy is not enabled automatically; this step has to be performed manually. Click the gear icon again and select Enable internal proxy.

Check the Enable internal proxy box and click Finish.


The proxy status will be in a warning state. Refresh it using the refresh button next to it after about 5-10 minutes, and you should be good to go.