Thursday, 7 April 2016

Re-register VDP 6.1 to vCenter Server

Sometimes you may need to re-register your VDP appliance with your vCenter Server, perhaps to use a different user account for the registration or to work around an issue with vCenter. The registration process is quite easy and will not affect any of your backup jobs or the backup data present in your deduplication store.

To re-register your VDP appliance with vCenter, follow the steps below:

From the below screenshot you can see the backup job that is already present on my appliance prior to the re-registration. 


Next, go to the vdp-configure page, which is available at:

https://<VDP_IP>:8543/vdp-configure

Log in to the appliance with your root credentials and you will see the page below. Click the gear icon and select vCenter Registration.


Please read the message below carefully. Do not change the vCenter hostname, IP, or port number during re-registration; doing so will cause your backup jobs to be lost.

However, re-registering with a different user should not cause any issues. 


Provide the new user details and keep your vCenter details the same. Click Next, review the changes and click Finish. 


The task below starts during the re-registration process. Once it completes, the appliance reconnects to the Web Client and logs you out of your vdp-configure session.


Now log back in to the Web Client and go to vSphere Data Protection. Connect to the required appliance, open the Backup tab, and you will see that your backup job is still retained.


That's it!

Tuesday, 5 April 2016

Cannot Open vdp-configure Page Or Check Status of VDP Services

When you try to open the https://<VDP_IP>:8543/vdp-configure page, you receive the message:

"This site can't be reached. ERR_CONNECTION_REFUSED"



When you open an SSH session to the VDP appliance and check the status of the services, that fails too. Run the command below to check the VDP service status:
root@vdp:~/#: dpnctl status
The error you will receive is:

mkdtemp: private socket dir: No space left on device.


I then tried to start the web services with:
root@vdp:~/#: emwebapp.sh --start
This also failed, with the error:

Waiting for postmaster to start ...........Failed to connect DBI:Pg:dbname=postgres;port=5558.ERROR: Failed to start the database.

Interesting!
I ran the below command to check the space on the VDP appliance partition.
root@vdp:~/#: df -h
In the output, I noticed that the partition /dev/sda2 was at 100 percent usage.


Run the command below to list the space used by each subdirectory, one level at a time:
root@vdp:~/#: du -h --max-depth=1 <directory>
Doing this, I found the directory below occupying nearly 40 percent of the space on sda2:
/usr/local/avamar-tomcat-7.0.42/logs
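To spot the largest consumers quickly, the du output can be piped through sort. This is a generic sketch; /tmp is only a stand-in for whichever directory you are investigating (e.g. /usr/local on the appliance):

```shell
# List per-directory usage one level down, largest first.
# Replace /tmp with the directory you are investigating.
du -h --max-depth=1 /tmp 2>/dev/null | sort -hr | head -5
```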
I removed all the old log files from this directory.

Also, if the free space does not change even after the logs in the above directory are removed, check the following directory:
root@vdp:/usr/local/avamar-tomcat-<your_version>/webapps/ROOT
You will see a logbundle.zip file, which is a manually generated log bundle. You can go ahead and remove this logbundle.zip file. Do not remove any other files in this directory.
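The log cleanup can be sketched with find. The block below runs against a scratch directory stand-in so nothing real is touched; substitute the actual Tomcat logs path on your appliance, and the filename pattern that matches your rotated logs:

```shell
# LOGDIR stands in for /usr/local/avamar-tomcat-7.0.42/logs on the appliance.
LOGDIR=/tmp/avamar-logs-demo
mkdir -p "$LOGDIR"
touch "$LOGDIR/catalina.2016-03-01.log" "$LOGDIR/localhost.2016-03-01.log" "$LOGDIR/catalina.out"
# Delete only the rotated, dated log files; keep the live catalina.out.
find "$LOGDIR" -name '*.2016-*.log' -type f -delete
ls "$LOGDIR"
```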

Ideally, stop the VDP services before removing the logs, using the command below:
root@vdp:~/#: dpnctl stop
In my case, this command also failed due to the lack of free space. If this occurs, go ahead and remove the files without stopping the services. I took this risk; the log files were cleared out, space was freed, and I was able to start the web services for VDP and log in to the appliance GUI.

Cheers!

Monday, 4 April 2016

Unable To Register VDP to vCenter in the vdp-configure Page

After deploying the OVF template for a new VDP appliance, we have to go to the vdp-configure page to register the appliance with vCenter.

Here, in the vCenter Registration page, after entering the username and vCenter details and clicking Test Connection, you run into the error below.

"Unable to verify vCenter listening on specified HTTP Port. Please re-check values and try again"



In my case, I was trying to configure the VDP appliance with port 80 for HTTP and 443 for HTTPS.

However, the vCenter was running on custom ports: 81 and 444.

To verify, log in to your vCenter (6.0) and select Administration > vCenter Server Settings > Advanced Settings.

Here there are two parameters that define your vCenter ports:

config.vpxd.rhttpproxy.httpsport 443
config.vpxd.rhttpproxy.httpport 80

443 and 80 are the default ports. If the values are different, you are using custom ports, and those ports need to be opened on the VDP appliance firewall.

You can use telnet to check the connection between the appliance and vCenter. Run this command from an SSH session on the appliance:
telnet <server_IP> <port_number>
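If telnet is not installed on the appliance, bash's built-in /dev/tcp redirection gives an equivalent reachability check. This is a sketch; the address and port below are placeholders for your vCenter and its custom port:

```shell
VCENTER=127.0.0.1   # placeholder: your vCenter IP or FQDN
PORT=444            # placeholder: the custom HTTPS port
# Attempt a TCP connection with a 3-second timeout.
if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$VCENTER/$PORT" 2>/dev/null; then
    echo "port $PORT reachable"
else
    echo "port $PORT closed or filtered"
fi
```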

To perform this:
1. Open an SSH session to the VDP appliance.
2. Change your directory to:
#: cd /etc/
3. Open the file "firewall.base" in a vi editor
4. Locate the line:
exec_rule -A OUTPUT -p tcp -m multiport --dport 53,80
5. Add your custom HTTP and HTTPS port values here and save the file.
6. Restart the firewall service using the following commands:
#: service avfirewall stop
#: service avfirewall start
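The edit in steps 4-6 can also be scripted with sed. The sketch below runs against a scratch copy of the file so nothing real is touched; 81 and 444 are the example custom ports from above, and you should always back up the real /etc/firewall.base before editing it:

```shell
# F stands in for /etc/firewall.base on the appliance.
F=/tmp/firewall.base.demo
echo 'exec_rule -A OUTPUT -p tcp -m multiport --dport 53,80' > "$F"
cp "$F" "$F.bak"   # keep a backup of the original rule set
# Append the custom HTTP/HTTPS ports (81,444 here) to the outbound multiport rule.
sed -i 's/--dport 53,80/--dport 53,80,81,444/' "$F"
grep dport "$F"
```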

Register the appliance again, and make sure you enter the custom ports in the HTTP and HTTPS fields during configuration.

That's it!

Saturday, 2 April 2016

Migrate Networking From Distributed Switch To Standard Switch

Written by Suhas Savkoor



In the previous article here, we saw how to migrate ESXi networking from Standard Switch to Distributed Switch. In this one, we will perform the reverse of this.

Step 1:
This is the setup that I have for my vDS after I had it migrated.


Here I have two port-groups: one for my virtual machines and one for my vmk management interface. Both of these are connected to two uplinks, vmnic0 and vmnic1.

Step 2:
Before creating a standard switch, I will remove one of the vmnics (physical adapters) from the vDS, as I do not have any free uplinks to give the standard switch. Select Manage Physical Adapters and remove the required uplink.


Step 3:
Now let's go ahead and create a new Standard Switch. Select the vSphere Standard Switch and click Add Networking


Step 4:
Choose Virtual Machine as the port-group type.


Step 5:
Select the available uplink that needs to be connected to this standard switch and click Next


Step 6:
Provide a Network Label to the virtual machine port-group on the standard switch.


Review the settings and complete the creation. You will now have one standard switch with one virtual machine port-group connected to an uplink. It's now time to begin the migration.


Step 7:
Go back to the distributed switch section and select Manage Virtual Adapters.


Step 8:
Select the required vmk and click Migrate


Step 9:
Select the vSwitch you want to migrate this port-group to.


Step 10:
Provide a Network label for this vmk port-group on the standard switch. If you are using a VLAN on the vDS port-group for this vmk, specify the same VLAN ID in the VLAN section to replicate it on the standard switch; otherwise the migration fails.


Review and complete, and the management vmk is migrated off the distributed switch to the standard switch.


Step 11:
To migrate virtual machine networking, go to Home > Networking, right-click the vDS, and select Migrate Virtual Machine Networking.


Step 12:
The source is the VM port-group on the vDS, in my case dvPortGroup, and the destination is the standard switch port-group we created earlier, VSS VM Portgroup.


Step 13:
Select the virtual machines that you want to migrate.


Review and finish. Once the migration completes, check the standard switch configuration to verify everything migrated successfully.



Well, that's it!

Migrate Networking From Standard Switch To Distributed Switch

In this article, let's see how to migrate your ESXi host networking from vSphere Standard Switch to vSphere Distributed Switch.

Step 1:
Here we see the current configuration of my standard switch for one of my ESXi hosts.


I have one standard switch with two port-groups: one for virtual machines and one for the management network. I have simplified the networking by eliminating additional VMkernel port-groups for vMotion, FT, and iSCSI, as the process to move them is the same. This standard switch has one uplink, vmnic1.

Step 2:
Let's go ahead and create a distributed switch. Go to Home and select Networking. Right click the required Datacenter and click New vSphere Distributed Switch.


Step 3:
Select the version of the distributed switch that you are going to create


Step 4:
Provide a name to this distributed switch and if you want to alter the number of uplink ports to this switch, you can do the same in the Number of Uplink Ports section. 


Step 5:
I am going to add hosts later as I like to review and make sure I got the setup right before I start moving anything off my standard switch. 


Review your settings and Finish in the Ready to Complete section. 

Step 6:
Navigate back to the Networking section; you can now see your distributed switch under the specified Datacenter. Right-click this switch and select Add Host.


Step 7:
Select the host that you want to add. You can see that I have two uplinks available for this host: vmnic0 and vmnic1. Make sure you have one free uplink when you add the host to the distributed switch; if you migrate your port-groups off the standard switch while the vDS has no uplinks, your networks will be disconnected.
Here, I will choose the free, unused adapter, vmnic0, to be added to the vDS.


Step 8:
As seen in the standard switch configuration, I had one VMkernel port-group, vmk0. I am not going to migrate this port-group right now. You can do it at this stage by simply using the drop-down under Destination Port Group and selecting the port-group on the vDS that your management network should migrate to.


Step 9:
I am not migrating any virtual machine networking at this stage either, because I will be doing both of these steps later. Review your settings and complete adding the host to the vDS.


Step 10:
Now we will migrate the VMkernel from the standard switch to the vDS. Select the host, click the Configuration tab, and browse to Networking > vSphere Distributed Switch. Click Manage Virtual Adapters.


Step 11:
Click Add to check the required vmk.


Step 12:
Select Migrate existing virtual adapter as we already have the vmk in the standard switch.


Step 13:
Select the required port-group and the destination port-group on the vDS under the "Port Group" section.


Review the settings and complete the migration. It will take a couple of seconds to finish. You can also run a continuous ping to the host to check network connectivity. Once migrated, you can review your vDS diagram.
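The continuous ping mentioned above can be as simple as the sketch below. HOST is a placeholder for your ESXi host's management IP; 127.0.0.1 is used here only so the sketch runs anywhere:

```shell
HOST=127.0.0.1          # placeholder: your ESXi host's management IP
# A dropped reply during the vmk migration will show up in this output.
ping -c 5 -i 1 "$HOST"
```

In practice you would leave this running (e.g. without -c) in a second terminal while the migration proceeds.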


Step 14:
Next, we will migrate the virtual machine networking from standard switch to the vDS. Go back to Home and select Networking. Right click the respective distributed switch and select Migrate Virtual Machine Networking.


Step 15:
The source network is your standard switch networking; from the drop-down, select the port-group on the standard switch where the virtual machines reside. In my case, the port-group on the standard switch is called VM Network. The destination port-group is on the vDS, and I want to migrate the VMs to a port-group called dvPortGroup.


Step 16:
Select the virtual machines you want to migrate to this port-group in the next section.


Review the changes and finish the migration. Once migrated, go back to your distributed switch under the Hosts and Clusters section and review the final configuration.


That's pretty much it. If you have additional port-groups, you will have to repeat the process. If your port-groups have VLAN IDs, you will have to create port-groups on the vDS with the same VLAN IDs, or the migration will fail.
If you are migrating iSCSI with port binding, you will have to remove the port binding, migrate the iSCSI port-group, and then reconfigure port binding after the migration.

Friday, 1 April 2016

Re-deploy VDP with existing storage

Written by Suhas Savkoor



You may come across instances where your VDP appliance has become corrupt or does not boot at all no matter what fix you implement. In such cases, the simplest resolution is to redeploy the appliance. When you redeploy a VDP appliance, you have the option to attach its existing storage.

Your VDP appliance comprises a number of disks, with Hard disk 1 always being the OS disk and the remaining disks storing backup data. When you redeploy the appliance, the OS disk is replaced completely, which means the backup jobs and email configuration for the VDP appliance are lost; once the appliance is redeployed, you will have to recreate your backup jobs. However, the backup data remains intact, as it resides on the storage disks.

What you need to do:

Here, before discarding my old appliance, you can see the backed-up virtual machine available under the Restore tab of VDP.


Make a note of the data disks for the VDP appliance from the Edit Settings of the virtual machine.


Here, Hard disk 1 is the OS disk hosting the backup jobs and SMTP settings for VDP, and Disks 2/3/4 are the storage disks where the backup data resides.

Power OFF the old VDP appliance and remove it from the inventory.

Follow the OVF deployment procedure to deploy the new VDP appliance. Once the virtual machine is deployed, power it on and wait for the boot to be complete. Then browse to the vdp-configure page:
https://<VDP_IP>:8543/vdp-configure

Go through the initial configuration step, until you come across Create Storage. In this wizard, select Attach Existing VDP Storage and click Next.


Browse to the location where the existing data disks for VDP reside. Select each disk one by one and mount it; there is no option to select all VMDKs at once. This has to be done for each disk, and each disk is validated as it is mounted.


Once all the disks are mounted, click Next, and the set of disks is validated.


Proceed to completion; each disk is imported and attached to the new VDP appliance, and a mount point is created for it. Reboot the appliance once this completes; it will then run a VDP: Configure task for about 15-20 minutes.


Once the appliance is configured, log in to the Web Client and connect to the appliance. Go to the Restore tab and you can see the backed-up data still available.


You will just have to recreate your backup jobs for the virtual machines.