Tuesday, 1 November 2016

Avigui.html Shows Err_Connection_Refused in Avamar Virtual Edition 7.1

Recently I started deploying and testing the EMC Avamar Virtual Edition, and one of the first issues I ran into was with the configuration. The deployment of the appliance is pretty simple: Avamar Virtual Edition 7.1 ships as a 7-Zip archive, which when extracted provides the OVF file. Using the Deploy OVF Template option I was able to get the appliance deployed. After this, as per the installation guide for AVE (Avamar Virtual Edition), I added the data drives, configured the networking for the appliance, and rebooted after a successful configuration.

However, when trying to access https://avamar-IP:8543/avi/avigui.html, I received the Err_Connection_Refused message. No matter what I tried, I was unable to get into the actual configuration GUI to initialize the services.

It turns out there are a couple of steps I had to run first. There is a component called AvInstaller which is responsible for package installations, so this had to be installed. To do this, SSH into the Avamar appliance as root (default password: changeme) and change to the below directory:
# cd /usr/local/avamar/src/
Run the AvInstaller bootstrap with the below command (the version string in the file name varies by build):
# ./avinstaller-bootstrap-version.sles11_64.x86_64.run

Once this completes, log back into the same avigui.html URL and you should see the login screen.
That's pretty much it.

Saturday, 29 October 2016

Migrating VDP From 5.8 and 6.0 To 6.1.x With Data Domain

You cannot upgrade a vSphere Data Protection appliance from 5.8.x or 6.0.x to 6.1.x directly, due to the difference in the underlying SUSE Linux version: the earlier versions of vSphere Data Protection used SLES 11 SP1, while 6.1.x uses SLES 11 SP3. Instead, we will be performing a migration.

This article only discusses migrating a VDP appliance from 5.8.x or 6.0.x with a Data Domain attached. If you had a VDP appliance without a Data Domain, you would choose the "Migrate" option in the vdp-configure wizard during the setup of the new 6.1.x appliance. However, this is not the path we will follow when the destination storage is an EMC Data Domain. A VDP appliance with a Data Domain is migrated by a process called checkpoint restore. Let's discuss these steps below...

For this instance let's consider the following setup:
1. A vSphere Data Protection 5.8 appliance
2. A Virtual Edition of the EMC Data Domain appliance (the process is the same for a physical Data Domain)
3. The 5.8 VDP was deployed as a 512GB deployment.
4. The IP address of this VDP appliance was 192.168.1.203
5. The IP address of the Data Domain appliance is 192.168.1.200

Pre-requisites:
1. In point (3) above you saw that the 5.8 VDP appliance was set up with 512 GB of local drives. The first question that comes up here is: why have a local drive when the backups reside on the Data Domain?
A vSphere Data Protection appliance paired with a Data Domain still has a local VMDK, which stores the metadata of the client backups. The actual client data is deduplicated and stored on the DD appliance, while the metadata of each backup is stored under the /data0?/cur directory on the VDP appliance. So, if your source appliance was a 512 GB deployment, then the destination has to be equal to or greater than the source deployment.

2. The IP address, DNS name, domain, and all other networking configuration of the destination appliance must be the same as the source's.

3. It is best to keep the same password on the destination appliance during the initial setup process.

4. On the source appliance make sure the Checkpoint Copy is Enabled. To verify this, go to https://vdp-ip:8543/vdp-configure page, select the Storage tab, click the Gear Icon and click Edit Data Domain. The first page displays this option. If this is not checked, then the checkpoint on the source appliance will not be copied over to the Data Domain, and you will not be able to perform a checkpoint restore.

The migration process:
1. SSH into the source VDP appliance and run the below command to get the checkpoint list:
# cplist

The output would be similar to:
cp.20161011033032 Tue Oct 11 09:00:32 2016   valid rol ---  nodes   1/1 stripes     25
cp.20161011033312 Tue Oct 11 09:03:12 2016   valid --- ---  nodes   1/1 stripes     25

Make a note of this output.
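The checkpoint flags can also be read programmatically. Here is an illustrative Python parser for the cplist output shown above; the field layout is assumed from this sample, so real output may vary:

```python
def parse_cplist(output):
    """Parse `cplist` output into a list of checkpoint dicts.

    Field layout assumed from the sample output above; real output may differ.
    """
    checkpoints = []
    for line in output.strip().splitlines():
        fields = line.split()
        checkpoints.append({
            "tag": fields[0],                 # e.g. cp.20161011033032
            "valid": "valid" in fields,       # checkpoint passed validation
            "rolling": "rol" in fields,       # 'rol' marks a rolling checkpoint
        })
    return checkpoints

sample = (
    "cp.20161011033032 Tue Oct 11 09:00:32 2016   valid rol ---  nodes   1/1 stripes     25\n"
    "cp.20161011033312 Tue Oct 11 09:03:12 2016   valid --- ---  nodes   1/1 stripes     25\n"
)

for cp in parse_cplist(sample):
    print(cp["tag"], cp["valid"], cp["rolling"])
```

The "rol" flag matters later: the rollback menu in step 11 distinguishes rolling from full checkpoints.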

2. Run the below command to obtain the Avamar System ID:
# avmaint config --ava | grep -i "system"
The output would be similar to:
  systemname="vdp58.vcloud.local"
  systemcreatetime="1476126720"
  systemcreateaddr="00:50:56:B9:3E:6D"

Make a note of this output as well. Here, 1476126720 is the Avamar System ID. This is used to determine which mTree this VDP appliance corresponds to on the Data Domain.
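As a side note, the System ID doubles as a Unix epoch timestamp (systemcreatetime). A quick Python sketch, using the sample value above, shows how it maps to the mTree name on the Data Domain and to the appliance's creation time:

```python
from datetime import datetime, timezone

# Value of systemcreatetime from `avmaint config --ava` (sample above).
system_create_time = 1476126720

# The mTree/LSU on the Data Domain is named after this ID.
mtree = f"avamar-{system_create_time}"

# The ID is also a Unix epoch timestamp: when the appliance was created.
created = datetime.fromtimestamp(system_create_time, tz=timezone.utc)

print(mtree)                                   # avamar-1476126720
print(created.strftime("%Y-%m-%d %H:%M UTC"))
```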

3. Run the below command to obtain the hashed Avamar root password. This is only used by VMware Support to test the GSAN login if the migration fails, so you can skip this step if you wish.
# grep ap /usr/local/avamar/etc/usersettings.cfg
The output would be similar to:
password=6cbd70a95847fc58beb381e72600a4cb33d322cc3d9a262fdc17acdbeee80860a285534ab1427048

4. Power off the source appliance.

5. Deploy the VDP 6.1.x appliance via the OVF template, provide the same networking details during the OVA deployment, and power on the 6.1.x appliance once the deployment completes successfully.

6. Go to the https://vdp-ip:8543/vdp-configure page and complete the configuration process for the new appliance. As mentioned above, during the "Create Storage" section in the wizard specify the local storage space, either equal to or greater than the source VDP appliance system. Once the appliance configuration completes, it will reboot the new 6.1.x system.

7. Once the reboot is complete, open an SSH session to the 6.1.x appliance and run the below command to list the available checkpoints on the Data Domain.
# ddrmaint cp-backup-list --full --ddr-server=<data-domain-IP> --ddr-user=<ddboost-user-name> --ddr-password=<ddboost-password>

Sample command from my lab:
# ddrmaint cp-backup-list --full --ddr-server=192.168.1.200 --ddr-user=ddboost-user --ddr-password=VMware123!
The output would be similar to:
================== Checkpoint ==================
 Avamar Server Name           : vdp58.vcloud.local
 Avamar Server MTree/LSU      : avamar-1476126720
 Data Domain System Name      : 192.168.1.200
 Avamar Client Path           : /MC_SYSTEM/avamar-1476126720
 Avamar Client ID             : 200e7808ddcde518fe08b6778567fa4f397e97fc
 Checkpoint Name              : cp.20161011033032
 Checkpoint Backup Date       : 2016-10-11 09:02:07
 Data Partitions              : 3
 Attached Data Domain systems : 192.168.1.200

The Avamar Server MTree/LSU and Checkpoint Name fields are what we need. The avamar-1476126720 is the Avamar mTree on the Data Domain; we obtained this System ID earlier in this article. The checkpoint cp.20161011033032 is also a checkpoint from the source VDP appliance that was copied over to the Data Domain.

8. Now, we will perform a cprestore to this checkpoint. The command to perform the cprestore is:
# /usr/local/avamar/bin/cprestore --hfscreatetime=<avamar-ID> --ddr-server=<data-domain-IP> --ddr-user=<ddboost-user-name> --cptag=<checkpoint-name>

Sample command from my lab:
# /usr/local/avamar/bin/cprestore --hfscreatetime=1476126720 --ddr-server=192.168.1.200 --ddr-user=ddboost-user --cptag=cp.20161011033032
Here, 1476126720 is the Avamar System ID and cp.20161011033032 is a valid checkpoint. Do not roll back if the checkpoint is not valid. If the checkpoint is not validated, you will have to run an integrity check on the source VDP appliance to generate a valid checkpoint and copy it over to the Data Domain system.
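To avoid typos in the long command line, the invocation can be assembled from the values gathered in steps 2 and 7. build_cprestore_cmd below is a hypothetical helper for illustration only; run the resulting command on the new 6.1.x appliance (the Data Domain password is prompted interactively):

```python
def build_cprestore_cmd(system_id, ddr_server, ddr_user, cptag):
    """Assemble the cprestore invocation from the previously gathered values.

    Hypothetical helper for illustration; flag names match the command
    shown in the article.
    """
    return (
        "/usr/local/avamar/bin/cprestore"
        f" --hfscreatetime={system_id}"
        f" --ddr-server={ddr_server}"
        f" --ddr-user={ddr_user}"
        f" --cptag={cptag}"
    )

print(build_cprestore_cmd(1476126720, "192.168.1.200",
                          "ddboost-user", "cp.20161011033032"))
```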

The output would be:
Version: 1.11.1
Current working directory: /space/avamar/var
Log file: cprestore-cp.20161011033032.log
Checking node type.
Node type: single-node server
Create DD NFS Export: data/col1/avamar-1476126720/GSAN
ssh ddboost-user@192.168.1.200 nfs add /data/col1/avamar-1476126720/GSAN 192.168.1.203 "(ro,no_root_squash,no_all_squash,secure)"
Execute: ssh ddboost-user@192.168.1.200 nfs add /data/col1/avamar-1476126720/GSAN 192.168.1.203 "(ro,no_root_squash,no_all_squash,secure)"
Warning: Permanently added '192.168.1.200' (RSA) to the list of known hosts.
Data Domain OS
Password:

Enter the Data Domain password when prompted. Once the password is authenticated, the cprestore will start. It copies the metadata of the backups for the displayed checkpoint onto the 6.1.x appliance.

The output would be similar to:
[Thu Oct  6 08:24:44 2016] (22497) 'ddnfs_gsan/cp.20161011033032/data01/0000000000000015.chd' -> '/data01/cp.20161011033032/0000000000000015.chd'
[Thu Oct  6 08:24:44 2016] (22498) 'ddnfs_gsan/cp.20161011033032/data02/0000000000000019.wlg' -> '/data02/cp.20161011033032/0000000000000019.wlg'
[Thu Oct  6 08:24:44 2016] (22497) 'ddnfs_gsan/cp.20161011033032/data01/0000000000000015.wlg' -> '/data01/cp.20161011033032/0000000000000015.wlg'
[Thu Oct  6 08:24:44 2016] (22499) 'ddnfs_gsan/cp.20161011033032/data03/0000000000000014.wlg' -> '/data03/cp.20161011033032/0000000000000014.wlg'
[Thu Oct  6 08:24:44 2016] (22498) 'ddnfs_gsan/cp.20161011033032/data02/checkpoint-complete' -> '/data02/cp.20161011033032/checkpoint-complete'
[Thu Oct  6 08:24:44 2016] (22499) 'ddnfs_gsan/cp.20161011033032/data03/0000000000000016.chd' -> '/data03/cp.20161011033032/0000000000000016.chd'

This continues until all the metadata is copied over; the length of the cprestore process depends on the amount of backup data. Once the process is complete you will see the below message.

Restore data01 finished.
Cleanup restore for data01
Changing owner/group and permissions: /data01/cp.20161011033032
PID 22497 returned with exit code 0
Restore data03 finished.
Cleanup restore for data03
Changing owner/group and permissions: /data03/cp.20161011033032
PID 22499 returned with exit code 0
Finished restoring files in 00:00:04.
Restoring ddr_info.
Copy: 'ddnfs_gsan/cp.20161011033032/ddr_info' -> '/usr/local/avamar/var/ddr_info'
Unmount NFS path 'ddnfs_gsan' in 3 seconds
Execute: sudo umount "ddnfs_gsan"
Remove DD NFS Export: data/col1/avamar-1476126720/GSAN
ssh ddboost-user@192.168.1.200 nfs del /data/col1/avamar-1476126720/GSAN 192.168.1.203
Execute: ssh ddboost-user@192.168.1.200 nfs del /data/col1/avamar-1476126720/GSAN 192.168.1.203
Data Domain OS
Password:
kthxbye

Once the data domain password is entered, the cprestore process completes with a kthxbye message.

9. Run the # cplist command on the 6.1.x appliance and you should notice that the checkpoint displayed in the cp-backup-list output is now listed among the 6.1.x checkpoints:

cp.20161006013247 Thu Oct  6 07:02:47 2016   valid hfs ---  nodes   1/1 stripes     25
cp.20161011033032 Tue Oct 11 09:00:32 2016   valid rol ---  nodes   1/1 stripes     25

The cp.20161006013247 is the 6.1.x appliance's local checkpoint, and cp.20161011033032 is the source appliance's checkpoint that was copied over from the Data Domain during the cprestore.

10. Once the restore is complete, we need to perform a rollback to this checkpoint. First, stop all core services on the 6.1.x appliance using the below command:
# dpnctl stop
11. Initiate the force rollback using the below command:
# dpnctl start --force_rollback

You will see the following output:
Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
Action: starting all
Have you contacted Avamar Technical Support to ensure that this
  is the right thing to do?
Answering y(es) proceeds with starting all;
          n(o) or q(uit) exits
y(es), n(o), q(uit/exit):

Select yes (y) to initiate the rollback. The next set of output you will see is:

dpnctl: INFO: Checking that gsan was shut down cleanly...
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
Here is the most recent available checkpoint:
  Tue Oct 11 03:30:32 2016 UTC Validated(type=rolling)
A rollback was requested.
The gsan was shut down cleanly.

The choices are as follows:
  1   roll back to the most recent checkpoint, whether or not validated
  2   roll back to the most recent validated checkpoint
  3   select a specific checkpoint to which to roll back
  4   restart, but do not roll back
  5   do not restart
  q   quit/exit

Choose option 3 and the next set of output you will see is:

Here is the list of available checkpoints:

     2   Thu Oct  6 01:32:47 2016 UTC Validated(type=full)
     1   Tue Oct 11 03:30:32 2016 UTC Validated(type=rolling)

Please select the number of a checkpoint to which to roll back.

Alternatively:
     q   return to previous menu without selecting a checkpoint
(Entering an empty (blank) line twice quits/exits.)

In the earlier cplist output you will notice that cp.20161011033032 has a timestamp of Oct 11, so choose option (1). The next output you will see is:
-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
You have selected this checkpoint:
  name:       cp.20161011033032
  date:       Tue Oct 11 03:30:32 2016 UTC
  validated:  yes
  age:        -7229 minutes

Roll back to this checkpoint?
Answering y(es)  accepts this checkpoint and initiates rollback
          n(o)   rejects this checkpoint and returns to the main menu
          q(uit) exits

Verify that this is indeed the checkpoint and answer yes (y) to confirm. The GSAN and MCS rollback begins and you will notice this in the console:

dpnctl: INFO: rolling back to checkpoint "cp.20161011033032" and restarting the gsan succeeded.
dpnctl: INFO: gsan started.
dpnctl: INFO: Restoring MCS data...
dpnctl: INFO: MCS data restored.
dpnctl: INFO: Starting MCS...
dpnctl: INFO: To monitor progress, run in another window: tail -f /tmp/dpnctl-mcs-start-output-24536
dpnctl: WARNING: 1 warning seen in output of "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start"
dpnctl: INFO: MCS started.

**If this process fails, open a ticket with VMware Support. I cannot provide the troubleshooting steps here as they are confidential. Add a note in your support ticket asking the assigned engineer to contact me if needed.**

If the rollback goes through successfully, you may be presented with an option to restore the Tomcat (EMS) database.

Do you wish to do a restore of the local EMS data?

Answering y(es) will restore the local EMS data
          n(o) will leave the existing EMS data alone
          q(uit) exits with no further action.

Please consult with Avamar Technical Support before answering y(es).

Answer n(o) here unless you have a special need to restore
  the EMS data, e.g., you are restoring this node from scratch,
  or you know for a fact that you are having EMS database problems
  that require restoring the database.

y(es), n(o), q(uit/exit):

Choose no here unless the database is causing issues in your environment. After this, the remaining services will be started. The output:

dpnctl: INFO: EM Tomcat started.
dpnctl: INFO: Resuming backup scheduler...
dpnctl: INFO: Backup scheduler resumed.
dpnctl: INFO: AvInstaller is already running.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

That should be pretty much it. When you log in to the https://vdp-ip:8543/vdp-configure page, you should see the Data Domain automatically in the Storage tab. If not, open a support ticket with VMware.

There are a couple of post-migration steps:
1. If you are using the internal proxy, un-register the proxy and re-register it from the VDP-configure page.
2. External proxies (if used) will be orphaned, so you will have to delete the external proxies, change the VDP root password, and re-add the external proxies.
3. If you are using guest-level backups, then the agents for SQL, Exchange, and SharePoint have to be re-installed.
4. If this appliance is replicating to another VDP appliance, then the replication agent needs to be re-registered. Run the below four commands in the same order to perform this:
# service avagent-replicate stop
# service avagent-replicate unregister 127.0.0.1 /MC_SYSTEM
# service avagent-replicate register 127.0.0.1 /MC_SYSTEM
# service avagent-replicate start

And that should be it...

Friday, 28 October 2016

VDP Stuck In A Configuration Loop

There have been a few cases logged with VMware where the newly deployed VDP appliance gets stuck in a configuration loop. Not to worry, there is now a fix for this. 

A little insight into what this is: we deploy a VDP (6.1.2 in my case) from the OVA template. The deployment goes through successfully, we power on the VDP appliance, and that too completes successfully. Then we go to the https://vdp-ip:8543/vdp-configure page and run through the configuration wizard. Everything goes well here too; the wizard completes and requests a reboot of the appliance. Once the appliance is rebooted, it makes certain changes, configures alarms, and initializes the core services, and a task called "VDP: Configure Appliance" is initiated. This task gets stuck somewhere around 45 to 70 percent. The appliance boots up completely; however, when you go back to the vdp-configure page, you will notice that it takes you through the configuration wizard again. You can get as far as the configure-storage section, after which you receive an error, since the appliance is already configured with storage. And no matter which browser you use or how many times you access the vdp-configure page, you will be taken back to the configuration wizard, stuck in an infinite loop.

This issue is almost exclusively seen with the vCenter 5.5 U3e release. This is because VDP uses the JSAFE/BSAFE Java libraries, which do not work well with the vCenter SSL ciphers in 5.5 U3e. To fix this, we switch VDP from the JSAFE provider to the standard Java JCE libraries.

Before we get to the fix, you can check the vdr-server.log from the time of the issue (/usr/local/avamar/var/vdr/server_logs) to verify the following:

2016-10-29 01:15:40,676 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: vcenter-ignore-cert ? true
2016-10-29 01:15:40,714 WARN  [Thread-7]-vi.VCenterServiceImpl: No VCenter found in MC root domain
2016-10-29 01:15:40,714 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: visdkUrl = https:/sdk
2016-10-29 01:15:40,715 ERROR [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: Failed To Create ViJava ServiceInstance owing to Remote VCenter connection error
java.rmi.RemoteException: VI SDK invoke exception:java.lang.IllegalArgumentException: protocol = https host = null; nested exception is:
        java.lang.IllegalArgumentException: protocol = https host = null
        at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:139)
        at com.vmware.vim25.ws.VimStub.retrieveServiceContent(VimStub.java:2114)
        at com.vmware.vim25.mo.ServiceInstance.<init>(ServiceInstance.java:117)
        at com.vmware.vim25.mo.ServiceInstance.<init>(ServiceInstance.java:95)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:297)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:159)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:104)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.createViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:96)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.getViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:74)
        at com.emc.vdp2.common.vi.ViJavaServiceInstanceProviderImpl.waitForViJavaServiceInstance(ViJavaServiceInstanceProviderImpl.java:212)
        at com.emc.vdp2.server.VDRServletLifeCycleListener$1.run(VDRServletLifeCycleListener.java:71)
        at java.lang.Thread.run(Unknown Source)

Caused by: java.lang.IllegalArgumentException: protocol = https host = null
        at sun.net.spi.DefaultProxySelector.select(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(Unknown Source)
        at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source)
        at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(Unknown Source)
        at com.vmware.vim25.ws.WSClient.post(WSClient.java:216)
        at com.vmware.vim25.ws.WSClient.invoke(WSClient.java:133)
        ... 11 more

2016-10-29 01:15:40,715 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: Retry ViJava ServiceInstance Acquisition In 5 Seconds...
2016-10-29 01:15:45,716 INFO  [Thread-7]-vi.ViJavaServiceInstanceProviderImpl: vcenter-ignore-cert ? true
2016-10-29 01:15:45,819 WARN  [Thread-7]-vi.VCenterServiceImpl: No VCenter found in MC root domain

The mcserver.out log file should show the below:

Caught Exception : Exception : org.apache.axis.AxisFault Message : ; nested exception is:
javax.net.ssl.SSLHandshakeException: Unsupported curve: 1.2.840.10045.3.1.7 StackTrace : AxisFault
faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException faultSubcode:
faultString: javax.net.ssl.SSLHandshakeException: Unsupported curve: 1.2.840.10045.3.1.7 faultActor:
faultNode:
faultDetail:
{http://xml.apache.org/axis/}stackTrace:javax.net.ssl.SSLHandshakeException: Unsupported curve: 1.2.840.10045.3.1.7

To fix this:

1. Discard the newly deployed appliance completely. 
2. Deploy the VDP appliance again. Go through the ova deployment and power on the appliance. Stop here, do not go to the vdp-configure page.

3. To enable the Java JCE library, we need to add a particular line to the mcsutils.pm file under the $prefs variable. The line is exactly as below:

. "-Dsecurity.provider.rsa.JsafeJCE.position=last "

4. Edit the following file:
# vi  /usr/local/avamar/lib/mcsutils.pm
The original content would look like:

my $rmidef = "-Djava.rmi.server.hostname=$rmihost ";
   my $prefs = "-Djava.util.logging.config.file=$mcsvar::lib_dir/mcserver_logging.properties "
             . "-Djava.security.egd=file:/dev/./urandom "
             . "-Djava.io.tmpdir=$mcsvar::tmp_dir "
             . "-Djava.util.prefs.PreferencesFactory=com.avamar.mc.util.MCServerPreferencesFactory "
             . "-Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl "
             . "-Djavax.net.ssl.keyStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Djavax.net.ssl.trustStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Dfile.encoding=UTF-8 "
             . "-Dlog4j.configuration=file://$mcsvar::lib_dir/log4j.properties ";  # vmware/axis

After editing it would look like:

 my $rmidef = "-Djava.rmi.server.hostname=$rmihost ";
   my $prefs = "-Djava.util.logging.config.file=$mcsvar::lib_dir/mcserver_logging.properties "
             . "-Djava.security.egd=file:/dev/./urandom "
             . "-Djava.io.tmpdir=$mcsvar::tmp_dir "
             . "-Djava.util.prefs.PreferencesFactory=com.avamar.mc.util.MCServerPreferencesFactory "
             . "-Djavax.xml.parsers.DocumentBuilderFactory=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl "
             . "-Djavax.net.ssl.keyStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Djavax.net.ssl.trustStore=" . MCServer::get( "rmi_ssl_keystore" ) ." "
             . "-Dfile.encoding=UTF-8 "
             . "-Dsecurity.provider.rsa.JsafeJCE.position=last "
             . "-Dlog4j.configuration=file://$mcsvar::lib_dir/log4j.properties ";  # vmware/axis

5. Save the file
6. There is no point in restarting the MCS using mcserver.sh --restart, as the VDP appliance is not yet configured and hence the core services are not yet initialized.
7. Reboot the appliance.
8. Once the appliance is booted up, go to the configure page and begin the configuration and this should avoid the configuration loop issue.

If the VDP was already deployed and the vCenter was upgraded later, follow the same steps through step 5. Instead of rebooting the VDP this time, restart the MCS using the mcserver.sh --restart --verbose command.

That's it. A permanent fix is being discussed with engineering for a future VDP release.

Update:
A permanent fix is in 6.1.3 version of VDP.

Tuesday, 25 October 2016

MCS Fails To Start On VDP. ERROR: gsan rollbacktime: xxxxxxx does not match stored rollbacktime: xxxxxxxx

Recently while working on a case, I came across the following issue. The MCS service was not coming up on a newly deployed VDP with existing drives. If I tried to start the MCS manually, the error received during this process was:

root@vdp58:#: dpnctl start mcs

Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: Starting MCS...
dpnctl: INFO: To monitor progress, run in another window: tail -f /tmp/dpnctl-mcs-start-output-26291
dpnctl: ERROR: error return from "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start" - exit status 1
dpnctl: ERROR: 1 error seen in output of "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mcserver.sh --start"
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

And if I tailed the log that was displayed during the start attempt:
tail -f /tmp/dpnctl-mcs-start-output-26291

The actual error message was displayed:
ERROR: gsan rollbacktime: 1475722913 does not match stored rollbacktime: 1475722911

This occurs when the GSAN has rolled back to a particular checkpoint but the MCS has not. Since the two are not on the same rollbacktime, the MCS service will not start.
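The two timestamps can be pulled straight out of the error line. Here is a small illustrative parser; the message format is taken from the log line shown above:

```python
import re

def extract_rollbacktimes(error_line):
    """Pull the GSAN and MCS-stored rollbacktimes out of the error message.

    Message format taken from the log line shown in this article.
    """
    m = re.search(
        r"gsan rollbacktime:\s*(\d+) does not match stored rollbacktime:\s*(\d+)",
        error_line,
    )
    return (int(m.group(1)), int(m.group(2))) if m else None

err = "ERROR: gsan rollbacktime: 1475722913 does not match stored rollbacktime: 1475722911"
gsan, stored = extract_rollbacktimes(err)
print(gsan, stored, gsan - stored)   # even a 2-second skew blocks MCS startup
```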

There are a couple of fixes available for this, and I recommend trying them in the following order.

Fix 1:
Restore MCS

Run the below command to begin the MCS restore:
# dpnctl start mcs --force_mcs_restore
In most cases, this too fails. For me, it did, with the error:

root@vdp58:#: dpnctl start mcs --force_mcs_restore

Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: Restoring MCS data...
dpnctl: ERROR: 1 error seen in output of "[ -r /etc/profile ] && . /etc/profile ; echo 'Y' | /usr/local/avamar/bin/mcserver.sh --restore --id='root' --hfsport='27000' --hfsaddr='192.168.1.203' --password='*************'"
dpnctl: ERROR: MCS restore did not succeed, so not restarting MCS
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]

If this worked for you and the MCS is restored and started successfully, then stop here. Else, move further. 

Fix 2:
Restore MCS to an older Flush

Basically, your MCS data is constantly backed up; this is what is called an MCS flush. It protects the MCS from server or hardware failures.
The MCS flushes its data to the Avamar server every 60 minutes as part of the system checkpoints. This is why I recommend rolling back to an MCS flush that has a valid local checkpoint on that VDP server. The older the MCS flush you roll back to, the more MCS data is lost.

The local checkpoints in my case were:

root@vdp58:#: cplist

cp.20161020033059 Thu Oct 20 09:00:59 2016   valid rol ---  nodes   1/1 stripes     25
cp.20161020033339 Thu Oct 20 09:03:39 2016   valid --- ---  nodes   1/1 stripes     25

To list your MCS Flush, run the below command:
avtar --archives --path=/MC_BACKUPS --count=7
The output is similar to:

   Date      Time    Seq       Label           Size     Plugin    Working directory         Targets
 ---------- -------- ----- ----------------- ---------- -------- --------------------- -------------------
 2016-10-20 15:25:20   372                      369201K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 14:45:20   371                      368582K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 13:45:18   370                      367645K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 12:45:17   369                      366716K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 11:45:19   368                      365779K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 10:45:17   367                      364842K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 09:45:17   366                      363762K Linux    /usr/local/avamar     var/mc/server_data

Here the numbers 372, 371, and so on are the MCS flush labels. The list goes all the way back to the day the VDP appliance was deployed.
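If you want to pull the flush labels out of this output programmatically, here is an illustrative Python parser; the column positions are assumed from the sample above, so real output may differ:

```python
def parse_flush_labels(output):
    """Extract MCS flush labels (the Seq column) from `avtar --archives` output.

    Column positions assumed from the sample output above.
    """
    labels = []
    for line in output.splitlines():
        fields = line.split()
        # Data rows start with a date like 2016-10-20; Seq is the third field.
        if fields and fields[0].count("-") == 2 and fields[0][:2] == "20":
            labels.append(int(fields[2]))
    return labels

sample = """\
   Date      Time    Seq       Label           Size     Plugin    Working directory         Targets
 ---------- -------- ----- ----------------- ---------- -------- --------------------- -------------------
 2016-10-20 15:25:20   372                      369201K Linux    /usr/local/avamar     var/mc/server_data
 2016-10-20 09:45:17   366                      363762K Linux    /usr/local/avamar     var/mc/server_data
"""

print(parse_flush_labels(sample))   # newest first: [372, 366]
```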

I will roll back my appliance to label 366.

The command would be:
mcserver.sh --restore --labelnum=<flush_ID>
In my case:
mcserver.sh --restore --labelnum=366
This starts a small interactive script where you need to accept the restore and provide the VDP IP (or FQDN) to proceed. Sample output:

root@vdp58:#: mcserver.sh --restore --labelnum=366

mcserver.sh must be run as admin, please login as admin and retry
root@vdp58:/usr/local/avamar/var/log/#: su admin
admin@vdp58:/usr/local/avamar/var/log/#: mcserver.sh --restore --labelnum=366
=== BEGIN === check.mcs (prerestore)
check.mcs                        passed
=== PASS === check.mcs PASSED OVERALL (prerestore)
--restore will modify your Administrator Server database and preferences.
Do you want to proceed with the restore Y/N? [Y]: y
Enter the Avamar Server IP address or fully qualified domain name to
restore from (i.e. dpn.your_company.com): <enter-vdp-fqdn-here>
Enter the Avamar Server IP port to restore from [27000]:

The port defaults to 27000. After this you will see a long stream of logging from the mcsrestore task, which makes changes to your MCS database.

If the restore to an older flush completes successfully, then start the MCS using:
mcserver.sh --start --verbose
This started the MCS successfully for me.

Now, I have also worked on a case where the MCS restore to an older flush completed with errors/warnings, causing mcserver.sh --start to fail with the same error:

ERROR: gsan rollbacktime: 1475722913 does not match stored rollbacktime: 1475722911

You can try rolling back to an even older MCS flush and see how that goes, but the chances are slim that the MCS will ever come up.

So if this fails, move to the next step:

Fix 3:
Update the MCS Database Manually. 

The last fix for this is to manually update the MCS database with the correct rollbacktime.

**This is a very tricky fix, and is not a best practice or a recommended method. If you are running a lab environment, then go ahead and try this. If you have production data at stake, stop! Involve EMC to check for other alternatives**

With that out of the way, the final fix would be in the order.

1. Connect to the MCS database. 

VDP runs on SUSE Linux and uses a PostgreSQL database. The command is the same as for connecting to any psql DB:
psql -p 5555 -U admin mcdb

The MCS database listens on port 5555.
We connect as the admin user because we want to make changes to the MCS database. If you only want a read-only view, connect to mcdb as the "viewuser" instead.

2. Once you connect, you see the following message:

admin@vdp58:#: psql -p 5555 -U admin mcdb

Welcome to psql 8.3.23, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

3. Run \d to list the MCS tables. The one we are interested in is "property_value".

4. Run the below query to list all the contents of this table:
select * from property_value;
The output is similar to:

      property       |            value
---------------------+------------------------------
 morning_cron_start  | -1
 evening_cron_start  | -1
 mcsnmp_cron_start   | 1
 clean_db_cron_start | 3
 rollbacktime        | 1475722911
 systemid            | 1476126720@00:50:56:B9:3E:6D
 hfscreatetime       | 1476126720
 systemname          | vdp58.vcloud.local
 restoredFlushTime   | 2016-10-10 19:45:00 PDT
 license_period_day  | 14
 license_buffer_pct  | 10
(11 rows)

The row that we are interested in is rollbacktime. Here we see the rollbacktime is 1475722911, which does not match the GSAN rollback time of 1475722913.

5. To update this, run the below query:
update property_value set value = <GSAN_rollbacktime> where property = 'rollbacktime';
So my query would look like:
update property_value set value = 1475722913 where property = 'rollbacktime'; 
Verify if the rollbacktime parameter is updated with the correct GSAN rollbacktime. 
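If you prefer to do this non-interactively, the same update can be generated and fed to psql in one shot. This is just a sketch of the commands above; the quoting around the value is my addition, since the value column also stores text entries:

```shell
# Build the UPDATE statement from the GSAN rollbacktime in the error
GSAN_TIME=1475722913
SQL="update property_value set value = '${GSAN_TIME}' where property = 'rollbacktime';"
echo "$SQL"
# Then run it against mcdb (same port and user as above):
# psql -p 5555 -U admin mcdb -c "$SQL"
# psql -p 5555 -U admin mcdb -c "select value from property_value where property = 'rollbacktime';"
```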

6. Switch to the admin mode of the VDP appliance (su admin) and then start the MCS using:
mcserver.sh --start --verbose
This should start the MCS, as we have now force-synced the MCS rollbacktime with the GSAN. 


If this does not work either, then I do not know what else will. 

Saturday, 22 October 2016

VDP Reports Incorrect Information About Protected Clients

When you connect to vSphere Data Protection in your web client, switch to the Reports tab and select Unprotected Clients, you will see a list of VMs that are not protected by VDP. When I say not protected by VDP, it means that they are not added to any backup jobs in that particular appliance. 

In some cases, you will see the virtual machine still listed under the Unprotected Clients section when the VM is already added to a backup job. This mostly occurs when a rename operation is done on the virtual machine. When a VM is renamed, the backup job picks up the new name, but the Unprotected Clients list under the Reports tab does not. 

Here is the result of a small test.

1. I have a backup job called "Windows" and a VM called "Windows_With_OS" is added under it. 


2. In the Unprotected Client section, you can see that this "Windows_With_OS" VM is not listed as it is already protected. 


3. Now, I will rename this virtual machine in my vSphere Client to "Windows_New"


4. Back in the vSphere Data Protection, you can see the name is updated in the backup job list, but not in the Reporting Tab.


You can see that Windows_New is now coming up under Unprotected Clients even though it is already protected. (Ignore the vmx file name as this is renamed for other purposes)


This is an incorrect report; the VDP appliance should sync these changes automatically with vCenter naming changes. Restarting the services, the proxy, or even the entire appliance will not fix this reporting. 

This can also be confirmed from the virtual machine name records in MCS and GSAN. To check this:

1. Open an SSH session (PuTTY) to the VDP appliance. Log in as admin and elevate to root.
2. Run the below command:
# mccli client show --recursive=true


So if you observe here, the MCS still picks up the old virtual machine name. (mccli only reports MCS-side information)

3. If you check what the GSAN shows, run the below command:
# avmgr getl --path=/vCenter-IP/Virtual-Machine-Domain
The vCenter IP and VM domain can be found in the above mccli output, which in my case gives /192.168.1.1/VirtualMachines. The output is:


avmgr only reports GSAN-side information, and it too shows the Client ID associated with the VM's older name. 

So the virtual machine naming in vCenter is out of sync with both the MCS and the GSAN. 

The solution:

You will have to force sync the naming changes between the Avamar server and the vCenter Server. To do this, you will need the proxycp.jar file, which can be downloaded from here.

A brief note about proxycp.jar: this is a Java archive containing a set of built-in commands that can be used to automate or run specific tasks from the command line. Some of these tasks would otherwise require changes across multiple locations and numerous files, and proxycp.jar does them for you through the required commands.

1. Once you download the proxycp.jar file, use WinSCP to copy it to the VDP appliance, into /root or preferably the /tmp folder. 

2. Then SSH into your VDP appliance, change directory to where the proxycp.jar file is, and run the following command:
# java -jar proxycp.jar --syncvmnames
The output:


The In Sync column was false for the renamed virtual machine, and the "syncvmnames" switch updated this value.

3. Now if I go back to the Unprotected Clients list, this VM is no longer listed, and running the mccli and avmgr commands mentioned earlier will show the updated name.

If something is a bit off for you in this case, feel free to comment.

Wednesday, 12 October 2016

vSphere Data Protection /data0? Partitions Are 100 Percent Full

VDP can be connected to a Data Domain or use its local deduplication store to hold all the backup data. This article specifically discusses the case where VDP is connected to a Data Domain. As far as the deployment goes, a VDP with a Data Domain attached still has local data partitions as well. The sda mount holds your OS partitions, and sdb, sdc, and so on are your data partitions (Hard Disk 1, 2, 3, and so on).

These partitions, data01, data02... (grouped as data0?), contain the metadata of the backups stored on the Data Domain. So, if you cd to /data01/cur and run "ls", you will see the metadata stripes.

0000000000000000.tab 
0000000000000008.wlg 
0000000000000012.cdt 
0000000000000017.chd

Before we get into the cause of this issue, let's have a quick look at what a retention policy is. When you create a backup job for a client or group of clients, you define a retention policy for the restore points. The retention policy determines how long your restore points are kept after a backup. The default is 60 days and can be adjusted as needed. 

Once the retention policy is reached, the restore point that has passed its expiration date is deleted. Then, during the maintenance window, Garbage Collection (GC) is executed, which performs the space reclamation. If you run status.dpn, you will notice "Last GC" and the amount of space that was reclaimed. 

Space reclamation by GC is done only on the data0? partitions. So, if your data0? partitions are at 100 percent, there are a few possible explanations. 

1. Your retention period for all backups is set to "Never Expire", which is not recommended.
2. The GC was not executed at all during the maintenance window. 

If you have set backups to never expire, go ahead and set an expiration date for them; otherwise your data0? partitions will frequently hit 100 percent space usage. 
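If you want to catch this before the partitions actually fill up, a small awk filter over df can flag any data0? partition above a usage threshold. This is my own sketch, not part of the appliance, and the 90 percent limit is an arbitrary example:

```shell
# Print any data0? partition at or above the given usage threshold.
# df -P gives one line per filesystem; column 5 is Use%, column 6 the mount.
df -P /data0* | awk -v limit=90 'NR > 1 {
    use = $5
    gsub(/%/, "", use)
    if (use + 0 >= limit) print $6 " is at " $5
}'
```

Dropping this into a cron job gives you an early warning instead of a dead maintenance window.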

To check if your GC was executed successfully or not, run the below command:
# status.dpn
The output you should look at is "Last GC". You will either see an error here, such as DDR_ERROR, or that the last GC was executed weeks ago. 

Also, if you log in to the vdp-configure page, you should notice that your maintenance services are not running. If this is the case, your space reclamation task will not run, and if the space reclamation task is not running, the metadata for expired backups is not cleared. 

To understand why this happens, let's have a basic look at how MCS talks to the Data Domain. The MCS runs on your VDP appliance. If there is a Data Domain attached to the appliance, the MCS queries the Data Domain via the DD SSH keys.

This means the VDP appliance holds a private key (ddr_key), and its matching public key is loaded on the Data Domain. With this key pair in place, there is no need for password authentication when MCS connects to the Data Domain: the VDP authenticates with its private key, and the Data Domain checks it against the VDP public key in its authorized key list. 

You can do a simple test to see if this is working by performing the below steps:

1. On the VDP appliance, start an ssh-agent and add the private key. 
# ssh-agent bash 
# ssh-add ~admin/.ssh/ddr_key
2. Once the key is added, you can log in to the Data Domain from the VDP SSH session directly, without a password. This is how the MCS works too. 
# ssh sysadmin@192.168.1.200
Two outcomes here: 

1. If there is no prompt to enter a password, it will directly connect you to the Data Domain console, and we are good to go. 

2. It will prompt you to enter a passphrase and/or a password to log in to the Data Domain. If you run into this, it means that the VDP's SSH public key is not loaded / unavailable on the Data Domain end.

For this issue, we will most likely be running into Outcome (2).

How to verify public key availability on data domain end:

1. On the VDP appliance run the following command to list the public key:
# cat ~admin/.ssh/ddr_key.pub
The output would be similar to:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw7XWjEK0jVPrT0z6JDmdKUDLfvvoizdzTpWPoCWNhJ/LerUs9L4UkNr0Q0mTK6U1tnlzlQlqeezIsWvhYJHTcU8rh
yufw1/YZLoGeA0tsHl6ruFAeCIYuf5+mmLXluPhYrjGMdsDa6czjIAtoA4RMY9WjAtSOPX3L2B73Wf3BScigzC/D83aX8GnaldwQU88qkfmhN+dpy2IdxiFm4
hnK+2m4XMtveBTq/8/7medeBTMXYYe7j7DVffViU4DizeEpGj2TBxHIe2dGe0epFDDc9wpa8W5a/XPOeiz4WelHfKtqS1hYUpFEQWXUOngwjDPpqG+6k1t
1HoOp/+OVC3lGw== admin@vmrel-ts2014-vdp

2. On the Data Domain end, run the following command:
# adminaccess show ssh-keys user <ddbost-user>
You can enter your custom DD Boost user, or sysadmin if it was itself promoted to the DD Boost user. 

In our case, we will not see the above-mentioned public key in the list. 
The DD has its own keys and the VDP has its private key, but the public key of the VDP is not available on the Data Domain end, which leads to a password prompt when connecting over SSH from the VDP to the DD. Because of this, the GC will not run, as the MCS will be waiting for a manual password entry. 

To fix this:

1. Copy the public key of the VDP appliance obtained from the "cat" command mentioned earlier. Copy the entire thing, starting from and including ssh-rsa, to the end, including -vdp
Make sure no extra spaces or line breaks are copied, else this will not work. 
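To avoid wrapping problems entirely, you can print the key as one unbroken line before copying it. The .pub file is normally a single line already; this sketch just guards against a copy that picked up line breaks from the terminal display:

```shell
# Emit the VDP public key with any embedded newlines stripped,
# followed by one trailing newline for a clean copy/paste
tr -d '\n' < ~admin/.ssh/ddr_key.pub; echo
```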

2. Login to DD with sysadmin and run the following command:
# adminaccess add ssh-keys user <ddboost-user>
You will see a prompt like below:

Enter the key and then press Control-D, or press Control-C to cancel.

Then, paste the copied key and press Ctrl+D (you will see the "key accepted" message):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw7XWjEK0jVPrT0z6JDmdKUDLfvvoizdzTpWPoCWNhJ/LerUs9L4UkNr0Q0mTK6U1tnlzlQlqeezIsWvhYJHTcU8rh
yufw1/YZLoGeA0tsHl6ruFAeCIYuf5+mmLXluPhYrjGMdsDa6czjIAtoA4RMY9WjAtSOPX3L2B73Wf3BScigzC/D83aX8GnaldwQU88qkfmhN+dpy2IdxiFm4
hnK+2m4XMtveBTq/8/7medeBTMXYYe7j7DVffViU4DizeEpGj2TBxHIe2dGe0epFDDc9wpa8W5a/XPOeiz4WelHfKtqS1hYUpFEQWXUOngwjDPpqG+6k1t
1HoOp/+OVC3lGw== admin@vmrel-ts2014-vdp
SSH key accepted.

3. Now test the login from VDP to DD using ssh sysadmin@192.168.1.200 and you should be directly connected to the data domain.

Even though we have re-established the MCS connectivity to the DD, we now have to manually run a garbage collection to force-clear the expired metadata. 

You first have to stop the backup scheduler and the maintenance service, else you will receive the below error when trying to run GC:
ERROR: avmaint: garbagecollect: server_exception(MSG_ERR_SCHEDULER_RUNNING)

To stop the backup scheduler and maintenance service:
# dpnctl stop maint
# dpnctl stop sched
Then, run the below command to force start a GC:
# avmaint garbagecollect --timeout=<number of seconds GC should run> --ava
4. Run df -h again; the space usage should be considerably reduced, provided all the backups have a sane retention policy set.
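Putting the whole manual GC sequence together, it looks like the sketch below. The 4-hour timeout is my assumption, so adjust it for your environment, and the dpnctl start lines restart what we stopped earlier:

```shell
# Manual GC sequence on the VDP appliance (sketch; 4-hour window assumed)
GC_HOURS=4
GC_TIMEOUT=$(( GC_HOURS * 3600 ))   # avmaint takes the timeout in seconds
echo "GC timeout: ${GC_TIMEOUT} seconds"
# dpnctl stop maint
# dpnctl stop sched
# avmaint garbagecollect --timeout=${GC_TIMEOUT} --ava
# dpnctl start sched
# dpnctl start maint
# df -h /data0*   # confirm the reclaimed space
```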


**If you are unsure about this process, open a ticket with VMware to drive this further**