Monday, 17 April 2017

VDP - Avamar Migration: Part-2: Configuring Avamar Virtual Edition 7.2

VDP - Avamar Migration: Part-1: Deploying Avamar Virtual Edition 7.2

In Part 1 we saw how to deploy and set up the basic Avamar 7.2 machine. In this article, we will configure the appliance for use.

Open a browser and type in:
https://<avamar-server-ip-or-fqdn>:7543/avi/avigui.html

This should bring up the below screen, and you can log in with root as the user and changeme as the password.


After logging in you will see the below page. In the top left corner there is a lock icon. Click it and enter the password Supp0rtHarV1. Once unlocked, click Install to begin the package installation.


The initialization will take a few minutes to complete, and once done you will be presented with the below window. Here, you will need to fill in all the fields marked with red exclamation marks.

Under Server Settings, set the Avamar Server Address to the hostname and select an appropriate time zone.

The number of storage nodes I will be using is 0, just like how my VDP is set up: a single-node server. This is the node you see when you run status.dpn.


Fill in the remaining fields and click Continue on the bottom right. This should begin the package installation, as seen below:


Once the configuration completes you will see the below task. At this point, your AVE setup is done.

Now you can SSH into the AVE machine with admin credentials and run the below command to verify system and service status.

admin@ave:~/>: dpnctl status
Identity added: /home/admin/.ssh/dpnid (/home/admin/.ssh/dpnid)
dpnctl: INFO: gsan status: up
dpnctl: INFO: MCS status: up.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: up.
dpnctl: INFO: Maintenance windows scheduler status: suspended.
dpnctl: INFO: Unattended startup status: enabled.
dpnctl: INFO: avinstaller status: up.

admin@ave:~/>: status.dpn
Tue Apr 18 00:49:01 IST 2017  [AVE] Mon Apr 17 19:19:01 2017 UTC (Initialized Mon Apr 17 17:51:04 2017 UTC)
Node   IP Address     Version   State   Runlevel  Srvr+Root+User Dis Suspend Load UsedMB Errlen  %Full   Percent Full and Stripe Status by Disk
0.0   10.109.10.169   7.2.1-32  ONLINE fullaccess mhpu+0hpu+0hpu   1 false   0.49 5209    33104   0.0%   0%(onl:8  )  0%(onl:8  )  0%(onl:8  )
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable

System ID: 1492451464@00:50:56:9A:52:F6

All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0hpu)
System-Status: ok
Access-Status: full

Last checkpoint: cp.20170417180550 finished Mon Apr 17 23:36:09 2017 after 00m 19s (OK)
No GC yet
No hfscheck yet

Maintenance windows scheduler capacity profile is active.
  WARNING: Scheduler is WAITING TO START until Wed Apr 19 20:30:00 2017 IST.
  Next backup window start time: Thu Apr 20 20:00:00 2017 IST
  Next maintenance window start time: Thu Apr 20 08:00:00 2017 IST

That should be it for the configuration part. Next, we will look into how to configure AVE with vCenter Server.

VDP - Avamar Migration: Part-3: Avamar Client And Configuring To vCenter Server

VDP - Avamar Migration: Part-1: Deploying Avamar Virtual Edition 7.2

Since VMware announced the end of vSphere Data Protection, there is an option to migrate existing deployments to EMC Avamar. More about the EOA can be found in this link here.

In this article, we will be looking at deploying Avamar Virtual Edition 7.2. You can go ahead and download the required version of AVE from the EMC download portal. The version I will be using is Avamar 7.2.1.

Log in to the Web Client or vSphere Client, select the ESXi host where you want to deploy your AVE, and select File > Deploy OVF Template. Browse to the location of the AVE download and add the file. Click Next.


Review the details of the OVF template and click Next.


Accept the EULA and click Next.


Provide a name for this AVE virtual machine. Click Next.


If available and required, select a resource pool in which to place this VM. Click Next.


Select the datastore where you want to deploy this. Remember, the VMDK bundled with the AVE OVF is just the appliance OS disk; the data drives are configured later, just like with a VDP appliance. Click Next.


Select a disk provisioning type. Thick provision is recommended. Click Next.


Select the network this AVE should be connected to. Click Next.


Review the changes and click Finish. Do not check Power On after deployment, because there are a couple of steps to be done once the OVF deployment completes.


Just like VDP, AVE comes with four supported backup storage configurations. You can refer to the below table to size your AVE accordingly.

Once you choose the deployment type, refer to the below table to plan the drive sizes. Just like in VDP, a 512 GB deployment will have 3 drives of 256 GB each. The additional space is for checkpoint maintenance overhead.

So the rule is:
Total size = GSAN capacity + 1/2 of GSAN capacity.

GSAN capacity would be the actual space for storing backup data.
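
To make the rule concrete with the 512 GB example above: Total size = 512 GB (GSAN capacity) + 256 GB (half of it) = 768 GB, which is provisioned as 3 data disks of 256 GB each. In other words, roughly a third of the provisioned space goes to checkpoint overhead rather than backup data.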


So go ahead and add three disks manually (depending on your AVE configuration) to this VM. Only thick provisioning is supported for AVE; I will be using thin because of space constraints in my lab. If you prefer to script this step instead of using the Edit Settings wizard, see the sketch below.
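
If you would rather not click through the wizard, the govc CLI from the govmomi project can add the disks as well. This is only a sketch with assumed names: it presumes govc is already pointed at your vCenter through the GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD environment variables, that the VM is named ave, and that the datastore is datastore1 (all hypothetical):

govc vm.disk.create -vm ave -ds datastore1 -name ave/data01.vmdk -size 256G -thick
govc vm.disk.create -vm ave -ds datastore1 -name ave/data02.vmdk -size 256G -thick
govc vm.disk.create -vm ave -ds datastore1 -name ave/data03.vmdk -size 256G -thick

The sizes above match the 0.5 TB configuration; adjust the count and size for your deployment type, and drop -thick only if, like me, you knowingly accept an unsupported thin-provisioned lab setup.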


Once the drives are added, power on the AVE virtual machine.

The default login is root with the password changeme.

First, we have to configure network settings for the AVE machine. After logging in to AVE from the VM console, run yast2 to begin the network configuration. You will see an interface similar to this:


Select Network Devices and then Network Services to begin the network configuration wizard, and you should see something similar to this:


You will need to set the IP configuration under Overview, the Hostname / DNS settings, and the Gateway under Routing.

Once the appliance is configured with networking, restart the guest and then verify the network with ping and nslookup.
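
For example, from the AVE console; here vcenter-prod.happycow.local is a placeholder name for my vCenter, ave.happycow.local is a hypothetical DNS record for the appliance, and 10.109.10.169 is the IP assigned to it, so substitute your own values:

# ping -c 3 vcenter-prod.happycow.local
# nslookup ave.happycow.local
# nslookup 10.109.10.169

Both the forward and the reverse lookup should resolve to the appliance. If this works, proceed to Part 2 in the below link.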

VDP - Avamar Migration: Part-2: Configuring Avamar Virtual Edition 7.2

Friday, 14 April 2017

VDP Configure Page Reports - Server Is Still Starting

You might sometimes restart your appliance and be presented with the message:

The server is still starting. Depending on the configuration, this could take up to 25 minutes. Try again later.

No matter how many times you try to log in, you will run into the same message.


Again, if you look at the vdr-configure.log, you will notice the following:

2017-04-15 05:44:49,242 INFO  [http-nio-8543-exec-3]-services.LoginService: Login service called with action: [login]
2017-04-15 05:44:49,243 INFO  [http-nio-8543-exec-3]-services.LoginService: Checking if the server is in a running state...
2017-04-15 05:44:49,243 INFO  [http-nio-8543-exec-3]-services.LoginService: Server is not running
2017-04-15 05:45:06,592 WARN  [pool-21-thread-1]-backupagent.BackupAgentUpdaterImpl: No proxy-clients are available.

This does not really help much in understanding what is going on. The cause here is a missing .av_sys_state_marker_running file. This file appears to record the state of the VDP appliance; if it goes missing, the server is unable to determine its state, which is why vdr-configure logs "Server is not running".

The file is located under /usr/local/avamar/var

Go to this directory and recreate this file using:
# touch .av_sys_state_marker_running
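
The marker is a hidden, zero-byte file, so you can confirm it is back in place with ls -la before retrying:

# ls -la /usr/local/avamar/var/ | grep av_sys_state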

Post this, refresh the vdp-configure page and you should have access.

Failed To Start Internal Proxy In VDP 6.x

Mostly after an upgrade, your backups fail with a status of "No eligible proxies" or "No data". In some cases you will not be able to run on-demand backups either; these fail with the error "Adhoc Backup Request Error - Exception":

root@vdp-dest:/data01/home/admin/#: mccli client backup-dataset --domain=/vcenter-prod.happycow.local/VirtualMachines --name=VM-C
1,22253,Client Adhoc Backup Request Error - Exception.

If you try to enable the internal proxy from the vdp-configure page, it will fail with the below error:


In the vdr-configure.log you will notice the following:

2017-04-15 03:50:52,463 ERROR [pool-22-thread-1]-cmdline.RuntimeExecImpl: avagent Info <5008>: Logging to /usr/local/avamarclient/var/avagent.log
2017-04-15 03:50:52,463 ERROR [pool-22-thread-1]-cmdline.RuntimeExecImpl: avagent Error <7531>: Unable to register clients/vdp-dest with Administrator 127.0.0.1:28001
2017-04-15 03:50:52,464 ERROR [pool-22-thread-1]-cmdline.RuntimeExecImpl:  'Could not reconcile proxy with vCenter.' (203)
2017-04-15 03:50:52,464 ERROR [pool-22-thread-1]-cmdline.RuntimeExecImpl: avagent Info <5008>: Logging to /usr/local/avamarclient/var/avagent.log

You will see the vCenter connections reported as down if you run the below command:
# mccli server show-services

You will see something similar to:

0,23000,CLI command completed successfully.
Name                               Status
---------------------------------- -----------------------------
Hostname                           vdp-dest.happycow.local
IP Address                         10.109.10.167
Load Average                       0.24
Last Administrator Datastore Flush 2017-04-15 04:45:00 IST
PostgreSQL database                Running
Web Services                       Error
Web Restore Disk Space Available   256,417,868K
Login Manager                      Running
snmp sub-agent                     Disabled
ConnectEMC                         Disabled
snmp daemon                        Disabled
ssh daemon                         Running
Data Domain SNMP Manager           Not Running
Remote Backup Manager Service      Running
RabbitMQ                           Not Running
Replication cron job               Not Running
/vcenter-prod.happycow.local       5 vCenter connection(s) down.

If you try to register the proxy from the command line using the below command, it will fail as well:
# /usr/local/avamarclient/etc/initproxy.sh start

avagent.d Info: Stopping Avamar Client Agent (avagent-vmware)...
avagent.d Info: Client Agent stopped.
avagent Info <5008>: Logging to /usr/local/avamarclient/var/avagent.log
avagent Error <7531>: Unable to register clients/vdp-dest with Administrator 127.0.0.1:28001
 'Could not reconcile proxy with vCenter.' (203)
avagent.d Info: Client activation error.
avagent Info <5008>: Logging to /usr/local/avamarclient/var/avagent.log
avagent Info <5417>: daemonized as process id 351
avagent.d Info: Client Agent started.

Registration Failed.
initproxy.sh FAIL: registerproxy failed

The cause:
There is a key called "ignore_vc_cert" which is set to false. VDP then waits indefinitely for a process to acknowledge the vCenter certificate warning, which never happens, and hence the proxy fails to start.

The fix:
1. Run the below command to verify the key value:
# grep -i ignore /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml

The output should be similar to:
     <entry key="ddr_ignore_snmp_errors" value="false" />
     <entry key="email_logs_tar_cmd" value="tar -cz --atime-preserve=system --dereference --ignore-failed-read --one-file-system --absolute-names" />
     <entry key="ignore_vc_cert" value="false" />

2. Edit the mcserver.xml file, change the ignore_vc_cert value to true, and save the file (a scripted alternative is sketched after these steps).

3. Switch to the admin user on the VDP (sudo su - admin) and restart the MCS using:
# mcserver.sh --restart

4. Register the internal proxy from the GUI; it should now succeed, and none of the vCenter connections will be reported as down.
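
If you would rather flip the key from the shell than in a text editor (the scripted alternative mentioned in step 2), a sed one-liner does the same edit. This is only a sketch, so take a copy of the file first:

# cd /usr/local/avamar/var/mc/server_data/prefs
# cp mcserver.xml mcserver.xml.bak
# sed -i 's/key="ignore_vc_cert" value="false"/key="ignore_vc_cert" value="true"/' mcserver.xml
# grep ignore_vc_cert mcserver.xml

The final grep should now show the entry with value="true"; then restart the MCS as in step 3.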

Hope this helps.

Tuesday, 11 April 2017

Unable To Configure VDP To vCenter - Unable to find this VDP in the vCenter inventory

So, you might run into a situation where you are unable to configure VDP with vCenter, failing with this error:
Unable to find this VDP in the vCenter inventory



In the vdr-configure.log you will notice the following. (Again, for all issues with the vdp-configure page, refer to the vdr-configure.log.)

2017-04-10 10:41:13,365 WARN  [http-nio-8543-exec-2]-vi.VCenterServiceImpl: No VCenter found in MC root domain
2017-04-10 10:41:13,365 INFO  [http-nio-8543-exec-2]-reconfig.VcenterConfigurationImpl: Failed to locate vCenter Client in Avamar, reconfiguration is required
2017-04-10 10:41:13,365 INFO  [http-nio-8543-exec-2]-sso.VmwareSsoServiceImpl: Getting SSL certificates for https://psc-prod:7444/lookupservice/sdk
2017-04-10 10:41:13,715 INFO  [http-nio-8543-exec-2]-services.VcenterConnectionTestService: Finished vCenter Connection test with result:
                <?xml version="1.0"?><vCenter><certValid>true</certValid><connection>true</connection><userAuthorized>true</userAuthorized><ave_in_vcenter>false</ave_in_vcenter><switch_needed>true</switch_needed><persistent_mode>true</persistent_mode><ssoValid>true</ssoValid><httpPortValid>true</httpPortValid></vCenter>

2017-04-10 10:41:13,025 WARN  [http-nio-8543-exec-2]-vi.VCenterServiceImpl: Failed to get root domain from MC
2017-04-10 10:41:13,025 WARN  [http-nio-8543-exec-2]-vi.VCenterServiceImpl: No VCenter found in MC root domain
2017-04-10 10:41:13,025 INFO  [http-nio-8543-exec-2]-vi.ViJavaServiceInstanceProviderImpl: visdkUrl = https://vc-prod:443/sdk
2017-04-10 10:41:13,337 INFO  [http-nio-8543-exec-2]-util.UserValidationUtil: vCenter user has sufficient privileges to run VDP.
2017-04-10 10:41:13,339 INFO  [http-nio-8543-exec-2]-network.NetworkInfoApi: Found IP Address: [10.116.189.178] link local? [false], site local? [true], loopback? [false]
2017-04-10 10:41:13,339 INFO  [http-nio-8543-exec-2]-network.NetworkInfoApi: Found IP Address: 10.116.189.178

2017-04-10 10:41:13,353 ERROR [http-nio-8543-exec-2]-vi.ViJavaAccess: getPoweredOnVmByIpAddr(): Cannot determine appropriate powered on AVE virtual machine with IP Address [10.x.x.x] since there exist many of them (2): type=VirtualMachine name=vdp-vm mor-id=vm-208, type=VirtualMachine name=Windows-Jump mor-id=vm-148


So in this case, 10.x.x.x is the IP of my VDP appliance, and the same IP is also reported by another VM in the inventory, Windows-Jump. If this is the case, determine whether you can remove the duplicate IP or change the IP of the VDP appliance. The configuration test should then complete without issues.
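
If you want to find every VM reporting a given address without clicking through the inventory, the govc CLI can query the guest IP property that VMware Tools publishes to vCenter. A sketch, assuming govc is configured against your vCenter (a hypothetical setup, and certainly not the only way to check):

govc find / -type m -guest.ipAddress 10.x.x.x

In my case this would have returned both the VDP appliance and Windows-Jump, confirming the conflict.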

Hope this helps.

Thursday, 6 April 2017

Farewell vSphere Data Protection - End of Availability.

On April 5, VMware announced the end of vSphere Data Protection. vSphere 6.5 will be the last release to include VDP, which means that after this you will need to migrate to a third-party backup product.

The EOA details can be found in this link here:

The EOA KB article is published here:

" On April 5th, 2017, VMware announced the End of Availability (EOA) of the VMware vSphere Data Protection (VDP) product.
VMware vSphere 6.5 is the last release to include vSphere Data Protection and future vSphere releases will no longer include this product. We have received feedback that customers are looking to consolidate their backup and recovery solutions in support of their overall software-defined data center (SDDC) efforts. As a result, we are focusing our investments on vSphere Storage APIs – Data Protection to further strengthen the vSphere backup partner ecosystem that provides you with a choice of solution providers.
  
All existing vSphere Data Protection installations with active Support and Subscription (SnS) will continue to be supported until their End of General Support (EOGS) date. The EOGS dates for vSphere Data Protection are published on the VMware Lifecycle Product Matrix under the dates listed for different versions. After the EOA date, you can continue using your existing installations until your EOGS dates.
VMware supports a wide ecosystem of backup solutions that integrate with vSphere and vCenter using vSphere Storage APIs – Data Protection framework. You can use any data protection products that are based on this framework. 

Beginning today, Dell EMC is offering you a complimentary migration to the more robust and scalable Dell EMC Avamar Virtual Edition. VMware vSphere Data Protection is based on Dell EMC Avamar Virtual Edition, a key solution for protecting and recovering workloads across the SDDC. To learn more about this offer please go to the Dell EMC website.

If you have additional questions please contact your VMware Sales Representative or read the FAQ document "


However, support for VDP will continue to follow the VMware SnS agreement, per this link:

Dell EMC will provide an offer to migrate VDP to AVE (Avamar Virtual Edition) here:

For any questions on the migration, refer to the FAQ below:

I will continue to post articles on VDP and answer your questions as long as I am supporting it. Starting today, I will also be exploring the vRealize Suite, with vRealize Operations to begin with.

Comment to leave your thoughts. 

Well, you never know what you got until it's gone. 

Thursday, 23 March 2017

Unable To Start Backup Scheduler In VDP 6.x

You might come across issues where the backup scheduler does not start when you try it from the vdp-configure page or from the command line using dpnctl start sched. It fails with:

2017/03/22-18:58:53 dpnctl: ERROR: error return from "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mccli mcs resume-scheduler" - exit status 1

And the dpnctl.log will have the following:

2017/03/22-18:58:53 - - - - - - - - - - - - - - - BEGIN
2017/03/22-18:58:53 1,22631,Server has reached the capacity health check limit.
2017/03/22-18:58:53 Attribute Value
2017/03/22-18:58:53 --------- -------------------------------------------------------------------------------
2017/03/22-18:58:53 error     Cannot enable scheduler until health check limit reached event is acknowledged.
2017/03/22-18:58:53
2017/03/22-18:58:53 - - - - - - - - - - - - - - - END
2017/03/22-18:58:53 dpnctl: ERROR: error return from "[ -r /etc/profile ] && . /etc/profile ; /usr/local/avamar/bin/mccli mcs resume-scheduler" - exit status 1

If you run the below command, you can see there are quite a few unacknowledged events indicating that the server has reached the capacity health check limit:

# mccli event show --unack=true | grep "22631"

1340224 2017-03-22 13:58:53 CDT WARNING 22631 SYSTEM   PROCESS  /      Server has reached the capacity health check limit.
1340189 2017-03-22 13:58:01 CDT WARNING 22631 SYSTEM   PROCESS  /      Server has reached the capacity health check limit.

To resolve this, acknowledge these events using the below command:

# mccli event ack --include=22631

Post this, start the scheduler either from the GUI or from the command line using dpnctl start sched.
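
To confirm, re-run the same event query (it should now return nothing) and check that the scheduler comes up; this reuses the commands we have already seen:

# mccli event show --unack=true | grep "22631"
# dpnctl start sched
# dpnctl status

dpnctl status should now report "Backup scheduler status: up."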

Hope this helps.