Blog: backup

I had a very strange issue come up with PlateSpin the other day. The PlateSpin protection job for a server hadn't been completing successfully since it was upgraded to a new build. The job would run all the way up to the point where it was taking the VSS snapshots of the source machine, then it would die with a very cryptic VSS-related error. This would leave the VSS System Writer in an error state when checked with vssadmin list writers. I engaged PlateSpin support, and after about two weeks of going back and forth, they finally cut me loose with a "call Microsoft" recommendation. I kept troubleshooting and found that when I tried to clear out the VSS snapshots by changing their maximum space setting to 300 MB (which is supposed to be the minimum required for an x86 system), I would get an error pop-up saying that 300 MB was not a sufficient amount of space for snapshots on that volume. By process of elimination, I finally found that 1800 MB was the magic number for the c: volume. However, even though the drive had over 2.5 GB of free space, the PlateSpin job would still fail.
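For reference, the checking and resizing can all be done from the command line. This is a minimal sketch using the Windows Server 2003 flavor of vssadmin (the Windows 2000/XP build only supports the list commands), with the 1800 MB figure simply being the value that worked for this particular c: volume:

:: Check writer state after a failed job; healthy writers report
:: "State: [1] Stable" and "Last error: No error".
vssadmin list writers

:: Adjust the snapshot storage limit for c: without using the GUI.
vssadmin resize shadowstorage /for=C: /on=C: /maxsize=1800MB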

As a last resort, I changed the storage location for the c: partition's VSS snapshots to the d: partition (which had over 20 GB of free space), then ran the job again. This time the job ran a little farther, then died when trying to snapshot the f: partition (which was only used for a page file). After moving the VSS snapshots for the f: partition over to d: as well, the job ran successfully…finally. What was very strange is that the VSS snapshots would always reserve the same amount of space for each partition as the maximum setting for the c: partition. I could change the maximum space setting for snapshots on the c: partition, run the job again, and the snapshots for all partitions would match the c: partition no matter what maximum setting I had specified for the individual partitions. Snapshotting the partitions with vssadmin did not reproduce this, and neither did backing the server up with CommVault (which also uses VSS)…only PlateSpin. It looks to me like their software has a bug in it. I have emailed the support tech I was working with to explain what I found…no response so far.
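If you need to make the same relocation from the command line, a sketch with Server 2003's vssadmin would look like the following (the drive letters and the 2 GB cap are just this example's values):

:: Existing shadow copies on the volume must be deleted before the
:: storage association can be removed.
vssadmin delete shadows /for=C: /all /quiet

:: Shadow storage cannot be moved with "resize"; delete the association
:: and re-create it on the volume that has the free space.
vssadmin delete shadowstorage /for=C: /quiet
vssadmin add shadowstorage /for=C: /on=D: /maxsize=2GB

:: Repeat for the page-file partition, then confirm the new associations.
vssadmin delete shadowstorage /for=F: /quiet
vssadmin add shadowstorage /for=F: /on=D: /maxsize=2GB
vssadmin list shadowstorage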


 

This gotcha should only apply to Acronis Backup & Recovery 10 Advanced Workstation.  The "Advanced" version is the enterprise version and comes with a lot more components, including a separate license server.  As I was troubleshooting a different backup issue, I noticed a log entry that said, "Cannot check the license key.  Either Acronis License Server is unavailable, or the license key is corrupt.  Check connectivity to Acronis License Server and run it to manage licenses".  Additionally, the management console told me I had 17 days before the license would expire and the software would stop working.

The license server is installed locally on my machine, so I didn’t understand why it wouldn’t be able to communicate.  However, when I opened the license server and clicked “Manage licenses on the local machine” I got a pop-up error that said, “E000807D5: Computer ‘localhost’ is not found”.  About this time, I remembered I had disabled some of the Acronis services that I thought were only necessary in an enterprise deployment.  After playing with them, I discovered the “Acronis Remote Agent” service is required for the license server to communicate.  After enabling this service, the license error message went away.
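If you have disabled the same services, they can be checked and re-enabled from a command prompt. A minimal sketch; note that "AcronisAgent" is my assumption for the short service name behind the "Acronis Remote Agent" display name, so verify it against your own service list first:

:: Find the actual short name if it differs ("sc query state= all"
:: lists every installed service).
sc query AcronisAgent

:: Set the service back to automatic start and bring it up.
sc config AcronisAgent start= auto
net start AcronisAgent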

One security note: The Acronis Remote Agent service is also used for remote connectivity to the system so that IT staff can remotely manage the Acronis software. For that reason, I went into the firewall rules and blocked all of the Acronis services from communicating in or out through the firewall.
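On Vista or Windows 7, where Acronis Backup & Recovery 10 also runs, the inbound half of that could be sketched with the built-in Advanced Firewall; the executable path below is purely a placeholder, so substitute the actual Acronis binaries on your system:

:: Block inbound connections to the remote agent binary (the path is
:: hypothetical, not the verified install location).
netsh advfirewall firewall add rule name="Block Acronis Remote Agent" dir=in action=block program="C:\Program Files\Common Files\Acronis\Agent\agent.exe"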


 

We had an issue last week where backups of an Exchange 2007 server began to fail after we removed the EMC Replication Manager & EMC Solutions Enabler apps. The errors we began to see in the Application log looked like this:

Volume Shadow Copy Service error: A critical component required by the Volume Shadow Copy service is not registered.  This might happen if an error occurred during Windows setup or during installation of a Shadow Copy provider.  The error returned from CoCreateInstance on class with CLSID {bd902507-4491-4001-acdd-a540a2cad34b} and Name HWPRV is [0x80040154].

I went through the process described at http://support.microsoft.com/kb/940032 to re-register all the VSS components, but it didn't work. After digging into the VSS command line, I saw the following returned from issuing "vssadmin list providers":

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5} 
   Version: 1.0.0.7

Provider name: 'ERM VSS Provider'
   Provider type: Hardware
   Provider Id: {e929a027-cf8c-47bf-90a3-cd4241c7cace}
   Version: 1.0

It appeared that the EMC VSS provider was not removed when I uninstalled the software. EMC's online knowledgebase said the fix was to reinstall the apps, start the VSS service, and then uninstall the apps again; the implication being that the provider is not removed if the VSS service isn't running at the time the apps are uninstalled. I had a really hard time getting that stuff installed to begin with, so I didn't want to start that again. I did some testing on a VM and found that I could remove the provider by just deleting the registry key matching the Provider Id listed by the vssadmin list providers command:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\Providers\{e929a027-cf8c-47bf-90a3-cd4241c7cace}
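From an administrative command prompt, the whole cleanup is a short sketch like this (export the key first so you can back out if needed):

:: Keep a copy of the provider key in case it has to be restored.
reg export "HKLM\SYSTEM\CurrentControlSet\Services\VSS\Providers\{e929a027-cf8c-47bf-90a3-cd4241c7cace}" erm-provider.reg

:: Stop VSS, delete the orphaned ERM provider key, restart, and verify.
net stop vss
reg delete "HKLM\SYSTEM\CurrentControlSet\Services\VSS\Providers\{e929a027-cf8c-47bf-90a3-cd4241c7cace}" /f
net start vss
vssadmin list providers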

After restarting the VSS service, the vssadmin list providers command produced this output:

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7

Success! This problem (and the fix) could potentially apply to any application that installs a third-party VSS provider.


 

I received a new hard drive recently and was going to use ThinkVantage Rescue and Recovery to restore my current system to the new drive. During this process I discovered that Rescue and Recovery requires you to restore a backup from the same location where the backups were made. I have my backups configured to go to an external HD (using eSATA with a PCMCIA card). The external hard drive is labeled "2nd Hard Drive" in R&R. When I got the new drive in, I ran a full backup (3-4 hours), then swapped hard drives. I was able to boot into the R&R environment, but it didn't have drivers for my PCMCIA card, so I couldn't restore over eSATA. My external HD also has connections that allow it to be used as a USB drive. I was able to boot into R&R off of the external HD over USB, but no backups were showing up under "USB Drive" (or any other backup location). I was afraid R&R was saving the backup location as part of the backup and just couldn't see my "2nd Hard Drive" backups on a USB drive. I swapped physical disks again, did another full backup over USB (8-10 hours), swapped disks again, and was able to boot and restore my "USB backup". This is just something to consider when you're choosing a location to store your Rescue and Recovery backups.


 

My external hard drive that I back up my home computer to crashed the other day. My replacement options included buying another external HDD at $200+ or finding another way to back up my data. I have too much data to back up to DVD. There are always the SOHO RAID solutions, but those are $500+. What I wanted was the "no crash, no maintenance" backup solution. I found an online backup company called Mozy (http://www.mozy.com/). Mozy is an EMC company, and for $4.95 per computer per month you get unlimited online backups. I figured I would get my money's worth out of that high-speed connection and give it a try. As of yet, I don't know if it is really "unlimited", but I have about 100 GB uploaded so far and still going. You can retrieve via the web, the locally installed client app, or by ordering a DVD collection. At that price, I can use this backup service for 3 years for the price of an external HDD. It's not as fast, obviously, but my connection is idle most of the time anyway. Plus now I have an offsite backup of my files. So check it out if you're looking for a backup solution for your home computer.


 

G-Archiver, a shareware application used to back up Gmail accounts, was reported to be storing usernames and passwords.

Jeff Atwood reports that he received the following "hair-raising tale" from Dustin Brooks via e-mail:

"I was looking for a way to back up my gmail account to a local drive. I've accumulated a mass of important information that I would rather not lose. During my search I came across G-Archiver, I figured what the heck I'll give it a try.
It didn't really have the functionality I was looking for, but being a programmer myself I used Reflector to take a peek at the source code. What I came across was quite shocking. John Terry, the apparent creator, hard coded his username and password to his gmail account in source code. All right, not the smartest thing in the world to do, but then I noticed that every time a user adds their account to the program to back up their data, it sends an email with their username and password to his personal email box! Having just entered my own information I became concerned.
I opened up a browser and logged in to gmail using his account information. It still worked.
Upon getting to the inbox I was greeted with 1,777 emails with account information for everyone who had ever used the software and right at the top was mine. I decided to go ahead and blast every email to the deleted folder and then empty it. I may have accidentally changed the password and security question to something I don't remember as well, whoops, my bad. I also contacted google to erase this account as I didn't see a way to delete it myself."

For more details, visit http://www.codinghorror.com/blog/archives/001072.html or http://www.informationweek.com/news/showArticle.jhtml?articleID=206902839

This is a perfect example of why end users need to be very conscious of what they install, and why companies need to have adequate policies and procedures related to the installation and use of software. As we have said in our company before, "Paranoia is not necessarily a bad thing."


 

The Symantec Mail Security appliance software uses passive-mode FTP when backing up its configuration. Since this device is usually installed in the DMZ, an ISA Server publishing rule needs to be created to publish your internal FTP server, and that rule needs to be edited to support passive mode with a defined port range.
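Limiting the FTP server's passive data ports to a known range makes that ISA rule practical. If the internal FTP server happens to be IIS 6, a sketch of pinning the range from the command line (the 5500-5700 range is arbitrary) looks like this:

:: Set the passive data-port range for the IIS FTP service.
cd /d %SystemDrive%\Inetpub\AdminScripts
cscript adsutil.vbs set /MSFTPSVC/PassivePortRange "5500-5700"

:: Restart the FTP service so the new range takes effect.
net stop msftpsvc
net start msftpsvc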

When backing up the configuration, a path is required, and the appliance puts a / in front of the path you specify. Specifying "." for the path works, but it drops the file name and creates a file named ".". I found the best solution is to specify "./" for the path; the backup file then transfers into the FTP user's default directory.


 

When using the Advanced Open File Option (AOFO) with Backup Exec, make sure you check the job log to see if it is actually being used. I wanted to use it to back up VMware Server virtual machines at CITBA. The job was running successfully, so I thought it was working correctly. Then we started getting calls that VMs running on that server could not be reached by users trying to RDP to them. Once the OSE connected to them via the VMware Server console, the app would show an "access denied" error (only once), then go away and things would start working. After research, it was discovered that Backup Exec was actually using standard backup (not AOFO) to back up the VM vmdk files, causing a file lock issue with VMware Server. The telltale entry in the job log was very inconspicuous.

You can find it in the "Job History" tab of the job log. The reason was that no AOFO licenses were installed. So the moral of the story is: Backup Exec will let you select the AOFO option in a backup job, and will let you deploy the Backup Exec agent with the AOFO option, even if you don't have the license installed. This makes you think AOFO will actually work, but don't be fooled. It doesn't.


 

If you want to restore an SBS 2003 box that was upgraded from SBS 2000 using tape backups from Backup Exec, here is the process…and believe me, this is the abbreviated version.

  1. Install SBS 2000 so that the system path is c:\winnt and so you get some necessary DLLs that will break the kernel if you try to go directly to SBS 2003. It is tempting to use an unattended install and skip directly to SBS 2003 with a custom install point, but I speak from experience…it doesn't work. No need to install and configure DNS…I know it sounds like it will break, but it won't. The only component that should be installed is SBS. Don't install Exchange, ISA, SQL, or the optional components…JUST SBS. Trust me. During setup, be sure to name the domain the same as it was before.
  2. Your goal is to get to SBS 2003, but before you upgrade your SBS 2000 install, you must install Windows 2000 SP3, then SBS SP 1a, then Windows 2000 SP4. Having fun yet?
  3. Upgrade to SBS 2003 and then fix what didn't work when you upgraded it…just kidding, this actually works pretty well, considering.
  4. Your next step is to get Backup Exec up and running. Either reinstall Backup Exec on the SBS 2003 box and inventory your recovery tape, or install the tape drive and Backup Exec on another server and do it there. It really doesn't matter where you do it from. Make sure your Backup Exec service account has access to the restored server if you moved it to a different server.
  5. Reboot your restored SBS 2003 server into Directory Services Restore Mode by pressing F8 at boot time. It's like booting to safe mode, but it's a different option on the same screen.
  6. Do the authoritative restore, but DON'T restore anything that has anything to do with SQL or Exchange. That includes program files directories, databases, and all the other items listed in the doc linked below. Yeah, this seems strange, but bear with me. Oh, and if ISA was originally installed, you can restore it, BUT if it was set up to log to a local SQL MSDE database (which most are, because it is an SBS install and I think that is the default behavior), it won't work. Exactly how ISA will act once restored is somewhat of a mystery, so best of luck to you. IMO, just remove it and deal with it after all this mess is done.
  7. Reinstall SQL Server and Exchange Server from media. I know, I know…you have a backup of it, so why do you have to reinstall it from the CD that you don't have? Because…
  8. Using single-user mode, restore the master SQL Server database first, then restore all the other databases (see the sketch after this list).
  9. Reinstall Exchange with the /disasterrecovery option. Follow the instructions in the doc…just follow the doc. Be ready to run eseutil on your databases, because they will need it, especially if circular logging was turned on at the message store level (and if you are the one that turned circular logging on…shame on you!). Mount your databases after all the consistency checking is complete.
  10. Now, take a breath, go get a burger from Whataburger because by now it is 2:00 in the morning and that is the only place open.
  11. Address the literally hundreds of issues that will arise after you have done this procedure.
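For steps 8 and 9, here is a minimal sketch of the command-line pieces, assuming the SQL Server 2000 and Exchange 2003 bits that ship with SBS 2003 (all file paths below are hypothetical; use your real backup and database locations):

:: Step 8: start SQL Server in single-user mode (console, not as a service)...
cd /d "C:\Program Files\Microsoft SQL Server\MSSQL\Binn"
sqlservr.exe -c -m

:: ...then, from a second command prompt, restore master. Restart SQL
:: normally afterward and restore the remaining databases.
osql -E -Q "RESTORE DATABASE master FROM DISK = 'D:\Restore\master.bak' WITH REPLACE"

:: Step 9: dump the database header and look for "State: Clean Shutdown"
:: before you try to mount.
eseutil /mh "C:\Program Files\Exchsrvr\MDBDATA\priv1.edb"

:: If the state is Dirty Shutdown, run soft recovery against the E00 logs.
eseutil /r E00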

Here is a link to the unabridged version:  http://seer.support.veritas.com/docs/243037.htm 

Oh, and in all of this, you had better hope you are restoring to similar if not the same hardware. Support for this process from Backup Exec goes right out the window if you aren't restoring to the same or similar hardware. And you MUST have the media to reinstall all this stuff. Gathering that media seems trivial, but it is actually one of the MOST difficult parts of this process, especially if the customer is not a volume license holder.

 


 

When working with HP Ultrium tape drives, it is sometimes a challenge to get them to run at speeds even close to the advertised maximum, and there are often questions about how to troubleshoot and how to know whether you are getting the most out of the drive. Here are some tips:

  • Most modern HP Ultrium drives have a hardware compression feature. To get even close to the advertised speed of the drive, you must ensure that hardware compression is enabled on the drive. You can do this with the HP Library and Tape Tools utility. Without it, you will get approximately half of the advertised speed.
  • If you are using Backup Exec with an Ultrium drive, use the Symantec drivers provided on their website and make sure the drive is on the HCL for your version of BE. Only switch to the HP/OEM driver if you experience issues with media robotics when moving/inventorying media.
  • If you are using a tape library or autoloader, use the HP/OEM driver for the media changer/robotics and the Symantec driver for the drive. Symantec does not make drivers for the robotics.
  • Make sure that you are not using a SCSI channel that is connected to a RAID card. This can seriously impact performance of the drive. Best possible scenario is an Ultra320 SCSI card on a dedicated 64-bit PCI bus. Make sure the SCSI card drivers are up to date.
  • There are two types of autoloaders/libraries: LUN-based and SCSI ID-based. The difference is the way the device presents itself to the OS. LUN-based devices share a SCSI ID and present different LUNs to the OS for the media changer and the drive. SCSI ID-based devices present two different SCSI IDs. Backup Exec works best with SCSI ID-based devices, but either will work. If the device you are working on allows you to choose LUN-based or SCSI ID-based, choose SCSI ID-based.
  • Make sure the SCSI bus of your device chain is properly terminated. If using LVD SCSI, make sure you are using an LVD/SE terminator. Do not put more than two SCSI devices on the same SCSI chain as the tape device.
  • When testing performance, run a backup to local disk for comparison. If the disk backup is significantly faster, you should troubleshoot the tape device further, because the data CAN be delivered faster; the bottleneck is the tape device. If speeds are about the same, the bottleneck is the BE remote agent: data cannot be presented to the device fast enough to keep it busy. In this scenario, troubleshoot the hosts for performance increases.
  • On the Backup Exec media server, disable the "Removable Storage" service, as sketched below. It has been known to cause issues with Backup Exec from version 8 on.
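A quick sketch of disabling that service from the command line; "NtmsSvc" is the standard short name for the Removable Storage service on Windows 2000/XP/2003:

:: Stop Removable Storage and keep it from starting again at boot.
sc config NtmsSvc start= disabled
net stop NtmsSvc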