Blog: VMware

For our audits, we run VMware Health Analyzer (VMHA) against each vCenter to collect information on ESXi build numbers, snapshots, dormant VMs, etc. Recently, a customer we were scanning had two vCenters; VMHA worked fine on one of them, but we were getting errors on the other. Standard troubleshooting didn't help, and the customer didn't know why we couldn't collect the information this year. After running nmap against the vCenter, we determined the customer had redefined the port used for that vCenter instance, and simply specifying the port in our scan credentials solved the issue.
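We found the moved port with nmap; where nmap isn't available, a rough sweep of likely ports can be done with bash's /dev/tcp redirection. This is a hedged sketch: the hostname and candidate port list are hypothetical, and 443 is the vSphere default.

```shell
# probe_port: succeed if a TCP connection to host:port opens
probe_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# hypothetical vCenter host and candidate ports; 443 is the default
for p in 443 8443 9443; do
  probe_port vcenter.example.com "$p" && echo "open: $p"
done
echo "scan complete"
```

A full `nmap -p- vcenter-host` sweep is still the more thorough option when the redefined port is something unexpected.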


Recently, quiesce snapshot jobs for some customers kept showing up as failed in Veeam with the error "msg.snapshot.error-quiescingerror".

After exhausting several research options, I called VMware Support, and we began sifting through event log files on the server, as well as examining the VSS writers and how VMware Tools was installed.

Looking at the log files on the ESX host where the VM resided led to this article:

A folder named backupScripts.d gets created and references the path C:\Scripts\PrepostSnap\, which is empty, so the job fails. The fix is found below:

  1. Log in to the Windows virtual machine that is experiencing the issue.
  2. Navigate to C:\Program Files\VMware\VMware Tools.
  3. Rename the backupscripts.d folder to backupscripts.d.old.

If that folder is not present, or if the job still fails after renaming it, check the VMware services.
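The rename itself is trivial; the same move can be sanity-checked anywhere. A minimal simulation follows, with a temp directory standing in for C:\Program Files\VMware\VMware Tools (on the affected guest you would do this in that folder directly).

```shell
# temp dir stands in for "C:\Program Files\VMware\VMware Tools"
TOOLS_DIR="$(mktemp -d)"
mkdir "$TOOLS_DIR/backupscripts.d"          # the folder that breaks quiescing
mv "$TOOLS_DIR/backupscripts.d" "$TOOLS_DIR/backupscripts.d.old"
ls "$TOOLS_DIR"                             # → backupscripts.d.old
```

Renaming rather than deleting means the original folder is easy to restore if it turns out something else was at fault.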


Recently, I deployed a new vCenter appliance (VCSA), version 6.5, with an external Platform Services Controller (PSC) appliance. VMware has made the deployment considerably simpler than it was with their first few appliance releases. Instead of having to import an OVA/OVF and do much of the configuration yourself, VMware now provides an EXE that handles most of those steps automatically. Simply step through the wizard, providing information such as which host to deploy the appliance to and which deployment model you would like (external or internal PSC), and the wizard will deploy and configure the appropriate OVA templates.

Unfortunately, the first time I ran through this wizard, it hung indefinitely about two-thirds of the way through. I even left it running overnight and it never completed. Looking through the deployment logs, it turned out that the deployment failed due to licensing issues:

debug: initiateFileTransferFromGuest error: ServerFaultCode: Current license or ESXi version prohibits execution of the requested operation.
debug: Failed to get fileTransferInfo:ServerFaultCode: Current license or ESXi version prohibits execution of the requested operation.
debug: Failed to get url of file in guest vm:ServerFaultCode: Current license or ESXi version prohibits execution of the requested operation. 

Granted, these hosts hadn't been licensed yet; I had just upgraded them from 6.0 and assumed the evaluation license was in effect. Apparently not. I installed the full license and tried the deployment once more. Sure enough, that did it.

Moral of the story: if you don't license your hosts for vCenter (i.e., if you are using the free Hypervisor license), you will not be able to deploy the vCenter appliance.


There are power management settings that should be checked when running ESX on HP ProLiant G6 and above or Dell PowerEdge 11th- and 12th-generation servers. See the VMware article for details:

The ProLiant G8 that I examined for performance issues was set in the BIOS to use "HP Dynamic Power Savings mode" instead of "HP Static High Performance mode". This can impact virtual machines' ability to utilize the CPU of the host. The setting can be changed through iLO without getting into the BIOS directly, and changing it this way does not require a reboot of the ESX host, which is even better.


After upgrading a customer's vCenter to 6.0, VMs being replicated with Veeam from one site to another started issuing a "VM MAC Conflict" alarm. However, when I compared the MACs of the replicated VM and the original VM, they were unique. I had not upgraded the hosts at this point, only vCenter, and nothing had changed with Veeam, so this was a new issue resulting from the vCenter upgrade.

As it turns out, there is nothing technically wrong; this is simply a change in the behavior of the alarm issued by vCenter. When Veeam replicates a VM, the replica initially has all the same settings (other than the name) as the source VM. vCenter sees the same MAC address on two VMs and raises the alarm. vCenter then changes the MAC address of the replica (as has always been the behavior), but it never clears the alarm; you must clear it manually. Then, when the next replication occurs, the alarm triggers again.

I found several references to this issue online, and most suggested simply disabling the alarm so vCenter doesn't show the replicas in an alarmed state all the time. That's not a great solution, though, because no alarm would then be generated in the event of an actual MAC conflict. Further research revealed a workaround: you can edit the VM MAC Conflict alarm in vCenter and add an argument to exclude VMs whose names end in "_replica".
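Conceptually, the exclusion argument is just a suffix filter on the VM name. The same idea in shell, illustrative only (this is not vCenter's actual alarm syntax, and the VM names are made up):

```shell
# VMs whose names end in "_replica" are dropped from consideration
printf '%s\n' web01 db01 web01_replica db01_replica | grep -v '_replica$'
# → web01
# → db01
```

The anchor `$` matters: without it, a production VM that merely contains "_replica" somewhere in its name would also be excluded.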


I came across an issue where two ESX servers that had been running for approximately 8-9 months without a reboot suddenly showed offline in vCenter. The events in vCenter showed that the ramdisk 'TMP' was full and the host could not write to file /tmp/.SapInfoSysSwap.lock.LOCK.#####.


I consoled into the ESX hosts and saw a log file that had consumed most of the space: /tmp/mili2d.log. From what I read, this file would have been removed upon rebooting the ESX host, but that was not something I wanted to do if I could help it.


I reviewed the log file and found nothing of significance inside, but it had been filling up for months until reaching the limit on both hosts. I removed the file, expecting to reclaim the storage space, but the space was not freed.
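The likely reason the delete alone freed nothing: a still-running process held the log open, and Unix-like systems keep a deleted file's blocks allocated until the last open handle closes. A minimal demonstration, with a temp file standing in for /tmp/mili2d.log:

```shell
f="$(mktemp)"                    # stand-in for /tmp/mili2d.log
exec 3>"$f"                      # a daemon still holds the log open
rm "$f"                          # the name is gone from the directory...
echo "still writing" >&3         # ...but writes through the open handle still succeed
exec 3>&-                        # closing the handle (cf. restarting the services) frees the space
ls "$f" 2>/dev/null || echo "space reclaimed"
# → space reclaimed
```

This is consistent with what happened next: restarting the management services closed the handle, and the space came back without a reboot.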


You can check the space allocation with the command "vdf -h", which shows the space left on the RAM disk.


To get the ESX host to rescan the RAM disk, restart the management services with " restart". After I did this, the space allocation showed available, and the ESX hosts showed online again in vCenter without having to reboot the servers.


I was updating ESX with a customer a few weeks ago and ran into issues. We successfully upgraded from ESXi 5.1 to 5.5 Update 3 using the custom Dell ISO. We then attempted to update to the latest build of ESXi 5.5, but the host purple-screened on reboot. We called VMware Support to open a ticket, and the engineer provided a simple solution: press Shift+R when the hypervisor progress bar starts loading. This takes you to a menu where you can select the previous build. The VMware article can be found here: We followed these instructions and were able to boot the ESX host again.


I believe the purple screen was caused by vSphere Update Manager trying to install HP updates on Dell hardware. It turns out that vSphere Update Manager does not detect which updates are actually needed, just which ones aren't installed. The fix is to create separate baselines for each brand of hardware in mixed-hardware environments.



I recently rebuilt a vCenter environment for a customer. We decided to use the vCenter Server Appliance 6.5. Configuring the vCenter Server Appliance was fairly simple, and it operates very similarly to vCenter Server installed on Windows. We attempted to set up email alerts but were unable to get them to send. We initially suspected an issue with the SMTP relay, but since this was not a Windows OS, I could not log in to the OS and test the SMTP relay using telnet. I checked my email alert configuration several times, the administrator of the SMTP servers checked his as well, and everything looked correct on both sides, but emails still would not send.

After researching for quite some time, I found that I could use the "mailq" command to view the email queue on the vCenter Server Appliance. I connected to the appliance via SSH, ran the "shell" command to get to the full shell, and then ran "mailq". This showed several messages sitting in the mail queue, unsent. I kept troubleshooting and eventually found a VMware article describing a bug in the vCenter Server Appliance 6.5 that prevented SMTP from working correctly. The article had been published one day before I found it, about a month after I first started troubleshooting the issue. From looking at the files, the original code had the wrong patch in the file.

Here is a link to the VMware article with instructions on how to fix the bug:

The following must be done to successfully SCP the file to the vCenter Server Appliance:


I was recently working on a project to migrate a customer from a physical server to new virtual servers on a new ESX host. I installed ESXi 6.0 Update 2 on the new physical server and delivered it to the customer site. Once the server was onsite, I began building the first virtual machine. Since it was the first VM and vCenter was not installed yet, I downloaded the VI client and connected directly to the host.

While creating the first VM, I received the following warning:

"If you use this client to create a VM with this version, the VM will not have the new features and controllers in this hardware version. If you want this VM to have the full hardware features of this version, use the vSphere Web Client to create it."

According to the warning message, I needed to use the vSphere Web Client to create a VM with the latest full hardware feature set. The vSphere Web Client is part of vCenter, so I didn't see how this was possible, because vCenter was not installed yet. VMware has been planning to obsolete the VI client in favor of the web client, so I figured this was just a push in that direction. Obviously, that doesn't work well for customers who are just building their first virtual servers. I didn't need the new hardware features, so I picked Virtual Machine Version 11 and continued building the VM.

A few days later, curious about what the warning message meant, I did some more investigation. It turns out that with ESXi 6.0 Update 2, VMware started embedding a new VMware Embedded Host Client (EHC) in ESXi. The Embedded Host Client is an HTML5-based tool for managing the ESXi host directly and is a replacement for the VI client. This is nice because nothing needs to be downloaded or installed to manage the ESXi host with the EHC.

Here's a screenshot of the new EHC:

Knowing that the EHC exists, I now understand what the warning message in the VI client was saying: not that I had to use the vSphere Web Client (which requires vCenter), but that I could connect directly to the ESXi host using the Embedded Host Client.

The VMware Embedded Host Client can be accessed by going to http://IPAddressOfESXiHost/ui. More information on the VMware Embedded Host Client can be found here:




This is handy if you need to quickly connect to the console of a VM and don't need any other features of the vSphere web interface. The documentation from VMware says to run this from the web interface, but it can be run standalone, like this:

"C:\Program Files (x86)\VMware\VMware Remote Console\vmrc.exe" "vmrc://DOMAIN\USERNAME@VCENTER.DOMAIN.COM/?moid=vm-VMID"

VCENTER.DOMAIN.COM should be replaced with the FQDN of your vCenter server.

The "DOMAIN\USERNAME@" can be omitted, but if you are saving this command somewhere, you might as well include your username.

Use the VMware PowerCLI PowerShell command "Get-VM MACHINENAME | fl Id" to find the VMID. Just use the part that starts with vm-. You can also get these from the ESX console.
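Putting the pieces together: PowerCLI returns the Id in the form "VirtualMachine-vm-NNNN", and only the trailing "vm-NNNN" part goes into the URI. A small sketch of the string assembly (the Id, FQDN, and DOMAIN\USERNAME below are placeholders):

```shell
ID="VirtualMachine-vm-1234"        # sample of what Get-VM MACHINENAME | fl Id returns
MOID="${ID#VirtualMachine-}"       # keep only the part that starts with vm-
VCENTER="VCENTER.DOMAIN.COM"       # replace with your vCenter FQDN
echo "vmrc://DOMAIN\\USERNAME@${VCENTER}/?moid=${MOID}"
# → vmrc://DOMAIN\USERNAME@VCENTER.DOMAIN.COM/?moid=vm-1234
```

The resulting URI is what you pass to vmrc.exe, as shown in the command above.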

Download VMRC from here:  There is a link to this on the vSphere web page.  This requires an account with VMware.