On Windows 10 version 1703, 64-bit Java Web Start applications (*.jnlp) fail to launch from IE11 if the 64-bit Java JRE is installed (or both the 64- and 32-bit versions are). The workaround is to uninstall the 64-bit JRE and leave the 32-bit JRE in place (installing it if necessary).
Recently, I deployed a new vCenter appliance (VCSA) – version 6.5 – with an external Platform Services Controller (PSC) appliance. VMware has made the deployment considerably simpler than it was with their first few appliance releases. Instead of having to import an OVA/OVF and do much of the configuration yourself, VMware now provides an EXE that handles most of those steps automatically. Simply step through the wizard, providing information such as what host to deploy the appliance to and which deployment model you'd like (external or internal PSC), and the wizard will deploy and configure the appropriate OVA templates.
Unfortunately, the first time I ran through this wizard, it hung indefinitely about two-thirds of the way through. I even left it running overnight and it never completed. Looking through the deployment logs, it turned out that the deployment failed due to licensing issues.
debug: initiateFileTransferFromGuest error: ServerFaultCode: Current license or ESXi version prohibits execution of the requested operation.
debug: Failed to get fileTransferInfo:ServerFaultCode: Current license or ESXi version prohibits execution of the requested operation.
debug: Failed to get url of file in guest vm:ServerFaultCode: Current license or ESXi version prohibits execution of the requested operation.
Granted, these hosts hadn't been licensed yet – I had just upgraded them from 6.0 and had assumed the evaluation license was in effect. Apparently not. I installed the full license and tried the deployment once more. Sure enough, that did it.
Moral of the story: if your hosts aren't licensed for vCenter (e.g. they're running the free Hypervisor license), you will not be able to deploy the vCenter appliance.
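If you want to confirm what license a host is actually running before kicking off the deployment, you can check from the ESXi shell (assuming SSH or local shell access is enabled):

vim-cmd vimsvc/license --show

The output lists the edition and keys in effect; if it shows the free Hypervisor edition or an expired evaluation, guest-operations calls like the initiateFileTransferFromGuest seen in the logs above will be refused.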
I have had an issue with Windows Explorer crashing several times a day. All Explorer windows, the desktop, and the taskbar disappear, then the desktop and taskbar reappear after a few seconds.
I never nailed down the specific culprit, but I used ShellExView (www.nirsoft.net/utils/shexview.html) to disable all non-Microsoft shell extensions. That made a significant difference, and I haven't had Explorer crash in the last few days. Of course, it could be a combination of shell extensions, which would make the culprit harder to identify. In the meantime, I will re-enable each extension as I miss it and see if it destabilizes Windows Explorer again.
There are power management settings that should be checked when running ESX on HP ProLiant G6 and newer or Dell PowerEdge 11th- and 12th-generation servers. See this VMware KB article for details: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1018206
The ProLiant G8 that I examined for performance issues was set in the BIOS to use "HP Dynamic Power Savings mode" instead of "HP Static High Performance mode". This can impact virtual machines' ability to fully utilize the host's CPU. The setting can be changed through iLO without getting into the BIOS directly, and changing it this way does not require a reboot of the ESX host, which is even better.
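As a quick sanity check after making the change, you can also see which power policy ESXi itself reports from the CLI (the exact output varies by version; /Power/CpuPolicy is the advanced setting I'd look at):

esxcli system settings advanced list --option=/Power/CpuPolicy

If the BIOS is managing power itself (the dynamic mode above), the host may report that OS-level control isn't available at all, which is itself a useful clue.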
I’ve been migrating Exchange customers from Barracuda ESS over the past few months and recently ran into a small issue. I had logged into the BESS portal one morning and decided to go ahead and start cleaning up some of the domains that were registered so that Barracuda would stop routing email for these customers.
It was a simple enough process – Click the Domains tab, find the domain to remove, click Remove. Everything magically disappears.
I removed 23 domains, called it good for the morning, and proceeded to work on other things. A few days later, we got a task from a customer who was unable to receive email from another customer of ours who was still using Barracuda ESS. After tracking down the logs in the second customer's BESS portal, I discovered that BESS was still routing their email internally instead of respecting MX records.
A quick phone call to Barracuda Support and they immediately escalated the case to Tier 2/3 and Product Development. I heard back from them later that afternoon and was informed that I was removing the domains incorrectly.
I didn’t think I could screw up clicking a “Remove” button, but apparently I did.
After another minute or two of explanation, my support rep told me that the issue was with the removed domains that included aliases. There's apparently an acknowledged bug in the portal that requires you to un-alias all domains before removing the parent domain. They checked all 23 domains I sent over and verified we were good to go.
The Equifax data breach announced yesterday potentially affects 143 million U.S. consumers and is one of the largest breaches of personal information. The following steps can be taken by consumers to help protect against fraud and identity theft:
The credit reporting bureau Equifax reported yesterday that it had been compromised. Non-public information affecting potentially 143 million U.S. consumers was stolen, primarily consisting of names, Social Security numbers, birth dates, addresses, and, in some instances, driver's license numbers. Additionally, credit card numbers for approximately 209,000 U.S. consumers and dispute documents for approximately 182,000 U.S. consumers were accessed. Further details from Equifax can be found here:
For information from a source independent of Equifax, Brian Krebs' coverage can be found here - https://krebsonsecurity.com/2017/09/breach-at-equifax-may-impact-143m-americans/.
Additional information about the steps consumers can take to protect against fraud and identity theft:
Beginning with ASA OS 9.7, the 5506-X has a new default configuration that allows its ports to be used like switchports, similar to how the 5505 models worked. The default configuration includes a Bridge Virtual Interface (BVI) with ports G1/2 - G1/8 (7 ports) as members. This applies to units that ship with 9.7 code; if you upgrade a device to 9.7, you will have to create the BVI group manually (the upgrade itself does not do this).
Even though the BVI can have more than four member ports (the default has seven), if you try to configure this via ASDM, it only allows you to add 4 ports as members. This is actually a restriction that applies when running the ASA in transparent mode (which we rarely do) rather than routed mode (the typical install), but ASDM seems to ignore the mode and applies the restriction regardless. So for an ASA in routed mode, this appears to be an ASDM bug. To work around it, you must add the member ports via the CLI. In addition, the ports cannot have a name defined before you configure the bridge group, and they must follow the naming convention inside1, inside2, etc. to work as part of the BVI group named inside. The default assigns the members of BVI1 (G1/2 - G1/8) the names inside1 - inside7, and the BVI interface itself is named inside.
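As a rough sketch of what the CLI side looks like (the IP addressing here is just an example, and only the first member port is shown):

interface BVI1
 nameif inside
 security-level 100
 ip address 192.168.1.1 255.255.255.0
!
interface GigabitEthernet1/2
 bridge-group 1
 nameif inside1
 security-level 100
 no shutdown

Repeat the bridge-group/nameif block for each member port (inside2 on G1/3, and so on). Note that bridge-group is configured before nameif, per the ordering restriction mentioned above.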
Also, the http and ssh commands don't allow you to specify the BVI's named interface (inside). Instead, you must use a member name (e.g. inside1, inside2, etc.). The snmp-server command actually does accept the BVI interface name, but it doesn't work when you do (seemingly another bug), so again, you'll need to use the member port name instead.
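In practice, your management access ends up referencing a member port rather than the BVI, something like this (addresses and community string are examples):

http 192.168.1.0 255.255.255.0 inside1
ssh 192.168.1.0 255.255.255.0 inside1
snmp-server host inside1 192.168.1.50 community example-string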
After upgrading a customer's vCenter to 6.0, VMs that were being replicated with Veeam from one site to another started triggering a "VM MAC Conflict" alarm. However, when I compared the MACs of the replicated VM and the original VM, they were unique. I had not upgraded the hosts at this point, only vCenter, and nothing had changed with Veeam, so this was a new issue resulting from the vCenter upgrade.
As it turns out, there is nothing technically wrong; this is simply a change in the behavior of the alarm issued by vCenter. When Veeam replicates a VM, the replica initially has all the same settings (other than the name) as the source VM. vCenter sees the same MAC address on two VMs and raises the alarm. vCenter then changes the MAC address of the replica (as has always been the behavior), but it never clears the alarm, so you must clear it manually. Then, when the next replication occurs, the alarm triggers again.
I found several references to this issue online, and most suggested simply disabling the alarm so that vCenter stops flagging the replicas, but that's not a great solution because no alarm would then be generated in the event of an actual MAC conflict. Further research revealed a better workaround: edit the VM MAC Conflict alarm in vCenter and add a trigger argument that excludes VMs whose names end in "_replica".
I was recently doing a maintenance window for a customer and had an issue with several of their servers giving me "Error Code 80243004 – Windows Update encountered an unknown error" when I tried to install updates. After researching, I came across an article with a very simple and weird fix for the issue.
The fix is to turn on notifications for Windows Update (i.e. make the Windows Update icon visible in the notification area). After doing that, I was able to successfully install all of the Windows updates.
I received several new Cisco 2960X switches to configure, and one of them would not boot up, stating that the image failed digital signature verification. These switches have USB interfaces on the front that can be used for file transfer; however, more modern USB flash drives would not work for me. I had a few older USB flash drives that did work, so hold on to your old flash drives!
From a working switch, copy the boot image to the USB flash drive:
"copy flash:/c2960..../c2960....bin usbflash1:" (or usbflash0:, depending on which port the drive is connected to).
I booted up the switch that wouldn't verify and tried to copy the image onto it from usbflash1:, but it told me the copy command was unknown. Luckily, you can boot directly off the image on the USB flash drive.
I typed "boot usbflash1:/c2960....bin" and it booted the switch where I was able to copy the working image to flash: "copy usbflash1:/c2960....bin flash:/c2960..../c2960....bin"
After overwriting the corrupt image, I rebooted the switch and it passed image verification.
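If you want to double-check an image before relying on it, IOS can also run the verification on demand with the verify command (same elided filename as above):

verify flash:/c2960..../c2960....bin

On signed images like these, this exercises the same digital signature check that was failing at boot, so it's a quick way to confirm a copied image is good.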