Blog

A customer was setting up BitLocker encryption on laptops so that they could be checked out of the office. They wanted the BitLocker startup keys saved to one removable flash drive, with the ability to copy the required key to other flash drives. When a user needed to check out a laptop, they would also be given a flash drive so they could start up any of the laptops protected with BitLocker.

The customer was saving the startup keys correctly through BitLocker but could not see them on the removable flash drive in Windows Explorer. Although they had "show hidden files" enabled, they also needed to uncheck the view option "Hide protected operating system files". This allowed the customer to see the startup keys so they could be copied to other removable flash drives.
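If you just want to confirm the key files are on the drive (or copy them) without changing Explorer's view settings, a quick PowerShell sketch like the following also works; the drive letters E: and F: are placeholders for the source and destination flash drives:
# List the BitLocker external key (.BEK) files, including hidden/system files
Get-ChildItem E:\ -Force -Filter *.bek
# Copy them to the second flash drive
Get-ChildItem E:\ -Force -Filter *.bek | Copy-Item -Destination F:\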


 

I ran across an issue where I was trying to delete a folder and kept hitting a series of errors along the way.

1. Permissions issue --- I received an error that I needed permission from the file's owner to delete it. I made myself the owner and attempted the delete again, which introduced error #2 below.
2. Directory is not empty --- After resolving the permissions issue, I began receiving an error indicating the folder was not empty: "Cannot delete folder: The directory is not empty". I went in to make sure 'view hidden files' was checked in File Explorer, and it already was, yet the folder in question still appeared empty when I opened it. After some research I discovered that you can change the search options to include all subfolders and to search for folders that are 'Empty' (see screenshot below). Searching this way revealed a ton of subfolders, sometimes 4 or 5 levels deep, and inside those deeper folders there was data, which introduced error #3.

3. Filename too long --- The final error indicated that the filename was too long: "The file name(s) would be too long for the destination folder…" This is the result of nested file paths that end up surpassing the 260-character MAX_PATH limit. Typically what you'll come across is filename\filename\filename\filename\filename\filename\filename, or you might see filename\filename\realllyyyllllonnngggfilename\. One suggestion for fixing this is to find one of the directories that contains the long string of characters and rename it to something shorter. That didn't always work.

A more common suggestion is to navigate a good way into the long directory path (filename\filename\filename\filename\filename\filename\filename) and then share out one of the folders. Map a drive to the newly shared folder and delete everything inside it. After that you should be able to delete the folder itself, along with the root directory it lives in. This solution was actually working, but with so many folders to navigate through, it was very time-consuming and not really practical for my situation.

What I ended up using as the actual solution was a robocopy command with the /purge switch. Basically, you create an empty folder somewhere and use that as the source folder for the robocopy. The command ends up looking like this:
Robocopy EmptyFolderPath FolderToDeletePath /purge
Robocopy will cycle through and purge (delete) anything in the destination folder that is NOT in the source folder. Since the source folder in this case is empty, everything in the destination will be deleted. Please note you will need to run this from an elevated command prompt.
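Putting it all together, here is a rough sketch of the whole cleanup from an elevated PowerShell prompt; the paths below are placeholders, so adjust them for your environment:
# Take ownership and grant full control if the permissions error returns
takeown /f D:\StuckFolder /r /d y
icacls D:\StuckFolder /grant Administrators:F /t
# Purge the contents against an empty source folder, then remove the leftovers
mkdir C:\EmptyFolder
robocopy C:\EmptyFolder D:\StuckFolder /purge
rmdir D:\StuckFolder
rmdir C:\EmptyFolder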


 

During my first attempt at installing the Office suite through O365, I ran into issues with the 'Invalid Security Certificate' warning popping up every few minutes after setting up the associate's Outlook profile. This customer already had the proper GPO in place to disable SCP lookup, exclude the HTTP autodiscover domain, etc., which had been effective at stopping this from occurring in the past with a standard Office install. I updated the ADMX files (which include a number of new Autodiscover policies) to the latest set in hopes of resolving the issue, but the certificate warning continued to surface every few minutes. After doing some reading, I discovered there are two registry locations that can manage Autodiscover:
Computer\HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\Autodiscover
Computer\HKEY_CURRENT_USER\Software\Policies\Microsoft\office\16.0\outlook\autodiscover

The latter Policies location is where the changes land when managing via GPO, which was not behaving as intended with this O365 setup. I manually tested adding the ExcludeHttpAutoDiscoverDomain value to the first location, and the security warnings stopped immediately. I tested disabling and enabling the values in both locations and was able to confirm the finding. I have not had an opportunity to see if this issue exists for any other workstation/customer, but hopefully someone might find this useful if they do. I wound up adding this registry value via GPO and had no further issues after it was applied.
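For quick testing, the same value can be set with a couple of lines of PowerShell; this is just a sketch of what I tested manually, run in the context of the affected user:
# Create the non-Policies AutoDiscover key if it doesn't exist, then set the value
New-Item -Path 'HKCU:\Software\Microsoft\Office\16.0\Outlook\AutoDiscover' -Force | Out-Null
New-ItemProperty -Path 'HKCU:\Software\Microsoft\Office\16.0\Outlook\AutoDiscover' -Name 'ExcludeHttpAutoDiscoverDomain' -PropertyType DWord -Value 1 -Force | Out-Null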


 

One of our customers is hosting their servers with a hosting provider who also provides some other services, like backups and patching. The hosting provider was unable to patch some of the servers for this customer. After investigating with the hosting provider, it was determined that they could patch all of the servers except for the domain controllers. The service account they were using was a Domain Admin, so it should have been able to patch any server. I logged into another server as the service account and tried to access the admin$ share on one of the domain controllers, but was unable to do so.

After some investigation, I found that the Domain Admins group was not a member of the built-in Administrators group in Active Directory. The customer had removed the default groups from the Administrators group and had manually added accounts to it when necessary. The service account the hosting provider was using still worked on all of the member servers, because Domain Admins is in the local Administrators group on those machines, but it could not access the domain controllers because, with Domain Admins removed from the domain's built-in Administrators group, the account had no administrative rights there. I am not sure why the customer removed the default groups from the Administrators group, so I just added the service account to the built-in Administrators group. The hosting provider attempted to patch the servers again and verified it was working properly.
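Checking for this condition is quick with the Active Directory PowerShell module; a sketch, where 'svc-patching' is a placeholder for the provider's service account:
# See who is actually in the domain's built-in Administrators group
Get-ADGroupMember -Identity 'Administrators' | Select-Object Name, objectClass
# Add the service account directly if the default groups are intentionally absent
Add-ADGroupMember -Identity 'Administrators' -Members 'svc-patching'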


 

In most of our Exchange environments, we'll have port 443 open to the outside for ActiveSync and Outlook Anywhere. When you do that, you also open up OWA and ECP to the outside. If you'd like to keep access for ActiveSync and Outlook Anywhere open but block OWA and ECP, you can follow the steps below.

There are a few ways to block OWA and ECP to external addresses, but the best method is probably to use the IP and Domain Restrictions feature in IIS. This feature isn't available by default, so you'll have to install it.
To install it, open Server Manager, select Add Roles and Features. In the Add Roles and Features Wizard, under Server Roles, expand Web Server (IIS), then expand Web Server, and then expand Security. Then click the checkbox for IP and Domain Restrictions.
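If you prefer PowerShell, the same feature can be installed with a single command (a sketch, assuming Server 2012 or later):
# Installs the IIS "IP and Domain Restrictions" feature
Install-WindowsFeature Web-IP-Security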

Once that installs, open IIS, expand Default Web Site and click on the OWA Virtual Directory. You'll now see the IP Address and Domain Restrictions feature available.

When you open that feature, you can add an Allow Restriction Rule or a Deny Restriction Rule. My suggestion would be to add Allow entries for the subnets that should be able to access OWA and ECP (internally and externally) and then change the default behavior for unspecified clients to Deny.

To add an entire subnet as an allowed subnet, click Add Allow Entry, and then in the Rule settings enter the IP info. You can add an individual IP, a range, or a subnet.

To change the default behavior for unspecified clients, click Edit Feature Settings and set Access for unspecified clients to Deny.

Repeat the same steps on the ECP virtual directory. Once that has been completed, restart IIS (iisreset) to apply the changes.
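The same restrictions can also be scripted; the following is a rough sketch using the WebAdministration module, with 10.0.0.0/255.255.255.0 as a placeholder for a subnet that should keep access:
Import-Module WebAdministration
foreach ($vdir in 'Default Web Site/owa', 'Default Web Site/ecp') {
    # Allow the placeholder subnet
    Add-WebConfigurationProperty -Filter 'system.webServer/security/ipSecurity' -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $vdir -Name '.' -Value @{ ipAddress = '10.0.0.0'; subnetMask = '255.255.255.0'; allowed = $true }
    # Deny everyone not explicitly allowed
    Set-WebConfigurationProperty -Filter 'system.webServer/security/ipSecurity' -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $vdir -Name 'allowUnlisted' -Value $false
}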


 

We're working on testing and rolling out features of Microsoft Teams internally that will eventually allow us to migrate to Teams as our Enterprise Voice platform. During the process, one of my goals was to get the Calendar tab working inside the Teams client so that we could see and schedule meetings on our Outlook calendars from Teams. After a lot of reading and researching, it became apparent that the only way to get this working would be to enable Hybrid Exchange so that Teams (sitting in the O365 cloud) could talk to my mailbox (sitting on-prem).

I configured our Exchange server for hybrid connectivity and let it sit overnight (thanks to Microsoft replication delays). The next morning, as I started looking into this again, I got a message from a coworker about how nice and helpful the Calendar tab was. I hadn't received it yet, but was excited that it had started rolling out. Several hours later, the tab still wasn't present for me, but it had appeared for everyone else I spot-checked.

Looking through the logs from my Teams client, the error message kept saying that my mailbox could not be found. Surely this couldn't be the case, because my account was set up the same as everyone else's. The only thing I could think of at the time was that it absolutely had to be a permissions issue.

Continuing research over the next day or two, I discovered that the error message actually was accurate. I had attempted to migrate my mailbox to Exchange Online on a whim, and when I licensed my account in O365 for Exchange Online, it started building a new mailbox automatically. Normally, Exchange Online is aware of synced accounts that have on-premises mailboxes and will not create a new mailbox in that case. So somewhere in the syncing process, my Azure AD account and on-prem AD account were not completely talking to each other (which didn't make complete sense, because password hash sync was still working fine).

I discovered that the sourceAnchor (ImmutableID / ms-DS-ConsistencyGuid) between the two accounts was different. Since it's impossible to update the ImmutableID attribute on a synced account, I decided to update the ms-DS-ConsistencyGuid instead. By converting the ImmutableID from Base64 to hex, you can then update the ms-DS-ConsistencyGuid on the on-prem (source) side to match.
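A minimal sketch of that conversion in PowerShell, assuming you're connected to MSOnline (Connect-MsolService), have the ActiveDirectory module available, and with placeholder account names:
# Pull the ImmutableID (Base64) from the cloud account
$immutableId = (Get-MsolUser -UserPrincipalName 'user@example.com').ImmutableId
# Convert it back to the underlying GUID bytes
$bytes = [System.Convert]::FromBase64String($immutableId)
# Stamp the same value onto the on-prem account's ms-DS-ConsistencyGuid
Set-ADUser -Identity 'jdoe' -Replace @{ 'mS-DS-ConsistencyGuid' = $bytes }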

However, before doing that, I needed to clean up Exchange in Azure. You see, even if you unlicense a user for Exchange Online, Azure will only disconnect the mailbox and tombstone it for 30 days. I needed to purge the Exchange attributes on my Azure AD account so that I didn't have to wait 30 days.

https://techcommunity.microsoft.com/t5/exchange-team-blog/permanently-clear-previous-mailbox-info/ba-p/607619

The solution is simple: connect to Exchange Online PowerShell and run "Set-User <upn> -PermanentlyClearPreviousMailboxInfo".

It will then give you a warning that this is irreversible. Acknowledging that will fully purge the Exchange attributes and let you start over.

I then updated the ms-DS-ConsistencyGuid to the correct value, forced a sync via Azure AD Connect, waited for replication, and then re-enabled my account for Exchange Online. No new mailbox was created, as expected, and after a few hours the Calendar tab showed up in my Teams client!
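For reference, forcing the sync is a one-liner run on the Azure AD Connect server (in an elevated PowerShell session):
# Kick off a delta synchronization instead of waiting for the next scheduled cycle
Start-ADSyncSyncCycle -PolicyType Delta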


 

As more and more offerings are moved into the cloud under subscription-based licensing, it's becoming both easier for people to take advantage of software that previously may not have been as readily available, and more challenging to find a solution that doesn't involve storing your data on someone else's hardware.

Office 365 is a great example of this, as it really does seem like the writing is on the wall for non-subscription licensing of the Office suite and for on-prem installation of much of the server software platform. Granted, Microsoft does make it remarkably easy to just switch over to their platform and use all the pieces of the software suite that have been designed to work seamlessly together, but there is always a concern (even if it's tiny) that your data is in someone else's hands.

Let's look at cloud hosting from a data availability vantage point instead. The cloud makes it incredibly simple to access the platform from practically anywhere in the world – provided the service is online. In early February 2020, there was a global outage of the Microsoft Teams platform which brought a lot of this to light. If you were already logged in prior to the outage, you were good to go. Anyone else trying to connect after the fact was out of luck.

The cause of the outage? https://twitter.com/MSFT365Status/status/1224351597624537088

That's right, even an incredibly large organization that spends billions every year, such as Microsoft, can forget to renew their certificates occasionally. Within a few hours, the renewed certificate was deployed across their vast infrastructure and the Teams service was brought back online. While I doubt that this certificate will ever be forgotten in the future, maybe Microsoft should take the advice of one of the responders to their announcement.  https://twitter.com/nunu10000/status/1224353813987053568?s=20 

As a consumer, my biggest takeaways are these:

The Good: The Office 365 platform is always up-to-date. Any security vulnerabilities are likely patched within a matter of hours rather than waiting on a change window once a month. New features are released regularly.

The Bad: With an on-prem outage, I have a general idea of how long it will take to bring things back online. With cloud services, any outage is at the whim of the service provider.

The Ugly: Knowing the above, once you're on-boarded onto a specific platform, it's usually impractical to evacuate that platform and move to another with any regularity. What is the magic number for downtime to make it worth the cost of migrating services? What if you were unable to easily or cheaply move existing data off that platform during the migration (e.g., online archive/journaling solutions)?


 

I had upgraded an older ESXi 5.5 host for a customer to version 6.7. That night, when the Veeam backup job tried to run, it stated that the remote certificate was invalid.

The easy fix was to go to Backup Infrastructure in Veeam and find the ESXi host. Open the properties of the host and "next" your way through the name and credentials pages. You will eventually get a pop-up about the untrusted certificate asking if you want to connect anyway. After accepting, the backups worked again.


 

Finding the right cybersecurity provider is not easy. While some services are like utilities where the options are very similar, the cybersecurity company that you choose is not just a matter of personal preference; you need a reliable provider because the potential risks to your business are much greater. As you consider your options, here are a few things you should consider to determine if a cybersecurity provider will protect your business.

Products

Full Coverage – When it comes to cybersecurity, many products have a specific specialty and only work for a certain kind of device. A good cybersecurity provider should be able to install and support a solution - like Security Information and Event Management (SIEM) - that aggregates multiple solutions to cover your entire network at a reasonable cost.

Complete Protection – Similar to products that only cover certain devices, there are solutions that only protect from attacks that come from the Internet and the cloud but are limited in detecting internal attacks. Be cautious of this and make sure you find a provider that will support solutions to detect and stop not only external threats but attacks from multiple sources. A layered approach to cybersecurity utilizing Intrusion Prevention Systems, Endpoint Protection, Email Filtering, Web Filtering, and Multi-Factor Authentication with comprehensive monitoring and reporting is ideal to provide complete protection.

Reports

A product like a SIEM will make meeting reporting and compliance requirements much easier. Most products generate reports or alerts from one type of device, which can be a headache when you are looking for a potential problem across your network, but a good SIEM solution can centralize your alerts and remove a lot of false positives. Ideally, your cybersecurity provider can provide guidance on the type of reports and alerts that are needed to satisfy your regulators.

Services

Expertise – Along with the product, cybersecurity providers should offer additional services in the form of expert analysis and guidance. This is a crucial aspect to consider. You might not have a lot of experience with the complexities of cybersecurity, but when a problem or question comes up, will you know what to do? How much pressure can you handle during a security incident? A good cybersecurity company will have a team of experts that understands your network and can customize a solution to meet your needs.

Monitoring and Notifications – A good cybersecurity company can provide 24x7 monitoring and notifications at a reasonable cost. In the past, having staff to monitor security full-time was only an option for the largest companies. Now there are many cybersecurity providers with Security Operations Center (SOC) services to ensure that when any unusual behavior takes place you will be notified. A good cybersecurity provider should provide a written service level agreement (SLA) on their response times.

Conclusion

In order to have the most complete and reliable cybersecurity coverage, you need a cybersecurity provider that will offer you all the product and service positives that we've discussed here. Our company, CoNetrix Technology, checks all the boxes. If you need a good cybersecurity company, contact us today!


 

I came across a system that was running very low on disk space.  Disk Cleanup scans did not offer much to remove, even with system files included.  When I looked at the drive with a graphical utility, I could see that a big chunk of space was being consumed by C:\Windows\Installer.
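If you want to confirm how much space that folder is using without a third-party tool, a rough PowerShell sketch like this will report the total (it silently skips anything it can't read):
# Sum the size of everything under C:\Windows\Installer, including hidden files
$sum = (Get-ChildItem 'C:\Windows\Installer' -Recurse -Force -File -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum
'{0:N1} GB' -f ($sum / 1GB)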

I came across a very useful utility called "PatchCleaner", which can be downloaded free from https://www.homedev.com.au/Free/PatchCleaner.  When it runs, it reports how many patch files are still in use and how many are orphaned.

From there, you have two options in the program: either move the unneeded files to another drive or delete them altogether.  I elected to move the patch files first and observed that it moved the orphaned MSP patch install files.  This freed up many GB of space.