
I have been working with a customer on a file server and domain migration project. The original plan was to move the files to our Aspire datacenter, onto a server in a different domain. Since we were moving domains, we were going to have to recreate the file permissions in the new domain. I typically run Robocopy with the /COPYALL parameter (which is equivalent to /COPY:DATSOU), but since we did not want to copy the security, owner, or auditing information, I used /COPY:DAT.
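For reference, the seeding runs looked something like the command below. The server and share names are placeholders, and the switches other than /COPY:DAT are assumptions based on the full command shown at the end of this post:

    robocopy \\oldserver\share \\newserver\share /COPY:DAT /MIR /DCOPY:T /R:0 /W:0 /LOG+:seed.log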
 
After the initial seeding, the customer prioritized some other moves and postponed the file server migration. During that time, the old datacenter suffered a three-day Internet outage. Afterward, the customer decided to move the files while client machines remained on the old domain, to avoid the impact of another outage. That meant we now needed to copy the existing permissions rather than translate them at the time of migration, as originally planned.
 
I changed my Robocopy scripts to use /COPYALL instead of /COPY:DAT. Robocopy copied the permissions for the files that had changed or been added since the seeding, but it did not fix the security permissions on the files that had not changed. This is by design, as Robocopy only copies permissions when it copies a file. To make Robocopy reevaluate permissions on files it would otherwise skip, the /SECFIX parameter must be added. I changed my script to include /COPYALL /SECFIX, and it synced the files AND the permissions. This Robocopy run takes longer because it has to evaluate security on every file instead of just the file data.
 
To keep files and permissions in sync, you need to use both /COPYALL and /SECFIX. You can add /V for verbose logging. The Robocopy command I used to keep the files and permissions in sync was: "robocopy source destination /COPYALL /SECFIX /MIR /S /E /DCOPY:T /R:0 /W:0 /LOG+:log.log".
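If you want to see what a /SECFIX pass will touch before committing to it, Robocopy's /L switch runs in list-only mode. A preview along these lines (source and destination are placeholders) logs what would be copied or have its security fixed without actually changing anything:

    robocopy \\oldserver\share \\newserver\share /COPYALL /SECFIX /MIR /DCOPY:T /R:0 /W:0 /L /LOG+:preview.log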

 

We had a banking customer with a FiServ application consistently crashing under Windows 10. The crash would always display a .NET Framework error. All users of this application were having issues with it, but the severity varied from user to user. One user would crash once every couple of hours, while another would crash once every other day. No user was doing the exact same thing, and no other errors showed up before the crash. It seemed to be a completely random occurrence.
 
FiServ support could not recreate the issue and advised us to update to Windows 10 1803. While updating a PC to test this solution, I checked the event logs and noticed a printer kept trying to map every 60 minutes and failing. It just so happened that whenever this printer failed to map, the .NET error would also show up in the event logs. Group Policy refreshes were triggering the printer mapping error. I launched the application, ran a "gpupdate", and sure enough, the application crashed. I looked into the GPOs and found that the drive map pointing to the location of the program was set to "Replace" instead of "Update" or "Create". This was causing the file path to be lost every time group policy refreshed. I changed this drive map to "Create" and it resolved the issue.
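If you run into something similar, the quickest reproduction is the one described above: launch the application, force a policy refresh, and watch what happens to the mapped drive. Something along these lines (drive letters and paths will vary by environment) works from a command prompt:

    net use
    gpupdate /force
    net use

A drive map preference set to "Replace" is deleted and re-created on every refresh, so an application on that drive briefly loses its file path during the refresh, while "Update" and "Create" leave an existing mapping alone.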

 

I've increasingly had issues getting Excel to open other Excel files if I already had one open. I noticed it happened every time I was working in one of my spreadsheets that contained macros. However, after some research, I discovered that this is intentional on Microsoft's part: if Excel thinks you are editing a cell, it will not allow you to open any other Excel files (even if they are unrelated).
Although there isn't really a true solution, if you press Enter, click out of the cell you are editing, or save the workbook, you should be able to open other files.

 

If you have ever been annoyed with Office AutoCorrect changing words like "VMware" to "Vmware", you'll be relieved to know there is help for you. In any Office application, go to File -> Options -> Proofing -> AutoCorrect Options -> Exceptions -> INitial CAps. There you can add the string in question (e.g., "VMw") to the list to stop Office apps from constantly correcting your typing "errors".

 

What is Colorado Cybersecurity Regulation (HB 18-1128)?

On January 19, 2018, the General Assembly of the State of Colorado introduced House Bill 18-1128, Concerning Strengthening Protections for Consumer Data Privacy. The regulation was signed into law on May 29, 2018 and goes into effect on September 1, 2018.

The new regulation contains four primary sections:

  1. Disposal of Personal Identifying Information
  2. Protection of Personal Identifying Information
  3. Notification of Security Breach
  4. Security Breaches and Personal Information

The first three sections focus on how a "covered entity" can protect personal identifying information (PII). A "covered entity" is defined as a "person" (e.g., an individual, corporation, business trust, etc.) who maintains, owns, or licenses PII in the course of their business, vocation, or occupation.

Section Four shifts some wording around, but repeats the first three sections, replacing the term "covered entities" with "governmental entities."

Does Colorado HB 18-1128 apply to Banks and Credit Unions?

Yes. While the regulation defines PII a couple different ways, both definitions include things a financial institution would "maintain, own, or license" in the course of normal business (e.g., social security numbers, credit cards, debit cards, account numbers, etc.). If you are a financial institution in the State of Colorado, Colorado HB 18-1128 applies to you.

Are Financial Institutions in Compliance with Colorado HB 18-1128?

Let's break this down by section.

  • Section One: Yes.
    Financial institutions are already subject to GLBA, so the organization should already have a policy in place that defines the secure disposal of paper and electronic documents containing PII.

  • Section Two: Yes.
    Again, since financial institutions are already subject to GLBA, the organization should already have reasonable security procedures and practices in place to protect PII from unauthorized access, use, modification, disclosure, or destruction.

  • Section Three: Partially.
    Per GLBA, each financial institution should have an incident response policy, program, and/or plan that outlines what the organization should do in the event of a security breach. However, Section Three additionally includes new requirements, specific to the State of Colorado, about classification and notification of a security breach.

For example, Section 3(2)(e) states that if a security breach affects more than 500 Colorado residents, the covered entity must notify the Colorado Attorney General as soon as possible, but no later than 30 days after determining a security breach occurred. This requirement is new and specific to Colorado organizations, so it likely does not exist in your current incident response policy, program, and/or plan.

How to Prepare for September 1st

To prepare for the September 1st effective date, it would be beneficial for each financial institution to compare their existing incident response policy with the new requirements in Section Three and make updates, as needed.

We have developed a downloadable PDF called "Understanding & Preparing for the Colorado Cybersecurity Regulation (HB 18-1128)." This document provides a side-by-side comparison of the regulatory language with our opinion, to help you simplify and interpret the regulatory wording. It will help you understand the regulation as you prepare your institution for the September 1st deadline.

For Tandem Customers: The resource also provides information about how the requirements of HB 18-1128 are already addressed in Tandem, including recommendations about how you can incorporate the Colorado-specific requirements into your existing information security program.

What is Tandem?

Tandem is online information security and compliance software designed to increase security and help financial institutions stay in compliance with GLBA and FFIEC guidance. Tandem is used by financial institutions across the country and saves them both time and money without sacrificing information security, cybersecurity, or compliance.



 

We had a customer report that all browser windows were closing for users, and that this was happening with increasing frequency. Most of the users reporting the issue were at the corporate office, which has about 150 users and is where the IT department is located. I performed a remote session with one of the users and confirmed the issue. Internet Explorer, Chrome, and Firefox would all close, not crash, at the same time.

My first thought was that some remote assistance and IT management software they had recently installed was causing the issues. We uninstalled the software and the issues continued. My next thought was that something malicious on the network was killing the processes remotely. I moved the PC to the guest wireless network and the issues stopped; after moving the PC back to the internal network, the issues began again. After a while, the issues randomly stopped for this user, so I moved on to another user's PC. The IT department did not know of any new devices that had been brought onto the network.

Whatever was causing the issues was obviously powerful enough to kill processes. The browsers seemed to be closing at regular intervals: at the top of the hour and at half past the hour. I started Process Monitor, Process Explorer, and Wireshark, opened the browsers, and waited. As expected, the issue occurred again. I started looking through the Wireshark capture and did not see anything odd. In the Process Monitor log, however, I found several cmd.exe processes killing the browser applications, and at about the same time, nxclient.exe processes calling cmd.exe and running taskkill commands.
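For anyone not familiar with it, taskkill is the built-in Windows command for terminating processes by name or PID, so the commands NxClient was spawning would have been along the lines of the following. These lines are illustrative only; the real arguments came from nxclient.exe:

    taskkill /F /IM iexplore.exe
    taskkill /F /IM chrome.exe
    taskkill /F /IM firefox.exe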

I started searching online and found a thread in the NxFilter support group discussing the same issue. This customer has used NxFilter for web filtering for several years, and they were running version 5.0 of NxClient, which was older than the version mentioned in the support group. The creator of NxFilter had responded to that thread and said the client was killing browsers to force a refresh of the user's session, but that this was not the correct behavior, and a newer version of NxClient fixed it. Version 9.1.3 of NxClient was current, so I updated the customer to the newer version, and that resolved the issue.

 


 

By: (Security+)

How do you know what due diligence documents to gather from each of your vendors? There are many methods available, but some result in more accurate documentation than others. Today, I'm going to review two of the primary methods and discuss the effectiveness of each method.

Method #1: The Bucket Method

I often see what I will call the bucket method.

It Goes Something like This

Imagine you have a list of questions you ask about vendor characteristics, and then you classify that vendor based on the number of questions answered as "yes." For example, a vendor should be considered:

  • "Level 1" if two or less are answered as "yes."
  • "Level 2" if three to four are answered as "yes."
  • "Level 3" if five or more are answered as "yes."

Then, you could define the required due diligence based on the level of the vendor, or based on the bucket in which the vendor is grouped. At "Level 1," collect only a service level agreement. At "Level 2," collect a contract, a confidentiality agreement, and financial statements. At "Level 3," collect all document types (e.g., a contract, confidentiality agreement, financial statements, SOC report, examination report, BCP, etc.).

What Happens Now?

This method seems relatively simple to carry out. But in reality, it can create a lot of unnecessary document exceptions, and occasionally miss opportunities to request relevant documents.

  • Unnecessary Document Exceptions in a Bucket Method
    Consider a vendor classified as "Level 3." While five characteristics applied to the vendor, several of the required documents are unnecessary to request and, in some cases, unreasonable. This results in an exception record to explain each case and, ultimately, requires more effort from you, as the vendor manager, to oversee the relationship.

  • Missed Opportunities for Requesting Relevant Documents in a Bucket Method
    Consider a vendor classified as "Level 2." While only three characteristics applied to the vendor, one of them is very important: if this vendor were unavailable for 24 hours, it would be detrimental to our business. We should get their BCP, but we did not, because it was not required for "Level 2" vendors.

What This Means for You

The bucket method costs a lot of time and effort, even though the labeling process seems quick and simple.


Method #2: The If-Then Method

Instead of the bucket method, consider the more accurate if-then method.

It Goes Something like This

Imagine you have a list of questions you ask about vendor characteristics. You could say that if you answer Question A as "yes," then you should collect a specific type of document related to the effects of that characteristic, Document A. Here are a few examples to consider:

  • If a vendor performs critical functions or provides critical services, then you should get a service level agreement.
  • If a vendor uses subcontractors in the performance of critical functions, then you should get their third-party due diligence documentation for those subcontractors.
  • If a vendor stores customer information, then you should get a SOC report.


What Happens Now?

By using the if-then method, you only gather the documentation that is appropriate to the third-party relationship. This method can also be continually refined. If you notice you are creating a lot of document exceptions for a specific type of document, revisit the question that triggers the requirement. Consider what assumptions are being incorrectly made about the characteristic's effects, and update your list to account for them.

Let's say you thought, "If a vendor stores, transmits, or accesses customer data, then I should get their SOC report." You would quickly find that not every vendor who can access your customers' data is going to have a SOC report, and that a SOC report is often unnecessary for the service you are receiving. In this case, you could create two separate questions: one about storing customer data, for which you would require a SOC report, and another about accessing or transmitting customer data, for which you would require a confidentiality agreement but not a SOC report. Making this adjustment would greatly reduce the number of documented exceptions.

What This Means for You

The if-then method will eliminate unnecessary document requests and ensure pertinent documents are obtained.

In Summary

While both methods provide standardized ways to gather due diligence documentation from vendors, the bucket method can actually cause more problems for your vendor managers. By using the if-then method, you can manage your vendors based on the services being provided to you and easily change your program to meet the developing needs of your environment. Couple this method with the Tandem Vendor Management Software to increase the efficiency with which you conduct your program.


 

We recently moved a customer from a datacenter at one of their locations to a large datacenter in the Dallas/Ft. Worth area. One of the devices we moved was a Meraki MX84 being used as a VPN concentrator. A VPN concentrator works by extending the network it sits on out to the access points; basically, wireless clients at all locations get an IP address on the same layer two network. This is important for a few reasons. First, the VPN concentrator needs to be in its own VLAN/DMZ. Second, something on the layer two network the VPN concentrator is connected to needs to be handing out DHCP addresses; in our case, we used a Fortigate UTM to run the DHCP server for that subnet. Third, traffic from all clients on the VPN concentrator's layer two network needs to be allowed outbound so those clients can reach the Internet. The traffic is tunneled from the access points to the VPN concentrator, so it does not intermix with the normal network traffic.

One of the issues we had was that the access points would not build the tunnel back to the VPN concentrator. After talking to Meraki support, we found that the access points and the VPN concentrator will not connect to each other if their public IP addresses are the same. This is because Meraki uses the same technology to build the VPN from the MX to the access points as it uses to build a VPN mesh between MX devices. Our devices were both using the default overloaded outbound NAT rule, so they were coming from the same public IP address. The solution is to make the MX come from a different public IP address, which can be accomplished with an inbound and outbound NAT statement. After we made this change, the access points connected to the VPN tunnel and wireless began to work.

One other thing to note: when an SSID is configured to tunnel traffic through a VPN concentrator, the access points will not broadcast that SSID if the VPN to the concentrator is not up. This can be helpful when troubleshooting wireless when there are no clients at the access points' location.

 

 


 

As part of a recent data center move, we had to reconfigure several APC network management cards (NMCs). The first thing I did with each of these NMCs was reset it to factory defaults and update the firmware.

Normally, this is a fairly simple process: connect the appropriate serial cable, connect to the COM port, press the reset button a couple of times, and log in with the default credentials (http://www.apc.com/us/en/faqs/FA156075/). Once logged in, you can use the factory reset or format command to wipe the card and bring it back to factory settings.

In one case, however, the card didn't survive the factory reset. It appeared the card had started to boot but never finished the boot process. By changing the baud rate in my terminal settings, I was able to connect to the BootMonitor at 57600 baud, 8 data bits, no parity, and no flow control. At that point I saw an error related to the checksum of the AOS firmware, which was preventing the card from booting. We had another NMC I could swap in if I needed to, but I wanted to see if I could simply reload the firmware onto the card and get everything working again. Fortunately, APC has an article for that: http://www.apc.com/us/en/faqs/FA293874/

Using TeraTerm and XMODEM, I was able to upload the bootmon, AOS, and application module firmware files (in that order) to the NMC. Once that had finally finished, simply rebooting the NMC brought everything back online.

This also had the positive side effect of updating the firmware on the card, since I wasn't able to download the old firmware files from APC's support site and had to use the current versions. From there, I could complete my setup process and bring everything back online.


 

By: (CISA, CISSP)

Early this year, the tech world was rocked by the announcement of two unprecedented vulnerabilities named Meltdown and Spectre.

These two vulnerabilities are a big deal because they are hardware vulnerabilities affecting nearly every device with a modern processor. This includes the microprocessors in workstations and servers, mobile phones, tablets, cloud platforms, and more.

Understandably, there was a rush from three main industries (processor companies, operating system companies, and cloud providers) to provide solutions. However, as a result of the urgent response, there were unanticipated update incompatibilities that crashed systems. This created a dilemma for IT professionals: "Do we install updates which may cause our systems to crash?" or "Do we sit tight and remain vulnerable?"

Even in the weeks of uncertainty, there were calm voices of seasoned reasoning. Their message reminded us that basic security standards remain our first line of defense. No matter how bad an exploit may be, its impact can be limited if:

  • Malicious code exploiting the vulnerability can't reach your systems
  • Operating system or application weaknesses are patched
  • Security software is installed (advanced endpoint protection software with artificial intelligence is a game changer)

So how do you achieve these standards? Here are some fundamental best practices:

  1. Monitor availability of operating system and application updates. Be sure you find and establish good sources to inform you about the patches and updates for your systems and applications. Then, monitor the sources or subscribe to notifications.

  2. Test updates to ensure compatibility. It is best if your update and patching process includes a test environment where non-production systems are updated first in order to test functionality and compatibility. This allows you to postpone or avoid updates which might crash systems or applications.

  3. Apply updates and patches on a regular schedule. As a best practice, you should implement a schedule (at least monthly) to evaluate, test, and install updates for systems and critical applications. In this way, your schedule can coincide with the schedules of operating system and application vendors (e.g., Microsoft has "Patch Tuesday," the second Tuesday of each month).

  4. Install and maintain security software (e.g., antivirus software, endpoint security software, etc.). If possible, explore and utilize behavior-based endpoint protection software. This type of software "watches" system behavior to notice and stop suspicious actions.

  5. Prevent malicious code execution. The goal is to keep malicious code out of your network and systems. This is best accomplished with layers of security including Internet filtering, phishing detection, and security awareness training for system users. Security awareness is essential to help prevent users from falling prey to malicious emails.