On January 19, 2018, the General Assembly of the State of Colorado introduced House Bill 18-1128, Concerning Strengthening Protections for Consumer Data Privacy. The regulation was signed into law on May 29, 2018 and goes into effect on September 1, 2018.
The new regulation contains four primary sections:
The first three sections focus on how a "covered entity" can protect personal identifying information (PII). A "covered entity" is defined as a "person" (e.g., an individual, corporation, business trust, etc.) who maintains, owns, or licenses PII in the course of their business, vocation, or occupation.
Section Four largely repeats the first three sections, shifting some wording around and replacing the term "covered entities" with "governmental entities."
Yes. While the regulation defines PII in a couple of different ways, both definitions include things a financial institution would "maintain, own, or license" in the course of normal business (e.g., Social Security numbers, credit card numbers, debit card numbers, account numbers, etc.). If you are a financial institution in the State of Colorado, Colorado HB 18-1128 applies to you.
Let's break this down by section.
Again, since financial institutions are already subject to GLBA, the organization should already have reasonable security procedures and practices in place to protect PII from unauthorized access, use, modification, disclosure, or destruction.
Per GLBA, each financial institution should have an incident response policy, program, and/or plan that outlines what the organization should do in the event of a security breach. However, Section Three additionally includes new requirements, specific to the State of Colorado, about classification and notification of a security breach.
For example, Section 3(2)(e) states that if the security breach affected more than 500 Colorado residents, the covered entity must notify the Colorado Attorney General as soon as possible, but no later than 30 days after determining that a security breach occurred. This requirement is new and specific to Colorado organizations, so it likely does not exist in your current incident response policy, program, and/or plan.
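To make the timing concrete, here is a minimal Python sketch of the Section 3(2)(e) notification trigger. The function name and structure are illustrative only, not language from the regulation:

```python
from datetime import date, timedelta

# Section 3(2)(e): if more than 500 Colorado residents are affected, notify
# the Colorado Attorney General no later than 30 days after determining
# that a security breach occurred.
AFFECTED_RESIDENT_THRESHOLD = 500
NOTIFICATION_WINDOW_DAYS = 30

def attorney_general_deadline(determination_date, affected_co_residents):
    """Return the latest AG notification date, or None if notice is not required."""
    if affected_co_residents > AFFECTED_RESIDENT_THRESHOLD:
        return determination_date + timedelta(days=NOTIFICATION_WINDOW_DAYS)
    return None
```

Note that the 30-day clock starts at the date of *determination*, not the date of the breach itself.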
To prepare for the September 1st effective date, it would be beneficial for each financial institution to compare their existing incident response policy with the new requirements in Section Three and make updates, as needed.
We have developed a downloadable PDF called "Understanding & Preparing for the Colorado Cybersecurity Regulation (HB 18-1128)." This document provides a side-by-side comparison of the regulatory language with our opinion, to help you simplify and interpret the regulatory wording as you prepare your institution for the September 1st deadline.
For Tandem Customers: The resource also provides information about how the requirements of HB 18-1128 are already addressed in Tandem, including recommendations about how you can incorporate the Colorado-specific requirements into your existing information security program.
Tandem is an online information security and compliance software suite designed to increase security and help financial institutions stay in compliance with GLBA and FFIEC guidance. Tandem is now used by financial institutions across the country, saving both time and money without sacrificing information security, cybersecurity, or compliance.
We had a customer report that all browser windows were closing for users, and the issue was increasing in frequency. Most of the users reporting the issue were at the corporate office, which has about 150 users and is where the IT department is located. I performed a remote session with one of the users and confirmed the issue. Internet Explorer, Chrome, and Firefox would all close, not crash, at the same time.
My first thought was that some remote assistance and IT management software they had recently installed was causing the issues. We uninstalled the software and the issues continued. My next thought was that something malicious was on the network and was killing the processes remotely. I moved the PC to the guest wireless network and the issues stopped. After moving the PC back to the internal network, the issues began again. After a while, the issues randomly stopped for this user. I moved on to looking at another user's PC. The IT department did not know of any new devices that had been brought onto the network.
Whatever was causing the issues was obviously powerful enough to kill processes. The browsers seemed to be closing at regular intervals: at the top of the hour and at half past the hour. I started Process Monitor, Process Explorer, and Wireshark, opened the browser, and waited. As expected, the issue occurred again. I looked through the Wireshark capture and did not see anything odd. In the Process Monitor log, however, I found several cmd.exe processes killing the browser applications, and those cmd.exe instances had been spawned by nxclient.exe processes running taskkill commands.
I started searching online and found a post on the NxFilter support group discussing the same issue. This customer had been using NxFilter for web filtering for several years and was on version 5.0 of NxClient, which predated the version mentioned in the support group. The NxFilter creator responded to that thread, saying the client was killing browsers to force a refresh of the user's session, that this was not the correct behavior, and that a newer version of NxClient fixed it. Version 9.1.3 of NxClient was current, so I updated the customer to the newer version. That resolved the issues.
How do you know what due diligence documents to gather from each of your vendors? There are many methods available, but some result in more accurate documentation than others. Today, I'm going to review two of the primary methods and discuss the effectiveness of each method.
I often see what I will call the bucket method.
Imagine you have a list of questions you ask about vendor characteristics, and then you classify that vendor based on the number of questions answered as "yes." For example, a vendor should be considered:
Then, you could define the required due diligence based on the level of the vendor, or based on the bucket in which the vendor is grouped. At "Level 1," collect only a service level agreement. At "Level 2," collect a contract, a confidentiality agreement, and financial statements. At "Level 3," collect all document types (e.g., a contract, confidentiality agreement, financial statements, SOC report, examination report, BCP, etc.).
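As a sketch of the idea, the bucket method might look like the following in Python. The yes-count thresholds and the document lists are hypothetical examples, not a prescription:

```python
# Hypothetical bucket method: count "yes" answers, map the count to a level,
# and map the level to a fixed list of required documents.
REQUIRED_DOCS_BY_LEVEL = {
    1: ["service level agreement"],
    2: ["contract", "confidentiality agreement", "financial statements"],
    3: ["contract", "confidentiality agreement", "financial statements",
        "SOC report", "examination report", "BCP"],
}

def vendor_level(answers):
    """Assign a level from the number of 'yes' answers (thresholds are assumptions)."""
    yes_count = sum(1 for a in answers.values() if a)
    if yes_count >= 5:
        return 3
    if yes_count >= 3:
        return 2
    return 1

def required_documents(answers):
    """Return the fixed document list for the vendor's bucket."""
    return REQUIRED_DOCS_BY_LEVEL[vendor_level(answers)]
```

The weakness described below is visible in the code: every "Level 3" vendor gets the full document list, whether or not each document actually applies to the service being provided.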
This method seems relatively simple to carry out. But in reality, it can create a lot of unnecessary document exceptions, and occasionally miss opportunities to request relevant documents.
Even though the labeling process seems quick and simple, the bucket method ends up costing a lot of time and effort.
[Learn how to review your 3rd party vendor SOC reports in 15 minutes or less. Plus, download our free SOC review checklist.]
Instead of the bucket method, consider the more accurate if-then method.
Imagine you have a list of questions you ask about vendor characteristics. You could say that if you answer Question A as "yes," then you should collect a specific type of document related to the effects of that characteristic, Document A. Here are a few examples to consider:
By using the if-then method, you only gather the documentation that is appropriate to the third-party relationship. This method can also be continually refined: if you notice you are creating a lot of document exceptions for a specific type of document, revisit the question that triggers the requirement, consider what assumptions are being incorrectly made about the characteristic's effects, and update your list accordingly.
Let's say you thought, "If a vendor stores, transmits, or accesses customer data, then I should get their SOC report." You would quickly find that not every vendor who can access your customers' data has a SOC report, and that for some services a SOC report is simply unnecessary. In this case, you could create two separate questions: one about storing customer data, which would require a SOC report, and another about accessing or transmitting customer data, which would require a confidentiality agreement but not a SOC report. Making this adjustment would greatly reduce the number of documented exceptions.
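The if-then method reduces to a simple mapping from characteristic questions to the documents each characteristic justifies. A minimal Python sketch, where the question wording and document names are illustrative assumptions:

```python
# Hypothetical if-then rules: each "yes" answer directly triggers the
# documents that characteristic justifies collecting -- nothing more.
IF_THEN_RULES = {
    "Does the vendor store customer data?": ["SOC report"],
    "Does the vendor access or transmit customer data?": ["confidentiality agreement"],
    "Is there a contractual relationship?": ["contract"],
    "Is the vendor critical to operations?": ["financial statements", "BCP"],
}

def documents_to_collect(answers):
    """Gather only the documents whose triggering question was answered 'yes'."""
    docs = []
    for question, required in IF_THEN_RULES.items():
        if answers.get(question):
            for doc in required:
                if doc not in docs:
                    docs.append(doc)
    return docs
```

Refining the program is then just editing the rule table: splitting one question into two, as in the SOC report example, changes which documents are requested without touching the rest of the process.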
The if-then method will eliminate unnecessary document requests and ensure pertinent documents are obtained.
While both methods provide standardized ways to gather due diligence documentation from vendors, the bucket method can actually cause more problems for your vendor managers. By using the if-then method, you can manage your vendors based on the services that are being provided to you and easily change your program to meet the developing needs of your environment. Couple this method with the Tandem Vendor Management Software, and increase the efficiency with which you conduct your program.
We recently moved a customer from a datacenter at one of their locations to a large datacenter in the Dallas/Ft. Worth area. One of the devices we moved was a Meraki MX84 being used as a VPN concentrator. A VPN concentrator works by extending the layer 2 network it sits on out to the access points; wireless clients at all locations get an IP address on that same layer 2 network. This is important for a few reasons. First, the VPN concentrator needs to be in its own VLAN/DMZ. Second, something on the layer 2 network the VPN concentrator is connected to needs to be handing out DHCP addresses; in our case, we used a FortiGate UTM as the DHCP server for that subnet. Third, traffic needs to be allowed outbound to the Internet from all clients on the VPN concentrator's layer 2 network so clients can reach the Internet. The traffic is tunneled from the access points to the VPN concentrator, so it does not intermix with the normal network traffic.
One of the issues we had was that the access points would not create the tunnel back to the VPN concentrator. After talking to Meraki support, we found that the access points and the VPN concentrator will not connect to each other if their public IP addresses are the same, because Meraki uses the same technology to build the VPN from the MX to the access points as it uses to build a VPN mesh between MX devices. Both of our devices were using the default overloaded outbound NAT rule, so they were coming from the same public IP address. The solution is to make the MX come from a different public IP address, which can be accomplished via an inbound and outbound NAT statement. After we made this change, the access points connected to the VPN tunnel and wireless began to work.
One other thing to note: when configured to tunnel traffic through a VPN concentrator, the access points will not broadcast their SSIDs if the VPN to the concentrator is not up. This can be helpful when troubleshooting wireless when there are no clients at the location of the access points.
As a part of a recent data center move we had to reconfigure several APC management cards. The first thing that I did to each of these NMCs was to reset to factory defaults and update the firmware.
This is normally a fairly simple process. Connect the appropriate serial cable, connect to the COM port, press the reset button a couple of times, and log in with the default credentials (http://www.apc.com/us/en/faqs/FA156075/). Once logged in, you can use the factory_reset or format command to bring the card back to factory settings.
In one case, however, the card didn't survive the factory reset. It appeared the card had started to boot but never finished the booting process. By cycling through baud rates in my terminal settings, I was able to connect to the BootMonitor at 57600 baud, 8 data bits, no parity, no flow control. At that point I saw an error related to the checksum of the AOS firmware, which was preventing the card from booting. We had another NMC I could swap in if needed, but I wanted to see if I could simply reload the firmware onto the card and get everything working again. Fortunately, APC has an article for that: http://www.apc.com/us/en/faqs/FA293874/
Using TeraTerm and XMODEM, I was able to upload the bootmon, AOS, and application module firmware files (in that order) to the NMC. Once that had finally finished, simply rebooting the NMC brought everything back online.
This also had the positive side effect of updating the firmware on the card, since old firmware files were no longer available to download from APC's support site. From there, I could complete my setup process and bring everything back online.
Early this year the tech world was rocked with the announcement of two unprecedented vulnerabilities named Meltdown and Spectre.
These two vulnerabilities are a big deal because they are hardware vulnerabilities affecting nearly any device with a modern processor. This includes microprocessors in workstations and servers, mobile phones, tablets, cloud services, and other platforms.
Understandably, there was a rush to provide solutions from three main industries: processor manufacturers, operating system companies, and cloud providers. However, as a result of the urgent response, there were unanticipated update incompatibilities that crashed systems. This created a dilemma for IT professionals: "Do we install updates which may cause our systems to crash?" or "Do we sit tight and remain vulnerable?"
Even in the weeks of uncertainty, there were calm voices of seasoned reasoning. Their message reminded us that basic security standards remain our first line of defense. No matter how bad an exploit may be, its impact can be limited if:
So how do you achieve these standards? Here are some fundamental best practices:
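Alongside those practices, one concrete verification step is available on Linux: kernels 4.15 and later expose a sysfs interface reporting which of these hardware vulnerabilities the kernel knows about and whether mitigations are active. A minimal Python sketch (Linux-specific; on other platforms it simply returns an empty result):

```python
from pathlib import Path

def mitigation_status():
    """Read Linux's per-vulnerability mitigation report (kernels 4.15+)."""
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if not vuln_dir.is_dir():
        return {}  # older kernel, or not a Linux system
    # Each file is named for a vulnerability (e.g. meltdown, spectre_v2)
    # and contains a one-line status such as "Mitigation: PTI".
    return {p.name: p.read_text().strip() for p in sorted(vuln_dir.iterdir())}

for name, status in mitigation_status().items():
    print(f"{name}: {status}")
```

This gives you a quick answer to the IT professional's dilemma above: you can see per-host whether the updates you did install actually left you mitigated.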
One of our Technology customers recently migrated to a new AT&T WAN offering called AT&T Switched Ethernet – Network on Demand (ASE NoD). This is the most recent evolution of their metro Ethernet service with the addition of long-distance layer 2 connections.
What makes this "network on demand" is the ability to change bandwidth as needed through a web portal, up to the physical Ethernet hand-off limit, typically 10 Mb/s, 100 Mb/s, or 1 Gb/s. The default rate for each of this customer's locations was set to 20, 50, or 100 Mb/s.
Since this is a relatively new product, we ran into several gotchas in the implementation: