It was announced on August 16th that 22 Texas cities were attacked and infected with ransomware, rendering many of their municipal IT systems unavailable to conduct daily business. The mayor of one of these cities said the ransom demand was $2.5 million to unlock their files. The Texas Department of Information Resources believes this was a coordinated attack by a single threat actor.

We will likely get more details about how these networks were infected, but this incident should be a reminder to continually evaluate your cyber security risks and follow best practices to ensure your business or financial institution is protected. 

Below are a few comments and recommendations to consider as you examine your cyber security posture.

You don't have to be a big business to be a target

We've seen an increasing number of cyber attacks and ransomware infections directed toward small businesses, where bad actors see them as "low-hanging fruit" with limited cyber security defenses. The cities listed in the recent news articles about this event are relatively small - fewer than 10,000 residents.

Most of these attacks rely on email phishing to gain access

An email filtering solution is a good start, but ongoing employee training and testing is critical to help staff recognize potentially malicious emails. There are several tools available, like the Tandem Phishing solution, to help design and implement a phishing testing plan.

Traditional Anti-Virus solutions are not good enough

Many small businesses are still relying on traditional signature-based AV solutions. These products are not sufficient to protect against the latest malware. New products such as CylancePROTECT are more effective for stopping attacks by using machine learning instead of a bulky signature database.

Monitor your network

Our IT environments are under constant attack from bad actors around the world. This is an unfortunate fact of life today. An effective monitoring solution like CoNetrix Network Threat Protection is one of the security layers every business should implement to help identify these attacks and react quickly to prevent or limit potential damage.

Incident Response is important

While we apply controls to protect against incidents, it is important to have a plan in the event an incident occurs. If you have a documented Incident Response plan, great! Now take that IR plan to the next level by regularly conducting tabletop exercises and penetration testing to validate and improve it.

Backups should be a last resort

Ideally, if several security layers are in place, then restoring from a backup won't be needed. However, to ensure your backup is safe from being encrypted by ransomware, it should be "air gapped" from the primary network. This means the backup data should be offline or not directly accessible for the malware to encrypt. Historically this was done using removable media like tapes, but today it is much more efficient and cost-effective to use a cloud backup service. Many of these services (like CoNetrix AspireRecovery) provide cloud backup with an option for disaster recovery services.

No enterprise has to be a victim of ransomware. With proper planning and intentional practice, you CAN protect your network. While there is an investment associated with implementing appropriate controls and practices, the return on investment is well worth it if you protect against just one attack, not to mention the peace of mind you gain.

Contact CoNetrix Sales if you would like more information about protecting your network.


The world of cybersecurity has had some fundamental shifts in the past several years that have left the vast majority of companies unprepared for today's threats. The extensive use of malware, for example, has dramatically reduced the value of traditional security solutions, such as firewalls, IDS/IPS, and anti-virus software. These solutions, which used to adequately prevent attacks, are now very limited in their risk mitigation value. Many organizations have not updated their cybersecurity technology and solutions to stop today's threats. It's like monitoring your front door for a break-in while someone comes in through the back window.

Even companies that have taken cybersecurity seriously have not always been led the right way by cybersecurity vendors. In the past, an organization that was serious about cybersecurity was told it needed 24x7x365 monitoring - paying really smart cybersecurity professionals to watch alerts and events in real time so they could respond at a moment's notice to malicious events.

But legacy technologies have relied mostly on human review, not machine intelligence. A common metric for traditional Managed Security Service Providers (MSSPs) is one security engineer for every 30 devices under management. In the U.S., the average cybersecurity professional makes $116,000/year. This means the labor cost to monitor a single device is about $322/month, forcing traditional MSSPs to charge between $500 and $1,500/device/month to be profitable. Does this sound like your MSSP?
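The arithmetic behind that per-device number is simple; a quick sketch using the salary and staffing figures quoted above:

```python
# Back-of-the-envelope MSSP labor cost per device, using the figures quoted above.
ANNUAL_SALARY = 116_000      # average U.S. cybersecurity professional salary
DEVICES_PER_ENGINEER = 30    # traditional MSSP staffing ratio

monthly_labor = ANNUAL_SALARY / 12
cost_per_device = monthly_labor / DEVICES_PER_ENGINEER

print(f"Labor cost per monitored device: ${cost_per_device:,.0f}/month")
# Roughly $322/month in labor alone, before margin, tooling, and overhead -
# which is why traditional per-device pricing lands in the $500-$1,500 range.
```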

At those rates most customers can only afford to monitor a few devices; usually the firewall, IDS/IPS, and possibly a Windows domain controller. When asked why customers don't need to monitor more devices, these MSSPs would state, "As long as you are monitoring the choke points, you are safe."

Using the home security system analogy, imagine being told that monitoring the front and back doors is enough and then having your child kidnapped through a bedroom window. No choke-point-only security system would detect that, allowing the worst-case scenario to happen without your system even tripping. Home security systems relied on a few choke points because it was very expensive to run wires to the whole home (especially after it was already built). Today, however, many home security systems use wireless technology, which has made it possible to place multiple sensors throughout the house without wires. This makes securing the entire home from multiple threats much less expensive.

Thankfully, IT cybersecurity has evolved as well. Automated correlation and analytics from a properly deployed, configured, and tuned Security Information and Event Management (SIEM) solution can increase the ratio of devices per cybersecurity professional exponentially. Today, SIEM technology can quickly and efficiently find the "needle in a haystack" with far less human interaction. This dramatically reduces the number of cybersecurity professionals needed for a traditional Security Operations Center (SOC), which means a lower cost per device for customers. With a lower cost to monitor each device, we can now monitor more devices. Rather than just monitoring choke points, we can monitor all of the windows, doors, and rooms - which is really what was needed from the beginning.

When all of the critical devices are being monitored and correlated, you can now stitch together pieces of information across different systems and areas of the network to give you a much more accurate picture of what is happening. In other words, the more devices that you monitor, the more accurate the monitoring becomes and, therefore, the better economies of scale can be achieved.
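As a toy illustration of why cross-device correlation matters (the event records and the two-device threshold here are invented for the example): a single failed login on one system is noise, but the same source address misbehaving against several different systems is a pattern worth alerting on - and you can only see it if all of those systems are being monitored.

```python
from collections import defaultdict

# Hypothetical security events collected from different monitored devices.
events = [
    {"device": "firewall",  "src": "203.0.113.7",  "type": "denied"},
    {"device": "ad-server", "src": "203.0.113.7",  "type": "failed_login"},
    {"device": "vpn",       "src": "203.0.113.7",  "type": "failed_login"},
    {"device": "ad-server", "src": "198.51.100.2", "type": "failed_login"},
]

def correlate(events, min_devices=2):
    """Flag source addresses seen generating events on multiple distinct devices."""
    seen = defaultdict(set)
    for e in events:
        seen[e["src"]].add(e["device"])
    return {src for src, devs in seen.items() if len(devs) >= min_devices}

# The source hitting three different devices stands out; the single-device
# source does not generate an alert on its own.
print(correlate(events))
```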

So, what should a customer monitor? It's still a good idea to monitor the firewall and IDS, but we need to go beyond that and focus on today's threats. Routers, servers (especially Active Directory servers), wireless access points, and endpoint security solutions should all be monitored. With current SIEM technology, you can monitor all of these systems for about the same price as you used to be able to monitor just the firewall and IDS/IPS.

Monitoring only choke points and smaller areas of a network will not protect your organization from today's threats. Cybersecurity monitoring is more important than ever, but real risk mitigation comes with a holistic approach to monitoring all of the possible security events from every possible device. Stop only monitoring your front door for a break-in and assuming that your business is safe... because your back window is open.

Contact Technology Sales at 806-698-9600 or email if you want to improve your Cybersecurity Monitoring and Response solution AND lower the annual cost.


The Technology and Security groups at CoNetrix have received several questions from customers about the announcement from Oracle to move to a paid subscription model for commercial users. This issue has been very confusing for everyone as we try to decipher what this means with the various versions and editions of Java available today. In this article, we will attempt to clear up some of the confusion and provide recommendations going forward.

Java Standard Edition (SE) is the most common installation of Java today. Java SE consists of the Java Development Kit (JDK) and the Java Runtime Environment (JRE). Unless you are a developer, the JRE is the most important component because it's what allows you to run Java-enabled applications. Many users will have a version of the JRE installed on their PC to support an application they use every day. Until recently, Oracle Java SE was free to download and install for everyone.

However, starting in January 2019, commercial customers must have a paid subscription license for Java SE in order to receive updates. Historically, Java has not had the best track record on security, so installing Java updates at least monthly is critical to ensure newly discovered security vulnerabilities are fixed.

Does this mean you have to purchase Oracle Java subscription licensing to install updates? The answer is "It depends!"

Thankfully there are some open-source alternatives to the licensed Java SE. The most common are:

  • AdoptOpenJDK is an open-source distribution of the OpenJDK project, which is jointly supported by Oracle and the Java community.
  • Corretto is another distribution of the OpenJDK that is supported by Amazon.

Both of these distributions provide support back to Java version 8, which can be important for some applications that require this older version. Both are also supported by CoNetrix Technology for our Network Advantage patch management customers.
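One practical question is figuring out which distribution a machine is actually running. The `java -version` banner identifies the vendor, and a small script can classify it. A minimal sketch - the heuristics and sample banner strings below are illustrative, and real output varies by release:

```python
import subprocess

def java_vendor(banner: str) -> str:
    """Classify a `java -version` banner by distribution (simple illustrative heuristics)."""
    b = banner.lower()
    if "corretto" in b:
        return "Amazon Corretto"
    if "adoptopenjdk" in b:
        return "AdoptOpenJDK"
    if "openjdk" in b:
        return "OpenJDK (other)"
    return "Oracle Java SE"

# To check the local machine (commented out; requires Java on the PATH -
# note `java -version` writes its banner to stderr):
# banner = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
# print(java_vendor(banner))

print(java_vendor('OpenJDK Runtime Environment Corretto-8.222.10.1'))  # → Amazon Corretto
```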

The following are our recommendations for installing and supporting Java:

  • Verify you actually need to run Java. It's common for Java to get installed at some point but not removed when it's no longer needed.
  • Test one of the open-source Java options and see if your applications continue to work. If the testing is successful you should be good to remove Oracle Java.
  • Check with your application vendors who use Java to determine if they will support one of the open-source options. If they won't provide support, or they confirm their application doesn't work, then you may have to purchase a Java SE license for every system where these applications are used.
  • If an application requires Oracle Java, check with your vendor to see if they can bundle Java SE with their application. This could be more cost-effective than purchasing it separately.
  • If you deploy one of the open-source options, verify updates for this distribution are included in your patch management solution. Additionally, if your systems are scanned regularly for audits and exams make sure the scanning solution will recognize the open-source installation.

Please contact Customer Support at 806-698-9600 or email if you have any questions about management of Java and how CoNetrix can assist.


I was about to make a call on my Android phone, and when I went into the phone app, I noticed all my contact names were gone; only the phone numbers remained.

First, I checked to make sure Google Contacts sync was turned on. Then I made sure the contacts were properly synced, but still nothing.

Google suggested installing the Google Contacts app and restoring from the latest backup. So I installed the app, and there was an option to restore from yesterday's backup. I did that and still had no contacts. I then went into Google Contacts using a web browser and realized the contacts weren't just missing from my phone; they were missing from my Google account, period.

Further research turned up another setting called Undo Changes. This setting lets you revert your account back to its state on any day within the previous 30 days. I reverted back one week and all my contact data came back.

I then immediately made another backup and made sure daily backups were still set. You can also do the same thing from the web: sign into your Google account online, and Contacts is one of the apps you can open (the same place you'll find Gmail, Drive, Photos, etc.).


If you have a Lenovo laptop with a built-in battery and it won't power on or wake up from a sleep state, you can use the pin-hole emergency reset hole (button) to resolve the issue.

Disconnect the power adapter and depress this button with a paper-clip or similar item. Wait for 1 minute, then reconnect the AC adapter or power up using the battery.

The location of the reset button varies by model. The location for a T480s is shown below (taken from the Hardware Maintenance Manual). You do not lose any settings or data. Best I can tell, this is similar to removing the removable battery on older models.


After migrating Exchange to a new domain, the "Conversation History" folder in Outlook, which stores Skype conversation history, quit syncing conversations. Follow the steps below to get conversations to show up again:

  1. Type "credential manager" in the Windows 10 search box on the taskbar to select and open the Credential Manager (or alternatively open it through the Control Panel)
  2. Within the Credential Manager, select Windows Credentials and click "Add a Windows credential"
  3. Add your CoNetrix domain (email) credentials:
    1. Internet or network address:
    2. User name: domain\<username>
    3. Password: <domain password>
  4. Enter your credentials and click "OK".
  5. Note: it may take 20-30 minutes after you complete these steps before you see your conversations begin to show up. In addition, you may begin to get some older emails that indicate you missed conversations…
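The same credential can also be added from a command prompt with the built-in `cmdkey` tool instead of the Credential Manager UI. A sketch - the target address here is a placeholder, since the actual Internet or network address depends on your environment:

```
cmdkey /add:mail.example.com /user:DOMAIN\username /pass
```

Omitting a value after `/pass` prompts for the password instead of leaving it in your command history; `cmdkey /list` shows the stored credentials afterward.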


We recently installed some new blade servers in our Aspire datacenter, and I was working on getting ESXi 6.5 installed on them. After the installation, I took the opportunity to upgrade to 6.7. I didn't want to mount an ISO to iLO, reboot each host, wipe the config, and start fresh – I wanted to do an in-place upgrade.

When a host is connected to vCenter and Update Manager, you can just use Update Manager to create a baseline for the in-place upgrade. These were fairly fresh installations and were not connected to our vCenter environment, so I needed an alternative. Standalone hosts can also be upgraded using an Offline Bundle download and the "esxcli software profile" commands. I wanted to use an HP-branded bundle, so I couldn't use the online depot, which meant I would need to download the offline bundle and upload it either to every host or to a shared datastore that didn't yet exist. Surely there was an even simpler method that would still allow me to use an HP-branded offline bundle image without having to worry about a shared datastore.

Fortunately, there's a PowerShell method available. The "Install-VMHostPatch" cmdlet allows you to install host patches stored locally, at a web location, or in the host file system.

If you have multiple hosts, just connect to all of them in the same PowerShell session (or connect to vCenter, if that's available) – "Connect-VIServer -Server -User root -Password LocalPassword" – and run a "Get-VMHost | Install-VMHostPatch" to install the patches at the same time.
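Putting those pieces together, the whole operation can be sketched as follows. The host names and the bundle URL are placeholders, and this assumes the hosts are already in maintenance mode and a web server can serve the offline bundle:

```powershell
# Connect to each standalone host in one PowerCLI session (or to vCenter, if available)
Connect-VIServer -Server esxi01.example.local, esxi02.example.local -User root -Password LocalPassword

# Push the HP-branded offline bundle to every connected host from a web location
Get-VMHost | Install-VMHostPatch -WebPath "http://fileserver.example.local/depot/ESXi-6.7-HPE-offline-bundle.zip"
```

After the install completes, each host still needs a reboot to finish the upgrade.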

The basic syntax and instructions are in the PowerCLI documentation for the cmdlet - this is a quick and easy way to install patches without Update Manager or enabling SSH on each individual host.

One other thing to note, I ran into issues with the Local Path and Web Path, but I believe it was due to a lack of available space in the tmp partition to copy the installation files. Unfortunately, this means I had to mount a shared datastore anyway, but setting up NFS on a spare Linux appliance made even that simpler than it could've been.


The May 2019 Microsoft patch releases included an update for a very high-risk vulnerability (CVE-2019-0708, aka BlueKeep) that affects Windows XP, Windows 7, Server 2003, Server 2008, and Server 2008 R2.

This vulnerability allows an unauthenticated attacker (or malware) to remotely execute code on the vulnerable system. It is considered VERY high risk, particularly for systems with Remote Desktop Protocol (RDP, port 3389) directly exposed to the Internet. However, if a system inside the network is compromised, the attack could easily spread to other PCs and servers because RDP is enabled by default.

CoNetrix strongly recommends all customers ensure the May updates are installed as soon as possible.

Microsoft has not only released updates for Windows 7, Server 2008 & R2, but has also issued updates for Windows XP and Server 2003, which are no longer officially supported.

All CoNetrix Technology customers with managed services agreements and all cloud-hosted Aspire systems were updated shortly after this vulnerability was announced.

This vulnerability can also be mitigated by enabling Network Level Authentication (NLA). Additionally, CoNetrix recommends disabling RDP access over the Internet to internal systems.
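For reference, NLA for the default RDP listener is controlled by a single registry value, so it can be enabled at scale without visiting the System Properties dialog on each machine. A sketch as a .reg fragment - verify this against your own security baseline before deploying, and note it only mitigates, not patches, the vulnerability:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp]
"UserAuthentication"=dword:00000001
```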


If you are using Server 2016 as a Citrix or RDS server, users often request that Windows Photo Viewer be their default program for photos instead of Paint. Photo Viewer is installed with Server 2016 but does not have the file associations needed. Also, the default application is a per-user setting, so enforcing it will require a GPO.

Here are the steps:

  1. Import the registry settings to create the file associations needed for Windows Photo Viewer
  2. Set Default Program associations
    1. Control Panel > Default Programs > Set default programs
    2. Select Windows Photo Viewer
    3. Select Choose Defaults for this program
    4. Select the extensions you want to set as default for Windows Photo Viewer
    5. Click Save
  3. Verify functionality by opening a file with extension set in previous step and verify it opens with Photo Viewer
  4. Create default association file to set default for all users at logon
    NOTE: the above process sets defaults for the current user only; to set them for a user at logon, the settings must be imported at logon
    1. From an elevated command prompt or PowerShell, run the command below to create an XML document with the necessary associations
      "dism /Online /Export-DefaultAppAssociations:C:\cnx\DefAppAssoc.xml"
    2. Copy XML to a network location accessible by GPO policies
  5. Create or modify an existing GPO to pull XML file settings
    1. Computer Configuration > Administrative Templates > Windows Components > File Explorer > Set a default associations configuration file
    2. Enable policy and set network path of file from previous step
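For reference, the exported file is a simple list of extension-to-ProgId mappings. A trimmed, illustrative sample is below - the exact ProgId values depend on the registry associations imported in step 1, so check your own exported file rather than copying these:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<DefaultAssociations>
  <Association Identifier=".jpg" ProgId="PhotoViewer.FileAssoc.Jpeg" ApplicationName="Windows Photo Viewer" />
  <Association Identifier=".png" ProgId="PhotoViewer.FileAssoc.Png" ApplicationName="Windows Photo Viewer" />
</DefaultAssociations>
```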


I recently built some new Remote Desktop Servers for a customer. They had previously used roaming profiles set via the Profile Path setting on the Remote Desktop Services Profile tab of the user's Active Directory object. This worked well when set up correctly, but sometimes the IT department would forget to add this path to new user profiles, which would cause issues. I was looking for a way to eliminate the need for IT to remember to add this option to the profiles of RDS users.

I remembered User Profile Disks being an option in Windows Server 2012 and newer server operating systems. I added User Profile Disks to the configuration when I set up my new collection, and it initially seemed to work well. However, when I logged into all six of my RDS servers at the same time, I noticed that I received a temporary profile on all but one of them. Some investigation led me to find that a User Profile Disk can only be connected to one server at a time. This likely would have been fine 99% of the time, but I was concerned that the odd occasion where a user got connected to two servers at once - due to something like a server being prevented from accepting new connections - would cause problems. I ultimately decided not to enable User Profile Disks to avoid any potential issues when a user might have a session on two servers.

As an alternative, I was able to set a roaming profile path via a computer Group Policy and link it to the OU containing the RDS servers. This accomplished the goal of automating the user profile setup. If a user is logged into two servers at one time, there may be an issue with which profile is written back to the share last, but it will not cause a temporary profile to be created on the RDS server. The settings I enabled are shown below:
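As a sketch of what that computer-side policy does under the hood: the setting "Set roaming profile path for all users logging onto this computer" (under Computer Configuration > Policies > Administrative Templates > System > User Profiles) writes a MachineProfilePath value in the policies hive. The share path below is a placeholder for illustration, not the customer's actual path:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\System]
"MachineProfilePath"="\\\\fileserver\\RDSProfiles$\\%USERNAME%"
```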