IT audits are a cornerstone of a strong information security program. The auditing process is how we confirm systems are configured securely and controls are working as expected. But it's not just about checking boxes. An audit's real value comes from collecting and reviewing data in a verifiable way.

In this article, we'll define independent, authenticated audit data collection, explain why it is key to validating your security posture, and talk about how you can effectively manage risks associated with it.

About Independent, Authenticated Audit Data Collection

Independent, authenticated audit data collection is the process auditors use to gather reliable, high-quality information about systems and controls. Let's break it down.

Audit Data Collection

IT audits aren't just about running scans. They're about finding stories hidden in data. Here's a look at the types of data typically collected during an IT audit and why the data is important.

Data Type 

Why Collecting This Data Matters 

User access and system activity logs

This data shows who logged in, when, and from where, as well as what happened on a system, including background processes and events not directly related to user activity.

Issues with logging controls can easily go unnoticed for long periods of time, as systems may continue to operate effectively in every other way. An organization may not realize these critical logs are nonexistent until a system failure, security breach, or other incident reveals the gap. 

Group memberships and role assignments 

This data shows who has access to which systems or data, including who has elevated privileges. 

Roles, groups, and privileges change regularly. It's easy for access levels to drift, especially when someone changes roles or leaves the company, or when a new vendor comes on board. Each of these changes can inadvertently result in incorrect group memberships or access rights if not carefully managed. 

Legitimate temporary changes can easily become not-so-temporary. For example, if an employee is having technical issues, the IT team might grant them elevated privileges to help with troubleshooting. But once the issue is resolved, those privileges don't always get removed as they should. (A brief sketch of how an auditor might check for this kind of drift follows this rundown of data types.) 

Configuration settings 

This data shows operating system and application settings that impact security. Hardware and software vendors are constantly releasing updates, adding features, fixing bugs, and patching vulnerabilities, all of which can change configuration settings. 

New security settings introduced in these updates don't always get enabled. This is largely due to a lack of awareness about the changes or the existence of the new settings. 

Patch status reports 

This data shows information about missing or out-of-date security updates. 

When auditors find missing critical security patches, this should spark a helpful conversation about the root cause. In some cases, the patch management system may not have been configured to report on all types of updates. In others, certain systems or applications may have been unintentionally excluded from the reporting process altogether. 

Firewall and network device configurations 

This data shows the rules and policies that control how traffic flows into, out of, and through the network. 

Temporary firewall rules are sometimes left in place permanently. For example, a rule might be added that allows unrestricted internet access to a system during testing, but after the testing ends, the rule is not removed or restricted. 

Endpoint protection status 

This data shows whether endpoint protection systems (e.g., EDR, anti-malware, etc.) are installed and functioning. 

Individual workstations sometimes fail to check in with the endpoint security server. These workstations often go unnoticed because they are not reporting an issue and the reporting software does not flag when a system has stopped communicating. As a result, these workstations can miss important updates or fall out of compliance. (A simple automated check for this is sketched below, in the discussion of automated collection.) 

Vulnerability scan results 

This data shows the status of unresolved vulnerabilities. 

Managing vulnerabilities across IT systems can be overwhelming with hundreds (or even thousands) of issues continuously emerging. An effective vulnerability management program relies on risk-based decision-making: determining which vulnerabilities to fix, which to mitigate through alternative controls, and which to accept.  

The goal of running vulnerability scans during an audit isn't simply to point out vulnerabilities, but to assess whether the organization's risk-based decisions are resulting in the level of risk reduction they expect. Performing an independent vulnerability scan in an IT audit can bring valuable clarity to the effectiveness of what is often a complex and nuanced process. 

Other data types 

Depending on the nature and scope of the audit, other data types may be collected as well. For example: 

  • Audit trails from key applications, which show logs of data access, changes, and user activity. 
  • Backup records, which show evidence that data is being backed up and would be recoverable if needed. 
  • Security system alerts, which show that key alerts are configured and being delivered as expected from applicable systems (e.g., SIEM, EDR/XDR, and email systems). 
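
To make the access drift concern above concrete, here is a minimal sketch of the kind of automated membership check an auditor might run against a directory. The domain, server, audit account, and approved-admin list are hypothetical placeholders, and the open-source ldap3 library is just one way to query Active Directory; a real engagement would tailor the query, error handling, and logging to the environment.

```python
# Minimal sketch: compare current Domain Admins membership against an
# approved list to spot access drift. Server name, credentials, and DNs
# are placeholders; a real CAAT would use a dedicated, least-privilege
# audit account and log its own activity.
import getpass
from ldap3 import Server, Connection, SUBTREE

APPROVED_ADMINS = {
    "CN=Jane Doe,OU=IT,DC=example,DC=com",
    "CN=Break Glass,OU=IT,DC=example,DC=com",
}

server = Server("dc01.example.com", use_ssl=True)
conn = Connection(server, user="EXAMPLE\\svc-audit-read",
                  password=getpass.getpass("Audit account password: "),
                  auto_bind=True)

conn.search(search_base="DC=example,DC=com",
            search_filter="(&(objectClass=group)(cn=Domain Admins))",
            search_scope=SUBTREE,
            attributes=["member"])

current = set(conn.entries[0].member.values) if conn.entries else set()

for dn in sorted(current - APPROVED_ADMINS):
    print(f"UNEXPECTED privileged member: {dn}")   # follow up with the control owner
for dn in sorted(APPROVED_ADMINS - current):
    print(f"Approved member not found: {dn}")      # possible stale documentation
```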

 

These types of data can be collected manually or with the help of automated tools. Gathering and reviewing this data by hand can be incredibly time-consuming, and the burden only grows in larger, more complex, or decentralized environments. This is why many auditors rely on automation to gather the data.

The FFIEC Audit Booklet calls this "computer-assisted audit techniques."

"IT auditors frequently use computer-assisted audit techniques (CAATs) to improve audit coverage by reducing the cost of testing and sampling procedures that otherwise would be performed manually. CAATs include many types of tools and techniques, such as generalized audit software, utility software, test data, application software tracing and mapping, and audit expert systems."

Guidance goes on to state that examples of CAATs include internally developed tools, commercial audit software, vendor-supplied utilities, and tools created by IT auditors. Vulnerability scanners are one common example of a CAAT, but many other types of tools can be used to collect audit data depending on the audit's scope and objectives.

Guidance is clear that these tools can be effective in streamlining and improving the accuracy of the audit data collection process, as long as they remain under "strict control" of the audit function.
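
As a simple illustration of a small, purpose-built CAAT, the sketch below flags endpoints that have quietly stopped checking in with the endpoint security console, the silent gap described in the endpoint protection row earlier. It assumes a hypothetical CSV export with hostname and last_checkin columns; the file name, column names, and seven-day threshold are illustrative only.

```python
# Minimal sketch: flag endpoints that have stopped checking in with the
# endpoint security console. Assumes a CSV export with "hostname" and
# "last_checkin" (ISO 8601) columns; adjust to the console's real export.
import csv
from datetime import datetime, timedelta, timezone

MAX_SILENCE = timedelta(days=7)   # illustrative threshold
now = datetime.now(timezone.utc)

with open("endpoint_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        last_seen = datetime.fromisoformat(row["last_checkin"])
        if last_seen.tzinfo is None:
            last_seen = last_seen.replace(tzinfo=timezone.utc)
        if now - last_seen > MAX_SILENCE:
            print(f'{row["hostname"]}: last check-in {last_seen:%Y-%m-%d} '
                  f"({(now - last_seen).days} days ago)")
```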

Independent

For audit data collection to be independent, it must be performed by someone other than the people who implemented the system or control.

This principle is clearly stated in the Interagency Guidelines Establishing Information Security Standards, per the Gramm-Leach-Bliley Act (GLBA). The standards require financial institutions to:

"Regularly test the key controls, systems and procedures of the information security program. […] Tests should be conducted or reviewed by independent third parties or staff independent of those that develop or maintain the security programs."

This idea is expanded in the FFIEC Information Security Booklet, which states:

"To be considered independent, testing personnel should not be responsible for the design, installation, maintenance, and operation of the tested system, or the policies and procedures that guide its operation."

In other words, if someone implements a system or control (whether internal IT staff, a managed service provider (MSP), a managed security service provider (MSSP), or anyone else), that same person or group cannot be the one responsible for validating it. This need for independence extends to the tools and techniques used to collect audit data. 

Independence is crucial to the audit function, including IT audits. Reasons for independence include: 

  • Avoiding conflicts of interest. When individuals evaluate their own work, they might unintentionally overlook issues or downplay flaws to avoid negative consequences. 
  • Maintaining audit credibility. Audit results are easier for stakeholders to trust when the auditor is clearly impartial and objective. 
  • Improving control effectiveness. An unbiased review often leads to clearer, more effective recommendations, resulting in a stronger control environment.  

Validation needs fresh eyes to be credible. The more independent the audit data collection, the more trustworthy the results. 

Authenticated 

For audit data collection to be authenticated, it must use credentials that provide access to the system or control being audited. 

If you'll allow us a quick analogy, collecting unauthenticated data is like trying to do a home inspection with the doors locked and the blinds drawn. You might catch a few things from the outside (e.g., a missing shingle, a cracked window, etc.) but you can't see the wiring, the plumbing, or whether the foundation is sound. You're guessing about what really matters. 

On the other hand, collecting authenticated data gives you the keys to the house. You can walk through every room, open cabinets, check for leaks, and test the smoke detectors. 

Both perspectives can provide value, but when your purpose is thorough control validation, performing authenticated data collection is the difference between presuming everything works and knowing it does.

A common example of authenticated data collection is authenticated vulnerability scanning. The FFIEC Architecture, Infrastructure, and Operations Booklet highlights this as a method for collecting high-quality data when discussing the importance of using a dedicated account for authenticated vulnerability scans:

"[Authenticated scans are] an essential tool to obtain accurate vulnerability information on covered devices by authenticating to scanned devices to obtain detailed and sensitive information about the OS and installed software, including configuration issues and missing security patches."

While authenticated scans are important, authenticated audit data collection goes beyond vulnerability scanning. Authenticated audit data collection includes a broader set of activities that require internal access to provide meaningful insights. 
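
As one concrete example of authenticated collection beyond scanning, the sketch below uses a dedicated audit account to log into a Linux host over SSH and read a security-relevant setting directly from the system, rather than relying on a screenshot or exported report. The host name, account name, and the particular setting checked are assumptions for illustration; it uses the paramiko library.

```python
# Minimal sketch: authenticated collection of an actual configuration
# value (here, whether SSH password authentication is enabled) instead of
# relying on an exported report. Host and account are placeholders for a
# dedicated, read-only audit account.
import getpass
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # verify host keys properly in practice
client.connect("server01.example.com", username="svc-audit",
               password=getpass.getpass("Audit account password: "))

stdin, stdout, stderr = client.exec_command(
    "grep -i '^PasswordAuthentication' /etc/ssh/sshd_config")
setting = stdout.read().decode().strip()
print(f"server01.example.com reports: {setting or 'value not set explicitly'}")

client.close()
```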

Using the same data types mentioned earlier, here are some examples of why authenticated data collection matters.

Data Types 

Why Authenticated Audit Data Collection Matters 

User access logs; group memberships and role assignments 

It goes beyond just looking at exported or reported values. It reveals real-time user permissions and roles. 

Configuration settings 

It goes beyond just looking at the procedures for setting up a new system. It allows direct access to system and application settings to verify actual configurations and find areas where configurations have changed. 

Patch status reports 

It goes beyond relying on reports that may not be complete. It verifies installed patches directly on a sample of systems selected by the auditor. (A brief sketch of this kind of direct verification follows these examples.) 

System activity logs; firewall and network device configurations; audit trails from key applications; backup records; security system alerts 

It goes beyond reviewing a sample from the system or control owner. It ensures the logs are coming from trusted, original sources, they are being collected and stored properly, and they haven't been tampered with. 

Endpoint protection status 

It goes beyond reviewing the same reports your IT team, MSP, and/or MSSP are already reviewing. It ensures required endpoint security tools (e.g., EDR, anti-malware, etc.) are installed and functioning correctly on the systems your staff use daily. 
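
As referenced in the patch status row above, here is a minimal sketch of verifying installed updates directly on a sampled Windows system and comparing them with what the patch management report claims. The report file name and format are assumptions; Get-HotFix is a standard PowerShell cmdlet, though it does not cover every update type, so a real audit would supplement it.

```python
# Minimal sketch: verify installed Windows updates directly on a sampled
# host and compare against the KB IDs the patch-management report claims.
# Assumes this runs locally on the sampled system and that
# "reported_kbs.txt" holds one KB ID per line exported from the report.
import subprocess

reported = {line.strip() for line in open("reported_kbs.txt") if line.strip()}

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
    capture_output=True, text=True, check=True)
installed = {kb.strip() for kb in result.stdout.splitlines() if kb.strip()}

print("Reported as deployed but not found on this host:",
      sorted(reported - installed) or "none")
print("Installed but missing from the report:",
      sorted(installed - reported) or "none")
```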

 

In short, authenticated vulnerability scans are valuable, but they represent just one part of a much broader picture. Full-scope, authenticated audit data collection helps validate that systems are configured, secured, and functioning as intended. 

Risks of Independent, Authenticated Audit Data Collection

Testing always comes with some level of risk. This is true of any meaningful security activity. The key isn't to just avoid risk, but to thoughtfully measure and manage it. 

Here are a few common concerns with independent, authenticated audit data collection and suggestions for how to address them.

Risk 

Response 

Privileged access. Giving external auditors or tools privileged access can increase exposure. 

Grant least privilege access. Ideally, audit data collection should be done with read-only access, since auditors don't administer the system; they just need to collect data. However, the level of access required varies across systems. For example, collecting some audit data from Microsoft Active Directory requires Domain Admin privileges, while Microsoft's cloud products have made significant strides in allowing audit data collection through read-only access. The goal is to balance collecting high-quality audit data with granting the least privilege necessary to minimize risk. 

Use a dedicated account. The account used for data collection should be a dedicated account and should not be used for any other activities. This allows you to manage security settings of the account, enable/disable the account as needed, and maintain logs related to the account's activity. 

Disable accounts when not in use. Audit data collection is a relatively short-lived process, especially when CAATs speed things up. This means the window of time privileged access is needed can be short, sometimes days or only hours. As soon as the data collection is complete, disable the accounts until the next audit. (A brief sketch of automating this step appears at the end of this section.) 

Do good vendor management. Independent testing should be done by reputable firms with experienced personnel. Assess the audit firm's qualifications, review their testing procedures, and negotiate a strong contract. Granting privileged access doesn't remove accountability. Instead, it provides the necessary visibility to ensure the audit delivers real value. 

Vulnerable technology. The tools used for testing might be vulnerable or require network changes to use. 

Don't relax your security posture. Authenticated testing may require privileged access and treating the audit tools as trusted, internal assets. But the audit should use credentials you control and follow your change management, authentication, and monitoring processes.

Do good vendor management. Just like with any vendor who connects to your network, make sure the vendor is reputable, and their testing tools are patched, secured, and subject to your due diligence process. 

Operational downtime. Testing could disrupt production systems or slow down operations. 

Coordinate testing with the audit firm. Testing should always be carefully planned and managed. This could include off-hours scheduling, controlled scan settings, and clear communication throughout the process. Reputable audit firms know how to test without disruption and know your priority is keeping the organization running. 

Vendor compromise. Audit firms are attractive cyber targets. If their tools or systems are compromised, they could become a conduit to your network. 

Do good vendor management. You can and should ask about the audit firm's own security posture. Just like any vendor, they should meet your due diligence requirements. Disable their access when it is not in use and perform ongoing monitoring to ensure continued compliance with contractual obligations. 

Realistic testing. Unauthenticated scans reflect more realistic attack conditions. Internal security tools may block testing or flag it as malicious. 

This should not be an either/or decision. Both unauthenticated and authenticated testing provide value. One shows what an attacker sees (internal and external penetration tests); the other, to continue our analogy, shows what's happening inside the house (IT audits). Confusing the two decreases the value of both. 

Audit data collection needs to be thorough to verify control effectiveness. Blocking this data collection by treating it the same as malicious activity decreases the auditor's ability to give you what you need: independent verification. 

In contrast to an audit, internal and external penetration testing should be limited by all your controls and treated like malicious activity because the goal is to simulate an attack. 

Third-party testing coverage. Relying solely on MSP-provided SOC reports or scan results may leave gaps in assessing the security of the systems or controls they manage on your behalf. 

Get the full story. SOC reports and vulnerability scans from your MSP are a valuable part of the picture. They show how well the MSP secures their own infrastructure, but they don't always cover the security of your hosted systems and network devices. For example, a SOC report might assess how the MSP patches their own servers, but not necessarily the ones they manage for you. That's why it's important to ensure your own controls are independently audited. 

Do good IT audit scoping. To ensure complete coverage, clarify with your MSP which controls are being independently audited. Ask questions like: 

  • Are admin and user accounts in my Active Directory domain being independently audited? 
  • Are patches and updates for my servers, workstations, and network devices being independently audited? 
  • Are endpoint protections on devices used by my employees (e.g., workstations, laptops, tablets, etc.) being independently audited? 
  • If "Yes," how and by whom? 

 

Bottom Line

Anyone can collect and review data, but not all data is equal. What makes an audit truly worthy of the term "validation" is the reliability, quality, and objectivity of the data being assessed. 

If you're relying on your information security program to protect your institution and your customers, your audit approach should reflect that level of responsibility. Ask your IT team, MSP, or MSSP whether your audit data collection is both independent and authenticated — and if not, consider what steps you may need to take to get there.

Need help getting there? CoNetrix Security specializes in bringing deep experience to the IT audit process in a way that meets both compliance demands and operational realities. Learn more about how CoNetrix Security can help you at CoNetrix.com/Security.