Security Assessment Techniques
Date: Feb 25, 2021
A number of tools and techniques are available to help organizations conduct security assessments. In this sample chapter from CompTIA Security+ SY0-601 Exam Cram, you will learn how a security information and event management (SIEM) system can help identify internal and external threats, monitor activity, conduct compliance reporting, and more.
A number of tools and techniques are available to help organizations conduct security assessments. Identifying vulnerabilities and threats is key to maintaining organizational security. In addition to identifying vulnerabilities, organizations need an approach to assess threats against their systems. A myriad of solutions are available. In the past, an organization first needed to move beyond simple log management and find a method to efficiently store and analyze log data across all of its networks, devices, and applications. Security information management (SIM) was the solution. Then, in addition, the data needed to be analyzed in real time to provide correlation across events and enable alerts and reporting. Security event management (SEM) was the solution in this case. SIM and SEM were eventually combined into what’s known today as security information and event management (SIEM). This chapter looks at security assessment techniques, including how they are combined and continue to evolve.
Vulnerability Scans
Many network scanners are designed to be passive and non-intrusive to the target systems. Passive scanning poses minimal risk to the assessed environment because it is designed to avoid interfering with normal activity or degrading performance. However, active tests against a system can still affect network and system performance. A comprehensive vulnerability scan helps an organization identify vulnerabilities, uncover common misconfigurations, and understand where further security controls are required. The following points briefly summarize these three goals:
▸ Identify vulnerabilities: Vulnerabilities include outdated software versions that contain flaws or are missing patches.
▸ Identify common misconfigurations: Vulnerability scanners can identify many common misconfigurations. Some scanners are even capable of remediation. Checking for misconfigurations is most beneficial when deployed configurations are compared against an organization’s security policies and standards.
▸ Identify lack of security controls: Identifying vulnerabilities provides an opportunity to remediate weaknesses. In some cases, organizations may find that they need to implement more security controls to mitigate the risk.
Vulnerability scanners fall into three broad categories, based on the devices they evaluate:
▸ Network scanners: This type of scanner probes hosts for open ports, enumerates information about users and groups, and proactively looks for known vulnerabilities.
▸ Application scanners: This type of scanner requires access to application source code or binaries but does not need to actually execute the application. Thus, this type of scanner tests an application from the inside. Application scanning supports all types of applications and is also known as static application security testing (SAST).
▸ Web application scanners: This type of scanner applies specifically to web applications and identifies vulnerabilities such as cross-site scripting, SQL injection, and path traversal. This type of scan executes an application and tests from the outside in. This type of scanning is known as dynamic application security testing (DAST).
A network vulnerability scanner, for example, is a software utility that scans a range of IP addresses, testing for the presence of known vulnerabilities in software configuration and accessible services. A traditional vulnerability scanner relies on a database of known vulnerabilities. It is an automated tool that can be directed at a targeted system or systems. Unlike systems that test for open ports, which test only for the availability of services, vulnerability scanners can check for the version or patch level of a service to determine its level of vulnerability.
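To make the idea concrete, the following minimal Python sketch shows the core of a version-based check: grab a service banner and compare it against a list of known-vulnerable versions. This is an illustrative sketch only; production scanners draw on large vulnerability databases rather than a hard-coded mapping, and the vulnerable-version entries below are hypothetical placeholders.

```python
# Minimal sketch of a version-based vulnerability check (illustrative only).
# Real scanners use large vulnerability databases; this mapping is a
# hypothetical stand-in.
import socket

VULNERABLE_BANNERS = {
    "OpenSSH_7.2": "known vulnerable release (placeholder advisory)",
}

def grab_banner(host, port, timeout=3.0):
    """Connect to a TCP service and read its greeting banner, if any."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""

def check_host(host, ports):
    for port in ports:
        try:
            banner = grab_banner(host, port)
        except OSError:
            continue  # port closed or host unreachable
        for bad_version, advisory in VULNERABLE_BANNERS.items():
            if bad_version in banner:
                print(f"{host}:{port} looks vulnerable: {banner} -> {advisory}")

check_host("192.0.2.10", [21, 22, 25])  # documentation-range IP for illustration
```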
Keep in mind that a vulnerability does not necessarily indicate an issue that needs to be immediately remediated—or even remediated at all. Using an analogy, consider a home as a subject for a vulnerability assessment. A broken deadbolt lock certainly seems like a vulnerability. Ideally, the homeowner would replace it; however, in some parts of the world, residents do not lock their doors anyway. A smashed window is a vulnerability as well. In some cases, it might make sense to mitigate a broken window simply by covering it with plastic to protect against the elements. Even a perfectly functioning window is a vulnerability, however. The benefit a window offers typically outweighs the benefits gained by living without windows. What is counted as a vulnerability typically depends on what you are trying to protect.
Upon completion of a vulnerability scan, an organization can generally choose to take one of three approaches:
▸ Remediation: The organization can patch the vulnerability.
▸ Mitigation: The organization can introduce a control to reduce the likelihood of the vulnerability being exploited or the impact if it is exploited.
▸ Acceptance: The organization can take no action if the risk is low, especially compared with the cost or operational impact of addressing the vulnerability.
There isn’t necessarily a quick method for determining risk based on the output of a vulnerability scanner. Relevancy to the business, trade-offs, and identified threats and likelihoods need to be considered to accurately interpret the results.
Vulnerability scanners rely heavily on catalogs of known vulnerabilities. Two standards are commonly used, both of which are open industry standards:
▸ Common Vulnerabilities and Exposures (CVE)
▸ Common Vulnerability Scoring System (CVSS)
CVE is a standard for identifying vulnerabilities. It is designed to allow vulnerability databases to be linked together and does not contain attributes such as risk, impact, remediation steps, or detailed technical information. It primarily includes a description and a unique identifier assigned through a CVE Numbering Authority (CNA). CVE also includes related references, such as vulnerability reports and advisories.
On the other hand, CVSS is a framework for communicating the characteristics and severity of vulnerabilities. A CVSS score is a rating from 0 to 10. Calculation of the score is complex and takes various components into consideration, such as how easy it would be to exploit the vulnerability. CVSS scoring seeks to address the following questions (a small parsing sketch follows the list):
▸ What is the attack vector? Does it require physical access, or can it be exploited over the network?
▸ What is the attack complexity?
▸ Are elevated privileges required?
▸ Is user interaction required?
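These questions map directly to the metrics encoded in a CVSS v3.1 vector string. The short sketch below, provided for illustration, parses such a vector into its metrics and maps a numeric base score to the standard CVSS v3.x qualitative severity bands; the full base-score formula is more involved and is omitted here.

```python
# Sketch: parse a CVSS v3.1 vector string and map a base score to the
# standard qualitative severity bands. The base-score formula itself is
# omitted for brevity.
def parse_cvss_vector(vector):
    """Split 'CVSS:3.1/AV:N/AC:L/...' into a metric -> value mapping."""
    parts = vector.split("/")
    return dict(part.split(":", 1) for part in parts[1:])  # skip 'CVSS:3.1'

def severity(base_score):
    """Qualitative rating bands defined by CVSS v3.x."""
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"], metrics["AC"], metrics["PR"], metrics["UI"])  # N L N N
print(severity(9.8))  # this vector scores 9.8 under the official calculator: Critical
```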
Intrusive vs. Non-Intrusive
Vulnerability tests seldom disrupt systems; non-intrusive scans simply identify vulnerabilities and report on them. However, even an initial port scan can cause a system to fail, particularly if the implementation of a particular service does not follow proper standards. Intrusive scans go further and aim to verify vulnerabilities by trying to exploit them. Organizations should take care before initiating such intrusive tests.
Credentialed vs. Non-Credentialed
Credentials such as usernames and passwords enable authorized access to a system. Scanners can be configured to run in either credentialed or non-credentialed mode. Non-credentialed scans are less invasive and provide an outsider’s point of view. With credentialed scans, however, the system can ascertain more information, which results in a more complete vulnerability status with greater certainty. Both credentialed and non-credentialed scans can mistakenly identify a vulnerability when none exists; this is known as a false positive. Confirming a large number of false positives can be time-consuming and places a burden on IT resources. Credentialed scans tend to reduce false positives and can also reduce the opposite effect: false negatives. False negatives are more difficult to see than false positives. A false negative is a lack of result when there should be one. A false negative may occur, for example, when a vulnerability is new, and a check has not been developed yet to look for the vulnerability.
Threat Assessment
Since evolving from SIM and SEM, SIEM has for years played a vital role in identifying threats and detecting security incidents. Now organizations are looking for ways to combine threat intelligence with SIEM because the intelligence gained can provide enriched data with greater context through correlation with external information. One trend that has emerged in recent years is that organizations now tend to assume that they have already been breached. Rather than simply responding to incidents, security teams look for ways to be proactive. Targeted threat hunting assessments have gained popularity as a result, and the programs and tools continue to evolve.
Security Information and Event Management (SIEM)
A security information and event management (SIEM) system provides the technological means to accomplish a number of goals related to security monitoring, including the following:
▸ Identifying internal and external threats
▸ Monitoring activity and resource usage
▸ Conducting compliance reporting for internal and external audits
▸ Supporting incident response
SIEM tools collect and correlate event data and subsequently provide alerts and information dashboards based upon that data. SIEM output can be used proactively to detect emerging threats and improve overall security by defining events of interest (EOI) and resulting actions. SIEM systems are a key element in meeting the requirements of compliance regulations such as SOX, GLBA, PCI DSS, FISMA, and HIPAA. SIEM systems provide a plethora of fine-grained details to support incident response programs. The purpose of SIEM is to store a large amount of data and turn it into knowledge that can be acted upon. SIEM systems are generally part of the overall security operations center (SOC) and have three basic functions:
▸ Centrally managing security events
▸ Correlating and normalizing events for context and alerting
▸ Reporting on data gathered from various applications
Consider, for example, that just one intrusion detection sensor or log data source can generate more than 100,000 events each day. SIEM systems rely on log collectors, which are responsible for aggregating and ingesting the log data from the various sources such as security devices, network devices, servers, and applications. Log aggregation is the process by which SIEM systems combine similar events to reduce event volume. SIEM systems aggregate data from many network sources and consolidate the data so that crucial events are not missed. By default, events are usually aggregated based on the source IP address, destination IP address, and event ID. The purposes of aggregation are to reduce the event data load and improve efficiency. Conversely, if aggregation is incorrectly configured, important information could be lost. Confidence in this aggregated data is enhanced through techniques such as correlation, automated data filtering, and deduplication within the SIEM system.
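The following simplified Python sketch illustrates this default aggregation behavior, grouping duplicate events by source IP address, destination IP address, and event ID and keeping a count instead of every record. The event dictionaries and field names are illustrative, not a specific SIEM schema.

```python
# Sketch: aggregate raw events by (source IP, destination IP, event ID),
# as described above, keeping a count instead of every duplicate record.
from collections import Counter

raw_events = [
    {"src": "10.0.0.5", "dst": "10.0.0.20", "event_id": 4625},  # failed logon
    {"src": "10.0.0.5", "dst": "10.0.0.20", "event_id": 4625},
    {"src": "10.0.0.7", "dst": "10.0.0.20", "event_id": 4624},  # successful logon
]

aggregated = Counter(
    (event["src"], event["dst"], event["event_id"]) for event in raw_events
)

for (src, dst, event_id), count in aggregated.items():
    print(f"{src} -> {dst} event {event_id}: {count} occurrence(s)")
```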
Event aggregation alone is not enough to provide useful information in an expeditious manner. A common best practice is to use a correlation engine to automate threat detection and log analysis. The main goal of correlation is to build EOIs that can be flagged against other criteria or used to identify incidents. To create EOIs, the correlation engine applies the following techniques to the aggregated data (a sketch of one such rule follows the list):
▸ Pattern matching
▸ Anomaly detection
▸ Boolean logic
▸ A combination of Boolean logic and context-relevant data
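As an illustration, the following sketch implements one such correlation rule, combining Boolean logic with context (a time window): several failed logons from one source followed by a successful logon are flagged as an EOI. The thresholds, field names, and event structure are illustrative, not taken from any particular SIEM product.

```python
# Sketch of one correlation rule: flag an event of interest (EOI) when a
# single source IP produces several failed logons followed by a successful
# logon within a short window. Thresholds and field names are illustrative.
from datetime import timedelta

FAILED, SUCCESS = 4625, 4624   # Windows logon event IDs
THRESHOLD = 3                  # failed attempts required to raise an EOI
WINDOW = timedelta(minutes=5)  # look-back window before the success

def correlate(events):
    """events: list of dicts sorted by time, each with 'time', 'src', 'event_id'."""
    eois = []
    for i, event in enumerate(events):
        if event["event_id"] != SUCCESS:
            continue
        window_start = event["time"] - WINDOW
        failures = [
            e for e in events[:i]
            if e["src"] == event["src"]
            and e["event_id"] == FAILED
            and e["time"] >= window_start
        ]
        if len(failures) >= THRESHOLD:
            eois.append({"src": event["src"],
                         "failures": len(failures),
                         "succeeded_at": event["time"]})
    return eois
```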
Finding the correct balance in correlation rules is often difficult. Correlation rules that try to catch all possible attacks generate too many alerts, many of which are false positives.
A SIEM facilitates and automates alert triage to notify analysts about immediate issues. Alerts can be sent via email but are most often sent to a dashboard. To help with the large volume of alerts and notifications that SIEM systems generate, these systems typically provide data visualization tools. From a business perspective, reporting and alerting provide verification of continuous monitoring, auditing, and compliance. Event deduplication improves confidence in aggregated data, data throughput, and storage capacity; it is also important because it supports auditing and the collection of forensic data. The centralized log management and storage in SIEM systems provide validation for regulatory compliance storage or retention requirements. Regarding forensic data and regulatory compliance, WORM (write once read many) drives keep log data protected so that evidence cannot be altered. Because WORM drives permanently protect the data written to them, this measure is especially valuable when an administrator with access to logs is under investigation or when an organization is subject to regulatory compliance requirements.
Some SIEM systems are good at ingesting and querying flow data both in real time and retrospectively. However, significant issues are associated with time, including time synchronization, time stamping, and report time lag. For example, if a report takes 45 minutes to run, the analyst is already that far behind real time, and then time is also needed to read and analyze the results.
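One common mitigation for the time stamping issue is to normalize every source's timestamps to UTC before correlation, as in the simplified sketch below. The source formats and offsets are illustrative, and accurate correlation still depends on the sources' clocks being synchronized (for example, via NTP).

```python
# Sketch: normalize timestamps from different log sources to UTC so that
# events can be correlated on a common timeline. Source formats and offsets
# are illustrative; clock accuracy still depends on NTP synchronization.
from datetime import datetime, timezone, timedelta

def to_utc(timestamp, fmt, utc_offset_hours=0):
    """Parse a local timestamp and convert it to timezone-aware UTC."""
    local = datetime.strptime(timestamp, fmt).replace(
        tzinfo=timezone(timedelta(hours=utc_offset_hours))
    )
    return local.astimezone(timezone.utc)

# A firewall logging in local time (UTC-5) and a server already logging in UTC
fw_event = to_utc("2021-02-25 08:15:42", "%Y-%m-%d %H:%M:%S", utc_offset_hours=-5)
srv_event = to_utc("2021-02-25 13:16:01", "%Y-%m-%d %H:%M:%S", utc_offset_hours=0)
print(fw_event.isoformat(), srv_event.isoformat())
print("delta:", srv_event - fw_event)  # the events are actually 19 seconds apart
```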
When designing a SIEM system, the volume of data generated for a single incident must be considered. SIEM systems must aggregate, correlate, and report output from devices such as firewalls, intrusion detection/prevention systems (IDSs/IPSs), access controls, and myriad network devices. How much data to log from critical systems is an important consideration when deciding to use a SIEM system.
SIEM systems have high acquisition and maintenance costs. If events number in the millions per day and are gathered from network devices, endpoints, servers, identity and access control systems, and application servers, a SIEM might be cost-effective. For smaller daily event volumes, free or more cost-effective tools should be considered.
SIEM systems continue to evolve to capture more and more use cases and to be combined with other solution sets. SIEM systems, for example, continue to help secure organizations against threats. Consider user behavior analysis, for example. A SIEM system can establish a baseline for user activity and identify anomalous behavior that deviates from that baseline. This often involves advanced techniques such as machine learning, and the SIEM system needs to be capable of comparing data across time horizons and across groups, such as the department the user works in. More recently, this data has been combined to perform sentiment analysis: Data can be tracked and analyzed to look for patterns that rely on human sentiment. In this way, systems are able to recognize threats before they become incidents. This type of analysis should leverage external data sources, including those from the public domain. As discussed in the next section, SIEM systems are now being combined with other functions to perform security assessments.
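The following highly simplified sketch shows the baselining idea: a user's activity today is compared against a historical mean, and a large deviation is flagged. Real user behavior analytics rely on far richer features, peer-group comparisons, and machine learning models; the counts here are hypothetical.

```python
# Simplified sketch of user behavior baselining: flag a day's activity that
# deviates strongly from a user's historical mean. The counts are hypothetical;
# real UEBA features and models are far richer.
from statistics import mean, stdev

baseline_daily_logins = [8, 10, 9, 11, 10, 9, 8, 10, 12, 9]  # last 10 workdays
today = 47

mu = mean(baseline_daily_logins)
sigma = stdev(baseline_daily_logins)
z_score = (today - mu) / sigma

if z_score > 3:
    print(f"Anomalous activity: {today} logins today vs. baseline ~{mu:.1f} (z={z_score:.1f})")
```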
Threat Hunting
Threat hunting is a proactive approach to finding an attacker before alerts are triggered. It is not reactive or detective. A reactive approach requires data such as the data a SIEM system provides; a detective approach relies on the use of various algorithms and rules. Threat hunting has the following key attributes:
▸ Hypothesis: Threat hunting starts with a hunch, often based on clues. Drivers may include analytics such as user behavior analytics, situational awareness (for example, based on internal risk assessment, trends, or high-value targets), and intelligence based on intelligence bulletins, intelligence feeds, or vulnerability scans.
▸ People: While many sources—such as those discussed in Chapter 5, “Threat Actors, Vectors, and Intelligence Sources,” and earlier in this chapter—are used, threat hunting is centered around the security analyst, who has deep expertise and knowledge of the organization’s environment.
▸ Assumptive: Threat hunting does not take a breach-preventive approach but rather assumes that the organization has already been breached.
▸ Iterative: Much like a penetration tester, a threat hunter must pivot frequently in order to continue lateral movement while seeking further evidence.
Throughout the process, a threat hunter is looking to disrupt the attacker during any phase of what’s known as the cyber kill chain, which is a framework developed to track the steps or phases that an attacker goes through as part of an intrusion. (We examine the cyber kill chain more closely in Chapter 27, “Incident Response.”) The threat hunting process combined with knowledge of the cyber kill chain allows a security analyst to quickly outmaneuver an attacker. The goal of the security team is to completely disrupt the attacker or quickly impede the attacker’s ability to move across the attack chain.
A threat hunter relies on a number of intelligence sources, such as a SIEM system and external sources. Recall that in Chapter 5, we discussed various open and closed sources of threat intelligence and research. All the gathered data may be intelligently pulled together using commercially available software and services. This bringing together of internal and external threat feeds is known as intelligence fusion, and it enables an organization to establish a more accurate threat profile. Internal and external sources are defined as follows:
▸ Internal threat data: Internal threat data consists of alert and event data from the SIEM system and any other raw log sources. It includes previous knowledge about prior attacks, including vulnerabilities exploited, previous indicators of compromise, details about the attacker, and packet captures. Baseline data on network traffic also makes it possible to understand what’s expected and aid in identifying anomalies.
▸ External threat data: External threat data consists of structured threat information such as STIX, as well as unstructured data from security advisories, bulletins, and other OSINT tools. External threat feeds from security organizations providing such data as a service can also be used as data sources. Attacks across organizations are often similar in their techniques. Chances are good that your organization isn’t the first to see an attacker and his or her methods, and external threat data can give you a warning about what is happening elsewhere.
Fusion analysis can aid in processing data and yielding more meaningful insights to provide a comprehensive look at the threats to an organization. This analysis can even compare internal telemetry data with external data to provide prioritized insight. A threat hunter with good threat data can more quickly identify indicators of compromise and indicators of attacks. Some intelligence platforms integrate with SIEM systems and can also provide capabilities to automate and orchestrate the actions required by security teams.
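At its simplest, intelligence fusion can be pictured as intersecting internal observables with indicators from an external feed, as in the sketch below. The indicator values are hypothetical, and real feeds are typically exchanged in structured formats such as STIX rather than a plain dictionary.

```python
# Sketch of intelligence fusion at its simplest: intersect observables from
# internal logs with indicators from an external feed. The indicator values
# are hypothetical; real feeds are typically delivered as STIX.
internal_destinations = {"198.51.100.23", "203.0.113.50", "192.0.2.77"}

external_indicators = {
    "203.0.113.50": {"source": "commercial feed", "confidence": "high"},
    "198.51.100.99": {"source": "OSINT bulletin", "confidence": "medium"},
}

matches = internal_destinations & external_indicators.keys()
for ip in matches:
    context = external_indicators[ip]
    print(f"Possible indicator of compromise: {ip} "
          f"({context['source']}, confidence {context['confidence']})")
```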
Security Orchestration, Automation, and Response (SOAR)
Security orchestration, automation, and response (SOAR) tools can aggregate intelligence from internal and external sources to provide fusion analysis and other insights. SOAR combines data and also provides for case management and automated workflow. Gartner, a leading technology research company, coined the term SOAR. According to Gartner, SOAR primarily does three things:
▸ Threat and vulnerability management
▸ Security incident response
▸ Security operations automation
You can see that, as a combined platform, a SOAR solution brings together security orchestration and automation (SOA), threat intelligence platforms (TIPs), and incident response platforms (IRPs). SOAR works with and augments SIEM. Gartner expects that in the future these capabilities will merge.
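As a rough illustration of the automation and response piece, the following sketch enriches a SIEM alert with threat intelligence and selects a response action. The functions, field names, and thresholds are illustrative; commercial SOAR platforms express this kind of logic as playbooks with connectors to endpoint, firewall, and ticketing systems.

```python
# Sketch of a SOAR-style playbook step: enrich a SIEM alert with threat
# intelligence and choose a response action. Functions and thresholds are
# illustrative, not a specific vendor's playbook format.
def enrich(alert, intel):
    """Attach any known intelligence about the source IP to the alert."""
    alert["intel"] = intel.get(alert["src_ip"], {})
    return alert

def respond(alert):
    """Decide an automated action based on severity and intel confidence."""
    if alert["severity"] >= 8 and alert["intel"].get("confidence") == "high":
        return f"isolate host {alert['host']} and open incident ticket"
    if alert["severity"] >= 5:
        return f"open investigation ticket for {alert['host']}"
    return "log only"

intel_db = {"203.0.113.50": {"confidence": "high", "family": "known C2"}}
alert = {"src_ip": "203.0.113.50", "host": "wkstn-042", "severity": 9}
print(respond(enrich(alert, intel_db)))  # isolate host wkstn-042 and open incident ticket
```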
What Next?
If you want more practice on this chapter’s exam objective before you move on, remember that you can access all of the Cram Quiz questions on the Pearson Test Prep software online. You can also create a custom exam by objective with the Online Practice Test. Note any objective you struggle with and go to that objective’s material in this chapter.