Securing the Enterprise Architecture by Implementing Data Security Techniques
Date: Sep 24, 2022
This sample chapter from CompTIA Advanced Security Practitioner (CASP+) CAS-004 Cert Guide covers CAS-004 Objective 1.4: Given a scenario, implement data security techniques for securing enterprise architecture.
This chapter covers the following topics:
■ Data Loss Prevention: This section covers blocking the use of external media, print blocking, Remote Desktop Protocol (RDP) blocking, clipboard privacy controls, restricted virtual desktop infrastructure (VDI) implementation, and data classification blocking.
■ Data Loss Detection: Topics covered include watermarking, digital rights management (DRM), network traffic decryption/deep packet inspection, and network traffic analysis.
■ Data Classification, Labeling, and Tagging: This section covers the handling of metadata and attributes.
■ Obfuscation: Topics covered include tokenization, scrubbing, and masking.
■ Anonymization: This section discusses the removal of personal information from data.
■ Encrypted vs. Unencrypted: This section discusses the impact of encryption.
■ Data Life Cycle: This section covers the stages of the data life cycle: creating, using, sharing, storing, archiving, and destroying.
■ Data Inventory and Mapping: This section discusses the importance of inventory and mapping.
■ Data Integrity Management: This section discusses issues with data changes and corruptions.
■ Data Storage, Backup, and Recovery: This section covers the use of redundant array of inexpensive disks (RAID).
Securing the enterprise architecture entails the use of many techniques and processes. In this chapter you’ll learn about data security techniques and how they can be used to help secure the overall architecture.
Data Loss Prevention
As you learned in Chapter 1, preventing the loss of critical and sensitive data requires the use of both policies and procedures that reflect best practices and software tools such as data loss prevention (DLP) software to prevent malicious as well as inadvertent data leaks. In this opening section of the chapter you’ll learn about other techniques to prevent data loss.
Blocking Use of External Media
One of the many ways malware and other problems can be introduced to a network (right around all your fancy firewalls and security devices) is through the peripheral devices that users bring in and connect to their computers. Moreover, sensitive data can also leave your network this way. To address this, you should implement controls over the types of peripherals users can bring and connect (if any). The following sections look at the biggest culprits.
The use of any type of USB device (thumb drives, external hard drives, network interfaces, and so on) should be strictly controlled—and in some cases prohibited altogether. Granular control of this issue is possible thanks to Windows Group Policy.
Some organizations choose to allow certain types of USB storage devices but require that the devices be encrypted before they can be used. It is also possible to allow some but not all users to use these devices, and it is even possible to combine digital rights management features with the policy to prohibit certain types of information from being copied to these devices.
For example, with Group Policy in Windows, you can use a number of policies to control the use of USB devices. Figure 4-1 shows a default domain policy to disallow the use of all removable storage. As you see, there are many other less drastic settings as well.
Figure 4-1 Controlling the Use of USB Devices
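If you need to script this control rather than set it in the Group Policy editor, the same policy can be written directly to the registry. The following is a minimal sketch (Python, Windows-only, run as Administrator); it assumes the commonly documented mapping of the “All Removable Storage classes: Deny all access” setting to the RemovableStorageDevices policy key, so verify the path for your Windows version before deploying.

```python
# Minimal sketch: enforce "deny all removable storage" via the registry
# value the Group Policy setting is commonly documented to map to.
# Assumption: key path below is correct for your Windows version.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\RemovableStorageDevices"

def deny_all_removable_storage() -> None:
    # Create (or open) the policy key under HKEY_LOCAL_MACHINE and set
    # Deny_All = 1, which denies access to all removable storage classes.
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
        winreg.SetValueEx(key, "Deny_All", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    deny_all_removable_storage()
    print("Removable storage denied (takes effect after policy refresh).")
```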
Print Blocking
As you learned in Chapter 1, blocking the printing of sensitive documents is entirely within the capabilities of DLP software. Print blocking can prevent someone from getting a copy of sensitive information off the printer and can prevent that information from being stored for any length of time in the memory of the print device, where it might be obtained by someone hacking into the printer.
Remote Desktop Protocol (RDP) Blocking
Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft that provides a graphical interface to connect to another computer over a network connection. Unlike Telnet and SSH, which allow only working from the command line, RDP enables you to work on a remote computer as if you were actually sitting at its console.
RDP sessions use native RDP encryption but do not authenticate the session host server. To mitigate this, you can use SSL/TLS for server authentication and to encrypt RDP session host server communications. This requires a certificate. You can use an existing certificate or the default self-signed certificate.
While RDP can be used for remote connections to a machine, it can also be used to connect users to a virtual desktop infrastructure (VDI). A VDI allows a user to connect from anywhere and work from a virtual desktop. Each user may have his or her own virtual machine (VM) image, or many users may use images based on the same VM.
The advantages and disadvantages of RDP are described in Table 4-1.
Table 4-1 Advantages and Disadvantages of RDP
| Advantages | Disadvantages |
|---|---|
| Data is kept in the data center, so disaster recovery is easier. | Server downtime can cause issues for many users. |
| Users can work from anywhere when using RDP in a VDI. | Network issues can cause problems for many users. |
| There is a potential reduction in the cost of business software when using an RDP model where all users are using the same base VM. | Insufficient processing power in the host system can cause bottlenecks. |
| | Implementing and supporting RDP requires solid knowledge. |
RDP can be blocked at the firewall and at the system level by blocking port 3389.
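At the system level, the block can be implemented as a host firewall rule. Here is a hedged sketch in Python that invokes the built-in Windows netsh tool (the rule name is illustrative); an equivalent TCP/3389 block rule would be added on a network firewall.

```python
# Sketch: add an inbound Windows Firewall rule blocking the default RDP
# port (TCP 3389) using the built-in netsh CLI. Run as Administrator.
import subprocess

def block_rdp() -> None:
    subprocess.run(
        [
            "netsh", "advfirewall", "firewall", "add", "rule",
            "name=Block RDP (TCP 3389)",   # illustrative rule name
            "dir=in", "action=block",
            "protocol=TCP", "localport=3389",
        ],
        check=True,  # raise if netsh reports a failure
    )

if __name__ == "__main__":
    block_rdp()
```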
Clipboard Privacy Controls
The clipboard function in desktops, laptops, and mobile devices is a convenient feature that stores information in memory until you paste it somewhere. But did you ever think of what happens after that? The information stays there until you copy over it! Moreover, in many systems, including Android, it has been found that any application can read that data without your permission.
While the Android issue has since been addressed, the larger point is that organizations should be aware of clipboard privacy issues and take whatever steps are required to mitigate them in the operating systems they use.
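One illustrative mitigation is a script or agent that clears the clipboard after a timeout so sensitive copied data does not linger. The following is a minimal Windows-only sketch using the Win32 clipboard API via ctypes; it is an example of clipboard hygiene, not a substitute for platform-level clipboard permissions.

```python
# Sketch: clear the Windows clipboard after a timeout using Win32 APIs.
import ctypes
import time

user32 = ctypes.windll.user32  # Windows-only

def clear_clipboard_after(seconds: int = 30) -> None:
    time.sleep(seconds)
    if user32.OpenClipboard(None):       # acquire the clipboard
        try:
            user32.EmptyClipboard()      # discard whatever was last copied
        finally:
            user32.CloseClipboard()

if __name__ == "__main__":
    clear_clipboard_after(30)
```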
Restricted Virtual Desktop Infrastructure (VDI) Implementation
Virtual desktop infrastructures (VDIs) host desktop operating systems within a virtual environment in a centralized server. Users access the desktops and run them from the server. There are three models for implementing VDI:
■ Centralized model: All desktop instances are stored in a single server, which requires significant processing power on the server.
■ Hosted model: Desktops are maintained by a service provider. This model eliminates capital cost and is instead subject to operational cost.
■ Remote virtual desktops model: An image is copied to the local machine, which means a constant network connection is unnecessary.
Figure 4-2 compares the remote virtual desktop model (also called streaming) with centralized VDI.
Figure 4-2 VDI Streaming and Centralized VDI
While a VDI environment can be beneficial, there are some steps that can be taken to restrict the infrastructure for security reasons:
■ Consider disallowing copy and paste functions.
■ Create an allow list (formerly known as a whitelist) or a block list (formerly known as a blacklist) to prevent users from accessing certain external sites or email providers.
■ Evaluate the primary image for unnecessary services.
■ Implement firewalls and antivirus software.
■ Require multifactor authentication.
Data Classification Blocking
Data should be classified based on its value to the organization and its sensitivity to disclosure. Assigning a value to data allows an organization to determine the resources that should be used to protect the data. Resources that are used to protect data include human resources, monetary resources, and access control resources. Classifying data as it relates to confidentiality, integrity, and availability (CIA) allows you to apply different protective measures.
After data is classified, the data can be segmented based on the level of protection it needs. Classification levels ensure that data is handled and protected in the most cost-effective manner possible. An organization should determine the classification levels it uses based on its own needs. A number of classification schemes are in common use in commercial business and in military and government settings.
The information life cycle should also be based on the classification of the data. Organizations are required to retain certain information, particularly financial data, based on local, state, and federal laws and regulations.
Once data classification has occurred, you can use the classifications to restrict access to data. In Chapter 5, you’ll learn about an access control system called mandatory access control (MAC) that uses such classification labels to block access to data.
Data Loss Detection
It’s bad enough when data leakages or data breaches occur, and it’s even worse when you don’t even know it’s occurring! It is astounding how long it takes some companies to know they’ve been breached! In this section you’ll learn about methods of detecting and preventing data loss.
Watermarking
Steganography occurs when a message is hidden inside another object, such as a picture or a document. In steganography, it is crucial that only those who are expecting the message know that the message exists.
Digital watermarking is a method used in steganography. It involves embedding a logo or trademark in documents, pictures, or other objects. The watermark deters people from using the materials in an unauthorized manner.
Digital Rights Management (DRM)
Hardware manufacturers, publishers, copyright holders, and individuals use digital rights management (DRM) to control the use of digital content. This often also involves device controls. First-generation DRM software controls copying. Second-generation DRM controls executing, viewing, copying, printing, and altering works or devices. The U.S. Digital Millennium Copyright Act (DMCA) of 1998 imposes criminal penalties on those who make available technologies whose primary purpose is to circumvent content protection technologies. DRM includes restrictive license agreements and encryption. DRM protects computer games and other software, documents, ebooks, films, music, and television.
In most enterprise implementations, the primary concern is the DRM control of documents by using open, edit, print, or copy access restrictions that are granted on a permanent or temporary basis. Solutions can be deployed that store the protected data in a central or decentralized model. Encryption is used in DRM to protect the data both at rest and in transit.
Network Traffic Decryption/Deep Packet Inspection
In Chapter 1 you learned about firewalls that can perform deep packet inspection. Deep packet inspection can be used to identify data types that should not be on the network as well as data types that should not be leaving the network.
When performing deep packet inspection on encrypted traffic, be aware that the capturing system must be configured with the decryption key and that the decryption will impact the performance of the system doing the capture.
Network Traffic Analysis
When network traffic is captured for analysis, we typically are most concerned with which systems are communicating with which other systems and what they are sending to one another. One of the best tools for organizing traffic into conversations or flows is NetFlow (you learned about NetFlow in Chapter 1).
Data Classification, Labeling, and Tagging
Earlier in this chapter you learned about the value of classifying data into sensitivity levels. In this section you’ll learn about how data is marked with its classification.
Metadata/Attributes
Data types are marked or labeled with their classification. This can be done physically with tags on storage devices containing data of various types and can also be done electronically so the DLP system can read this information and take the appropriate action, according to the DLP policy. Attributes (properties) of the data and its metadata (more details about the data) can also be used in this process.
XACML
Extensible Access Control Markup Language (XACML) is a standard for an access control policy language using Extensible Markup Language (XML). Its goal is to create an attribute-based access control system that decouples the access decision from the application or the local machine. It provides for fine-grained control of activities based on criteria including:
■ Attributes of the user requesting access (for example, all division managers in London)
■ The protocol over which the request is made (for example, HTTPS)
■ The authentication mechanism (for example, requester must be authenticated with a certificate)
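Real XACML policies are XML documents evaluated by a policy decision point (PDP) engine separate from the application. The sketch below is not XACML itself; it illustrates in Python, with hypothetical attribute names, the attribute-based decision model that XACML formalizes, combining the criteria above.

```python
# Illustrative attribute-based access control (ABAC) decision, mimicking
# the PDP model XACML formalizes. Attribute names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Request:
    attributes: dict = field(default_factory=dict)

def pdp_decide(request: Request) -> str:
    # Rule mirroring the examples above: division managers in London,
    # over HTTPS, authenticated with a certificate.
    a = request.attributes
    if (
        a.get("role") == "division-manager"
        and a.get("location") == "London"
        and a.get("protocol") == "https"
        and a.get("auth_mechanism") == "certificate"
    ):
        return "Permit"
    return "Deny"

print(pdp_decide(Request({
    "role": "division-manager", "location": "London",
    "protocol": "https", "auth_mechanism": "certificate",
})))  # Permit
```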
LDAP
LDAP attributes are used in Active Directory. Examples include the Distinguished Name (DN) and Relative Distinguished Name (RDN), Common Name (CN), Domain Component (DC), and Organizational Unit (OU) attributes.
Obfuscation
Obfuscation is the act of making something obscure, unclear, or unintelligible. When we use that term with respect to sensitive or private information, it refers to changing the information in some way to make it unreadable to unauthorized individuals. It’s not encryption, however. In this section you’ll learn about methods of obfuscation.
Tokenization
Tokenization substitutes a sensitive value in data with another value that is not sensitive. It is an emerging standard for mobile transactions, in which the cardholder’s primary account number is replaced with a numeric token that can be processed by all participants in the payment ecosystem while the sensitive credit or debit card information itself stays protected.
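The following minimal sketch illustrates the idea: the real primary account number lives only in a protected token vault, and a random token with no mathematical relationship to it circulates instead. In production the vault would be a hardened, access-controlled service, not an in-memory dictionary.

```python
# Toy token vault: substitute a PAN with a random numeric token.
import secrets

class TokenVault:
    def __init__(self) -> None:
        self._vault: dict[str, str] = {}  # token -> real PAN

    def tokenize(self, pan: str) -> str:
        # 16 random digits; no mathematical link to the original PAN.
        token = "".join(str(secrets.randbelow(10)) for _ in range(16))
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]  # authorized parties only

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t, "->", vault.detokenize(t))
```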
Scrubbing
Data scrubbing actually has two meanings:
■ Scrubbing is used to maintain data quality. It involves checking main memory and storage for errors and making corrections using redundant data in the form of different checksums or copies of data. By detecting and correcting errors quickly, scrubbing reduces the likelihood that correctable errors will accumulate and lead to uncorrectable errors.
■ Scrubbing also can refer to removing private data. This meaning relates to obfuscation.
Masking
Data masking means altering data from its original state to protect it. You already learned about two forms of masking: encryption and hashing. Encryption is storing the data in an encrypted form, and hashing is storing a hash value (generated from the data by a hashing algorithm) rather than the data itself. Many passwords are stored as hash values.
Other methods of data hiding include:
■ Using substitution tables and aliases for data
■ Redacting or replacing sensitive data with random values
■ Averaging or aggregating individual values
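To make two of these techniques concrete, here is a brief sketch of salted password hashing (store the digest, never the password) and simple redaction-style masking of a card number. Parameter choices such as the PBKDF2 iteration count are illustrative.

```python
# Sketch: two masking techniques -- salted hashing and redaction.
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Store only (salt, digest); the password itself is never persisted.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def redact_pan(pan: str) -> str:
    # Classic display masking: keep only the last four digits.
    return "*" * (len(pan) - 4) + pan[-4:]

salt, digest = hash_password("correct horse battery staple")
print(redact_pan("4111111111111111"))  # ************1111
```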
Anonymization
Data deidentification, or data anonymization, is the process of deleting or masking personal identifiers, such as personal names, from a set of data. It is often done when the data is being used in the aggregate, such as when medical data is used for research. Anonymization is a technical control used as one of the main approaches to data privacy protection.
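A minimal deidentification sketch follows: direct identifiers are stripped from records before aggregate use. The field names are hypothetical, and note that real anonymization must also account for quasi-identifiers (such as ZIP code plus birth date) that can re-identify individuals in combination.

```python
# Sketch: strip direct identifiers from a record before aggregate release.
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}  # illustrative list

def anonymize(record: dict) -> dict:
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {"name": "Ada Lovelace", "ssn": "078-05-1120",
           "diagnosis": "J45.20", "age_band": "30-39"}
print(anonymize(patient))  # {'diagnosis': 'J45.20', 'age_band': '30-39'}
```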
Encrypted vs. Unencrypted
While using obfuscation is appropriate for some data types, it is not sufficient for all types. When security is top of mind, data should be encrypted—both at rest and when it is in transit.
Data Life Cycle
As noted earlier in this chapter, the information life cycle should be based on the classification of the data, and organizations are required to retain certain information, particularly financial data, based on local, state, and federal laws and regulations. You will learn more about the data life cycle in Chapter 27. This section looks at the stages of the data life cycle.
Create
The first step in the data life cycle is the creation or acquisition of the data. While most data is generated by an organization, in some cases, an organization might purchase data, such as purchasing a marketing report from an industry organization or demographic data that helps sell products. The important issue during this step is the proper classification of the data so it can receive the appropriate protection.
Use
Once the data is available to users, those who require access to it need to use the data in the manner intended. At this step, the important issue is proper access control and review of accounts given access to ensure that permissions are being used appropriately.
Share
The sharing of data with others is a step fraught with danger. Uncontrolled sharing can cancel out all of an organization’s security safeguards. Granting the right to share the data should only be done when necessary, and this right should be held by as few individuals as possible.
Store
During the time that data is held by an organization, it must be stored somewhere. Security issues that are paramount at this step are ensuring that the prescribed encryption is in place, that the data is being successfully backed up, and that integrity is being ensured by frequently generating hash values of the data that can be used to identify data corruption if it occurs.
Archive or Destroy
All organizations need procedures in place for the retention and destruction of data. Data retention and destruction must follow all local, state, and federal regulations and laws. Documenting proper procedures ensures that information is maintained for the required time to prevent financial fines and possible incarceration of high-level organizational officers. These procedures must include both the retention period, including longer retention periods for legal holds, and the destruction process.
Data Inventory and Mapping
Data inventory and mapping is a process, typically carried out using software tools, of enumerating all of an organization’s data, regardless of where it is stored or which department uses it, and of consolidating data from multiple databases. Because it identifies privacy information, it is also a stringent requirement of modern privacy legislation such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
Data Integrity Management
When data has been altered by an unauthorized process or individual, we say that it lacks integrity. To maintain integrity, access control is certainly important, but the best assurance that integrity has been maintained is to generate message digests of the relevant data by using hashing algorithms. The values can be used at a later time to verify that the data remains unchanged from the time the message digest was generated.
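As a concrete illustration, the following sketch records a SHA-256 digest of a file at a known-good baseline and recomputes it later to detect changes; the file name is hypothetical.

```python
# Sketch: baseline a file's SHA-256 digest, then verify integrity later.
import hashlib

def file_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

baseline = file_digest("payroll.csv")   # stored securely at baseline time
# ... later ...
if file_digest("payroll.csv") != baseline:
    print("Integrity check FAILED: data has changed since baseline")
```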
Data Storage, Backup, and Recovery
While protecting data on a device is always a good idea, in many cases an organization must comply with an external standard regarding the minimum protection provided to the data on the storage device. For example, the Payment Card Industry Data Security Standard (PCI DSS) enumerates requirements that payment card industry players should meet to secure and monitor their networks, protect cardholder data, manage vulnerabilities, implement strong access controls, and maintain security policies.
The operations team also must determine which data is backed up, how often the data is backed up, and the method of backup used. An organization must determine how data is stored, including data in use and data that is backed up. While data owners are responsible for determining data access rules, data life cycle, and data usage, they must also ensure that data is backed up and stored in alternate locations to ensure that it can be restored.
Let’s look at an example. Suppose that an organization’s security administrator has received a subpoena for the release of all the email received and sent by the company’s chief executive officer (CEO) for the past three years. If the security administrator is only able to find one year’s worth of email records on the server, he should check the organization’s backup logs and archives before responding to the request. Failure to produce all the requested data could possibly have legal implications. The security administrator should restore the CEO’s email from an email server backup and provide whatever is available for up to the past three years from the subpoena date. Keep in mind, however, that the organization should provide all the data that it has regarding the CEO’s emails. If the security administrator is able to recover the past five years’ worth of the CEO’s email, the security administrator should notify the appropriate authorities and give them access to all five years’ data.
As a rule of thumb, in a subpoena situation, you should always provide all the available data, regardless of whether it exceeds the requested amount or any internal data retention policies. For example, if users are not to exceed 500 MB of storage but you find that a user has more than 3 GB of data, you should provide all that data in response to any legal requests. Otherwise, you and the organization could be held responsible for withholding evidence.
To design an appropriate data recovery solution, security professionals must understand the different types of data backups that can occur and how these backups are used together to restore the live environments.
Security professionals must understand the following data backup types and schemes:
■ Full backup
■ Differential backup
■ Incremental backup
■ Copy backup
■ Daily backup
■ Transaction log backup
■ First-in, first-out rotation scheme
■ Grandfather/father/son rotation scheme
The three main data backup types are full backups, differential backups, and incremental backups. To understand these three data backup types, you must understand the concept of archive bits. When a file is created or updated, the archive bit for the file is enabled. If the archive bit is cleared, the file will not be archived during the next backup. If the archive bit is enabled, the file will be archived during the next backup.
With a full backup, all data is backed up. During the full backup process, the archive bit for each file is cleared. A full backup takes the longest time and the most space to complete. However, if an organization uses only full backups, then only the latest full backup needs to be restored. Any backup that uses a differential or incremental backup will first start with a full backup as its baseline. A full backup is the most appropriate for offsite archiving.
In a differential backup, all files that have been changed since the last full backup will be backed up. During the differential backup process, the archive bit for each file is not cleared. A differential backup might vary from taking a short time and a small amount of space to growing in both the backup time and amount of space needed over time. Each differential backup will back up all the files in the previous differential backup if a full backup has not occurred since that time. In an organization that uses a full/differential scheme, the full backup and only the most recent differential backup must be restored, meaning only two backups are needed.
An incremental backup backs up all files that have been changed since the last full or incremental backup. During the incremental backup process, the archive bit for each file is cleared. An incremental backup usually takes the least amount of time and space to complete. In an organization that uses a full/incremental scheme, the full backup and each subsequent incremental backup must be restored. The incremental backups must be restored in order. If your organization completes a full backup on Sunday and an incremental backup daily Monday through Saturday, up to seven backups could be needed to restore the data. Table 4-2 provides a comparison of the three main backup types.
Table 4-2 Backup Types Comparison
| Type | Data Backed Up | Backup Time | Restore Time | Storage Space |
|---|---|---|---|---|
| Full backup | All data | Slowest | Fast | High |
| Incremental backup | Only new/modified files/folders since the last full or incremental backup | Fast | Moderate | Lowest |
| Differential backup | All data since the last full backup | Moderate | Fast | Moderate |
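The interplay of the archive bit with the three backup types can be illustrated with a toy simulation (file names hypothetical):

```python
# Toy simulation of archive-bit-driven backups. The bit is set when a
# file is created/modified; full and incremental backups clear it,
# differential backups do not.
files = {"a.doc": True, "b.xls": True, "c.txt": True}  # name -> archive bit

def full_backup(files):
    backed_up = list(files)        # a full backup copies everything...
    for name in files:
        files[name] = False        # ...and clears every archive bit
    return backed_up

def incremental_backup(files):
    backed_up = [n for n, bit in files.items() if bit]
    for name in backed_up:
        files[name] = False        # clears the bits on what it copied
    return backed_up

def differential_backup(files):
    # Copies everything changed since the last full backup; bits untouched.
    return [n for n, bit in files.items() if bit]

full_backup(files)                    # ['a.doc', 'b.xls', 'c.txt']
files["b.xls"] = True                 # b.xls modified after the full backup
print(differential_backup(files))     # ['b.xls'] -- and again tomorrow
print(incremental_backup(files))      # ['b.xls'] -- bit now cleared
```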
Copy and daily backups are two special backup types that are not considered part of any regularly scheduled backup scheme because they do not require any other backup type for restoration. Copy backups are similar to full (normal) backups but do not reset the file’s archive bit. Daily backups use a file’s timestamp to determine whether it needs to be archived. Daily backups are popular in mission-critical environments where multiple daily backups are required because files are updated constantly.
Transaction log backups are used only in environments where it is important to capture all transactions that have occurred since the last backup. Transaction log backups help organizations recover to a particular point in time and are most commonly used in database environments.
Although magnetic tape drives are still in use for backing up data, many organizations today back up their data to optical discs (including CD-ROMs, DVDs, and Blu-ray discs), high-capacity high-speed magnetic drives, solid-state drives, or other media. No matter the media used, retaining backups both onsite and offsite is important. Store onsite backup copies in a waterproof, heat-resistant, fire-resistant safe or vault.
As part of any backup plan, an organization should also consider the backup rotation scheme that it will use. Cost considerations and storage considerations often dictate that backup media be reused after a period of time. If this reuse is not planned in advance, media can become unreliable due to overuse. Two of the most popular backup rotation schemes are first-in, first-out and grandfather/father/son:
■ First-in, first-out (FIFO): In this scheme, the newest backup is saved to the oldest media. Although this is the simplest rotation scheme, it does not protect against data errors. If an error exists in the data, the organization might not have a version of the data that does not contain the error.
■ Grandfather/father/son (GFS): In this scheme, three sets of backups are defined. Most often these three definitions are daily, weekly, and monthly. The daily backups are the sons, the weekly backups are the fathers, and the monthly backups are the grandfathers. Each week, one son advances to the father set. Each month, one father advances to the grandfather set. Figure 4-3 displays a typical five-day GFS rotation using 21 tapes. The daily tapes are usually differential or incremental backups. The weekly and monthly tapes must be full backups.
Figure 4-3 Grandfather/Father/Son Backup Rotation Scheme
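The labeling logic of a five-day GFS rotation can be sketched as follows; the exact schedule here (Monday–Thursday sons, Friday fathers, last-Friday grandfathers) is one common arrangement, and details vary between organizations.

```python
# Sketch: label backup media under a five-day GFS rotation.
import calendar
from datetime import date

def gfs_label(d: date) -> str:
    if d.weekday() < 4:                       # Mon-Thu: daily (son) tapes
        return f"son-{calendar.day_abbr[d.weekday()]}"
    if d.weekday() == 4:                      # Friday: weekly or monthly full
        last_day = calendar.monthrange(d.year, d.month)[1]
        if last_day - d.day < 7:              # last Friday of the month
            return f"grandfather-{d:%Y-%m}"
        return f"father-week{(d.day - 1) // 7 + 1}"
    return "no-backup"                        # weekend in this sketch

print(gfs_label(date(2022, 9, 20)))  # son-Tue
print(gfs_label(date(2022, 9, 30)))  # grandfather-2022-09
```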
Electronic backup solutions back up data more quickly and accurately than normal data backups and are best implemented when information changes often. You should be familiar with the following electronic backup terms and solutions:
■ Electronic vaulting: This method involves copying files as modifications occur in real time.
■ Remote journaling: This method involves copying the journal or transaction log offsite on a regular schedule, in batches.
■ Tape vaulting: This method involves creating backups over a direct communication line on a backup system at an offsite facility.
■ Hierarchical storage management (HSM): This method involves storing frequently accessed data on faster media and less frequently accessed data on slower media.
■ Optical jukebox: This method involves storing data on optical discs and uses robotics to load and unload the optical discs as needed. This method is ideal when 24/7 availability is required.
■ Replication: This method involves copying data from one storage location to another. Synchronous replication uses constant data updates to ensure that the locations are close to the same, whereas asynchronous replication delays updates to a predefined schedule.
■ Cloud backup: Another method growing in popularity is to back up data to a cloud location.
Redundant Array of Inexpensive Disks (RAID)
RAID is a hard drive technology in which data is written across multiple disks in such a way that a disk can fail and the data can be made available quickly from the remaining disks in the array, without resorting to a backup tape. The most common types of RAID are:
■ RAID 0: Also called disk striping, this method writes the data across multiple drives. While it improves performance, it does not provide fault tolerance. RAID 0 is depicted in Figure 4-4.
Figure 4-4 RAID 0
■ RAID 1: Also called disk mirroring, RAID 1 uses two disks and writes a copy of the data to both disks, providing fault tolerance in the event of a single drive failure. RAID 1 is depicted in Figure 4-5.
Figure 4-5 RAID 1
■ RAID 3: This method, which requires at least three drives, writes the data across all drives, as with striping, and then writes parity information to a single dedicated drive. The parity information is used to regenerate the data in the event of a single drive failure. The downfall of this method is that the parity drive is a single point of failure. RAID 3 is depicted in Figure 4-6.
Figure 4-6 RAID 3
■ RAID 5: This method, which requires at least three drives, writes the data across all drives, as with striping, and then writes parity information across all drives as well. The parity information is used in the same way as in RAID 3, but it is not stored on a single drive, so there is no single point of failure for the parity data. With hardware RAID 5, the spare drives that replace the failed drives are usually hot swappable, meaning they can be replaced on the server while it is running. RAID 5 is depicted in Figure 4-7.
Figure 4-7 RAID 5
■ RAID 7: While not a standard but a proprietary implementation, this system incorporates the same principles as RAID 5 but enables the drive array to continue to operate if any disk or any path to any disk fails. The multiple disks in the array operate as a single virtual disk.
■ RAID 10: This method combines RAID 1 and RAID 0 and requires a minimum of four disks, though implementations often use more. A RAID 10 deployment contains a striped disk set that is mirrored on a separate striped disk set. Figure 4-8 depicts RAID 10.
Figure 4-8 RAID 10
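The parity mechanism behind RAID 3 and RAID 5 is XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the parity and the surviving blocks. A toy demonstration:

```python
# Toy demo of XOR parity as used in RAID 3/5 (byte strings stand in for
# disk blocks).
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

block1 = b"AAAA"
block2 = b"BBBB"
parity = xor_blocks(block1, block2)      # written to the parity drive

# Drive holding block2 fails; rebuild it from block1 and the parity.
rebuilt = xor_blocks(block1, parity)
assert rebuilt == block2
print("Reconstructed:", rebuilt)         # b'BBBB'
```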
RAID can be implemented with software or with hardware, and certain types of RAID are faster when implemented with hardware. Both RAID 3 and RAID 5 are examples of RAID types that are faster when implemented with hardware. Simple striping and mirroring (RAID 0 and 1), however, tend to perform well in software because they do not require the parity calculations that benefit from dedicated hardware. When software RAID is used, it is a function of the operating system. Table 4-3 summarizes the RAID types.
Table 4-3 RAID Types
| RAID Level | Minimum Number of Drives | Description | Strengths | Weaknesses |
|---|---|---|---|---|
| RAID 0 | 2 | Data striping without redundancy | Highest performance | No data protection; if one drive fails, all data is lost |
| RAID 1 | 2 | Disk mirroring | Very high performance; very high data protection; very minimal penalty on write performance | High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required |
| RAID 3 | 3 | Byte-level data striping with a dedicated parity drive | Excellent performance for large, sequential data requests | Not well suited for transaction-oriented network applications; the single parity drive does not support multiple, simultaneous read and write requests |
| RAID 5 | 3 | Block-level data striping with distributed parity | Best cost/performance for transaction-oriented networks; very high performance and very high data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests | Write performance is slower than with RAID 0 or RAID 1 |
| RAID 10 | 4 | Disk striping with mirroring | High data protection, which increases each time you add a new striped/mirror set | High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required |
Exam Preparation Tasks
As mentioned in the Introduction, you have a couple of choices for exam preparation: the exercises here and the practice exams in the Pearson IT Certification test engine.
Review All Key Topics
Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 4-4 lists these key topics and the page number on which each is found.
Table 4-4 Key Topics for Chapter 4
| Key Topic Element | Description | Page Number |
|---|---|---|
| Figure 4-1 | Controlling the Use of USB Devices | 126 |
| Table 4-1 | Advantages and Disadvantages of RDP | 127 |
| List | VDI models | 128 |
| Figure 4-2 | VDI Streaming and Centralized VDI | 128 |
| List | VDI attributes | 131 |
| List | Data masking methods | 132 |
| Section | Data Life Cycle | 132 |
| Paragraph | Backup types | 135 |
| Table 4-2 | Backup Types Comparison | 136 |
| Figure 4-3 | Grandfather/Father/Son Backup Rotation Scheme | 137 |
| List | Electronic backup terms and solutions | 138 |
| Figure 4-4 | RAID 0 | 139 |
| Figure 4-5 | RAID 1 | 139 |
| Figure 4-6 | RAID 3 | 140 |
| Figure 4-7 | RAID 5 | 140 |
| Figure 4-8 | RAID 10 | 141 |
| Table 4-3 | RAID Types | 141 |
Define Key Terms
Define the following key terms from this chapter and check your answers in the glossary:
Remote Desktop Protocol (RDP)
virtual desktop infrastructure (VDI)
digital watermarking
digital rights management (DRM)
deep packet inspection
Extensible Access Control Markup Language (XACML)
obfuscation
tokenization
scrubbing
data masking
data anonymization
data inventory and mapping
Payment Card Industry Data Security Standard (PCI DSS)
full backup
differential backup
incremental backup
first-in, first-out (FIFO)
grandfather/father/son (GFS)
electronic vaulting
remote journaling
tape vaulting
hierarchical storage management (HSM)
optical jukebox
replication
cloud backup
RAID
RAID 0
RAID 1
RAID 3
RAID 5
RAID 7
RAID 10
Review Questions
1. Which of the following forms of RAID places the parity information on a single drive?
a. RAID 0
b. RAID 1
c. RAID 3
d. RAID 5
2. Which of the following techniques or tools is used to deploy print blocking?
a. DLP
b. RAID
c. RDP
d. VDI
3. Which of the following is not a characteristic of RDP?
a. Server downtime can cause issues for many users.
b. Data is not kept in the data center, so disaster recovery is easier.
c. Network issues can cause problems for many users.
d. Insufficient processing power in the host system can cause bottlenecks.
4. In which of the following rotation schemes are three sets of backups defined?
a. FIFO
b. RAID
c. GFS
d. STP
5. In which VDI model are desktops maintained by a service provider?
a. Centralized model
b. Hosted model
c. Remote virtual desktops model
d. Streaming model
6. Which backup model is the fastest to back up but the slowest to restore?
a. Full
b. Copy
c. Differential
d. Incremental
7. Which backup type is used to capture all transactions that have occurred since the last backup?
a. Transaction log backup
b. Incremental backup
c. Full backup
d. Copy backup
8. Which backup method involves copying files as modifications occur in real time?
a. Electronic vaulting
b. Optical jukebox
c. Remote journaling
d. Tape vaulting
9. Which of the following enumerates requirements that payment card industry players should meet to secure and monitor their networks, protect cardholder data, manage vulnerabilities, implement strong access controls, and maintain security policies?
a. GLBA
b. PCI DSS
c. COPPA
d. SOX
10. Which RAID method can potentially survive two drive failures?
a. RAID 1
b. RAID 3
c. RAID 5
d. RAID 10