Asset Security
By Robin Abernathy and Darren R. Hayes
Date: Feb 13, 2023
Assets are any entities that are valuable to an organization and include tangible and intangible assets. In this sample chapter from CISSP Cert Guide, 4th Edition, learn how to identify and classify information and assets, review information and asset handling requirements, explore the data life cycle, and more.
This chapter covers the following topics:
Asset Security Concepts: Concepts discussed include asset and data policies, data quality, and data documentation and organization.
Identify and Classify Information and Assets: Classification topics discussed include data and asset classification, sensitivity and criticality, private sector classifications, and military and government classifications.
Information and Asset Handling Requirements: Topics include marking, labeling, storing, and destruction.
Provision Resources Securely: Topics include how to determine and document information, asset ownership, asset inventory, and asset management.
Data Life Cycle: Components include the data life cycle, databases, data audit, data roles, data collection, data location, data maintenance, data retention, data remanence, collection limitation, and data destruction.
Asset Retention: Retention concepts discussed include media, hardware, and personnel retention and asset retention terms.
Data Security Controls: Topics include data security, data states, data access and sharing, data storage and archiving, baselines, scoping and tailoring, standards selections, and data protection methods.
Assets are any entities that are valuable to an organization and include tangible and intangible assets. As mentioned in Chapter 1, “Security and Risk Management,” tangible assets include computers, facilities, supplies, and personnel. Intangible assets include intellectual property, data, and organizational reputation. All assets in an organization must be protected to ensure the organization’s future success. Although securing some assets is as easy as locking them in a safe, other assets require more advanced security measures. The most valuable asset of any organization is its data.
The Asset Security domain addresses a broad array of topics, including information and asset identification and classification, information and asset handling, information and asset ownership, asset inventory and asset management, data life cycle, asset retention, and data security controls and compliance requirements. This domain carries an average weight of 10 percent of the exam, the lowest of all the domains.
A security professional must be concerned with all aspects of asset security. The most important factor in determining the controls used to ensure asset security is an asset’s value. Although some assets in the organization may be considered more important because they have greater value, you should ensure that no assets are forgotten. This chapter covers all the aspects of asset security that you, as an IT security professional, must understand.
Foundation Topics
Asset Security Concepts
Asset security concepts that you must understand include
Asset and data policies
Data quality
Data documentation and organization
Asset and Data Policies
As a security professional, you should ensure that your organization implements, at a minimum, a data policy that defines long-term goals for data management and asset policies that define long-term goals for each asset type. In some cases, each asset may need its own defined policy to ensure that it is properly administered. Each business unit should define asset and data policies for the assets and data it owns, and these policies should be based on the organization's overall asset and data policies. Individual roles and responsibilities should be defined to ensure that personnel understand their job tasks as related to these policies.
After the overall policies are created, asset and data management practices and procedures should be documented to ensure that the day-to-day tasks related to assets and data are completed. In addition, the appropriate quality assurance and quality control procedures must be put into place for data quality to be ensured. Storage and backup procedures must be defined to ensure that assets and data can be restored.
As part of a data policy, any databases implemented within an organization should be carefully designed based on user requirements and the type of data to be stored. All databases should comply with the data policies that are approved, created, and implemented. Data policies should be strictly enforced.
Prior to establishing a data policy, you should consider several issues that can affect it. These issues include cost, liability, legal and regulatory requirements, privacy, sensitivity, and ownership.
The cost of any data management mechanism is usually the primary consideration of any organization. Often organizations do not implement a data policy because they think it is easier to allow data to be stored in whatever way each business unit or user desires. However, if an organization does not adopt formal data policies and procedures, data security issues can arise because of the different storage methods used. For example, suppose an organization's research department decides to implement a Microsoft SQL Server database to store all research data, but the organization does not have a data policy. If the database is implemented without a thorough understanding of the types of data that will be stored and the users' needs, the research department may end up with a database that is difficult to navigate and manage. In addition, the proper access control mechanisms may not be in place, resulting in users who should have only view access being able to edit the data.
Liability involves protecting the organization from legal issues. Liability is directly affected by legal and regulatory requirements that apply to the organization. Issues that can affect liability include asset or data misuse, data inaccuracy, data corruption, data breach, and data loss or a data leak.
Data privacy is determined as part of data analysis. Data classifications must be determined based on the value of the data to the organization. After the data classifications are determined, data controls should be implemented to ensure that the appropriate security controls are implemented based on data classifications. Privacy laws and regulations must also be considered.
Sensitive data is any data that could adversely affect an organization or individual if it were released to the public or obtained by attackers. When determining sensitivity, you should understand the types of threats that can occur, the vulnerability of the data, and the data type. For example, Social Security numbers are more sensitive than physical address data.
Data ownership is the final issue that you must consider as part of data policy design. This issue is particularly important if multiple organizations store their data within the same asset or database. One organization may want completely different security controls in place to protect its data. Understanding legal ownership of data is important to ensure that you design a data policy that takes into consideration the different requirements of multiple data owners. While this is most commonly a consideration when multiple organizations are involved, it can also be an issue with different business units in the same organization. For example, data from the human resources department has different owners and therefore different requirements than research department data.
Data Quality
Data quality is defined as data's fitness for use. Data quality is driven by the integrity leg of the security triad and must be maintained throughout the data life cycle, including during data capture, modification, storage, distribution, usage, and archiving. These phases correspond to the three data states: data in use, data at rest, and data in transit. Security professionals must ensure that their organization adopts the appropriate quality control and quality assurance measures so that data quality does not suffer. Data quality is most often safeguarded by ensuring data integrity, which protects data from unintentional, unauthorized, or accidental changes. With data integrity, data is known to be good, and information can be trusted as being complete, consistent, and accurate. System integrity ensures that a system will work as intended.
Security professionals should work to document data standards, processes, and procedures to monitor and control data quality. In addition, internal processes should be designed to periodically assess data quality. When data is stored in databases, quality control and assurance are easier to ensure using the internal data controls in the database. For example, you can configure a field to accept only valid numbers, ensuring that only numbers can be entered into that field. This is an example of input validation. Input validation can occur on both the client side (using regular expressions) and the server side (using code or controls in the database) to help prevent attacks such as SQL injection.
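To make both layers concrete, here is a minimal Python sketch (not from the chapter; the table, field, and ten-digit rule are invented for illustration) that pairs a regular-expression check with a parameterized query, so a crafted value is rejected up front and, even if it reached the database, would be bound as data rather than executed as SQL:

```python
import re
import sqlite3

# Illustrative in-memory database with a numbers-only field.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, phone TEXT)")

def store_phone(customer_id: int, phone: str) -> None:
    # Input validation: accept only ten digits (hypothetical rule).
    if not re.fullmatch(r"\d{10}", phone):
        raise ValueError("phone must be exactly 10 digits")
    # Parameterized query: the driver binds the value, so input such as
    # "555'; DROP TABLE customers; --" is stored as text, not executed.
    conn.execute(
        "INSERT INTO customers (id, phone) VALUES (?, ?)",
        (customer_id, phone),
    )

store_phone(1, "5551234567")                   # accepted
# store_phone(2, "555'; DROP TABLE ...")       # rejected by the regex check
```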
Data contamination occurs when errors are introduced into data. Data can become corrupt because of network or hash corruption, a lack of integrity policies, transmission errors, or weak encryption algorithms. Data errors can be reduced through implementation of the appropriate quality control and assurance mechanisms. Data verification, an important part of the process, evaluates how complete and correct the data is and whether it complies with standards. Data verification can be carried out by the personnel who are responsible for entering the data. Data validation occurs after data verification and tests data to ensure that data quality standards have been met. Data validation should be carried out by the personnel who are most familiar with the data.
Organizations should develop procedures and processes that keep two key data issues in the forefront: error prevention and correction. Error prevention is provided at data entry, whereas error correction usually occurs during data verification and validation.
Data Documentation and Organization
Data documentation ensures that data is understood at its most basic level and can be properly organized into data sets. Data sets ensure that data is arranged and stored in a relational way so that the data can be used for multiple purposes. Data sets should be given unique, descriptive names that indicate their contents.
By documenting the data and organizing data sets, organizations can also ensure that duplicate data is not retained in multiple locations. For example, the sales department may capture all demographic information for all customers. However, the shipping department may also need access to this same demographic information to ensure that products are shipped to the correct address. In addition, the accounts receivable department will need access to customer demographic information for billing purposes. There is no need for each business unit to have separate data sets for this information. Identifying the customer demographic data set as being needed by multiple business units prevents duplication of efforts across business units.
Within each data set, documentation must be created for each type of data. In the customer demographic data set example, customer name, address, and phone number are all collected. For each of the data types, the individual parameters for each data type must be created. Whereas an address may allow a mixture of numerals and characters, a phone number should allow only numerals. In addition, each data type may have a maximum length. Finally, it is important to document which data is required—meaning that it must be collected and entered. For example, an organization may decide that fax numbers are not required but phone numbers are required. Remember that each of these decisions is best made by the personnel working most closely with the data.
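A hedged sketch of what such documentation might look like in machine-readable form follows; the field names, lengths, and patterns are hypothetical stand-ins for decisions the personnel closest to the data would make (for example, phone required and numerals only, fax optional):

```python
import re
from dataclasses import dataclass

@dataclass
class FieldRule:
    required: bool   # must the value be collected and entered?
    max_length: int  # documented maximum length
    pattern: str     # documented allowable characters

# Hypothetical documentation of the customer demographic data set.
CUSTOMER_FIELDS = {
    "name":    FieldRule(required=True,  max_length=60, pattern=r"[A-Za-z .'-]+"),
    "address": FieldRule(required=True,  max_length=95, pattern=r"[\w .,#-]+"),
    "phone":   FieldRule(required=True,  max_length=10, pattern=r"\d+"),
    "fax":     FieldRule(required=False, max_length=10, pattern=r"\d+"),
}

def validate_record(record: dict) -> list:
    """Return any violations of the documented field rules."""
    errors = []
    for field, rule in CUSTOMER_FIELDS.items():
        value = record.get(field, "")
        if not value:
            if rule.required:
                errors.append(f"{field}: required but missing")
            continue
        if len(value) > rule.max_length:
            errors.append(f"{field}: longer than {rule.max_length} characters")
        if not re.fullmatch(rule.pattern, value):
            errors.append(f"{field}: contains disallowed characters")
    return errors

# Fax is documented as optional, so omitting it produces no violation.
print(validate_record({"name": "A. Lovelace", "address": "12 Elm St.",
                       "phone": "5551234567"}))  # []
```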
After all the documentation has occurred, the data organization must be mapped out. This organization will include all interrelationships between the data sets. It should also include information on which business units will need access to data sets or subsets of a data set.
Identify and Classify Information and Assets
Security professionals should ensure that the organizations they work for properly identify and classify all organizational information and assets. The first step in this process is to identify all information and assets the organization owns and uses. To perform information and asset identification, security professionals should work with the representatives from each department or functional area. After the information and assets are identified, security professionals should perform data and asset classification and document sensitivity and criticality of data.
Security professionals must understand private sector classifications, military and government classifications, the information life cycle, databases, and data audit.
Data and Asset Classification
Data and assets should be classified based on their value to the organization and their sensitivity to disclosure. Assigning a value to data and assets allows an organization to determine the resources that should be used to protect them. Resources that are used to protect data include personnel resources, monetary resources, access control resources, and so on. Classifying data and assets allows you to apply different protective measures. Data classification is critical to all systems to protect the confidentiality, integrity, and availability (CIA) of data.
After data is classified, it can be segmented based on the level of protection it needs. The classification levels ensure that data is handled and protected in the most cost-effective manner possible. The assets could then be configured to ensure that data is isolated or protected based on these classification levels. An organization should determine the classification levels it uses based on the needs of the organization. A number of private sector classifications and military and government information classifications are commonly used.
The information life cycle, covered in more detail later in this chapter, should also be based on the classification of the data. Organizations are required to retain certain information, particularly financial data, based on local, state, or federal laws and regulations.
Sensitivity and Criticality
Data sensitivity is a measure of how freely data can be handled. Some data requires special care and handling, especially when inappropriate handling could result in penalties, identity theft, financial loss, invasion of privacy, or unauthorized access by an individual or many individuals. Some data is also subject to regulation by state or federal laws and requires notification in the event of a disclosure.
Data is assigned a level of sensitivity based on who should have access to it and how much harm would be done if it were disclosed. This assignment of sensitivity is called data classification.
Data criticality is a measure of the importance of the data. Data that is considered sensitive may not necessarily be considered critical. Assigning a level of criticality to a particular data set requires considering the answers to a few questions:
Will you be able to recover the data in case of disaster?
How long will it take to recover the data?
What is the effect of this downtime, including loss of public standing?
Data is considered essential when it is critical to the organization’s business. When essential data is not available, even for a brief period of time, or when its integrity is questionable, the organization is unable to function. Data is considered required when it is important to the organization but organizational operations would continue for a predetermined period of time even if the data were not available. Data is nonessential if the organization is able to operate without it during extended periods of time.
When the sensitivity and criticality of data are understood and documented, the organization should then work to create a data classification system. Most organizations either use a private sector classification system or a military and government classification system.
PII
Personally identifiable information (PII) was defined and explained in Chapter 1. PII is considered information that should be classified and protected. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-122 gives guidelines on protecting the confidentiality of PII.
According to SP 800-122, organizations should implement the following recommendations to effectively protect PII:
Organizations should identify all PII residing in their environment.
Organizations should minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission.
Organizations should categorize their PII by the PII confidentiality impact level.
Organizations should apply the appropriate safeguards for PII based on the PII confidentiality impact level.
Organizations should develop an incident response plan to handle breaches involving PII.
Organizations should encourage close coordination among their chief privacy officers, senior agency officials for privacy, chief information officers, chief information security officers, and legal counsel when addressing issues related to PII.
SP 800-122 defines PII as “any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, Social Security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.” To distinguish an individual is to identify an individual. To trace an individual is to process sufficient information to make a determination about a specific aspect of an individual’s activities or status. Linked information is information about or related to an individual that is logically associated with other information about the individual. In contrast, linkable information is information about or related to an individual for which there is a possibility of logical association with other information about the individual.
All PII should be assigned confidentiality impact levels based on the FIPS 199 designations. Those designations are
LOW if the loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.
MODERATE if the loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.
HIGH if the loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.
Determining the impact from a loss of confidentiality of PII should take into account relevant factors. Several important factors that organizations should consider are as follows:
Identifiability: How easily PII can be used to identify specific individuals
Quantity of PII: How many individuals are identified in the information
Data field sensitivity: The sensitivity of each individual PII data field, as well as the sensitivity of the PII data fields together
Context of use: The purpose for which PII is collected, stored, used, processed, disclosed, or disseminated
Obligation to protect confidentiality: The laws, regulations, standards, and operating practices that dictate an organization’s responsibility for protecting PII
Access to and location of PII: The nature of authorized access to PII
PII should be protected through a combination of measures, including operational safeguards, privacy-specific safeguards, and security controls. Operational safeguards should include policy and procedure creation and awareness, training, and education programs. Privacy-specific safeguards help organizations collect, maintain, use, and disseminate data in ways that protect the confidentiality of the data and include minimizing the use, collection, and retention of PII; conducting privacy impact assessments; de-identifying information; and anonymizing information. Security controls include separation of duties, least privilege, auditing, identification and authorization, and others from NIST SP 800-53.
Organizations that collect, use, and retain PII should use NIST SP 800-122 to help guide the organization’s efforts to protect the confidentiality of PII.
PHI
Protected health information (PHI), also referred to as electronic protected health information (EPHI or ePHI), is any individually identifiable health information. PHI is treated as a special case of PII with different standards and frameworks. NIST SP 800-66 provides guidelines for implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule. The Security Rule applies to the following covered entities:
Covered healthcare providers: Any provider of medical or other health services, or supplies, who transmits any health information in electronic form in connection with a transaction for which HHS (U.S. Department of Health and Human Services) has adopted a standard.
Health plans: Any individual or group plan that provides or pays the cost of medical care (e.g., a health insurance issuer and the Medicare and Medicaid programs).
Healthcare clearinghouses: A public or private entity that processes another entity’s healthcare transactions from a standard format to a nonstandard format, or vice versa.
Medicare prescription drug card sponsors: A nongovernmental entity that offers an endorsed discount drug program under the Medicare Modernization Act.
Each covered entity must ensure the confidentiality, integrity, and availability of PHI that it creates, receives, maintains, or transmits; protect against any reasonably anticipated threats and hazards to the security or integrity of EPHI; and protect against reasonably anticipated uses or disclosures of such information that are not permitted by the Privacy Rule.
The Security Rule is separated into six main sections as follows:
Security Standards General Rules: Includes the general requirements all covered entities must meet; establishes flexibility of approach; identifies standards and implementation specifications (both required and addressable); outlines decisions a covered entity must make regarding addressable implementation specifications; and requires maintenance of security measures to continue reasonable and appropriate protection of PHI.
Administrative Safeguards: Defined in the Security Rule as the "administrative actions, and policies and procedures, to manage the selection, development, implementation, and maintenance of security measures to protect electronic protected health information and to manage the conduct of the covered entity's workforce in relation to the protection of that information."
Physical Safeguards: Defined as the “physical measures, policies, and procedures to protect a covered entity’s electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.”
Technical Safeguards: Defined as "the technology and the policy and procedures for its use that protect electronic protected health information and control access to it."
Organizational Requirements: Includes standards for business associate contracts and other arrangements, including memoranda of understanding between a covered entity and a business associate when both entities are government organizations; and requirements for group health plans.
Policies and Procedures and Documentation Requirements: Requires implementation of reasonable and appropriate policies and procedures to comply with the standards, implementation specifications, and other requirements of the Security Rule; maintenance of written documentation (which may also be in electronic form, such as email) and/or records that include policies, procedures, actions, activities, or assessments required by the Security Rule; and retention, availability, and update requirements related to the documentation.
NIST SP 800-66 includes a relationship linking the NIST Risk Management Framework (RMF) and the Security Rule. It also includes key activities that should be carried out for each of the preceding six main sections of the Security Rule. Organizations that collect, use, and retain PHI should use NIST SP 800-66 to help guide the organization’s efforts to provide confidentiality, integrity, and availability for PHI.
Proprietary Data
Proprietary data is defined as internally generated data or documents that contain technical or other types of information controlled by an organization to safeguard its competitive edge. Proprietary data may be protected under copyright, patent, or trade secret laws. While there are no specific and different standards or frameworks to govern the protection of proprietary data, organizations must ensure that the confidentiality, integrity, and availability of proprietary data are protected. Because of this, many organizations protect proprietary data with the same types of controls that are used for PII and PHI.
Security professionals should ensure that proprietary data is identified and properly categorized to ensure that the appropriate controls are put into place.
Private Sector Data Classifications
Organizations in the private sector generally classify their data using four main classification levels, listed here from highest sensitivity to lowest:
Confidential
Private
Sensitive
Public
Data that is confidential includes trade secrets, intellectual property, application programming code, and other data that could seriously affect the organization if unauthorized disclosure occurred. Data at this level would be available only to personnel in the organization whose work requires access to the data or relates directly to it. Access to confidential data usually requires authorization for each access. In most cases, the only way for external entities to have authorized access to confidential data is as follows:
After signing a confidentiality agreement
When complying with a court order
As part of a government project or contract procurement agreement
Data that is private includes any information related to personnel, including human resources records, medical records, and salary information, that is used only within the organization. Data that is sensitive includes organizational financial information and requires extra measures to ensure its CIA and accuracy. Public data is data that is generally shared with the public and would not cause a negative impact on the organization. Examples of public data include how many people work in the organization and what products an organization manufactures or sells.
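Because these levels are ordered by sensitivity, a simple ranking is often enough to drive an access decision. The sketch below is a simplified illustration (not from the chapter); real decisions also weigh need-to-know, confidentiality agreements, and the external-access conditions listed above.

```python
from enum import IntEnum

class PrivateSectorLevel(IntEnum):
    PUBLIC = 1
    SENSITIVE = 2
    PRIVATE = 3
    CONFIDENTIAL = 4

def may_read(user_clearance: PrivateSectorLevel,
             data_label: PrivateSectorLevel) -> bool:
    # A user may read data labeled at or below their clearance level.
    return user_clearance >= data_label

print(may_read(PrivateSectorLevel.PRIVATE, PrivateSectorLevel.SENSITIVE))       # True
print(may_read(PrivateSectorLevel.SENSITIVE, PrivateSectorLevel.CONFIDENTIAL))  # False
```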
Military and Government Data Classifications
Military and governmental entities usually classify data using five main classification levels, listed from highest sensitivity level to lowest:
Top Secret: Disclosure would cause exceptionally grave danger to national security.
Secret: Disclosure would cause serious damage to national security.
Confidential: Disclosure would cause damage to national security.
Sensitive but Unclassified: Disclosure might harm national security.
Unclassified: Any information that can generally be distributed to the public without any threat to national interest.
U.S. federal agencies use the Sensitive but Unclassified (SBU) designation when information is not classified but still needs to be protected and requires strict controls over its distribution. There are over 100 different labels for SBU, including
For official use only (FOUO)
Limited official use
Sensitive security information
Critical infrastructure information
Executive Order 13556 created the standard designation Controlled Unclassified Information (CUI) to consolidate these labels. Implementation is in progress.
Data that is top secret includes weapon blueprints, technology specifications, spy satellite information, and other military information that could gravely damage national security if disclosed. Data that is secret includes deployment plans, missile placement, and other information that could seriously damage national security if disclosed. Data that is confidential includes the strength of forces in the United States and overseas, technical information used for training and maintenance, and other information that could damage national security if unauthorized disclosure occurred. Data that is sensitive but unclassified includes medical or other personal data that might not cause serious damage to national security if disclosed but could cause citizens to question the government's reputation and could even lead to lawsuits. Military and government information that does not fall into any of the other four categories is considered unclassified and is usually available to the public under the Freedom of Information Act.
Information and Asset Handling Requirements
Organizations should establish the appropriate information and asset handling requirements to protect their assets. As part of these handling requirements, personnel should be instructed on how to mark, label, store, and destroy or dispose of media.
Handling requirements are spelled out in organizational standards and other documentation, which must be enforced to ensure proper asset handling. These requirements inform custodians and users how to protect the information they use and the systems with which they interact, dictating by classification level how information must be stored, transmitted, communicated, accessed, retained, and destroyed. Handling requirements can extend to incident management and breach notification, as well as to automated tools such as data loss prevention (DLP) solutions. They should be succinctly documented in a usable format, and compliance with them should be referenced in the acceptable use policy (AUP). Users should be introduced to handling requirements during the onboarding process, and the requirements should be reinforced throughout the user life cycle.
Marking, Labeling, and Storing
Plainly label all forms of storage media (tapes, optical drives, and so on) and store them safely. Some guidelines in the area of media control are to
Accurately and promptly mark all data storage media.
Ensure proper environmental storage of the media.
Ensure the safe and clean handling of the media.
Log data media to provide a physical inventory control.
The environment where the media will be stored is also important. For example, damage could occur to magnetic media above 100 degrees Fahrenheit (38 degrees Celsius).
Media marking refers to the use of human-readable information about the media, while media labeling refers to the use of security attributes in internal data structures. Marking is usually written on the media itself so the correct media can be easily identified. Labeling is internal to the media itself. A backup tape may be marked with a server name or other identifier of the asset to which the backup belongs. If an administrator accesses the backups on the backup tape, each backup will be labeled with a descriptive name that usually includes the date, time, and type of backup. In addition, ACLs may be configured on the different backup files to limit the users who can access the backup files.
Labeling is the vehicle for communicating the assigned classification to custodians, users, and applications (for example, access control and DLP). Labels make it easy to identify the data classification. Labels can take many forms: electronic, print, audio, or visual. Labeling recommendations are tied to media type. In electronic form, the classification label should be a part of the document name (for example, Customer Transaction History_Protected). On written or printed documents, the classification label should be clearly watermarked, as well as in either the document header or footer. For physical media, the classification label should be clearly marked on the case using words or symbols.
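As a small illustration of the electronic-form convention just described (the classification label as part of the document name), a check like the following could flag unlabeled files. The label set and the underscore convention are assumptions drawn from the example name above.

```python
# Hypothetical set of classification labels used in document names.
CLASSIFICATION_LABELS = {"Public", "Sensitive", "Private", "Confidential", "Protected"}

def has_classification_label(document_name: str) -> bool:
    # Convention from the example above: "<name>_<Label>",
    # e.g., "Customer Transaction History_Protected".
    _, _, suffix = document_name.rpartition("_")
    return suffix in CLASSIFICATION_LABELS

print(has_classification_label("Customer Transaction History_Protected"))  # True
print(has_classification_label("Q3 Forecast"))                             # False
```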
Destruction
During media disposal, you must ensure no data remains on the media. The most reliable, secure means of removing data from magnetic storage media, such as a magnetic tape cassette, is degaussing, which exposes the media to a powerful, alternating magnetic field. Degaussing removes any previously written data, leaving the media in a magnetically randomized (blank) state. More information on the destruction of media is given later in this chapter, in the "Data Remanence and Destruction" section, and in Chapter 7, "Security Operations."
Provision Resources Securely
While information and assets are ultimately owned by the organization as a whole, in practice they are owned and managed by different business units. These business units must work together to ensure that the organizational mission is achieved and that the information and assets are protected.
For this reason, security professionals must understand where the different information and assets are located and work with the various owners to ensure that the information and assets are protected. The owners that security professionals need to work with include data owners, system owners, and business/mission owners. As part of asset ownership, security professionals should ensure that appropriate asset management procedures are developed and followed, as described in Chapter 7.
Asset Inventory and Asset Management
To properly secure organizational assets, security professionals must ensure that an accurate inventory of all assets is obtained. After all assets are inventoried, assets must be managed by the asset owners. To fully understand asset inventory and management, security professionals must understand the asset life cycle. According to the National Institute of Standards and Technology (NIST), the asset life cycle is an eight-phase process, as shown in Figure 2-1.
Figure 2-1 Asset Life Cycle
In a typical life cycle, an asset goes through the enrollment, operation, and end-of-life phases. The enrollment process involves manual IT staff activities, such as assigning and tagging the asset with a serial number and barcode, loading a baseline IT image, assigning the asset to an owner, and, finally, recording the serial number as well as other attributes into a database. The attributes might also include primary location, hardware model, baseline IT image, and owner. This process can also be referred to as the inventory phase.
As the asset goes through the operations phase, changes can occur. Such changes could include introduction of new or unauthorized software, the removal of certain critical software, or the removal of the physical asset itself from the enterprise. These changes need to be tracked and recorded. As a consequence, asset monitoring, anomaly detection, reporting, and policy enforcement are the primary activities in this phase.
The assets within the enterprise are monitored using installed agents that reside on the asset, as well as network-based monitoring systems that scan and capture network traffic. These monitoring systems collect data from and about the assets and send periodic reports to the analytics engine.
As an asset reaches the end of its operational life, it goes through activities within the end-of-life phase that include returning the asset to IT support for data removal and removing the serial number from the registration database and other associated databases. Finally, the asset is prepared for physical removal from the enterprise facility.
Asset management includes the operate, maintain, and modify phases of the asset life cycle. After an asset is configured as it should be with all updates and settings, administrators should document the configuration baseline: a description of an asset's attributes at a point in time that serves as a basis for defining change. (Configuration and change management are discussed in more detail in Chapter 7.) As part of asset management, an asset's security and configuration baseline should be enforced by configuration management agents, and installed software should be captured by software asset management agents. Both categories of agents forward reports to their respective servers, which serve as data storage facilities. Reports can be compiled based on the data received from the agents and sent to those responsible for managing the assets. Regular examination of these reports should be a priority to ensure that assets have the appropriate security controls.
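In practice, comparing an agent's report against the documented baseline reduces to a diff over recorded attributes. The sketch below is a simplified illustration; the attribute names and values are hypothetical, not those of any particular asset management product.

```python
# Hypothetical configuration baseline recorded at enrollment time.
baseline = {
    "os_version": "10.4.2",
    "disk_encryption": "enabled",
    "installed_software": frozenset({"office-suite", "av-agent"}),
}

# Hypothetical report forwarded by a configuration management agent.
reported = {
    "os_version": "10.4.2",
    "disk_encryption": "disabled",
    "installed_software": frozenset({"office-suite", "av-agent", "p2p-client"}),
}

def detect_drift(baseline: dict, reported: dict) -> list:
    """Return attributes whose reported state differs from the baseline."""
    return [
        f"{key}: baseline={baseline[key]!r}, reported={reported[key]!r}"
        for key in baseline
        if reported.get(key) != baseline[key]
    ]

for finding in detect_drift(baseline, reported):
    print(finding)
# Flags the disabled disk encryption and the unauthorized "p2p-client".
```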
Data Life Cycle
A data life cycle refers to the entire period of time that an organization retains data. The following sections discuss the data life cycle, databases, roles and responsibilities, data collection and limitation, data location, data maintenance, data retention, data remanence and destruction, and data audit.
Organizations should ensure that any information they collect and store is managed throughout the life cycle of that information. If no information life cycle is followed, data may be retained indefinitely, never discarded, and rarely, if ever, updated. Security professionals must therefore ensure that data owners and custodians understand the information life cycle.
For most organizations, the five phases of the information life cycle are as follows:
Create/receive
Distribute
Use
Maintain
Dispose/store
During the create/receive phase, data is either created by organizational personnel or received by the organization via the data entry portal. If the data is created by organizational personnel, it is usually placed in the location from which it will be distributed, used, and maintained. However, if the data is received via some other mechanism, you might need to copy or import the data to an appropriate location. In this case, the data will not be available for distribution, usage, and maintenance until after the copy or import. Not all data is used by all users. As such, data needs to be sorted, stored, and distributed in various ways as the needs arise from each user or business unit.
After the create/receive phase, organizational personnel must ensure that the data is properly distributed. In most cases, this step involves placing the data in the appropriate location and possibly configuring the access permissions as defined by the data owner. Keep in mind, however, that in many cases the storage location and appropriate user and group permissions may already be configured. In such a case, it is just a matter of ensuring that the data is in the correct distribution location. Distribution locations include databases, shared folders, network-attached storage (NAS), storage-area networks (SANs), and data libraries.
After data has been distributed, personnel within the organization can use the data in their day-to-day operations. Whereas some personnel will have only read access to data, others may have write or full control permissions. Remember that the permissions allowed or denied are designated by the data owner but configured by the data custodian.
Now that data is being used in day-to-day operations, data maintenance is key to ensuring that data remains accessible and secure. Maintenance includes auditing, performing backups, performing data integrity checks, and managing data leaks and loss.
When data becomes old, invalid, or no longer fit for any further use, it is considered to be in the disposition stage. You should either properly dispose of it or ensure that it is securely stored. Some organizations must maintain data records for a certain number of years per local, state, or federal laws or regulations. This type of data should be archived for the required period. In addition, any data that is part of litigation should be retained as requested by the court, and organizations should follow appropriate chain of custody and evidence documentation processes. Data archival and destruction procedures should be clearly defined by the organization.
All organizations need policies in place for the retention and destruction of data. Data retention and destruction must follow all local, state, and federal laws and regulations. Documenting proper procedures ensures that information is maintained for the required time, preventing financial fines and possible incarceration of high-level organizational officers. These procedures must include both the retention period and the destruction process.
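As a simplified sketch of such a procedure (the record types and retention periods below are invented; real values come from the applicable laws and regulations), a destruction check can confirm that a record has aged past its required retention window and is not under a litigation hold:

```python
from datetime import date, timedelta

# Hypothetical retention periods by record type, in years.
RETENTION_YEARS = {"financial": 7, "hr": 5, "general": 2}

def destruction_allowed(record_type: str, created: date,
                        litigation_hold: bool, today: date) -> bool:
    """True only after the retention period has elapsed and no hold applies."""
    if litigation_hold:
        return False  # data under litigation must be retained
    retention = timedelta(days=365 * RETENTION_YEARS[record_type])
    return today - created >= retention

print(destruction_allowed("financial", date(2015, 3, 1), False, date(2023, 2, 13)))  # True
print(destruction_allowed("financial", date(2020, 3, 1), False, date(2023, 2, 13)))  # False
```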
Figure 2-2 shows the information life cycle.
Figure 2-2 Information Life Cycle
A discussion of data would be incomplete without a discussion of databases.
Databases
Databases have become the technology of choice for storing, organizing, and analyzing large sets of data. End users generally access data in a database through a client interface. As the need arises to provide access to entities outside the enterprise, the opportunities for misuse increase. The following sections cover the concepts necessary to discuss database security, as well as the security concerns surrounding database management and maintenance.
DBMS Architecture and Models
Databases contain data, and the main difference in database models is how that information is stored and organized. The model describes the relationships among the data elements, how the data is accessed, how integrity is ensured, and acceptable operations. The five models or architectures we discuss are
Relational
Hierarchical
Network
Object-oriented
Object-relational
The relational model uses attributes (columns) and tuples (rows) to organize the data in two-dimensional tables. Each cell in the table represents the intersection of an attribute and a tuple, and a tuple represents a record.
When working with relational database management systems (RDBMSs), you should understand the following terms (a short sketch after this list illustrates several of them):
Relation: A connection between one or more tables. One column in a table is a primary key that relates to another table as a foreign key.
Tuple: A row in a table.
Attribute: A column in a table.
Schema: Description of a relational database.
Record: A collection of related data items.
Base relation: In SQL, a relation that is actually existent in the database.
View: The set of data derived from one or more tables or views available to a given user. Security is enforced through the use of views.
Degree: The number of columns in a table.
Cardinality: The number of rows in a relation.
Domain: The set of allowable values that an attribute can take.
Primary key: One or more columns that make each row of a table unique.
Foreign key: An attribute in one relation that has values matching the primary key in another relation. Matches between the foreign key and the primary key are important because they represent references from one relation to another and establish the connection among these relations.
Candidate key: An attribute in a row that uniquely identifies that row.
Referential integrity: A requirement that for any foreign key attribute, the referenced relation must have a tuple with the same value for its primary key.
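The sketch below, using Python's built-in sqlite3 module with invented table and column names, shows a primary key, a foreign key, and referential integrity in action: an insert that references a nonexistent primary key is rejected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

# Two relations; each column is an attribute, each row a tuple.
conn.execute("CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,                     -- primary key
        name    TEXT,
        dept_id INTEGER REFERENCES departments(dept_id)  -- foreign key
    )
""")

conn.execute("INSERT INTO departments VALUES (10, 'Research')")
conn.execute("INSERT INTO employees VALUES (1, 'Ada', 10)")  # valid reference

try:
    # Referential integrity: no department has primary key 99.
    conn.execute("INSERT INTO employees VALUES (2, 'Grace', 99)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # rejected: FOREIGN KEY constraint failed
```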
An important element of database design that ensures that the attributes in a table depend only on the primary key is a process called normalization (sketched in code after this list). Normalization includes
Eliminating repeating groups by putting them into separate tables
Eliminating redundant data (occurring in more than one table)
Eliminating attributes in a table that are not dependent on the primary key of that table
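As a brief illustration of the first step, the sketch below (schema invented) moves a repeating group of item columns out of a flat orders table and into a child table, so an order can hold any number of items without redundant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unnormalized: a repeating group of item columns caps each order at two items.
conn.execute("CREATE TABLE orders_flat (order_id INTEGER, item1 TEXT, item2 TEXT)")

# Normalized: the repeating group becomes its own table, keyed back to orders.
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE order_items (
        order_id INTEGER REFERENCES orders(order_id),
        item     TEXT
    )
""")
conn.execute("INSERT INTO orders VALUES (1)")
conn.executemany("INSERT INTO order_items VALUES (?, ?)",
                 [(1, "keyboard"), (1, "mouse"), (1, "monitor")])  # any number of items

print(conn.execute("SELECT item FROM order_items WHERE order_id = 1").fetchall())
# [('keyboard',), ('mouse',), ('monitor',)]
```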
In the hierarchical database model, data is organized into a hierarchy. An object can have one child (an object that is a subset of the parent object), multiple children, or no children. To navigate this hierarchy, you must know the branch in which the object is located. Examples of this model include the Windows Registry and a Lightweight Directory Access Protocol (LDAP) directory.
In the network model, as in the hierarchical model, data is organized into a hierarchy, but unlike the hierarchical model, objects can have multiple parents. Because of this, knowing which branch to find a data element in is not necessary because there will typically be multiple paths to it.
The object-oriented model can handle a variety of data types and is more dynamic than a relational database. Object-oriented database (OODB) systems are useful in storing and manipulating complex data, such as images and graphics. Consequently, complex applications involving multimedia, computer-aided design (CAD), video, graphics, and expert systems are more suited to the object-oriented model. This model also offers easier code reuse and analysis and reduced maintenance.
Objects can be created as needed, and the data and the procedures (or methods) go with the object when it is requested. A method is the code defining the actions that the object performs in response to a message. This model uses some of the same concepts of a relational model. In the object-oriented model, the relation, column, and tuple (relational terms) are referred to as class, attribute, and instance objects.
The object-relational model is the marriage of object-oriented and relational technologies, combining the attributes of both. This is a relational database with a software interface that is written in an object-oriented programming (OOP) language. The logic and procedures are derived from the front-end software rather than the database. This means each front-end application can have its own specific procedures.
Database Interface Languages
Access to information in a database is facilitated by an application that allows you to obtain and interact with data. These interfaces can be written in several different languages. This section discusses some of the more important data programming languages:
ODBC: Open Database Connectivity (ODBC) is an application programming interface (API) that allows communication with databases either locally or remotely. An API on the client sends requests to the ODBC API. The ODBC API locates the database, and a specific driver converts the request into a database command that the specific database will understand.
JDBC: As you might expect from the name, Java Database Connectivity (JDBC) makes it possible for Java applications to communicate with a database. A Java API is what allows Java programs to execute SQL statements. It is database agnostic, allows communication with various types of databases, and provides the same functionality as ODBC.
XML: Data can be created in Extensible Markup Language (XML) format, but the XML:DB API allows XML applications to interact with more traditional databases, such as relational databases. It requires that the database have a database-specific driver that encapsulates all the database access logic.
OLE DB: Object Linking and Embedding Database (OLE DB) is a replacement for ODBC, extending its functionality to nonrelational databases. Although it is COM-based and limited to Microsoft Windows–based tools, it provides applications with uniform access to a variety of data sources, including service through ActiveX objects.
Data Warehouses and Data Mining
Data warehousing is the process of combining data from multiple databases or data sources in a central location called a data warehouse. The warehouse is used to carry out analysis. The data is not simply combined but is processed and presented in a more useful and understandable way. Data warehouses require stringent security because the data is not dispersed but located in a central location. Within data warehouses, organizations may also designate a data mart, which is a subset of data from the data warehouse that is used for analytics and business intelligence users.
Data mining is the process of using special tools to organize the data into a format that makes it easier to make business decisions based on the content. It analyzes large data sets in a data warehouse to find nonobvious patterns. These tools locate associations between data and correlate these associations into metadata, which is data about data. It allows for more sophisticated inferences (sometimes called business intelligence [BI]) to be made about the data. Three measures should be taken when using data warehousing applications:
Prevent metadata from being used interactively.
Monitor the data purging plan.
Reconcile data moved between the operations environment and data warehouse.
Database Maintenance
Database administrators must regularly conduct database maintenance. Databases must be backed up regularly. All security patches and updates for the hardware and software, including the database software, must be kept up to date. Hardware and software upgrades are necessary as organizational needs increase and as technology advances.
Security professionals should work with database administrators to ensure that threat analysis for databases is performed at least annually. They should also work to develop the appropriate mitigations and controls to protect against the identified threats.
Database Threats
Security threats to databases usually revolve around unwanted access to data. Two security threats that exist in managing databases involve the processes of aggregation and inference. Aggregation is the act of combining information from various sources. It can become a security issue with databases when users who do not have access to a collection of data objects as a whole do have access to the individual pieces, or at least some of them, and are able to piece together information to which they should not have access. An inference, on the other hand, occurs when an attacker uses several pieces of generic nonsensitive information to determine or learn specific sensitive data.
Two types of access measures, sketched in code after this list, can be put in place to help prevent access to inferable information:
Content-dependent access control bases access on the sensitivity of the data. For example, a department’s manager might have access to the salaries of the employees in that department but not to the salaries of employees in other departments. The cost of this measure is an increased processing overhead.
Context-dependent access control bases the access to data on multiple factors to help prevent inference. Access control can be a function of factors such as location, time of day, and previous access history.
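A hedged Python sketch of both checks follows; the roles, departments, business-hours rule, and query quota are invented to show the shape of each decision, not taken from any particular product.

```python
from datetime import datetime

def content_dependent_ok(user: dict, record: dict) -> bool:
    # Content-dependent: the decision inspects the data itself --
    # a manager may see salaries only within their own department.
    return user["role"] == "manager" and user["dept"] == record["dept"]

def context_dependent_ok(now: datetime, prior_reads: int) -> bool:
    # Context-dependent: the decision inspects circumstances --
    # time of day and previous access history, which limits the
    # repeated querying that enables inference.
    within_business_hours = 9 <= now.hour < 17
    return within_business_hours and prior_reads < 50

user = {"role": "manager", "dept": "sales"}
record = {"dept": "sales", "salary": 78_000}
print(content_dependent_ok(user, record))                     # True: same department
print(context_dependent_ok(datetime(2023, 2, 13, 22, 0), 3))  # False: after hours
```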
Database Views
Access to the information in a database is usually controlled through the use of database views. A view refers to the given set of data that a user or group of users can see when they access the database. Before a user is able to use a view, that user must have permission on both the view and all dependent objects. Views can be used to enforce the concept of least privilege.
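A short sqlite3 sketch (schema invented) shows a view that exposes only nonsensitive columns, enforcing least privilege by granting users the view rather than the base table. SQLite itself lacks per-user GRANT statements, so in a full RDBMS the permissioning step would accompany the view definition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, phone TEXT, salary INTEGER)")
conn.execute("INSERT INTO staff VALUES ('Ada', '5551234567', 95000)")

# The view omits the sensitive salary column; users granted access only
# to this view can never see salaries (least privilege).
conn.execute("CREATE VIEW staff_directory AS SELECT name, phone FROM staff")

print(conn.execute("SELECT * FROM staff_directory").fetchall())
# [('Ada', '5551234567')]
```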
Database Locks
Database locks are used when one user is accessing a record, preventing another user from accessing or editing that record at the same time until the first user is finished. Locking not only provides exclusivity to writes but also controls reading of unfinished modifications or uncommitted data.
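The following sketch simulates two users with two sqlite3 connections to the same database file. SQLite locks the whole database rather than a single record, so it stands in here for the finer-grained row locks of server RDBMSs; the accounts table is invented.

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file, simulating two users.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
first = sqlite3.connect(path, timeout=0.1, isolation_level=None)
second = sqlite3.connect(path, timeout=0.1, isolation_level=None)

first.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
first.execute("INSERT INTO accounts VALUES (1, 100)")

first.execute("BEGIN IMMEDIATE")  # first user takes the write lock
first.execute("UPDATE accounts SET balance = 50 WHERE id = 1")

try:
    second.execute("BEGIN IMMEDIATE")  # second user's edit is blocked
except sqlite3.OperationalError as err:
    print("blocked:", err)  # blocked: database is locked

first.execute("COMMIT")            # first user finishes; the lock is released
second.execute("BEGIN IMMEDIATE")  # now succeeds
second.execute("COMMIT")
```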
Polyinstantiation
Polyinstantiation is a process used to prevent data inference violations like the database threats previously covered. It does this by enabling a relation to contain multiple tuples with the same primary keys, with each instance distinguished by a security level. It prevents low-level database users from inferring the existence of higher-level data.
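A hedged sqlite3 sketch of the idea follows (the ship-manifest example is invented in the spirit of the classic textbook illustration): the composite primary key lets the same logical record exist at two security levels, and a low-clearance query returns the cover row rather than revealing that a higher-level row exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Polyinstantiation: the logical key (ship) may appear once per security
# level, so the actual primary key is the composite (ship, level).
conn.execute("""
    CREATE TABLE manifest (
        ship  TEXT,
        level TEXT,  -- 'unclassified' or 'secret'
        cargo TEXT,
        PRIMARY KEY (ship, level)
    )
""")
conn.execute("INSERT INTO manifest VALUES ('Aurora', 'secret', 'missiles')")
# A cover row at the lower level prevents inference: without it, a
# low-level user blocked from inserting an 'Aurora' row would learn
# that a hidden row must exist.
conn.execute("INSERT INTO manifest VALUES ('Aurora', 'unclassified', 'grain')")

# A low-clearance query sees only the unclassified instance.
print(conn.execute(
    "SELECT cargo FROM manifest WHERE ship = 'Aurora' AND level = 'unclassified'"
).fetchall())  # [('grain',)]
```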
Database ACID Test
Every database should adhere to the rules of ACID. An ACID test ensures that each transaction has the following properties before it is committed (a short sketch after this list demonstrates atomicity):
Atomicity: Either all command operations are complete, or the database changes are rolled back.
Consistency: The transaction follows an integrity process that ensures that data is consistent in all places where it exists.
Isolation: A transaction does not interact with other transactions until completion.
Durability: After it’s verified, the transaction is committed and cannot be rolled back.
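Atomicity and durability are easy to observe in a short sqlite3 sketch (the accounts and the simulated crash are invented): either both halves of the transfer commit, or the transaction rolls back and neither change is visible.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])

def transfer(amount: int, fail_midway: bool) -> None:
    try:
        conn.execute("BEGIN")  # one transaction: both updates or neither
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
        if fail_midway:
            raise RuntimeError("simulated crash between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))
        conn.execute("COMMIT")    # durability: once committed, the change persists
    except RuntimeError:
        conn.execute("ROLLBACK")  # atomicity: the partial debit is undone

transfer(40, fail_midway=True)
print(conn.execute("SELECT id, balance FROM accounts").fetchall())
# [(1, 100), (2, 0)] -- as if the failed transaction never ran
```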
Roles and Responsibilities
Many roles are tied to asset security. The three most important are data owners, data controllers, and data custodians. Data owners are the personnel who actually own a given set of data. These data owners determine the level of access that any users are given to their data. Data controllers are those who either alone or jointly with other persons determine the purposes for which and the manner in which any personal data is processed. Data custodians are the personnel who actually manage the access to a given set of data. While data owners determine the level of access given, it is the data custodians who actually configure the appropriate controls to grant or deny the user’s access, based on the data owner’s approval.
Other roles within an organization affect data but do not necessarily affect the data as much as data owners, controllers, and custodians. In the following sections, we discuss these roles and their effect on data.
Data Owner
Data owners must understand the way in which the data they are responsible for is used and when that data should be released. They must also determine the data’s value to and impact on the organization. A data owner should understand what it will take to restore or replace data and the cost that will be incurred during this process. Finally, data owners must understand when data is inaccurate or no longer needed by the organization.
In most cases, each business unit within an organization designates a data owner, who must be given the appropriate level of authority for the data for which that owner is responsible. Data owners must understand any intellectual property rights and copyright issues for the data. Data owners are responsible for ensuring that the appropriate agreements are in place if third parties are granted access to the data.
Security professionals must work with the owners of information and assets to determine who should have access to the information and assets, the value of the information and assets, and the controls that should be implemented to protect the privacy of information and assets. As a result, security professionals must understand the role of data owners, system owners, and business/mission owners.
Unfortunately, in most cases, data owners do not own the systems on which their data resides. Therefore, it is important that the data owner work closely with the system owner. Even if the appropriate ACLs are configured for the data, the data can still be compromised if the system on which the data resides is not properly secured.
Data Controller
Data controllers determine the “why” and “how” personal data is processed. In some cases, the data controller role and the data owner role are the same. However, organizations should separate these roles when the data being stored and managed has special protections or laws that affect it. The data controller can focus on maintaining the privacy and confidentiality of the data.
In the European Union’s General Data Protection Regulation (GDPR) and other privacy laws, the data controller has the most responsibility when it comes to protecting the privacy and rights of the data’s subject, such as the user of a website. Simply put, the data controller controls the procedures and purpose of data usage. Data controllers should ensure that any data collected adheres to the applicable laws and regulations. The data controller is responsible for documenting how the data will be used.
Data Custodian
Data custodians must understand the levels of data access that can be given to users. Data custodians work with data owners to determine the level of access that should be given. This arrangement is an excellent example of separation of duties: by having separate roles such as data owners and data custodians, an organization can ensure that no single role is responsible for data access. This setup prevents fraudulent creation of user accounts and assignment of rights.
Data custodians should understand data policies and guidelines. They should document the data structures in the organization and the levels of access given. They are also responsible for data storage, archiving, and backups. Finally, they should be concerned with data quality and should therefore implement the appropriate audit controls.
Centralized data custodians are common. Data owners give the data custodians the permission level that users and groups should be given. Data custodians actually implement the access control lists (ACLs) for the devices, databases, folders, and files.
System Owners
System owners are responsible for the systems on which data resides. While the data owner owns the data and the data custodian configures the appropriate permissions for user access to the data, the system owner must determine the parameters that govern the system, such as what types of data and applications can be stored on the system, who owns the data and applications, and who determines the users who can access the data and applications.
System Custodians
System custodians are responsible for administering the systems on which data resides based on the parameters set forth by the system owner.
Business/Mission Owners
Business or mission owners must ensure that all operations fit within the business goals and mission. This task includes ensuring that collected data is necessary for the business to function. Collecting unnecessary data wastes time and resources. Because the business/mission owner is primarily concerned with the overall business, conflicts between data owners, data custodians, and system owners may need to be resolved by the business/mission owner, who will need to make the best decision for the organization. For example, say that a data owner requests more room on a system for the storage of data. The data owner strongly believes that the new data being collected will help the sales team be more efficient. However, storage on the system owner’s asset is at a premium. The system owner is unwilling to allow the data owner to use the amount of space requested. In this case, the business/mission owner would need to review both sides and decide whether collecting and storing the new data would result in enough increased revenue to justify the cost of allowing the data owner more storage space. If so, it may also be necessary to invest in more storage media for the system or to move the data to another system that has more resources available. But keep in mind that moving the data would possibly involve another system owner.
Security professionals should always be part of these decisions because they understand the security controls in place on the systems involved and the controls needed to protect the data. Moving the data to a system that does not have the appropriate controls may cause more issues than simply upgrading the system on which the data currently resides. Only a security professional is positioned to objectively assess the security needs of the data and ensure that they are met.
Data Processors
Data processors are any personnel within an organization who process the data that has been collected throughout the entire life cycle of the data. If any individual accesses the data in any way, that individual can be considered a data processor. However, in some organizations, data processors are only those individuals who can enter or change data.
No matter which definition an organization uses, it is important that security professionals work to provide training to all data processors on the importance of asset privacy, especially data privacy. This training is usually included as part of the security awareness training. It is also important to include any privacy standards or policies that are based on laws and regulations. After personnel have received the appropriate training, they should sign a statement saying that they will abide by the organization’s privacy policy.
Data Users and Subjects
Data users are any individuals who are granted access to some or all data after proper forms are signed. These entities do not necessarily have any editing rights on the data. They often simply view and use the data as part of their day-to-day duties. In some cases, the terms data users and data processors are used synonymously. However, security professionals should keep in mind that processors can edit the data, whereas data users generally cannot.
Data subjects are any end users whose personal data can be collected. A data subject is any individual who can be directly or indirectly identified via an identifier, such as a name, an ID number, or location data, or via factors specific to the person's physical, physiological, genetic, mental, economic, cultural, or social identity. Many laws and regulations now grant rights to data subjects, including the right to request that their data be modified or deleted.
Data Collection and Limitation
For any organization, data collection is limited by what the organization actually needs. System owners and data custodians should monitor the amount of free storage space so that they understand trends and can anticipate future needs before space becomes critical. Without appropriate monitoring, data can grow to the point where system performance is affected. No organization wants a vital data storage system to shut down because no free space remains. Disk quotas allow administrators to set disk space limits for users and then automatically monitor disk space usage. In most cases, quotas can be configured to notify users when they are nearing their limits.
Collection of data is also limited based on laws and regulations and, in some cases, on gaining the consent of the subject of the data. Organizations should ensure that they fully document any laws and regulations that affect the collection of private data and adjust any private data collection policies accordingly. Organizations should document and archive the consent of the data subject. In addition, this consent should be renewed periodically, especially if the collection policy changes in any way.
Security professionals should work with system owners and data custodians to ensure that the appropriate monitoring and alert mechanisms are configured. System owners and data custodians can then be proactive when it comes to data storage needs.
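As a sketch of such a monitoring mechanism, the following Python snippet uses the standard library's shutil.disk_usage to raise a warning when free space falls below a threshold. The path and threshold are illustrative assumptions.

```python
import shutil

def check_free_space(path, min_free_ratio=0.10):
    """Warn when free space on the volume holding `path` drops below the threshold."""
    usage = shutil.disk_usage(path)      # named tuple: total, used, free
    free_ratio = usage.free / usage.total
    if free_ratio < min_free_ratio:
        # In production this would raise an alert (email, SIEM event, etc.).
        print(f"ALERT: only {free_ratio:.1%} free on {path}")
    return free_ratio

check_free_space("/", min_free_ratio=0.10)
```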
Data Location
Data location is the physical location of the data when it is being stored, transmitted, and accessed. Many laws and regulations affect what kinds of data can be stored and how that data must be protected based on the citizenship of the data subject. But there are also laws and regulations that can affect data based on where it is stored. This issue is complicated by the fact that so many organizations are implementing cloud solutions for data storage.
Security professionals should ensure that the organization keeps one backup copy on site and another off site. If a disaster, such as a tornado, destroys the primary location, the organization still has the off-site backup copy.
How far off site to store the backup must also be considered. A backup stored only a short distance away could be destroyed by the same event; even a copy three miles away might be lost to the same tornado.
Some organizations maintain data in data centers and replicate it to one or more additional data centers to keep critical data available. These data centers are typically in separate geographic locations. Organizations that use cloud storage should likewise verify that the provider's storage locations are geographically separate.
Organizations such as Google, Amazon, and Netflix store data on content servers distributed across the country and around the world. These servers place copies of content at various locations so that users can connect quickly and stream a video or order an item seamlessly.
Data Maintenance
Data maintenance includes efforts to organize and care for data throughout its lifetime. Maintenance includes updating the data on a regular schedule or when requested to do so by the data subject. If the data is stored on a single server, updating the data is easy. If data is stored on multiple servers and other devices, protecting the data and ensuring it is maintained are much more complicated.
If the organization can truly isolate classified or private data from unclassified data on separate networks, air gaps or unidirectional network bridges (sometimes called data diodes) can provide additional data protection. With an air gap, personnel must manually add data to the classified network. With a unidirectional network bridge, personnel can copy data from the unclassified network to the classified network, provided the policies are appropriately configured.
Organizations should routinely review data policies to ensure that policies are updated and being followed. Security professionals should review recent data breaches and determine the root cause so that controls can be installed or updated.
Data Retention
Data retention involves retaining and maintaining important data as long as it is needed or required by law and destroying it when it is no longer needed. An organization's data policy typically identifies retention time frames for data based on data type. Some laws and regulations dictate how long an organization must retain data, such as three years, seven years, or even indefinitely. Data can also be placed on legal hold because of pending litigation. Organizations are responsible for complying with all laws, regulations, and legal holds. However, even in the absence of external requirements, an organization should establish policies that set retention periods for each type of data it holds.
For example, an organization may retain financial data for a different period of time than it retains PII. Audit logs may need their own retention periods, longer or shorter than those for financial data or PII, because audit logs allow security professionals to reconstruct the actions that occurred during a breach. If appropriate retention policies are not in place, valuable data can be lost earlier than expected. Keep in mind that a longer retention period means the organization will need more backup media.
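A retention schedule can be expressed directly in code. The following Python sketch uses illustrative retention periods only; real values come from the organization's data policy and applicable laws. It flags records whose retention period has passed while honoring legal holds.

```python
from datetime import date, timedelta

# Illustrative retention periods by data type; real values come from the
# organization's data policy and applicable laws and regulations.
RETENTION = {"financial": timedelta(days=7 * 365),
             "pii": timedelta(days=3 * 365),
             "audit_log": timedelta(days=365)}

def is_expired(record_date, data_type, legal_hold=False, today=None):
    """A record may be destroyed only if its retention period has passed
    and it is not subject to a legal hold."""
    if legal_hold:
        return False
    today = today or date.today()
    return today - record_date > RETENTION[data_type]

print(is_expired(date(2015, 1, 1), "financial"))             # True (after 2022)
print(is_expired(date(2015, 1, 1), "pii", legal_hold=True))  # False
```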
Data Remanence and Destruction
Whenever data is erased or removed from storage media, residual data can be left behind. These remnants can allow data to be reconstructed after the organization disposes of the media, resulting in unauthorized individuals or groups gaining access to private data. Media that security professionals must consider include magnetic hard disk drives, solid-state drives, magnetic tapes, and optical media, such as CDs and DVDs. When considering data remanence, security professionals must understand three countermeasures:
Clearing: This countermeasure removes data from the media so that it cannot be reconstructed using normal file recovery techniques and tools. With this method, the data is recoverable only by using special forensic techniques. Overwriting is a clearing technique that writes data patterns over the entire media, eliminating any trace data; disk wiping is another clearing technique. Older guidance called for overwriting media multiple times (seven passes was a common rule of thumb), but current guidance, such as NIST SP 800-88, generally considers a single complete overwrite adequate for modern magnetic media.
Purging: Also referred to as sanitization, purging makes the data unreadable even with advanced forensic techniques. With this technique, data should be unrecoverable. Degaussing, a purging technique, exposes the media to a powerful, alternating magnetic field, removing any previously written data and leaving the media in a magnetically randomized (blank) state.
Destruction: Destruction is the physical act of destroying media in such a way that it cannot be reconstructed. Shredding involves physically breaking media into pieces. Pulverizing involves reducing media to dust. Pulping chemically alters the media. Finally, burning incinerates the media. A related option, cryptographic erasure, encrypts the data on the media and then destroys the encryption key, rendering the data unreadable even though the media itself survives.
The majority of these countermeasures work for magnetic media. However, solid-state drives present unique challenges: because of wear leveling, overwriting does not reliably reach every physical storage cell. Most solid-state drive vendors provide sanitization commands that can be used to erase the data on the drive, and security professionals should research these commands to ensure that they are effective. Another option for these drives is cryptographic erasure: encrypting the drive and then destroying the encryption key. Often a combination of these methods must be used to fully ensure that the data is removed.
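To illustrate the clearing concept, the following Python sketch overwrites a single file with random bytes before deleting it. This is a teaching sketch, not a sanitization tool: on journaling file systems, and especially on solid-state drives with wear leveling, file-level overwriting does not reliably remove remanence, so the purging or destruction methods above should be used where it matters.

```python
import os
import secrets

def overwrite_and_delete(path, passes=3):
    """Illustrative clearing: overwrite a file's contents with random data,
    then delete it. Not reliable on SSDs or journaling file systems."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # whole file in memory; fine for a sketch
            f.flush()
            os.fsync(f.fileno())                # push the overwrite to the device
    os.remove(path)
```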
Data remanence is also a consideration when using any cloud-based solution for an organization. Security professionals should be involved in negotiating any contract with a cloud-based provider to ensure that the contract covers data remanence issues, although it is difficult to determine that the data is properly removed. Using data encryption is a great way to ensure that data remanence is not a concern when dealing with the cloud.
Data Audit
While an organization may have the most up-to-date data management plan in place, data management alone is not enough to fully protect data. Organizations must also put into place a data auditing mechanism that will help administrators identify vulnerabilities before attacks occur. Auditing mechanisms can be configured to monitor almost any level of access to data. However, auditing mechanisms affect the performance of the systems being audited. Always carefully consider any performance impact that may occur as a result of the auditing mechanism. Although auditing is necessary, it is important not to audit so many events that the auditing logs are littered with useless or unused information.
Confidential or sensitive data should be more carefully audited than public information. As a matter of fact, it may not even be necessary to audit access to public information. But when considering auditing for confidential data, an organization may decide to audit all access to that data or just attempts to change the data. Only the organization and its personnel are able to develop the best auditing plan.
Finally, auditing is useful only if the logs it produces are regularly reviewed. Administrators or security professionals should obtain appropriate training on reviewing audit logs. In addition, appropriate alerts should be configured for critical events. For example, if multiple user accounts are locked out because of invalid login attempts over a short period of time, the lockouts may indicate that systems are undergoing a dictionary or other password attack. If an alert notifies administrators when a certain number of lockouts occur within a period of time, administrators may be able to stop the attack before the attacker gains access.
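The lockout alert just described can be sketched in a few lines of Python. The window, threshold, and event source below are illustrative assumptions; a real deployment would consume events from the organization's actual logs or SIEM.

```python
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # illustrative alerting window
THRESHOLD = 5                    # illustrative lockout count

recent_lockouts = deque()

def record_lockout(timestamp):
    """Track lockout events and alert when too many occur within the window."""
    recent_lockouts.append(timestamp)
    while recent_lockouts and timestamp - recent_lockouts[0] > WINDOW:
        recent_lockouts.popleft()
    if len(recent_lockouts) >= THRESHOLD:
        print(f"ALERT: {len(recent_lockouts)} lockouts in the last "
              f"{WINDOW} -- possible password attack")

# Simulated events, one per minute: the alert fires on the fifth lockout.
for i in range(6):
    record_lockout(datetime(2023, 2, 13, 9, 0) + timedelta(minutes=i))
```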
Asset Retention
Asset and data retention requirements vary based on several factors, including asset or data type, asset or data age, and legal and regulatory requirements. Security professionals must understand where data is stored and the type of data stored, and they should provide guidance on managing and archiving data. Data retention policies must therefore be established with the help of organizational personnel, and those policies in turn guide the retention guidelines for the assets that store the data. If a storage asset needs to be replaced, a thorough understanding of the data residing on the asset is essential to ensure that the data is retained for the required period.
A retention policy usually contains the purpose of the policy, the portion of the organization affected by the policy, any exclusions to the policy, the personnel responsible for overseeing the policy, the personnel responsible for data, the data types covered by the policy, and the retention schedule. Security professionals should work with data owners to develop the appropriate data retention policy for each type of data the organization owns. Examples of data types include, but are not limited to, human resources data, accounts payable/receivable data, sales data, customer data, and email.
Security professionals should ensure that asset retention policies are also created and enforced. While asset retention policies are often governed by data retention policies, organizations may need to replace physical assets while retaining the data stored on them. Security professionals should ensure that the data residing on an asset slated for retirement is fully documented and properly retained as detailed in the data retention policy; doing so usually requires moving the data to another asset. For example, suppose an organization stores all the PII it retains on a SQL server located in the organization's demilitarized zone (DMZ). If the organization decides to replace the SQL server with a new Windows Server computer, it must back up the PII from the old server and restore it to the new server. The organization may also want to retain the backup of the PII in a safe or other secured location in case it is ever needed. Finally, the organization must ensure that the PII cannot be retrieved from the old server's hard drive, which may require physically destroying the drive.
To design asset and data retention policies, the organization should answer the following questions:
What are the legal/regulatory requirements and business needs for the assets/data?
What are the types of assets/data?
What are the retention periods and destruction needs for the assets/data?
The personnel who are most familiar with each asset and data type should work with security professionals to determine the asset and data retention policies. For example, human resources personnel should help design the data retention policies for all human resources assets and data. While designing asset and data retention policies, an organization must consider the media and hardware that will be used to retain the data. Then, with this information in hand, the organization and/or business unit should draft and formally adopt the asset and data retention policies.
As part of asset retention, security professionals need to understand two terms: end of life (EOL) and end of support (EOS). These terms apply to any software or hardware asset. EOL is the date when a vendor stops offering a product for sale. However, the product is still supported (such as warranties, updates, and repairs) by the vendor, at least for a while. This support usually includes deploying security updates for the product. EOS, also referred to as end-of-service life (EOSL), is the date when all vendor support ends. Organizations sometimes retain legacy hardware to access older data, such as data on tape drives. Legacy applications can also be retained if the application provides a particular function and a replacement application has not been selected. If legacy hardware or applications need to be retained, security professionals should take measures to ensure that they are deployed in a manner that prevents remote access, including deploying them on an isolated network.
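Tracking EOL and EOS dates lends itself to a simple inventory check. The following Python sketch uses hypothetical asset records; real dates would come from vendor announcements and the organization's asset inventory.

```python
from datetime import date

# Hypothetical asset inventory; dates would come from vendor announcements.
assets = [
    {"name": "edge-router-01", "eol": date(2020, 6, 30), "eos": date(2023, 6, 30)},
    {"name": "tape-library",   "eol": date(2015, 1, 1),  "eos": date(2018, 1, 1)},
]

today = date.today()
for asset in assets:
    if asset["eos"] <= today:
        # Past EOS: no vendor support or security updates -- isolate or replace.
        print(f"{asset['name']}: past EOS; isolate from remote access")
    elif asset["eol"] <= today:
        print(f"{asset['name']}: past EOL; plan replacement while support lasts")
```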
After the asset and data retention policies have been created, personnel must be trained to comply with these policies. Auditing and monitoring should be configured to ensure data retention policy compliance. Periodically, data owners and processors should review the data retention policies to determine whether any changes need to be made. All data retention policies, implementation plans, training, and auditing should be fully documented. In addition, IT support staff should work to ensure that the assets on which the data is stored are kept up to date with the latest security patches and updates.
Remember that it is not possible to find a one-size-fits-all solution for all organizations because of the different types of information, assets, or data. Only those most familiar with each asset or data type can determine the best retention policy for that asset or data. Although a security professional should be involved in the design of the retention policies, the security professional is there to ensure that security is always considered and that retention policies satisfy organizational needs. The security professional should act only in an advisory role and should provide expertise when needed.
Data Security Controls
Now it is time to discuss the data security controls that organizations must consider as part of a comprehensive security plan. Security professionals must understand the following as part of data security controls: data security, data states (data at rest, data in transit, and data in use), data access and sharing, data storage and archiving, baselines, scoping and tailoring, standards selection, and cryptography.
Data Security
Data security includes the procedures, processes, and systems that protect data from unauthorized access. Unauthorized access includes unauthorized digital and physical access. Data security also protects data against any threats that can affect data confidentiality, integrity, or availability.
To provide data security, an organization should implement security using a defense-in-depth strategy, as discussed in Chapter 1. If any single layer of defense is neglected, data is at risk. For example, you can implement authentication mechanisms to ensure that users must authenticate before digitally accessing the network. But if you do not have the appropriate physical security controls in place to prevent unauthorized access to your facility, an attacker can gain access to your network simply by connecting an unauthorized device to it.
Security professionals should make sure their organization implements measures and safeguards for any threats that have been identified. In addition, security professionals must remain vigilant and constantly be on the lookout for new threats.
Data States
Three basic data states must be considered as part of asset security. These three states are data at rest, data in transit, and data in use. Security professionals must ensure that controls are implemented to protect data in all three of these states.
Data at Rest
Data at rest is data that is being stored and not actively used at a given point in time. While data is at rest, security professionals must protect its confidentiality, integrity, and availability. Confidentiality can be provided by implementing data encryption. Integrity can be provided by implementing the appropriate authentication mechanisms and ACLs so that only authenticated, authorized users can edit data. Availability can be provided by implementing a fault-tolerant storage solution, such as RAID.
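As a sketch of confidentiality for data at rest, the following example encrypts data with a symmetric key using the third-party cryptography package (an assumption; any vetted encryption library would serve). Key management, which is the hard part in practice, is reduced here to a single comment.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()   # in practice, store in a key management system
cipher = Fernet(key)

plaintext = b"quarterly payroll records"
token = cipher.encrypt(plaintext)        # ciphertext is safe to write to disk
assert cipher.decrypt(token) == plaintext
```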
Data in Transit
Data in transit is data that is being transmitted over a network or sent via a physical medium, such as a DVD or flash drive. While data is in transit, security professionals must protect its confidentiality, integrity, and availability. Confidentiality can be provided by implementing link encryption or end-to-end encryption. As with data at rest, authentication and ACLs help protect the integrity of data in transit. Availability can be provided by implementing server farms and dual backbones.
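End-to-end protection for data in transit is commonly provided by TLS. The following minimal Python sketch uses the standard library ssl module; the endpoint is illustrative.

```python
import socket
import ssl

context = ssl.create_default_context()   # validates certificates by default

# Illustrative endpoint; any TLS-enabled service would work.
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())  # e.g., TLSv1.3
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200))
```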
Data in Use
Data in use is data that is being accessed or manipulated in some way, including editing the data and compiling it into reports. The main issues with data in use are ensuring that only authorized individuals can access or read the data and that only authorized changes are made to it. Confidentiality can be provided by using privacy or screen filters to prevent unauthorized individuals from reading data on a screen, and by implementing a document shredding policy for all reports that contain PII, PHI, proprietary data, or other confidential information. Integrity can be provided by implementing the appropriate controls on the data: data locks can prevent data from being changed, and data rules can ensure that changes occur only within defined parameters. For certain data types, organizations may decide to implement two-person controls so that data changes are both entered and verified. Availability can be provided using the same strategies as for data at rest and data in transit. In addition, organizations may wish to implement locks and views to ensure that users obtain the most up-to-date version of the data they need. Of the three states, data in use is the most difficult to protect.
Data Access and Sharing
Personnel must be able to access and share data in their day-to-day duties. This data usage starts when the data owner approves access for a user. The data custodian then gives the user the appropriate permissions for the data. But these two steps are an oversimplification of the process. Security professionals must ensure that the organization understands issues such as the following:
Are the appropriate data policies in place to control the access and use of data?
Do the data owners understand the access needs of the users?
What are the different levels of access needed by the users?
Which data formats do the users need?
Are there subsets of data that should have only restricted access for users?
Of the data being collected, is there clearly identified private versus public data?
Is data being protected both when it is at rest and when it is in transit?
Are there any legal or jurisdictional issues related to data storage location, data transmission, or data processing?
While the data owners and data custodians work together to answer many of these questions, security professionals should be involved in guiding them through this process. If a decision is made to withhold data, the decision must be made based on privacy, confidentiality, security, or legal/regulatory restrictions. The criteria by which these decisions are made must be recorded as part of an official organizational data policy.
Data Storage and Archiving
Data storage and archiving are related to how an organization stores data—both digital data and physical data in the form of hard copies. It is very easy for data to become outdated. Once data is outdated, it is no longer useful to the organization.
Although data storage used to be quite expensive, it has become cheaper in recent years. Security professionals should work with data owners and data custodians to help establish a data review policy to ensure that data is periodically reviewed to determine whether it is needed and useful for the organization. Data should be archived in accordance with data retention policies and schedules. Data that is no longer needed or useful for the organization should be properly destroyed. The exception is data that has been archived and must be kept for a certain duration based on a set retention policy period, especially data that may be on legal hold.
When considering data storage and archiving, security professionals need to ensure that the different aspects of storage are properly analyzed to ensure appropriate deployment. This includes analyzing server hardware and software, database maintenance, data backups, and network infrastructure. Each part of the digital trail that the data will travel must be understood so that the appropriate policies and procedures can be put into place to ensure asset privacy.
Data that is still needed and useful to the organization should remain in primary storage for easy access by users. Data marked for archiving must be moved to some sort of backup media or secondary storage. Organizations must determine the form of data archive storage that will best suit their needs. For some business units in the organization, it may be adequate to archive the data to magnetic tape or optical media, such as DVDs. With these forms of storage, restoring the data from the archive can be a laborious process. For business units that need an easier way to access the archived data, some sort of solid-state or hot-pluggable drive technology may be a better way to go.
No matter which media your organization chooses for archival purposes, security professionals must consider the costs of the mechanisms used and the security of the archive. Storing archived data that has been backed up to DVD in an unlocked file cabinet may be more convenient for a business unit, but it does not provide any protection of the data on the DVD. In this case, the security professional may need to work with the business unit to come up with a more secure storage mechanism for data archives. When data is managed centrally by the IT or data center staff, personnel usually better understand security issues related to data storage and may therefore not need as much guidance from security professionals.
Baselines
One practice that can make maintaining security simpler is to create and deploy standard images that have been secured with security baselines. A baseline is a set of configuration settings that provides a floor of minimum security for the image being deployed. Organizations should capture baselines for all devices, including network devices, client and host computers, and virtual machines.
Baselines can be controlled through the use of Group Policy in Windows. These policy settings can be made in the image and applied to both users and computers. These settings are refreshed periodically through a connection to a domain controller and cannot be altered by the user. It is also quite common for the deployment image to include all of the most current operating system updates and patches.
When a network makes use of these technologies, the administrators have created a standard operating environment. The advantages of such an environment include more consistent network behavior and simpler support. System scans should be performed regularly, such as weekly, to detect changes from the baseline.
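A periodic scan of this kind boils down to comparing current settings against the approved baseline. The following Python sketch assumes settings can be read into a dictionary; the setting names and values are illustrative, not taken from any particular hardening guide.

```python
# Illustrative baseline; real baselines come from hardening guides or Group Policy.
BASELINE = {"password_min_length": 14,
            "smbv1_enabled": False,
            "guest_account_enabled": False}

def detect_drift(current):
    """Report every setting that deviates from the approved baseline."""
    return {k: (BASELINE[k], current.get(k))
            for k in BASELINE if current.get(k) != BASELINE[k]}

scanned = {"password_min_length": 8, "smbv1_enabled": False,
           "guest_account_enabled": True}
for setting, (expected, actual) in detect_drift(scanned).items():
    print(f"DRIFT: {setting}: expected {expected}, found {actual}")
```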
Security professionals should help guide their organization through the process of establishing baselines. If an organization implements very strict baselines, it will provide a higher level of security that may actually be too restrictive. If an organization implements a very lax baseline, it will provide a lower level of security that will likely result in security breaches. Security professionals should understand the balance between protecting organizational assets and allowing users access, and they should work to ensure that both ends of this spectrum are understood.
Scoping and Tailoring
Scoping and tailoring are closely tied to the baselines. These processes allow an organization to narrow its focus to identify and address the appropriate risks.
Scoping narrows the set of baseline security controls by identifying which controls apply to a given system and which can be set aside because they do not apply. Baseline security controls are the minimums that are acceptable to the organization.
Tailoring then adapts the selected controls so that they more closely match the needs of the organization. For both scoping and tailoring decisions, documentation should record which security controls were considered, whether they were adopted, and how each decision was made.
NIST SP 800-53, which is covered extensively in Chapter 1, provides some guidance on tailoring.
Standards Selection
Because organizations need guidance on protecting their assets, security professionals must be familiar with the standards that have been established. Many standards organizations have been formed, including NIST, the U.S. Department of Defense (DoD), and the International Organization for Standardization (ISO).
U.S. DoD Instruction 8510.01 establishes the Risk Management Framework (RMF) for DoD information systems, which replaced the department's earlier certification and accreditation process.
ISO works with the International Electrotechnical Commission (IEC) to establish many standards regarding information security. The ISO/IEC standards that security professionals need to understand are covered in Chapter 1.
Security professionals may also need to research other standards, including those from the European Union Agency for Cybersecurity (ENISA, formerly the European Network and Information Security Agency), the European Union (EU), and the U.S. National Security Agency (NSA). It is important that the organization research the many standards available and apply the most beneficial guidelines based on its needs.
Data Protection Methods
Data is protected in a variety of ways. Security professionals must understand the different data protection methods and know how to implement them. Data protection methods should include administrative (managerial), logical (technical), and physical controls. All types of controls are covered extensively in Chapter 1.
The most popular method of protecting data and ensuring its integrity is cryptography. However, security professionals should also understand Digital Rights Management (DRM), Data Loss Prevention (DLP), and Cloud Access Security Broker (CASB) technologies.
Cryptography
Cryptography, applied here in the form of encryption, can provide different protection depending on the communication level at which it is used. The two encryption communication levels are link encryption and end-to-end encryption.
Link Encryption
Link encryption encrypts all the data that is transmitted over a link. In this type of communication, the only portion of the packet that is not encrypted is the data-link control information, which is needed to ensure that devices transmit the data properly. All the information is encrypted, with each router or other device decrypting its header information so that routing can occur and then re-encrypting before sending the information to the next device.
If the sending party needs to ensure that data security and privacy are maintained over a public communication link, link encryption should be used. This is often the method used to protect email communication or employed when banks and other institutions must send confidential data over the Internet.
Link encryption protects against packet sniffers and other forms of eavesdropping and occurs at the data link and physical layers of the OSI model. Advantages of link encryption include the following:
All the data is encrypted.
No user interaction is needed for it to be used.
Disadvantages of link encryption include the following:
Each device that the data must be transmitted through must receive the key.
Key changes must be transmitted to each device on the route.
Packets are decrypted at each device.
End-to-End Encryption
End-to-end encryption encrypts less of the packet than link encryption does. In end-to-end encryption, packet routing information, headers, and addresses are not encrypted. As a result, an attacker who acquires a packet through sniffing or eavesdropping can obtain more information than with link encryption.
End-to-end encryption has several advantages. A user usually initiates end-to-end encryption, which allows the user to select exactly what gets encrypted and how. It also affects the performance of devices along the route less than link encryption does, because intermediate devices do not have to decrypt and re-encrypt the packet in order to route it.
Digital Rights Management (DRM)
Digital rights management (DRM) is a mechanism that provides protection for copyrighted works. Using DRM, an organization or individual controls the use, modification, and distribution of copyrighted material.
When DRM is deployed, a DRM license is issued to grant access to the copyrighted material. The license defines the user's terms and usually includes a decryption key for the copyrighted material. Many DRM implementations have an always-on requirement, also referred to as persistent online authentication, that requires an Internet connection to access the copyrighted material; the mechanism periodically checks the connection to the authentication server and blocks the use of the copyrighted material if the connection fails. DRM also provides an audit trail that tracks usage of copyrighted material and detects concurrent usage. Finally, automatic expiration can be used so that copyrighted material is shared as a subscription that blocks access after a certain date.
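To make these license mechanics concrete, the following minimal Python sketch models a hypothetical license record with an expiration date and an always-on check. The record structure and field names are illustrative assumptions, not any vendor's actual DRM format.

```python
from datetime import date

# Hypothetical DRM license record: terms plus a decryption key reference.
license_record = {"user": "alice",
                  "key_id": "k-1234",          # reference into a key store
                  "expires": date(2023, 12, 31),
                  "max_devices": 3}

def license_valid(lic, online_check_ok, today=None):
    """Access requires an unexpired license and a successful online check
    (the 'always-on' / persistent online authentication requirement)."""
    today = today or date.today()
    return online_check_ok and today <= lic["expires"]

print(license_valid(license_record, online_check_ok=True,
                    today=date(2023, 6, 1)))   # True
```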
Pirating is the unauthorized duplication and distribution of copyrighted works, including written material, videos, games, and software. DRM can be used to prevent the copying, printing, and forwarding of protected works.
Digital watermarking can be used to detect, but not prevent, copying of protected material. In addition, vendors sometimes insert metadata in the file that helps to identify the original purchaser of the protected content.
Each feature in DRM has advantages and disadvantages. Security professionals should work with management to perform a full risk analysis prior to deploying a DRM solution and include only those features that are needed.
Data Loss Prevention (DLP)
Data loss prevention (DLP) software scans communication to determine whether protected or confidential data is being exfiltrated. If the data being sent violates the configured DLP rules, the communication is blocked. The DLP system looks for data patterns or keywords based on the rules that a security administrator configures.
Two types of DLP systems are used: network-based and endpoint-based DLP. Network-based systems exist on a network edge and scan all data leaving the organization’s network. An endpoint-based system exists on an endpoint device, such as a file server, and can be configured to scan all data stored on the endpoint or leaving the endpoint in any manner, including printing and copying to a USB flash drive. Administrators can configure alerts so that any violation results in immediate notification of appropriate personnel.
DLP systems can examine zipped files but cannot decrypt or examine encrypted data. They can, however, scan an internal or cloud network and perform data discovery, which scans all data and reports the location of the data back to administrators. This report can then be used to configure the appropriate DLP rules to protect any newly discovered data. Data discovery scans should be performed on a regular basis.
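As a sketch of the pattern-matching approach described above, the following Python snippet scans outbound text against simplified rules. The patterns are deliberately naive illustrations; production DLP rules add validation such as Luhn checks for card numbers.

```python
import re

# Simplified, illustrative patterns; real DLP rules are far more precise.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_outbound(text):
    """Return the names of the DLP rules the outbound text violates."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

message = "Customer SSN 123-45-6789 attached for processing."
violations = scan_outbound(message)
if violations:
    print(f"BLOCKED: message matches DLP rules {violations}")
```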
Cloud Access Security Broker (CASB)
A cloud access security broker (CASB) is an application that monitors activity between users and cloud-based resources in order to apply the organization's security policies. A CASB can enforce authentication and authorization controls, encrypt stored data, log all access, alert on suspicious activity, and provide DLP services.
Exam Preparation Tasks
As mentioned in the section “About the CISSP Cert Guide, Fourth Edition” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 9, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep practice test software.
Review All Key Topics
Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 2-1 lists a reference of these key topics and the page numbers on which each is found.
Table 2-1 Key Topics for Chapter 2
| Key Topic Element | Description | Page Number |
| --- | --- | --- |
| List | NIST SP 800-122 recommendations to effectively protect PII | 171 |
| List | Private sector classifications | 175 |
| List | Military and government classifications | 176 |
| | Asset Life Cycle | 179 |
| List | Information life cycle | 181 |
Define Key Terms
Define the following key terms from this chapter and check your answers in the glossary:
access control list (ACL)
aggregation
atomicity
authentication
availability
base relation
baseline
candidate key
cardinality
certification
Cloud Access Security Broker (CASB)
column or attribute
confidentiality
consistency
contamination
criticality
cryptography
data criticality
data custodian
data loss prevention (DLP)
data mining
data owner
data processors
data purging
data quality
data sensitivity
data structure
data warehouse
data warehousing
database locks
database views
defense in depth
degree
Digital Rights Management (DRM)
domain
durability
end of life (EOL)
end of support (EOS)
end-of-service life (EOSL)
EPHI
foreign key
guideline
hierarchical database
inference
intangible assets
integrity
International Electrotechnical Commission (IEC)
International Organization for Standardization (ISO)
isolation
Java Database Connectivity (JDBC)
liability
network-attached storage (NAS)
Object Linking and Embedding Database (OLE DB)
object-oriented programming (OOP)
object-oriented database (OODB)
Open Database Connectivity (ODBC)
personally identifiable information (PII)
policy
polyinstantiation
protected health information (PHI)
record
referential integrity
relation
relational database
remanence
row
schema
standard
system owner
tangible assets
view
Answer Review Questions
1. What is the highest classification level for data in the U.S. military?
a. Confidential
b. Top Secret
c. Private
d. Sensitive
2. Who is responsible for deciding which users have access to data?
a. Business owner
b. System owner
c. Data owner
d. Data custodian
3. Which term is used for the fitness of data for use?
a. Data sensitivity
b. Data criticality
c. Data quality
d. Data classification
4. What is the highest level of data classification for private sector systems?
a. Public
b. Sensitive
c. Private
d. Confidential
5. What is the first phase of the information life cycle?
a. Maintain
b. Use
c. Distribute
d. Create/receive
6. Which organizational role owns a system and must work with other users to ensure that data is secure?
a. Business owner
b. Data custodian
c. Data owner
d. System owner
7. What is the last phase of the information life cycle?
a. Distribute
b. Maintain
c. Dispose/store
d. Use
Answers and Explanations
1. b. Military and governmental entities classify data using five main classification levels, listed from highest sensitivity level to lowest:
Top Secret
Secret
Confidential
Sensitive but unclassified
Unclassified
2. c. The data owner is responsible for deciding which users have access to data.
3. c. Data quality is the fitness of data for use.
4. d. Private sector systems usually use the following classifications, from highest to lowest:
Confidential
Private
Sensitive
Public
5. d. The phases of the information life cycle are as follows:
Create/receive
Distribute
Use
Maintain
Dispose/store
6. d. The system owner owns a system and must work with other users to ensure that data is secure.
7. c. The phases of the information life cycle are as follows:
Create/receive
Distribute
Use
Maintain
Dispose/store