Information security risk management serves organizations best when it is proactive rather than reactive. A reactive risk management program identifies a risk only after the organization has been affected by it, possibly after a risk event has already occurred. This is obviously troublesome because a single risk event can severely damage an organization or even put it out of business. Proactive risk management programs identify risk before the organization has been affected, which means mitigating controls can be put into place prior to the first attack or potential event. These controls can either prevent the risk event from occurring or reduce the impact of the event.
Following up on my prior post, I want to discuss in a little more depth the data collection process we use. Information risk identification is a critical step in a proactive information security risk management program. As part of collecting the data used to identify risk, we consider a wide range of data sources, including internal data, external data, historical data, current organizational and industry trends, and forward-looking business change. Some specific sources include:
- Internal incident data, which highlights current problem areas and the magnitude of losses
- Internal security/risk professionals, who provide a subject matter expert's perspective
- Internal business contacts, who provide insight into upcoming changes in the business
- Internal monitoring systems, which provide insight into your environment
- Industry trends and incident data, which help map the emerging risk landscape
- Regulatory standards, which provide current and future compliance baselines
We look at these sources as pieces of information that aid in developing a more comprehensive information security risk profile. They tell us about our current environment and its problems, the external environment and the problems similar companies are seeing, and the upcoming business changes that will affect our information security risk posture.
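To make the idea of combining these sources into one profile concrete, here is a minimal sketch in Python. The source names and data items are purely illustrative assumptions, not our actual feeds; the point is simply that items from different sources are tagged and grouped into a single view.

```python
from collections import defaultdict

# Hypothetical collected data items, each tagged with its source.
# Source names and items are illustrative only.
collected = [
    {"source": "internal_incidents", "item": "phishing-related loss"},
    {"source": "industry_trends", "item": "rise in ransomware reports"},
    {"source": "business_contacts", "item": "planned cloud migration"},
    {"source": "internal_incidents", "item": "lost laptop"},
]

def build_profile(items):
    """Group collected data items by source to form one combined profile view."""
    profile = defaultdict(list)
    for entry in items:
        profile[entry["source"]].append(entry["item"])
    return dict(profile)

profile = build_profile(collected)
```

Each source contributes its own slice of the picture; viewing them together is what makes the profile more comprehensive than any single feed.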
The volume of data can be large, so efficiency is important. We try to automate data pulls and data analysis where possible. For example, when pulling raw data from systems, we automate the analysis so that trends are the final output. It is also important for us to validate our contacts as subject matter experts in their areas before engaging them for information.
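As a sketch of what "trends are the final output" might look like, the following reduces a pile of raw incident records to per-month counts and a simple rising/not-rising signal. The record shape and dates are assumptions for illustration; a real pull would come from a ticketing or monitoring system.

```python
from collections import Counter
from datetime import date

# Hypothetical raw incident records pulled from a monitoring system.
raw_incidents = [
    {"opened": date(2024, 1, 14)}, {"opened": date(2024, 1, 30)},
    {"opened": date(2024, 2, 9)},  {"opened": date(2024, 2, 17)},
    {"opened": date(2024, 2, 25)}, {"opened": date(2024, 3, 8)},
    {"opened": date(2024, 3, 12)}, {"opened": date(2024, 3, 21)},
    {"opened": date(2024, 3, 29)},
]

def monthly_trend(incidents):
    """Reduce raw records to per-month counts so the trend is the final output."""
    counts = Counter(i["opened"].strftime("%Y-%m") for i in incidents)
    months = sorted(counts)
    series = [counts[m] for m in months]
    # Flag a rising trend when each month's count exceeds the previous one.
    rising = all(b > a for a, b in zip(series, series[1:]))
    return months, series, rising

months, series, rising = monthly_trend(raw_incidents)
```

Automating this reduction means an analyst reviews a trend line rather than thousands of raw records.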
As data comes in from our sources, red flags begin to appear. Red flags are potential problem areas that may represent emerging risk. As they appear, we investigate further, trying to identify the underlying facts and assumptions. What is the issue? What is the root cause? What is its scope? How does it affect our organization? Further fact gathering is conducted to understand the flagged area and determine whether or not it is an area of risk for our organization.
The facts and assumptions we discover determine which red flags are real risks for our organization that we have to manage. Other red flags turn out to be of little concern and can be monitored or archived as appropriate.
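The triage step above can be sketched as a simple routing decision. The field names, statuses, and decision rules here are illustrative assumptions, not our actual tooling; the sketch just captures the three outcomes: manage as a real risk, keep under monitoring, or archive.

```python
from dataclasses import dataclass

# Hypothetical statuses; real programs will have their own taxonomy.
RISK, MONITOR, ARCHIVED = "risk", "monitor", "archived"

@dataclass
class RedFlag:
    issue: str
    confirmed_exposure: bool = False   # facts show a real, unmanaged exposure
    well_controlled: bool = False      # exposure exists but controls are effective
    status: str = "open"

def triage(flag: RedFlag) -> RedFlag:
    """Route an investigated red flag: manage it, monitor it, or archive it."""
    if flag.confirmed_exposure:
        flag.status = RISK          # a real risk the organization must manage
    elif flag.well_controlled:
        flag.status = MONITOR       # well controlled today; revisit periodically
    else:
        flag.status = ARCHIVED      # little concern; keep for the record
    return flag
```

The search-engine example below would land in the "monitor" bucket: an exposure exists, but effective controls keep it well managed.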
One example of this process for our team involved an internal search engine. Through a series of interviews, we heard concerns that the engine was accessing sensitive information inappropriately, so we logged the concern as a red flag. Risk assessors on our team were asked to investigate the issue further. Through our assessment, we determined that while the search engine could technically reach sensitive information, controls were in place to prevent it from accessing that data inappropriately. In this case a well-controlled risk existed, so it was logged and flagged as something to monitor and potentially revisit in the future to ensure the controls remain effective.