The Three Main Concepts

1 Incident Prioritization

This chapter introduces the three main concepts that underpin incident prioritization: related work on incident prioritization, an understanding of risk, and methods for prioritization based on decision theory.


Section 3.1 introduces definitions of the terminology used in this chapter and continues with details of the methods and techniques used to prioritize incidents. Since risk assessment is widely used as a criterion and method in incident prioritization, Section 3.2 discusses the concept of risk and the methods used to evaluate it. Section 3.3 presents the relationship between decision theory and the theory of measurement in the general process of prioritization. Finally, in order to give a general view of prioritization, Section 3.4 discusses prioritization theory, including some potential methods for the framework.

1.1 Prioritization and Incident

In the previous chapter, this report presented a brief new concept of multiple types of responses. With multiple response types, a security analyst or automated system has the privilege of selecting an appropriate response to react to an incident. Therefore, whether manual or automatic, the process of selecting an appropriate response needs to be carried out carefully, systematically and properly. One approach that can be used to select appropriate responses is to arrange or prioritize incidents. Based on the result of the incident prioritization process, each incident can be valued and then mapped to an appropriate response. For example, a high-priority incident can be mapped to an active response and a low-priority incident to a passive response.
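The mapping just described can be sketched as follows. This is a minimal illustration, not a design from the cited works; the priority labels and response names are assumptions.

```python
# Illustrative sketch: mapping incident priority levels to response
# types, as described in the text. Labels and responses are assumptions.

RESPONSE_MAP = {
    "high": "active",     # e.g. block the source or terminate the connection
    "medium": "active",   # e.g. rate-limit or increase monitoring
    "low": "passive",     # e.g. log the event and notify the analyst
}

def select_response(priority: str) -> str:
    """Return the response type mapped to a given incident priority."""
    # Default to a passive response for unrecognised priority labels.
    return RESPONSE_MAP.get(priority, "passive")
```

In practice the mapping would be richer than a static table, but it captures the idea that the prioritization result drives response selection.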

Before this report extends the discussion on how to design the initial framework for incident prioritization and for enhancing IRSs, three main topics need to be discussed. The topics discussed in this chapter are related work on incident prioritization, understanding risk, and methods for prioritization using decision theory.

Before the discussion proceeds further, it is necessary to clarify some basic terminology used throughout this chapter. Extrapolating from the definitions and usages in (Porras et al., 2002; Gupta et al., 2003; Covey, 2004; Karlsson et al., 2004; Karlsson et al., 2007; Alsubhi et al., 2008; Cukier and Panjwani, 2009; Mind Tools Ltd, 2009b), the term prioritization can be defined as:

Prioritization is the process of producing a ranked list according to goals, suitable criteria and available resources.

In addition, extrapolating from the definitions and usages in (Yuill et al., 2000; Salomon and Elsa, 2004; Stephenson, 2005; Wiant, 2005; Hacking-Lexicon, 2009), the term incident can be defined as:

An event of a cyber attack; nothing more than a successful threat delivered against a vulnerability.

Therefore, combining the incident and prioritization terminology:

Incident prioritization is the process of ranking incidents, either as an ordered list or in groups, according to the needs and goals of the organisation, the criteria and methods used to evaluate incidents, and the resources that influence those criteria and goals.

Extending this terminology, the published literature (Porras et al., 2002; Qin and Lee, 2003; Alsubhi et al., 2008; Dondo, 2008) shows that several approaches can be used to prioritize incidents: signature tuning, vulnerability prioritization and incident prioritization.

1.1.1 Signature Tuning

Signature tuning is the process of eliminating unnecessary rules and signatures. It optimizes prioritization by editing and tagging priorities on the rules and signatures. For example, Snort 2.1 provides the Severity Identifier Option, which can be used to set a priority tag on rules (Caswell and Beale, 2004). The process of tagging priorities in Snort uses two different approaches:

  • tagging within the signature itself, and
  • using a customizable configuration file named classification.config.
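As a brief illustration of the two tagging approaches, the following sketch shows Snort-style syntax: a priority set directly in a rule, and a classification.config entry that assigns a default priority to every rule of a given class. The rule itself and its SID are made up for illustration.

```
# Priority tagged directly within a signature (illustrative rule):
alert tcp any any -> $HOME_NET 80 (msg:"Example admin-access attempt"; \
    classtype:attempted-admin; priority:1; sid:1000001; rev:1;)

# Entry in classification.config assigning a default priority
# (1 = highest) to every rule tagged with this classtype:
config classification: attempted-admin,Attempted Administrator Privilege Gain,1
```

A priority option inside a rule overrides the default priority inherited from its classtype.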

Furthermore, some commercial products practice a similar approach to Snort. For example, Cisco Systems Inc (2009) provides an alert-severity as one of the parameters in the signature rules of its signature engines.

The advantage of this approach is that it offers a pre-prioritization process that benefits proactive response. The pre-prioritization process estimates the potential impact and severity of a specific vulnerability based on the characteristics of the vulnerability itself.

On the other hand, the drawbacks of this approach are that:

  • In some cases, uncontrolled and simplistic tuning of signatures could increase the risk of missing real attacks (Tjhai et al., 2008).
  • Since it uses static signatures and rules, only particular known attacks and incidents can be prioritized. New and unknown vulnerabilities must be analysed later or cannot be prioritized at all.
  • In a large, heterogeneous and distributed network, the process of tagging rules and signatures might require different tags for different locations.
  • Since tagging is a manual process, extra time is needed to configure and tune the signatures.
  • Although tagging can be minimized by grouping attack classifications, an additional process is needed to classify new attacks; therefore, again, extra time is required.
  • There is no clear guideline, method or technique for assigning priority to a specific attack or group. In practice, the security analyst defines the priority based on the current situation or other indicators.

1.1.2 Vulnerability Prioritization

This approach is quite similar to signature tuning in that it performs prioritization before an intrusion is detected. Implemented and researched by Dondo (2008), instead of waiting for an incident to be detected, this approach calculates the potential risk of future incidents. For example, Dondo (2008) applied a fuzzy-system approach with some additional key risk indicators (KRIs) in assessing potential risk. The estimation of risk is done by associating three criteria: computer network assets, attacks and vulnerabilities (Qin and Lee, 2003). Furthermore, the Common Vulnerability Scoring System (CVSS) (Mell et al., 2009) also provides a severity impact score for particular known vulnerabilities, and using this score, vulnerabilities can be ranked and prioritized before any incident happens.
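The CVSS-based ranking just mentioned amounts to sorting known vulnerabilities by severity score. A minimal sketch, with made-up identifiers and scores:

```python
# Illustrative sketch: ranking known vulnerabilities by CVSS base score
# before any incident occurs. Identifiers and scores are assumptions.

vulnerabilities = [
    {"id": "VULN-A", "cvss": 4.3},
    {"id": "VULN-B", "cvss": 9.8},
    {"id": "VULN-C", "cvss": 6.5},
]

# Highest severity first: the top of the list gets the highest priority.
ranked = sorted(vulnerabilities, key=lambda v: v["cvss"], reverse=True)
priority_list = [v["id"] for v in ranked]
```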

The advantages of this approach are similar to those of the signature tuning approach, with the following additions:

  • This approach still offers a pre-prioritization process that benefits proactive response.
  • This approach reduces manual configuration by using semi-automatic and automatic processes.
  • This approach applies clear indicators in estimating the risk of a potential incident; for example, it uses assets, attacks and vulnerabilities in calculating the potential risk.

However, the approach also has inherent drawbacks, as follows:

  • Since the process is done before any incident, the potential risks of all vulnerabilities may have to be estimated, which might itself require an additional process.
  • The approach evaluates only known vulnerabilities; therefore, a new incident exploiting a new and unknown vulnerability cannot be prioritized.
  • The estimated ranking is based solely on assumptions and probabilities, and sometimes does not reflect the current asset environment.
  • Many indicators are applied in estimating the risk of a potential incident; therefore, collecting and gathering the information needed to estimate these indicators may require additional processes and tasks.
  • It does not adapt to changes in time and environment. The estimation is made at a specific time and does not change automatically even when new patches or solutions for the threat appear, which can sometimes change the original priority.

1.1.3 Incident Prioritization

Alternatively, unlike vulnerability prioritization and signature tuning, incident prioritization focuses on investigating and evaluating incidents based on the level of potential risk both before and after an incident happens. Popularized and implemented by Porras et al. (2002), Qin and Lee (2003), Yu et al. (2004) and Zhang et al. (2009), this approach uses risk assessment in prioritizing incidents. For example:

  • In order to rank the stream of security incidents produced by IDS sensors, Porras et al. (2002) introduced an incident-ranking computational model in a “mission-impact-based” correlation engine, also known as the M-Correlator.
  • Adapting the M-Correlator, Qin and Lee (2003) simplified it and proposed a priority computational model based on a Bayesian network.
  • As an alternative to the M-Correlator, the Concurrent Engineering Research Center developed a general collaborative architecture for multiple IDS products by combining two elements: intelligent agents and knowledge-based alert evaluation (Yu et al., 2004).
  • Using a quite similar approach to evaluating risk and differentiating responses in intrusion response, Zhang et al. (2009) discussed the possibility of prioritizing alerts by estimating attack impacts using decision theory and cost-benefit analysis.

Since this approach operates both before and after an incident, it inherits some advantages of the previous approaches and reduces their drawbacks. The advantages of adopting incident prioritization are that:

  • The approach still offers a pre-prioritization process that benefits proactive response, but also offers a post-prioritization process with real-time risk assessment of incidents.
  • This approach also reduces manual configuration by using semi-automatic and automatic processes.
  • This approach also applies clear indicators to estimate the risk of a potential incident, using assets, attacks and vulnerabilities in calculating the potential risk.
  • Unlike the previous approaches, this approach evaluates not only known vulnerabilities but also new and unknown ones.
  • Since part of the process is done after the incident, it reduces processing time by processing only the related incidents rather than all vulnerabilities.
  • The estimated ranking is based not only on assumptions and probabilities but also on the current asset environment.
  • It can sometimes be applied and adjusted according to changes in time and environment. The estimation is made at a specific time but can sometimes change automatically based on new patches or solutions for the threat; such changes in the time and environment variables help to revise the original priority.

On the other hand, the approach also has inherent drawbacks, as follows:

  • Many indicators are used in estimating the risk of a potential incident; therefore, collecting and gathering the information needed to estimate them may require additional processes, tasks and extra time.
  • The prioritization process could be affected by missing or unavailable information used to estimate the indicators.
  • When the estimation process uses more than one indicator, the weight of each indicator is not clearly defined. Sometimes an important indicator is treated the same as other, less important indicators.

In relation to security evaluation and risk assessment, Arnes et al. (2006) used a Hidden Markov Model (HMM) and Mu et al. (2008) introduced online risk assessment. Neither project discusses anything directly related to incident prioritization; both generally discuss how to estimate the risk of intrusions.

Moreover, evaluating risk using risk assessment promises significant results in prioritizing incidents and vulnerabilities, but factors other than risk can also be measured and used to prioritize incidents. One example is the cost-sensitive approach, in which a response is selected based on several cost factors such as operational cost, damage cost and response cost. This approach, introduced in (Lee et al., 2002; Stakhanova et al., 2007; Wang et al., 2007; Yaorui and Shufen, 2008), addresses the relationship between cost factors and responses, but unfortunately the research does not highlight incident prioritization or the prioritizing process. However, based on the relationship between the selection of a response and its costs, the projects have apparently prioritized incidents indirectly. Fundamentally, the cost-sensitive model introduced by Lee et al. (2002) suggested a qualitative approach to calculating the cost factors. The difference between damage cost and response cost helps an intrusion response system to determine the appropriate response: ignore, investigate, block or terminate the attack. Therefore, by differentiating responses, this can perhaps also be considered a form of incident prioritization.
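The damage-versus-response-cost comparison attributed above to Lee et al. (2002) can be sketched in a few lines. This is an illustrative simplification of the idea, not their model; the cost figures are assumptions.

```python
# Illustrative sketch of the cost-sensitive idea: act only when the
# expected damage cost exceeds the cost of responding. Figures are
# assumptions for illustration, not taken from Lee et al. (2002).

def worth_responding(damage_cost: float, response_cost: float) -> bool:
    """Respond actively only if the damage outweighs the response cost."""
    return damage_cost > response_cost

# An incident whose damage cost falls below the response cost is, in
# effect, given a lower priority (e.g. ignored or merely investigated).
decision = worth_responding(damage_cost=500.0, response_cost=100.0)
```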

1.1.4 Similarities and Limitations

In terms of the process of prioritizing incidents, the aforementioned approaches share some similarities, which can be categorised into three main perspectives:

    • The goals of the process

Based on the incident prioritization terminology given earlier, the goal of incident prioritization is either to produce a priority list of incidents or priority groups of incidents, based on the method of analysis, supporting indicators and resource constraints. In this particular context, the research conducted by Dondo (2008) applied a priority list for vulnerability prioritization, while (Porras et al., 2002; Qin and Lee, 2003; Alsubhi et al., 2008) prioritized by forming priority groups such as low, medium and high priority.

    • Method to estimate

In order to evaluate and investigate incidents, researchers (Porras et al., 2002; Qin and Lee, 2003; Alsubhi et al., 2008; Dondo, 2008) utilized a combination of attack probability and assets as the main criteria for evaluating incidents and vulnerabilities. Clearly, combining the probability of attack with assets can be done using risk assessment. However, there is another approach that creates a relationship between responses and costs using cost-sensitive analysis (Lee et al., 2002), and this relationship can apparently be seen as an initial process of incident prioritization.

  • The criteria used to evaluate the incident

Criteria used to evaluate an incident refer to the factors that influence the process of evaluating it. Based on the aforementioned approaches, the criteria are divided into two main groups: the resources and the indicators that influence the decision.

  • The resources.

The resources relate to current information, which can be divided into two types: asset information and attack information.

  1. Asset information relates to network information (e.g. IP address, protocol and port), software information (e.g. operating system, applications and services) and physical location (e.g. hardware and room). There is also other information, such as the cost of the asset and the policies applied to the asset to control threats and vulnerabilities. This information may support the process of estimating and evaluating the incident either directly or indirectly.
  2. Attack information relates to vulnerability information (e.g. severity score) and threat information (e.g. source, victim, time of occurrence).
  • The Indicators.

The indicators relate to uncertain factors derived from the combination of asset information and attack information. In general, they are also used to support the process of estimating risk. Uncertain factors such as sensor sensitivity, applicability of attacks, importance of victims, attack severity, asset criticality, relationships between alerts, service stability, social activity between attackers and victims, and cost (e.g. operational cost, damage cost and response cost) are commonly used to evaluate incidents, assessed either quantitatively or qualitatively.
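One common way to combine several such indicators is a weighted sum, which also exposes the weighting problem noted in the limitations below. A minimal sketch; the indicator names, weights and 0–1 scales are assumptions, not taken from the cited works.

```python
# Illustrative sketch: combining a few of the indicators listed above
# into a single incident score via a weighted sum. Names, weights and
# the 0-1 value scale are assumptions for illustration.

INDICATOR_WEIGHTS = {
    "attack_severity": 0.4,
    "asset_criticality": 0.3,
    "importance_of_victim": 0.2,
    "sensor_sensitivity": 0.1,
}

def incident_score(indicators: dict) -> float:
    """Weighted sum of indicator values, each expected in [0, 1]."""
    return sum(INDICATOR_WEIGHTS[name] * value
               for name, value in indicators.items())

score = incident_score({
    "attack_severity": 1.0,
    "asset_criticality": 0.5,
    "importance_of_victim": 0.5,
    "sensor_sensitivity": 1.0,
})  # 0.4*1.0 + 0.3*0.5 + 0.2*0.5 + 0.1*1.0 = 0.75
```

How the weights themselves should be chosen is exactly the open question raised by the limitations discussed next.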

In addition to their similarities, the aforementioned approaches also share some limitations:

  • As discussed before, misleading information and information constraints can sometimes produce low-confidence and incomplete results. If information for any indicator is unavailable, the risk assessment or other approaches may not be carried out properly, which will affect and jeopardise the whole process of incident prioritization.
  • When the estimation process uses more than one indicator, the weight of each indicator is not clearly defined. Sometimes an important indicator is treated the same as other, less important indicators.
  • The aforementioned approaches do not provide any specific method for prioritizing incidents. The current approaches simply use a high number for high priority and a low number for low priority; the details of arranging and selecting priorities are clearly not well defined.
  • The information used to assess an incident varies depending on the current input, which can be qualitative and sometimes quantitative. In the case of qualitative input, the estimation process is not easy to control. For example, an operational cost of $200K may be quoted as high for a small-to-medium enterprise but regarded as low for a large enterprise.
  • The current indicators involve too many uncertain factors. Sometimes these factors are estimated qualitatively before they can be used to prioritize incidents, yet the current approaches do not really discuss methods for reducing the uncertainty of those indicators.

Even though the current solutions still have some limitations in the process of incident prioritization, they can be considered acceptable approaches. Therefore, the initial framework for this research will try to reduce the current limitations and introduce a new solution to address them. The initial framework for incident prioritization will be presented at the end of this report.

However, the current concept of incident prioritization only helps in understanding the state of the art in the techniques and methods used to prioritize incidents, not the details of the processes. It is clear that one of the approaches used to prioritize incidents is risk assessment. Therefore, to continue the discussion, especially on the relationship between risk assessment and incident prioritization, the next section introduces the concept of risk and discusses how risk assessment can be used to help enhance the prioritization process.

1.2 Understanding Risk

As discussed in Section 3.1.3, incident prioritization can be carried out systematically using risk assessment. Hence, with the intention of giving an overview of the concept of risk, this section extends the discussion to the details of understanding risk and how to use risk assessment as one of the approaches for estimating risk and, finally, prioritizing incidents. The discussion covers not only the meaning of risk but also risk assessment and how to evaluate risk using quantitative and qualitative assessment. With the concept of risk, each incident can be evaluated, and with the result of the risk assessment, it can later be prioritized.

Literally, it is difficult to define an absolute meaning for the term ‘risk’. According to Kaplan (1997), there is no absolute meaning of risk, and it is better not to define one. Nevertheless, this report attempts to understand risk by giving it an appropriate meaning. The word ‘risk’ has long been used in many different contexts, such as business risk, social risk, economic risk, investment risk and political risk. Some works (Kaplan and Garrick, 1981; Kaplan, 1997; Hubbard, 2007) describe risk through the distinction and relationship between risk and uncertainty. Kaplan and Garrick (1981) symbolically defined this relationship, defining risk as the combination of uncertainty and damage. More precisely, Kaplan (1997) expressed risk as a complete set of three entities: the scenario, the likelihood of the scenario and the consequences of the scenario. Moreover, using the intersection of three areas (criticality, vulnerability and threat), Bass and Robichaux (2001) proposed a qualified operational risk. In many security books (Bejtlich, 2004; Jones and Ashenden, 2005; Davis et al., 2007), risk is defined as a combination of threat, vulnerability and asset value. Additionally, Pfleeger (2006) defined risk as a combination of the loss associated with an event (impact), the likelihood of the event occurring (problem) and the degree to which the impact can be changed or reduced (control). Similarly, in the British Standard, risk is identified as a combination of assets, threats, vulnerabilities and impact (BS ISO/IEC 27001, 2005).

Therefore, combining the above definitions, this report will use the following definition of risk:

Risk is a combination of the likelihood of event and the consequences of the event.

Additionally, this report elaborates on the meaning of the likelihood of the event and the consequences of the event.

Kaplan (1997) defines likelihood as frequency, probability, or a combination of the two. According to Kaplan (1997), frequency applies to a repetitive situation, while probability applies to a degree of confidence dictated by evidence; the combination of frequency and probability applies to uncertain repetitive situations. In addition, in risk management, probability is the degree of likelihood of the risk occurring (JISC Infonet, 2009). For example, some authors (Bejtlich, 2004; Jones and Ashenden, 2005; Davis et al., 2007) use the likelihood of threat and vulnerability as the probability in evaluating risk related to incidents and intrusions. Furthermore, in the British Standard, the realistic likelihood relates to security failures and is associated with the activities of threats, vulnerabilities, impacts on assets and controls (BS ISO/IEC 27001, 2005).

Therefore, in this report, the likelihood of the event is defined as:

The likelihood of the event is the probability of the event, estimated using a combination of threats and vulnerabilities.

On the other hand, the consequence of the event refers to the elements that have a direct impact linked to the event. For example, the direct impact of an incident or intrusion can be referred to as the impact on assets. This is clearly explained by Bejtlich (2004) and Davis et al. (2007), who associate the definition of risk with asset value. Additionally, the British Standard uses business impact and associates it with the loss of confidentiality, integrity and availability of assets (BS ISO/IEC 27001, 2005). Furthermore, the impact can be assessed in terms of its effects on time, cost and quality (JISC Infonet, 2009). Therefore, the impact on assets can be assessed by examining their criticality in terms of cost, time and quality.

Therefore, in this report, the consequence of the event is defined as:

The consequence of the event is the impact on assets, estimated using the current assets and controls.

To understand clearly how incidents can be prioritized using risk, this report extends the discussion to risk assessment.

1.2.1 Risk Assessment

In incident prioritization, risk assessment can be used to determine the relationship between the likelihood of the event (the probability of threat and vulnerability) and the consequences of the event (the impact on assets). Risk assessment is also widely used in other operations. For example, financial organisations use risk assessment to determine and identify any exposure that could potentially affect financial operations (Pfleeger, 2006). Unlike risk assessment for business operations, which focuses on the overall business process, risk assessment in the context of computer security focuses on assessing the likelihood of attack, the likelihood of adversary success given an attack, and the consequences given success, for a variety of threat scenarios (McGill, 2007).

Risk assessment also focuses on studies to locate security vulnerabilities and risks. The assessment normally includes an evaluation of both physical and technical risks, and uses personal interviews, vulnerability scans and on-site observation to evaluate risk (Fry and Nystrom, 2009). Moreover, Gregg and Kim (2005) linked risk assessment with risk management and defined it as the process of reducing risk to information technology assets. This can be done by identifying and eliminating threats through the deployment of security controls and countermeasures. Defined as the first process in the risk management methodology, risk assessment is used by an organisation to determine the extent of the potential threats and risks associated with an IT system (Stoneburner et al., 2002).

In a more specific context, particularly computer security risk management, Campbell and Sands (1979) proposed a modular approach, designed to provide a responsive framework according to the needs of the environment. The approach contains several assessments, including vulnerability analysis, threat analysis, risk assessment and control assessment.

In general, risk management is defined as a systematic process of identifying, analysing and responding to project risks. There are several sub-processes applicable in risk management; basically, the risk management model is divided into five sub-processes (Figure 5). Quoting from JISC Infonet (2009), risk management contains:

  1. Risk Identification
  2. Qualitative Risk Analysis
  3. Quantitative Risk Assessment
  4. Risk Response Planning
  5. Risk Monitoring and Control

However, the risk assessment steps proposed by Pfleeger (2006) are as follows:

  1. Identify assets.
  2. Determine vulnerabilities.
  3. Estimate likelihood of exploitation.
  4. Compute expected annual loss.
  5. Survey applicable controls and their costs.
  6. Project annual savings of control.

Additionally, the National Institute of Standards and Technology applies nine primary steps in risk assessment, where the 2nd, 3rd, 4th and 6th steps can be executed in parallel immediately after the 1st step has been completed (Stoneburner et al., 2002). The steps are:

  1. System Characterization
  2. Threat Identification
  3. Vulnerability Identification
  4. Control Analysis
  5. Likelihood Determination
  6. Impact Analysis
  7. Risk Determination
  8. Control Recommendations
  9. Results Documentation

Alternatively, instead of using many steps, another quantitative methodology, summarised by Hamdi and Boudriga (2005), involves only four steps:

  1. Asset analysis: Identifying assets and assigning values to them,
  2. Threat and vulnerability identification,
  3. Annual Loss Expectancy (ALE) calculation for each threat, and finally
  4. Safeguard selection.

Based on the aforementioned risk assessment steps, the process of risk assessment basically identifies the relationship between probability and impact, particularly between assets, vulnerabilities, threats, controls and countermeasures. Therefore, risk assessment can be simplified into three main steps, as follows:

1. Identify and list assets (or groups of assets), tangible or intangible, and then value each asset.

The first step of risk assessment is identifying and listing all the assets in the organisation. For example, this step can be done by creating a matrix of assets that contain valuable information and are critical to the organisation. Each asset can be valued manually by a security analyst or automatically by a computer system. In a network environment, for example, collecting information about assets is much easier than in normal business operations: the entire network can be scanned and operating-system fingerprinting software such as NMAP or Passive OS Fingerprinting can be run, so that devices directly attached to the network are listed easily. The assets are then valued, either quantitatively or qualitatively. In this respect, other factors can be used to estimate the value of an asset, such as bandwidth utilization; the number of users; the levels of confidentiality, integrity, availability and criticality; the number of services provided; and dependencies on other devices.

2. For each network asset, identify and associate threats and vulnerabilities.

The next step of risk assessment is identifying and associating threats and vulnerabilities with assets. Risk assessment research such as Porras et al. (2002), Zhang et al. (2009), Mu et al. (2008), Dondo (2008) and Alsubhi et al. (2008) shows how to identify and associate threats and vulnerabilities with assets. In addition, the response-oriented intrusion taxonomy introduced by Papadaki (2004) also shows a mapping between specific threats and vulnerabilities.

3. Calculate, quantitatively or qualitatively, the possible impact for each threat and vulnerability associated with each asset.

Finally, with the value of each asset and some additional calculation of the probability of threats and vulnerabilities, the potential risk for each threat associated with each asset can be estimated.
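The three simplified steps above can be sketched end-to-end in a few lines. This is a minimal illustration of the process, not a method from the cited works; the asset values, threats and likelihoods are assumptions.

```python
# Illustrative sketch of the three simplified risk assessment steps:
# (1) list and value assets, (2) associate threats and vulnerabilities,
# (3) compute risk = likelihood x impact. All figures are assumptions.

# Step 1: assets with a value (impact if compromised) on a 0-10 scale.
assets = {"web-server": 8, "workstation": 3}

# Step 2: threats associated with each asset, with an estimated
# likelihood of successful exploitation in [0, 1].
threats = [
    ("web-server", "SQL injection", 0.6),
    ("workstation", "malware", 0.4),
]

# Step 3: risk per (asset, threat) pair as likelihood x asset value.
risks = {(asset, threat): likelihood * assets[asset]
         for asset, threat, likelihood in threats}
```

Sorting the resulting risk values then yields the priority order of the associated incidents.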

In addition to the risk assessment process, there is a range of tools that can support it. Current tools include the CCTA (Central Computer and Telecommunications Agency) Risk Analysis and Management Method (CRAMM), COBRA and the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE). Most of the tools use similar methods and commonly use qualitative data to produce significant results. There are also tools produced by the Information Security Forum (ISF, 2009), such as Fundamental Information Risk Management (FIRM), the Information Risk Analysis Methodology (IRAM), Simple to Apply Risk Analysis (SARA) and the Simplified Process for Risk Identification (SPRINT).

However, when dealing with risk assessment, particularly in private and public organisations, there are now many new laws and mandates that affect the requirements and scope of risk assessment. These laws and mandates are also important in promoting security standards and guidelines for public and private organisations in organising and dealing with security matters; examples include the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the Federal Information Security Management Act (FISMA) and Section 404 of the Sarbanes-Oxley Act (SOX). Given the limitations of this research, however, the laws and mandates noted above will not be considered in designing the initial framework for this research project. Moreover, the aforementioned mandates and laws apply only to certain countries, especially the United States, and are not a general practice for the whole world and all organisations. Therefore, these laws and mandates cannot be used as criteria in designing the initial framework for incident prioritization.

1.2.2 Quantitative and Qualitative Risk Assessment

In extension to the risk assessment, mainly in its second and third steps, there are two approaches that can be used to conduct the detailed analysis: quantitative risk assessment and qualitative risk assessment. Depending on its usage, either approach can be significant in any environment, and for the incident prioritization process each may produce a better comparison result.

For instance, some organisations may conduct a risk assessment using a qualitative approach and others using a quantitative approach. Either approach may influence the final result of prioritization: a qualitative technique may produce a set of qualitative ranks such as low, medium or high, whereas a quantitative technique may produce a list of ranks expressed in numbers.

In this particular context, to determine and estimate the relationship between threats, vulnerabilities, assets and controls, risk assessment uses two different approaches.

Quantitative risk assessment

According to Gregg and Kim (2005), quantitative risk assessment is a methodical, step-by-step calculation of asset valuation, exposure to threat and the financial impact or loss in the event. Quantitative assessment uses monetary values and applies them to the components used in the estimation process (Munteanu, 2006).

Using quantifiable data and results, quantitative risk assessment relies on the employment of mathematical models and theory. It therefore produces numerical relationships and mathematical equations, and uses statistical techniques in the detail of the analysis; examples include the Exposure Factor (EF), Single Loss Expectancy (SLE), Annualized Loss Expectancy (ALE), Annualized Rate of Occurrence (ARO) and safeguard cost/benefit analysis (Tan, 2002; Hamdi and Boudriga, 2005; Munteanu, 2006). According to these authors, the calculation for computer security risk assessment methods can rely on the Annualized Loss Expectancy (ALE).
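The SLE/ALE relationship described above can be sketched in a few lines of Python; the asset value, exposure factor and occurrence rate below are invented figures for illustration, not taken from the literature.

```python
# Illustrative SLE/ALE calculation; all figures are hypothetical.
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = asset value x exposure factor (fraction of asset lost per event)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x annualized rate of occurrence (expected events per year)."""
    return sle * aro

# Example: an asset valued at 50,000 expected to lose 30% of its value per
# incident, with an estimated 0.5 incidents per year.
sle = single_loss_expectancy(50_000, 0.30)   # 15000.0
ale = annualized_loss_expectancy(sle, 0.5)   # 7500.0
print(sle, ale)
```

The ALE can then be compared with the annual cost of a safeguard, which is the basis of the cost/benefit analyses cited above.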

Further, Butler (2002) and Ekelhart et al. (2007) discussed the relationship between the result of quantitative risk assessment and cost-sensitive or cost-benefit analysis. The cost-benefit ratio of individual safeguards discussed by Ekelhart et al. (2007) is apparently similar to the research on cost-sensitive modelling for intrusion detection and response by Lee et al. (2002). Therefore, this approach has significant value in designing the initial framework for incident prioritization.

In addition, this approach has some inherent advantages. The advantages of the quantitative risk assessment approach are:

  • The process of data collection can be done automatically or manually. For precision, automatic data collection can be applied. Additionally, quantitative data is more precise than qualitative data (Houmb et al., 2009).
  • It provides more objective data and assessment results. Unlike qualitative data, the data for this approach is highly measurable, because it is normally expressed as integers and numeric values.
  • The results are repeatable. Because they are based on mathematical and statistical models, the results remain the same even when the process is run in multiple environments and at different times.
  • Since the result is quantified, its usability is high compared with that of qualitative risk assessment.
  • It provides a measurement of the impacts’ magnitude, which can be used in cost-benefit analysis (Stoneburner et al., 2002).

However, the drawbacks of quantitative risk assessment are:

  • The approach needs a tool to collect a large volume of quantitative data. Without an appropriate tool, the final results are not efficient or reliable enough.
  • The reasoning process is not as easy as in qualitative risk assessment. Since multiple values and integers are used, the computation can become very complex, and the complexity of the data may also force the use of high-performance computing devices.
  • The assessment process involves sophisticated decision-making and complex comparisons; therefore automatic tools and specialised computer software are needed.
  • In certain cases, for precision, the use of computer software is unavoidable. Unfortunately, suitable software is limited and rare, and developing it is expensive and not very productive for small and medium organisations.
  • High-level managers may face difficulties in understanding certain mathematical functions, graphs and interrelations between data. Sometimes, the result needs to be interpreted in a qualitative manner (Stoneburner et al., 2002).
  • Some results might need to be expressed in management language in order to increase the level of understanding. However, changing the original expression reduces the precision of the result and may diverge from the original objectives.

Since there are many drawbacks, this approach will not be considered in this project.

Qualitative risk assessment

Alternatively, instead of the numbers and equations of quantitative risk assessment, qualitative risk assessment is a scenario-based approach (Munteanu, 2006), where a scenario is examined and assessed for each critical or major threat to an asset (Gregg and Kim, 2005). Qualitative risk assessment requires an estimation of intangible factors such as impact (e.g. asset criticality) and subjective attack probability (e.g. high, medium and low levels of threat and vulnerability). In this approach, intangible factors and subjective probabilities need to be measured so that the values are quantifiable and easy to evaluate and compute. The purpose of the assessment is to provide a consistent and subjective assessment of the risk (Gregg and Kim, 2005). In order to estimate risk qualitatively, the assessment uses additional elements such as:

  • A qualitative scale. The qualitative scale is used to represent the probability of threats (likelihood of an event) and the impact on assets (consequence of an event). Quoting from JISC Infonet (2009), the table below shows an example of qualitative scales.
  • The probability and impact matrix. Risk can be estimated by adding a numeric scale to qualitative levels such as those above. For example, using a numeric scale of 1 to 5, where 1 denotes very low and 5 denotes very high, risk can be estimated using the probability and impact matrix in Figure 6. Using the matrix, each combination of impact and probability can be mapped and, as a result, the final qualitative risk can be quantified.
  • The risk matrix summary table. The risk matrix summary table shows the scale of the risk in general. The result from the probability and impact matrix can be summarised as depicted in Table 2; for example, for a medium probability and a medium impact, the risk is represented as 9 (computed from 3×3), using the matrix given in Figure 6 and Table 2.
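The probability and impact matrix described above can be sketched as follows. The 1–5 scale follows the example in the text; the thresholds used to map scores back to qualitative bands are illustrative assumptions, since any such mapping depends on the organisation.

```python
# Sketch of a probability-and-impact matrix on a 1-5 qualitative scale
# (1 = very low ... 5 = very high). Band thresholds are assumptions.
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_score(probability, impact):
    """Risk = probability level x impact level, as in the matrix example."""
    return SCALE[probability] * SCALE[impact]

def risk_band(score):
    """Map a numeric score back to a qualitative band (assumed thresholds)."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

score = risk_score("medium", "medium")   # 3 x 3 = 9
print(score, risk_band(score))           # 9 medium
```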

However, the table, matrix and scale can be changed according to the needs of the organisation. There is no absolute guideline for determining which scale is best; whichever scale is used, it helps to reduce vagueness amongst decision makers.

Meanwhile, the advantages of qualitative risk assessment are:

  • The reasoning and computational process is easy and does not involve complex reasoning, so fewer computational steps are required. If arithmetic operations are needed, only simple ones with small numbers are used.
  • Data can be collected manually, gathered using traditional methods such as interviews, observation and literature comparison. Additionally, questionnaires are a well-known technique often used by analysts.
  • The approach does not use sophisticated decision-making or complex comparisons. Unlike quantitative risk assessment, automatic tools are not really needed.
  • The result is easy for high-level management to understand. The resulting hypotheses provide a broad view and are linguistically understandable by most non-technical people.

However, the drawbacks of the approach are:

  • Since the qualitative analysis deals with extremely subjective and intangible factors, the result produced is not a precise, absolute number; this makes some finance and accounting organisations uncomfortable (Watson Business Systems Ltd, 2009).
  • The measurability of the data is low compared with the data for quantitative risk assessment. Additionally, being in linguistic form, the data is not easy to rescale as numbers.
  • Since it does not provide specific quantifiable measurements of the magnitude of the impacts, making a cost-benefit analysis is difficult (Stoneburner et al., 2002).
  • Even though it involves less computation, the interpretation of data and results may differ from one person to another. The qualitative risk assessment is therefore hard to interpret; different experts may produce different analyses and hypotheses.
  • The usability of the result is low: the result applies only to certain organisations and cases.
  • The final result is a hypothesis, which is subjective and hard to measure.

Both approaches promise significant results in valuing threats, assets and vulnerabilities. However, they are also almost evenly balanced in terms of advantages and disadvantages.

In this particular project, the initial framework for incident prioritization uses qualitative risk assessment to assess the risk of an incident. Even though qualitative risk assessment has many disadvantages, they relate mostly to data interpretation and subjectivity. In order to reduce misinterpretation and subjectivity, assessing uncertainty may help to solve the problem: understanding how to measure uncertainty is a key to measuring risk (Hubbard, 2007). To measure uncertainty, Hubbard (2007) proposed a decomposition technique. Decomposition involves figuring out how to compute something very uncertain from other things that are far less uncertain, or at least easier to measure; in other words, it breaks uncertain factors down into smaller, less uncertain factors. Reducing the uncertainty in calculating probability and impact can therefore help to improve the measurement process. The details of how the uncertainty element and other intangible factors are measured in the initial framework will be discussed in the next chapter.

Before this report proceeds to the next chapter, however, there are other theories that can help to rank the priority of risks. Drawing on decision theory and the theory of measurement, the next section discusses other possible approaches to making decisions which, at the same time, help to prioritize options and risks.

1.3 Decision Theory and Prioritization

Although the process of prioritizing incidents can be done systematically using risk assessment, understanding the risk itself is not enough to prioritize incidents. Additional theory on prioritization, such as decision theory, could be used to support the process. In this section, the general process of prioritization is discussed using decision theory and the theory of measurement, and, combining those theories, some further methods for making decisions are presented.

Developing a systematic approach to prioritizing incidents is not an easy task, and many approaches can be used. The aforementioned use of risk assessment is one example, and a common and significant approach for evaluating and prioritizing incidents.

However, in the broader context of prioritization in other research areas, there are other methods that can be followed. The next section discusses methods of prioritization, particularly in the area of decision theory. The discussion is not limited to methods used for prioritizing incidents and vulnerabilities, but is more general.

Since many approaches have been proposed, there is confusion amongst administrators about which method is best. To reduce this confusion, the theory of scales in measurement can be used to differentiate between methods.

In the theory of scales, Stevens (1946) introduced four types of measurement scale: nominal, ordinal, interval and ratio. Karlsson, Hast and Regnell (2006) applied the ordinal and ratio scales to categorise prioritization methods based on their ranking strategies. Although the theory received some criticism from other researchers (Velleman and Leland, 1993), it has continued to evolve, and a recent Internet search shows more than 1000 publications using it (Google, 2009).

Two main scales are widely used in the process of prioritization: the ratio scale and the ordinal scale.

  1. A ratio scale provides not only an ordering of requirements and the relative distance between them, but at the same time states how much more important one requirement is than another (Karlsson et al., 2006). Using a ratio scale, the difference between options can be measured statistically, since the distance between options is clearly defined. For example, if a scale uses 1, 2, 3 and 4 to differentiate between options, a score of 2 represents twice the value of 1, and an option scored 4 is four times better than one scored 1 and twice as good as one scored 2.
  2. Alternatively, an ordinal scale ranks options either numerically (1, 2, 3, 4) or in qualitative groups (high, medium, low), denoting only that one option is less or more than the next. On this scale, when the ranking uses numbers between 1 and 4, unlike the ratio scale, a score of 4 does not indicate twice a score of 2, nor a score of 2 twice a score of 1; the ordinal scale instead ranks qualitative and uncertain values by indicating that 2 is ‘more’ than 1, 3 is ‘more’ than 1 and 2, and so on. Originally, the ordinal scale arose from the operation of rank-ordering, and it is widely and effectively used by psychologists (Stevens, 1946).

The categories described in Table 4 below show the many types of prioritization method; most of the methods use the ordinal scale as their main ranking scale.

Even though two types of scale are used to prioritize options, a method of prioritization can be used to transform one scale into another, for example from an ordinal scale to a ratio scale and vice versa. Using intangible options and a method like the AHP or pairwise comparison, ordinal scales can be transformed into ratio scales with a significant relationship between options.

This section discusses some potential methods that help to reduce the problems of the current prioritization process. As mentioned earlier, the current prioritization process has some disadvantages, and these potential methods help to reduce some of its limitations. The limitations associated with this section are (refer to Section 3.1.4):

  • When the estimation process uses more than one indicator, the weight of each indicator is not clearly defined. Sometimes, an important indicator is treated the same as an unimportant one.
  • The aforementioned approach does not provide any specific method for prioritizing incidents. The current approach only uses a high number for high priority and a low number for low priority; the details of arranging and selecting priorities are clearly not well defined.

Both limitations can be addressed using a combination of one or more of the methods. However, in this report the appropriate method has not yet been selected; therefore some possible and potential methods are discussed. The selection of the best method will be done in the PhD stages.

Instead of addressing all of the methods mentioned, based on the limitations above, this report chooses some of the most significant and relevant. The methods are:

  • Voting Scheme Method
  • Analytic Hierarchical Process (AHP)
  • Paired Comparison Analysis
  • Grid Analysis
  • Covey’s Quadrants

These methods were selected because they have the potential to solve the aforementioned limitations. For instance, methods such as the Voting Scheme Method, the Analytic Hierarchical Process (AHP) and Grid Analysis are capable of prioritizing options using different indicator weights. In addition, Paired Comparison Analysis is able to estimate a different weight for each indicator. Finally, Covey’s Quadrants have the potential to divide options or incidents into specific groups.

This report does not consider other methods that are similar to the five chosen ones; some are also insignificant or unsuitable for this project. For instance, Wiegers’ Method (Wiegers, 1999) and Grounded Theory (Herrmann and Daneva, 2008) relate to the cost-benefit approach. The Binary Search Tree (BST) (Karlsson et al., 1998) is an algorithm used to sort and search information. Priority Groups (Karlsson et al., 1998; Karlsson et al., 2007), the Action Priority Matrix and the Eisenhower Method are similar to Covey’s Quadrants. In addition, Planning Game Partitioning (PG) performs similarly to Priority Groups by using a sorting algorithm to sort options and requirements (also called story cards) (Karlsson et al., 2007). Similar to the method used by Covey (2004), ABC Analysis offers high-level prioritization by ranking tasks into three main groups: A – very important, B – important, and C – least important (Chu et al., 2008). The combination of ABC Analysis and the Pareto Principle promises another alternative: as with Covey’s Quadrants, options are divided into groups, and the groups are prioritized according to their priority.

Another unsuitable method, Theory-W, also known as “Win-Win” and proposed by Boehm and Ross (1989), supports negotiations between stakeholders to resolve disagreements about software requirements and to ensure a win-win situation between them. This method uses risk assessment and risk handling to ensure the stakeholders’ “win-win” condition is secured. Since it does not take a technical approach to prioritizing options, it seems impractical here.

Note that, beyond these examples, there are other methods not discussed in this report.

In order to understand the detail of the methods, the next section presents the selected methods for prioritization.

1.4 Method for Prioritization

There are many methods for prioritizing options, tasks or decisions. These methods are not limited to selecting the best option; at the same time they optimise the objective or goal, and the use of the criteria and resources that influence them, such as time and cost. Prioritization methods are widely used in daily life, particularly in decision making within business environments. Organisations apply different perspectives when selecting the method that best fits their needs, but the final goal is always the best solution for managing tasks, options, people and so on.

Some of the discussion of prioritization methods is taken from software engineering, where these methods are normally used to prioritize software requirements. Therefore, to avoid conflict between the meanings of ‘requirements’ and ‘options’, the word ‘requirement’ is used interchangeably with the word ‘option’ and has essentially the same meaning. In this discussion, the word ‘option’ refers to any task, decision, object, requirement or other item that needs to be prioritized; it is a label of convenience rather than a formal definition.

1.4.1 Voting Scheme Method

Extending the Delphi Method (Helmer, 1967), the voting scheme prioritization method uses a traditional, face-to-face discussion approach like the Delphi Method. Since the Delphi Method uses a common and traditional approach to making appropriate decisions, the voting scheme method helps to prioritize options when the discussion involves many criteria, factors and experts. The voting scheme allows experts to express their preferences when prioritizing options: each expert votes for their preferred option, and the option with the most votes is considered the highest priority.

Beyond the original voting scheme, there are many similar methods, such as multi-voting, cumulative voting and the 100-Point Method.

  • The Multi-Voting Method uses multiple rounds of voting, selecting the best options and narrowing a large list of possibilities down to a smaller list of top priorities or a final selection (Tague, 1995). Also known as the Nominal Group Technique (NGT) or nominal prioritization, the Multi-Voting Method traditionally uses a brainstorming session like the Delphi Method.
  • Alternatively, Cumulative Voting, or Weighted Voting (Ayad and Kamel, 2008), gives each voter flexibility in casting votes. As in other voting schemes, the option with the most votes wins. In contrast to ‘straight voting’, Cumulative Voting allows a voter to cast more than a single vote for an option, or to distribute votes amongst options. The scheme is popular in the process of electing company directors, where one advantage is the additional power it gives minority shareholders (Bhagat and Brickley, 1984).
  • In addition to Cumulative Voting, instead of a variable number of votes per option, the 100-Point Method (Leffingwell and Widrig, 2003) uses 100 points for voting and prioritizing options. As in the $100 test (Karlsson et al., 2006), experts distribute a fixed 100 points between tasks, equally or according to their preferences.
  • In a similar approach, where voting is limited to a certain number of votes, Cast Five Straws (or sticky notes) (Doyle et al., 1996) allows experts to distribute five votes amongst options. All votes can be placed on a single option or distributed amongst options; again, the option with the highest total of votes becomes the highest priority.

In addition to traditional Cumulative Voting, Berander and Jansson (2006) introduced a new technique for prioritizing hierarchically structured requirements on a ratio scale: Hierarchical Cumulative Voting (HCV). HCV draws on the Analytic Hierarchical Process (AHP), addressing that method’s weaknesses and combining them with the strengths of Cumulative Voting.

1.4.2 Analytic Hierarchical Process (AHP)

Fundamentally, the Analytic Hierarchical Process (AHP) is a theory of measurement through pairwise comparisons that relies on the judgements of experts to derive priority scales (Saaty, 2008). For over a decade, the AHP has been used by decision makers to solve complicated decision problems. Zahedi (1986) showed that applications in numerous and diverse areas apply the AHP to decision making. The AHP is not limited to solving complicated problems; it is also widely used to rate decision alternatives involving both tangible and intangible criteria for evaluation, selection and prediction. Hence, the AHP can be used not only to select the best decision but at the same time to prioritize decision alternatives.

To organise a systematic procedure for prioritization, the AHP consists of four main steps, quoting from Saaty (2008):

  1. Define the problem and determine the kind of knowledge sought.
  2. Structure the decision hierarchy from the top with the goal of the decision, then the objectives from a broad perspective, through the intermediate levels (criteria on which subsequent elements depend) to the lowest level (which usually is a set of the alternatives).
  3. Construct a set of pairwise comparison matrices (similar to the Paired Comparison Analysis). Each element in an upper level is used to compare the elements in the level immediately below with respect to it.
  4. Use the priorities obtained from the comparisons to weigh the priorities in the level immediately below. Do this for every element. Then for each element in the level below add its weighed values and obtain its overall or global priority. Continue this process of weighing and adding until the final priorities of the alternatives in the bottom most level are obtained.
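Steps 3 and 4 above can be illustrated with a small sketch. Note two assumptions: instead of the exact principal-eigenvector computation Saaty describes, this uses the common row geometric-mean approximation, and the 3×3 judgement matrix (on Saaty’s 1–9 scale) is invented for illustration.

```python
# Sketch of deriving AHP priority weights from one pairwise comparison
# matrix, using the row geometric-mean approximation of the eigenvector.
import math

def ahp_priorities(matrix):
    """Normalise the geometric mean of each row into a priority weight."""
    geo_means = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# judgements[i][j] = how much more important criterion i is than criterion j;
# reciprocals fill the lower triangle. The values are invented.
judgements = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]
weights = ahp_priorities(judgements)
print([round(w, 3) for w in weights])  # the first criterion dominates
```

In the full AHP, this calculation is repeated for each level of the hierarchy and the resulting weights are multiplied down to the alternatives, as step 4 describes.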

The AHP is a common and widely used approach to prioritizing options. For example, in prioritizing value and cost, Karlsson and Ryan (1997) utilised the AHP and the pairwise comparison method to evaluate the trade-off between cost and value. In this research, also called the Cost-Value Approach, the AHP is combined with the pairwise comparison method, and the combination has proved successful.

In addition, this method has been combined with other methods such as Planning Game Partitioning (PG), Theory-W and ABC Analysis. To reduce the weaknesses and increase the strengths of PG, particularly the subjectiveness between options, the combination of PG with the AHP (PGcAHP) has been proposed (Karlsson et al., 2004; Ahl, 2005). Extending Boehm’s original Theory-W, Ruhe et al. (2002) proposed Quantitative Win-Win by incorporating the AHP; like other hybrid prioritization methods, it uses the AHP to determine the priorities between stakeholders, requirement classes and resource constraints.

Chen et al. (2008) and Chu et al. (2008) reported that the latest form of ABC Analysis, Multiple Criteria ABC (MCABC) Analysis, is widely used in combination with other prioritization and classification methods, including the Analytic Hierarchical Process (AHP), statistical analysis, weighted linear programming and fuzzy classification.

1.4.3 Paired Comparison Analysis

Paired Comparison Analysis involves comparing two or more alternative options and giving priority to the option that has more advantages. In the comparison process, each alternative is compared with every other, choosing which of each pair of options is more important and gives the greater advantage (Mind Tools Ltd, 2009a). The process continues with the next pair of options until all options have been compared.

Unlike the AHP, which uses a decision goal as the main objective of prioritization, Paired Comparison Analysis is popular where an organisation has no explicit objective but still needs a simple way to prioritize options (HR Management, 2009).

Paired Comparison Analysis, also known as Pairwise Comparison, can sometimes be used as a method to determine the weights between options. For example, the weights for criteria in the AHP (Saaty, 2008) and the weights for factors in Grid Analysis (Manktelow, 2003) can be obtained from a Paired Comparison Analysis.

There are six typical steps in the analysis, quoting from Mind Tools Ltd (2009a):

  1. List the options you will compare. Assign a letter to each option.
  2. Mark the options as row and column headings on the worksheet.
  3. Block out the cells where an option would be compared with itself, since an option cannot be compared with itself.
  4. Block out the cells that would duplicate a comparison already made, to avoid duplication.
  5. Within the remaining cells, compare the option in the row with the one in the column. For each cell, decide which of the two options is more important. Write down the letter of the more important option in the cell, and score the difference in importance from 0 (no difference) to 3 (major difference).
  6. Finally, consolidate the results by adding up the total of all the values for each of the options. You may want to convert these values into a percentage of the total score.
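Steps 5 and 6 above can be sketched as follows; the options and the (winner, score) judgements are invented for illustration.

```python
# Sketch of Paired Comparison Analysis: each remaining cell records the
# winning option of the pair and a 0-3 importance difference (invented).
from itertools import combinations

options = ["A", "B", "C", "D"]

# (winner, score) for each unordered pair of options, per step 5.
judgements = {
    ("A", "B"): ("A", 2),
    ("A", "C"): ("C", 1),
    ("A", "D"): ("A", 3),
    ("B", "C"): ("C", 2),
    ("B", "D"): ("B", 1),
    ("C", "D"): ("C", 3),
}

# Step 6: add up the scores credited to each option.
totals = {o: 0 for o in options}
for pair in combinations(options, 2):
    winner, score = judgements[pair]
    totals[winner] += score

ranking = sorted(totals, key=totals.get, reverse=True)
print(totals, ranking)  # C scores highest: 1 + 2 + 3 = 6
```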

The pairwise comparison method as used in the AHP employs an alternative calculation: the “eigenvalue” method is used to estimate the relative weights of decision elements (Zahedi, 1986).

1.4.4 Grid Analysis

Grid Analysis is another decision-making method, used where many factors must be balanced. Using Grid Analysis, options can be prioritized even when there are many alternatives and factors to take into consideration (Manktelow, 2003).

There are five steps to follow:

  1. The first step is to list all the options and all the factors that are important and influence the decision. The options and factors are tabulated, with options as row labels and factors as column headings.
  2. The second step is to find the relative importance of each factor to the decision by giving each factor a weight. Factor weights can be very subjective; therefore, to obtain appropriate values, it is suggested that a technique such as Paired Comparison Analysis be used.
  3. The third step is to give a score to each relationship between an option and a factor. The scores can vary, but a common method is a number from 0 up to some maximum such as 3, 5 or 10, where 0 indicates low importance and the highest number indicates high importance.
  4. The fourth step is to multiply each score from Step 3 by the corresponding weight from Step 2.
  5. In the final step, the results from Step 4 are added up for each option; the highest total indicates the highest priority and the best decision.
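The five steps above can be sketched as follows; the factors, their weights (which could come from a Paired Comparison Analysis, as Step 2 suggests) and the 0–5 scores are all invented for illustration.

```python
# Sketch of Grid Analysis: weighted scores summed per option.
factors = {"cost": 4, "risk": 5, "speed": 2}   # factor -> weight (step 2)

# option -> factor -> unweighted 0-5 score (step 3); all values invented.
scores = {
    "option A": {"cost": 3, "risk": 4, "speed": 1},
    "option B": {"cost": 5, "risk": 2, "speed": 4},
    "option C": {"cost": 2, "risk": 5, "speed": 3},
}

def grid_totals(factors, scores):
    """Steps 4-5: multiply each score by its factor weight, then add up."""
    return {
        option: sum(factors[f] * s for f, s in row.items())
        for option, row in scores.items()
    }

totals = grid_totals(factors, scores)
best = max(totals, key=totals.get)
print(totals, best)  # option C wins: 2*4 + 5*5 + 3*2 = 39
```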

1.4.5 Covey’s Quadrants

Unlike the complicated calculations and numbering of the previous methods, Covey (2004) introduces a high-level prioritization scheme based on time management concepts. This high-level prioritization does not look deeply into the detail of criteria and influencing factors; instead, Covey (2004) categorises tasks into four quadrants based on their importance and urgency. The four quadrants are:

  1. Q1 – Important and Urgent
  2. Q2 – Important but Not Urgent
  3. Q3 – Not Important but Urgent
  4. Q4 – Not Important and Not Urgent

The quadrants specify the use of time in terms of importance and urgency. In this method, important and urgent tasks are grouped into the first quadrant, while tasks that are neither important nor urgent are grouped into the last quadrant. The method tolerates a high level of uncertainty, as tasks and options can be grouped according to their suitability based on the needs of decision makers.
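The quadrant grouping can be sketched in a few lines; the task list and its importance/urgency flags are invented for illustration.

```python
# Sketch of Covey's four quadrants: each task carries boolean
# important/urgent flags (the task list below is invented).
tasks = [
    ("patch critical server",   True,  True),   # important and urgent
    ("review security policy",  True,  False),
    ("answer routine email",    False, True),
    ("browse vendor brochures", False, False),
]

def quadrant(important, urgent):
    if important and urgent:
        return "Q1"   # important and urgent
    if important:
        return "Q2"   # important but not urgent
    if urgent:
        return "Q3"   # not important but urgent
    return "Q4"       # not important and not urgent

groups = {"Q1": [], "Q2": [], "Q3": [], "Q4": []}
for name, important, urgent in tasks:
    groups[quadrant(important, urgent)].append(name)

print(groups)  # Q1 tasks come first in priority
```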

Since this method provides only fixed priority groups, where the first quadrant is the most important and urgent compared with the others, it is difficult to assign a task directly to the appropriate group. Therefore, a detailed prioritization method like the AHP or Grid Analysis could perhaps be used to calculate which quadrant a task belongs in, and which task should be done first within the same quadrant.

In addition to Covey's Quadrants, there are other methods that take a similar approach, dividing tasks and options into fixed groups or quadrants. The Eisenhower Method, the Action Priority Matrix, the Urgent/Important Matrix and the Impact Feasibility Matrix are some examples.

The Eisenhower Method (Time Management Training, 2009) is used in the context of project management and deals ultimately with time management. Similar to Covey's Quadrants, the Eisenhower Method evaluates tasks to determine daily priorities using two criteria: importance and urgency (Mind Tools Ltd, 2009b).

Alternatively, the Action Priority Matrix (also known as the Impact Feasibility Matrix), popularised by Mind Tools Ltd (2009b), makes the most of opportunities by treating high-impact options as a top priority. Instead of using urgency and importance as criteria, the matrix uses impact and effort as its basic criteria, as shown in Figure 9.
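The impact/effort classification can be sketched in the same style as the quadrant example above. The group names used here (Quick Wins, Major Projects, Fill-Ins, Thankless Tasks) follow the common presentation of the Action Priority Matrix rather than this text, and the threshold and example scores are illustrative assumptions:

```python
# Sketch of the Action Priority Matrix: place an option in one of four
# groups according to its impact and effort scores.

def action_priority(impact: float, effort: float,
                    threshold: float = 5.0) -> str:
    high_impact = impact >= threshold
    high_effort = effort >= threshold
    if high_impact and not high_effort:
        return "Quick Wins"          # high impact, low effort: top priority
    if high_impact and high_effort:
        return "Major Projects"      # worthwhile, but plan carefully
    if not high_impact and not high_effort:
        return "Fill-Ins"            # do when time allows
    return "Thankless Tasks"         # low impact, high effort: avoid

# Example: a high-impact, low-effort option is a quick win.
group = action_priority(impact=8, effort=2)
```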

1.5 Summary

This chapter has been divided into three main topics: incident prioritization, understanding risk, and methods of prioritization. With the intention of extending and enhancing the process of selecting a response in Intrusion Response Systems, combining these concepts should, conceptually, strengthen that process.

Firstly, this chapter discussed previous work on incident prioritization. The discussion covered the response techniques used in managing incidents reported by Intrusion Detection Systems and highlighted the limitations of the current work, on which basis the present study continues.

Secondly, the chapter discussed the concept of risk and how risk is evaluated. Beyond a detailed treatment of the common methods applied in risk assessment, the chapter also presented some advantages and disadvantages of each approach in order to support a better-informed selection and understanding.

Finally, using decision theory, this chapter discussed basic and general methods for prioritization. The discussion was not limited to the theory itself but also included examples of how to apply each method. Combined with the theory of scales of measurement, the methods were divided into two types of scale: the ordinal scale and the ratio scale.

Since some topics need further discussion, particularly the measurement of intangible elements and subjective factors in risk assessment, the next chapter extends the theories of measurement, including the measurement of incidents, threats, vulnerabilities and impact.
