Communicating and Understanding Risk
First Published: 2003

Before starting a security assessment, a lot of work is often invested in getting legal frameworks and confidentiality agreements in place between all concerned parties. While I ensure that every report I produce is clearly marked “strictly confidential” and “client only”, I know for a fact that there is a high probability that a copy of the report will be made available to a competitor consultancy organisation through an “authorised advisor” or similar legal clause. I know this, for I too have received many competitor assessment reports prior to conducting security work at the client’s behest.

My clients typically require an assessment to be compared to the previous one – noting an improving security posture or an increase in infrastructure scope. The result is that I have had the opportunity to review reporting styles from many different consulting groups around the world and been able to compare their individual qualities.

Of all the different security assessment reporting styles, and perhaps with a biased eye, I would divide the reports into three groupings – strong on management, strong on technical, or plain rubbish. By using the term “rubbish” I mean that the reports were largely focused upon reproducing tool output, were weak in both the management and technical sections, or were generally bereft of any kind of security analysis and understanding of the listed findings.

I have yet to come across a security consulting group that excels in both reporting areas, but suspect that this is largely due to different styles and emphasis when carrying out a security assessment, and the type of consultant the group attracts.

A number of well known international organisations are very good at writing sophisticated security reports with strong management sections geared towards CIO, CSO and Director level. These reports typically focus upon comparing the security status of the client against others in the same vertical market, or adherence to specific corporate security standards as part of a compliance audit. The emphasis of these management reports is to provide a high-level view of the client’s security risk – and they often appear to be more security audit than assessment.

While this type of management report suits many organisations, the clients I repeatedly deal with require strong technical reports instead. This type of report is typically passed on down to the technology implementers and operators. Emphasis is thus placed upon finding all the security vulnerabilities within the client’s infrastructure environment or application, and providing technical analysis of the security threat including detailed remediation advice.

An interesting aspect of both types of report is the evaluation of security risk. With management type reports, risk is evaluated in terms of business continuity and financial cost. In technical reports, risk is evaluated in terms of successful vulnerability exploitation and eventual host or data compromise.

As I said before, I prefer to follow the technical reporting style. However, one of the gripes I have about the many reports I have reviewed is the reliance upon tools such as vulnerability scanners to evaluate the level of “Risk” a particular vulnerability has. While I have no problem with vulnerability scanners themselves, I do not believe that they are appropriate for reporting the relative risk level of a discovered vulnerability – instead, consultants should provide an independent evaluation in the context of the client’s environment.

One significant problem with adhering to the process of defining a separate risk for each reported vulnerability is the classification of newly discovered or previously undisclosed vulnerabilities. This is an important issue for an organisation like the one I work for, which is well known for the discovery of new vulnerabilities. To overcome this, I do not actually report security risk in the traditional sense. Instead, I focus upon evaluating and reporting three separate parameters – Impact, Likelihood and Source.

The first value, Impact, is an evaluation of the consequences should an attacker successfully exploit the vulnerability. The evaluation is specific to both the vulnerability and the host it was discovered upon, and ranges from very high impact (an attacker could gain interactive administrative access to the host) through to low impact (a divergence from security best practices). In many ways, this value most closely resembles the traditional tool-based evaluation of Risk.

The second value, Likelihood, is an evaluation of the likelihood of successful exploitation. This value helps to define the priority with which a client organisation should respond to the security finding. The value reflects the skills and ability of the attacker, or the frequency of attack, most likely to lead to discovery and compromise. The Likelihood value is extremely useful when prioritising previously undisclosed vulnerabilities, and values range from high likelihood (script-kiddie skill level) through to low likelihood (the attack requires detailed knowledge of the application’s internal code).

The third and final value, Source, defines where an attack could be successfully launched from. Typically this reflects whether the vulnerability could only be attacked from internal sources, or additionally from external points such as the Internet.
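To illustrate how these three parameters might be recorded and used to order remediation work, here is a minimal sketch in Python. The enumerations, field names and the priority heuristic are my own hypothetical choices for illustration, not part of any standard or of my actual reporting methodology:

```python
from dataclasses import dataclass
from enum import IntEnum

class Impact(IntEnum):
    LOW = 1        # a divergence from security best practices
    MEDIUM = 2
    HIGH = 3
    VERY_HIGH = 4  # interactive administrative access to the host

class Likelihood(IntEnum):
    LOW = 1        # requires detailed knowledge of internal application code
    MEDIUM = 2
    HIGH = 3       # exploitable at script-kiddie skill level

class Source(IntEnum):
    INTERNAL = 1   # attackable only from internal network sources
    EXTERNAL = 2   # additionally reachable from the Internet

@dataclass
class Finding:
    title: str
    impact: Impact
    likelihood: Likelihood
    source: Source

    def priority(self) -> int:
        # Hypothetical ordering heuristic: higher impact, higher
        # likelihood and external reachability all raise the priority.
        return int(self.impact) * int(self.likelihood) * int(self.source)

findings = [
    Finding("Weak cipher configuration", Impact.LOW, Likelihood.LOW, Source.INTERNAL),
    Finding("SQL injection in login form", Impact.VERY_HIGH, Likelihood.HIGH, Source.EXTERNAL),
]
findings.sort(key=Finding.priority, reverse=True)
print([f.title for f in findings])
# → ['SQL injection in login form', 'Weak cipher configuration']
```

The point of the sketch is simply that the three values are kept separate in the finding record, so the client can re-weight them for their own environment rather than receiving a single opaque “Risk” score.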

I strongly believe that the use of these or similar methods for classifying vulnerabilities can increase the value of any technical findings and communicate the relative level of risk to the client. This level of classification and parameterisation matters most to the organisation receiving the consultant’s technical advice, and can be used to prioritise necessary remediation actions. In cases where vulnerabilities, such as those regularly encountered inside web-based applications, are newly discovered, it is critical that an evaluation of the likelihood of exploitation is made. My clients have certainly benefited from such an approach.
Copyright 2001-2007 © Gunter Ollmann