
  Disclosure vs. Ethics
Posted by Gunter Ollmann on June 13, 2007 at 2:29 PM EDT.

Public disclosure of security vulnerabilities is a topic on which few people have chosen to sit quietly on the fence. Like an Australian brushfire, the heated discussions on disclosure flare up at random locations, burn brightly for a few days, consume the local scrub, and leave behind yet another blackened scar upon the landscape.

Time and again, each major security conference sees yet another disclosure brushfire flare to life, only to be doused by the closing of the bar (or the filing of weekly story deadlines).

Whilst that may have been the case for quite some time, I think the more recent changes in the commercialization and underlying ethics of vulnerability disclosure really need a little more study.

A Question of Ethics and Conduct

First of all, let me explain that at a personal level I have no problem with public vulnerability disclosure – even in detail – as long as it is coordinated in such a way that it places no consumers of that technology or product at increased risk.

During my years of security consultancy I've never publicly disclosed a vulnerability I discovered. Every one of them was discovered under contract or covered by an NDA with a customer (or vendor) and, at the end of the day, belonged to them – whether it was a new SQL Injection bug, an authentication bypass, a buffer overflow or a local privilege escalation – and, for that work, I was usually well compensated. In addition, I've never taken a consulting contract that would leave customers at risk from my findings.
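
To make the kind of finding I'm talking about concrete, here is a minimal sketch of a classic SQL Injection bug and its fix – the function names and table schema are invented purely for illustration, not taken from any real engagement:

    import sqlite3

    def get_user_vulnerable(conn, username):
        # VULNERABLE: attacker-controlled input is concatenated straight into
        # the SQL statement; a username of "' OR '1'='1" returns every row.
        query = "SELECT id, username FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    def get_user_fixed(conn, username):
        # FIXED: a parameterized query keeps data separate from the SQL, so
        # the same input is treated as a literal string and matches nothing.
        query = "SELECT id, username FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

Trivial as it looks, a bug of exactly this shape in a customer's production application is the sort of finding that stays with the customer under contract, rather than ending up on a mailing list.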

You see, for me the question of vulnerability discovery and disclosure has a lot to do with personal ethics and business conduct.

However, depending on the type of security work being conducted, different rules come into play. For example, the X-Force vulnerability research team adheres to ISS' public disclosure guidelines – guidelines that have existed for quite some time and have served as the basis for several other commercial research teams.

There have been several proposals over the years, by various groups and organizations, to develop a universal set of public disclosure practices – none of which has been widely adopted. As you might expect of the security industry, these proposals have ranged from treating public disclosure as practically a crime, all the way through to a hackerish "Information wants to be free" philosophy (complete with corresponding ASCII art).

A lot of these proposals have included hard-line tactics for dealing with unresponsive vendors.

Again, from my perspective, it is irresponsible and unjustifiable to hold an unresponsive vendor's customers to ransom and undue risk. This is why I have trouble digesting some organizations' disclosure-guideline exceptions when dealing with Apple. Granted, Apple has an extremely poor – if not downright hostile – relationship with vulnerability researchers around the world, but that doesn't mean we should take our frustrations out on their customers. There are plenty of other ways to educate Apple and their customers – even the naive ones who believe heart and soul in Apple's boldest security claims.

Cashing in on Vulnerabilities

That said, the trend in vulnerability disclosure I find most insidious is the open market purchasing of vulnerabilities. 

First of all, it has never struck me as a particularly good business idea, for a number of reasons. From a commercial (revenue-generating) perspective, I don't think it makes a lot of sense: there are plenty of more profitable ventures within the security market that yield greater returns from this kind of information. In addition, I struggle to see what value it actually brings to the vendor's customers, because you are effectively saying "hey, we don't have the internal technical ability to discover and understand these threats ourselves."

I also see quite a difference between consulting for an organization and finding vulnerabilities in its applications at a man-day rate, and the bounty programs offered by the likes of VeriSign (iDefense) and 3Com (TippingPoint) to purchase vulnerabilities in other vendors' products. I guess that's just my moral code kicking in.

There have been numerous arguments against this open vulnerability market, and perhaps the most frequent one I hear is that these types of programs fund criminal gangs. The point being proposed is that these criminal gangs (or independent bug hunters) find the odd bug, sell it, and then use the proceeds to fund more bug hunting. These follow-on bugs aren't sold; instead they are weaponized and used for more malicious activities.

My gut feeling is that this is probably true, but I have yet to see any conclusive evidence of it.

Funding a Cottage Industry

The fact that this pair of commercial entities has essentially legitimized an entire cottage industry for purchasing vulnerabilities is the biggest concern for me. Their code of ethics has indirectly resulted in greater risks for their (and my) customers.

What do I mean by that? Here are some examples of the knock-on effects that worry me:

  1. Most large organizations (particularly within the financial sector) have their own security teams using vulnerability discovery tools and conducting internal penetration testing. The anonymity offered to those selling vulnerabilities means that some internal personnel have been tempted to sell on vulnerabilities discovered within their own employer's applications or products for cash – without their superiors' knowledge – thereby greatly increasing the risk to that employer. To a professional consultant, this is a degradation of trust, and it should be treated no differently from other intellectual theft crimes.
  2. The haziness of the legal framework for the purchase and sale of vulnerability information appears to have spawned further ambiguity in more insidious vulnerability services. As I've discussed several times this year, the growth of commercial Managed Exploit Provider (MEP) services offering weaponized exploit frameworks through subscription has been a recent threat growth area.
  3. The precedents set in purchasing vulnerabilities appear to have made it more acceptable to some people to purchase (and use) more dangerous material – in particular exploit code.

Now, don't get me wrong. I am not saying that the development of exploit code or other kinds of vulnerability weaponization is wrong, or even illegal (depending upon where you are in the world!). Development and use of this material has commercial significance – particularly in the field of penetration testing and vulnerability assessment tools. However, just about all of these activities are conducted by commercial entities in an (almost) regulated fashion, and those entities are typically quite open about how they conduct their business – they even pay taxes!

Going forward, I would love to see VeriSign and 3Com drop their shortsighted vulnerability purchase schemes or, at the very least, completely remove the anonymity layer they have added to their disclosure practices. I'm pretty sure that would help remove the temptation behind some of the intellectual theft and devious activities being undertaken by "trusted" employees.

     
    Copyright 2001-2007 © Gunter Ollmann