Heisenberg Uncertainty

Posted by Gunter Ollmann on July 04, 2007 at 11:23 PM EST

Some people feel that I tend to take an unduly harsh position on signature protection engines. A quick review of my blog entries throughout 2007 may indeed suggest that I am not a huge fan of them – I often refer to them as “legacy” while promoting “preemptive” protection engines. This isn’t precisely true. I don’t hate or even dislike signature engines – I’m just not particularly enamored with the way a lot of vendors are positioning them. If you’re serious about protecting against today’s threats, you need to use the right technology. I can say, however, that I don’t favor those evangelistic Luddites who insist on promoting one technology over absolutely everything else. For all their ranting, the only resonating sound bite I take away (or superimpose) is “if you only have a hammer, everything’s a nail.”

Signature engines do have an important role to play in today’s threat mitigation strategies, provided you understand their limitations and deploy to their strengths. Where signature engines excel is in the unique identification of a threat. In the anti-virus world, signature engines (a mix of regular-expression matching and hash-correlation technologies) provide a framework for the practical and speedy examination of files for known malicious content. When adopted in IDS/IPS systems, they provide a vehicle for identifying known attack strings within a single network packet or a reconstituted stream of data.
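To make that concrete, here is a minimal Python sketch of those two techniques working together. The hash, the pattern, and the threat names are all invented placeholders for illustration – they are not real signatures, and this is not how any particular vendor’s engine is built:

    import hashlib
    import re

    # Hypothetical signature database: exact SHA-1 hashes of known-bad files,
    # plus regular expressions for known attack strings. Both entries are
    # made up for illustration.
    KNOWN_BAD_HASHES = {
        "356a192b7913b04c54574d18c28d46e6395428ab": "Troj/Example-A",
    }
    KNOWN_BAD_PATTERNS = [
        (re.compile(rb"Gobbles"), "Exploit/Apache-Gobbles"),
    ]

    def scan(data: bytes) -> str | None:
        """Return a threat name if the payload matches a signature, else None."""
        # Hash correlation: cheap, exact identification of a known sample.
        digest = hashlib.sha1(data).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            return KNOWN_BAD_HASHES[digest]
        # Regular-expression matching: catches known attack strings inside a
        # file, a network packet, or a reassembled stream.
        for pattern, name in KNOWN_BAD_PATTERNS:
            if pattern.search(data):
                return name
        # Anything never observed before sails straight through.
        return None

Note that the Gobbles pattern here would also fire on a perfectly benign document that merely discusses the old exploit – the lack-of-context problem we’ll come back to below.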
The major limitation of signature engines is that they are bound to the identification of threats that have previously been observed elsewhere – i.e. if it happens to you first, you’re likely not protected (unless you’re incredibly lucky and an existing signature just happens to match something else in the observed attack – let’s call it a “lucky false positive”, or LFP for short). This becomes a major problem.

In the face of these (not insignificant) challenges, a colleague publicly likened the process of updating legacy signature protection systems to “giving vaccine to a corpse”. That is, given the attack profiles currently being observed in the wild, you’re increasingly likely to be the first one hit by something new that doesn’t already have a signature.

If it’s all doom and gloom for signature engines, why haven’t they already been dismissed, usurped by better technologies, and relegated to a historic footnote in Internet security? In fact, why has IBM ISS gone out of its way in the last year or two to add signature capabilities to some of its product range, working alongside its more advanced preemptive protection engines? In a nutshell – because it’s a complementary technology that aids the way organizations continue to manage their day-to-day threat protection.

Heisenberg Uncertainty

Perhaps it’s the physicist in me wanting to emerge after too many years of silent slumber, but I can’t help drawing a comparison between the Heisenberg Uncertainty principle and the legacy vs. preemptive protection engine relationship. In very simple terms, the Heisenberg Uncertainty principle says that (within a quantum mechanical framework) you can measure the position of a particle or its momentum, but the more precisely you measure one of these values, the less certain you become of the other – and it places quantitative bounds on the product of their uncertainties (formally, Δx·Δp ≥ ħ/2, where ħ is the reduced Planck constant).

From an Internet security perspective, signature engines are perfect for providing an exact name for an attack that was intercepted – and hopefully stopped. Things are either good (no signature was matched) or bad (a signature match was found). But they are incapable of identifying something they don’t already have a specific signature for. Modern preemptive technologies such as behavioral detection draw no such binary distinction. Instead, they review the actions the “malicious” file would take if it ran unimpeded, and evaluate whether those actions are likely to be dangerous or run counter to a defined policy. Hence they make no differentiation as to whether the specific threat has been observed a thousand times before, or whether this is the very first time.

In a sense, on one hand we have signature engines capable of uniquely classifying and naming a particular attack but with no context of the threat (for example, triggering upon the string “Gobbles” within a PDF document discussing the technical details of an old Apache exploit), while on the other we have a preemptive technology that can detect a threat but has no means of uniquely naming the attack it just prevented (for example, deciding that something wanting to write to the Windows root directory, start up an FTP server, and open an IRC channel to an IP address in Korea is probably a bad thing and should not be allowed to run).

Luckily, in the cyber world we’re not as restricted as Heisenberg – we can simply employ both technologies together and use them to counter each other’s limitations. So, there you go. Legacy signature engines are great for naming the threats you’ve just stopped with a preemptive technology.
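As a companion to the earlier signature sketch, here is roughly what that behavioral verdict looks like in the same toy Python style. The action names, weights, and threshold are invented for illustration and bear no relation to any real product’s policy engine:

    # Score what a sample tries to do, rather than what its bytes look like.
    SUSPICIOUS_ACTIONS = {
        "write_windows_root": 3,   # dropping files into the Windows root directory
        "start_ftp_server": 4,
        "open_irc_channel": 4,     # e.g. phoning home to an IP address in Korea
    }
    BLOCK_THRESHOLD = 6

    def evaluate(observed_actions: list[str]) -> bool:
        """Return True if the combined behavior breaches policy and should be blocked."""
        score = sum(SUSPICIOUS_ACTIONS.get(action, 0) for action in observed_actions)
        # Whether this exact sample has been seen a thousand times before or
        # never at all makes no difference: only the behavior is judged.
        return score >= BLOCK_THRESHOLD

    # The example from the text scores 3 + 4 + 4 = 11, well over the threshold.
    evaluate(["write_windows_root", "start_ftp_server", "open_irc_channel"])  # True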
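Pairing the two hypothetical functions above shows the complementary arrangement in miniature – the preemptive engine decides, the signature engine names:

    def intercept(sample: bytes, observed_actions: list[str]) -> str | None:
        """Block on behavior; consult signatures only to name what was stopped."""
        if not evaluate(observed_actions):
            return None  # no policy breach – allowed to run
        # scan() may still return None for a brand-new threat; the block happens
        # regardless, and a name can be filled in once signature updates catch up.
        return scan(sample) or "Unknown (blocked on behavior)"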
If you’re prepared to wait a week or two before printing off the management reports, and download the latest signature updates in the meantime, you’ll probably be able to differentiate between Troj/ServU-ER, Troj/ServU-AQ, and Troj/Istbar-DG, and plot their respective graphs for an upcoming management meeting.

BTW – apologies in advance to my former Professor and any other physicists out there for a butchered explanation of Heisenberg’s Uncertainty principle. Unfortunately I found studying quantum physics about as enjoyable as watching men’s synchronized swimming.