Shattering Client-side Applications
First Published: SC Magazine

Over the last few months I have had a number of discussions with clients and participants at open forums relating to software vulnerabilities, and what can be done for long-term protection or risk management. A point often made by participants is that “our biggest concern is that Microsoft’s software is full of security holes”, closely followed by “why can’t they just write secure code?” – and usually terminated by a lot of head nodding or grunts of agreement.

From my own perspective – i.e. providing professional penetration testing services – there certainly are a large number of vulnerabilities in the Microsoft products my clients operate in critical areas, and these can be successfully exploited if the hosts haven’t been fully patched or properly hardened. Patching and system hardening, then, are the obvious caveats.

Without sounding like I’m taking a soap-box stance, I have to question the legitimacy of the Microsoft bashing. With access to a dedicated security research team who break software as a day job, I can say that Microsoft’s software actually fares pretty well compared to most commercial “off-the-shelf” software we assess. It just tends to be much bigger and more complex, so the opportunity for finding “something” is greater. Then again, that size and complexity is driven by the inclusion of advanced features – features that make the software popular with the customers who buy it.

I like to point out that, for all of Microsoft’s published flaws, they actually go to considerable lengths to train their developers in secure coding, implement and police secure coding policies, and are generally a very security-conscious company. I then ask clients what lengths their own organisation goes to in securing its internally developed software. In most cases they’d be lucky to have sent a team of developers on a C++ course within the last six months – let alone given them any training in secure coding practices.

If, despite all this, security flaws still make it into Microsoft’s commercial software, why should an organisation believe that the security of its in-house developed – critical – software is any better?

This is certainly borne out during onsite penetration tests or security assessments against environments that include newly developed corporate applications. Without doubt, the easiest way to compromise data integrity and access restricted information is through the applications the client has developed themselves.

Many of the findings can be attributed to a lack of basic application security knowledge or understanding of modern attack vectors. There is a strong tendency for internal development teams to assume that “some other” security device or procedure will identify and prevent an attack.

The most common vulnerabilities include storing confidential information locally on the client (e.g. caching user login credentials, off-line copies of customer address details, detailed transactional logs and debug files, etc.), using clear-text communication protocols for login and validation purposes, depositing and retrieving critical data files from public shares, and a reliance on client-side data validation.
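The first of these is also the easiest to avoid. As a minimal sketch (in Python, purely for illustration – function names and the iteration count are my own choices, not anything from the assessments described here), an application that needs to re-check a user’s password locally can store a salted hash and re-derive it on each login, so the clear-text credential is never written to disk:

```python
import hashlib
import hmac
import os

def hash_credential(password, salt=None):
    """Derive a salted hash suitable for local storage; the clear-text
    password itself is never persisted."""
    if salt is None:
        salt = os.urandom(16)          # fresh random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_credential(password, salt, stored):
    """Re-derive the hash from the supplied password and compare in
    constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

An attacker who recovers the stored salt and digest still has to brute-force the password, rather than simply reading it out of a cache file.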

While I hope that the message is getting through to corporate organisations that client-side content checking in web-based applications provides absolutely no security to the application, the message has fallen on deaf ears for developers of internal compiled applications. In almost all cases developers have assumed that the user of their software (whether that be thin-client, thick-client, or something in between) would either never attempt to subvert the client application or not have the technical ability to do so. Therefore, they naively believe that client-side content checking is all the security they require.

In many security assessments of internally developed compiled software, I see the same classes of vulnerabilities as I would expect in poorly developed web-based applications – except this time the financial impact is often many times greater. For instance, several recent applications built SQL queries from user-supplied data to retrieve records from a central database. Using a standard debugger, it was possible to modify the data submissions (i.e. SQL Insertion) to pull back additional confidential data, or even obtain command shells on un-hardened database servers. In one case this counted as an “advanced” finding – since the application stored its database DSN connection strings locally in clear text, and users could create their own connections with Microsoft Access anyway (the developers didn’t realise this was possible).
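The flaw class is easy to demonstrate. The sketch below (Python with its built-in sqlite3 module; the table and data are invented for illustration, not taken from any client engagement) shows why: a query assembled by string concatenation lets a tampered client submission rewrite the SQL, while a parameterised query treats the same submission purely as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (holder TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 2500)")

# A submission as a modified client might send it:
tampered = "alice' OR '1'='1"

# Vulnerable pattern: the query string is built from user-supplied data,
# so the injected condition matches every row in the table.
leaked = conn.execute(
    "SELECT holder, balance FROM accounts WHERE holder = '%s'" % tampered
).fetchall()

# Safe pattern: a parameterised query binds the submission as a literal
# value, so no row matches the tampered string and nothing leaks.
safe = conn.execute(
    "SELECT holder, balance FROM accounts WHERE holder = ?", (tampered,)
).fetchall()
```

The same principle applies whatever the database driver: the query structure is fixed by the developer, and user input only ever fills in value placeholders.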

In another assessment, client-side application logic disabled certain buttons from working (i.e. preventing the user from submitting bad data) if a monetary value was too high. Using standard tools within the locked-down user desktop, it was possible to write a simple “Shatter” attack to re-enable buttons as required. The absence of server-side checking meant that the high value transactions were accepted and security (i.e. data integrity) was successfully breached.
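The missing control in that case was trivial to add. A minimal sketch of the kind of server-side check that was absent (the function name and limit value are invented for illustration): the server re-validates every submission against its own policy, so re-enabling a disabled button on the client changes nothing.

```python
# Illustrative transaction limit; in practice this would come from the
# server's own policy store, never from anything the client sends.
TRANSACTION_LIMIT = 10_000

def process_transaction(amount):
    """Server-side handler: reject out-of-policy values regardless of
    what the client-side button logic allowed through."""
    if amount <= 0 or amount > TRANSACTION_LIMIT:
        return "rejected"
    return "accepted"
```

With this in place, the Shatter attack still re-enables the button – but the over-limit transaction is refused where it matters.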

So, if you think that the commercial software you buy has too many security flaws, you may want to have a closer look at your own development processes first.

Top Application Tips

  • Client-side checking can always be bypassed. The only secure place to check is at the server.
  • Never store confidential data locally in an unencrypted format.
  • Avoid storing login or access control data locally wherever possible.
  • Always harden back-end data storage hosts – even if they are only for internal use.
  • Make sure that all developers undergo regular secure development training and annual refreshers.

Copyright 2001-2007 © Gunter Ollmann