Top 10 Hacks of 2009 and WAF Mitigations
Jeremiah Grossman gave his “2010: A Web Hacking Odyssey – The Top Ten Hacks of the Year” talk here at RSA this morning, where he presented the Top 10 Hacks list gathered from readers of his blog. In preparation for his talk, he contacted me and asked if/when/how a web application firewall could be used to help mitigate these issues. What a great question! :) So, in case you were not able to attend his RSA talk today, I am going to outline which items can be addressed by WAFs.
HTTP Parameter Pollution (HPP) Luca Carettoni, Stefano diPaola
In a previous HPP post on my blog, I presented one approach that a WAF can take to identify potential HPP attacks: learn whether having multiple parameters with the same name is normal for a specific URL resource, and flag requests when unexpected duplicates are present. This type of behavioral profiling, which identifies deviations in request construction, is critical for identifying non-injection types of attacks, since most input validation is done on parameter payloads and not on the request as a whole. This helps identify some HPP attack variants, but it does not cover all of the example attack vectors from the presentation. For the business logic attacks where a new parameter is added that may alter a mid-tier HTTP request, a learning WAF should flag this as an anomalous parameter. Finally, for HPP attacks that aim to split attack payloads across multiple parameters of the same name in order to bypass negative security filters, the only real way to identify them is to mimic what the back-end web application will do with the request. In the case of ASP/ASP.NET, the app takes all of the payloads of parameters with the same name and joins them together into one payload, separated by commas. A WAF would need to do this as well, then take the new consolidated payload and run it through the standard security checks looking for attack payloads. As a matter of fact, we have added some experimental rules to the OWASP ModSecurity Core Rule Set Project v2.0.6 to do just this.
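To make the consolidation idea concrete, here is a minimal sketch of mimicking the ASP/ASP.NET duplicate-parameter behavior before applying a negative-security check. The helper names (`consolidate_params`, `looks_malicious`) and the toy signature are illustrative only, not the actual CRS rules:

```python
# Sketch: join duplicate parameters the way ASP/ASP.NET does (comma-separated),
# then run signatures against the consolidated payload instead of each fragment.
import re
from urllib.parse import parse_qsl

# Toy negative-security signature for demonstration purposes only.
SQLI_SIGNATURE = re.compile(r"select\s.+\sfrom", re.IGNORECASE)

def consolidate_params(query_string):
    """Merge duplicate parameter values with commas, mimicking ASP/ASP.NET."""
    merged = {}
    for name, value in parse_qsl(query_string, keep_blank_values=True):
        merged[name] = merged[name] + "," + value if name in merged else value
    return merged

def looks_malicious(query_string):
    """Run the signature against consolidated payloads, not individual fragments."""
    return any(SQLI_SIGNATURE.search(v) for v in consolidate_params(query_string).values())

# Each fragment alone evades the signature; the consolidated payload
# "select 1,2 from users" does not.
split_attack = "a=select+1&a=2+from+users"
```

Inspecting each fragment in isolation would miss this request entirely, which is exactly why the WAF has to replicate the back-end's parameter-joining behavior first.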
Slowloris HTTP DoS Robert Hansen (additional credit for earlier discovery to Adrian Ilarion Ciobanu & Ivan Ristic, who described the attack in the “Programming Model Attacks” section of Apache Security but did not produce a tool)
The DoS concept behind Slowloris is important, as many organizations don't truly understand the threat, how effective it can be, and how difficult it may be to identify if you are being hit by it. This is not the typical "flooding" type of attack where the network or web app is being saturated by HTTP requests. In those scenarios, there are other network security/infrastructure devices that may be able to identify and respond. In the case of Slowloris, however, the web app is basically in a holding pattern, waiting for the layer 7 HTTP request to complete. So, how can a WAF help? In an earlier post entitled "Identifying DoS Conditions Through Performance Monitoring," I outlined how a WAF can help identify a Slowloris type of attack by monitoring and learning the transactional metrics associated with the website content. Specifically, Breach's WebDefend appliance learns the key metric of how long it takes for a client to complete sending the HTTP request data to each resource. This is graphically displayed in the Performance dashboard, and it is easy to visually identify when there are request-receiving issues.
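The learning approach can be sketched in a few lines: baseline how long clients normally take to deliver a full request to each resource, then flag large deviations. This is an illustration of the behavioral idea only, not WebDefend's actual algorithm; the class and threshold values are hypothetical:

```python
# Sketch: learn per-resource request-receive durations and flag outliers
# that suggest a Slowloris-style slow-request condition.
from statistics import mean, stdev

class ReceiveTimeProfile:
    def __init__(self, min_samples=10, threshold_sigmas=4.0):
        self.samples = {}                  # resource -> list of durations (seconds)
        self.min_samples = min_samples
        self.threshold_sigmas = threshold_sigmas

    def observe(self, resource, seconds):
        self.samples.setdefault(resource, []).append(seconds)

    def is_anomalous(self, resource, seconds):
        history = self.samples.get(resource, [])
        if len(history) < self.min_samples:
            return False                   # not enough data to judge yet
        mu, sigma = mean(history), stdev(history)
        # Floor sigma so a perfectly uniform baseline still yields a threshold.
        return seconds > mu + self.threshold_sigmas * max(sigma, 0.001)

profile = ReceiveTimeProfile()
for _ in range(20):
    profile.observe("/login", 0.05)        # normal clients finish in ~50 ms
```

A Slowloris client that dribbles its headers out over minutes would stand far outside the learned baseline for the resource.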
On a more tactical note for Apache, it is possible to mitigate a Slowloris type of attack by doing two things:
1) Decrease the default Apache Timeout directive setting. By default it is set to 300 seconds which makes it quite easy for Slowloris to DoS the site. It should be lowered to something much smaller like 10-30 seconds.
2) Use the httpd-guardian Perl script from Ivan Ristic's Apache Security tools package with the ModSecurity SecGuardianLog directive. Having this external application monitor the Apache logs allows it to identify these automated attacks and issue alerts and/or blacklist rules for iptables.
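Putting both steps together, the Apache configuration might look something like the sketch below. The Timeout value and the httpd-guardian path are illustrative and will differ per system:

```apache
# Lower the default 300-second Timeout so slow clients cannot hold workers.
Timeout 15

# Pipe ModSecurity's guardian log to Ivan Ristic's httpd-guardian script,
# which watches request patterns and can issue alerts or blocking rules.
SecGuardianLog |/usr/local/bin/httpd-guardian
```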
Microsoft IIS 0-Day Vulnerability Parsing Files (semi‐colon bug) Soroush Dalili
The concept of Impedance Mismatch is a recurring theme with these issues. Correctly parsing uploaded file information can be tricky, as you must interpret the file metadata (such as the filename) in the same way as the web app. In this particular case, the attacker tricks the application's file upload resource by appending a bogus file extension after a semi-colon; the IIS server, however, interprets the file as an ASP page and executes it. Here, a WAF must get the filename parsing correct and enforce allowable character sets. The second part is to do some actual file upload inspection to identify what the uploaded file actually is. ModSecurity has the @inspectFile operator, which temporarily dumps the file attachment to disk and allows for AV scanning or some other custom logic. This can help verify that the file type is actually what you are expecting.
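The parsing pitfall is easy to demonstrate. The sketch below interprets an uploaded filename the way IIS 6 effectively does (everything from the semi-colon onward is ignored) and enforces an extension allow-list on that view. This is a hypothetical check for illustration, not ModSecurity's implementation:

```python
# Sketch: resolve the extension IIS 6 would actually execute by truncating
# the filename at the first ";" before checking it against an allow-list.
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "gif", "txt"}

def effective_extension(filename):
    """Return the extension as IIS 6 sees it: truncate at ';' first."""
    effective = filename.split(";", 1)[0]
    return effective.rsplit(".", 1)[-1].lower() if "." in effective else ""

def upload_allowed(filename):
    return effective_extension(filename) in ALLOWED_EXTENSIONS

# "shell.asp;.jpg" passes a naive suffix check on ".jpg", but the
# effective extension is "asp" and the upload should be rejected.
```

A filter that only inspects the trailing extension sees ".jpg" and waves the file through; matching the server's actual parsing closes that gap.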
Exploiting unexploitable XSS Stephen Sclafani
For XSS, it is important to try to identify the root cause of the problem: web apps that fail to properly track user-supplied data and apply appropriate output escaping. From a WAF perspective, it is possible to identify reflected XSS attacks by mimicking the Dynamic Taint Propagation concept of tracking user-supplied data and seeing where it is misused. In this case, we want to inspect any request data to see if it might have meta-characters that are used in XSS attacks and then capture the full parameter payloads. We then inspect the response body content to see if the same data is present. If it is, then the application is not properly output-escaping user-supplied data. I outlined this concept and showed some examples using ModSecurity in a previous Blackhat DC presentation.
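The taint-mimicry check reduces to a simple idea: if a parameter containing XSS meta-characters reappears verbatim in the response body, the app is not escaping its output. The helper below is a hypothetical sketch of that logic, not the ModSecurity rule itself:

```python
# Sketch: flag request parameters with XSS meta-characters that are
# reflected unescaped in the response body.
XSS_META = ("<", ">", '"', "'")

def unescaped_reflection(params, response_body):
    """Return names of parameters whose risky payloads echo verbatim in the response."""
    flagged = []
    for name, value in params.items():
        if any(ch in value for ch in XSS_META) and value in response_body:
            flagged.append(name)
    return flagged
```

A response that HTML-escapes the payload (e.g., `&lt;script&gt;`) no longer contains the raw value, so properly escaped reflections are not flagged.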
Our Favorite XSS Filters and how to Attack them Eduardo Vela (sirdarckcat), David Lindsay (thornmaker)
Ahh, the fine art of filter evasion... Let me be clear: it is not possible to have 100% protection from XSS payloads if you are using only a negative security model approach. There are just too many ways that an attacker can construct functionally equivalent code and bypass signatures. The only real hope you have is when your web application should not accept *any* HTML data. If your app has to allow HTML data but you want to filter out malicious payloads, then looking at something like OWASP AntiSamy is a good choice. One important note about filter evasions and XSS: most people believe that if an attacker is able to bypass the filter, he/she wins. In practice, that is not always the case. What I have seen is that XSS payloads often have to be munged so much in order to bypass the filter that they no longer execute in the target's browser. In an attempt to improve XSS negative signatures, we launched the ModSecurity CRS Demo page, which allows the community to send attacks and see if they can evade the rules. This has been a great research tool to help us improve our signatures in both ModSecurity and WebDefend.
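The core weakness of negative signatures can be shown in a few lines: a naive blacklist regex and a functionally equivalent payload that slips past it. This is a deliberately simplistic demonstration, not any real filter's logic:

```python
# Sketch: a naive blacklist catches the obvious payload but misses a
# functionally equivalent one that executes the same script in the browser.
import re

NAIVE_FILTER = re.compile(r"<script", re.IGNORECASE)

def naive_blocks(payload):
    """Return True if the toy blacklist would reject this payload."""
    return bool(NAIVE_FILTER.search(payload))

blocked = naive_blocks('<script>alert(1)</script>')      # the obvious form is caught
evaded  = naive_blocks('<img src=x onerror=alert(1)>')   # same effect, missed entirely
```

Event-handler attributes, encodings, and malformed markup give attackers an effectively unbounded supply of equivalent forms, which is why a pure blacklist can never reach 100%.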