So you want to be a pentester
There are a ton of different types of pentesting you can do. What we'll talk about here is website pentesting. This is the type of pentesting that helps developers secure websites that you and other people, potentially globally, will have access to, so it's pretty important. But there are a lot of ways to remove the value it can bring project owners. Some testers, for instance, will only use automated tools, not understand the output they generate, and just hand over a report. This is unhelpful because as a pentester it's your duty to know why vulnerabilities appear in Burp and how to resolve them. Building off of that, as well as knowing the vulnerabilities, you should understand any solutions you offer and avoid canned suggestions. Today we will review the necessary prep work, what tools to use, common vulnerabilities to look for, and how to put it all together in a report. This post is intended for people who are new to web app pentesting, or those who want to know, at a semi-technical level, what's needed in order to succeed. Also note, all demonstrations use either culbertreport.com or a vulnerable version of OrangeHRM/LotusCMS.
The paperwork
The typical scope discussion and documentation will tell you which URLs, like subdir1.staging.culbertreport.com, are in scope; whether you're testing the staging environment (and you should really only test there); and whether it contains real data. You need to know whether the data you may see is potentially real PHI, for example, as there are additional liabilities around this and BAAs that need to be signed. You also may or may not be given an account to use, and you should be given a point of contact who controls the web app environment. This contact is there to relay findings to, but also in case you get locked out of the testing environment, something gets broken, or an attack accidentally leaks over to production and needs to be remedied.
Tools to use
Burp Suite
Burp Suite Enterprise
Burp Enterprise is fantastic for automating the more mundane portions of your engagements, for passively collecting vulnerability details as you work, and for running auditing scans. If your organization can afford an enterprise license, the tools that come with it will make the cost well worth it. Some highlights worth noting include the auditing scans, which run a large catalogue of checks against every URL in scope - though it's very important that these only run against a staging environment that can afford to break, and you should not rely on them to find everything exploitable. Another highlight is that while you browse the site, Burp will passively note vulnerable components - for instance, a vulnerable JS dependency that your manual investigation missed.
OWASP ZAP
SQLMap
- sqlmap -r request.txt # SQLMap will figure out which parameter in the request is injectable
- sqlmap -r request.txt --dbs # We then enumerate the available databases
- sqlmap -r request.txt -D target_DB --tables # Get the tables from the target DB
- sqlmap -r request.txt -D target_DB -T target_table --dump # Then finally dump the contents of the table
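If you haven't used the -r flag before, request.txt is just a raw HTTP request saved from your proxy (Burp lets you save a request to a file from the right-click menu). As a rough sketch, assuming a made-up login form on the demo site, it might look something like:

POST /login HTTP/1.1
Host: cr.culbertreport.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 29

username=admin&password=test1

SQLMap will then probe each parameter in that request for injectable behavior.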
SSLScan
This tool, for the most part, is covered by what's provided through Burp Enterprise. But if you don't have one of those licenses, it will be a nice complement to your toolkit. SSLScan examines encrypted services, such as HTTPS, and enumerates all the ciphers they support. This is the non-flashy side of web app testing, but determining whether weak ciphers are in use, or whether protocols that would be regulatory failings are enabled, is very important.
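Running it is as simple as pointing it at the host and port. For example, against the demo site:
- sslscan cr.culbertreport.com:443 # Lists the protocol versions and cipher suites the HTTPS service will accept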
Common vulnerabilities you'll test for
CSRF
XSS
- The most common one I've seen is stored, where you enter something like a comment on a site and any visitors who view it afterwards are affected.
- Following that is reflected, which is when you send someone a link like https://cr.culbertreport.com/search?q=<script>alert(1)</script> and upon clicking it they trigger the alert popup.
- And finally there is DOM-based XSS. This refers to the document object model and can be thought of similarly to reflected XSS, but the payload is executed by client-side JavaScript rather than being echoed back by the server. This one will be by far the most complicated to attack, and I really encourage anyone curious to read the OWASP entry as it explains it best. It is also the only XSS that can be executed in such a way that the server has no idea the user fell victim to it: by placing the payload after a # fragment in the URI, it is never sent to the server and is loaded client side by the DOM.
The typical protections for this are to escape special characters and sanitize user input. Escaping in this case means neutralizing potential HTML characters like "<" through methods like encoding. Sanitizing means stripping those special characters entirely from the supplied input.
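To make the escaping option concrete, here's a minimal sketch in Python (the rendering function is made up for illustration and isn't from the demo apps):

import html

def render_comment(comment):
    # Unsafe: dropping the raw comment straight into the page means a stored
    # payload like <script>alert(1)</script> executes for every visitor.
    # unsafe = "<div class='comment'>" + comment + "</div>"

    # Escaped: HTML special characters are encoded so the browser renders
    # them as text instead of interpreting them as markup.
    return "<div class='comment'>" + html.escape(comment) + "</div>"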
SQLi
There are a number of protections against SQL injection, ranging from using prepared statements to doing what we did with XSS: escaping the supplied input and treating it as a string. They each have pros and cons, and it's important to remember that no solution is perfect.
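As a minimal sketch of a prepared (parameterized) statement, assuming Python with the standard sqlite3 module and a hypothetical users table:

import sqlite3

conn = sqlite3.connect("app.db")  # example database

def find_user(username):
    # The ? placeholder sends the username as a bound value, separate from
    # the statement text, so input like ' OR '1'='1 stays plain data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchone()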
Directory traversal
Protecting against this is typically about as simple as the attack itself. First, ensure that user input is valid and remove unexpected characters. Second, when processing resource requests, append the requested path to the web root's canonical path, canonicalize the result, and verify it still sits inside that root. This ensures that any requests stay inside the website's root directory.
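A minimal sketch of that check in Python (the web root path is just an example):

import os

WEB_ROOT = os.path.realpath("/var/www/site/public")  # example web root

def resolve(requested_path):
    # Join the user-supplied path onto the web root, then canonicalize it
    # so sequences like ../../../etc/passwd collapse to a real location.
    full_path = os.path.realpath(os.path.join(WEB_ROOT, requested_path))

    # If the canonical path escaped the web root, refuse to serve it.
    if not full_path.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path traversal attempt blocked")
    return full_path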
User account takeover
This falls under broken access control - a user should only have access to their own resources, so any requests like this need to be validated against their permissions. If proper validation were in place, the back end would see that the requested user ID to edit did not match the requesting user's ID or permission level and kick back an error.
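As an illustrative sketch of that validation (the User type and handler are hypothetical, not taken from the demo apps):

from dataclasses import dataclass

@dataclass
class User:
    id: int
    is_admin: bool = False

def update_email(current_user: User, target_user_id: int, new_email: str) -> None:
    # The account being edited must belong to the authenticated user,
    # or the caller must be an administrator; otherwise reject the request.
    if target_user_id != current_user.id and not current_user.is_admin:
        raise PermissionError("403: cannot edit another user's account")
    # ... persist the change here ...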
Privilege escalation
Developers should ensure that all requests are validated against the user's permission level. Both of these scenarios, forced browsing and accessing other users' information, also fall under broken access control.
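One common way to enforce that, sketched here as a hypothetical decorator a route handler could use (not code from the demo apps):

from functools import wraps

def require_role(role):
    # Reject the request unless the authenticated user holds the required
    # role, no matter how they discovered the URL (forced browsing included).
    def decorator(handler):
        @wraps(handler)
        def wrapper(current_user, *args, **kwargs):
            if role not in getattr(current_user, "roles", []):
                raise PermissionError("403: insufficient privileges")
            return handler(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(current_user, target_user_id):
    ...  # admin-only action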
Sensitive information disclosure
Returning errors like this is really handy in development, since it helps pinpoint exactly what is breaking, but in a production environment it gives away too much information. Instead, return generic error pages that only let users know something went wrong or the requested page is missing.
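A framework-agnostic sketch of that pattern in Python: keep the full detail in server-side logs for the developers, and hand the client only a generic message.

import logging

logger = logging.getLogger("app")

def handle_request(process):
    try:
        return 200, process()
    except Exception:
        # The stack trace stays in the server-side logs...
        logger.exception("Unhandled error while processing request")
        # ...while the client learns nothing about frameworks, paths, or queries.
        return 500, "Something went wrong. Please try again later."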
Malicious file uploads
Developers should use an allow list of extensions and avoid deny lists - the number of possible extensions is far too large for any one person to keep track of. In addition, you should review the allow list to make sure it only contains the bare minimum of file types needed for your application to function properly.
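A minimal sketch of an extension allow list in Python (the permitted extensions here are just examples):

import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pdf"}  # example allow list

def is_allowed_upload(filename: str) -> bool:
    # Check only the final extension, lower-cased, against the allow list;
    # anything not explicitly permitted (.php, .jsp, .aspx, ...) is rejected.
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS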
You found a vulnerability, so what?
Finding the vulnerability is not the end; your next responsibility is helping the client and developers fix the issues. Understanding what goes into fixing these issues is an important skill for quality pentesters to have.
There's no shame in looking up the vulnerability on OWASP and finding a suggested solution there, but you should understand why that solution works in the supplied example and be prepared for developer use cases to veer from the recommendations. Also be prepared for company priorities to shift and your finding to be downgraded. Just because OWASP says it's a high doesn't mean your client or development team will feel the same way, and ultimately it's their issue to deal with how they want.
Take SQL injection, for example. This is a common vulnerability to encounter and OWASP has a number of suggestions for protecting against it, so which should you pick? You have prepared statements, stored procedures, allow-list input validation, and escaping all user-supplied input. They each have pros and cons. For instance, escaping user-supplied input assumes that you are actually catching all escape attempts. On the other hand, prepared statements are generally thought of as stopping SQL injection attempts against the parser, but they still leave things like logging who writes what and user-defined triggers vulnerable to SQL injection. Not to mention some people concatenate supplied input into the statement text when building prepared statements, which defeats the purpose. There are a lot of ways that attempting to solve an issue can actually open you up to further damage, so understanding the environment and the solutions you suggest is critical.
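To illustrate that last pitfall, here's a sketch (again using Python's sqlite3 and a made-up products table) of a query that is still injectable despite "using" a prepared statement, next to the properly parameterized form:

import sqlite3

conn = sqlite3.connect("app.db")  # example database

def search_products(term):
    # Still injectable: the statement text is built by concatenating user
    # input, so "preparing" it afterwards changes nothing.
    # conn.execute("SELECT * FROM products WHERE name LIKE '%" + term + "%'")

    # Properly parameterized: the search term travels as a bound value.
    return conn.execute(
        "SELECT id, name FROM products WHERE name LIKE ?", ("%" + term + "%",)
    ).fetchall()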
Writing your report
This can vary from organization to organization, but typically the report will include a high-level discussion of the issues detected, their impact, and a total count of all the detected vulnerabilities, followed by a table of each finding and its supporting evidence. You should sort the detected vulnerabilities from high to low so that important issues catch the reader's attention early, and include clear steps for how to reproduce the detected issues to make developers' lives easier. The faster the developers can see an issue in action, the faster they can determine where a fix needs to be placed and get it rolled out to staging.
The high-level discussion allows you to outline the vulnerabilities you thought were most notable and discuss their potential impact if left unpatched. Getting across the right amount of urgency is crucial: too little will result in people leaving gaping holes, while too much will have them treating future findings much more lightly than they should. Think about the potential impact to client and customer data, or the reputational impact, in order to judge whether something needs to be patched in a week or in 90 days.
Adding as much detail as possible to the client-facing report will reduce frustration on both ends, as the developers can begin implementing fixes and the testers can focus on other work that needs to be done. Make sure you have your target audience in mind when writing this too, as going too technical will result in misunderstandings, while leaving it too high level will have the report's readers scratching their heads trying to figure out exactly what you meant.