SoftwareSecurity2013/Group 2/Code Scanning Reflection
Fortify
- Fortify finds A LOT of potential security issues. Many of them are, at first sight, not actual problems but merely possible ones, and a lot of them could be false positives. (Erik: A pedantic remark about terminology: a risk typically includes the chance of something happening. So I would talk about potential vulnerabilities, which may turn out to pose low risk, high risk, or no risk in the end.)
- Because of the huge number of issues found, it is difficult and time-consuming to find the real problems. So after running Fortify, you still need to invest a lot of time to extract useful information.
- There is an extensive set of categories, divided by impact severity. You can even filter by the OWASP Top 10, for example.
- When defining these categories, it would have been better to also cover the verification requirements defined by OWASP (ASVS). That would give a lot more information than the OWASP Top 10 alone.
- You can easily find the problems in the code by clicking on a reported issue. The relevant piece of code is then opened and the potential vulnerability is highlighted.
- A lot of information is given about each potential vulnerability (e.g. examples of the problem and how to fix it).
We have not yet learned anything specific about security vulnerabilities from using Fortify, mostly because the filters in Fortify do not match our verification requirements (Access Control and Data Protection). The number of unrelated problems found is also very high, and the many false positives obscure the real problems.
Fortify does not indicate whether a problem is related to our verification requirements, so we would have to manually check every problem it reports. While it is certainly possible that some of the reported problems relate to our verification requirements somehow, it is not very efficient to go over every single one of them. For example, a problem like "Cookie Security" could lead to an access control problem, but the underlying bug is actually an authentication problem, and "Weak Encryption" is really a cryptography problem.
A nice thing about Fortify is that you can add comments to the reported bugs, which are then visible when you generate the report. If you do not add any comments, the report looks ugly and poorly designed and gives no more information than the tool itself does; only by adding useful comments can you make it valuable. Once filled in, however, it is ready to show to customers and colleagues.
RATS
RATS was a bit clumsier to use than Fortify, because it is command-line only. It takes a few tries to get satisfying results, since you have to learn the right flags. An important one was the HTML flag, in combination with piping RATS's output to a file. This generates an easy-to-read HTML file, which you can open in your browser. Still, we miss a GUI like Fortify's, where you can easily browse through the problems and directly view the associated code. Here we had to manually look up the files (and the exact lines) in which RATS found problems.
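The workflow described above can be sketched as follows (assuming `rats` is on the PATH and the PHP sources live in a `src/` directory; both names are placeholders for your own setup):

```shell
# Scan the source tree and emit HTML instead of the default plain text;
# redirecting stdout to a file gives a page you can open in a browser.
rats --html src/ > rats-report.html
```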
RATS is a lot less thorough than Fortify. It does not try to understand the code and it does not seem to track the program flow. It only seems to check for known problematic PHP functions, or incorrect usage of functions.
The advantage of RATS over Fortify is that it reports fewer issues, which keeps the number of false positives smaller.
The report generated by RATS is quite ugly, while Fortify produces a nicely formatted one. With some effort you can send a report generated by Fortify to a customer, while the RATS report is really just a list of possible errors in the system.