SoftwareSecurity2013/Group 1/Reflection

Choices

Below we reflect on the choices we made in advance and on the use of different tools in the process of scanning the code base of an IT project.

  • We chose to rewrite the OWASP set of requirements into our own set of translated requirements for FluxBB. We are happy with this choice, since it was more practical and down-to-earth to work with more concrete points that fit the context of the (open source PHP) project. Although the translated requirements are more concrete, they are also a bit too specific for the job at hand, so we do not recommend using them as a replacement for the OWASP security requirements. We are convinced, though, that more language-specific OWASP requirements would be suitable: that way, e.g. OWASP V5.1 (buffer overflows) would not appear in the PHP version, and programmers wanting to check their code would not be bothered with requirements that do not suit their language and/or environment.
  • We chose to use the output of the different tools to structure our manual code checking. We are happy with this choice, since it allowed us to use the stack traces during the manual code checking. Although it took a while to find out how to interpret the output of these tools, we are convinced that, thanks to our choice to check every file twice (see the next point), we used the full potential of the different tools.
  • We chose to check every file twice. We are quite unhappy with this choice in the sense that it took twice as much work to complete the assignment. On the other hand, the vulnerabilities that we found were each found by only one of the members who checked the file, not by both. If we had chosen to check every file only once, we think that would have affected our results in a negative way.
  • We chose to introduce some additional code scanning tools (Nessus, RIPS, RATS). Although these tools introduced some additional work, they gave a better insight into the vulnerabilities involved in this project. Not every tool was as useful as expected LINK, but we are happy with the choice to have at least the possibility to use different tools.
  • We initially made a list of Points of Interest, in which we noted all possible user inputs, e.g. $_SERVER and $_GET. We sometimes used the list to check (with a grep command) whether we had overlooked anything useful within a file, but in the end it was hardly used. The amount of work and time invested in this POI file was so small that we would probably use such a list again; a sketch of this kind of scan is given after this list.
  • We chose not to check the UTF-8 files input-wise, but instead compared the old library version used in FluxBB to the new version we found [1]. Except for some minor changes, the two code bases were comparable and showed no sign of fixes for significant problems. We are glad that we chose not to manually check the whole set of files, for two reasons. First of all, it saved us a lot of time and effort; furthermore, we are convinced that we would hardly have spotted any vulnerabilities, since this piece of code is widely used and has probably already been checked thoroughly by others.
  • We chose to accept a few basic properties of PHP, e.g. that buffer overflows are hard to force. We are glad we chose to accept these basics, because verifying them is out of scope for our project work; moreover, it focused our work on the actual code base of FluxBB.
  • We chose to be quite firm in rejecting some "design choices" of FluxBB, e.g. the fact that it hardly does any sanitization checks within the admin part. We could not find hard evidence whether this actually was a design choice or just bad programming; either way, we do not agree with this architecture. We are happy with this choice, since it would have been a lot of work to describe all the different aspects of these design choices, while they are quite comparable and easily fixable; a sketch of the kind of check we have in mind is given after this list.
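
To illustrate the Points of Interest approach mentioned above: we simply grepped the code base for the PHP superglobals through which user input can enter the application. The sketch below expresses the same idea as a small stand-alone PHP script; the file-extension filter and the exact list of superglobals are our own assumptions, not part of FluxBB or of any of the tools we used.

    <?php
    // Minimal sketch of a "points of interest" scan: report every line in a PHP
    // code base where user-controlled data can enter via a superglobal.
    // (We used a plain grep for this; the script is just the PHP equivalent.)
    $pattern = '/\$_(GET|POST|REQUEST|COOKIE|SERVER|FILES)\b/';
    $root    = isset($argv[1]) ? $argv[1] : '.';

    $files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
    foreach ($files as $file) {
        if (!$file->isFile() || $file->getExtension() !== 'php') {
            continue;
        }
        foreach (file($file->getPathname()) as $lineNumber => $line) {
            if (preg_match($pattern, $line)) {
                printf("%s:%d: %s", $file->getPathname(), $lineNumber + 1, $line);
            }
        }
    }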
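
The remark about FluxBB's admin part refers to the kind of input check sketched below. This is a hypothetical example, not FluxBB code: the parameter name, the query and the surrounding logic are made up. The point is only that a request parameter is validated (here by casting it to a positive integer) before it is used, instead of being trusted because the page is only reachable by administrators.

    <?php
    // Hypothetical sketch: validate a request parameter before using it, even on
    // admin-only pages. The parameter name and the query are made up and do not
    // refer to actual FluxBB code.
    $forum_id = isset($_GET['forum_id']) ? (int) $_GET['forum_id'] : 0;
    if ($forum_id < 1) {
        header('HTTP/1.1 400 Bad Request');
        exit('Invalid forum id.');
    }

    // Because of the (int) cast, $forum_id can safely be used in a query; for
    // string parameters an escape function or a prepared statement would be
    // needed instead.
    $query = 'SELECT subject FROM topics WHERE forum_id = ' . $forum_id;
    echo $query, "\n"; // placeholder for the actual database call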

OWASP Evaluation

When we started the code review, it was immediately clear that the OWASP requirements for input validation were not really specified for the targeted environment. For this reason we translated the requirements into more specific, language-based requirements and took these as the starting point for the input validation review of FluxBB. The structure of the ASVS provided a good guideline for covering all aspects of code scanning: if the list is followed and all points are validated, the code has been checked against all known classes of bugs. Although the list is quite complete, the quality of the manual code scan remains a variable factor, because the experience and knowledge of the person doing the scanning differ and might not be optimal.

The set-up of the assignment, the division of the verification requirements over the group members, is a good approach. This is especially the case when some requirements are strongly related: people tend to skim over certain issues if they appear similar. By dividing the tasks, this problem is mitigated.

Another good consequence of using the OWASP requirements is that the assessor will look at code differently when he or she develops new code. (Erik: good point!) The OWASP requirements give a good overview of the problems that exist in developers' projects. By adopting this knowledge, the programmer will construct code in a way that is easier to verify and hopefully complies with these requirements. This makes the OWASP requirements very valuable and an excellent starting point for every developer.

A general remark is that the ASVS is very useful, but that it is exhausting to check every aspect of it. If the application does not have a big security impact, or the chance of an attack is low, such a check may be overkill; in that case the ASVS is more useful for creating security awareness among developers than as a security check. Nevertheless, in both cases it contributes to better security.

Tools

When it comes to the simplicity of getting some output, RATS is definitely the winner. When it comes to accuracy and the toolset, Fortify is the winner. However, Fortify has a steep learning curve and it costs a lot of time to generate rules. We are certain this pays off if a development team uses it in its development cycle (i.e. the rules only have to be created once), but doing audits on a regular basis for different projects would cost a lot of time and money.

Group meetings

We had a meeting every Tuesday at 10 o'clock and kept working on the project until 4 o'clock in the afternoon. We first identified what everybody had done since the previous meeting. Based on this, we decided what had to be done during the meeting. When issues occurred, we discussed them in the group. Afterwards, we looked at what still needed to be done and came up with an appropriate schedule of things to do before the next meeting.