SoftwareSecurity2014/Group 3/Code Scanning



Fortify

We have used the Fortify tool to inspect FluxBB 1.5.6. The tool came up with the following results:

  • Critical - 245
  • High - 1153
  • Medium - 124
  • Low - 880
  • Total - 2402

We analysed these results for our verification requirement and found, in each severity category, a number of (possible) issues related to input validation. These are listed in the next subsection.

Results

Critical

  • 4 Dangerous file inclusions
  • 2 Dynamic code evaluation / code injection
  • 3 Open redirect
  • 55 SQL injections

(64 in total)
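
By far the most frequent critical finding is SQL injection. To give an idea of what such a finding boils down to, here is a small sketch; it is not taken from FluxBB, and the connection details, table and column names are made up for illustration:

  <?php
  // Hypothetical sketch of the SQL injection pattern; the connection, table
  // and column names are made up and not taken from FluxBB.
  $mysqli = new mysqli('localhost', 'user', 'password', 'forum');

  // Flagged pattern: user input is concatenated directly into the query text.
  $id = $_GET['id'];
  $result = $mysqli->query('SELECT subject FROM topics WHERE id = ' . $id);

  // Safer alternative: a prepared statement keeps the input out of the SQL text.
  $stmt = $mysqli->prepare('SELECT subject FROM topics WHERE id = ?');
  $stmt->bind_param('i', $id);
  $stmt->execute();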

High

  • 54 Command injection
  • 12 Dangerous file inclusion
  • 2 Dynamic code evaluation / code injection
  • 39 Path manipulation
  • 36 SQL injections

(143 in total)
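
Several of these categories (dangerous file inclusion, path manipulation) come down to request data ending up in a file path. A small sketch of what that typically looks like; it is not FluxBB code, and the parameter name and whitelist values are made up:

  <?php
  // Hypothetical sketch of a dangerous file inclusion / path manipulation;
  // the 'style' parameter and the style names are made up.

  // Flagged pattern (shown as a comment): the request parameter goes into the
  // include path unchecked, so '../' sequences could end up in the path:
  //   include 'style/' . $_GET['style'] . '.php';

  // Safer alternative: only accept values from a fixed whitelist.
  $allowed = array('Air', 'Earth', 'Fire');
  $style = in_array($_GET['style'], $allowed, true) ? $_GET['style'] : 'Air';
  include 'style/' . $style . '.php';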

Medium

  • 2 File upload

(2 in total)
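
The file upload findings typically mean that the name or type of an uploaded file is trusted as-is. A small sketch of the kind of check such a finding asks for; this is hypothetical, not FluxBB code, and the form field and target directory are made up:

  <?php
  // Hypothetical sketch of validating an uploaded file; the 'avatar' field
  // and the target directory are made up.
  $tmp  = $_FILES['avatar']['tmp_name'];
  $name = basename($_FILES['avatar']['name']);   // drop any path components
  $ext  = strtolower(pathinfo($name, PATHINFO_EXTENSION));

  // Only accept a small set of extensions and verify the file really is an image.
  if (in_array($ext, array('png', 'gif', 'jpg'), true) && getimagesize($tmp) !== false) {
      move_uploaded_file($tmp, 'img/avatars/' . $name);
  }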

Low

  • 1 Command injection
  • 9 Open redirect
  • 26 Resource injection

(36 in total)

For each type of problem it finds, Fortify gives a description of why the finding might be a problem. For example, for the Open Redirect (Shared Sink) type it argues that "Allowing unvalidated input to control the URL used in a redirect can aid phishing attacks." This makes it very easy for Fortify users to see what is wrong and why it is wrong, and it can also give them an indication of how the issue could be fixed. We did not copy Fortify's message for each type of error, as that seems rather redundant to us. For more information on Fortify's findings, we refer to its report for FluxBB.
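
To make the open redirect case concrete: the pattern Fortify refers to, and a possible way to validate it, look roughly as follows. This is a hypothetical sketch, not FluxBB code, and the parameter name is made up:

  <?php
  // Hypothetical sketch of an open redirect; the 'redirect_url' parameter is made up.
  $target = isset($_GET['redirect_url']) ? $_GET['redirect_url'] : 'index.php';

  // Flagged pattern (shown as a comment): sending the visitor to any
  // attacker-chosen URL, which is what aids phishing:
  //   header('Location: ' . $target);

  // Safer alternative: only allow simple relative paths inside the forum itself.
  if (!preg_match('#^[A-Za-z0-9_./?=&-]+$#', $target)
          || strpos($target, '//') !== false
          || strpos($target, '..') !== false) {
      $target = 'index.php';
  }
  header('Location: ' . $target);
  exit;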

Verdict

Some of the issues Fortify came up with are definitely valid in our eyes. However, some issues are quite dubious, or clearly not issues at all. We think we can use these results for our verification requirement, even though there are some false positives. Many of the problems it reports are true positives and should indeed be addressed (some with high priority). (Erik: Some indication of what proportion of issues you think are dubious/not issues at all/true positives would have been nice, and also your basis for concluding this. E.g. what is the basis for your claim that many issues are true positives? No need to look at all warnings in depth - that's not really feasible - but some indication of what your statements are based on would be nice - even if you admit that it is just a gut feeling after glancing at a few of them)

A disadvantage (and advantage?) of the tool is that it comes up with a lot of problems, over 2000 in total. This means that if you attempt to fix problems of a certain category, say input validation, it will take some time and patience to check them all. Even though the tool categorises all problems both by severity (critical/high/medium/low) and by sub-category (SQL injection and such), there is no 'input validation' category. Such a category overlaps several of the categories Fortify does use, so you will have to walk through all issue categories.

RATS

We have also used RATS 2.4 on FluxBB 1.5.6, which came up with the following results.

  • Critical - 0
  • High - 6
  • Medium - 4
  • Low - 48
  • Total - 58

Results

High

  • 5 untrusted use of fopen
  • 1 untrusted use of mail

(6 in total)
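
An "untrusted use of fopen" means that data that may come from the request ends up in the file name. A small sketch of the flagged pattern and a possible fix; this is hypothetical, not FluxBB code, and the parameter and log file are made up:

  <?php
  // Hypothetical sketch of an untrusted fopen; the 'board' parameter and the
  // log directory are made up.

  // Flagged pattern (shown as a comment): the file name is derived from user input:
  //   $log = fopen('logs/' . $_GET['board'] . '.log', 'a');

  // Safer alternative: constrain the input before it reaches fopen.
  $board = preg_replace('/[^a-z0-9_]/', '', strtolower($_GET['board']));
  $log = fopen('logs/' . $board . '.log', 'a');
  fwrite($log, date('c') . " visit\n");
  fclose($log);

  // A similar argument applies to mail(): untrusted input in its header
  // arguments allows mail header injection.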

Medium

  • 3 TOCTOU (Time Of Check, Time Of Use) vulnerability
  • 1 untrusted use of fsockopen

(4 in total)
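
A TOCTOU finding means that a file is checked first and used later in a separate step, so it can be changed in between. A small sketch of the pattern and a possible mitigation; this is hypothetical, not FluxBB code, and the cache file name is made up:

  <?php
  // Hypothetical sketch of a TOCTOU (time-of-check, time-of-use) issue; the
  // cache file name is made up.
  $cache = 'cache/generated_cache.php';
  $data  = "<?php // generated cache\n";

  // Flagged pattern: the file can be replaced (e.g. by a symlink) between the
  // is_writable() check and the actual write.
  if (is_writable($cache)) {
      file_put_contents($cache, $data);
  }

  // Mitigation sketch: skip the separate check and handle failure of the write
  // itself, using an exclusive lock while writing.
  if (@file_put_contents($cache, $data, LOCK_EX) === false) {
      exit('Unable to write cache file');
  }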

Low

  • 2 TOCTOU (Time Of Check, Time Of Use) vulnerability
  • 46 fixed size local buffer

(48 in total)

RATS doesn't return a nice report like the one Fortify produces; it can output its findings as plain text, XML or HTML. It gives only very short descriptions of the issues, and the issues are not grouped by severity. The tool also does not suggest a possible solution. For all of RATS's findings, we refer to Media:Thomas_Nagele_RATS_scan.pdf.

Verdict

RATS doesn't find as many severe security vulnerabilities as Fortify, but it does find some of them. Probably all security vulnerabilities that RATS found were also found by Fortify. The descriptions, however, are very short and don't give you a clue about how to solve an issue (other than the line number where it was spotted). We think RATS doesn't produce many false positives, because it seems to apply quite some filtering before returning its findings to the user. Also, RATS seems to perform some sort of grep through the code instead of really analysing it (a small sketch of what we mean by this follows below). RATS is usable to check for input validation (V5), because it finds the functions that are vulnerable to attacks through unvalidated input. Altogether, not many issues were spotted; RATS may be a good place to start when looking for vulnerabilities, because it is free and fast to use.
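
As an illustration of that last point: we suspect that such a lexical, grep-like scanner flags every call to a listed function, regardless of whether its arguments can actually be influenced by an attacker. A hypothetical sketch, not FluxBB code:

  <?php
  // Hypothetical sketch: a grep-like scanner would presumably flag both fopen
  // calls below, although only the second one involves untrusted input.
  $a = fopen('config.php', 'r');        // constant path, harmless
  $b = fopen($_GET['file'], 'r');       // user-controlled path, dangerous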