SoftwareSecurity2013/Group 6/Reflection

OWASP vs Wikipedia

Using OWASP to verify the security of Mediawiki suggests it would be worthwhile to revisit the requirements to see whether they can be reworked to support software architectures where, for example, secrecy is not important, or where an organization has an interest in preserving the privacy of its users. There are definitely some implicit value judgements in the framework that the authors probably did not realize were controversial (such as not logging unless necessary). (Erik: Interesting point)

Honorable Mention: RATS

Among the most interesting results of using the various tools was that the least sophisticated of the lot, RATS, ended up being the most useful in the manual (self-directed) analysis. After pulling back the covers, the only thing RATS really does is locate where the functions defined in some XML files are used (plus a hard-coded test for the use of back-ticks).
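To make that concrete, here is a minimal Python sketch of what RATS essentially does. The XML format shown is a simplified, hypothetical stand-in (the real RATS database schema is more elaborate):

    import re
    import sys
    import xml.etree.ElementTree as ET

    # Hypothetical, simplified database format (the real RATS schema differs):
    #   <vulndb><function name="rand" severity="high"/></vulndb>
    def load_functions(db_path):
        tree = ET.parse(db_path)
        return {f.get("name"): f.get("severity", "medium")
                for f in tree.getroot().iter("function")}

    def scan(path, functions):
        with open(path, encoding="utf-8", errors="replace") as fh:
            for lineno, line in enumerate(fh, start=1):
                if "`" in line:  # the one hard-coded check: back-tick execution
                    print(f"{path}:{lineno}: possible back-tick shell execution")
                for name, severity in functions.items():
                    # Flag any call-like occurrence of a listed function name.
                    if re.search(rf"\b{re.escape(name)}\s*\(", line):
                        print(f"{path}:{lineno}: {name} ({severity})")

    if __name__ == "__main__":
        functions = load_functions(sys.argv[1])
        for source_file in sys.argv[2:]:
            scan(source_file, functions)

Used this way, the tool is essentially a structured grep, which is exactly why it composes so well with manual analysis.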

We used RATS to list every use of the PRNG while asking ourselves when it was called with the "force strong" option. As it happens, only the main secret key is "forced strong", because if other things were forced strong, there would be a risk of crashes due to insufficient entropy. This led to the question "Wikipedia people are smart: why would they make such an obvious mistake?" Our next question was, of course, "Was it really a mistake?"

To answer that question, we performed a risk analysis for each use case in which randomness was utilized (a list of every use of the PRNG, from RATS). It turns out that the fallback PRNG provides adequate security in each of the cases. From that, we determined that an exception should be made for V7.2. Since Wikipedia is very busy, it must scale well, and one thing that often does not scale well on Linux is /dev/urandom. Accordingly, they take as much randomness as /dev/urandom will give them and fall back to something less secure but adequate when/if its entropy pool runs low. It was no mistake.
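The pattern can be illustrated with a rough Python sketch. This reflects our reading of the design, not Mediawiki's actual code; the function name, the exception behaviour, and the fallback mixing are all illustrative:

    import hashlib
    import os
    import time

    def generate(num_bytes, force_strong=False):
        # Prefer the kernel's strong source.
        try:
            with open("/dev/urandom", "rb") as fh:
                data = fh.read(num_bytes)
            if len(data) == num_bytes:
                return data
        except OSError:
            data = b""
        if force_strong:
            # A "forced strong" caller would rather crash than accept
            # weaker randomness -- hence only the main secret key uses it.
            raise RuntimeError("strong randomness unavailable")
        # Fallback PRNG: repeatedly hash volatile process state.
        state = f"{time.time()}:{os.getpid()}".encode()
        while len(data) < num_bytes:
            state = hashlib.sha1(state + data).digest()
            data += state
        return data[:num_bytes]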

Automated Scanning

Largely useless, except for the dumbest of errors. A large amount of time was wasted filtering out false positives and over-general value judgements, like reporting every occurrence of the word "password" in the code and calling it a hard-coded password, a null password, or a password contained in a comment. It could be argued that calling out every instance of MD5 and of non-cryptographic random number generators was acceptable, because it let us verify that they were not being used in an unsafe context. On the other hand, everything useful that Fortify did (in the context of this section of the ASVS checklist), another tool could have done. In the end, everything that came up in the Fortify scan also came up in manually addressing the requirements. None of the requirements were fully answered by the automated scan.

Everything about the evaluation of authentication and cryptography has to do with the meaning and organization of the code. Automated scanners seem to do a decent job at finding coding errors, buffer overflows, tainting, etc. With high-profile open source projects like Mediawiki, written by competent programmers, there will be a mountain of false positives and only a couple of legitimate finds in terms of coding mistakes. V2 and V7 are too high-level to benefit from automated scanning à la Fortify. That is why RATS provided the most value by far: it can quickly answer questions that come up in the manual analysis, saving a great deal of manual work, and, used in such a way, it does not introduce extra, unnecessary work.

(Erik: I agree that for many things a simple tool like RATS can be much better. E.g. for V7 it is useful to find out where crypto is being done by grepping for the crypto library calls, or strings like SHA, and there is no need for anything as complicated as Fortify to do that. One thing where the more complex analysis of tools like Fortify could pay off is in data flow of tainted data, though current tools still leave a lot to be desired there...)


Environment Setup

The first step was to install and configure a web server, PHP, MySQL, and the Mediawiki code. This was necessary because Fortify complains about included files that do not exist (such as the configuration for Mediawiki). Having the tools installed also allows for dynamic analysis at some point in the future.

Statistics

Output of the tools was placed into SQL databases, along with false-positive information, so that we could derive statistics telling us which of the vulnerabilities were most common and where to start in order to eliminate them.

We were also able to quickly compare the findings of multiple tools: RATS and Fortify, executed with files fed in one at a time and all at once.
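The approach can be sketched in a few lines of Python with SQLite. The schema, table name, and category labels here are hypothetical stand-ins for the databases we actually used:

    import sqlite3

    conn = sqlite3.connect("findings.db")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS finding (
            tool TEXT,                         -- e.g. 'RATS' or 'Fortify'
            file TEXT,
            line INTEGER,
            category TEXT,                     -- e.g. 'Insecure Randomness'
            false_positive INTEGER DEFAULT 0   -- set during manual review
        );
    """)

    # Which vulnerability categories are most common?
    for category, total in conn.execute(
            "SELECT category, COUNT(*) FROM finding "
            "GROUP BY category ORDER BY COUNT(*) DESC"):
        print(category, total)

    # Findings Fortify reported at locations where RATS reported nothing.
    query = """
        SELECT f.file, f.line, f.category FROM finding AS f
        WHERE f.tool = 'Fortify' AND NOT EXISTS (
            SELECT 1 FROM finding AS r
            WHERE r.tool = 'RATS' AND r.file = f.file AND r.line = f.line)
    """
    for row in conn.execute(query):
        print(row)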

False Positive Verification

We spent a lot of time cleaning out the most-reported vulnerabilities, when it seems we would have done better to focus on the least-reported ones, because they are the most subtle.

Of course, going through a lot of the vulnerabilities directly did give us interesting details about the false positive rate on some classes of vulnerabilities reported by the tools. Most notably, the password-related checks from Fortify are extremely unreliable.

Filtering

In the end, we started excluding vulnerabilities that were not part of our scope, as sketched after the list below. This dramatically reduced the number of things we had to look at.

  • Vulnerabilities called out against unit tests included with the code were excluded (tests/*) because they cannot be run by users.
  • Vulnerabilities not related to cryptography and authentication were excluded.
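A minimal Python sketch of this filtering step; the file names are made up for illustration, and the category labels are the ones listed below:

    IN_SCOPE = {"Password Management", "Insecure Randomness",
                "Weak Cryptographic Hash"}

    def in_scope(finding):
        # Unit tests shipped with the code cannot be run by users.
        if finding["file"].startswith("tests/"):
            return False
        # Keep only categories related to cryptography and authentication.
        return finding["category"] in IN_SCOPE

    findings = [  # illustrative entries, not real scan output
        {"file": "tests/RandomTest.php", "category": "Insecure Randomness"},
        {"file": "includes/User.php", "category": "Insecure Randomness"},
        {"file": "includes/OutputPage.php", "category": "Cross-Site Scripting"},
    ]
    print([f["file"] for f in filter(in_scope, findings)])
    # -> ['includes/User.php']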

We are left with:

  • Password Management (more or less all false positives)
  • Insecure Randomness
  • Weak Cryptographic Hash