SoftwareSecurity2014/Group 7/Reflection


What difficulties did you encounter, either with the code or the ASVS? Can you think of ways to reduce or avoid these difficulties?

It is hard to validate input when it can come from so many places. In PHP, the superglobals can be accessed from any script, assigned to any other variable, or passed to any function. This makes it hard to track which variables are safe and which ones aren't. A single point of entry would be very useful here, but is probably not practical. TYPO3's _GP function, which replaces direct access to $_GET and $_POST, is a good partial solution, but it cannot be enforced in external libraries and it gives no guarantee that programmers actually use it.
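
As an illustration, here is a minimal sketch of such a single point of entry. The function name requestParam is ours and this is not TYPO3's actual implementation, although like _GP it lets POST take precedence over GET.

    <?php
    // Sketch of a _GP-style single point of entry for request parameters
    // (hypothetical; not TYPO3's actual code). This is the only place in
    // the application that touches the superglobals.
    function requestParam($name)
    {
        if (isset($_POST[$name])) {   // POST takes precedence, as in _GP
            return $_POST[$name];
        }
        return isset($_GET[$name]) ? $_GET[$name] : null;
    }

    $page = requestParam('page');     // instead of $_GET['page'] etc.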

Automated tools help a lot with the above issue, but they make a lot of mistakes in assessing the code. A better solution would be a programming language that removes the ability to access input parameters globally. (Erik: This sentence is a bit cryptic, though the sentences that follow make it a bit clearer. I guess you mean avoiding global variables.) For example, Haskell uses pure functions by default, which cannot access input unless it is explicitly passed to them. Neither solution rules out input validation vulnerabilities, but both improve the chance of spotting one in the code.
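
This discipline can be approximated in PHP by convention: read the input once at the entry point and pass it explicitly, so the rest of the code never touches the superglobals. A rough sketch (all names are ours):

    <?php
    // Sketch: renderProfile() never reads $_GET/$_POST itself, so all of
    // its inputs are visible in its signature (hypothetical names).
    function renderProfile($userId, $displayName)
    {
        $id   = (int) $userId;                                       // validate
        $name = htmlspecialchars($displayName, ENT_QUOTES, 'UTF-8'); // encode
        return "<h1>User $id: $name</h1>";
    }

    // Single entry point: the only place that reads request data.
    $id   = isset($_GET['id'])   ? $_GET['id']   : '';
    $name = isset($_GET['name']) ? $_GET['name'] : '';
    echo renderProfile($id, $name);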

Could the ASVS be clearer, more complete, or structured in a better way?

The ASVS is a good way to assess whether developers are following secure development techniques and have removed the most common vulnerabilities; as such, applying it should improve application security.

Nevertheless, it seems that a specific aim of the ASVS was to be easily accessible and acceptable to organisations, and also to be applicable to legacy projects. Because of this, it is structured around the amount of effort companies are willing to put in, instead of promoting typical software engineering guidelines that take a life-cycle approach, i.e. first identifying threats and security requirements before starting the design and implementation phases.

Without such a prior threat and security requirement analysis, we (as auditors) depend on the fixed ASVS requirements, which target the most common web vulnerabilities. Although these requirements are relatively comprehensive and provide a good start, it would be more efficient and effective to check security properties specific to the use case and its threat scenarios.

Finally, when performing a rigorous verification that spans all levels, the ASVS does not clearly describe how the requirements and verification tasks of levels 1, 2, 3 and 4 should be tied together. For example, naively performing the level 1 to level 4 tests one after the other would not be efficient.

Is splitting up the work as we did a sensible way to split the work? Or are there more practical ways?

We think that having everyone pick a requirement and an application they were interested in was a good way to divide the work: each group member looked at a specific requirement matching their choice and expertise. This was very practical for a school group project. However, having only one pair of eyes look at each requirement might not be good enough for real-life code verification. (Erik: Good point. But note that in a real-life scenario, just having ONE pair of eyes doing a code inspection/security review is already VERY expensive.) There, a group should be assigned to each security requirement, so that multiple people check every requirement independently of each other. This way a more thorough code analysis can be performed.

Also, having each group look at a single requirement is fine for independent requirements, but not efficient when requirements overlap. For example, in Typo3 we noticed similarities between the input validation and output validation methods. Verifying both requirements together would save some time and prevent duplication of effort.
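
As a hypothetical illustration of such overlap: when the input filter and the output encoder share logic, a reviewer auditing one has effectively audited much of the other. The helper names below are ours, not Typo3's.

    <?php
    // Hypothetical helpers showing why input and output validation
    // reviews can overlap (names are ours, not Typo3's).
    function filterInput($value)
    {
        return trim(strip_tags($value));                       // input side
    }

    function encodeOutput($value)
    {
        return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');  // output side
    }

    $comment = isset($_GET['comment']) ? $_GET['comment'] : '';
    echo encodeOutput(filterInput($comment));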

What could or should developers do to facilitate a security assessment? Do you think that experiencing security from the perspective of a reviewer rather than a developer has changed the way you would design or implement a web application yourself?

From what we experienced with HP Fortify, tools can be used to perform reviews, but they always need human verification. Tools do not understand context, which is the keystone of a security code review. They are good at assessing large amounts of code and pointing out possible issues, but a reviewer needs to verify every single result to determine whether it is a real issue, whether it is actually exploitable, and what the risk to the enterprise is.
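
A hypothetical example of the kind of finding that needs this human context: a static analyser may flag the query below as SQL injection because request data flows into the query string, yet the integer cast makes it harmless, and only a reviewer looking at the context can confirm that.

    <?php
    // Hypothetical finding: tainted data ($_GET) flows into a SQL string,
    // which a scanner will typically report. The (int) cast neutralises
    // the taint here, but verifying that requires human judgement.
    $mysqli = new mysqli('localhost', 'user', 'password', 'app');
    $id = isset($_GET['id']) ? (int) $_GET['id'] : 0;
    $result = $mysqli->query('SELECT title FROM pages WHERE uid = ' . $id);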

As such, developers could facilitate the security assessment by (a brief sketch follows the list):

  • Modularizing the security features (in this case, the validation functions) so that there is less code to assess
  • Documenting the validation rationale so that the assessor does not need to second-guess the developer's intent
  • Documenting where the validation routines are located so the assessor can swiftly focus on the correct areas
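
Such a module could look roughly like this (all names and rules are ours, purely illustrative):

    <?php
    // validation.php -- one module holding all validation routines, so an
    // assessor knows exactly where to look (hypothetical example).

    /**
     * Usernames are restricted to 3-20 word characters.
     * Rationale: they are echoed in pages and used in file names, so we
     * whitelist allowed characters rather than blacklist bad ones.
     */
    function isValidUsername($name)
    {
        return preg_match('/^\w{3,20}$/', $name) === 1;
    }

    /**
     * Page ids must be positive integers.
     * Rationale: they end up in SQL queries and must be numeric.
     */
    function isValidPageId($id)
    {
        return ctype_digit((string) $id) && (int) $id > 0;
    }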

When designing our own web application, we would definitely create a good design first, including (at least) the security requirements listed in the ASVS.