Sunday, 20 February 2011

Security Testing Part 1 - Overview


Fundamental Practices for Secure Software Development

Two weeks ago on February 8, 2011, the Software Assurance Forum for Excellence in Code (SAFECode) published the second edition of their paper Fundamental Practices for Secure Software Development - A Guide to the Most Effective Secure Development Practices in Use Today (2MB PDF). Their stated ambition, carried over from the original 2008 edition, was "...to help others in the industry initiate or improve their own software assurance programs and encourage the industry-wide adoption of what we believe to be the most fundamental secure development methods."

Rather than a comprehensive guide to all possible secure development practices, their concise, actionable and pragmatic report provides a foundational set: one that has proven effective in improving software security in real-world implementations by SAFECode members across diverse development environments. They call these “practiced practices”, meaning they are actually employed by SAFECode members, identified through an ongoing analysis of members’ individual software security efforts, and currently in use at leading software companies.

CWE References, Verification, and Resources

Before going into detail about the section dedicated to Testing Recommendations, notice that every subsection is rounded off by the same three bullets:
  • CWE References: originally created by MITRE Corporation, Common Weakness Enumeration references provide a unified, measurable set of software weaknesses - a universal basis for an extended technical vocabulary, similar in this respect to the utility of software design patterns in development - enabling and encouraging effective discussion, description, selection and use of software security practices. By mapping their recommended practices to CWE, the authors provide a detailed illustration of the security issues these practices aim to resolve, and a precise starting point for interested parties to learn more.
  • Verification: usefully, each subsection includes a list of methods and tools that can be used to verify whether a given practice was applied. This is aimed at checking whether development teams are actually following prescribed security practices!
  • Resources: self-explanatory; books, articles, reports, tools and tutorials - in short, anything that can usefully be combined with the foregoing report text to expand on it.
And So To Test

For security testing and verification, you'll want to head for page 39, and the section helpfully entitled Testing Recommendations. Here you're reminded more than once that the goal of testing activities is not to add security by testing, but instead to validate the robustness and secure implementation of a product, reducing the likelihood of security bugs being released and discovered by customers and/or malicious users.

This and other preliminaries dispensed, there then follow the four subsections unique to security testing and verification recommendations:
  1. Determine Attack Surface. Which is to say, understand the attack surface, with the aid of a good, up-to-date threat model, combined with such tools as port scanners or Microsoft's Attack Surface Analyzer, and your knowledge of all the program's inputs - determined from requirements and design, and supplemented by information about protocols and parsers supplied by development.
  2. Use Appropriate Testing Tools. Consider which fuzz testing tools, vulnerability scanners, and other resources can be mobilised to uncover programming errors, known vulnerability classes, and administrative issues. Which of these can be automated? What should be the level of exploratory testing, using say network packet analyzers, and network or web proxies that allow man-in-the-middle attacks and data manipulation?
  3. Perform Fuzz / Robustness Testing. This is currently a fast-changing area of automated security testing, seeing new research and advancement almost daily. Despite the growing availability of off-the-shelf fuzz testing tools for standard protocols and general use, test departments are still identifying software development training requirements, because the applications under test often rely on custom file and network data formats. Effort needs to be focused on the particular networking protocols or data formats in use, and on the high-priority, high-exposure entry points identified during the threat modelling stage as being available to attackers.
  4. Perform Penetration Testing. This is expensive, and is often partly or wholly outsourced to professional penetration and security assessment vendors. But an in-house penetration test resource or team can maintain a very valuable advantage from one test to the next, based on the availability of internal product knowledge.
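To make the port-scanning side of step 1 concrete, here is a minimal TCP connect scan sketched in Python. The function name and parameters are my own illustration, not taken from the report or any particular tool:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising on failure
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Dedicated scanners such as nmap do far more (SYN scans, service fingerprinting, timing control), but even a sketch like this helps enumerate the listening network entry points on a test host.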
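And for step 3, the core loop of a mutation fuzzer is easy to sketch. This toy version - the names and the parser interface are illustrative assumptions, not any specific tool - randomly corrupts bytes in a valid sample and records the inputs that crash the parser:

```python
import random

def mutate(data, flips=8, seed=None):
    """Return a copy of `data` with up to `flips` randomly chosen bytes replaced."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(min(flips, len(buf))):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(parser, sample, iterations=100):
    """Feed mutated variants of `sample` to `parser`; collect crashing inputs."""
    crashes = []
    for i in range(iterations):
        candidate = mutate(sample, seed=i)  # deterministic seed for reproducibility
        try:
            parser(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))
    return crashes
```

Real fuzzers add coverage feedback, format awareness and crash triage, but the essential idea - mutate valid input, watch for failures - is exactly this loop, pointed at the high-exposure entry points from the threat model.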
A Sample Agenda

These five pages (39-43) of the SAFECode report supply most of the headings we need to form a starting agenda for an introduction to security testing.

Concepts:
  • Confidentiality, Integrity, Availability (CIA)
  • Threat Modelling
  • Attack Surface
  • Inputs, Protocols and Parsers
  • Fuzz / Robustness Testing
Classifications:
  • Vulnerability Classes (SQL Injection, XSS)
  • S.T.R.I.D.E.
  • Common Weakness Enumeration (CWE)
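As an aside on the first of those vulnerability classes: the standard defence against SQL injection is to bind user input as a query parameter rather than splice it into the SQL text. A minimal sketch using Python's sqlite3 module (the schema and function here are illustrative, not from the report):

```python
import sqlite3

def find_user(conn, username):
    """Look up a user by name with a parameterized query.

    The input is bound as data, never spliced into the SQL text, so a
    value like "x' OR '1'='1" cannot change the query's structure.
    """
    # Vulnerable alternative (never do this):
    #   conn.execute("SELECT id FROM users WHERE name = '%s'" % username)
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```

The commented-out string-formatting variant would let an input like x' OR '1'='1 rewrite the WHERE clause to match every row; the parameterized version simply treats it as an ordinary, non-matching name.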
Tools:
  • Vulnerability Analyzers
  • Network / Web Proxies
  • Port Scanners
  • Packet Analyzers
Finally...

Just to reiterate (and to paraphrase one of the report's authors, the SDL's Michael Howard), this paper's unique importance is its description of what SAFECode members are doing in practice, to raise the security bar. It is deeply pragmatic, not a theoretical or academic document. SAFECode is also actively seeking public comment on this paper, especially in the verification sections. So if you know of specific tools or techniques to help determine if a software development team is adhering to the practices, please let them know.
