Pentests, short for penetration tests, are simulations of malicious attacks, carried out to evaluate the security of a computer system or network. These attacks are available in any colour, as long as it's greyscale...
In The Clear
Pentests are quite often clear box (or white box, or full disclosure) attacks, simply because as testers we more often than not have full access to, and complete knowledge of, the infrastructure to be tested. So, why not make the best use of this advantage? Such knowledge includes all available information about the internal data structures and algorithms employed, the relevant source code used to implement them, network diagrams, IP address data, actual passwords, and so on.
Examples of white box testing include:
- API testing, fault injection and mutation testing methods.
- Most static testing, e.g., code reviews, inspections and Valkyries (damn autocorrect: I meant walkthroughs); that is, testing where the software isn't actually executed. Instead, for example, the code may be read, or scanned automatically for syntactic validity.
- Code coverage analysis, which can only be delivered through white box testing: without inspecting the source code, there can be no guarantee that any given code path is exercised.
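To make the mutation testing entry above concrete, here is a minimal sketch in Python. The toy `is_admin` function and the single `==`-to-`!=` mutation are purely illustrative, not taken from any particular tool: a mutation tester plants a small, deliberate bug and checks whether the test suite notices ("kills" the mutant).

```python
import ast

source = '''
def is_admin(role):
    return role == "admin"
'''

class SwapEq(ast.NodeTransformer):
    """Mutate == into != to simulate a common coding slip."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.NotEq() if isinstance(op, ast.Eq) else op
                    for op in node.ops]
        return node

def load(tree):
    # Compile an AST and return the is_admin function it defines.
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return ns["is_admin"]

original = load(ast.parse(source))
mutant_tree = ast.fix_missing_locations(SwapEq().visit(ast.parse(source)))
mutant = load(mutant_tree)

# A good test suite should kill the mutant: same input, different verdict.
assert original("admin") is True
assert mutant("admin") is False   # the mutation was detected
```

If the assertions of our real test suite still passed against the mutant, that would tell us the suite never actually exercises the comparison, which is exactly the coverage guarantee white box testing can give and black box testing cannot.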
In The Black
Pentests can also of course be black box (also known as blind) tests, where there is assumed to be no prior knowledge of the infrastructure to be tested. As testers, we must first determine the location and extent of the system under test, before commencing analysis. When we use black box penetration testing methods, we are assuming the role of a real, external, black hat hacker, who is trying to intrude into our system without much actual knowledge about it. By contrast - almost literally! - white box testing can be seen as simulating what might happen after a leak of sensitive information, or equivalently, during an "inside job", when the attacker has access to confidential information.
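As a rough illustration of that first reconnaissance step, mapping an unknown target, a minimal TCP connect scan can be sketched in a few lines of Python. The host and port list below are placeholders, and, it should go without saying, you should only ever scan systems you are authorised to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Report which TCP ports accept a connection: a first step in
    mapping a black box target. Only scan hosts you may legally test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
    return open_ports

# Probe a few well-known service ports on the loopback interface.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Even this crude scan illustrates the trade-off discussed below: every probe consumes target resources, so mapping a live production system is never free of side effects.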
In between these two extremes, a school of grey box testing has evolved. As the name suggests, these pentests combine aspects of black and white box attacks. The main reason to do this is to provide customised test coverage for various elements in distributed systems. For example, we might use our knowledge of internal data structures and algorithms for the purpose of designing our test cases. Then when it comes to the point of actually executing these tests, we perform them as a normal user, i.e., at a black box level.
A good example of grey boxing is when we modify the content of a data repository, which is not itself part of the delimited system under test. That's not something which a user would normally be able to do, so it can introduce a white box element into an otherwise black box test or attack suite. Another example in a similar vein might be the determination of error message contents, or system boundary values, by reverse engineering.
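The first grey box example above can be sketched as follows, assuming a toy lookup service backed by a SQLite repository (all names here are illustrative). The white box step plants a boundary-value record directly in the data store, something no normal user could do; the black box step then drives the system purely through its public interface:

```python
import sqlite3

# The "system under test": a tiny user-lookup service whose only
# public entry point is display_name().
class UserService:
    def __init__(self, db):
        self.db = db

    def display_name(self, user_id):
        row = self.db.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else "<unknown>"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# White box step: plant an oversized boundary-value record directly in
# the repository, bypassing all public interfaces.
db.execute("INSERT INTO users VALUES (?, ?)", (1, "A" * 10_000))
db.commit()

# Black box step: exercise the system as a normal user would and check
# it copes with the planted edge case.
service = UserService(db)
print(len(service.display_name(1)))
```

The test case was designed using knowledge of the internal storage schema, but executed at a black box level, which is the defining combination of the grey box approach.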
Note that most instances of input data manipulation and/or output formatting do not qualify as grey box techniques, because by definition, input and output are not part of the system under test. So for example, integration testing between two code modules written by two distinct developers, where only certain interfaces are exposed for test, is still regarded as black boxing.
Elementary white box penetration testing can often be done automatically, and therefore cheaply. Black box attacks are another matter entirely. Because you are literally attacking a network (often a working production system) blindly, your test activities will inevitably comprise actual security attacks. You will cause denial of service, both intentionally and as a side effect of the stress that vulnerability scanning puts on network response times. At worst, you might cause actual harm to the system, rendering it just as inoperable as if a real black hat had attacked it. Much of the time and effort required with black box pentests lies in trying not to destroy things, while still reaching deeply enough to expose vulnerabilities.
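A minimal sketch of the kind of elementary, automatable white box check meant here: a static scan of source text against a small rule set of suspicious patterns. The rules below are hypothetical examples; real static analysers use far richer techniques, but the principle of cheap, repeatable, automated inspection is the same:

```python
import re

# Hypothetical rule set: patterns that often indicate a vulnerability.
RULES = {
    "hardcoded secret": re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"]"),
    "shell injection risk": re.compile(r"os\.system\(|subprocess\..*shell=True"),
    "dangerous eval": re.compile(r"\beval\("),
}

def scan(source):
    """Flag suspicious lines in a source string: cheap, automatic,
    and only possible because we can read the code (white box)."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for label, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings

sample = 'password = "hunter2"\nuser = input()\neval(user)\n'
for lineno, label, line in scan(sample):
    print(f"line {lineno}: {label}: {line}")
```

Because no system is executed or attacked, a scan like this can run on every commit at essentially zero risk, in stark contrast to the black box situation described above.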
Pronounced "Awe Stem"
The OSSTMM, or Open Source Security Testing Methodology Manual, is both a peer-reviewed methodology for security testing, metric measurement and analysis, and a philosophy of operational security. It is a Creative Commons licensed publication of the Institute for Security and Open Methodologies (ISECOM). As such, the encapsulated methodology, covering what, when and where to test, is itself free to use and distribute under the Open Methodology License (OML).
The Manual's primary objective is to create a scientific methodology and metrics for operational security evaluation, based upon test results. It suits most kinds of security audit: penetration tests, ethical hacks, security and vulnerability assessments, and so on. Secondarily, it acts as a central reference for all security tests, regardless of the size of the organization, the technology, or the protection involved, and provides analyst guidelines, enabling a certified OSSTMM audit by assuring:
- test thoroughness and legal compliance;
- inclusion of all necessary channels;
- results quantification, consistency and repeatability; and
- that factual information is derived exclusively from tests.