ReDoS Vulnerabilities

A reader and colleague recently noticed the SDL Regex Fuzzer bobbing up and down in this blog's sidebar and, perhaps knowing my association with security and/or regular expressions, asked me a question about it. I'm ashamed to say I gave something of a terse, maybe dismissive, answer. In fact I don't even recall exactly what the question was; the best excuse I can offer is that at the time, I was en route to my annual review.
I know! And come to think of it, quite why that should have proved so distracting, well now, that escapes me completely. Was I simply too enthusiastic, too keen to reach the grilling room on time? Believe me, no.
So with suitable apologies, let me begin by stating that ReDoS is a meta exploit, almost a new level of sophistication in attack strategy. To explain it properly, we should start with basic, script-kiddie Denial of Service.
Every system's resources are finite, and the more of them you succeed in using up, the less there is available to anyone and everyone else. This was my (14-year-old) thinking when, in 1972, I launched my brilliant cyber-criminal career with an attempt to bring down the mainframe servicing our whole school's computing requirements, by submitting a BASIC Trojan looking a lot like this:
10 GOTO 10
My reasoning was that the computer would be so busy following my instructions that it would never stop obeying my evil program; no other tasks would get a chance to run, and so next week's maths class would be postponed indefinitely.
Of course everyone else in the class tried the same thing, and our efforts were universally frustrated by a Task Scheduling Demon living somewhere in the ICL's shadows. Still, the principle has some merit as an attack strategy, and in one form or another, it lives on today, beyond the mainframe environment. Every server, every website is a zero-sum game; the more of its costly resources (processor time, bandwidth and storage) that you can succeed in having committed to your own code, the less everyone else has available to play with.
Algorithmic Efficiencies

Now, here comes the meta part. Some systems, looking to sanitise their user input data, resort to regular expression (or regex) matching to check whether a given prompted input string conforms to the syntax of a well-formed name, address, telephone number, email address, or whatever. But as you'll know if you read my earlier article on regex parsers and non-deterministic finite automata, some implementations of regex matching have their own peculiar weaknesses. "Lazy" implementations in particular can be forced to perform a metric tonne of backtracking, and so can be attacked using certain crafted input strings which take an inordinate time to process.
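To make that concrete, here is a minimal sketch in Python, whose standard re module uses just such a backtracking engine; the nested-quantifier pattern and the inputs are illustrative examples of my own, not taken from any particular product:

import re
import time

# A nested quantifier such as (a+)+ gives a backtracking engine
# exponentially many ways to carve up the run of 'a's once the overall
# match fails, so a short, innocent-looking input can pin the CPU.
evil_pattern = re.compile(r'^(a+)+$')

for n in (20, 22, 24, 26):
    subject = 'a' * n + '!'          # the trailing '!' guarantees failure
    start = time.perf_counter()
    evil_pattern.match(subject)
    elapsed = time.perf_counter() - start
    print(f'{n} characters: {elapsed:.3f}s')

Each extra pair of characters roughly quadruples the running time, which is exactly the behaviour an attacker is fishing for.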
Now, that in itself is not enough to create an exploitable vulnerability. You also need to find a case where the pattern used by a regex parser to match a given arbitrary line of input is of a type likely to generate a lot of these backtracking steps. An attacker has to search for such cases by trial and error. This is where a Regex Fuzzer comes in: it helps ensure that even a "dodgy" parser (and all commercially available regex parsers are, by the definition explained in the aforementioned article, extremely dodgy indeed) can be deployed safely, by improving the efficiency of its associated patterns, all without necessarily restricting the range of input strings that it is capable of matching correctly.
Test First

A Regex Fuzzer like the SDL offering keeps the algorithmic implementation constant, lets you vary the target pattern, and then throws systematically randomised input data down its neck. Use it to check your particular patterns for mitigation of the backtracking-string vulnerability. Bob's your uncle, Alice your aunt.
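If you'd like a feel for what such a tool does under the bonnet, here is a rough Python sketch of the general idea, kept deliberately simple: the pattern under test, the alphabet, the trial count and the timing threshold are all illustrative choices of mine, and this is not the SDL Regex Fuzzer's actual algorithm.

import random
import re
import time

PATTERN = re.compile(r'^(a+)+$')   # the pattern under test (illustrative)
THRESHOLD = 0.5                    # seconds; anything slower is suspicious
ALPHABET = 'aaaaaaaaa!'            # heavily biased towards the pattern's own
                                   # character so long runs of 'a' actually turn up

def fuzz(trials=200, max_len=22):
    # Lengths are kept short because this toy version has no per-match
    # timeout; a truly catastrophic input would otherwise stall the run.
    for _ in range(trials):
        candidate = ''.join(random.choices(ALPHABET, k=random.randint(1, max_len)))
        start = time.perf_counter()
        PATTERN.match(candidate)
        elapsed = time.perf_counter() - start
        if elapsed > THRESHOLD:
            print(f'Suspiciously slow input ({elapsed:.2f}s): {candidate!r}')

if __name__ == '__main__':
    fuzz()

A production fuzzer would run each match in a separate process with a hard timeout, and would generate its candidate strings far more systematically; the point here is only the shape of the loop: fixed engine, fixed pattern, randomised input, and a stopwatch.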
In common with all the other SDL tools, the Regex Fuzzer integrates with both the SDL and the MSF-Agile+SDL Process Templates.