Symptom |
---|
Crashes |
Hangs |
Assertion failures |
Leaks |
Valgrind warnings |
Address Sanitizer warnings |
If Firefox crashes as a result of some input, it's a bug. Valgrind is a memory debugger; it will tell you about things that could have crashed or could result in undefined behavior. Assertions document what the developer believes to be true, and allow finding bugs before they lead to crashes.
These are the kinds of bugs you can find without knowing what output is expected, without being a UI designer or standards expert or understanding the security policy.
(function() { var x; eval("for (x in (gc)()) for each(e in [0]) { print }") })()
triggers
Assertion failure: !JSVAL_IS_PRIMITIVE(regs.sp[-2]), at ../jsops.cpp:489
jsfunfuzz is 4000 lines of randomly-recursive JavaScript. The way it generates input (functions) is pretty boring: there are functions like "makeStatement" and "makeExpression", and each picks a template at random and fills it in with calls to other make* functions.
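Here's a minimal sketch of that shape. The template tables are toy examples of my own, not jsfunfuzz's actual (far larger and weirder) tables:

```js
// Each make* function picks a template at random and fills the holes
// with recursive calls to other make* functions.
function rnd(n) { return Math.floor(Math.random() * n); }

function makeExpression(depth) {
  if (depth <= 0)
    return ["0", "x", "null", '""'][rnd(4)];
  switch (rnd(4)) {
    case 0:  return makeExpression(depth - 1) + " + " + makeExpression(depth - 1);
    case 1:  return "(function() { return " + makeExpression(depth - 1) + "; })()";
    case 2:  return "typeof " + makeExpression(depth - 1);
    default: return "[" + makeExpression(depth - 1) + "]";
  }
}

function makeStatement(depth) {
  if (depth <= 0)
    return "x = " + makeExpression(0) + ";";
  switch (rnd(3)) {
    case 0:  return "if (" + makeExpression(depth - 1) + ") { " + makeStatement(depth - 1) + " }";
    case 1:  return "for (var i = 0; i < 3; ++i) { " + makeStatement(depth - 1) + " }";
    default: return "try { " + makeStatement(depth - 1) + " } catch (e) { }";
  }
}
```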
Symptom |
---|
Crashes |
Assertion failures |
Leaks |
Valgrind warnings |
It can find all the normal "easy to detect" kinds of bugs, but it often accidentally puts the engine into an infinite loop or exponential blowup, so I don't really bother with hangs. But also...
Symptom |
---|
Crashes |
Assertion failures |
Leaks |
Valgrind warnings |
Cross-engine consistency |
Decompiler consistency |
What's more interesting is what kinds of bugs it can find. It can find assertions and crashes like any other fuzzer, but it can also find several types of "correctness" bugs. The first way of finding correctness bugs is simple but temporal: compare answers before and after a change to the engine.
var v = 0; for each (var a in [0, {}, {}, {}]) { v = v >>> 0; for each (var b in [{}, {}, new String(''), 42, new String(''), {}, 42]) { } } print(v);
When you refactor, or introduce a new execution mode, you can feed fuzz inputs into both engines to ensure they give the same answers. Here I was trying to find bugs in the tracing JIT, which does special things when variables change type, so I instructed the fuzzer to make code where variables change type.
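A driver for that kind of differential run can be tiny. Here's a hedged sketch (Node.js); the two shell paths are placeholders for whatever pair of builds or modes you're comparing:

```js
// Run one generated test through two shells and flag any difference.
const { execFileSync } = require("child_process");

function run(shell, args, testFile) {
  try {
    return execFileSync(shell, args.concat(testFile), {
      encoding: "utf8",
      timeout: 5000,
    });
  } catch (e) {
    // Crashes and timeouts are "output" too -- they should match as well.
    return "ERROR: " + (e.signal || e.status);
  }
}

const testFile = process.argv[2];
const a = run("./js-before", [], testFile);  // placeholder shell path
const b = run("./js-after", [], testFile);   // placeholder shell path
if (a !== b) {
  console.log("MISMATCH on " + testFile);
  console.log("--- before ---\n" + a);
  console.log("--- after ----\n" + b);
}
```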
Firefox has been adding a new JIT every year!
There's this site called Translation Party where you can see what happens when Google Translate tries to take a sentence to Japanese and back. With some inputs it detects convergence right away.
But with other inputs, the translator has some trouble. "Isn't it ironic" became "Is it not ironic". Ok, that's not too bad. But then it turned into "Ironically, it is not", which has a very different meaning. Maybe it's a comment about the song.
I don't know Japanese, so I can't tell you exactly where it went wrong. But I think it's safe to say something did go wrong.
So here's the same idea applied to JavaScript. When SpiderMonkey prints out a function, it's actually decompiling bytecode. In this case, the compiler turned my function into 5 bytecodes, and the decompiler turned it back into exactly the same text I had entered.
Now, I can read bytecode about as well as I can read Japanese, but seeing the round-trip work gives me some confidence that both parts worked correctly.
Something went wrong here. In fact, the first one passes undefined to C, and the second one passes 0. The decompiler screwed up by not including enough parentheses.
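That round trip is easy to check mechanically. A sketch, assuming an engine (like old SpiderMonkey) where Function.prototype.toString decompiles bytecode:

```js
// If decompilation is faithful, a second compile/decompile cycle is a
// fixed point; a mismatch flags a decompiler bug candidate.
function checkRoundTrip(f) {
  var once = f.toString();                        // decompile
  var again = eval("(" + once + ")").toString();  // recompile, decompile again
  if (once !== again)
    print("Decompiler mismatch:\n" + once + "\n" + again);  // shell print()
}
```

A stronger variant also compares the behavior of the two compiled functions, which is what catches semantic differences like the undefined-versus-0 case above.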
Seems like a lot of bugs, but it was over a period of 6 years. Some were regressions that were only in trunk for a few days. On the other hand, some were 10 years old.
The JavaScript engine team loves getting bugs from the fuzzer. Because otherwise they'll hit the bug in some complicated web page, all tangled up in application logic and jQuery.
jsfunfuzz now competes against jandem's method/object fuzzer and decoder's testcase-mutation fuzzer.
The first part of the DOM fuzzer just picks two nodes at random in a document and performs an appendChild or insertBefore call. (In other words, it picks a node to move and a place to put it.) It quickly turns any page into a DOM soup, so I called it Stir DOM.
Very easy to implement and reasonably effective. Great bang-for-buck, 300 bugs out of maybe 30 lines of JS. This is how I got started and it's a good place to start fuzzing a new feature.
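A minimal sketch of the idea (the exact mutation mix and the lack of any node filtering are my simplifications):

```js
// Stir DOM core: repeatedly pick a node to move and a place to put it.
function randomNode() {
  var all = document.getElementsByTagName("*");
  return all[Math.floor(Math.random() * all.length)];
}

function stirOnce() {
  var node = randomNode();  // node to move
  var dest = randomNode();  // place to put it
  try {
    if (Math.random() < 0.5) {
      dest.appendChild(node);
    } else {
      dest.insertBefore(node, dest.firstChild);
    }
  } catch (e) {
    // HierarchyRequestError etc. -- expected for many random pairs.
  }
}

setInterval(stirOnce, 10);
```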
After running for a while you have a "DOM soup" containing the ingredients of the original page, all mixed together. The effectiveness depends on the starting points. I used to use real web pages, but now I use crashtest and reftest files from our own regression test suite. Starting points can be made especially good by choosing features that go well together (tables; image maps) but that would be hard to combine randomly.
Tests layout engine's handling of changes, not just weird DOMs
You're testing not only the final rendering but also how DOM and layout code react to dynamic changes. This code is harder to get right, more performance-sensitive, and has more room for dangling-pointer bugs.
I didn't know this when I started! I was just being lazy, and got lucky!
Lists of elements, attributes, and attribute values. Compiling those lists requires reading specs, implementations, or documentation, or following bugs and commits. These modules are great at creating weird combinations of elements, but not so great at building meaningful combinations; Stir DOM and good starting points cover that.
This information is used not only for DOM but also for setting innerHTML. Without these modules, rare elements would get little testing, especially with other rare elements.
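A sketch of such a module, with tiny illustrative tables (the real lists are compiled from specs, implementations, and bug reports, as described above):

```js
// Toy tables -- samples only, not the fuzzer's actual lists.
var elementNames = ["table", "map", "area", "bdo", "marquee", "iframe"];
var attrTable = {
  contenteditable: ["true", "false", ""],
  dir: ["ltr", "rtl", "auto"],
  tabindex: ["0", "-1", "2147483647"],
};

function pick(a) { return a[Math.floor(Math.random() * a.length)]; }

// Set a random attribute from the tables on a random existing node.
function fuzzAttributeOnce() {
  var target = pick(document.getElementsByTagName("*"));
  var name = pick(Object.keys(attrTable));
  target.setAttribute(name, pick(attrTable[name]));
}

// Insert a rare element somewhere random, so rare elements meet each other.
function insertRareElementOnce() {
  var parent = pick(document.getElementsByTagName("*"));
  parent.appendChild(document.createElement(pick(elementNames)));
}
```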
There are also a bunch of other modules. I don't have time to talk about them all, but I'd like to point out that mutation events are evil.
animVal = GetDOMWrapperIfExists(InternalAList().GetAnimValKey());
```cpp
  kungFuDeathGrip;
  if (mBaseVal) {
+   if (!aNewValue.Length()) {
+     // InternalListLengthWillChange might clear last reference to |this|.
+     // Retain a temporary reference to keep from dying before returning.
+     kungFuDeathGrip = this;
+   }
    mBaseVal->InternalListLengthWillChange(aNewValue.Length());
  }
  /* Do more things using |this| */
```
Reference count of |this| drops to 0. Often solved with kungFuDeathGrip. Usually better to solve by having the caller hold a strong reference.
Early returns. Error paths are often untested. These bugs are often solved by adding "auto" classes to do cleanup (RAII). Out-of-memory (OOM) bugs, in particular, are the bane of my existence.
Didn't expect X to happen in the middle of Y. The web platform is full of crazy callbacks, such as mutation events. It's also full of crazy methods, such as closing the window or spinning the event loop. Adding "phase" assertions can help.
C-style casts. If you change the representation of one of the classes, you can forget to update the cast. Type-restricted casts (functions that encapsulate the cast) can prevent this problem.
Symptom |
---|
Crashes |
Assertion failures |
Valgrind warnings |
Leaks |
Hangs |
Inconsistent painting |
Symptom | Specific symptom | Input |
---|---|---|
Crashes | 32 ignored | |
Assertion failures | 407 ignored | |
Valgrind warnings | 22 ignored | |
Leaks | 9 ignored | 5 ignored |
Hangs | 21 ignored | |
Inconsistent painting | 29 ignored | |
Leaks are the trickiest. I have a hierarchy, with the top occupied by nsDocShell, nsDocument, and nsGlobalWindow. Unfortunately, there are open bugs for all of these, so for now I will only notice small leaks.
“This seems tedious. How can I automate it?”
Year | Tedious task | Automation |
---|---|---|
2005 | Ad-hoc layout testing | Fuzzing |
2006 | Reproducing fuzz bugs | Record fuzz actions |
2006 | Reducing fuzz testcases | Disentangle fuzz actions |
2006 | Reducing fuzz testcases | Automate line reduction |
One of my favorite aspects of being a programmer is that I can automate any task that becomes tedious. When ad-hoc testing starts feeling tedious, maybe it's time to automate it -- with a regression test, an exhaustive test, or a fuzzer.
Unfortunately, fuzzing has a tendency to give you gigantic testcases unsuitable for bug reports. If a fuzzer finds lots of bugs, it's worth investing in automating the pieces around it.
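The "automate line reduction" entry above is the classic example. Here's a sketch in the spirit of tools like Lithium, assuming you supply the "interestingness" test that decides whether a candidate still triggers the bug:

```js
// Repeatedly try deleting chunks of lines, halving the chunk size
// whenever a full pass over the file makes no progress.
function reduce(lines, stillTriggersBug) {
  var chunk = Math.ceil(lines.length / 2);
  while (chunk >= 1) {
    var progress = false;
    for (var i = 0; i + chunk <= lines.length; ) {
      var candidate = lines.slice(0, i).concat(lines.slice(i + chunk));
      if (stillTriggersBug(candidate)) {
        lines = candidate;  // keep the smaller testcase
        progress = true;
      } else {
        i += chunk;         // this chunk is needed; move on
      }
    }
    if (!progress) chunk = Math.floor(chunk / 2);
  }
  return lines;
}
```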
“This seems tedious. How can I automate it?”
Year | Tedious task | Automation |
---|---|---|
2007 | Reducing starting points | serializeDOMAsScript |
2007 | Adding starting points | Crashtests |
2008 | Adding starting points | Crashtestify extension |
2010 | Remembering to remove "ignores" | after-fix |
2010 | Finding regressions quickly | Download per-checkin builds |
It can be worth automating something even if the automation only works half the time. Automate what you can and free up your mind for the interesting cases.
Now, it is possible to automate too much. If it's an easy thing you're only going to do twice, it might not be worth it. Or if you try to automate the process of automating tedious tasks. It's really hard, but even worse, if you succeed, you might have just enough AI to bring about the technological singularity.
So now my fuzzers run continuously on ~20 computers, and I get emails when they find new bugs. I just finish the reduction, upload to Bugzilla, and add an ignore entry.
Pick a component to fuzz, and try to fuzz it directly. Good candidates: components too complex for ad-hoc testing or normal regression test suites because too many combinations could interact; components where you can't rely on the web to find the bugs, or where you're very sad when the web does find them.
"Almost valid" or "Mostly valid" is where you want to aim, so you get into deep code paths but also the error cases off of those paths.
Identifying a component's inputs can be subtle. For the layout module, it's largely changes to the DOM that form input, but also user zooming and resizing. For the regular expression engine, the input is regexps AND text to match against.
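For the regular expression case, that means generating both halves of the input together. A sketch, with tiny illustrative template tables of my own:

```js
// Generate a random pattern AND random subject text, then print both
// inputs and the result so runs can be diffed across engines or modes.
function rnd(n) { return Math.floor(Math.random() * n); }
function pick(a) { return a[rnd(a.length)]; }

function makePattern(depth) {
  if (depth <= 0) return pick(["a", "b", ".", "\\d", "[a-c]"]);
  switch (rnd(4)) {
    case 0:  return makePattern(depth - 1) + makePattern(depth - 1);
    case 1:  return "(" + makePattern(depth - 1) + ")" + pick(["*", "+", "?", "{0,2}"]);
    case 2:  return makePattern(depth - 1) + "|" + makePattern(depth - 1);
    default: return pick(["^", ""]) + makePattern(depth - 1) + pick(["$", ""]);
  }
}

function makeSubject() {
  var s = "";
  for (var i = rnd(12); i > 0; --i) s += pick(["a", "b", "c", "0", " "]);
  return s;
}

var pattern = makePattern(3);
var subject = makeSubject();
var re = new RegExp(pattern, pick(["", "g", "i", "m"]));
print(pattern + "  ON  " + JSON.stringify(subject) +
      "  =>  " + JSON.stringify(re.exec(subject)));  // shell print()
```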