xpcshell manifests, phase 2

June 29th, 2011

Recently we implemented a manifest format for xpcshell unit tests (with Joel Maher doing the lion’s share of the work). After that work landed, we realized our initial design was missing a few pieces, so we set out to revamp things to make it easier to write useful manifests.

We decided to make the manifests support boolean expressions, similar to what reftest manifests allow, except with a restricted grammar rather than all of JavaScript. To make this useful, we had to offer a set of values to test against, so Jeff Hammel buckled down and wrote mozinfo, a Python module that had been under discussion for a long time. I wrote a few bits to hook this all up between the build system and the xpcshell test harness, and it all landed on mozilla-central this morning.

I’ll update the MDN documentation later today, but for a preview, a sample manifest entry might look like:

[test_foo.js]
skip-if = os == 'win' || os == 'linux'

If you look in your object directory in a build containing these patches (or in the xpcshell directory of a test package from a tinderbox build), you’ll find a mozinfo.json, which is where most of the values you can use in these expressions come from. For example, the mozinfo.json for my 64-bit Linux build looks like:

{"os": "linux", "toolkit": "gtk2", "crashreporter": false, "debug": false, "bits": 64, "processor": "x86_64"}

You can currently annotate tests with skip-if, run-if, and fail-if conditions. skip-if indicates that a test should not be run if the condition evaluates to true; run-if indicates that a test should only be run if the condition evaluates to true; and fail-if indicates that the test is known to fail if the condition is true. Tests marked fail-if will produce TEST-KNOWN-FAIL output if they fail, and TEST-UNEXPECTED-PASS (which is treated as a failure) if they pass.
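To make the semantics concrete, here’s a rough Python sketch of how such a condition might be checked against the values in mozinfo.json. This is purely illustrative: the real harness parses these expressions with a proper restricted grammar, while evaluate_condition below is my own simplified stand-in that just translates the operators and leans on Python’s eval.

```python
import json

def evaluate_condition(condition, mozinfo):
    """Hypothetical sketch: evaluate a manifest condition against mozinfo values.

    The real harness uses a restricted expression parser, not eval; this
    simplified version substitutes mozinfo keys (os, bits, debug, ...) as
    variables and maps ||, && and ! onto Python's own boolean operators.
    """
    # Protect != before rewriting bare ! into "not".
    expr = (condition.replace("||", " or ")
                     .replace("&&", " and ")
                     .replace("!=", "__NE__")
                     .replace("!", " not ")
                     .replace("__NE__", "!="))
    # Evaluate with the mozinfo keys as the only visible names.
    return bool(eval(expr, {"__builtins__": {}}, dict(mozinfo)))

mozinfo = {"os": "linux", "toolkit": "gtk2", "crashreporter": False,
           "debug": False, "bits": 64, "processor": "x86_64"}

print(evaluate_condition("os == 'win' || os == 'linux'", mozinfo))  # True
print(evaluate_condition("debug && bits == 32", mozinfo))           # False
```

In the harness, a skip-if line evaluating to True means the test is skipped; run-if works the other way around.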

Hopefully this work will enable developers to more easily work with xpcshell tests. We’d appreciate any feedback you have on these changes!

For a long time, when our unit tests or Talos performance tests encountered a crash, the result was nothing but frustration. If you were lucky, you could tell that the test had crashed, but you had no idea where. Poor Blake spent weeks tracking down a crash from his speculative-parsing patch that only seemed to occur on Talos. Until recently, I figured that fixing this would involve a fair amount of work that only I would be able to do. A few weeks ago it became clear that this was having a significant impact on development, as patches would get checked in, cause a crash, and be backed out, leaving the developer with nothing to go on.

Benjamin Smedberg has been hard at work making it possible to get stacks in this situation, using the same Breakpad utilities we use on our Socorro server, but locally on the machine running the tests. Practically all of the pieces were in place this afternoon when #developers cornered Alice and closed the tree while she landed the final patch to make Talos produce stack traces. Boris then committed a test crash, and as a result we were able to see crash stacks in Mochitest (OS X, Linux) as well as Talos (OS X, Linux).

Thanks to Benjamin for doing most of the heavy lifting here, and to Alice for taking the Talos part across the finish line. The Talos work was mostly in bug 480577, and the unit test work was in bug 481732. Note that currently this only works in Mochitest (all 4 varieties); it will work in Reftest/Crashtest after bug 479225 is fixed (which should be soon).

(Cross posted in dev.tree-management, but posting here for a wider audience.)

I was reminded of bug 414049 yesterday, a bug I filed about getting screenshots from our unit test machines after every run, so we could see if there was something obviously wrong with the machine (like error dialogs covering the screen). Linux and OS X have built-in tools to grab screenshots (as mentioned in the bug), but Windows does not. I searched around for a free tool to do the job, but all I could find was shareware. It’s possible there’s a free tool out there that I just couldn’t find, but I figured I would just write one. After a bit of poking around on MSDN, I wrote screenshot.cpp. It’s only about 70 lines of C++; hard to believe people pay money for something like that. I’ve placed it under a BSD license, since it’s useful code and I couldn’t find a simple self-contained example like this anywhere.

more tests, kthx

January 16th, 2009

Josh recently landed a test plugin, with the intent of finally getting some test coverage of our plugin-handling code via mochitests. This is awesome, as plugins are an area of the code where we’ve caused lots of regressions in the past, and which until now had zero automated test coverage. After it landed, I took a peek at the code and noticed that it would be pretty easy to extend it to be usable in our layout tests (reftest) as well. I just landed some patches to add this functionality, so we can now test that our layout of plugins doesn’t regress. If you’d like to write some reftests yourself using this, you can check out the basic tests I added along with the patch. (Note: it’s Mac-only at the moment, but there’s GTK2 code ready to land any minute now, and a Win32 implementation should be forthcoming.)

SSL in Mochitest

September 22nd, 2008

Without a lot of fanfare, a patch landed recently that enables the use of SSL with the test HTTP server we use in our Mochitest test harness.

About five months ago, I read an article about how Fedora wanted to standardize on NSS as the cryptography library for their distro, in order to be able to leverage a common certificate database, among other things. The article went into detail on how they wrote an OpenSSL wrapper around NSS so they could easily port applications that only supported OpenSSL to use NSS instead. As a concrete example, they showed a ported version of stunnel using NSS. This got me thinking: one of the things our Mochitest harness lacked was SSL support, and stunnel would do exactly what we needed here. Considering that we already build and ship NSS with every copy of Firefox, and that it was clearly possible to implement the functionality we needed using NSS, I set out to figure out how to implement a bare-bones version of stunnel from scratch. After a bit of poking through the online NSPR and NSS documentation, I had a proof-of-concept application, which I called “ssltunnel.” After some insightful review comments from NSS developers, I committed it to CVS.
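To give a feel for the shape of such a tunnel, here’s a minimal Python sketch of the core idea: accept a connection, connect to the backend server, and pump bytes in both directions. All the names here are mine, and the sketch deliberately omits the SSL layer; the real ssltunnel is written in C against NSPR/NSS and wraps the listening side in SSL.

```python
import socket
import threading

def pump(src, dst):
    # Copy bytes from src to dst until src reaches EOF, then half-close dst
    # so the peer sees EOF too.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def tunnel_once(listen_port, backend_host, backend_port):
    # Accept a single client and forward its traffic to the backend.
    # (An ssltunnel-style program would wrap the accepted socket in SSL here.)
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(1)
    client, _ = listener.accept()
    backend = socket.create_connection((backend_host, backend_port))
    # One thread per direction: client -> backend, backend -> client.
    t = threading.Thread(target=pump, args=(client, backend), daemon=True)
    t.start()
    pump(backend, client)
    t.join()
    client.close()
    backend.close()
    listener.close()
```

The real program adds the crypto handshake, certificate handling, and support for multiple concurrent connections on top of this basic relay loop.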

Unfortunately, that wasn’t the end. We still needed to hook this program up to the test harness, and I just didn’t have the motivation to do so. I filed the bug and hoped someone else would do the work (as I often do!). Thankfully, that someone appeared in the person of Honza Bambas, whom I can only describe as a programming rockstar. He not only integrated ssltunnel into Mochitest, but rewrote large sections of it to make it work robustly, and made it work as an HTTP proxy while he was at it. After some reviews, a couple of landings and backouts due to unrelated test failures, and some time spent languishing in Bugzilla, we finally made his patch stick.

Of course, now that we have this capability, we need tests that use it! Honza has written some great documentation on what is currently available via Mochitest, and how to add custom servers, certificates, and other things you might want. If you get motivated to write some tests and hit a rough spot, feel free as always to track me down on IRC and ask me about it.

MochiTest Maker

April 18th, 2008

Just something I threw together this morning: MochiTest Maker. It’s a pure HTML+JavaScript environment for writing MochiTests. It’s not as full-featured as the real MochiTest, as you can’t set HTTP headers or include external files, but it should serve for a lot of simple web content tests.

Ideally at some point I’d like to add a CGI backend to this so you could specify a directory, and have it generate a patch against current CVS to include your test in that directory. That would lower the bar even further for getting new tests into the tree. Another cool addition would be to integrate this with my regression search buildbot (currently offline), so that you could write a mochitest and then with one click submit it to find out when something regressed. That shouldn’t be hard to do, but my buildbot needs to find a more permanent home first.

I think there’s still a lot more we can (and must) do to lower the bar for writing tests. We need all the tests we can get!