running js tests faster

I burned the tree today.  To verify my proposed fix, I had to run make check, which I don’t normally do.  Excuse me, make -srj20 check, because that’s what my development machine can handle.  Which was all fine until I got to check-jit-tests in js/src, which requires running nearly 30 thousand tests.  (N tests x 10 different JS engine option combinations starts to add up rather quickly…)

In serial.  A load average of 1 has never looked so pitiful.

Surely there must be a better way.  While my machine churned through these tests, one at a time, I had a chance to take a look around, and found bug 686240.  Said bug is mostly about reducing overhead by cutting down the number of processes spawned, but it does give a nod to running the tests in parallel.  I wanted to parallelize the way the harness runs the tests first, though; fixing the innards of the test harness on top of that would just be gravy.

I considered modifying the test harness to spawn multiple processes and manage them itself.  But that’s just silly; we already have a tool for doing exactly that: it’s called make.  So let’s use it:

diff --git a/js/src/ b/js/src/
index 31dc5fb..7264cc5 100644
--- a/js/src/
+++ b/js/src/
@@ -602,7 +602,7 @@ endif

-	$(wildcard $(RUN_TEST_PROGRAM)) $(PYTHON) -u $(srcdir)/jit-test/ \
+	+$(wildcard $(RUN_TEST_PROGRAM)) $(PYTHON) -u $(srcdir)/jit-test/ \
 	        --no-slow --no-progress --tinderbox --tbpl $(JITTEST_VALGRIND_FLAG) \

diff --git a/js/src/jit-test/ b/js/src/jit-test/
index 7690548..e7d323b 100755
--- a/js/src/jit-test/
+++ b/js/src/jit-test/
@@ -502,6 +502,33 @@ def main(argv):
     if not OPTIONS.run_slow:
         test_list = [ _ for _ in test_list if not _.slow ]

+    if 'JS_PARALLELIZE' in os.environ and len(test_list) > 1:
+        makefile_path = tmppath('jsmakefile')
+        with open(makefile_path, 'w') as f:
+            makefile_targets = []
+            for (i, test) in enumerate(test_list):
+                target = "test%d" % i
+                subcmd = [sys.executable, sys.argv[0]]
+                if OPTIONS.tinderbox:
+                    subcmd.append('--tinderbox')
+                if OPTIONS.tbpl:
+                    subcmd.append('--tbpl')
+                if OPTIONS.ion:
+                    subcmd.append('--ion')
+                if OPTIONS.hide_progress:
+                    subcmd.append('--no-progress')
+                subcmd.append(JS)
+                subcmd.append(os.path.basename(test.path))
+                makefile_targets.append((target, subcmd))
+            f.write(".PHONY: %s\n" % ' '.join([x for (x, y) in makefile_targets]))
+            f.write("check: %s\n" % ' '.join([x for (x, y) in makefile_targets]))
+            for (target, cmdline) in makefile_targets:
+                f.write("%s:\n\t@%s\n" % (target, subprocess.list2cmdline(cmdline)))
+        make_cmd = ['make', '-f', makefile_path, 'check']
+        env = os.environ.copy()
+        del env['JS_PARALLELIZE']
+        os.execvpe(make_cmd[0], make_cmd, env)
     # The full test list is ready. Now create copies for each JIT configuration.
     job_list = []
     if OPTIONS.tbpl:

What the above code does is write out a makefile that executes, in parallel, all the tests found by the test harness. There’s one target for each test, and the check target depends on all of them being run. The script then execs a copy of make that runs all the commands found in this new makefile. The modification to the build makefile prefixes the command with +, so make knows that the command is going to execute a recursive make (and shares its jobserver with it). Credit where credit is due: I cribbed this idea from GCC, which uses it to run bits of LTO compilation in parallel.
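
To make the shape of the generated file concrete, here is a minimal standalone sketch, with echo commands standing in for the real JS shell invocations (the command strings are hypothetical; the real harness builds full shell command lines):

```python
import os
import tempfile

# Hypothetical stand-ins for the shell commands the harness would build.
commands = ["echo test0", "echo test1", "echo test2"]

fd, makefile_path = tempfile.mkstemp(prefix="jsmakefile")
os.close(fd)
with open(makefile_path, "w") as f:
    targets = ["test%d" % i for i in range(len(commands))]
    # All targets are phony: nothing is actually built, we only want
    # make's job scheduling.  check depends on every per-test target.
    f.write(".PHONY: check %s\n" % " ".join(targets))
    f.write("check: %s\n" % " ".join(targets))
    for target, cmd in zip(targets, commands):
        f.write("%s:\n\t@%s\n" % (target, cmd))

with open(makefile_path) as f:
    content = f.read()
os.unlink(makefile_path)
print(content)
```

Running make -f on that file with -j20 then executes the commands with up to 20 in flight at once; because the per-test targets are independent, make has complete scheduling freedom.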

Anyway, how fast does it go? JS_PARALLELIZE=1 time -p make -srj20 check-jit-test says:

real 539.33
user 5492.89
sys 653.84

Those times are in seconds. That’s for a --disable-optimize --enable-debug build, by the way.  An --enable-optimize --enable-debug build looks like:

real 144.20
user 1389.44
sys 308.25

So about a 10x speedup either way. Not too shabby for 30 extra lines of code!
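
As a rough sanity check on those numbers, (user + sys) / real approximates the average number of cores kept busy over the run:

```python
# time -p figures quoted above, in seconds.
debug = {"real": 539.33, "user": 5492.89, "sys": 653.84}
opt = {"real": 144.20, "user": 1389.44, "sys": 308.25}

def busy_cores(t):
    # Total CPU-seconds consumed divided by wall-clock seconds elapsed.
    return round((t["user"] + t["sys"]) / t["real"], 1)

print(busy_cores(debug), busy_cores(opt))  # → 11.4 11.8
```

So -j20 keeps roughly 11–12 cores busy on average, which squares with the roughly 10x wall-clock improvement over a serial run.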

The above patch is obviously only a proof-of-concept:

  • We leave our temporary makefile lying around;
  • Propagation of options to the sub-make needs to be expanded and cleaned up;
  • Output from the sub-make may need to be handled sanely;
  • I have no idea whether this works properly with pymake;
  • …and probably a few other things I haven’t thought about.


  1. Also note that build slaves run make check without any -j.

  2. See also and its dependency tree.

    The harness has parallel execution capability. It may be unnecessary given your make-based approach, but it does work. It also deals with timeouts and parallel output.

    There’s some ugly work in between here and merging the two test suites, though.
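
    For illustration, one way a harness can get per-test timeouts and non-interleaved output is to run each test in its own subprocess from a pool of worker threads.  This sketch uses concurrent.futures and echo as a stand-in test command; it is not the actual mechanism either harness uses:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_one(cmd, timeout):
    # Each test runs in its own subprocess; its output is captured
    # whole, so parallel tests never interleave on the terminal.
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
        return (cmd, proc.returncode == 0, proc.stdout)
    except subprocess.TimeoutExpired:
        return (cmd, False, b"TIMED OUT")

cmds = [["echo", "t%d" % i] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda c: run_one(c, 60), cmds))
print(all(ok for (_, ok, _) in results))
```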

  3. Gregory Szorc

    Thank you for looking into this!

    FWIW, Python has multiprocessing.Pool, which theoretically allows you to dole out work to a pool of subprocesses.

    Unfortunately, the implementation in Python 2 (and possibly still Python 3) has some pretty nasty gotchas, in that it doesn’t handle signals properly.  So you inevitably need to roll your own version.

    For the temporary file, check out tempfile.NamedTemporaryFile, e.g.:

    with tempfile.NamedTemporaryFile() as tf:
        path = tf.name
        # Execute your command

    # When the context manager (with) goes out of scope, the temp file
    # is unlinked.  Yay context managers!

    I think this make-based solution abuses make, because you aren’t really using make for dependency tracking.  I think a tool like GNU Parallel is much better suited for the task at hand.  That said, the abuse isn’t severe; make is a tool we have readily available in the build system; and, most importantly, it works.  Great work!