
recent tsan improvements for firefox

27-Apr-15

One of my goals for Q2 was to make Firefox’s TSan builds work with “modern” (version 3.6+) clang. For reasons previously unknown, Firefox would only work with a relatively old clang (version ~3.3); indeed, our wiki page for TSan recommends checking out specific SVN versions!

I’m happy to say that goal has been met.  If you like, you can read (some of) the gory details in bug 1153244.  Checking out a development version of Clang and building that should enable you to test Firefox (from mozilla-central, since that’s where the aforementioned bug has landed thus far) with TSan.  Upgrading through several major releases has its benefits: TSan appears to be somewhat more stable, and tests that would previously time out reliably now run without complaint.  I’m not sure that it’s quite reliable enough to run in automation yet, but running TSan tests in automation is definitely more feasible than it was last quarter.

With an eye towards doing that, and also with an eye towards improving individual developer experience, I’ve recently landed several patches that teach our test harnesses about TSan-specific things.  For instance, when we’re running mochitests, we only care about races in the browser itself; we don’t care about races while running any of the supporting tools (particularly xpcshell, which shares a lot of code with the browser through libxul).  So TSan should only be enabled for the actual browser executable.  We already do similar things for our ASan builds.

I also discovered that my local builds weren’t using the llvm-symbolizer tool to produce backtraces in TSan’s reports, but were instead defaulting to addr2line. addr2line is a fine tool, but llvm-symbolizer understands debug information better (for instance, it shows you synthesized frames for inlined functions, which is a huge win), and it’s somewhat faster than addr2line. In my case, the problem was a combination of llvm-symbolizer not being in my PATH and TSan not being told the location of llvm-symbolizer ($OBJDIR/dist/bin/, usually). The former was addressed by setting LLVM_SYMBOLIZER in my mozconfig (not necessary if llvm-symbolizer is on your PATH), and the latter by adding a bit of code to our test scripts.
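For reference, the two fixes look something like the following (the paths here are hypothetical; adjust them to wherever your clang build and objdir actually live):

# in mozconfig: tell the build where llvm-symbolizer lives
export LLVM_SYMBOLIZER=$HOME/clang/bin/llvm-symbolizer

# or point TSan at the objdir copy at runtime
TSAN_OPTIONS="external_symbolizer_path=$OBJDIR/dist/bin/llvm-symbolizer" ./mach run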

Now, to update the wiki with better instructions!

on development speedbumps

10-Apr-15

Last week, I ran into a small speedbump with my development process. I normally write my patches and commit them to git with initial lines that look like:

fix IPDL thinko for never-inline method declarations; r=bent

Then, when I use git-bz to push my patches to bugzilla, it autofills the r? flag for the patch to :bent, according to the r= bit specified in the commit message. Quite convenient.

Except last week, something broke when trying to use that feature. Bugzilla would complain that :bent was an invalid user. “OK, fine,” I initially thought, “maybe :bent is unavailable.” (People often change their Bugzilla name to not include :NAME when they won’t be reachable for a period of time.) Inspection showed this to not be the case. Although the initial evaluation was that git-bz was old and crusty (it screen-scrapes Bugzilla, rather than using Bugzilla’s REST API), it turned out that Bugzilla was indeed at fault.

OK, but what to do until Bugzilla actually gets updated with the fix? Options:

  1. Cope with a little extra work: use git-bz as normal, but manually set the r? flags from Bugzilla’s web UI.
  2. Paralysis: any extra work is unacceptable, so do nothing (at least in terms of writing patches).

I admit option 2 is a bit silly, but that’s exactly what the situation felt like: there had been this unexpected obstacle in my normally smooth development process and it was simply Not Worth going back and doing things the old way.

I then realized that this situation, in reverse, is exactly the “barriers to entry” that people like Gregory Szorc talk about: every extra step that you have to do in Firefox development (or development generally) is one more place for somebody to run into trouble and decide contributing to your project isn’t worth the hassle. Make the extra steps go away! (And then don’t bring them back via bugs. *wink*)

tsan bug finding update

31-Mar-15

At the beginning of Q1, I set a goal to investigate races with Thread Sanitizer and to fix the “top” 10 races discovered with the tool.  Ten races seemed like a conservative number; we didn’t know how many races there were, their impact, or how difficult fixing them would be.  We also weren’t sure how much participation we could count on from area experts, and it might have turned out that I would spend a significant amount of time figuring out what was going on with the races and fixing them myself.

I’m happy to report that according to Bugzilla, nearly 30 of the races reported this quarter have been fixed.  Folks in the networking stack, JavaScript GC, JavaScript JIT, and ImageLib have been super-responsive in addressing problems.  There are even a few bugs in the afore-linked query that have been fixed by folks on the WebRTC team running TSan or similar thread-safety tools (Clang’s Thread Safety Analysis, to be specific) to detect bugs themselves, which is heartening to see.  And it’s also worth mentioning that at least one of the unfixed DOM media bugs detected by TSan has seen significant thread-safety refactoring work in its dependent bugs.  All this attention to data races has been very encouraging.

I plan on continuing the TSan runs in Q2, along with getting TSan-on-Firefox working with more-or-less current versions of Clang.  Having to download specific Subversion revisions of the tools or precompiled Clang 3.3 (!) binaries to make TSan work is discouraging; the experience could obviously be much better.

measuring power usage with power gadget and joulemeter

24-Feb-15

In the continuing evaluation of how Firefox’s energy usage might be measured and improved, I looked at two programs, Microsoft Research’s Joulemeter and Intel’s Power Gadget.

As you might expect, Joulemeter only works on Windows. Joulemeter is advertised as “a software tool that estimates the power consumption of your computer.” Estimates for the power usage of individual components (CPU/monitor/disk/”base”) are provided while you’re running the tool. (No, I’m not sure what “base” is, either. Perhaps things like wifi?) A calibration step is required for trying to measure anything. I’m not entirely sure what the calibration step does, but since you’re required to be running on battery, I presume that it somehow obtains statistics about how your battery drains, and then apportions power drain between the individual components. Desktop computers can use a WattsUp power meter in lieu of running off battery. Statistics about individual apps are also obtainable, though only power draw based on CPU usage is measured (estimated). CSV logfiles can be captured for later analysis, taking samples every second or so.

Power Gadget is cross-platform, and despite some dire comments on the download page above, I’ve had no trouble running it on Windows 7 and OS X Yosemite (though I do have older CPUs in both of those machines). It works by sampling energy counters maintained by the processor itself to estimate energy usage. As a side benefit, it also keeps track of the frequency and temperature of your CPU. While the default mode of operation is to draw pretty graphs detailing this information in a window, Power Gadget can also log detailed statistics to a CSV file of your choice, taking samples every ~100ms. The CSV file also logs the power consumption of all “packages” (i.e. CPU sockets) on your system.

I like Power Gadget more than Joulemeter: Power Gadget is cross-platform, captures more detailed statistics, and seems a little more straightforward in explaining how power usage is measured.

Roberto Vitillo and Joel Maher wrote a tool called energia that compares energy usage between different browsers on pre-selected sets of pages; Power Gadget is one of the tools that can be used for gathering energy statistics. I think this sort of tool is the primary use case of Power Gadget in diagnosing power problems: it helps you see whether you might be using too much power, but it doesn’t provide insight into why you’re using that power. Taking logs along with running a sampling-based stack profiler and then attempting to correlate the two might help, but it’s not obvious to me that the stacks where you’re spending CPU time necessarily correlate with power usage. One might have turned on discrete graphics in a laptop, or high-resolution timers on Windows, for instance, but that wouldn’t necessarily be reflected in a CPU profile. Perhaps sampling something different (if that’s possible) would correlate better.

finding races in Firefox with ThreadSanitizer

20-Feb-15

We use a fair number of automated tools for finding memory errors (AddressSanitizer/LeakSanitizer for use-after-frees and buffer overflows; custom leak checking on refcounted objects; Valgrind tests and Julian Seward’s periodic mochitests-on-Valgrind runs), but we do very little in terms of checking for data races between threads.  As more and more components of the browser use threads in earnest (parts of the JavaScript engine, including the GC; graphics; networking; Web features like indexedDB, workers, and WebRTC; I have probably left out some others), preventing and/or fixing data races becomes more and more important as a way of ensuring both correctness and stability. One of my goals this quarter is running mochitests and reftests with ThreadSanitizer (TSan), reporting any races that it finds, and either fixing some of them myself or convincing other people to fix them.

What is a data race? Informally, data races occur when two different threads operate on the same location in memory without any synchronization between the threads. So if you do:

*ptr = 1;

in one thread and:

if (*ptr == 1) {
...
}

in another thread (without locks or similar), that’s a data race. It’s not guaranteed that the second thread will see the value written by the first thread, and if the code was written with that assumption, things can (and usually do) work as expected, but can also go badly wrong. When things do go badly wrong, it can be extremely frustrating to find the actual problem, because the problems don’t typically show up at the point where the data race happened. Of course, since the bug is dependent on caches and timing issues, the problem doesn’t always reproduce on a developer’s machine, either. You can see one of Chromium’s experiences with data races producing a common crash, and it took them nearly three months to find a fix. TSan told them exactly where to look. Google has written blog posts about their experiences using TSan with Chromium and more generally with other software they develop, including Go and their own internal tools.

When faced with evidence of a data race, people sometimes say, “well, int/long/pointer accesses are atomic, so it’s OK.” This is decidedly not true, at least not in the sense that writes to memory locations of such types are immediately visible to all threads. People sometimes try using volatile to fix the problem; that doesn’t work either (volatile says nothing about concurrent accesses between threads, or visibility of operations to other threads). Even if you think your data race is benign (that it can’t cause problems), you might be surprised at what can go wrong with benign data races. Hans Boehm, who led the effort to define the C++ memory model, has written a paper describing why there is no such thing as a benign data race at the C/C++ language level. Data races are always real bugs and are explicitly undefined behavior according to the C++ standard.
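If a flag really must be shared between threads, C++11’s std::atomic gives the accesses defined semantics. Here’s a minimal sketch of the earlier example, rewritten (illustrative only; real code would be publishing actual data alongside the flag):

#include <atomic>
#include <thread>

std::atomic<int> flag(0);  // shared between the two threads

void writer() {
  // A release store "publishes": everything written before it...
  flag.store(1, std::memory_order_release);
}

void reader() {
  // ...is guaranteed to be visible here if this acquire load observes the 1.
  if (flag.load(std::memory_order_acquire) == 1) {
    // safe to use data written before the store
  }
}

int main() {
  std::thread t1(writer);
  std::thread t2(reader);
  t1.join();
  t2.join();
  return 0;
}

Note that atomics remove the undefined behavior, not the timing dependence: if the reader happens to run first, it simply sees 0 and skips the branch.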

I’m not the first person to try using TSan on Firefox inside Mozilla; Christian Holler started filing bugs about races detected by TSan over a year ago.  So far, I’ve filed about 30 bugs from running “interesting” groups of mochitests: mostly mochitests under dom/ and layout/. That may not sound like many, and there are a couple of reasons for that. One is that the same races tend to get reported over and over again. I’ve applied some local fixes to silence some of the really obnoxious races, but you still have to sort through the same races to find the interesting ones. (I should probably spend quality time setting up suppression files, but I’ve avoided doing that thus far for fear of inadvertently silencing other races that I haven’t seen/reported before.) Test runs are also slow (about 10x slower than a non-TSan run), and it’s fairly common for the browser to simply hang during testing. I’ve been able to fiddle around with the mochitest harness to increase timeouts or the number of timeouts permitted, but that makes tests go even slower, as tests that do time out take longer and longer to do so.
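For the record, a TSan suppression file is just a list of patterns matched against the function and file names appearing in a report, handed to TSan at runtime via TSAN_OPTIONS. Something like the following, where the symbol is a made-up example:

# tsan-suppressions.txt
race:SomeKnownNoisyFunction

TSAN_OPTIONS="suppressions=$HOME/tsan-suppressions.txt" ./mach mochitest-plain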

Fortunately, people have been responsive to the bugs I’ve filed; Honza Bambas, Valentin Gosu, and Patrick McManus on the networking side of things; Terrence Cole, Jon Coppeard, Andrew McCreight, and Shu-yu Guo on the JavaScript side of things; and Milan Sreckovic on the graphics side of things have all been fixing bugs and/or helping analyze what’s going wrong as they’ve been filed.

If you want to try out TSan locally, there’s an excellent introduction to using TSan on MDN.  I strongly recommend following the instructions for building your own, local clang; my experiences have shown that the released versions of clang don’t always work (at least with Firefox) when trying to use TSan. If you find new bugs in your favorite piece of code, please file a bug, CC me, and make it block bug 929478.

multiple return values in C++

17-Feb-15

I’d like to think that I know a fair amount about C++, but I keep discovering new things on a weekly or daily basis.  One of my recent sources of new information is the presentations from CppCon 2014.  And the most recent presentation I’ve looked at is Herb Sutter’s Back to the Basics: Essentials of Modern C++ Style.

In the presentation, Herb mentions a feature of tuple that enables returning multiple values from a function.  Of course, one can already return a pair<T1, T2> of values, but accessing the fields of a pair is suboptimal and not very readable:

pair<...> p = f(...);
if (p.second) {
  // do something with p.first
}

The designers of tuple must have listened, because there’s the function std::tie, which lets you destructure a tuple:

typename container::iterator position;
bool inserted;
std::tie(position, inserted) = mMap.insert(...);

It’s not quite as convenient as destructuring multiple values in other languages, since you need to declare the variables prior to std::tie’ing them, but at least you can assign them sensible names. And since pair implicitly converts to tuple, you can use tie with functions in the standard library that return pairs, like the insertion functions of associative containers.
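Putting the pieces together, here’s a small standalone example (using a plain std::map for illustration):

#include <cstdio>
#include <map>
#include <string>
#include <tuple>

int main() {
  std::map<std::string, int> counts;
  std::map<std::string, int>::iterator position;
  bool inserted;

  // insert returns pair<iterator, bool>; the pair converts to a tuple,
  // so std::tie can unpack it.  The bool is true if an insertion happened.
  std::tie(position, inserted) = counts.insert(std::make_pair("races", 1));
  std::printf("%s -> %d (%s)\n", position->first.c_str(), position->second,
              inserted ? "newly inserted" : "already present");
  return 0;
}

And if you only care about one of the values, std::ignore can stand in for the ones you don’t need.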

Sadly, we’re somewhat limited in our ability to use shiny new concepts from the standard library because of our C++ standard library situation on Android (we use stlport there, and it doesn’t feature useful things like <tuple>, <functional>, or thread_local support). We could, of course, polyfill some of these (and other) headers, and indeed we’ve done some of that in MFBT already. But using our own implementations limits our ability to share code with other projects, and it also takes time to produce the polyfills and bring them up to an appropriately high quality. I’ve seen several people complain about this, and I think it’s something I’d like to fix in the next several months.

examples of poor API design, 1/N – pldhash functions

27-Jan-15

The other day in the #content IRC channel:

<bz> I have learned so many things about how to not define APIs in my work with Mozilla code ;)
<bz> (probably lots more to learn, though)

I, too, am still learning a lot about what makes a good API. Like a lot of other things, it’s easier to point out poor API design than to describe examples of good API design, and that’s what this blog post is about. In particular, the venerable XPCOM data structure PLDHashTable has been undergoing a number of changes lately, all aimed at bringing it up to date. (The question of why we have our own implementations of things that exist in the C++ standard library is for a separate blog post.)

The whole effort started with noticing that PL_DHashTableOperate is not a well-structured API. It’s necessary to quote some more of the API surface to fully understand what’s going on here:

typedef enum PLDHashOperator {
    PL_DHASH_LOOKUP = 0,        /* lookup entry */
    PL_DHASH_ADD = 1,           /* add entry */
    PL_DHASH_REMOVE = 2,        /* remove entry, or enumerator says remove */
    PL_DHASH_NEXT = 0,          /* enumerator says continue */
    PL_DHASH_STOP = 1           /* enumerator says stop */
} PLDHashOperator;

typedef PLDHashOperator
(* PLDHashEnumerator)(PLDHashTable *table, PLDHashEntryHdr *hdr, uint32_t number,
                      void *arg);

uint32_t
PL_DHashTableEnumerate(PLDHashTable *table, PLDHashEnumerator etor, void *arg);

PLDHashEntryHdr*
PL_DHashTableOperate(PLDHashTable* table, const void* key, PLDHashOperator op);

(PL_DHashTableOperate no longer exists in the tree due to other cleanup bugs; the above is approximately what it looked like at the end of 2014.)

There are several problems with the above slice of the API:

  • PL_DHashTableOperate(table, key, PL_DHASH_ADD) is a long way to spell what should have been named PL_DHashTableAdd(table, key).
  • There’s another problem with the above: it’s making a runtime decision (based on the value of op) about what should have been a compile-time decision: this particular call will always and forever be an add operation. We shouldn’t have the (admittedly small) runtime overhead of dispatching on op. It’s worth noting that compiling with LTO and a quality inliner will remove that runtime overhead, but we might as well structure the code so non-LTO compiles benefit and the code at callsites reads better.
  • Given the above definitions, you can say PL_DHashTableOperate(table, key, PL_DHASH_STOP) and nothing will complain. The PL_DHASH_NEXT and PL_DHASH_STOP values are really only for a function of type PLDHashEnumerator to return, but nothing about the above definition enforces that in any way. Similarly, you can return PL_DHASH_LOOKUP from a PLDHashEnumerator function, which is nonsensical.
  • The requirement to always return a PLDHashEntryHdr* from PL_DHashTableOperate means doing a PL_DHASH_REMOVE has to return something; it happens to return nullptr always, but it really should return void. In a similar fashion, PL_DHASH_LOOKUP always returns a non-nullptr pointer (!); one has to check PL_DHASH_ENTRY_IS_{FREE,BUSY} on the returned value. The typical style for an API like this would be to return nullptr if an entry for the given key didn’t exist, and a non-nullptr pointer if such an entry did. The free-ness or busy-ness of a given entry should be a property entirely internal to the hashtable implementation (it’s possible that some scenarios could be slightly more efficient with direct access to the busy-ness of an entry).

We might infer corresponding properties of a good API from each of the above issues:

  • Entry points for the API produce readable code.
  • The API doesn’t enforce unnecessary overhead.
  • The API makes it impossible to talk about nonsensical things.
  • It is always reasonably clear what return values from API functions describe.
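Applying those properties to PL_DHashTableOperate suggests one entry point per operation, along these lines (a sketch only; the actual signatures that landed in the bugs below differ in their details):

// No PLDHashOperator argument left to misuse.
PLDHashEntryHdr* PL_DHashTableAdd(PLDHashTable* table, const void* key);

// Returns nullptr if no entry exists for |key|.
PLDHashEntryHdr* PL_DHashTableLookup(PLDHashTable* table, const void* key);

// A removal has nothing useful to return.
void PL_DHashTableRemove(PLDHashTable* table, const void* key);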

Fixing the first two bulleted issues, above, was the subject of bug 1118024, done by Michael Pruett. Once that was done, we really didn’t need PL_DHashTableOperate, and removing PL_DHashTableOperate and related code was done in bug 1121202 and bug 1124920 by Michael Pruett and Nicholas Nethercote, respectively. Fixing the unusual return convention of PL_DHashTableLookup is being done in bug 1124973 by Nicholas Nethercote. Maybe once all this gets done, we can move away from C-style PL_DHashTable* functions to C++ methods on PLDHashTable itself!

Next time we’ll talk about the actual contents of a PL_DHashTable and how improvements have been made there, too.

profiling for wakeups on osx

07-Jan-15

One of my Q1 goals is to investigate what can be done for cross-platform power profiling, in service of the Firefox Platform’s Project Candle.  Roberto Vitillo already did a bit of exploration in this space last year.  Ideally, what would come out of my research would be some sort of event counter that we could hook into the Gecko Profiler.  Failing that, it would at least be good to document what is possible for power profiling on each of our supported platforms.

The only power profiling tool I was at all familiar with was powertop. I thought I’d start by figuring out if there was any rough equivalent on OS X.  Turns out there is; it’s called powermetrics.  You can simply run:

$ sudo powermetrics

at a Terminal prompt and you will periodically (every five seconds by default) see a large amount of information concerning power usage printed.  The default data sources that powermetrics draws from include things like detailed C-state usage for all available processors.  For my purposes, I’m just interested in what applications are waking up most frequently.  powermetrics supports limiting its data sources with the --samplers switch:

$ sudo powermetrics --samplers tasks

will just display interesting information on all currently running tasks in the system.  You might think of it as a more detailed version of Activity Monitor’s Energy tab, and indeed, you can convince powermetrics to output the “Energy Impact” number found in Activity Monitor:

$ sudo powermetrics --samplers tasks --show-process-energy

The numbers from powermetrics don’t match up exactly with the numbers from Activity Monitor; I’m guessing that the impact of kernel_task (seen in powermetrics) is somehow spread among applications calling into the kernel by Activity Monitor. Or Activity Monitor is aggregating energy usage over a longer time period than every five seconds.

Knowing that your application is waking up rather often is helpful (Firefox is waking up ~once a second while I’m writing this), but even more helpful is where the wakeups are coming from.  The easiest way I know of for finding that information is to use dtrace:

$ sudo dtrace -n 'mach_kernel::wakeup { @[ustack()] = count(); }'

It’s worth breaking down what’s going on in the above.

  1. The mach_kernel::wakeup bit selects a probe point in dtrace parlance.  mach_kernel is the “module name” and wakeup is the “probe name”.  You can see a complete list of probes by running sudo dtrace -l.
  2. The bits between the braces are the actions to run when the probe point is hit.  In our example, we’re counting unique stack traces when wakeups occur; ustack is short for “user stack”, i.e. the stack of the userspace program executing.

(dtrace is a pretty snazzy tool; it’s worth checking out Brendan Gregg’s DTrace Tools or FreeBSD’s DTrace tutorial if you’re at all interested by the above script.)
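One handy variation aggregates by process name instead of by user stack, showing at a glance which applications are waking up most often (same probe, different aggregation key):

$ sudo dtrace -n 'mach_kernel::wakeup { @[execname] = count(); }'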

Running the stack-counting script on my system for several seconds gives stack traces like the following:

              libsystem_kernel.dylib`mach_msg_trap+0xa
              CoreFoundation`CFRunLoopWakeUp+0xab
              XUL`nsAppShell::ScheduleNativeEventCallback()+0x3c
              XUL`non-virtual thunk to nsBaseAppShell::OnDispatchedEvent(nsIThreadInternal*)+0x26
              XUL`nsThread::PutEvent(nsIRunnable*, nsThread::nsNestedEventTarget*)+0x95
              XUL`nsTimerImpl::PostTimerEvent(already_AddRefed)+0x229
              XUL`TimerThread::Run()+0x283
              XUL`nsThread::ProcessNextEvent(bool, bool*)+0x3d8
              XUL`NS_ProcessNextEvent(nsIThread*, bool)+0x35
              XUL`mozilla::ipc::MessagePumpForNonMainThreads::Run(base::MessagePump::Delegate*)+0x11b
              XUL`MessageLoop::Run()+0x4d
              XUL`nsThread::ThreadFunc(void*)+0x15b
              libnss3.dylib`_pt_root+0xda
              libsystem_pthread.dylib`_pthread_body+0x83
              libsystem_pthread.dylib`_pthread_body
              libsystem_pthread.dylib`thread_start+0xd
               90

              libsystem_kernel.dylib`__psynch_cvsignal+0xa
              libnss3.dylib`pt_PostNotifies+0xb8
              libnss3.dylib`PR_Unlock+0x4e
              XUL`nsTimerImpl::InitCommon(unsigned int, unsigned int)+0x1f7
              XUL`mozilla::PreciseRefreshDriverTimer::ScheduleNextTick(mozilla::TimeStamp)+0x215
              XUL`mozilla::RefreshDriverTimer::Tick()+0x35
              XUL`nsTimerImpl::Fire()+0x3e5
              XUL`nsTimerEvent::Run()+0xf9
              XUL`nsThread::ProcessNextEvent(bool, bool*)+0x3d8
              XUL`NS_ProcessPendingEvents(nsIThread*, unsigned int)+0x51
              XUL`nsBaseAppShell::NativeEventCallback()+0x77
              XUL`nsAppShell::ProcessGeckoEvents(void*)+0x132
              CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__+0x11
              CoreFoundation`__CFRunLoopDoSources0+0x10d
              CoreFoundation`__CFRunLoopRun+0x39f
              CoreFoundation`CFRunLoopRunSpecific+0x128
              HIToolbox`RunCurrentEventLoopInMode+0xeb
              HIToolbox`ReceiveNextEventCommon+0x1af
              HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter+0x47
              AppKit`_DPSNextEvent+0x3c4
              154

              libsystem_kernel.dylib`mach_msg_trap+0xa
              CoreGraphics`_CGSEventAppUnresponsiveStatus+0x79
              CoreGraphics`CGSEventAppUnresponsiveStatus+0x38
              CoreGraphics`CGSEventIsAppUnresponsive+0x13
              Activity Monitor`0x000000010e918bd1+0xa76
              Activity Monitor`0x000000010e9038c8+0x141
              libsysmon.dylib`sysmon_table_apply+0x28
              Activity Monitor`0x000000010e9032df+0x231
              Activity Monitor`0x000000010e90a156+0x192
              libdispatch.dylib`_dispatch_client_callout+0x8
              libdispatch.dylib`_dispatch_barrier_sync_f_invoke+0x39
              Activity Monitor`0x000000010e90a09b+0x91
              libsysmon.dylib`__sysmon_request_execute_block_invoke_2+0xe0
              libxpc.dylib`_xpc_connection_call_event_handler+0x3a
              libxpc.dylib`_xpc_connection_mach_event+0x76d
              libdispatch.dylib`_dispatch_client_callout4+0x9
              libdispatch.dylib`_dispatch_mach_msg_invoke+0x1bd
              libdispatch.dylib`_dispatch_queue_drain+0x23b
              libdispatch.dylib`_dispatch_mach_invoke+0xe8
              libdispatch.dylib`_dispatch_queue_drain+0x23b
              176

              libsystem_kernel.dylib`mach_msg_trap+0xa
              CoreFoundation`CFRunLoopWakeUp+0xab
              XUL`nsAppShell::ScheduleNativeEventCallback()+0x3c
              XUL`non-virtual thunk to nsBaseAppShell::OnDispatchedEvent(nsIThreadInternal*)+0x26
              XUL`nsThread::PutEvent(nsIRunnable*, nsThread::nsNestedEventTarget*)+0x95
              XUL`nsTimerImpl::PostTimerEvent(already_AddRefed)+0x229
              XUL`TimerThread::Run()+0x283
              XUL`nsThread::ProcessNextEvent(bool, bool*)+0x3d8
              XUL`NS_ProcessNextEvent(nsIThread*, bool)+0x35
              XUL`mozilla::ipc::MessagePumpForNonMainThreads::Run(base::MessagePump::Delegate*)+0xac
              XUL`MessageLoop::Run()+0x4d
              XUL`nsThread::ThreadFunc(void*)+0x15b
              libnss3.dylib`_pt_root+0xda
              libsystem_pthread.dylib`_pthread_body+0x83
              libsystem_pthread.dylib`_pthread_body
              libsystem_pthread.dylib`thread_start+0xd
              177

              libsystem_kernel.dylib`__psynch_cvbroad+0xa
              libnss3.dylib`PR_Interrupt+0x37
              XUL`mozilla::HangMonitor::NotifyActivity(mozilla::HangMonitor::ActivityType)+0xe0
              XUL`nsThread::ProcessNextEvent(bool, bool*)+0x3ce
              XUL`NS_ProcessPendingEvents(nsIThread*, unsigned int)+0x51
              XUL`nsBaseAppShell::NativeEventCallback()+0x77
              XUL`nsAppShell::ProcessGeckoEvents(void*)+0x132
              CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__+0x11
              CoreFoundation`__CFRunLoopDoSources0+0x10d
              CoreFoundation`__CFRunLoopRun+0x39f
              CoreFoundation`CFRunLoopRunSpecific+0x128
              HIToolbox`RunCurrentEventLoopInMode+0xeb
              HIToolbox`ReceiveNextEventCommon+0x1af
              HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter+0x47
              AppKit`_DPSNextEvent+0x3c4
              AppKit`-[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]+0xc2
              XUL`-[GeckoNSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]+0x56
              AppKit`-[NSApplication run]+0x252
              XUL`nsAppShell::Run()+0xfd
              XUL`nsAppStartup::Run()+0x29
              201

              libsystem_kernel.dylib`mach_msg_overwrite_trap+0xa
              CoreGraphics`CGXRunOneServicesPass+0x282
              CoreGraphics`CGXServer+0x347
              WindowServer`DYLD-STUB$$CGXServer
              libdyld.dylib`start+0x1
              WindowServer`0x2
              219

You can see that timers going off in Firefox are a major source of wakeups (those are the stacks with TimerThread::Run() in them).

Even with stacks causing wakeups, it’s not always obvious what sequence of events leads to those wakeups; this is especially true in the e10s era, where a particular stack ending in an IPC message send might only give you half of the picture.  Figuring that puzzle out will help solve problems like frequently waking up when animating images with CSS.

what’s new in xpcom

16-Dec-14

I was talking to somebody at Mozilla’s recent all-hands meeting in Portland, and in the course of attempting to provide a reasonable answer for “What have you been doing lately?”, I said that I had been doing a lot of reviews, mostly because of my newfound duties as XPCOM module owner. My conversational partner responded with some surprise that people were still modifying code in XPCOM at such a clip that reviews would be a burden. I responded that while the code was not rapidly changing, people were still finding reasons to do significant modifications to XPCOM code, and I mentioned a few recent examples.

But in light of that conversation, it’s good to broadcast some of the work I’ve had the privilege of reviewing this year.  I apologize in advance for not citing everybody; in particular, my reviews email folder only goes back to August, so I have a very incomplete record of what I’ve reviewed in the past year.  In no particular order:

table-driven register reading in rr

03-Nov-14

Many of the changes done for porting rr to x86-64 were straightforward, mechanical changes: change this type or format directive, add a template parameter to this function, search-and-replace all SYS_* occurrences and so forth. But the way register reading for rr’s GDB stub fell out was actually a marked improvement on what went before and is worth describing in a little more detail.

At first, if rr’s GDB stub wanted to know register values, it pulled them directly out of the Registers class:

enum DbgRegisterName {
  DREG_EAX,
  DREG_ECX,
  // ...and so on.
};

// A description of a register for the GDB stub.
struct DbgRegister {
  DbgRegisterName name;
  long value;
  bool defined;
};

static DbgRegister get_reg(const Registers* regs, DbgRegisterName which) {
  DbgRegister reg;
  reg.name = which;
  reg.defined = true;
  switch (which) {
  case DREG_EAX:
    reg.value = regs->eax;
    return reg;
  case DREG_ECX:
    reg.value = regs->ecx;
    return reg;
  // ...and so on.
  default:
    reg.defined = false;
    return reg;
  }
}

That wasn’t going to work well if Registers supported several different architectures at once, so we needed to push that reading into the Registers class itself. Not all registers on a target fit into a long, either, so we needed to indicate how many bytes wide each register was. Finally, we also needed to indicate whether the target knew about the register or not: GDB is perfectly capable of asking for SSE (or even AVX) registers, but the chip running the debuggee might not support those registers, for instance. So we wound up with this:

struct DbgRegister {
  unsigned int name;
  uint8_t value[DBG_MAX_REG_SIZE];
  size_t size;
  bool defined;
};

static DbgRegister get_reg(const Registers* regs, unsigned int which) {
  DbgRegister reg;
  memset(&reg, 0, sizeof(reg));
  reg.size = regs->read_register(&reg.value[0], which, &reg.defined);
  return reg;
}
...
template<typename T>
static size_t copy_register_value(uint8_t* buf, T src) {
  memcpy(buf, &src, sizeof(src));
  return sizeof(src);
}

// GDBRegister is an enum defined in Registers.
size_t Registers::read_register(uint8_t* buf, GDBRegister regno, bool* defined) const {
  *defined = true;
  switch (regno) {
  case DREG_EAX:
    return copy_register_value(buf, eax);
  case DREG_ECX:
    return copy_register_value(buf, ecx);
  // ...many more integer and segment registers...
  case DREG_XMM0:
    // XMM registers aren't supplied by Registers, but they are supplied someplace else.
    *defined = false;
    return 16;
  // ...and so on
  }
}

The above changes were serviceable, if a little repetitive. When it came time to add x86-64 support to Registers, however, writing out that same basic switch statement for x86-64 registers sounded unappealing.

And there were other things that would require the same duplicated code treatment, too: when replaying traces, rr compares the replayed registers to the recorded registers at syscalls and halts the replay if the registers don’t match (rr calls this “divergence” and it indicates some kind of bug in rr). The x86-only comparison routine looked like:

// |name1| and |name2| are textual descriptions of |reg1| and |reg2|, respectively,
// for error messages.
bool Registers::compare_register_files(const char* name1, const Registers* reg1,
                                       const char* name2, const Registers* reg2,
                                       int mismatch_behavior) {
  bool match = true;

#define REGCMP(_reg)                                       \
  do {                                                     \
    if (reg1->_reg != reg2->_reg) {                        \
      maybe_print_reg_mismatch(mismatch_behavior, #_reg,   \
                               name1, reg1->_reg,          \
                               name2, reg2->_reg);         \
      match = false;                                       \
    }                                                      \
  } while (0)

  REGCMP(eax);
  REGCMP(ebx);
  // ...and so forth.

  return match;
}

Duplicating the comparison logic for twice as many registers on x86-64 didn’t sound exciting, either.

I didn’t have to solve both problems at once, so I focused solely on getting the registers for GDB. A switch statement with almost-identical cases indicates that you’re doing too much in code, when you should be doing things with data instead. In this case, what we really want is this: given a GDB register number, we want to know whether we know about this register, and if so, how many bytes to copy into the outparam buffer and where to copy those bytes from. A lookup table is really the data structure we should be using here:

// A data structure describing a register.
struct RegisterValue {
  // The offset (as computed by offsetof) of the register in user_regs_struct.
  size_t offset;

  // The size of the register.  0 means we cannot read it.
  size_t nbytes;

  // Returns a pointer to the register in |regs| represented by |offset|.
  // |regs| is assumed to be a pointer to |struct user_regs_struct| of the appropriate
  // architecture.
  void* pointer_into(void* regs) {
    return static_cast<char*>(regs) + offset;
  }

  const void* pointer_into(const void* regs) const {
    return static_cast<const char*>(regs) + offset;
  }
};

template<typename Arch>
struct RegisterInfo;

template<>
struct RegisterInfo<rr::X86Arch> {
  static struct RegisterValue registers[DREG_NUM_LINUX_I386];
};
struct RegisterValue RegisterInfo<rr::X86Arch>::registers[DREG_NUM_LINUX_I386];

static void initialize_register_tables() {
  static bool initialized = false;

  if (initialized) {
    return;
  }

  initialized = true;
#define RV_X86(gdb_suffix, name) \
  RegisterInfo<rr::X86Arch>::registers[DREG_##gdb_suffix] = { \
    offsetof(rr::X86Arch::user_regs_struct, name), \
    sizeof(((rr::X86Arch::user_regs_struct*)0)->name) \
  }
  RV_X86(EAX, eax);
  RV_X86(ECX, ecx);
  // ...and so forth.  Registers not explicitly initialized here, e.g. DREG_XMM0, will
  // be zero-initialized and therefore have a size of 0.
}

template<typename Arch>
size_t Registers::read_register_arch(uint8_t* buf, GDBRegister regno, bool* defined) const {
  initialize_register_tables();
  RegisterValue& rv = RegisterInfo<Arch>::registers[regno];
  if (rv.nbytes == 0) {
    *defined = false;
  } else {
    *defined = true;
    memcpy(buf, rv.pointer_into(ptrace_registers()), rv.nbytes);
  }

  return rv.nbytes;
}

size_t Registers::read_register(uint8_t* buf, GDBRegister regno, bool* defined) const {
  // RR_ARCH_FUNCTION performs the return for us.
  RR_ARCH_FUNCTION(read_register_arch, arch(), buf, regno, defined);
}

Much better, and templating enables us to (re)use the RR_ARCH_FUNCTION macro we use elsewhere in the rr codebase.

Having to initialize the tables before using them is annoying, though. And after debugging a failure or two because I didn’t ensure the tables were initialized prior to using them, I wanted something more foolproof. Carefully writing out the array definition with zero-initialized members in the appropriate places sounded long-winded, difficult to read, and prone to failures that were tedious to debug. Ideally, I’d just specify the array indices to initialize along with the corresponding values, and let zero-initialization take care of the rest.

Initially, I wanted to use designated initializers, which are a great C99 feature that provided exactly what I was looking for, readable initialization of specific array indices:

#define RV_X86(gdb_suffix, name) \
  [DREG_##gdb_suffix] = { \
    offsetof(rr::X86Arch::user_regs_struct, name), \
    sizeof(((rr::X86Arch::user_regs_struct*)0)->name) \
  }

struct RegisterValue RegisterInfo<rr::X86Arch>::registers[DREG_NUM_LINUX_I386] = {
  RV_X86(EAX, eax),
  RV_X86(ECX, ecx),
};

I assumed that C++11 (which is the dialect of C++ used in rr) would support something like this, and indeed, my first attempts worked just fine. Except when I added more initializers:

struct RegisterValue RegisterInfo<rr::X86Arch>::registers[DREG_NUM_LINUX_I386] = {
  RV_X86(EAX, eax),
  RV_X86(ECX, ecx),
  // ...other integer registers, eflags, segment registers...
  RV_X86(ORIG_EAX, orig_eax),
};

I got this curious error message from g++: “Sorry, designated initializers not supported.”

But that’s a lie! You clearly do support designated initializers, because I’ve been able to compile code with them!

Well, yes, g++ supports designated initializers for arrays. Except that the indices have to be in consecutive order from the beginning of the array. And all the integer registers, EIP, and the segment registers appeared in consecutive order in the enumeration, starting with DREG_EAX = 0. But DREG_ORIG_EAX came some 20 members after DREG_GS (the last segment register), and so there we had initialization of non-consecutive array indices, and g++ complained. In other words, designated initializers in g++ are useful for documentation and/or readability purposes, but they’re not useful for sparse arrays. Hence the explicit initialization approach described above.
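To make the restriction concrete, here’s a hypothetical two-liner showing the behavior described above (with g++ of that era):

int consecutive[4] = { [0] = 1, [1] = 2 };   // indices start at 0, no gaps: accepted
int sparse[32]     = { [0] = 1, [25] = 2 };  // gap between indices: "sorry, designated initializers not supported"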

Turns out there’s a nicer, C++11-ier way to handle this kind of sparse array, at the cost of some extra infrastructure. Instead of a bare C-style array variable, you can have a class with an operator[] overload and a constructor that takes std::initializer_list<std::pair<size_t, YourActualClass>>. The first member of the pair is the index into your sparse array, and the second member is the value you want placed there. The constructor takes care of putting everything in its place; the actual code looks like this:

typedef std::pair<size_t, RegisterValue> RegisterInit;

// Templated over N because different architectures have differently-sized register files.
template<size_t N>
struct RegisterTable : std::array<RegisterValue, N> {
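  // Instances of this table are static, so the underlying array is
  // zero-initialized before this constructor runs; entries not named in
  // |list| therefore keep nbytes == 0.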
  RegisterTable(std::initializer_list<RegisterInit> list) {
    for (auto& ri : list) {
      (*this)[ri.first] = ri.second;
    }
  }
};

template<>
struct RegisterInfo<rr::X86Arch> {
  static const size_t num_registers = DREG_NUM_LINUX_I386;
  typedef RegisterTable<num_registers> Table;
  static Table registers;
};

RegisterInfo<rr::X86Arch>::Table RegisterInfo<rr::X86Arch>::registers = {
  // RV_X86 macro defined similarly to the previous version.
  RV_X86(EAX, eax),
  RV_X86(ECX, ecx),
  // ...many more register definitions
};

// x86-64 versions defined similarly...

There might be a better way to do this that’s even more C++11-ier. Comments welcome!

Once this table-driven scheme was in place, using it for comparing registers was straightforward: each RegisterValue structure adds a name field, for error messages, and a comparison_mask field, applied to register values prior to comparison. Registers that we don’t care about comparing can have a comparison mask of zero, just like registers that we don’t want to expose to GDB can have a size of zero. A mask is more flexible than a boolean compare/don’t-compare field, because eflags in particular is not always deterministic between record and replay, so we mask off particular bits of eflags prior to comparison.
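Here’s a sketch of the extended structure, with the field names taken from the description above (rr’s actual definition may differ in its details):

// Illustrative only; see rr's Registers.cc for the real thing.
struct RegisterValue {
  const char* name;          // register name, for mismatch messages
  size_t offset;             // offsetof the register in user_regs_struct
  size_t nbytes;             // 0 means "not exposed to GDB"
  uint64_t comparison_mask;  // 0 means "skip when comparing"; otherwise masks
                             // off nondeterministic bits (e.g. in eflags)

  const void* pointer_into(const void* regs) const {
    return static_cast<const char*>(regs) + offset;
  }
};

The core comparison routine then looks like: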

template<typename Arch>
static bool compare_registers_core(const char* name1, const Registers* reg1,
                                   const char* name2, const Registers* reg2,
                                   int mismatch_behavior) {
  bool match = true;

  for (auto& rv : RegisterInfo<Arch>::registers) {
    if (rv.nbytes == 0) {
      continue;
    }

    // Disregard registers that will trivially compare equal.
    if (rv.comparison_mask == 0) {
      continue;
    }

    // XXX correct but oddly displayed for big-endian processors.
    uint64_t val1 = 0, val2 = 0;
    memcpy(&val1, rv.pointer_into(reg1->ptrace_registers()), rv.nbytes);
    memcpy(&val2, rv.pointer_into(reg2->ptrace_registers()), rv.nbytes);
    val1 &= rv.comparison_mask;
    val2 &= rv.comparison_mask;

    if (val1 != val2) {
      maybe_print_reg_mismatch(mismatch_behavior, rv.name, name1, val1, name2,
                               val2);
      match = false;
    }
  }

  return match;
}

which I thought was a definite improvement on the status quo. Additional, architecture-specific comparisons can be layered on top of this (see, for instance, the x86-specialized versions in rr’s Registers.cc).

Eliminating or reducing code by writing data structures instead is one of those coding tasks that gives me a warm, fuzzy feeling after I’ve done it. In addition to the above, I was also able to use the RegisterValue structures to provide architecture-independent dumping of register files, which was another big win in making Registers multi-arch aware. I was quite pleased that everything worked out so well.