Teething Troubles: Assigning blame for pagefaults

I have been able to get precise file-backed page fault logging (SystemTap on Linux, xperf on Windows) for a while. It is incredibly useful to see exactly how Firefox is being loaded from disk. From there I deduce what is causing the IO, try to make improvements, and measure whether I accomplished anything.

Unfortunately, a mere IO log requires a lot of pondering about why the IO is happening. It would be so much easier if one could just get a report of every IO operation + an application backtrace to easily identify the cause. I was having trouble figuring out why some of my optimizations were not having the impact I expected, so I embarked on adding backtraces to my log.

XPerf Fail
I fed xperf my Firefox symbols hoping this would plop stack traces next to my faults, but no such luck. It records backtraces for just about every probe except the “hard faults” probe I care about. I wonder if a custom xperf probe could log what I want.

Perf Fail

Some prominent kernel hackers have long been complaining about OProfile/SystemTap/NIH performance monitoring tools. They finally produced the perf tool (it’s like they tried to make it hard to google for: it does not have a proper website; real men read the source and skim LKML archives) to do profiling the Linux way(tm). I might be wrong, but so far it appears to be a functional equivalent of Microsoft’s xperf minus the nice UI.

It turned out that the perf implementation in my Fedora 2.6.32 kernel is too buggy to even log page faults. Apparently this works in current Linus kernels. I’m not completely sure, but it looks like even if perf page fault logging worked, it’s pretty neutered: it does not appear to be able to log file offsets next to page faults, nor stacks.

I think perf could be fixed to log IO and accompanying userspace backtraces; there are some talented folks contributing to it. However, I think the pre-canned analysis model sucks. It is useful for building sophisticated versions of top, but when you really need to dig into what’s causing a particular issue, it is painful to be restricted by what the developers foresaw as useful.

SystemTap

As awesome as the kernel side of SystemTap is, I keep running into userspace bugs and limitations. Getting userspace stacks for the large collection of large libraries that Firefox relies on has been a SystemTap-bug-finding affair. I can occasionally get useful userspace stacks from userspace probes, but apparently recording a userspace stack from a kernel probe is a hard problem that the SystemTap devs haven’t fully addressed yet.

Luckily, SystemTap provides a uaddr() function, which appears to return correct userspace addresses from my kernel probes (which is way more than the other tools offer). Unfortunately, usymname() fails to resolve those addresses.

As a workaround, Jim Blandy suggested turning off address-space randomization so I can log uaddr() values and resolve them in gdb. Until recently I had been resolving these manually with gdb’s “p/a <addr>” command.

Success?

Then it dawned on me that I can use gdb’s Python scripting to post-process the log automatically. So now, with a combination of a SystemTap IO-logging script and a hacky gdb Python script, I can produce logs like this.
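
The gdb side of that post-processing does not have to be much more than a loop over the log. Here is a minimal sketch of the idea; the log name and one-address-per-line format are assumptions, and gdb needs the process (or its libraries) loaded so that “info symbol” can actually resolve addresses inside libxul.so and friends:

    # resolve-log.py -- run inside gdb, e.g. gdb -x resolve-log.py firefox-bin
    # Assumes stap-io.log (a made-up name) contains one hex uaddr() value per
    # line, and that shared libraries are loaded so "info symbol" can map the
    # address back to a function name.
    import re
    import gdb  # only available inside gdb's embedded Python

    for line in open("stap-io.log"):
        m = re.search(r"0x[0-9a-fA-F]+", line)
        if m is None:
            continue
        # "info symbol ADDR" prints "function + offset in section of library"
        sym = gdb.execute("info symbol " + m.group(0), to_string=True).strip()
        print("%s -> %s" % (line.strip(), sym))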

I still don’t have backtraces, but at least now I have the name of the function that’s causing trouble. This is surprisingly useful already. One can now easily tell how much of startup is being wasted on relocations (dlopen() in a prelinked binary!). Another obvious one is the harm of single-page COW faults from zeroing .bss (the memset entries in the log). It turns out that sprinkling initializers all over the binary is a bad idea. It looks like there are significant performance wins to be had with a bit of ‘easy’ compiler/linker hacking.

All of the above problems are really obvious and would’ve been fixed a long time ago if it were easier to get at this information. Unfortunately, there is still a lot of room for improvement in developer tools.

Update:

Sounds like I can use addr2line instead of gdb.
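
That would let the post-processing run outside gdb entirely. A rough sketch; the library path and address are made up, and addr2line wants addresses relative to where the library is linked or loaded (with randomization off and a prelinked build, the logged values may already line up):

    # addr2line-based resolver -- illustrative only; library path and address
    # are made up. -f prints the enclosing function, -C demangles C++ names.
    import subprocess

    def resolve(binary, addr):
        out = subprocess.check_output(
            ["addr2line", "-f", "-C", "-e", binary, addr])
        func, location = out.decode().splitlines()[:2]
        return "%s at %s" % (func, location)

    print(resolve("libxul.so", "0xdeadbeef"))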

4 comments

  1. Pure gold as usual.

    It’s difficult to distinguish the links in your low-contrast CSS. Perhaps you could keep this color for :visited links and increase the saturation of regular links.

  2. You might want to take a look at dtrace:

    root@roo.7f000001.org:# dtrace -n 'vminfo:::pgin / execname == "firefox-bin" / { ustack(); exit(0) }'
    dtrace: description 'vminfo:::pgin ' matched 1 probe
    CPU ID FUNCTION:NAME
    0 7893 pageio_setup:pgin
    libmozjs.so`JS_ArenaGrow+0x112
    libmozjs.so`__1cMGrowTokenBuf6FpnOJSStringBuffer_I_i_+0xda
    libmozjs.so`js_GetToken+0x66c
    libmozjs.so`__1cRXMLElementContent6FpnJJSContext_pnNJSTokenStream_pnLJSParseNode_pnNJSTreeContext__i_+0x45
    libmozjs.so`__1cQXMLElementOrList6FpnJJSContext_pnNJSTokenStream_pnNJSTreeContext_i_pnLJSParseNode__+0x23e
    libmozjs.so`__1cRXMLElementContent6FpnJJSContext_pnNJSTokenStream_pnLJSParseNode_pnNJSTreeContext__i_+0x246
    libmozjs.so`__1cUXMLElementOrListRoot6FpnJJSContext_pnNJSTokenStream_pnNJSTreeContext_i_pnLJSParseNode__+0x2c3
    libmozjs.so`__1cKJSCompilerMparseXMLText6MpnIJSObject_b_pnLJSParseNode__+0x9c
    libmozjs.so`__1cOParseXMLSource6FpnJJSContext_pnIJSString__pnFJSXML__+0x88c
    libmozjs.so`js_ValueToXMLObject+0xc6
    libmozjs.so`js_Interpret+0x299d
    libmozjs.so`js_Invoke+0x88a
    libxul.so`__1cTnsXPCWrappedJSClassKCallMethod6MpnOnsXPCWrappedJS_HpknTXPTMethodDescriptor_pnRnsXPTCMiniVariant__I_+0x1dec
    libxul.so`__1cOnsXPCWrappedJSKCallMethod6MHpknTXPTMethodDescriptor_pnRnsXPTCMiniVariant__I_+0x36
    libxul.so`__1cSPrepareAndDispatch6FpnOnsXPTCStubBase_IpI_I_+0x18f
    libxul.so`__1cOnsXPTCStubBaseFStub36M_I_+0x22
    libxul.so`__1cLnsTimerImplEFire6M_v_+0x12d
    libxul.so`__1cMnsTimerEventDRun6M_I_+0x51
    libxul.so`__1cInsThreadQProcessNextEvent6Mipi_I_+0x142
    libxul.so`__1cVNS_ProcessNextEvent_P6FpnJnsIThread_i_i_+0x44

    root@roo.7f000001.org:#
    (I have only tried it on (Open)Solaris, but it should also work on Mac OS X and FreeBSD.)

  3. I guess one of the problems in that tool space is that most people are already busy enough trying to get data on and fix their own code that they don’t want to go and contribute to yet another project to improve those tools.

  4. I’m probably far too late to the party (was on vacation), but I have a Python script that I use for my SystemTap fun. It deals with address-space randomization by reading /proc/PID/maps (having been told the PID), plus nm and some caching, to do what you want in real time as a form of stream processing. I also use it to perform some mangling, like removing the vtable crap. A rough sketch of the idea follows the link.

    http://hg.mozilla.org/users/bugmail_asutherland.org/tb-test-help/file/tip/systemtap/addrsymfilt.py
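
    Roughly, the maps-lookup half of that idea looks something like the following (an illustrative sketch, not the script above; names are made up):

    # Given a pid and a fault address, find which mapped file contains it and
    # the offset into that mapping; the offset can then be looked up against
    # nm/addr2line output for that library.
    def find_mapping(pid, addr):
        for line in open("/proc/%d/maps" % pid):
            fields = line.split()
            start, end = (int(x, 16) for x in fields[0].split("-"))
            if start <= addr < end and len(fields) >= 6:
                return fields[5], addr - start  # (library path, offset)
        return None, None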