23 Jun 14

on code review and commit policies

I’ve been doing what feels like a lot of reviews this year.  Groveling imperfectly through mozilla-inbound’s commit log:

[froydnj@cerebro mi.hg]$ hg log -d '>2014-01-01' -k r=froydnj -k r=nfroyd|grep -c summary:
189

tells me I’ve been doing slightly over 1 review/day, which seems like a healthy clip.  Most of these reviews are hardly rocket science: some minor changes to make a compiler happy(ier), some sweeping style changes, and tweaks to make things more robust against previously unforeseen problems.  I call out reviews that take me “significant” amounts of time in my weekly status updates to my team; last week was the first week in a while where I could say that a review would lead to significantly better code being committed into mozilla-central.

You could take this as commentary on the generally high quality of patches that I get to review (the “our contributors are incredible” argument). You could take this as people asking me to review fairly trivial patches and saving the “real” patches for “real” reviewers (the “inferiority complex” argument). You could also take this as evidence that not all patches need an explicitly-requested and mandatory code review prior to commit (the “you’re doing code review wrong” argument).

All the projects I’ve worked on in my professional life have what I’ll call “hard” code review policies: everything that gets committed must undergo review, save for “obvious” patches. Definitions of “obvious” vary, but spelling fixes, backing out build/test bustages, and no-questions-asked correctness fixes generally qualify. (GDB has a “will the person who hates my work the most be able to find fault with the change” test for obviousness that seems both whimsical and helpful.) Such a policy applies to everyone, from the person who just submitted their first patch to the veteran working for 10+ years on the same project.

Other, equally successful projects that I’ve had contact with have what I’ll call “soft” code review policies: patches from people without commit bits undergo review, but being given commit privileges is an expression of trust. (This trust is qualitatively different from the trust expressed in granting commit privileges in the “hard” review model.) Your prior work has demonstrated sufficient taste and competence that you may commit patches freely, but you are also expected to exhibit appropriate humility in knowing when to ask for a second opinion. Of course, your code may undergo post-commit review by interested parties, where changes ranging from minor edits to complete backouts may be requested.

What percentage of commits benefit from a “hard” review policy? Can we classify patches so as to apply the “correct” review policy? (Can the people writing the patches even identify the “correct” review policy themselves?)


09 May 14

the compiler is always right

I think everybody who programs has had a bug in one of their programs that they were positive was the compiler’s fault.  (Perhaps “it’s the operating system’s fault” or “it’s the hardware’s fault”, though these are less common.) At this point, you learn the rules of programming:

  1. The compiler is always right.
  2. If the compiler is ever wrong, see rule #1.

The corollary, of course, is that it is your program that has the bug, not the compiler.  (The third through sixth rules are restatements of these two for the operating system and the hardware, respectively.)

Yesterday was one of those occasions where I thought the compiler might be wrong and spent a little time remembering the First Rule.  It’s instructive to look at the example and understand why the compiler is always right.  I was looking at bug 611781, reducing the size of the NSS library shipped with Firefox, and ran across this Linux x86-64 assembly code, compiled with gcc -Os:

0000000000000000 <NSSTrustDomain_GenerateSymmetricKeyFromPassword>:
   0:	41 51                	push   %r9
   2:	48 8b 05 00 00 00 00 	mov    0x0(%rip),%rax
			5: R_X86_64_GOTPCREL	NSS_ERROR_NOT_FOUND+0xfffffffffffffffc
   9:	8b 38                	mov    (%rax),%edi
   b:	e8 00 00 00 00       	callq  10 <NSSTrustDomain_GenerateSymmetricKeyFromPassword+0x10>
			c: R_X86_64_PLT32	nss_SetError+0xfffffffffffffffc
  10:	31 c0                	xor    %eax,%eax
  12:	41 5a                	pop    %r10
  14:	c3                   	retq   

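For reference, here is a hedged sketch of the sort of C source that produces this pattern; the real signature in NSS differs, so treat this as a reconstruction rather than the original code:

NSSSymmetricKey *
NSSTrustDomain_GenerateSymmetricKeyFromPassword(/* ...several parameters... */)
{
  /* Record the error and return failure.  */
  nss_SetError(NSS_ERROR_NOT_FOUND);
  return (NSSSymmetricKey *)NULL;
}
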
You can find a lot of these small functions in lib/pki/trustdomain.c in an NSS source tree. Looking over this, you notice two things if you know a little bit about x86-64 assembler:

  1. The push and pop instructions are suspiciously mismatched; if I save a register to the stack with push, I ought to restore the same register from the stack with a pop. The compiler must have a bug!
  2. Also, there’s an efficiency concern here. Why is the compiler saving and restoring any registers here? We never use %r9 in the body of the function, so we shouldn’t need to save it at all.

What is the compiler doing here? It’s easiest to explain away the second issue first. If you look at the x86-64 ABI document, you’ll see that in section 3.2.2, the stack must be kept 16-byte aligned before calling other functions. Since our function calls another function, our function must ensure that the stack is properly aligned when that other function begins execution. And since the call instruction on x86-64 (which is how we would have arrived at our function) adjusts the stack pointer by 8 bytes (in addition to all the other work it does), our function must adjust by an additional 8 bytes to maintain 16-byte alignment. The compiler has chosen to use a push instruction to manipulate the stack pointer in this case. This instruction subtracts 8 bytes from the stack pointer and stores the indicated register into the memory at the new stack pointer.

Another way to do this would be to subtract 8 bytes from the stack pointer (sub $0x8, %rsp), which avoids writing the register to memory at all. If you compile with -O2, optimizing for speed, instead of -Os, you would indeed see the compiler using sub $0x8, %rsp. But since we compiled this code with -Os, optimizing for size, the compiler knows that the instruction for pushing a register onto the stack (2 bytes) is smaller than the instruction for subtracting 8 bytes from the stack pointer (4 bytes). Likewise, the instruction for popping a register from the stack (2 bytes) is smaller than the instruction for adding 8 bytes to the stack pointer (4 bytes).
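
For concreteness, here are the competing encodings (sizes noted in the comments; these byte sequences are the standard x86-64 encodings):

   41 51                	push   %r9          # 2 bytes
   48 83 ec 08          	sub    $0x8,%rsp    # 4 bytes
   41 5a                	pop    %r10         # 2 bytes
   48 83 c4 08          	add    $0x8,%rsp    # 4 bytes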

These “useless” instructions are therefore doing real work, which is maintaining the contract of the ABI.

OK, so the efficiency claim has been addressed. What about the correctness claim with mismatched registers? Again, if you look at the aforementioned ABI document, section 3.2.3 describes how registers are used for function calls. The registers %r9 and %r10 are caller-saved registers, which means that a called function is free to overwrite any values stored in those registers. It doesn’t matter what value, if any, our function stores in %r10, because we know that if the caller had cared about the value, the caller would have stored that value away somewhere. Since we need spare registers for maintaining stack alignment via push and pop, caller-saved registers are ideal for pushing and popping with abandon.

In this case, it turned out that my understanding of what the program was doing had the bug, not the compiler.  It’s also worth pointing out that if the compiler really was mismatching register saves and restores, lots and lots of things would be broken.  The likelihood of the code produced in this instance being wrong—but the same problem not occurring in the millions of lines of code the compiler has compiled to produce the system on your computer—is vanishingly small.  The next time you see the compiler doing something weird, remember that the compiler is always right and try to figure out why it’s doing that.

(I should say, of course, that compilers, just like many other computer programs, do have bugs; GCC’s bugzilla or LLVM’s bugzilla would not exist otherwise, nor would bugfix releases continue to come out for your favorite compiler.  But in the vast, vast majority of cases, you have a bug to fix, not the compiler.)


06 May 14

my code search engine

Christian Legnitto wrote a blog post where he mentioned Firefox developers being forced to deal with “crufty code-search tools” (and many other perceived suboptimalities in the development process).  I’m looking forward to reading his followup, but I also thought it was worth blogging about what I use for my day-to-day code search needs.

I use Emacs’s rgrep. rgrep (and its cousin commands grep and lgrep) executes grep on a group of files in a given directory, recursively. The grep results are then displayed in a hyperlinked format for easy browsing between matches. Well, being Emacs, I use it with a few modifications of my own.  Here’s my setup.

First, a small utility I wrote for making quick selections with a single keypress:

(require 'cl) ;; provides `find', used with :key below

(defun character-prompt (alist description)
  (message "Select [%s]: "
           (apply #'string (mapcar #'car alist)))
  (let* ((ch (save-window-excursion
               (select-window (minibuffer-window))
               (read-char)))
         (x (find ch alist :key #'car)))
    (cond
      ((null x)
       (message (format "No %s for character: %s" description ch))
       (sleep-for 1)
       (discard-input)
       (character-prompt alist description))
      (t (cdr x)))))

This function gets used in the small wrapper I wrote around rgrep. Some preliminaries first, like where the Firefox tree lives, files that contain overly long lines and therefore mess with Emacs’s hyperlinking, and directories that I generally don’t deal with in my day-to-day work.

(defvar froydnj-mozilla-srcdir (expand-file-name "~/src/gecko-dev.git/"))
(defvar froydnj-mozilla-ignored-files
  (list "searchindex.js"
        "jquery.js"
        "jquery.min.js"
        "interfaces.js"
        "socket.io.min.js"
        "jquery-ui-1.7.1.custom-min.js"
        "edit_area_full.js"
        "php.js"
        "packed.js"
        "socket.io.min.js"
        "named-character-references.html"
        "edit_area_full_with_plugins.js"
        "string-tagcloud.js"
        "string-unpack-code.js"
        "check-string-tagcloud.js"
        "check-string-unpack-code.js"
        "string-unpack-code.html"))
(defvar froydnj-mozilla-ignored-directories
  (list "nss" "nsprpub" "js/src/tests" "intl/icu"))

Next, the way I select subsets of files to search in. I learned after writing all this that rgrep already has built-in functionality for this (see the grep-files-aliases variable), but I like my setup better.

(defvar froydnj-mozilla-files
  '((?a . "*")                    ; All of it
    (?c . "*.[cm]*")              ; C/C++/Obj-C
    (?C . "*.[cmh]*")             ; Same, plus headers (and HTML, sadly)
    (?h . "*.h")
    (?H . "*.html")
    (?i . "*.idl")
    (?j . "*.js*")
    (?l . "*.lisp")
    (?m . "Makefile.in")
    (?p . "*.py")
    (?v . "*.java")
    (?w . "*.webidl")))

Finally, the wrapper itself, which prompts for the search pattern and the filename pattern, makes sure the directories and files above are ignored, and executes the search.

(defun froydnj-mozilla-rgrep ()
  (interactive)
  (let ((regexp (grep-read-regexp))
        (files (character-prompt froydnj-mozilla-files "filename pattern"))
        (grep-find-ignored-files (append grep-find-ignored-files
                                         froydnj-mozilla-ignored-files))
        (grep-find-ignored-directories (append grep-find-ignored-directories
                                               froydnj-mozilla-ignored-directories)))
    (rgrep regexp files froydnj-mozilla-srcdir)))

One other bit that I find useful is a custom name for each buffer. By default, the rgrep results are deposited in a buffer named *grep* (likewise for grep and lgrep; the following advice applies to them automatically), and future greps overwrite this buffer. Having records of your greps lying around is occasionally useful, so I’ve changed the hook that determines the buffer name for rgrep. The comments that refer to compilation are from a previous job where it was useful to launch compilation jobs from within Emacs. I keep meaning to tweak those bits to launch mach in various configurations instead, but I haven’t done so yet.

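;; Note: `command' (used in the compilation branches below) and `regexp'
;; (used in the grep branch) are not arguments to this function.  With
;; dynamic binding they are visible here because compile.el and grep.el
;; bind them as function arguments when this hook runs.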
(defun froydnj-compilation-buffer-name (major-mode-name)
  (let ((cfg-regexp "\\([-0-9_.a-z/+]+\\.cfg\\)"))
    (cond
      ;; We're doing local compilation, stick the name of the release
      ;; configuration in the buffer name.
      ((or (string-match "^cd /scratch/froydnj/\\([^ ;]+\\)" command)
           (string-match "^build-config" command))
       (string-match cfg-regexp command)
       (concat "*compilation " (match-string 1 command) "*"))
      ;; We're doing remote compilation, note the machine name and
      ;; the release configuration name.
      ((string-match "^ssh \\([^ ]+\\)" command)
       (let ((machine (match-string 1 command)))
         (string-match cfg-regexp command)
         (concat "*compilation@" machine " " (match-string 1 command) "*")))
      ;; grep.el invokes compile, we might as well take advantage of that.
      ((string-equal major-mode-name "grep")
       (if (boundp 'regexp)
           (concat "*grep for " regexp "*")
           "*grep*"))
      ;; We have no idea, just use the default.
      (t
       "*compilation*"))))

(setq compilation-buffer-name-function 'froydnj-compilation-buffer-name)

Search times are comparable to web-based tools, and browsing results is more convenient. It has its shortcomings (overloaded C++ method names can be a pain to deal with, for instance), but it works well enough for 95%+ of my searching needs.


29 Apr 14

getting older

I have been reading The Eighth Day of Creation by Horace Freeland Judson, which is a superb book, and thought this passage was relevant to writing software as well as scientific research:

At lunch one day in Paris, early in December of 1975, I asked Monod whether he missed doing research directly. “Oh, I miss it,” he said; then what began as a shrug became instantaneously more thoughtful. “I do more than miss it. It’s too short a question.” He paused, began again. “No, I don’t know that it is actually working at the bench that I miss—miss so very much, although I do, at times; but it is in fact not being this permanent contact with what’s going on in science, in the doing, which I do miss.” I was reminded of a parallel conversation in which Watson had tried to claim the opposite, that he could stay close to what was happening in science. But if one was not actively working, Monod said, “Then you don’t have that. And also if you’re overburdened with general responsibilities, it becomes not so much a question of time but your subjective preoccupations. There’s a displacement—the internal conversation that you keep running in your head concerns all sorts of subjects, things that have got to be done, rather than just thinking about situations [in research]. That’s what bothers me most.”

When his term as director was up? “No, it’s too late to go back to research.” Why? Monod paused once more, and then said, “Well, you know, I always had a sort of amused and—amused, pitiful sympathy for the wonderful old guys who were still doing something at the bench when it was quite clear that whatever they did, it would be less than one hundredth of what they had been able to do before.” We spoke of examples—of scientists whose work became gradually less interesting as they aged, of others who lost their critical judgement and fooled themselves into believing they had solved problems that were beyond them…

[Kornberg said] “Almost every scientist winds up working on a problem he can’t bear to solve. And that’s where his life in science ends. It’s probably being very cruel to the older scientists, but I really believe it’s true. Or sometimes it’s a gradual loss of energy, of the ability to focus the energy on the problem. Or perhaps it’s a loss of edge—of the hunger. Some younger scientists—a few—have that quality that Francis has exemplified; he was ruthless in solving problems, I mean he would just carve them up and solve them in the most brutal way, much to the dismay of people like Chargaff who enjoyed the mystery of those problems and didn’t want to see it disappear, to them the mystery was the beauty of it….It probably does happen to all aging scientists.”


20 Feb 14

finding addresses of virtual functions

Somebody on #developers this morning wanted to know if there was an easy way to find the address of the virtual function that would be called on a given object…without a debugger. Perhaps this address could be printed to a logfile for later analysis. Perhaps it just sounds like an interesting exercise.

Since my first attempt had the right idea, but was incomplete in some details, I thought I’d share the actual solution. The solution is highly dependent on the details of the C++ ABI for your particular platform; the code below is for Linux and Mac. If somebody wants to write up a solution for Windows, where the size of pointer-to-member functions can vary (!), I’d love to see it.

#include <cstring>   /* memcpy */
#include <cstddef>   /* size_t, ptrdiff_t */

/* AbstractClass may have several different concrete implementations.  */
class AbstractClass {
public:
  virtual int f() = 0;
  virtual int g() = 0;
};

/* Return the address of the `f' function of `aClass' that would be
   called for the expression:

   aClass->f();

   regardless of the concrete type of `aClass'.

   It is left as an exercise for the reader to templatize this function for
   arbitrary `f'.  */
void*
find_f_address(AbstractClass* aClass)
{
  /* The virtual function table is stored at the beginning of the object.  */
  void** vtable = *(void***)aClass;

  /* This structure is described in the cross-platform "Itanium" C++ ABI:

     http://mentorembedded.github.io/cxx-abi/abi.html

     The particular layout replicated here is described in:

     http://mentorembedded.github.io/cxx-abi/abi.html#member-pointers  */
  struct pointerToMember
  {
    /* This field has separate representations for non-virtual and virtual
       functions.  For non-virtual functions, this field is simply the
       address of the function.  For our case, virtual functions, this
       field is 1 plus the virtual table offset (in bytes) of the function
       in question.  The least-significant bit therefore discriminates
       between virtual and non-virtual functions.

       "Ah," you say, "what about architectures where function pointers do
       not necessarily have even addresses?"  (ARM, MIPS, and AArch64 are
       the major ones.)  Excellent point.  Please see below.  */
    size_t pointerOrOffset;

    /* This field is only interesting for calling the function; it
       describes the amount that the `this' pointer must be adjusted
       prior to the call.  However, on architectures where function
       pointers do not necessarily have even addresses, this field has the
       representation:

       2 * adjustment + (virtual_function_p ? 1 : 0)  */
    ptrdiff_t thisAdjustment;
  };

  /* Translate from the opaque pointer-to-member type representation to
     the representation given above.  */
  pointerToMember p;
  int ((AbstractClass::*m)()) = &AbstractClass::f;
  memcpy(&p, &m, sizeof(p));

  /* Compute the actual offset into the vtable.  Given the differing meaning
     of the fields between architectures, as described above, and that
     there's no convenient preprocessor macro, we have to do this
     ourselves.  */
#if defined(__arm__) || defined(__mips__) || defined(__aarch64__)
  /* No adjustment required to `pointerOrOffset'.  */
  static const size_t pfnAdjustment = 0;
#else
  /* Strip off the lowest bit of `pointerOrOffset'.  */
  static const size_t pfnAdjustment = 1;
#endif

  size_t offset = (p.pointerOrOffset - pfnAdjustment) / sizeof(void*);

  /* Now grab the address out of the vtable and return it.  */
  return vtable[offset];
}
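
As a quick sanity check, here is a hedged usage sketch; ConcreteClass and main are invented for illustration and are not part of the original post:

#include <cstdio>

class ConcreteClass : public AbstractClass {
public:
  virtual int f() { return 1; }
  virtual int g() { return 2; }
};

int main()
{
  ConcreteClass c;
  AbstractClass* p = &c;
  /* Prints the address of ConcreteClass::f, the same slot through which
     the virtual call p->f() would dispatch.  */
  printf("%p\n", find_f_address(p));
  return 0;
}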

29 Jan 14

space saving miscellany

Yesterday’s post on space saving techniques generated a few comments.  It seemed worthwhile to highlight a few of the comments for a wider audience.

  • Various people have pointed out that clang and GCC support a -Wpadded option to warn when padding is necessary inside of a structure.  Visual C++ supports warning C4820 that does the same thing.  You can enable this warning in Visual C++ by passing /we4820 on the compiler command line.  I’m fairly certain this warning would generate a lot of output, but it might be worthwhile to comb through the output and see if anything interesting turns up.  (A small demonstration of -Wpadded appears after this list.)
  • David Major pointed out the /d1reportAllClassLayout switch for Visual C++, which prints the class layout of all the classes in the compilation unit.  If you’re only interested in a single class, you can use /d1reportSingleClass$NAME to narrow the report down to the class with $NAME. GCC used to have something similar in -fdump-class-hierarchy, but that option has been removed.
  • Benoit Girard asked if he could see a list of the 50 largest things on a Linux build.  Forthwith, I present the 50 largest objects and the 50 largest functions in libxul for an x86-64 Linux optimized build.  One thing to note about the objects is that they’re not all in the same section; the seventh field in readelf output tells you the section index.  So for the linked list of objects above, section 15 is .rodata (read-only, shareable data), section 22 is .data (read-write non-shareable data), section 27 is .data.rel.ro (data that needs to have relocations applied at load time, but can be read-only thereafter, e.g. virtual function tables), and section 29 is .bss (zero-initialized memory).  Unsurprisingly, string encoding/decoding tables are the bulk of the large objects, with various bits from WebRTC, JavaScript, and the Gecko profiler also making an appearance.  Media codec-related functions appear to take up a large amount of space, along with some JavaScript functions, and a few things related to the HTML parser.
  • A commenter by the name of “AnimalFriend” correctly pointed out that what you really want to know is which structures both have a lot of instances hanging around and have holes that you could fill.  I don’t know of a good way to answer the first part without adding a lot of instrumentation (though perhaps you could catch a lot of cases by augmenting the MOZ_COUNT_CTOR macro to tell you which structures get allocated a lot). The second part can be answered by something like pahole.
  • Alternatively, you could use something like access_profiler to tell you what fields in your objects get accessed and how often, then carefully pack those fields into the same cache line.  The techniques access_profiler uses are also applicable to counting allocations of individual objects.  Maybe we should start using something more access_profiler-like instead of MOZ_COUNT_CTOR and friends!  Definitely more C++-ish, more flexible, and it eliminates the need to write the corresponding MOZ_COUNT_DTOR.
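
Here is the -Wpadded demonstration promised above: a minimal, invented example of the kind of structure it complains about.

/* pad.c: compile with `gcc -Wpadded -c pad.c' (clang accepts the same
   flag).  The compiler warns that it inserted padding before `i' to
   satisfy int's alignment requirement.  */
struct S {
  char c;   /* 1 byte */
            /* 3 bytes of padding on typical 32- and 64-bit ABIs */
  int i;    /* 4 bytes */
};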

28 Jan 14

finding space savings

My last post talked about trimming down JSJitInfo.  But it left the question of how one might find things like this unanswered (and even unasked!).  Herein follows a short list of my strategies, with a large amount of explanation, for finding things that take up too much space.

The first thing to look at is what objects (as distinct from functions, see below) are taking up a lot of space (on Linux/Android):

readelf -sW dist/bin/libxul.so | awk '$4 == "OBJECT" { print }' | sort -k 3 -n -r | head -n 50 | c++filt

(I don’t know of a good way of doing this on Mac or Windows. Mach-O symbol tables on Mac don’t contain the sizes of objects like ELF symbol tables do. The sizes are derivable from the addresses, of course, but that’s non-trivial code to write and insert into the pipeline above. dumpbin on Windows appears to print sizes of symbols with /symbols, but its output is so verbose and there appears to be some sort of indirection between the symbol and the actual data associated with the symbol. If anybody has non-Linux/Android insight here, I would appreciate it!)

That pipeline is a mouthful, so breaking it down:

  • readelf -sW dist/bin/libxul.so: The -s argument tells readelf to print the symbol table from libxul.so. The -W argument says to format it for a wide display, so the long symbol names that are so common with C++ programming don’t get cut off.
  • awk '$4 == "OBJECT" { print }': The lines readelf prints out look like this:

    3: 00000000021ef9c0 0 FUNC GLOBAL DEFAULT 13 _fini@@xul29a1

    The interesting ones for our purposes are the third field (the size of the object), the fourth field (the kind of object), and the last field (the name of the object). The awk invocation is selecting all the symbols with a type of OBJECT.

  • sort -k 3 -n -r: Now that we have all of the information about the symbols, let’s sort them by their size. -k 3 selects the third field, which is the size, and -n says that we have a numeric field to sort on. -r is just for convenience to list the biggest symbols first.
  • head -n 50 selects the top fifty biggest symbols.
  • c++filt converts all the mangled C++ symbol names to something more human-readable. So instead of _Z25js_GetterOnlyPropertyStubP9JSContextN2JS6HandleIP8JSObjectEENS2_I4jsidEEbNS1_13MutableHandleINS1_5ValueEEE@@xul29a1, you get js_GetterOnlyPropertyStub(JSContext*, JS::Handle<JSObject*>, JS::Handle<jsid>, bool, JS::MutableHandle<JS::Value>)@@xul29a1, which is more understandable.

You don’t have to look at libxul, of course: you can look at whatever objects or executables are of interest to you.

Once you have this information in hand, you can start investigating. Maybe the object is an array of structures that need better bitfield packing.  Maybe the object is large and doesn’t get used.  Maybe we could do a better job of packing string data if there are embedded pointers which require relocations.  Maybe we could use smaller datatypes to store values (saved nearly a megabyte on 32-bit platforms!).  Maybe a structure’s fields just need to be reordered to pack better.  Maybe we have large, duplicated tables lying around.

For discovering whether structures are wasting space, I recommend pahole.  Specifically, I recommend using my fork of pahole, which contains some build fixes for non-Fedora systems and some improvements so pahole copes with modern C++ debug information better. My experience with pahole suggests that its options for only selecting structures with holes are buggy and unhelpful, so you’re going to have to examine the full output. Using the -M option to avoid printing out all the methods on individual objects is tremendously useful.
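
A typical invocation looks like the following; the structure shown is invented for illustration, not taken from libxul:

$ pahole -M dist/bin/libxul.so
struct Example {
        void *                     ptr;                  /*     0     8 */
        int                        small;                /*     8     4 */

        /* XXX 4 bytes hole, try to pack */

        double                     big;                  /*    16     8 */

        /* size: 24, cachelines: 1, members: 3 */
        /* sum members: 20, holes: 1, sum holes: 4 */
};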

The second, but less fruitful, strategy is to look for large functions, rather than large objects:

readelf -sW dist/bin/libxul.so | awk '$4 == "FUNC" { print }' | sort -k 3 -n -r | head -n 50 | c++filt

This is less effective, because functions are often harder to trim down: there’s some amount of work that needs to be done and reducing that is difficult. The biggest functions also tend to be quite a bit smaller than the biggest objects in the program, so there’s less room for winning space back.

Even so, maybe you could find cases of excessive inlining.  Finding cases of excessive inlining is a little more difficult than just looking at the symbol table, but the principles are the same. Or you could find cases where lookup tables should be used instead of long if-else chains.  Or macro-constructed case statements could be made smaller.

The third strategy recognizes that sometimes individual functions or objects aren’t that large, but there are so many of them that they wind up costing you a significant amount of space.  For instance, if you do:

readelf -sW dist/bin/libxul.so | c++filt | grep '::Read' | grep StandardURLParams

you’ll notice that IPDL codegen has created several copies of the exact same function.  Perhaps you could find a way to teach the IPDL compiler to not do that.  Or maybe there are several copies of functions that differ only slightly; find a way to merge those functions together. Templates are great places for finding code duplication, maybe they could be restructured to duplicate less.

A word of caution: link-time optimization, which gets used on a majority of our major platforms, does essentially the same thing as some of the aforementioned by-hand merging strategies. So just because you get some huge win locally doesn’t mean that a release build is going to exhibit the same win. (I would argue that the by-hand merging is beneficial in any event, but that is a topic for some other time.)

Finally, you need to decide what is a reasonable amount of space savings for you to try and hunt down. There are likely several thousand places in Gecko where we could win back ten or twenty bytes. But finding them all is liable to be difficult. Changing them to effect the savings may require significant amounts of code gymnastics that not all reviewers would accept. My personal threshold is somewhere around 10K and varies with how difficult the patch is to write. If I can save 5K with a two-line, fairly obvious patch, I’ll do it. If I can save 12K, but the patch is more involved, it goes on the todo list for a rainy day.


17 Jan 14

packing structures for fun and profit

When the DOM bindings code first started generating information for the JavaScript JIT about getters and setters, the generated information was rather sparse. JSJitInfo, the container for such information, looked like this:

struct JSJitInfo {
    JSJitPropertyOp op;
    uint32_t protoID;
    uint32_t depth;
    bool isInfallible;
    bool isConstant;
};

On a 32-bit platform, sizeof(JSJitInfo) was 16. You could shrink some of the fields, and even pack things into bitfields, but there weren’t that many getters and setters, so it probably wasn’t worth it.

Of course, nothing ever gets smaller: the bindings code started generating these structures for methods, the amount of information the bindings code wanted to communicate to the JIT expanded, and some of the parallel JS work started using the structure for its own purposes. And a bunch more interfaces got converted to WebIDL, so many more of these “little” structures were generated. Eventually, we wound up with:

struct JSJitInfo {
    enum OpType { ... };
    enum AliasSet { ... };
    union {
        JSJitGetterOp getter;
        JSJitSetterOp setter;
        JSJitMethodOp method;
    };
    uint32_t protoID;
    uint32_t depth;
    OpType type;
    bool isInfallible;
    bool isMovable;
    AliasSet aliasSet;
    bool isInSlot;
    size_t slotIndex;
    JSValueType returnType;
    const ArgType* const argTypes;
    JSParallelNative parallelNative;
};

This structure has several issues:

  • There’s wasted space after the isMovable and isInSlot fields, since the aliasSet and slotIndex fields need to be aligned on 4-byte boundaries (for 32-bit platforms). There’s even more wasted space on 64-bit platforms.
  • It’s less obvious, but JSValueType is only a single byte, so there’s wasted space after the returnType field so argTypes can be properly aligned.
  • OpType and AliasSet both have only a few values, yet take up four bytes each.
  • The argTypes field is only used for DOM methods, and even then, it isn’t used for very many.
  • The parallelNative field is never used for any of the DOM bindings code.
  • We’re unlikely to have four billion prototypes in the bindings. The web platform is large, but not that large. Similarly, we are unlikely to have an inheritance graph that’s four billion layers deep, or even several thousand layers deep.

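To see where the bytes go, here is the 32-bit layout those complaints imply (offsets computed by hand from the usual alignment rules; a sketch, not actual compiler output):

struct JSJitInfo {                     /* offset  size (32-bit) */
    union { ... };                     /*    0      4 */
    uint32_t protoID;                  /*    4      4 */
    uint32_t depth;                    /*    8      4 */
    OpType type;                       /*   12      4 */
    bool isInfallible;                 /*   16      1 */
    bool isMovable;                    /*   17      1 */
                                       /*   18      2   hole */
    AliasSet aliasSet;                 /*   20      4 */
    bool isInSlot;                     /*   24      1 */
                                       /*   25      3   hole */
    size_t slotIndex;                  /*   28      4 */
    JSValueType returnType;            /*   32      1 */
                                       /*   33      3   hole */
    const ArgType* const argTypes;     /*   36      4 */
    JSParallelNative parallelNative;   /*   40      4 */
};                                     /* total: 44 bytes */
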
This definition weighed in at 44 bytes on a 32-bit system. With 7k+ of these being generated by the bindings code, and more being added every release cycle, now seemed like a worthwhile time to slim these structures down.

This work has gone on in bug 952777, bug 960653, and yet-to-be-landed bug 960109. After all of those bugs land, we’ll have something that looks like:

struct JSJitInfo {
    union {
        JSJitGetterOp getter;
        JSJitSetterOp setter;
        JSJitMethodOp method;
        JSParallelNative parallelNative;
    };
    uint16_t protoID;
    uint16_t depth;
    uint32_t type_ : 4;
    uint32_t aliasSet_ : 4;
    uint32_t returnType_ : 8;
    uint32_t isInfallible : 1;
    uint32_t isMovable : 1;
    uint32_t isInSlot : 1;
    uint32_t isTypedMethod : 1;
    uint32_t slotIndex : 12;
};

Compared to our earlier version, we’ve addressed every complaint:

  • No internally wasted space due to field alignment, on 32-bit or 64-bit platforms.
  • Enum fields don’t take up a full four bytes of space anymore; they take much closer to the minimum amount of space needed.
  • DOM methods with type information available now have a separate subclass, JSTypedMethodJitInfo, so the argTypes field is only present when needed.
  • The parallelNative field has been moved into the union, so we’re not wasting that field anymore.
  • The protoID and depth fields are now a more reasonable size.

It’s worth noting that several of these fields could be smaller, but there’s no reason for them to be currently, given that shrinking them wouldn’t make the overall structure smaller.

The final size of the structure is 12 bytes on 32-bit platforms, and 16 bytes on 64-bit platforms. With 7k+ JSJitInfo structures, that means we’ve saved ~220K of space in a 32-bit libxul.  For a 32-bit libxul, this is almost 1% of the entire size of libxul, which is an extremely pleasant win for size optimizations.  Smaller downloads, less space on your device, and less memory at runtime!

If you’re interested in examining how much space particular structures take up, you can use pahole for Linux or struct_layout for Mac. I’m not aware of any similar program for Windows, though I imagine such a beast does exist. These programs work by examining the debugging information generated by the compiler and displaying the structure layout, along with any “holes” in the structure. pahole will also tell you about cacheline boundaries, so that you can pack frequently-accessed members in the same cacheline to minimize cache misses.

Given the above (and that there have been other wins, somewhat less spectacular in nature), I’ve wondered if it isn’t worth adding some sort of annotations, either in-source or off to the side, about how big we expect certain structures to be or how much internal waste we expect them to have. Then people modifying those structures–and the reviewers as well–would be aware that they were modifying something space-critical, and take appropriate steps to minimize the space increase or even trim things down a bit.
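
A minimal sketch of what such an in-source annotation could look like, using C++11 static_assert (illustrative only; the expected sizes are taken from the measurements above, not from anything that has landed):

// Guard a space-critical structure against accidental growth.
static_assert(sizeof(JSJitInfo) == (sizeof(void*) == 8 ? 16 : 12),
              "JSJitInfo grew; check field packing before increasing its size");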


09 Jan 14

on old releases

Gregory Szorc recently wrote a laundry list of reasons for ditching support for old Python releases.  I think this list of reasons to upgrade misses the larger point in providing software for other people: You do not get to tell your users what to do.

Maybe those users don’t have sufficient control over their working environments to install a new version of Python.  (Webhosts and university computers are the two examples that spring to mind immediately.  Enterprise environments have similar constraints.)  Maybe those users rely on certain APIs only available in older versions of Python and don’t wish to take an indeterminate amount of time to rewrite (retest, recertify, etc. etc.) their software.  Maybe those users rely on certain APIs that were changed to operate differently in newer releases and don’t want to engage in an extensive audit of their codebase to fix those incompatibilities.  Maybe those users are using other packages that are incompatible with later Python releases and cannot upgrade.  Maybe those users are just rationally lazy and don’t want to deal with downloading, configuring, and installing a new version of Python, plus dealing with inevitable fallout, when the old version has worked Just Fine for everything else.  The list goes on and on.  (Of course, these reasons are not applicable to just Python; feel free to substitute your favorite language X or favorite extensible program X or what have you.)

Microsoft is the best example I can think of for backwards compatibility.  New Windows releases have gone to significant lengths to make it possible to run applications for older versions of Windows, whatever faults those applications may have.  Raymond Chen’s blog documents a number of extraordinary things Windows does under the hood to make outright buggy and/or undocumented-internals-groveling programs that worked under previous versions of Windows still work under newer ones.  You can, of course, argue that this has taken significant engineering effort that could have been put to use doing “better” things.  But Microsoft’s evaluation of “better” clearly includes “how much pain will this inflict on our customers?”

And that’s the point: you are trading off some perceived (and make no mistake, it is perceived) benefit to yourself as a developer of software X against the agony of some unknown number of users as your changes break their world.  Maybe you’ve decided that this agony is small enough: I’ve seen some great examples of this from the DOM/Content folks as they install Telemetry probes to measure how many users might be impacted by backwards-incompatible changes.  Maybe you’ve decided that you have a small enough community of users and they are all enthusiastic enough to adopt whatever you decided to do, even if it breaks things.  (You might be surprised at just how “small” your community is, though.)  Maybe you’ve decided that ditching the old stuff really is necessary and appropriate (hi Australis!).  Maybe you’ve decided that you simply don’t care about the agony of your users, or that the sharp spike in curses uttered against your household don’t matter.  (If you take this last approach, please don’t write any software that I use.)

I realize that using the new shiny stuff in Python, or C++, or whatever generally makes life nicer as a developer.  But I think developers (myself included) tend to systematically underestimate the amount of agony that user-facing changes cause, even when we know we are prone to doing so.


06 Dec 13

reading binary structures with python

Last week, I wanted to parse some Mach-O files with Python.  “Oh sure,” you think, “just use the struct module and this will be a breeze.”  I have, however, tried to do that:

import struct

class MyBinaryBlob:
    def __init__(self, buf, offset):
        self.f1, self.f2 = struct.unpack_from("BB", buf, offset)

and such an approach involves a great deal of copy-and-pasted code.  And if you have some variable-length fields mixed in with fixed-length fields, using struct breaks down very quickly. And if you have to write out the fields to a file, things get even more messy. For this experiment, I wanted to do things in a more declarative style.

The desire was that I could say something like:

class MyBinaryBlob:
    field_names = ["f1", "f2"]
    field_kinds = ["uint8_t", "uint8_t"]

and all the necessary code to parse the appropriate fields out of a binary buffer would spring into existence. (Automagically having the code to write these objects to a buffer would be great, too.) And if a binary object contained something that would be naturally interpreted as a Python list, then I could write a minimal amount of code to do that during initialization of the object as well. I also wanted inheritance to work correctly, so that if I wrote:

class ExtendedBlob(MyBinaryBlob):
    field_names = ["f3", "f4"]
    field_kinds = ["int32_t", "int32_t"]

ExtendedBlob should wind up with four fields once it is initialized.

At first, I wrote things like:

def field_reader(fmt):
    size = struct.calcsize(fmt)
    def reader_sub(buf, offset):
        return struct.unpack_from(fmt, buf, offset)[0], size
    return reader_sub

fi = field_reader("i")
fI = field_reader("I")
fB = field_reader("B")

def initialize_slots(obj, buf, offset, slot_names, field_specs):
    total = 0
    for slot, reader in zip(slot_names, field_specs):
        x, size = reader(buf, offset + total)
        setattr(obj, slot, x)
        total += size

class MyBinaryBlob:
    field_names = ["f1", "f2"]
    field_specs = [fB, fB]

    def __init__(self, buf, offset):
        initialize_slots(self, buf, offset, self.field_names, self.field_specs)

Field readers return the size of what they read, to make it straightforward to add variable-sized fields, not just fixed-width fields that can be parsed by struct.unpack_from. This worked out OK, but I was writing out a lot of copy-and-paste constructors, which was undesirable. Inheritance was also a little weird, since the natural implementation looked like:

class ExtendedBlob(MyBinaryBlob):
    field_names = ["f3", "f4"]
    field_specs = [fi, fi]

    def __init__(self, buf, offset):
        super(ExtendedBlob, self).__init__(buf, offset)
        initialize_slots(self, buf, offset, self.field_names, self.field_specs)

but that second initialize_slots call needs to start reading at the offset resulting from reading MyBinaryBlob's fields. I fixed this by storing a _total_size member in the objects and modifying initialize_slots:

def initialize_slots(obj, buf, offset, slot_names, field_specs):
    total = obj._total_size
    for slot, reader in zip(slot_names, field_specs):
        x, size = reader(buf, offset + total)
        setattr(obj, slot, x)
        total += size
    obj._total_size = total

which worked out well enough.

I realized that if I wanted to use this framework for writing binary blobs, I’d need to construct “bare” objects without an existing buffer to read them from. To do this, there had to be some static method on the class for parsing things out of a buffer. @staticmethod couldn’t be used in this case, because the code inside the method didn’t know what class it was being invoked on. But @classmethod, which received the invoking class as its first argument, seemed to fit the bill.

After some more experimentation, I wound up with a base class, BinaryObject:

import itertools

class BinaryObject(object):
    field_names = []
    field_specs = []

    def __init__(self):
        self._total_size = 0

    def initialize_slots(self, buf, offset, slot_names, field_specs):
        total = self._total_size
        for slot, reader in zip(slot_names, field_specs):
            x, size = reader(buf, offset + total)
            setattr(self, slot, x)
            total += size
        self._total_size = total

    @classmethod
    def from_buf(cls, buf, offset):
        # Determine our inheritance path back to BinaryObject
        inheritance_chain = []
        pos = cls
        while pos != BinaryObject:
            inheritance_chain.append(pos)
            bases = pos.__bases__
            assert len(bases) == 1
            pos = bases[0]
        inheritance_chain.reverse()

        # Determine all the field names and specs that we need to read.
        all_field_names = itertools.chain(*[c.field_names
                                            for c in inheritance_chain])
        all_field_specs = itertools.chain(*[c.field_specs
                                            for c in inheritance_chain])

        # Create the actual object and populate its fields.
        obj = cls()
        obj.initialize_slots(buf, offset, all_field_names, all_field_specs)
        return obj

Inspecting the inheritance hierarchy at runtime makes for some very compact code. (The single-inheritance assertion could probably be relaxed to an assertion that all superclasses except the first do not have field_names or field_specs class members; such a relaxation would make behavior-modifying mixins work well with this scheme.) Now my classes all looked like:

class MyBinaryBlob(BinaryObject):
    field_names = ["f1", "f2"]
    field_specs = [fB, fB]

class ExtendedBlob(MyBinaryBlob):
    field_names = ["f3", "f4"]
    field_specs = [fi, fi]

blob1 = MyBinaryBlob.from_buf(buf, offset)
blob2 = ExtendedBlob.from_buf(buf, offset)

with a pleasing lack of code duplication.  Any code for writing can be written once in the BinaryObject class using a similar inspection of the inheritance chain.

But how does parsing additional things during construction work? Well, subclasses can define their own from_buf methods:

class ExtendedBlobWithList(BinaryObject):
    field_names = ["n_objs"]
    field_specs = [fI]

    @classmethod
    def from_buf(cls, buf, offset):
        obj = BinaryObject.from_buf.__func__(cls, buf, offset)
        # do extra initialization here
        for i in range(obj.n_objs):
            ...
        return obj

The trick here is that calling obj = BinaryObject.from_buf(buf, offset) wouldn’t do the right thing: that would only parse any members that BinaryObject had, and return an object of type BinaryObject instead of one of type ExtendedBlobWithList. Instead, we call BinaryObject.from_buf.__func__, which is the original, undecorated function, and pass the cls with which we were invoked, which is ExtendedBlobWithList, to do basic parsing of the fields. After that’s done, we can do our own specialized parsing, probably with SomeOtherBlob.from_buf or similar. (The _total_size member also comes in handy here, since you know exactly where to start parsing additional members.) You can even define from_buf methods that parse a bit, determine what class they should really be constructing, and construct an object of that type instead:

R_SCATTERED = 0x80000000

class Relocation(BinaryObject):
    field_names = ["_bits1", "_bits2"]
    field_specs = [fI, fI]
    __slots__ = field_names

    @classmethod
    def from_buf(cls, buf, offset):
        obj = BinaryObject.from_buf.__func__(Relocation, buf, offset)

        # OK, now for the decoding of what we just got back.
        if obj._bits1 & R_SCATTERED:
            return ScatteredRelocationInfo.from_buf(buf, offset)
        else:
            return RelocationInfo.from_buf(buf, offset)

This hides any detail about file formats in the parsing code, where it belongs.

Overall, I’m pretty happy with this scheme; it’s a lot more pleasant than bare struct.unpack_from calls scattered about.
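
As a closing illustration, here is a tiny end-to-end example, invented here rather than taken from the text above, assuming the MyBinaryBlob and ExtendedBlob definitions given earlier:

import struct

# "=BBii" packs two uint8_t fields and two int32_t fields with native
# byte order and no padding, matching how the field readers consume bytes.
buf = struct.pack("=BBii", 1, 2, -3, -4)
blob = ExtendedBlob.from_buf(buf, 0)
assert (blob.f1, blob.f2, blob.f3, blob.f4) == (1, 2, -3, -4)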