21 Nov 13

my git workflow

Mark Hammond recently started an etherpad about how people work with git. Rather than commenting there, I thought I’d blog about my workflow instead.

First piece: magit.  If you use Emacs and git, and you don’t use magit, you are missing out.  Highly recommended.  I don’t use the git command line for common operations anymore; I do everything through magit.  magit’s interactive staging is a big improvement over git add -i: you can stage files, hunks, or individual regions selectable by normal Emacs point-and-mark.  I also really like magit’s rebasing support, as I use rebase a lot.

Second piece: git-bz-moz.  I was reluctant to use this at first, but it’s been a huge boon in posting patches directly from my editor.  Setup is pretty straightforward; I have:

[bz]
	firefox-profile = other
	default-tracker = bugzilla.mozilla.org
	default-product = General
	default-component = Core
	default-assigned-to = nfroyd@mozilla.com
	add-url-method = subject-prepend:Bug %d -

in my ~/.gitconfig, and git-bz is smart enough to go grovel through my Firefox profile to get my Bugzilla login information. Having it automatically tag patches with the appropriate bug numbers during export is also helpful. (It’s smart enough to remove them when adding descriptions for the patches in Bugzilla.) My only complaint is that attaching patches to a bug doesn’t auto-assign the bug to you, like hg bzexport does.
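For illustration, posting the tip commit as a patch looks something like this (the bug number here is made up; -e opens the attachment details in your editor before anything gets sent):

# attach HEAD as a patch on a (hypothetical) bug, editing the
# attachment description before it is posted
git bz attach -e 123456 HEAD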

Third piece: I wrote a script I call export-patches for sending stuff to try, committing to inbound, and exporting patches for uplift.  (I used to use it for formatting patches for bugzilla, but stopped doing that after learning git-bz.)  I can push things to try:

export-patches -h ${mc_repo} -t '-b do -p all -u all -t none' ${start}..${end}

or push things to inbound:

export-patches -h ${mi_repo} -r ehsan -b 1 -c ${start}..${end}

It supports per-patch reviewers, too (along with a couple of other things I won’t demonstrate here):

export-patches -h ${mi_repo} -r bz:glandium -b 1 -c ${start}..${end}

The -b 1 convention is a leftover from when I didn’t tag my patches with bug numbers until committing.  (The script complains if bug numbers aren’t specified on the command line for commits.)  git-bz has absolved me of that habit, so I should probably fix the script.

Third-and-a-half piece: export-patches takes some pains (not as many as it could) to ensure that whatever repo I’m using gets its patch queue wiped if things fail.  Less monkeying around with mercurial commands is a win in my book.

Fourth piece: One big branch of work. I used to use separate branches for bugs. However, I found that I was working on enough things simultaneously that switching between branches, rebasing if necessary, clobbering if necessary (often), and so forth was just too disruptive for day-to-day stuff. I’ll use branches if I have really disruptive things that I can’t integrate piecemeal into my one big branch, but generally everything goes into one branch. I ensure things build locally and I make occasional efforts to ensure appropriate tests still work locally, but try is where most of my testing gets done nowadays.

Fourth-and-a-half piece: I never check out master.  I always fetch origin and then rebase off of origin/master.  My branches all track origin/master, so magit will tell me exactly what commits I have remaining to go upstream.
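Spelled out as git commands, that loop looks roughly like this (the branch name is illustrative):

# update remote-tracking refs without touching any local branch
git fetch origin
# replay my work on top of the new upstream
git rebase origin/master
# done once per branch: track origin/master so magit can report
# exactly which commits haven't landed upstream yet
git branch --set-upstream-to=origin/master big-branch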

Annoyances: if I commit patches and those patches later get backed out, then when I next pull from mozilla-central and rebase, the patches I pushed disappear from my branch. I haven’t looked too deeply into why this happens, but I’d really like to fix it.


05 Nov 13

ipdl syntax changes for types coming from C++

Over the weekend, I landed bug 918651, which changes the syntax for how you inform the IPDL compiler about types defined in C++.  Previously, you did:

include "mozilla/HeaderFile.h";
...
using typeFromHeaderFile;
...

The using declaration informs the IPDL compiler that typeFromHeaderFile may appear in places where types can normally appear.  The include directive tells the generated headers what to #include so that the C++ compiler knows about typeFromHeaderFile.

This scheme has a couple of drawbacks:

  • The header files from the include directives aren’t connected to the using declarations in any way.  Those headers might only include the relevant type(s) incidentally, which doesn’t help in unraveling Gecko’s include dependencies.
  • The generated IPDL headers don’t necessarily need the full definition of typeFromHeaderFile.  For structs or classes, the generated headers can get by with a simple forward declaration.  The full definition is only needed in the generated source files.  The above syntax, however, doesn’t enable any sort of forward declaration magic.

To address both of these issues, the syntax for using declarations was changed.  For structs, you should say:

using struct structFromHeaderFile from "mozilla/HeaderFile.h";

The syntax for classes is similar:

using class classFromHeaderFile from "mozilla/HeaderFile.h";

In these cases, the IPDL compiler will forward-declare the types where appropriate and only #include the header in the generated source files.  Additionally, the compiler is intelligent enough to #include the header in the generated headers if it is required. For instance, if there is a struct or a union defined in the header file that requires a struct or a class from a using declaration, the relevant header will be included in the generated header instead of the generated source file.
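As a hand-written illustration of the common case (this is not literal IPDL compiler output, and PFoo is a made-up protocol name), the split looks something like this:

// In the generated header (PFoo.h), a forward declaration suffices,
// because the header only mentions the type in declarations:
struct structFromHeaderFile;

// In the generated source (PFoo.cpp), the serialization code needs
// the full definition, so the header gets included there:
#include "mozilla/HeaderFile.h"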

Finally, if you need an enum type or a typedef, you should say:

using typeFromHeaderFile from "mozilla/HeaderFile.h";

This case functions similarly to what we had before, except that the header file is now closely associated with the type; ideally, that will encourage people to use the correct header (i.e. the one that defines the type).  While you are able to use this syntax with struct or class types, you should use the using struct or using class syntax, as appropriate, so that forward declarations are generated.

There are still a few instances of include directives for C++ headers in IPDL files; those should be considered a bug, and the include directive for C++ headers should not normally be needed going forward.

This change didn’t completely address the original issue of the bug (touching headers in gfx/ causes source files in netwerk/ to rebuild), but it moved us a lot closer to fixing those sorts of issues.


05 Nov 13

the performance implications of strncpy

Last week, I was working on making Firefox compile for an OS X target on a Linux host.  As part of this effort, I ported Apple’s open-sourced ld64 linker to compile and run on a Linux host.  Since OS X is a BSD-derived operating system, ld64 made use of the strlcpy and strlcat functions, designed to be safer than the strcpy/strncpy/strcat/strncat functions.  Linux’s libc doesn’t implement strlcpy and strlcat, so I had to find replacements.  strlcpy seemed easy enough, as a presentation on maloader suggested:

size_t strlcpy(char* dst, const char* src, size_t size)
{
    dst[size - 1] = '\0';
    strncpy(dst, src, size - 1);
    return strlen(dst);
}

I cribbed strlcat from someplace else and went about my merry way.

After I got ld64 to compile, then link simple programs, and worked my way through some configure bugs for non-Android cross-compiles, I ran into a problem: the JavaScript shell was taking 8 minutes to link.  This was unacceptable; it meant libxul was going to take over an hour, maybe over two, to link, which nobody would be happy about.  The equivalent link of the JavaScript shell on my Mac mini took about two seconds.

I started investigating what was going on with perf, just checking into what ld64 was doing for parts of those 8 minutes.  99%+ of the time was being spent inside strncpy.  Hm, that’s strange.

I fiddled around with a couple different things, none of which had much impact.  Then I took a close look at the code calling strlcpy (yes, all the time in strlcpy was through this function, which should have been a red flag in the first place):

int32_t StringPoolAtom::add(const char* str)
{
	int32_t offset = kBufferSize * _fullBuffers.size() + _currentBufferUsed;
	int lenNeeded = strlcpy(&_currentBuffer[_currentBufferUsed], str, kBufferSize-_currentBufferUsed)+1;
	if ( (_currentBufferUsed+lenNeeded) < kBufferSize ) {
		_currentBufferUsed += lenNeeded;
	}
	else {
		int copied = kBufferSize-_currentBufferUsed-1;
		// change trailing '\0' that strlcpy added to real char
		_currentBuffer[kBufferSize-1] = str[copied];
		// alloc next buffer
		_fullBuffers.push_back(_currentBuffer);
		_currentBuffer = new char[kBufferSize];
		_currentBufferUsed = 0;
		// append rest of string
		this->add(&str[copied+1]);
	}
	return offset;
}

In this code, kBufferSize is 16MB, so the size parameter passed to strlcpy can be rather large compared to the size of the string being copied to the destination.

I forget exactly where I read it, but I saw some small blurb about glibc’s strncpy having the crazy behavior of null-padding the destination buffer out to the full size argument, rather than just appending a single null terminator: strncpy(dst, src, n) always writes exactly n bytes, no matter how short src is. If strlcpy was implemented by calling out to strncpy, then just that function above would be writing hundreds or even thousands of megabytes of zeros more than required. That would definitely slow things down!

(I later discovered that this “crazy behavior” is documented in the strncpy man page and is actually required by standards.  Indeed, the original strlcpy paper cites this problem of strncpy as a motivating factor for strlcpy.  It is the only way the performance figures they give in the paper are actually relevant to their point.  But silly me, I just assumed I knew how strncpy works rather than actually reading documentation. I am really curious how this behavior of strncpy came to be and why folks thought it was worth preserving.)
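To make the zero-filling concrete, here’s a minimal demonstration of the standard-mandated behavior:

char buf[16 * 1024 * 1024];   // same order of magnitude as kBufferSize
// strncpy copies the five bytes of "hello" and then, as the standard
// requires, zero-fills the remaining sizeof(buf) - 5 bytes: nearly
// 16MB of writes to copy a 5-character string.
strncpy(buf, "hello", sizeof(buf));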

Once I fixed the strlcpy implementation to do things properly, cross link times became comparable to native link times.  And then I could think about linking libxul in a reasonable amount of time. (And I did link libxul in a reasonable amount of time, if you read through the bug. And it even runs on a Mac!)
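For reference, a fixed version looks something like this (a minimal sketch of the strlcpy contract that copies only the bytes it needs, instead of delegating to strncpy):

#include <string.h>

size_t strlcpy(char* dst, const char* src, size_t size)
{
    size_t srclen = strlen(src);
    if (size > 0) {
        // Copy at most size-1 characters and terminate; unlike
        // strncpy, don't zero-fill the rest of the buffer.
        size_t tocopy = (srclen < size - 1) ? srclen : size - 1;
        memcpy(dst, src, tocopy);
        dst[tocopy] = '\0';
    }
    // Per the strlcpy contract, return the length of the string we
    // tried to create, so callers can detect truncation.
    return srclen;
}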

Lesson learned: don’t use strncpy!