15
Nov 16

efficiently passing the buck with needinfo requests

A while back, Bugzilla added this great tool called needinfo requests: you set a flag on the bug indicating that you want input from a particular person, X. X will then get something dropped into their requests page and a separate email notifying them of the needinfo request. Then, when X responds and clears the needinfo request, you get an email notifying you that the request has been dealt with. This mechanism works much better than merely saying “X, what do you think?” in a bug comment and expecting that X will see the comment in their bugmail and respond.

My needinfo-related mail, along with all review-related mail, gets filtered into a separate folder in my email client.  It is then very obvious when I receive needinfo requests, or when needinfo requests that I have made have been answered.

Occasionally, however, when you get a needinfo, you will not be the correct person to answer the question, and you will need to needinfo someone else who has the appropriate knowledge…or is at least one step closer to providing the appropriate knowledge.

There is a right way and a wrong way to accomplish this. The wrong way is to clear your own needinfo request and request needinfo from someone else:

[screenshot: the wrong way]

Why is this bad? Because the original requester will receive a notification that their request has been dealt with appropriately, when it has not! So now they have to remember to watch the bug, poll their bugmail, or similar to figure out when their request has actually been dealt with.  Additionally, you’ll get an email notification when your needinfo request has been answered, which you don’t necessarily want.

The right way (which I just discovered this week) is to uncheck the “Clear the needinfo request” box, which turns the second checkbox into a “Redirect my needinfo request”:

[screenshot: the right way]

This method appropriately redirects the needinfo without notifying the original requester, and the original requester will (ideally) now receive a notification only when the request has been dealt with.


29
Jul 16

a git pre-commit hook for tooltool manifest checking

I’ve recently been uploading packages to tooltool for my work on Rust-in-Gecko and Android toolchains. The steps I usually follow are:

  1. Put together tarball of files.
  2. Call tooltool.py from build-tooltool to create a tooltool manifest.
  3. Upload files to tooltool with said manifest.
  4. Copy bits from said manifest into one of the manifest files automation uses.
  5. Do try push with new manifest.
  6. Admire my completely green try push.

That would be the ideal, anyway.  What usually happens at step 4 is that I forget a comma, or I forget a field in the manifest, and so step 5 winds up going awry, and I end up taking several times as long as I would have liked.

After running into this again today, I decided to implement some minimal validation for automation manifests.  I use a fork of gecko-dev for development, as I prefer Git to Mercurial. Git supports running programs when certain things occur; these programs are known as hooks and are usually implemented as shell scripts. The hook I’m interested in is the pre-commit hook, which is looked for at .git/hooks/pre-commit in any git repository. Repositories come with a sample hook for every hook supported by Git, so I started with:

cp .git/hooks/pre-commit.sample .git/hooks/pre-commit

The sample pre-commit hook checks for trailing whitespace in files (which I sometimes leave around, especially when I’m editing Python) and can also check for non-ASCII filenames being added.  I then added the following lines to that file:

if git diff --cached --name-only | grep -q releng.manifest; then
    for f in $(git diff --cached --name-only | grep releng.manifest); do
        if ! python - <<EOF
import json
import sys
try:
    with open("$f", 'r') as fp:
        json.load(fp)
except ValueError:
    sys.exit(1)
EOF
        then
            echo "$f is not valid JSON"
            exit 1
        fi
    done
fi

In prose, we’re checking to see whether the current commit changes any releng.manifest files in any way. If so, we try to parse each of those files as JSON and abort the commit if any of them fails to parse.

There are several ways this check could be more robust:

  • The check will error if a commit is removing a releng.manifest, because that file won’t exist for the script to check.
  • The check could ensure that the unpack field is set for all files, as the manifest file used for the upload in step 3, above, doesn’t include that field: it needs to be added manually.
  • The check could ensure that all of the digest fields are the correct length for the specified digest in use.
  • …and so on.

So far, though, simple syntax errors are the greatest source of pain for me, so that’s what’s getting checked for.  (Mismatched sizes have also been an issue, but I’m unsure of how to check that…)

What pre-commit hooks have you found useful in your own projects?


06
Jul 16

on the usefulness of computer books

I have a book, purchased during my undergraduate days, entitled Introduction to Algorithms. Said book contains a wealth of information about algorithms and data structures, has its own Wikipedia page, and even a snappy acronym people use (“CLRS”, for the first letters of its authors’ last names).

When I bought it, I expected it to be both an excellent textbook and a book I would refer to many times throughout my professional career.  I cannot remember whether it was a good textbook in the context of my classes, and I cannot remember the last time I opened it to find some algorithm or verify some subtle point.  Mostly, it has served two purposes: propping my monitor up closer to eye level, and providing extra weight to move around when I have had to transfer my worldly possessions from place to place.

Whether this reflects on the sort of code I have worked on, or the rise of the Internet for answering questions, I am unsure.

I have another book, also purchased during my undergraduate days, entitled Programming with POSIX Threads.  Said book contains a wealth of information about POSIX threads (“pthreads”), is only mentioned in “Further Reading” on the Wikipedia page for POSIX threads, and has no snappy acronym associated with it.

I purchased this book because I thought I might assemble a library of programming knowledge, and of course threads would be a part of that.  Mostly, it would sit on the shelves to show people I was a Real Programmer(tm).

Instead, I have found it to be one of those books to always have close at hand, particularly when working on Gecko.  Its explanations of the basic concepts of synchronization are clear and extensive, its examples of how to structure multithreaded algorithms are excellent, and its secondary coverage of “real-world” things such as memory ordering and signals + threads (short version: “don’t”) has been helpful when people have asked me for opinions or to review multi-threaded code.  When I have not followed the advice of this book, I have found myself in trouble later on.

My sense, when searching online for some of the same topics the book covers, is that coverage of comparable quality is rather difficult to find, even taking into account that the topics might be covered by disparate people.

If I had to trim my computer book library down significantly, I’m pretty sure I know what book I would choose.

What book have you found unexpectedly (un)helpful in your programming life?


31
May 16

why gecko data structures should be preferred to std:: ones

In light of the recent announcement that all of our Tier-1 platforms now have a C++11-supporting standard library, I received some questions about whether we should continue encouraging the use of Gecko-specific data structures. My answer was “yes”, and as I was writing the justification for said answer, I felt that the justification was worth broadcasting to a wider audience. Here are the reasons I came up with; feel free to agree or disagree in the comments.

  • Gecko’s data structures can be customized extensively for our purposes, whereas we don’t have the same control over the standard library.  Our string classes, for instance, permit sharing structure between strings (whether via something like nsDependentString or reference-counted string buffers); that functionality isn’t currently supported in the standard library.  While the default behavior on allocation failure in Gecko is to crash, our data structures provide interfaces for failing gracefully when allocations fail.  Allocation failures in standard library data structures are reported via exceptions, which we don’t use.  If you’re not using exceptions, allocation failures in those data structures simply crash, which isn’t acceptable in a number of places throughout Gecko.  (The sketch after this list illustrates the contrast.)
  • Gecko data structures can assume things about the environment that the standard library can’t.  We ship the same memory allocator on all our platforms, so our hashtables and our arrays can attempt to make their allocation behavior line up with what the memory allocator efficiently supports.  It’s possible that the standard library implementations we’re using do things like this, but it’s not guaranteed by the standard.
  • Along similar lines as the first two, Gecko data structures provide better visibility for things like debug checks and memory reporting.  Some standard libraries we support come with built-in debug modes, but not all of them, and not all debug modes are equally complete. Where possible, we should have consistent support for these sorts of things across all our platforms.
  • Custom data structures may provide better behavior than standard data structures by relaxing the specifications provided by the standard.  The WebKit team had a great blog post on their new mutex implementation, which optimizes for cases that OS-provided mutexes aren’t optimized for, either because of compatibility constraints or because of outside specifications.  Chandler Carruth has a CppCon talk where he mentions the non-ideal interfaces in many of the standard library data structures.  We can do better with custom data structures.
  • Data structures in the standard library may provide inconsistent performance across platforms, or disagree on the finer points of the standard.  Love them or hate them, Gecko’s data structures at least provide consistent behavior everywhere.
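
To make the first bullet’s point about allocation failure concrete, here is a toy sketch of a fallible append operation. This is illustrative only, and not Gecko’s actual nsTArray interface; the class name, method name, and growth policy are all made up for the example:

#include <cstdlib>
#include <cstddef>

// Toy fallible array: growth failures are reported to the caller via a
// boolean return value rather than via std::bad_alloc or a process abort.
// For simplicity, this sketch only handles trivially copyable element types.
template<typename T>
class FallibleArray
{
public:
  ~FallibleArray() { free(mData); }

  // Returns false if the underlying allocation fails; never throws.
  bool AppendElement(const T& aValue)
  {
    if (mLength == mCapacity) {
      size_t newCapacity = mCapacity ? mCapacity * 2 : 8;
      T* newData = static_cast<T*>(realloc(mData, newCapacity * sizeof(T)));
      if (!newData) {
        return false;  // the caller decides how to degrade gracefully
      }
      mData = newData;
      mCapacity = newCapacity;
    }
    mData[mLength++] = aValue;
    return true;
  }

private:
  T* mData = nullptr;
  size_t mLength = 0;
  size_t mCapacity = 0;
};

A caller checks the return value of AppendElement and drops the work or reports an error as appropriate, whereas the equivalent std::vector::push_back either throws or, in a build without exceptions, terminates the process.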

Most of these arguments are not new; if you look at the documentation for Facebook’s open-source Folly library, for instance, you’ll find a number of these arguments, if not expressed in quite the same way.  Browsing through WebKit’s WTF library shows they have a number of the same things that we do in xpcom/ or mfbt/ as well, presumably for some of the same reasons.

All of this is not to say that our data structures are perfect: the APIs for our hashtables could use some improvements, our strings and nsTArray do a poor job of separating “data structure” from “algorithm”, nsDeque serves as an excellent excuse to go use the standard library instead, and XPCOM’s synchronization primitives should stop going through NSPR and use the underlying OS’s primitives directly (or simply be rewritten to use something like WebKit’s locking primitives, above).  This is a non-exhaustive list; I have more ideas if people are interested.

Having a C++11 standard library on all platforms brings opportunities to remove dead polyfills; MFBT contains a number of these (Atomics.h, Tuple.h, TypeTraits.h, UniquePtr.h, etc.).  But we shouldn’t flock to the standard library’s functionality just because it’s the standard.  If the standard library’s functionality doesn’t fit our use cases, we should definitely write our own replacement(s) and use them widely.


18
Apr 16

rr talk post-mortem

On Wednesday last week, I gave an invited talk on rr to a group of interested students and faculty at Rose-Hulman. The slides I used are available, though I doubt they make a lot of sense without the talk itself to go with them. Things I was pleased with:

  • I didn’t overrun my time limit, which was pretty satisfying.  I would have liked to have an hour (40 minutes talk/20 minutes for questions or overrun), but the slot was for a standard class period of 50 minutes.  I also wanted to leave some time for questions at the end, of which there were a few. Despite the talk being scheduled for the last class period of the day, it was well-attended.
  • The slides worked well.  My slides are inspired by Lawrence Lessig’s style of presenting, which I also used for my lightning talk in Orlando.  It forces you to think about what you’re putting on each slide and to make each slide count.  (I realize I didn’t use this for my Gecko onboarding presentation; I’m not sure whether the Lessig method would work for things like that.  Maybe at the next onboarding…)
  • The level of sophistication was just about right, and I think the story approach to creating rr helped guide people through the presentation.  At least, it didn’t look as though many people were nodding off or completely confused, despite rr being a complex systems-heavy program.

Most of the above I credit to practicing the talk repeatedly.  I forget where I heard it, but a rule of thumb I use for presentations is 10 hours of prep time minimum (!) for every 1 hour of talk time.  The prep time always winds up helping: improving the material, refining the presentation, and boosting my confidence giving the presentation.  Despite all that practice, opportunities for improvement remain:

  • The talk could have used at least a brief introduction to “here’s how debuggers work”.  This is kind of old hat to me, but I realized after the fact that to many students (perhaps even some faculty), blithely asserting that rr can start and stop threads at will, for instance, might seem mysterious.  A slide or two on the differences between how rr record works vs. how rr replay works and interacts with GDB would have been clarifying as well.
  • The above is an instance where a diagram or two might have been helpful.  I dislike putting diagrams in my talks because I dislike the thought of spending all that time to find a decent, simple app for drawing things, actually drawing them, and then exporting a non-awful version into a presentation.  It’s just a hurdle that I have to clear once, though, so I should just get over it.
  • Checkpointing and the actual mechanisms by which rr can run forwards or backwards in your program got short shrift and should have been explained in a little more detail.  (Diagrams again…)  Perhaps not surprisingly, the checkpointing material got added later during the talk prep and therefore didn’t get practiced as much.
  • The demo received very little practice (I’m sensing a theme here) and while it was able to show off a few of rr’s capabilities, it wasn’t very polished or impressive.  Part of that was due to rr mysteriously deciding to cease working on my virtual machine, but part of that was just my own laziness and assuming things would work out just fine at the actual talk.  Always practice!

28
Jan 16

for-purpose instead of non-profit

I began talking with a guy in his midforties who ran an investment fund and told me about his latest capital raise. We hit it off while discussing the differences between start-ups on the East and West Coasts, and I enjoyed learning about how he evaluated new investment opportunities. Although I’d left that space a while ago, I still knew it well enough to carry a solid conversation and felt as if we were speaking the same language. Then he asked what I did.

“I run a nonprofit organization called Pencils of Promise.”

“Oh,” he replied, somewhat taken aback. “And you do that full-time?”

More than full-time, I thought, feeling a bit judged. “Yeah, I do. I used to work at Bain, but left to work on the organization full-time.”

“Wow, good for you,” he said in the same tone you’d use to address a small child, then immediately looked over my shoulder for someone new to approach…

On my subway ride home that night I began to reflect on the many times that this scenario had happened since I’d started Pencils of Promise. Conversations began on an equal footing, but the word nonprofit could stop a discussion in its tracks and strip our work of its value and true meaning. That one word could shift the conversational dynamic so that the other person was suddenly speaking down to me. As mad as I was at this guy, it suddenly hit me. I was to blame for his lackluster response. With one word, nonprofit, I had described my company as something that stood in stark opposition to the one metric that his company was being most evaluated by. I had used a negative word, non, to detail our work when that inaccurately described what we did. Our primary driver was not the avoidance of profits, but the abundance of social impact…

That night I decided to start using a new phrase that more appropriately labeled the motivation behind our work. By changing the words you use to describe something, you can change how others perceive it. For too long we had allowed society to judge us with shackling expectations that weren’t supportive of scale. I knew that the only way to win the respect of our for-profit peers would be to wed our values and idealism to business acumen. Rather than thinking of ourselves as nonprofit, we would begin to refer to our work as for-purpose.

From The Promise of a Pencil by Adam Braun.


20
Jan 16

gecko and c++ onboarding presentation

One of the things the Firefox team has been doing recently is having onboarding sessions for new hires. This onboarding currently covers:

  • 1st day setup
  • Bugzilla
  • Building Firefox
  • Desktop Firefox Architecture / Product
  • Communication and Community
  • Javascript and the DOM
  • C++ and Gecko
  • Shipping Software
  • Telemetry
  • Org structure and career development

My first day consisted of some useful HR presentations and then I was given my laptop and a pointer to a wiki page on building Firefox.  Needless to say, it took me a while to get started!  It would have been super convenient to have an introduction to all the stuff above.

I’ve been asked to do the C++ and Gecko session three times.  All of the sessions are open to whoever wants to come, not just the new hires, and I think yesterday’s session was easily the most well-attended yet: somewhere between 10 and 20 people showed up.  Yesterday’s session was the first session where I made the slides available to attendees (should have been doing that from the start…) and it seemed equally useful to make the slides available to a broader audience as well. The Gecko and C++ Onboarding slides are up now!

This presentation is a “living” presentation; it will get updated for future sessions with feedback and as I think of things that should have been in the presentation or better ways to set things up (some diagrams would be nice…).  If you have feedback (good, bad, or ugly) on particular things in the slides or you have suggestions on what other things should be covered, please contact me!  Next time I do this I’ll try to record the presentation so folks can watch that if they prefer.


09
Oct 15

gecko include file statistics

I was inspired to poke at which files were most heavily #include’d and which files contributed the most text as a result of their #include’ing after seeing the simplicity of LibreOffice’s script for doing so. I had to rewrite it in Python, as the obvious modifications to the awk script weren’t working, and I had no taste for debugging awk code. I’ve put the script up as a gist.

It’s intended to be run from a newly built objdir on Linux like so:

python includebloat.py .

The ability to pick a subdirectory of interest:

python includebloat.py dom/bindings/

was useful to me when I was testing the script, so I wasn’t groveling through several thousand files at a time.

The output lines are formatted like so:

total_size file_size num_of_includes filename

and are intended to be manipulated further via sort, etc. The script might work on Mac and Windows, but I make no promises.

The results were…interesting, if not especially helpful at suggesting modifications for future work. I won’t show the entirety of the script’s output, but here are the top twenty files by total size included (size of the file on disk multiplied by number of times it appears as a dependency), done by filtering the script’s output through sort -n -k 1 -r | head -n 20 | cut -f 1,4 -d ' ':

332478924 /usr/lib/gcc/x86_64-linux-gnu/4.9/include/avx512fintrin.h
189877260 /home/froydnj/src/gecko-dev.git/js/src/jsapi.h
161543424 /usr/include/c++/4.9/bits/stl_algo.h
141264528 /usr/include/c++/4.9/bits/random.h
113475040 /home/froydnj/src/gecko-dev.git/xpcom/glue/nsTArray.h
105880002 /usr/include/c++/4.9/bits/basic_string.h
92449760 /home/froydnj/src/gecko-dev.git/xpcom/glue/nsISupportsImpl.h
86975736 /usr/include/c++/4.9/bits/random.tcc
76991387 /usr/include/c++/4.9/type_traits
72934768 /home/froydnj/src/gecko-dev.git/mfbt/TypeTraits.h
68956018 /usr/include/c++/4.9/bits/locale_facets.h
68422130 /home/froydnj/src/gecko-dev.git/js/src/jsfriendapi.h
66917730 /usr/include/c++/4.9/limits
66625614 /home/froydnj/src/gecko-dev.git/xpcom/glue/nsCOMPtr.h
66284625 /usr/include/x86_64-linux-gnu/c++/4.9/bits/c++config.h
63730800 /home/froydnj/src/gecko-dev.git/js/public/Value.h
62968512 /usr/include/stdlib.h
57095874 /home/froydnj/src/gecko-dev.git/js/public/HashTable.h
56752164 /home/froydnj/src/gecko-dev.git/mfbt/Attributes.h
56126246 /usr/include/wchar.h

How does avx512fintrin.h get included so much? It turns out <algorithm> drags in a lot of code, despite people usually only needing min, max, or swap. In this case, <algorithm> includes <random> because std::shuffle requires std::uniform_int_distribution from <random>. This include chain is responsible for essentially all of the /usr/include/c++/4.9-related files in the above list.

If you are compiling with SSE2 enabled (as is the default on x86-64 Linux), then <random> includes <x86intrin.h> because <random> contains a SIMD Mersenne Twister implementation. And <x86intrin.h> is a clearinghouse for all sorts of x86 intrinsics, even though all we need is a few typedefs and intrinsics for SSE2 code. Minus points for GCC header cleanliness here.

What about the top twenty files by number of times included (filter the script’s output through sort -n -k 3 -r | head -n 20 | cut -f 3,4 -d ' ')?

2773 /home/froydnj/src/gecko-dev.git/mfbt/Char16.h
2268 /home/froydnj/src/gecko-dev.git/mfbt/Attributes.h
2243 /home/froydnj/src/gecko-dev.git/mfbt/Compiler.h
2234 /home/froydnj/src/gecko-dev.git/mfbt/Types.h
2204 /home/froydnj/src/gecko-dev.git/mfbt/TypeTraits.h
2132 /home/froydnj/src/gecko-dev.git/mfbt/Likely.h
2123 /home/froydnj/src/gecko-dev.git/memory/mozalloc/mozalloc.h
2108 /home/froydnj/src/gecko-dev.git/mfbt/Assertions.h
2079 /home/froydnj/src/gecko-dev.git/mfbt/MacroArgs.h
2002 /home/froydnj/src/gecko-dev.git/xpcom/base/nscore.h
1973 /usr/include/stdc-predef.h
1955 /usr/include/x86_64-linux-gnu/gnu/stubs.h
1955 /usr/include/x86_64-linux-gnu/bits/wordsize.h
1955 /usr/include/x86_64-linux-gnu/sys/cdefs.h
1955 /usr/include/x86_64-linux-gnu/gnu/stubs-64.h
1944 /usr/lib/gcc/x86_64-linux-gnu/4.9/include/stddef.h
1942 /home/froydnj/src/gecko-dev.git/mfbt/Move.h
1941 /usr/include/features.h
1921 /opt/build/froydnj/build-mc/js/src/js-config.h
1918 /usr/lib/gcc/x86_64-linux-gnu/4.9/include/stdint.h

Not a lot of surprises here. A lot of these are basic definitions for C++ and/or Gecko (<stdint.h>, mfbt/Move.h).

There don’t seem to be very many obvious wins, aside from getting GCC to clean up its header files a bit. Getting us to the point where we can use <type_traits> instead of our own homegrown mfbt/TypeTraits.h would be a welcome development. Making js/src/jsapi.h less of a mega-header might help some, but brings with it the burden of “did I remember to include the correct JS header files?”, which probably devolves into people cutting-and-pasting complete lists, which isn’t a win. Splitting up nsISupportsImpl.h seems like it could help a little bit, though with unified compilation, I suspect we’d likely wind up including all the split-up files at once anyway.


17
Sep 15

compiler-enforced locked accesses

If you’ve done any amount of threaded programming, you’ve probably run across code that looked like:

  // Only accessed with the mutex held.
  uint32_t mFlags;
  bool mConnected;
  nsTArray<int32_t> mData;

  // Only called with the mutex held.
  void DoSomething();

Perhaps you’ve even gotten to debug code which inadvertently violated the locking requirements of the members.

Several months ago, I reviewed a patch by David Keeler that addressed the second half of the above example. Methods that had locking requirements looked like:

  void DoSomething(MutexAutoLock& aProofOfLock);

which ensures (at compile time!) that you can’t call the function without locking the mutex first. I thought this was a nice technique, said as much in my review, and have been looking for places to apply it ever since.
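
As a minimal sketch of the calling side (the class and the mMutex member here are hypothetical, assuming only the MutexAutoLock type from the signature above):

// Hypothetical caller: the only way to obtain the MutexAutoLock argument
// that DoSomething() requires is to actually acquire the mutex first.
void Connection::Update()
{
  MutexAutoLock lock(mMutex);
  DoSomething(lock);  // forgetting to lock leaves nothing to pass, so the
                      // mistake fails to compile instead of failing at runtime
}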

The explicitness and the requirement to constantly pass MutexAutoLock& variables around is a feature, not a bug. Doing so encourages you to limit the amount of code that needs to be executed with locks held, keeping the concurrent parts of the code and the synchronized parts of the code clearly delimited. In this respect, this technique is similar to the IO monad in Haskell. I don’t know whether the extra verbosity would make the code more difficult to read or not, especially if the techniques for guarding members suggested below were applied as well.

This coding style also came in handy a week or so ago while I was investigating overly high CPU usage when playing YouTube videos. Our event queue for nsIRunnables did its own locking internally, and exposed its internal reentrant monitor “for power users”. This led to code like:

{
  ReentrantMonitorAutoEnter mon(mEvents.GetReentrantMonitor());
  ...
  mEvents.PutEvent(event);
}

where the PutEvent call would do an extra round of locking (“just to be sure”), which was wasted work. Having data structures like this do their own locking internally typically isn’t a great idea, so part of the work in the bug mentioned above was to separate the locking requirements of the queue from who actually needs to do the locking. Or, in other words, we can have the class that owns the event queue do the locking, and have the event queue’s methods enforce the locking programmatically:

{
  MonitorAutoLock mon(mMonitor);
  ...
  mEvents.PutEvent(event, mon);
}

Now there’s no wasted work in PutEvent, because it already knows the appropriate locking has been done. The ability to use non-reentrant monitors—which are more efficient—was a nice bonus resulting from the separation of concerns here.
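
For illustration, here is a hedged sketch of what the queue’s side might now look like (this is not the actual nsEventQueue code; the mQueue member is invented for the example):

// The proof-of-lock parameter replaces the queue's internal locking.
void EventQueue::PutEvent(nsIRunnable* aEvent, MonitorAutoLock& aProofOfLock)
{
  // No locking here: the MonitorAutoLock argument is evidence that the
  // owning class's monitor is already held by the caller.
  mQueue.AppendElement(aEvent);
}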

This technique can also help solve the first half of the problem we presented at the beginning of this post: ensuring members are only accessed with locks held.

template<typename T>
class Guarded;

template<>
class Guarded<uint32_t>
{
public:
  Guarded() : mValue(0) {}

  uint32_t Value(MutexAutoLock& aProofOfLock)
  {
    return mValue;
  }

  void Assign(MutexAutoLock& aProofOfLock, uint32_t aNewValue)
  {
    mValue = aNewValue;
  }

  // Since accesses should only be done under the lock, and copying
  // and moving would therefore require locks, we require the user
  // to ensure those constraints are met with explicit calls to the
  // above rather than the compiler sneaking unlocked accesses in.
  Guarded(const Guarded&) = delete;
  ...

private:
  uint32_t mValue;
};

The above class isn’t production quality code; it’s intended to sketch out how explicit locking requirements might work. A more robust version might require a Mutex& reference to be passed to the constructor, with member functions asserting that the MutexAutoLock& parameters actually lock the specified mutex. Specializing for each of the integer types would also get tiresome, so we’d need to do a better job there. Handling types with methods could be done with something like the following, I think:

template<typename T>
class GuardedAggregate
{
public:
  GuardedAggregate() : mValue() {}

  // The core idea here is that the user would write:
  //
  // GuardedAggregate<nsTArray> mArray;
  //
  // and then accesses would be done via:
  //
  // mArray.Value(lock).Method(...);
  //
  // This means that we don't have to proxy every single one of
  // the aggregate's methods, but the locking requirements are
  // still explicit.
  class Proxy
  {
  public:
    Proxy(MutexAutoLock& aProofOfLock, T* aValue) : mValue(aValue)
    {}

    T* operator->()
    {
      return mValue;
    }

  private:
    T* mValue;
  };

  Proxy Value(MutexAutoLock& aProofOfLock)
  {
    return Proxy(aProofOfLock, &mValue);
  }

  ...
private:
  T mValue;
};
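
To show how the pieces might fit together, here is a hedged usage sketch. It assumes the Guarded and GuardedAggregate definitions above plus Gecko’s Mutex and MutexAutoLock; the class and member names are invented for the example:

class ConnectionState
{
public:
  ConnectionState() : mMutex("ConnectionState::mMutex") {}

  void RecordDatum(int32_t aDatum)
  {
    MutexAutoLock lock(mMutex);
    // Without the lock in scope there is no MutexAutoLock to pass, so
    // unlocked accesses to these members simply fail to compile.
    mFlags.Assign(lock, mFlags.Value(lock) | 0x1);
    mData.Value(lock)->AppendElement(aDatum);
  }

private:
  Mutex mMutex;
  Guarded<uint32_t> mFlags;
  GuardedAggregate<nsTArray<int32_t>> mData;
};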

This can also be thought of as a compiler-independent, but less flexible, version of clang’s Thread Safety Analysis. Folks have been asking about bringing the annotations that analysis requires into Gecko; I wonder if it might work just as well to apply this technique more liberally throughout the codebase.


07
Sep 15

standardizing things my way

I was reading The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age and ran across a passage that resonated:

Everybody, of course, supports standardization—in theory. But human beings (particularly, but not exclusively, famous Harvard professors practicing at famous Boston hospitals) want things to be standardized their way. The difficulty that doctors face in accepting a workplace that is not custom-designed around their personal preferences is captured in this old joke about the physician staffs of hospitals: What do you call a 99-1 vote of the medical staff? A tie.

Examples abound: coding styles, version control systems, code review systems…