No newsletter this week: we landed a bunch of bug fixes but nothing major to write about. I did recently finish a mini-project to run some queries and analyze SpiderMonkey bugs and how we handle them. Here are the results in picture form:
This will be a short one: it’s been a fairly quiet week, with mostly fixes for crashes and other small bugs.
The one big landing was ObjShrink, by Brian Hackett, which took the size of a JS object down from about 48 bytes (on 32-bit platforms) to 16 bytes. ObjShrink also made shapes (an auxiliary data structure that records object property layouts) smaller. I tested ObjShrink in practice by comparing starting Firefox and loading TechCrunch on a Dec 2 nightly and a Dec 6 nightly. The memory usage for the TechCrunch tab went from 4.84MB to 4.02MB. Overall memory usage went from 51.8MB to 49.3MB–I noticed that the system compartment showed a big drop, presumably because the browser UI uses a lot of JS objects. That’s a 17% improvement in a JS-heavy tab and almost a 5% improvement overall, which is a huge memory win.
Jeff Walden landed some refactorings to the parser–we’re hoping to make it easier to hack on for doing ECMAScript Harmony work. For IonMonkey, Nicolas Pierron and Jan de Mooij added support for some more opcodes to the compiler so the team can run more benchmarks. Incremental GC is working on the larch branch, but Bill is still fixing bugs (I helped with one today too). RyanVM (Ryan VanderMeulen) has been cleaning up more tracejit leftovers. Finally, Hannes Verschore, one of our 2011 summer interns, fixed a 3-year old bug that caused incorrect error messages with certain usages of let.
[Update: I made slight corrections to the percentage wins shown in the first draft. I had originally written 20% and 4%.]
This continues the JägerMonkey project series. Part 1 is here.
arewefastyet (aka AWFY) seems to be the most famous single thing in the project (with the possible exception of the project art). The story starts in spring 2010, late March or so, when apparently I said something like: we should have a web page that shows our benchmark scores improving over time. The next thing I knew, David surprised me by showing me AWFY.
The funny thing is that I don’t actually remember suggesting it: David told me that I did in the project debrief. I’m pretty sure that when I mentioned the idea I was channeling Jeff Naughton, my databases professor at Wisconsin. He once told me about working on some databases performance problem (possibly large-scale sorting), and how first you have to build up the basic infrastructure, but then after that you can start implementing optimizations, and the fun begins as you watch your score improve. I (probably) thought that it would also be a lot of fun (and motivational) for us to watch our score improve once we had built a basic compiler.
And the next thing we knew after that, AWFY had gone viral. We hadn’t intended it to–we didn’t even think we had told anyone about it, but somehow it was on Reddit. (David tells me there were also a bunch of inflammatory posts there that he just had to respond to.) But we didn’t mind: it built a lot of excitement around our efforts, even impatience, as the “fans” would often ask us what was going on if the line hadn’t moved for a few days. I tried to build on the excitement by tweeting about optimizations as they landed, which combined well with AWFY since anyone could go there to see what I was tweeting about. The attention also drew in a couple of people, wx24 and Michael Clackler, who created better layout and UI for the site.
But we liked AWFY most of all as a motivational scoreboard. It was also up on an old laptop next to the JS pit. With AWFY, once you landed a patch to improve our performance, just a few minutes later it showed up on the scoreboard. Every win, even the small ones, counted for the project, and they all counted visibly on AWFY. (In the debrief, Chris Leary connected this to the concept of the Big Visible Chart.) And you could look at the recent history and be reminded of how much good stuff was happening–I loved starting my day by walking into the pit and seeing the recent string of wins.
Since then, a few other areweXyet sites have appeared; I don’t know how useful they’ve been. I may be wrong about this, but talking to people now, I often sense a faint belief of “and we shall build an areweXyet, and X shall come to pass”, and I don’t think it really works that way.
AWFY was great, as both viral marketing and a motivational tool, but other projects won’t automatically be able to replicate that experience. On the viral marketing side, I have no idea how it happened, so I guess I can’t say that it can’t be replicated. But I suspect there were many hidden ingredients that might not be present again: maybe people were primed to be excited about Firefox getting a big boost in JS performance, or maybe it was the novelty factor of a performance scoreboard with a joke name and a huge “NO” to answer the question. So I wouldn’t count on it.
The motivational value is solid, but applies to some projects and not others. The great thing about AWFY was that the score was almost entirely under our control (modulo a few ms of noise): land an optimization patch, and it definitely gets faster. If it gets slower when you’re not looking, just back out the regressing patch. (The ‘almost’ is because it’s possible some correctness fix would need to be made that would necessarily regress us, but that was a rare event.) Another great thing was that we had an achievable target: cross the lines. That one wasn’t entirely under our control, as the other JS engines were doing things too, but they weren’t doing a major new JIT like we were, so they weren’t changing so fast. Finally, there was really one measure we were most focused on, SunSpider, so we didn’t have to think about too many things and could really concentrate our efforts.
The opposite of AWFY, in my experience, was the blocker count for the Firefox 4 release. The stated goal was: fix all the blocking bugs, so there are zero left, and then we get to release. (And finally get JägerMonkey in the hands of all our users!) I think there were sometimes graphs of the blocker count showing on the big monitors here in MV. I also had my own tools to track things. The problem was that the blocker count could go up as well as down, so there was no notion of getting closer to the goal, or of efforts getting “locked in”, as there was with AWFY. I can’t remember the exact numbers, but the JS team was fixing something like 5 blockers per day, with 4 more arriving each day. Trying to get the blocker count down was like being on a fast treadmill. That was actually pretty motivating for me, but only in a negative way: I wanted off that exhausting treadmill. The goal wasn’t really achievable either: 0 blocking bugs represented a kind of perfection and was not realistic. Knowing you’ll definitely never really reach the goal is not encouraging.
So, to be a positive motivational tool, I think a programming scoreboard ideally is super-simple and easy to read, has the score fully under the control of the developers, generally moves only in a positive direction, and focuses on building up wins. Some projects have goals that obviously map on to that, others might fit with some cleverness, and others just won’t. I’m sure not all of those elements are necessary: it’s not about following a recipe, it’s about designing your own tool, for yourself and your team, that fits you and is motivational to you. If posting scores to it feels good, it’s working.
AWFY could perhaps be seen as an instance of gamification, and I think games or the book Reality is Broken would be good sources of inspiration for project scoreboards. I wouldn’t recommend actually turning a software engineering project into a game, though: there’s always tons of stuff to pay attention to other than the score, and you definitely don’t want people to start trying to game the system. (I have heard horror stories of management at company X setting up a system to score developers on the number of source checkins, with the predictable results.) We never had that problem because we didn’t use AWFY to control or evaluate our work (we did that through thought and discussion): we just used it to make looking at the results of our work more fun.
Help Firefox get incremental GC
Bill McCloskey has incremental GC working as a prototype in the larch branch. The current version is probably pretty crashy but does show the pause time improvements on some sites.
It would help Bill a lot to get a list of sites that have big GC pauses now in Firefox. He needs sites that will do a lot of GC so he can tune incremental GC to perform well, and also test for stability. So, if you know of any good sites that suffer from GC pauses, please post them to https://bugzilla.mozilla.org/show_bug.cgi?id=702495 or email me (firstname.lastname@example.org). Some hints on what to look for:
Sites that are animation-heavy or JS-heavy are the likeliest to get GC pauses. One example is Flight of the Navigator, which will show some brief pauses or glitches in the animation.
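For anyone hunting for test cases, here’s a rough sketch of the kind of allocation pattern that drives frequent GC (hypothetical code, not taken from any particular site; the function name is mine):

```javascript
// Allocation-heavy loop: lots of short-lived objects force frequent GC.
// Run something like this inside an animation frame callback and the
// resulting GC pauses show up as glitches in the animation.
function churn(iterations) {
  let last = null;
  for (let i = 0; i < iterations; i++) {
    // each iteration allocates a fresh object and array; almost all die young
    last = { index: i, payload: new Array(64).fill(i) };
  }
  return last.index;
}

churn(100000);
```

If an animation stutters while a loop like this runs in the background, that page is a good candidate to report.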
The big news for this week is that the tracejit has been retired: jstracer.cpp and the nanojit directory have been removed from the tree. With JM+TI, the tracer doesn’t run, so it wasn’t helping performance, but removing it reduced our code size (by 0.5MB in one measurement on Linux) and complexity, which will help us add other optimizations faster. For example, the ObjShrink project removes some things that the tracer depends on, so if we kept the tracer, we’d have to do a bunch of work to make it run correctly, which wouldn’t even help us because the tracer doesn’t run for content.
I’ve seen a few commenters express disappointment about the retirement of the tracer, saying things like, too bad it didn’t work out. But I don’t see it that way. We are constantly replacing code inside the engine as we improve it–I think it’s been estimated in the past that about 25% of the engine is replaced per year–so the expected lifetime of any particular piece of code is something like 4 years. JITs are hot right now, so they are prime candidates for improvement and replacement. TraceMonkey has been in the tree for about 3.5 years. JägerMonkey might not even make it 2 years before being replaced by IonMonkey.
I still remember that not long after I started at Mozilla, I got to go to a few meetings in building S (or was it K?) with Brendan to talk about a tracing compiler for JS with people from Adobe like Edwin Smith and Rick Reitmaier, and meeting Andreas and Michael Bebenita and Michael Franz; learning why tracing nested loops is hard, and the different solutions of ‘outerlining’ and nested traces; and hearing about how a Game of Life benchmark generated a large number of traces. I was working on some static analysis stuff at the time but I thought doing a compiler for JS sounded amazing, so I followed along. I was excited enough to read their code carefully and do a 5-part series on Tamarin internals on this blog.
I’m not sure exactly how I got started working on TraceMonkey myself, but I think I jumped in to fix some crashes and performance faults to help get it ready for release. What I remember best about that is spending an entire week on one strange intermittent performance regression that depended on the length of the command line used to run the test and eventually turned up a data alignment problem that boosted the performance of the whole system when fixed, still one of my proudest moments in Mozilla hackery. I did a lot of other work on the tracer too, mostly bug fixes and small optimizations, but I also implemented support for tracing pretty much any kind of closure or closure variable over a summer.
So TraceMonkey was my (and David Anderson’s) personal training ground in industrial-strength JIT architecture, implementation, and performance analysis, which fed straight into JägerMonkey and now IonMonkey. Andreas turned TraceMonkey into a PLDI paper, which may have something to do with the increased interest the academic community has taken in JS since then. TraceMonkey gave Firefox fast JS for versions 3.5 and 3.6, and provided an essential part of the platform for David Humphrey and his colleagues to make a ton of cool demos. And that’s pretty great.
Chris Leary landed function inlining, the last of the major optimizations in the initial plan. Sean Stangl landed on-stack replacement (OSR), a key piece of infrastructure, which is required to patch in reoptimized or deoptimized compiled code while running (JM+TI also has OSR).
Small but nice landings
When Firefox runs GC, it can optionally do a “shrink GC”. A shrink GC does a regular GC, then releases free GC-controlled memory pools to the OS, thus reducing the amount of memory used by the browser. If the browser is about to allocate more memory, then it will have to claim that back from the OS immediately, slowing things down, so Firefox doesn’t do shrink GCs very often, mostly when it really expects memory usage to go down for a while. Terrence Cole landed a patch to trigger shrink GC on system memory pressure events, which will hopefully help cut memory usage when it’s most needed.
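As a rough mental model of the tradeoff (a toy sketch with hypothetical names; the real implementation lives in C++ inside the engine):

```javascript
// Toy model: both kinds of GC move dead blocks onto a free list for cheap
// reuse; a shrink GC additionally drops the free list, modeling "give the
// memory back to the OS" at the cost of slower future allocation.
class Heap {
  constructor() { this.freeList = []; }
  allocate() {
    // reusing a cached free block is why keeping the list makes allocation cheap
    return this.freeList.pop() ?? new Array(16);
  }
  gc({ shrink = false } = {}) {
    this.freeList.push(new Array(16)); // pretend one block just became free
    if (shrink) this.freeList = [];    // shrink GC: release the cached blocks
  }
}
```

This is why triggering shrink GCs only on memory pressure is a sensible policy: you pay the re-allocation cost exactly when reclaiming memory matters most.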
Finally, Alon Zakai added a printErr function to the JS shell, just like print but going to stderr.
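For anyone outside the shell, a sketch of the behavior (print and printErr are shell built-ins; the Node-style shims here are my own approximation):

```javascript
// In the SpiderMonkey shell, print writes to stdout and printErr to stderr.
// Outside the shell you can approximate them like this:
const print = (...args) => console.log(args.join(" "));
const printErr = (...args) => console.error(args.join(" "));

print("result:", 42);            // normal output, safe to pipe or redirect
printErr("warning: slow path");  // diagnostics kept off stdout
```

Keeping diagnostics on stderr means test harnesses can compare stdout output exactly while still logging freely.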
Update since last time (11/1):
Stuff that might affect you
We are moving to a model where JSRuntimes are single-threaded. That means if you want to use multiple threads, you’ll create separate runtimes. (Web workers do this in Firefox now.) This is going to help us make a lot of simplifications to the engine, and will be more robust than the current model.
In preparation for single-threaded runtimes, Luke landed a patch that asserts that a JSRuntime is used only from its ‘owner’ thread. If you get Assertion failure: rt->onOwnerThread(), it’s from using a runtime from multiple threads.
Luke is going to have a blog post that explains the changes and the new model in more detail in the near future.
I went to Velocity Europe and gave my “Know Your Engines” JS talk. As usual for such a trip, I was always fighting jet lag, but I saw and really enjoyed the talks on Amazon Silk, browser performance (Opera, Chrome, and Firefox), and performance guidelines for mobile web development. A week or so later I gave the same talk at LinkedIn to an audience that asked a lot of fun questions.
Not much has had to be updated since my original version of the talk, just because not too many big JS performance improvements have been shipped since June. Changes are coming soon, though: Chrome has landed incremental GC in the developer channel (labeled version 17; note: for me, Firefox got a better score on the particular benchmark linked there, but you can try it yourself), and we are getting close to landing it as well. With JM+TI coming in Firefox 9, and IonMonkey later on, we’ll have more advanced typed compilers out there, and we should get to learn more about what they really do in practice. The long-term question is whether all browsers will get similar compilers, so that web devs that want to reach all users can count on them.
Bill McCloskey landed write barriers for incremental GC. That’s the first hard part of IGC, the part that makes it not crash all over the place. Bill is finishing up the incrementalization itself. The second hard part will be tuning: finding out how short the pauses can be dialed down and making sure pages that have GC pauses now actually do get incrementalized. This stuff is hard, and we don’t know how much tuning work there will be, but we’re currently targeting IGC landing for Firefox 11.
Terrence Cole landed a patch to mark unused GC arenas as decommitted. This means when a piece of GC memory (a 4k arena) becomes empty after a shrinking GC, the memory is no longer committed to Firefox, is no longer reported as allocated for the process, and can be immediately reused by the OS. (Previously, Firefox generally held on to these arenas.)
Boris Zbarsky added a JS_GetElementIfPresent API to make things faster, specifically Array.prototype.slice on NodeLists.
IonMonkey development continues. Marty finished a new ARM backend. Chris finished function inlining. The team is still finishing on-stack replacement and other infrastructure, so IM can’t run too many benchmarks yet or do anything interesting in a browser. But I think it will start running more benchmarks soon, so we’ll get to see how it’s doing on basic performance.
JM+TI is looking good and on track for shipping with Fx9, with nice speedups pretty much across the board.
For a while, I was pretty swamped, with not enough time to do a regular team newsletter. And then I figured I’d have to go back over the past 2 months of activity to sort out everything that happened so I could write a proper giant update. But I’ve realized that’s way too hard and I’ll never get to it, so instead I’ll just do a brief update as of today:
- Terrence Cole joined the team to work on GC efforts. As his first key project, he’s been working on a patch to release pages of the JS heap that are not in use back to the OS, to reduce memory fragmentation.
- Jan de Mooij just joined the IonMonkey team as a full-time employee working out of the Netherlands. Jan, of course, was a key volunteer contributor to both the JägerMonkey and Type Inference (TI) projects.
- Speaking of TI, it’s about to go to Beta and everything looks good, in both performance and stability. Brian Hackett has been working hard to fix critical bugs and performance regressions that were discovered in Nightly and Aurora.
- Brian also did work in the JM branch to make objects and functions smaller, which will help a bit with memory usage, but mostly for performance–less data to initialize and being able to hold more objects in cache. This is waiting for incremental GC before landing to mozilla-central.
- Incremental GC is getting close–it’s planned to land by the end of 2011. GC tuning and memory reporter improvements continue.
- IonMonkey, finishing ES5 compat, finishing new debugger support, and the JS stack profiler are all ongoing.
- We are spinning up work on ECMAScript Harmony–Jason Orendorff is starting with modules. He’s doing some front-end refactoring first to ease the path.
I’ve been swamped the past few weeks, so this is a bigger catch-up newsletter. I’ll just give the big news because there’s plenty of it.
Nicolas Pierron has joined the JS team to work on IonMonkey. He has done a lot of work on the NixOS package system as well as a few different programming-language research projects. He’s starting out by teaching IonMonkey how to call C++ functions from jitcode. Welcome Nicolas!
2011 Summer Internship Program
- Andrew Drake, returning from last summer, this time implemented linear scan register allocation.
- Ryan Pearl implemented SSA-based optimistic global value numbering, an optimization that eliminates redundant computations.
- Andy Scheff implemented loop invariant code motion, an optimization that hoists expressions out of the loop that compute the same thing every iteration. He also wrote the trampolines that allow us to jump from C++ into the generated jitcode.
- Hannes Verschore added compiler support for more opcodes, including conditional branches, multiplication, and a jump table implementation of switch statements, so now IonMonkey can run programs that aren’t entirely trivial.
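To illustrate what two of those optimizations target (hand-simulated in JS here; the real transformations happen on IonMonkey’s internal representation, and these example functions are mine):

```javascript
// GVN: the second `x * y` computes the same value as the first, so the
// compiler can assign it the same value number and reuse the first result.
function beforeGVN(x, y) {
  const a = x * y;
  const b = x * y; // redundant: same value number as `a`
  return a + b;
}

// LICM: `n * n` is the same on every iteration, so the compiler can hoist
// it out of the loop, as if we had written `const sq = n * n` before it.
function beforeLICM(n, iters) {
  let sum = 0;
  for (let i = 0; i < iters; i++) {
    sum += n * n; // loop-invariant expression
  }
  return sum;
}
```

In both cases the optimized code computes the expensive expression once instead of repeatedly, without changing the result.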
Each one made a valuable contribution to the project that will (barring meteor strike) eventually ship to hundreds of millions of Firefox users and make their JS faster! Goodbye, and thanks! It was a pleasure having you here!
Project Status Updates
We’ve formed a dedicated project team for IonMonkey. David Anderson is the IonMonkey Project Leader, and the core team members are Chris Leary, Nicolas Pierron, Marty Rosenberg, and Sean Stangl. Brian Hackett will also be working alongside the team to integrate the type inference analysis with the IonMonkey compiler.
The new Debug API has landed! Congrats to Jim Blandy and Jason Orendorff. This is a major improvement to our debugging infrastructure that will make it easier to develop debugging tools for Firefox and help make those tools more reliable. The new Debug API has a well-defined protocol that can operate over a network, so it also makes remote debugging possible.
Sean Stangl has joined the JS team. He worked on JägerMonkey last year as an intern, and now he’s here full-time, initially working on IonMonkey. Welcome, Sean!
Project Status Updates
Type Inference: Brian Hackett is fixing regressions and getting ready to “land” type inference. Landing is tricky because the complexity of the project makes it very difficult to disable or back out the landing. Dave Mandelin posted a proposal in dev-planning for getting type inference out to users by creating parallel aurora and beta repositories and pointing channel users at the new builds.
IonMonkey: David Anderson landed bailouts. Current work is focused on fixing bugs turned up by the test suite.
jsdbg2 is almost there: just one more review. It is expected to land in time for Firefox 8.
Changes to SpiderMonkey that might affect you
In bug 676738, Jeff Walden split JS_GetElement and other JS_XXXElement functions into two forms: one where the index is a jsid (as before), and a new one where the index is a uint32. This goes along with the ‘aslots’ project, which will split int-named ‘element’ storage from string-named ‘property’ storage and allow dense arrays on any object.
Dave Mandelin and Chris Leary attended Black Hat. The talk Attacking Clientside JIT Compilers by Chris Rohlf and Yan Ivnitskiy gave a lot of information about JIT spraying, and analyzed Firefox specifically. The good news is that JIT spraying is pretty hard with the kind of code JägerMonkey generates. The bad news is that Firefox currently has few defenses against JIT spraying.
But Chris Leary is on the case: he’s been implementing defenses, which are now public in bug 677272. He should have a strong set of mitigations landed within a week or two.
Bug 649202: Marty Rosenberg landed ICs for typed arrays on ARM, so typed arrays are now fast on ARM too.
Bug 586297: Jacob Bramley improved the generation of branches for ARM. The bug shows an 18% perf improvement on Kraken on ARM with the methodjit only.
Bug 669132: Jacob improved ARM floating-point loads. No perf data in the bug.
Bug 664249: Nikhil Marathe changed typed arrays to hold their properties inside the JSObject slots in order to improve performance. It looks like it got a 2x improvement to property access time in JM.
We did a debrief for the JägerMonkey project a while back, and I had been meaning to write up the results as a sort of definitive history with all the lessons learned, but I never seemed to find the time. So, instead, I’ll try to write a series of blog posts on the high points, which hopefully will be easier to get started on, and also maybe more useful.
About a year before, Google had released V8, and Apple had released SquirrelFish Extreme (now called Nitro). Both engines used an untyped [1] function compiler [2] with inline caches [3]. Both of them started out with good speed, and had very good speed by the end of 2009. Opera also had very fast JS with its Carakan engine. SpiderMonkey was seriously falling behind on SunSpider and the V8 benchmarks–we knew we had to do something big to catch up.
[1] I use “untyped compiler” to mean a compiler that works on boxed any-type values; a “typed compiler” compiles code specialized to a certain set of types. I used to say “type-specializing compiler” and “basic compiler”, but I always found the former unwieldy and the latter vague.
[2] “Function compiler” means a compiler that compiles a function at a time, in contrast to the TraceMonkey trace compiler, which compiles a linear code trace at a time.
[3] Inline caching is one of the most important optimizations in a JS engine. See cdleary’s blog for much more.
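For a rough flavor of the idea, here is a toy JS model of a monomorphic inline cache (purely illustrative and my own construction: real inline caches patch generated machine code and key on hidden shapes, not this crude string approximation):

```javascript
// Toy inline cache for reading obj.x. A real engine caches the object's
// shape (hidden class) plus a precomputed slot offset directly in jitcode.
function makeGetX() {
  let cachedShape = null;
  let cachedSlot = null;
  return function getX(obj) {
    const shape = Object.keys(obj).join(","); // crude stand-in for a shape
    if (shape === cachedShape) {
      return obj[cachedSlot]; // fast path: cache hit, no lookup logic
    }
    cachedShape = shape;      // slow path: do the full lookup, fill the cache
    cachedSlot = "x";
    return obj[cachedSlot];
  };
}
```

The payoff is that at a given property-access site, objects usually keep arriving with the same shape, so after the first miss nearly every access takes the fast path.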
The obvious something big was an “untyped per-function compiler with inline caches.” So we made the decision to do it, formed an initial team consisting of me, David Anderson, and Luke Wagner, and chose the name. Our goal was performance competitive with V8, Nitro, and Opera. We gave an initial estimate, which was just a guess, of 9 months.
And we basically did it. We landed JägerMonkey to the TraceMonkey repo on August 31. Testing on that repo turned up a few more problems, but JägerMonkey went on to mozilla-central 2 weeks later. (There was also some fallout involving compartments and Firebug support that had to be sorted out around the end of the year. More about that later.) We also hit our (fuzzy) target on benchmark scores: ahead of V8 and Safari on SunSpider by a small margin (with IE ahead of us by a small margin), and a solid score on the V8 benchmarks, second only to V8, and pretty close to them until Crankshaft came out the next year. I think it’s fair to say that with a 10% error margin, we did just what we said we would way back in December.
We also got a cool poster, t-shirts, and had Firefox playing Super Mario at 58 fps.
My role was project leader. By that, I mean simply that I considered myself responsible for making sure the project as a whole was successful. That included architecture, dividing work, spotting issues, recruiting help, selecting optimizations, arranging infrastructure, and critiquing t-shirt design.
I also did some coding, although it wasn’t my primary task and I didn’t have long periods of uninterrupted time to work on big features. Toward the beginning, when the project was smaller and things were quieter, I did implement ICs for the first version of the compiler.
The next big patch of coding I did was when Luke was finishing fatvals (the new jsval format). He had designed the new C++ classes for the values and did most of the engine adaptations. But there were a bunch of bugs that needed to be fixed before landing, and fatvals was about to be the bottleneck: the new compiler depended on them. So I jumped in and fixed a bunch of small bugs in fatvals and got that landed.
Later, when the compiler was getting close, and landing it was the next big step, I jumped in there to fix a bunch of bugs and get that ready. Also during that time, I was doing daily merges from the TraceMonkey repo (which was the main repo for JS development) to JM, so that we would catch merge fallout fast and stay sync’d and ready to land.
From my project leader point of view, as needed I deployed myself as an extra developer to speed things along. In the debrief, Luke called this the “glue person” role. He said having a glue person allowed everyone else to get into the zone and do lots of design work and big blocks of coding–everyone else on the project doesn’t have to worry about random bugs and other small technical issues because the glue person picks them all up.
It’s similar to the “fix problems and remove obstacles” aspect of being a project manager, except it’s fully technical: solving whatever random coding problem that is necessary right now, but would be a distraction to everyone else. And it fits well with doing the rest of project management, because while it’s hard to do a big design project when you have to step away to do something else frequently, small bugs can be squeezed in pretty easily.
I suppose not all developers would jump at the chance to be the glue person, because it looks like grunt work, but I didn’t mind too much: it’s a great contribution to the overall success of the project, and it’s a nice favor to the rest of the team.
I’ve been meaning for a while to write a post on how to interview software engineering candidates: here it is. In my opinion, there’s really no science to interviewing: it’s an intuitive, practice-based activity. So, as Barry Schwartz suggests in his book Practical Wisdom, I’m going to try to express what I know in just a few simple guidelines on each topic. First, a couple of general remarks:
- Interviewing is important. Interviewing a candidate well is one of the most productive things you can do for your team in two hours as an engineer. So give interviewing the same attention and care as you do coding and debugging.
- Interviewing takes practice. If interviewing feels awkward, or you get nervous, or you think you have no idea what you’re doing, that’s totally normal, and fine: you will get better with practice, and also more relaxed. It may take 3, 5, 10, or more interviews to hit your stride, but you will.
Basic Interview Template. Here is a basic 5-step interview plan, which you can use directly at first, and later adapt according to the specific interview and your own style:
- Greet the candidate and help them settle in. This step is really for the candidate: a minute or two of friendly conversation before the substance begins.
- Ask about experience listed on the resume. The key here is to follow your curiosity–look for things on the resume that intrigue you and try to have an interesting conversation. Examples: “What was the project? What was your role? What did you learn? What was hard about it? How would you handle this variation of the problem?”
- Ask about interests. This is to learn about fit and motivation. Examples: “Why do you want to work here? What roles are you interested in? What projects would you like to join?”
- Ask your technical questions. Ideally, ask questions that are similar to what comes up on the job, or even have the candidate write some code on a laptop. Problem-solving ability and communication skills are as important as getting the right answer. Experiment with different questions, but eventually you will want to have a standard set so you can calibrate the responses.
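For concreteness, here is a hedged example of the kind of small, job-like coding question I have in mind (hypothetical, not one of our actual questions): remove duplicates from an array while preserving order.

```javascript
// A candidate solution worth discussing: what's the time complexity?
// What happens with NaN or distinct-but-equal objects? How would you
// solve it without Set?
function dedupe(items) {
  const seen = new Set();
  const result = [];
  for (const item of items) {
    if (!seen.has(item)) {
      seen.add(item);
      result.push(item);
    }
  }
  return result;
}
```

A question like this is small enough to finish in an interview, but the follow-ups reveal how the candidate reasons about tradeoffs, which is the part you actually want to calibrate on.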
- Invite the candidate to ask you about anything. Like the greeting, this part is really for the candidate. They deserve a chance to interview you too, to see if they really want the job.
Before the interview, prep by scanning the resume for interesting things to ask about. Also, pick out what technical questions will be appropriate for this candidate. In general, don’t find out what others thought of the candidate–go in fresh and see for yourself.
After the interview, think about how it went. Did you learn everything you wanted to about the candidate? Were your technical questions too easy, too hard, or just right? Did you spend the amount of time you wanted to on each topic?
Writeups. After the interview, write up your findings promptly, while your memory is fresh. Doing a good writeup is a lot of work: it may take over an hour at first.
I use a template based on the interview template, with four sections: summary, experience, interests, and technical. The latter three are the detailed sections, and I write them first. The summary is for busy readers and for you at debrief.
For the detailed sections, I write down what I asked about, and what I found out. I usually include a few evaluative remarks: e.g., “simple project, but creative”, “communicated the answer very concisely”.
The summary section starts with a summary of the summary: a one-sentence take on the candidate. Then it gives highlights from the details, and strengths and weaknesses. Last come general discussion and evaluation.
Evaluation. The last step is to decide whether you want to hire the candidate. This goes in the writeup summary and of course you will say it in the debrief. Choose one of these four:
- Champion, or strong recommend hire. This means you will strongly advocate and argue for the candidate at debrief. Say this if you think the candidate will become a leader or outstanding individual contributor, i.e., the candidate would be a peer to your senior team members.
- Yes, or recommend hire. This means you want to hire the candidate. Say this if you think the candidate will become an above average member of your team, i.e., the candidate would fit in as a peer on your team.
- Veto, or strong recommend no hire. This means you seriously don’t want to work with this person. Say this if the candidate is clearly unqualified or you see serious behavioral issues.
- No, or recommend no hire. This is the default answer. If you are concerned about issues that are not serious enough for a veto, say no. If you think “maybe”, or “don’t know”, say no. It’s OK to say no: you are not making the final decision right now, and you might be persuaded to say yes at the debrief. Saying no is hard, and deciding not to hire a candidate is sad. There’s no getting around that. But remember that you have a responsibility to your co-workers to hire the right people, and to the candidate not to hire them to a job where they won’t thrive.
Final thought: try to have fun. Interviewing can be fun: you get to meet people, talk about their work, and reflect on the experience. If you enjoy interviewing, it won’t seem like a chore, and you’ll find it easier to give it the mindful effort needed to do a good job. Also, if you go into the interview expecting to have fun, the candidate can pick up on that, connect with you better, and show you more of what they’re all about.