Do you want a free jQuery Foundation membership?


jQuery Foundation Logo

Mozilla is renewing our jQuery Foundation corporate membership for 2014, and giving 2 individual memberships to web developers in the Mozilla community!

If you want us to consider you for a membership:

1. Join the ‘jquery’ group on Mozillians
2. Edit your Mozillians bio to include your most impressive jQuery code or activities

By June 2014, the Mozilla Developer Relations and Webdev teams will select 2 winners based on the most outstanding contributions to the Open Web using jQuery.

Keep rocking the free web!

Enabling Communities: Iterating on the Get Involved Page



The Get Involved page on mozilla.org is one of the primary ways future community members reach out to join Mozilla and work with us to build a better Internet.

In March 2013, we turned the Get Involved page into an animated slideshow celebrating Mozilla’s 15th anniversary.

15th Anniversary slide show on the Get Involved Page


As the end of Mozilla’s 15th anniversary year approached, we wanted to begin iterating on an improved user sign-up flow and to tell a different part of the Mozilla story. We also wanted to make sure our storytelling supports open web standards, so we included a sequence of HTML5 videos showing just a few of the amazing members of our global community.

You can see the updated page now in English, French, Italian, and German, with more locales to be added very soon.

What’s next for the Get Involved page?

The team is working closely with the Community Building and Community Engagement teams to build a new version of the Get Involved page to better support Mozilla’s “Enable Communities that Have Impact” goal.

We will collaborate with the Community Builder’s Research group to better understand how and why prospective new community members find Mozilla.  We’ll also coordinate closely with the Pathways and Systems & Data groups to ensure we are doing everything we can to help potential volunteers find the opportunity they are looking for to work with Mozilla.

We need your help!

We are still in the “gather data/understand the problem” phase of this next iteration.  If you see other websites that are doing a great job recruiting volunteers or have ideas for things that the Get Involved page could do to better connect people with Mozilla (ex:  make it easier for people to find Mozilla events close to home), please let us know!

You can contribute your ideas by either:

1.  adding them to this etherpad

2.  creating dependent bugs to this tracking bug

Expect to see the next updates to the Get Involved page in Q3 2014!

Building A JavaScript Library With Tests, Mocks, and CI


I talked recently at DjangoCon about how to build better Python packages. As engineers we all use each other’s open source libraries every day, so it’s nice when 1) they work, 2) they’re well documented, and 3) they’re well tested.

For a recent project I set out to build a JavaScript library for other developers to use in their web applications. It had been a while since I made a browser library in JavaScript so I took it as an opportunity to see if I could apply some Python packaging concepts to JavaScript.

This is a brief summary of some tools and patterns I settled on. There are plenty of alternatives so if you have suggestions for improvement, please comment!

Project Tasks

With any project it’s inevitable that you’ll begin writing scripts for maintenance. I decided to go with Grunt because it keeps me in the realm of JavaScript (even though the developer needs NodeJS installed). Plus there are lots of contributed Grunt tasks to choose from. For example, to minify my library at lib/myscript.js all I had to do was install uglify-js, grunt-contrib-uglify, and add a Gruntfile.js with this:

Now I can type this to minify the code:

grunt uglify

Running Unit Tests

All libraries (even JavaScript ones!) need automated tests; otherwise you and future collaborators will find the library really hard to maintain. If you try to change code without tests, you either have to reason about every possible side effect or manually test them all. Both seem easy when you only have one or two functions, but they become exponentially harder as your library grows.

I decided to try Karma to run tests in a real web browser, mocha for my test cases, and chai for assertions. There are a few alternatives but Karma seemed like the best choice for a runner. (I found out about Yeti after I settled on Karma; I haven’t tried it.)

Karma lets you run tests from your console against one or more web browsers in parallel. It’s exactly what I wish existed back when I wrote jstestnet! Karma is really fast and seems to work pretty well. Speed is essential in testing.

To get started, you can type karma init karma.conf.js to create a config file and you can hook it up to Grunt with grunt-karma in a Gruntfile like this:

This fit well with my project since I was already using Grunt. I can now type:

grunt test

Karma will open all target web browsers (per config file), run my test suite, report results, and shut down the web browsers.

The Karma output is pretty minimal. It looks something like this in a shell:

(Screenshot of Karma’s minimal console output.)

For development I also added this command:
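The config behind it could be a second Karma target next to the first, sketched like so (`singleRun` and `autoWatch` are the relevant knobs; the target name is mine):

```javascript
// Inside grunt.initConfig({ karma: { … } }) — a config sketch:
karma: {
  run: { configFile: 'karma.conf.js', singleRun: true },
  dev: {
    configFile: 'karma.conf.js',
    singleRun: false,  // keep the browsers open…
    autoWatch: true    // …and re-run the suite when files change
  }
}
```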

With that I can type:

grunt karma:dev

This opens all target web browsers, runs the test suite, and keeps the web browsers running. As I edit files, it re-runs the tests. This seems to work okay but occasionally I’ve seen some timeout errors that go away if I restart the browsers. I’m still looking into that.

Writing Tests

Mocha is a testing library that works in NodeJS and also in web browsers. It uses the BDD (behavioral driven development) style of specifying an object or function and declaring how it should behave. Here is an example from my library that covers some error handling:

As you can see it’s pretty easy to read the code and see what is being tested. Both Mocha and Jasmine have Karma adapters and there are probably other adapters too if you want to use another test library.

Testing With Mock Objects

A common pattern in testing is to mock out objects that are used by your system but that you don’t need to test. Sinon is designed exactly for this and especially for testing in the web browser. My library has a thin API layer around XMLHttpRequest but I wanted to mock that out while testing the API layer. Here is an example of making sure it gets an error callback for 500 responses:

This is nice because I can run my tests without the code touching a real API server. I’m using Sinon’s Fake Server here.

It’s easy to go overboard with mocks once you discover their power. My word of caution is that any time you mock out an object you are deciding not to test something. Make sure that’s the right decision, and make sure it gives you a real benefit, like a speedup. It’s usually a good idea to mock out HTTP connections, since otherwise your tests depend on the Internet.

Continuous Integration

Testing is most effective when you run your tests after every code commit. There is a free service for open source projects called TravisCI which supports running browser tests with Karma just fine. You can run all your tests in a headless browser like PhantomJS with a task like this:
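That task could be another Karma target that forces a headless browser (a sketch; the `ci` target name is mine):

```javascript
// Inside the Gruntfile's karma config — a sketch:
karma: {
  ci: {
    configFile: 'karma.conf.js',
    singleRun: true,
    browsers: ['PhantomJS']  // needs karma-phantomjs-launcher installed
  }
}
```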

Add the grunt command to a .travis.yml file to hook it up:
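A minimal .travis.yml along those lines (the Node version is an assumption):

```yaml
language: node_js
node_js:
  - "0.10"
before_script:
  - npm install -g grunt-cli
script:
  - grunt test
```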

However, my specific library will benefit most from running its tests in a real Firefox and TravisCI supports web browsers just fine so, hey, why not. All I had to do was add this to my yaml file:
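The addition amounts to starting a display server so Firefox can run; this is the standard TravisCI recipe for browser tests, not necessarily the article’s exact lines:

```yaml
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - npm install -g grunt-cli
```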

Firefox is pre-installed on TravisCI so I didn’t even need to declare it.

Checking For Syntax Errors

JavaScript is a quirky language (to put it nicely) so I always make sure to run the code through something like JS Hint to catch undeclared variables and other problems. You can add syntax checking easily to grunt like this using grunt-contrib-jshint:
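The Grunt config for that might be sketched as follows (the file globs are assumptions based on the description of what gets checked):

```javascript
// Inside grunt.initConfig, with grunt.loadNpmTasks('grunt-contrib-jshint'):
jshint: {
  options: { jshintrc: '.jshintrc' },  // shared lint options
  files: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js']
}
```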

This lets me type grunt jshint to check for syntax errors in all lib and test files as well as in my Gruntfile.js. I use a .jshintrc file to set common options that I like.

My continuous integration script actually runs JS Hint before running unit tests. I chain all these together like this at the bottom of my Gruntfile:

grunt.registerTask('test', ['jshint', 'karma:run']);

Now when I run grunt test the jshint and karma:run tasks are executed.


Documentation

All packages need documentation! I like to start with realistic scenarios to illustrate usage of the library. After that I make sure to fully document all public APIs. I tend to just put docs in a GitHub README and then switch to Sphinx and Read The Docs later when the docs are too big for a single README. Sphinx doesn’t support JavaScript as well as Python but it’s still probably the best tool out there for managing interlinked docs.


Packaging

To make a library useful to others you may wish to package it up. To be honest I haven’t done this to my library yet, but I was considering bower and/or NodeJS packaging.

That’s It

Since I left out a lot of details, you may want to check the actual code for working examples.

A new update experience for Australis: our process and design principles

Holly Habstritt Gaal


We’re excited to release a new onboarding experience for users updating to the new Firefox (Australis). We’re not just introducing a new design in Firefox 29, but a new way for the web and chrome to interact with each other in our onboarding experience. This allows us to show users what is new in Firefox, not just tell them, and to educate them about the browser. With this new interaction we avoid relying on passive viewing of a web page and create a memorable experience that is immersed in the browser. To learn more, see the following blog posts that describe our design principles and process.

5 questions to ask during your design process

Learn how our initial assumptions about educating users became a project dedicated to creating an Update experience for Firefox 29 in 5 questions to ask during your design process.

Introducing the Update Experience for Australis

Learn about the key design principles that stemmed from our research and testing in Introducing the Update Experience for Australis.


Onboarding Tour UI


Cross-team collaboration at Mozilla has been key to creating this experience. The collaboration spans teams such as release engineering, web development, marketing, visual design, UX, user advocacy & support, metrics, and others. Thanks to everyone involved for their hard work, and stay tuned as we continue to iterate and improve this experience in preparation for the Firefox 29 general release.

We love to discuss all things onboarding. If you have any questions, please reach out to Holly Habstritt Gaal and Zhenshuo Fang.

DXR Gets A Huge UI Refresh

Erik Rose


After months of hard work by talented Mozillians, both paid and volunteer, DXR’s UI overhaul branch has hit production! With more efficient workflows, support for multiple trees, and improvements in discoverability, it makes searching Mozilla’s codebases more fun and takes a big step toward the retirement of legacy tools.

What’s new?

Improved flow. The old design forced you into a choice upfront about whether you’d be browsing or searching. That splash screen is gone, replaced with useful information: the top level of the source tree.

The front page of DXR, showing both the search pane and a top-level listing of the folders in mozilla-central

If you want to browse, browse; search, search. In addition, multi-tree support is polished and proven, and new trees are coming soon. Finally, breadcrumbs are integrated more smoothly into the workflow; it will soon be a one-click matter to limit search to the folder you’re browsing.

Filters upfront. We now expose and document all 27 search filters in a ubiquitous drop-down menu.

DXR's 27 filters, arrayed in the Filters menu

Previously, we showed only about 6, and even those were available only via the Advanced Search panel, which didn’t appear until you had already entered a search and hit Return—search-as-you-type didn’t cut it. Take a look: DXR knows some tricks you weren’t aware of.

Real parsing. There’s an honest-to-goodness query parser now! You can express quotation marks without doing backflips, and the quoting behavior of regexes is unified with that of the other filters. Any filter’s argument can be quoted with either single or double quotes, and, in case you need both at once, they can be backslash-escaped. For example…

  • A phrase with a space: "big, small"
  • Quotes in a plain text search, taken as literals since they’re not leading: id="whatShouldIDoContent"
  • Double quotes inside single quotes, as a filter argument: regexp:'"wh(at|y)'
  • Backslash escaping: "I don't \"believe\" in fairies."

Furthermore, we have plans to simplify the selection of filters. You’ve said you don’t care, most of the time, what kind of identifier you’re looking for; identifier names are pretty unique. Thus, we plan generic “id” and “ref” filters to save your brain cycles. We’ll also reduce redundancy and make some things shorter and more memorable. See our sketch on the wiki, and don’t hesitate to scribble your feedback on it.

Better URL handling. The URL is updated in place as you search, so all your searches are bookmarkable and shareable, whether you hit Return or not.

Even more. The case-sensitive checkbox has an accesskey. The search field no longer autofocuses, making it easier to use arrow-key scrolling or type-to-select. Table rows present easier click targets. Infinite scrolling is more anticipatory. The JS has been completely rewritten. And everything looks prettier, to boot.

DXR viewing a syntax-highlit file, with a context menu open offering to jump to a definition of AtkAttribute

Thanks to everyone who has contributed their feedback and expertise to this release, not only to the UI but also to the back-end improvements that went into production while it was cooking. Special recognition goes to Schalk Neethling for his front-end magic; Nick Cameron, who has been making things better all over the stack; and James Abbatiello, who keeps adding new filters and chasing down analysis corner cases. It’s a fun project to hack on, with something for everybody. Join us!

More is always to come. Known issues are here. File anything else you see.

Happy searching!

DXR gets more correct, less case-sensitive

Erik Rose


DXR, Mozilla’s evolving codebase search engine, has been taking patches at a furious rate these last two months. A great deal of work has gone into a UI refit, still in progress, which will improve discoverability, consistency, and power. Meanwhile, we have kept pushing more immediately enjoyable enhancements into production.

Cleaning Out The Pipes

One of these is a complete rewrite of our HTML generation pipeline. DXR pulls metadata about code from a number of disparate sources: syntax coloring from Pygments, locations of functions and macros from clang, links to Bugzilla bugs from a glorified regex. It then encodes those as begin-end pairs of text offsets, which it stitches together to make the final markup. However, the stitching was previously handled by a teetering state machine, stuffed into a single monolithic function with zero test coverage, replete with terrible mystery. As it turned out, it had been generating grossly invalid markup for some time. Fortunately, modern browsers are equally replete with terrible mystery and managed to make some semblance of sense out of things like </a></a></a>.

But now that’s gone away. The rewrite brings…

  • Correct markup
  • Support for line-spanning regions, as for multi-line comments or strings
  • Support for Windows line-endings (of which we did have a few in mozilla-central)
  • Full test coverage
  • And, perhaps most importantly in the long term, it modernizes our plugin contract by supporting annotation regions which overlap. This lets us enjoy truly decoupled plugins which no longer have to care if they’re used with others that emit overlapping regions. We can add plugins that support more languages and more types of analysis without having to worry about whether they’ll play nicely with the existing ecosystem. It also makes development of plugins outside the DXR codebase more practical.

    Other Improvements

    Other user-visible improvements include…

  • Case-insensitive searching for plain text. This is now the default.
  • Exposing values of constants using tooltips
  • Results now show in alphabetical order by path rather than in random order, so you can rule out entire directory trees more easily.
  • Searching for Layers.cpp:45 takes you straight to that line of the file.
  • Lexing .h files as C++ rather than C means we now highlight all those pesky C++ keywords.
  • We now syntax-color preprocessor directives in JS.
  • We’ve introduced override and overridden queries.
  • No more “l” in line-number anchors means no more mistaking them for “1”.
  • Fixed an off-by-one in line annotation position.
  • No longer consider uninitialized struct or class members to be var refs.
  • Support non-UTF-8 encodings of source files.
  • Distinguish identically named functions in different anonymous namespaces.
  • Thanks to James Abbatiello for lots of analysis improvements, Nick Cameron for the handy line-number search and syntax coloring, jonasac for several great fixes, and Schalk Neethling for a huge amount of work toward getting the UI refit out the door. If you’d like to join the DXR hacking community, we’ve got a nice ramp-up paved out for you and some easy bugs tagged.

    New UI Teasers

    As for the upcoming UI refit, there’s plenty in store:

  • A natural integration of the now fairly disjoint browse and search modes
  • Easy discoverability of all 26-or-so search filters: no more figuring them out through hearsay or by spelunking through the code
  • No more unpredictability of interface elements like the Advanced Search panel
  • First-class support for multiple trees, to be followed by more actual trees
  • A real query parser. You can express quotation marks without resorting to regexes, and you can use quoted strings as arguments to filters.
  • No more astonishing, disappearing splash page
  • Check out our mockups and our sometimes-broken staging site, and do keep the feedback coming. All of the above work was motivated by the comments you’ve already given us.

    Happy hacking!

    November/December Accomplishments of the WebProd Team

    Chris More

    Wow. It has been a really busy second half of Q4 for the Web Productions team! I wanted to share some recent accomplishments from the team and what we are up to next.

    Recent Key Accomplishments

    Snapshot of Upcoming Projects

    Want More?

    Enjoy the rest of 2013 and see you all in 2014!

    Improving user experience for people all over the world

    Kohei Yoshino

    Every day, thousands of people visit mozilla.org from across the globe. As the hub site of the global Mozilla community, it has a variety of content including the Firefox download page, which has been localized into 80+ languages — like Firefox itself — thanks to the tremendous contributions of volunteer localizers.

    I can remember mozilla.org at an early age — a developer-oriented, documentation-centric geeky site. Some community members were translating those documents into their languages (I myself translated hundreds of docs to Japanese) but their work was done outside of the official site. As time passed, mozilla.org has become one of the most popular multilingual consumer sites on the Internet. The site is still under active development, though. With the increasing number of translations, how can visitors from around the world make the most of this content? In this article I’ll briefly explain some great new features I have contributed over the past few months that may lead to a better user experience for people who speak different languages.

    Language Switcher

    mozilla.org is migrating from the original PHP-based site to the new robust Python-based platform called Bedrock. The legacy site had a language switcher at the bottom of each page, but there was a problem: the switcher showed all supported languages regardless of whether the current page was actually localized. When a user chose Français from the list but the page wasn’t translated into French, they were taken back to the original English page. That behavior confused them because the page just reloaded without changing anything.

    The new language switcher on Bedrock only shows the languages each page is actually localized into, and works as expected. The number of languages will continue to increase as localizers add new translations. (Bug 773371)

    We believe we can improve the language switcher even further. A simple dropdown menu is accessible but difficult to use when the list becomes too long. Also, my recent research showed that the language-switching experience varied among the Mozilla properties. It’s obvious we need a better solution, like the Tabzilla universal site navigation widget. (Bug 919869)

    Search Engine Optimization

    The Language Switcher is not a collection of links but rather a dropdown menu using <select>, so it cannot tell search engines about our localization. Googlebot and others are smart enough to mechanically submit such a simple form, but they definitely prefer a better way to crawl.

    We have implemented alternate URLs to solve the issue with just a little more HTML. Visit the home page and hit Ctrl+U (or Cmd+U) to view the source and find a list of <link rel="alternate" hreflang="x"> in the source. Search engines will recognize the list to show a localized page, if available, in their search results based on the searcher’s language. (Bug 481550)
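    Such a list looks roughly like this (the paths are illustrative, not the page’s actual markup):

```html
<link rel="alternate" hreflang="fr" href="/fr/firefox/new/" />
<link rel="alternate" hreflang="de" href="/de/firefox/new/" />
<link rel="alternate" hreflang="ja" href="/ja/firefox/new/" />
```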

    We’ll also soon serve comprehensive XML sitemaps with the alternate URLs as part of our SEO efforts. (Bug 906882)

    Translation Bar

    This is the latest cool addition to mozilla.org. You might have seen similar functionality if you have installed Google Toolbar or use the lovely Chrome browser. It may ask you if you’d like to translate a foreign-language page into your language with Google Translate. While it’s useful in general, the quality of the translation largely depends on the language. For example, Japanese, my mother tongue, is one of the most difficult languages for machine translation. Here at mozilla.org, we can enjoy the pages manually translated by native localizers, so why not offer our visitors the nicely localized page? That was the motivation for the new Translation Bar.

    The implementation was straightforward. As described above, we already have the alternate, localized URLs in the page source. A script compares the browser locale (navigator.language) against the list, then shows the bar if a translation is available in that language. If the user selects Yes, please, they will be promptly redirected to the localized page. If the user selects No, thanks, sessionStorage will remember the preference and the bar will stay hidden for the rest of the browsing session.
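    The decision logic can be sketched in a few lines (function and key names are assumptions, not the production script):

```javascript
// Sketch of the Translation Bar decision described above.
function translationTarget(pageLang, browserLang, alternates, storage) {
  // "No, thanks" was chosen earlier in this session.
  if (storage.getItem('translation-bar-hidden')) return null;
  // Already reading in the browser's language: nothing to offer.
  if (browserLang === pageLang) return null;
  // Offer the localized URL if one exists for this locale.
  return alternates[browserLang] || null;
}

// Selecting "No, thanks" records the preference (sessionStorage in the
// browser; any object with getItem/setItem works here).
function dismissTranslationBar(storage) {
  storage.setItem('translation-bar-hidden', '1');
}
```

In the browser, `storage` would be `window.sessionStorage`, `browserLang` comes from `navigator.language`, and `alternates` is built from the `<link rel="alternate" hreflang>` list.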

    Of course, the labels on the bar are also localized. Visit a localized page to give it a try! The Translation Bar has just been deployed on mozilla.org, and other Mozilla sites may adopt it soon. (Bug 906943)

    Beyond Translation

    As a Japanese Web developer and localizer, I do know localization is not just translation. Each language and country has its own culture, customs, and perspectives. Under the Mozilla mission, the Web Production team is working hard to deliver a great experience for everyone. The ongoing challenges include localized news and promotions on the home page, better fonts for multibyte characters, layout improvements for RTL languages, and more. I’m very glad to help the team.

    Mozilla is a lively, global, successful open-source community, and mozilla.org is not merely a corporate site. You can contribute in many ways, like me. Do you speak a language other than English? Be one of the awesome localizers! Did you find any bugs on mozilla.org, or do you have any feedback on how to improve the site? Let us know via Bugzilla! Are you a Web developer with knowledge of HTML, CSS, JavaScript, or Python? Fork the GitHub repository, browse the bugs and send us pull requests!

    Tracking Deploys in Git Log

    Mike Cooper

    Knowing what is going on with git and many environments can be hard. In particular, it can be hard to know where the server environments are in the git history, and how the rest of the world relates to that. I’ve set up a couple of interlocking gears of tooling that help me know what’s going on.


    One thing that I love about GitHub is its network view, which gives a nice high-level overview of branches and forks in a project. One thing I don’t like about it is that it only shows what is on GitHub, and it is a bit light on details. So I did some hunting, and I found a set of git options that does a pretty good job of replicating GitHub’s network view.

    $ git log --graph --all --decorate

    I have this aliased to git net. Let’s break it down:

    • git log – This shows the history of commits.
    • --graph – This adds lines between commits showing merging, branching, and
      all the rest of the non-linearity git allows in history.
    • --all – This shows all refs in your repo, instead of only your current branch.
    • --decorate – This shows the name of each ref next to each commit, like
      “origin/master” or “upstream/master”.
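    Setting up that alias is one line of configuration:

```shell
# Alias `git net` to the graph view described above.
git config --global alias.net 'log --graph --all --decorate'
```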

    This isn’t that novel, but it is really nice. I often get asked what tool I’m using for this when I pull this up where other people can see it.

    Cron Jobs

    Having all the extra detail in my view of git’s history is nice, but it doesn’t help if I can only see what is on my laptop. I generally know what I’ve committed (on a good day), so the real goal here is to see what is in all of my remotes.

    In practice, I only have this done for my main day-job project, so the update script is specific to that project. It could be expanded to all my git repos, but I haven’t done that. To pull this off, I have this line in my crontab:

    */10 * * * * python2 /home/mythmon/src/kitsune/scripts/

    I’ll get to the details of this script in the next section, but the important part is that it runs git fetch --all for the repo in question. To run this from a cronjob, I had to switch all my remotes to using the https protocol for git instead of ssh, since my SSH keys aren’t unlocked. Git knows the passwords to my http remotes thanks to its gnome-keychain integration, so this all works without user interaction.

    This has the result of keeping git up to date on what refs exist in the world. I have my teammate’s repos as remotes, as well as our central master. This makes it easier for me to see what is going on in the world.

    Deployment Refs

    The last bit of information I wanted to see in my local network view is the state of deployment on our servers. We have three environments that run our code, and knowing what I’m about to deploy is really useful. If you look in the screenshot above, you’ll notice a couple of refs that are likely unfamiliar: deployed/stage and deployed/prod, in green. This is the second part of the script I mentioned above.

    As a part of the SUMO deploy process, we put a file on the server that contains the current git sha. This script reads that file and makes local refs in my git repo that correspond to each environment.

    Aside: What’s a git ref?

    A git ref is anything that has a commit sha. So master is a ref. So
    are any other branches you create. Git also tracks remote content in
    the same way, in refs under refs/remotes.

    In short, a git ref is a generalization of tags and branches, both
    remote and local. It is how git keeps track of things with names, and
    it is what is written on the graph when --decorate is
    passed to log.

    Wait, the script creates git refs from thin air? Yeah. This is a cool trick my friend Jordan Evans taught me about git. Since git’s references are just files on the filesystem, you can make new ones easily. For example, in any git repo, the file .git/refs/heads/master contains a commit sha, which is how git knows where your master branch is. You could make new refs by editing these files manually, creating files and overwriting them to manipulate git. That’s a little messy, though. Instead we should use git’s tools to do this.

    Git provides git update-ref to manipulate refs. For example, to make my deployment refs, I run something like git update-ref refs/heads/deployed/prod 895e1e5ae. The last argument can be any sort of commit reference, including HEAD or branch names. If the ref doesn’t exist, it will be created, and if you want to delete a ref, you can add -d. Cool stuff.
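    In a throwaway repo the whole trick is easy to see; this sketch fakes the deployed sha locally, whereas the real script reads it from the server’s status file:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# An empty commit to have something to point at (identity set inline).
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'initial commit'
sha=$(git rev-parse HEAD)             # stand-in for the deployed sha
git update-ref refs/heads/deployed/prod "$sha"
git log --oneline --decorate -1 refs/heads/deployed/prod
```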

    All Together Now

    Now, finally, the entire script. Here I am using a git helper that I wrote but have omitted for space. It works how you would expect, translating git.log('some-branch', all=True) to git log --all some-branch. I made a gist of it for the curious.

    The basic strategy is to fetch all remotes, then add/update the refs for the various server environments using git update-ref. This is run on a cron every few minutes, and it makes knowing what is going on a little easier, and git in a distributed team a little nicer.

    That’s It

    The general idea is really easy:

    1. Fetch remotes often.
    2. Write down deployment shas.
    3. Actually look at it all.

    The fact that it requires a little bit of cleverness, and a bit of git magic along the way, means it took some time to figure out. I think it was well worth it though.

    Originally from

    One C++ Tokenizer Too Many: A DXR Story

    Erik Rose

    When your codebase is 2GB, grep doesn’t cut it anymore. It’s slow, and, in such a large corpus, many attempts to find a symbol get drowned out by false positives. Even modern IDEs begin to choke under the load. This is the domain of DXR, Mozilla’s tool for doing structured queries, free-text searches, and even trigram-accelerated regex matching on large projects like Firefox.

    Of course, it’s a software engineering truism that providing speed at a moment’s notice exacts a price in pre-computation, and DXR is no exception. Every night, we run the entire mozilla-central codebase through the clang compiler, injecting a custom plugin which sees what the compiler sees and writes it all down in a database that can dish out fast answers later.

    Except when things go awry.

    During the Mozilla Summit, DXR had a conveniently timed series of failed indexing runs. A bit of digging revealed that, while the mozilla-central compilation was going off without a hitch, a run of the source through our custom C++ tokenizer was exploding in a later phase.

    Wait—custom C++ tokenizer?!

    This worn but dutiful little fossil harkens back to DXR’s pre-clang days. In the early Cretaceous, when gcc ruled the earth, we didn’t have an easy framework for compiler plugins; we had to get by on the clever application of heuristics. But, as the millennia wore on and the clang ecosystem evolved, the uses of the custom tokenizer eroded, until its only remaining purpose was to find #include directives so we could guess where they pointed—which we got wrong half the time anyway. It was time to toss that strategy in a tarpit.

    And so, after a little compiler plugin tinkering, I’m pleased to announce that DXR now resolves all includes simply by lifting the correct answer out of clang. Before, we would often throw up our hands when including a file without a totally unique name (which happened a lot). Now, with only a few exceptions for weird macro corner cases, we successfully link all non-generated, tree-dwelling includes. And, of course, we lay the maintenance burden of tokenizing C++ squarely on the compiler’s shoulders, where it belongs.

    Want to join us in hacking on compiler plugins, with a generous dollop of Python back-end code? Pitch in at