Mozilla Webdev “Beer and Tell”, April 26, 2013

Jennifer Fong

So what’s a Beer and Tell?

A Beer and Tell is an event web developers at Mozilla hold every third Friday of the month. We share emotional and uplifting stories about projects we’re hacking on to the group. Usually open source projects. Likely with a reasonable license. Today your Beer and Tell post is written by Edna Piranha of Noodle Industries fame.

Last Friday, we experienced the following dramatic and exciting worlds:

  • Michael Kelly’s text-based adventures in JavaScript!
  • Absent Erik Rose’s clickable traceback stack frames in Python!
  • James Long’s scalable cloth simulation in LLJS/asm.js!
  • Chris Van’s “because my original project name was taken on Github” project GltHub!
  • Edna Piranha’s (that’s me!) interactive fiction game engine thing!
  • Matt Basta’s graphicsBC!

Text-based adventures in JavaScript

Reporting from hills surrounding a Martian colony, Michael Kelly shares his latest project – an ode to the time when we were young children (or possibly still unborn) – text-based adventure games in JavaScript. You can play a demo, as long as you promise to avoid viewing the source and cheating.

Text adventures in JavaScript


Clickable traceback stack frames in Python

While we weren’t looking, Erik Rose’s body was replaced by Mike Cooper and he presented in Erik’s voice about nose-progressive. You can click your stack frames to open your editor – view a demo or call your local representative!

Scalable cloth simulation in LLJS/asm.js

Like a hawk from the night sky, James Long swooshes into the Mountain View office through a wormhole and opens his high-performance laptop. He presses the space bar and an open source browser opens with a cloth simulation. He plays Gary Wright’s “Dream Weaver” while we are slowly possessed by the silky smooth cloth simulation. You, too, can be possessed by the dance of the cloth.

cloth simulation


GltHub (a.k.a. because my original project name was taken)

Chris Van takes us on a spaceship to Jupiter and tells us about GltHub, a site that reminds you of the guilt you should feel for not closing bugs fast enough. If you want to contribute more guilt, check out the rocket engine that made this.

GltHub


General Store

Edna Piranha (wait, I’m talking about myself in the third person – weird) presents an interactive fiction game engine called General Store that causes post-post-post modern, post-deconstructed encounters with the fourth kind – absurdist fiction. Also known as Lovecraft meets Kafka. Do not attempt to adjust your television, just click around and breathe slowly.

General Store


graphicsBC

We finally land back on planet Earth and sit comfortably in a circle around a campfire. With a flashlight under his chin pointing upwards and him staring straight at us, Matt Basta talks about graphicsBC, a microscript for code golfers and hobbyists to create graphics. Suddenly, Tofumatt shows up in a flying motorcycle and throws his helmet into the fire.

And that’s it for now! See you again next month.

Kanban for MDN development

groovecoder


MDN Kanbanery

MDN workflow was a formless void

When I first joined Mozilla there was no workflow for MDN; in fact, there were no full-time developers for MDN! To push my first bug fix to the site, I had to negotiate a mess of MDN bugs, wiki pages, and IT bugs, strung together with email. It took a couple of weeks to get a simple fix into production, and worst of all, I had to interrupt at least 3 other people’s regular workflows.

Let there be Scrum

MDN Sprint Burndown

I had done Scrum for a few years at SourceForge, so I naturally started to work some “scrummy” things into MDN – first I added some agile/scrum features to the Bugzilla JS addon. Then we got another couple of developers on the team and adopted more agile practices in our MDN workflow: user stories, standups, retrospectives, planning poker, and sprints on top of Bugzilla with Paul McLanahan’s fantastic ScrumBugs tool. We used it to run our sprints for over a year, including the migration of MDN from MindTouch to Django. Sprints helped us prioritize, plan, and commit to releasing batches of bug fixes, enhancements, and features at regular intervals.

Continuous Deployment

MDN Staging Chief

On our new Django platform, James and Jake hooked chief up to give us continuous deployment to MDN; our “regular intervals” went from weeks to hours. The flexibility and adaptability we get from continuous deployment are awesome. Pushing frequently caused a series of effects on our workflow, though:

  1. We naturally prioritized things by hours or days. So, during our sprints, we would
  2. remove and add items between planning meetings. This had knock-on effects like
  3. making it hard to see what we’re actually working on between meetings, and
  4. making our sprint-based estimates totally inaccurate.

A ban came; its name was Kan

Around this time, I read Kanban by David Anderson and some of Kanban and Scrum. A couple of things stood out to me:

The Kanban features I most liked were lead time – i.e., the clock time between when a request is made and when it is delivered – and cycle time – i.e., the clock time between when a cycle of work begins and when it ends. These clock times seem more valuable for planning and prioritizing than fuzzy estimations. Because really – who doesn’t like data and numbers, right? So, of course, I wrote code – kanbugs – to help visualize our MDN workflow and calculate our lead and cycle times.

Kanbugs

Kanbugs used data from our Bugzilla product and our GitHub pull requests to visualize our workflow. In general, the workflow was:

  1. We select a bug and add ‘p=<estimate>’ to the whiteboard
  2. We implement a bug and submit a GitHub pull request with “fix bug <id>” in the commit message
  3. We review a bug and merge it to master, which automatically marks the bug ‘RESOLVED:FIXED’ in bugzilla
  4. WebQA tests the bug on the MDN dev integration server
  5. We release the code to production, and WebQA marks the bug “VERIFIED:FIXED” in bugzilla

By visualizing the workflow and calculating both lead and cycle times we learned a couple interesting things. Most prominently, of our 39-day lead-time, 19 days were spent in triage and 14 days were spent waiting for testing! 85% of our lead-time was happening outside the development team. This kind of data helps us more effectively improve our workflow. E.g., rather than improve our estimates, we worked with WebQA to change from mandatory to as-needed exploratory testing.
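As a sketch of the arithmetic involved (the field names and dates below are illustrative, not kanbugs’ actual schema): lead time runs from when a bug is filed to when it is verified in production, while cycle time runs from when development starts to when the fix ships.

```python
from datetime import datetime

FMT = '%Y-%m-%dT%H:%M:%S'

def days_between(start, end):
    """Whole clock days between two ISO-8601 timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).days

def lead_and_cycle(bug):
    """Return (lead, cycle) in days for one bug.

    Field names are hypothetical; kanbugs pulled the real dates out of
    Bugzilla and GitHub pull requests.
    """
    lead = days_between(bug['filed'], bug['verified'])         # request -> delivered
    cycle = days_between(bug['dev_started'], bug['released'])  # work begins -> ends
    return lead, cycle

bug = {
    'filed': '2013-01-01T09:00:00',        # bug reported (lead clock starts)
    'dev_started': '2013-01-20T09:00:00',  # pulled into development after triage
    'released': '2013-01-26T09:00:00',     # pushed to production
    'verified': '2013-02-09T09:00:00',     # WebQA marks VERIFIED:FIXED
}
lead, cycle = lead_and_cycle(bug)  # -> (39, 6)
```

With dates like these, most of the 39-day lead time sits outside the 6-day development cycle – exactly the kind of gap the visualization made obvious.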

Kanbugs was good for passive, read-only visualization and calculation, but (like Scrumbu.gs before it) it wasn’t made for active management. It lacks:

  • Quick visual scanning for action items – who’s blocked? who’s too busy? what should I work on next?
  • Work-in-Progress limits – Kanban enforces small batch sizes by setting limits on how many things can be on the board at a time

Kanbanery

Kanbanery provides these features. So, a couple of months ago, we started an MDN board on Kanbanery. Kanbanery’s visualization is great. On a typical morning, I check email and then:

  1. quickly scan the board for any cards with green check-marks that I can pull to the next step of development,
  2. quickly scan the work-in-progress limits to see if I can help un-block cards that are piling up at a certain step, and
  3. quickly scan the board for my cards to resume my main dev task(s).

Of course, now that we’ve used Kanbanery for a couple months, we’ve hit drawbacks:

  • Yet Another Tracking Tool – we already have to keep up with email, bugzilla, and github. Scrumbugs had the same drawback, but Scrumbugs doesn’t micro-track through the sprint, and it automatically marks bugs fixed using bugzilla data.
  • How to incorporate contributors – we don’t want to make contributors use Yet Another Tracking Tool, so we don’t include them in the board. That means the board doesn’t show contributors’ work, which can be considerable.
  • General process churn – we went from no process through multiple forms of agile and scrum practices to Kanban in the space of about 18 months. I really like to continuously improve dev practices, but even I’m getting tired of changing things.

What’s next – maybe Kanbuggery?

Rather than change things up again, what I’d really like to do is automate Kanbanery – to update the board based on bugzilla and github actions, like we had done with Kanbugs. Then Kanbanery could give us its great visual features without needing active care and attention. Kanbanery has what looks like a fully-featured API, but I haven’t had time to explore it. Has anyone else?
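The glue would look something like this sketch: a post-receive hook that pulls bug numbers out of commit messages using the “fix bug &lt;id&gt;” convention from the workflow above, then moves the matching card. The Kanbanery call itself is stubbed out, since I haven’t explored its API, and the hook/column names are made up.

```python
import re

# Matches the "fix bug <id>" convention from our commit messages.
BUG_RE = re.compile(r'\bfix(?:es|ed)?\s+bug\s+(\d+)', re.IGNORECASE)

def bug_ids(commit_message):
    """Extract Bugzilla bug ids referenced by a commit message."""
    return [int(m) for m in BUG_RE.findall(commit_message)]

def move_card(bug_id, to_column):
    """Stub: a real version would call the Kanbanery API here."""
    print('bug %d -> %s' % (bug_id, to_column))

def on_push(commits):
    """Hypothetical post-receive hook: advance each fixed bug's card."""
    for commit in commits:
        for bug_id in bug_ids(commit['message']):
            move_card(bug_id, to_column='Ready for QA')

on_push([{'message': 'Fix bug 123456 - stop double-escaping titles'}])
```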

How are other Mozilla, open-source, or website teams using Kanban for development?

Firefox Marketplace: April 5th – April 18th

Andy McKay

This is a regular post focusing on the status of the Firefox Marketplace.

  • Total bugs open: 485
  • Total bugs opened last two weeks: 150
  • Total bugs closed last week: 110

The API documentation has moved and is now separate from the rest of the marketplace code. The marketplace has adopted the status code 451 to describe an app we can’t display.

The migration over to the Add-ons site is now complete. Go check out themes.

Some specific changes of note:

  • OAuth has been turned on for all internal services between marketplace, solitude and webpay (858813)
  • Changes to the receipt specification to allow different kinds of receipts (858610)
  • Three legged OAuth has landed for the API (827948)
  • API submission is now throttled, but you can apply for access to submit more apps (848869)
  • Did we mention themes are migrated, oh yeah (858276)
  • Fireplace now has a featured page (860410)
  • Receipt verifying now checks the verifying service URL (770666) and type of receipt
  • Fireplace now has abuse pages (857685)
  • Unit tests galore (851582) and more for Fireplace
  • Apps filtered on adult and child flags (852567)
  • PIN user interface awesomeness (842861)
  • Charts, charts and more charts using monolith (843046)
  • Where in the world is Carmen Sandiego? I don’t know but our new geo location might tell you (851192)

And finally… odd add-on of the day whimsy.

Firefox Marketplace: March 8th – April 3rd

Andy McKay


This is a weekly post focusing on the status of the Firefox Marketplace. Because of holidays, PyCon and then more holidays, this post is a little delayed and covers a few weeks.

  • Total bugs open: 464
  • Total bugs opened last two weeks: 232
  • Total bugs closed last two weeks: 233

Most of the changes revolve around the API. There are endpoints added for the homepage, login, ratings and more. For more information on the Marketplace API, please check out the documentation.

The primary consumer of the API right now is Fireplace, the packaged app for the Marketplace. There have been a lot of changes in Fireplace, including search results, ratings and more.

We are currently migrating the existing Get Personas site over to the Add-ons site. For more information read our blog post.

Better know a WebDev: Wil Clouser

Andy McKay

Featuring Wil Clouser.


What do you do at Mozilla?

I’m the Engineering Manager for the Firefox Marketplace and Add-ons. I spend most of my day trying to remove roadblocks for developers and coordinating with other teams to make sure there is a steady stream of work. The spice must flow!

Other parts of my job are day-to-day team operations (HR questions, career development, etc.), spec reviews, architectural design input, any shady negotiations for project priority, and crisis management in the rare case when something goes pear shaped.

Any fun side projects you’re working on?

I’m a hobbyist photographer these days and have started experimenting with writing short (fiction) stories to go along with the pictures. Something that is still in its infancy for sure.

How did you get started in web development or programming?

I took a C class in high school and thought it was fun, then I suffered through Java in college and started doing PHP work part time for the university. I flipped to Python a few years ago and have enjoyed it, although my coding these days is mostly minor tweaks to other people’s code.

How did you get involved with Mozilla?

Mike Morgan and I worked in the same place while I was in college. He was doing volunteer work with Mozilla and it looked like he was having a good time so I started volunteering too. At that time we were maintaining Add-ons which was a mix of hard-coded add-ons and smarty templates. It wasn’t pretty.

What’s a funny fail or mistake story you can share?

We used to maintain mozilla.com in SVN as a bunch of static files in a hierarchy. I may have gotten a little carried away with the flat files – we had about a dozen files we duplicated for every version of Firefox (at the time we had 4 levels, e.g. version 2.0.0.1) and for every locale (about 60 locales). That, combined with the locale-specific CSS and images, added up to a ton of files on disk, and SVN would soak up all the disk I/O trying to do an update or a commit.

Anyway, when we launched Firefox 3 we were aiming for the Guinness World Record and we put on a big show of it to get lots of downloads including having people talking live on Air Mozilla about how awesome the new browser was. Due to the high traffic everything was responding slower than usual, but we were in decent shape, except that I had to push one last thing to SVN and I did an `svn up` first and it was just sitting there thrashing and – SVN doesn’t have any output when it’s parsing through your directory tree – I just had to sit there and wait. It ended up taking about 20-30 minutes which wasn’t unusual (sadly enough), but not being able to give an estimate was rough. I remember watching Mike Beltzner talking about random things on Air Mozilla trying to fill time and he’d ask me on IRC for an update every couple minutes and I’d reply with “real soon now.” I felt bad for him.

So, I contributed to us being late to launch in a high-profile kind of way, but we still made the record and I was happy about that. :)

What’s something you’re particularly proud of?

When the Marketplace team gets in its groove it blows productivity estimates out of the water. We work so well together I’m just proud of my whole team. The entire Firefox Marketplace was just an idea a year ago and look where we are now.

What’s coming up that you’re excited about?

We’re going to be merging Get Personas into Add-ons. For real. We’ve been talking about it for, like, 2 years and every time it comes up it gets bumped for something higher priority but it’s finally close enough that we’re just going for it.

What question do you wish you’d been asked?

Q: If I typed “youtube” into your awesomebar would you be embarrassed at what showed up?

A: Nope! First hit: Macklemore’s Thrift Shop. Second hit: LMFAO’s Sexy and I know it. Third hit: I clicked it and it says it has been removed because it violated Youtube’s TOS. So, that one might have been embarrassing, I’m not sure.

Favourite city that’s not Portland?

The easy answer is wherever my friends are, but as I’m staring outside at the cold wind and the looming rain I’d say somewhere warm like some anonymous fishing town in Mexico.

Firefox Marketplace: Feb 22nd – March 7th

Andy McKay

This is a weekly post focusing on the status of the Firefox Marketplace. Because of Mobile World Congress there were no production pushes last week. So this post covers the last two weeks.

  • Total bugs open: 476
  • Total bugs opened last two weeks: 213
  • Total bugs closed last week: 138

A few of the many changes:

  • Anonymous installs, app creation and user registration being recorded for metrics (bug 836586 and more).
  • If an app is public the API will return details to anonymous users (bug 827986).
  • App validator API no longer requires authentication (bug 826835).
  • Ratings API added (bug 841199).
  • Mobile review pages added for reviewers (839543).
  • Multiple improvements to the devhub documentation.
  • Simulated in-app payments can now be done (839652).
  • Marketplace locked to portrait orientation (844186).

The Firefox Marketplace is building a packaged app which allows the browsing and installing of apps. Fireplace is the start of that. To allow the packaged app to communicate with the Marketplace we are rapidly expanding the API.

Static File Shootout: Apache RewriteRules vs. Flask

Erik Rose


Ever wonder just how much you gain by having Apache serve your static files? I had a particularly hairy set of RewriteRules to support this in my project and a fairly simple Python routine as an alternative, so I ran a few benchmarks to find out.

DXR is a code navigation and static analysis tool for the Firefox codebase. It consists of two parts:

  • A Flask app which runs under Apache via mod_wsgi
  • An offline build process which generates a syntax-highlighted version of every Firefox source file as HTML and lays them down on disk

These generated files are the ones served by RewriteRules in production:
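(The production rules are embedded in the original post; a toy version of the idea looks something like this, with illustrative paths – the real rules are considerably hairier.)

```apache
RewriteEngine On
# If a pre-generated HTML version of the requested source file exists
# on disk, serve it directly, bypassing Python entirely:
RewriteCond /srv/dxr/generated%{REQUEST_URI}.html -f
RewriteRule ^/code/(.*)$ /srv/dxr/generated/code/$1.html [L]
```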

However, for convenience during development, we also have a trivial Python routine to serve those files:
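(The real routine lives in the Flask app; here is a self-contained sketch of the same idea in plain Python, minus the Flask plumbing – look up the generated HTML under a root directory and return its contents. Paths and names are illustrative.)

```python
import os
import tempfile

def browse(root, path):
    """Serve a pre-generated HTML page from disk.

    A sketch of DXR's dev-mode static file routine. Returns
    (status, body) rather than a real HTTP response.
    """
    full = os.path.normpath(os.path.join(root, path))
    # Refuse requests that try to escape the generated tree:
    if not full.startswith(os.path.normpath(root) + os.sep):
        return 403, b''
    if not os.path.isfile(full):
        return 404, b''
    with open(full, 'rb') as f:
        return 200, f.read()

# Quick demo against a throwaway "generated" tree:
root = tempfile.mkdtemp()
with open(os.path.join(root, 'README.mkd.html'), 'wb') as f:
    f.write(b'<html>hi</html>')

status, body = browse(root, 'README.mkd.html')  # -> (200, b'<html>hi</html>')
```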

I pitted the RewriteRules against the Python controller on a local VM, so keep in mind that all the standard caveats of complex systems apply. That said, let’s see what happened!

Having heard complaints about NFS-based VirtualBox shared directories (where my generated files lived), I expected both solutions to be bottlenecked on IO. To my surprise, I saw a pronounced difference between them.

The RewriteRules serve static pages in an average of 6 ms at a concurrency of 10. This is a representative test run of ab. The tables at the bottom are the most important parts:

(py27)[15:16:12 ~/Checkouts/dxr]% ab -c 10 -n 1000 http://33.33.33.77/code/README.mkd
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 33.33.33.77 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        Apache/2.2.22
Server Hostname:        33.33.33.77
Server Port:            80

Document Path:          /code/README.mkd
Document Length:        7348 bytes

Concurrency Level:      10
Time taken for tests:   0.573 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      7635628 bytes
HTML transferred:       7355348 bytes
Requests per second:    1744.93 [#/sec] (mean)
Time per request:       5.731 [ms] (mean)
Time per request:       0.573 [ms] (mean, across all concurrent requests)
Transfer rate:          13011.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   0.4      2       4
Processing:     1    4   4.9      4     124
Waiting:        1    4   3.1      4      99
Total:          2    6   4.9      5     124

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      6
  75%      6
  80%      6
  90%      6
  95%      7
  98%      7
  99%      8
 100%    124 (longest request)

Routing the requests through Python instead drives the mean up to 14 ms:

 50%     14
 66%     15
 75%     16
 80%     17
 90%     19
 95%     21
 98%     23
 99%     25
100%     32 (longest request)

This is with WSGIDaemonProcess example.com processes=2 threads=2, which, after a little experimentation, I determined is close to optimal for my 4-CPU VM. It makes some intuitive sense: one thread for each logical core. The host box has 4 physical cores with hyperthreading, so there are plenty to go around.

Turning the concurrency down to 2 had surprising results: Python actually got slightly faster than Apache: 3 ms avg. This could be measurement noise.

 50%      3
 66%      3
 75%      4
 80%      4
 90%      4
 95%      5
 98%      7
 99%      8
100%    222 (longest request)

-c 4 yields 6 ms:

 50%      6
 66%      7
 75%      8
 80%      8
 90%     10
 95%     11
 98%     12
 99%     12
100%     14 (longest request)

And, more generally, there is a linear performance falloff as concurrency increases:

There's a linear relationship between concurrent requests and mean response time.

This was a surprise, as I expected more of a cliff when I exceeded the complement of 4 WSGI threads.

When we keep our concurrency down, it turns out that Apache doesn’t necessarily run the RewriteRules any faster than Python executes browse(). However, at high concurrency, Apache does pull ahead of Python, presumably because it has more threads to go around. That will probably hold true in production, since raw Apache processes eat less RAM than WSGI children and will thus have their threads capped less stringently.

Is a gain of twenty-some milliseconds for likely concurrency levels worth the added complexity—and logic duplication—of the RewriteRules? I think not. To get an idea of what 20 ms feels like, the human audio wetware juuuust begins to recognize two adjacent sounds as distinct when they are 20 ms apart: any closer, and they blend into a continuous tone. (Some sources go as low as 12Hz.) There are some usability studies that estimate a 1% dropoff in conversion rate for every extra 100 ms a page takes to load, but no one bothers to measure very fast-loading pages, and I would expect to reach a “fast enough” plateau eventually. Even if the linear relationship were overturned on real hardware, real hardware should be faster, making the latency differences (within reasonable concurrencies) even less than 20 ms. The “go ahead and use Python” conclusion should hold.

Finally, it’s worth mentioning that I’m still serving what are classically in the category of “static resources”—CSS, JS, images, and the like—with Apache, because we can do so in one simple Alias directive. What’s to lose?
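That directive amounts to a single line (the filesystem path here is illustrative):

```apache
# Map /static/ URLs straight onto files on disk - no rewriting needed.
Alias /static/ /srv/dxr/static/
```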

Obviously, this is a boisterously multidimensional search space, comprising virtualization, threads, hardware, and IO abstraction, and I had ab running on the same physical box, so take these exact results as a single data point. However, they do establish a nice ceiling that lets us stop worrying about wringing every last drop out of the web server.

Packaged Apps on the Marketplace

Rob Hudson

This blog post will cover what happens after you submit your packaged app to the Firefox Marketplace and how users get your app onto their devices.

What is a packaged app?

A packaged app is an Open Web App that has all of its resources (e.g., HTML, CSS, JavaScript, app manifest, images) contained in a zip file, instead of having its resources on a web server. A packaged app is simply a zip file with the app manifest in its root directory. The manifest must be named manifest.webapp.

If your app needs access to sensitive device APIs, it must be packaged and signed by the Firefox Marketplace. If your app has a lot of static assets, it might be advantageous to package your app so all assets are stored on the device.
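For illustration, a minimal manifest.webapp might look like this (the app name, developer details, and paths are made up):

```json
{
  "name": "My Packaged App",
  "version": "1.0",
  "description": "An example packaged app",
  "launch_path": "/index.html",
  "developer": {
    "name": "Example Developer",
    "url": "http://example.com"
  },
  "icons": {
    "128": "/img/icon-128.png"
  }
}
```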

Uploading Your Packaged App to the Firefox Marketplace

Once you have an app and are ready to submit it, package up your app into a zip file and upload it. See the packaged app docs for background on getting started and creating a zip file for your app.

When ready, sign in to the Marketplace Developer Hub and submit your app. Once you finish the process your app is pending review.

The Review Process

The Marketplace Review Team will review apps that have been in the queue the longest first. During the review process there may be back and forth with the review team about your app. Reviewers assess the security, privacy, content, functionality, and usability according to the Marketplace Review Criteria.

App Approval

Once approved, your app becomes available on the Marketplace. If you’ve selected to make the app public yourself after it is approved (an option during the submission process), you will receive an email saying your app was approved and is awaiting your action to make it publicly available. When your app is made public, the package is cryptographically signed and made available for download from the Marketplace servers, a “mini” manifest is generated from the package manifest’s content and hosted on the Marketplace, and the app shows up in search queries and Marketplace listing pages.

App Installation

Packaged apps are installed by calling the navigator.mozApps API function, installPackage, passing it the absolute URL path to the “mini” manifest. On the device this triggers an install dialogue that displays information about the app to the user. The user has the choice to install the app or not. Assuming the user chooses to continue with app installation the device will download the zip file, verify the cryptographic signature, and install the app on the device.
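Concretely, the install call looks something like this (the manifest URL is made up; installPackage returns a DOMRequest and only exists on platforms with Open Web App support, such as Firefox OS):

```javascript
// URL is hypothetical; point it at your app's Marketplace "mini" manifest.
var manifestURL = 'https://marketplace.firefox.com/app/example/manifest.webapp';
var request = navigator.mozApps.installPackage(manifestURL);
request.onsuccess = function () {
  // request.result is the installed App object.
  console.log('Installed: ' + request.result.manifest.name);
};
request.onerror = function () {
  console.log('Install failed: ' + request.error.name);
};
```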

App updates

When you have a new version of your app the process is similar. Visit your app’s “Status & Versions” page from within the Developer Hub’s Manage My Submissions page. On this page you’ll see a section titled, “Upload New Version”. Upload your new zip file here. Your app must have a new version string specified in the manifest file. After upload you will have an opportunity to provide version change information and notes to reviewers. When complete, the latest version of your app is pending review and the previous version is still available on the Marketplace. When the reviewer approves your update the new version will be made available for download and the app details on Marketplace pages are updated.

The device polls the “mini” manifest URL periodically looking for updates. Once your new version is approved devices will then detect the update and will attempt to download and install the new zip file. The process is the same as far as prompting the user to upgrade, downloading the zip file, verifying the signature, and finally installing the updated package.

Why a “mini” manifest?

The reason for the mini-manifest is to present the install or update dialogue to the user to provide them the option to install/upgrade or cancel, without having to download the complete zip file, which could be quite large, especially over mobile networks. Without the mini-manifest we would need the complete zip file to present this dialogue to the user. On the Marketplace the mini-manifest is automatically generated and maintained for every packaged app, including proper caching and handling of ETags required for Firefox OS to detect updates.
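The update check is ordinary HTTP caching. Conceptually, the server side amounts to something like this sketch (not the Marketplace’s actual code): derive an ETag from the mini-manifest body and answer 304 while the device’s If-None-Match still matches.

```python
import hashlib

def etag_for(mini_manifest_bytes):
    """Derive an ETag from the mini-manifest body."""
    return '"%s"' % hashlib.sha256(mini_manifest_bytes).hexdigest()

def respond(mini_manifest_bytes, if_none_match=None):
    """Return (status, headers, body) for a mini-manifest request.

    304 tells the device its cached copy (and thus the app) is current;
    200 with a new ETag signals that an update is available.
    """
    etag = etag_for(mini_manifest_bytes)
    if if_none_match == etag:
        return 304, {'ETag': etag}, b''
    return 200, {'ETag': etag}, mini_manifest_bytes

manifest = b'{"name": "Example", "version": "1.0"}'
status, headers, _ = respond(manifest)              # first poll: 200
status2, _, _ = respond(manifest, headers['ETag'])  # unchanged: 304
```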

Hopefully this provides some insight into packaged apps, the review process, and some technical details of packaged app installs and updates. We look forward to your future packaged app submissions.

February Beer and Tell

James Socol

Every month, Mozilla web devs and community get together and do lightning talks for each other about little things we’ve been working on. We call it Beer and Tell. They are on Friday afternoons.

The February 2013 Beer and Tell was not one to disappoint! Unfortunately, we don’t have a recorded video of this one, so you’ll have to take my word for it. Or read on!

NoodleAmp Reborn

Michael Kelly (mkelly) showed us the latest iteration of NoodleAmp, and let random people from the internet pick which songs to play from his computer. If you can write a Python generator and get gstreamer installed, you, too, can use NoodleAmp!

DetourApp

ednapiranha showed us DetourApp, the messaging app for spies, where all the messages self-destruct after 10 seconds. Hop over, sign in with Persona, and start sending missives. The code is up on GitHub and it uses Flask, Redis, PIL, Jinja2, nunjucks, and my personal favorite: bleach.

django-fancy-cache

peterbe is at it again, coming in with django-fancy-cache, a replacement for the built-in @cache_page decorator that lets you use useful custom functions to control how you cache views, when they expire, and even manipulate the output! If you’ve ever wished you could cache a form with @cache_page and then been bit by the CSRF token, this is the library for you.

LeechTracker

Wraithan admits to getting distracted during the day, but it’s OK because he built a tool to help! Combined with LeechBlock, LeechTracker helps him figure out which times of day he’s most susceptible to distraction, and what those distractions are. Then he goes back to work. And you can, too! Just install the add-on and follow the instructions. Or check out how the distractions have tapered off over time.

My Search for the Perfect Keyboard

Erik Rose knows that the tool developers use more than anything else is their keyboard, so he invested serious time in making sure he had the right one. After trying the rest:

  • http://www.keyboardbumps.com/
  • http://matias.ca/quietpro/mac/
  • http://pckeyboard.com/

Erik found the best: http://www.daskeyboard.com/

Though he’d be the first to tell you it’s a personal thing. Find a place you can try out a bunch of them!

That’s all for this month. Tune in next time. Same Bash time, same Bash channel.

The restful Marketplace

Andy McKay

While the Firefox Marketplace is being built, we are starting to make some fundamental changes in the way the Marketplace is constructed behind the scenes. In the beginning there was one big Django site that served all the requests back to the users.

Over the course of the last few months that has changed to being a smaller number of services that provide APIs to each other. We’ve got separate services for payments (and this), statistics (and this) and a new front end client and plans for more. The main communication mechanism between them is going to be REST APIs.

For REST APIs in Django we are currently using Tastypie, which does a pretty good job of doing a lot of things you’d need. There are a few frustrations with Tastypie and going forward I’m extremely tempted by Cornice, which we currently use for statistics.

When you ask people about consuming REST APIs in Python, lots of them tell you “we just use requests”. Requests is a great library for making HTTP calls, but when you are developing against a REST API, all the overhead of coping with HTTP is a bit much: coping with client errors versus HTTP errors, understanding the error responses, handling scaling and failover, and generally coping with an imperfect world.

So we took slumber and wrapped it in our own library called curling. Curling is a wrapper that makes some assumptions about the server and the client. It provides one entry point for all our REST APIs and a place to enforce consistency on the client side. Below I get a timestamp on a transaction using requests.
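(The embedded example amounted to something like this sketch – the endpoint path and field names are hypothetical, not the real service’s API. Notice how the caller has to cope with HTTP status and result counts by hand.)

```python
import requests  # third-party; pip install requests

def extract_timestamp(payload):
    """Pull the created timestamp out of a list-style API response.

    The caller must verify multiplicity by hand - exactly the
    boilerplate curling absorbs for us.
    """
    objects = payload['objects']
    if len(objects) != 1:
        raise ValueError('expected exactly one transaction, got %d' % len(objects))
    return objects[0]['created']

def get_transaction_timestamp(root, uuid):
    res = requests.get(root + '/generic/transaction/', params={'uuid': uuid})
    res.raise_for_status()  # HTTP errors are still our problem
    return extract_timestamp(res.json())

sample = {'objects': [{'created': '2013-02-22T15:16:52'}]}
created = extract_timestamp(sample)  # -> '2013-02-22T15:16:52'
```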

In the curling example, however, it will check that one and only one result is returned (raising a meaningful error if not) and correctly raise a meaningful error based on the HTTP response.
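(For comparison, the curling version would look roughly like this – I’m sketching the slumber-style attribute traversal that curling inherits, so treat the exact method names as illustrative rather than gospel.)

```python
from curling.lib import API  # Mozilla's curling wrapper around slumber

api = API('http://localhost:8001')  # hypothetical service root

# Attribute access maps onto the URL, slumber-style. The lookup call
# checks that exactly one result came back and converts HTTP errors
# into meaningful client-side exceptions. (Method name illustrative.)
transaction = api.generic.transaction.get_object(data={'uuid': 'sample-uuid'})
created = transaction['created']
```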

The result is code that is easier to write and read, but it is still relatively close to the metal of just being a REST API without being as simple as just HTTP requests or as complicated as a Web Services and SOAP stack.

As an extra bonus, curling has a command line API that fills some HTTP headers in for you, syntax-highlights output, and the like. Also, if you are calling a Django server in debug mode, rather than spewing lines of HTML to the screen, it will write the HTML to a file and open a browser to it.


These APIs are open and growing rapidly. You can find documentation on them on each project, but the main one to follow is in zamboni.