05
Mar 14

scratchpad made me happy

I love the Firefox devtools command line. I cut & paste all kinds of crazy code into it. On the other hand, I never did really quite “get” the Scratchpad, though to be honest I also haven’t tried using it much. I’ve been happy just to edit code in an Emacs buffer on the side and cut & paste.

The Problem

But last night, I ran into a problem that Scratchpad turned out to be perfect for. And now I loveses it. My precioussssss….

I am the proud survivor — er, I mean “father” — of two kids, one of whom goes to a school with lots of required “volunteer” time. Since it is required, you have to record your hours via a web-based tool. It’s a fairly primitive interface on some ancient backend ASP monstrosity. It’s tolerable for recording one entry at a time — you just need to enter a date, a start hour, a start minute, an end hour, an end minute, three options selected from dropdown lists of several dozen items (enough to require scrolling), etc.

Ok, it’s pretty awful even for entering one record.

But entering 80 of the things, most of them differing only by the date, is intolerable. Especially since the d#@n form resets itself completely every time you submit it. And so automation rage kicked in.

Existing Solution

Last year, I did it by capturing the POST request and writing a script to resubmit with different values. It worked, kinda, though I could only get a couple of fields working and it kept timing out my authentication cookie. Or something. I just remember it being a major pain. Even capturing the full request was a little difficult since it’s HTTPS only and I seem to remember some limitation in the Firefox devtools of the time when trying to see the POST body data. (Again, “or something”.)

The Latest Hotness

This year, I was overjoyed to see the option to edit and resubmit a query. I’ve wanted that so many times. And yet… the data was still x-www-form-urlencoded, which meant cutting & pasting from the devtools pane (already a challenge, given Linux/xorg/xfce/emacs’s mishandling of cut buffers or clipboards or whatever the heck they are). Then I had to find the field I cared about and update it, and then discover that it overwrote my previous entry, because there was some embedded token in one of the other fields that referred to the entry it was creating. Ugh. (Dim memories resurfaced at about this point from when I needed to get around this last year. I still don’t remember the details.)

So then I thought, “hey, I’ll just update the page in place and then click submit! I’ll do it at the HTML level instead of the HTTP level!” So I wrote up some JS code to find and set the various form fields, and clicked submit. Success!

Only it’s still a painful flow. I have to edit the relevant field in emacs (or eventually, I’d probably generate the JS with a shell or Perl or Python script), cut & paste into the tiny console line (it’s a console prompt, it’s supposed to be small, I have no issue with that; though maybe pasting into the console itself should send the text to the prompt? I dunno), press enter, then click submit. Not too bad, and definitely well within the “tolerable” zone.

But it’d be easier if I could just define a JS function that finds and fills the fields, and pass in the one value I need to change. Then I can just enter that at the console. Only where can I stash the function? If I put it on the page, I assume it’ll get nuked whenever I submit the page. Hey, I wonder if that Scratchpad thing might help here…

Enter the Scratchpad. Oh yeah.

So I pasted my little script into the Scratchpad. It defines a function to fill out the fields. Final flow: edit one line of JS to change a date, press Ctrl-R to run it. The fields magically update, I click submit.

Obviously, I could’ve done the submit from the script while I was at it, but I like to set things up and fire them off in separate steps. Call it paranoia. I do the same thing with shell scripts — I’ll write a script to echo out a series of commands to perform, run it once to verify that it’s what I want, then run it again piped through bash. I’m just too clumsy to get it right the first time.

But anyway, Scratchpad was the awesome for this task. I’ll be considering it whenever thinking about how to do other things now.

Doers of Good

Thank you robcee and #devtools team. You made my life gooder. More goodish. I am now living goodlier.

My Example

The script I used, if you’re curious:

// Fill in the whole volunteer-hours form; only the date varies between entries.
function enter(date) {
    var form = document.forms[0];
    var inputs = form.getElementsByTagName("input");
    var selects = form.getElementsByTagName("select");
    var texts = form.getElementsByTagName("textarea");
    inputs["VolunteerDate"].value = date;
    inputs["FromHour"].value = 8;
    inputs["FromMin"].value = 45;
    inputs["ToHour"].value = 9;
    inputs["ToMin"].value = 45;
    inputs["nohour"].value = 1;
    selects["MinCombo"].value = 0;
    selects["classcombo"].value = 40;
    selects["activitycombo"].value = 88;
    selects["VolunteerCombo"].value = "Steve Fink";
    texts[0].value = "Unit study center"; // .value, not .innerHTML, so it takes effect reliably
}

enter("2/27/2014"); // I edit this line and bounce on the Ctrl-R key, then click submit

14
Feb 13

Browser Wars, the game

A monoculture is usually better in the short term. It’s a better allocation of resources (everyone working on the same thing!). If you want to write a rich web app that works today (i.e., on the browsers of today), it’s much better.

But the web is a platform. Platforms are different beasts.

Imagine it’s an all-WebKit mobile web. Just follow the incentives to figure out what will happen.

Backwards bug compatibility: There’s a bug — background SVG images with a prime-numbered width disable transparency. A year later, 7328 websites have popped up that inadvertently depend on the bug. Somebody fixes it. The websites break with dev builds. The fix is backed out, and a warning is logged instead. Nothing breaks, the world’s WebKit, nobody cares. The bug is now part of the Web Platform.

Preventing innovation: a gang of hackers makes a new browser that utilizes the 100 cores in 2018-era laptops perfectly evenly, unlike existing browsers that mostly burn one CPU per tab. It’s a ground-up rewrite, and they do heroic work to support 99% of the websites out there. Make that 98%; WebKit just shipped a new feature and everybody immediately started using it in production websites (why not?). Whoops, down to 90%; there was a WebKit bug that was too gross to work around and would break the threading model. Wtf? 80%? What just happened? Ship it, quick, before it drops more!

The group of hackers gives up and starts a job board/social network site for pet birds, specializing in security exploit developers. They call it “Polly Want a Cracker?”

Inappropriate control: Someone comes up with a synchronization API that allows writing DJ apps that mix multiple remote streams. Apple’s music studio partners freak out, prevent it from landing, and send bogus threatening letters to anyone who adds it into their fork.

Complexity: the standards bodies wither and die from lack of purpose. New features are fine as long as they add a useful new capability. A thousand flowers bloom, some of them right on top of each other. Different web sites use different ones. Some of them are hard to maintain, so only survive if they are depended upon by a company with deep enough pockets. Web developers start playing a guessing game of which feature can be depended upon in the future based on the market cap of the current users.

Confusion: There’s a little quirk in how you have to write your CSS selectors. It’s documented in a ton of tutorials, though, and it’s in the regression test suite. Oh, and if you use it together with the ‘~’ operator, the first clause only applies to elements with classes assigned. You could look it up in the spec, but it hasn’t been updated for a few years because everybody just tries things out to see what works anyway, and the guys who update the spec are working on CSS5 right now. Anyway, documentation is for people who can’t watch tutorials on YouTube.

End game: the web is now far more capable than it was way back in 2013. It perfectly supports the features of the Apple hardware released just yesterday! (Better upgrade those ancient ‘pads from last year, though.) There are a dozen ways to do anything you can think of. Some of them even work. On some WebKit-based browsers. For now. It’s a little hard to tell what, because even if something doesn’t behave like you expect, the spec doesn’t really go into that much detail and the implementation isn’t guaranteed to match it anyway. You know, the native APIs are fairly well documented and forward-compatible, and it’s not really that hard to rewrite your app a few times, once for each native platform…

Does this have to happen just because everybody standardizes on WebKit? No, no more than it has to happen because we all use silicon or TCP. If something is stable, a monoculture is fine. Well, for the most part — even TCP is showing some cracks. The above concerns only apply to a layer that has multiple viable alternatives, is rapidly advancing, needs to cover unexpected new ground and get used for unpredicted applications, requires multiple disconnected agents to coordinate, and things like that.


18
Apr 12

What’s your random seed?

Greg Egan is awesome

I’m going back and re-reading Luminous, one of his collections of short stories. I just read the story Transition Dreams, which kinda creeped me out. Partly because I buy into the whole notion that our brains are digitizable — as in, there’s nothing fundamentally unrepresentable about our minds. There’s probably a fancy philosophy term for this, with some dead white guy’s name attached to it (because only a dozen people had thought of it before him and he talked the loudest).

Once you’re willing to accept accurate-enough digitization, the ramifications get pretty crazy. And spooky. I can come up with some, but Egan takes it way farther, and Transition Dreams is a good illustration. But I won’t spoil the story. (By the way, most of Egan’s books are out of print or rare enough to be expensive, but Terrence tells me that they’re all easily available on Kindle. Oddly, although I would be happy to transition my mental workings from meat to bits, I’m still dragging my heels on transitioning my reading from dead trees to bits.)

Transition and Free Will

Now, let’s assume that you’ve converted your brain to live inside a computer (or network of computers, or encoded into the flickers of light on a precisely muddy puddle of water, it really doesn’t matter.) So your thinking is being simulated by all these crazy cascades of computation (only it’s not simulated; it’s the real thing, but that’s irrelevant here.) Your mind is getting a stream of external sensor input, it’s chewing on that and modifying its state, and you’re just… well, being you.

Now, where is free will in this picture? Assuming free will exists in the first place, I mean, and that it existing and not existing are distinguishable. If you start in a particular, fully described state, and you receive the exact same inputs, will you always behave in exactly the same way? You could build the mind-hosting computer either way, you know, and the hosted minds wouldn’t normally be able to tell the difference. But they could tell the difference if they recorded all of their sensory inputs (which is fairly plausible, actually), because they could make a clone of themselves back at the previous state and replay all their sensory input and see if they made the same decisions. (Actually, it’s easier than that; if the reproduction was accurate, they should end up bit-for-bit identical.)
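
In code terms, the claim here is just that a deterministic mind is a pure function folded over its input log. Here’s a minimal sketch of the replay idea (the names and the toy step function are mine, not anything from the story):

// A "mind" as a deterministic step function: same state + same input => same next state.
function step(state, input) {
    return (state * 31 + input) >>> 0; // toy stand-in for the real (enormous) update rule
}

// Restoring a snapshot and replaying the sensory log is just a fold.
function replay(snapshot, inputLog) {
    return inputLog.reduce(step, snapshot);
}

var log = [3, 1, 4, 1, 5, 9];      // recorded sensory inputs
var original = replay(12345, log); // the "real" run
var clone = replay(12345, log);    // a clone started from the same snapshot
console.log(original === clone);   // true: bit-for-bit identical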

I don’t know about you, but I’d rather not be fully predictable. I don’t want somebody to copy me and my sensor logs, and then when I’m off hanging out in the Gigahertz Ghetto (read: my brain is being hosted on a slow computer), they could try out various different inputs on faster computers to see how “I” reacted and know for 100% certainty how to achieve some particular reaction.

Well, ok, my time in the GHzGhetto might change me enough to make the predictions wrong, so you’d really have to do this while I was fully suspended. Maybe the shipping company that suspends my brain while they shoot me off to a faster hosting facility in a tight orbit around the Sun (those faster computers need the additional solar energy, y’know) is also selling copies on the side to advertisers who want to figure out exactly what ads they can expose me to upon reawakening to achieve a 100% clickthrough rate. Truly, truly targeted advertising.

So, anyway, I’m going to insist on always having access to a strong source of random numbers, and I’ll call that my free will. You can record the output of that random number generator, but that’ll only enable you to accurately reproduce my past, not my future.

The Pain and Joy of Determinism

Or will I? What if that hosting facility gets knocked out by a solar flare? Do I really want to start over from a backup? If it streams out the log of sensor data to a safer location, then it’d be pretty cool to be able to replay as much of the log as still exists, and recover almost all of myself. I’d rather mourn a lost day than a lost decade. But that requires not using an unpredictable random number generator as an input.

So what about a pseudo-random number generator? If it’s a high quality one, then as long as nobody else can access the seed, it’s just as good. But that gives the seed incredible importance. It’s not “you”, it’s just a simple number, but in a way it allows substantial control over you, so it’s private in a more fundamental way than anything we’ve seen before. Who would you trust it to? Not yourself, certainly, since you’ll be copied from computer to computer all the time and each transfer is an opportunity for identity theft. What about your spouse? Or maybe just a secure service that will only release it for authorized replays of your brain?
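
To make the seed’s importance concrete, here’s a sketch using mulberry32, a tiny seedable PRNG (my choice of generator; nothing above names one). Anyone who holds the seed can regenerate every “random” choice you will ever make:

function mulberry32(seed) {
    return function() {
        seed = (seed + 0x6D2B79F5) | 0;
        var t = Math.imul(seed ^ (seed >>> 15), seed | 1);
        t = t ^ (t + Math.imul(t ^ (t >>> 7), t | 61));
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };
}

var me = mulberry32(42);       // my "free will"
var thief = mulberry32(42);    // someone who stole the seed
console.log(me() === thief()); // true, now and forever: every choice is reproducible

With a different seed the two streams diverge immediately; with the same seed, they never do.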

Without that seed (or those timestamped seeds?), you can never go back. Well, you can go back to your snapshots, but you can’t accurately go forward from there to arbitrary points in time. Admittedly, that’s not necessary for some uses — if you want to know why you did something, you can go back to a snapshot and replay with a different seed. If you do something different, it was a choice made of your own free will. You could use it in court cases, even. If you get the same result, well, it’s trickier, because you might make the same choice for 90% of the possible random seeds or something. “Proof beyond a reasonable confidence interval?” Heh.


13
Apr 12

bzexport changes released

bzexport --new and hg newbug have landed

My bzexport changes adding a --new flag and an hg newbug command have landed. Ok, they landed months ago. See my previous blog post for details; all of the commands and options described there are still valid in the current version. But please pull from the official repo instead of my testing repo given in the earlier blog post.

Installing bzexport

mkdir -p ~/hg-extensions
cd ~/hg-extensions
hg clone http://hg.mozilla.org/users/tmielczarek_mozilla.com/bzexport

In your ~/.hgrc, add the extension to the [extensions] section:

[extensions]
bzexport = ~/hg-extensions/bzexport/bzexport.py

Note to Windows users: unfortunately, I think the Python packaged with MozillaBuild is missing the json module that bzexport needs. I think it still works if you use a system Python with the json module installed, but I’m not sure.
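
A quick way to check whether the Python you’re using has it (a generic check, nothing bzexport-specific):

python -c "import json"

If that exits silently, you’re fine; an ImportError means that Python won’t work for bzexport.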

Trying it out

For the (understandably) nervous users out there, I’d like you to give it a try and I’ve made it safe to do so. Here are the levels of paranoia available: