Attachment history

Earlier today, Dietrich tweeted a great pic as he was getting ready for the 2013 Summit in Brussels, showing his collection of badges from previous summits.

As it happens, I also dug out my badge collection at home just yesterday. (Didn’t want to forget it at the office!)

That got me thinking, so I pulled out my drawer full of other Mozilla schwag and took some pictures. It’s an interesting historical look at some of the artifacts Mozilla has made over the years. Hopefully we’ll get some more great stuff at this year’s summit. Because, really, that’s the whole point of it. ;-)

Posted in PlanetMozilla | 1 Comment

Mozilla office history

If you don’t live in the San Francisco Bay Area, you may not be aware that the culmination of a major infrastructure project is underway this holiday weekend. The Bay Bridge, one of 5 major Bay Area bridges, is in the middle of a 5 1/2 day closure as it’s transitioned over to its replacement. (The other 4 major bridges, in case you’re wondering, are the Richmond–San Rafael Bridge, the San Mateo Bridge, the Dumbarton Bridge, aaaand… hmmm… oh, right, the Golden Gate Bridge.)

The Bay Bridge was originally built in the 1930s, and after the Loma Prieta earthquake in 1989 it became clear it needed to be replaced. One of the flashbulb memories many people have of the quake — in addition to it interrupting the ’89 World Series and the collapse of the double-decker Cypress Street Viaduct — is the failure of a span on the Bay Bridge, with cars driving over it. Since then, the western half of the Bay Bridge has been retrofitted to be earthquake-safe, but the eastern span of the bridge has taken longer to completely replace. This weekend’s work is to transition the connection points, so that tomorrow people will be driving over a completely new bridge that’s been 11 years and $6.4 billion in the making!

(I’m getting to the part where Mozilla ties into this.)

Last Friday @BurritoJustice tweeted a link to a slideshow that dove into the engineering history of the Bay Bridge, complete with photos taken during the construction.

It’s some fantastic engineering porn, and I spent my lunch reading through all of it. I happened to notice that the building in the background of one of the photos looked familiar…

Mozilla’s San Francisco office, in the historic Hills Brothers Coffee building at 2 Harrison Street, is literally next-door to where the western span of the Bay Bridge lands in S.F. It makes for some really great views of the bridge from our top-floor patio:

As well as a first-row seat beneath the giant “HILLS BROS COFFEE” sign atop the building.

It’s this sign that made it easy to spot our building in the engineering history slideshow. The building was constructed in 1926, and the Bay Bridge wasn’t built until 1933–1936, so I was curious to see if the sign was visible in other contemporaneous photos. I started digging through some online resources, and got lucky right away by finding a high-res version of that picture at the Library of Congress:

I skimmed through the other 415 photos in this collection and another 1160 from UC Berkeley (So You Don’t Have To™), and found some other nice shots with the Hills Coffee sign peeking out from the background:






So there you have it. Pics of the Mozilla San Francisco office from both ends of an 80-year span of history.

Posted in Firefox, PlanetFirefox, PlanetMozilla | 1 Comment

A change in password manager

In my last post I talked about how the password manager currently works. Starting in Firefox 26 (assuming all goes well), it will work a bit differently.

Currently, the password manager must look at each page that loads, and process it to see if it might be a login page (and if so try to fill in a saved username/password). This is kind of dumb and has always annoyed me. For one, it’s not really great for performance. We spend a small amount of time to do this on every page, when only a tiny fraction of pages will actually be login pages. Nor is it suited to the modern web — many pages do interesting things after they’ve initially loaded, but the password manager only attempts to do its work at initial page load. If a page dynamically adds a username/password field to the DOM afterward (in response to, say, the user clicking a “login” button), we never notice and thus can’t offer to fill in a login.

Starting with tomorrow’s Firefox 26 nightly build (or the day after, depending on how the merges and builds go), this whole process will instead be triggered when an <input type=password> is appended to the DOM — be it from loading static HTML, or when the page dynamically adds one later. This means the password manager can skip processing most page loads (i.e., the ones without password fields). It also means it can now fill in logins on some sites that it couldn’t before.

This new functionality comes via bug 355063, with major supporting work from bug 771331 (which added the chrome-only DOMFormHasPassword event) and bug 839961 (which refactored password manager to prepare it for using this event). It’s also interesting to mention that bug 762593 has already added code to use DOMFormHasPassword, in order to log console warnings when a page is using password fields insecurely.
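For the curious, consuming that event looks roughly like the sketch below. This is a minimal sketch rather than the actual Firefox code, and maybeFillForm() is a hypothetical stand-in for the fill logic described in the next post:

    // DOMFormHasPassword (the chrome-only event from bug 771331) fires when a
    // form gains an <input type=password>, whether from static HTML or a later
    // script-driven insertion.
    window.addEventListener("DOMFormHasPassword", function (event) {
      let form = event.target;   // the form that just gained a password field
      maybeFillForm(form);       // hypothetical: look up saved logins and fill them
    }, true);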

There is one potential downside.

This changes the timing of when passwords are filled, so if a site sets or clears the contents of its login fields, its script could now blow away a filled-in login that, under the old timing, wouldn’t have been filled in yet (and so wasn’t affected). For example, I’ve seen sites clear fields as a poor security measure (I guess to deter password managers?), as well as set fields (to values like “Enter username here”). The latter is a great example of when and why the HTML5 placeholder attribute should be used.
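To make that concrete, the difference is between a page pre-filling the value itself, which looks like real user input to everything, including the password manager (usernameField here is just a stand-in for whatever input the page has):

    usernameField.value = "Enter username here";   // hypothetical page code

and using the placeholder attribute, which shows the same hint without ever colliding with a value typed by the user or filled in by us:

    usernameField.placeholder = "Enter username here";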

I don’t think this change is a significant risk, and it’s definitely worth it for the other benefits. But should you run across a problem, there will be a signon.useDOMFormHasPassword pref that can be flipped to revert to the old code. (This is only a temporary pref; it will go away once we’re sure this isn’t a big problem.) And, again, if you find problems please file a bug containing debug info.

Posted in PlanetMozilla | 4 Comments

On Firefox’s Password Manager

It’s been a while since I last blogged about Firefox’s password manager. No big deal; it really hasn’t fundamentally changed since I rewrote it 6 years ago for Firefox 3.0. Well, ok, the storage backend did switch to SQLite, but that’s mostly invisible to users. There’s a noteworthy change coming soon (see next post!), but I figured that would be easier to explain if I first talked about how things currently work. Which I’ve been meaning to do for a long time, ahem.

The first thing you should know is that there is no standard or specification for how login pages work! The browser isn’t involved with the login process, other than to do generic things like loading pages and running javascript. So we basically have to guess about what’s going on in order to fill or save a username/password, and sometimes sites can do things that break this guesswork. I’m actually surprised I don’t get questions about this more often.

So let’s talk about how two of the main functions work — filling in known logins, and saving new logins.

Filling in a known login

The overall process for filling in an existing stored login is simple and brutish.

  1. Use the chrome docloader service and nsIWebProgress to learn when we start loading a new page.
  2. Add a DOMContentLoaded event listener to learn when that page has mostly loaded.
  3. When that event fires…
    1. Check to see if there are any logins stored for this site. If not, we’re done.
    2. Loop through each form element on the page…
      1. Is there an <input type=password> in the form? If not, skip form.
      2. Check to see if any known logins match the particular conditions of this form. If not, skip form.
      3. Check to see if various other conditions prevent us from using the login in this form.
      4. Fill in the username and/or password. Great success!

Phew! But it’s the details of looking at a form where things get complex.

To start with, where do the username and password go? The password is fairly obvious, because we can look for the password-specific input type. (What if there’s more than one? Then we ignore everything after the first.) There’s no unique “username” type; instead we just look for the first regular input field before the password field. At least, that was before HTML5 added more input types. Now we also allow types that could plausibly be usernames, like <input type=email> (but not types like <input type=color>). Note that this all relies on the order of fields in the DOM — we can’t detect cases where a username is intended to go after the password (thankfully I’ve never seen anyone do this), or cases where other text inputs are inserted between the actual username field and the password (perhaps with a table or CSS to adjust visual ordering).
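Here’s a rough sketch of that heuristic in JavaScript (illustrative only; the real code in LoginManagerContent.jsm handles many more cases):

    // Find the password field and a likely username field in a form. The first
    // password field wins; the username is taken to be the closest text-like
    // input that comes before it in DOM order.
    function findLoginFields(form) {
      let usernameField = null;
      for (let element of form.elements) {
        if (element.localName != "input")
          continue;
        if (element.type == "password")
          return { usernameField, passwordField: element };
        if (element.type == "text" || element.type == "email")
          usernameField = element;   // plausible username candidate
      }
      return { usernameField, passwordField: null };   // no password field at all
    }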

And then there’s the quirks of the fields themselves. If your username is “alice”, what should happen if the username field already has “bob” filled in? (We don’t change it or fill the password.) Or, more common and depressing, what if the username field already contains “Enter your sign in name here”? In Mongolian? (We treat it like “bob”.) What if the page has <input maxlength=8> but your username is “billvonweiterheimerschmidt”? (We avoid filling anything the user couldn’t have typed.)

And then there’s the quirks of the saved logins. What if the username field already has “ALICE” instead of “alice”? (We ignore case when filling; it’s a little trickier when saving.) Is there a difference between <input name=user> and <input name=login>? (Nope, we stopped using the fieldname in Firefox 3 because it was being used inconsistently, even within a site.) What if a site has both a username+password _and_ a separate password-like PIN? (Surprisingly, we were able to make that work! Depending on the presence of a username field, we prefer one login or the other.)

And then. And then and then and then. Like I said, there’s no spec, and sometimes a site’s usage can break the guesses we make.

Saving a new/changed login

In comparison, the process for saving a login is simpler.

  1. Watch for any form submissions (via a chrome earlyformsubmit observer)
  2. Given a form submission, is there a password field in it? If not, we’re done.
  3. Determine the username and password from the form, and compare with existing logins…
    • If username is new, ask if user wants to save this new login
    • If username exists but the password is different, ask if user wants to change the saved password
    • If username and password are already saved, there’s nothing else to do.

Of course, there are still a number of complicating details!

This whole process is initiated by a form submission. If a site doesn’t actually submit a form (via form.submit() or <button type=submit>), but just runs some JavaScript to process the login, the password manager won’t see it. And thus can’t offer to save a new/changed login for the user. (Note that this is easy for a site to work around — do your work in the form’s onsubmit, but return |false| to cancel the submission).
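Spelled out, that workaround is just a few lines of page-side code (doAjaxLogin() here is a hypothetical stand-in for whatever script the site uses to log in):

    // Trigger a real form submission (e.g. from a submit button) so the
    // password manager's observer sees it, do the login work in onsubmit,
    // and return false to cancel the actual navigation.
    loginForm.onsubmit = function () {
      doAjaxLogin(this);
      return false;
    };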

Oh, and there’s still the same question as before — how to determine which fields are the username and password? We reuse the same algorithm as when filling a page, for consistency. But there are a few wrinkles. The form might be changing a password, so there could be up to 3 relevant password fields (for the cases of just a new password, old and new passwords, or old + new + confirm). And the password fields could be in any order! (Surprisingly, again, this works.) The most common problem I’ve seen is an account signup page with other form fields between the username and password, such as “choose a user name, enter your shipping address, set a password”. The password manager will guess wrong and think your zipcode is your username.

Oh, and somewhere in all this I should mention how differences in URLs can prevent filling in a login (or result in saving a seemingly-duplicate login). Clearly google.com and yahoo.com logins should be separate. But we also match on protocol, so that a https://site.com login will not be filled in on http://site.com. And what about www.foo.com and foo.com or accounts.foo.com? (We treat them separately.) What about mail.mozilla.com and people.mozilla.com? (Also separate.) What you might not realize is that we also use the form’s action URL (i.e., where the form is submitted to), ever since bug 360493. While this prevented sending your myspace.com password to evilhacker.com, it also breaks things when a site uses slightly different URLs in various login pages, or later changes where their login forms submit to.
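Conceptually, the matching looks something like the sketch below. It’s simplified (the real logic also deals with HTTP auth realms, blank action URLs, and other wrinkles), but hostname and formSubmitURL are the two origin-like values stored with each saved login:

    // Both the page origin and the form-action origin must match what was saved.
    function getOrigin(uriString) {
      let uri = new URL(uriString);
      return uri.protocol + "//" + uri.host;   // scheme + host (+ port), no path
    }

    function loginMatchesForm(savedLogin, doc, form) {
      return savedLogin.hostname == getOrigin(doc.location.href) &&
             savedLogin.formSubmitURL == getOrigin(form.action);
    }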

Oh, bother.

All the gory details

This is one of the benefits of being Open Source. If you want to see alllll the gory details of how the Firefox password manager works, you can look at the source code. See http://mxr.mozilla.org/mozilla-central/source/toolkit/components/passwordmgr/. In particular, LoginManagerContent.jsm contains the code implementing the stuff discussed in this post, with the main entry points being onContentLoaded() and onFormSubmit().

Finally (!), I’ll mention that the Firefox password manager has some built in logging to help with debugging problems. If you find it not working in some particular case, you can enable the logging and it will often make the problem clear — or at least clearer to a developer when you file a bug!

Posted in Firefox, PlanetFirefox, PlanetMozilla | 7 Comments

IRC notifications

I currently use IRSSI, a traditional text-based client for IRC. I’ve used various GUI clients in the past, but found them all to have bugs and quirks that drove me crazy. IRSSI’s been very good to me, but it’s lacking one major feature that every GUI client has — notifications. And so unless I actually *look* at my IRC terminal, I can’t tell if someone’s trying to get my attention. To make things even worse, I run IRSSI under screen on a Mozilla server (so it’s always online even if my laptop isn’t). That makes it tricky to get a notification on my local system when the trigger is on a remote system.

Back in 2010 Justin Dow blogged about one way to get this working. It’s a little complicated… You first configure IRSSI to log notifications to a text file, then run one normal interactive SSH connection to screen+IRSSI, and another SSH connection to pipe the IRSSI log back to a local script that feeds it into Growl. I didn’t want to do that.

I tried using iTerm 2 for a while. Among its many bells and whistles is the ability to do actions (like notifications) upon a regex match. Neat little feature, and was easy to get working for someone saying “dolske” in whatever channel I had active. But background mentions don’t send any obvious string, so that was a huge limitation. (I tried regexing for the ANSI control codes that hilight an IRSSI window number, but that didn’t work reliably.)

I recently read that as of OS X 10.7, the default Terminal.app includes a simple feature — whenever it receives a bell character, it will bounce the Dock icon and annotate the icon with a number. Perfect! I got this all working, so here’s what you can do:

1. Configure Terminal.app. Actually, you don’t need to. This works automatically. But Preferences –> Settings –> Advanced –> Audible Bell may be something you want to disable if you dislike terminals making noise. Or enable it for debugging this.

2. Configure IRSSI to beep on private messages and hilights: /set beep_msg_level MSGS HILIGHT

3. Configure Screen to allow passing through beeps. You can put “vbell off” in ~/.screenrc and/or toggle it interactively via Control-A, Control-G. An easy test for this is to enable Audible Bell (see step 1) and use IRSSI’s /beep command. If you hear a sound, you’re good to go.

Note that IRSSI also has a “/set bell_beeps on|off” toggle. It’s not needed for the above; it seems to be for filtering out bell characters from channel messages, but the docs are somewhat vague (and a quick skim of the source wasn’t any more enlightening).
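For reference, the remote-side setup boils down to a line in each place (Terminal.app itself needs no configuration):

    # ~/.screenrc: pass bells through instead of converting them to a visual flash
    vbell off

    # In IRSSI (then /save to make it stick):
    /set beep_msg_level MSGS HILIGHT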

Posted in PlanetMozilla | 3 Comments

Ten year retrospective

Think back to 10 years ago. February 1st, 2003, around 9am EST. Some of you will have this date burned into your memory, but many will not. Do you remember where you were, when you first heard?

I do, and won’t ever forget.

I woke up late that morning. I’d meant to watch the landing live, but getting up early isn’t exactly my favorite thing to do. To be honest, watching a routine touchdown isn’t terribly exciting. Intellectually interesting — oh my yes — but not as primally exciting as the thrill and danger of a launch. Sometimes the cable TV networks would even break away from the usual news drivel to show the first minute or two… The countdown and ignition still caught the public’s attention. The rest of the mission and landing? Not so much.

I loved every minute of it.

NASA’s a pretty easy thing for a geek like me to fall in love with (see #5). It wasn’t always easy to keep up with, though. The online streams often got swamped during a Shuttle mission (only a lucky few got NASA TV on their local cable station). If you knew about them, the sci.space.shuttle and sci.space.history groups on Usenet were great for technical tidbits, and even had an early cryptic hint that there might be something wrong with STS-107.

That morning it became tragically obvious as to just how wrong things were.

I slept through Columbia and her seven crew burning up on reentry. I think I’m glad I missed the immediate confusion over what was happening. But instead I got it all in one lump, when I finally started my day by reading the news on Fark.com. It took a moment to sink in — Fark was a unique blend of news and humor, but the “News Flash!” headline I saw was unusually direct and blunt:

Space shuttle Columbia explodes on re-entry. All on board killed

Oh… no. No.

Things moved quickly from there. The rough outline of what likely happened was publicly known within days (instead of weeks, as NASA was significantly more open than they were in the Challenger era). An accident investigation board was formed, and held regular sessions to update the public on their progress. A report was generated, leading to a NASA Shuttle program that was reformed but also put on a path to shutdown.

If you’re interested in a technical, engineering and systems view of what happened, there is no better resource than the final report from the Columbia Accident Investigation Board. It’s long but also highly readable. A large one-part PDF of volume 1 is available here.

Another retrospective on the Columbia accident can be found on former flight-director Wayne Hale’s blog. I watched it unfold on the Internet, but he was there.

The 1941 poem High Flight is a surprisingly prescient tribute:

Oh! I have slipped the surly bonds of Earth
And danced the skies on laughter-silvered wings;
Sunward I’ve climbed, and joined the tumbling mirth
of sun-split clouds, — and done a hundred things
You have not dreamed of — wheeled and soared and swung
High in the sunlit silence. Hov’ring there,
I’ve chased the shouting wind along, and flung
My eager craft through footless halls of air….

Up, up the long, delirious, burning blue
I’ve topped the wind-swept heights with easy grace.
Where never lark, or even eagle flew —
And, while with silent, lifting mind I’ve trod
The high untrespassed sanctity of space,
– Put out my hand, and touched the face of God.

Posted in PlanetMozilla, Technology | Comments Off

Font politics

There’s a nice blog post up today talking about the typefaces used in the current presidential campaign. (Along with a link to an interesting study about their impact on perception.)

I couldn’t resist making the following:

(References: [1], [2], [3], [4] ;-)

Posted in PlanetMozilla | Comments Off

Demo: Image enhancement with getUserMedia

Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

It seems demos of the new WebRTC getUserMedia() are all the rage these days. The bug’s bitten me too, so I took Tim’s green-screen demo and hacked it up to my own ends…

A common technique in photography — especially astrophotography — is “image stacking.” The stuff you DON’T want in your image is transient and random noise, whereas the scene you DO want in your image can be reliably and repeatedly captured. So, the basic idea is to take a bunch of photos, changing as little as possible, and then use image processing to combine/average (“stack”) them together. Once you start thinking about capturing photons this way, it’s possible to capture images that far exceed what one would normally expect.

I’ve implemented a simple-and-dumb version of image stacking using HTML5’s getUserMedia() and canvas. Let me illustrate with some pictures.

First I pointed my super-cheap USB webcam at a thing on my desk — which was dimly lit and quite stationary. Here’s a typical frame of captured video:

Doesn’t look very good; it’s a typical poor-quality webcam image. Next, using my little hack, I then captured 50 frames at 640×480 and averaged them together:

Yum. Much better. The image is overall much cleaner; the random-color “static” is suppressed in favor of flat colors and smooth gradients. (A little too much so, making it look cartoonish. I’m not sure if this is because of my cheap camera, a dumb algorithm, or something else.) But this image isn’t just smoother — it’s also sharper. Lines and edges are now crisp instead of blurry and mottled. Text labels that were barely readable before are now easily readable. This is particularly evident in the tiny “◃SCALE▹” and “◃POSITION▹” labels just above them.

The differences are even more obvious if you boost up the brightness of the above images in Photoshop. The already-bright areas are now washed out, but detail in the darker areas is easier to see. In particular, the area from the column of dark buttons to the lower-left corner of the pic is much more detailed than in the single-frame capture:


Now, just point your space telescope at a dark patch of the sky, use similar techniques to stack up 23 days’ worth of exposure, and you get this. Neat.

If you’d like to play around with this demo yourself, try it out in your browser here. I’m curious if people can find improvements to the averaging (in either speed or quality).
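If you just want the gist without digging through the demo, the core is only a couple of loops. Here’s a minimal sketch using the promise-based getUserMedia API (the original demo used the vendor-prefixed callback form of the day); the <video> and <canvas> lookups are assumptions about the page:

    const FRAME_COUNT = 50, WIDTH = 640, HEIGHT = 480;

    // Average FRAME_COUNT frames of <video> into <canvas>.
    async function stackFrames(video, canvas) {
      canvas.width = WIDTH;
      canvas.height = HEIGHT;
      const ctx = canvas.getContext("2d");
      const sums = new Float32Array(WIDTH * HEIGHT * 4);   // RGBA accumulators

      for (let i = 0; i < FRAME_COUNT; i++) {
        ctx.drawImage(video, 0, 0, WIDTH, HEIGHT);
        const frame = ctx.getImageData(0, 0, WIDTH, HEIGHT).data;
        for (let j = 0; j < frame.length; j++)
          sums[j] += frame[j];
        await new Promise(r => requestAnimationFrame(r));   // wait for the next frame
      }

      // Write the per-pixel average back out; random frame-to-frame noise
      // averages away, while the static scene reinforces itself.
      const out = ctx.createImageData(WIDTH, HEIGHT);
      for (let j = 0; j < sums.length; j++)
        out.data[j] = sums[j] / FRAME_COUNT;
      ctx.putImageData(out, 0, 0);
    }

    navigator.mediaDevices.getUserMedia({ video: { width: WIDTH, height: HEIGHT } })
      .then(stream => {
        const video = document.querySelector("video");
        video.srcObject = stream;
        video.onloadedmetadata = () => {
          video.play();
          stackFrames(video, document.querySelector("canvas"));
        };
      });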

Posted in PlanetMozilla | 5 Comments

Unicorns and Mozilla

Let’s file this under “reasons I love working at Mozilla”. Because, well, that’s true and I can’t think of a better place for it!

Months ago (months!), I ran across an amazing, amazing recipe for “Unicorn Poop Cookies”. I’ve had this tab open in my browser since then, and it was time to close it out. So I made some cookies. Oh, yes. And who better to share them with than my awesome coworkers?!

Here’s the recipe I used…

  • 1 cup butter
  • 0.75 cup sugar
  • 1 egg
  • 0.5 teaspoon vanilla extract
  • 0.25 teaspoon lemon extract
  • 2.5 cups flour (bleached — I know — to show color better)
  • 1 teaspoon baking powder
  • 0.25 teaspoon salt
  1. Cream butter and sugar. Add liquids, mix well. Slowly combine in remaining dry ingredients.
  2. Let dough chill. Then divide into portions, add coloring (use gel food coloring for bright colors)
  3. Chill again. Form dough into ropes. Splice in various colors, twist. Spiral into cookie-sized portions. Chill yet again.
  4. Make an egg wash (for shininess and to adhere toppings) with 1 egg + splash of water. Brush on. Sprinkle with decorations. “Disco Dust” is great for that added special sparkle.
  5. Bake at 375°F for no more than 8 minutes. Watch carefully, you want the dough to _just_ set, not become brown and crispy. These will be soft, almost underbaked, cookies about the consistency of PlayDoh.
  6. Once cooled, feed everything to a unicorn and wait for nature to Do Its Magic™.
  7. Serve with an assortment of marshmallows and other omgfluffy things. Such as more unicorns.

TBH, I’ll probably do it differently next time. I think the cookies turned out fine, but I’d rather use almond flavoring instead of lemon. The lemon extract I used gave a nice subtle(?) “tang” upfront, but also gave a distinct lemon aftertaste that lingered a bit too long. I may also be slightly biased for anything almond-flavored. Hmm, maybe an almond paste center would be interesting.

Special thanks to Jonathan Wilde for letting me borrow his Unicorn for this process! :D

Posted in PlanetFirefox, PlanetMozilla | 1 Comment

A transit of Venus

(or how I learned to stop worrying and love the Sun)

This is going to be a longish post, the ravings of an obsessive-compulsive mind. So here’s the tl;dr… I took a picture using a telescope (indirectly). It’s Venus transiting across the Sun. It took some frantic last-minute work, but it was worth it because it won’t happen again for 105 years. Here is my picture:

The black dot is Venus. The whitish circle is the Sun. Behold, the glory of orbital mechanics!

OK, most of you can safely skip the rest of this post. Geek stuff follows…

The story begins 16 days ago. The annular eclipse of May 20th 2012. These are relatively infrequent (every 2 or 3 years), but what was special about this event was that the optimal centerline-path of the eclipse was only about a 3-hour drive away. So I packed up my gear and hit the road. Yes, I also took a picture then:

I learned a few things from this event. The first was that — damn — these things sneak up on you fast. I’m a science/astronomy geek, and knew this thing was going to happen, but it wasn’t until a few days prior to the event that some random news article reminded me that it was going to happen omgnow. So I had to scramble. The second lesson was that it really helps to do a few dry-runs for your observation. I was prepared with a pinhole projector and pinhead mirror for observing the eclipse, but I built (!) both of them the day of the eclipse and was in a rush. It turned out ok, but I glitched the math on my pinhead mirror focal length and so the image was more blurry than I had been hoping for. No big deal for the eclipse, though, since most of the experience was more atmospheric (pardon the pun)… The relaxing drive, the slowly darkening skies, and the unexpected sights along the way. The saying that “the journey is the reward” was spot-on.
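For reference, the arithmetic I should have double-checked ahead of time is simple: the Sun’s angular diameter is about half a degree, so a pinhole (or pinhead mirror) projects a solar image whose diameter is roughly the projection distance divided by a hundred or so, and the hole needs to be small compared to that image for things to look sharp.

    image diameter ≈ distance × tan(0.53°) ≈ distance / 108

(For example, at 5 meters the image is only about 4.5 cm across, so even a few-millimeter pinhead blurs it noticeably.)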

Taking these lessons to heart, I decided to be more prepared for the then-upcoming Transit of Venus. After much armchair research, I figured that the absolute best viewing experience would be with a specialty telescope fitted with an H-alpha filter. The views are astounding, as is the price — it jumps into the $thousands before you can even blink.

So after careful deliberation (especially given the once-in-a-lifetime opportunity), I did what had to be done. Yep. I got a $10 #14 welder’s filter and a $5 pair of eclipse glasses. (Looking at others’ expensive pictures? Priceless.) I do have a decent conventional telescope, but it’s a pain to lug around and I was thinking I wouldn’t really need it.

But I found that the eclipse glasses seemed useless for a transit — the naked-eye diameter of the Sun is surprisingly small, and the tiny dot that is Venus would be challenging to see. I also did a test shot of the Sun with my camera through the #14 filter, and scaled a shot of the 2004 Venus Transit to that size to create a mockup of what I might be able to capture on Transit Day:

Unimpressive. Venus would be about 4 pixels — at best. I’d get better results with a longer zoom lens on my camera (Lumix GF3, 20mm); but I tend to prefer wide-angle shots, thus spending $hundreds for a lens I might not use very often was unappealing. And so I basically left it at that, figuring whatever I might manage to record would be just fine as a token memento. After all, NASA and professional astronomers would capture the event in stunning detail, so why bother trying, right?

Timeline: Tuesday June 5th. 10:30am.

I started off my morning with a weekly meeting with my boss, Johnath. I don’t recall exactly how we got to the topic of astrophotography and Venus, but we did. And something (read: Johnath’s nefarious plan) sparked my brain into thinking that surely it couldn’t be that hard to do something with my telescope for the transit. And from that point on I knew my day would have one, intensive, singular focus.

The ideal way to view the Sun through a telescope is with a special H-alpha filter (expensive, as noted) or much cheaper general solar filter. Both basically block 99-point-lots-of-9s of the light entering the telescope, making it safe to look through directly with your eye. Alas, I had neither of these filters. (See above, re: prepare and practice.) Instead I fell back to an idea I had heard about, which involves physically blocking most of the telescope’s aperture except for a small opening. This would reduce, in my case, the light-gathering power of an 8″-diameter Schmidt–Cassegrain ‘scope — which is normally great for viewing dim galaxies and faint nebulae — to a much smaller opening. Say, a half-inch to 2 inches.
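To put numbers on it: light gathered scales with the square of the aperture, so masking an 8-inch scope down to a 1-inch opening passes only (1/8)² ≈ 1.6% of the light, about six stops less, which is exactly what you want when the target is the Sun.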

First step — measure twice, cut once. I was very precisely cutting a form from foam board (say that four times fast), because I wanted to ensure that it fit snugly and didn’t leak light. A full-aperture telescope pointed directly at the Sun has an overwhelming tendency to do things like cause blindness, burn things, and damage delicate optics. That would be bad. On top of the first foamboard layer would be a sheet of foil (because foamboard isn’t very opaque to light), and then a second layer of foamboard to help protect the foil and sit flush with the outer lip of the telescope body. A little bit of gray fleece on the back and flat-black paint helped cut down on reflections. Ta-da, done.

I also built a holder for the #14 welding glass I have, but didn’t end up using it. It’s too dark to use for projection, and viewing through an eyepiece just didn’t seem to work as well as projecting. I also ended up entirely skipping the creation of some kludgey adapter to attach my camera to the telescope (I have a camera fitting, but it’s for the ancient Pentax-K mount).

Finally, after all that, I set up everything to see if it worked. I got the sun projecting onto a small piece of foamboard, and my first thought was to wonder what I had screwed up to get such a big black dot in the image. Oh, wait. Right. That’s Venus. I wasn’t watching the time, and the transit was already underway. Hooray, it works!

After this I couldn’t directly see the sun from my patio any more, so I headed off to a park near work with an open view of the sky. Matt and Jared helped out with moving equipment and holding the projection surface (again, foamboard) while I fiddled with the telescope. This is how we were set up for capturing the photo at the top of this post:

Afterwards, just for fun I took an image from NASA’s SDO satellite and scaled it down to compare with my own contrast-stretched shot. I will be the first to admit their $850 million toy captures a far superior image, but I’m still pleased with my own result. (To be clear, mine is on the left. :)

And that’s how I spent most of Tuesday.

Posted in PlanetMozilla | 8 Comments