User interfaces for tweakable settings
For those unfamiliar with the field, tweaking a computer graphics rendering often involves playing with dozens of values, some with obvious meanings (colour), some less obvious (randomness). Lots of people have spent lots of time trying to refine these interfaces, and today at SIGGRAPH 2010 a paper was presented reporting user studies of a subset of these parameter-tweaking approaches.
In this study, three types of interfaces were evaluated: two that amounted to tweaking numeric values using sliders, and one that involved visually searching for the type of effect you’re looking for. The study had users try to match given outputs using all three methods, as well as create an entirely new rendering to fit in with an existing scene. The participants were all novices: they’d never done any rendering before the study.
Everyone involved followed the same pattern: playing with the controls to get a handle on what each one does, then “blocking out” the values (getting them in the neighbourhood of correct) and moving on to the next set. Then, once each of the values was roughly correct, they went back and tweaked the rendering by smaller and smaller amounts until they converged on the correct output.
Interestingly, though, users found the two slider interfaces about equally easy, and much easier than the visual search (the time taken to reach the correct values agreed with them). In many cases, users artificially constrained their visual search to a slider-like handful of results in order to make the search easier. The visual search was simply too cumbersome: while blocking out values was about as easy, tweaking them was much more difficult.
However, precisely the opposite held when the task was to create something new to fit in with an existing scene. While the visual search was still just as difficult to tweak, its less constrained nature made it much easier for users to find a starting point they liked, and in the end they were happier with their results than with the slider-based approach.
What’s the take-away message? In my mind, it’s that when you need to make small changes, iterating towards a goal, a highly granular, easily tweakable interface is of utmost importance; but when you’re just starting to create something, a more visceral, less controllable interface gives users a good starting point. Ideally, you’d provide a hybrid of the two, letting users define their direction in broad strokes and then tweak it quickly with more detailed controls.
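As a toy sketch of that hybrid workflow (my own, not from the paper), imagine parameters as a numeric vector and “visual distance to the target look” as an error function. A broad-strokes pass scatters candidates and keeps the best, standing in for visual search; a fine-tuning pass nudges one parameter at a time, standing in for sliders. All names here are hypothetical:

```python
import random

def render_error(params, target):
    """Stand-in for visually comparing a rendering to the target:
    here just squared distance between parameter vectors."""
    return sum((p - t) ** 2 for p, t in zip(params, target))

def visual_search(target, n_candidates=50, rng=random):
    """Broad-strokes pass: present scattered candidates, keep the closest."""
    candidates = [[rng.uniform(0, 1) for _ in target]
                  for _ in range(n_candidates)]
    return min(candidates, key=lambda c: render_error(c, target))

def slider_tweak(params, target, step=0.05, rounds=100):
    """Fine-tuning pass: nudge one parameter at a time, keep improvements."""
    best = list(params)
    for _ in range(rounds):
        for i in range(len(best)):
            for delta in (-step, step):
                trial = list(best)
                trial[i] += delta
                if render_error(trial, target) < render_error(best, target):
                    best = trial
    return best
```

Running `slider_tweak(visual_search(target), target)` mirrors the recommendation: the search pass supplies a rough starting point, and the slider pass converges on the target; since the tweak pass only keeps improvements, it can never end up worse than where the search left it.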
Computer Graphics in history
I went to a presentation given by Richard Chuang (formerly of PDI) and Ed Catmull (of Pixar, and CG lore). It focused on a course that Catmull and Jim Blinn taught at Berkeley in 1980, which Chuang audited via a microwave link to his workplace at HP. The course covered a lot of the history of computer graphics, and it proved transformational for Chuang; a year later he helped found PDI, which was later bought by DreamWorks.
The most important thing I took from this panel was that you should always start with the hardest part of your project, because the choices you make there will inform the easier parts. The specific example was the choice many early implementers of hidden-surface algorithms (occlusion, depth buffers, etc.) made to ignore anti-aliasing, figuring it would be a simple extension of their work. As it turns out, anti-aliasing is hard, and it’s made even harder if your hidden-surface algorithm throws away data you’d need to anti-alias, like exactly where your polygons fall within each pixel.
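To make that concrete with a toy illustration of my own (not from the talk): a pixel crossed by a polygon edge is only partially covered, but a depth buffer that tests one sample per pixel reduces that to all-or-nothing, discarding exactly the fractional coverage anti-aliasing needs. The polygon here is a hypothetical half-plane whose edge cuts diagonally through pixel (0, 0):

```python
def covered(x, y):
    """Hypothetical polygon: the half-plane x + y < 1, whose edge
    cuts diagonally through the pixel at (0, 0)."""
    return x + y < 1.0

def single_sample_coverage(px, py):
    """What a one-sample-per-pixel depth test records: 0.0 or 1.0,
    based solely on the pixel centre."""
    return 1.0 if covered(px + 0.5, py + 0.5) else 0.0

def supersampled_coverage(px, py, n=16):
    """Fractional coverage, estimated from an n x n grid of samples
    inside the pixel -- the information anti-aliasing needs."""
    hits = sum(covered(px + (i + 0.5) / n, py + (j + 0.5) / n)
               for i in range(n)
               for j in range(n))
    return hits / (n * n)
```

Pixel (0, 0) is roughly half covered by the edge, so an anti-aliased renderer would blend roughly half the polygon’s colour into it; the single-sample test can only answer “in” or “out”, and once only that bit is kept, the sub-pixel geometry needed to do better is gone.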