GPUs sample pixels in a perhaps non-intuitive way. With multi-sample anti-aliasing (MSAA), a pixel covered by only one triangle is sampled for coverage several times but shaded only once, since shading is the most expensive part of the GPU pipeline and texture lookups are already appropriately filtered. The shaded colour is then attenuated by the coverage value. Shading only happens multiple times per pixel when multiple triangles cover that pixel; in that case, the GPU shades each triangle's fragment separately.
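To make the sample-many, shade-once idea concrete, here is a toy sketch of resolving a single pixel: coverage is tested at several sample positions, but the (expensive) shading function runs only once. The function names, sample pattern, and pixel layout are all illustrative, not any real GPU API.

```python
# Hypothetical sketch of MSAA resolve for one pixel: many coverage samples,
# one shading evaluation, final colour attenuated by the coverage fraction.

def point_in_triangle(p, a, b, c):
    # Sign of the edge cross products tells which side of each edge p lies on;
    # p is inside when all signs agree.
    def edge(u, v, q):
        return (v[0] - u[0]) * (q[1] - u[1]) - (v[1] - u[1]) * (q[0] - u[0])
    d1, d2, d3 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# Four sample offsets within the pixel (an illustrative 4x MSAA pattern).
SAMPLES = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def resolve_pixel(px, py, tri, shade, background):
    # Coverage: test every sample point against the triangle.
    covered = sum(point_in_triangle((px + sx, py + sy), *tri) for sx, sy in SAMPLES)
    coverage = covered / len(SAMPLES)
    # Shading: evaluated only ONCE, at the pixel centre.
    colour = shade(px + 0.5, py + 0.5)
    # Blend the shaded colour with the background by the coverage fraction.
    return tuple(c * coverage + b * (1 - coverage)
                 for c, b in zip(colour, background))
```

With a triangle that fully covers the pixel, all samples pass and the pixel gets the full shaded colour; with no samples covered, the background survives untouched, even though the sketch still calls `shade` once.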
In [1], the authors propose adding a step to the GPU pipeline that merges fragments from adjacent triangles sharing an edge. This makes it possible to shade such pixels only once, reducing the amount of shading work required.
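The merging idea can be sketched in a few lines. This is a simplified model, not the paper's actual pipeline: fragments from triangles of the same mesh that land on the same pixel quad are merged by OR-ing their coverage masks, so the shader runs once per merged fragment instead of once per triangle. All identifiers here are hypothetical.

```python
# Toy sketch of quad-fragment merging: fragments that share a (quad, mesh)
# key are collapsed into one, combining their coverage masks.

def merge_quad_fragments(fragments):
    """fragments: list of (quad_id, mesh_id, coverage_mask) tuples.
    Returns one fragment per (quad_id, mesh_id) with OR'd coverage."""
    merged = {}
    for quad_id, mesh_id, mask in fragments:
        key = (quad_id, mesh_id)
        merged[key] = merged.get(key, 0) | mask
    return [(q, m, mask) for (q, m), mask in merged.items()]

# Two triangles of one mesh split a quad down the middle: unmerged, the
# shader would run twice for this quad; merged, it runs once.
frags = [((3, 7), 0, 0b0011), ((3, 7), 0, 0b1100)]
print(merge_quad_fragments(frags))  # [((3, 7), 0, 15)]
```

Fragments from different meshes keep separate keys and are never merged, which mirrors the constraint that only adjacent triangles of the same surface should share a shading result.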
[2] details a new technique for generating 2D cartoons that exhibit some of the behaviour of 3D models while maintaining a simple 2D look. It amounts to separating the different strokes of a drawing into separate layers, each of which can move and be occluded as the view rotates around the character. The key point is that each part of the character is a "billboard" that always faces the viewer: it can be occluded by other parts, but you can't look behind it. Further, once you define what, say, your character looks like from the front and from the side (perhaps his nose changes shape, and one of his ears is hidden, but his mouth probably looks the same), the system lets you rotate between those two views automatically by interpolating between the drawings. And since the system knows the relative ordering of the character's parts, you can even rotate all the way around the character, with each part disappearing as it's occluded by the body.
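The in-between views above can be illustrated with a minimal sketch: each part stores its 2D anchor position at a few key viewing angles, and intermediate views linearly interpolate between the nearest two keys. The function and data layout are assumptions for illustration, not the paper's actual representation.

```python
# Illustrative sketch: a billboard part's 2D position is keyed by viewing
# angle; views between two keys are linearly interpolated.

def interpolate_anchor(keyframes, angle):
    """keyframes: list of (angle_degrees, (x, y)) pairs.
    Returns the part's 2D position at the requested viewing angle."""
    keys = sorted(keyframes)
    for (a0, p0), (a1, p1) in zip(keys, keys[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)
            return (p0[0] + t * (p1[0] - p0[0]),
                    p0[1] + t * (p1[1] - p0[1]))
    # Outside the keyed range, clamp to the nearest key view.
    return keys[0][1] if angle < keys[0][0] else keys[-1][1]

# A nose drawn at the front view (0 degrees) and the side view (90 degrees);
# the 45-degree view falls halfway between the two drawings.
nose = [(0, (0.0, 0.5)), (90, (0.4, 0.5))]
print(interpolate_anchor(nose, 45))  # (0.2, 0.5)
```

In the real system the interpolation covers the drawn shapes as well as positions, and the stored depth ordering decides when a part vanishes behind the body.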
[3] is a great improvement over the libaa of old, though it relies on a method of vectorizing images into outlines that isn't detailed. The authors overlay the vectorized image with a grid (sized to the ASCII art image you want to generate), then match the line segments in each grid cell against the known shapes of the glyphs in their font. Because these matches are only approximate, they then perturb the lines (in a carefully controlled way) to find a better fit. Iterating on this produces some impressive results that exceed the ability of ASCII artists to reproduce images, though a small majority of viewers still preferred the artists' results for overall look.
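The per-cell matching step can be sketched with a naive similarity metric: pick the glyph whose bitmap disagrees with the cell's contents in the fewest pixels. The paper uses a structure-aware shape metric rather than this pixel mismatch count, and the tiny 3x3 "font" below is purely illustrative.

```python
# Simplified sketch of matching one grid cell against a glyph set.

GLYPHS = {  # toy 3x3 glyph bitmaps, illustrative only
    "/":  [(0, 0, 1), (0, 1, 0), (1, 0, 0)],
    "\\": [(1, 0, 0), (0, 1, 0), (0, 0, 1)],
    "-":  [(0, 0, 0), (1, 1, 1), (0, 0, 0)],
    "|":  [(0, 1, 0), (0, 1, 0), (0, 1, 0)],
    " ":  [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
}

def best_glyph(cell):
    """Pick the glyph whose bitmap disagrees with the cell in the fewest pixels."""
    def mismatch(glyph):
        return sum(g != c
                   for grow, crow in zip(glyph, cell)
                   for g, c in zip(grow, crow))
    return min(GLYPHS, key=lambda ch: mismatch(GLYPHS[ch]))

# A cell that is mostly a rising diagonal matches "/" despite one stray pixel.
print(best_glyph([(0, 1, 1), (0, 1, 0), (1, 0, 0)]))  # prints /
```

The perturbation step described above would then nudge the cell's line segments toward the chosen glyph's shape and re-run the match, iterating until the fit stops improving.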
1. Fatahalian, K., Boulos, S., Hegarty, J., Akeley, K., Mark, W., Moreton, H., Hanrahan, P. 2010. Reducing Shading on GPUs using Quad-Fragment Merging. ACM Trans. Graph. 29, 4, Article 67 (July 2010), 8 pages. DOI = 10.1145/1778765.1778804 http://doi.acm.org/10.1145/1778765.1778804.
2. Rivers, A., Igarashi, T., Durand, F. 2010. 2.5D Cartoon Models. ACM Trans. Graph. 29, 4, Article 59 (July 2010), 7 pages. DOI = 10.1145/1778765.1778796 http://doi.acm.org/10.1145/1778765.1778796.
3. Xu, X., Zhang, L., Wong, T. 2010. Structure-based ASCII Art. ACM Trans. Graph. 29, 4, Article 52 (July 2010), 9 pages. DOI = 10.1145/1778765.1778789 http://doi.acm.org/10.1145/1778765.1778789.