Dither me this


For some reason, the Internet got all excited about dithering a few months ago, linking primarily to this article about eleven dithering algorithms. (Here's another article about dithering.)

Which reminded me of a dithering story.

The folks at Microsoft Research, when not working on their time machine, occasionally produce code for product teams. Some time ago, the Internet Explorer team received some code from Research that implemented an improved dithering algorithm.

One of my colleagues reviewed the code, and noticed that, among other things, the algorithm rejected the conventional wisdom mentioned in Tanner Helland's article:

Typically, when we finish processing a line of the image, we discard the error value we've been tracking and start over again at an error of "0" with the next line of the image.

Rather than throwing away the accumulated error, the algorithm distributed the error from the final pixel of each row into the next scan row, so that it wouldn't be lost.¹ It also had a separate algorithm for the last row of the bitmap, which pushed all the error to the right, since there is no row below.

I have no idea what it did with the error from the final pixel. Maybe it uploaded it to a server where all the lost errors are archived.

My colleague asked why the algorithm was so meticulous about not throwing away accumulated error, at a cost of significant algorithmic complexity. The developer explained, "Because it's correct. We want to minimize the error, and if you throw it away at the end of the row, then you never get to compensate for it in the next row. The total error in the image is higher as a result."

This is a case of getting so engrossed by the mathematical perfection of your algorithm that you lose sight of the original problem.

The goal of dithering is to improve the appearance of an image when it is quantized, that is, when it is displayed with a lower color resolution than the original image. The principle behind Floyd-Steinberg dithering is to keep track of how far your quantized color is from the ideal color and to distribute that error into future nearby pixels, in order to give those pixels a chance to compensate for the quantization error.
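To make the technique concrete, here's a minimal sketch of classic Floyd-Steinberg dithering, assuming an 8-bit grayscale image quantized down to black and white. It follows the conventional rule quoted above: error that would diffuse past the edge of the image is simply dropped. (This is an illustration, not the code from the story.)

```cpp
#include <cstdint>
#include <vector>

// Dither an 8-bit grayscale image (row-major, width * height bytes)
// down to two levels, 0 and 255, in place.
void FloydSteinbergDither(std::vector<uint8_t>& pixels, int width, int height)
{
    // Work in a signed copy so that accumulated error can temporarily
    // push values outside the 0..255 range.
    std::vector<int> work(pixels.begin(), pixels.end());

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int original  = work[y * width + x];
            int quantized = original < 128 ? 0 : 255;
            pixels[y * width + x] = static_cast<uint8_t>(quantized);

            int error = original - quantized;

            // Distribute the error to neighbors that haven't been
            // processed yet; error that would land outside the image
            // is simply discarded.
            auto spread = [&](int dx, int dy, int weight) {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < width && ny < height)
                    work[ny * width + nx] += error * weight / 16;
            };
            spread(+1, 0, 7);   // right
            spread(-1, 1, 3);   // below-left
            spread( 0, 1, 5);   // below
            spread(+1, 1, 1);   // below-right
        }
    }
}
```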

But minimizing error is not the goal of dithering. It is merely a technique. The goal of dithering is to make an image that looks good.

My colleague asked, "What if the image is clipped due to layout, or because the user scrolls the page so that the image is only half-visible? Do you go back and re-dither the visible portion of the image?"

The developer admitted that the image is dithered only once. If the image is clipped, the clipping is applied to the dithered version.

"Well, then, when the image gets clipped, what happens to all the error in the clipped-out portion? Oh, wait, I'll tell you: It's thrown away."

As an extreme case of this, consider dragging another window so it partially covers the dithered image. Does the image re-dither to account for the fact that some of its error is now covered by another window? No, the dithered image is just the dithered image, and it gets clipped and occluded just like any other image.

So there's really no point in being so anal-retentive about the error in the last column and last row of the image. First of all, it creates a discontinuity in the algorithm, so that the last column and last row of the image get dithered differently from the other pixels, which may result in a "band" at the edge if there is a large area of uniform color.

Besides, that error can get lost to occlusion or clipping. And don't try to "repair" the image when that happens. Not only is it computationally expensive, but it would result in the somewhat disturbing effect that moving a window over the image causes the image to develop a constantly changing "fringe" as the last visible row and column are recalculated over and over.

Getting rid of the special cases for the last row and column simplified the algorithm. What's more, in an era when CPUs had limited on-chip cache (measured in single-digit kilobytes), it meant that the code and the look-ahead row could fit into cache, leaving all the memory bandwidth available for accessing the actual bitmap.
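For what it's worth, here is a sketch of what that "look-ahead row" arrangement might look like: the only per-row state is a pair of small error buffers, which is why the hot data fits in a few kilobytes of cache. Again, this is my own illustration under those assumptions, not the actual Internet Explorer code.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Same Floyd-Steinberg distribution as above, but the only extra storage
// is the current and next rows of accumulated error. The one-cell padding
// at each end silently absorbs error pushed past the left or right edge.
void FloydSteinbergRowBuffer(std::vector<uint8_t>& pixels, int width, int height)
{
    std::vector<int> currentError(width + 2, 0);
    std::vector<int> nextError(width + 2, 0);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int original  = pixels[y * width + x] + currentError[x + 1];
            int quantized = original < 128 ? 0 : 255;
            pixels[y * width + x] = static_cast<uint8_t>(quantized);

            int error = original - quantized;
            currentError[x + 2] += error * 7 / 16;  // right
            nextError[x]        += error * 3 / 16;  // below-left
            nextError[x + 1]    += error * 5 / 16;  // below
            nextError[x + 2]    += error * 1 / 16;  // below-right
        }
        // Whatever landed in the padding cells, and whatever is left in
        // nextError after the final row, is thrown away -- the "lost error"
        // the story is about.
        std::swap(currentError, nextError);
        std::fill(nextError.begin(), nextError.end(), 0);
    }
}
```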

This was a soft real-time algorithm, because the user is waiting for the image to appear on the screen, so improvements in speed were far more valuable than barely perceptible improvements in visual quality.

¹ I forget how large the error diffusion matrix was exactly. Didn't realize there was going to be a quiz over 20 years later. Let's assume it was the classic Floyd-Steinberg dither, which pushes error only one pixel out. (Although I seem to recall it had a larger diffusion matrix, because there was some odd/even weirdness.)

Comments (17)
  1. pc says:

    I've worked on some computers that were clearly where all the leftover error had traveled to.

    1. Ray Koopa says:

      This was probably the computer that defined the last node of the internet.

  2. Adrian says:

    "the Internet got all excited about dithering"

    You might say they were all in a dither. ;-)

  3. And yet, no matter how much work they put into their dithering algorithm, it was still fundamentally wrong because they never accounted for the non-linear nature of sRGB. Even today, modern browsers are still incapable of so much as resizing an image correctly. Seriously, modern graphics APIs have stuff built in for this and GPUs have lookup tables so it is super cheap to account for sRGB, and yet so much software disregards it.

    1. Josh B says:

      Like replacing JPEG with any better format from the last 20 years, the collective "Meh" from everyone, even developers, stops progress once there's some standard. The pain of sticking with the old has to outweigh inertia; if it hadn't been for Unisys PNG wouldn't exist or only as a niche format too. True for pretty much every field of human endeavor....

      1. Douglas says:

        My favorite one is how we can't get an image format that:
        1) Supports animation
        2) Supports more than 256 colors
        3) Is supported by more than one (Blink-based Opera counts as Chrome here) browser right out of the box

        Actually, APNG is supported by both FF and Safari, and Chrome will support it soon™, so the situation is almost okay.

        Checking online, APNG came out in 2004, took 3 years to get turned down, and almost another ten to get semi-widely adopted. Progress!

        1. henke37 says:

          At this point you just throw H.264 at it.

          1. Does h.264 work in IMG elements now?

      2. This has nothing to do with being a new format. Almost all images on the internet are already sRGB and almost all monitors are sRGB (although many are calibrated poorly). This would be nothing more than an internal change in browser rendering engines to load textures as sRGB so they are converted to linear when sampled, and then enable sRGB on the output framebuffer. Nobody else would have to do anything and everything would magically become better.

        1. Drak says:

          As far as I know, most web browsers (on the desktop) use sRGB...

      3. smf says:

        "if it hadn’t been for Unisys PNG wouldn’t exist or only as a niche format too."

        I think you're overplaying Unisys's involvement with PNG.

        "Welch filed a patent application for the LZW method in June 1983. The resulting patent, US 4558302, granted in December 1985, was assigned to Sperry Corporation who subsequently merged with Burroughs Corporation in 1986 and formed Unisys.[28] Further patents were obtained in the United Kingdom, France, Germany, Italy, Japan and Canada."

        It was only because GIF used LZW and Unisys started asking for patent licenses that someone invented PNG.

        By your reasoning, we could also thank Microsoft for Linux.

    2. Don Reba says:

      For the small deltas involved, the non-linearity is usually not significant. I compared dithering in RGB and in Lab colour spaces back when converting 24-bit graphics assets into 565 16-bit for WinCE apps, and it didn't make a substantial difference.

      1. smf says:

          Eyes and monitors are not calibrated for perfect linearity either. If your monitor is hooked up to a 15-pin VGA graphics card, then linearity and monotonicity are an issue.

          All dithering is supposed to do is break up the banding you would get if you just dropped the least significant bits, replacing the banding with something as unnoticeable as possible. Over the years I've seen dithering which I'd rather just turn off, as it interfered with the signal-to-noise ratio. I'd like to see a C64 artist dither algorithm that could take the box art of a Commodore 64 game and turn it into a loading screen in the style of the original artists, in a C64-compatible graphics format. Maybe one day we will have the processing power to duplicate their work.

  4. jimbo1qaz says:

    Bisqwit (TASvideos founder) did an article on dithering algorithms that are stable when the image changes. It reminded me of your comments on moving windows. http://bisqwit.iki.fi/story/howto/dither/jy/

  5. Neil says:

    I once wrote a very simple 15 colour dithering algorithm. The principle (but not the implementation!) was as follows: for each pixel in the image, ask GDI to paint an image-sized rectangle using a solid brush of that colour, then look to see which colour actually ended up being used for the original pixel. The actual implementation was simple enough for me to code directly in x86 assembly.

  6. Austin Donnelly (MSFT) says:

    Here's some additional material on digital halftoning. The original article Raymond linked to only covered FM halftoning algorithms, but there are some pretty cool AM algorithms too:
    http://www.donnelly.org.uk/and1000/newsprint/halftone.html

    1. DWalker says:

      Cool info. Although it's a bit hard to read some of the text through the second vertical purple stripe that's 2/3 of the way to the right side of the Web page. :-)

Comments are closed.
