Color quantization

If you’ve ever done work with Web graphics, I’m sure that at some point you reduced an image with thousands of colors, such as a screenshot or photograph, to an image with only 256 colors, for example to store the image in the GIF format. Remarkably, the reduced image is usually very visually similar to the original image, despite the massive loss of information. Most paint programs are capable of performing this transformation, and operating systems such as Windows will perform the transformation on-the-fly when the user is running a video mode with only 256 colors.  But how does it work?

In the literature, this problem is called the color quantization problem — generally, the word “quantization” refers to any process that maps several values to a single representative value. In this case, the values that we’re mapping are colors. There are two main steps to color quantization:

  1. Generate a good palette
  2. Dither

Generate a good palette

First we’ll talk about generating a good palette. The simplest approach is to use a fixed palette for all images. We can map pixels in the source image to pixels in the reduced image by simply choosing the closest color. What do we mean by “closest”? Well, we can identify each color with a point in three-dimensional space whose coordinates are the red, green, and blue values of the color.  Then, we consider two colors to be close if their points are close in this three-dimensional space.  This isn’t ideal, but it’s very simple.

The fixed palette approach has the advantage that all images using the same palette can be rendered on the same screen simultaneously; unfortunately, it severely compromises image quality. Making this even worse is that typical fixed palettes, such as the Windows standard palette and the web colors palette, are chosen as uniform palettes, meaning that the colors are evenly spaced throughout RGB space. This is a terrible way to construct a palette, as humans can distinguish much finer color differences in some areas of RGB space than others.

Below, we show a photograph of a rose taken by Stan Shebs, but modified to remove its blue channel. This means that all of its colors lie in a single plane, the plane blue = 0. That color plane is shown in the graph below.  We also show a 16-color optimized palette found by Adobe Photoshop – each blue point is a palette color, and each region shows the section of the color space mapping to that palette color. As you can see, the palette is far from uniform, and there are many colors that the image doesn’t even use.

From this picture, we can see that generating a good palette is essentially a point clustering problem. We want to identify clusters in the color space that contain a large number of points close together and assign a single palette entry to those colors.

One of the most popular algorithms for this problem is called the median cut algorithm. It was invented by Paul Heckbert in his 1982 paper “Color Image Quantization for Frame Buffer Display” and is still in wide use today. The algorithm follows just a few basic steps:

  1. Create a block or box surrounding the points; it should be just large enough to contain them all.
  2. While the number of blocks is less than the desired palette size:

    1. Find the block with the longest side out of all blocks.
    2. Cut this block into two blocks along its longest side. Cut it in such a way that half of the points inside the block fall into each of the two new blocks (that is, we split it through the median, thus the name).
    3. Shrink these two new boxes so that they just barely contain their points.

  3. Average the points in each box to obtain the final set of palette colors.

Although a number of more sophisticated modern algorithms exist, this strikingly simple approach is adequate for generating 256-color palettes for most images. You can read about a C++ implementation of the median cut algorithm at my wiki.


Dither

If the palette is large enough and of high enough quality, reducing by nearest color is usually sufficient. If not, however, distinct artifacts will be visible, such as banding, where what were once smooth gradients become striped regions (see this image; look closely at the cat’s neck). One effective technique for dealing with these palette limitations is called dithering, which mixes dots of different colors to simulate a larger number of colors and smooth out gradients. For example, if you mix a grid of red and blue dots, the result will appear purple. Dithering is also commonly used in printing, where for example a grayscale image can be simulated with great accuracy using a higher-resolution dithered black and white image.

The trick to dithering is to effectively simulate the original color in each area of the image while avoiding telltale artifacts formed by poor dithering patterns; see ordered dithering, commonly used in newspapers, as an example of a dithering scheme with clearly visible artifacts. One of the most popular dithering schemes to this day is the Floyd-Steinberg algorithm, a classic dithering algorithm falling into the general class of error diffusion dithering algorithms.

The basic concept behind error diffusion is to begin with nearest color matching, but whenever we can’t match a pixel color exactly, the error between the original color and the palettized color is distributed into adjacent pixels not yet visited. In Floyd-Steinberg, the pixel to the right receives 7/16 of the error, the pixel below 5/16, the pixel below and to the left 3/16, and the pixel below and to the right 1/16. When these pixels are visited, this added error affects the palette entry selected for them, leading to an average color over that portion of the image close to the actual average color of the pixels in that area.

What’s great about Floyd-Steinberg is that it generalizes easily to reducing full 24-bit color images to palettes of any size. The only modification we need to make is that we perform the error distribution step for each of the three channels. The palette entry is still chosen by nearest color. You can read about a C implementation of Floyd-Steinberg dithering on my wiki.

This isn’t the end of the story of course – there is ongoing research and all kinds of fancy variations on these approaches, many of which can produce much nicer results than these classical algorithms. But at least now the next time you see a GIF or 8-bit PNG file on the web, or use a computer in 256-color mode, you can appreciate the algorithms under the covers that helped make your images look good under these severe limitations.

Comments (8)

  1. Michael says:

This might be a strange question, but is it possible to simulate prime colors with dithering? Take a bright green square for instance – is it possible to create an ‘approximation’ of the visual impact by using a dithered mix of dark green and green/cyan? The idea here would be to come as close as possible to the desired prime color by using secondary colors. Generally I understand that primary colors are basic and cannot be mixed from other elements. They are to color what prime numbers are to mathematics.

If there are workarounds, however, and approximations that fool the human eye, I’d be interested to hear about them.

    michael @

  2. MSDN Archive says:

    Hi Michael. That’s a good question. The simplest way to answer it is with this picture:

It is a mixture of bright cyan, bright yellow, and dark green (#006400) that resembles the color #35b135, which is exactly the same hue as the dark green but with much higher brightness (69% versus 39%) and somewhat lower saturation (70% versus 100%); it appears more similar to bright green. Because dithering can only average colors, though, there are colors that can’t be approximated with certain palettes. For example, a palette with only bright colors can’t approximate darker colors unless you add black to it.

    To say that red, green, and blue are "primary", "pure", or "fundamental" colors though is really somewhat misleading. In reality, the basis of the RGB color system lies not in physics but in human biology. Because our eyes contain three different types of color receptors, attuned to frequencies near red, green, and blue, we can approximate the visual impression of many colors by mixing these three colors, and the details of this were first worked out by experiments in the late 1920s – but this doesn’t work for the eyes of other animals. A system that worked for all animals would have to assign a wavelength (frequency) and intensity to each pixel and reproduce these two attributes using some kind of display device that we haven’t yet invented. You can read more here:

  3. MSDN Archive says:

    To clarify my previous statement: to describe an arbitrary color in a purely physical way would require not only specifying intensity and a single frequency, but actually specifying the intensity of *every* frequency in the visible light spectrum, which would amount to a function f from wavelengths to intensity. We exploit the unique property of human vision that allows us to create the perception of many different intensity functions by just adding three different functions (corresponding to red, green, and blue) scaled by constants. These three functions vary according to exactly which absolute color model you use, but here are the three curves used in the original CIE model:

  4. Michael says:

Thank you for your comments and the great example picture. Actually I manually produced a similar effect yesterday by mixing dark green and cyan – it started to look a tiny bit blueish and I was wondering if adding yellow might get it closer to the original green. I’m aware of the fact that our RGB model is purely based on human perception – take for instance magenta – there is no single frequency that can produce this color. It is merely a mixture of two colors on opposing sides of the visible light spectrum and is perceived by humans because of the way it stimulates two respective types of cones.

Now, I’ve learned that the depth of color detail perception in the blue cones is not as strong as for instance in the green cones. At the same time green and red are almost inseparable since their stimulus spectra strongly overlap. My goal is to produce dithered representations of mainly full green and full blue by using a mixture of colors which contain up to 75% of either green or blue. Red can be used at 100% intensity – what would be the ‘color dithering’ formula to accomplish that?

    Your input would be greatly appreciated. BTW, I assume you are a scientist/engineer – would you be available for contract work in the not so distant future?

  5. Michael says:

    I just compared your dithered green example with pure green and of course it doesn’t approach the intensity of #00FF00. Does this image represent the closest representation of pure green one would be able to achieve with dithering of neighboring colors?

  6. MSDN Archive says:

    Hi Michael. Because dithering effectively averages colors, in order to achieve a color of value 255, one would have to use only colors of value 255. You can get bright green by dithering hues equidistant from green, such as #00FFD0 and #D0FF00. You might be interested in this image created by my tool colorq, which creates the illusion of a variety of colors using only purple, green, orange, and white:

    Oh, and I can’t answer questions concerning work in this forum.

  7. MSDN Archive says:

    Oh, by the way, since this has gone a bit off topic, if you want to continue this conversation, please use my contact page:


  8. Jayachandra says:

Can anyone give me code implementing the popularity algorithm for color quantization?