Viewing function composition as transformation of the domain


A lot of formulas you encounter in computer science can be viewed as function composition. Let's start with the simple problem of rounding integers down to the nearest multiple of some positive constant. The formula for this should be relatively easy for you to produce:

round_down(n, m) = floor_div(n, m) * m

where floor_div returns the largest integer less than or equal to n/m. If n≥0 and m>0, then floor_div(n,m) = n/m where / is the C integer division operator.
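The formulas above translate directly into C. A minimal sketch (the names floor_div and round_down come from the text; the negative-n adjustment is an addition of mine, needed because C's / truncates toward zero rather than flooring):

```c
/* floor_div: largest integer <= n/m, assuming m > 0.
   C's / truncates toward zero, so for negative n that are not
   exact multiples, the truncated quotient is one too high. */
int floor_div(int n, int m)
{
    int q = n / m;
    if (n < 0 && n % m != 0)
        q--;                 /* truncation rounded up; step back down */
    return q;
}

/* round_down: round n down to the nearest multiple of m. */
int round_down(int n, int m)
{
    return floor_div(n, m) * m;
}
```

For example, round_down(7, 4) is 4, and round_down(-1, 4) is -4 rather than 0.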

But what if you want to round up? Take a look at the difference between rounding up and rounding down, say, using multiples of four for concreteness.

            0  1  2  3  4  5  6  7  8  9 10 11 12
round_down  0  0  0  0  4  4  4  4  8  8  8  8 12
round_up    0  4  4  4  4  8  8  8  8 12 12 12 12

The round_up table is just the round_down table shifted left three places. The mathematical way of shifting the table heading is by manipulating the domain, in this case, by adding three. In other words, don't think of adding three as a vertical operation

 0  1  2  3  4  5  6  7  8  9 10 11 12
+3 +3 +3 +3 +3 +3 +3 +3 +3 +3 +3 +3 +3
 3  4  5  6  7  8  9 10 11 12 13 14 15

but rather as a horizontal one:

 0  1  2  3  4  5  6  7  8  9 10 11 12
         <- move left three spaces
 3  4  5  6  7  8  9 10 11 12 13 14 15

(Sorry, I'm too lazy to cook up the appropriate VML diagram. Use your imagination and pretend that there is an arrow from the "3" in the top row to the "3" in the bottom row, similarly from the "4" in the top row to the "4" in the bottom row, and so on.)

Now that you see that the answer is to "move the results" three spots to the left, you can read off that the desired formula is

round_up(n, 4) = round_down(n + 3, 4)
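The same shift works for any positive m: move the domain left m − 1 places, then round down. A sketch, assuming n ≥ 0 and m > 0 so that plain C division already behaves like floor_div:

```c
/* Assumes n >= 0 and m > 0, so C's truncating division is floor_div. */
int round_down(int n, int m)
{
    return (n / m) * m;
}

/* Round up by shifting the domain: m - 1 is the largest shift that
   cannot push an exact multiple past the next multiple. */
int round_up(int n, int m)
{
    return round_down(n + m - 1, m);
}
```

With m = 4 this reproduces the table above: round_up(1, 4) is 4, while round_up(8, 4) stays 8.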

Shifting the domain left and right can be done by addition. Multiplication and division let you stretch and shrink it. Consider the puzzle of rounding down to the nearest quarter. You already know how to round down to the nearest unit, namely by using the language's built-in truncation operator.

0    1    2    3    4    5    6
|    |    |    |    |    |    |
+----+----+----+----+----+----+

 \___/\___/\___/\___/\___/\___/
   0    1    2    3    4    5

If only we could divide everything in the diagram by four. But we can! To do this, we transform the problem space into one in which everything is four times as big as normal, apply the operation, and then convert back to normal size.

Those of you who've played with a Rubik's Cube are well familiar with this technique: If you have a move that, say, flips two adjacent edges, but you want to flip two edges that aren't adjacent, you can still accomplish this by manipulating the cube until the two edges are adjacent, performing the flip move, then undoing the steps you performed to get the edges adjacent in the first place. (This is known as "conjugation" in group theory and is a very handy technique.)

We're just doing the same thing with this truncation operation: If we could shrink the truncator by a factor of four, it would truncate by quarters. But we don't know how to shrink the truncator, so we do it from the other direction: Stretch the number line, apply the truncator, then shrink it back.

0                   1
|                   |
|   1/4  2/4  3/4   |   5/4  6/4
|    |    |    |    |    |    |
+----+----+----+----+----+----+

 \___/\___/\___/\___/\___/\___/
   0    1    2    3    4    5

   0   1/4  2/4  3/4  4/4  5/4

The top line shows the number line stretched by a factor of four. The truncator is still unchanged. And below it, we shrink the results by a factor of four, resulting in our desired rounding down to the nearest quarter.

Taking the above diagram and converting it back to a formula:

round_down_to_quarter(v) = trunc(v * 4.0) / 4.0

This was probably old hat for most of you, but I think it's worthwhile seeing how the problem can be viewed geometrically. In particular, if you have a reversible operation "f", then the composition "f-1 ◦ g ◦ f" has the effect of "reinterpreting g through f-colored glasses". Here, the operation "f" was "multiply by four" and "g" was "truncate to nearest integer". Putting them together allowed us to take the truncation operator "g" and make it truncate according to a different set of rules.

Comments (12)
  1. Carlos says:

    "If n≥0 and m>0, then floor_div(n,m) = n/m where / is the C integer division operator"

    The equality doesn’t hold when n<0 because the C integer division and modulus operators are broken by design: even the basic identities (a+m)/m=(a/m)+1 and (a+m)%m=a%m (for m>0) don’t hold.  This has been briefly discussed on your blog before (http://blogs.msdn.com/oldnewthing/archive/2005/05/31/423407.aspx) but I’m still curious: does anyone know why this is broken and has anyone ever found the brokenness useful?  Or why every other computer language seems to have the same problem?

  2. BryanK says:

    I don’t know for sure why C works that way, but I’m going to guess that it was because it was easier to implement them that way on the PDP-6, or something like that.

    I’m also guessing that most other languages have the same issues because they’re pretty much built on C these days.  Either their interpreter uses C code to do the actual division, or their compiler generates C-equivalent machine language.  (The language might even compile "through" C, like some early C++ compilers did, and C-Intercal still does.)

  3. josh says:

    Probably because -3 >> 1 = -2 with 2’s complement math.

  4. Michael Grier says:

    The standard is unclear on these mathematical topics because ANSI C, at least, was to be a federation of existing practices more than the definition of a whole new language.

    Prior to ANSI C the general rule was that the C compiler should do the right thing for the physical machine that it was targeting.  Each operation should have a direct, predictable performance impact.  (Thus the usual comment that C is just "portable assembly".)

    Even then, it was really up to the whim of the compiler author.  Few early compilers actually would test the performance of various code sequences to figure out the best to generate; there is/was a very general mental model of "fewer instructions means faster" which was often the case before good caches and faster memory.  This is still the case if you compare a few contemporary "new" popular architectures, say in the 64 bit world.

    When the committee met and basically vendor A said "it has to work >this< way or all my existing code will break!" and vendor B made the same claim about the opposite implementation, the committee unified what it could and left the rest as an implementation detail.

    e.g. 12%10==2 no matter what but -12%10 may yield any number of different answers.  (Common answers include: 2, -2, 8.  8 is, I believe, exceedingly rare even if it is the mathematically expected one for modulus arithmetic.)

  5. mikeb says:

    Ouch – all this math…

    Raymond, has Eric Lippert taken over your blog today?

  6. Mike says:

    Michael Grier wrote:

    "e.g. 12%10==2 no matter what but -12%10 may yield any number of different answers.  (Common answers include: 2, -2, 8.  8 is, I believe, exceedingly rare even if it is the mathematically expected one for modulus arithmetic.)"

    If you want to compute a % b as a positive number between 0 and b, and preserve the rule that a % b is congruent to (a + b) % b for all a, including a < 0, then you can substitute where a < 0: (b - (-a % b)) % b

    I did not compile and test the above operation. It is my assumption that you know what I mean, and perhaps the compiler does too (compiling and testing would prove it one way or the other, as would looking up operator precedence).

  7. The problem is that the compilers do not guarantee this in general.  I believe that the behavior of % when either of its operands is negative is "implementation defined", which means that it's whatever the compiler vendor says it is.  It could format your hard drive as long as it was documented to do so.  ;-)

    There are/were additional standards for numerical processing in C but they were not widely adopted and I don’t believe that anyone "fixed" this for C++ given that for the longest time C++’s design was constrained to be able to be a preprocessor (cfront et al.) before handing off the generated code to a "real" C compiler.

    Sure I understand what it means (or should mean to a mathematician) but I don’t know how many times I’ve had people at my whiteboard during interview questions who expect that "(-12 % 10) == 2".  (That’s about the worst answer possible even if it is what they are hoping for in the context.)

  8. asdf says:

    But the all too common problem with this is when lazy programmers naively do these kinds of operations on a finite set of numbers without ever restricting the input to a valid range. Lots of times you have to handle the edge cases explicitly or do a robust rewrite like http://blogs.msdn.com/oldnewthing/archive/2004/08/12/213468.aspx :

    p >= buffer && p - buffer < BUFFER_SIZE

    to

    p >= buffer && p < buffer + BUFFER_SIZE

    (*every* STL implementation I’ve seen has a bug like this in their _adjust_heap (or whatever) helper function that calculates index = 2*(index + 1) but they never make sure that calculation doesn’t overflow)

  9. Mike says:

    I was discussing this with a friend over lunch and he pointed out that several languages have modulus operators and remainder operators, and LISP (he’s a big fan) has both.

    Whether the % is implemented as mod or remainder (probably depends upon the CPU instruction set available), the expression I gave above (with the appropriate parentheses to guarantee precedence) would make the behavior more consistent with a mathematical modulus operator.

    int mod( int a, int b )
    {
      if( b <= 1 ) return 0;
      if( a >= 0 ) return a % b;
      return (b - (-a % b)) % b;
    }

    bool testMod()
    {
      int min = 2, max = 25;
      for( int b = min; b < max; ++b )
      {
         int test = 5 * b;
         for( int a = -test; a < test; ++a )
         {
            if( mod( a, b ) != mod( a + b, b ) )
               return false;
         }
      }
      return true;
    }

    I find that it's always nice to have a clean scratch project on my dev boxes for when I get curious about the behavior of the compiler. Maybe it would be good for interviews to have something similarly prepared.

    It might make for a quick and easy warm-up interview question to ask an interviewee to implement mod without using the remainder operator (telling them that they can use +, -, * and / would be a hint if they got stuck on what it was that they were trying to do).

  10. josh says:

    Modulus *is* a remainder, it just depends on how you round after division.

  11. asdf says:

    C89/C++ can pick between two rounding modes for division: truncation aka round towards 0 (i.e. -12 / 10 == -1) or floor aka round towards -Inf (i.e. -12 / 10 == -2). C99 requires round towards 0, which most hardware uses since fortran required it and it’s easier to implement (since -a/b == -(a/b)) if they even bother dedicating transistors for it at all. If you *really* want the round towards 0 guarantee then use the div() function but be prepared to take a speed hit since compilers usually don’t inline it or make it an intrinsic and since most people dynamically link the C runtime, not even whole program optimization can save you. Now since C requires (a/b)*b + a%b == a,

    with round towards 0 you get the remainder:

    (-12/10)*10 + -12%10 == -12
    -1*10 + -12%10 == -12
    -10 + -12%10 == -12
    -12%10 == -2

    with round towards -Inf you get the modulus:

    (-12/10)*10 + -12%10 == -12
    -2*10 + -12%10 == -12
    -20 + -12%10 == -12
    -12%10 == 8

    (2 can never be an answer on a non-broken implementation). It’s ridiculously easy to get round towards -Inf with round towards 0:

    div_t divmod(int a, int b)
    {
      div_t ret;
      ret.quot = a/b - (a%b < 0);
      ret.rem = a - b*ret.quot;
      return ret;
    }

  12. Merle says:

    <em>"Those of you who’ve played with a Rubik’s Cube are well familiar this technique: If you have a move that, say, flips two adjacent edges, but you want to flip two edges that aren’t adjacent, you can still accomplish this by manipulating the cube until the two edges are adjacent, perform the flip move, then undo the steps you performed to get the edges adjacent in the first place."</em>

    This is a <b>very</b> useful thing to know.  It has allowed me to solve cubes (or other solids) that I do not actually understand.  There was a dodecahedral Rubik’s thing, with some 62 pieces.  All I needed to learn was how to swap two edge pieces and two corner pieces, and the rest was just "manipulate the cube so the pieces are in the right place, perform the operation, and de-manipulate them".

Comments are closed.
