Division is too hard: can we get rid of it? :)

Just a bit of philosophical rumination on the most basic math operators as they relate to (graphics) programming. I don't know if this stuff is all obvious, laborious, redundant, or maybe incorrect. Or if there's anything interesting here. Let me know in the comments. :)

First the easy ones: addition seems to be about combining. And subtraction about separating.

Multiplication. Since I haven't thought carefully enough about math for much of my life, I suspect that there was a long time during which I thought of multiplication as making things bigger, or increase. It's used that way as a synonym for animal reproduction, and in economics (the money multiplier) and other sciences, to imply growth.

In graphics programming, I soon noticed that multiplication is great for scaling things (and since then I've read that multiplication is scaling). Scaling is about making things bigger or smaller. Imagine a unit cube with a corner at (0, 0, 0). You can scale it uniformly up or down by multiplying its vertex coordinates by some number S.
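Here's a little C++ sketch of what I mean (the Vec3 type and the vertex layout are made up for the example):

```cpp
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Uniform scaling: multiply every coordinate by the same S.
Vec3 scale(Vec3 v, float s) {
    return { v.x * s, v.y * s, v.z * s };
}

int main() {
    // Unit cube with one corner at the origin.
    std::array<Vec3, 8> cube = {{
        {0,0,0}, {1,0,0}, {0,1,0}, {1,1,0},
        {0,0,1}, {1,0,1}, {0,1,1}, {1,1,1}
    }};

    float S = 2.0f;              // S > 1 scales up; S < 1 scales down;
    for (Vec3& v : cube)         // S == 1 leaves every vertex unchanged.
        v = scale(v, S);

    std::printf("far corner: %g %g %g\n", cube[7].x, cube[7].y, cube[7].z);
}
```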

When it comes to multiplication (at least), there's something special about the number 1. It's special because if S > 1 then the cube scales up, if S < 1 then the cube scales down, and if S == 1 then multiplication has no effect and arguably there is no scaling, or you have "identity scaling", which I may have just made up but it amounts to the same thing.

Outside of graphics, multiplication feels natural and extremely useful when, for example, scaling between a normalized value [0-1] in a data model and a much wider range on a UI slider, say [0-100]. Doing that scaling in the getter and setter (and raising a property change notification at the same time) nicely encapsulates it.
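Something like this sketch; the VolumeSetting class and the onChanged hook are made up here, and the hook just stands in for whatever property-change mechanism your framework gives you:

```cpp
#include <functional>

class VolumeSetting {
public:
    std::function<void()> onChanged;       // hypothetical change notification

    // The UI slider sees [0, 100]; the model stores a normalized [0, 1].
    double sliderValue() const { return normalized * 100.0; }

    void setSliderValue(double v) {
        normalized = v * 0.01;             // scale back down to [0, 1]
        if (onChanged) onChanged();        // raise the notification here too
    }

private:
    double normalized = 0.0;               // the data model's value
};

int main() {
    VolumeSetting volume;
    volume.onChanged = [] { /* refresh the slider here */ };
    volume.setSliderValue(75.0);           // the model now holds 0.75
}
```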

The one thing that you can't affect by scaling is 0. You can't scale zero: 0 * S is 0 no matter what S is. When it comes to zero, 1 loses its specialness, because every S acts like the identity. That idea might be worth returning to.

In the past I've been even more confused about division than about multiplication. I used to think about division as making things smaller, or decrease. Dividing by D > 1 does indeed make a thing smaller, but dividing by a D between 0 and 1 makes it bigger. And division by D == 1 has no effect. There's that special number 1 again.

Division is multiplication-by-the-reciprocal (and multiplication is division-by-the-reciprocal). That makes division a form of scaling too, but a kind of through-the-looking-glass scaling that doesn't appeal to me. So, in my mental model of math, what if I eliminate division in general (whether or not I can eliminate the operator from my code) and replace it with multiplication-by-the-reciprocal? But the reciprocal is itself a special case of division, so what can I do about that?

The reciprocal is 1/x, but it's such a special case of division that I wonder whether you really need such a general notational device as division to represent it. For now, let's just say recip(x) instead. If you draw a graph of y = recip(x), you see that the graph avoids the x == 0 line like the plague. So recip(0) is undefined, but recip of any other x is fine, and that's good to know. It's also interesting that the product of a number and its reciprocal is 1, so x * recip(x) == 1. I find that interesting because if you want to scale x to make it unit size, you just scale it by recip(x). And scaling something to make it unit size is very common in graphics programming, and it's called normalizing the thing. For example, what you do to a vector to make it unit length is called normalizing it, and it's done by scaling each coordinate by the reciprocal of the vector's length. Once things are normalized in the graphics world, they become heaps more useful.
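Here's that idea as a small C++ sketch; recip() isn't a standard function, it's just the name this post is using:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// recip() is this post's name for 1/x; it's undefined at x == 0,
// so callers must guard against zero-length vectors.
float recip(float x) { return 1.0f / x; }

// Normalize by scaling each coordinate by recip(length).
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    float r = recip(len);                    // assumes len != 0
    return { v.x * r, v.y * r, v.z * r };
}

int main() {
    Vec3 n = normalize({3.0f, 0.0f, 4.0f});  // length 5, so n = (0.6, 0, 0.8)
    std::printf("%g %g %g\n", n.x, n.y, n.z);
}
```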

If using multiplication to scale works for you, then use it and you're done. If you want to scale down by the proportion that multiplying by S scales up, or scale up by the proportion that multiplying by S scales down, then multiply by recip(S). That way you're always using multiplication to scale, multiplication is the same concept as scaling, and the mental model is simple. Another notation for 1/S is S⁻¹, so when it comes to writing code you could either write 1/S (but read it in your mind not as division but as recip(S)), or write pow(S, -1) so that you never need to use the division operator, and division isn't a concept any more. More generally, instead of dividing by D you can multiply by the reciprocal of D and thus avoid the division operator in every case. It's true that A / D is a little simpler and shorter as notation than A * pow(D, -1), but it does require having division in your mental model and, as I think I'm implying, I'm not entirely sure that I know, philosophically speaking, what division means. But scaling by a reciprocal, I think maybe I do.
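To make that concrete, here's a small sketch of dividing-every-element-by-D rewritten as multiplying by recip(D), computed once up front:

```cpp
#include <vector>
#include <cstdio>

// Same scaling, two spellings: compute the reciprocal once,
// then every scale inside the loop is a multiplication.
void scaleDown(std::vector<float>& values, float D) {
    float r = 1.0f / D;        // written with '/', but read it as recip(D)
    for (float& v : values)
        v *= r;
}

int main() {
    std::vector<float> xs = {2.0f, 4.0f, 8.0f};
    scaleDown(xs, 2.0f);       // same result as dividing each element by 2
    for (float x : xs) std::printf("%g ", x);
    std::printf("\n");
}
```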

The number 1 is showing up a lot, sometimes as -1, sometimes as the notion of normalizing. And I find it really interesting to think of the reciprocal as both the tool you use to normalize a value, and the tool you use to make multiplication the only scaling operator you need.

So far I've been thinking in terms of how the magnitude of the output relates to the magnitude of the input. But there's more to this than just making a value bigger or smaller. Division can sometimes be seen as a counting operation. If my hens lay a hundred eggs, how many dozen-egg boxes will that make? Is that the same as scaling? My mental model, what I mean in this case by 100 / 12, is that I'm calculating a count. The fact that I'm scaling 100 down to some smaller value feels incidental. Similarly, the number of pints in a gallon is conceptually about counting, even though it involves dividing the gallon into parts. So G / 8 feels right, but G * (1/8), G * 8⁻¹, and G * 0.125 don't.
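For what it's worth, the counting reading maps naturally onto integer division; a tiny worked example:

```cpp
#include <cstdio>

int main() {
    // Counting, not scaling: how many dozen-egg boxes do 100 eggs fill?
    int eggs = 100;
    int perBox = 12;
    int fullBoxes = eggs / perBox;   // 8 full boxes
    int leftOver  = eggs % perBox;   // 4 eggs left over
    std::printf("%d boxes, %d eggs left over\n", fullBoxes, leftOver);
}
```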