Templatized Min/Max is a bad idea!

Ah, back to nice geeky C++ programming topics, which is much more fun than angry customer topics…

Some well-meaning soul wrote this:

template<typename T, typename U> T TMax(T t, U u){ return t > u ? t : u; }

Let me count the bugs – first of all, it still runs afoul of the normal problem with max, which is this:

UINT u = 2;
INT i = -1;

TMax(i, u) will return -1, because in order to compare the two, the signed value has to be converted to a UINT, and (UINT)-1 is 0xffffffff, which is clearly a much bigger number than 2, so yeah, the maximum of 2 and -1 must be -1. Sure, that's gonna work…
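
If you want to watch it happen, here's a minimal sketch – the INT/UINT typedefs and the printf are mine, standing in for the Windows headers:

#include <cstdio>

typedef unsigned int UINT;
typedef int INT;

template<typename T, typename U> T TMax(T t, U u) { return t > u ? t : u; }

int main()
{
    UINT u = 2;
    INT i = -1;

    // i is converted to UINT for the comparison, (UINT)-1 == 0xffffffff wins,
    // and the "maximum" of -1 and 2 comes back as -1.
    printf("%d\n", TMax(i, u)); // prints -1
    return 0;
}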

If you're an astute C++ template reader, you might say, hey – what's the return type there? Good catch. What happens if we do this?

WORD w = 0xffff;
INT oops = 0x10000;

TMax(w, oops)

Well, it correctly figures out that 0x10000 is bigger than 0xffff, but then it truncates the result to the return type – a WORD – so the maximum of 0xffff and 0x10000 must be 0, right?
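
Same idea for the truncation case, as a complete sketch (again with hypothetical typedefs standing in for the Windows headers):

#include <cstdio>

typedef unsigned short WORD;
typedef int INT;

template<typename T, typename U> T TMax(T t, U u) { return t > u ? t : u; }

int main()
{
    WORD w = 0xffff;
    INT oops = 0x10000;

    // t > u is false, so u is returned – but it gets converted to the return
    // type T, which is WORD, and 0x10000 truncates to 0.
    printf("%u\n", (unsigned)TMax(w, oops)); // prints 0
    return 0;
}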

I'm sure this could do some amazing things in terms of unexpected code flow, allocations, and so on. You may be wondering how I found this. I was working in a code base that had:

#pragma warning(disable:4244) // Integral type converted to smaller integral type

So I commented it out, rebuilt, and found a complaint that we were truncating an int to a WORD.

This is pretty icky, since what we really want is a way to pick the return type based on both of the inputs. As it turns out, we'd need signed and unsigned, 32- and 64-bit ints to do the job, and I'm not sure it's possible to make this happen very easily without overloading all 121 possible combinations of integer types, along the lines of:

int TMax(char a, char b);
int TMax(signed char a, char b);
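
Just to give a flavor of where that goes – this is my sketch of the brute-force approach, not code from the post – every pair of argument types needs its own hand-picked return type, wide enough and signed enough to hold either value:

int TMax(char a, char b) { return a > b ? a : b; }
int TMax(signed char a, char b) { return a > b ? a : b; }
long long TMax(int a, unsigned int b) { return a > (long long)b ? a : (long long)b; }
long long TMax(unsigned int a, int b) { return (long long)a > b ? (long long)a : b; }
// ...and so on, for the rest of the 121 combinations.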

So how do you fix such a thing? I leveraged SafeInt, which can correctly sort out whether one integer is bigger than another, regardless of the combination of types. While I don't like macros, this does the job:

template<typename T, typename U> bool SafeLhsMax(T t, U u) { SafeInt<T> st(t); return st > u; }

Followed up with:

#define TMax(a, b) (SafeLhsMax((a), (b)) ? (a) : (b))
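
Here's roughly how that plays out on the earlier example. This is a sketch, assuming the publicly released SafeInt.hpp header and the same stand-in typedefs as before:

#include <cstdio>
#include "SafeInt.hpp" // the standalone SafeInt header – adjust the include to wherever yours lives

typedef unsigned int UINT;
typedef int INT;

template<typename T, typename U> bool SafeLhsMax(T t, U u) { SafeInt<T> st(t); return st > u; }
#define TMax(a, b) (SafeLhsMax((a), (b)) ? (a) : (b))

int main()
{
    UINT u = 2;
    INT i = -1;

    // SafeInt compares the values rather than the converted bit patterns,
    // so -1 no longer "wins" against 2.
    printf("%u\n", TMax(i, u)); // prints 2
    return 0;
}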

There's a bit more to it – we need some overloads in case someone passes a SafeInt into our macro, and it will object if we try to use a float, so we could make some specializations there. But there you have it – one of the more interesting bugs I've seen this month, and it really points out that disabling warnings on a large scale is often a bad thing, which can lead to user astonishment and even exploits. So compile your code with /W4 and /WX, and fix the warnings! SafeInt also has the happy side-effect of making the signed-unsigned comparison warnings go away. Now I just have to hope I don't hit any offsetting bug regressions, where two mistakes really did add up to something that's OK, and now that we're down to one mistake, it's a bug.
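
And if a particular narrowing conversion really is intended, the suppression can at least be scoped to the one spot that needs it instead of the whole code base – a general sketch, with a made-up helper for illustration:

typedef unsigned short WORD;

#pragma warning(push)
#pragma warning(disable:4244) // Integral type converted to smaller integral type
inline WORD TruncateToWord(int value) // hypothetical helper, not from the post
{
    return value; // deliberate, reviewed truncation to 16 bits
}
#pragma warning(pop)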

PS – if you think you're safe because you run /analyze (the bits of prefast/OACR that are externally available), think again – they don't tend to re-implement compiler warnings, since if the compiler already does it, why should they?