Const methods don’t prevent a method from having side effects

In my musing on whether people write insane code with multiple overlapping side effects with a straight face, I noted that raising a warning on any code that depends on the order of evaluation would generate a lot of false positives, such as

total_cost = p->base_price + p->calculate_tax();

It has been argued that this is merely evidence that the calculate_tax method should be const.

Well, except that it may not be const. For example, calculate_tax needs to look up the tax rate, which means it needs to look up the tax region, and it may decide to cache that information in the object so that future tax calculations can be more efficient. Which means non-const.

And then there's this:

total_cost = apply_discount(p->base_price) + p->calculate_tax();

This is still a potential dependency upon the order of evaluation, because the apply_discount function might modify the thing that p points to.

It's unlikely, but technically legal.

In principle, nearly everything you write is potentially dependent upon the order of evaluation, but in practice it isn't, because you are not a nincompoop. But the compiler doesn't know that. The compiler must adhere to the letter of the language standard, because it has to compile insane code as well as sane code.

Maybe you'll say, "Fine, the compiler shouldn't complain about potential order of evaluation dependencies in sane code." But now the argument is circular: What is sane code? Code that isn't dependent upon order of evaluation.

Bonus reading: You don't know const and mutable. Which introduces yet another wrinkle into the story.

Comments (29)
  1. SI says:

    Wouldn’t that cache be a candidate for a mutable member variable? And then the function could still be const? Or is that considered worse?

    1. James Moore says:

      If you’re in a const function, all of your member variables are themselves const, and therefore non-mutable. This isn’t to say it’s side-effect free: You could store the cache on disk, on the network, in the registry, in a global variable, in a random chunk of unallocated memory, pretty much anywhere *but* a mutable member variable.

      1. James Moore says:

        Apparently I overlooked that ‘mutable’ was a C++ keyword designed specifically for this. I’d say it’s way worse, because mutating something that’s const violates the entire intention of const.

  2. Alv says:

    I’ve not used C++ for a while now and not up to date with the latest standards, but the argument that calculate_tax shouldn’t be const because of some implementation detail which is not visible at the public interface at all seems a bit misguided. If whatever the method does internally doesn’t change its public behaviour, why can’t it be considered const?

    1. Okay, so what if calculate_tax was const, but it did do some logging. Changing the order of evaluation changes the log file contents, which is externally observable. So the program still relies on order of evaluation.

      1. Alv says:

        But this is the whole point: by using const we can tell the compiler that we don’t care about that and it should not flag this as a false positive.

        1. Now we’re adding a third semantic to const. It already means “read-only” and “thread-safe”, and now it also means “unaffected by order of evaluation.” And what if you also want to say that a non-const method is unaffected by order of evaluation? e.g. auto combined_result = when_all(start_task_1(), start_task_2()); The start_task() methods are definitely not const, but they are also independent of each other and are not affected by order of evaluation.

          1. Alv says:

            But doesn’t “read-only” and “thread-safe” already imply “does not affect order of evaluation”? “Unaffected by order of evaluation” is only a consequence of “does not affect order of evaluation” being true for all sub-expressions.

          2. Atomic increment, for example, is thread-safe, but is definitely dependent on order of evaluation.

      2. Joker_vD says:

        No it doesn’t, unless the program reads its own log, which is ridiculous. Enabling instrumentation in a pure Haskell program doesn’t suddenly make it impure either.

        1. Gabriel Ravier says:

          It’s a lot less ridiculous when you start considering any program that lets you open arbitrary files and/or modify them.

  3. Andreas says:

    calculate_tax() could be const, only make sure that the tax region cache is mutable…

  4. henke37 says:

    I think you meant apply_discount(p) instead of apply_discount(p->base_price). Unless you want to make the parameter a reference or base_price a pointer.

  5. Jaha says:

    mutable double taxRate;

    1. Great, now you need to add locks to avoid multithreaded race conditions. Because const and mutable imply “thread-safe”. Even if your object was not intended to be multithreaded, you just signed yourself up for additional work.

      1. Jaha says:

        True, but AFAIK it’s the suggested approach. Otherwise you need to throw away const for an entire call chain for caching/lazy initializing? Doesn’t sound better.

  6. Phil Miller says:

    It doesn’t help the cases shown, since arithmetic operands aren’t ordered, but since C++17, per proposal P0145, the order of evaluation around function and method calls is at least somewhat more tightly specified.

  7. Antonio Rodríguez says:

    That is a task for static code analysis. Static analyzers can easily detect an expression which relies on the order of evaluation (or produces any kind of side effects) and notify the programmer. Also, programmers who use static analyzers are supposed to be more careful, so there will be fewer false positives, and they will be addressed by someone who knows what they're doing.

    Of course, it is possible that in an organization somewhere the pointy-haired boss tells fresh-out-of-college coders to regularly run static analysis on the codebase and blindly tweak it until all warnings are gone. But at least static code analyzers aren’t usually featured in business magazines…

  8. Davidbrcz says:

    Code in functional languages is independent of the order of evaluation, as there are no mutable values, simply expressions that get combined and rewritten.

    1. AndyCadley says:

      Yes, this feels a lot like trying to add verification to a language to make it act like a functional one. Wouldn’t it just be easier to write in a functional language in the first place if that’s what people want?

  9. Adrian says:

    This makes me think about the “Movers and Shakers” section of _Writing Solid Code_ by Steve Maguire. That section focuses on the C allocator and how it (especially realloc) will often do the expected thing but will occasionally do surprising things. This allows bugs to lurk because the surprising behavior rarely happens. The solution is to make an allocator that always does the surprising thing so that you’re almost certain to see the bugs in the code that wasn’t prepared for the surprising behavior.

    This idea can be applied to other areas. There are unit test suites that will run your tests in a randomized order each time in order to ferret out bugs where one test accidentally depends on a side effect of an earlier test.

    It makes me wonder whether it would be worth having a compiler option to always do the surprising thing, like evaluate expressions in a random order wherever the standard allows. Without optimization (i.e., in your debug builds) the order is probably exactly what you’d expect, and the surprises only happen when the optimizer has a better idea (i.e., in your release builds). You’ve got low odds of even detecting the bug, and, if you do, it’ll be tough to debug. A mover and shaker built into the compiler could be a real benefit.

    1. Joshua says:

      It would be entertaining to observe the debate between movers and shakers and reproducible builds.

  10. Ivan K says:

    I love how the compiler has been groomed to take programmer hints over the decades. “inline int Fling(…) const throw(P *p) { register int i… stuff… return (i = p->Addref();) }”

  11. Azarien says:

    What is insane here is C and C++ standard not defining evaluation order where it matters.

    1. M Hotchin says:

      The reason this was not specified is so that compilers could take advantage of any particular quirks of the target hardware to optimize the result (typically for speed or size). For any particular order of operations, there’s probably a piece of hardware that could do things better in a different order.

      For example, depending on how many registers you have available, one particular order might be able to be done without any intermediate memory stores.

  12. James Sutherland says:

    Clearly, the answer is for the compiler only to accept sane code in the first place … whatever that is! Probably implemented as a side-effect of the often demanded “do what I mean not what I say” feature…

    Quite topical seeing a cache update as a side effect, that being the essence of Spectre/Meltdown (“if we speculatively execute the instruction then throw the result away if it shouldn’t have run after all, that’s OK isn’t it, because it’s as if the instruction never executed? Right?”)

  13. Alv says:

    “Atomic increment, for example, is thread-safe, but is definitely dependent on order of evaluation.”

    – but definitely not read-only. Or am I mistaken in assuming that ‘const’ implies/requires both?

    1. See Herb Sutter’s talk, which I linked to. const is more complicated than you think.

      1. Alv says:

        The talk can be interpreted in two ways. I’d see your point if the moral of the story was ‘const only means thread-safe now, not logically const any more’. But to me it seems that ‘logically const, and from C++11 on also thread-safe’ is the correct interpretation. Thus a const function must have no observable side effects, so if everything in the expression is const, the result should not depend on the order of evaluation.

Comments are closed.
