The exception code EXCEPTION_INT_DIVIDE_BY_ZERO (and its doppelgänger STATUS_INTEGER_DIVIDE_BY_ZERO) is raised, naturally enough, when the denominator of an integer division is zero.

The x86 and x64 processors also raise this exception when you divide INT_MIN by -1, or more generally, when the result of a division does not fit in the destination. The division instructions for those processors take a 2N-bit dividend and an N-bit divisor, and they produce an N-bit quotient and an N-bit remainder. N can be 8, 16, or 32; the 64-bit processors also support 64. And the division can be signed or unsigned. Therefore, you can get this exception if you try to divide, say, 2³² by 1, using a 64-bit dividend and a 32-bit divisor: the quotient is 2³², which does not fit in a 32-bit quotient.

The Windows 95 kernel does not attempt to distinguish between division overflow and division by zero. It just converts the processor exception to EXCEPTION_INT_DIVIDE_BY_ZERO and calls it a day.

The Windows NT kernel realizes that the underlying processor exception is ambiguous and tries to figure out why the division operation failed. If the divisor is zero, then the exception is reported as EXCEPTION_INT_DIVIDE_BY_ZERO. If the divisor is nonzero, then the exception is reported as EXCEPTION_INT_OVERFLOW.

Another place that EXCEPTION_INT_OVERFLOW can arise from a processor exception is if an application issues the INTO instruction and the overflow flag is set.

Puzzle: The DIV and IDIV instructions support a divisor in memory. What happens if the memory becomes inaccessible after the processor raises the exception but before the kernel can read the value in order to check whether it is zero? What other things could go wrong?

Comments (21)
  1. Joshua says:

    Let's see. In single-CPU land it can't happen. In multi-CPU land I'd probably opt to re-raise it as an access violation. The other thing that can go wrong is obviously the code segment being unmapped out from under the interrupt handler. Same solution.

  2. Adam Rosenfield says:

    Puzzle: I'd guess that the kernel does the moral equivalent of __try/__except when checking the divisor in the exception handler, and if a secondary exception is encountered while trying to handle the division exception, it just throws its hands up and calls it an EXCEPTION_INT_DIVIDE_BY_ZERO by default?  Or maybe there's something tricky going on with the cache?

    The memory could also change out from under the kernel during the brief period between when the exception is raised and when it's handled, e.g. if it's memory-mapped I/O or the target of a DMA read request, in which case the kernel might report the wrong exception type.  Of course in that case, the program itself would have a terrible race condition.

  3. Henke37 says:

    Niceties: they are allowed to fail.

  4. alegr1 says:

    The division exception handler has to touch user memory. Any code in the kernel that needs to touch user memory has to run in a __try/__except block.

  5. Cesar says:

    Fun fact: the new RISC-V architecture does not raise an exception on division by zero or division of INT_MIN by -1. The RISC-V ISA specifies a fixed value to be returned in both cases. If you want to check for division by zero or division overflow, you can add a test before or after the division instruction.

    (I don't have the C spec in front of me right now, but I believe that both cases are "undefined behavior" in C, which means returning a fixed value is acceptable, and the compiler doesn't have to add any test.)

  6. Kevin says:

    @Cesar: Signed integer overflow is definitely undefined behavior, though many compilers are nice and produce twos-complement behavior.  Unsigned integer overflow, however, is perfectly well-defined (the number rolls over like an odometer), so the compiler might have to generate additional code to cover that case.  OTOOH, division by zero is something of a special case so it is probably separately undefined regardless of signedness.

  7. voo says:

    @Cesar AArch64 (the new 64-bit ARM ISA) does the same thing. I'm guessing the necessary hardware complexity just isn't worth implementing. An extremely predictable (hopefully!) branch is cheap, and compilers know that the divisor can't be zero in lots of cases too, so go for the simpler hardware implementation.

  8. Bob says:

    Of course, if another thread changes the divisor value in memory, it could be something other than 0 or -1 when the kernel looks.

    Trivia:  PowerPC takes a somewhat different approach. The result in either case (div by 0 or INT_MIN/-1) is undefined. Optionally, the instruction can request that the overflow flag be set/cleared based on the result so that the condition can be detected. If the overflow flag is set, the "summary overflow" flag is also set. But, this flag is never reset by an arithmetic instruction, so one can detect that there was a problem in a sequence of code by checking "summary overflow" at the end of it.

  9. Myria says:

    Yes, the x86/x64 NT kernel actually has a mini-disassembler in it that understands divide instructions well enough to locate the divisor and determine whether it is zero.  I think that there are corner cases where the kernel isn't a complete-enough x86 disassembler to determine the correct parameters.  I forget which of the two exceptions is returned if it can't determine the answer (or if the kernel ends up throwing an exception trying).

    The MSVC ARM (Windows Phone/Windows RT) compiler gives division by zero defined behavior in the form of properly throwing exceptions like x86.  When you divide by an unknown divisor, or by zero intentionally, a library function is called.  This library function checks for a zero divisor (or if signed, for INT_MIN / -1).  If such a condition happens, the code executes a particular illegal Thumb2 instruction to cause an illegal instruction exception.  The NT kernel knows this particular illegal instruction and translates it to a divide exception.  I believe that whether a certain register (r0?) is zero determines which of the two exceptions occurs.

    ARMv7/Thumb2 has 256 such illegal instructions; I think 7 are used in this manner in Windows on ARM.  Others are used as __debugbreak, __fastfail, __yield, __rdpmccntr64, if I remember correctly.

  10. Mark says:

    I'd also change it to an access violation. The kernel can pretend the divide-by-zero which threw the fault never occurred, and it was an access violation all along. Once the program gets control back, it's impossible* for it to know any better.

    * unless it's doing weird stuff with hardware debug registers

  11. Dave says:

    @Puzzle: Just as a general response, I'd say it makes up a suitably plausible answer and returns that. Sometimes you have to be able to say "this is what would usually happen, so we'll call it that, because EXCEPTION_BEATS_ME_WHATS_WRONG doesn't go down too well with developers". As long as you keep things within reason and don't report something like "lp0 on fire", of course.

  12. JM says:

    "Divide by zero" is, to me, the poster child for dubious priorities — something that produces an exception or trap of some sort across a very wide range of hardware and languages, even where all sorts of other horrible things that deserve to be signaled aren't. It's like programmers and designers all agreed from their basic math class that yes, division by zero is such an obviously wrong and undefined thing to do that we need hardware support to interrupt the program. Integer overflow? Meh, maybe. If you're lucky. Let's just wrap around instead, that's usually what we want anyway — right?

    Not that I'm advocating this, but I wonder how many programs would continue running just fine if the hardware just returned 0. In most practical cases I've seen, the division happens as a result of something like "widget_blob_size = total_blob_size / widgets", where someone forgot the corner case of there being no widgets. Obviously you can come up with easy examples of this going horribly wrong, but then, you can do that for any instance of undefined behavior that nevertheless isn't checked.

    The best situation is a dual system where you can have checks and exceptions all the way if you're so inclined, or just fallback results and flags that are checked wherever appropriate, but if you need to care about performance you are of course very dependent on what the hardware offers you.

  13. 640k says:

    @JM: It's called fail fast.

  14. Anonymous Coward says:

    @640k: JM contrasted divide-by-zero exceptions with overflow exceptions. If divide-by-zero is fail-fast, then wouldn't it also make sense for all overflows to be fail-fast?

  15. Anon says:

    I agree that the fault reading the value should surface as a fault, as if the division never happened.

    If the kernel reads the value and it's not 0 or -1 because another thread wrote it, why not return control to the program and let it retry the DIV instruction? Again, like the first divide never happened.

    These both seem like fair game when data races are in play.

  16. Matt says:

    @Kevin "Signed integer overflow is definitely undefined behavior"

    Uh, no. Signed integer overflow is undefined *in C programs*. It is certainly not undefined for usermode processes in Windows in general. For example, signed overflow is well defined in C#, or Java, or if the programmer decides to jump in and do a bit of have-a-go x86 assembly.

    The kernel doesn't know what constraints user-mode is running in, so it has to try and resolve the exception as best it can.

  17. hagenp says:

    Thank you, Raymond, for using the correct spelling of "Doppelgänger"…!!!

    (Why does this matter to me? Well, just imagine some country had "y" as a special character, and so they would use "u" or "v" instead of "y" all the time when writing English:

    "I guess, this could be verv annoving for vou after some time. OK, vou would get used to it, but it would still not be correct." …hopefully it is clear now.)

  18. Joshua says:

    "I guess, this could be very annoying for you after some time."


  19. Kevin says:


    >For example, signed-overflow is well defined in C#, or Java

    They don't count.  They run on VMs.

    >or if the programmer decides to jump in and do a bit of have-a-go x86 assembly.

    We're not talking about x86.  We're talking about RISC-V.

  20. @Kevin says:

    ">or if the programmer decides to jump in and do a bit of have-a-go x86 assembly.

    We're not talking about x86.  We're talking about RISC-V."

    Huh? C programs are not the only kind of processes that are allowed to run under a general-purpose OS. "Undefined behavior" is a C-specific thing, and only so by definition of that specific language. Any other language on the planet can define the behavior differently. Any part of a program written in assembler has to expect that the assembly instruction works as defined for the specific CPU, not how the C language likes to define integer division.

  21. Kevin says:


    Cesar asked a question *about C* and I replied.  Why do people keep changing the subject?  I suppose now the comments to Raymond's blog need their own nitpicker's corners.

Comments are closed.
