Why did the Win64 team choose the LLP64 model?


Over on Channel 9, member Beer28 wrote, "I can't imagine there are too many problems with programs that have type widths changed." I got a good chuckle out of that and made a note to write up an entry on the Win64 data model.

The Win64 team selected the LLP64 data model, in which all integral types remain 32-bit values and only pointers expand to 64-bit values. Why?

In addition to the reasons given on that web page, another reason is that doing so avoids breaking persistence formats. For example, part of the header data for a bitmap file is defined by the following structure:

typedef struct tagBITMAPINFOHEADER {
        DWORD      biSize;
        LONG       biWidth;
        LONG       biHeight;
        WORD       biPlanes;
        WORD       biBitCount;
        DWORD      biCompression;
        DWORD      biSizeImage;
        LONG       biXPelsPerMeter;
        LONG       biYPelsPerMeter;
        DWORD      biClrUsed;
        DWORD      biClrImportant;
} BITMAPINFOHEADER, FAR *LPBITMAPINFOHEADER, *PBITMAPINFOHEADER;

If a LONG expanded from a 32-bit value to a 64-bit value, it would not be possible for a 64-bit program to use this structure to parse a bitmap file.

There are persistence formats other than files. In addition to the obvious things like RPC and DCOM, registry binary blobs and shared memory blocks can also be used to transfer information between processes. If the source and destination processes are of different bitness, any change to the integer sizes would result in a mismatch.

Notice that in these inter-process communication scenarios, we don't have to worry as much about the effect of a changed pointer size. Nobody in their right mind would transfer a pointer across processes: Separate address spaces mean that the pointer value is useless in any process other than the one that generated it, so why share it?

Comments (110)
  1. Cooney says:

    Nobody in their right mind would transfer a pointer across processes

    So, about a year after win64 ships, you’ll be writing a log entry about how the app-compat team had to write a patch for some app that did just that.

  2. Tim Smith says:

    I personally know many programmers who aren’t in their right mind.

  3. Derek Park says:

    Looks to me like this is a good example of the flaw with Microsoft’s typedefs. If the typedefs had been defined as INT16, INT32, and INT64, there wouldn’t be a problem (well, not this problem) with porting to 64 bit. The typedefs would need to be updated, but they would still be logically correct (i.e. INT32 is still 32 bits, and LONG just doesn’t exist). In fact, such a port is still possible, but more difficult.

    It’s hard to blame this entirely on Microsoft, though. Their typedefs were still far better than the poorly defined C/C++ built-in types.

  4. Dave says:

    I never liked most of Pascal, but declaring an integer variable’s range like 1..20 was a good idea. In many (most?) cases you’re using integers for counting and know you don’t need the full range. Then you let the compiler choose the best size. Yes, there need to be pragmas to nail down actual sizes, just like there are pragmas for struct alignment. But as it stands the compiler has no easy way to tell how big a number you might put in that variable.

  5. Ben Cooke says:

    "Parsing" data files by overlaying a C struct is a bad idea anyway. What about other issues like endianness and alignment? I think it’s a far better idea to read the file in chunks (well, read large parts of it and parse it in chunks, really) and copy the data of interest into an in-memory structure, doing conversions as necessary. Sure, it might be a bit slower, but it’ll be far more portable.

    Do all of the architectures that Win32 runs on have the same endianness? I guess they must, or overlaying that bitmap structure over a bitmap file would fail on some platforms but not others. I can’t actually remember off the top of my head which platforms Windows NT is or has been available for, though.

    With all that said, I do think it was a good idea to leave the data sizes the same. Knowing the kinds of nasty tricks and stupid mistakes application developers make, it would have been a portability nightmare. The days when the release of a new system meant rewriting or heavily modifying your application are (in most cases) behind us, and I like it much better this way. (Not to say that good programmers shouldn’t use practices that make their programs generally storage-size-independent, though.)

  6. G. Man says:

    I would generally ignore anything Beer28 says, he is a Linux troll and a poor one at that.

  7. "any change to the integer sizes would result in a mismatch."

    I may be nitpicking here, but I think it's worth pointing out that it was possible to change the size of int when switching from Win16 to Win32 because long was preferred to int in the Windows header files. The trick consisted of keeping long's size unchanged while changing int's size.

    Now long can’t stay 32 bits if int becomes 64 bits because the C/C++ standard says that long’s size must be greater than or equal to int’s size.

    The question is, since LONG seems to be preferred to long in the Windows headers, what about typedef'ing LONG to a 32-bit integer (short?) while int and long become 64 bits? From a semantics point of view it's not elegant (LONG is no longer that long… and I agree this is not negligible!). But is there any major technical problem besides this semantic one?

  8. DrPizza says:

    "The Win64 team selected the LLP64 data model, in which all integral types remain 32-bit values and only pointers expand to 64-bit values. Why? "

    To create gratuitous incompatibility with Unix.

    "If a LONG expanded from a 32-bit value to a 64-bit value, it would not be possible for a 64-bit program to use this structure to parse a bitmap file. "

    So fix the header to not say "LONG" but instead "DWORD" or whatever it should be.

  9. Raymond Chen says:

    DrPizza: One structure down, 20 billion to go. And most of the 20 billion belong to you – the application programmer – not to Windows.

  10. BlackTigerX says:

    and fix the other 3 million structures with the same thing, oh, plus any user made structures…

    I don’t know about that

  11. lowercase josh says:

    Hey, if you don't like it, you can always "#define long __int64" and "#define int __int32" or something.

    Of course then you’ll have trouble using other people’s header files and linking with the C++RT…

  12. AndyB says:

    To take an analogy of perhaps what effect changing the type widths would have, think about the impact of migrating your code from old 8-bit characters to 16-bit Unicode.

    If you think that nearly all existing code is going to run in 32-bit emulation anyway, then all code that wants or needs to be 64-bit should take the trouble to make sure it is properly 64-bit, in which case a ‘better’ choice of type width could have been made.

    IMHO, the current choice seems to be about short-term ease (unless its just a conspiracy to make source compatibility with 64-bit Linux more difficult :) )

  13. Vince says:

    Well if MS had realized this was a problem years ago when doing 64-bit Alpha port, or even with the ia64 port then most of the important structures could have been fixed long ago, and warnings could have been displayed so people could fix their own code.

    It’s not really a compatibility issue, as it only makes a difference when re-compiling code, not when using an existing binary.

  14. Raymond Chen says:

    "most of the important structures could have been fixed". If you don’t fix them all then the cure becomes worse than the disease.

    "short-term ease": Yup, this is a topic I intend to come back to in a few months.

  15. Joe Beda says:

    Hey Raymond,

    "Nobody in their right mind would transfer a pointer across processes: Separate address spaces mean that the pointer value is useless in any process other than the one that generated it, so why share it?"

    The only case I can think of to pass pointer values across processes is for abstract cookies. I know that I've designed APIs in the past where you register something and get a handle/cookie back. That handle/cookie is either a pointer directly or the pointer XORed with some private value. While this is most common in-proc, I can imagine someone doing it cross-proc. Implementing one of these legacy interfaces in 64-bit land means that you have to create a 64->32-bit map where you didn't need a map before. Not insurmountable but not straightforward either.

    Joe

  16. A Cautious Observer says:

    What Raymond is basically saying is that because on-disk and on-wire formats weren't specified with explicit widths like INT32 instead of LONG, they were broken. User programs copied this brokenness, and hence LLP64 had to be chosen because of bad design choices made > 15 years ago.

    Is that a fair assessment?

  17. I agree with Derek Park; I wish Microsoft would drop the typedefs altogether and explicitly use __int8, __int16, __int32, and __int64; remove the ambiguity.

    While we're on the subject of typedefs, who at Microsoft was the genius behind "DWORD64" in BaseTsd.h? This is a contradiction in terms; I guess they never heard of QWORD. Also, why do the new 64-bit types end in "PTR" (e.g. DWORD_PTR)? This is a misnomer since these aren't pointers.

    Derek: Win32 does have explicit typedefs. There are UINT8 to UINT64, INT8 to INT64, ULONG32/LONG32 (actually defined as non-long ints, probably by mistake) and ULONG64/LONG64. They're just relatively new and rarely used.

    Ben: yes, Windows only ever ran on little-endian architectures. There are very few places concerned with endianness in Win32, and they're all #ifdef _MAC (i.e. the Win32 port to MacOS for Office and Internet Explorer).

  19. Swamp Justice says:

    If you are going to use explicitly sized ints, you should use the typedefs defined by the C99 standard: int8_t, int16_t, int32_t, uint8_t, uint16_t, uint32_t.

    Brian: I don't think DWORD_PTR is supposed to be used as an int. It defines the integer type that is large enough to hold a pointer.

  20. Eric Lippert says:

    Joe:

    > Not insurmountable but not straightforward either.

    Indeed — in fact, an interview question I often ask "industry" candidates is to critique such a system, and then describe to me how they would implement such a system in 64 bit land without changing the requirement that the unique cookies be 32 bit integers.

    It most definitely is not straightforward — you can run into problems of security, efficiency, portability, all kinds of stuff. I quite like "open ended" interview questions.

  21. Cooney says:

    Larry,

    Your link is about transferring handles, not pointers. You can transfer handles, but you will need OS support to do it. Pointers are still useless as pointers outside of their process.

  22. Cooney, absolutely – but handles are logically pointers (the HANDLE type is a PVOID).

    Swamp Justice: Revisionist history. The C99 types you’re describing weren’t available in 1985, when many of these structures were finalized. We’re not prescient.

  23. long vs Long says:

    Long in (c++).net is 64-bit

  24. Waleri says:

    If you ask me, fix the header files, don't "fix" the compiler. Many headers now DO use the new DWORD_PTR stuff, so why not simply fix the rest? Gosh, a simple search/replace would do, is that hard?

    BTW, what will be the size of DWORD_PTR in the 64-bit edition? 8 bytes, I guess… so I'll look at the DWORD part and think "4 bytes", and then have to remember the PTR part and say "oh, 8 bytes"… Also, this means UINT and UINT_PTR could have different sizes… well, if you ask me, this is at least confusing… And by the way, WORD wasn't supposed to be a fixed 2 bytes, but thanks to Intel it is…

  25. Phil Weber says:

    Raymond: Care to comment on why the VB.NET team made the opposite decision (redefining "Integer" to 32 bits), breaking VB6 persistence formats, not to mention Win32 API calls? Thanks! :-)

  26. Raymond Chen says:

    Waleri: The reason for DWORD_PTR is so that on 32-bit systems it stays DWORD. Does C99 have a "pointer that is the same size as an integer" type?

    Phil: I am not qualified to comment on VB.NET design principles.

  27. Doug says:

    Long time reader, first time poster. love the show.

    I don't really buy Raymond's initial argument that the definition of a bmp file (or any file format) should in some way define the size of data types in an OS. Why not create a new data type for 64-bit quantities, higher-precision reals, etc.? (There are plenty of Windows specials anyway – DWORD, for example.)

    As to RPC and DCOM, why isn't data transmitted between these in some network (architecture-independent) format?

    Well, reading through the above post before sending, I can see that you’ve got to work with what you’ve got.

    Were there any other possibilities? A couple I can think of right now are: a) new 64-bit data types, to keep RPC, file, etc. formats valid; b) versioning in RPC and file formats to allow on-the-fly conversion (e.g. bmps translated in the app layer [maybe with help from a library] plus a new "64-bit bmp format", RPC, etc. translated in the OS subsystem); c) ?

    Any thoughts on these? Would they have been considered?

  28. mpz says:

    "What about other issues like endianness and alignment?"

    Not to mention input validation. An attacker can inject unexpected values into a data file and crash the program / gain privileges that way.

  29. Brent Dax says:

    I can understand why you wouldn’t want to use INT32-type things everywhere–that would mean a crapload of search-and-replace when you were porting to 64-bit. But why not say that you should use INT{8,16,32,64} in serializable structures, and {SHORT,INT,LONG} otherwise?

  30. Raymond Chen says:

    Brent: Are you saying that existing structures should be retrofitted to use the INT<n> types? But that would violate the "don’t break existing 32-bit code" rule. Consider:

    struct something {
        INT a;
        LONG b;
    };

    becomes

    struct something {
        INT32 a;
        INT32 b;
    };

    Great, the underlying type of "b" changed from "signed long" to "signed int" -> build breaks.

  31. Alan De Smet says:

    For better or worse, I worked on a product that passed a HANDLE across processes. Specifically, we wanted to separate a web browser plugin implementation into its own process for stability reasons. (Said plugin used OpenGL. At the time hardware acceleration could be a bit sketchy; driver problems were all too common.) So we (from memory) sprintfed the HANDLE for the plugin's window into a "%d" and passed it into the child process on the command line. Said plugin was cross-platform; a similar trick was done on Linux/X-Windows. I'm fuzzy on how the MacOS9 version did it. The entire thing seemed overly clever to me, but it worked like a charm.

    This got me thinking. My understanding is that a HANDLE is (handwave) a void pointer. So its quietly turning into a 64-bit value could cause problems; I suppose it depends on our code's ability to write a 64-bit integer and read it back.

    Of course, in this particular case it’s moot; the company went under and the code is basically dead. It will almost certainly never be compiled into a 64-bit binary, so things should keep Just Working. (crosses fingers)

  32. Waleri says:

    >> The reason for DWORD_PTR is so that on 32-bit systems it stays DWORD. Does C99 have a "pointer that is the same size as an integer" type?

    Yes, but on 64-bit systems DWORD_PTR would be misleading. Why do I need a "pointer that is the same size as an integer" type in the first place?

    >>> I guess I don’t understand what your proposed "DDWORD" type would be used for, different from the existing UINT64 type.

    No difference; DDWORD/QWORD/UINT64 are all the same. But the point is which one should be used – INT/LONG or WORD/DWORD/QWORD/WHATEVER128 – in structures like bitmap headers, etc.

    As for the DCOM issues, why not simply create a 32bit stub for 64-bit platforms and a "native" 64-bit implementation?

  33. Raymond Chen says:

    "Why do I need ‘pointer that is same in size as integer’ type in the first place?" -> Look through your Platform SDK header files and you’ll see plenty of reasons.

  34. Ray Trent says:

    Frankly, not breaking existing source code seems like a pretty pathetic goal, seeing as how it’s pretty much doomed to failure anyway. Having to go in and cut/paste some types or even just typedef for compatibility wouldn’t take anyone very much time. It’s not like any sane person would expect they could reduce their testing requirements because "it just compiled" anyway, right?

    The most egregious example of this I can think of is the system32 directory.

    While it won’t break existing code (mostly), we’re going to end up being saddled with 64-bit DLLs being in System32, and 32-bit DLLs being in WOW64 for a very, very long time (there’s only one more doubling of bit-size needed before we can individually label every subatomic particle in the universe)…

    Ok, some bad programmers would have had to have changed 1 constant somewhere in their code to fix this. Was it worth it?

  35. Ray,

    I love this. Up until today, all the arguments on Raymond's blog were about how it was stupid for Microsoft to jump through hoops to make existing binary applications work.

    The argument usually went "Why don’t we just force the developers to recompile their stupid broken applications and ship a new one?".

    Now that the issue is not revising source definitions in the header files, the claim is that we should stop those apps that used these types from compiling.

    Ah, the irony.

  36. Raymond Chen says:

    Suppose you installed the latest header files and nothing compiled any more. Even code that was previously perfectly legal. (In other words, you’re innocent!) But now you have to go and upgrade to Win64 every window procedure, every call to SetTimer, every WM_NOTIFY handler, every owner-drawn listbox and menu… even though your program has no intention of being a 64-bit program.

    How would you react? Would you say, "Thanks, Microsoft! After four days of effort, I'm finally back to where I was, with no perceptible benefit to me! Too bad I can't use MFC's class builder any more – it spits out code that doesn't compile any more. And the code samples in all the magazines I own and web pages I visit don't work any more, including this function I just copied from a magazine without really understanding how it works but it sure does the job…"

    Or would you say, "Heck, for all this effort I could’ve ported it to OS/2."

  37. pete diemert says:

    Hear, hear! Hats off to Ray and Larry for weathering the storm! Just wanted to toss in some quick kudos to the MS folks who have worked VERY hard over several Windows releases to keep our favorite apps up and going. In an earlier comment a disgruntled developer answered the question of why MS would make this decision about LLP64 with:

    "To create gratuitous incompatibility with Unix"

    I will gratuitously suggest that if by this comment he means preserving compatibility with legacy apps between Windows releases, then I give a resounding three cheers to people like Ray who help keep the wheels in motion with this little "incompatibility".

  38. Waleri says:

    Well, what about this:

    SomeStringFn(LPTSTR) turns to

    SomeStringFnA(LPSTR) and

    SomeStringFnW(LPWSTR)

    So why not

    SomeIntegerFn(UINT) to become

    SomeIntegerFn32(UINT32) and

    SomeIntegerFn64(UINT64)

  39. Waleri says:

    P.S. – same to be applied to structures, etc

  40. Ben Hutchings says:

    Raymond: C99 specifies intptr_t and uintptr_t as optional type aliases for signed and unsigned integer types large enough to hold a pointer. (They are optional because there may not be large enough integer types.) Even VC++ has definitions for them now.

  41. foxyshadis says:

    API/header bloat, probably. A few entries back Raymond mentioned a similar scenario, and the test matrix nightmare that would ensue. (That was for adding a flag, but same diff.) Not to mention the age-old documentation question – when do you use 32, when do you use 64, when do you use generic "I don’t care" for best portability?

  42. I can believe the application problems. How come, then, doesn't Windows make LONG a typedef to int, and allow the (naked) long to be 64-bit?

    Is it because changing both LONG and long to 64 bits would break tons of stuff, while changing LONG to 32 bits (say, typedef to int) and then making long 64 bits will break somewhat less, but still plenty, of stuff?

    Unix programmers haven't been able to assume sizeof(long) == sizeof(int) for a very, uh, long time now. Otherwise nothing will compile on many interesting platforms. With Linux these days, and other Unixes quickly dying out, that might not be the case for long, though…

  43. Beer29 says:

    Hi, I was the belligerent poster who originally made the comment about type widths staying the same.

    Since I'm porting a lot of the MFC defs from windef.h to a new Linux lib I'm spearheading, I'm pretty familiar with the various millions of types declared in that lovely document.

    <quote>Notice that in these inter-process communication scenarios, we don’t have to worry as much about the effect of a changed pointer size. Nobody in their right mind would transfer a pointer across processes: Separate address spaces mean that the pointer value is useless in any process other than the one that generated it, so why share it?</quote>

    GlobalAlloc(), mapped files, ATOMs across instances, named pipes, etc.

    IPC with memory handles wasn't uncommon in my now-defunct Windows programming style. Of course I would pass handles for the API, and not the actual paged addresses, because of protected-mode annoyances like page protection, etc.

    Also, I don't even know if a paged address from one process would even be the same address for another process, because of the context switching in the kernel, etc., and the restoring of the registers from the different process's LDT in the GDT. I'm pretty sure the addresses are absolute 32-bit values in the virtual page table, but who really knows; maybe there is some effect from a process's registers when they are restored to execute the slice.

    Who the heck knows, and nobody will ever know, because Windows is closed source. I am happy because now I can flip open the kernel source and voyeuristically peer in to my heart's content. I even have handbook guides to help me along. Thanks Linus, you da man.

  44. Raymond Chen says:

    "I don’t even know if a paged address from one process would even be the same address for another process because of the context switching in the kernel etc" -> ?? Processes have separate address spaces. An address in one process is meaningless in any other process. So asking whether it’s the "same" is like asking if my phone has the same telephone number in a different area code.

  45. Waleri says:

    Raymond, it seems we're talking about different things. Yes, so-called LLP64 will perfectly preserve the structures, but my point is that structures should be updated in a manner such that they won't depend on INT/LONG size, so that such preservation is no longer an issue. Anyhow, this is plain theory, since for backward compatibility reasons we'll be stuck in the 32-bit world forever, due to mixing fixed with non-fixed datatypes, as in HIWORD(lParam).

    Presumably, WinFX will be free of these issues, but all the problems you mentioned in your first post will remain – how applications written in Win32 and WinFX will share memory, etc…

  46. Raymond Chen says:

    "structures should be updated in a manner that they won’t depend on INT/LONG size" -> The cost here is that by changing every structure in the system from "LONG" to "int32_t", you break existing perfectly legal code. Do this too much and people say, "Obviously Microsoft has an ulterior motive in making widespread breaking changes to Win32 and forcing people to rewrite their Win32 code – they are intentionally making Win32 programming so difficult that people will give up and switch to WinFX."

  47. Raymond Chen says:

    I wouldn’t call them "bad" design choices. Who could have predicted that in the next 15 years that the system you were designing would have to *remain source code compatible* with a processor with four times the register size? (OS/2 was the operating system "for the ages"; Windows was just a toy.)

    Brian Friesen: The _PTR suffix means that the integer has the same size as a native pointer. I.e., sizeof(X_PTR) == sizeof(void*). It’s explained in MSDN. http://msdn.microsoft.com/library/en-us/win64/win64/the_new_data_types.asp

    (The SDK can’t define new types beginning with __; those are reserved by the C and C++ language standards.)

  48. Beer29 says:

    The cost here is that by changing every structure in the system from "LONG" to "int32_t", you break existing perfectly legal code.

    Legal in which context? LONG doesn't exist in C++ or C; none of the typedefs in windef.h exist in the standards. So really, when you use these types in the first place, which is accepted as the norm in Windows programming, you are asking for it not to be portable.

    If you really wanted to match windows types for portability, could you just doctor windef.h and that would be the end of it?

    All those types are a bunch of fooey anyway.

  49. Raymond Chen says:

    By "perfectly legal code" I meant of course "perfectly legal Win32 code."

    Look at the "struct something" example from earlier today. No matter how you define "int32_t", you will break either "a" or "b", because one of them is derived from "int" and the other is derived from "long". Whichever one you pick for "int32_t" you will break the other.

  50. Beer29 says:

    <quote>Look at the "struct something" example from earlier today. No matter how you define "int32_t", you will break either "a" or "b", because one of them is derived from "int" and the other is derived from "long". Whichever one you pick for "int32_t" you will break the other.</quote>

    But how could picking one ever break the other for legacy code?

    Legacy Win32 code will always have LONG typedef'd from long in winnt.h, so in the 32-bit VC compiler context, that's always a 32-bit value.

    Having LONG be 32 bits in a 64-bit compiler context, where the "long" compiler type is possibly 64 bits wide, is certainly a little confusing, but at least it wouldn't break anything. I think that's what you guys ended up doing, from reading the first post on this blog here: just keeping the original windef.h and winnt.h widths.

    so

    typedef int INT;

    typedef INT LONG;

    Another thing you could do, from the compiler perspective, is make 32-bit and 64-bit pragma blocks, where the actual "long" C type is 32 bits in the 32-bit pragma block and 64 bits in the 64-bit pragma block, like it is in Java, with 32-bit ints and 64-bit longs.

    #pragma win32

    // BLOCK

    #pragma win64

    like that. I’m guessing you guys already built that into the preprocessor and compiler.

    You could actually just have the preprocessor go through and macro-change "long" to "int" in the Win32 code blocks within that pragma directive, so it wouldn't even require a compiler change per se.

    At any rate, with the GNU tools you’re responsible for making your own abstract types of any kind, so ultimately you, yourself have to change them. This is my situation now, so I’m focusing on that.

  51. Beer29 says:

    <quote>legacy Win32 code will always have LONG as typedef’d from long in winnt.h, so in the 32 bit VC compiler context, that’s always a 32 bit value. </quote>

    I mean that for a 32-bit C++ compiler. It could be different for a 64-bit compiler, in which case you could do preprocessor replacements before you start lexing/parsing/compiling the code.

  52. Beer29 says:

    Actually, if the preprocessor went through and macro-replaced all the LONG, ULONG to INT, UINT, and long to int, all the 32-bit code blocks would work fine;

    then when you would call a function from a 64 bit block with a LONG return type, it would be a type mismatch.

    So ultimately the compiler would have to be involved, smart converting types between 32 and 64 bit blocks.

    That's why they pay you guys the big bucks, though, right?

  53. Beer29 says:

    For calls from 32-bit pragma blocks into 64-bit-block functions with 64-bit-wide return types, have the compiler issue data-loss warnings for 32-bit truncation.

    If people ignore them at least you tried. Other than that I think it would be ok.

    If they really want the whole 64 bits, they move the func out of the 32 bit block into the 64 bit.

    For those that don't need the extra width, they can keep coding as usual with the win32 block pragmas and pretend AMD64 was never released.

  54. Raymond Chen says:

    If you want LONG to be a 64-bit integer when compiled on a 64-bit machine, then you have to figure out how to change the definition of "struct something" so that the following legal Win32 code compiles cleanly and operates identically both as 32-bit and as 64-bit:

    something s;
    fread(&s, sizeof(s), 1, fp);
    int i = s.a;
    long l = s.b;

  55. Beer29 says:

    I see, I was thinking of 32- and 64-bit versions of the API as well in whichever block: the 32-bit pragma block retains the 32-bit versions of stdlib.h or cstdlib and the rest of the API outside the standard libraries.

    I realize that would be next to impossible for you to accomplish though.

    If you’re going to use the same system dll API for both the 32 and 64 bit blocks it wouldn’t work.

    I’m going to see how GNU handled this. I don’t have a 64 bit chip so I haven’t been interested but I bet they came up with a crafty solution.

  56. Beer29 says:

    http://64.233.161.104/search?q=cache:X6vyrlUbMJkJ:gcc.fyxm.net/summit/2003/Porting%2520to%252064%2520bit.pdf+x86_64+gcc+64+bit+long+types&hl=en

    <quote>4.2.1 int vs. long

    Since the sizes of int and long are the same on 32-bit platforms, programmers have often been lazy and used int and long interchangeably. But this will not work anymore with 64-bit systems, where long has a larger size than int.

    Due to its size a pointer does not fit into a variable of type int. It fits on Unix into a long variable but the intptr_t type from ISO C99 is the better choice.</quote>

    Well, I guess this will kind of suck at first, but amd64 does have 32 bit compatibility mode, and it’s better to stick to standards. I think they did the right thing.

    Not having all those typedefs to same-width types, as in winnt.h and windef.h, is probably going to help gcc/g++'s case along when it comes to this switch.

    It’s been this way with java since the jdk1.1, so it’s not a new concept.

  57. Raymond Chen says:

    RPC/DCOM do use an architecture-independent format.

    One of the goals of the Win64 design is *not to break existing 32-bit code*. If structures changed from, say, LONG to int32_t, you would have build breaks like

    error: assigning signed long to signed int.

    on compilers that are strict about int/long separation.

  58. LP64 vs. LLP64 (aka Unix64 vs. Win64) http://www.unix.org/version2/whatsnew/lp64_wp.html Getting Ready for 64-bit Windows Why did the Win64 team choose the LLP64 model?…

  59. Doug says:

    "RPC/DCOM do use an architecture-independent format."

    So why would changing the sizes of the data types in the OS affect these protocols? I.e., I don't think this is a valid argument.

    "One of the goals of the Win64 design is *not to break existing 32-bit code*."

    Given this, I can certainly see why they made the decision they did, then. Can someone clear something up for me – are we talking about the 64-bit version of XP, or are we talking about Longhorn? (RTFA, or find one, is a valid answer!)

    If Longhorn, then I was under the impression that apps had to be recompiled for this new OS anyway. Please correct me if I’m wrong.

    Finally, one last question. Given Raymond’s answers, why couldn’t they have gone the route of a new set of data types? You went from WORD to DWORD. Why not DDWORD, etc.? Adding new types would not break any existing code at all.

  60. Raymond Chen says:

    I’m not talking about wire formats. I’m talking about structures in header files.

    And I’m talking about 64-bit Windows in general, not tied to a specific release – Windows XP 64-bit, Windows Server 2003 64-bit, etc.

    I don’t see how inventing a new data type helps you fix existing structures. You can’t touch them carelessly without breaking source code compatibility.

    Besides, there *are* new types, like the INT{8,16,32,64} mentioned above. So I’m not sure why you’re saying the Win64 designers should have invented something that they already invented.

    I guess I don’t understand what your proposed "DDWORD" type would be used for, different from the existing UINT64 type.

  61. Ray Trent says:

    Now that I think of it, why didn’t MS just add a "WIN32_COMPATIBLE" flag to the compiler that kept all the sizes the same, while by default letting the type sizes float to ones that make more sense for the processor architecture?

    Surely the effort of typing 24 characters wouldn’t be too much to ask *even* of people too lazy to have programmed their code correctly in the first place…

  62. Raymond Chen says:

    Okay, consider: You download the latest Platform SDK, recompile your program, and you get all these errors. Is your reaction:

    (a) Gosh, I’d better go through and modify my 50,000-line program to work with these new 64-bit compatible structures.

    (b) !@#$!! Microsoft, why do they go around breaking perfectly good code? I’m not going to port to 64-bit Windows any time soon, why do I have to go through and modify my 50,000-line program to be compatible with something I don’t care about?

    "Why not a WIN32_COMPATIBLE compiler flag?" -> You might have a different opinion of this approach after you spend four days tracking down a problem caused by somebody #define’ing this flag in one header file (but not another), causing two structure definitions to mismatch.

    The Win64 team went through multiple proposals before settling on the one they chose. I experienced the pain of previous attempts that tried some of the things people have been suggesting. It was not fun. "Hey, I’m making a checkin to winbase.h that *prevents all of Windows from compiling*." You don’t make friends that way.

  63. Ray Trent says:

    Ummm, actually, that argument always was BS. The reason not to break binaries is that you’re hurting the wrong people. You’re hurting end users that weren’t to blame for the poorly written code in the first place. Not only does this win you no customers, it punishes the innocent.

    Breaking the compile punishes the guilty. Hopefully enough that they get out of the business. Darwin is way too dead in the modern world as it is.

    I wonder, though, about this decision regarding System32… who exactly was that supposed to protect, and from what? It’s *more* likely to break binaries (of apps that made some path assumptions that are now broken), but *less* likely to break recompiles…

    Does this have anything to do with Steve tromping around yelling "DEVELOPERS!"? :-)

  64. Alex Blekhman says:

    "Ah, the irony."

    Actually, the whole irony is that DWORD was broken from the beginning for the 32-bit architecture. The DWORD macro is just a leftover of the 16-bit world, where DWORD had (just for once) its true meaning: a doubled processor word, i.e. 32 bits. The processor word under IA32 is 32 bits, so DWORD should be 64 bits. The sad story of WPARAM and LPARAM is the first thing a beginner learns from Petzold’s book and the PSDK documentation.

    As a matter of fact, I have great admiration for the MS developers, who succeeded in maintaining backward compatibility for code that can be 15 years old. And still, if written carefully, it will compile and *work* flawlessly today. It suggests the highest rank of professionalism among these developers. I just take my hat off to you, guys.

  65. Aaargh! says:

    "Ben: yes, Windows only ever ran on little-endian architectures. There’s very little places concerned with endianness in Win32, and they’re all #ifdef _MAC (i.e. the Win32 port to MacOS for Office and Internet Explorer)"

    Wasn’t IE-Mac a completely different codebase/engine than IE/Win32 ?

  66. bobsmith says:

    What about creating new header files that fix the mistakes made when the current ones were made? A winbase_new.h would allow for new projects to use fixed headers while older projects can use winbase.h until someone decides to update them.

  67. Raymond Chen says:

    Creating two versions of the structure completely misses the point. The whole point of the exercise is to ensure that structures *stay the same* between 32-bit and 64-bit. Otherwise 64-bit code wouldn’t be able to read BMP files created by a 32-bit program.

  68. Brent Dax says:

    "Are you saying that existing structures should be retrofitted to use the INT<n> types?"

    No, I’m saying that the designers of Win32 should have anticipated that one day there might be 64-bit architectures that people would want to run Windows on, and designed the type system with room to expand without massive code breakage. Especially since they were in the middle of a 16-to-32-bit change!

    If WinFX is a complete replacement for Win32 (I’m a little fuzzy on this), I hope it *is* being designed this way, with separate "{small,medium,large} integer" and "{16,32,64}-bit integer" types.

  69. Alex Blekhman wrote:

    "Actually, the whole irony is that DWORD was broken from the beginning for the 32-bit architecture. The DWORD macro is just a leftover of the 16-bit world, where DWORD had (just for once) its true meaning: a doubled processor word, i.e. 32 bits. The processor word under IA32 is 32 bits, so DWORD should be 64 bits. The sad story of WPARAM and LPARAM is the first thing a beginner learns from Petzold’s book and the PSDK documentation."

    Hmmm… are you sure about this?

    I always heard it like this:

    4 bits = nybble

    8 bits = 1 byte

    2 nybbles = 1 byte

    2 bytes = 1 word

    … and the logical extension from there is 2 words = 1 dword, 4 words = 1 qword.

    *shrugs* I don’t recall ever hearing that the length of a word was CPU specific.

  70. Waleri says:

    >>> *shrugs* I don’t recall ever hearing that the length of a word was CPU specific.

    BYTE is always 8 bits

    WORD is the largest number of BYTEs the CPU can process at once. When the first Intel CPUs became popular, a WORD was equal to two BYTEs, and when the CPU word began to grow, WORD remained 2 bytes, for the same reasons we’re discussing here now – everybody *knew* that a WORD is two BYTEs, and changing that would break too many things.

  71. Ben Hutchings says:

    KJK::Hyperion wrote: Windows only ever ran on little-endian architectures. There’s very little places concerned with endianness in Win32, and they’re all #ifdef _MAC

    Did I hear wrong about the X-Box Next running on a PowerPC in big-endian mode, then?

  72. Adrian says:

    So what’s the type of size_t and ptrdiff_t when compiling 64-bit? These have to be (at least) 64-bit quantities. If long and unsigned long–the largest integral types defined by the C standard–aren’t big enough for these quantities, how can they be defined in a standard and useful manner? What will happen when you use sizeof (which is a size_t by definition)?

    Raymond wrote: "Note however that converting a program from Win16 to Win32 typically resulted in two unrelated codebases (or a single file with a LOT of #ifdef’s) because the Win16->Win32 shift was so huge."

    I disagree. Win16->Win32 was painful only if you had sloppy Win16 code. Message packer/cracker macros handled the different packing schemes for the parameters. Quicken had simultaneous 16- and 32-bit versions from the same source with minimal #ifdefs (and no Win32s). In fact, for a while we built 16-bit with Borland and 32-bit with Microsoft compilers.

    How big are HANDLEs (kernel and GDI) and WPARAMs in Win64? When compiling with STRICT, GDI HANDLEs are defined as pointers to different types so that the compiler can do stricter type checking. To do that in Win64, then HANDLEs would have to be 64-bits, like a pointer. Both WPARAM and LPARAM have to hold HANDLEs from time to time as do LRESULTs, so are they all 64-bit?

    I never understood why the SDK came up with VOID, CHAR, LONG, etc. Why not use the standard keywords if the underlying types could never be changed? I agree with the earlier commenter. Size-specified types should be used for persistent formats and "on the wire", but everything else should use the size-neutral types.

    A byte is not 8 bits. A byte is the size of the smallest directly-addressable unit of memory. On most modern processors, it happens to be 8 bits. That’s why RFCs use the term octet to be unambiguous and TeX can be compiled on machines that have bytes as small as 6 bits.

    A word is not 16 bits. A word is the natural size of the processor’s arithmetic unit. If you’re a C programmer, this is an int. Typically this was also the size of an address, but that seems to be changing as we move to 64-bit machines. On current processors, a word is typically 32-bits, but the term has been abused to the point that it’s now ambiguous.

    DWORD, as far as I can tell, was coined by Microsoft to mean a double word. But it stuck at 32 bits, since words were 16 bits when the term was introduced.

    A quadword is four words. On a 32-bit machine, this should mean 128 bits, not 64. If my memory is correct, VAX/VMS got this right. I wonder what it was on Alpha/VMS.

  73. Raymond Chen says:

    I don’t know who invented DWORD, but Windows got it from Intel assembly language.

  74. Simon Cooke [exMSFT] wrote:

    "*shrugs* I don’t recall ever hearing that the length of a word was CPU specific."

    Yes, it’s very common to think that a word relates to a number of bytes rather than to the architecture. It comes as a surprise to a lot of people. Actually, the more correct term is "machine word", since the size of a word is determined by both the CPU and the data bus. The strict definition is: the size of a machine word is the number of bits that the machine can operate on as a unit. Usually hardware designers tend to build well-balanced systems, so they make the data bus wide enough to transfer a CPU register at once. Therefore, most of the time the machine word is equal to the CPU register. Here’s additional info: "Understanding Intel Instruction Sizes" (http://www.swansontec.com/sintel.htm).

    As Waleri already explained it (http://weblogs.asp.net/oldnewthing/archive/2005/01/31/363790.aspx#364195), success of PC (which was 16-bit then) was so tremendous that terms of that era became engraved in people’s memory.

  75. Waleri says:

    <Quote>

    structures should be updated in a manner that they won’t depend on INT/LONG size" -> The cost here is that by changing every structure in the system from "LONG" to "int32_t", you break existing perfectly legal code.

    </Quote>

    Yes, but that will occur *only* when one recompiles with WIN64 as a target. If the compiler has a switch for both WIN32/WIN64 as a target, it would be up to the developer to decide whether to compile with the new settings and face the consequences or not.

    <Quote>

    something s;

    fread(fp, &s, sizeof(s));

    int i = s.a;

    long l = s.b;

    </Quote>

    Good example of bad code. Aside from the sizeof(s) vs. 4 problem, there are also little-endian/big-endian problems with the variables.

    Many years ago, part of the Microsoft C 6.0 (or was it 7.0?) documentation was a little book on how to write applications in a Win32-ready manner. I think all these issues were covered there. I think it is time to reprint this manual for Win64… well, maybe it is a little too late :) Even today, the compiler has a switch to warn about W64 portability problems – I just wonder how many people here use it (I don’t :)

    My point is that years ago, when the dilemma was whether INT should be 16 or 32 bits, the decision was to make it 32, so why a different choice now?

    It seems that people never learn from their mistakes. We had 16 vs. 32; now we have 32 vs. 64; soon we’ll run into 64 vs. 128… or I guess it will be 32 vs. 128… We had the Y2K problem; in a couple of decades we’ll run into the time_t problem (somewhere around the year 2038). I understand nobody’s perfect. No one can predict everything, but we are now talking about problems we have already encountered before and still haven’t solved. Instead we try to find a workaround and postpone the problem.

  76. Ben Hutchings says:

    Wow, so many misconceptions I hardly know where to begin.

    Raymond wrote: The SDK can’t define new types beginning with __; those are reserved by the C and C++ language standards.

    They’re reserved to the implementation, which MS dictates at least part of (for instance, the sizes of fundamental types). Don’t tell me the Platform SDK people have suddenly developed a concern for namespace pollution after years of wantonly defining macros with no prefixes.

    Raymond wrote:

    If structures changed from, say, LONG to int32_t, you would have build breaks like

    error: assigning signed long to signed int.

    on compilers that are strict about int/long separation.

    Such compilers are, so far as I’m aware, a figment of your imagination. Such conversions are entirely legal. Conversion of pointer types (long * to int *) is a different matter, admittedly.

    Simon Cooke wrote: *shrugs* I don’t recall ever hearing that the length of a word was CPU specific.

    I think I’m going to add this to my random signature collection.

    Waleri: BYTE is always 8 bits.

    A byte was originally the unit of storage used for character codes. As such it has varied between about 6 and 12 bits on different systems; in fact the PDP-10 allowed software to determine the size of a byte. C and C++ require at least 8-bit bytes, however, and 8 bits has become the de facto standard – yet, with the increasing use of Unicode, it is normal to use 16 or 32 bits for a character code. For precision one should use "octet" to mean a group of 8 bits.

  77. Raymond Chen says:

    Note however that converting a program from Win16 to Win32 typically resulted in two unrelated codebases (or a single file with a LOT of #ifdef’s) because the Win16->Win32 shift was so huge.

    One of the goals of the Win32->Win64 transition is that you can write a program once, *without any ifdef’s* (well okay maybe one or two), and have it compile and run as both a Win32 program and a Win64 program.

    Another goal was that existing Win32 code should remain valid. (Even if it wasn’t Win64-compliant, it should still be valid Win32 code.)

    How well do these alternate proposals hold up in the face of these two constraints? Just saying, "That’s bad code" is a cop-out. Whether you like it or not, there’s a lot of bad code out there.

    (I don’t know what to make of the suggestion that sizeof(LONG) < sizeof(long) on Win64. Surely if the Win64 team came up with such a model you would point to it as proof that Microsoft developers are morons.)

    bobsmith: Two versions of the header file with different definitions for types, structures, and functions creates the "incompatible libraries" problem. Suppose you have two code libraries, an older one that uses the old definitions, and a newer one that uses the new definitions. Your program needs to use both libraries. What do you do? Whichever one you pick, you’ll be incompatible with the other one.

    (Ben: __ is reserved for the implementation, and the Platform SDK is not the implementation. It’s just a header file in the application namespace.)

  78. T says:

    From this

    http://www.unix.org/version2/whatsnew/lp64_wp.html

    I get the impression LP64 might be as rational a choice for Unix as LLP64 is for Windows.

    I guess it’s because Unix APIs tend to work on the assumption that sizeof(long) >= sizeof(void *). Unix code has to worry about endianness issues, multiple compilers and so on, and is less likely to write C structures to disk with a single fwrite call.

    On the other hand in 64 bit Windows with the LLP64 model, code that tries to fit a pointer into an int will break at compile time, which is easy to fix. But there is lots of application code that makes implicit assumptions about types when it writes structures to disk, and if you changed sizeof(int) you’d break it silently at run time.

  79. doug says:

    ok, my last post didn’t get through.

    But thanks for your answers and time Raymond, and everyone else too. Definitely one of the more interesting posts. All those for more of the same technical stuff say aye. Passed.

  80. Waleri says:

    I guess to avoid similar problems in the future, it would be nice to have a warning in the compiler about assigning nonfixed data type to a fixed one. Warning should be generated even if sizeof(UINT) >= sizeof(UINT32)

    UINT src;

    UINT32 dst;

    dst = src; // Produce a warning

  81. Martin Liversage says:

    While you are busy porting from Win32 to Win64, please prepare for Win128 to avoid the hassle next time. ;^)

  82. DrPizza says:

    "DrPizza: One structure down, 20 billion to go. And most of the 20 billion belong to you – the application progammer – not to Windows. "

    But I only have to care if I recompile in 64-bit. If I don’t, the old 32-bit compiled-in sizes are maintained.

    And if I’ve written a program that assumes that longs in structs are a particular size then I’ve probably also written a program that has other 64-bit portability issues anyway. Which I’ve got to fix anyway.

    So what’s the "win"?

  83. T says:

    @Adrian

    "How big are HANDLEs (kernel and GDI) and WPARAMs in Win64?"

    64 bit. Obviously ;-)

    I guess on Win128 they’d be 128-bit.

    "I never understood why the SDK came up with VOID, CHAR, LONG, etc. Why not use the standard keywords if the underlying types could never be changed?"

    Because the underlying types may change, I guess; the typedefs could be altered to remain the same size as the platform and compilers changed.

    "DWORD, as far as I can tell, was coined by Microsoft to mean a double word."

    I think DWORD meant a 32-bit integer in Win16, and in Win64 it still means the same thing. If an API function needs a DWORD parameter to store pointers, the parameter type changes to DWORD_PTR, which is pointer-sized. So rather than criticising them for not breaking third-party code, why not praise them for fixing their own APIs?

  84. Raymond Chen says:

    DrPizza: Paranoia is a good thing when it comes to changing something that hasn’t changed in over 20 years. There are millions of lines of code that use, for example, the BITMAPINFOHEADER structure. Can you prove that none of them will break if the "LONG biWidth" changes to "INT32 biWidth"?

    I remember seeing a compiler that raised a warning if you did

    long l;

    int i = l; // warning: nonportable – potential truncation

    If you change LONG to INT32 then code that went

    long l;

    bmih.biWidth = l; // warning raised here

    will not get a warning when they didn’t before.

    I guess we could have invented "LONG32", which then would create the strange situation that on 64-bit machines, "LONG32" isn’t a "long".

    Like I said, many possibilities were considered. Perhaps in your opinion we should have taken the greater risk and chosen a model that would have required more work to convert a program to Win64, hoping that people would perceive the extra work as worth the hassle.

    Note that it wasn’t until Windows 95 that people finally perceived the extra work as worth the hassle to port from Win16 to Win32! 64-bit Windows has been available since Windows XP 64-bit Edition – do you see many people porting to Win64? Shouldn’t the goal be to make it easier to port to Win64, not harder?

    "And if I’ve written a program that assumes that longs in structs are a particular size then I’ve probably also written a program that has other 64-bit portability issues anyway. Which I’ve got to fix anyway."

    But you don’t have to fix them if you have no intention of porting to Win64. See above.

  85. DrPizza says:

    "DrPizza: Paranoia is a good thing when it comes to changing something that hasn’t changed in over 20 years. There are millions of lines of code that use, for example, the BITMAPINFOHEADER structure. Can you prove that none of them will break if the "LONG biWidth" changes to "INT32 biWidth"? "

    Er… if you’re not recompiling it doesn’t matter what the structure changes to, because you’re not recompiling. If you are recompiling, then at the absolute worst you’ll get a compile-time error (because the compiler doesn’t know what an INT32 is), which you can fix.

    "Like I said, many possibilities were considered. Perhaps in your opinion we should have taken the greater risk and chosen a model that would have required more work to convert a program to Win64, hoping that people would perceive the extra work as worth the hassle. "

    What greater risk? The only things that’ll change are programs rebuilt as 64-bit binaries, and they fall into two categories already:

    programs that need fixing to become 64-bit clean (in which case fixing the structure makes their job no harder)

    programs that are already 64-bit clean (in which case fixing the structure makes their job no harder)

    Except thanks to this decision, there are vanishingly few programs in the latter category. If LP64 were picked, at least all the cross-platform scientific/maths/etc. programs would be 64-bit clean (or very nearly so).

    "64-bit Windows has been available since Windows XP 64-bit Edition – do you see many people porting to Win64? Shouldn’t the goal be to make it easier to port to Win64, not harder? "

    Maybe if XP 64 were (a) available on something other than Itanium (the x86-64 version still isn’t out…), (b) not crippled (as the Itanium version omits lots of features the 32-bit version has), and (c) useful (as there’s next to no software that benefits from Itanium or 64-bit that you’d want to run on WinXP), we’d see more Win64 uptake.

    "But you don’t have to fix them if you have no intention of porting to Win64. See above. "

    But if I’ve no intention of porting, it doesn’t matter ANYWAY because the definitions I’m using will be the same as they always were, because they’re compiled into the program, which will be running under WoW64.

  86. Raymond Chen says:

    "But if I’ve no intention of porting, it doesn’t matter ANYWAY because the definitions I’m using will be the same as they always were"

    No, because your proposal changes LONG to INT32. The type changed. You install the latest Platform SDK, recompile your program, and it doesn’t build any more. Does this make you happy or sad?

    "because they’re compiled into the program" – and then what happens when you recompile?

    Most people expect that installing the latest Platform SDK will not introduce build breaks.

  87. Ebbe Kristensen says:

    Raymond Chen: "If a LONG expanded from a 32-bit value to a 64-bit value, it would not be possible for a 64-bit program to use this structure to parse a bitmap file."

    The *real* problem is that the most commonly used languages for Windows development do not specify the size of numeric types, other than, say, that the size of a ‘long int’ must be greater than or equal to the size of an ‘int’, etc. And that was of course the reason for inventing DWORD etc.

    Also, I don’t understand why it is a problem to define a DWORD to be 32 bits on a 64-bit system.

  88. DrPizza says:

    "No, because your proposal changes LONG to INT32."

    No, it changes structure definitions that previously erroneously said "long" to say something else (presumably "int").

    My 32-bit program is unaltered anyway (because it treats long and int equivalently, save for truncation warnings).

    My 64-bit program needs careful checking anyway (because I need to make sure I’m not assuming that integral types are big enough to hold pointer types).

    "The type changed. You install the latest Platform SDK, recompile your program, and it doesn’t build any more. Does this make you happy or sad? "

    Huh? Why would it stop building?

    "and then what happens when you recompile? "

    As 32-bit? Nothing.

  89. Raymond Chen says:

    "Why would it stop building?"

    Because you changed "long" to "int". You yourself noted that doing this will raise truncation warnings. And if you compile with "treat warnings as errors" your build is broken.

  90. Kuwanger says:

    As far as the whole Win32 and Win64 compatibility goes, what happens if a programmer does this:

    int temp_buffer[4];

    int *some_int = NULL;

    temp_buffer[0] = (int) some_int;

    And before you say that’s crazy, realize that automatic garbage collector libraries that work in C actually do checks on pointers *and* ints, because they realize it is a real issue that programmers will put pointers into ints and extract them back out as pointers later. As someone else was pointing out, they used %d to print out a handle to another program, yet they should have been using %p. And there’s also the issue of programs that have hardcoded sizes for structures which contain pointers. It’d seem that all the issues with pointers would motivate someone to realize that it’s not possible to keep longs the same size and assume that’d fix all problems. Though I will agree it might motivate people to switch to Win64… until their badly written code starts corrupting/crashing because of assumptions about pointer size as well.

  91. Patrick Bergeron says:

    I am in the process of porting a 32-bit app to Win64. The app is over 15 million lines of code. I’ve been at this for 6+ months.

    Raymond, THANK YOU for LLP64.

    While I’m on the subject, DrPizza, Raymond is absolutely correct when saying that changing "long" to "int" in code introduces build breaks. I’ve experienced it over thousands of lines of broken code, which I would have had to fix.

    Unfortunately for me, I have to port this application to Linux-64 as well, under Mainwin. Since under Linux "long" is 64 bits, the Mainwin headers had no choice but to define LONG as "typedef int". You can’t imagine how broken the Linux builds are right now.

    God, how I wish Windows and Unix had chosen the same model.

    It looks like we will have no choice but to ban the use of the word "long" in our code and enforce it by writing checkin scripts that parse the code and make sure no programmer has used this type.

    I found this site by trying to find a way to tell gcc (linux) to make "long" 32 bits. I found a switch, "-mlong32", but I am not sure how this will break other things, especially when compiling/linking external libraries we use (like STL etc). Any comments?

  92. DrPizza says:

    "God, how I wish Windows and Unix had chosen the same model. "

    Given that unix chose first….

  93. DrPizza says:

    "Because you changed "long" to "int". You yourself noted that doing this will raise truncation warnings. And if you compile with "treat warnings as errors" your build is broken. "

    It ought to only raise truncation warnings when conversion from long to int is a truncation. Which only ought to be the case when compiling for 64-bit. Which needs truncation fixes anyway.

  94. Raymond Chen says:

    Just because things are bad doesn’t mean that we should intentionally make things worse.

  95. Raymond Chen says:

    I already explained the truncation warning here

    http://weblogs.asp.net/oldnewthing/archive/2005/01/31/363790.aspx#364024

    and here

    http://weblogs.asp.net/oldnewthing/archive/2005/01/31/363790.aspx#365523

    and alluded to it here

    http://weblogs.asp.net/oldnewthing/archive/2005/01/31/363790.aspx#364255

    (assuming the reader would pick up on the long-to-int assignment).

    "It ought to only raise truncation warnings when conversion from long to int is a truncation."

    On your compiler perhaps. There is at least one compiler that raises warnings for *potential* behavior, not just actual behavior – as I already mentioned here

    http://weblogs.asp.net/oldnewthing/archive/2005/01/31/363790.aspx#365523

    I find it frustrating that in my comments I keep having to repeat myself.

  96. Ben Hutchings says:

    Raymond wrote: __ is reserved for the implementation, and the Platform SDK is not the implementation. It’s just a header file in the application namespace.

    $ pwd

    /cygdrive/c/Program Files/Microsoft Visual Studio .NET 2003/Vc7/PlatformSDK/Include

    $ grep -E '(# *define|struct) *__' *.h | wc -l

    8844

    The same goes for _ followed by a capital letter, by the way.

    $ grep -E '(# *define|struct) *_[A-Z]' *.h | wc -l

    5380

  97. hippietim says:

    pete diemart – I wish it was only 15 years :)

    As part of my last job on the Windows team I was the Dev Mgr for NTVDM. We still run Visicalc. Wasn’t that released back in ’81?

  98. Michael Smith says:

    I understand why you chose LLP64, but I still have a big reservation about it.

    The C standard guarantees that

    sizeof(short) <= sizeof(int) <= sizeof(long)

    and

    long is the biggest integer type.

    People who have followed the standard may now have their code broken, as long is no longer the biggest.

    Alternatively, we are supporting people who ignored the standard by assuming that sizeof(long) would not change.

    Effectively we are rewarding bad programmers.

    Now I know that you have to live in the real world, and you probably made the right choice, but it still grates on me a bit. :-)

  99. Antoine says:

    This discussion is fascinating.

    Of course MS did not have any other option: there is far too much code out there which assumes LONG <B>and</B> long are 32 bits for anything else to be acceptable; and the coders are <B>customers</B>.

    OTOH, about the choice made by Unices: there, idioms such as:

    printf("foo: %lu\n", (unsigned long)sizeof(foo));

    which, furthermore, is code that took precautions to be maximally portable (among Unices), led them to the only possible solution: ensuring that the assertion size_t <= ulong still holds with 64-bit. Even if it meant breaking a lot of code (written in the ’80s, usually on Vaxen) which incorrectly assumed that int == long (or, more often, long == int32_t). Less code, older, and less vociferous coders: less hassle.

    As someone who writes code intended to be portable (without #ifdefs) between Windows (Win32 and Win64) and Unices, I certainly know this is a minefield. However, I am really much more annoyed by the lack of long long in CL until 2003: this means that for yet a couple of years I will have to "support" the __int64 hack.

    And this leads to Michael’s point: as a consensus end to the long long debate, it was agreed to add the following subclause to the C99 standard:

    <BLOCKQUOTE>

    7.17 Common definitions <stddef.h>

    Recommended practice

    [#4] The types used for size_t and ptrdiff_t should not have an integer conversion rank greater than that of signed long unless the implementation supports objects large enough to make this necessary.

    </BLOCKQUOTE>

    Having closely followed the whole thread that ended here, I do not feel this is actually a reward to bad programmers. I do not believe bad programmers will receive any reward in any case. I rather believe it is just that "not bad" programmers who cared about portability, and who had part of their code broken, were penalized; and the above point was added in order to soften this.

    Since Unix compilers generally follow this recommendation, while MS did not, it only turns out that Unix "not bad" programmers voiced their concerns louder than MS "not bad" ones. Nothing more.
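
    Antoine’s printf idiom relies on the size_t <= ulong assertion he mentions; C99’s z length modifier removes the need for the cast entirely. A minimal sketch, where struct foo is a hypothetical stand-in type:

    ```c
    #include <stdio.h>

    struct foo { char payload[300]; };  /* stand-in type for the example */

    int main(void)
    {
        /* Pre-C99 idiom: cast to unsigned long, valid only while
           size_t is no wider than unsigned long (the assertion the
           Unix vendors chose to preserve under LP64). */
        printf("foo: %lu\n", (unsigned long)sizeof(struct foo));

        /* C99 way: the z length modifier matches size_t directly,
           with no assumption about relative widths. */
        printf("foo: %zu\n", sizeof(struct foo));
        return 0;
    }
    ```

    Both lines print `foo: 300`; the difference is only in what the first one assumes about the type widths.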

  100. Michael Smith says:

    Michael Smith: This is not guaranteed in C99. However, the MS C/C++ compiler implements only a fraction of the changes made in C99, so when compiling for 64-bit targets it’s compliant with neither C90 nor C99, which is a shame.

  101. Antoine says:

    Michael: what do you mean by "guaranteed"?

    Of course a "Recommended practice" is no guarantee! It is rather the contrary, at least to the coder reading the Standard; it means that while it may be customary for him to see this behaviour, this text reminds him he can encounter different situations (yes, Standardese is a strange dialect.)

    In 2005, compilers’ conformance to C99 is not 100% even for the most advanced ones.

    Of course VC does not claim to be among them either. I am reading they are after C++98, which is another piece of cake entirely.

  102. Chris Becke says:

    I don’t know. I’d have thought there’s far too much code out there written by coders who expect

    int len = pszEnd - pszStart;

    or, more generically, for an int to be big enough to hold the result of a (char* - char*).
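
    The hazard Chris describes can be sketched as follows. ptrdiff_t is the type the standard actually gives that subtraction, and on a 64-bit target it is wider than a 32-bit int:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        char buf[100];
        char *pszStart = buf;
        char *pszEnd = buf + sizeof buf;

        /* (char* - char*) has type ptrdiff_t. On a 64-bit target that
           is a 64-bit type, so assigning the result to int can
           silently truncate when the two pointers are more than
           2 GB apart. */
        ptrdiff_t len = pszEnd - pszStart;

        printf("len=%td\n", len);  /* prints len=100 */
        return 0;
    }
    ```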

  103. D says:

    The question should be who came up with these standards? Peter?

    What should have been done was the same thing that was done going from 16 to 32 bits.

    pointer sizes = 16, 32, 64

    int = 16, 32, 64

    long = 32

    longlong = 64

    short = 16

    then of course the int16, int32, int64 variations.

    I support 16- and 32-bit in one code base, and it would have been automatic for 64-bit, but now it’s going to be a hassle coming up with my own typedef’d data types.
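
    For what it’s worth, the exact-width typedefs D wants were standardized in C99’s <stdint.h>, so the homegrown typedef layer can usually be replaced outright. A sketch; as noted earlier in the thread, MS’s compiler lagged on C99, so older toolchains may still need the __int64-style workarounds:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* C99 exact-width types: the same widths on 16-, 32-, and
           64-bit targets, regardless of the int/long data model. */
        int16_t a = 0;
        int32_t b = 0;
        int64_t c = 0;

        printf("int16_t=%u int32_t=%u int64_t=%u bits\n",
               (unsigned)(sizeof a * 8), (unsigned)(sizeof b * 8),
               (unsigned)(sizeof c * 8));  /* prints 16, 32, 64 */
        return 0;
    }
    ```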

Comments are closed.