Why does DS_SHELLFONT = DS_FIXEDSYS | DS_SETFONT?


You may have noticed that the numerical value of the DS_SHELLFONT flag is equal to DS_FIXEDSYS | DS_SETFONT.

#define DS_SETFONT          0x40L   /* User specified font for Dlg controls */
#define DS_FIXEDSYS         0x0008L
#define DS_SHELLFONT        (DS_SETFONT | DS_FIXEDSYS)

Surely that isn't a coincidence.

The value of the DS_SHELLFONT flag was chosen so that older operating systems (Windows 95, 98, NT 4) would accept the flag while nevertheless ignoring it. This allowed people to write a single program that got the "Windows 2000" look when running on Windows 2000 and the "classic" look when running on older systems. (If people have to write two versions of their program, one that runs on all systems and one that runs only on the newer system and looks slightly cooler, they will usually not bother writing the second one.)

The DS_FIXEDSYS flag met both requirements. Older systems accepted the flag, since it was indeed a valid flag, but they also ignored it, because the DS_SETFONT flag takes precedence.

This is one of those backwards-compatibility exercises: How do you design something so that it is possible to write one program that gets the new features on new systems while at the same time degrading gracefully on old systems?
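The precedence rule can be sketched in a few lines of C. The flag values below come from winuser.h; `pick_dialog_font` is a hypothetical illustration of how an older dialog manager effectively decides which font to use, not an actual USER function:

```c
/* Values from winuser.h.  DS_SHELLFONT may be missing from very old SDK
   headers, so defining it yourself is harmless: */
#define DS_SETFONT   0x40L   /* User specified font for Dlg controls */
#define DS_FIXEDSYS  0x0008L
#ifndef DS_SHELLFONT
#define DS_SHELLFONT (DS_SETFONT | DS_FIXEDSYS)
#endif

/* Hypothetical illustration of the precedence described above: because
   DS_SETFONT is checked first, the extra DS_FIXEDSYS bit contributed by
   DS_SHELLFONT is never consulted by a dialog manager that predates it. */
typedef enum { FONT_FROM_TEMPLATE, FONT_FIXEDSYS, FONT_SYSTEM } DlgFont;

static DlgFont pick_dialog_font(long style)
{
    if (style & DS_SETFONT)
        return FONT_FROM_TEMPLATE;  /* font taken from the dialog template */
    if (style & DS_FIXEDSYS)
        return FONT_FIXEDSYS;       /* use the fixed-pitch system font */
    return FONT_SYSTEM;             /* default system font */
}
```

An older system handed DS_SHELLFONT sees a perfectly ordinary DS_SETFONT dialog and never looks at the extra bit.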

Comments (23)
  1. Anonymous says:

Flags are bad; they offer a limited number of values, depending on the width of the data type they are stored in.

That said, if there was room to add another flag, it should not have caused a problem, assuming older operating systems just ignored flags that didn't exist at the time.

  2. Anonymous says:

    Sorry to be offtopic; this comment is really directed at any knowledgeable Windows user here.

Today, I noticed a weird file-sorting bug in Explorer. I have a bunch of files with decimal numbers at the beginning, and Explorer sorts them in a completely confusing and unpredictable way. (First I figured it might be using lexicographic ASCII ordering, but that is wrong; it turns out that a simple ASCII sort would produce correct results in this example):

    0.1-…

    0.2-…

    0.15-…

    Any explanations?

  3. Anonymous says:

Flags are good: you have a limited amount of memory, and more importantly, a flag check is vastly faster than some kind of look-up in a list of attributes, or a tree, or whatever.

    But, of course, space is limited in a 32-bit value. If I had to choose between "flags are bad" and "flags are good", I would vote for "flags are good". They can actually be more maintainable than a lot of different boolean values stored at different places or at different levels, but relating to the same object.

  4. Anonymous says:

    M: I think Explorer tries to be smart and sort numerically when it sees filenames that have a common prefix or suffix. If you look at "0." as just a prefix rather than a number, the 1, 2, and 15 are sorted numerically.
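    A simplified sketch of that behavior in plain C (this is a hypothetical comparator, not Explorer's actual code; on XP and later, Explorer's name sort uses StrCmpLogicalW from shlwapi): runs of digits compare as whole numbers, everything else compares byte by byte.

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical numeric-aware comparator: digit runs compare as numbers,
   so "0.2" sorts before "0.15" (2 < 15), which matches the order above. */
int natural_cmp(const char *a, const char *b)
{
    while (*a && *b) {
        if (isdigit((unsigned char)*a) && isdigit((unsigned char)*b)) {
            /* Find the end of each digit run. */
            const char *pa = a, *pb = b;
            while (isdigit((unsigned char)*pa)) pa++;
            while (isdigit((unsigned char)*pb)) pb++;
            size_t la = (size_t)(pa - a), lb = (size_t)(pb - b);
            /* Skip leading zeros so "007" compares equal to "7". */
            while (la > 1 && *a == '0') { a++; la--; }
            while (lb > 1 && *b == '0') { b++; lb--; }
            /* A longer run of digits is a bigger number. */
            if (la != lb) return la < lb ? -1 : 1;
            int d = strncmp(a, b, la);
            if (d) return d;
            a = pa; b = pb;          /* both runs equal; keep going */
        } else {
            if (*a != *b)
                return (unsigned char)*a < (unsigned char)*b ? -1 : 1;
            a++; b++;
        }
    }
    return (*a != 0) - (*b != 0);    /* shorter string sorts first */
}
```

    Under this comparator the names sort exactly as M observed: 0.1-…, 0.2-…, 0.15-….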

  5. Anonymous says:

    The first thing to realize here is that this part of the filename is probably taken as the file extension.

    The behavior I get when I have decimal endings in file extensions would indicate that they are compared as if they were padded with a character between ‘9’ and ‘A’. One effect of this is that decimal numbers are sorted in decimal order.

    Note that the order from a dir without sorting, which I think does no extra sorting after the Windows file enumeration itself, will not list them in this order.

    So, the real question is — what kind of relevance is attributed to file extension/file type in explorer when sorting is done by name? :-)

  6. Anonymous says:

    To clarify: exposing flags to end users (programmers) is bad. By all means use flags in your internal implementation but you will be better off if you hide the flags from the user.

    Pseudo code:

    Window.SetAttribute(DS_SHELLFONT, true);

    Window.SetAttribute(DS_EXAMPLE, true);

    As opposed to

    Window.SetAttribute(DS_SHELLFONT | DS_EXAMPLE);

In example 1 we have a virtually unlimited number of values, the programmer is forced to set each value explicitly to true or false, and it is significantly easier to keep programmers from toggling values they shouldn't be able to.

In example 2 we're limited to 32 possible flags, and the logic to determine whether an attribute is set is not intuitive.

    Saying that bad code (attributes not stored on an object when they should be) is worse than good code (attributes stored on an object when they should be) is a given.
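    Both styles from the pseudo code above can be sketched over the same bit field. Everything here is hypothetical (made-up names, not a Windows API); the point is that the "attribute" style is just a friendlier wrapper around the flag word:

```c
#include <stdbool.h>

/* Hypothetical attribute bits packed into one word (style 2). */
#define ATTR_SHELLFONT 0x01u
#define ATTR_EXAMPLE   0x02u

typedef struct { unsigned flags; } Window;

/* Style 1: one attribute at a time, with an explicit boolean. */
void window_set_attribute(Window *w, unsigned attr, bool on)
{
    if (on) w->flags |= attr;    /* set the bit   */
    else    w->flags &= ~attr;   /* clear the bit */
}

bool window_has_attribute(const Window *w, unsigned attr)
{
    return (w->flags & attr) != 0;
}

/* Tiny demo: set two attributes, clear one, query both. */
bool demo(void)
{
    Window w = {0};
    window_set_attribute(&w, ATTR_SHELLFONT, true);
    window_set_attribute(&w, ATTR_EXAMPLE, true);
    window_set_attribute(&w, ATTR_EXAMPLE, false);
    return window_has_attribute(&w, ATTR_SHELLFONT)
        && !window_has_attribute(&w, ATTR_EXAMPLE);
}
```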

  7. Anonymous says:

    I don’t think that an ‘absolute’ comparison of these two approaches is valid. They have different design goals.

    The first: fool-proof, but bulky & slow(er)

    The second: fast & compact, but more error-prone

    As is nearly always the case, the ‘best’ approach depends on the context.

  8. Anonymous says:

    Why wasn’t the rule from the beginning that unknown flag bits being set were silently ignored? If you were designing the API from scratch today, would you do it that way — is it just something that was not thought of at the time — or is there a reason for making undefined bits being set an error?

Similarly, why do so many APIs, like CoInitialize, have reserved-always-NULL parameters? Are these truly NOPs, or does passing non-NULL in this parameter have some undocumented use? Why wasn't this parameter reused instead of introducing CoInitializeEx (which also has an lpReserved, for no /apparent/ reason)?

  9. Anonymous says:

    "Why wasn’t the rule from the beginning that unknown flag bits being set were silently ignored?"

    For inputs to the OS I agree, it would be useful to have the API ignore undefined bits with the idea that later versions may define them.

    For apps, you know that some app will make bad assumptions like unused bits are zero. In later versions of the API, when the unused bits become used, that app will do something dumb. You could discourage that by having unused bits return random values. Then people will post in blogs demanding to know what the undocumented bits mean since they aren’t zero.
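    The strict alternative being discussed, rejecting any bit the current version doesn't define, is a one-line mask check. A hedged sketch (the API name and error codes here are made up for illustration):

```c
/* Hypothetical API demonstrating strict flag validation: any bit
   outside the set this version defines is rejected up front, which
   catches callers passing uninitialized garbage. */
#define MYAPI_FLAG_A 0x1u
#define MYAPI_FLAG_B 0x2u
#define MYAPI_VALID_FLAGS (MYAPI_FLAG_A | MYAPI_FLAG_B)

#define MYAPI_OK            0
#define MYAPI_INVALID_FLAGS 1

int myapi_do_something(unsigned flags)
{
    if (flags & ~MYAPI_VALID_FLAGS)
        return MYAPI_INVALID_FLAGS;  /* unknown bits set: refuse */
    /* ... real work would happen here ... */
    return MYAPI_OK;
}
```

    The trade-off is exactly the one debated below: a future version that defines a new bit will be rejected by this older check.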

  10. Anonymous says:

James, re CoInitialize: The "reserved" parameter used to have meaning: it used to be a pointer to the allocator (an IMalloc), if I recall correctly. But eventually it wasn't needed anymore, so they turned it into a reserved parameter.

  11. Anonymous says:

    Why wasn’t the rule from the beginning that unknown flag bits being set were silently ignored? If you were designing the API from scratch today, would you do it that way — is it just something that was not thought of at the time — or is there a reason for making undefined bits being set an error?

    I’d probably require that any undefined bits be zero. After seeing Raymond’s experience with the sloppy version and seeing the intel manuals that demand a zero in all reserved fields, I can’t really see doing it any other way.

  12. Anonymous says:

    0.1-…

    0.2-…

    0.15-…

Looks fine to me. Of course, I'm used to seeing things like "Section 0, Paragraph 15" or "version 0, subversion 15" being written as 0.15. I don't think I've ever used decimal numbers in a filename.

  13. Anonymous says:

Of course, Windows' window/dialog style flags *are* ~20 years old… so I'd say any "bad" usage of flags is excusable…

  14. Anonymous says:

    "Ignoring undefined flags" – for the same reason structure sizes are checked strictly.

    http://weblogs.asp.net/oldnewthing/archive/2003/12/12/56061.aspx

We learned this lesson the hard way – many apps passed uninitialized garbage as flags and got away with it because the flags were ignored. Along comes the next version of Windows that *gives the flag meaning*, and now the app crashes.

  15. Anonymous says:

    Hmm… About those strict struct sizes – I got bitten by that a week ago. I spent a whole day trying to figure out why my tracking tooltips don’t work. The regular tooltips work fine, but not the tracking tooltips. Turns out I was using the XP structure sizes on Win2000. The TOOLINFO structure is defined:

    typedef struct tagTOOLINFOA {
        UINT      cbSize;
        UINT      uFlags;
        HWND      hwnd;
        UINT_PTR  uId;
        RECT      rect;
        HINSTANCE hinst;
        LPSTR     lpszText;
    #if (_WIN32_IE >= 0x0300)
        LPARAM    lParam;
    #endif
    #if (_WIN32_WINNT >= 0x0501)
        void     *lpReserved;
    #endif
    } TTTOOLINFOA, NEAR *PTOOLINFOA, *LPTTTOOLINFOA;

But the documentation (even the online MSDN) doesn't mention lpReserved. Also, there is no UNICODE-independent TTTOOLINFO_V2_SIZE; only the TTTOOLINFOA_V2_SIZE and TTTOOLINFOW_V2_SIZE versions are defined.

    The bigger question is, why do some features work with the XP size, and some don’t?

    Ivo
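    The versioned-size trick Ivo ran into can be sketched in plain C. TOOLINFO_SKETCH and its macros are made up for illustration (the real commctrl.h computes TTTOOLINFOA_V2_SIZE with a similar device): each *_Vn_SIZE is the size of the struct as it existed before the fields that later versions appended, so a caller targeting Win2000 passes the V2 size even when compiled against the XP headers.

```c
#include <stddef.h>

/* Hypothetical struct modeled on TOOLINFO: fields were appended over
   time, and cbSize tells the OS which version the caller was built for. */
typedef struct {
    unsigned  cbSize;      /* caller fills in the size it targets */
    unsigned  uFlags;
    long      lParam;      /* appended in "v2" of this sketch */
    void     *lpReserved;  /* appended in "v3" of this sketch */
} TOOLINFO_SKETCH;

/* Size of the struct up to, but not including, the first field each
   later version appended. */
#define TOOLINFO_SKETCH_V1_SIZE offsetof(TOOLINFO_SKETCH, lParam)
#define TOOLINFO_SKETCH_V2_SIZE offsetof(TOOLINFO_SKETCH, lpReserved)
#define TOOLINFO_SKETCH_V3_SIZE sizeof(TOOLINFO_SKETCH)
```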

  16. Anonymous says:

    Apple had some fun when earlier 68k chips ignored upper bits in addresses since they couldn’t address that much memory. Various developers used to use the upper bits as a type field, such as specifying if the remaining bits were a point or an integer. Then along came a new chip that could address the memory and paid attention to the upper bits and all sorts of programs crashed so they had to introduce various compatibility hacks. (Somewhere inside Apple is Raymond’s counterpart :-)

    This kind of "trick" is very common for Lisp environments. Nowadays you can pull the same trick but using the lower bits instead. For example you know memory allocations are going to be 4 byte aligned so you can use the lower 2 bits to indicate the type of whatever is pointed to. Remember to set the bits to zero when doing the memory access.
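    The low-bit version of the trick looks like this in C (a generic sketch of the technique, not any particular Lisp runtime): since 4-byte alignment guarantees the bottom two bits of a pointer are zero, they can carry a small type tag, provided you mask them off before dereferencing.

```c
#include <stdint.h>

#define TAG_MASK 0x3u   /* two low bits available on 4-byte-aligned data */

/* Pack a 2-bit tag into the unused low bits of an aligned pointer. */
static inline uintptr_t tag_pointer(void *p, unsigned tag)
{
    return (uintptr_t)p | (tag & TAG_MASK);
}

static inline unsigned get_tag(uintptr_t tagged)
{
    return (unsigned)(tagged & TAG_MASK);
}

/* Clear the tag bits before using the value as a real pointer. */
static inline void *strip_tag(uintptr_t tagged)
{
    return (void *)(tagged & ~(uintptr_t)TAG_MASK);
}
```

    Forgetting the `strip_tag` step is the modern equivalent of the 68k bug: the tagged value works as long as nothing interprets those bits as address.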

  17. Anonymous says:

    Jonathan: IIRC, the reason the pointer to the malloc was removed was because it wasn’t possible to set it. Originally, there was a CoSetMalloc to go along with CoGetMalloc – the idea was that you could change the allocator used by COM to suit the needs of your application.

Unfortunately, there were a bunch of apps/shell extensions that relied on the fact that the shell used LocalAlloc() for its implementation of IMalloc, and they called LocalFree on the memory (instead of calling CoGetMalloc and using that allocator).

    And when someone tried to replace the default allocator, blam!

    So the parameter was pulled to ensure that apps couldn’t misbehave in that way.

  18. Anonymous says:

    Cooney: I’d probably require that any undefined bits be zero. After seeing Raymond’s experience with the sloppy version and seeing the intel manuals that demand a zero in all reserved fields, I can’t really see doing it any other way.

    But then you’ve lost the ability to cleanly handle future flags.

    The reason Kristoffer Henriksson suggested that the OS ignore unknown flags was so that programs written for later OSs (say Win2K) could set those flags without it affecting their ability to run on an older OS (say NT4) which didn’t handle the flag.

If we follow your suggestion to require unused flags be set to zero, we are back to not being able to pass flags to NT4 that might be valid on Win2k but not on NT4. I assume that would require the API to return an error if it saw a non-zero value for a flag it didn't support, since if the API didn't enforce the requirement, we are back to the problem Raymond listed with apps passing random flags because they aren't forced to do otherwise.

    If the OS threw an error on an unknown flag it would require all apps to choose to:

    1) never select flags older OSs didn’t support,

    2) select new flags knowing it would prevent the program from running on older OSs,

3) add code to detect which OS you were on in order to choose which flags to pass.

    Handling forward and backward compatibility is just a tricky thing to do…

  19. Anonymous says:

    > We learned this lesson the hard way – many

    > apps passed uninitialzed garbage as flags

    > and got away with it because the flags were

    > ignored. Along comes the next version of

    > Windows that *gives the flag meaning*, and

    > now the app crashes.

That is true. That also explains why Windows XP checks some supposedly ignored structure members and returns errors if those members are not zero — later versions of Windows might use them for something. If MSDN said that those members should be zero, instead of saying that they are ignored, programmers would understand faster why they were getting error returns.

    Sorry I don’t remember which APIs I was getting hit by because it was about two years ago. Though I have a feeling that it would be productive to search for the word "ignore[d]" in the Windows API section of MSDN.

  20. Anonymous says:

    Sounds reasonable to me.

    If you’re supposedly supporting an older operating system, then you should be making sure your code actually works on said operating system.

    The older OS has no idea how important those random bits might be, so all it can (and should) do is throw an error.

    Think of it like a function that is only available in later versions. Should the older OS simply ignore it because it isn’t available? :)

  21. Anonymous says:

I think GNU Emacs does that too, which is why there's a silly limit on the size of editable files: they have to fit in 26 bits or something (you could find the exact number by looking at the source code).

  22. Anonymous says:

    Walk the template header and do what it says.

Comments are closed.