A very brief return to part 6 of Loading the Chinese/English dictionary

Back in Part 6 of the first phase of the "Chinese/English dictionary" series (a series which I intend to get back to someday but somehow that day never arrives), I left an exercise related to the alignment member of the HEADER union.

Alignment is one of those issues that people who grew up with a forgiving processor architecture tend to ignore. In this case, the WCHAR alignment member ensures that the total size of the HEADER union is suitably chosen so that a WCHAR can appear immediately after it. Since we're going to put characters immediately after the HEADER, we'd better make sure those characters are aligned. If not, then processors that are alignment-sensitive will raise a STATUS_DATATYPE_MISALIGNMENT exception, and even processors that are alignment-forgiving will suffer performance penalties when accessing unaligned data.
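The HEADER union itself isn't shown in this excerpt, so here is a minimal sketch of the alignment-member technique. The member names m_phdrPrev and m_cb come from the article; m_alignment is a hypothetical name, and plain typedefs stand in for the Win32 WCHAR and SIZE_T types:

```c
#include <stddef.h>

typedef unsigned short WCHAR;  /* stand-in for the Win32 typedef */
typedef size_t SIZE_T;         /* stand-in for the Win32 typedef */

union HEADER {
    struct {
        union HEADER* m_phdrPrev;  /* previous chunk in the chain */
        SIZE_T        m_cb;        /* size of this chunk */
    } h;
    WCHAR m_alignment;  /* never used as data; it raises the union's
                           alignment to at least that of WCHAR, so the
                           union's size is a multiple of that alignment */
};
```

Because an aggregate's total size is always a multiple of its alignment, the m_alignment member guarantees that a WCHAR stored at the address just past the union is properly aligned.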

There are many variations on the alignment trick, some of them more effective than others. A common variation is the one-element-array trick:

struct HEADER {
 HEADER* m_phdrPrev;
 SIZE_T  m_cb;
 WCHAR   m_rgwchData[1];
};

#define HEADER_SIZE (FIELD_OFFSET(HEADER, m_rgwchData))
// you can also use "offsetof" if you included <stddef.h>

We would then use HEADER_SIZE instead of sizeof(HEADER). This technique does make it explicit that an array of WCHARs will come after the header, but it means that the code that wants to allocate a HEADER needs to be careful to use HEADER_SIZE instead of the more natural sizeof(HEADER).

A common mistake is to use this incorrect definition for HEADER_SIZE:

#define HEADER_SIZE (sizeof(HEADER) - sizeof(WCHAR)) // wrong

This incorrect macro inadvertently commits the very mistake it is trying to protect against! There might be (and in this instance almost certainly will be) structure padding after m_rgwchData, which this macro fails to take into account. On a 32-bit machine, there will likely be two bytes of padding after m_rgwchData in order to bring the total structure size up to a value that permits another HEADER to appear directly after the previous one. In its excitement over dealing with internal padding, the above macro forgot to deal with trail padding!
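The difference is easy to see with offsetof. In this sketch (using stand-in typedefs), a typical 64-bit compiler places the character data at offset 16 but pads the struct out to 24 bytes, so the subtraction macro yields 22 instead of 16:

```c
#include <stddef.h>

typedef unsigned short WCHAR;
typedef size_t SIZE_T;

struct HEADER {
    struct HEADER* m_phdrPrev;
    SIZE_T         m_cb;
    WCHAR          m_rgwchData[1];
};

/* correct: the character data begins at the offset of m_rgwchData */
#define HEADER_SIZE     offsetof(struct HEADER, m_rgwchData)

/* wrong: subtracting one WCHAR from the total size ignores any trail
   padding the compiler added after m_rgwchData */
#define HEADER_SIZE_BAD (sizeof(struct HEADER) - sizeof(WCHAR))
```

On any platform where the struct's alignment exceeds that of WCHAR, the "bad" value is strictly larger than the correct one, wasting the trail padding bytes.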

It is the "array of HEADERs" that makes the original union trick work. Since the compiler has to be prepared for the possibility of allocating an array of HEADERs, it must provide padding at the end of the HEADER to ensure that the next HEADER begins at a suitably-aligned boundary. Yes, the union trick can result in "excess padding", since the type used for alignment may have less stringent alignment requirements than the other members of the aggregate, but better to have too much than too little.

Another minor point was brought up by commenter Dan McCarty: "Why is MIN_CBCHUNK set to 32,000 instead of 32K?" Notice that MIN_CBCHUNK is added to sizeof(HEADER) before it is rounded up. If the allocation granularity were 32768, then rounding up the sum to the nearest multiple would have taken us to 65536. Nothing wrong with that, but it means that our minimum chunk size is twice as big as the #define suggests. (Of course, since in practice the allocation granularity is 64KB, this distinction is only theoretical right now.)
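The arithmetic can be sketched as follows; RoundUp is a hypothetical helper (not from the original program), and 24 stands in for a plausible sizeof(HEADER):

```c
#include <stddef.h>

#define MIN_CBCHUNK 32000

/* round cb up to the next multiple of the allocation granularity */
static size_t RoundUp(size_t cb, size_t granularity) {
    return (cb + granularity - 1) / granularity * granularity;
}
```

With a hypothetical 32768-byte granularity, RoundUp(24 + 32000, 32768) stays at 32768, whereas a MIN_CBCHUNK of 32768 would push the sum past the boundary and round all the way up to 65536.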

Comments (14)
  1. dave says:

    I’m rather fond of the SMB structure, which as far as I can tell, must have been intentionally misaligned.

    A typical message starts with a 32-byte header, then a 1-byte ‘word count’, then some number of 2-byte word fields… all odd-aligned.

    Alternatively, if you try to arrange that the message starts on an odd boundary so that those word fields are naturally aligned in the address space, then everything in the header will be misaligned. No win, overall.

    (Yeah, I know the real reason why: it was designed  in Ye Olde Days when saving 8 bits trumped anything else.  Anyone who dealt with RAD50 encoding on various PDP11 operating systems remembers the pain of those days.)

  2. Carlos says:

    The nicest way to do this is to put a zero length array at the end of the structure but, although it’s supported by msvc, gcc and C99, it’s not standard C++.

  3. Peter Ritchie says:

    Seems like a minor point (RE: MIN_CBCHUNK == 32000) but, isn’t it kind of pointless to add 32000 to a value less than 768 if you’re expecting to round up to a multiple of 32768?

    I’m assuming that’s the basis of 32000, with a little breathing room in case something gets added to HEADER.  Seems like a “magic number” to me; 32768-sizeof(HEADER) would be more clear that you’re really trying to allocate a minimum of 32768, despite the granularity (and despite it really being 64K making it even more moot).

    [I never claimed that my Chinese/English dictionary was “a model program that sets the standard for all programming projects henceforth.” I’m not writing a textbook. This was a program I hacked together for fun and thought people might find it interesting. If you prefer that only book-quality code be posted here, then let me know. -Raymond]
  4. Adam says:

    But it’s wise to leave a bit of extra space for malloc() overhead. If you did "32768 - sizeof(HEADER)" but malloc() reserves, say, 8 bytes before each pointer that it returns to you for bookkeeping purposes (and to keep 8-byte alignment), then each time you allocate one of these on the heap, you’ve just extended 8 bytes into the next 32k chunk for your 32k allocation. Not clever.

  5. asdf says:

    C99 doesn’t support 0 length arrays, it supports "flexible array members":
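    A minimal sketch of the flexible-array-member version (with a plain stand-in for the Win32 WCHAR typedef, and a hypothetical AllocHeader helper) might look like this:

```c
#include <stddef.h>
#include <stdlib.h>

typedef unsigned short WCHAR;  /* stand-in for the Win32 typedef */

struct HEADER {
    struct HEADER* m_phdrPrev;
    size_t         m_cb;
    WCHAR          m_rgwchData[];  /* C99 flexible array member:
                                      contributes no size of its own */
};

/* allocate a header followed by room for cch characters */
static struct HEADER* AllocHeader(size_t cch) {
    return malloc(sizeof(struct HEADER) + cch * sizeof(WCHAR));
}
```

    With a flexible array member no HEADER_SIZE macro is needed: sizeof(struct HEADER) already ends at (or just past) the offset of m_rgwchData, so the allocation size is simply sizeof plus the character payload.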


  6. N. Velope says:

     If you used VB6, it would automatically pad the structure so that 2 byte types have an offset inside the structure equal to a multiple of 2 and 4 byte types have a multiple of 4 offset.  This only happens in memory – if you write variables of the type to a file, it takes out the padding.

  7. Dave: The DOS redirector had to run on machines with 256K of RAM.  The original redirector was something like 10K of code – an entire network filesystem in 10K. Think about it.  The DOS LanMan redirector was something like 45K and BillG screamed at me for something like 20 minutes over that one.

    Also, SMB was designed for an 8-bit processor, and on an 8-bit processor alignment is irrelevant.

  8. Norman Diamond says:

    struct HEADER {
     HEADER* m_phdrPrev;
     SIZE_T  m_cb;
     WCHAR   m_rgwchData[1];
    };

    Actually this has sort of the opposite effect of the union member that you used before.  Temporarily ignoring some complicating factors, this saves memory by allowing the array of WCHARs to start at the first WCHAR-aligned location.  If the array started after the end of the struct, there would have to be enough padding to match the alignment required by the entire struct.  This is because trailing padding has to be enough to make the struct size as if it were an element of an array of the same kind of structs.  So for example if the pointer required 8 byte alignment and size_t required 4 byte alignment and wchar_t required 2 byte alignment then the array could start after 0 bytes of padding instead of 4 bytes of padding.

    (There are complicating factors because SIZE_T doesn’t have to be size_t, WCHAR_T doesn’t have to be wchar_t, size_t usually has the same alignment requirements as a pointer and it’s usually at least as strict as wchar_t, etc.  These make it harder to see that the example is a possible example of that effect, but that effect still remains possible.)
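    Norman's hypothetical layout (8-byte-aligned pointer, 4-byte length, 2-byte characters) can be checked directly with fixed-width stand-in types on a 64-bit target:

```c
#include <stddef.h>
#include <stdint.h>

/* hypothetical ABI: 8-byte pointer, 4-byte length, 2-byte character data */
struct EXAMPLE {
    void*    p;       /* offset 0, 8-byte alignment on a 64-bit target */
    uint32_t cb;      /* offset 8 */
    uint16_t data[1]; /* offset 12: only internal padding precedes it */
};
```

    On a 64-bit target the array begins at offset 12, while sizeof(struct EXAMPLE) is padded up to 16 so that an array of EXAMPLEs keeps the pointer aligned. Starting the characters at offsetof(data) rather than after the whole struct therefore saves 4 bytes of padding, which is the effect the comment describes.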

    But is it really worth doing that…  There are a number of Win32 APIs that return pointers to structures that are defined in this way.  Some APIs can be told to return how much memory is really needed before being told to return the contents.  Otherwise I’d probably have got some of these computations wrong too.  I haven’t noticed some xxx_SIZE macros, and FIELD_OFFSET isn’t exactly standard.

    On the other hand, deliberate misalignment in some kinds of data structures is pretty reasonable.  Until recently it would take more time to transmit one byte of padding over a network than to do a memory-to-memory move of a buffer to realign a bunch of contents.  It still likely takes more time to read a few disk blocks full of padding than to do memory-to-memory moves to unpack their contents.

    If you prefer that only book-quality code be posted here, then let me know.

    That’s a very difficult question.  Of course no one wants you to write book-quality code without being paid for it.  Some of us hate slave labour even when we’re not writing books.  But notice how much MSDN contents are still, um, book-quality when we remember that quite a lot of books are atrociously poor quality too.  You’ve already mentioned that readers sometimes have to copy code out of MSDN without understanding it.  Surely there are people who have to copy code out of your blog because MSDN’s code is too garbagy and some of your articles provide fixes.  So this is a tough question.

    Maybe if your company could be persuaded to hire some competent programmers to fix MSDN articles, there would be less need for book-quality code in blogs.  (But if this means that competent programmers would be pulled off of Vista then don’t do it.  Vista still needs a few more years of work by competent programmers before it will be ready for release.)

  9. steveg says:

    Maybe if your company could be persuaded to hire some competent programmers to fix MSDN articles

    Norman, just out of curiosity, which IT companies get your tick of approval, or are they all the same in your book?

  10. Norman Diamond says:

    Monday, October 09, 2006 9:32 AM by steveg

    Norman, just out of curiosity, which IT companies get your tick of approval

    In the current environment that’s pretty difficult to answer.  There still exist some companies that accept bug reports without requiring paid support incidents to be opened first.  There still exist some companies that replace defective products with working ones.  But in the current environment this pretty much happens only with hardware defects.  For example one vendor replaced an entire note PC because the video chip was defective but they didn’t offer to replace Windows 95 by Windows NT4 SP3.

    In ancient history hardware vendors often supplied their own operating systems.  Some of them accepted bug reports without requiring paid support incidents to be opened first.  Some were glad to make fixes.  Some of them were glad to deliver fixes.  That era is gone now.  For those of us who remember that era, we don’t even make an active decision to compare it to the present, it just comes automatically.

    Nonetheless I think everyone knows that MSDN’s sample code still needs a lot of fixing.  Some of the text too.  Even Mr. Chen has said so in the past.  In my particular sentence that you quoted, my point was that a tech writer with knowledge of English isn’t enough, it’s necessary to fix the code too.

    Once upon a time it was possible to answer some questions with "RTFM".  When TFM is broken that isn’t a valid answer any more.

  11. Norman Diamond says:

    Sorry for two in a row, but I’ve just read that there was a period where Microsoft thought differently about quality.


    The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as "infinite defects methodology".

    To correct the problem, Microsoft universally adopted something called a "zero defects methodology". […] Actually, "zero defects" meant that at any given time, the highest priority is to eliminate bugs before writing any new code.

    The adoption of that methodology and the return to the former methodology must have occurred during a pretty short time interval.  I wonder why it didn’t stick?

  12. Norman Diamond says:

    Here’s an example of MSDN code which would benefit from being replaced by textbook quality code.


    This is one example that would benefit from being fixed by someone competent at Win32 programming as well as English.  Notice the use of LPTSTR variables.  Notice how little effort will be needed to make it compile in a Unicode environment such as the default in Visual Studio 2005:  it’s only necessary to wrap some strings in _T() macros and leave some other strings unwrapped.  Notice that the resulting compiled program will still yield incorrect results.

    [Why not use the feedback link at the bottom of the MSDN page? No need to keep me informed of every piece of MSDN feedback you submit. -Raymond]
  13. Norman Diamond says:

    > Why not use the feedback link at the bottom of the MSDN page?

    The last time I did that, Microsoft sent a polite response saying that your company had received some headers from my submission but had tossed everything that I typed into the input controls in the feedback form.

    The previous two times I did that, Microsoft sent responses saying that I had purchased the web site http://msdn.microsoft.com/library outside of North America and therefore only Microsoft Japan would be able to support the English-language MSDN library.

    (Hmmmm.  If MSDN were fixed and if programmers relied on MSDN then there wouldn’t be enough appcompat work to do any more.  Then would Microsoft allow the same bug fixing talent to be applied to Windows itself or would … I don’t want to think about it.)

    [Okay, well I don’t see how that means that this is the right place to report your frustration. Perhaps I should just create a “Norman Diamond complains about MSDN” thread so you can post your complaints there, at least they’ll all be in one place rather than scattered all over the place. -Raymond]
  14. Norman Diamond says:

    > Perhaps I should just create a “Norman Diamond complains about MSDN” thread

    Don’t bother.  Some time ago I gained an impression that someone at Microsoft was interested in getting bugs fixed in MSDN, but I should learn better.

    Maybe Microsoft is dogfooding from MSDN as it does with Visual Studio on Vista.  Maybe we can see what kind of code gets into Windows.  Don’t touch a thing, just let it remain visible.

    One thing I still can’t figure out though.  When programmers outside of Microsoft read MSDN, should we obey the contract or just ignore it?  Sometimes your blog contradicts MSDN, but no sensible person wants you to do slave labour to convert everything to textbook quality code.  So what should we do, just ignore MSDN and join those who never read it?

    [I leave each person to make their own decision. I’m not going to tell you what to do. (P.S., don’t tell me what your decision is; I’m no longer interested.) -Raymond]

Comments are closed.
