Concise and easy to use parameter types in KMDF

One of the goals of KMDF was to use clear and concise types in our parameters and structures so that their intended use was obvious and there was a safe way to use them. Some were obvious to us at the start; others were suggested by our beta testers and the outside community as better alternatives. Here are a few of them:

BYTE (vs CHAR)
Both have the same storage capacity, but the latter indicates a character while the former indicates an unspecified 8-bit quantity. If I had an index that could only fit into a byte, I would use the BYTE type.

PUNICODE_STRING (vs PWSTR)
The former does not require a NULL terminator and, more importantly, is the string type used for all underlying WDM calls. PWSTR is too problematic in terms of guaranteeing the NULL terminator and converting from a UNICODE_STRING to a PWSTR.

In my opinion, the missing piece of the puzzle was the lack of safe string APIs that manipulated a UNICODE_STRING; without them, KMDF could not use a UNICODE_STRING as its standardized string parameter. If you wanted the safe string functionality for a UNICODE_STRING, you had to treat the buffer like a PWSTR, use the safe string API, and then translate the results back into a UNICODE_STRING…talk about error-prone code. This led me to duplicate all safe string functions (and then some, since any function which took a string as a source parameter needed both a PWSTR version and a PUNICODE_STRING version) in ntstrsafe.h and include these changes in the Server SP1 DDK and WDK.

ULONGLONG (vs ULARGE_INTEGER (or their signed equivalents))

This one was so simple to do once it was pointed out to the team (thanks to Don Burns for the suggestion during the beta!). ULARGE_INTEGER was created when NT was initially being developed because there was no compiler support for 64-bit values. Support for 64-bit values has been in the compiler for a long time, so exposing the native compiler types made more sense than using a legacy type.

Enumerants (vs #defines)

I wrote about this before, and I think that post goes into greater depth than I can here.  What it boils down to is that I feel enumerants provide some type and range safety that a #define does not and can prevent simple mistakes.

Comments (2)

  1. KJK::Hyperion says:

    > If I had an index which could only fit into a byte, I would use the BYTE type.

    I believe that’s a job for the CCHAR typedef

    > Enumerants (vs #defines)

    Don’t forget debugger integration! IMO the only "problem" with enums is the unpredictable and non-portable bit width they get, but usually you can get away with forcing a minimum width with a bogus value like 0xFFFFFFFF
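    The width-forcing trick above can be sketched like this (a simple illustration with made-up names):

    ```c
    #include <assert.h>

    typedef enum _WIDGET_STATE {
        WidgetIdle = 0,
        WidgetBusy = 1,
        /* Bogus max value whose only purpose is to force the compiler to
         * pick a representation at least 32 bits wide. */
        WidgetStateForceSize = 0xFFFFFFFF
    } WIDGET_STATE;

    int main(void)
    {
        assert(sizeof(WIDGET_STATE) >= 4);
        return 0;
    }
    ```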

  2. Yes, CHAR/CSHORT/CLONG are declared as cardinal types, but I feel that it still does not convey the intent.  It still has CHAR in its name, and unfortunately, that makes a large majority of developers think it has something to do with a string or ASCII.

    I covered the debugger advantages in my previous entry which I linked to ;).  The non-portability of enums only comes into play if you are going across language boundaries or using them to represent types on the wire/in hardware.  If you are self-consistent in their use internally, it is not a problem.  For some reason, though, I thought all enums default to a 32-bit sized value (except for C#, where you can be explicit about what the maximum bitness should be).

