The Defmof

As I alluded to in a previous posting on SQL DB sizing, carefully defining changes to the defmof can play a large part in determining the performance of hardware inventory processing at a primary site. Often I have seen the performance of hardware inventory drop dramatically because of poorly implemented changes to the defmof.


How an SMS Primary Site uses the Defmof:

The defmof is composed of multiple object classes, each of which is composed of multiple properties. Each object class results in a separate table being created by the primary site once it receives an inventory file (.mif) from a client running the compiled defmof.
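Conceptually, each class in the defmof maps to a pair of site-database tables: one for current data and one for history. Here is a minimal sketch of that mapping; the class, table, and column names are illustrative assumptions, not the actual SMS schema:

```python
# Sketch: how a defmof object class maps to site-database tables.
# Names are illustrative only, not the real SMS table layout.

def tables_for_class(class_name, properties):
    """Return the (data, history) table pair created for one object class."""
    columns = ["MachineID"] + list(properties)
    return {
        f"{class_name}_DATA": columns,  # current inventory for each client
        f"{class_name}_HIST": columns,  # superseded rows moved here on each delta
    }

# A hypothetical custom class with three properties:
tables = tables_for_class("Custom_AssetInfo", ["AssetTag", "Owner", "CostCenter"])
for name, cols in tables.items():
    print(name, cols)
```

The point to notice is that every class you add creates its own table pair, and every property you add widens those tables.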


This can result in poor performance from two areas:

  1. Inventorying of properties that are dynamic in nature and thus constantly changing.

  2. Defining too broad an object class, leading to large tables of dissimilar data.


Let’s explore these two issues:


Inventorying Dynamic Properties


One of the major reasons hardware inventory performance suffers is the frequent inventorying of properties that are very dynamic in nature. For example, in prior versions of SMS we used to inventory machine properties such as free disk space, free physical memory, and free virtual memory. These were often important in the past, when disk space wasn’t as cheap as it is today, but in today’s environment they matter less.


The problem with these properties is that because they change on every inventory cycle, they result in a large history table being maintained for their object classes. When new inventory is received, the old data is moved from the data table to the history table. The larger the history table, the longer it takes to move the data.
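To see why dynamic properties hurt, here is a toy simulation of the data-to-history move described above (my own illustration, not SMS code): a property that changes every cycle pushes a row into history on every cycle, so history grows without bound, while a stable property adds almost nothing.

```python
# Toy model of delta inventory processing for one object class.
# On each cycle, if a client's reported value changed, the old row
# is moved from the data "table" to the history "table".

def run_cycles(values_per_cycle):
    data, history = {}, []
    for value in values_per_cycle:
        if "client1" in data and data["client1"] != value:
            history.append(data["client1"])  # old row moves to history
        data["client1"] = value
    return history

# A dynamic property (e.g. free disk space) changes every cycle:
dynamic_history = run_cycles([100, 97, 93, 90, 88])
# A stable property (e.g. CPU model) never changes after the first report:
stable_history = run_cycles(["Pentium 4"] * 5)

print(len(dynamic_history), len(stable_history))  # history churn per property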


Now consider the problem if, in addition to the default machine properties, you add further objects/properties that also change each inventory cycle.


Some examples that customers turn on are:

  • Win32_Process

  • Win32_NTEventLogfile

Both of these are extremely dynamic and will change almost every cycle.


While it’s important to capture the data you need to manage your environment, it’s also important to perform a cost/benefit analysis: weigh the impact on the server’s ability to process inventory against the value of knowing, say, which Win32 processes are running.



Defining too broad an Object Class


This is perhaps the defmof change most likely to affect customers. Once you define an object class, the tendency is to use it to capture large volumes of data about your specific environment. It’s the usual story: the object class is already there to capture some company-specific data, so what’s a few more properties?


Sound familiar?


Well, the problem with this is that SMS uses a single stored procedure to insert all the properties in a class into a single database table… see where this is going?

You end up with a stored procedure that takes a relatively long time (multiple seconds) to execute against a table that is constantly growing, resulting in ever longer execution times for that stored procedure.
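The per-execution cost may look small, but it is paid once per inventory file per table, so at production volumes it compounds. A back-of-the-envelope calculation, with figures that are assumed for illustration rather than measured:

```python
# Back-of-the-envelope arithmetic: all figures below are assumptions.
ms_per_insert = 50          # assumed stored-procedure time per table, per file
files_per_minute = 300      # assumed delta inventory files arriving per minute
tables_touched = 4          # assumed tables updated per file

busy_ms_per_minute = ms_per_insert * files_per_minute * tables_touched
print(busy_ms_per_minute / 60000)  # fraction of each minute spent in these inserts
```

At these assumed figures the site spends the entire minute (60,000 ms of work per minute) just executing these inserts, i.e. it is saturated; shave the per-insert time or the tables touched and the headroom returns.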


The best advice I can give you is to:


  • Plan carefully the changes you intend to make to the defmof. Test and retest in a lab with production-class, production-configured machines.

  • Try to create discrete object classes for each additional group of information you wish to collect, so that you create a separate table pair (data and history) for each class.

  • Add one object class at a time and test these defmof changes in a lab environment and review the table changes that occur.

  • Try to minimize the number of classes/properties you collect that have regularly changing values.

  • Do not use one class as a catch all for all custom properties you gather from your environment.

  • Think about how the data you capture will be used. If it’s there “just in case” someone needs it, perhaps it’s not really necessary. Or perhaps there’s a better way: use software distribution to distribute an application that can collect this one-time-use data. Even if it’s used every six months, why collect the data every week, or worse, daily?

  • Remember changes to the defmof will kick off resyncs on the standard client. They will NOT kick off resyncs on the advanced client.
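As an example of the “discrete classes” advice above (all class and property names here are hypothetical): instead of one catch-all class holding every custom property, split unrelated groups into their own classes so each gets its own small data/history table pair.

```python
# Hypothetical custom inventory layout; names are illustrative only.

# Anti-pattern: one catch-all class -> one wide, fast-growing table pair.
catch_all = {"Custom_Everything": ["AssetTag", "Owner", "LeaseEnd",
                                   "AppServerRole", "BackupWindow"]}

# Preferred: discrete classes -> separate, smaller table pairs.
discrete = {
    "Custom_Asset":  ["AssetTag", "Owner", "LeaseEnd"],
    "Custom_Server": ["AppServerRole", "BackupWindow"],
}

# Each class yields its own _DATA/_HIST pair, so a change to one group of
# properties only touches that group's tables, not everything at once.
table_pairs = {f"{cls}_DATA": props for cls, props in discrete.items()}
print(sorted(table_pairs))
```

With the discrete layout, a delta that changes only the server-role data rewrites a narrow row in a small table, instead of rewriting a wide row in the one table everything shares.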

Comments (4)
  1. Eric Newton says:

    Does it really continue to take longer and longer to insert records into a constantly growing table? Even if it does, it can’t be orders of magnitude… to the tune of taking more than 1000ms even on a table with a ton of indexes…

    I have to take your word for it, but it just doesn’t seem to make sense about that one particular point, basically because you’re talking about a very basic operation for SQL.

  2. SMSPerfGuy says:

    No, it’s not orders of magnitude, but it is per inventory file and per table, so if we’re processing delta inventory records at hundreds per minute, the milliseconds per file and per table start to add up.

    It’s not the individual operation that causes the perf hit, it’s the fact that we perform it hundreds of times per minute.

    In addition, there are not tons of indexes per table in SMS; indexes have some benefit but also come at a cost.


  3. Ed Aldrich says:

    In your "Best Advice" paragraph, you said "Try to create discrete object classes for each additional group…"

    Could you give one or two examples (as you did in the earlier portion of your note) to illustrate this technique?

