Tidbit on BAM

I was sitting through a very early review of potential next-generation product features yesterday (no, I can't tell you which, except that they were cool) when I learned something I didn't know that applies to the current version of the product.

Let's start with some background. BAM has a very useful methodology for collecting data that enables it to scale. Specifically, the BAM runtime populates a table in the BAMPrimaryImport database with a single row of data per active orchestration instance, and then rolls that information into another table in the same database when the instance completes. It also supports multiple partitions for this second table, enabling it to scale further. BAM Views provide a consolidated view across both tables. Once the data in the completed-instances table has hung around for a user-definable time, it is archived into BAMArchive (based on the settings of the OnlineWindowTimeUnit and OnlineWindowTimeLength properties in the BAM_Metadata_Activities table).
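To make the online-window idea concrete, here is a minimal sketch of how OnlineWindowTimeUnit and OnlineWindowTimeLength could combine into an archive cutoff date. The unit names and day counts below are assumptions for illustration, not the exact values BAM stores; check your own BAM_Metadata_Activities table for the real encoding.

```python
from datetime import datetime, timedelta

# Assumed mapping from a time-unit name to days; the real values stored in
# BAM_Metadata_Activities may differ.
UNIT_TO_DAYS = {"Day": 1, "Week": 7, "Month": 30, "Year": 365}

def archive_cutoff(now, online_window_time_unit, online_window_time_length):
    """Return the date before which completed-instance rows would be
    eligible for archiving into BAMArchive."""
    days = UNIT_TO_DAYS[online_window_time_unit] * online_window_time_length
    return now - timedelta(days=days)
```

For example, with a 30-day window, rows completed more than 30 days before "now" would fall outside the online window and be candidates for archiving.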

Now, where things get interesting is if you set up OLAP (rather than RTA) aggregations. For OLAP aggregations there are DTS packages that run, taking information from the table mentioned above and constructing cubes. These packages can be scheduled to run at a particular time. Here is the interesting bit: you must schedule the DTS packages and make sure they are running when using BAM with OLAP. Why? Well, it would be bad if we purged information from the table above without sending it to OLAP, right? You would lose data, and that wouldn't be good. The only information that gets purged from the table is that which has already completed OLAP processing. This makes a lot of sense, but it also means that if you never schedule the DTS packages, the table above, and the BAMPrimaryImport database with it, will get huge, because it will never be purged and performance will suffer.
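The purge rule above can be sketched as a toy model (this is not BizTalk's actual implementation, and the row shape here is invented for illustration): a completed-instance row may leave BAMPrimaryImport only after the OLAP DTS package has picked it up.

```python
def purge_completed(rows):
    """Split rows into (purged, retained). Rows whose data has not yet
    been sent to the OLAP cubes are always retained, however old they are."""
    purged = [r for r in rows if r["olap_processed"]]
    retained = [r for r in rows if not r["olap_processed"]]
    return purged, retained

rows = [
    {"id": 1, "olap_processed": True},
    {"id": 2, "olap_processed": False},  # DTS never ran for this row
    {"id": 3, "olap_processed": True},
]
purged, retained = purge_completed(rows)
```

The consequence follows directly: if the DTS packages never run, no row is ever marked as processed, nothing is ever purged, and the database grows without bound.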

So the lesson here is: if you choose OLAP, make sure you schedule the DTS packages. If you choose RTA, then we figure this all out for you, so you don't need to worry :-)

Thanks to Georgi for the education.

Comments (1)

  1. In regards to your comments…

    "DTS packages that run taking information from the table mentioned above and constructing cubes"

    Is it taking the information from the BAMArchive table or the completed-instances table? If the former, what is the default time limit before the message is archived? It would seem that if this value is too large, it will increase the latency of the cube data.

    Also, if I read this correctly, use of RTA alone will result in the primary import table growing continuously (i.e., when I run my DTS packages to create cubes, they are actually purging the records from the BAMArchive).