Is your cube build slower, and your tempdb larger since loading SP1? – UPDATE – See new posting 3/24/2008

See the new posting of 3/24/2008 for the latest information on this problem.

There is an issue with some of the SP1 and hotfix 939594 fixes which resolved earlier problems of data missing from the cubes.  The query has become more complex, and SQL Server 2005 is not coming up with the right execution plan for this new, more complex query.  The upshot is heavy use of tempdb - which can grow very large - and a cube build that takes much longer than it did.

You may have already seen the blog which gives a workaround for this issue - but as many of you may see this anew after loading SP1 I wanted to raise awareness again. Thanks to Noel, Kermit, JF and Thuy for helping get to the bottom of this one.

The problem is mostly seen if you have added lots of dimensions to the various cubes, and also if you specify a dynamic date range.  A full build from earliest to latest is sometimes a workaround which reduces the impact on tempdb, but it is still slower than it should be.  My suggested steps, which are also mentioned in the referenced blog and given even more coverage in the MSDN article, are:

1. Create a cube set exactly how you want it (earliest to latest please - no dynamic date range) and, while it is building, capture a trace in SQL Profiler.  The SQL Database Engine should be profiled - the trace can be restricted to the Reporting database.

2. After a few minutes search in the running trace for MSP_EpmAssignmentByDay_OlapView.  If it isn’t found then wait a little while longer and try again.

3. Once found, right-click the line in the trace and select Extract Event Data (do not copy from the bottom pane).  In the Save As dialog, save this as badquery.sql.

4. The current cube build can be stopped at this point.  If no one else is using Analysis Services then just restarting the service will kill it off.  If you do restart then you will need to close the Cube Status window, otherwise you can get errors with the next cube build.

5. In SQL Server Management Studio open a new query to the Database Engine and select the Reporting DB.  Copy the following text into the query window.

EXEC sp_create_plan_guide N'guide_forceorder',

    N'<exact query>',

    N'SQL',

    NULL,

    NULL,

    N'OPTION (FORCE ORDER)'

6. Open the badquery.sql file and Select All (Ctrl-A or Edit, Select All), then Copy (Ctrl-C or Edit, Copy), then select the <exact query> text in the query window and Paste (Ctrl-V).  You should select the < and > characters too.  This may leave some space between the ' and the SELECT, but this is fine.

7. Execute this command - it should finish successfully.  Build a new cube; it should use this execution plan and process more quickly.  To get an indication of the speed you could open badquery.sql, paste OPTION (FORCE ORDER) at the end of the query, and then execute it.
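As a hedged aside (not one of the documented steps), you can confirm that the plan guide was created by querying the sys.plan_guides catalog view in the Reporting database:

```sql
-- Check that the plan guide from step 5 exists and is enabled.
-- sys.plan_guides is a standard SQL Server 2005 catalog view.
SELECT name, is_disabled, hints
FROM sys.plan_guides
WHERE name = N'guide_forceorder';
```

A row with is_disabled = 0 and the OPTION (FORCE ORDER) hint indicates the guide is in place and eligible to match the cube-build query.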

8. You can also monitor in SQL Profiler to see if it is finishing more quickly by looking for the MSP_EpmAssignmentByDay_OlapView query and seeing if the SQL:BatchCompleted event comes in a reasonable time.

The reason to avoid dynamic date ranges is that they make the query text change every time it is run - the text will then not match the plan guide, and the guide will be ignored.

If you use a constant date range then this method can be applied - but read the MSDN article on the need to escape single quotation marks.  These will be around the dates - so for instance '12/31/2008' would need to become ''12/31/2008''.  Note that these are two individual single quotes, not a double-quote character.
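As a sketch only (the real @stmt is the full query captured in badquery.sql; the view and column names here are made up for illustration), the doubled quotes inside the plan guide statement would look like this:

```sql
-- Illustration only: SomeOlapView and TimeByDay are hypothetical names.
-- Each single quote in the original query text is doubled inside the N'...' literal.
EXEC sp_create_plan_guide N'guide_forceorder',
    N'SELECT * FROM SomeOlapView WHERE TimeByDay <= ''12/31/2008''',
    N'SQL',
    NULL,
    NULL,
    N'OPTION (FORCE ORDER)'
```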

As an example, this workaround can mean tempdb hardly gets touched (rather than growing to 30GB or more) and the cube builds in less than one tenth of the time.  Your mileage may vary.

I hope this helps - and if you don't understand any of the steps above then you probably shouldn't attempt this - speak to your DBA.  But if you are seeing this issue and need some assistance to address it then please open an incident with our support teams.

Technorati Tags: Project Server 2007

Comments (19)

  1. Martin Winzig says:

    What about add query hints to original QUERY?

  2. Firstly, we can’t deliver that as quickly as this workaround - and secondly, that could be challenging as the query itself is dynamic, based on how a customer’s data set is structured and the configuration of their cubes in terms of dates, dimensions and measures.  It could be that some customers will not see this problem - so they will not need the workaround.  In my very quick tests the same hint slows down the query in the RTM code - but again this would be very dependent on the data structure.  I’m sure we will either look to come up with an in-built query hint in certain circumstances - or see if SQL can improve the execution plan generation for this type of query.  Thanks for the posting Martin.

  3. Single reference point for some of the latest stuff on EPM 2007 and other useful stuff for EPM 2007 admin

  4. Ed says:


    BEWARE! This does not seem to only apply to dynamic date ranges.  It seems to apply to any reconfiguration of the cube (adding/removing fields).  Not only did I not correct our original problem, I invalidated the slow-build fix.

    What I did:

    1) Suspecting the RBS to contain invalid characters, I removed it from the cube settings because I didn’t have enough time to run through the RBS looking for it, and our production cube was down.

    2) There was a resource name with an apostrophe in the last name.  I removed it by replacing it with a space

    3) I deleted a custom field we defined to hold our OBS.  We’re a large organization, and the OBS can stretch for a mile, never mind the 255 character limit.  We were getting reports of corrupted global.mpt files on people’s desktops.  There are posts out there about this.  It was probably an unnecessary move on my part, because EPM lookup tables are defined as nvarchar(255).  SQL probably truncated it anyway.  If nothing else, the Pro GUI would have taken care of truncation.  They were also Users who may have used Pro to access our QA environment.  Regardless, it was suspect, so I eliminated the only field that referenced it (not the lookup table).

    4) I reinstated the cube using the procedure mentioned below.

    Short story:

    I inspected the badquery.sql file noted in the fix, and sure enough, there was the RBS field I removed.  I strongly suspect that this invalidated the plan, because cube build times returned to the 3 hours they took before the fix was applied.  Thankfully, it rebuilt.  Needless to say, I’ll have to reapply the fix, but man, what a pain in the [fill in the blank].


    We are experiencing cube corruption any time one of our Users alters a PWA Data Analysis view.  If they simply move fields around to get the pivot table they seek, the browser will hang - especially if you do this repeatedly, searching for the view you want.  If you go to the Analytics server (a dedicated box for analytics and SRS) it immediately goes haywire.  Pages/sec goes topside, memory shoots to 2GB and continues to climb, and msmdsrv.exe goes into a steady state of 25% CPU.  It stays there until you restart Analysis Services.

    If I can take a short detour, I found a quick way to get the cube back on-line, as long as you don’t touch the cube configuration settings, and have applied the fix you explained prior.

    1) Stop analysis services

    2) Use the following SQL on the OLAP DB (not the cube) to clear the index

    a. Delete From dbo.OlapObjects;

    3) Start Analysis Services

    4) Launch the cube build from the PWA

    We have a ticket open with Microsoft.  Hopefully they’ll have a fix for this and the slow cube-build soon.

    To give you an idea of our environment, without going into too much detail, we run the following platform configurations.  The above occurs in either environment and is certainly repeatable.


    2 load balanced front-end web/app servers

    1 dedicated SQL box

    1 dedicated AS/SRS box


    3 load balanced front-end web/app servers

    1 dedicated SQL box

    1 dedicated AS/SRS box

    We’re running, but not using, Portfolio Server; it resides on a dedicated box but shares the SQL box.  Portfolio Server is one product where I wish I’d kept the receipt - so I could return it.  But that’s another day.

  5. Ed - followup says:


    If we have custom fields that contain formulas included in the cube dimensions or measures, could that contribute to the problems mentioned prior?


  6. Hi Ed,

    Yes, any change to cube configuration will need the workaround to be re-applied.  Sorry I did not make this clear.  If you change the configuration by adding or removing or even re-naming a dimension then the query will change – and hence will not match the execution plan.  Even changing it back to how it was may not mean the query is identical – the dimensions may be in a different order.
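    If you do need to re-apply the workaround after a configuration change, the stale plan guide can be dropped first with sp_control_plan_guide - a sketch using the guide name from the steps above:

    ```sql
    -- Drop the stale plan guide, then re-create it with the new query text (step 5 onwards).
    EXEC sp_control_plan_guide N'DROP', N'guide_forceorder';
    ```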


  7. Hi Ed,

    Regarding formulae in the custom fields - if the custom fields do not change then the workaround should stay valid.  However, they may well increase the build time and tempdb usage compared to fields with no formulae, so you may see the problem with smaller data volumes.


  8. Sandro says:

    Hi Brian,

    Why can’t I use the formula

    Sum(PeriodsToDate([Time].[All]),[Measures].[Work]) in Project Server 2007 Portfolio Analyzer?

  9. After quite a bit of work on this one it appears that SP1 (and a slightly earlier hotfix) were almost

  10. Hi Brian —

    Just wanted to mention that we’ve been able to partly address this issue with a SQL Server fix in a couple of cases (should work for all cases).  The fix is here:

    Requires SQL Server 2005 SP2 CU4 or later, and turning on a couple of trace flags.  In my testing, I went from tempdb growing to more than 25GB before I killed the process, to tempdb not growing beyond 1GB with this fix activated.  I also know of at least 3 other customers who found it did the trick.

    Hope that helps…

    Justin (MSFT SSAS Support)

  11. Thanks Justin - and yes, that seems to do the trick.  The only time I’ve seen it fail, it was quickly fixed with an update of the Reporting DB statistics.


  12. susan says:

    hi ,

    I am getting the below error

    [7/18/2008 4:51 PM] Cube build session started

    [7/18/2008 4:51 PM] Analysis Services session started

    [7/18/2008 4:51 PM] Analysis Services session failed with the following error: Invalid object name ‘OlapObjects’.

    ===== Process Completed =====

    [7/18/2008 4:52 PM] Failed to build the OLAP cubes. Error: Analysis Services session failed with the following error: Invalid object name ‘OlapObjects’.

    Can you please let me know why this is happening and how to solve it?

  13. Hi Susan,

    OlapObjects is a table in the repository database - from the error I would guess that the SQL script in the TechNet article was not executed - or it was executed against the wrong DB.  Perhaps you created an OlapObjects table in the master database.  Running the script in the correct DB should resolve this issue.

    Best regards,


  14. RyDaMa says:


    Can you please confirm whether any of the more recent hotfixes fixed this problem, or whether, from your experience, this is still a common problem?  We have applied all hotfixes and I am wondering if attempting this will help or possibly make matters worse, as the last post was 29 July 2008.


  15. Hi RyDaMa,

    It can certainly still be a problem if you are not setting the trace flags in your SQL Server.  We have seen this issue again recently – mainly due to customers upgrading SQL Server to the latest SP and assuming the fix is now permanent.  It is not – so the trace flags are still required to overcome the problem.  Also note that even with the trace flags building can be slow if indexes are stale.

    Just to be clear – there was never a bug with Project – just a potentially very complex query that exposed a bug with SQL Server.

    Best regards,


  16. HyderZaidi says:

    My cube build now takes 5 minutes, where it used to take 25 minutes.  Brian, you are a genius.

    I have noticed one issue in the MSP_Portfolio_Analyzer cube, i.e. missing values in the Base Capacity and Capacity fields for certain individuals in certain months.


    Hyder Zaidi

  17. Nico says:

    Brian, I was just wondering whether this will be fixed?  I have to do this at most of my customer sites, and having to regenerate a new execution plan every time we change the cube configuration is rather time consuming :(.  Surely this should not be difficult to fix programmatically?

  18. Hi Nico - it is fixed as long as you have applied the SQL updates mentioned in the article linked from the first line - and if you are running a more recent version of SQL you still need to set the trace flags.  Once you have done this then you should not need to set a plan guide.

    Best regards,


  19. Nico says:

    Hi Brian,

    thanks for replying - I did in fact try this with the trace flags enabled (on a system that has SQL 2005 SP3), but it did not make a difference.  I only tried it briefly; I will try again.
