Tracking down Power View Performance Problems

The scenario was that we saw sluggishness on the initial load of a Power View report and also when we used one of the filters on the report - like a pie chart slice. For the report the customer showed us, the initial load was taking 13-15 seconds, versus 3-5 seconds on my system with the same report and PowerPivot workbook. Clicking a pie slice took upwards of 20-28 seconds on the customer's system and about 5-8 seconds on mine.

That may not sound like much if we were just opening an Excel workbook off of a SharePoint site, but a Power View report is all about the experience, and this killed the experience.

I was able to reproduce this using the HelloWorldPicnic sample report and workbook. To do this, I modified the workbook slightly to include 3 more tabs and some more data on those tabs. The modification did not have the extreme impact that the customer's workbook did, but it was enough to illustrate the differences.

Where to start?

From a Reporting Services Report perspective, we want to start with the ExecutionLog of Reporting Services.

[Screenshot: querying the Reporting Services ExecutionLog view]

However, this didn't appear to have anything. There are a few different versions of this view, so I changed to ExecutionLog3.
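
To pull the relevant columns yourself, here is a minimal query sketch; the catalog database name ReportingService is an assumption - use whatever name your installation created.

USE ReportingService;  -- assumed catalog database name; adjust for your install
SELECT ItemPath, TimeStart, TimeEnd,
       TimeDataRetrieval, TimeProcessing, TimeRendering,
       ByteCount, [RowCount]  -- bracketed to avoid confusion with the ROWCOUNT keyword
FROM dbo.ExecutionLog3
ORDER BY TimeStart DESC;      -- most recent executions first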

| ItemPath | TimeStart | TimeEnd | TimeDataRetrieval | TimeProcessing | TimeRendering | ByteCount | RowCount |
| /{986edfff-e480-4c77-b8ef-8e09b3a5a27d}/Reports/HelloWorldPicnicReport.rdlx | 9:17:09 AM | 9:17:24 AM | 12811 | 166 | 1877 | 67313 | 284 |
|  | 9:17:28 AM | 9:17:37 AM | 7076 | 257 | 2308 | 47971 | 328 |

We see two entries here. The first entry is the initial load; the second is when I clicked on the report to filter it with a slicer. We can see that the report we ran was HelloWorldPicnicReport.rdlx. Let's have a look at the original report to compare the difference!

 

| ItemPath | TimeStart | TimeEnd | TimeDataRetrieval | TimeProcessing | TimeRendering | ByteCount | RowCount |
| /{986edfff-e480-4c77-b8ef-8e09b3a5a27d}/Reports/HelloWorldPicnicReport.rdlx | 9:17:09 AM | 9:17:24 AM | 12811 | 166 | 1877 | 67313 | 284 |
|  | 9:17:28 AM | 9:17:37 AM | 7076 | 257 | 2308 | 47971 | 328 |
| /{986edfff-e480-4c77-b8ef-8e09b3a5a27d}/Reports/HelloWorldPicnicReport - Fast.rdlx | 9:17:58 AM | 9:18:01 AM | 3280 | 45 | 302 | 67313 | 284 |
|  | 9:18:04 AM | 9:18:07 AM | 1714 | 83 | 535 | 47971 | 328 |

We can see the report I labeled "- Fast.rdlx". Notice the difference? Also note that ByteCount and RowCount are identical, so the data didn't change. To put this into perspective, have a look at the timing information from the customer case I worked on.

 

| ItemPath | TimeStart | TimeEnd | TimeDataRetrieval | TimeProcessing | TimeRendering | ByteCount | RowCount |
| /{986edfff-e480-4c77-b8ef-8e09b3a5a27d}/Reports/Repro - Slow.rdlx | 9:25:27 AM | 9:25:52 AM | 21317 | 268 | 2810 | 23642 | 74 |
|  | 9:25:55 AM | 9:26:19 AM | 20005 | 383 | 4217 | 35511 | 103 |
| /{986edfff-e480-4c77-b8ef-8e09b3a5a27d}/Reports/Repro - Fast.rdlx | 9:26:53 AM | 9:26:59 AM | 4651 | 39 | 516 | 23642 | 74 |
|  | 9:27:01 AM | 9:27:09 AM | 6323 | 94 | 968 | 35511 | 103 |

You can see a bigger variance here, and these numbers were pretty consistent across runs. Of note, the fast numbers are what I saw when I first tried to reproduce the issue locally; I was initially unable to reproduce the times they were seeing on their end.
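
Before going further, it's worth confirming from the log itself that data retrieval, not processing or rendering, dominates the duration. A quick sketch against the same assumed catalog database:

SELECT ItemPath,
       DATEDIFF(ms, TimeStart, TimeEnd) AS TotalMs,  -- wall-clock duration of the request
       TimeDataRetrieval, TimeProcessing, TimeRendering
FROM dbo.ExecutionLog3
ORDER BY TimeDataRetrieval DESC;                     -- worst retrieval times first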

Since TimeDataRetrieval showed the biggest difference, that's where we decided to focus. My first thought on seeing high data retrieval times was to get a Profiler trace from the Analysis Services PowerPivot instance.

Analysis Services

Here is an example of what we found in the Profiler Trace from SSAS:

Lock Acquired 2013-03-20 01:28:56.247
Query Begin 3 - DAXQuery 2013-03-20 01:28:56.247 2013-03-20 01:28:56.247
Lock Released 2013-03-20 01:28:56.263
Lock Released 2013-03-20 01:28:56.263
Query End 3 - DAXQuery 2013-03-20 01:28:56.263 2013-03-20 01:28:56.247 16

The 16 is the query duration, in milliseconds; in fact, everything we are looking at here is in milliseconds. The queries within SSAS varied between 0 ms and 16 ms. Hmmm. So that appeared to rule out SSAS, which made this a little more difficult.
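
If you save the Profiler trace to a table, a quick query makes the point that nothing on the SSAS side accounts for the delay. A sketch - the table name SSASTrace and the 100 ms threshold are illustrative, not part of the original capture:

SELECT EventClass, TextData, Duration  -- Duration is logged in milliseconds
FROM dbo.SSASTrace                     -- hypothetical table the trace was saved to
WHERE Duration > 100                   -- nothing in this capture exceeded 16 ms
ORDER BY Duration DESC;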

AdditionalInfo

If TimeDataRetrieval shows over 20 seconds but Analysis Services does not reflect that, how can we tell what is going on? Enter the AdditionalInfo field within the ExecutionLog. To get the full detail, you need to bump the logging level up to Verbose. This can be done in the properties of the Reporting Services service application, under System Settings at the very bottom.

[Screenshot: the Reporting Services service application's System Settings page with the logging level set to Verbose]

This will enable added detail in the AdditionalInfo field; for example, each dataset for the report will be broken out. Here is an example from the slow report.

<Connection>
<ConnectionOpenTime>141</ConnectionOpenTime>
<DataSource>
<Name>EntityDataSource</Name>
<ConnectionString>Data Source="https://asaxtontest1/PowerPivotGallery/HelloWorldPicnicPowerView.xlsx"</ConnectionString>
<DataExtension>DAX</DataExtension>
</DataSource>
<DataSets>
<DataSet>
<Name>Band1_SliderDataSet</Name>
<CommandText>…</CommandText>
<RowsRead>24</RowsRead>
<TotalTimeDataRetrieval>541</TotalTimeDataRetrieval>
<QueryPrepareAndExecutionTime>541</QueryPrepareAndExecutionTime>
<ExecuteReaderTime>541</ExecuteReaderTime>
<DataReaderMappingTime>0</DataReaderMappingTime>
<DisposeDataReaderTime>0</DisposeDataReaderTime>
</DataSet>
</DataSets>
</Connection>

Let's compare this to the fast report.

<Connection>
<ConnectionOpenTime>23</ConnectionOpenTime> <-- 23 vs. 141. This is ultimately what we are troubleshooting
<DataSource>
<Name>EntityDataSource</Name>
<ConnectionString>Data Source="https://asaxtontest1/Reports/HelloWorldPicnicPowerView.xlsx"</ConnectionString>
<DataExtension>DAX</DataExtension>
</DataSource>
<DataSets>
<DataSet>
<Name>Band1_SliderDataSet</Name>
<CommandText>…</CommandText> <-- The actual DAX Query
<RowsRead>24</RowsRead>
<TotalTimeDataRetrieval>125</TotalTimeDataRetrieval> <-- this is the total time for the items below except for QueryPrepareAndExecutionTime.
<QueryPrepareAndExecutionTime>125</QueryPrepareAndExecutionTime>
<ExecuteReaderTime>125</ExecuteReaderTime> <-- 125 vs. 541
<DataReaderMappingTime>0</DataReaderMappingTime>
<DisposeDataReaderTime>0</DisposeDataReaderTime>
</DataSet>
</DataSets>
</Connection>

That definitely gives us a good breakdown. Overall, it doesn't look like much: 23 milliseconds vs. 141 milliseconds for a connection? Why do we care about that? Well, this particular report has 12 datasets like the one above, and some are worse than others. For example:

<Connection>
<ConnectionOpenTime>1354</ConnectionOpenTime> <-- 1354 milliseconds
<DataSource>
<Name>EntityDataSource</Name>
<ConnectionString>Data Source="https://asaxtontest1/PowerPivotGallery/HelloWorldPicnicPowerView.xlsx"</ConnectionString>
<DataExtension>DAX</DataExtension>
</DataSource>
<DataSets>
<DataSet>
<Name>Tablix1DataSet</Name>
<CommandText>…</CommandText>
<RowsRead>21</RowsRead>
<TotalTimeDataRetrieval>1626</TotalTimeDataRetrieval> <-- ExecuteReaderTime + CancelCommandTime
<QueryPrepareAndExecutionTime>447</QueryPrepareAndExecutionTime>
<ExecuteReaderTime>447</ExecuteReaderTime>
<DataReaderMappingTime>0</DataReaderMappingTime>
<DisposeDataReaderTime>0</DisposeDataReaderTime>
<CancelCommandTime>1179</CancelCommandTime> <-- There were a bunch of datasets that had a high CancelCommandTime
</DataSet>
</DataSets>
</Connection>
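
As a sanity check on the numbers above, TotalTimeDataRetrieval (1626) is ExecuteReaderTime (447) plus CancelCommandTime (1179), and with a dozen datasets each paying connection and cancel overhead, the milliseconds add up to seconds quickly. Rather than eyeballing a dozen XML fragments, you can shred AdditionalInfo in T-SQL - a sketch, assuming the log level is already at Verbose so the column contains the XML shown above:

SELECT e.ItemPath,
       c.value('(ConnectionOpenTime)[1]', 'int') AS ConnectionOpenTimeMs,  -- per-connection open time
       c.value('count(DataSets/DataSet)', 'int') AS DataSetCount           -- datasets under this connection
FROM (SELECT ItemPath, CAST(AdditionalInfo AS xml) AS Info
      FROM dbo.ExecutionLog3) AS e
CROSS APPLY e.Info.nodes('//Connection') AS t(c)
ORDER BY ConnectionOpenTimeMs DESC;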

SharePoint ULS Logs

Where do we go from here to get more information? How about the SharePoint ULS log? To start, I turned everything off and then bumped the following categories up to Verbose. I went this route for two reasons: first, I wanted to see what this looked like without added noise; second, I wanted to see whether these categories alone would provide what I needed.

  • Excel Services Application
    • Data Model
    • Excel Calculation Services
  • PowerPivot Service
    • Request Processing
    • Unknown
    • Usage
  • SQL Server Reporting Services
    • Power View
    • Report Server Data Extension
    • Report Server Processing

I used UlsViewer to combine the log files from my WFE and my App Server into one view. Here are a few items I noticed from a logging perspective:

1. The Power View category was not useful in this case, as we were dealing with processing/data retrieval, which happens outside of the Silverlight control.

2. When the datasets were about to be processed, we saw the following block of entries indicating the items on the report.

SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart3_Month_Name' which uses or contains NonNaturalGroup. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart3_ItemID' which uses or contains NonNaturalGroup. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart3' which uses or contains PeerChildScopes. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope '' which uses or contains PeerChildScopes. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Band1_ItemID' which uses or contains PeerChildScopes. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart4_ItemID' which uses or contains NonNaturalGroup. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart4_Category' which uses or contains NonNaturalGroup. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart4' which uses or contains PeerChildScopes. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope '' which uses or contains PeerChildScopes. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Band2_Month_Name' which uses or contains PeerChildScopes. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'Chart4_SyncDataSet' which uses or contains Aggregates. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Report contains a scope 'SmallMultipleContainer1_SyncDataSet' which uses or contains Aggregates. This may prevent optimizations from being applied to parent or child scopes. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317

3. After these entries, I saw the start of a given DataSet being processed:

SQL Server Reporting Services Report Server Processing 00000 Verbose Data source 'EntityDataSource': Transaction = False, MergeTran = False, NumDataSets = 1 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317

4. This was followed by the timing of how long it took to open a connection:

SQL Server Reporting Services Report Server Processing 00000 Verbose Opening a connection for DataSource: EntityDataSource took 141 ms. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317
SQL Server Reporting Services Report Server Processing 00000 Verbose Data source 'EntityDataSource': Created a connection. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317

5. This ended with the following entry:

SQL Server Reporting Services Report Server Processing 00000 Verbose Data source 'EntityDataSource': Processing of all data sets completed. 8dcc0d9c-7d7f-706b-8c45-2aa4abdb9317

This was our signature for a given dataset being processed. However, with what I had enabled, it didn't really shed any light on why it was slow. The only thing I noticed was that the number of log entries in between differed: the slower run had more entries than the faster run, but they just looked like background information.

I was then pointed to VerboseEX logging by a friend of mine, Gyorgy Homolya, from the SharePoint Support group. He indicated that this would show additional entries related to the SQL commands that SharePoint was issuing within different scopes. This turns everything up to VerboseEX and is enabled with the following PowerShell command:

get-sploglevel | set-sploglevel -traceseverity verboseex

These entries didn't show up before because I had turned off the main SharePoint categories. Another thing you can use to spot the start of a Power View report is to search for "RenderEdit"; this is the command issued to begin processing a given Power View report - the main entry point.

SharePoint Foundation General 6t8b Verbose Looking up context site https://asaxtontest1:80/_vti_bin/reportserver/?rs:Command=RenderEdit&rs:ProgressiveSessionId=d9b277f73c204cfb92a6193c284c043entvusm24nykqfbbxmfzr0h55 in the farm SharePoint_Config

NOTE: VerboseEX will severely bloat your ULS log. Very quickly, and without much going on with the server, my WFE log got up to ~250MB. Be very careful when turning this on, remember to reset the levels afterward (Clear-SPLogLevel restores the defaults), and don't do this on a production box.

Based on what we found above, I searched for "DataSet" to get back to the entries we saw earlier. Of note, the log now included the SQL commands themselves, so I had some false hits before I got to what I was looking for. Then we had the start entry for our first DataSet:

SQL Server Reporting Services Report Server Processing 00000 Verbose Data source 'EntityDataSource': Transaction = False, MergeTran = False, NumDataSets = 1 2bce0d9c-dda0-706b-8c45-2c28019a07c4

This was then followed by a SQL Scope:

SharePoint Foundation Monitoring nasq Verbose Entering monitored scope (SPSqlClient). Parent No 2bce0d9c-dda0-706b-8c45-2c28019a07c4

This was followed by entries pertaining to the SQL command, and then the following:

SharePoint Foundation Monitoring b4ly Verbose Leaving Monitored Scope (SPSqlClient). Execution Time=101.5835 2bce0d9c-dda0-706b-8c45-2c28019a07c4
SharePoint Foundation Monitoring nass Verbose ____Execution Time=101.5835 2bce0d9c-dda0-706b-8c45-2c28019a07c4

Looking further down, we came to the following entries, which consistently differed between the slow and fast runs across the DataSets.

SharePoint Foundation Monitoring b4ly Verbose Leaving Monitored Scope (SPSqlClient). Execution Time=48.2024 2bce0d9c-dda0-706b-8c45-2c28019a07c4
SharePoint Foundation Monitoring nass Verbose ____Execution Time=48.2024 2bce0d9c-dda0-706b-8c45-2c28019a07c4

SharePoint Foundation Monitoring b4ly Verbose Leaving Monitored Scope (EnsureListItemsData). Execution Time=77.6741 2bce0d9c-dda0-706b-8c45-2c28019a07c4
SharePoint Foundation Monitoring nass Verbose ____SQL Query Count=3 2bce0d9c-dda0-706b-8c45-2c28019a07c4
SharePoint Foundation Monitoring nass Verbose ____Execution Time=77.6741 2bce0d9c-dda0-706b-8c45-2c28019a07c4

Here is the fast run:

SharePoint Foundation Monitoring b4ly Verbose Leaving Monitored Scope (SPSqlClient). Execution Time=6.264 6dcf0d9c-9d35-706b-8c45-270465d749b0
SharePoint Foundation Monitoring nass Verbose ____Execution Time=6.264 6dcf0d9c-9d35-706b-8c45-270465d749b0

SharePoint Foundation Monitoring b4ly Verbose Leaving Monitored Scope (EnsureListItemsData). Execution Time=21.718 6dcf0d9c-9d35-706b-8c45-270465d749b0
SharePoint Foundation Monitoring nass Verbose ____SQL Query Count=3 6dcf0d9c-9d35-706b-8c45-270465d749b0
SharePoint Foundation Monitoring nass Verbose ____Execution Time=21.718 6dcf0d9c-9d35-706b-8c45-270465d749b0

EnsureListItemsData seemed to be chewing up more time in the slow run. We also identified which of the three queries within the EnsureListItemsData scope was taking the bulk of the time (in milliseconds):

SharePoint Foundation Database tzkv Verbose SqlCommand: ' SELECT TOP(@NUMROWS) t3.[nvarchar9] AS c14c8, t1.[SortBehavior] AS c0, t3.[nvarchar12] AS c14c10, CASE WHEN DATALENGTH(t1.DirName) = 0 THEN t1.LeafName WHEN DATALENGTH(t1.LeafName) = 0 THEN t1.DirName ELSE t1.DirName + N'/' + t1.LeafName END AS c15, t4.[nvarchar4] AS c21c6, t4.[tp_Created] AS c21c11, t1.[CheckinComment] AS c28, UserData.[tp_ItemOrder], UserData.[tp_ModerationStatus], UserData.[tp_Created], t1.[Size] AS c23, UserData.[nvarchar1], UserData.[nvarchar6], UserData.[tp_WorkflowInstanceID], t1.[ETagVersion] AS c36, t2.[nvarchar4] AS c3c6, t3.[nvarchar4] AS c14c6, UserData.[ntext1], UserData.[tp_AppAuthor], t2.[tp_Created] AS c3c11, t1.[MetaInfo] AS c18, t4.[nvarchar11] AS c21c9, UserData.[tp_AppEditor], t1.[TimeLastModified] AS c13, UserData.[tp_ID], t4.[nvarchar1] AS c21c4, t1.[Size] AS c26, UserData.[nvarchar5], UserData.[tp_GUID], t1.[ParentVersionString] AS c34, UserData.[bit1], t1.[TimeCreated] AS c1, UserData.[tp_Editor], t2.[nvarchar11] AS c3c9, t3.[nvarchar1] AS c14c4, t3.[nvarchar11] AS c14c9, UserData.[tp_Author], t2.[nvarchar1] AS c3c4, t3.[tp_Created] AS c14c11, t1.[ItemChildCount] AS c29, t7.[Title] AS c33c32, t1.[IsCheckoutToLocal] AS c16, UserData.[tp_ContentTypeId], t4.[nvarchar6] AS c21c7, t1.[LTCheckoutUserId] AS c24, t6.[Title] AS c31c32, UserData.[tp_WorkflowVersion], UserData.[nvarchar4], UserData.[tp_CheckoutUserId], t3.[nvarchar6] AS c14c7, UserData.[tp_Version], t5.[nvarchar1] AS c4, UserData.[tp_IsCurrentVersion], UserData.[nvarchar9], t2.[nvarchar6] AS c3c7, UserData.[tp_HasCopyDestinations], UserData.[tp_Level], t4.[nvarchar12] AS c21c10, t1.[Id] AS c19, t4.[tp_ID] AS c21c5, t1.[DirName] AS c22, t1.[ParentLeafName] AS c35, t1.[LeafName] AS c2, UserData.[nvarchar3], UserData.[tp_Modified], UserData.[tp_UIVersion], t1.[FolderChildCount] AS c30, UserData.[nvarchar8], t2.[tp_ID] AS c3c5, t3.[tp_ID] AS c14c5, UserData.[tp_CopySource], UserData.[tp_InstanceID], t2.[nvarchar12] AS c3c10, t1.[Type] AS c12, t1.[ProgId] AS c17, t4.[nvarchar9] AS c21c8, t1.[ClientId] AS c25, UserData.[tp_UIVersionString], t1.[ScopeId] AS c20, UserData.[nvarchar2], UserData.[nvarchar7], t2.[nvarchar9] AS c3c8 FROM AllDocs AS t1 WITH(FORCESEEK(AllDocs_Url(SiteId,DeleteTransactionId)),NOLOCK) INNER LOOP JOIN AllUserData AS UserData ON (UserData.[tp_RowOrdinal] = 0) AND (t1.SiteId=UserData.tp_SiteId) AND (t1.SiteId = @SITEID) AND (t1.ParentId = UserData.tp_ParentId) AND (t1.Id = UserData.tp_DocId) AND ( (UserData.tp_Level = 1 OR UserData.tp_Level =255) ) AND (t1.Level = UserData.tp_Level) AND ((UserData.tp_Level = 255 AND t1.LTCheckoutUserId =@IU OR (UserData.tp_Level = 1 AND (UserData.tp_DraftOwnerId IS NULL) OR UserData.tp_Level = 2)AND (t1.LTCheckoutUserId IS NULL OR t1.LTCheckoutUserId <> @IU ))) AND (UserData.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (UserData.[tp_CalculatedVersion] = 0 ) AND (UserData.[tp_DeleteTransactionId] = 0x ) AND (t1.[DeleteTransactionId] = 0x ) LEFT OUTER LOOP JOIN AllUserData AS t2 WITH(FORCESEEK(AllUserData_PK(tp_SiteId,tp_ListId,tp_DeleteTransactionId,tp_IsCurrentVersion,tp_ID,tp_CalculatedVersion)),NOLOCK) ON (UserData.[tp_Editor]=t2.[tp_ID]) AND (UserData.[tp_RowOrdinal] = 0) AND (t2.[tp_RowOrdinal] = 0) AND ( (t2.tp_Level = 1) ) AND (t2.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (t2.[tp_CalculatedVersion] = 0 ) AND (t2.[tp_DeleteTransactionId] = 0x ) AND (t2.tp_ListId = @L3 AND t2.tp_SiteId = @SITEID) AND (UserData.tp_ListId = @L4 AND UserData.tp_SiteId = @SITEID) AND 
(UserData.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (UserData.[tp_CalculatedVersion] = 0 ) AND (UserData.[tp_DeleteTransactionId] = 0x ) LEFT OUTER LOOP JOIN AllUserData AS t3 WITH(FORCESEEK(AllUserData_PK(tp_SiteId,tp_ListId,tp_DeleteTransactionId,tp_IsCurrentVersion,tp_ID,tp_CalculatedVersion)),NOLOCK) ON (UserData.[tp_CheckoutUserId]=t3.[tp_ID]) AND (UserData.[tp_RowOrdinal] = 0) AND (t3.[tp_RowOrdinal] = 0) AND ( (t3.tp_Level = 1) ) AND (t3.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (t3.[tp_CalculatedVersion] = 0 ) AND (t3.[tp_DeleteTransactionId] = 0x ) AND (t3.tp_ListId = @L3 AND t3.tp_SiteId = @SITEID) AND (UserData.tp_ListId = @L4 AND UserData.tp_SiteId = @SITEID) AND (UserData.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (UserData.[tp_CalculatedVersion] = 0 ) AND (UserData.[tp_DeleteTransactionId] = 0x ) LEFT OUTER LOOP JOIN AllUserData AS t4 WITH(FORCESEEK(AllUserData_PK(tp_SiteId,tp_ListId,tp_DeleteTransactionId,tp_IsCurrentVersion,tp_ID,tp_CalculatedVersion)),NOLOCK) ON (UserData.[tp_Author]=t4.[tp_ID]) AND (UserData.[tp_RowOrdinal] = 0) AND (t4.[tp_RowOrdinal] = 0) AND ( (t4.tp_Level = 1) ) AND (t4.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (t4.[tp_CalculatedVersion] = 0 ) AND (t4.[tp_DeleteTransactionId] = 0x ) AND (t4.tp_ListId = @L3 AND t4.tp_SiteId = @SITEID) AND (UserData.tp_ListId = @L4 AND UserData.tp_SiteId = @SITEID) AND (UserData.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (UserData.[tp_CalculatedVersion] = 0 ) AND (UserData.[tp_DeleteTransactionId] = 0x ) LEFT OUTER LOOP JOIN AllUserData AS t5 WITH(FORCESEEK(AllUserData_PK(tp_SiteId,tp_ListId,tp_DeleteTransactionId,tp_IsCurrentVersion,tp_ID,tp_CalculatedVersion)),NOLOCK) ON (t1.[LTCheckoutUserId]=t5.[tp_ID]) AND (t5.[tp_RowOrdinal] = 0) AND ( (t5.tp_Level = 1) ) AND (t5.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (t5.[tp_CalculatedVersion] = 0 ) AND (t5.[tp_DeleteTransactionId] = 0x ) AND (t5.tp_ListId = @L3 AND t5.tp_SiteId = @SITEID) LEFT OUTER LOOP JOIN AppPrincipals AS t6 WITH(NOLOCK) ON (UserData.[tp_AppAuthor]=t6.[Id]) AND (UserData.[tp_RowOrdinal] = 0) AND (t6.SiteId = @SITEID) AND (UserData.tp_ListId = @L4 AND UserData.tp_SiteId = @SITEID) AND (UserData.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (UserData.[tp_CalculatedVersion] = 0 ) AND (UserData.[tp_DeleteTransactionId] = 0x ) LEFT OUTER LOOP JOIN AppPrincipals AS t7 WITH(NOLOCK) ON (UserData.[tp_AppEditor]=t7.[Id]) AND (UserData.[tp_RowOrdinal] = 0) AND (t7.SiteId = @SITEID) AND (UserData.tp_ListId = @L4 AND UserData.tp_SiteId = @SITEID) AND (UserData.[tp_IsCurrentVersion] = CONVERT(bit,1) ) AND (UserData.[tp_CalculatedVersion] = 0 ) AND (UserData.[tp_DeleteTransactionId] = 0x ) WHERE ( (UserData.tp_Level = 1 OR UserData.tp_Level =255) AND ( UserData.tp_Level= 255 AND UserData.tp_CheckoutUserId = @IU OR ( UserData.tp_Level = 2 AND UserData.tp_DraftOwnerId IS NOT NULL OR UserData.tp_Level = 1 AND UserData.tp_DraftOwnerId IS NULL ) AND ( UserData.tp_CheckoutUserId IS NULL OR UserData.tp_CheckoutUserId <> @IU))) AND (UserData.tp_SiteId=@SITEID) AND (UserData.tp_RowOrdinal=0) AND (((t1.[LeafName] = @L5LNP) AND (t1.[DirName] = @L6DNP)) AND t1.SiteId=@SITEID AND (t1.DirName=@DN OR t1.DirName LIKE @DNEL+N'/%')) ORDER BY t1.[SortBehavior] DESC ,UserData.[tp_ID] ASC OPTION (FORCE ORDER, MAXDOP 1)' CommandType: Text CommandTimeout: 0 Parameter: '@LFFP' Type: UniqueIdentifier Size: 0 Direction: Input Value: '00000000-0000-0000-0000-000000000000' Parameter: '@SITEID' Type: UniqueIdentifier Size: 0 Direction: Input Value: 
'986edfff-e480-4c77-b8ef-8e09b3a5a27d' Parameter: '@IU' Type: Int Size: 0 Direction: Input Value: '1073741823' Parameter: '@L3' Type: UniqueIdentifier Size: 0 Direction: Input Value: '44daded8-6a19-474f-81cb-6c22d9208dbd' Parameter: '@L4' Type: UniqueIdentifier Size: 0 Direction: Input Value: '60467b41-a312-452e-a22c-6e96f08ed0c3' Parameter: '@L5LNP' Type: NVarChar Size: 4000 Direction: Input Value: 'HelloWorldPicnicPowerView.xlsx' Parameter: '@L6DNP' Type: NVarChar Size: 4000 Direction: Input Value: 'PowerPivotGallery' Parameter: '@DN' Type: NVarChar Size: 4000 Direction: Input Value: 'PowerPivotGallery' Parameter: '@DNEL' Type: NVarChar Size: 4000 Direction: Input Value: 'PowerPivotGallery' Parameter: '@NUMROWS' Type: BigInt Size: 0 Direction: Input Value: '2' Parameter: '@RequestGuid' Type: UniqueIdentifier Size: 0 Direction: Input Value: '2bce0d9c-edc6-706b-8c45-26dd20966162' 2bce0d9c-dda0-706b-8c45-2c28019a07c4

Here is where VerboseEX comes in handy. First off, it dumps out the managed call stack, so we can see who issued the call that got us here.

SharePoint Foundation Database tzkk VerboseEx SqlCommand StackTrace-Managed:
at Microsoft.SharePoint.Utilities.SqlSession.OnPreExecuteCommand(SqlCommand command)
at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior, SqlQueryData monitoringData, Boolean retryForDeadLock)
at Microsoft.SharePoint.SPSqlClient.ExecuteQueryInternal(Boolean retryfordeadlock)
at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean retryfordeadlock)
at Microsoft.SharePoint.Library.SPRequestInternalClass.GetListItemDataWithCallback2(IListItemSqlClient pSqlClient, String bstrUrl, String bstrListName, String bstrViewName, String bstrViewXml, SAFEARRAYFLAGS fSafeArrayFlags, ISP2DSafeArrayWriter pSACallback, ISPDataCallback pPagingCallback, ISPDataCallback pPagingPrevCallback, ISPDataCallback pFilterLinkCallback, ISPDataCallback pSchemaCallback, ISPDataCallback pRowCountCallback, Boolean& pbMaximalView)
at Microsoft.SharePoint.Library.SPRequest.GetListItemDataWithCallback2(IListItemSqlClient pSqlClient, String bstrUrl, String bstrListName, String bstrViewName, String bstrViewXml, SAFEARRAYFLAGS fSafeArrayFlags, ISP2DSafeArrayWriter pSACallback, ISPDataCallback pPagingCallback, ISPDataCallback pPagingPrevCallback, ISPDataCallback pFilterLinkCallback, ISPDataCallback pSchemaCallback, ISPDataCallback pRowCountCallback, Boolean& pbMaximalView)
at Microsoft.SharePoint.SPListItemCollection.EnsureListItemsData()
at Microsoft.SharePoint.SPListItemCollection.get_Count()
at Microsoft.SharePoint.SPWeb.GetItem(String strUrl, Boolean bFile, Boolean cacheRowsetAndId, Boolean bDatesInUtc, String[] fields)
at Microsoft.SharePoint.SPFile.get_Item()
at Microsoft.AnalysisServices.SPClient.SPViewOnlyFileAccessor.Load(String url)
at Microsoft.AnalysisServices.SPClient.KeepAliveThread.Session.<GetFreshETagExceptions>d__e.MoveNext()
at System.Linq.Enumerable.<ConcatIterator>d__71`1.MoveNext()
at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source)
at Microsoft.AnalysisServices.SPClient.WorkbookSession.BeginActivity()
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.WriteEndOfMessage(Boolean callBaseDirect)
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.EndRequest()
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.SendMessage(Boolean endReceivalIfException, Boolean readSession, Boolean readNamespaceCompatibility)
at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)    

We can also see the SQL Statistics IO for this command. The items that differed between the slow and fast runs were the following (here is the slow run):

SharePoint Foundation Database fdz2 VerboseEx SQL IO Statistics: Procedure , Table 'AllDocs'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 449, lob physical reads 0, lob read-ahead reads 96. 2bce0d9c-dda0-706b-8c45-2c28019a07c4

LOB Logical Reads and LOB Read-Ahead Reads. Based on the query above, we can simply paste it into Management Studio to compare the query runs and execution plans on the SQL side.
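
To reproduce that comparison yourself, wrap the captured command with STATISTICS IO/TIME in Management Studio; the query text and parameter values come straight from the ULS entry above:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Paste the SqlCommand text captured in the ULS log here, supplying the logged
-- parameter values (@SITEID, @L5LNP, @DN, and so on) as declared variables or literals.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;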

[Screenshot: the two query runs side by side in Management Studio]

The PowerPivot Gallery is on the left and the regular document library is on the right: 234 ms vs. 78 ms. Of note, the execution plans were identical, and we saw seeks on the AllDocs table. Also, updating statistics and rebuilding the index did not change anything.

[Screenshot: the identical execution plans for the two queries]

Looking at a Profiler trace of these two queries, we see that the major difference is the Reads, which the Statistics IO in the SharePoint ULS log had already highlighted.

[Screenshot: Profiler trace showing the difference in Reads between the two queries]

This brought us down to which AllDocs fields in the query were causing us to read so much more data. It was narrowed down to the MetaInfo field on the AllDocs table. The MetaInfo field stores properties about a document added to a given library; it is not the actual document itself, as that is stored in the DocStreams table. Looking at the datalength of the MetaInfo field, we see the following.

-- PowerPivot Gallery
select datalength(MetaInfo) from alldocs where id = '9319533E-35CC-46F8-89E0-C165AE588B0D'

-- Document Library
select datalength(MetaInfo) from alldocs where id = '1B4CEFB4-F6D0-4DCC-A7BA-4C6F02FFF846'

PowerPivot Gallery: 899,821 bytes
Document Library: 293 bytes

That's a pretty significant difference - roughly 3,000 times more data - and it also explains the number of reads needed, as there is simply more data to go grab. For the PowerPivot Gallery, MetaInfo is where we store the snapshots (screenshots) that we see in the Gallery.

[Screenshot: report snapshots in the PowerPivot Gallery]
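
If you want to see whether other documents carry similarly heavy property bags, a quick sketch against the content database surfaces the worst offenders (as with the datalength queries above, this kind of direct content-database query is for troubleshooting only):

-- Top documents by MetaInfo payload size
SELECT TOP (10) DirName, LeafName, DATALENGTH(MetaInfo) AS MetaInfoBytes
FROM dbo.AllDocs WITH (NOLOCK)
ORDER BY DATALENGTH(MetaInfo) DESC;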

This actually hasn't changed between SharePoint 2010 and SharePoint 2013 with regard to the PowerPivot Gallery. What did change is ADOMD.NET, which was updated to accommodate some of the architectural differences in SharePoint 2013. The main change was that we now check whether the workbook has been updated at all; part of that resulted in hitting the MetaInfo field significantly more often, which incurred the delay.

Based on the findings above, and how we narrowed down the performance problem within Power View, we were able to get some items addressed to bring performance back in line with what we saw in SharePoint 2010. The KB article for the issue is the following:

FIX: Slow performance when you render a Power View report that uses a SQL Server 2012 SP1 PowerPivot workbook as its data source
https://support.microsoft.com/kb/2846345

This was released as part of the following Cumulative Update:

Cumulative update package 5 for SQL Server 2012 SP1
https://support.microsoft.com/kb/2861107/en-us

 

Adam W. Saxton | Microsoft Escalation Services
https://twitter.com/awsaxton