Yet another cause of Kerberos authentication failure connecting to SQL Server

Yesterday I was troubleshooting a rather common problem. A query running on an instance of SQL Server (INST1) was using the OPENROWSET function to retrieve a rowset from another instance (INST2). This scenario is known as “double-hop authentication”, and it requires Kerberos authentication to work. When executed from a particular client machine, the query was failing with the infamous “Login failed for user ‘NT AUTHORITY\ANONYMOUS LOGON’” error.
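As a sketch of the scenario (the provider, connection string, and remote query here are placeholders, not the actual customer query):

```sql
-- Hypothetical repro, running on INST1 under Windows authentication.
-- With Trusted_Connection=yes, INST1 must delegate the caller's Windows
-- credentials to INST2 -- the "double hop" that requires Kerberos.
SELECT r.name
FROM OPENROWSET(
    'SQLNCLI',
    'Server=INST2;Trusted_Connection=yes;',
    'SELECT name FROM master.sys.databases'
) AS r;
```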

The troubleshooting steps for this problem are well known and are described in multiple sources. My favorite is the My Kerberos Checklist post by Adam Saxton on the CSS SQL Server Engineers blog. I went through the list and confirmed that the basics were configured correctly: the proper SPNs were registered, delegation for the SQL instance service account was enabled, the client account was not marked as sensitive, etc. Then I ran the following query on INST1:

SELECT auth_scheme, client_net_address
FROM sys.dm_exec_connections;

I noticed that for some clients, auth_scheme was KERBEROS, yet for that particular client machine it was NTLM. This seemed to indicate that the instance was correctly configured for Kerberos, and that the problem was likely client-related.
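The same DMV can also be filtered to a single connection; for example, a session can check which authentication scheme its own connection is using:

```sql
-- Returns the authentication scheme (SQL, NTLM, or KERBEROS)
-- and client address of the current session's connection only.
SELECT auth_scheme, client_net_address
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```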

The next step was to trace the authentication process on the client. This produced the first useful clue – during an attempt to get a Kerberos ticket from the Key Distribution Center (KDC), a call to the LsaCallAuthenticationPackage function failed with substatus 0xc00002fd, which corresponds to this error message: “The encryption type requested is not supported by the KDC.”

I started looking at all parts of the picture once again, including the service account of the INST1 instance. I noticed that for that account, the “Use DES encryption types for this account” option was enabled, yet that option was disabled for other SQL Server service accounts in the domain. Once I disabled that option and restarted the INST1 instance, the query immediately worked.

The root cause of this problem is explained in KB977321. In Windows Server 2008 R2 and Windows 7, DES encryption is disabled by default, and AES encryption is used. The client machine in this case did run Windows Server 2008 R2, unlike other clients connecting to INST1, which ran older versions of Windows. Since the INST1 service account only supported DES encryption, Kerberos authentication failed because a common encryption type between the client and the server did not exist. Another possible solution could be to enable the (weaker) DES encryption on the client, as described in the KB article.

© 2019 Microsoft. All rights reserved.

Statistics on system tables and query performance

Recently I was helping a customer design a database with a very large number of database objects – hundreds of schemas, a few thousand stored procedures, and hundreds of partitioned tables, with most tables containing between two hundred and a thousand partitions. Once all these objects were created, I had to write a catalog metadata query that went something like this: “Find the partition of a table in schema A that has the largest number of pages of all partitions in that table, with the limitation that the table must have more than one partition, may not be referenced by a foreign key constraint, and has a view in schema B with the same name.” (As little sense as this probably makes, there was a perfectly good reason why this query was needed.)

With the plethora of catalog views and DMVs in SQL Server, writing this query wasn’t particularly hard. In this case, it involved sys.schemas, sys.tables, sys.views, sys.partitions, sys.indexes, sys.dm_db_partition_stats, and sys.data_spaces. However, while the query returned correct results, its performance was very poor: it took about twenty seconds to return the row I was after. Looking at the query plan, I noticed something that would normally be a direct clue to the cause of poor performance: while the actual number of rows passing through various iterators in the plan varied, the estimated number of rows was always 1. In other words, the estimates used by the optimizer were inaccurate, resulting in a suboptimal plan and poor performance.

If the query used regular user tables, the next troubleshooting step would be to update statistics on these tables in the hope of giving the optimizer better estimates. But in this case, the query used only catalog views and DMVs, and at first glance, there were no statistics in sight. However, looking at the query plan, I saw that the objects referenced in the iterators were actually hidden system tables, e.g. sys.sysclsobjs, sys.sysidxstats, sys.sysschobjs, sys.syssingleobjrefs, etc. While it is not possible to query these tables directly (unless using the DAC connection), their definition can still be viewed with the sp_help stored procedure. The result from sp_help includes the indexes defined on these tables, and for each index, the corresponding statistics can be viewed with DBCC SHOW_STATISTICS.
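For example (the index name clst on sys.sysschobjs is what I see on my instance; hidden system table index names can differ between versions):

```sql
-- sp_help shows the definition and indexes of a hidden system table,
-- even though the table cannot be selected from outside the DAC.
EXEC sp_help 'sys.sysschobjs';

-- For an index name returned by sp_help, inspect its statistics:
DBCC SHOW_STATISTICS ('sys.sysschobjs', clst);
```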

When I did that, the reason for inaccurate estimates was clear – even though most tables had a rather large number of rows corresponding to the multitude of objects in the database, in many cases the statistics were dated prior to the time when database objects were created, and showed completely inaccurate density and distribution. Once I ran UPDATE STATISTICS on each index of the system tables, the query returned in less than a second.
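The fix can be sketched as follows, one statement per system table identified in the query plan (depending on the version and permissions, updating statistics on hidden system tables may need to run over the DAC connection):

```sql
-- Rebuild statistics on one of the hidden system tables with a full
-- scan; repeat for each system table appearing in the query plan.
UPDATE STATISTICS sys.sysschobjs WITH FULLSCAN;
```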


Estimating data compression savings for entire database

The sp_estimate_data_compression_savings system stored procedure in SQL Server 2008 estimates how much space can be saved by compressing a single partition of an index on a table. Performing this estimation for many tables, indexes, and partitions manually is a tedious task.
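For reference, a single call looks like this (the schema, table, index, and partition here are placeholders):

```sql
-- Estimate PAGE compression savings for one partition of a
-- hypothetical table; index_id 1 refers to the clustered index.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'MyTable',
    @index_id = 1,
    @partition_number = 1,
    @data_compression = 'PAGE';
```

The wrapper script described below automates exactly this call across every partition and compression option.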

The attached script is a wrapper for the sp_estimate_data_compression_savings procedure: it calls the procedure for each partition in the database and finally outputs a single result set. This multi-row result set is similar in shape to the one-row set returned by sp_estimate_data_compression_savings. It contains three rows for every partition in the database, one row for each available compression option (NONE, ROW, PAGE). The three totals rows, presenting average compression estimates for the entire database, appear at the top of the result set. As written, the script includes only the database-level totals; however, it is not hard to add other levels of aggregation (e.g., by schema, table, index, or a combination thereof) by modifying the GROUPING SETS clause in the last SELECT statement.

Please note that for large databases, the script can take a long time to complete. To gauge progress, the script outputs status messages referencing the object, index, and partition used as arguments for the current call to sp_estimate_data_compression_savings.

The sp_estimate_data_compression_savings stored procedure is available only in the Enterprise, Developer, or Datacenter editions of SQL Server 2008 and SQL Server 2008 R2.

EstimateCompressionSavings.sql
