In my previous post, I discussed the ROW_NUMBER ranking function which was introduced in SQL Server 2005. In this post, I'll take a look at the other ranking functions - RANK, DENSE_RANK, and NTILE. Let's begin with RANK and DENSE_RANK. These functions are similar - both in functionality and implementation - to ROW_NUMBER. The difference is that while the ROW_NUMBER function assigns a unique (and ascending) value to each row without regard for ties in the ORDER BY values, the RANK and DENSE_RANK functions assign the same value to rows that have the same ORDER BY value. The difference between the RANK and DENSE_RANK functions is in how values are assigned to rows following a tie. The easiest way to illustrate the difference between all of these functions is with a simple example:

CREATE TABLE T (PK INT IDENTITY, A INT, B INT, C INT)
CREATE UNIQUE CLUSTERED INDEX TPK ON T(PK)

INSERT T VALUES (0, 1, 6)
INSERT T VALUES (0, 1, 4)
INSERT T VALUES (0, 3, 2)
INSERT T VALUES (0, 3, 0)
INSERT T VALUES (1, 0, 7)
INSERT T VALUES (1, 0, 5)
INSERT T VALUES (0, 2, 3)
INSERT T VALUES (0, 2, 1)

SELECT *,
    ROW_NUMBER() OVER (ORDER BY B) AS RowNumber,
    RANK() OVER (ORDER BY B) AS Rank,
    DENSE_RANK() OVER (ORDER BY B) AS DenseRank
FROM T

PK    A     B     C     RowNumber  Rank       DenseRank
----- ----- ----- ----- ---------- ---------- ----------
5     1     0     7     1          1          1
6     1     0     5     2          1          1
1     0     1     6     3          3          2
2     0     1     4     4          3          2
7     0     2     3     5          5          3
8     0     2     1     6          5          3
3     0     3     2     7          7          4
4     0     3     0     8          7          4

Notice how the ROW_NUMBER function ignores the duplicate values for column B and assigns the unique integers from 1 to 8 to the 8 rows, while the RANK and DENSE_RANK functions assign the same value to each pair of duplicate rows. Moreover, notice how the RANK function counts the duplicate rows even while it assigns the same value to each duplicate row, whereas the DENSE_RANK function does not count the duplicate rows. For example, both the RANK and DENSE_RANK functions assign a rank of 1 to the first two rows, but the RANK function assigns a rank of 3 to the third row - as it is the third row - while the DENSE_RANK function assigns a rank of 2 to the third row - as it contains the second distinct value for column B. Note that the maximum value returned by the DENSE_RANK function is exactly equal to the number of distinct values in column B.
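The semantics above can be sketched outside of SQL Server. Here is a minimal Python sketch of the three definitions (this is just an illustration of the semantics, not of how SQL Server computes them) over the B values from the example table:

```python
# Compute ROW_NUMBER, RANK, and DENSE_RANK over a list of ORDER BY keys.
# Ties in the key get the same RANK/DENSE_RANK; ROW_NUMBER is always unique.
def rank_functions(keys):
    results = []
    rank = dense_rank = 0
    prev = object()  # sentinel that differs from every key
    for row_number, key in enumerate(sorted(keys), start=1):
        if key != prev:
            rank = row_number   # RANK restarts at the current row number
            dense_rank += 1     # DENSE_RANK just moves to the next value
            prev = key
        results.append((key, row_number, rank, dense_rank))
    return results

# The B column from the example table, in clustered index (PK) order:
for row in rank_functions([1, 1, 3, 3, 0, 0, 2, 2]):
    print(row)  # (B, RowNumber, Rank, DenseRank)
```

Running this reproduces the RowNumber, Rank, and DenseRank columns from the result set above.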

The query plan for computing all three of these functions is essentially the same and, in fact, as long as all three of the functions share the same PARTITION BY and ORDER BY clauses, a single sequence project operator can compute all three functions simultaneously:

|--Sequence Project(DEFINE:([Expr1003]=row_number, [Expr1004]=rank, [Expr1005]=dense_rank))
     |--Segment [GROUP BY:([T].[B])]
          |--Segment [GROUP BY:()]
               |--Sort(ORDER BY:([T].[B] ASC))
                    |--Clustered Index Scan(OBJECT:([T].[TPK]))

The only real difference between this plan and a plan that computes only the ROW_NUMBER function is the presence of the extra segment operator that groups by column B. This segment operator detects ties and notifies the sequence project operator when to output the same value and when to output a new value for the RANK and DENSE_RANK functions.

All of the issues from my previous post on ROW_NUMBER (such as the use of an index to avoid the sort or the impact of mixing ranking functions with different PARTITION BY and/or ORDER BY clauses) apply equally well to the RANK and DENSE_RANK functions so I will not repeat them here.

Now let's take a look at the NTILE function. This function breaks an input set down into N equal sized groups. To determine how many rows belong in each group, SQL Server must first determine the total number of rows in the input set. If the NTILE function includes a PARTITION BY clause, SQL Server must compute the number of rows in each partition separately. Once we know the number of rows in each partition, we can write the NTILE function as

NTILE(N) := (N * (ROW_NUMBER() - 1) / COUNT(*)) + 1

where COUNT(*) is the number of rows in each partition. We need the -1 and +1 in this equation as the ROW_NUMBER and NTILE functions are 1 not 0 based.
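The formula relies on integer division. A quick Python check (using `//` for integer division) reproduces the tile assignments for partitions of various sizes:

```python
# NTILE(N) = (N * (ROW_NUMBER() - 1) // COUNT(*)) + 1, with integer division
def ntile(n, row_number, count):
    return n * (row_number - 1) // count + 1

# A 6-row partition split into 2 tiles: rows 1-3 -> tile 1, rows 4-6 -> tile 2
print([ntile(2, rn, 6) for rn in range(1, 7)])  # [1, 1, 1, 2, 2, 2]

# With 7 rows, the extra row lands in the first tile
print([ntile(2, rn, 7) for rn in range(1, 8)])  # [1, 1, 1, 1, 2, 2, 2]
```

Note that when the row count does not divide evenly, the integer division naturally places the extra rows in the earlier tiles.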

We already know how to compute the ROW_NUMBER function, so the real question is how to compute the COUNT(*) function. It turns out that SQL Server supports the OVER clause that we've already seen with the ranking functions for nearly all aggregate functions as well. This type of aggregate function is sometimes referred to as a "window aggregate function." With a window aggregate function, instead of returning a single row per group, SQL Server returns the same aggregate result with each row in the group. Let's look at an example:

SELECT *, COUNT(*) OVER (PARTITION BY A) AS Cnt FROM T

PK    A     B     C     Cnt
----- ----- ----- ----- ----------
1     0     1     6     6
2     0     1     4     6
3     0     3     2     6
4     0     3     0     6
7     0     2     3     6
8     0     2     1     6
5     1     0     7     2
6     1     0     5     2

Notice that unlike a scalar aggregate query which can return only aggregate functions or an aggregate query with a GROUP BY clause which can return only columns from the GROUP BY clause and aggregate functions, this query can return any columns from the table. Moreover, since each row in the group gets the same result, the ORDER BY clause is not supported for this scenario. Now let's take a look at the plan for this query:
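The window-aggregate behavior - every row in a partition carries its partition's aggregate rather than collapsing into one row per group - can be sketched in two passes over the data:

```python
from collections import Counter

# COUNT(*) OVER (PARTITION BY A): every row receives its partition's row count.
rows = [(1, 0), (2, 0), (3, 0), (4, 0), (7, 0), (8, 0), (5, 1), (6, 1)]  # (PK, A)

counts = Counter(a for _, a in rows)               # pass 1: count each partition
with_cnt = [(pk, a, counts[a]) for pk, a in rows]  # pass 2: attach the count

for row in with_cnt:
    print(row)  # (PK, A, Cnt) - all 8 input rows survive, unlike GROUP BY
```

The key point the sketch makes concrete: unlike a GROUP BY aggregate, the output has exactly as many rows as the input.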

|--Nested Loops(Inner Join)
     |--Table Spool
     |    |--Segment [GROUP BY:([T].[A])]
     |         |--Sort(ORDER BY:([T].[A] ASC))
     |              |--Clustered Index Scan(OBJECT:([T].[TPK]))
     |--Nested Loops(Inner Join, WHERE:((1)))
          |--Compute Scalar(DEFINE:([Expr1003]=CONVERT_IMPLICIT(int,[Expr1005],0)))
          |    |--Stream Aggregate(DEFINE:([Expr1005]=Count(*)))
          |         |--Table Spool
          |--Table Spool

While this plan is certainly more complex than a typical aggregation query plan, it's actually fairly straightforward. The sort and segment operators merely break the rows into groups in the same way as a stream aggregate. The table spool above the segment operator is a special type of common subexpression spool known as a segment spool. This spool operator makes a copy of all rows in one group and then, when the segment operator indicates the start of a new group, it returns a single row to the topmost nested loops join which immediately executes its inner side. At this point, the table spool replays the rows from the current group and the stream aggregate counts them. Finally, the stream aggregate returns the count to the bottommost nested loops join which joins this count with the rows from the current group - the table spool replays these rows a second time.

Note that this plan needs the segment spool because there is no way to know a priori how many rows are in the same group with the current row without scanning the entire group. But after scanning the entire group and determining the count, we need some way to go back and apply the count to each of the original rows. The spool makes it possible to go back. A typical aggregate query with a GROUP BY clause does not have this issue because after computing the aggregate function (be it a COUNT or any other function), it simply outputs the result without going back to the original rows.
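The spool's buffer-count-replay pattern can be sketched as a per-group two-pass loop. This is only an illustration of the data flow (buffer one group, aggregate over the buffered copy, then replay the buffered rows joined with the aggregate), not of the actual operators:

```python
from itertools import groupby

# Mimic the segment spool: buffer one group at a time, compute the aggregate
# over the buffered copy, then replay the buffered rows joined with the result.
def count_over_partition(rows, key):
    out = []
    for _, group in groupby(sorted(rows, key=key), key=key):
        buffered = list(group)   # the spool's copy of the current group
        cnt = len(buffered)      # the stream aggregate over the first replay
        out.extend((row, cnt) for row in buffered)  # second replay, joined
    return out

rows = [(1, 0), (2, 0), (5, 1), (6, 1), (3, 0)]  # (PK, A)
print(count_over_partition(rows, key=lambda r: r[1]))
```

Each group is materialized once and read back twice, which is exactly why the plan above contains one spool feeding two consumers.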

For an example of a different query that yields a very similar query plan involving a segment spool, check out my discussion of subqueries on pages 171 through 173 of Inside Microsoft SQL Server 2005: Query Tuning and Optimization.

It's now a simple matter to see how to compute the NTILE function. For example, the following query groups the rows by column A and then for each group computes which rows are below the median (denoted by a 1) and which rows are above the median (denoted by a 2):

SELECT *, NTILE(2) OVER (PARTITION BY A ORDER BY B) AS NTile FROM T

Here are results for this query:

PK    A     B     C     NTile
----- ----- ----- ----- ------
1     0     1     6     1
2     0     1     4     1
7     0     2     3     1
8     0     2     1     2
3     0     3     2     2
4     0     3     0     2
5     1     0     7     1
6     1     0     5     2

And here is the plan for this query:

|--Sequence Project(DEFINE:([Expr1003]=ntile))
     |--Compute Scalar(DEFINE:([Expr1007]=(1)))
          |--Segment [GROUP BY:([T].[A])]
               |--Nested Loops(Inner Join)
                    |--Table Spool
                    |    |--Segment [GROUP BY:([T].[A])]
                    |         |--Sort(ORDER BY:([T].[A] ASC, [T].[B] ASC))
                    |              |--Clustered Index Scan(OBJECT:([T].[TPK]))
                    |--Nested Loops(Inner Join, WHERE:((1)))
                         |--Stream Aggregate(DEFINE:([Expr1004]=Count(*)))
                         |    |--Table Spool
                         |--Table Spool

This plan is basically the same as the plan for the previous COUNT(*) query. The only difference is the addition of the extra segment and sequence project operators which compute the NTILE function by computing ROW_NUMBER and applying the formula I provided above. Using this formula, we could also compute NTILE manually as follows:

SELECT *, (2*(RowNumber-1)/Cnt)+1 AS MyNTile
FROM
(
    SELECT *,
        ROW_NUMBER() OVER (PARTITION BY A ORDER BY B) AS RowNumber,
        COUNT(*) OVER (PARTITION BY A) AS Cnt,
        NTILE(2) OVER (PARTITION BY A ORDER BY B) AS NTile
    FROM T
) TSeq

Unfortunately, the optimizer does not recognize that it can use the same plan fragment to compute both the ranking functions and the COUNT(*) window aggregate function and ends up generating the following inefficient plan which computes the COUNT(*) twice:

|--Compute Scalar(DEFINE:([Expr1006]=((2)*([Expr1003]-(1)))/CONVERT_IMPLICIT(bigint,[Expr1004],0)+(1)))
     |--Nested Loops(Inner Join)
          |--Table Spool
          |    |--Segment [GROUP BY:([T].[A])]
          |         |--Sequence Project(DEFINE:([Expr1003]=row_number, [Expr1005]=ntile))
          |              |--Compute Scalar(DEFINE:([Expr1010]=(1)))
          |                   |--Segment [GROUP BY:([T].[A])]
          |                        |--Nested Loops(Inner Join)
          |                             |--Table Spool
          |                             |    |--Segment [GROUP BY:([T].[A])]
          |                             |         |--Sort(ORDER BY:([T].[A] ASC, [T].[B] ASC))
          |                             |              |--Clustered Index Scan(OBJECT:([T].[TPK]))
          |                             |--Nested Loops(Inner Join, WHERE:((1)))
          |                                  |--Stream Aggregate(DEFINE:([Expr1007]=Count(*)))
          |                                  |    |--Table Spool
          |                                  |--Table Spool
          |--Nested Loops(Inner Join, WHERE:((1)))
               |--Compute Scalar(DEFINE:([Expr1004]=CONVERT_IMPLICIT(int,[Expr1012],0)))
               |    |--Stream Aggregate(DEFINE:([Expr1012]=Count(*)))
               |         |--Table Spool
               |--Table Spool

Partitioning and Ranking seem like they might be useful in my EDI application, but I’m not sure exactly how I would do it…

EDI transaction sets arrive in groups of transaction segments. In my case, a segment ID of "INS" signals a new eligibility "record" (which includes the INS and a host of other related segments).

So, a "partition" should include all records beginning with an INS segment, until the next INS segment.

So, imagine data coming in like this:

segment, field1, field2, field3
INS, val, val, val
DTP, val, val, val
COB, val, val, val
INS, val, val, val
DTP, val, val, val
COB, val, val, val
INS, val, val, val
DTP, val, val, val
COB, val, val, val

So the above are 3 "records" but in 3 rows each.

Can I use Partition Over to assign a dense rank to each set of records, so I wind up with this:

segment, field1, field2, field3, RecordNumber
INS, val, val, val, 1
DTP, val, val, val, 1
COB, val, val, val, 1
INS, val, val, val, 2
DTP, val, val, val, 2
COB, val, val, val, 2
INS, val, val, val, 3
DTP, val, val, val, 3
COB, val, val, val, 3

It seems like it should be kind of easy with the partition, if one can find some way to do the "over" clause based on the row_number of the INS segment.

?

Hi Bill,

The records in your first set (without the RecordNumber column) have no column on which you can order the rows. This presents a problem since, in the absence of an ORDER BY clause, SQL does not guarantee that any given query will process rows in any given order.

If you add an ascending identity or row number column that ensures a certain global order across all of the rows in your input set (let’s call this column [id]) AND if you know that there is EXACTLY one DTP and COB segment per INS segment, you can then write something like so:

SELECT *, ROW_NUMBER() OVER (PARTITION BY segment ORDER BY id) as RecordNumber FROM t ORDER BY id

If you cannot guarantee that all segments follow the same pattern (INS, DTP, COB, …), it may still be possible to do something more sophisticated, but it will likely be quite a bit more complex and/or expensive.

HTH

Craig
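For readers following the EDI scenario above: conceptually, the RecordNumber Bill wants is a running count of INS segments seen so far in arrival order. A Python sketch of that idea (the segment values match Bill's example; everything else here is illustrative, not SQL Server behavior):

```python
# Assign a record number by counting INS segments seen so far, in arrival order.
# Each INS starts a new logical "record"; following segments inherit its number.
def assign_record_numbers(segments):
    record_number = 0
    out = []
    for seg in segments:
        if seg == "INS":
            record_number += 1
        out.append((seg, record_number))
    return out

stream = ["INS", "DTP", "COB", "INS", "DTP", "COB", "INS", "DTP", "COB"]
print(assign_record_numbers(stream))
```

Craig's ROW_NUMBER() OVER (PARTITION BY segment ORDER BY id) query reaches the same numbering when every record has exactly the same segment pattern; the running-count formulation is the more general statement of the problem.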

DENSE_RANK returns ranks as positive integers with no gaps between them; it assigns ranks to records according to the ORDER BY clause. Follow the link to know more…

http://www.sqllion.com/…/dense_rank

Regarding the NTILE function, here is a real scenario:

If you work on an HR management module in your project and need to build a query that returns the highest- and medium-salaried employees, this can be solved using the NTILE ranking function.

For more detail, check the link below –

vsstack.blogspot.in/…/ntile-function-in-sql-server.html