
Saturday, March 27, 2010

Bad index usage within stored procedure

I came across this problem last week. Here’s a table containing contact data:

create table contact
 (
  last_name nvarchar(200)
 ,city nvarchar(200)
 ,filler nchar(200)
 )
go
insert contact (last_name, city)
  select LastName, City
    from AdventureWorks2008R2.Sales.vIndividualCustomer

Yes, I’m already using SQL Server 2008 R2, but I was able to reproduce the behavior described below with SQL Server 2008 as well. The script above inserts 18,508 rows into our table.

Let’s create a clustered index on the last_name column and a nonclustered index on the city column:

create clustered index ix_t1_last_name on contact(last_name)
go
create nonclustered index ix_t1_city on contact(city)

Given this table, what do you think about index utilization of the following ad hoc statement:

select *
    from contact
   where last_name like '%nobody%'
     and city like 'nowhere%'

The SELECT statement returns 0 rows, and the optimizer is able to figure this out by using the nonclustered index on the city column. The execution plan proves it:

[Execution plan: Index Seek on the nonclustered index ix_t1_city]

As the index contains two levels, two logical reads are necessary to obtain the empty result set.
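
By the way, you can verify those reads yourself with SET STATISTICS IO; here's a minimal sketch, assuming you run it in the database where the contact table was created:

set statistics io on
go
select *
    from contact
   where last_name like '%nobody%'
     and city like 'nowhere%'
go
set statistics io off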

Ok, now let’s create the following stored procedure:

create procedure getNames(@name nvarchar(50), @city nvarchar(50)) with recompile as
  select *
    from contact
   where city like @city
     and last_name like @name
go

Inside the SP the same statement as before is executed. Also, the procedure is created with the RECOMPILE option to ensure that a fresh execution plan is generated for every execution of the SP. Let's say we want it this way because we expect very different parameter values to be provided, which in turn will produce result sets of very different cardinality.

If we call the procedure like this:

exec getNames @name = '%nobody%', @city = 'nowhere%'

What will the execution plan look like? Inside the procedure the same statement is executed as in the ad hoc query above, so I expected to see the same execution plan as before.

But surprisingly, the execution plan reveals a clustered index seek this time which is far from optimal! Look at this:

[Execution plan: Clustered Index Seek on ix_t1_last_name]

Although the plan shows a Clustered Index Seek, this seek actually is a scan, because of the provided search argument “%nobody%”. (See here for more information.) Therefore, all leaf level pages of the clustered index have to be scanned, and this takes 1,065 reads. That's more than 500 times the number of reads the ad hoc query needed!

I did some more experiments, e.g. leaving out the clustered index and creating both indexes as nonclustered. Same result: when executing the procedure, the suboptimal index is always chosen, with one exception: if I create the clustered index on the city column, both queries will take advantage of the clustered index.

Strange. I wasn't expecting this. Due to parameter sniffing, the procedure's plan should be adjusted to the parameter values provided, all the more so since the procedure was created with the RECOMPILE option. But parameter sniffing does not seem to work in this case. As I don't know why, and I also consider this behavior totally unexpected and kind of wrong, I've created a bug for this on MSFT's Connect platform. You can vote for it here.
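
Until this is resolved, one thing worth trying (just a sketch, not a guaranteed fix) is to move the recompilation down to the statement level via OPTION (RECOMPILE) instead of the procedure-level WITH RECOMPILE, since a statement-level recompile may let the optimizer treat the sniffed parameter values like literals:

alter procedure getNames(@name nvarchar(50), @city nvarchar(50)) as
  select *
    from contact
   where city like @city
     and last_name like @name
  option (recompile)
go

Whether this actually produces the Index Seek on ix_t1_city in this particular case is something you'd have to verify on your own installation.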

Sunday, March 21, 2010

Making a column value in all rows identical

Assume we have the following table:

create table t1
 (
  id int not null identity (1,1) primary key
 ,status tinyint not null default 0
 ,filler nchar(4000) not null default '#'
 )
go
insert t1(status)
 select top 10000 1
   from sys.fulltext_system_stopwords as a
       ,sys.fulltext_system_stopwords as b
go
update top(100) t1
  set status = 0

Table t1 has 10,000 rows and also 10,000 pages at the leaf level of the clustered index (one row per page, since I've added the filler column, which is 8,000 bytes wide). I decided to create the test table like this (one row per page) because it eases further calculations.
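
If you want to double-check the one-row-per-page layout, sys.dm_db_index_physical_stats in DETAILED mode reports page and record counts per index level; a small sketch:

select index_level, page_count, record_count
  from sys.dm_db_index_physical_stats(db_id(), object_id('t1'), null, null, 'DETAILED')
 where index_level = 0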

I was using a table similar to this when collecting data during the day that had to be processed in a nightly batch job. The status column is used to distinguish between newly arrived rows and already processed ones. Let's just say we use a status value of 1 to mark rows that have been processed, and a value of 0 to flag all rows that are ready to be processed. So, at the moment there are 10,000 rows in total, of which 100 rows (with status=0) are waiting to be handled.

At the end, the nightly batch will set the status column to 1 for all rows that could be successfully handled. Usually, if all rows could be properly processed, every status value will be 1 after the nightly batch has finished its work.

So let's say all rows could be handled successfully and no new rows will arrive during the nightly batch. What's the difference, then, between the following two statements?

Statement 1:

update t1 set status = 1
 where status != 1

Statement 2:

update t1 set status = 1

Logically, there’s no difference. Both statements will set the value of the status column to 1 for all existing rows. But we have only 100 rows with status != 1 in place, so will there be any noticeable difference in the number of writes and the amount of dirty pages in the buffer cache after executing each of the two UPDATE statements? Is SQL Server smart enough to detect that only 100 rows are affected (that is also 100 pages in our case) for both queries, or is the update without a predicate really touching all rows/pages?

Well, let's see. For each of the following experiments I repeated the creation of the test table before executing one of the two UPDATE statements. Additionally, I ran the following script right before starting the UPDATE:

checkpoint 10
go
dbcc dropcleanbuffers
dbcc freeproccache

This is to make sure that all modified pages have been written to disk before examining the number of dirty pages that our UPDATE will create.

To examine the number of writes performed, I queried the column last_logical_writes of the DMV sys.dm_exec_query_stats. The number of dirty pages after each UPDATE was simply obtained by utilizing the DBA's all-purpose tool DBCC. DBCC MEMORYSTATUS includes a relevant section (almost at the end) containing buffer pool information, which looks like this:

[DBCC MEMORYSTATUS output: buffer pool section with the dirty page count]
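
As for the logical writes, the query against the DMV looked roughly like this (a sketch; filtering on the statement text is just one convenient way to find the UPDATE):

select st.text
      ,qs.last_logical_writes
  from sys.dm_exec_query_stats as qs
       cross apply sys.dm_exec_sql_text(qs.sql_handle) as st
 where st.text like '%update t1%'
   and st.text not like '%dm_exec_query_stats%'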

We start with the first statement, containing the predicate WHERE status != 1. I expect to see only 100 pages being written (100 logical writes) and about the same number of dirty pages (maybe a few more, since the value obtained by DBCC MEMORYSTATUS is server-wide).

Ok, the numbers are as expected. My experiment shows 100 logical writes and 115 dirty pages. No surprise here.

My second experiment uses statement 2 (the one without a predicate). This time the result is surprising. Again the UPDATE shows 100 logical writes and a similar number of dirty pages (135). That's really amazing, isn't it? SQL Server is smart enough to write only modified data and not touch unchanged rows, even if we, let's say, forget the predicate. I didn't expect this result.

Including the OUTPUT clause

What will happen if we incorporate an OUTPUT clause into our UPDATE statements? Via the OUTPUT clause I have access to all modified data through the virtual DELETED and INSERTED tables. Actually, two questions arise when the predicate WHERE status != 1 is not specified:

  1. Will we see all rows in the OUTPUT? Technically, only 100 rows are affected, but logically we have specified that all rows have to be modified. So, how many rows are returned by OUTPUT inserted.*?
  2. If OUTPUT inserted.* returns all rows, how many writes and dirty pages can be seen this time?

Here come the answers:

  1. If no predicate is specified, the OUTPUT clause returns all rows, regardless of the fact that, technically, only 100 rows are affected by the UPDATE.
  2. Incorporating the OUTPUT clause has no effect on the number of necessary writes and also no impact on the number of dirty pages.

I’d say, this is as expected.
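
For reference, the predicate-less variant with the OUTPUT clause looked roughly like this (a sketch; which columns you return from the inserted table is up to you):

update t1
   set status = 1
output inserted.id, inserted.status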

Introducing an Update Trigger

For the next experiments, I’ve also created an UPDATE TRIGGER like this:

create trigger tr_t1
  on t1 for update as
 begin
   -- do nothing
   declare @x int
 end 
go

The trigger doesn't do anything useful at all, but let's see if its mere existence changes something.

It does!

With the trigger in place, the number of writes and also the number of dirty pages increase dramatically if no predicate is specified (so all rows are affected by the UPDATE). In this case I observed that the number of writes as well as the total number of dirty pages are both around 40,000! That's about 400 times more than without a trigger. So the next CHECKPOINT has to write out about 40,000 pages this time, instead of only around 100 before. Big change!

I think the reason for this specific behavior lies in the mere existence of the virtual tables INSERTED and DELETED inside the trigger. The trigger is fired after the update has been performed, and these two virtual tables have to contain all of the table's rows when no predicate is specified. It seems SQL Server has to modify rows in order to have them included in the INSERTED and DELETED tables.
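
If you want to see how many rows actually arrive in the virtual tables, a slightly extended version of the dummy trigger can report it (just a sketch, using RAISERROR as a quick way to print the count):

alter trigger tr_t1
  on t1 for update as
begin
  declare @rows int
  select @rows = count(*) from inserted   -- rows visible to the trigger
  raiserror('trigger saw %d rows', 0, 1, @rows) with nowait
end
go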

I could also observe reads from tempdb, when the trigger is present. That’s as expected, since INSERTED and DELETED tables are hosted inside tempdb.

Somewhat strange, since the behavior for the OUTPUT clause is different, although both cases (OUTPUT and TRIGGER) are pretty similar.

So, even though the trigger does nothing, it affects query performance and resource utilization! Not because of the trigger's code, but simply because of its mere existence. Just one more reason to be careful with triggers!

Wrapping up

The following table contains all measured results.

Trigger       Predicate          Logical Writes   Dirty Pages
No Trigger    No Predicate                  100           135
No Trigger    WHERE status!=1               100           115
FOR UPDATE    No Predicate               40,101        40,107
FOR UPDATE    WHERE status!=1               501           556

Sunday, March 14, 2010

Implicit type conversion and the plan cache

You have probably already read something about the caveats and catches of relying on implicit type conversion. Here's another one regarding the plan cache.

Whenever an execution plan is compiled, SQL Server will try to store this plan in the plan cache. That's because plan compilations are quite expensive in terms of CPU usage and should therefore be minimized. But sometimes this process doesn't work as expected. Have a look at the following example.

Problem

First, we clear the plan cache:

dbcc freeproccache

Now let’s execute the following query:

select * from msdb.dbo.backupfile
 where backup_set_id=100000

and see what’s in the plan cache by invoking this query:

select qt.text
      ,qs.execution_count
  from sys.dm_exec_query_stats as qs
       cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt

Here’s the result:

[Query result: one cached plan with execution_count = 1]

Ok, no surprise. There’s a cached plan with a usage count of 1. This is as expected.

Now, let’s execute a second query like this:

select * from msdb.dbo.backupfile
 where backup_set_id=1000

I expected the cached plan to be reused. But surprisingly it is not! Let's see what the plan cache reveals now. It looks like this:

[Query result: two cached plans, each with execution_count = 1]

As we asked for backup_set_id=1000 this time, our parameter is implicitly converted into a SMALLINT, since SMALLINT is the smallest possible data type a value of 1000 fits into. Consequently a new plan is generated and cached.

We can repeat our experiment with a predicate of WHERE backup_set_id=10 and see another plan is going into the plan cache:

[Query result: three cached plans, each with execution_count = 1]

So, we have three plans now, all with an execution count of 1!

As for me, I think this behavior is somewhat strange. The column backup_set_id is of type INT, so why not just create one plan with INT as the parameter type and stick to this plan?
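
One way to dig a little deeper is to look at the statement text that is actually stored with each cached plan, together with its use count; for parameterized statements this text includes the parameter type SQL Server inferred (a sketch, run right after executing the three SELECTs above):

select st.text
      ,cp.usecounts
  from sys.dm_exec_cached_plans as cp
       cross apply sys.dm_exec_sql_text(cp.plan_handle) as st
 where st.text like '%backupfile%'
   and st.text not like '%dm_exec_cached_plans%'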

Solutions

Since implicit type conversion is the root of our problem, we may solve it by avoiding the implicit conversion and casting explicitly. We can re-write our queries like this:

select * from msdb.dbo.backupfile
 where backup_set_id=cast(100000 as int)
 
select * from msdb.dbo.backupfile
 where backup_set_id=cast(1000 as int)

select * from msdb.dbo.backupfile
 where backup_set_id=cast(10 as int)

After doing so, there's only one plan in the cache. But I suppose you wouldn't want to introduce those explicit casts into all of your queries. It's not very readable and looks like bad programming style, doesn't it?

Another solution comes through sp_executesql:

exec sp_executesql N'select * from msdb.dbo.backupfile
                      where backup_set_id=@id'
                  ,N'@id int'
                  ,@id = 100000
 
exec sp_executesql N'select * from msdb.dbo.backupfile
                      where backup_set_id=@id'
                  ,N'@id int'
                  ,@id = 1000

exec sp_executesql N'select * from msdb.dbo.backupfile
                      where backup_set_id=@id'
                  ,N'@id int'
                  ,@id = 10

sp_executesql expects parameter declarations and, therefore, no implicit type conversion is involved this time. After all three of the above queries have been executed, we see only one cached plan with an execution count of 3.

Saturday, March 6, 2010

String comparison and collation

You probably already know that string comparison is dependent on the collation. But sometimes this dependency is slightly strange…

I have installed a SQL Server with LATIN1_GENERAL_CI_AS as the default collation (which is also the default setting during installation). Some weeks ago I came across a specific behavior in combination with this collation and the German language.

German is a beautiful language! Really. You may know that we Germans have some special vowels (the “Umlaut”: ä, ö, and ü) and also the extra ß, which is often (but not always) simply interchangeable with “ss”. Look at this query that uses the default collation for comparison:

select case
         when 'ss' collate latin1_general_ci_as
             = 'ß' collate latin1_general_ci_as
              then 'Yes'
         else 'No'
       end [ss=ß?]

Considering that LATIN1_GENERAL_CI_AS is the default collation, the query above is identical to the following query:

select case
         when 'ss' = 'ß' then 'Yes'
         else 'No'
       end [ss=ß?]

In both cases the result is Yes, so “ß” is considered identical to “ss” when using the LATIN1_GENERAL_CI_AS collation.
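
Here's a quick demonstration of the practical impact (a sketch using a table variable; the column name and the sample value are made up):

declare @t table (name nvarchar(50) collate latin1_general_ci_as)
insert @t values (N'Straße')
select * from @t where name = N'Strasse'   -- returns the row under this collation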

Hmm. Now look at this query:

select case
         when 'ä' = 'ae' then 'Yes'
         else 'No'
       end [ä=ae?]

This time the answer is No, which is somewhat surprising, since it makes the overall behavior look inconsistent. In German, “ä” and “ae” (and also “ö” and “oe” as well as “ü” and “ue”) are just as interchangeable as “ß” and “ss”.

When this happened to me, I started browsing Books Online. There's a hint advising you to set the default collation to LATIN1_GENERAL_BIN for new installations. I didn't know this so far, but ok: let's retry our experiment:

select case
         when 'ss' collate latin1_general_bin
             = 'ß' collate latin1_general_bin
              then 'Yes'
         else 'No'
       end [ss=ß?]

Now the answer is No.

But considering the fact that most newer installations use LATIN1_GENERAL_CI_AS, simply because that's the default during installation, I can't just change the collation to LATIN1_GENERAL_BIN, since this would certainly create other problems with queries spanning multiple databases with different collations. And that's not even taking into account that changing the collation for an existing server and all of its databases/columns is a cumbersome and also very risky task on its own…
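
If changing the server or database collation is not an option, you can at least force a binary comparison for the individual expressions where it matters (a sketch; person and last_name are made-up names):

select *
  from person
 where last_name = N'Strasse' collate latin1_general_bin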
