Often I find people aren't taking advantage of SET STATISTICS IO and SET STATISTICS TIME while trying to tune their queries.
Bottom line: we want our queries to run as fast as possible. One of the challenges we face is that the environments we work on are not all similar; the configuration, load and so on differ between our Development box, Staging box, Production box etc. So how can we measure whether the changes we make really improve performance and will work well in other environments too?
Let's try to understand a few basics before seeing some code in action. To execute any query, SQL Server uses many server resources. One of them is the amount of CPU it needs to run the query; this remains almost the same between executions (there might be minimal variations of a few milliseconds). Another resource it needs to execute a query is IO. SQL Server first checks the memory/data cache (logical reads) for the availability of the required data; if the data is not available there, it goes to disk (physical reads) to fetch it. So the more CPU and IO resources a query consumes, the slower it will perform, and our goal should be to (re)write queries that use less CPU and IO.
The theory sounds good, but how do we measure whether the changes we make to our queries increase or decrease CPU/IO resource usage? This is where the SET STATISTICS IO and SET STATISTICS TIME commands come in handy.
Things to look for in SET STATISTICS IO while tuning the queries:
1. Scan Count - The number of times the underlying table in the query was accessed. If your query has no JOINs, you can ignore this count; if it does have JOINs, this is something you need to watch during your query tuning process. Needless to say, if this count goes down while you tune, it's a good sign.
2. Logical Reads - This is the best piece of information to rely on. This count tells you how many PAGES had to be read from the DATA CACHE to produce the desired result, and it doesn't change between multiple executions, so we can rely on it while tuning the query. The fewer the logical reads, the better the performance.
Here I would like to elaborate a bit on logical reads. For whatever query we run, SQL Server checks the DATA CACHE first for the availability of the data. If the required data pages are not in the DATA CACHE, it goes to the physical disk (physical reads) to fetch those pages and moves them into the DATA CACHE. In other words, for SQL Server to execute a given query it first needs all the data to be available in the DATA CACHE. Apart from these two counters, the rest of the SET STATISTICS IO output is not that useful to us while tuning queries.
Things to look for in SET STATISTICS TIME while tuning the queries:
1. CPU Time - This tells us how much CPU time was used to execute the query, and it is reliable. You might see slight variations in this number between runs on the same box or on different machines, but they will be minimal.
Always look for the "SQL Server Execution Times:" section that appears immediately after the (x rows affected) line in the output.
There is another output item, "Elapsed Time", which tells us the total amount of time the query took from start to end. Elapsed time is not useful for query tuning, as this number can vary drastically based on server load.
How to enable these commands?
By default, both SET STATISTICS IO and SET STATISTICS TIME are disabled. To enable them, use either of these options.
Option 1. Tools > Options > Query Execution > SQL Server > Advanced > tick the check-boxes beside these commands.
Edit: We can also do this - Query > Query Options > Advanced.
Option 2. We can use T-SQL commands to do it.
SET STATISTICS IO ON
GO
SET STATISTICS TIME ON
GO
..... Your query comes here .....
SET STATISTICS IO OFF
GO
SET STATISTICS TIME OFF
GO
Examples
Though this post is not about explaining how to tune queries, I thought I would share some basic examples to showcase these commands in action.
--Setting up a sample table for demonstration purposes
CREATE TABLE tblTest
(
ProductID INT IDENTITY,
ProductName VARCHAR(50) NOT NULL,
Manufacturing_Date DATETIME NOT NULL
)
GO
Let us make use of our Random records generator UDF to populate some test data into this table.
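In case you don't have such a UDF handy, below is a minimal sketch of one possible implementation. The name and parameters match the call used in the insert that follows (dbo.udf_StringGenerator('A', 50)), but the internals here are just one way of doing it; any function that returns random strings will do. A helper view is needed because NEWID() cannot be called directly inside a scalar UDF.
--Helper view: NEWID() is not allowed directly inside a UDF
CREATE VIEW dbo.vw_RandomGuid
AS
SELECT NEWID() AS RandomGuid
GO
--Minimal sketch: builds a string of random lower-case letters
--@Mode ('A' in the call below) only matches the original signature; this sketch ignores it
CREATE FUNCTION dbo.udf_StringGenerator (@Mode CHAR(1), @Length INT)
RETURNS VARCHAR(50)
AS
BEGIN
    DECLARE @Result VARCHAR(50) = ''
    WHILE LEN(@Result) < @Length
    BEGIN
        --Append one random letter (a-z) per iteration
        SELECT @Result = @Result + CHAR(97 + ABS(CHECKSUM(RandomGuid) % 26))
        FROM dbo.vw_RandomGuid
    END
    RETURN @Result
END
GO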
--Populating test data
INSERT INTO tblTest
SELECT
dbo.udf_StringGenerator('A', 50),
GETDATE() + (RAND() * (39 * 365))
GO 100000
Important: Always run the two commands below before you start tuning your query for performance.
**Please be advised not to use these commands directly in a PRODUCTION environment. Use them in a Development or Test environment ONLY.**
DBCC DROPCLEANBUFFERS -- Clear SQL Server Data Cache
DBCC FREEPROCCACHE -- Clears SQL Server Procedure Cache
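Side note: DBCC DROPCLEANBUFFERS removes only clean (already written) pages from the data cache, so if you want the cache truly cold you can issue a CHECKPOINT first to flush dirty pages to disk. A sketch of the full sequence:
CHECKPOINT -- Write dirty pages to disk so they become clean buffers
GO
DBCC DROPCLEANBUFFERS -- Clear SQL Server Data Cache
GO
DBCC FREEPROCCACHE -- Clears SQL Server Procedure Cache
GO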
Example 1: Let's write a query to list the total count of all products whose names start with 'ab'.
SET STATISTICS IO ON
GO
SET STATISTICS TIME ON
GO
--I have 146 records in my table matching this condition.
--It would be different for you as we have generated random records.
SELECT COUNT(*) FROM tblTest WHERE ProductName LIKE 'ab%';
/*
(1 row(s) affected)
Table 'tblTest'. Scan count 1, logical reads 971, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 47 ms, elapsed time = 48 ms.
*/
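As a side note, the 971 logical reads above roughly correspond to the number of data pages the heap occupies, because without an index the whole table has to be scanned. If you want to verify that on your own copy of the table (remember the row contents are random), one way is to query sys.dm_db_partition_stats; the query below is just a sketch and not part of the tuning steps themselves.
--Approximate number of in-row data pages in the heap (each page is 8 KB)
SELECT OBJECT_NAME(object_id) AS TableName,
       SUM(in_row_data_page_count) AS DataPages
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.tblTest')
  AND index_id IN (0, 1) -- heap or clustered index only, ignore non-clustered indexes
GROUP BY object_id;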
There is no index on the ProductName column, so let's create a non-clustered index on that column and then check the query again.
CREATE NONCLUSTERED INDEX [nc_ix_productName] ON [dbo].[tblTest]
(
[ProductName] ASC
)
GO
--Let's check again!
SELECT COUNT(*) FROM tblTest WHERE ProductName LIKE 'ab%';
/*
(1 row(s) affected)
Table 'tblTest'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 1 ms.
*/
So by creating an appropriate index we are able to bring down the CPU time and the logical reads.
Example 2: Let's write a query to list the total count of all products whose manufacturing year is 2012.
--I have 1923 records in my table matching this condition.
--It would be different for you as we have generated random records.
SELECT COUNT(*) FROM tblTest WHERE YEAR(Manufacturing_Date) = 2012
/*
(1 row(s) affected)
Table 'tblTest'. Scan count 1, logical reads 971, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 31 ms, elapsed time = 33 ms.
*/
There is no index on the Manufacturing_Date column either, so let's create a non-clustered index on that column and then check the query again.
CREATE NONCLUSTERED INDEX [nc_ix_manufacturing_dt] ON [dbo].[tblTest]
(
[Manufacturing_Date] ASC
)
GO
--Let's check it again!
SELECT COUNT(*) FROM tblTest WHERE YEAR(Manufacturing_Date) = 2012
/*
(1 row(s) affected)
Table 'tblTest'. Scan count 1, logical reads 237, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 32 ms, elapsed time = 37 ms.
*/
There is only a slight improvement. And wait a minute: the existing query is NOT SARGable (check out "What does Avoid NON SARGable mean?"), because wrapping the column in YEAR() prevents the optimizer from seeking on the new index. So let's rewrite the query to make it SARGable.
SELECT COUNT(*) FROM tblTest
WHERE Manufacturing_Date >= '20120101' AND Manufacturing_Date < '20130101';
/*
(1 row(s) affected)
Table 'tblTest'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 1 ms.
*/
SET STATISTICS IO OFF
GO
SET STATISTICS TIME OFF
GO
So whenever you tune queries for performance, make use of the SET STATISTICS IO and SET STATISTICS TIME commands. In their output, watch the logical reads, CPU time and/or scan count to know whether your query tuning steps are really improving performance or making it worse.