

Showing posts from 2012

Using Template Explorer to create your own code bank

Most of the developers / DBAs I have worked with maintain their utility scripts in the file system. Whenever a script is needed they browse to that folder via SQL Server Management Studio (SSMS) and open it. I have personally found this method a little tedious and not very productive. I always prefer one of these two methods instead:

Creating a utility database on the server and keeping all the required scripts in it
Organizing our utility scripts with the help of the SSMS Template Explorer

The utility DB method is self-explanatory, so in this post we will concentrate on using Template Explorer to organize our scripts (a sample template is sketched below). Let's get started. To open Template Explorer from SSMS follow either of these methods:

Option 1: Click View >> Template Explorer
Option 2: Press Ctrl + Alt + T

We would see how to utilize Template Explorer to organize our utility scripts and how it helps us in improving…
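As an illustration (my addition, not from the original post), a utility script saved under Template Explorer can use SSMS template parameters in the <parameter_name, data_type, default_value> form; pressing Ctrl + Shift + M in the query window prompts for the values. A minimal sketch, assuming a hypothetical "find column usage" utility script:

--Utility template: list every table containing a given column
--<column_name, sysname, ModifiedDate> is an SSMS template parameter;
--press Ctrl + Shift + M to fill it in.
SELECT t.name AS TableName, c.name AS ColumnName
FROM sys.columns AS c
JOIN sys.tables AS t ON t.object_id = c.object_id
WHERE c.name = '<column_name, sysname, ModifiedDate>';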

BIGINT - Upper limit - Overflow - SQL Server

The BIGINT upper limit is 2^63-1 (9,223,372,036,854,775,807). For the complete reference check out this MSDN article. Recently I was asked: when the INT data type reaches its limit, what do we do? The following is the error message we would see when it reaches its upper limit:

Arithmetic overflow error converting IDENTITY to data type int. Arithmetic overflow occurred.

Though there are multiple solutions, one option is to change the data type to BIGINT. The person who asked me wasn't satisfied with this answer. He was worried: is this a permanent solution? Won't BIGINT also overflow / reach its limit sooner or later? Obviously BIGINT would also reach its limit eventually, but it would take a very long time even at millions of transactions per second. Actually I wouldn't bother about it at all, for the reasons explained below (a back-of-the-envelope calculation follows). Let's take a few examples and see how many years it would take for BIGINT to reach its upper limit in a table: (A) Considering…
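A quick back-of-the-envelope check (my addition): even at a sustained one million inserts per second, exhausting BIGINT would take roughly 292,000 years. Using T-SQL integer division:

--9,223,372,036,854,775,807 max BIGINT value
--/ 1,000,000 inserts per second
--/ 31,536,000 seconds per year (365 days)
SELECT 9223372036854775807 / 1000000 / 31536000 AS YearsToExhaustBigint;
--returns 292471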

Capture Deprecated SQL Server code with SQL Profiler

While migrating your application from one version of SQL Server to another, have you ever wondered how to identify the features deprecated in the new version? Manually going through hundreds of scripts would be tedious and time consuming. I am a lazy coder myself and would be interested only in an automated solution :) Until SQL Server 2005 we had only one way to identify them, by making use of SQL Server Profiler. From SQL Server 2008 onwards we can make use of Extended Events as well (a sketch follows below). In this post let's see how to make use of SQL Server Profiler to identify deprecated SQL Server code.

Step 1: Open a SQL Server instance and find out the session ID by executing the following script. This will come in handy in SQL Profiler to filter out only the information coming from this session.

SELECT @@SPID

Step 2: Open SQL Server Profiler. Click on the "Events Selection" tab and choose the "Deprecation" event. Deprecation Announcement: Occurs…
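For SQL Server 2008 and later, here is a minimal Extended Events sketch (my illustration, not from the original post) that captures the same deprecation events without running Profiler:

--Capture deprecation warnings into a ring buffer
CREATE EVENT SESSION DeprecationTracker ON SERVER
ADD EVENT sqlserver.deprecation_announcement
    (ACTION (sqlserver.sql_text, sqlserver.session_id)),
ADD EVENT sqlserver.deprecation_final_support
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION DeprecationTracker ON SERVER STATE = START;
--...run the workload, then inspect the ring buffer target...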

Clustered Index on a UniqueIdentifier column is costly

Generally, having a clustered index on a UNIQUEIDENTIFIER column is going to be costly. Even if we add some 5,000 or 10,000 records to the table, the fragmentation level would surely be around 95+ %, which means we have to REORGANIZE or REBUILD that index very frequently. So I would always suggest we double-check and be very sure that we need a GUID column and not an INT or BIGINT column.

Fragmentation level would be too heavy

Let's create a sample to see how data gets fragmented for different data types (a query to measure the fragmentation is sketched after the script).

-----------------------------------------------------
--Demo table creation scripts
-----------------------------------------------------
--Table with random UNIQUEIDENTIFIER (NEWID) as clustered index
CREATE TABLE dbo.TblUID
(
    Sno UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    FirstName VARCHAR(100) NOT NULL,
    DOB DATETIME NOT NULL,
    CONSTRAINT pk_tblUid PRIMARY KEY CLUSTERED (Sno ASC)
);

--Table with sequential UNIQUEIDENTIFIER as clustered index
CREATE TABLE dbo.TblSEQUID (…
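To compare the fragmentation after loading rows into both demo tables, a minimal sketch (my addition; the DMV is standard) using sys.dm_db_index_physical_stats:

--Average fragmentation of the clustered index on each demo table
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.object_id IN (OBJECT_ID('dbo.TblUID'), OBJECT_ID('dbo.TblSEQUID'));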

Create CLUSTERED index first, then NONCLUSTERED indexes

We might have heard that we always need to create our CLUSTERED index first, then the NONCLUSTERED indexes. Why is that? What would happen if the NONCLUSTERED indexes are created first and the CLUSTERED index afterwards? If you create NONCLUSTERED indexes first and then the CLUSTERED index, internally ALL NONCLUSTERED indexes on that table get recreated, because their row locators change from heap RIDs to the clustering key. On a big table the CLUSTERED index creation alone might take forever because of this.

Example: In the sample shown in the blog post titled "Query tuning using SET STATISTICS IO and SET STATISTICS TIME" we had created a couple of NONCLUSTERED indexes alone. Now, let us assume we need to create a CLUSTERED index on that table's ProductID column. First enable SET STATISTICS PROFILE ON so that we can see the profile information of the scripts we are going to execute. Then execute the script below:

--Script to create CLUSTERED index on ProductID column
CREATE CLUSTERED INDEX [ix_productId] ON [dbo].[tblTest]
(
    [ProductID] ASC
…
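A minimal sketch of the recommended order (my illustration; the OrderDate and CustomerID columns are hypothetical):

--1. Create the clustered index first...
CREATE CLUSTERED INDEX ix_productId ON dbo.tblTest (ProductID ASC);

--2. ...then the nonclustered indexes, so they are built once
--against the final clustering key and never rebuilt.
CREATE NONCLUSTERED INDEX ix_orderDate ON dbo.tblTest (OrderDate ASC);
CREATE NONCLUSTERED INDEX ix_customerId ON dbo.tblTest (CustomerID ASC);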

Declaring VARCHAR without length

Do you find anything wrong with this script?

CREATE PROCEDURE uspProcedureName
    @param1 VARCHAR
AS
....
....

If you aren't sure, maybe you should read this post completely without fail :) All this while I was thinking it was a well-known issue, until last week I saw a stored procedure similar to the one shown above. Whoever created that stored procedure hadn't bothered to specify the length. Before jumping into the explanation of why we should SPECIFY THE LENGTH ALWAYS, let us do a small exercise to understand this better.

Guess the results: Try to answer what the output would be before checking the result.

--Declaring a variable without specifying the length
DECLARE @strSentence VARCHAR
SET @strSentence = 'Rajinikanth is always the NO 1 hero of India'
SELECT @strSentence

Expected output: Rajinikanth is always the NO 1 hero of India
Actual output: R

--While CASTing / CONVERTing
--The given string has 36 characters.
SELECT…
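For context (my addition, but documented SQL Server behavior): VARCHAR without a length defaults to 1 character in a declaration and to 30 characters in CAST / CONVERT. A small sketch with my own sample strings:

--DECLARE default: 1 character
DECLARE @v VARCHAR
SET @v = 'abcdef'
SELECT @v   --returns 'a'

--CAST default: 30 characters
SELECT CAST('123456789012345678901234567890123456' AS VARCHAR)
--the 36-character string is silently cut to its first 30 characters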

Query tuning using SET STATISTICS IO and SET STATISTICS TIME

Often I find people not taking advantage of SET STATISTICS IO and SET STATISTICS TIME while trying to tune their queries. The bottom line is we want our queries to run as fast as possible. One of the challenges we face is that not all the environments we work on are similar: the configuration, load, et al. differ between our development box, staging box, production box, etc. So how can we measure whether the changes we make really improve performance and will work well in other environments too? (A minimal usage sketch follows below.) Let's try to understand a few basics before seeing some code in action. To execute any query SQL Server uses many server resources. One is the amount of CPU it needs to run the query; this remains almost the same between executions (there might be minimal changes in milliseconds). Another resource it needs for executing a query is IO. It would first check the Memory/Data…
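A minimal sketch of how these two switches are typically used (my illustration; the counts in the comments are placeholders, not measured values):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

--the query being tuned; replace with your own
SELECT COUNT(*) FROM dbo.tblTest;

--the Messages tab then reports, per table, something like:
--  Table 'tblTest'. Scan count 1, logical reads 1250, physical reads 0, ...
--  SQL Server Execution Times: CPU time = 16 ms, elapsed time = 23 ms.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;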

Find the last day of the month

Prior to SQL Server 2012 we could make use of the DATEADD function to find the last day of the month for a provided date.

DECLARE @dtTempDate DATETIME
SELECT @dtTempDate = GETDATE()

In SQL Server 2005 or 2008 these are a couple of ways by which we can get the desired result:

--Option 1
SELECT DATEADD(DAY, -1, DATEADD(MONTH, MONTH(@dtTempDate), DATEADD(YEAR, YEAR(@dtTempDate) - 1900, 0)))

--Option 2
SELECT DATEADD(DAY, -1, DATEADD(MONTH, 1, DATEADD(DAY, 1 - DAY(@dtTempDate), @dtTempDate)))

--Option 3
SELECT DATEADD(DAY, -1, DATEADD(MONTH, DATEDIFF(MONTH, 0, @dtTempDate) + 1, 0))

Now in SQL Server 2012 there is a new date and time function named EOMONTH:

EOMONTH ( start_date [, month_to_add ] )

Solution using EOMONTH, which would work only in SQL Server 2012 and above:

SELECT EOMONTH(@dtTempDate) AS [Current Month Last Date]

--here the second parameter tells how many MONTHS to add to the given input
SELECT EOMONTH(@dtTempDate, 1) AS [Next Month Last Date]…
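A quick sanity check (my addition): 2012 is a leap year, so EOMONTH should report February 29th for any February 2012 date.

SELECT EOMONTH('20120215') AS [Leap Year Check];
--returns 2012-02-29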

Download and Install SQL Server 2012 Product Documentation

When you install SQL Server 2012, by default it installs only the Help Viewer and its related components. It doesn't install any help documentation locally, and the user has to go online to check any reference. But that won't work for me, as I have installed SQL Server 2012 on a laptop which has no internet connection at all.

Download SQL Server 2012 Product Documentation

So I was searching for a 'downloadable version of SQL Server 2012 product documentation' and I found it here - http://www.microsoft.com/download/en/details.aspx?id=347. I downloaded SQLServer2012Documentation.exe, which was approximately 204 MB, and ran the exe, which created a folder named "SQLServer2012Documentation" and extracted the following:

A folder named "Packages" which had 25 CAB (cabinet) files
Four HTML documents (book1, book2, book3, product1)
One MSHA file (HelpContentSetup.msha)

So overall 30 items are extracted. Installing the…

Adventure Works database for SQL Server 2012

At last, during the weekend, I was able to download and install SQL Server 2012 Standard Edition on my laptop. Next, to get started, I was looking for a sample database for SQL Server 2012. Using the search string "adventureworks 2012 download" I found the download link below.

Adventure Works DB for SQL Server 2012 - http://msftdbprodsamples.codeplex.com/releases/view/55330

It actually downloads the DATA file named AdventureWorks2012_Data.mdf; the size when I downloaded it was 189 MB. Copy the DATA file to the location where you want it stored on your hard drive and then use that path in the FILENAME attribute within the CREATE DATABASE syntax.

CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\DATA\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;

Reference: CREATE DATABASE syntax for SQL Server 2012
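A quick way to confirm the attach succeeded (my addition; standard catalog views):

--Verify the database is online and the log file was rebuilt
SELECT name, state_desc FROM sys.databases WHERE name = 'AdventureWorks2012';
SELECT name, type_desc, physical_name FROM sys.master_files WHERE database_id = DB_ID('AdventureWorks2012');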

Foreign key doesn't create an Index automatically

There are still people who believe that SQL Server automatically creates an index on a foreign key. I think since a primary key by default creates a clustered index, people assume that foreign keys would also create an index automatically. This is a myth: SQL Server does NOT automatically create an index on foreign key columns. But one of the best practices for index tuning is to index all the columns which are part of a foreign key relationship. Check out the MSDN documentation under the subheading "Indexing FOREIGN KEY Constraints". The first line says "Creating an index on a foreign key is often useful ....". Microsoft wouldn't be saying this if an FK were automatically indexed.
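A minimal sketch (my illustration, hypothetical tables) demonstrating the myth and the recommended fix:

--Parent / child with a foreign key
CREATE TABLE dbo.Customers (CustomerID INT PRIMARY KEY, Name VARCHAR(100));
CREATE TABLE dbo.Orders
(
    OrderID INT PRIMARY KEY,
    CustomerID INT NOT NULL
        CONSTRAINT fk_Orders_Customers REFERENCES dbo.Customers (CustomerID)
);

--Only the two PRIMARY KEY indexes show up; nothing on Orders.CustomerID
SELECT OBJECT_NAME(object_id) AS TableName, name, type_desc
FROM sys.indexes
WHERE object_id IN (OBJECT_ID('dbo.Customers'), OBJECT_ID('dbo.Orders'));

--Best practice: index the FK column yourself
CREATE NONCLUSTERED INDEX ix_Orders_CustomerID ON dbo.Orders (CustomerID);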

Avoid using SCOPE_IDENTITY and @@IDENTITY

Avoid using the SCOPE_IDENTITY() and @@IDENTITY functions if your system is using parallel plans. Extract from the above link:

Posted by Microsoft on 3/18/2008 at 1:10 PM
Dave, thanks to your very detailed and diligent report I was able to find the problem. Yes, it's a bug - whenever a parallel query plan is generated, @@IDENTITY and SCOPE_IDENTITY() are not being updated consistently and can't be relied upon. The few workarounds I can offer you for now:

1. Use MAXDOP=1 as you are already using. This may hurt performance of the SELECT part of your query.
2. Read the value from the SELECT part into a set of variables (or a single table variable) and then insert into the target table with MAXDOP=1. Since the INSERT plan will not be parallel you will get the right semantics, yet your SELECT will be parallel to achieve performance there if you really need it.
3. Use the OUTPUT clause of INSERT to get the value you were looking for, as in the example I give further below. In fact I hig…
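A minimal sketch of the OUTPUT-clause workaround (my illustration, hypothetical table):

--Hypothetical table for the sketch
CREATE TABLE dbo.tblOrders (OrderID INT IDENTITY(1,1), CustomerName VARCHAR(100));

DECLARE @newIds TABLE (ID INT);

--OUTPUT captures the generated identity values directly,
--with no dependency on @@IDENTITY / SCOPE_IDENTITY()
INSERT INTO dbo.tblOrders (CustomerName)
OUTPUT inserted.OrderID INTO @newIds (ID)
VALUES ('Alice'), ('Bob');

SELECT ID FROM @newIds;   --safe even under a parallel plan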

Arithmetic overflow error converting IDENTITY to data type int

If we have an IDENTITY column and our insert statement tries to exceed the maximum value of its integer type, it throws this error. To know the ranges of TINYINT, SMALLINT, INT and BIGINT check out this MSDN link.

Let's reproduce the error with a TINYINT column:

CREATE TABLE dbo.tblIdentityTest
(
    Sno TINYINT IDENTITY(250,1) --Max limit of TINYINT is 255
    ,Firstname VARCHAR(20)
)
GO

--These records would get inserted
INSERT INTO dbo.tblIdentityTest VALUES ('250')
INSERT INTO dbo.tblIdentityTest VALUES ('251')
INSERT INTO dbo.tblIdentityTest VALUES ('252')
INSERT INTO dbo.tblIdentityTest VALUES ('253')
INSERT INTO dbo.tblIdentityTest VALUES ('254')
INSERT INTO dbo.tblIdentityTest VALUES ('255')
GO

SELECT * FROM dbo.tblIdentityTest
GO

--As TINYINT has already reached its max limit, any new insertion would fail
INSERT INTO dbo.tblIdentityTest VALUES ('This would fail')
GO

Msg 8115, Level 16, State 1, Line…
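Two follow-ups worth knowing (my addition; standard commands): DBCC CHECKIDENT reports how close the identity is to its limit, and ALTER COLUMN can widen the column's type in place.

--Check the current identity value against the type's maximum
DBCC CHECKIDENT ('dbo.tblIdentityTest', NORESEED);

--One remedy: widen the identity column to a larger integer type
ALTER TABLE dbo.tblIdentityTest ALTER COLUMN Sno SMALLINT;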