
Posts

My Second Brevet - 300 BRM Freedom Ride

After completing the 200 BRM on June 21st 2013 (ECR Classic 200 BRM) I enjoyed the success for so long that I didn't practice for quite some time. I even registered for the 400 BRM in July but dropped out at the last minute as I wasn't feeling well. During that period Kandappa Sir posted in Madras Randonneurs about a crash course he was willing to run for folks interested in attempting a 200 / 300 BRM. I didn't think twice before requesting him to consider me as well, and from then on there was no looking back.  I took the practice rides seriously and followed them religiously.  Me and my worries! A couple of things I was worried about: I had never ridden at night on a highway, and I was a little worried about whether my headlight would be powerful enough to show me the way.  Never during a ride had I eaten anything apart from energy bars, tender coconut, Gatorade or juices. But one can't survive on that alone for a 20-hour ride. To overcome this fear I decided to face it and...

Type of Bike / Cycle to buy

Since I have lately been posting my cycling adventures on my Facebook account, many of my friends have been asking for suggestions: "What type of cycle should I buy?", "Tell me the best brand of cycle to buy", etc. Though I am not an expert on this yet, I thought of sharing what I have understood so far and leaving it to the experts to comment and correct any content they feel might mislead readers. My top 4 learnings as of now: 1. DON'T just go by brand. 2. Fix your budget - because there are bikes from a few thousand to 2+ lakhs :) 3. Be clear on what sort of terrain you plan to ride / the major purpose of this investment. 4. Test as many cycles and brands as possible and see which one YOU are comfortable with. These are mainly the 3 types of bikes I tested: Road Bikes, Mountain Bikes (MTB) and Hybrid Bikes. I had always loved the brands Cannondale and Bianchi a lot, so I had made up my mind to buy only one of these brands. But after t...

My First Brevet - ECR Classic 200 BRM

During March 2014, after completing my first TCC 100 Kms ride in less than 4 hrs 30 mins, I decided that I should attempt the BRM 200 in June.  Preparation: I didn't have any friends who were into cycling at that time, so I wasn't too sure what sort of practice was needed for this. I was used to cycling all alone, and my only group event till then was the TCC 100 Kms ride. So all I did was, as much as possible, a daily 1 hour (20 to 25 kms) of cycling, and over the weekend I extended it to 2 to 3 hours. I made sure to take a rest day after every 2 days of riding. I used to carry two bottles, one filled with water and another filled with Gatorade. As a backup I would also carry two more Gatorade bottles in my backpack along with 4 energy bars. The reason being I wanted to get used to the drink and energy bar which I planned to use on the event day as well. Things to carry on a 200 BRM: This is in no way an exhaustive list; I am just documenting what I carried and found useful. Hope it w...

Using template explorer to create your own code bank

Most of the Developers / DBAs I have worked with maintain their utility scripts in the file system. Whenever a script is needed they browse to that folder via SQL Server Management Studio (SSMS) and open it.  I have personally found this method a little tedious and not a very productive way of doing things. I always prefer these two methods instead: creating a utility database on the server and keeping all the required scripts in it, or organizing our utility scripts with the help of the SSMS Template Explorer. The utility DB method is self-explanatory, so in this post we will concentrate on how to make use of Template Explorer for organizing our scripts. Let's get started. To open Template Explorer from SSMS, follow either of these methods: Option 1: Click View >> Template Explorer. Option 2: Press Ctrl + Alt + T. We will see how to utilize Template Explorer to organize our utility scripts and how it helps us in...
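As a taste of what such a custom template could look like, here is a minimal sketch of a utility script one might save under a custom Template Explorer folder. The query and the parameter name are hypothetical examples; SSMS templates use the angle-bracket <name, type, value> parameter syntax, filled in with Ctrl+Shift+M.

```sql
-- Hypothetical utility script saved as a custom SSMS template.
-- The <TableName, sysname, MyTable> token is an SSMS template
-- parameter; press Ctrl+Shift+M in the query window to fill it in.
SELECT  t.name      AS TableName,
        i.name      AS IndexName,
        i.type_desc AS IndexType
FROM    sys.indexes AS i
        INNER JOIN sys.tables AS t
            ON t.object_id = i.object_id
WHERE   t.name = '<TableName, sysname, MyTable>';
```

Saved once under a folder like "My Utility Scripts" in Template Explorer, the script is then a double-click away from any query window.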

BIGINT - Upper limit - Overflow - SQL Server

The BIGINT upper limit is 2^63-1 (9,223,372,036,854,775,807). For a complete reference check out this MSDN article. Recently I was asked: when we use the INT data type and it reaches its limit, what do we do? The following is the error message we would see when it reaches its upper limit: "Arithmetic overflow error converting IDENTITY to data type int. Arithmetic overflow occurred." Though there are multiple solutions, one of the options is to change the datatype to BIGINT. The person who asked me wasn't satisfied with this answer. He was worried: is this a permanent solution? Won't BIGINT also overflow / reach its limit sooner or later? Obviously BIGINT would also reach its limit, but it would take a really HUGE number of years plus millions of transactions per second for that to happen. Actually I wouldn't bother about it at all, for the reasons explained below. Let's take a few examples and see how many years it would take for BIGINT to reach its upper limit in a table: (A) Considering o...
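The arithmetic behind that claim can be sketched in a few lines of T-SQL. The insert rate of 1,000,000 rows per second is a deliberately extreme assumption for illustration:

```sql
-- Back-of-envelope estimate: years for a BIGINT IDENTITY to overflow
-- at a (hypothetical) sustained 1,000,000 inserts per second.
DECLARE @maxBigint BIGINT = 9223372036854775807;          -- 2^63 - 1
DECLARE @perSecond BIGINT = 1000000;                      -- assumed insert rate
DECLARE @perYear   BIGINT = @perSecond * 60 * 60 * 24 * 365;

SELECT @maxBigint / @perYear AS YearsToOverflow;          -- roughly 292,471 years
```

Even at a million rows per second, around the clock, the limit is hundreds of thousands of years away.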

Capture Deprecated SQL Server code with SQL Profiler

While migrating your application from one version of SQL Server to another, have you ever wondered how to identify the deprecated features in the new version? Manually going through hundreds of scripts is going to be tedious and time consuming. I am a lazy coder myself and would be interested only in an automated solution :) Until SQL Server 2005 we had only one way to identify it, which was by making use of SQL Server Profiler. From SQL Server 2008 onwards we can make use of Extended Events as well. In this post let's see how to make use of SQL Server Profiler to identify deprecated SQL Server code. Step 1: Open a SQL Server instance and find out the session ID by executing the following script. This will come in handy in SQL Profiler to filter out only the information coming from this session. SELECT @@SPID Step 2: Open SQL Server Profiler. Click on the "Event Selections" tab and choose the "Deprecation" event. Deprecation Announcement: Occurs...
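For those on SQL Server 2008 or later, here is a minimal sketch of the Extended Events alternative mentioned above. The session name is hypothetical; the two deprecation events and the ring buffer target are standard Extended Events objects:

```sql
-- Sketch: an Extended Events session capturing deprecation events
-- (SQL Server 2008+), as a lighter-weight alternative to Profiler.
CREATE EVENT SESSION DeprecationTracker ON SERVER
    ADD EVENT sqlserver.deprecation_announcement,   -- feature going away in a future version
    ADD EVENT sqlserver.deprecation_final_support   -- feature removed in the next version
    ADD TARGET package0.ring_buffer;                -- keep events in memory for quick review
GO

ALTER EVENT SESSION DeprecationTracker ON SERVER STATE = START;
```

Run your workload, then query the ring buffer target (or watch live data in SSMS) to see which deprecated constructs were actually hit.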

Clustered Index on a UniqueIdentifier column is costly

Generally, having a clustered index on a UniqueIdentifier column is going to be costly. Even if we add some 5000 or 10000 records to the table, the fragmentation level will surely be around 95+ %, which means we have to REORGANIZE or REBUILD that index very frequently. So I would always suggest we double check and be very sure that we need a GUID column and not an INT or BIGINT column. Fragmentation level would be too heavy. Let's create a sample to see how data gets fragmented for different datatypes. ----------------------------------------------------- --Demo Table Creation Scripts ----------------------------------------------------- --Table with UniqueIdentifier as clustered index CREATE TABLE dbo.TblUID ( Sno UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(), FirstName VARCHAR(100) NOT NULL, DOB DATETIME NOT NULL, CONSTRAINT pk_tblUid PRIMARY KEY CLUSTERED (Sno ASC) ); --Table with sequential UniqueIdentifier as clustered index CREATE TABLE dbo.TblSEQUID (...
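Once rows have been inserted into the demo tables, the fragmentation comparison can be made with a query along these lines (a sketch, assuming the two table names from the excerpt above):

```sql
-- Compare fragmentation of the random-GUID table vs the
-- sequential-GUID table using the physical stats DMF.
SELECT  OBJECT_NAME(ips.object_id)        AS TableName,
        ips.index_id,
        ips.avg_fragmentation_in_percent  -- near 95+ % expected for NEWID()
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE   OBJECT_NAME(ips.object_id) IN ('TblUID', 'TblSEQUID');
```

Random NEWID() values land all over the index and cause page splits, while NEWSEQUENTIALID() (or an INT/BIGINT IDENTITY) appends to the end, keeping fragmentation low.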

Create CLUSTERED Index first then NON CLUSTERED indexes

We might have heard that we always need to create our CLUSTERED index first and then the NONCLUSTERED indexes. Why is that? What would happen if the NONCLUSTERED indexes are created first and then we create the CLUSTERED index? If you create NONCLUSTERED indexes first and then the CLUSTERED index, internally ALL NONCLUSTERED indexes on that table get recreated. On a big table this might take forever just to create the CLUSTERED index. Example: In the sample shown in the blog post titled "Query tuning using SET STATISTICS IO and SET STATISTICS TIME" we had created only a couple of NONCLUSTERED indexes. Now, let us assume we need to create a CLUSTERED index for that table on the ProductID column. First enable SET STATISTICS PROFILE ON so that we can see the profile information of the scripts we are going to execute. Then execute the script below: --Script to create CLUSTERED index on ProductID column CREATE CLUSTERED INDEX [ix_productId] ON [dbo].[tblTest] ( [ProductID] ASC...
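The reason for the rebuild is that nonclustered indexes use the clustered key as their row locator, so changing the clustered index invalidates every one of them. The recommended order can be sketched like this (the second index and its column are hypothetical, just to show the sequence):

```sql
-- Recommended order: build the clustered index FIRST...
CREATE CLUSTERED INDEX ix_productId
    ON dbo.tblTest (ProductID ASC);

-- ...then the nonclustered ones. Each is built exactly once,
-- already using the clustered key (ProductID) as its row locator.
CREATE NONCLUSTERED INDEX ix_orderQty              -- hypothetical example index
    ON dbo.tblTest (OrderQty);
```

Done in the reverse order, every existing nonclustered index would be rebuilt a second time when the clustered index is created.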

Declaring VARCHAR without length

Do you find anything wrong with this script? CREATE PROCEDURE uspProcedureName       @param1 VARCHAR AS .... .... If you aren't sure, maybe you should read this post completely without fail :) All this while I was thinking it was a well-known issue, until last week I saw a stored procedure similar to the one shown above. Whoever created that stored procedure hadn't bothered to specify the length. Before jumping into the explanation of why we should SPECIFY THE LENGTH ALWAYS, let us do a small exercise to understand this better. Guess the results: Try to answer what the output would be before checking the result. --Declaring a variable without specifying the length DECLARE @strSentence VARCHAR SET @strSentence = 'Rajinikanth is always the NO 1 hero of India' SELECT @strSentence Expected output:  Rajinikanth is always the NO 1 hero of India Actual output: R --While CASTing / CONVERTing --The given string has 36...
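The surprise comes from VARCHAR's two different default lengths, which the excerpt's exercise demonstrates. A compact sketch of both cases, with the fix:

```sql
-- In a DECLARE (or a parameter), VARCHAR without a length defaults to 1:
DECLARE @s VARCHAR;
SET @s = 'Rajinikanth is always the NO 1 hero of India';
SELECT @s;        -- returns just 'R'

-- In CAST / CONVERT, the default length is 30, silently truncating:
SELECT CAST('Rajinikanth is always the NO 1 hero of India' AS VARCHAR);
-- returns 'Rajinikanth is always the NO 1'

-- The fix: ALWAYS specify the length explicitly
-- (100 here is a hypothetical size; pick what the data needs).
DECLARE @fixed VARCHAR(100);
```

Because both truncations are silent, the bug surfaces as mysteriously shortened data rather than an error.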

Query tuning using SET STATISTICS IO and SET STATISTICS TIME

Often I find people aren't making use of SET STATISTICS IO and SET STATISTICS TIME while trying to tune their queries. The bottom line is that we want our queries to run as fast as possible. One of the challenges we face is that not all the environments we work on are similar. The configuration, load, etc. would differ between our Development box, Staging box, Production box and so on. So how can we measure whether the changes we make really improve performance and would work well in other environments as well? Let's try to understand a few basics before seeing some code in action. Any query executed by SQL Server uses many server resources. One such resource is the amount of CPU it needs to run the query. This information would remain almost the same (there might be minimal changes, in milliseconds) between executions. Another resource SQL Server needs for executing a query is IO. It would first check the Memory/Dat...
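The measurement itself is a simple wrapper around the query under test. A minimal sketch (the query in the middle is a hypothetical stand-in for whatever you are tuning):

```sql
-- Turn on per-query IO and CPU/elapsed-time reporting...
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ...run the query being tuned (hypothetical example)...
SELECT ProductID, COUNT(*) AS Cnt
FROM   dbo.tblTest
GROUP BY ProductID;

-- ...and turn the reporting back off.
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- The Messages tab now shows logical reads (IO) and CPU time,
-- metrics that stay comparable across differently-loaded boxes,
-- unlike raw wall-clock duration.
```

Comparing logical reads and CPU time before and after a change gives a load-independent signal of whether the tuning actually helped.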