Saturday, August 09, 2014

My First Brevet - ECR Classic 200 BRM

In March 2014, after completing my first TCC 100 km ride in less than 4 hrs 30 mins, I decided to attempt a 200 km BRM in June.

Preparation:

I didn't have any friends who were into cycling at that time, so I wasn't too sure what sort of practice was needed for this. I was used to cycling all alone, and my only group event till then was the TCC 100 km ride.

So all I did was cycle about an hour (20 to 25 km) daily as far as possible, extending it to 2 to 3 hours over the weekend. I made sure to take a rest day after every 2 days of riding. I used to carry two bottles, one filled with water and the other with Gatorade. As a backup I would also carry two more Gatorade bottles in my backpack along with 4 energy bars. The reason being, I wanted to get used to the same drink and energy bars that I planned to use on the event day.

Things to carry on a 200 BRM:

This is in no way an exhaustive list; I am just documenting what I carried and found useful. Hope it helps someone someday!

1. Helmet
2. Gloves
3. Two Water Bottles
4. Four Energy Bars
5. Gatorade - Either bottles or Pouches
6. Two Spare Tubes
7. Puncture Kit
8. Reflective Vest
9. Sunglasses (I bought sunglasses with custom power based on my eye power) 
10. Money - Keep some small change handy; it will come in handy if you want to buy something during the ride.
11. ATM card (a must) - A few control points are unmanned, so we need to take an ATM slip as proof of reaching that point on time.
12. Cycling speedometer - Comes in handy to know how far the next control is and to monitor the (average) speed we are maintaining
13. Tail Light (Red Color) - Have spare batteries handy
14. Head Light (White Color) - Have spare batteries handy
15. Pump
16. Cycling Shorts & Jersey
17. Ziploc bag - Kept my smartphone, money, ATM card and Brevet card in it, just to be on the safer side in case we got caught in the rain!
18. Rider name, number and emergency contact details laminated and stuck on the front as well as the rear of the cycle

Control Points Calculation:

As per the rules, the 200 km should be completed within 13.5 hours overall, and there are also intermediate control points which should be reached within their specified time limits. Check out http://www.rusa.org/octime_alg.html to see how control times are calculated.
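
For example (if I understand that algorithm correctly), the closing time of a control up to 600 km is based on a minimum speed of 15 kmph, so a control at 111 km would close roughly 111 / 15 ≈ 7 hrs 24 mins after the start, and 200 / 15 rounds up to the 13.5 hour overall limit.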

On the Event Day:

Woke up at 2 AM to get ready for the event. I had to be at the starting point by 5.15 AM at the latest so that, after inspection, I could start at 5.30 AM. Drank lots of water and did some stretching exercises as well. Before leaving home I had coffee, biscuits and Kellogg's with a handful of nuts.



Arrived at the starting point (12 km from my home) some 15 mins before the start and got my cycle quickly inspected for all the mandatory things. Once the event started I took it slow, maintaining around 20 kmph for ~15 mins before increasing the speed gradually. It took me 1 hr 14 mins to reach the first control point, which was 26 km from the start. It is an unmanned control point, and we were supposed to take an ATM slip from the Axis/HDFC bank there. Though I took the slip quickly, the mistake I made was spending too much time just chit-chatting and taking snaps with others. I learnt later from experienced riders that we should not spend time unnecessarily at control points.


After some 10 odd minutes of wastage I continued my lonely journey to the second control point at 48 km (Mahabs - Thirukundram Road, ECR Poonjeri Junction). I didn't push myself too much on this stretch either; I took it easy and maintained an average of 20-22 kmph. This was a manned control point, and Partha Datta was there welcoming the riders. Got my Brevet card signed and sealed by him.



Shortly, Balamurugan Palanivel arrived there with bananas, chocolates etc.; I had 2 bananas and a chocolate. Unfortunately they were waiting for the other car, which had the water cans for the riders. I had already emptied my water bottle, but I didn't want to waste too much time at this control point waiting for the water can to arrive. So I started from there thinking I could buy some water on the way. Since I had spare Gatorade bottles with me, I kept sipping from them once every 15 mins until I reached Kalpakkam. Stopped at Kalpakkam and got a 2 litre water can, filled both of my water bottles and drank the remaining water. Now I started to feel fresh again and was ready for the ride towards Marakkanam (the U-turn point).

Little did I know that I was entering a killer stretch which needed real physical and mental strength to complete. From Kalpakkam to Marakkanam the headwinds were really crazy, and I was struggling to maintain even 12 to 13 kmph. At one point I even wondered whether I should call it quits! During this stretch I halted multiple times to rest, drink tender coconut etc. At last, by 11.32 AM (6 hrs from the start), I reached the U-turn point (111 km). Felt very happy to see Pradeep Ravindran & Varun Spike manning that control point.

Varun took some snaps - however tired I am, on seeing a camera I automatically put on a smiling face :) Spent ~30 mins at this control point. I am very grateful to Pradeep as he gave me a mix of lemon, water and salt - wow, that really energized me. I took two of those drinks. They had also arranged for ice cubes - I took some and massaged my lower back, knee, neck etc. The other riders chit-chatting there had mixed opinions on how the return journey was going to be; a few were saying it would be tailwinds while others were saying it would be crosswinds.



So I decided that I would ride for another hour, see how I felt, and based on that take a call on quitting or continuing. Fortunately, a few minutes into the ride I was feeling very comfortable as it was tailwinds :) I started to increase my pace and was doing 25+ kmph for most of the return journey.

Every 15 or 20 km I stopped and either had tender coconut or nungu (palm fruit?), refilled water, etc. On one stretch I couldn't find anything but a Hatsun shop, so I stopped there thinking I would get buttermilk or curd. After entering the shop I realized that they sell only ice creams. Okay, so what? Ordered a 500 ml butterscotch ice cream and ate it all alone :) While I was eating, a couple of local residents came and started quizzing me about the event, how long I had been riding, how gears work on a cycle etc. It was a very friendly chit-chat, but again I missed watching the time and ended up spending too long there.



After bidding goodbye to them, I started my journey back maintaining a decent speed and reached the 4th control point, ECR View Point (189 km), by 4.18 PM. Saw Partha Datta with a camera whose lens was as long as my cycle! He looked like a pro photographer. Pradeep Ravindran was also there and gave me Cavin's milkshakes and something else which I have forgotten now. Knowing that only 16 km were left to the final destination, I spent a quality 15 mins resting there.

Enjoyed my ride towards the final destination as I was feeling very happy that my dream of holding that medal from France was going to be achieved shortly. When I was nearing MGM, all of a sudden a Scorpio stopped a few metres ahead of me and a couple of guys got down from it. I was wondering whether I had unknowingly done something wrong while riding :) On nearing them I realized that one of my office colleagues, with his friends, was returning from Pondy (yeah, they were walking steadily :) ). I spent less than a minute there to tell them I was participating in an event and was on the last stretch; the rest I would explain at the office later.

The final destination was an unmanned control point - the SBI ATM @ Uthandi. I wasn't aware of where it was, so after reaching Mayajaal I started asking people about the location of that ATM. At last one guy showed me the right place and helped me reach the destination @ 5.15 PM. I forgot to take the ATM slip as I was already in the mood of having completed the event; I realized it later and took the slip. While showing it to Anil Sharma sir, he pointed out that the time was not printed properly on it. So I went back and took a fresh slip from a different machine to be on the safer side. So officially I completed the ride @ 5.21 PM.

Called my wife first and informed her that I had completed the event successfully and thanked her for patiently helping me with my preparation.




Hired a tempo from there and rode on top of it with my cycle to return to Just Buy Cycles in Velachery, where we were supposed to hand over all the ATM slips and the Brevet card.

Tips learnt during the course of this preparation:

1. Fully charge your mobile. Before starting your ride, disable Wi-Fi and mobile data.
2. Never try to ride between vehicles OR race with other commuters on the road. 
3. Normally people say to drink before you feel thirsty. I wasn't too sure how to handle this, but Partha Datta gave me a much easier version of it: sip water / energy drink once every 15 mins irrespective of whether you feel thirsty or not.
4. No shame in quitting. Getting this far is itself an achievement and there is always a next time - if you feel your body is not able to take it, call up the ride director and inform them of your decision to quit.
5. Before starting from home on the ride day I drank at least 2 litres of water. In the week before the ride I drank even more water than usual.
6. As much as possible, reduce the number of breaks before the U-turn point. Between 12 and 3 PM is when the heat is at its peak, so my idea was to cover the maximum distance possible, without overstretching, before that period.
7. I don't know whether this is right or wrong, but I did it :) - ate lots of chicken, ice cream, paneer, fish etc. the week before the ride.
8. Maybe make use of the breaks / control points to let your dear ones know where you are. I did that because I felt they would be worried and would appreciate us proactively informing them once in a while.

Lastly, a very big thanks to Kandappa sir and Partha Datta. I lost count of the number of times I pestered them during my preparation to clear my doubts - both were very helpful and more than willing to share notes.

Hope you enjoyed reading my experience! Happy Cycling!!

Saturday, July 14, 2012

Using template explorer to create your own code bank

Most of the developers / DBAs I have worked with maintain their utility scripts in the file system. Whenever a script is needed, they browse to that folder via SQL Server Management Studio (SSMS) and open it.


I have personally found this method a little tedious and not a very productive way of doing things. I always prefer one of these two methods instead:

  • Creating a Utility database on the server and keeping all the required scripts in it
  • Organizing our utility scripts with the help of the SSMS Template Explorer

The Utility DB method is self-explanatory, so in this post we will concentrate on how to make use of the Template Explorer for organizing our scripts. Let's get started.

To open the Template Explorer from SSMS, follow either of these methods:

Option 1: Click View >> Template Explorer
Option 2: Press Control + ALT + T



Let's see how to utilize the Template Explorer to organize our utility scripts and how it helps us improve our productivity.

Creating Custom Folder

Step 1: Right click on "SQL Server Templates" and choose "New" >> "Folder"



Step 2: I am naming it as "Vadivel - TSQL Code Bank"



Organizing Utility Scripts within the folder:

Step 1: Right click on the new folder we created and choose "New" >> "Template"


Step 2: Let's create a template and name it "Identify_Remove_DuplicateRecords"


Step 3: Right click on the new template created and choose "Edit"


Step 4: It will open a new blank query window. Paste/write the required TSQL script and save it.
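
For example, this is the kind of script one might save under "Identify_Remove_DuplicateRecords" (the table and column names below are just placeholders for illustration; replace them with your own):

--Placeholder script for the "Identify_Remove_DuplicateRecords" template
--Duplicates are defined here as rows sharing the same FirstName and DOB
;WITH cteDupes AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY FirstName, DOB ORDER BY Sno) AS RowNum
    FROM dbo.YourTable
)
--Review the duplicates first with: SELECT * FROM cteDupes WHERE RowNum > 1
DELETE FROM cteDupes
WHERE RowNum > 1;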

Step 5: Repeat Steps 1 to 4 until you have added all your frequently used utility scripts into this folder.

How to use it?

Now that we have added all our utility scripts into a code bank folder, using them when required becomes very easy.

Step 1: Open a new query window
Step 2: Click on the utility script you need in the Template Explorer and "drag" it into the query window. That's it :)

I would like to add that this method is in no way a replacement for maintaining the scripts in source control. Adding them to the Template Explorer, as mentioned above, just helps us improve productivity.

If you are wondering where these scripts get stored, read on.

I used SQL Server 2005 to create the folder/templates and they are available at \Microsoft SQL Server\90\Tools\Shell\Templates\Sql


Tuesday, May 01, 2012

BIGINT - Upper limit - Overflow - SQL Server

The BIGINT upper limit is 2^63 - 1 (9,223,372,036,854,775,807). For a complete reference, check out this MSDN article.

Recently I was asked: when we use the INT data type and it reaches its limit, what do we do? The following is the error message we would see when an INT IDENTITY reaches its upper limit.


Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.
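
If you would like to see this error without waiting for two billion inserts, one quick way (a small sketch; the table name here is made up for the demo) is to seed an INT IDENTITY column just below its upper limit of 2,147,483,647:

--Reproducing the INT IDENTITY overflow error (demo table name is made up)
CREATE TABLE dbo.tblIntOverflowDemo
(
Sno INT IDENTITY(2147483646, 1) NOT NULL,
Remarks VARCHAR(10) NOT NULL
)
GO

INSERT INTO dbo.tblIntOverflowDemo (Remarks) VALUES ('first')   --uses 2,147,483,646
INSERT INTO dbo.tblIntOverflowDemo (Remarks) VALUES ('second')  --uses 2,147,483,647, the INT max
INSERT INTO dbo.tblIntOverflowDemo (Remarks) VALUES ('third')   --fails with the error shown above
GO

DROP TABLE dbo.tblIntOverflowDemo
GO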

Though there are multiple solutions, one of the options is to change the data type to BIGINT.

The person who asked me wasn't satisfied with this answer. He was worried: is this a permanent solution? Won't BIGINT also overflow / reach its limit sooner or later?

Obviously BIGINT would also reach its limit eventually, but it would take a huge number of years even at millions of transactions per second. I wouldn't worry about it at all, for the reasons explained below.

Let's take a few examples and see how many years it would take for BIGINT to reach its upper limit in a table:

(A) Considering only positive numbers, Max limit of BIGINT = 9,223,372,036,854,775,807
(B) Number of Seconds in a year = 31,536,000

Assume there are 50,000 records inserted per second into the table. Then the number of years it would take to reach the BIGINT max limit is:

9,223,372,036,854,775,807 / 31,536,000 / 50,000 = 5,849,424 years

Similarly,
If we insert 1 lakh (100,000) records per second into the table, it would take 2,924,712 yrs
If we insert 1 million (1,000,000) records per second into the table, it would take 292,471 yrs
If we insert 10 million (10,000,000) records per second into the table, it would take 29,247 yrs
If we insert 100 million records per second into the table, it would take 2,925 yrs
If we insert 1,000 million records per second into the table, it would take 292 yrs
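
If you would like to reproduce these figures yourself, here is a small sketch of the same back-of-the-envelope math in T-SQL (the variable names are mine; change @insertsPerSecond and re-run):

--Back-of-the-envelope calculation: years for a BIGINT IDENTITY to reach its max
DECLARE @bigintMax DECIMAL(38, 0)
DECLARE @secondsPerYear DECIMAL(38, 0)
DECLARE @insertsPerSecond DECIMAL(38, 0)

SET @bigintMax = 9223372036854775807
SET @secondsPerYear = 365 * 24 * 60 * 60   --31,536,000
SET @insertsPerSecond = 50000              --change this and re-run

SELECT @bigintMax / @secondsPerYear / @insertsPerSecond AS YearsToReachBigIntMax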

By now we would have understood that it would take an extremely long time to reach the max limit of BIGINT. Maybe the end of the world would come earlier than that :) At least I have not seen (or) heard of any application which has exceeded BIGINT's max limit as of now. Please feel free to let me know if you have seen any.

By the way, if you are wondering how we calculated the number of seconds in a year:
it is just 365 days * 24 hours * 60 minutes * 60 seconds = 31,536,000 seconds.

Monday, April 30, 2012

Capture Deprecated SQL Server code with SQL Profiler

While migrating your application from one version of SQL Server to another, have you ever wondered how to identify the features that are deprecated in the new version? Manually going through hundreds of scripts is tedious and time-consuming. I am a lazy coder myself and am interested only in an automated solution :)

Up to SQL Server 2005 we had only one way to identify this, which was by making use of SQL Server Profiler. From SQL Server 2008 onwards we can make use of Extended Events as well.

In this post let's see how to make use of SQL Server Profiler to identify deprecated SQL Server code.

Step 1: Open a connection to the SQL Server instance and find out the session ID by executing the following script. This will come in handy in SQL Profiler to filter only the information coming from this session.

SELECT @@SPID



Step 2: Open your SQL Server Profiler.


Click on "Event Selections" tab and choose "Deprecation" event.


Deprecation Announcement: Occurs when you use a feature that will be removed from a future version of SQL Server, but will NOT be removed from the next major release of SQL Server.

Deprecation Final Support: Occurs when you use a feature that will be removed from the next major release of SQL Server.


Next click on "Column Filters" and set SPID filter to 61 (or what ever number you got in Step 1)


Step 3: Now let's go back to the window where we ran SELECT @@SPID and run some sample scripts which contain deprecated commands.

/*
Some random scripts to demonstrate how deprecated code can be captured using SQL Server Profiler
*/

SET NOCOUNT ON;

--Here TEXT, IMAGE are deprecated
CREATE TABLE tblDeprecationLocator
(
Sno INT,
Remarks TEXT,
Profilepic IMAGE
)
GO

--This is deprecated. Better alternative is to use ALTER INDEX
DBCC DBREINDEX('tblDeprecationLocator')
GO

--Using hints without WITH keyword is deprecated. It should be WITH (NOLOCK)
SELECT * FROM tblDeprecationLocator (NOLOCK)
GO

USE Master
GO
--here Truncate_only option is deprecated
BACKUP TRAN TestBed WITH TRUNCATE_ONLY
GO
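
Once the Profiler trace has captured the events, the demo table can be dropped. A small cleanup sketch (run it in the database where tblDeprecationLocator was created, since the last batch above switched to master):

--Clean up the demo table once the capture is done
DROP TABLE tblDeprecationLocator
GO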

Now when we execute these scripts, SQL Profiler captures all the deprecated code and provides appropriate messages as shown below.

1. The TEXT, NTEXT, and IMAGE data types will be removed in a future version of SQL Server. Avoid using them in new development work, and plan to modify applications that currently use them. Use the varchar(max), nvarchar(max), and varbinary(max) data types instead.


2. DBCC DBREINDEX will be removed in a future version of SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use it. Use ALTER INDEX instead.


3. Specifying table hints without using a WITH keyword is a deprecated feature and will be removed in a future version.


4. BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated. The simple recovery model should be used to automatically truncate the transaction log.


In the next post we would see how to make use of Extended Events in SQL Server 2008.

Conclusion: 

For the greatest longevity of your applications, avoid using features that raise the Deprecation Announcement event class (or) the Deprecation Final Support event class.

Monday, April 09, 2012

Clustered Index on an UniqueIdentifier column is costly

Generally, having a clustered index on a UNIQUEIDENTIFIER column is going to be costly. Even if we add just 5,000 or 10,000 records to the table, the fragmentation level will easily be around 95+%, which means we have to REORGANIZE or REBUILD that index very frequently. So I would always suggest double-checking and being very sure that we really need a GUID column and not an INT or BIGINT column.

Fragmentation level would be very high

Let's create a sample to see how the data gets fragmented for different data types.


-----------------------------------------------------
--Demo Table Creation Scripts
-----------------------------------------------------

--Table with a random UNIQUEIDENTIFIER (NEWID) as the clustered index
CREATE TABLE dbo.TblUID
(
Sno Uniqueidentifier NOT NULL DEFAULT NEWID(),
FirstName VARCHAR(100) NOT NULL,
DOB DATETIME NOT NULL,
CONSTRAINT pk_tblUid PRIMARY KEY CLUSTERED(sno asc)
);

--Table with a sequential UNIQUEIDENTIFIER (NEWSEQUENTIALID) as the clustered index
CREATE TABLE dbo.TblSEQUID
(
Sno Uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID(),
FirstName VARCHAR(100) NOT NULL,
DOB DATETIME NOT NULL,
CONSTRAINT pk_tblSEQUid PRIMARY KEY CLUSTERED(sno asc)
);

--Table with an INT IDENTITY column as the clustered index
CREATE TABLE dbo.TblInt
(
Sno INT IDENTITY(1,1) NOT NULL,
FirstName VARCHAR(100) NOT NULL,
DOB DATETIME NOT NULL,
CONSTRAINT pk_tblInt PRIMARY KEY CLUSTERED(sno asc)
);

--Table with a BIGINT IDENTITY column as the clustered index
CREATE TABLE dbo.TblBigInt
(
Sno BIGINT IDENTITY(1,1) NOT NULL,
FirstName VARCHAR(100) NOT NULL,
DOB DATETIME NOT NULL,
CONSTRAINT pk_tblBigInt PRIMARY KEY CLUSTERED(sno asc)
);


-----------------------------------------------------
--Create NON Clustered Index on DOB Column
-----------------------------------------------------
CREATE NONCLUSTERED INDEX nx_tbluid_dob on dbo.tbluid(DOB);
CREATE NONCLUSTERED INDEX nx_tblsequid_dob on dbo.tblSEQUid(DOB);
CREATE NONCLUSTERED INDEX nx_tblInt_dob on dbo.tblInt(DOB);
CREATE NONCLUSTERED INDEX nx_tblBigInt_dob on dbo.tblBigInt(DOB);


/*
Insert dummy records into each of the tables.
Change the number after GO to capture data for more records.
*/

SET NOCOUNT ON;

INSERT INTO dbo.TblUID (FirstName, DOB)
SELECT REPLICATE('x',100),Getdate();
GO 10000

INSERT INTO dbo.TblSEQUID (FirstName, DOB)
SELECT REPLICATE('x',100),Getdate();
GO 10000

INSERT INTO dbo.TblInt(FirstName, DOB)
SELECT REPLICATE('x',100),Getdate();
GO 10000

INSERT INTO dbo.TblBigInt(FirstName, DOB)
SELECT REPLICATE('x',100),Getdate();
GO 10000


---------------------------------------------------
--Check the Space Used and Fragmentation Level
---------------------------------------------------
EXEC sp_spaceused tblUID, True;
SELECT index_type_desc, avg_fragmentation_in_percent, page_count, record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'tblUID'), NULL, NULL, NULL)
OPTION (MAXDOP 1);

EXEC sp_spaceused tblSEQUID, True;
SELECT index_type_desc, avg_fragmentation_in_percent, page_count, record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'tblSEQUID'), NULL, NULL, NULL)          
OPTION (MAXDOP 1);
  
EXEC sp_spaceused tblINT, True;
SELECT index_type_desc, avg_fragmentation_in_percent, page_count, record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'tblINT'), NULL, NULL, NULL)
OPTION (MAXDOP 1);

EXEC sp_spaceused tblBIGINT, True;
SELECT index_type_desc, avg_fragmentation_in_percent, page_count, record_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'tblBIGINT'), NULL, NULL, NULL)
OPTION (MAXDOP 1);
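
Given how heavily the NEWID-based table fragments, this is the maintenance it keeps forcing on us. A minimal sketch using the objects created above, followed by the demo clean-up:

-----------------------------------------------------
--Index maintenance the fragmentation forces (sketch)
-----------------------------------------------------
ALTER INDEX pk_tblUid ON dbo.TblUID REORGANIZE;   --lighter-weight defrag
ALTER INDEX pk_tblUid ON dbo.TblUID REBUILD;      --full rebuild once fragmentation is very high

--Clean up the demo tables
DROP TABLE dbo.TblUID;
DROP TABLE dbo.TblSEQUID;
DROP TABLE dbo.TblInt;
DROP TABLE dbo.TblBigInt;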

Inspired by this blog post by Kimberly: http://www.sqlskills.com/BLOGS/KIMBERLY/post/GUIDs-as-PRIMARY-KEYs-andor-the-clustering-key.aspx



Sunday, March 25, 2012

Create CLUSTERED Index first then NON CLUSTERED indexes

We might have heard that we should always create our CLUSTERED index first and then the NONCLUSTERED indexes.

Why is that? What would happen if NONCLUSTERED indexes are created first and then we create the CLUSTERED index?

If you create the NONCLUSTERED indexes first and then the CLUSTERED index, internally ALL the NONCLUSTERED indexes on that table get rebuilt. On a big table this can make creating the CLUSTERED index take forever.

Example:

In the sample shown in the blog post titled "Query tuning using SET STATISTICS IO and SET STATISTICS TIME" we had created only a couple of NONCLUSTERED indexes. Now, let us assume we need to create a CLUSTERED index on that table's ProductID column.
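
For readers who don't have that earlier sample handy, here is a rough sketch of an equivalent setup (the column definitions are assumptions made purely for illustration; only the table and index names follow the earlier post):

--Sketch of the earlier post's table with only NONCLUSTERED indexes (column definitions assumed)
CREATE TABLE [dbo].[tblTest]
(
[ProductID] INT NOT NULL,
[ProductName] VARCHAR(100) NOT NULL,
[Manufacturing_Dt] DATETIME NOT NULL
)
GO

CREATE NONCLUSTERED INDEX [nc_ix_productName] ON [dbo].[tblTest] ([ProductName])
GO
CREATE NONCLUSTERED INDEX [nc_ix_manufacturing_dt] ON [dbo].[tblTest] ([Manufacturing_Dt])
GO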

First enable SET STATISTICS PROFILE ON so that we can see the profile information of the scripts we are going to execute. Then execute the below script:

--Script to create CLUSTERED index on ProductID column
CREATE CLUSTERED INDEX [ix_productId] ON [dbo].[tblTest]
(
[ProductID] ASC
)
GO

During this process it would have rebuilt both of these NONCLUSTERED indexes:
  1. [nc_ix_manufacturing_dt] & 
  2. [nc_ix_productName]


Friday, March 23, 2012

Declaring VARCHAR without length

Do you find anything wrong with this script?

CREATE PROCEDURE uspProcedureName
      @param1 VARCHAR
AS
....
....

If you aren't sure, maybe you should read this post completely, without fail :)

All this while I was thinking this is a well-known issue, until last week I saw a stored procedure similar to the one shown above. Whoever created that stored procedure hadn't bothered to specify the length. Before jumping into the explanation of why we should ALWAYS SPECIFY THE LENGTH, let us do a small exercise to understand this better.

Guess the results:

Try to guess what the output would be before checking the result.

--Declaring a variable without specifying the length
DECLARE @strSentence VARCHAR
SET @strSentence = 'Rajinikanth is always the NO 1 hero of India'
SELECT @strSentence

Expected Output: Rajinikanth is always the NO 1 hero of India
Actual Output: R

--While CASTing / CONVERTing
--The given string has 36 characters.
SELECT CONVERT(VARCHAR, '123456789123456789123456789123456789');
SELECT CAST('123456789123456789123456789123456789' AS VARCHAR);

Expected Output: 123456789123456789123456789123456789
Actual Output: 123456789123456789123456789123

As you can see, for variables the default length of VARCHAR is 1. When used within CONVERT/CAST, the default length of VARCHAR is 30.

So if we don't specify the length ourselves, these are the default values SQL Server uses - which means the data we were expecting to get stored in the database gets silently truncated without our knowledge. This is one of those tricky issues which, if we aren't aware of it, means we have to spend a lot of time debugging our scripts at a later stage.
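
The fix is simply to always state the length we actually need. For example, repeating the CAST/CONVERT exercise with an explicit length returns the full string:

--Specifying the length explicitly avoids the silent truncation
SELECT CONVERT(VARCHAR(50), '123456789123456789123456789123456789');
SELECT CAST('123456789123456789123456789123456789' AS VARCHAR(50));

Output (both): 123456789123456789123456789123456789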

Now, to answer the first question on what is wrong with that script, let us create a sample table and a stored procedure similar to it and see what it does.

--Sample table
CREATE TABLE dbo.tblDemo
(
ID INT,
FirstName VARCHAR(30)
)
GO

--Stored Procedure to Insert data into the sample table
CREATE PROCEDURE dbo.uspInsertData
@ID INT,
@FName VARCHAR -- See there is no length specified here
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.tblDemo VALUES (@ID, @FName)
END
GO

--Test
EXEC dbo.uspInsertData 10, 'Vadivel'
GO
SELECT ID, FirstName FROM dbo.tblDemo
GO

The output would be just 'V'. Once again the data got truncated silently. So ALWAYS SPECIFY THE LENGTH OF THE STRING, WITHOUT FAIL.
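
For completeness, here is a corrected sketch of the same procedure with the parameter length matching the column definition (drop the earlier version first, or use ALTER PROCEDURE, before running this):

--Corrected version: the parameter length now matches the FirstName column
CREATE PROCEDURE dbo.uspInsertData
@ID INT,
@FName VARCHAR(30) -- length specified explicitly
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.tblDemo VALUES (@ID, @FName)
END
GO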


--Clean Up
DROP PROC dbo.uspInsertData;
DROP TABLE dbo.tblDemo;