SQL query optimization

Wow. Two SQL posts on the same day.

The SQL query processing team blog is quite a gem. I wonder how I missed this.

The query processing team – query optimization & execution – providing tips, tricks, advice and answers to frequently-asked questions in a continued effort to make your queries run faster and smoother in SQL Server.

BTW: I consider this short article a must-read for anyone who writes a SELECT query, because we never learn about the query optimizer in school when studying databases.

Let me prove this to you. My best friend Shasheendra wrote this code the other day and it stumped me. Imagine you want to insert 10,000 rows into a temporary table. Let's say we can take two approaches. The first is to simply write 10,000 INSERT statements; the second is to insert all the entries into a table with a single field as one comma-delimited string of 10,000 values, then parse the string and insert one row at a time.

Which do you think runs faster? No hurry; just take a look at the code, then scroll down to see the time comparison.

Approach 1:

Write 10,000 INSERT statements, like this:

CREATE TABLE #filetab (filename varchar(256))

CREATE CLUSTERED INDEX IX_filetab ON #filetab (filename)

INSERT INTO #filetab VALUES ('dummy')

INSERT INTO #filetab VALUES ('ed7d1eda-025b-41be-b8b4-523dc9d53ece')

INSERT INTO #filetab VALUES ('b61abf2e-3252-423c-9ad7-cc63cb5e51bc')

INSERT INTO #filetab VALUES ('4224b7f8-120a-447b-a8ae-63cefce78255')

….(10,000 entries)…

INSERT INTO #filetab VALUES ('7f38bed5-be32-42c8-986f-20b3c78950ad')


DROP TABLE #filetab


Approach 2:

DECLARE @filelisttab TABLE (filelist text)

CREATE TABLE #filetab (filename varchar(256))

CREATE CLUSTERED INDEX IX_filetab ON #filetab (filename)

INSERT INTO @filelisttab (filelist) VALUES ('dummy,ed7d1eda-025b-41be-b8b4-523dc9d53ece,b61abf2e ….(10,000 comma-separated entries)…,7f38bed5-be32-42c8-986f-20b3c78950ad')

DECLARE @i int, @l int, @tl int

SET @i = 1

SELECT @tl = DATALENGTH(filelist) FROM @filelisttab

WHILE (@i < @tl)
BEGIN

    DECLARE @filename varchar(256)

    DECLARE @nextcommat int

    SELECT @filename = SUBSTRING(filelist, @i, 128) FROM @filelisttab -- Use a 128-character read buffer!

    SET @nextcommat = CHARINDEX(',', @filename)

    SET @l = CASE @nextcommat WHEN 0 THEN 128 ELSE (@nextcommat - 1) END

    SET @filename = SUBSTRING(@filename, 1, @l)

    IF (LEN(@filename) > 0) INSERT INTO #filetab (filename) VALUES (@filename)

    SET @i = @i + @l + 1
END


DROP TABLE #filetab

Query1.sql (722 KB)

Query2.sql (401 KB)


If you don't believe me, you can download the two SQL scripts and try running them on your machine. These scripts create a temporary table, insert the data, and then drop the table.

Query 1: First run 2 min 48 seconds, second run 2 min 28 seconds

Query 2: First run 9 seconds, second run 5 seconds
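Why such a gap? Most likely it's per-statement overhead: every one of the 10,000 standalone INSERT statements has to be parsed and compiled individually, while the loop in Query 2 executes a handful of statements over and over. A minimal sketch of how you might see this yourself on SQL Server (the table name here is just an illustration, not from the scripts above):

```sql
-- With STATISTICS TIME on, SQL Server reports parse/compile time and
-- execution time per statement. In Query 1 that parse/compile cost is
-- paid 10,000 times; in Query 2 it is paid only once per loop statement.
SET STATISTICS TIME ON

CREATE TABLE #timing_demo (filename varchar(256))

INSERT INTO #timing_demo VALUES ('dummy')  -- each standalone INSERT gets its own parse/compile line

DROP TABLE #timing_demo

SET STATISTICS TIME OFF
```

This is only a way to observe the overhead, not a proof; your numbers will vary with hardware and server version, just as the timings above did between runs.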

And here I was thinking all these days that I knew the basics of SQL query optimization.
