Archive

Archive for December, 2013

2013 in review

December 31, 2013 Leave a comment

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 210,000 times in 2013. If it were an exhibit at the Louvre Museum, it would take about 9 days for that many people to see it.

Click here to see the complete report.


Memory Optimized Indexes | Hash vs Range Indexes – SQL Server 2014

December 20, 2013 3 comments

SQL Server 2014 introduces a lot of DDL changes for In-Memory tables compared with normal Disk Based Tables. We’ve covered the In-Memory Table related changes in previous posts, check [here]. Here we will look at the Memory Optimized Index related changes and a few important things to take care of before designing your Tables and Indexes.
 

–> Some of the main points to note are:

1. Indexes on In-Memory tables must be created inline with the CREATE TABLE DDL script; they cannot be added afterwards.

2. These Indexes are not persisted on Disk and reside only in memory, thus they are not logged. Because they are not persistent, they are re-created whenever SQL Server is restarted. Also, In-Memory tables DO NOT support Clustered Indexes.

3. Only two types of Indexes can be created on In-Memory tables, i.e. the Non Clustered Hash Index and the Non Clustered Index (aka Range Index). So there is no bookmark lookup.

4. These Non Clustered Indexes are inherently Covering, i.e. all columns are automatically INCLUDED in the Index.

5. A maximum of 8 Non Clustered Indexes can be created on an In-Memory table.
 

–> Here we will see how the Query Optimizer uses Hash & Range Indexes to process queries and return results:

1. Hash Indexes: are used for Point Lookups or Seeks. They are optimized for index seeks on equality predicates and also support full index scans. Thus they perform better only when the predicate contains equality predicates (=).

2. Range Indexes: are used for Range Scans and Ordered Scans. They are optimized for index scans on inequality predicates, such as greater than or less than, as well as sort order. Thus they perform better only when the predicate contains inequality or range predicates (>, <, BETWEEN).
 

–> Let’s check this with some hands-on code. We will create 2 similar In-Memory tables, one with a Range Index and the other with a Hash Index:

-- Create In-Memory Table with simple NonClustered Index (a.k.a Range Index):
CREATE TABLE dbo.MemOptTable_With_NC_Range_Index
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED,
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

-- Create In-Memory Table with NonClustered Hash Index:
CREATE TABLE dbo.MemOptTable_With_NC_Hash_Index
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000),
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

 

–> Now we will insert about 10k records into both tables, so that we have a good number of records to compare:

DECLARE @i INT = 1

WHILE @i <= 10000
BEGIN
	INSERT INTO dbo.MemOptTable_With_NC_Range_Index 
		VALUES(@i, REPLICATE('a', 200), GETDATE())

	INSERT INTO dbo.MemOptTable_With_NC_Hash_Index 
		VALUES(@i, REPLICATE('a', 200), GETDATE())

	SET @i = @i+1
END

 

–> Now check the Execution Plan using the equality operator (=) on both tables:

SELECT * FROM MemOptTable_With_NC_Hash_Index  WHERE ID = 5000 -- 4%
SELECT * FROM MemOptTable_With_NC_Range_Index WHERE ID = 5000 -- 96%

You will see in the Execution Plan image below that the equality operator with the Hash Index costs only 4%, while the Range Index costs 96%.

[Image: SQLServer2014_Hash_vs_Range_1 – execution plans comparing the Hash and Range Index tables for the equality predicate]
 

–> Now check the Execution Plan using a range operator (BETWEEN) on both tables:

SELECT * FROM MemOptTable_With_NC_Hash_Index  WHERE ID BETWEEN 5000 AND 6000 -- 99%
SELECT * FROM MemOptTable_With_NC_Range_Index WHERE ID BETWEEN 5000 AND 6000 -- 1%

You will see in the Execution Plan image below that the BETWEEN operator with the Range Index costs only 1%, while the Hash Index costs 99%.

[Image: SQLServer2014_Hash_vs_Range_2 – execution plans comparing the Hash and Range Index tables for the BETWEEN predicate]
 

So, while designing In-Memory Tables and Memory Optimized Indexes, you need to think ahead about how you are going to query the table. The right index choice depends on your query patterns and conditions, so always keep these points in mind in advance while designing your In-Memory Tables.
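
If a table has to serve both equality lookups and range scans, both index types can be declared inline on the same table. Below is a minimal sketch (the table and index names here are hypothetical, not part of the demo above): the hash index backs equality seeks on ID, and the range index backs range/ordered scans on DateAdded.

-- Hypothetical In-Memory table carrying both a Hash Index and a Range Index:
CREATE TABLE dbo.MemOptTable_With_Both_Indexes
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000),
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL,
    INDEX IX_MemOptTable_DateAdded NONCLUSTERED (DateAdded)
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

With this layout, a WHERE ID = 5000 predicate can seek on the hash index, while a WHERE DateAdded BETWEEN ... AND ... predicate can scan the range index.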
 

Update: Know more about In-Memory tables:


 


XTP (eXtreme Transaction Processing) with Hekaton Tables & Native Compiled Stored Procedures – SQL Server 2014

December 19, 2013 1 comment

In my previous posts [this & this] I talked about creating a Memory Optimized Database, how to create In-Memory Tables & Native Compiled Stored Procedures, and what happens when they are created.

Here in this post we will see how FAST In-Memory tables & Native Compiled Stored Procedures actually are, compared with normal Disk based Tables & simple Stored Procedures.

I’ll be using the same [ManTest] database used in my previous posts; you can refer to the DDL script [here].
 

–> We will create:

1. One Disk based Table & one simple Stored Procedure which will use this Disk based Table.

2. One In-Memory Table & one Native Compiled Stored Procedure which will use this In-Memory Table.
 

1. Let’s first create a Disk based Table and a normal Stored Procedure:

USE [ManTest]
GO

-- Create a Disk table (non-Memory Optimized):
CREATE TABLE dbo.DiskTable
(
    ID INT NOT NULL
        PRIMARY KEY,
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
)
GO

-- Create normal Stored Procedure to load data into above Table:
CREATE PROCEDURE dbo.spLoadDiskTable @maxRows INT, @VarString VARCHAR(200)
AS
BEGIN
	SET NOCOUNT ON

	DECLARE @i INT = 1

	WHILE @i <= @maxRows
	BEGIN
		INSERT INTO dbo.DiskTable VALUES(@i, @VarString, GETDATE())
		SET @i = @i+1
	END
END
GO

 

2. Now create an In-Memory table & a Native Compiled Stored Procedure to load data:

-- Create an In-Memory table:
CREATE TABLE dbo.MemOptTable
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000),
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

-- Create Native Compiled Stored Procedure to load data into above Table:
CREATE PROCEDURE dbo.spLoadMemOptTable @maxRows INT, @VarString VARCHAR(200)
WITH 
	NATIVE_COMPILATION, 
	SCHEMABINDING, 
	EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='us_english')

	DECLARE @i INT = 1

	WHILE @i <= @maxRows
	BEGIN
		INSERT INTO dbo.MemOptTable VALUES(@i, @VarString, GETDATE())
		SET @i = @i+1
	END
END
GO

 

–> Now we will load 10k records into the above 2 tables in various ways, as follows:

1. Load the Disk based Table with a T-SQL script using a WHILE loop.

2. Load the same Disk based Table with the Stored Procedure, which internally uses a WHILE loop.

3. Load the In-Memory Table with a T-SQL script using a WHILE loop.

4. Load the same In-Memory Table with the Native Compiled Stored Procedure, which internally uses a WHILE loop.
 

–> Working with Disk based Tables:

SET NOCOUNT ON

DECLARE 
	@StartTime DATETIME2,
	@TotalTime INT

DECLARE 
	@i INT,
	@maxRows INT,
	@VarString VARCHAR(200)

SET @maxRows = 10000
SET @VarString = REPLICATE('a',200)

SET @StartTime = SYSDATETIME()
SET @i = 1

-- 1. Load Disk Table (without SP):
WHILE @i <= @maxRows
BEGIN
	INSERT INTO dbo.DiskTable VALUES(@i, @VarString, GETDATE())
	SET @i = @i+1
END

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Disk Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (without SP)'

-- 2. Load Disk Table (with simple SP):
DELETE FROM dbo.DiskTable

SET @StartTime = SYSDATETIME()

EXEC spLoadDiskTable @maxRows, @VarString

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Disk Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (with simple SP)'

 

–> Working with In-Memory Tables:

-- 3. Load Memory Optimized Table (without SP):
SET @StartTime = SYSDATETIME()
SET @i = 1

WHILE @i <= @maxRows
BEGIN
	INSERT INTO dbo.MemOptTable VALUES(@i, @VarString, GETDATE())
	SET @i = @i+1
END

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Memory Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (without SP)'

-- 4. Load Memory Optimized Table (with Native Compiled SP):
DELETE FROM dbo.MemOptTable

SET @StartTime = SYSDATETIME()

EXEC spLoadMemOptTable @maxRows, @VarString

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Memory Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (with Native Compiled SP)'
GO

–> Output (Loaded 10k records):

Disk based Table Load	 : 28382 ms (without SP)
Disk based Table SP Load : 8297 ms  (with simple SP)
In-Memory Table Load     : 5176 ms  (without SP)
In-Memory Table SP Load  : 174 ms   (with Native Compiled SP)

 

So, you can clearly see the benefit and the multifold increase in performance from using In-Memory Tables & Native Compiled Stored Procedures. The graph below shows the performance comparison as bar charts; impressive, isn’t it?

[Image: SQLServer2014Comparison – bar chart of the load times above for Disk based vs In-Memory tables, with and without Stored Procedures]
 

–> Final Cleanup

DROP PROCEDURE dbo.spLoadDiskTable
DROP TABLE dbo.DiskTable
DROP PROCEDURE dbo.spLoadMemOptTable
DROP TABLE dbo.MemOptTable
GO

 

Update: Know more about In-Memory tables:


 


MERGE data from source Table to target Table across Databases with a SP – MSDN TSQL forum

December 18, 2013 Leave a comment

–> Question:

How do I merge data from Table1 on Server-1 to final table on Server-2 with a Stored Procedure to execute every so many hours?
 

–> My Answer:

A basic approach would be to create a Linked Server on Server2 that points to Server1.

On Server2 you can create a Stored Procedure which reads the table from Server1 via the Linked Server. You can use the MERGE statement to merge (i.e. INSERT/UPDATE/DELETE) records into the final table on Server2 from Table1 on Server1, as sketched below.

If you are dealing with millions of records then you can go with an Incremental/Delta load approach; here you will have to store the timestamp of the previous load in some metadata table.

To keep it executing every 2 hours, you will need to create a SQL Agent Job and schedule it to run every 2 hours.

You can also use SSIS or another ETL tool as a better approach to set up this load.
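
A minimal sketch of such a procedure, assuming a Linked Server named [Server1], a source table SourceDB.dbo.Table1 keyed on ID with a ModifiedDate column, and a local watermark table dbo.LoadWatermark (all of these names are hypothetical, used only for illustration):

-- Hypothetical example: merge rows changed since the previous load
-- from [Server1].SourceDB.dbo.Table1 into the local dbo.FinalTable.
CREATE PROCEDURE dbo.spMergeTable1
AS
BEGIN
	SET NOCOUNT ON

	DECLARE @LastLoad DATETIME

	-- Read the watermark saved by the previous run (Incremental/Delta load)
	SELECT @LastLoad = LastLoadTime
	FROM dbo.LoadWatermark
	WHERE TableName = 'Table1'

	MERGE dbo.FinalTable AS tgt
	USING (
		SELECT ID, Col1, ModifiedDate
		FROM [Server1].SourceDB.dbo.Table1
		WHERE ModifiedDate > @LastLoad
	) AS src
		ON tgt.ID = src.ID
	WHEN MATCHED THEN
		UPDATE SET tgt.Col1 = src.Col1, tgt.ModifiedDate = src.ModifiedDate
	WHEN NOT MATCHED BY TARGET THEN
		INSERT (ID, Col1, ModifiedDate) VALUES (src.ID, src.Col1, src.ModifiedDate);
	-- Note: with a delta load, deletes on the source need separate handling
	-- (e.g. a soft-delete flag), so WHEN NOT MATCHED BY SOURCE is not used here.

	-- Save the new watermark for the next run
	UPDATE dbo.LoadWatermark SET LastLoadTime = GETDATE() WHERE TableName = 'Table1'
END
GO

A SQL Agent Job scheduled every 2 hours would then just EXEC dbo.spMergeTable1.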
 

Ref Link.


SQL Tips – Search and list out SQL Jobs containing specific word, text or SQL Query

December 12, 2013 Leave a comment

 
Today, while working on a database migration project, I wanted to check which SQL Jobs call a particular Stored Procedure.

So I created the following queries, where you pass the SP name in the WHERE clause and get the respective SQL Job names.

The 1st query just gives the SQL Job name, and the 2nd query gives the SQL Job name along with the Job Step name, the SQL command used in that step, and the target Database.

use msdb
go

-- #1. List out all SQL Jobs:

select distinct j.name
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
where js.command like '%spStartSystem%'
order by j.name


-- #2. List out all SQL Jobs with SQL command, Step name and target Database:

select 
	j.name, 
	js.step_name, 
	js.command, 
	js.database_name
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
where js.command like '%spStartSystem%'
order by j.name


Categories: SQL Tips