
Memory Optimized Indexes | Hash vs Range Indexes – SQL Server 2014

December 20, 2013

In SQL Server 2014, the DDL for In-Memory tables differs in many ways from that of normal Disk Based Tables. We've covered the In-Memory Table changes in previous posts, check [here]. Here we will look at the Memory Optimized Index changes and a few important things to take care of before designing your Tables and Indexes.
 

–> Some of the main points to note are:

1. Indexes on In-Memory tables must be created inline, within the CREATE TABLE DDL script only; they cannot be added afterwards.

2. These Indexes are not persisted on Disk and reside only in memory, thus they are not logged. Because they are not persistent, they are rebuilt every time SQL Server restarts. This is also why In-Memory tables DO NOT support Clustered Indexes.

3. Only two types of Indexes can be created on In-Memory tables: the Non Clustered Hash Index and the Non Clustered Index (aka Range Index). So there is no bookmark lookup.

4. These Non Clustered Indexes are inherently Covering, and all columns are automatically INCLUDED in the Index.

5. A MAX of 8 Non Clustered Indexes in total can be created on an In-Memory table, all of them declared inline as in the sketch below.
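For illustration, here is a minimal sketch (the table and index names are hypothetical) showing both index types declared inline in a single CREATE TABLE, since they cannot be added later:

-- A sketch with hypothetical names: both index types declared inline:
CREATE TABLE dbo.SampleInlineIndexes
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000), -- Hash Index
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL,
    INDEX IX_SampleInlineIndexes_DateAdded 
        NONCLUSTERED (DateAdded) -- Range Index
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO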
 

–> Here we will see how Query Optimizer uses Hash & Range Indexes to process query and return results:

1. Hash Indexes: used for Point Lookups or Seeks. They are optimized for index seeks on equality predicates and also support full index scans. They will therefore perform better only when the predicate contains equality predicates (=).

2. Range Indexes: used for Range Scans and Ordered Scans. They are optimized for index scans on inequality predicates, such as greater than or less than, as well as sort order. They will therefore perform better only when the predicate contains inequality or range predicates (>, <, >=, <=, BETWEEN).
 

–> Let’s check this with some hands-on code. We will create 2 similar In-Memory tables, one with a Range Index and the other with a Hash Index:

-- Create In-Memory Table with simple NonClustered Index (a.k.a Range Index):
CREATE TABLE dbo.MemOptTable_With_NC_Range_Index
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED,
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

-- Create In-Memory Table with NonClustered Hash Index:
CREATE TABLE dbo.MemOptTable_With_NC_Hash_Index
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000),
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

 

–> Now we will insert 10,000 records into both tables, so that we have a good number of records to compare:

DECLARE @i INT = 1

WHILE @i <= 10000
BEGIN
	INSERT INTO dbo.MemOptTable_With_NC_Range_Index 
		VALUES(@i, REPLICATE('a', 200), GETDATE())

	INSERT INTO dbo.MemOptTable_With_NC_Hash_Index 
		VALUES(@i, REPLICATE('a', 200), GETDATE())

	SET @i = @i+1
END

 

–> Now check the Execution Plan by using equality Operator (=) on both the tables:

SELECT * FROM MemOptTable_With_NC_Hash_Index  WHERE ID = 5000 -- 4%
SELECT * FROM MemOptTable_With_NC_Range_Index WHERE ID = 5000 -- 96%

You will see in the Execution Plan image below that the equality operator with the Hash Index costs you only 4%, while the Range Index costs you 96%.

[Image: SQLServer2014_Hash_vs_Range_1 – execution plans for the equality predicate]
 

–> Now check the Execution Plan by using inequality Operator (BETWEEN) on both the tables:

SELECT * FROM MemOptTable_With_NC_Hash_Index  WHERE ID BETWEEN 5000 AND 6000 -- 99%
SELECT * FROM MemOptTable_With_NC_Range_Index WHERE ID BETWEEN 5000 AND 6000 -- 1%

You will see in the Execution Plan image below that the inequality operator with the Range Index costs you only 1%, while the Hash Index costs you 99%.

[Image: SQLServer2014_Hash_vs_Range_2 – execution plans for the BETWEEN predicate]
 

So, while designing In-Memory tables and Memory Optimized Indexes, you need to consider in advance how you are going to query the table. The right index type depends on your query patterns and conditions, so always keep these things in mind up front while designing your In-Memory tables.
 


XTP (eXtreme Transaction Processing) with Hekaton Tables & Native Compiled Stored Procedures – SQL Server 2014

December 19, 2013

In my previous posts [this & this] I talked about creating a Memory Optimized Database, how to create In-Memory Tables & Natively Compiled Stored Procedures, and what happens when they are created.

Here in this post we will see how fast In-Memory tables & Natively Compiled Stored Procedures actually are, compared with normal Disk based Tables & simple Stored Procedures.

I’ll be using the same [ManTest] database used in my previous posts, you can refer to the DDL script [here].
 

–> We will create:

1. One Disk based Table & one simple Stored Procedure which will use this Disk based Table.

2. One In-Memory Table & one Native Compiled Stored Procedure which will use this In-Memory Table.
 

1. Let’s first create a Disk based Table and a normal Stored Procedure:

USE [ManTest]
GO

-- Create a Disk table (non-Memory Optimized):
CREATE TABLE dbo.DiskTable
(
    ID INT NOT NULL
        PRIMARY KEY,
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
)
GO

-- Create normal Stored Procedure to load data into above Table:
CREATE PROCEDURE dbo.spLoadDiskTable @maxRows INT, @VarString VARCHAR(200)
AS
BEGIN
	SET NOCOUNT ON

	DECLARE @i INT = 1

	WHILE @i <= @maxRows
	BEGIN
		INSERT INTO dbo.DiskTable VALUES(@i, @VarString, GETDATE())
		SET @i = @i+1
	END
END
GO

 

2. Now create an In-Memory table & a Native Compiled Stored Procedure to load data:

-- Create an In-Memory table:
CREATE TABLE dbo.MemOptTable
(
    ID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000),
    VarString VARCHAR(200) NOT NULL, 
    DateAdded DATETIME NOT NULL
) WITH (
	MEMORY_OPTIMIZED = ON, 
	DURABILITY = SCHEMA_AND_DATA
)
GO

-- Create Native Compiled Stored Procedure to load data into above Table:
CREATE PROCEDURE dbo.spLoadMemOptTable @maxRows INT, @VarString VARCHAR(200)
WITH 
	NATIVE_COMPILATION, 
	SCHEMABINDING, 
	EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE='us_english')

	DECLARE @i INT = 1

	WHILE @i <= @maxRows
	BEGIN
		INSERT INTO dbo.MemOptTable VALUES(@i, @VarString, GETDATE())
		SET @i = @i+1
	END
END
GO

 

–> Now we will load 10k records into the above 2 tables in the following ways:

1. Load Disk based Table by T-SQL script using a WHILE loop.

2. Load the same Disk based Table by Stored Procedure which internally uses a WHILE loop.

3. Load In-Memory Table by T-SQL script using a WHILE loop.

4. Load the same In-Memory Table by Native Compiled Stored Procedure which internally uses a WHILE loop.
 

–> Working with Disk based Tables:

SET NOCOUNT ON

DECLARE 
	@StartTime DATETIME2,
	@TotalTime INT

DECLARE 
	@i INT,
	@maxRows INT,
	@VarString VARCHAR(200)

SET @maxRows = 10000
SET @VarString = REPLICATE('a',200)

SET @StartTime = SYSDATETIME()
SET @i = 1

-- 1. Load Disk Table (without SP):
WHILE @i <= @maxRows
BEGIN
	INSERT INTO dbo.DiskTable VALUES(@i, @VarString, GETDATE())
	SET @i = @i+1
END

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Disk Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (without SP)'

-- 2. Load Disk Table (with simple SP):
DELETE FROM dbo.DiskTable

SET @StartTime = SYSDATETIME()

EXEC spLoadDiskTable @maxRows, @VarString

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Disk Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (with simple SP)'

 

–> Working with In-Memory Tables:

-- 3. Load Memory Optimized Table (without SP):
SET @StartTime = SYSDATETIME()
SET @i = 1

WHILE @i <= @maxRows
BEGIN
	INSERT INTO dbo.MemOptTable VALUES(@i, @VarString, GETDATE())
	SET @i = @i+1
END

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Memory Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (without SP)'

-- 4. Load Memory Optimized Table (with Native Compiled SP):
DELETE FROM dbo.MemOptTable

SET @StartTime = SYSDATETIME()

EXEC spLoadMemOptTable @maxRows, @VarString

SET @TotalTime = DATEDIFF(ms,@StartTime,SYSDATETIME())

SELECT 'Memory Table Load: ' + CAST(@TotalTime AS VARCHAR) + ' ms (with Native Compiled SP)'
GO

–> Output (Loaded 10k records):

Disk based Table Load    : 28382 ms (without SP)
Disk based Table SP Load : 8297 ms  (with simple SP)
In-Memory Table Load     : 5176 ms  (without SP)
In-Memory Table SP Load  : 174 ms   (with Native Compiled SP)

 

So, you can clearly see the benefit: a multifold increase in performance by using In-Memory Tables & Natively Compiled Stored Procedures. The graph below shows the performance as a bar chart; impressive, isn't it?

[Image: SQLServer2014Comparison – bar chart of the four load timings above]
 

–> Final Cleanup

DROP PROCEDURE dbo.spLoadDiskTable
DROP TABLE dbo.DiskTable
DROP PROCEDURE dbo.spLoadMemOptTable
DROP TABLE dbo.MemOptTable
GO

 


Behind the scenes with Hekaton Tables & Native Compiled SPs | SQL Server 2014

September 25, 2013

In my previous posts I talked about:

1. What’s [new with SQL Server 2014 & Hekaton] and CTP-1 [download link].

2. [Installing] SQL Server 2014 with Sneak-Peek.

3. Working with [Hekaton] Tables.

Here, in this post, I'll discuss more about what the new SQL Server 2014 does behind the scenes while you create Hekaton (Memory Optimized) Tables & Natively Compiled Stored Procedures.

I will use the same Database (Hekaton enabled) created in my [previous post]; then we will check what SQL Server does while creating Hekaton tables & compiled SPs:

–> Create Memory-Optimized Table:

USE [ManTest]
GO

CREATE TABLE dbo.Test_memoryOptimizedTable
(
	TestID INT NOT NULL
		PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
	
	TestName NVARCHAR(100) NOT NULL, 
	
	DateAdded DATETIME NOT NULL

) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

–> Create Native Compiled Stored Procedure:

USE [ManTest]
GO

CREATE PROCEDURE dbo.Test_NativelyCompiledStoredProcedure (
	@param1 INT not null, 
	@param2 NVARCHAR(100) not null
	)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (
	TRANSACTION ISOLATION LEVEL = SNAPSHOT, 
	LANGUAGE = N'us_english'
	)

INSERT dbo.Test_memoryOptimizedTable VALUES (@param1, @param2, getdate())

END
GO

–> After executing the above code, the Table & Stored Procedure are created as usual. But the important thing here is what the Hekaton Engine does internally, shown in the image below:
[Image: SQLServer2014_C_Code_DLL – files generated under the DATA\xtp folder]

– It creates a total of 6 files for every Table & SP, with the following extensions: .c, .dll, .mat, .obj, .out and .pdb.

– The most important are the C code and the DLL files, with four (4) other supporting files for each Table and Stored Procedure, stored at the following path: “C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\xtp\5\”.

– The “xtp” folder here contains a sub-folder “5”, which is nothing but the Database ID; check this:

SELECT DB_ID();

SELECT object_id, name, type
FROM sys.sysobjects 
WHERE name IN ('Test_memoryOptimizedTable', 'Test_NativelyCompiledStoredProcedure');

– If you look closely, the files are created with a particular naming convention that relates to the results of the above query:
— For files xtp_t_5_277576027: xtp_t is for Table, 5 is the Database ID and 277576027 is the Object (table) ID.
— For files xtp_p_5_325576198: xtp_p is for Stored Procedure, 5 is the Database ID and 325576198 is the Object (Stored Procedure) ID.
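You can also confirm that these generated DLLs are loaded into the SQL Server process. A quick sanity-check sketch using the sys.dm_os_loaded_modules DMV:

-- Sketch: list the natively compiled (XTP) DLLs loaded by this instance:
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL'
GO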

–> Opening the xtp_t_5_277576027.c file looks like this:

#define __in
#define __out
#define __inout
#define __in_opt
#define __out_opt
#define __inout_opt
#define __in_ecount(x)
#define __out_ecount(x)
#define __deref_out_ecount(x)
#define __inout_ecount(x)
#define __in_bcount(x)
#define __out_bcount(x)
#define __deref_out_bcount(x)
#define __deref_out_range(x, y)
#define __success(x)
#define __inout_bcount(x)
#define __deref_opt_out
#define __deref_out
#define __checkReturn
#define __callback
#define __nullterminated

typedef unsigned char bool;
typedef unsigned short wchar_t;
typedef long HRESULT;

typedef unsigned __int64 ULONG_PTR;

#include "hkenggen.h"
#include "hkrtgen.h"
#include "hkgenlib.h"

#define ENABLE_INTSAFE_SIGNED_FUNCTIONS
#include "intsafe.h"

int _fltused = 0;

int memcmp(const void*, const void*, size_t);
void *memcpy(void*, const void*, size_t);
void *memset(void*, int, size_t);

#define offsetof(s,f)   ((size_t)&(((s*)0)->f))

struct hkt_277576027
{
	__int64 hkc_3;
	long hkc_1;
	unsigned short hkvdo[2];
};
struct hkis_27757602700002
{
	long hkc_1;
};
struct hkif_27757602700002
{
	long hkc_1;
};
unsigned short GetSerializeSize_277576027(
	struct HkRow const* hkRow)
{
	struct hkt_277576027 const* row = ((struct hkt_277576027 const*)hkRow);
	return ((row->hkvdo)[1]);
}
HRESULT Serialize_277576027(
	struct HkRow const* hkRow,
	unsigned char* buffer,
	unsigned short bufferSize,
	unsigned short* copySize)
{
	return (RowSerialize(hkRow, (GetSerializeSize_277576027(hkRow)), buffer, bufferSize, copySize));
}
HRESULT Deserialize_277576027(
	struct HkTransaction* tx,
	struct HkTable* table,
	unsigned char const* data,
	unsigned short datasize,
	struct HkRow** hkrow)
{
	return (RowDeserialize(tx, table, data, datasize, sizeof(struct hkt_277576027), (sizeof(struct hkt_277576027) + 200), hkrow));
}
unsigned short GetSerializeRecKeySize_277576027(
	struct HkRow const* hkRow)
{
	struct hkt_277576027 const* row = ((struct hkt_277576027 const*)hkRow);
	unsigned short size = sizeof(struct hkif_27757602700002);
	return size;
}
HRESULT SerializeRecKey_27757602700002(
	struct HkRow const* hkRow,
	unsigned char* hkKey,
	unsigned short bufferSize,
	unsigned short* keySize)
{
	struct hkt_277576027 const* row = ((struct hkt_277576027 const*)hkRow);
	struct hkif_27757602700002* key = ((struct hkif_27757602700002*)hkKey);
	(*keySize) = sizeof(struct hkif_27757602700002);
	if ((bufferSize < (*keySize)))
	{
		return -2013265920;
	}
	(key->hkc_1) = (row->hkc_1);
	return 0;
}
HRESULT DeserializeRecKey_277576027(
	unsigned char const* data,
	unsigned short dataSize,
	struct HkSearchKey* key,
	unsigned short bufferSize)
{
	struct hkif_27757602700002 const* source = ((struct hkif_27757602700002 const*)data);
	struct hkis_27757602700002* target = ((struct hkis_27757602700002*)key);
	unsigned long targetSize = sizeof(struct hkis_27757602700002);
	if ((targetSize > bufferSize))
	{
		return -2013265920;
	}
	(target->hkc_1) = (source->hkc_1);
	return 0;
}
__int64 CompareSKeyToRow_27757602700002(
	struct HkSearchKey const* hkArg0,
	struct HkRow const* hkArg1)
{
	struct hkis_27757602700002* arg0 = ((struct hkis_27757602700002*)hkArg0);
	struct hkt_277576027* arg1 = ((struct hkt_277576027*)hkArg1);
	__int64 ret;
	ret = (CompareKeys_int((arg0->hkc_1), (arg1->hkc_1)));
	return ret;
}
__int64 CompareRowToRow_27757602700002(
	struct HkRow const* hkArg0,
	struct HkRow const* hkArg1)
{
	struct hkt_277576027* arg0 = ((struct hkt_277576027*)hkArg0);
	struct hkt_277576027* arg1 = ((struct hkt_277576027*)hkArg1);
	__int64 ret;
	ret = (CompareKeys_int((arg0->hkc_1), (arg1->hkc_1)));
	return ret;
}
unsigned long ComputeSKeyHash_27757602700002(
	struct HkSearchKey const* hkArg)
{
	struct hkis_27757602700002* arg = ((struct hkis_27757602700002*)hkArg);
	unsigned long hashState = 0;
	unsigned long hashValue = 0;
	hashValue = (ComputeHash_int((arg->hkc_1), (&hashState)));
	return hashValue;
}
unsigned long ComputeRowHash_27757602700002(
	struct HkRow const* hkArg)
{
	struct hkt_277576027* arg = ((struct hkt_277576027*)hkArg);
	unsigned long hashState = 0;
	unsigned long hashValue = 0;
	hashValue = (ComputeHash_int((arg->hkc_1), (&hashState)));
	return hashValue;
}
struct HkOffsetInfo const KeyOffsetArray_27757602700002[] = 
{

	{
		offsetof(struct hkis_27757602700002, hkc_1),
		0,
		0,
	},
};
struct HkKeyColsInfo const KeyColsInfoArray_277576027[] = 
{

	{
		sizeof(struct hkis_27757602700002),
		KeyOffsetArray_27757602700002,
		sizeof(struct hkis_27757602700002),
		sizeof(struct hkis_27757602700002),
	},
};
struct HkOffsetInfo const OffsetArray_277576027[] = 
{

	{
		offsetof(struct hkt_277576027, hkc_1),
		0,
		0,
	},

	{
		(offsetof(struct hkt_277576027, hkvdo) + 0),
		0,
		0,
	},

	{
		offsetof(struct hkt_277576027, hkc_3),
		0,
		0,
	},
};
struct HkColsInfo const ColsInfo_277576027 = 
{
	sizeof(struct hkt_277576027),
	OffsetArray_277576027,
	KeyColsInfoArray_277576027,
};
struct HkHashIndexMD HashIndexMD_277576027[] = 
{

	{
		2,
		1,
		1024,
		CompareSKeyToRow_27757602700002,
		CompareRowToRow_27757602700002,
		ComputeSKeyHash_27757602700002,
		ComputeRowHash_27757602700002,
	},
};
struct HkTableMD TableMD = 
{
	sizeof(struct hkt_277576027),
	(sizeof(struct hkt_277576027) + 200),
	1,
	HashIndexMD_277576027,
	0,
	0,
	0,
	(&ColsInfo_277576027),
	277576027,
	0,
	GetSerializeSize_277576027,
	Serialize_277576027,
	Deserialize_277576027,
	GetSerializeRecKeySize_277576027,
	SerializeRecKey_27757602700002,
	DeserializeRecKey_277576027,
};
__declspec(dllexport)
struct HkTableBindings g_Bindings = 
{
	277576027,
	(&TableMD),
};

I cannot understand a single bit here, but it brings back memories of studying C & C++ in college 🙂

–> Final Cleanup

DROP PROCEDURE dbo.Test_NativelyCompiledStoredProcedure
DROP TABLE dbo.Test_memoryOptimizedTable

 


ColumnStore Index enhancements in SQL Server 2014

August 20, 2013

ColumnStore Indexes were first introduced in SQL Server 2012, creating a new way to store and retrieve Index or Table data in an efficient manner.

ColumnStore uses the xVelocity technology (based on the VertiPaq engine), which applies a new Columnar storage technique: data is stored highly Compressed, with support for In-memory Caching and highly parallel data scanning with Aggregation algorithms.

The ColumnStore, or Columnar, data format does not store data in the traditional RowStore fashion; instead the data is grouped and stored one column at a time, in Column Segments.

The traditional RowStore keeps all the columns of a row together and stores the rows in Data Pages; this is still the storage mechanism for Heaps and Clustered Indexes.

Thus, a ColumnStore increases performance by reading less, highly compressed data, and by reading it in batches from disk to memory, further reducing I/O.
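Once a ColumnStore Index exists, you can peek at these Column Segments through the catalog views. A rough sketch (dbo.SalesFact is a hypothetical table):

-- Sketch: inspect column segments and their compressed on-disk size:
SELECT s.column_id, s.segment_id, s.row_count, s.on_disk_size
FROM sys.column_store_segments s
JOIN sys.partitions p 
    ON s.hobt_id = p.hobt_id
WHERE p.object_id = OBJECT_ID('dbo.SalesFact') -- hypothetical table
GO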
 

–> To know more about ColumnStore Indexes check [MSDN BoL].
 

We will start with SQL Server 2012 offering for ColumnStore Indexes:

1. A Table (Heap or BTree) can have only one NonClustered ColumnStore Index.

2. A Table with a NonClustered ColumnStore Index becomes read-only and cannot be updated.

3. The NonClustered ColumnStore Index uses Segment Compression for high compression of Columnar data.
 

With SQL Server 2014 some new features were added, like:

1. You can create one Clustered ColumnStore Index on a Table, and no further Indexes can be created.

2. A Table with Clustered ColumnStore Index can be updated with INSERT/UPDATE/DELETE operations.

3. Both the Clustered & NonClustered ColumnStore Indexes get a new Archival Compression option, i.e. COLUMNSTORE_ARCHIVE, to further compress the data; see the example below.
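As a rough illustration of the new syntax (the table and index names here are hypothetical):

-- Sketch: create a Clustered ColumnStore Index (SQL Server 2014):
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact
ON dbo.SalesFact
GO

-- Sketch: rebuild it with the new Archival Compression option:
ALTER INDEX CCI_SalesFact ON dbo.SalesFact
REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE)
GO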
 

–> Check more about this on [MSDN BoL].
 


Getting Started with Hekaton (Memory Optimized) tables | SQL Server 2014

July 16, 2013

Hekaton is the Microsoft code name for the In-Memory (Memory Optimized) Tables feature of its new version, SQL Server 2014. This version of SQL Server mainly focuses on high performance In-Memory Database workloads: Memory Optimized Tables reside in the Server's memory and provide efficient execution of Business Logic by drastically reducing the I/O between Disk & Memory.

So, as soon as a Memory Optimized Table is created, the data it contains is also loaded into Memory. Let's get some hands-on with Hekaton Tables via simple T-SQL scripts:

–> Let’s first create a Test Database:

USE [master]
GO

CREATE DATABASE [ManTest]
 CONTAINMENT = NONE
 ON  PRIMARY ( 
	NAME = N'ManTest', 
	FILENAME = N'D:\SQLServer2014\MSSQL\DATA\ManTest.mdf', 
	SIZE = 5120KB, 
	FILEGROWTH = 1024KB 
 )
 LOG ON ( 
	NAME = N'ManTest_log', 
	FILENAME = N'D:\SQLServer2014\MSSQL\Log\ManTest_log.ldf', 
	SIZE = 1024KB, 
	FILEGROWTH = 10%
 )
GO

–> Now we will create a FileGroup and associate the [ManTest] Database with it:

USE [ManTest]
GO

-- Add MEMORY_OPTIMIZED_DATA filegroup to the database.
ALTER DATABASE [ManTest] 
	ADD FILEGROUP [ManTestFG] CONTAINS MEMORY_OPTIMIZED_DATA 
GO

-- Add file to the MEMORY_OPTIMIZED_DATA filegroup.
ALTER DATABASE [ManTest]
ADD FILE
  ( NAME = ManTestFG_file1,
    FILENAME = N'D:\SQLServer2014\MSSQL\DATA\Datasample_database_1') -- You might need to check and correct the path here.
TO FILEGROUP ManTestFG
GO

A SQL Server FileGroup with CONTAINS MEMORY_OPTIMIZED_DATA is required to store memory-optimized data in a SQL Server Database. As per MSDN, this FileGroup is used to guarantee the durability of memory-resident data in the event of a server crash or restart: during the crash recovery phase of server startup, the data is recovered from this FileGroup and loaded back into memory.
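To verify that the FileGroup was added, you can query sys.filegroups; memory-optimized FileGroups show up with type 'FX'. A quick sanity-check sketch:

-- Sketch: confirm the MEMORY_OPTIMIZED_DATA filegroup exists (type 'FX'):
SELECT name, type, type_desc
FROM sys.filegroups
WHERE type = 'FX'
GO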

–> Now we will create Memory Optimized Table with new Options added in CREATE TABLE syntax:

-- Create memory optimized table and hash indexes on the memory optimized table:
CREATE TABLE dbo.Person_memoryOptimizedTable
(
	PersonID INT NOT NULL
		PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 19972),
	
	PersonName NVARCHAR(100) NOT NULL, 
	
	DateAdded DATETIME NOT NULL

) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

The Options:
– NONCLUSTERED HASH WITH : a Clustered Index is not allowed (indexes are not stored on disk, but re-created after SQL Server restarts), so the PK should be defined with a NonClustered Index of type 'Hash'.
– BUCKET_COUNT : indicates the number of buckets that should be created in the Hash index; you can sanity-check this choice with the DMV query below.
– MEMORY_OPTIMIZED = ON : makes the table Memory Optimized.
– DURABILITY = SCHEMA_AND_DATA : makes both the Table schema and its Data durable, so both are recovered after a restart or crash.
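Once the table has data, the sys.dm_db_xtp_hash_index_stats DMV shows how well the BUCKET_COUNT fits the actual key distribution (a sketch; run it in the database holding the table):

-- Sketch: check hash bucket usage to validate the BUCKET_COUNT choice:
SELECT 
	OBJECT_NAME(hs.object_id) AS table_name,
	hs.total_bucket_count,
	hs.empty_bucket_count,
	hs.avg_chain_length,
	hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats hs
GO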

Note:
– Indexes can only be created inline, with the CREATE TABLE statement.
– If you don't want the Data persisted, use the DURABILITY = SCHEMA_ONLY option instead of SCHEMA_AND_DATA.
– The option DURABILITY = SCHEMA_ONLY does not make the Table Data durable on Disk: the data will be lost after a Server restart or crash, but the Table Schema will remain available, as in the sketch below.
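A minimal sketch of such a non-durable table (the names are hypothetical):

-- Sketch: non-durable table; the schema survives a restart, the data does not:
CREATE TABLE dbo.Staging_SchemaOnly
(
	ID INT NOT NULL
		PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
	Payload VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO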

–> Let’s insert some records in the table created above:

-- Inserting records into the Memory Optimized Table from [Person] table in [AdventureWorks] DB:
INSERT INTO dbo.Person_memoryOptimizedTable
SELECT 
	[BusinessEntityID], 
	CONCAT([FirstName], ' ', [MiddleName], ' ', [LastName]) AS PersonName, 
	GETDATE() AS DateAdded
FROM [AdventureWorks2012].[Person].[Person]

You will get the following error:
Msg 41317, Level 16, State 3, Line 34
A user transaction that accesses memory optimized tables or natively compiled procedures cannot access more than one user database or databases model and msdb, and it cannot write to master.

This means that you cannot insert records into Memory Optimized Tables across databases.
So we have a workaround here: we will insert the records into a #temp table first, and then insert from this #temp table.

-- So, I'll insert records from [AdvantureWorks] Database to #temp table first:
SELECT 
	[BusinessEntityID], 
	CONCAT([FirstName], ' ', [MiddleName], ' ', [LastName]) AS PersonName, 
	GETDATE() AS DateAdded
into #tempPerson
FROM [AdventureWorks2012].[Person].[Person]
-- (19972 row(s) affected)

-- Now, I can insert records from #temp table into the Memory Optimized Table:
INSERT INTO dbo.Person_memoryOptimizedTable
SELECT 
	[BusinessEntityID], 
	PersonName, 
	DateAdded
FROM #tempPerson
-- (19972 row(s) affected)

-- Let's check contents of the Table:
SELECT * 
FROM Person_memoryOptimizedTable
GO

So, here we saw how to create Memory Optimized Tables, and how to create a separate Database to store them by allocating a MEMORY_OPTIMIZED_DATA FileGroup to the DB.

-- Final Cleanup
DROP TABLE dbo.Person_memoryOptimizedTable

DROP TABLE #tempPerson
GO

USE master
GO

DROP DATABASE ManTest
GO

 


In the next post I'll discuss more about Memory Optimized tables, their benefits and limitations.