
Archive for the ‘SQL Server Internals’ Category

Pass an array of multiple values with a single Parameter in a Stored Procedure – SQL Server

July 10, 2015

This post comes from an old discussion on MSDN T-SQL Forum[link], where someone asked about the same topic.

Check my reply there; with this post I'm going to collate all the different methods I've blogged about previously that can be used for the same purpose.
 

Stored Procedures accept only a fixed, pre-defined number of parameters, so you cannot pass a dynamic number of them the way you can in other languages like C, C#, Java, etc. Those languages have the concept of Arrays, which you can pass to a method/function, but SQL has no array variables and no datatype that supports arrays. Thus, if you have to provide multiple values to any parameter, you cannot do it directly, though there are some workarounds.
 

–> Here are some of the methods or workarounds by which we can pass multiple values as a single Parameter in a Stored Procedure or a Function:

Method #1 – Passing a CSV: list of strings as a parameter to a (N)VARCHAR datatype parameter, then splitting/parsing it inside the SP or UDF, check here.

Method #2 – Passing an XML: string as an XML datatype parameter. We will need to parse the XML inside the SP, check here.

Method #3 – Using a temp table: inside an SP which is created outside just before its execution. Here there is no need to pass any parameter with the SP, check here.

Method #4 – Using TVPs: With SQL Server 2008 and above you can create TVPs or Table Valued Parameters and declare them by using user-defined table types. These TVPs can then be used to send multiple rows of data to SPs or UDFs, without creating a temp table or multiple parameters, check here (a minimal sketch follows this list).

Method #5 – Passing a JSON string: as a NVARCHAR datatype parameter. We will need to parse the JSON inside the SP, check here.
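
For example, here is a minimal sketch of Method #4 with a TVP; the type name dbo.IntList, the procedure dbo.usp_GetProducts and the use of AdventureWorks' Production.Product table are illustrative assumptions:

-- 1. Create a user-defined table type:
CREATE TYPE dbo.IntList AS TABLE (Value INT NOT NULL)
GO

-- 2. Create an SP that accepts the TVP (the READONLY keyword is mandatory):
CREATE PROCEDURE dbo.usp_GetProducts
	@ProductIDs dbo.IntList READONLY
AS
BEGIN
	SELECT p.ProductID, p.Name
	FROM Production.Product AS p
	INNER JOIN @ProductIDs AS ids
		ON ids.Value = p.ProductID
END
GO

-- 3. Declare, fill and pass the TVP as a single parameter:
DECLARE @ids dbo.IntList
INSERT INTO @ids (Value) VALUES (1), (2), (3)
EXEC dbo.usp_GetProducts @ProductIDs = @ids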
 

Thanks a lot, please provide your valuable comments and suggestions on this topic.
 




ColumnStore Indexes evolution from SQL Server 2012, 2014 to 2016

June 8, 2015

ColumnStore Indexes were first introduced in SQL Server 2012, providing a new way to store and retrieve Index or Table data in an efficient manner.

ColumnStore uses the xVelocity technology based on the VertiPaq engine; it uses a new Columnar storage technique to store data that is highly compressed, and is capable of in-memory caching and highly parallel data scanning with aggregation algorithms.

ColumnStore or Columnar data format does not store data in the traditional RowStore fashion; instead, the data is grouped and stored one column at a time in Column Segments.

The traditional RowStore stores the data of all columns of each row together in Data Pages, and this is still the storage mechanism for Heaps and Clustered Indexes.

Thus, a ColumnStore increases performance by reading less, highly compressed data from disk to memory, and in batches, further reducing I/O.
 

–> To know more about ColumnStore Indexes check [MSDN BoL].
 

–> Let’s see the journey of ColumnStore index from SQL Server 2012 to 2016:
 

We will start with SQL Server 2012 offering for ColumnStore Indexes:

1. A Table (Heap or BTree) can have only one NonClustered ColumnStore Index.

2. A Table with a NonClustered ColumnStore Index becomes read-only and cannot be updated.

3. The NonClustered ColumnStore Index uses Segment Compression for high compression of Columnar data.
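
For illustration, here is a minimal sketch of the 2012 syntax, assuming a hypothetical dbo.SalesFact table:

-- A NonClustered ColumnStore Index lists the columns to include:
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesFact
ON dbo.SalesFact (OrderDateKey, ProductKey, SalesAmount)
GO
-- From this point the table is read-only; to load data you must
-- disable/drop the index first and rebuild/re-create it afterwards.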
 

With SQL Server 2014 some new features were added, like:

1. You can create one Clustered ColumnStore Index on a Table, and no further Indexes can be created.

2. A Table with Clustered ColumnStore Index can be updated with INSERT/UPDATE/DELETE operations.

3. Both Clustered & NonClustered ColumnStore Indexes have a new Archival Compression option, i.e. COLUMNSTORE_ARCHIVE, to further compress the data.
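
A minimal sketch of the 2014 syntax, again assuming a hypothetical dbo.SalesFact table; a Clustered ColumnStore Index always covers all columns, so no column list is specified:

CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact
ON dbo.SalesFact
GO

-- Optional archival compression for colder data:
ALTER INDEX CCI_SalesFact ON dbo.SalesFact
REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE)
GO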
 

Now with SQL Server 2016 new features have been added and some limitations are removed:

1. A Table can still have only one NonClustered ColumnStore Index, but the index is now updatable, and thus the Table is too.

2. Now you can create a Filtered NonClustered ColumnStore Index by specifying a WHERE clause to it.

3. A Clustered ColumnStore Index table can have one or more NonClustered RowStore Indexes.

4. A Clustered ColumnStore Index can now be Unique indirectly, as you can create a Primary Key constraint (and also Foreign Key constraints) on the table, enforced through a NonClustered RowStore index.

5. Columnar support for In-Memory (Hekaton) tables, as you can create one ColumnStore Index on top of them.
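
For example, a Filtered NonClustered ColumnStore Index (point #2 above) can be sketched like this, assuming a hypothetical dbo.Orders table where OrderStatus = 5 marks closed orders:

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders_Closed
ON dbo.Orders (OrderID, OrderDate, Amount)
WHERE OrderStatus = 5
GO
-- Only the "cold" closed orders get the Columnar treatment;
-- hot rows stay in the RowStore for fast OLTP access.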
 

–> Here is the full feature summary of ColumnStore Index evolution from SQL Server 2012 to 2016:

[Image: feature summary of ColumnStore Indexes across SQL Server 2012, 2014 and 2016]

–> Check more about this on [MSDN BoL].
 



Reading XML data in SQL Server – MSDN TSQL forum


–> Question:

I have a table with a column with data type XML.

I don't know the contents or structure of the XML, and I need to extract it.

Please suggest.
 

–> My Answer:

Check these blog posts on how to query and work with XML data in SQL (a generic shredding sketch follows the list below):

Convert a table to XML and back to tabular format

Query XML data in tabular format

Query Nested XML

Using APPLY operator

Read XML from a file

XML with Namespaces

Export XML to a file

All XML posts
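
Since the XML structure is unknown in this case, a generic shredding query can be a starting point. Here is a minimal sketch, assuming a hypothetical dbo.MyTable with an XML column named XmlCol:

-- List every element name and value, without knowing the schema:
SELECT
	n.x.value('local-name(.)', 'VARCHAR(100)') AS ElementName,
	n.x.value('(./text())[1]', 'VARCHAR(MAX)') AS ElementValue
FROM dbo.MyTable AS t
CROSS APPLY t.XmlCol.nodes('//*') AS n(x)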
 

Ref Link.


Export XML column data to a file.xml

April 13, 2015

There are times when you want to export XML data stored in a Table's columns to .xml files for some purpose.

Here is a small hands-on script that uses the BCP command with the queryout option to export data from SQL Server to the file system:

USE [AdventureWorks2012]
GO

-- Save XML records to a file (note: xp_cmdshell must be enabled on the server):
DECLARE @fileName VARCHAR(100)
DECLARE @sqlStr VARCHAR(1000)
DECLARE @sqlCmd VARCHAR(1000)

-- The target file and the query whose result will be exported:
SET @fileName = 'D:\SQL_Queries\PersonAdditionalContactInfo.xml'
SET @sqlStr = 'SELECT TOP 1 AdditionalContactInfo FROM AdventureWorks2012.Person.Person WHERE AdditionalContactInfo IS NOT NULL'

-- Build the BCP command: -w exports Unicode text, -T uses a trusted connection:
SET @sqlCmd = 'bcp "' + @sqlStr + '" queryout ' + @fileName + ' -w -T'

EXEC xp_cmdshell @sqlCmd
Output:

NULL
Starting copy...
NULL
1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 140    Average : (7.14 rows per sec.)
NULL

If you have to extract the XML stored in all the rows, you can iterate through them one by one using a WHILE loop or a CURSOR and extract the data into separate files. You can give your files unique names dynamically by tweaking the above code, as sketched below.
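
Here is a minimal sketch of that looping approach with a CURSOR, again assuming xp_cmdshell is enabled and the D:\SQL_Queries\ folder exists; file names are made unique with the BusinessEntityID key column:

DECLARE @id INT
DECLARE @cmd VARCHAR(1000)

DECLARE curXml CURSOR FAST_FORWARD FOR
	SELECT BusinessEntityID
	FROM AdventureWorks2012.Person.Person
	WHERE AdditionalContactInfo IS NOT NULL

OPEN curXml
FETCH NEXT FROM curXml INTO @id

WHILE @@FETCH_STATUS = 0
BEGIN
	-- One bcp call per row, with a unique file name derived from the key:
	SET @cmd = 'bcp "SELECT AdditionalContactInfo FROM AdventureWorks2012.Person.Person '
		+ 'WHERE BusinessEntityID = ' + CAST(@id AS VARCHAR(10)) + '" queryout '
		+ 'D:\SQL_Queries\ContactInfo_' + CAST(@id AS VARCHAR(10)) + '.xml -w -T'
	EXEC xp_cmdshell @cmd

	FETCH NEXT FROM curXml INTO @id
END

CLOSE curXml
DEALLOCATE curXml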


Index Usage Stats – Indexes Used, Unused and Updated

April 2, 2015

While working on Performance Optimization and Index Tuning, many Developers end up creating some (or even many) unnecessary Indexes based upon various SQL Queries. Some or many of those Indexes might never be used by the SQL Query Optimizer. But even when they are used, you may have to trade off against your ETL performance going down, as the write operations (INSERT, UPDATE, DELETE) take more time in order to maintain those new Indexes.

So, while creating new Indexes you have to plan very carefully and balance things out, so that your data retrieval is fast and, at the same time, your ETLs are not affected much.

SQL Server provides some DMVs (Dynamic Management Views) and DMFs (Dynamic Management Functions) to get this information from SQL engine.
 

–> To know how the Indexes are getting used, we can query the sys.dm_db_index_usage_stats DMV, which tells us how many times an Index was used for SEEK, SCAN & LOOKUP operations, and when it was last used. Check the Query and its output below:

SELECT 
	DB_NAME(ius.database_id) AS DATABASE_NAME,
	OBJECT_NAME(ius.object_id) AS TABLE_NAME, 
	ids.name AS INDEX_NAME, 
	ius.user_seeks AS SEEK_COUNT, 
	ius.user_scans AS SCAN_COUNT, 
	ius.user_lookups AS LOOKUP_COUNT, 
	ius.user_seeks + ius.user_scans + ius.user_lookups AS TOTAL_USAGE, 
	ius.last_user_seek AS LAST_SEEK_TIME, 
	ius.last_user_scan AS LAST_SCAN_TIME, 
	ius.last_user_lookup AS LAST_LOOKUP_TIME
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS ids
	ON ids.object_id = ius.object_id
	AND ids.index_id = ius.index_id 
WHERE OBJECTPROPERTY(ius.object_id, 'IsUserTable') = 1 
	AND ius.database_id = DB_ID()	-- sys.indexes only covers the current DB
ORDER BY DATABASE_NAME, TOTAL_USAGE

[Image: output of the Index usage query against sys.dm_db_index_usage_stats]
 

–> Now if we want to know the maintenance overhead of the new Indexes we created (every time a related Table is updated, the Indexes are also updated), we can check by using the sys.dm_db_index_operational_stats DMF, which shows how many INSERT, UPDATE & DELETE operations are happening on particular Indexes. Check the Query and its output below:

SELECT 
	DB_NAME(ios.database_id) AS DATABASE_NAME,
	OBJECT_NAME(ios.object_id) AS TABLE_NAME, 
	idx.name AS INDEX_NAME, 
	ios.leaf_insert_count AS INSERT_COUNT, 
	ios.leaf_update_count AS UPDATE_COUNT, 
	ios.leaf_delete_count AS DELETE_COUNT, 
	ios.leaf_insert_count + ios.leaf_update_count + ios.leaf_delete_count AS TOTAL_COUNT 
FROM sys.dm_db_index_operational_stats (DB_ID(), NULL, NULL, NULL) ios	-- current DB only, to match sys.indexes
INNER JOIN sys.indexes AS idx
	ON idx.object_id = ios.object_id
	AND idx.index_id = ios.index_id 
WHERE OBJECTPROPERTY(ios.object_id, 'IsUserTable') = 1
ORDER BY DATABASE_NAME, TOTAL_COUNT

[Image: output of the Index operational stats query against sys.dm_db_index_operational_stats]
 

–> So, by using these two Dynamic Management Views/Functions you can know:

1. The effect (maintenance overhead) of the Indexes you created, and

2. Whether the Indexes are really used or not, so that you can DROP them.
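
For point #2, here is a minimal sketch that lists NonClustered Indexes with no recorded reads since the last service restart; treat them as candidates for review, not for automatic DROPs:

SELECT
	OBJECT_NAME(i.object_id) AS TABLE_NAME,
	i.name AS INDEX_NAME
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
	ON ius.object_id = i.object_id
	AND ius.index_id = i.index_id
	AND ius.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
	AND i.type_desc = 'NONCLUSTERED'
	AND (ius.index_id IS NULL	-- never touched since restart
		OR ius.user_seeks + ius.user_scans + ius.user_lookups = 0)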
 

Please note that these stats could be wrong at times, for several reasons:

1. If the SQL Server (MSSQLSERVER) service is restarted, these counters are initialized to 0.

2. When a Database is detached or is shut down (for example, because AUTO_CLOSE is set to ON), all rows associated with the database are removed.

3. An Index Rebuild resets these counters to 0 (a bug logged on MS Connect, link).