Archive
Reading JSON string with Nested array of elements | SQL Server 2016 – Part 3
In my [previous post] I discussed how to import or read a JSON string and convert it into relational/tabular format, i.e. row/column form.
Today in this post I’ll talk about how to read/parse a JSON string with a nested array of elements, just like XML.
Native JSON support in SQL Server 2016 provides a few functions to read and parse your JSON string into relational format, namely:
– OPENJSON(): a table-valued function that parses JSON text and returns a rowset view of the JSON.
– JSON_Value(): a scalar function that returns a scalar value from JSON text at the specified path.
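As a quick sketch of what each function returns (the sample string here is my own, not from the examples below):

```sql
-- Minimal illustration of OPENJSON() vs JSON_Value() (hypothetical sample string)
DECLARE @j NVARCHAR(200) = N'{"Order": {"ID": 1, "Items": [{"Qty": 5}]}}';

SELECT JSON_VALUE(@j, '$.Order.ID') AS OrderID;   -- scalar value: 1 (returned as NVARCHAR)

SELECT [key], [value], [type]
FROM OPENJSON(@j, '$.Order.Items');               -- rowset view: one row per array element
```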
We will see usage of both the functions in our example below:
Here we have just one nested element, and the OPENJSON() function will get you the child element’s values:
–> Method #1.a. Using OPENJSON() function:
DECLARE @json NVARCHAR(1000)
SELECT @json =
N'{
"OrderHeader": [
{
"OrderID": 100,
"CustomerID": 2000,
"OrderDetail":
{
"ProductID": 2000,
"UnitPrice": 350
}
}
]
}'
SELECT
OrderID,
CustomerID,
[OrderDetail.ProductID] AS ProductID,
[OrderDetail.UnitPrice] AS UnitPrice
FROM OPENJSON (@json, '$.OrderHeader')
WITH (
OrderID INT,
CustomerID INT,
[OrderDetail.ProductID] INT,
[OrderDetail.UnitPrice] INT
) AS Orders
OrderID  CustomerID  ProductID  UnitPrice
100      2000        2000       350
But if you have more than one nested element, the same query will give just one row, with NULL values under the child columns, like this:
–> Method #1.b. In case of multiple child elements:
DECLARE @json NVARCHAR(1000)
SELECT @json =
N'{
"OrderHeader": [
{
"OrderID": 100,
"CustomerID": 2000,
"OrderDetail": [
{
"ProductID": 2000,
"UnitPrice": 350
},
{
"ProductID": 3000,
"UnitPrice": 450
}
]
}
]
}'
SELECT
OrderID,
CustomerID,
[OrderDetail.ProductID] AS ProductID,
[OrderDetail.UnitPrice] AS UnitPrice
FROM OPENJSON (@json, '$.OrderHeader')
WITH (
OrderID INT,
CustomerID INT,
[OrderDetail.ProductID] INT,
[OrderDetail.UnitPrice] INT
) AS Orders
OrderID  CustomerID  ProductID  UnitPrice
100      2000        NULL       NULL
You might be expecting 2 rows with the same OrderID & CustomerID but different ProductID & UnitPrice, right?
Instead you get NULL under the ProductID & UnitPrice columns. This is because OrderDetail now holds an array of child elements (notice the square bracket after the “OrderDetail”: node), so the query cannot find the key on the given path.
In this case you have to use the array positions, with square brackets (“[” and “]”), in your query and call out a separate column for each child element, like below:
DECLARE @json NVARCHAR(1000)
SELECT @json =
N'{
"OrderHeader": [
{
"OrderID": 100,
"CustomerID": 2000,
"OrderDetail": [
{
"ProductID": 2000,
"UnitPrice": 350
},
{
"ProductID": 3000,
"UnitPrice": 450
}
]
}
]
}'
SELECT
OrderID,
CustomerID,
[OrderDetail[0]].ProductID] AS ProductID1,
[OrderDetail[0]].UnitPrice] AS UnitPrice1,
[OrderDetail[1]].ProductID] AS ProductID2,
[OrderDetail[1]].UnitPrice] AS UnitPrice2
FROM OPENJSON (@json, '$.OrderHeader')
WITH (
OrderID INT,
CustomerID INT,
[OrderDetail[0]].ProductID] INT,
[OrderDetail[0]].UnitPrice] INT,
[OrderDetail[1]].ProductID] INT,
[OrderDetail[1]].UnitPrice] INT
) AS Orders
OrderID  CustomerID  ProductID1  UnitPrice1  ProductID2  UnitPrice2
100      2000        2000        350         3000        450
You can also specify the child elements with their full path, by using the dollar sign “$” inside the WITH() clause (instead of escaping the array positions at the column-name level, as above), like below:
–> Method #2. Using OPENJSON() function:
DECLARE @json NVARCHAR(1000)
SELECT @json =
N'{
"OrderHeader": [
{
"OrderID": 100,
"CustomerID": 2000,
"OrderDetail": [
{
"ProductID": 2000,
"UnitPrice": 350
},
{
"ProductID": 3000,
"UnitPrice": 450
}
]
}
]
}'
SELECT
OrderID,
CustomerID,
ProductID1,
UnitPrice1,
ProductID2,
UnitPrice2
FROM OPENJSON (@json, '$.OrderHeader')
WITH (
OrderID INT '$.OrderID',
CustomerID INT '$.CustomerID',
ProductID1 INT '$.OrderDetail[0].ProductID',
UnitPrice1 INT '$.OrderDetail[0].UnitPrice',
ProductID2 INT '$.OrderDetail[1].ProductID',
UnitPrice2 INT '$.OrderDetail[1].UnitPrice'
) AS Orders
OrderID  CustomerID  ProductID1  UnitPrice1  ProductID2  UnitPrice2
100      2000        2000        350         3000        450
Ok, so by using the key path and the array positions we can get the child elements’ values into our query result set with the above 2 methods.
But instead of having them in separate columns, how about pulling them into separate rows? This also makes your query dynamic, as you would not know the number of child elements beforehand, right?
This can be done by CROSS APPLYing the JSON child node with the parent node and using the JSON_Value() function, as shown below:
–> Method #3. Using JSON_Value() with OPENJSON() function:
DECLARE @json NVARCHAR(1000)
SELECT @json =
N'{
"OrderHeader": [
{
"OrderID": 100,
"CustomerID": 2000,
"OrderDetail": [
{
"ProductID": 2000,
"UnitPrice": 350
},
{
"ProductID": 3000,
"UnitPrice": 450
},
{
"ProductID": 4000,
"UnitPrice": 550
}
]
}
]
}'
SELECT
JSON_Value (c.value, '$.OrderID') as OrderID,
JSON_Value (c.value, '$.CustomerID') as CustomerID,
JSON_Value (p.value, '$.ProductID') as ProductID,
JSON_Value (p.value, '$.UnitPrice') as UnitPrice
FROM OPENJSON (@json, '$.OrderHeader') as c
CROSS APPLY OPENJSON (c.value, '$.OrderDetail') as p
OrderID  CustomerID  ProductID  UnitPrice
100      2000        2000       350
100      2000        3000       450
100      2000        4000       550
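As a side note (a sketch of mine on the same sample data, not from the method above), the WITH clause also supports an “AS JSON” option: you can project the nested OrderDetail array as a raw JSON fragment, then CROSS APPLY a second OPENJSON() over it with typed columns, avoiding the generic key/value columns entirely:

```sql
DECLARE @json NVARCHAR(1000) = N'{"OrderHeader": [ { "OrderID": 100, "CustomerID": 2000,
  "OrderDetail": [ {"ProductID": 2000, "UnitPrice": 350}, {"ProductID": 3000, "UnitPrice": 450} ] } ]}';

SELECT h.OrderID, h.CustomerID, d.ProductID, d.UnitPrice
FROM OPENJSON(@json, '$.OrderHeader')
WITH (
    OrderID     INT,
    CustomerID  INT,
    OrderDetail NVARCHAR(MAX) AS JSON   -- keep the nested array as a JSON fragment
) AS h
CROSS APPLY OPENJSON(h.OrderDetail)
WITH (
    ProductID INT,
    UnitPrice INT
) AS d;
```

This keeps the columns strongly typed in both levels and still returns one row per child element.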
Ok, that’s it for today.
In my [next post] I’ll talk about storing JSON string in a table and doing some hands-on with it.
Passing multiple/dynamic values to Stored Procedures & Functions | Part 5 – by passing JSON string
This post is part of the [Passing multiple/dynamic values to Stored Procedures & Functions] series, and as well as the new feature Native JSON support in SQL Server 2016.
Adding the fifth part to this series, we will use a JSON string that contains the set of values and pass it as a JSON parameter variable to the SP. Then inside the SP we will parse this JSON and use those values in our SQL queries, just as we did in previous posts with CSV/XML strings:
USE [AdventureWorks2014]
GO
-- Create an SP with NVARCHAR type parameter for JSON string:
CREATE PROCEDURE uspGetPersonDetailsJSON (
@persons NVARCHAR(MAX)
)
AS
BEGIN
--DECLARE @persons NVARCHAR(MAX)
--SET @persons = '{"root":[{"Name":"Charles"},{"Name":"Jade"},{"Name":"Jim"},{"Name":"Luke"},{"Name":"Ken"}]}'
SELECT Name
INTO #tblPersons
FROM OPENJSON (@persons, '$.root')
WITH (
Name NVARCHAR(100)
)
SELECT
BusinessEntityID,
Title,
FirstName,
MiddleName,
LastName,
ModifiedDate
FROM [Person].[Person] PER
WHERE EXISTS (
SELECT *
FROM #tblPersons tmp
WHERE tmp.Name = PER.FirstName
)
ORDER BY FirstName, LastName
DROP TABLE #tblPersons
END
GO
-- Create JSON string:
DECLARE @json NVARCHAR(1000)
SET @json = N'{
"root": [
{ "Name": "Charles" },
{ "Name": "Jade" },
{ "Name": "Jim" },
{ "Name": "Luke" },
{ "Name": "Ken" }
]
}'
-- Use the JSON string as a parameter while calling the SP:
EXEC uspGetPersonDetailsJSON @json
GO
-- Check the output, objective achieved
-- Final Cleanup
DROP PROCEDURE uspGetPersonDetailsJSON
GO
Thus you can use a JSON string, similar to the way you used an XML string, to pass multiple and a dynamic number of parameters to your Stored Procedures.
As native JSON support is new in SQL Server 2016, this method will only work with SQL Server 2016 and above.
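As a side note (my sketch, not part of the SP above), the temp table can also be skipped by joining directly to OPENJSON(), assuming the same @persons NVARCHAR(MAX) parameter:

```sql
-- Hypothetical leaner variant of the SP body: join Person.Person straight to the parsed JSON
SELECT PER.BusinessEntityID, PER.Title, PER.FirstName, PER.MiddleName, PER.LastName, PER.ModifiedDate
FROM [Person].[Person] PER
INNER JOIN OPENJSON(@persons, '$.root')
    WITH (Name NVARCHAR(100)) AS j
    ON j.Name = PER.FirstName
ORDER BY PER.FirstName, PER.LastName;
```

Whether this outperforms the #Table approach depends on the data volumes, so treat it as an alternative to test, not a rule.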
Import/Read a JSON string and convert it in tabular (row/column) form | SQL Server 2016 – Part 2
In my [previous post] I discussed how to format/convert any query/table data to a string in JSON format. Until the CTP 2 release, JSON support in SQL Server was limited to this feature only.
But now with the CTP 3 release you can also do the reverse: you can read back JSON data and convert it to tabular, i.e. row & column, format.
Let’s check this by taking the same sample data from our previous JSON-Export post.
DECLARE @json NVARCHAR(1000)
SELECT @json = N'{"StudList":
[
{
"ID":1,
"FirstName":"Manoj",
"LastName":"Pandey",
"Class":10,
"Marks":80.5
},
{
"ID":2,
"FirstName":"Saurabh",
"LastName":"Sharma",
"Class":11,
"Marks":82.7
},
{
"ID":3,
"FirstName":"Kanchan",
"LastName":"Pandey",
"Class":10,
"Marks":90.5
},
{
"ID":4,
"FirstName":"Rajesh",
"LastName":"Shah",
"Class":11,
"Marks":70.3
},
{
"ID":5,
"FirstName":"Kunal",
"LastName":"Joshi",
"Class":12,
"Marks":64.7
}
]
}'
SELECT ID, FirstName, LastName, Class, Marks
FROM OPENJSON (@json, '$.StudList')
WITH (
ID INT,
FirstName VARCHAR(255),
LastName VARCHAR(255),
Class INT,
Marks DECIMAL(3,1)
) AS StudList
Well, that was simple, wasn’t it!
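As a defensive sketch (my addition, reusing the same @json variable and WITH schema), the ISJSON() function, also new in SQL Server 2016, lets you validate the string before parsing it:

```sql
-- Guard against malformed input before calling OPENJSON (ISJSON returns 1 for valid JSON)
IF ISJSON(@json) = 1
    SELECT ID, FirstName, LastName, Class, Marks
    FROM OPENJSON (@json, '$.StudList')
    WITH (
        ID INT,
        FirstName VARCHAR(255),
        LastName VARCHAR(255),
        Class INT,
        Marks DECIMAL(3,1)
    ) AS StudList;
ELSE
    THROW 50001, 'Input is not valid JSON.', 1;
```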
In my [next post] I’ll talk about Nested Properties in a JSON string with single and multiple elements.
Code Review checklist for SQL Server “Stored Procedures” & T-SQL Scripts
While working on an official Database or DW/BI project, we want to make sure that the code written by team members is correct, meets standards, is commented, and is properly indented. But everyone has a different coding style; some are good, and some are not so good, I won’t say bad :). So, to make sure everyone’s code is correct, we/you introduce a process of Code Reviews.
Here in this post I’ll share some best coding practices and good-to-have items that will help others better understand your code. The checklist below features some of them:
- Before sending any item for Code Review, please make sure to do the following:
– Compile the Stored Procedures, Views, and T-SQL scripts.
– For an SSRS report, check the Stored Procedure by running it from the report, and verify you are getting the desired output and it meets the requirement.
– If it’s an Integration SP, run it for Full Load and Delta Load, and see if you are getting the expected functionality.
- Before doing check-ins, ensure you do a Get Latest and resolve all the conflicts.
- Use TRY-CATCH to handle exceptions, and the THROW statement to re-raise the error in the CATCH block.
- Use “;THROW” instead of the RAISERROR() syntax. THROW is the approach Microsoft recommends for new development (SQL Server 2012+), and it reduces the error-handling code from multiple lines to just one.
- Do not use TRANSACTIONs while creating reporting or read-only Stored Procedures. Use them only in Integration SPs, and only when there is more than one DML statement.
- Do not include #Table processing within TRANSACTIONs, and try to keep transactions as small as possible.
- Do not JOIN/APPLY UDFs with other tables; first extract and store the records in #Tables, then JOIN the #Table with the other tables. Otherwise it may kill performance.
- Use “WHERE EXISTS (SELECT * FROM <table>)” while evaluating Sub-Queries, and use “IN()” only while dealing with constant values.
- Prefer COALESCE() over ISNULL(): it is ANSI standard and accepts more than two arguments.
- Never use ISNULL() on the BIT data type, as it only accepts 1/0/NULL as the possible values.
- While comparing data against a NULLABLE column, ensure COALESCE is always used to handle records which in fact have NULLs.
- Don’t use ISNUMERIC(), use TRY_PARSE() instead.
- Do not mix DDL & DML statements in a Stored Procedure; place the DDLs first, at the top section of the SP, then the DMLs in the section below. Otherwise this leads to recompilation of the SP on every execution, and the optimal cached plan is not used.
- For temporary storage, use @Table variables for a small set of records, and #Tables for larger sets of data. For example, use @Table variables to store parameter values.
- Always use SET NOCOUNT ON statement in the beginning of SP.
- Use CTEs instead of Sub-Queries for better code manageability and readability.
- Handle divide-by-zero errors for the columns/variables occurring in the denominator (e.g. by wrapping the denominator in NULLIF()).
- In the WHERE clause, put OR conditions within parentheses; otherwise they will conflict with other AND conditions and may behave differently.
- In the WHERE clause, use columns from a LEFT JOINed table very carefully, as such predicates may effectively convert the LEFT JOIN into an INNER JOIN or behave differently.
- Do not include DROP & CREATE statements in the CREATE table script; they should be added only in the deployment scripts.
- While designing new tables, ensure most of the columns are created as NOT NULLs.
- Please do not use “SELECT *” while creating Views or in SPs; fetch only the required columns. Even if all columns are required, list them out explicitly instead of using “SELECT *”.
- Do not use “USE <db_name>” and “GO” keywords in DB object (SP, View, Table, etc.) scripts; they should only be used in custom pre/post deployment SQL scripts.
- While CREATING or ALTERING any DB object, check its existence first by using an IF-ELSE condition, like:
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE_NAME' AND TABLE_SCHEMA = 'SCHEMA_NAME')
BEGIN
CREATE TABLE dbo.TABLE_NAME (…)
END
- While sending Code Reviews for SSRS RDLs, also mention the URL where the reports are deployed and the changes made, with steps to populate the required parameters and the expected output.
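Several of the points above (SET NOCOUNT ON, TRY-CATCH with “;THROW”, and divide-by-zero handling via NULLIF) can be sketched together in one small procedure; the procedure, table, and column names here are hypothetical:

```sql
CREATE PROCEDURE dbo.uspExampleChecklist   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;   -- suppress "rows affected" messages, per the checklist

    BEGIN TRY
        -- NULLIF() turns a zero denominator into NULL instead of raising error 8134
        SELECT OrderID,
               TotalAmount / NULLIF(Quantity, 0) AS UnitPrice
        FROM dbo.Orders;   -- hypothetical table
    END TRY
    BEGIN CATCH
        -- a parameterless ;THROW in the CATCH block re-raises the original error
        -- with its original number, severity, and line number intact
        ;THROW
    END CATCH
END
GO
```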
I would also like to hear from you about the things you check in your Code Review process.
SQL Tips – Check count of records in Temporary Tables from other sessions
Recently, one Stored Procedure was giving issues on our Production system. The SP had some temporary (#) tables that were getting populated in sequence, but the final result was not as expected. As temp tables cannot be accessed across sessions, you cannot see what’s going on. Also, as it’s a PROD system, I could not dissect or change the SP to see what was happening inside.
Debugging an SP on a DEV machine is simple: I can add some extra checks after the temp-table population code to get the record count, by using “SELECT @@ROWCOUNT”, and thus see the issues.
But this cannot be done on PROD, as you cannot alter the SPs there. So, to see which temp table is being populated, you can use the query below. It will also show the record count while a temp table is still mid-population.
SELECT T.NAME AS TABLE_NAME, S.ROW_COUNT
FROM TEMPDB.sys.dm_db_partition_stats AS S
INNER JOIN TEMPDB.sys.tables AS T
    ON S.OBJECT_ID = T.OBJECT_ID
WHERE S.INDEX_ID < 2
  AND T.NAME LIKE '%#TempTable%'; -- Give #Table name here