Archive

Posts Tagged ‘TRY_PARSE’

Bug with ISNUMERIC() – MSDN TSQL forum

July 8, 2015 2 comments

–> Question:

Has anyone seen this SQL bug before?

If you use the ISNUMERIC() function with 12345 and replace the digit 3 with an “e” (and no other letter), SQL still thinks it’s numeric.

IF ISNUMERIC('12e45') = 1
	PRINT 'Is Numeric'
ELSE
	PRINT 'No'

 

–> My Answer:

Yes, “12e45” is indeed numeric; notice the small ‘e’, the exponent symbol used in scientific notation to represent large numbers.
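A quick check (my own addition, not part of the original answer) shows that SQL Server happily converts this string to a float, which is exactly why ISNUMERIC() reports it as numeric:

```sql
-- '12e45' is scientific notation: 12 x 10^45, so the cast succeeds.
SELECT CAST('12e45' AS float) AS Result; -- 1.2E+46
```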

But yes, the ISNUMERIC() function does return incorrect results for some values, like:

SELECT
     ISNUMERIC('123') as '123'
    ,ISNUMERIC('.') as '.' --Period
    ,ISNUMERIC(',') as ',' --Comma

SELECT
     ISNUMERIC('123') as '123'
    ,ISNUMERIC('-') as '-'
    ,ISNUMERIC('+') as '+'
    ,ISNUMERIC('$') as '$'
    ,ISNUMERIC('\') as '\'

… gives you 1 for all these non-numeric values.

From SQL Server 2012 onwards, you should use TRY_PARSE() instead of ISNUMERIC().

Check my blog post on how to use this – Use new TRY_PARSE() instead of ISNUMERIC() | SQL Server 2012
 

–> Answer by CELKO:

The use of E or e for floating-point numbers started in the 1950s with FORTRAN and later in Algol. In that standard notation, “12e45” reads as “twelve times ten to the forty-fifth power”. The only difference is that some languages have to start with a digit and some can start with the E.

Please look up how floating-point numbers are displayed. This has been true for over 60 years! The E or e stands for “exponent” in the notation that goes back to FORTRAN I.
 

–> Answer by Erland Sommarskog:

In addition to the other posts: on SQL 2012 or later you can use try_convert to check whether a string is convertible to a specific data type:

DECLARE @str varchar(50) = '12e45'; -- sample value to test (my addition; the original left @str undeclared)

SELECT 
CASE WHEN try_convert(int, @str) IS NOT NULL 
       THEN 'Converts' 
       ELSE 'Not convertible' 
END AS Result

 

–> Answer by Dan Guzman:

ISNUMERIC isn’t particularly useful for the common use case of checking for a string containing only the digits 0 through 9. Consider a CASE expression instead to get the desired behavior:

CASE WHEN @value LIKE '%[^0-9]%' OR @value = '' THEN 0 ELSE 1 END
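Dan’s expression is a fragment; a minimal runnable sketch (with @value declared as a hypothetical test variable) could look like this:

```sql
DECLARE @value varchar(20) = '12e45'; -- sample input, not from the original post

-- Returns 1 only when the string is non-empty and contains the digits 0-9 exclusively.
SELECT CASE WHEN @value LIKE '%[^0-9]%' OR @value = '' THEN 0 ELSE 1 END AS DigitsOnly; -- 0
```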

 

Ref link.



Use new TRY_PARSE() instead of ISNUMERIC() | SQL Server 2012

September 20, 2013 6 comments

I was working on a legacy T-SQL script originally written on SQL Server 2005 and ran into some unexpected behavior. The code was returning extra records; when I dug into it I found that the ISNUMERIC() function applied to a column was passing through values like “,” (comma) and “.” (period).

–> So, to validate this I executed the following code and found that ISNUMERIC() does indeed pass these characters as numbers:

SELECT 
	 ISNUMERIC('123') as '123'
	,ISNUMERIC('.') as '.' --Period
	,ISNUMERIC(',') as ',' --Comma

The ISNUMERIC() function returns “1” when the input expression evaluates to a valid numeric data type; otherwise it returns “0”. But the above query returns “1” for all 3 column values, validating them all as numeric, which is not correct for the last 2 columns.

–> And not only this, the ISNUMERIC() function treats a few more characters as numeric, like: – (minus), + (plus), $ (dollar), \ (backslash); check this:

SELECT 
	 ISNUMERIC('123') as '123'
	,ISNUMERIC('abc') as 'abc'
	,ISNUMERIC('-') as '-'
	,ISNUMERIC('+') as '+'
	,ISNUMERIC('$') as '$'
	,ISNUMERIC('.') as '.'
	,ISNUMERIC(',') as ','
	,ISNUMERIC('\') as '\'

This returns “0” for the second column containing “abc”, and “1” for the rest of the column values.

So, you need to be very careful while using the ISNUMERIC() function and must account for all these possible values in your T-SQL logic.

– OR –

Switch to new TRY_PARSE() function introduced in SQL Server 2012.
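Applied to the legacy-script scenario described above, the fix is a sketch along these lines (the table and column names here are hypothetical):

```sql
-- Replace the old ISNUMERIC(col1) = 1 predicate with TRY_PARSE():
-- rows holding values like ',' or '.' no longer slip through.
SELECT col1
FROM dbo.LegacyTable
WHERE TRY_PARSE(col1 AS int) IS NOT NULL;
```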

–> The TRY_PARSE() function returns the result of an expression translated to the requested data type, or NULL if the cast fails. Let’s check how TRY_PARSE() validates the above character values as numeric:

SELECT 
	 TRY_PARSE('123' as int) as '123'
	,TRY_PARSE('abc' as int) as 'abc'
	,TRY_PARSE('-' as int) as '-'
	,TRY_PARSE('+' as int) as '+'
	,TRY_PARSE('$' as int) as '$'
	,TRY_PARSE('.' as int) as '.'
	,TRY_PARSE(',' as int) as ','
	,TRY_PARSE('\' as int) as '\'

So, the above query gives the expected results: it validates the first column value as numeric, treats the rest as invalid, and returns NULL for them.

–> TRY_PARSE() can be used with other NUMERIC & DATETIME data-types for validation, like:

SELECT 
	 TRY_PARSE('123' as int) as '123'
	,TRY_PARSE('123.0' as float) as '123.0'
	,TRY_PARSE('123.1' as decimal(4,1)) as '123.1'
	,TRY_PARSE('$123.55' as money) as '$123.55'
	,TRY_PARSE('2013/09/20' as datetime) as '2013/09/20'

… will give expected results 🙂
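TRY_PARSE() also accepts an optional USING clause for parsing culture-specific formats; a small illustration (the input strings here are my own examples):

```sql
SELECT 
	 TRY_PARSE('€345,98' as money USING 'de-DE') as 'de-DE money'     -- comma as decimal separator
	,TRY_PARSE('13/12/2010' as datetime2 USING 'en-GB') as 'en-GB date' -- day/month/year order
```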

SQL Server 2012 (DENALI) TSQL – New features | Built-in Functions

July 27, 2011 6 comments

Here is a list of some new functions introduced in SQL Server Denali; some examples are taken from MSDN, and here is the link for more details.
 

Logical Functions:

-- CHOOSE()
SELECT CHOOSE ( 3, 'Manager', 'Director', 'Developer', 'Tester' ) AS Result; -- Developer
SELECT CHOOSE ( 2, 'Manoj', 'Saurabh', 'Andy', 'Dave' ) AS Result; -- Saurabh
GO

-- IIF()
DECLARE @a int = 45;
DECLARE @b int = 40;
SELECT IIF ( @a > @b, 'TRUE', 'FALSE' ) AS Result; -- TRUE
GO

Note: The IIF() function is important for developers migrating from Access to SQL Server. The introduction of these 2 functions will make their lives easier.

Check the video Tutorial on CHOOSE() and IIF() functions.
 

Conversion Functions:

-- PARSE()
SELECT PARSE ('Monday, 13 December 2010' AS datetime2 USING 'en-US') AS Result;
SELECT PARSE ('€345,98' AS money USING 'de-DE') AS Result;
GO

-- TRY_CAST()
SELECT
CASE WHEN TRY_CAST ('test' as float) IS NULL
	THEN 'Cast failed'
	ELSE 'Cast succeeded'
END AS Result;
GO

-- TRY_CONVERT()
SELECT
CASE WHEN TRY_CONVERT (float,'test') IS NULL
THEN 'Cast failed'
ELSE 'Cast succeeded'
END AS Result;
GO

-- TRY_PARSE()
SELECT
CASE WHEN TRY_PARSE ('Aragorn' AS decimal USING 'sr-Latn-CS') IS NULL
THEN 'Parse failed'
ELSE 'Parse succeeded'
END
AS Result;
GO

Note: The TRY_CAST(), TRY_CONVERT() & TRY_PARSE() functions add exception handling to the existing CAST(), CONVERT() and new PARSE() functions respectively. Now developers can rely on these 3 functions while converting values, defaulting them to something else on any exception.
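The “defaulting” pattern mentioned in the note can be sketched as follows (the fallback value -1 is just an illustration):

```sql
-- ISNULL() supplies a default whenever TRY_CONVERT() fails and returns NULL.
SELECT ISNULL(TRY_CONVERT(int, 'abc'), -1) AS Result; -- -1
SELECT ISNULL(TRY_CONVERT(int, '123'), -1) AS Result; -- 123
GO
```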

 

Analytic Functions:

USE AdventureWorks2008R2;
GO

-- FIRST_VALUE() and LAST_VALUE()
SELECT 
	Name, ListPrice, ProductSubcategoryID,
	FIRST_VALUE (Name) OVER (PARTITION BY ProductSubcategoryID ORDER BY ListPrice ASC) AS LeastExpensive,
	LAST_VALUE (Name) OVER (PARTITION BY ProductSubcategoryID ORDER BY ListPrice ASC
		ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS MostExpensive -- explicit frame; the default frame stops at the current row
FROM Production.Product;

-- LAG()
SELECT 
	BusinessEntityID, YEAR(QuotaDate) AS SalesYear, SalesQuota AS CurrentQuota,
	LAG (SalesQuota, 1, 0) OVER (ORDER BY YEAR(QuotaDate)) AS PreviousQuota
FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID = 275 AND YEAR(QuotaDate) IN (2005, 2006);

-- LEAD()
SELECT 
	TerritoryName, BusinessEntityID, SalesYTD,
	LEAD (SalesYTD, 1, 0) OVER (PARTITION BY TerritoryName ORDER BY SalesYTD DESC) AS NextRepSales
FROM Sales.vSalesPerson
WHERE TerritoryName IN (N'Northwest', N'Canada')
ORDER BY TerritoryName;

Note: The LAG() and LEAD() functions already exist in Oracle, and their introduction in MS SQL Server’s Denali version is very welcome to T-SQL developers. The addition of FIRST_VALUE(), LAST_VALUE(), PERCENTILE_XXX(), etc. will add real value for programmers in T-SQL development, just like the ranking functions and CUBE/ROLLUP did in 2005/2008.

 

Some other new functions introduced are:
– CUME_DIST()
– PERCENTILE_CONT()
– PERCENTILE_DISC()
– PERCENT_RANK()
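A brief sketch of these distribution functions, assuming the same AdventureWorks2008R2 sample database used in the earlier examples:

```sql
-- CUME_DIST() and PERCENT_RANK() rank each price within its subcategory;
-- PERCENTILE_CONT(0.5) computes the median list price per subcategory.
SELECT 
	Name, ListPrice, ProductSubcategoryID,
	CUME_DIST() OVER (PARTITION BY ProductSubcategoryID ORDER BY ListPrice) AS CumeDist,
	PERCENT_RANK() OVER (PARTITION BY ProductSubcategoryID ORDER BY ListPrice) AS PctRank,
	PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ListPrice)
		OVER (PARTITION BY ProductSubcategoryID) AS MedianPrice
FROM Production.Product;
GO
```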
 

More details here on MS BOL: http://technet.microsoft.com/en-us/library/hh213234%28SQL.110%29.aspx