Archive

Posts Tagged ‘MSDN TSQL forum’

String concatenation resulting in NULL values – MSDN TSQL forum

April 9, 2015 Leave a comment

–> Question:

The Payments field in the below UPDATE statement always ends up NULL, even though the statement executes successfully.

What is wrong here?

DECLARE @Temp nvarchar(200), @sql nvarchar(max),@Details_InvNo nvarchar(4000), @Details_Amt nvarchar(100),@ChkNo nvarchar(100)
set @Details_InvNo = 'IZM5728'
set @ChkNo = '1-33333333'
set @Details_Amt = '100'
SET @Temp = ',[' + convert(nvarchar(max),@ChkNo) + '=' + cast(@Details_Amt as nvarchar(50)) + ']' 

UPDATE INVOICE 
SET Payments = convert(nvarchar(max),Payments) + cast(@Temp as nvarchar(max)), 
AmtDue = 100, 
AmtPaid=0 
WHERE InvNo = @Details_InvNo
GO

 

–>My Response:

As the Payments column is NULL and you are concatenating it with another string, the result will also be NULL. So just wrap the Payments column in ISNULL() after the “=” operator, like this:

DECLARE @Temp nvarchar(200), 
	@sql nvarchar(max), 
	@Details_InvNo nvarchar(4000), 
	@Details_Amt nvarchar(100),
	@ChkNo nvarchar(100)

set @Details_InvNo = 'IZM5728'
set @ChkNo = '1-33333333'
set @Details_Amt = '100'
SET @Temp = ',[' + convert(nvarchar(max),@ChkNo) + '=' + cast(@Details_Amt as nvarchar(50)) + ']' 

UPDATE INVOICE 
SET Payments = ISNULL(convert(nvarchar(max),Payments),'') + cast(@Temp as nvarchar(max)), 
	AmtDue = 100, 
	AmtPaid=0   
WHERE InvNo = @Details_InvNo
go

Or use the CONCAT() function (SQL Server 2012 and later), which treats NULL arguments as empty strings, like this:

SET Payments = CONCAT(convert(nvarchar(max),Payments), cast(@Temp as nvarchar(max))), 
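As a quick sketch of why the original statement produced NULL (the literal value here is illustrative, and the `+` behavior assumes the default CONCAT_NULL_YIELDS_NULL setting):

```sql
-- NULL + string yields NULL with the '+' operator; ISNULL() and CONCAT() both avoid this
DECLARE @Payments nvarchar(max) = NULL;

SELECT @Payments + N',[1-33333333=100]'              AS PlusOperator, -- NULL
       ISNULL(@Payments, N'') + N',[1-33333333=100]' AS WithIsNull,   -- ',[1-33333333=100]'
       CONCAT(@Payments, N',[1-33333333=100]')       AS WithConcat;   -- ',[1-33333333=100]'
```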

 

Ref Link.


DELETE from Parent and all its child tables with Foreign Key (no ON DELETE CASCADE) – MSDN TSQL forum

April 8, 2015 Leave a comment

–> Question:

I have a database with many tables. I would like to delete all rows with practiceID = 55 from all the parent tables, along with all corresponding rows from their child tables. The tables are linked with foreign key constraints (but there is no ON DELETE CASCADE).
Can anyone tell me how to write generalized code for removing rows from both the parent and child tables?

The query should pick the parent tables one by one, delete the rows with practiceID = 55, and delete all corresponding rows from their child tables.
 

–> My Answer:

In this case you need to build the query by using the system catalog tables/views, link.

Check this query below:

DECLARE @Deletestr VARCHAR(max) = ''

-- in sys.foreign_keys, parent_object_id is the referencing (child) table
-- and referenced_object_id is the referenced (parent) table
SELECT @Deletestr = @Deletestr +
'DELETE FROM ' + QUOTENAME(object_name(fk.parent_object_id)) + ' WHERE practiceID = 55;
'
from sys.foreign_keys fk
where object_name(fk.referenced_object_id) = 'SalesOrderHeader'

PRINT (@Deletestr)

EXEC (@Deletestr)

The query above pulls the linked-table (FK) info from the system catalogs and builds the dynamic DELETE statements from its output. Delete from the child tables first, and finish with a DELETE against the parent table itself, so that the foreign key checks do not fail.
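As a hedged sketch of the same idea (the parent table name dbo.Practice is illustrative, and it assumes every child table has its own practiceID column to filter on), you can schema-qualify the generated table names and append the parent DELETE at the end:

```sql
DECLARE @ParentTable sysname = N'dbo.Practice',   -- hypothetical parent table
        @Sql nvarchar(max) = N'';

-- one DELETE per child (referencing) table
SELECT @Sql = @Sql
    + N'DELETE FROM ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
    + N' WHERE practiceID = 55;' + CHAR(10)
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(@ParentTable);

-- delete from the parent table last, once no child rows reference it
SET @Sql += N'DELETE FROM ' + @ParentTable + N' WHERE practiceID = 55;';

PRINT @Sql;                 -- review the generated statements first
-- EXEC sp_executesql @Sql; -- then execute
```

Note this handles a single level of the FK hierarchy; if the child tables have children of their own, you would need to walk the hierarchy recursively and delete bottom-up.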
 

Ref Link.


Standardize TRY CATCH block across all Stored Procedures – MSDN TSQL forum

March 31, 2015 Leave a comment

–> Question:

I am trying to develop the best TRY-CATCH block to standardize across all of our SPs. Below is what I have developed so far, which gives me properly formatted output; I would like to add more details to it or remove unwanted items from it.

Thanks for all your help!

I am using 2008 R2 Version, would be migrating to 2012 soon.

BEGIN CATCH
	IF @@TRANCOUNT > 0
	BEGIN
		ROLLBACK TRANSACTION
	END

	DECLARE @ErrorMsg varchar(2100);

	SET @ErrorMsg = ' DBName        = ' + DB_NAME() + CHAR(10)
		+ ' ErrorMessage  = ' + LTRIM(CONVERT(VARCHAR(2047), LEFT(ERROR_MESSAGE(), 2044))) + CHAR(10)
		+ ' Procedure     = ' + OBJECT_NAME(@@PROCID) + CHAR(10)
		+ ' ErrorNumber   = ' + LTRIM(CONVERT(VARCHAR(9), ERROR_NUMBER())) + CHAR(10)
		+ ' ErrorState    = ' + LTRIM(CONVERT(VARCHAR(3), ERROR_STATE())) + CHAR(10)
		+ ' ErrorSeverity = ' + LTRIM(CONVERT(VARCHAR(3), ERROR_SEVERITY())) + CHAR(10)
		+ ' LineNumber    = ' + LTRIM(CONVERT(VARCHAR(9), ERROR_LINE())) + CHAR(10)
		+ ' ErrorDT       = ' + CONVERT(VARCHAR(23), GETDATE(), 121) + CHAR(10)
		+ ' ErrorBy       = ' + SUSER_SNAME();

	RAISERROR(@ErrorMsg, 16, 1);
	RETURN -1
END CATCH

 

–> My Answer:

The way you’ve drafted your TRY-CATCH looks good; however, you can also consider a few things:

1. Use THROW instead of RAISERROR, as recommended by Microsoft; this also removes the (at times) unnecessary declaration and assignment of error-related variables, link. Note that THROW requires SQL Server 2012 or later, so it will apply after your migration.

2. With THROW you re-throw the original error with its original error number, state and line; RAISERROR only raises a new error with your message text, link.

3. Make use of SET XACT_ABORT ON when you want the entire transaction rolled back automatically on error, link.

Check all these and other TRY-CATCH details in my blog posts.
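Putting those points together, a minimal template sketch could look like this (THROW requires SQL Server 2012+, so on 2008 R2 the RAISERROR version above still applies; the procedure name is illustrative):

```sql
CREATE PROCEDURE dbo.usp_SampleTemplate
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;      -- auto-rollback the whole transaction on most errors

    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... actual work goes here ...
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;              -- re-throws the original error number/state/line to the caller
    END CATCH
END
GO
```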
 

Ref Link.


Dynamic SQL usage, when to use? – MSDN TSQL forum

March 4, 2015 Leave a comment

–> Question:

Can someone tell when or where to use Dynamic SQL ( exec sp_executesql and exec commands) over normal SQL?
 

–> My Answer:

Dynamic SQL queries should generally be avoided, and one should put more thought and time into writing static (non-dynamic) SQL statements.

But there are some rare scenarios or requirements where you do need dynamic SQL, like performing multiple DBA activities in one go:

– Enabling/Disabling multiple Jobs at once, link.

– Creating Dynamic PIVOT where there could be multiple columns based on multiple rows, link.

– Use sp_executesql instead of EXEC (SQL string), so that values are passed as parameters and SQL-Injection is avoided.

– Check this article by Erland on Dynamic SQL.

– Check this blog post for EXEC (SQL statement) vs sp_executeSQL.
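For the sp_executesql point, a minimal sketch (the table and column names are illustrative):

```sql
DECLARE @City nvarchar(50) = N'London',
        @Sql  nvarchar(max);

-- the user input never becomes part of the SQL text, so it cannot inject code
SET @Sql = N'SELECT CustomerID FROM dbo.Customers WHERE City = @City;';

EXEC sp_executesql @Sql,
     N'@City nvarchar(50)',  -- parameter definition
     @City = @City;          -- value passed as data, not concatenated as text
```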
 

Ref Link.


What is the use of SQL server table compression – MSDN TSQL forum

September 30, 2014 Leave a comment

–> Question:

1. What is the use of the table compression?

2. When do we need to compress the table ?

3. If i compress the table what will be the performance impact?

 

–> My Answer:

1. What is the use of the table compression?
Reduction in disk space, as well as reduced I/O and a smaller memory footprint, at the cost of some extra CPU.

2. When do we need to compress the table?
If your table contains a lot of wide character-string (char/varchar) columns, you will get the best compression. You can go for ROW- or PAGE-level compression; PAGE level gives the highest compression, since it includes ROW compression plus prefix and dictionary compression.

3. If I compress the table, what will be the performance impact?
Not a negative one, in most cases. You will usually get better performance because of the reduction in I/O, since I/O is often the main bottleneck; modern CPUs can compress/decompress the data in far less time than the saved I/O would have taken.

 

–> Another Answer by Erland:

There are two levels of compression: ROW and PAGE. ROW is basically a different storage format, which gives a more compact format for most data profiles. Not the least if you have plenty of fixed-length columns that are often NULL. ROW compression has a fairly low CPU overhead. Since compression means that the data takes up less space, this means a scan of the full table will be faster. This is why you may gain performance.

Page compression is more aggressive and uses a dictionary. You can make a bigger gain in disk space, but the CPU overhead is fairly considerable, so it is less likely that you will make a net gain.

To find out how your system is affected, there is a stored procedure, of which I don’t recall the name right now, which can give you estimated space savings. But if you also want to see the performance effects, you will need to run a test with your workload.

There is also columnstore, which is also a form of compression, and which for data warehouses can give enormous performance gains.
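The stored procedure Erland mentions is sp_estimate_data_compression_savings; a usage sketch (the schema and table names are illustrative):

```sql
-- estimate space savings before actually compressing
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'Sales',
     @object_name      = N'SalesOrderHeader',
     @index_id         = NULL,     -- all indexes
     @partition_number = NULL,     -- all partitions
     @data_compression = N'PAGE';  -- or N'ROW'

-- apply compression once the estimate looks worthwhile
ALTER TABLE Sales.SalesOrderHeader REBUILD WITH (DATA_COMPRESSION = PAGE);
```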
 

Ref Link.