SQLState = S0002 NativeError = 208 [SQL Server Native Client 11.0][SQL Server] Invalid object name 'xyzTable' | Unable to resolve column level collations
While working with the BCP command to export data from a source table to a data file, I ran into some errors.
I was trying to execute the following BCP command from SSMS by using the xp_cmdshell extended stored procedure:
DECLARE @str VARCHAR(1000)

SET @str = 'bcp "SELECT * FROM dbo.xyzTable" '
	+ 'queryout "D:\BulkOut\xyzTable.dat" '
	+ '-S "sourceServer.database.windows.net" '
	+ '-U "saUserName" -P "saPassword" -f "D:\Bulk\xyzTable.fmt" '

EXEC xp_cmdshell @str
GO
… and encountered the following errors:
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid object name ‘xyzTable’.
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Server Native Client 11.0]Unable to resolve column level collations
BCP copy out failed
After searching a bit on the Internet I found that I had missed providing the database name, hence the error "Invalid object name". What a silly mistake :)
So, there are two ways you can provide the database name while querying your table:
1. By prefixing the table in the SELECT statement with the database name.
2. By passing the database name to BCP with the -d flag.
Let's check both options here:
-- 1. By providing the DB name with the SELECT statement:
DECLARE @str VARCHAR(1000)

SET @str = 'bcp "SELECT * FROM DBName.dbo.xyzTable" '	-- DB name in SELECT stmt
	+ 'queryout "D:\BulkOut\xyzTable.dat" '
	+ '-S "sourceServer.database.windows.net" '
	+ '-U "saUserName" -P "saPassword" -f "D:\Bulk\xyzTable.fmt" '

EXEC xp_cmdshell @str
GO

-- 2. By providing the DB name as an argument with the BCP statement:
DECLARE @str VARCHAR(1000)

SET @str = 'bcp "SELECT * FROM dbo.xyzTable" '
	+ 'queryout "D:\BulkOut\xyzTable.dat" -d "DBName" '	-- DB name with -d flag
	+ '-S "sourceServer.database.windows.net" '
	+ '-U "saUserName" -P "saPassword" -f "D:\Bulk\xyzTable.fmt" '

EXEC xp_cmdshell @str
GO
As I saw a lot of people facing the same error without a proper answer, I thought I would blog it here.
Thanks for reading!!!
Msg 15281, Level 16, State 1, Procedure xp_cmdshell, Line 1 SQL Server blocked access because this component is turned off
I was trying to execute a Windows shell command from SSMS by using the xp_cmdshell extended stored procedure and encountered the error shown below:
EXEC xp_cmdshell 'DIR *.exe'
Msg 15281, Level 16, State 1, Procedure xp_cmdshell, Line 1
SQL Server blocked access to procedure ‘sys.xp_cmdshell’ of component ‘xp_cmdshell’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘xp_cmdshell’ by using sp_configure.
Even if you are a sysadmin, this option is turned off by default on a new SQL Server installation, so you need to turn it on manually. Let's see how:
-- To show the advanced options:
EXEC sp_configure 'show advanced options', 1
GO

-- To update the currently configured value for advanced options:
RECONFIGURE
GO

-- To enable the xp_cmdshell option:
EXEC sp_configure 'xp_cmdshell', 1
GO

-- To update the currently configured value for the xp_cmdshell option:
RECONFIGURE
GO
–> If you want to suppress the messages returned by the xp_cmdshell extended stored procedure, use the optional no_output parameter:
exec xp_cmdshell 'DIR *.exe', no_output
Please note: if xp_cmdshell is executed within a batch and returns an error, the complete batch will fail. In earlier versions of SQL Server the batch would continue executing.
If the user is not a member of the sysadmin role and wants to use this extended SP, then a proxy account credential needs to be created by using sp_xp_cmdshell_proxy_account. Check this KB article for the same: link.
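For reference, the proxy credential setup looks roughly like this. The Windows account, its password, and the SQL user name below are placeholder values, not real ones:

```sql
-- Create the proxy credential used when non-sysadmin callers run xp_cmdshell
-- ('DOMAIN\WinUser' and its password are placeholders):
EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\WinUser', 'WinUserPassword';
GO

-- In the master database, grant the (placeholder) user permission to run xp_cmdshell:
GRANT EXECUTE ON xp_cmdshell TO [SqlUserName];
GO

-- To remove the proxy credential later:
EXEC sp_xp_cmdshell_proxy_account NULL;
GO
```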
There are times when you want to export XML data stored in a table's columns to .xml files for some purpose.
Here is a small hands-on script that uses the BCP command with the queryout option to export data from SQL Server to the file system:
USE [AdventureWorks2012]
GO

-- Save XML records to a file:
DECLARE @fileName VARCHAR(50)
DECLARE @sqlStr VARCHAR(1000)
DECLARE @sqlCmd VARCHAR(1000)

SET @fileName = 'D:\SQL_Queries\PersonAdditionalContactInfo.xml'
SET @sqlStr = 'SELECT TOP 1 AdditionalContactInfo FROM AdventureWorks2012.Person.Person WHERE AdditionalContactInfo IS NOT NULL'
SET @sqlCmd = 'bcp "' + @sqlStr + '" queryout ' + @fileName + ' -w -T'

EXEC xp_cmdshell @sqlCmd
Output:
NULL
Starting copy...
NULL
1 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 140 Average : (7.14 rows per sec.)
NULL
If you have to extract the XML stored in every row, you can iterate through the rows one by one by using a WHILE loop or a CURSOR and extract the data into separate files. You can provide unique names to your files dynamically by tweaking the above code.
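A rough sketch of that loop, assuming the table's BusinessEntityID key is used to build a unique file name per row (adjust the path, table, and key column for your case):

```sql
USE [AdventureWorks2012]
GO

DECLARE @id INT
DECLARE @fileName VARCHAR(100)
DECLARE @sqlStr VARCHAR(1000)
DECLARE @sqlCmd VARCHAR(1000)

-- Start with the lowest key among rows that actually contain XML:
SELECT @id = MIN(BusinessEntityID)
FROM Person.Person
WHERE AdditionalContactInfo IS NOT NULL

WHILE @id IS NOT NULL
BEGIN
	-- Build a unique file name per row from the key value:
	SET @fileName = 'D:\SQL_Queries\ContactInfo_' + CAST(@id AS VARCHAR(10)) + '.xml'
	SET @sqlStr = 'SELECT AdditionalContactInfo FROM AdventureWorks2012.Person.Person WHERE BusinessEntityID = ' + CAST(@id AS VARCHAR(10))
	SET @sqlCmd = 'bcp "' + @sqlStr + '" queryout "' + @fileName + '" -w -T'

	EXEC xp_cmdshell @sqlCmd

	-- Move to the next qualifying row:
	SELECT @id = MIN(BusinessEntityID)
	FROM Person.Person
	WHERE AdditionalContactInfo IS NOT NULL
		AND BusinessEntityID > @id
END
```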
While working on performance optimization and index tuning, many developers end up creating some (or even many) unnecessary indexes based upon various SQL queries. Some or many of those indexes might not be used at all by the SQL query optimizer. And even if they are used, you may have to trade off ETL performance, as write operations (INSERT, UPDATE, DELETE) take more time because those new indexes must also be updated.
So, while creating new indexes you have to plan very carefully and balance things out, so that your data retrieval is fast and, at the same time, your ETLs are not affected much.
SQL Server provides some DMVs (Dynamic Management Views) and DMFs (Dynamic Management Functions) to get this information from SQL engine.
–> To know how the indexes are getting used, we can use the sys.dm_db_index_usage_stats DMV, which tells us how many times an index was used for SEEK, SCAN & LOOKUP operations. Check the query below:
SELECT DB_NAME(ius.database_id) AS DATABASE_NAME,
	OBJECT_NAME(ius.object_id) AS TABLE_NAME,
	ids.name AS INDEX_NAME,
	ius.user_seeks AS SEEK_COUNT,
	ius.user_scans AS SCAN_COUNT,
	ius.user_lookups AS LOOKUP_COUNT,
	ius.user_seeks + ius.user_scans + ius.user_lookups AS TOTAL_USAGE,
	ius.last_user_seek AS LAST_SEEK,
	ius.last_user_scan AS LAST_SCAN,
	ius.last_user_lookup AS LAST_LOOKUP
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS ids
	ON ids.object_id = ius.object_id
	AND ids.index_id = ius.index_id
WHERE OBJECTPROPERTY(ius.object_id, 'IsUserTable') = 1
ORDER BY DATABASE_NAME, TOTAL_USAGE
–> Now, if we want to know the maintenance overhead of the new indexes we created (every time a related table is updated, the indexes are also updated), we can check by using the sys.dm_db_index_operational_stats DMF, which shows how many INSERT, UPDATE & DELETE operations are happening on particular indexes. Check the query below:
SELECT DB_NAME(ios.database_id) AS DATABASE_NAME,
	OBJECT_NAME(ios.object_id) AS TABLE_NAME,
	idx.name AS INDEX_NAME,
	ios.leaf_insert_count AS INSERT_COUNT,
	ios.leaf_update_count AS UPDATE_COUNT,
	ios.leaf_delete_count AS DELETE_COUNT,
	ios.leaf_insert_count + ios.leaf_update_count + ios.leaf_delete_count AS TOTAL_COUNT
FROM sys.dm_db_index_operational_stats (NULL, NULL, NULL, NULL) AS ios
INNER JOIN sys.indexes AS idx
	ON idx.object_id = ios.object_id
	AND idx.index_id = ios.index_id
WHERE OBJECTPROPERTY(ios.object_id, 'IsUserTable') = 1
ORDER BY DATABASE_NAME, TOTAL_COUNT
–> So, by using these two Dynamic Management Views/Functions you can know:
1. What the maintenance overhead (effect) of the indexes you created is, and
2. Whether the indexes are really used or not, so that you can DROP them.
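Building on point 2 above, here is a hedged starting point for spotting candidate indexes that show no recorded usage in the current database. The LEFT JOIN keeps indexes that have no usage row at all; treat this only as a shortlist, and verify over a long enough window (see the caveats below) before dropping anything:

```sql
-- Non-clustered indexes on user tables with no recorded seek/scan/lookup:
SELECT OBJECT_NAME(idx.object_id) AS TABLE_NAME,
	idx.name AS INDEX_NAME
FROM sys.indexes AS idx
LEFT JOIN sys.dm_db_index_usage_stats AS ius
	ON ius.object_id = idx.object_id
	AND ius.index_id = idx.index_id
	AND ius.database_id = DB_ID()
WHERE OBJECTPROPERTY(idx.object_id, 'IsUserTable') = 1
	AND idx.type_desc = 'NONCLUSTERED'
	AND (ius.index_id IS NULL
		OR ius.user_seeks + ius.user_scans + ius.user_lookups = 0)
ORDER BY TABLE_NAME, INDEX_NAME
```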
Please note that these stats could be misleading at times, for several reasons:
1. If SQL Server (MSSQLSERVER) service is re-started these counters are initialized to 0.
2. When a Database is detached or is shut down (for example, because AUTO_CLOSE is set to ON), all rows associated with the database are removed.
3. Index Rebuild resets these counters to 0 (Bug logged in MS Connect, link).
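Because of point 1 above, it is worth checking how long the counters have actually been accumulating before trusting them. One quick way is to look at the instance start time:

```sql
-- How long have the usage stats been accumulating since the last service restart?
SELECT sqlserver_start_time,
	DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS DAYS_SINCE_RESTART
FROM sys.dm_os_sys_info
```

If the instance restarted only a few days ago, an "unused" index may simply not have been needed yet.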