Archive

Archive for August, 2017

Migrate from Oracle to SQL Server for free!!!

August 29, 2017

 

Microsoft is encouraging customers who are currently using Oracle, IBM Db2, Sybase, or Netezza databases to migrate to SQL Server 2017 with free licenses, and to get subsidized migration support services, hands-on labs, and instructor-led demos as part of this offer.

SQL Server 2017 is available not only on Windows but also on Linux, macOS, and Docker. It gives you in-memory performance across workloads, mission-critical high availability, end-to-end mobile BI, and in-database advanced analytics, with security features to protect your data at rest and in motion, all on your choice of language and platform.
 

– Migration Offer Brochure Details

– Download the Microsoft SQL Server Migration Assistant v7.8 for Oracle
 


 



14 Sample Interview Questions and Answers for Hadoop Administration Certified Professionals

August 21, 2017

 
Despite plenty of opportunities for Hadoop professionals, landing a good job can seem tedious, because cracking the Hadoop admin interview is a challenge and you must prepare for it. At Koenig Solutions, candidates not only acquire Hadoop administration certification, but also get to prepare for the interview to start a challenging yet lucrative career.
 

–> This article lists 14 important questions and answers commonly asked in Hadoop administration job interviews:
 

Q1. What daemons are required to run a Hadoop cluster?
A. DataNode, NameNode, JobTracker, and TaskTracker are required to run the cluster. (In YARN-based Hadoop 2.x clusters, the ResourceManager and NodeManager take over the roles of the JobTracker and TaskTracker.)
 

Q2. How would you restart a NameNode?
A. The easiest way is to run the stop-all.sh script, which stops all running Hadoop daemons, and then run start-all.sh to bring them (including the NameNode) back up.
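For reference, a minimal sketch of that sequence from the shell (assuming Hadoop's bin/ or sbin/ directory is on the PATH; the single-daemon variant uses hadoop-daemon.sh):

# Stop all Hadoop daemons, including the NameNode
stop-all.sh

# Start them all again
start-all.sh

# Alternatively, restart only the NameNode daemon
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode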
 

Q3. What are different schedulers available in Hadoop?
A. a. COSHH: Considers the workload, cluster and the user heterogeneity for scheduling decisions.
    b. FIFO Scheduler: Doesn’t consider heterogeneity, but orders the job on the basis of arrival time in queue.
    c. Fair Sharing: Defines a pool for each user. Users can use their own pools to execute the job (a sample configuration for enabling it on YARN is shown below).
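For YARN-based clusters (Hadoop 2.x), the Fair Scheduler can be enabled in yarn-site.xml roughly as follows; this is only a sketch, not a complete configuration:

<!-- yarn-site.xml: switch the ResourceManager to the Fair Scheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>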
 

Q4. What Hadoop shell commands can be used to perform copy operation?
A. hadoop fs -copyToLocal
    hadoop fs -put
    hadoop fs -copyFromLocal
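A short usage sketch of these commands (the file paths here are hypothetical):

# Copy a local file into HDFS
hadoop fs -put /tmp/data.csv /user/hadoop/data.csv
hadoop fs -copyFromLocal /tmp/data.csv /user/hadoop/data.csv

# Copy a file from HDFS back to the local file system
hadoop fs -copyToLocal /user/hadoop/data.csv /tmp/data.csv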
 

Q5. What’s the purpose of jps command?
A. It is used to confirm whether the daemons running on the Hadoop cluster are working or not. The output of the jps command reveals the status of the DataNode, NameNode, Secondary NameNode, JobTracker, and TaskTracker.
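For example, on a healthy single-node cluster running MapReduce v1 the output looks roughly like this (the process IDs are illustrative):

$ jps
2481 NameNode
2672 DataNode
2859 SecondaryNameNode
3051 JobTracker
3246 TaskTracker
3410 Jps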
 

Q6. How many NameNodes can be run on single Hadoop cluster?
A. Only one active NameNode per cluster. (Hadoop 2.x can add a standby NameNode for high availability, but only one is active at a time.)
 

Q7. What will happen when the NameNode on the Hadoop cluster is down?
A. Whenever the NameNode is down, the file system goes offline.
 

Q8. Detail crucial hardware considerations when deploying Hadoop in a production environment.
A. Operating System: 64-bit operating system
    Capacity: Larger form factor (3.5”) disks allow more storage and cost less.
    Network: Two TOR switches per rack for better redundancy.
    Storage: To achieve high performance and scalability, it is better to design a Hadoop platform by moving the compute activity to data.
    Memory: System’s memory requirements vary based on the application.
    Computational Capacity: Can be determined by the total count of MapReduce slots existing across nodes within a Hadoop cluster.
 

Q9. Which command will you use to determine if the HDFS (Hadoop Distributed File System) is corrupt?
A. The hadoop fsck (file system check) command.
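A minimal sketch of how it is typically run (the options shown are standard fsck flags):

# Check the whole namespace and report corrupt or missing blocks
hadoop fsck /

# Show per-file block and location details
hadoop fsck / -files -blocks -locations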
 

Q10. How can a Hadoop job be killed?
A. Using the command: hadoop job -kill <jobID>.
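For example (the job ID below is hypothetical):

# List running jobs to find the job ID
hadoop job -list

# Kill the job
hadoop job -kill job_201708211432_0002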
 

Q11. Can files be copied across multiple clusters? If yes, how?
A. Yes, it is possible using distributed copy. The DistCp command can be used for intra- or inter-cluster copying.
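A typical invocation looks like this (the NameNode host names and paths are hypothetical):

# Copy a directory from one cluster's HDFS to another's
hadoop distcp hdfs://namenode1:8020/user/hadoop/source hdfs://namenode2:8020/user/hadoop/dest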
 

Q12. Recommend the best Operating System to run Hadoop.
A. Linux is the best choice, with Ubuntu being a popular distribution. Although Windows can be used, it can lead to several problems.
 

Q13. How often the NameNode should be reformatted?
A. Never, as it can lead to complete data loss. It is formatted only once, in the beginning.
 

Q14. What are Hadoop configuration files and where are they located?
A. Hadoop has 3 different configuration files – mapred-site.xml, hdfs-site.xml, and core-site.xml – which are located in the “conf” sub-directory.
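As a small illustration, core-site.xml typically sets the default file system URI (the host name below is hypothetical):

<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>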
 

Check out – Best Free Resources For Sharpening Your Skills In Hadoop. These are just a few questions, but you may come across several others, depending on your Hadoop training.


 
Author Bio: Michael Warne is a tech blogger and an expert in Hadoop certification training. He has 5 years of experience in the Hadoop industry and has worked as a certified Hadoop professional for top-notch IT companies.


SQL Trivia – How to convert milliseconds to [hh:mm:ss.ms] format

August 1, 2017

 
Today, for a reporting purpose, I needed to convert milliseconds to hh:mm:ss.ms format, i.e.

Hours : Minutes : Seconds . Milliseconds
 

So I created the query below, with some help from the internet of course, so that I have sample code handy for future reference:

DECLARE @MilliSeconds INT
SET @MilliSeconds = 25289706

SELECT CONCAT(
		RIGHT('0' + CAST(@MilliSeconds/(1000*60*60) AS VARCHAR(2)),2), ':',						-- Hrs
		RIGHT('0' + CAST((@MilliSeconds%(1000*60*60))/(1000*60) AS VARCHAR(2)),2), ':',			-- Mins
		RIGHT('0' + CAST(((@MilliSeconds%(1000*60*60))%(1000*60))/1000 AS VARCHAR(2)),2), '.',	-- Secs
		((@MilliSeconds%(1000*60*60))%(1000*60))%1000											-- Milli Secs
) AS [hh:mm:ss.ms]

-- 7 Hrs, 1 minute, 29 seconds and 706 milliseconds
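As an aside, for durations under 24 hours the same result can be obtained more compactly by adding the milliseconds to a zero TIME value and casting the result to a string. This is just an alternative sketch (note that the TIME type wraps around after 24 hours):

DECLARE @MilliSeconds INT = 25289706

-- TIME(3) keeps millisecond precision; the value wraps past 24 hours
SELECT CAST(DATEADD(MILLISECOND, @MilliSeconds, CAST('00:00:00' AS TIME(3))) AS VARCHAR(12)) AS [hh:mm:ss.ms]

-- 07:01:29.706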


Categories: SQL Trivia

Microsoft Azure Data Platform – July (2017) update

August 1, 2017