
Archive for the ‘Hadoop’ Category

Hadoop/HDFS storage types, formats and internals – Text, Parquet, ORC, Avro

December 30, 2018

 
HDFS, or Hadoop Distributed File System, is the distributed file system provided by the Hadoop Big Data platform. The primary objective of HDFS is to store data reliably even in the presence of node failures in the cluster. This is achieved through data replication across different racks in the cluster infrastructure. The files stored in HDFS are then used for further processing by different data-processing engines like Hadoop MapReduce, Hive, Spark, Impala, Pig, etc.
 

–> Here we will talk about the different file formats supported in HDFS:

1. Text (CSV, TSV, JSON): These are flat file formats that can be used as storage formats with the Hadoop system. However, they do not carry an embedded schema, so a developer using any processing engine has to apply the schema while reading these files (see the sketch after this list).
 

2. Parquet: A column-oriented file format in the Hadoop ecosystem. Parquet stores binary data column-wise, which brings the following benefits:
– Less storage and more efficient compression, because values of the same data type sit adjacent to each other and therefore compress better, optimizing storage.
– Better query performance, because only the required columns are read instead of loading entire rows into memory.

The Parquet file format can be used with most tools in the Hadoop ecosystem, such as Hive, Impala, Pig, and Spark.
 

3. ORC: Stands for Optimized Row Columnar, a column-oriented storage format. ORC is primarily used in the Hive world and gives better performance for Hive-based data retrieval because Hive has a vectorized ORC reader. The schema is self-contained in the file as part of the footer. Because of its column-oriented nature, it provides better compression ratios and faster reads.
 

4. Avro: A row-oriented storage format, and a good fit for write-heavy applications. The schema is self-contained within the file in the form of JSON, which helps achieve efficient schema evolution.
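To make the schema point above concrete, here is a minimal PySpark sketch (the paths and column names are made up for illustration): a CSV file needs the reader to supply a schema, whereas Parquet, ORC, and Avro carry the schema with the data.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("file-formats-demo").getOrCreate()

# Text/CSV carries no embedded schema, so the reader must supply one.
schema = StructType([
    StructField("id",   IntegerType(), True),
    StructField("name", StringType(),  True),
    StructField("city", StringType(),  True),
])
df = spark.read.csv("/data/raw/users.csv", schema=schema, header=True)

# Columnar / self-describing formats embed the schema in the file itself.
df.write.mode("overwrite").parquet("/data/out/users_parquet")
df.write.mode("overwrite").orc("/data/out/users_orc")

# Avro needs the spark-avro package on the classpath
# (e.g. --packages org.apache.spark:spark-avro_2.12:<version>).
df.write.mode("overwrite").format("avro").save("/data/out/users_avro")

# Reading back a self-describing format: no schema needs to be supplied.
spark.read.parquet("/data/out/users_parquet").printSchema()
```

This is only a sketch; in practice the paths, schema, and package versions would come from your own environment.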
 

–> Now, let's take a deep dive and look at these file formats through the series of videos below:


 

Author/Speaker Bio: Viresh Kumar is a video blogger and an expert in the Big Data, Hadoop, and cloud world. He has ~14 years of experience in the data platform industry.

 

Book: Hadoop – The Definitive Guide: Storage and Analysis at Internet Scale
 


Prepare for Certification Exam 70-775: Perform Data Engineering on Microsoft Azure HDInsight

April 10, 2018

 
In my [previous post] I collated some basic material about HDInsight to cover the fundamentals and help you get started. You can also check [Microsoft Docs] for HDInsight to learn more and take a deeper dive into the Big Data platform.
 

Microsoft certification exams are a good and approachable way to learn a technology. You can find details about the Exam 70-775 certification on the Microsoft Certification page.

Though that web page provides most of the details of what will be asked in the exam, it lacks study material for each module and the topics under it. So, with this post, I've tried to find and provide study-material links for each of the topics covered in these modules:
 

The exam is divided into 4 Modules:

1. Administer and Provision HDInsight Clusters
2. Implement Big Data Batch Processing Solutions
3. Implement Big Data Interactive Processing Solutions
4. Implement Big Data Real-Time Processing Solutions

 

Module #1. Administer and Provision HDInsight Clusters

1. Deploy HDInsight clusters
    – Create an HDInsight cluster [Portal] [ARM Template] [PowerShell] [.NET SDK] [CLI]
    – Create HDInsight clusters with Hadoop, Spark, Kafka, etc [Link]
    – Select an appropriate cluster type based on workload considerations [Link]
    – Create a cluster in a private virtual network [Link]
    – Create a domain-joined cluster [Link]
    – Create a cluster that has a custom metastore [link]
    – Manage managed disks [with Apache Kafka]
    – Configure vNet peering [Link]

2. Deploy and secure multi-user HDInsight clusters
    – Provision users who have different roles
    – Manage users, groups & permissions [Ambari] [PowerShell] [Apache Ranger]
    – Configure Kerberos [Link]
    – Configure service accounts
    – Implement SSH [Connecting] [Tunneling]
    – Restrict access to data [Link]

3. Ingest data for batch and interactive processing
    – Ingest data from cloud or on-premises data; store data in Azure Data Lake
    – Store data in Azure Blob Storage
    – Perform routine small writes on a continuous basis using Azure CLI tools
    – Ingest data in Apache Hive and Apache Spark by using Apache Sqoop, Azure Data Factory (ADF), AzCopy, and AdlCopy (see the sketch after this list)
    – Ingest data from an on-premises Hadoop cluster
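As a small, hedged illustration of the storage side of ingestion with Spark on HDInsight: the sketch below reads a CSV landed in Azure Blob Storage and writes it to Azure Data Lake Store. The storage account, container, and paths are placeholders, and the cluster is assumed to already have access configured to both stores.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

# Azure Blob Storage attached to the cluster is addressed with the wasbs:// scheme.
blob_path = "wasbs://mycontainer@mystorageacct.blob.core.windows.net/landing/events.csv"
raw = spark.read.csv(blob_path, header=True, inferSchema=True)

# Azure Data Lake Store (Gen1) is addressed with the adl:// scheme.
adls_path = "adl://mydatalake.azuredatalakestore.net/curated/events_parquet"
raw.write.mode("overwrite").parquet(adls_path)
```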

4. Configure HDInsight clusters
    – Manage metastore upgrades
    – View and edit Ambari configuration groups
    – View and change service configurations through Ambari
    – Access logs written to Azure Table storage
    – Enable heap dumps for Hadoop services
    – Manage HDInsight configuration, use HDInsight .NET SDK, and PowerShell
    – Perform cluster-level debugging
    – Stop and start services through Ambari
    – Manage Ambari alerts and metrics

5. Manage and debug HDInsight jobs
    – Describe YARN architecture and operation
    – Examine YARN jobs through ResourceManager UI and review running applications
    – Use YARN CLI to kill jobs
    – Find logs for different types of jobs
    – Debug Hadoop and Spark jobs
    – Use Azure Operations Management Suite (OMS) to monitor and manage alerts, and perform predictive actions
 

Module #2. Implement Big Data Batch Processing Solutions

1. Implement batch solutions with Hive and Apache Pig
    – Define external Hive tables; load data into a Hive table (see the sketch after this list)
    – Use partitioning and bucketing to improve Hive performance
    – Use semi-structured files such as XML and JSON with Hive
    – Join tables with Hive using shuffle joins and broadcast joins
    – Invoke Hive UDFs with Java and Python; design scripts with Pig
    – Identify query bottlenecks using the Hive query graph
    – Identify the appropriate storage format, such as Apache Parquet, ORC, Text, and JSON
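A minimal sketch of the external-table and partitioning topics above, issued as HiveQL through PySpark's spark.sql; the table, columns, and location are hypothetical.

```python
from pyspark.sql import SparkSession

# enableHiveSupport() makes Spark use the Hive metastore, so the same
# HiveQL could also be run from Beeline or the Ambari Hive view.
spark = (SparkSession.builder
         .appName("hive-batch-demo")
         .enableHiveSupport()
         .getOrCreate())

# External table: Hive tracks the schema, the data stays where it already is.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
        user_id STRING,
        url     STRING,
        ts      TIMESTAMP
    )
    PARTITIONED BY (log_date STRING)
    STORED AS ORC
    LOCATION '/data/web_logs'
""")

# Register one partition, then query only that partition (partition pruning).
spark.sql("ALTER TABLE web_logs ADD IF NOT EXISTS PARTITION (log_date='2018-04-01')")
spark.sql("""
    SELECT url, COUNT(*) AS hits
    FROM web_logs
    WHERE log_date = '2018-04-01'
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10
""").show()
```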

2. Design batch ETL solutions for big data with Spark
    – Share resources between Spark applications using YARN queues and preemption
    – Select Spark executor and driver settings for optimal performance; use partitioning and bucketing to improve Spark performance (see the sketch after this list)
    – Connect to external Spark data sources
    – Incorporate custom Python and Scala code in a Spark DataSets program
    – Identify query bottlenecks using the Spark SQL query graph
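A hedged sketch of the executor-sizing and partitioning/bucketing topics above; every number, path, and table name here is an illustrative placeholder, not a recommendation.

```python
from pyspark.sql import SparkSession

# Executor/driver sizing is workload dependent; these values are placeholders.
spark = (SparkSession.builder
         .appName("batch-etl-demo")
         .config("spark.executor.instances", "6")
         .config("spark.executor.cores", "4")
         .config("spark.executor.memory", "8g")
         .config("spark.driver.memory", "4g")
         .enableHiveSupport()
         .getOrCreate())

orders = spark.read.parquet("/data/raw/orders")

# Repartition by the join/aggregation key to spread the shuffle evenly, then
# bucket and sort on write so later joins on customer_id can avoid a full shuffle.
# Assumes a 'curated' database already exists in the metastore.
(orders
    .repartition(48, "customer_id")
    .write
    .mode("overwrite")
    .bucketBy(48, "customer_id")
    .sortBy("customer_id")
    .saveAsTable("curated.orders_bucketed"))
```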

3. Operationalize Hadoop and Spark
    – Create and customize a cluster by using ADF
    – Attach storage to a cluster and run an ADF activity
    – Choose between bring-your-own and on-demand clusters
    – Use Apache Oozie with HDInsight
    – Choose between Oozie and ADF
    – Share metastore and storage accounts between a Hive cluster and a Spark cluster to enable the same table across the cluster types
    – Select an appropriate storage type for a data pipeline, such as Blob storage, Azure Data Lake, and local Hadoop Distributed File System (HDFS)
 

Module #3. Implement Big Data Interactive Processing Solutions

1. Implement interactive queries for big data with Spark SQL
    – Execute queries using Spark SQL
    – Cache Spark DataFrames for iterative queries
    – Save Spark DataFrames as Parquet files (see the sketch after this list)
    – Connect BI tools to Spark clusters
    – Optimize join types such as broadcast versus merge joins
    – Manage Spark Thrift server and change the YARN resources allocation
    – Identify use cases for different storage types for interactive queries
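A small sketch of the caching and broadcast-join topics above; the table names and output path are made up.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("interactive-sql-demo")
         .enableHiveSupport()
         .getOrCreate())

sales = spark.table("curated.sales")    # large fact table (hypothetical)
stores = spark.table("curated.stores")  # small dimension table (hypothetical)

# Cache the fact table once so repeated interactive queries reuse the in-memory copy.
sales.cache()

# Hint Spark to broadcast the small side instead of shuffling both tables.
joined = sales.join(broadcast(stores), "store_id")
joined.groupBy("store_name").sum("amount").show()

# Persist the result in a columnar format for downstream BI tools.
joined.write.mode("overwrite").parquet("/data/marts/sales_by_store")
```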

2. Perform exploratory data analysis by using Spark SQL
    – Use Jupyter and Apache Zeppelin for visualization and developing tidy Spark DataFrames for modeling
    – Use Spark SQL’s two-table joins to merge DataFrames and cache results
    – Save tidied Spark DataFrames to performant format for reading and analysis (Apache Parquet)
    – Manage interactive Livy sessions and their resources

3. Implement interactive queries for big data with Interactive Hive
    – Enable Hive LLAP through Hive settings
    – Manage and configure memory allocation for Hive LLAP jobs
    – Connect BI tools to Interactive Hive clusters

4. Perform exploratory data analysis by using Hive
    – Perform interactive querying and visualization
    – Use Ambari Views
    – Use HiveQL
    – Parse CSV files with Hive
    – Use ORC versus Text for caching
    – Use internal and external tables in Hive
    – Use Zeppelin to visualize data

5. Perform interactive processing by using Apache Phoenix on HBase
    – Use Phoenix in HDInsight
    – Use Phoenix Grammar for queries
    – Configure transactions, user-defined functions, and secondary indexes
    – Identify and optimize Phoenix performance
    – Select between Hive, Spark, and Phoenix on HBase for interactive processing
    – Identify when to share metastore between a Hive cluster and a Spark cluster
 

Module #4. Implement Big Data Real-Time Processing Solutions

1. Create Spark streaming applications using DStream API
    – Define DStreams and compare them to Resilient Distributed Datasets (RDDs)
    – Start and stop streaming applications
    – Transform DStreams (flatMap, reduceByKey, updateStateByKey); see the sketch after this list
    – Persist long-term data stores in HBase and SQL
    – Persist Long Term Data Azure Data Lake and Azure Blob Storage
    – Stream data from Apache Kafka or Event Hub
    – Visualize streaming data in a PowerBI real-time dashboard
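The sketch below strings several of the DStream topics above together: flatMap, reduceByKey, and a running total via updateStateByKey. The socket source, port, and checkpoint path are placeholders; on HDInsight the source would typically be Kafka or Event Hubs.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="dstream-demo")
ssc = StreamingContext(sc, batchDuration=10)   # 10-second micro-batches
ssc.checkpoint("/tmp/dstream-checkpoint")      # required for stateful operations

lines = ssc.socketTextStream("localhost", 9999)  # placeholder source

def update_total(new_counts, running_total):
    # Add this batch's counts to the running total kept in state.
    return sum(new_counts) + (running_total or 0)

word_counts = (lines.flatMap(lambda line: line.split())
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b)
                    .updateStateByKey(update_total))

word_counts.pprint()

ssc.start()
ssc.awaitTermination()   # stop later with ssc.stop()
```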

2. Create Spark structured streaming applications
    – Use the DataFrame and Dataset APIs to create streaming DataFrames and Datasets (see the sketch after this list)
    – Create Window Operations on Event Time
    – Define Window Transformations for Stateful and Stateless Operations
    – Stream Window Functions, Reduce by Key, and Window to Summarize Streaming Data
    – Persist Long Term Data HBase and SQL
    – Persist Long Term Data Azure Data Lake and Azure Blob Storage
    – Stream data from Kafka or Event Hub
    – Visualize streaming data in a PowerBI real-time dashboard
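A sketch of a Structured Streaming query that windows by event time with a watermark. The Kafka broker, topic, and JSON schema are assumptions, and the job would need the spark-sql-kafka connector package on its classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("structured-streaming-demo").getOrCreate()

# Hypothetical event schema for the JSON payload in each Kafka message.
event_schema = StructType([
    StructField("device_id", StringType(), True),
    StructField("event_time", TimestampType(), True),
])

# Requires the spark-sql-kafka connector
# (e.g. --packages org.apache.spark:spark-sql-kafka-0-10_2.11:<version>).
raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
            .option("subscribe", "device-events")               # placeholder topic
            .load())

events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

# Tumbling 5-minute window on event time; the watermark bounds how much state is kept.
counts = (events
          .withWatermark("event_time", "10 minutes")
          .groupBy(window(col("event_time"), "5 minutes"), col("device_id"))
          .count())

query = (counts.writeStream
               .outputMode("update")
               .format("console")
               .start())
query.awaitTermination()
```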

3. Develop big data real-time processing solutions with Apache Storm
    – Create Storm clusters for real-time jobs
    – Persist Long Term Data HBase and SQL
    – Persist Long Term Data Azure Data Lake and Azure Blob Storage
    – Stream data from Kafka or Event Hub
    – Configure event windows in Storm
    – Visualize streaming data in a PowerBI real-time dashboard
    – Define Storm topologies and describe Storm Computation Graph Architecture
    – Create Storm streams and conduct streaming joins
    – Run Storm topologies in local mode for testing
    – Configure Storm applications (Workers, Debug mode)
    – Conduct Stream groupings to broadcast tuples across components
    – Debug and monitor Storm jobs

4. Build solutions that use Kafka
    – Create Spark and Storm clusters in the virtual network
    – Manage partitions
    – Configure MirrorMaker
    – Start and stop services through Ambari
    – Manage topics

5. Build solutions that use HBase
    – Identify HBase use cases in HDInsight
    – Use HBase Shell to create updates and drop HBase tables
    – Monitor an HBase cluster
    – Optimize the performance of an HBase cluster
    – Identify use cases for using Phoenix for analytics of real-time data
    – Implement replication in HBase


An introduction to Azure HDInsight – Microsoft’s Big-Data/Hadoop solution on Azure

March 3, 2018

 
The Microsoft Azure portal and documentation have all the details on HDInsight and are very vast. In this post I've simply curated the main and important points for myself and others to get started with HDInsight.
 

Azure HDInsight is a standard Apache Hadoop distribution offered as a managed service on Microsoft Azure. It is based on the Hortonworks Data Platform (HDP) and provisioned as clusters on Azure. The clusters can be created on your choice of Windows or Linux servers.
 

What HDInsight offers:

1. Provides an end-to-end SLA on all your production workloads.
2. Enables you to scale workloads up or down anytime and only pay for what you use.
3. Protects and secures your data to meet government and compliance requirements.
4. Provides Log Analytics to monitor your clusters.
5. Globally available in multiple regions.
6. Provides various productivity tools for development.
 

HDInsight enables a broad range of scenarios, such as processing and analyzing Big Data, batch processing, in-memory processing, ETL, data warehousing, machine learning, IoT, and more, by using a broad spectrum of open-source frameworks like Hadoop, Spark, Kafka, HBase, Hive, Storm, and R Server.


 

HDInsight Cluster Types:

1. Hadoop: A simple Map-Reduce programming model to process and analyze batch data in parallel. [Apache Hadoop]

2. Spark: An open-source, parallel-processing framework that supports in-memory processing to boost the performance of big-data analysis applications. [Apache Spark]

3. HBase: A NoSQL database built on Hadoop that provides random access and strong consistency for large amounts of unstructured and semi-structured data. [Apache HBase]

4. R Server: A server for hosting and managing parallel, distributed R processes. It provides data scientists, statisticians, and R programmers with on-demand access to scalable, distributed methods of analytics on HDInsight.

5. Storm: A distributed, real-time computation system for processing large streams of data fast. [Apache Storm]

6. Interactive Query (Hive LLAP, i.e. Live Long and Process): In-memory caching for interactive and faster Hive queries. [Apache Hive]

7. Kafka: An open-source platform that’s used for building streaming data pipelines and applications. Kafka also provides message-queue functionality that allows you to publish and subscribe to data streams. [Apache Kafka]
 

Other Components available with HDInsight:

Ambari, Avro, HCatalog, Mahout, MapReduce, YARN, Phoenix, Pig, Sqoop, Tez, Oozie, and ZooKeeper.
 

Storage options in HDInsight:

1. Azure Blob Store

2. Azure Data Lake Store

Azure Data Lake Store vs Azure Blob Storage
 

Role Based Security:

– Owner: Lets you manage everything.
– Contributor: Lets you manage everything except access to resources.
– Reader: Lets you view everything, but not make changes.
– User Access Administrator: Lets you manage user access to Azure resources.

 

HDInsight security:

[Overview and more details on Microsoft Docs]


 


What is the Lambda Architecture, and what does Azure offer with its new Cosmos DB?

February 16, 2018

 
Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch processing and stream processing methods, and minimizing the latency involved in querying big data.

It is a generic, scalable, and fault-tolerant data-processing architecture that addresses both batch-latency and speed-latency scenarios with big data and MapReduce.

–> The system consists of three layers: the Batch Layer, the Speed Layer, and the Serving Layer

1. All data is pushed into both the Batch layer and Speed layer.

2. The Batch layer has a master dataset (immutable, append-only set of raw data) and pre-computes the batch views.

3. The Serving layer has Batch views for fast queries.

4. The Speed Layer compensates for processing time (to the serving layer) and deals with recent data only.

5. All queries can be answered by merging results from the Batch views and the Real-time views, or by querying them individually (see the sketch below).
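As a tiny illustration of step 5, the PySpark sketch below merges a pre-computed batch view with a real-time view. The view paths and column names are hypothetical; in the Azure design described next, the views would live in Cosmos DB collections instead.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import sum as sum_

spark = SparkSession.builder.appName("lambda-merge-demo").getOrCreate()

# Batch view: aggregates pre-computed over the immutable master dataset.
batch_view = spark.read.parquet("/views/batch/page_counts")

# Real-time view: counts over only the recent data not yet covered by a batch run.
realtime_view = spark.read.parquet("/views/realtime/page_counts")

# A query against the lambda architecture merges both views.
merged = (batch_view.unionByName(realtime_view)
                    .groupBy("page")
                    .agg(sum_("count").alias("total_count")))

merged.show()
```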
 

Lambda Architecture with Azure:

Azure offers you a combination of the following technologies to accelerate real-time big data analytics:

1. Azure Cosmos DB, a globally distributed and multi-model database service.

2. Apache Spark for Azure HDInsight, a processing framework that runs large-scale data analytics applications.

3. Azure Cosmos DB change feed, which streams new data to the batch layer for HDInsight to process.

4. The Spark to Azure Cosmos DB Connector.

How Azure simplifies the Lambda Architecture:

1. All data is pushed into Azure Cosmos DB for processing.

2. The Batch layer has a master dataset (immutable, append-only set of raw data) stored in Azure Cosmos DB. Using HDI Spark, you can pre-compute your aggregations to be stored in your computed Batch Views.

3. The Serving layer is an Azure Cosmos DB database with collections for the master dataset and computed Batch View for fast queries.

4. The Speed layer compensates for processing time (to the serving layer) and deals with recent data only. It utilizes HDI Spark to read the Azure Cosmos DB change feed. This enables you to persist your data as well as to query and process it concurrently.

5. All queries can be answered by merging results from batch views and real-time views, or pinging them individually.
 

–> For complete details check here in Microsoft Docs: Azure Cosmos DB: Implement a lambda architecture on the Azure platform


Sample 14 Interview Questions and Answers for Hadoop Administration Certified Professional

August 21, 2017

 
Despite plenty of opportunities for Hadoop professionals, getting a good job can still seem difficult, because cracking the Hadoop admin interview is a challenge you must prepare for. At Koenig Solutions, candidates not only acquire Hadoop administration certification, but also get to prepare for the interview to start a challenging yet lucrative career.
 

–> This article lists 14 important questions and answers commonly asked in Hadoop administration job interviews:
 

Q1. What daemons are required to run a Hadoop cluster?
A. DataNode, NameNode, JobTracker, and TaskTracker (these are the classic Hadoop 1.x daemons; on YARN-based clusters, the ResourceManager and NodeManagers take over the roles of JobTracker and TaskTracker).
 

Q2. How would you restart a NameNode?
A. The easiest way is to run the stop-all.sh script to stop all the Hadoop daemons, and then run start-all.sh to bring the NameNode (and the other daemons) back up.
 

Q3. What are different schedulers available in Hadoop?
A. a. COSHH: Considers the workload, cluster and the user heterogeneity for scheduling decisions.
    b. FIFO Scheduler: Doesn't consider heterogeneity, but orders jobs by their arrival time in the queue.
    c. Fair Scheduler (fair sharing): Defines a pool for each user; users execute their jobs in their own pools.
 

Q4. What Hadoop shell commands can be used to perform copy operations?
A. hadoop fs -copyToLocal
    hadoop fs -put
    hadoop fs -copyFromLocal
 

Q5. What's the purpose of the jps command?
A. It is used to check whether the Hadoop daemons on the cluster are running. Its output reveals the status of the DataNode, NameNode, Secondary NameNode, JobTracker, and TaskTracker processes.
 

Q6. How many NameNodes can be run on a single Hadoop cluster?
A. Classically, only one active NameNode per cluster (with HDFS High Availability, a standby NameNode can also be configured to take over on failure).
 

Q7. What will happen when the NameNode on the Hadoop cluster is down?
A. Whenever the NameNode is down, the file system goes offline.
 

Q8. Detail crucial hardware considerations when deploying Hadoop in a production environment.
A. Operating System: 64-bit operating system
    Capacity: Larger form factor (3.5”) disks allow more storage and cost less.
    Network: Two TOR switches per rack for better redundancy.
    Storage: To achieve high performance and scalability, it is better to design a Hadoop platform by moving the compute activity to data.
    Memory: System’s memory requirements vary based on the application.
    Computational Capacity: Can be determined by the total count of MapReduce slots existing across nodes within a Hadoop cluster.
 

Q9. Which command will you use to determine if the HDFS (Hadoop Distributed File System) is corrupt?
A. The hadoop fsck (file system check) command, e.g. hadoop fsck /.
 

Q10. How can a Hadoop job be killed?
A. Using the command: hadoop job -kill <jobID>.
 

Q11. Can files be copied across multiple clusters? If yes, how?
A. Yes, it is possible using distributed copy. The DistCp command can be used for intra- or inter-cluster copying.
 

Q12. Recommend the best operating system to run Hadoop.
A. A Linux distribution such as Ubuntu is the best choice. Although Windows can be used, it can lead to several problems.
 

Q13. How often should the NameNode be reformatted?
A. Never, as reformatting destroys the HDFS metadata and can lead to complete data loss. It is formatted only once, when the cluster is first set up.
 

Q14. What are the Hadoop configuration files and where are they located?
A. Hadoop has three main configuration files (mapred-site.xml, hdfs-site.xml, and core-site.xml), which are located in the conf subdirectory.
 

Check out – Best Free Resources For Sharpening Your Skills In Hadoop. These are just a few questions, but you may come across several others, depending on your Hadoop training.


 
Author Bio: Michael Warne is a tech blogger and an expert in Hadoop certification training. He has 5 years of experience in the Hadoop industry, and has worked as a certified Hadoop professional for top-notch IT companies.