Hadoop Interview Questions and Answers PDF for Experienced, Freshers Download.

How did you debug your Hadoop code?<div dir="ltr" style="text-align: left;" trbidi="on">
There are several ways of doing this, but the most common are:<br />
- Using counters.<br />
- Using the web interface provided by the Hadoop framework.</div>
How will you write a custom partitioner for a Hadoop job?<div dir="ltr" style="text-align: left;" trbidi="on">
To have Hadoop use a custom partitioner, you must do at minimum the following three things:<br />
- Create a new class that extends the Partitioner class.<br />
- Override the getPartition method.<br />
- In the wrapper that runs the MapReduce job, either add the custom partitioner to the job programmatically using the setPartitionerClass method, or add it to the job as a config file (if your wrapper reads from a config file or Oozie).</div>
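The three steps above can be sketched as follows. This is a minimal illustration using the org.apache.hadoop.mapreduce API; the class name, key/value types, and routing rule are assumptions, and the Hadoop libraries must be on the classpath:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Steps 1 and 2: extend Partitioner and override getPartition.
// This hypothetical partitioner routes keys by their first character,
// so all keys with the same initial letter go to the same Reducer.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        char first = Character.toLowerCase(key.toString().charAt(0));
        return first % numPartitions;  // char promotes to a non-negative int
    }
}
```

Step 3, done programmatically in the driver, would then be `job.setPartitionerClass(FirstLetterPartitioner.class);`.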
How can you set an arbitrary number of Reducers to be created for a job in Hadoop?<div dir="ltr" style="text-align: left;" trbidi="on">
You can either do it programmatically, using the setNumReduceTasks method of the JobConf class (or Job in the new API), or set it as a configuration setting.</div>
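For example, either of the following sets ten reducers (the value 10 is arbitrary; the jar and class names in the command line are placeholders):

```java
// Programmatically, in the driver:
conf.setNumReduceTasks(10);   // org.apache.hadoop.mapred.JobConf (old API)
job.setNumReduceTasks(10);    // org.apache.hadoop.mapreduce.Job (new API)

// As a configuration setting, e.g. on the command line:
//   hadoop jar myjob.jar MyDriver -D mapred.reduce.tasks=10 in out
```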
How can you set an arbitrary number of mappers to be created for a job in Hadoop?<div dir="ltr" style="text-align: left;" trbidi="on">
You cannot set it directly. The number of mappers is determined by the number of input splits, which in turn depends on the input size and the split/block size; a configured value is only a hint to the framework.</div>
What will a Hadoop job do if you try to run it with an output directory that is already present? Will it<div dir="ltr" style="text-align: left;" trbidi="on">
- Overwrite it<br />
- Warn you and continue<br />
- Throw an exception and exit<br />
The Hadoop job will throw an exception and exit.</div>
Is it possible to have Hadoop job output in multiple directories? If yes, how?<div dir="ltr" style="text-align: left;" trbidi="on">
Yes, by using the MultipleOutputs class.</div>
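A sketch of how this looks with the new-API MultipleOutputs; the named output "textOut" and the subdirectory prefix are illustrative only, and the Hadoop libraries are assumed:

```java
// In the driver: declare a named output for the job.
MultipleOutputs.addNamedOutput(job, "textOut",
        TextOutputFormat.class, Text.class, IntWritable.class);

// In the Reducer:
private MultipleOutputs<Text, IntWritable> mos;

protected void setup(Context context) {
    mos = new MultipleOutputs<>(context);
}

protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    // The trailing path argument places this output under a
    // subdirectory of the job's output directory.
    mos.write("textOut", key, new IntWritable(1), "byletter/part");
}

protected void cleanup(Context context) throws IOException, InterruptedException {
    mos.close();  // flush the named outputs
}
```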
Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to the Hadoop job?<div dir="ltr" style="text-align: left;" trbidi="on">
Yes. The FileInputFormat class provides the addInputPath and addInputPaths methods to add multiple directories as input to a Hadoop job.</div>
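For instance, in the driver (the paths here are made up for illustration):

```java
// Add several input directories one at a time:
FileInputFormat.addInputPath(job, new Path("/data/2015"));
FileInputFormat.addInputPath(job, new Path("/data/2016"));

// Or as a single comma-separated list:
FileInputFormat.addInputPaths(job, "/data/2015,/data/2016");
```

If each directory needs its own InputFormat or Mapper, the MultipleInputs class serves the same purpose with a per-path mapping.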
Have you ever used Counters in Hadoop? Give us an example scenario.<div dir="ltr" style="text-align: left;" trbidi="on">
Anybody who has worked on a real Hadoop project is expected to have used counters. A common scenario is counting bad or malformed input records: the job increments a custom counter each time it skips a record, and the totals appear in the job's status output and web UI.</div>
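The bad-record scenario can be sketched as a Mapper like the following, assuming comma-separated input records with at least three fields (the group and counter names are arbitrary):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RecordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        if (fields.length < 3) {
            // Tally the bad record and skip it; the total shows up
            // in the job counters at the end of the run.
            context.getCounter("DataQuality", "MALFORMED_RECORDS").increment(1);
            return;
        }
        context.write(new Text(fields[0]), new IntWritable(1));
    }
}
```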
What mechanism does the Hadoop framework provide to synchronise changes made in the Distributed Cache during runtime of the application?<div dir="ltr" style="text-align: left;" trbidi="on">
This is a tricky question. There is no such mechanism. The Distributed Cache is, by design, read-only during job execution.</div>
What is the benefit of the Distributed Cache? Why can't we just have the file in HDFS and have the application read it?<div dir="ltr" style="text-align: left;" trbidi="on">
Because the Distributed Cache is much faster for this access pattern. It copies the file to each node once, at the start of the job. If a TaskTracker then runs 10 or 100 Mappers or Reducers, they all use the same local copy. If you instead read the file from HDFS inside the MapReduce job, every Mapper accesses it from HDFS, so a TaskTracker running 100 map tasks reads the file 100 times over the network. HDFS is also not very efficient when used like this.</div>
What is Distributed Cache in Hadoop?<div dir="ltr" style="text-align: left;" trbidi="on">
Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.</div>
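Its usage can be sketched as follows; the file path and the in-memory structure are hypothetical, and the exact calls differ slightly between Hadoop versions:

```java
// In the driver: register the file with the job.
job.addCacheFile(new URI("/lookup/countries.txt"));

// In the Mapper: the framework has already copied the file to the
// local disk of the node, so setup() can open it as a local file.
private final Set<String> lookup = new HashSet<>();

protected void setup(Context context) throws IOException {
    try (BufferedReader reader = new BufferedReader(new FileReader("countries.txt"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            lookup.add(line);  // load the lookup table into memory once per task
        }
    }
}
```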
What is the characteristic of the streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk etc.?<div dir="ltr" style="text-align: left;" trbidi="on">
Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job: both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.</div>
What is Hadoop Streaming?<div dir="ltr" style="text-align: left;" trbidi="on">
Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.</div>
Using the command line in Linux, how will you see all jobs running in the Hadoop cluster, and how will you kill a job?<div dir="ltr" style="text-align: left;" trbidi="on">
hadoop job -list<br />
hadoop job -kill jobID</div>
How does speculative execution work in Hadoop?<div dir="ltr" style="text-align: left;" trbidi="on">
The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.</div>
Hadoop achieves parallelism by dividing tasks across many nodes, so it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?<div dir="ltr" style="text-align: left;" trbidi="on">
Speculative Execution.</div>
Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?<div dir="ltr" style="text-align: left;" trbidi="on">
It will restart the task on some other TaskTracker, and only if the task fails more than four times (the default setting, which can be changed) will it kill the job.</div>
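The retry limit is itself a configuration setting. Shown here with the classic-API property names; the values are the defaults:

```java
conf.setInt("mapred.map.max.attempts", 4);     // per-map-task attempt limit
conf.setInt("mapred.reduce.max.attempts", 4);  // per-reduce-task attempt limit
```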
What is the relationship between Jobs and Tasks in Hadoop?<div dir="ltr" style="text-align: left;" trbidi="on">
One job is broken down into one or many tasks in Hadoop.</div>
What is TaskTracker?<div dir="ltr" style="text-align: left;" trbidi="on">
A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker.</div>
What are some typical functions of the JobTracker?<div dir="ltr" style="text-align: left;" trbidi="on">
The following are some typical tasks of the JobTracker:<br />
- It accepts jobs from clients.<br />
- It talks to the NameNode to determine the location of the data.<br />
- It locates TaskTracker nodes with available slots at or near the data.<br />
- It submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers.</div>
What is JobTracker?<div dir="ltr" style="text-align: left;" trbidi="on">
JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.</div>
What is a Combiner?<div dir="ltr" style="text-align: left;" trbidi="on">
The Combiner is a ‘mini-reduce’ process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.</div>
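In practice the Combiner is often the Reducer class itself, wired in the driver. A sketch, with hypothetical class names; reusing the Reducer this way is only valid when the reduce function is commutative and associative, as in a word count:

```java
job.setMapperClass(WordCountMapper.class);
job.setCombinerClass(WordCountReducer.class);  // mini-reduce on each node's map output
job.setReducerClass(WordCountReducer.class);
```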
If no custom partitioner is defined in Hadoop then how is data partitioned before it is sent to the reducer?<div dir="ltr" style="text-align: left;" trbidi="on">
The default partitioner computes a hash value for the key and assigns the partition based on this result.</div>
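The default partitioner's arithmetic can be reproduced in plain Java for illustration (this is a standalone re-implementation, not the Hadoop class itself): mask off the sign bit of the key's hash so the result is non-negative, then take the remainder modulo the number of reduce tasks.

```java
public class HashPartitionDemo {
    // Same arithmetic as Hadoop's default hash partitioner:
    // clear the sign bit, then mod by the number of reduce tasks.
    static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        for (String k : new String[] {"hadoop", "mapreduce", "hadoop"}) {
            System.out.println(k + " -> partition " + getPartition(k, 4));
        }
    }
}
```

Note that every occurrence of the same key maps to the same partition, which is exactly the guarantee the shuffle relies on.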
After the Map phase finishes, the Hadoop framework does “Partitioning, Shuffle and Sort”. Explain what happens in this phase?<div dir="ltr" style="text-align: left;" trbidi="on">
Partitioning: It is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine for all of its output (key, value) pairs which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.<br />
<br />
Shuffle: After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.<br />
<br />
Sort: Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer.</div>
What is the purpose of RecordReader in Hadoop?<div dir="ltr" style="text-align: left;" trbidi="on">
The InputSplit has defined a slice of work, but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.</div>