Start all the Hadoop services in the following order:
- HDFS 
- MapReduce 
- ZooKeeper 
- HBase 
- Hive Metastore 
- HiveServer2 
- WebHCat 
- Oozie 
- Ganglia 
- Nagios 
Instructions
- Start HDFS
  - Execute this command on the NameNode host machine:
    - su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"
  - Execute this command on the Secondary NameNode host machine:
    - su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode"
  - Execute this command on all DataNodes:
    - su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
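
The HDFS commands above (and the MapReduce daemon commands below) all follow the same `su -l <user> -c "hadoop-daemon.sh ... start <daemon>"` pattern. A minimal sketch of that pattern as a helper function — `build_start_cmd` is illustrative and not part of the stock Hadoop scripts; it only prints the command so you can review it before running it on the right host:

```shell
#!/bin/sh
# Sketch: build (but do not run) the hadoop-daemon.sh start invocation
# used throughout this guide. build_start_cmd is a hypothetical helper.
build_start_cmd() {
    user="$1"    # service account, e.g. hdfs or mapred
    daemon="$2"  # daemon name, e.g. namenode, datanode, tasktracker
    echo "su -l $user -c \"/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start $daemon\""
}

# Print the three HDFS start commands in order:
build_start_cmd hdfs namenode
build_start_cmd hdfs secondarynamenode
build_start_cmd hdfs datanode
```

Running the printed command (rather than echoing it) would require executing it on the matching host as root or via sudo.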
 
- Start MapReduce
  - Execute these commands on the JobTracker host machine:
    - su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker; sleep 25"
    - su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start historyserver"
  - Execute this command on all TaskTrackers:
    - su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"
 
- Start ZooKeeper. On the ZooKeeper host machine, execute the following command:
  - su - zookeeper -c "export ZOOCFGDIR=/etc/zookeeper/conf ; export ZOOCFG=zoo.cfg ; source /etc/zookeeper/conf/zookeeper-env.sh ; /usr/lib/zookeeper/bin/zkServer.sh start"
- Start HBase
  - Execute this command on the HBase Master host machine:
    - su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master"
  - Execute this command on all RegionServers:
    - su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver"
 
- Start Hive Metastore. On the Hive Metastore host machine, execute the following command:
  - su -l hive -c "nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &"
  - where $HIVE_LOG_DIR is the directory where Hive server logs are stored (for example, /var/log/hive).
- Start HiveServer2. On the HiveServer2 host machine, execute the following command:
  - sudo su hive -c "nohup /usr/lib/hive/bin/hiveserver2 -hiveconf hive.metastore.uris=\" \" > $HIVE_LOG_DIR/hiveServer2.out 2> $HIVE_LOG_DIR/hiveServer2.log &"
  - where $HIVE_LOG_DIR is the directory where Hive server logs are stored (for example, /var/log/hive).
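
Both Hive steps share the same launch shape: nohup the server, redirect stdout and stderr to separate files under the log directory, and background it with `&` so the invoking shell is not tied to the process. A sketch of that pattern, with `HIVE_LOG_DIR` pinned to the example value from this guide and the HiveServer2 `-hiveconf` override omitted for brevity; the script only prints the commands:

```shell
#!/bin/sh
# Sketch of the nohup/log-redirect pattern used by the Hive Metastore and
# HiveServer2 steps. HIVE_LOG_DIR is set to the example value from the guide.
HIVE_LOG_DIR=/var/log/hive

# stdout goes to *.out, stderr to *.log; the trailing & backgrounds the server.
metastore_cmd="nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &"
server2_cmd="nohup /usr/lib/hive/bin/hiveserver2 > $HIVE_LOG_DIR/hiveServer2.out 2> $HIVE_LOG_DIR/hiveServer2.log &"

# Print rather than execute; on a real node these run as the hive user
# (wrapped in su -l hive -c "..." as shown above).
echo "$metastore_cmd"
echo "$server2_cmd"
```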
- Start WebHCat. On the WebHCat host machine, execute the following command:
  - su -l hcat -c "/usr/lib/hcatalog/sbin/webhcat_server.sh start"
- Start Oozie. On the Oozie server host machine, execute the following command:
  - sudo su -l oozie -c "cd $OOZIE_LOG_DIR/log; /usr/lib/oozie/bin/oozie-start.sh"
  - where $OOZIE_LOG_DIR is the directory where Oozie log files are stored (for example, /var/log/oozie).
- Start Ganglia
  - Execute this command on the Ganglia server host machine:
    - /etc/init.d/hdp-gmetad start
  - Execute this command on all the nodes in your Hadoop cluster:
    - /etc/init.d/hdp-gmond start
 
- Start Nagios. Execute the following command:
  - service nagios start
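
The full sequence above can be condensed into a checklist. The `print_start_order` helper below is illustrative only: it prints the service order from this guide so you can tick steps off, but on a real cluster each step still means running that section's commands on the appropriate host.

```shell
#!/bin/sh
# Sketch: the start order from this guide as a numbered checklist.
# print_start_order is a hypothetical helper; it prints, it does not start anything.
print_start_order() {
    i=1
    for svc in HDFS MapReduce ZooKeeper HBase "Hive Metastore" HiveServer2 \
               WebHCat Oozie Ganglia Nagios; do
        echo "$i. $svc"
        i=$((i + 1))
    done
}

print_start_order
```

Note the ordering constraints embedded in the list: HDFS before MapReduce (trackers need the filesystem), and ZooKeeper before HBase (RegionServers register in ZooKeeper).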


