Perform steps 1 through 8 on the NameNode host. In an HA NameNode configuration, execute these steps on the primary NameNode, which is the first NameNode configured in hdfs-site.xml.
If the Oozie service is installed in your cluster, list all current jobs.
oozie jobs -oozie http://localhost:11000/oozie -len 100 -filter status=RUNNING
Stop all jobs in a RUNNING or SUSPENDED state on your Oozie server host. For example:
oozie job -oozie <your.oozie.server.host>:11000/oozie -kill <oozie.job.id>
Use the Services view on the Ambari Web UI to stop all services except HDFS and ZooKeeper. Also stop any client programs that access HDFS.
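If you prefer scripting this over the Web UI, Ambari also exposes a REST API for changing service state. The host, port, credentials, cluster name, and service name below are placeholders for illustration, not values from this guide:

```shell
# Hypothetical sketch: stop a service through the Ambari REST API by
# setting its desired state to INSTALLED (stopped). All names below are
# assumptions; substitute your own Ambari host, cluster, and service.
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop service before upgrade"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://ambari.server.host:8080/api/v1/clusters/MyCluster/services/MAPREDUCE"
```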
Finalize any prior HDFS upgrade, if you have not done so already.
su -l <HDFS_USER> -c "hdfs dfsadmin -finalizeUpgrade"
where <HDFS_USER> is the HDFS Service user. For example, hdfs.
Check the namenode directory to ensure that there is no snapshot of any prior HDFS upgrade.
Specifically, examine the $dfs.namenode.name.dir or the $dfs.name.dir directory on the NameNode host. Make sure that only a "current" directory exists, and no "previous" directory, on the NameNode host.
Create the following logs and other files that let you check the integrity of the file system, post-upgrade.
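The snapshot check in the step above can be sketched as a small shell helper; the path in the example call is a hypothetical default, so pass whatever your $dfs.namenode.name.dir resolves to:

```shell
# Hypothetical helper: fail if a "previous" upgrade snapshot exists under
# the NameNode metadata directory.
check_no_previous() {
  local name_dir="$1"
  if [ -d "$name_dir/previous" ]; then
    echo "Found $name_dir/previous: finalize the prior HDFS upgrade first"
    return 1
  fi
  echo "Only current metadata under $name_dir; safe to proceed"
}

# Example (the path is an assumption; use your configured dfs.namenode.name.dir):
check_no_previous /hadoop/hdfs/namenode
```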
su -l <HDFS_USER>
where <HDFS_USER> is the HDFS Service user. For example, hdfs.
Run fsck with the following flags and send the results to a log file. The resulting file contains a complete block map of the file system. You use this log later to confirm the upgrade.
hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log
Optional: Capture the complete namespace of the file system. The following command does a recursive listing of the root file system.
hdfs dfs -ls -R / > dfs-old-lsr-1.log
Create a list of all the DataNodes in the cluster.
hdfs dfsadmin -report > dfs-old-report-1.log
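If you want a quick number to compare after the upgrade, the DataNode count can be pulled out of the saved report. This sketch assumes the "Live datanodes (N):" summary line that hdfs dfsadmin -report prints; adjust the pattern if your version formats the report differently:

```shell
# Hypothetical helper: extract the live DataNode count from a saved
# "hdfs dfsadmin -report" log for pre/post-upgrade comparison.
live_datanodes() {
  sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p' "$1"
}

# Usage: live_datanodes dfs-old-report-1.log
```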
Optional: Copy all unrecoverable data stored in HDFS to a local file system or to a backup instance of HDFS.
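As a sketch of that backup, the HDFS source path and the local destination below are hypothetical placeholders; substitute the paths that hold your critical data:

```shell
# Hypothetical example: copy irreplaceable HDFS data to the local file
# system. /user/critical-data and /backup are placeholders, not paths
# from this guide.
hdfs dfs -copyToLocal /user/critical-data /backup/
# Or copy to a backup HDFS instance with DistCp:
# hadoop distcp hdfs://active-cluster/user/critical-data hdfs://backup-cluster/user/
```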
Save the namespace. You must be the HDFS service user to do this and you must put the cluster in Safe Mode.
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
![[Note]](../common/images/admon/note.png)
Note In an HA NameNode configuration, the hdfs dfsadmin -saveNamespace command saves the checkpoint on the first NameNode specified in the configuration, in dfs.ha.namenodes.[nameservice ID]. You can also use the dfsadmin -fs option to specify which NameNode to connect to. For example, to force a checkpoint on namenode 2:
hdfs dfsadmin -fs hdfs://namenode2-hostname:namenode2-port -saveNamespace
Copy the following checkpoint files into a backup directory. To find the directory, use the Services view in Ambari Web: select the HDFS service, then the Configs tab, and in the NameNode section look up the NameNode Directories property. The directory is on your primary NameNode host.
$dfs.name.dir/current
![[Note]](../common/images/admon/note.png)
Note In an HA NameNode configuration, the location of the checkpoint depends on where the saveNamespace command is sent, as defined in the preceding step.
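The copy itself can be sketched as follows; the source and backup locations in the commented example are assumptions, so substitute the NameNode Directories value you looked up:

```shell
# Hypothetical sketch: copy the NameNode checkpoint into a backup directory.
backup_checkpoint() {
  local name_dir="$1" backup_dir="$2"
  mkdir -p "$backup_dir"
  cp -r "$name_dir/current" "$backup_dir/"
}

# Example with placeholder paths:
# backup_checkpoint /hadoop/hdfs/namenode /backup/namenode-$(date +%Y%m%d)
```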
Store the layoutVersion for the NameNode. Make a copy of the file at $dfs.name.dir/current/VERSION, where $dfs.name.dir is the value of the NameNode Directories configuration parameter. This file will be used later to verify that the layout version is upgraded.
Stop HDFS. Make sure all services in the cluster are completely stopped.
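The layoutVersion capture in the step above can be sketched as a helper that reads the Java-properties-style VERSION file; the path in the commented usage line is a hypothetical default:

```shell
# Hypothetical helper: read layoutVersion from a NameNode VERSION file
# so the value can be compared after the upgrade.
get_layout_version() {
  grep '^layoutVersion=' "$1/current/VERSION" | cut -d= -f2
}

# Usage (placeholder path): get_layout_version /hadoop/hdfs/namenode
```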
On the Hive metastore host, stop the Hive metastore service, if you have not done so already.
![[Note]](../common/images/admon/note.png)
Note Make sure that the Hive metastore database is running. For more information about administering the Hive metastore database, see the Hive Metastore Administrator documentation.
If you are upgrading Hive and Oozie, back up the Hive and Oozie metastore databases on the Hive and Oozie database host machines, respectively.
Optional - Back up the Hive Metastore database.
![[Note]](../common/images/admon/note.png)
Note These instructions are provided for your convenience. Please check your database documentation for the latest backup instructions.
Table 2.1. Hive Metastore Database Backup and Restore
| Database Type | Backup | Restore |
| --- | --- | --- |
| MySQL | `mysqldump $dbname > $outputfilename.sql` For example: `mysqldump hive > /tmp/mydir/backup_hive.sql` | `mysql $dbname < $inputfilename.sql` For example: `mysql hive < /tmp/mydir/backup_hive.sql` |
| Postgres | `sudo -u $username pg_dump $databasename > $outputfilename.sql` For example: `sudo -u postgres pg_dump hive > /tmp/mydir/backup_hive.sql` | `sudo -u $username psql $databasename < $inputfilename.sql` For example: `sudo -u postgres psql hive < /tmp/mydir/backup_hive.sql` |
| Oracle | Connect to the Oracle database using sqlplus, then export the database: `exp username/password@database full=yes file=output_file.dmp` | Import the database: `imp username/password@database file=input_file.dmp` |

Optional - Back up the Oozie Metastore database.
![[Note]](../common/images/admon/note.png)
Note These instructions are provided for your convenience. Please check your database documentation for the latest backup instructions.
Table 2.2. Oozie Metastore Database Backup and Restore
| Database Type | Backup | Restore |
| --- | --- | --- |
| MySQL | `mysqldump $dbname > $outputfilename.sql` For example: `mysqldump oozie > /tmp/mydir/backup_oozie.sql` | `mysql $dbname < $inputfilename.sql` For example: `mysql oozie < /tmp/mydir/backup_oozie.sql` |
| Postgres | `sudo -u $username pg_dump $databasename > $outputfilename.sql` For example: `sudo -u postgres pg_dump oozie > /tmp/mydir/backup_oozie.sql` | `sudo -u $username psql $databasename < $inputfilename.sql` For example: `sudo -u postgres psql oozie < /tmp/mydir/backup_oozie.sql` |
On the Ambari Server host, stop Ambari Server and confirm that it is stopped.
ambari-server stop
ambari-server status
Stop all Ambari Agents. On every host in your cluster known to Ambari:
ambari-agent stop
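If you have many hosts, the agent stop can be scripted over SSH. This is a sketch; hosts.txt is a hypothetical file listing one cluster hostname per line, and it assumes passwordless SSH access to each host:

```shell
# Hypothetical sketch: stop the Ambari Agent on each host read from stdin.
stop_agents() {
  while read -r host; do
    ssh "$host" "ambari-agent stop"
  done
}

# Usage: stop_agents < hosts.txt
```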

