Deploy Hue with an HA Cluster
If you are going to use Hue with an HA cluster, make the following changes to `/etc/hue/conf/hue.ini`:
- Install the Hadoop HttpFS component on the Hue server.

  For RHEL/CentOS/Oracle Linux:

  ```shell
  yum install hadoop-httpfs
  ```

  For SLES:

  ```shell
  zypper install hadoop-httpfs
  ```
- Modify `/etc/hadoop-httpfs/conf/httpfs-env.sh` to add the JDK path. In the file, ensure that `JAVA_HOME` is set:

  ```shell
  export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
  ```
- Configure the HttpFS service script by setting up the symlink in `/etc/init.d`:

  ```shell
  ln -s /usr/hdp/{HDP2.4.x version number}/hadoop-httpfs/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs
  ```
- Modify `/etc/hadoop-httpfs/conf/httpfs-site.xml` to configure HttpFS to talk to the cluster, by confirming that the following properties are correct:

  ```xml
  <property>
    <name>httpfs.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>httpfs.proxyuser.hue.groups</name>
    <value>*</value>
  </property>
  ```
- Start the HttpFS service:

  ```shell
  service hadoop-httpfs start
  ```
- Modify the `core-site.xml` file. On the NameNodes and all the DataNodes, add the following properties to the `$HADOOP_CONF_DIR/core-site.xml` file, where `$HADOOP_CONF_DIR` is the directory for storing the Hadoop configuration files, for example `/etc/hadoop/conf`:

  ```xml
  <property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
  </property>
  ```
- In the `hue.ini` file, under the `[hadoop]` > `[[hdfs_clusters]]` > `[[[default]]]` subsection, use the following variables to configure the cluster:

  | Property | Description | Example |
  |---|---|---|
  | `fs_defaultfs` | NameNode URL using the logical name for the new name service. For reference, this is the `dfs.nameservices` property in `hdfs-site.xml` in your Hadoop configuration. | `hdfs://mycluster` |
  | `webhdfs_url` | URL to the HttpFS server. | `http://c6401.apache.org:14000/webhdfs/v1/` |
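  Taken together, the subsection would look like the following in `hue.ini`. The values shown are the examples from the table above; substitute your own name service and HttpFS host:

  ```ini
  [hadoop]
    [[hdfs_clusters]]
      [[[default]]]
        # Logical name service, from dfs.nameservices in hdfs-site.xml
        fs_defaultfs=hdfs://mycluster
        # URL of the HttpFS server started in the earlier step
        webhdfs_url=http://c6401.apache.org:14000/webhdfs/v1/
  ```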
- Restart Hue for the changes to take effect:

  ```shell
  service hue restart
  ```
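If Hue cannot reach HDFS after the restart, a common cause is a mismatch between `fs_defaultfs` and the `dfs.nameservices` logical name. A minimal sketch for extracting that value from `hdfs-site.xml` for comparison; it assumes the usual layout with `<name>` and `<value>` on their own lines, and writes a sample file here purely for illustration:

```shell
# Sample hdfs-site.xml for illustration; in practice point CONF at
# "$HADOOP_CONF_DIR/hdfs-site.xml" on a NameNode.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
</configuration>
EOF

# Grab the <value> on the line following the dfs.nameservices <name> element.
nameservice=$(grep -A1 '<name>dfs.nameservices</name>' "$CONF" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')

echo "fs_defaultfs should be: hdfs://$nameservice"
rm -f "$CONF"
```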

