
Hadoop 2.X HA Detailed Configuration



Difference between hadoop-daemon.sh and hadoop-daemons.sh

hadoop-daemon.sh only starts or stops a daemon on the local machine.

hadoop-daemons.sh runs the same operation remotely, on every host listed in the slaves file.
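
For example, starting DataNodes (the daemon name here is just an illustration; both scripts live in Hadoop's sbin directory):

# starts a DataNode only on the machine where the command is typed
hadoop-daemon.sh start datanode

# runs the same start over SSH on every host listed in the slaves file
hadoop-daemons.sh start datanode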


1. Start the JournalNodes (JN)

hadoop-daemons.sh start journalnode


hdfs namenode -initializeSharedEdits    // copies the edits log files to the JournalNodes; for a first-time setup this must be run after formatting the NameNode


Open http://hadoop-yarn1:8480 to check whether the JournalNode is working properly.
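
A quick sanity check (assuming the JDK's jps tool is on the PATH) is to look for the JournalNode process on each of the three hosts:

# run on hadoop-yarn1, hadoop-yarn2 and hadoop-yarn3
jps
# the output should contain a line ending in "JournalNode"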


2. Format the NameNode and start the Active NameNode

(1) Format the NameNode on the Active NameNode host

hdfs namenode -format
hdfs namenode -initializeSharedEdits

The JournalNodes are now initialized.


(2) Start the Active NameNode

hadoop-daemon.sh start namenode
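
To confirm the Active NameNode came up (a hedged check, using the addresses configured in the hdfs-site.xml shown later):

jps                                  # should now list a NameNode process
# NameNode web UI: http://hadoop-yarn1:50070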


3. Start the Standby NameNode


(1) Bootstrap the Standby NameNode on the Standby host

Copy the metadata from the Active NameNode over to the Standby NameNode host:

hdfs namenode -bootstrapStandby

(2) Start the Standby NameNode

hadoop-daemon.sh start namenode
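
Both NameNodes are now running. Until the ZooKeeper failover controllers are started in the next step, both usually report the standby state, which can be checked with haadmin (nn1/nn2 as defined in hdfs-site.xml below):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2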


4. Start Automatic Failover

Create a monitoring node (ZNode) such as /hadoop-ha/ns1 in ZooKeeper, then start HDFS:

hdfs zkfc -formatZK
start-dfs.sh
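
To verify the ZNode was created (a hedged check; zkCli.sh ships with ZooKeeper, and the path follows the dfs.nameservices value ns1):

zkCli.sh -server hadoop-yarn1:2181
ls /hadoop-ha        # should list [ns1]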

5. Check the NameNode state

hdfs haadmin -getServiceState nn1
active
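
nn2 can be queried the same way and should report the opposite state while nn1 is active:

hdfs haadmin -getServiceState nn2
standby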

6. Trigger a failover from nn1 to nn2

hdfs haadmin -failover nn1 nn2
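
Automatic failover itself can be exercised by killing the active NameNode process; the ZKFC should then promote the standby within a few seconds (a rough test sketch, the pid placeholder is whatever jps reports for NameNode):

# on the currently active NameNode host (hadoop-yarn1)
jps | grep NameNode                    # note the NameNode pid
kill -9 <NameNode-pid>
# then, from any node
hdfs haadmin -getServiceState nn2      # should now report active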


Configuration file details

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <!-- minutes; 60*24 = 1440, i.e. one day (the value must be a plain number) -->
        <value>1440</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>yuanhai</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-yarn1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-yarn2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-yarn1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-yarn2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>


slaves

hadoop-yarn1
hadoop-yarn2
hadoop-yarn3

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-yarn1</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-yarn1:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-yarn1:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port</description>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>


hadoop-env.sh

export JAVA_HOME=/opt/modules/jdk1.6.0_24




