Hadoop 2.2.0 Cluster Setup

PS: The hadoop-2.2.0 package that Apache provides was compiled on a 32-bit operating system. Because Hadoop depends on some native C++ libraries, installing hadoop-2.2.0 on a 64-bit OS requires recompiling it on a 64-bit system first.

1. Preparation (refer to the pseudo-distributed setup guide):
1.1 Change the Linux hostnames
1.2 Change the IP addresses
1.3 Map hostnames to IP addresses
1.4 Disable the firewall
1.5 Set up passwordless SSH
1.6 Install the JDK and configure environment variables

2. Cluster planning:
PS: In Hadoop 2.0, HDFS is usually run with two NameNodes, one in active state and one in standby state. The active NameNode serves client requests, while the standby NameNode does not; it only keeps its state synchronized with the active NameNode so that it can take over quickly if the active one fails. Hadoop 2.0 officially offers two HDFS HA solutions, NFS and QJM; this guide uses the simpler QJM. In that scheme the active and standby NameNodes share edit-log metadata through a group of JournalNodes, and a write is considered successful once it reaches a majority of them, so an odd number of JournalNodes is normally configured. A ZooKeeper cluster is also set up for ZKFC (DFSZKFailoverController) failover: when the active NameNode goes down, the standby NameNode is automatically promoted to active.
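For orientation, a layout consistent with the hostnames, addresses and configuration used later in this guide would look roughly like the sketch below (the exact placement of daemons is an assumption, since the original plan table is not reproduced here):

hadoop01 (192.168.1.201): NameNode (nn1), DFSZKFailoverController, ResourceManager,
                          DataNode, NodeManager, JournalNode, QuorumPeerMain (ZooKeeper)
hadoop02 (192.168.1.202): NameNode (nn2), DFSZKFailoverController,
                          DataNode, NodeManager, JournalNode, QuorumPeerMain (ZooKeeper)
hadoop03:                 DataNode, NodeManager, JournalNode, QuorumPeerMain (ZooKeeper)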
3. Installation steps:

3.1 Install and configure the ZooKeeper cluster

3.1.1 Extract the tarball
tar -zxvf zookeeper-3.4.5.tar.gz -C /cloud/

3.1.2 Edit the configuration
cd /cloud/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change:
dataDir=/cloud/zookeeper-3.4.5/tmp
Append at the end:
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
Save and exit.
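The resulting zoo.cfg should look roughly like the minimal sketch below; the tickTime/initLimit/syncLimit/clientPort values are the defaults that ship in zoo_sample.cfg, not settings prescribed by this guide:

# zoo.cfg (sketch; only dataDir was changed and the server.* lines added)
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/cloud/zookeeper-3.4.5/tmp
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888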
Then create the tmp directory:
mkdir /cloud/zookeeper-3.4.5/tmp
Create an empty file:
touch /cloud/zookeeper-3.4.5/tmp/myid
Finally, write the server ID into that file:
echo 1 > /cloud/zookeeper-3.4.5/tmp/myid

3.1.3 Copy the configured ZooKeeper to the other nodes (first create a /cloud directory on hadoop02 and hadoop03: mkdir /cloud)
scp -r /cloud/zookeeper-3.4.5/ hadoop02:/cloud/
scp -r /cloud/zookeeper-3.4.5/ hadoop03:/cloud/
Note: update the contents of /cloud/zookeeper-3.4.5/tmp/myid on hadoop02 and hadoop03 accordingly:
on hadoop02: echo 2 > /cloud/zookeeper-3.4.5/tmp/myid
on hadoop03: echo 3 > /cloud/zookeeper-3.4.5/tmp/myid
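Since passwordless SSH from hadoop01 is already in place (step 1.5), the same can be done without logging in to each node; a small convenience sketch, equivalent to the two echo commands above:

ssh hadoop02 'echo 2 > /cloud/zookeeper-3.4.5/tmp/myid'
ssh hadoop03 'echo 3 > /cloud/zookeeper-3.4.5/tmp/myid'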
3.2 Install and configure the Hadoop cluster

3.2.1 Extract the tarball
tar -zxvf hadoop-2.2.0.tar.gz -C /cloud/

3.2.2 Configure HDFS (in Hadoop 2.0 all configuration files live under $HADOOP_HOME/etc/hadoop)
Add Hadoop to the environment variables:
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_45
export HADOOP_HOME=/cloud/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
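The new variables only apply to shells started after the edit; to pick them up in the current shell, the profile can be re-read (a small convenience step, assuming a bash login shell):

source /etc/profile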
cd /cloud/hadoop-2.2.0/etc/hadoop

3.2.2.1 Edit hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_45

3.2.2.2 Edit core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://ns1</value></property>
  <property><name>hadoop.tmp.dir</name><value>/cloud/hadoop-2.2.0/tmp</value></property>
  <property><name>ha.zookeeper.quorum</name><value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value></property>
</configuration>
3.2.2.3 Edit hdfs-site.xml
<configuration>
  <property><name>dfs.nameservices</name><value>ns1</value></property>
  <property><name>dfs.ha.namenodes.ns1</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.ns1.nn1</name><value>hadoop01:9000</value></property>
  <property><name>dfs.namenode.http-address.ns1.nn1</name><value>hadoop01:50070</value></property>
  <property><name>dfs.namenode.rpc-address.ns1.nn2</name><value>hadoop02:9000</value></property>
  <property><name>dfs.namenode.http-address.ns1.nn2</name><value>hadoop02:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/ns1</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/cloud/hadoop-2.2.0/journal</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.client.failover.proxy.provider.ns1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
</configuration>
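As a quick sanity check (an extra step, assuming the environment variables above are already in effect and your Hadoop build supports these options), the effective values can be read back with the getconf tool:

hdfs getconf -namenodes                      # expect: hadoop01 hadoop02
hdfs getconf -confKey dfs.nameservices       # expect: ns1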
3.2.2.4 Edit slaves
hadoop01
hadoop02
hadoop03

3.2.3 Configure YARN

3.2.3.1 Edit yarn-site.xml
<configuration>
  <property><name>yarn.resourcemanager.hostname</name><value>hadoop01</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
</configuration>

3.2.3.2 Edit mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>
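Note that the stock Hadoop 2.2.0 tarball usually ships only a template for this file; if mapred-site.xml does not exist yet, create it from the template before editing:

cp mapred-site.xml.template mapred-site.xml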
3.2.4 Copy the configured Hadoop to the other nodes
scp -r /cloud/hadoop-2.2.0/ hadoop02:/cloud/
scp -r /cloud/hadoop-2.2.0/ hadoop03:/cloud/

3.2.5 Start the ZooKeeper cluster (run on hadoop01, hadoop02 and hadoop03)
cd /cloud/zookeeper-3.4.5/bin/
./zkServer.sh start
Check the status: ./zkServer.sh status (one leader, two followers)
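The status command reports the mode of each node; roughly the following (the exact wording may differ slightly between ZooKeeper releases):

./zkServer.sh status
JMX enabled by default
Using config: /cloud/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: follower        # "Mode: leader" on exactly one of the three nodes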
3.2.6 Start the JournalNodes (run on hadoop01; this starts all JournalNodes)
cd /cloud/hadoop-2.2.0
sbin/hadoop-daemons.sh start journalnode
(Run jps to verify that a JournalNode process has appeared.)

3.2.7 Format HDFS
On hadoop01 run:
hadoop namenode -format
Formatting generates files under the directory set by hadoop.tmp.dir in core-site.xml, which here is /cloud/hadoop-2.2.0/tmp. Then copy /cloud/hadoop-2.2.0/tmp to /cloud/hadoop-2.2.0/ on hadoop02:
scp -r tmp/ hadoop02:/cloud/hadoop-2.2.0/
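In Hadoop 2.x the hdfs front end is the non-deprecated way to format, and the standby NameNode can also be initialized from the active one instead of copying the tmp directory by hand. A sketch of that alternative (same effect as the steps above, not what this guide does):

# on hadoop01 (non-deprecated form of the format command)
hdfs namenode -format
# on hadoop02, after the NameNode on hadoop01 has been started
hdfs namenode -bootstrapStandby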
3.2.8 Format ZK (run on hadoop01 only)
hdfs zkfc -formatZK

3.2.9 Start HDFS (run on hadoop01)
sbin/start-dfs.sh

3.2.10 Start YARN (run on hadoop01)
sbin/start-yarn.sh

At this point the Hadoop 2.2.0 setup is complete and can be checked in a browser:
http://192.168.1.201:50070  -  NameNode 'hadoop01:9000' (active)
http://192.168.1.202:50070  -  NameNode 'hadoop02:9000' (standby)
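The active/standby roles can also be read from the command line with the HA admin tool (an extra check, using the nn1/nn2 IDs defined in hdfs-site.xml):

hdfs haadmin -getServiceState nn1   # expect: active
hdfs haadmin -getServiceState nn2   # expect: standby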
Verify HDFS HA:
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode:
kill -9 <pid>
Check in a browser: http://192.168.1.202:50070 now shows NameNode 'hadoop02:9000' (active); the NameNode on hadoop02 has become active.
Run the listing again:
hadoop fs -ls /
-rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
The file uploaded earlier is still there!
Manually start the NameNode that was killed:
sbin/hadoop-daemon.sh start namenode
Check in a browser: http://192.168.1.201:50070 shows NameNode 'hadoop01:9000' (standby).

Verify YARN:
Run the WordCount program from the demos that ship with Hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /profile /out
OK, all done!
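To double-check the job result, the output directory can be listed and printed (a closing sketch; part-r-00000 is the usual name of a single reducer's output). The ResourceManager web UI, on port 8088 by default, also shows the completed application.

hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000   # word counts for the contents of /profile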