Hadoop learning, step one: setting up the base environment

1. Download and install ubuntukylin-15.10-desktop-amd64.iso

2. Install ssh
sudo apt-get install openssh-server openssh-client

3. Set up vsftpd
# sudo apt-get update
# sudo apt-get install vsftpd
(see the reference for configuration details)
Starting, stopping and restarting vsftpd:
$ sudo /etc/init.d/vsftpd start     # start
$ sudo /etc/init.d/vsftpd stop      # stop
$ sudo /etc/init.d/vsftpd restart   # restart

4. Install JDK 1.7
sudo chown -R hadoop:hadoop /opt
cp /soft/jdk-7u79-linux-x64.gz /opt
sudo vi /etc/profile                # add the line: alias untar='tar -zxvf'
source /etc/profile
untar jdk*                          # run from /opt, where the archive was copied

Environment variable configuration:
# vi /etc/profile
Append the following at the end of the profile file:
# set java environment
export JAVA_HOME=/opt/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
Save and exit. To pick up the changes without rebooting:
# source /etc/profile
Test whether the installation succeeded:
# java -version
A quick sanity check is sketched below.
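As an optional check (a minimal sketch; it assumes the JDK really was unpacked to /opt/jdk1.7.0_79 as above), everything should point at the new JDK once /etc/profile has been sourced:

echo $JAVA_HOME      # expected: /opt/jdk1.7.0_79
which java           # expected: /opt/jdk1.7.0_79/bin/java
java -version        # the version string should mention 1.7.0_79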
Other issues:
1. "sudo: unable to resolve host ..." (see the reference for the fix)
2. Boot hangs at "Starting sendmail" (see the reference)
3. "E: Unable to locate package vsftpd" when installing packages on Ubuntu (see the reference)
4. Linux/Ubuntu vi/vim usage (see the reference)

Category: Hadoop

Clone the master virtual machine to node1 and node2.
Set the hostnames of the three machines to master, node1 and node2 respectively (when node1 and node2 are started, the system assigns incrementing IP addresses automatically, so no manual change is needed).
On every node, edit /etc/hosts with the IP addresses and hostnames of all nodes (including the other nodes); a sample layout is sketched below.
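For illustration only, a minimal /etc/hosts could look like the sketch below. The master and node1 addresses match the 192.168.219.x addresses that appear in the transcripts later in this guide; the node2 address is an assumption, so substitute the IPs your VMs actually received:

127.0.0.1        localhost
192.168.219.128  master
192.168.219.129  node1
192.168.219.130  node2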
Configure passwordless SSH login

hadoop@node1:~$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:B8vBju/uc3kl/v9lrMqtltttttCcXgRkQPbVoU hadoop@node1
The key's randomart image is:
(randomart image omitted)

hadoop@node1:~$ cd .ssh
hadoop@node1:~/.ssh$ ll
total 16
drwx------  2 hadoop hadoop 4096 Jul 24 20:31 ./
drwxr-xr-x 18 hadoop hadoop 4096 Jul 24 20:31 ../
-rw-------  1 hadoop hadoop  668 Jul 24 20:31 id_dsa
-rw-r--r--  1 hadoop hadoop  602 Jul 24 20:31 id_dsa.pub
hadoop@node1:~/.ssh$ cat id_dsa.pub >> authorized_keys
hadoop@node1:~/.ssh$ ll
total 20
drwx------  2 hadoop hadoop 4096 Jul 24 20:32 ./
drwxr-xr-x 18 hadoop hadoop 4096 Jul 24 20:31 ../
-rw-rw-r--  1 hadoop hadoop  602 Jul 24 20:32 authorized_keys
-rw-------  1 hadoop hadoop  668 Jul 24 20:31 id_dsa
-rw-r--r--  1 hadoop hadoop  602 Jul 24 20:31 id_dsa.pub
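One caveat worth adding: sshd's StrictModes check refuses keys when ~/.ssh or authorized_keys is writable by the group, and the listing above shows authorized_keys created with rw-rw-r-- permissions. If the passwordless logins below unexpectedly keep asking for a password, tightening the permissions usually resolves it:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys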
Single-machine loopback passwordless SSH login test

hadoop@node1:~/.ssh$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:daO0dssyqt12tt9yGUauImOh6tt6A1SgxzSfSmpQqJVEiQTxas.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
 * Documentation: ...
270 packages can be updated.
178 updates are security updates.
New release '16.04 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Jul 24 20:21:39 2016 from 192.168.219.1
hadoop@node1:~$ exit
logout
Connection to localhost closed.
hadoop@node1:~/.ssh$

Output like the above means the operation succeeded. Repeat the same steps on the other two nodes.

Allow the master node to log in to the two slave nodes over SSH without a password
hadoop@node1:~/.ssh$ scp hadoop@master:~/.ssh/id_dsa.pub ./master_dsa.pub
The authenticity of host 'master (192.168.219.128)' can't be established.
ECDSA key fingerprint is SHA256:daO0dssyqtt9yGUuImOh646A1SgxzSfatSmpQqJVEiQTxas.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.219.128' (ECDSA) to the list of known hosts.
hadoop@master's password:
id_dsa.pub                              100%  603   0.6KB/s   00:00
hadoop@node1:~/.ssh$ cat master_dsa.pub >> authorized_keys

The steps above show node1 fetching master's public key file into the current directory over scp, a step that still requires password authentication, and then appending master's public key to authorized_keys. If nothing went wrong, master can now connect to node1 over SSH without a password. From the master node:

hadoop@master:~/.ssh$ ssh node1
The authenticity of host 'node1 (192.168.219.129)' can't be established.
ECDSA key fingerprint is SHA256:daO0dssyqt9yGUuImOh3466A1SttgxzSfSmpQqJVEiQTxas.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.219.129' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
 * Documentation: ...
270 packages can be updated.
178 updates are security updates.
New release '16.04 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Jul 24 20:39:30 2016 from 192.168.219.1
hadoop@node1:~$ exit
logout
Connection to node1 closed.
hadoop@master:~/.ssh$

As the transcript shows, the first connection to node1 still asks for a "yes" confirmation, which means master cannot yet connect to node1 fully automatically; after typing yes the login succeeds, and we log out back to the master node.
To reach a fully non-interactive SSH connection to the other node, one more step is needed: simply run ssh node1 again. If you are not asked to type "yes" this time, it worked:

hadoop@master:~/.ssh$ ssh node1
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
 * Documentation: ...
270 packages can be updated.
178 updates are security updates.
New release '16.04 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Jul 24 20:47:20 2016 from 192.168.219.128
hadoop@node1:~$ exit
logout
Connection to node1 closed.
hadoop@master:~/.ssh$

As shown above, master can now log in to node1 over SSH without a password. Configure node2 using exactly the same method. On the surface, passwordless SSH for these two nodes is now done, but the same work still has to be carried out for the master node itself. This step looks puzzling, and the exact reason is hard to state precisely; the usual explanation is that it matters on real physical clusters, because the jobtracker may be placed on another node, i.e., it does not necessarily live on master.

Passwordless SSH login test for master itself:

hadoop@master:~/.ssh$ scp hadoop@master:~/.ssh/id_dsa.pub ./master_dsa.pub
The authenticity of host 'master (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:daO0dssttqt9yGUuImOahtt166AgxttzSfSmpQqJVEiQTxas.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (ECDSA) to the list of known hosts.
id_dsa.pub                              100%  603   0.6KB/s   00:00
hadoop@master:~/.ssh$ cat master_dsa.pub >> authorized_keys
hadoop@master:~/.ssh$ ssh master
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
 * Documentation: ...
270 packages can be updated.
178 updates are security updates.
New release '16.04 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Sun Jul 24 20:39:24 2016 from 192.168.219.1
hadoop@master:~$ exit
logout
Connection to master closed.

At this point, passwordless SSH login is fully configured. (A shorter route using ssh-copy-id is sketched below.)
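As an aside, the scp-and-append sequence above can usually be replaced by ssh-copy-id, which installs the local public key into the remote authorized_keys in one step. A sketch, assuming the same hadoop user and the DSA key generated earlier:

ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@node1
ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@node2
ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@master
ssh node1 exit     # should now return without prompting for a password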
Unpack hadoop-2.6.4.tar.gz:
/opt$ untar hadoop-2.6.4.tar.gz
/opt$ mv hadoop-2.6.4 hadoop     # rename the extracted directory

Then update the environment variables:
vi /etc/profile
export JAVA_HOME=/opt/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
alias untar='tar -zxvf'
alias viprofile='vi /etc/profile'
alias sourceprofile='source /etc/profile'
alias catprofile='cat /etc/profile'
alias cdhadoop='cd /opt/hadoop/'
alias startdfs='$HADOOP_HOME/sbin/start-dfs.sh'
alias startyarn='$HADOOP_HOME/sbin/start-yarn.sh'
alias stopdfs='$HADOOP_HOME/sbin/stop-dfs.sh'
alias stopyarn='$HADOOP_HOME/sbin/stop-yarn.sh'
source /etc/profile

Step 6: edit the configuration files
Seven files in total have to be modified:
$HADOOP_HOME/etc/hadoop/hadoop-env.sh
$HADOOP_HOME/etc/hadoop/yarn-env.sh
$HADOOP_HOME/etc/hadoop/core-site.xml
$HADOOP_HOME/etc/hadoop/hdfs-site.xml
$HADOOP_HOME/etc/hadoop/mapred-site.xml
$HADOOP_HOME/etc/hadoop/yarn-site.xml
$HADOOP_HOME/etc/hadoop/slaves
23、l$HADOOP_HOME/etc/hadoop/mapred-site.xml$HADOOP_HOME/etc/hadoop/yarn-site.xml$HADOOP_HOME/etc/hadoop/slaves其中$HADOOP_HOME表示hadoop根目录 a) hadoop-env.sh 、yarn-env.sh这二个文件主要是修改JAVA_HOME后的目录,改成实际本机jdk所在目录位置vi etc/hadoop/hadoop-env.sh (及 vi etc/hadoop/yarn-env.sh)找到下面这行的位置,改成(jdk目录位置,大家根据实际情况修改)export JAV
24、A_HOME=/opt/jdk1.7.0_79另外 hadoop-env.sh中 , 建议加上这句:export HADOOP_PREFIX=/opt/hadoopb) core-site.xml 参考下面的内容修改: fs.defaultFS hdfs:/master:9000 hadoop.tmp.dir /opt/hadoop/tmp 注:/opt/hadoop/tmp 目录如不存在,则先mkdir手动创建core-site.xml的完整参数请参考 http:/hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/
c) hdfs-site.xml
<configuration>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
Note: dfs.replication is the number of data replicas; it is normally not larger than the number of datanodes.
For the full set of hdfs-site.xml parameters see http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

d) mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
For the full set of mapred-site.xml parameters see http://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
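One detail to watch: a freshly unpacked Hadoop 2.6.x usually ships this file only as mapred-site.xml.template. If etc/hadoop/mapred-site.xml is not there, create it from the template before applying the edit above:

cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml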
e) yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
For the full set of yarn-site.xml parameters see http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

Also note that many Hadoop 1.x parameters are marked as deprecated in 2.x; see http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html

Leave the last file, slaves, alone for now (you can rename it out of the way with mv slaves slaves.bak). Once the configuration above is in place, the NameNode can be tried out on master. Format it first:
$HADOOP_HOME/bin/hdfs namenode -format
16/07/25 ...
16/07/25 20:34:42 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1076359968-127.0.0.1-140082506
16/07/25 20:34:42 INFO common.Storage: Storage directory /opt/hadoop/tmp/dfs/name has been successfully formatted.
16/07/25 20:34:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/07/25 20:34:43 INFO util.ExitUtil: Exiting with status 0
16/07/25 20:34:43 INFO namenode.NameNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.0.1
Seeing output like this means the format is OK.

$HADOOP_HOME/sbin/start-dfs.sh
Once it has started, run jps (or ps -ef | grep ...) to check the processes. If you see these two:
5161 SecondaryNameNode
4989 NameNode
the master node is basically OK.
Next run $HADOOP_HOME/sbin/start-yarn.sh; when it finishes, run jps again:
5161 SecondaryNameNode
5320 ResourceManager
4989 NameNode
If these three processes show up, YARN is OK as well.

f) Edit /opt/hadoop/etc/hadoop/slaves
If the file was renamed with mv slaves slaves.bak earlier, first run mv slaves.bak slaves to restore the name, then vi slaves and enter:
node1
node2
Save and exit.
Finally, stop the services that were started just now:
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/stop-yarn.sh

Step 7: copy the hadoop directory on master to node1 and node2
Stay on the master machine:
cd /opt
zip -r hadoop.zip hadoop
scp -r hadoop.zip hadoop@node1:/opt/
scp -r hadoop.zip hadoop@node2:/opt/
Then, on node1 and node2: unzip hadoop.zip
Note: the Hadoop temporary directory (tmp) and data directory (data) on node1 and node2 still have to be created by hand. An rsync-based alternative is sketched below.
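If rsync is installed, the zip/scp/unzip round trip can be collapsed into one copy per node, and the extra directories can be created over SSH at the same time. This is only a sketch: it assumes passwordless SSH from master is already working and that the tmp and data directories live directly under /opt/hadoop (adjust the data path to wherever your datanode data directory actually points):

cd /opt
rsync -az hadoop/ hadoop@node1:/opt/hadoop/
rsync -az hadoop/ hadoop@node2:/opt/hadoop/
ssh hadoop@node1 'mkdir -p /opt/hadoop/tmp /opt/hadoop/data'
ssh hadoop@node2 'mkdir -p /opt/hadoop/tmp /opt/hadoop/data'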
Step 8: verification
On the master node, start everything again:
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh

hadoop@master:/opt/hadoop/sbin$ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-master.out
node1: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-node1.out
node2: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-node2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
hadoop@master:/opt/hadoop/sbin$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-hadoop-resourcemanager-master.out
node1: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-node1.out
node2: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-node2.out

If all goes well, the master node has the following three processes:
ps -ef | grep ResourceManager
ps -ef | grep SecondaryNameNode
ps -ef | grep NameNode
7482 ResourceManager
7335 SecondaryNameNode
7159 NameNode
and the slave nodes (node1, node2) have the following two:
ps -ef | grep DataNode
ps -ef | grep NodeManager
2296 DataNode
2398 NodeManager
You can also browse http://master:50070/ and http://master:8088/ to check the status, or run bin/hdfs dfsadmin -report for an HDFS status report; a small end-to-end smoke test is sketched below.
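Beyond checking the process list and the web UIs, a small end-to-end smoke test can be run from master. This is a sketch: the HDFS paths are arbitrary, and the examples jar name assumes the 2.6.4 distribution, so adjust it to the jar actually present under share/hadoop/mapreduce:

hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /user/hadoop/input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /user/hadoop/input /user/hadoop/output
hdfs dfs -cat /user/hadoop/output/part-r-00000 | head

If the wordcount job completes and the output lists word counts, HDFS and YARN are both working across the cluster.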
Other notes:
a) If master (i.e., the namenode) has to be re-formatted, first empty the data directory on every datanode (ideally empty the tmp directory as well); otherwise, after the format completes, the datanodes will fail to start when dfs is brought up.
b) If running only the namenode on the master machine feels wasteful and you want master to double as a datanode, simply add a line reading "master" to the slaves file.
c) For convenience, /etc/profile can be edited so that the jars Hadoop needs are added to CLASSPATH and the hadoop/bin and hadoop/sbin directories are added to PATH; the following can serve as a reference (adjust to your own setup):
export HADOOP_HOME=/home/hadoop/hadoop-2.6.0
export JAVA_HOME=/usr/java/jdk1.7.0_51
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.0.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar:$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

by colplay 2016.07.25