Installing Hadoop 2.7.3 on CentOS 6.4

2024-02-09 18:48
Tags: install, centos 6.4, hadoop2.7

This article walks through installing Hadoop 2.7.3 on CentOS 6.4, in the hope that it serves as a practical reference for developers setting up a small cluster.

I. Environment

This installation uses the stable 2.7.3 release of Hadoop, available from:
http://www-eu.apache.org/dist/hadoop/common/
The experiment uses three machines in total:
    
  [hadoop@hadoop1 hadoop]$ cat /etc/hosts
  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  192.168.56.21 hadoop1
  192.168.56.22 hadoop2
  192.168.56.23 hadoop3
Role assignment:
hadoop1 runs the NameNode, SecondaryNameNode, and ResourceManager;
hadoop2 and hadoop3 each run a DataNode and a NodeManager.

The filesystem layout is shown below; the /hadoop directory will hold the Hadoop installation and its data:
    
  [hadoop@hadoop1 hadoop]$ df -h
  Filesystem Size Used Avail Use% Mounted on
  /dev/mapper/vg_basic-lv_root
  18G 5.5G 11G 34% /
  tmpfs 499M 0 499M 0% /dev/shm
  /dev/sda1 485M 34M 427M 8% /boot
  /dev/mapper/hadoopvg-hadooplv
  49G 723M 46G 2% /hadoop
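The original post does not show how the /hadoop volume was created. For completeness, here is a hypothetical LVM sketch, assuming a spare disk at /dev/sdb (the device name is an assumption, not from the article):

  # Hypothetical: create the hadoopvg/hadooplv volume seen in df above,
  # assuming an unused disk /dev/sdb; run as root
  pvcreate /dev/sdb
  vgcreate hadoopvg /dev/sdb
  lvcreate -l 100%FREE -n hadooplv hadoopvg
  mkfs.ext4 /dev/hadoopvg/hadooplv
  mkdir -p /hadoop
  mount /dev/hadoopvg/hadooplv /hadoop
  echo '/dev/hadoopvg/hadooplv /hadoop ext4 defaults 0 0' >> /etc/fstab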
II. Create the hadoop User
Create a hadoop user to perform the installation, and give it ownership of /hadoop:
    
  useradd hadoop
  chown -R hadoop:hadoop /hadoop

III. Set Up Passwordless SSH for the hadoop User

Run the following on each of hadoop1, hadoop2, and hadoop3 (accept the defaults and an empty passphrase at the prompts):
su - hadoop
ssh-keygen -t rsa
ssh-keygen -t dsa
cd /home/hadoop/.ssh
cat *.pub > authorized_keys

On hadoop2, run:
scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop2_keys

On hadoop3, run:
scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop3_keys

On hadoop1, run:
su - hadoop
cd /home/hadoop/.ssh
cat hadoop2_keys >> authorized_keys
cat hadoop3_keys >> authorized_keys
Then copy the merged key file back to the other machines:
scp ./authorized_keys hadoop2:/home/hadoop/.ssh/
scp ./authorized_keys hadoop3:/home/hadoop/.ssh/

Note: authorized_keys must have permissions 644 (and ~/.ssh should be 0700); fix them with chmod if necessary, otherwise passwordless login will fail!
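A quick way to confirm that passwordless SSH works in every direction (a simple check, not part of the original post):

  # run as the hadoop user on each of the three nodes; every line should
  # print a date without asking for a password
  for host in hadoop1 hadoop2 hadoop3; do
      ssh "$host" date
  done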

IV. Add the Java Environment Variables

Download the JDK from: http://www.oracle.com/technetwork/java/javase/downloads/
Upload and extract it under /usr/local, then add the Java environment variables to .bash_profile:
    
  export JAVA_HOME=/usr/local/jdk1.8.0_131
  PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
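Reload the profile and verify the result (a basic sanity check):

  source ~/.bash_profile
  echo $JAVA_HOME    # expect /usr/local/jdk1.8.0_131
  java -version      # expect java version "1.8.0_131"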

V. Edit the Hadoop Configuration Files

Extract the Hadoop tarball into /hadoop (see the sketch below).
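A minimal sketch of that step, assuming the tarball was downloaded to the hadoop user's home directory (the file location and the symlink are assumptions; adjust to your setup):

  tar -xzf ~/hadoop-2.7.3.tar.gz -C /hadoop
  ln -s /hadoop/hadoop-2.7.3 /hadoop/hadoop   # so the install lives at /hadoop/hadoop

Then edit the following files: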
~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml
Some of these files do not exist by default; create them by copying the corresponding .template files.

Create the following directories to hold the HDFS data, the NameNode metadata, and temporary files:
    
  [hadoop@hadoop1 hadoop]$ cd /hadoop/hadoop
  [hadoop@hadoop1 hadoop]$ mkdir data tmp name

1. Edit hadoop-env.sh and yarn-env.sh

cd /hadoop/hadoop/etc/hadoop
In hadoop-env.sh and yarn-env.sh the main change is the JAVA_HOME variable; strictly speaking, if you already added JAVA_HOME to your profile, no change is needed.
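For reference, the line to set in both files looks like this (using the JDK path from section IV):

  # in hadoop-env.sh and yarn-env.sh
  export JAVA_HOME=/usr/local/jdk1.8.0_131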

2. Edit slaves

The slaves file lists the DataNode hosts:
    
  [hadoop@hadoop1 hadoop]$ cat slaves
  hadoop2
  hadoop3

3. Edit core-site.xml

    
  [hadoop@hadoop1 hadoop]$ cat core-site.xml
  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>file:/hadoop/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
    </property>
    <property>
      <name>hadoop.proxyuser.hduser.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hduser.groups</name>
      <value>*</value>
    </property>
  </configuration>

4. Edit hdfs-site.xml

    
  [hadoop@hadoop1 hadoop]$ cat hdfs-site.xml
  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>hadoop1:9001</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/hadoop/hadoop/name</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/hadoop/hadoop/data</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>
  </configuration>

5. Edit mapred-site.xml

    
  [hadoop@hadoop1 hadoop]$ mv mapred-site.xml.template mapred-site.xml
  [hadoop@hadoop1 hadoop]$ cat mapred-site.xml
  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>hadoop1:10020</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>hadoop1:19888</value>
    </property>
  </configuration>

6. Edit yarn-site.xml

    
  [hadoop@hadoop1 hadoop]$ cat yarn-site.xml
  <?xml version="1.0"?>
  <configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>hadoop1:8032</value>
    </property>
    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>hadoop1:8030</value>
    </property>
    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>hadoop1:8031</value>
    </property>
    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>hadoop1:8033</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>hadoop1:8088</value>
    </property>
  </configuration>
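The post does not show it explicitly, but hadoop2 and hadoop3 need the same Hadoop installation and configuration under /hadoop (including the data/name/tmp directories). A minimal sketch, assuming /hadoop already exists and is owned by the hadoop user on every node:

  # run on hadoop1 as the hadoop user
  scp -r /hadoop/hadoop hadoop2:/hadoop/
  scp -r /hadoop/hadoop hadoop3:/hadoop/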
VI. Format the NameNode
Before starting the NameNode and YARN, the NameNode must be formatted:
    
  [hadoop@hadoop1 hadoop]$ bin/hdfs namenode -format htest
  17/05/14 23:50:22 INFO namenode.NameNode: STARTUP_MSG:
  /************************************************************
  STARTUP_MSG: Starting NameNode
  STARTUP_MSG: host = hadoop1/192.168.56.21
  STARTUP_MSG: args = [-format, htest]
  STARTUP_MSG: version = 2.7.3
  STARTUP_MSG: classpath = /hadoop/hadoop/etc/hadoop:/hadoop/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar: ... (long classpath truncated) ... :/contrib/capacity-scheduler/*.jar
  STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
  STARTUP_MSG: java = 1.8.0_131
  ************************************************************/
  17/05/14 23:50:22 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
  17/05/14 23:50:22 INFO namenode.NameNode: createNameNode [-format, htest]
  Formatting using clusterid: CID-3cf41172-e75f-4bfb-9f8d-32877047a551
  17/05/14 23:50:22 INFO namenode.FSNamesystem: No KeyProvider found.
  17/05/14 23:50:22 INFO namenode.FSNamesystem: fsLock is fair:true
  17/05/14 23:50:22 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
  17/05/14 23:50:22 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: The block deletion will start around 2017 May 14 23:50:22
  17/05/14 23:50:22 INFO util.GSet: Computing capacity for map BlocksMap
  17/05/14 23:50:22 INFO util.GSet: VM type = 64-bit
  17/05/14 23:50:22 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
  17/05/14 23:50:22 INFO util.GSet: capacity = 2^21 = 2097152 entries
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: defaultReplication = 1
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: maxReplication = 512
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: minReplication = 1
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: encryptDataTransfer = false
  17/05/14 23:50:22 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
  17/05/14 23:50:22 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
  17/05/14 23:50:22 INFO namenode.FSNamesystem: supergroup = supergroup
  17/05/14 23:50:22 INFO namenode.FSNamesystem: isPermissionEnabled = true
  17/05/14 23:50:22 INFO namenode.FSNamesystem: HA Enabled: false
  17/05/14 23:50:22 INFO namenode.FSNamesystem: Append Enabled: true
  17/05/14 23:50:23 INFO util.GSet: Computing capacity for map INodeMap
  17/05/14 23:50:23 INFO util.GSet: VM type = 64-bit
  17/05/14 23:50:23 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
  17/05/14 23:50:23 INFO util.GSet: capacity = 2^20 = 1048576 entries
  17/05/14 23:50:23 INFO namenode.FSDirectory: ACLs enabled? false
  17/05/14 23:50:23 INFO namenode.FSDirectory: XAttrs enabled? true
  17/05/14 23:50:23 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
  17/05/14 23:50:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
  17/05/14 23:50:23 INFO util.GSet: Computing capacity for map cachedBlocks
  17/05/14 23:50:23 INFO util.GSet: VM type = 64-bit
  17/05/14 23:50:23 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
  17/05/14 23:50:23 INFO util.GSet: capacity = 2^18 = 262144 entries
  17/05/14 23:50:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
  17/05/14 23:50:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
  17/05/14 23:50:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
  17/05/14 23:50:23 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
  17/05/14 23:50:23 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
  17/05/14 23:50:23 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
  17/05/14 23:50:23 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
  17/05/14 23:50:23 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
  17/05/14 23:50:23 INFO util.GSet: Computing capacity for map NameNodeRetryCache
  17/05/14 23:50:23 INFO util.GSet: VM type = 64-bit
  17/05/14 23:50:23 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
  17/05/14 23:50:23 INFO util.GSet: capacity = 2^15 = 32768 entries
  17/05/14 23:50:23 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1028743371-192.168.56.21-1494777023841
  17/05/14 23:50:23 INFO common.Storage: Storage directory /hadoop/hadoop/name has been successfully formatted.
  17/05/14 23:50:23 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
  17/05/14 23:50:24 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
  17/05/14 23:50:24 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
  17/05/14 23:50:24 INFO util.ExitUtil: Exiting with status 0
  17/05/14 23:50:24 INFO namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.56.21
  ************************************************************/
VII. Start Hadoop
    
  [hadoop@hadoop1 hadoop]$ ./sbin/start-dfs.sh
  Starting namenodes on [hadoop1]
  hadoop1: starting namenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop1.out
  hadoop2: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out
  hadoop3: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out
  Starting secondary namenodes [hadoop1]
  hadoop1: starting secondarynamenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
Check the processes on hadoop1:
    
  [hadoop@hadoop1 hadoop]$ jps
  8568 NameNode
  8873 Jps
  8764 SecondaryNameNode
Start YARN:
    
  [hadoop@hadoop1 hadoop]$ ./sbin/start-yarn.sh
  starting yarn daemons
  starting resourcemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-resourcemanager-hadoop1.out
  hadoop2: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop2.out
  hadoop3: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop3.out
  [hadoop@hadoop1 hadoop]$ jps
  8930 ResourceManager
  9187 Jps
  8568 NameNode
  8764 SecondaryNameNode
Check the processes on a DataNode:
    
  [hadoop@hadoop2 hadoop]$ jps
  7909 DataNode
  8039 NodeManager
  8139 Jps
To stop Hadoop:
    
  ./sbin/stop-dfs.sh
  ./sbin/stop-yarn.sh
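Before putting the cluster to use, a quick smoke test is worthwhile (these commands are not in the original post; paths assume the /hadoop/hadoop layout above):

  # run on hadoop1 as the hadoop user, from /hadoop/hadoop
  bin/hdfs dfsadmin -report    # both DataNodes should be listed as live
  bin/hdfs dfs -mkdir -p /user/hadoop
  bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/
  bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10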

VIII. Web Interfaces

Once the Hadoop cluster is up and running, check the web UIs of the components as described below:

  Daemon                        Web Interface           Notes
  NameNode                      http://nn_host:port/    Default HTTP port is 50070.
  ResourceManager               http://rm_host:port/    Default HTTP port is 8088.
  MapReduce JobHistory Server   http://jhs_host:port/   Default HTTP port is 19888.
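On this cluster all of those hosts are hadoop1. A quick probe of the UIs from any node (an illustrative check, not from the original post; the JobHistory UI on 19888 only answers after the history server is started with sbin/mr-jobhistory-daemon.sh start historyserver):

  curl -s -o /dev/null -w '%{http_code}\n' http://hadoop1:50070/   # NameNode UI, expect 200
  curl -s -o /dev/null -w '%{http_code}\n' http://hadoop1:8088/    # ResourceManager UI, expect 200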

IX. References
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html

That concludes this walkthrough of installing Hadoop 2.7.3 on CentOS 6.4; hopefully it is useful as a reference.


