This article walks through installing Hadoop 2.7.3 on CentOS 6.4; hopefully it provides a useful reference for developers tackling the same setup.
I. Environment
This walkthrough uses the stable Hadoop release, version 2.7.3, downloaded from:
http://www-eu.apache.org/dist/hadoop/common/
The cluster consists of three machines:
[hadoop@hadoop1 hadoop]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.21 hadoop1
192.168.56.22 hadoop2
192.168.56.23 hadoop3
Roles:
hadoop1: NameNode, SecondaryNameNode, ResourceManager
hadoop2/3: DataNode, NodeManager
The filesystem layout is shown below; the /hadoop directory will hold the Hadoop installation and its data:
[hadoop@hadoop1 hadoop]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_basic-lv_root
18G 5.5G 11G 34% /
tmpfs 499M 0 499M 0% /dev/shm
/dev/sda1 485M 34M 427M 8% /boot
/dev/mapper/hadoopvg-hadooplv
49G 723M 46G 2% /hadoop
II. Creating the hadoop user
Create a hadoop user (as root, on all three machines) to perform the installation:
useradd hadoop
chown -R hadoop:hadoop /hadoop
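It also helps to give the new user a password on each node, since the scp steps in the next section need password authentication before the keys are in place (a small addition not spelled out in the original steps):
passwd hadoop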
III. Setting up passwordless SSH for the hadoop user
Run the following on hadoop1, hadoop2 and hadoop3:
su - hadoop
ssh-keygen -t rsa
ssh-keygen -t dsa
cd /home/hadoop/.ssh
cat *.pub >authorized_keys
On hadoop2, run:
scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop2_keys
On hadoop3, run:
scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop3_keys
On hadoop1, run:
su - hadoop
cd /home/hadoop/.ssh
cat hadoop2_keys >> authorized_keys
cat hadoop3_keys >> authorized_keys
Then copy the combined authorized_keys file back to the other machines:
scp ./authorized_keys hadoop2:/home/hadoop/.ssh/
scp ./authorized_keys hadoop3:/home/hadoop/.ssh/
Note: the permissions on authorized_keys must be 644 (chmod it if not) and the .ssh directory itself must be 700, otherwise passwordless login will fail!
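To verify the setup, each host should now reach the others without a password prompt; a quick check from hadoop1 (repeat from the other nodes):
[hadoop@hadoop1 ~]$ ssh hadoop2 hostname
[hadoop@hadoop1 ~]$ ssh hadoop3 hostname
Each command should print the remote hostname without asking for a password.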
IV. Adding the Java environment variables
Java download: http://www.oracle.com/technetwork/java/javase/downloads/
Upload the JDK archive and extract it under /usr/local, then add the Java environment variables to .bash_profile:
export JAVA_HOME=/usr/local/jdk1.8.0_131
PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
export PATH
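A quick check that the variables took effect (not part of the original steps, but worth doing on each node):
[hadoop@hadoop1 ~]$ source ~/.bash_profile
[hadoop@hadoop1 ~]$ java -version
The reported version should match the JDK just installed (1.8.0_131 here).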
V. Editing the Hadoop configuration files
Extract the Hadoop tarball into /hadoop.
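A minimal sketch of the extraction, assuming the 2.7.3 tarball from section I and renaming the directory to match the /hadoop/hadoop paths used throughout this guide:
[hadoop@hadoop1 ~]$ cd /hadoop
[hadoop@hadoop1 hadoop]$ tar -xzf hadoop-2.7.3.tar.gz
[hadoop@hadoop1 hadoop]$ mv hadoop-2.7.3 hadoop
The files to edit, all under the installation's etc/hadoop directory, are: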
~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml
A few of these files do not exist by default; copy the corresponding .template file to create them.
Create the directories that will hold the data blocks, the NameNode metadata and temporary files:
[hadoop@hadoop1 hadoop]$ cd /hadoop/hadoop
[hadoop@hadoop1 hadoop]$ mkdir data tmp name
1. Editing hadoop-env.sh and yarn-env.sh
cd /hadoop/hadoop/etc/hadoop
Both files mainly need the JAVA_HOME variable; if JAVA_HOME is already exported in your profile (section IV), no change is strictly required.
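To set it explicitly anyway, the relevant line in both files would be (path taken from section IV):
export JAVA_HOME=/usr/local/jdk1.8.0_131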
2. Editing slaves
The slaves file lists the DataNode hosts:
[hadoop@hadoop1 hadoop]$ cat slaves
hadoop2
hadoop3
3. Editing core-site.xml
[hadoop@hadoop1 hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/hadoop/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
</configuration>
4. Editing hdfs-site.xml
[hadoop@hadoop1 hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/hadoop/hadoop/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/hadoop/hadoop/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
5. Editing mapred-site.xml
[hadoop@hadoop1 hadoop]$ mv mapred-site.xml.template mapred-site.xml
[hadoop@hadoop1 hadoop]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
</configuration>
6. Editing yarn-site.xml
[hadoop@hadoop1 hadoop]$ cat yarn-site.xml
<?xml version="1.0"?>
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop1:8088</value>
</property>
</configuration>
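The configuration above is edited on hadoop1, so the finished installation still needs to reach the other nodes before anything is started. One straightforward way, assuming /hadoop exists with the same ownership on hadoop2 and hadoop3 (this step is implied rather than shown in the original):
[hadoop@hadoop1 ~]$ scp -r /hadoop/hadoop hadoop2:/hadoop/
[hadoop@hadoop1 ~]$ scp -r /hadoop/hadoop hadoop3:/hadoop/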
VI. Formatting HDFS
The NameNode must be formatted before HDFS and YARN are started:
[hadoop@hadoop1 hadoop]$ bin/hdfs namenode -format htest
17/05/14 23:50:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop1/192.168.56.21
STARTUP_MSG: args = [-format, htest]
STARTUP_MSG: version = 2.7.3
STARTUP_MSG: classpath = /hadoop/hadoop/etc/hadoop:... (long classpath listing omitted)
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_131
************************************************************/
17/05/14 23:50:22 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/05/14 23:50:22 INFO namenode.NameNode: createNameNode [-format, htest]
Formatting using clusterid: CID-3cf41172-e75f-4bfb-9f8d-32877047a551
17/05/14 23:50:22 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/14 23:50:22 INFO namenode.FSNamesystem: fsLock is fair:true
17/05/14 23:50:22 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/05/14 23:50:22 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/05/14 23:50:22 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/05/14 23:50:22 INFO blockmanagement.BlockManager: The block deletion will start around 2017 May 14 23:50:22
17/05/14 23:50:22 INFO util.GSet: Computing capacity for map BlocksMap
17/05/14 23:50:22 INFO util.GSet: VM type = 64-bit
17/05/14 23:50:22 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/05/14 23:50:22 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/05/14 23:50:22 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/05/14 23:50:22 INFO blockmanagement.BlockManager: defaultReplication = 1
17/05/14 23:50:22 INFO blockmanagement.BlockManager: maxReplication = 512
17/05/14 23:50:22 INFO blockmanagement.BlockManager: minReplication = 1
17/05/14 23:50:22 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/05/14 23:50:22 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/05/14 23:50:22 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/05/14 23:50:22 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/05/14 23:50:22 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
17/05/14 23:50:22 INFO namenode.FSNamesystem: supergroup = supergroup
17/05/14 23:50:22 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/05/14 23:50:22 INFO namenode.FSNamesystem: HA Enabled: false
17/05/14 23:50:22 INFO namenode.FSNamesystem: Append Enabled: true
17/05/14 23:50:23 INFO util.GSet: Computing capacity for map INodeMap
17/05/14 23:50:23 INFO util.GSet: VM type = 64-bit
17/05/14 23:50:23 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/05/14 23:50:23 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/05/14 23:50:23 INFO namenode.FSDirectory: ACLs enabled? false
17/05/14 23:50:23 INFO namenode.FSDirectory: XAttrs enabled? true
17/05/14 23:50:23 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/05/14 23:50:23 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/05/14 23:50:23 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/14 23:50:23 INFO util.GSet: VM type = 64-bit
17/05/14 23:50:23 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/05/14 23:50:23 INFO util.GSet: capacity = 2^18 = 262144 entries
17/05/14 23:50:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/05/14 23:50:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/05/14 23:50:23 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/05/14 23:50:23 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/05/14 23:50:23 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/05/14 23:50:23 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/05/14 23:50:23 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/14 23:50:23 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/05/14 23:50:23 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/14 23:50:23 INFO util.GSet: VM type = 64-bit
17/05/14 23:50:23 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/05/14 23:50:23 INFO util.GSet: capacity = 2^15 = 32768 entries
17/05/14 23:50:23 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1028743371-192.168.56.21-1494777023841
17/05/14 23:50:23 INFO common.Storage: Storage directory /hadoop/hadoop/name has been successfully formatted.
17/05/14 23:50:23 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/05/14 23:50:24 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
17/05/14 23:50:24 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/05/14 23:50:24 INFO util.ExitUtil: Exiting with status 0
17/05/14 23:50:24 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.56.21
************************************************************/
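One caveat, not part of the original run: if the NameNode is ever re-formatted after the DataNodes have registered, the DataNodes keep the old clusterID in their data directories and will refuse to join the new namespace. Clearing the directories on all nodes before re-formatting avoids the mismatch:
rm -rf /hadoop/hadoop/name/* /hadoop/hadoop/data/* /hadoop/hadoop/tmp/*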
VII. Starting Hadoop
[hadoop@hadoop1 hadoop]$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop2: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop3: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out
Starting secondary namenodes [hadoop1]
hadoop1: starting secondarynamenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
Check the processes on hadoop1:
[hadoop@hadoop1 hadoop]$ jps
8568 NameNode
8873 Jps
8764 SecondaryNameNode
Start YARN:
[hadoop@hadoop1 hadoop]$ ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-resourcemanager-hadoop1.out
hadoop2: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop2.out
hadoop3: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop3.out
[hadoop@hadoop1 hadoop]$ jps
8930 ResourceManager
9187 Jps
8568 NameNode
8764 SecondaryNameNode
Check the processes on a DataNode:
[hadoop@hadoop2 hadoop]$ jps
7909 DataNode
8039 NodeManager
8139 Jps
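At this point a quick smoke test confirms that HDFS is reachable and writable; these are standard HDFS shell commands rather than part of the original article:
[hadoop@hadoop1 hadoop]$ bin/hdfs dfsadmin -report
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -mkdir /test
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -put etc/hadoop/core-site.xml /test
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -ls /test
The report should show both DataNodes as live, and the uploaded file should appear in the listing.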
To shut down Hadoop:
./sbin/stop-dfs.sh
./sbin/stop-yarn.sh
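Note that start-yarn.sh does not start the MapReduce JobHistory server configured in mapred-site.xml; it has its own control script in Hadoop 2.7.3, so to use the history web UI on port 19888 start it separately:
./sbin/mr-jobhistory-daemon.sh start historyserver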
VIII. Web interfaces
Once the Hadoop cluster is up and running, check the web UI of each component as described below:
Daemon                        Web Interface          Notes
NameNode                      http://nn_host:port/   Default HTTP port is 50070.
ResourceManager               http://rm_host:port/   Default HTTP port is 8088.
MapReduce JobHistory Server   http://jhs_host:port/  Default HTTP port is 19888.
IX. References
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html
That concludes this guide to installing Hadoop 2.7.3 on CentOS 6.4; hopefully it proves helpful.