
Hadoop YARN Installation and Deployment

 

Version:

Hadoop 2.3.0-cdh5.0.0

Node layout:

NameNode: compute-50-04
SecondaryNameNode: compute-50-04
ResourceManager: compute-50-03
NodeManager:
compute-28-16
compute-28-17
compute-50-00
compute-50-03
compute-50-04
DataNode:
compute-28-16
compute-28-17
compute-50-00
compute-50-03
compute-50-04

Deployment:

1. Create a hadoop user whose primary group is also hadoop; for details see "linux之用户组分配".
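
A minimal sketch of this step (run as root; groupadd/useradd are the standard tools, adjust to your distribution):

# Create the hadoop group and a hadoop user whose primary group is hadoop
groupadd hadoop
useradd -g hadoop -m hadoop
# Give the new account a login password
passwd hadoop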

2. Set up bidirectional passwordless SSH login between all nodes; see the post "linux免密钥SSH登陆配置".
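
A sketch of the key exchange, run as the hadoop user on each node (repeat ssh-copy-id for every other host so the trust is bidirectional):

# Generate an RSA key pair, accepting the defaults
ssh-keygen -t rsa
# Append this node's public key to authorized_keys on the peers
ssh-copy-id hadoop@compute-50-04
ssh-copy-id hadoop@compute-50-03
# ...and likewise for compute-50-00, compute-28-16, compute-28-17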

3. Download the hadoop-2.3.0-cdh5.0.0 package from the official site and extract it to /home/hadoop/hadoop-2.3.0-cdh5.0.0.
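
For example, assuming the downloaded tarball is named hadoop-2.3.0-cdh5.0.0.tar.gz:

# Extract into the hadoop user's home directory
tar -xzf hadoop-2.3.0-cdh5.0.0.tar.gz -C /home/hadoop/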

4. Configure core-site.xml:

 

<configuration> 
  <property> 
    <name>fs.default.name</name>  
    <value>hdfs://compute-50-04:9000</value> 
  </property>  
  <property> 
    <name>hadoop.tmp.dir</name>  
    <value>/home/hadoop/data/tmp</value> 
  </property>  
  <property> 
    <name>ha.zookeeper.quorum</name>  
    <value>compute-28-16:2181,compute-28-17:2181,compute-50-00:2181</value> 
  </property>  
  <property> 
    <name>hadoop.proxyuser.hduser.hosts</name>  
    <value>*</value> 
  </property>  
  <property> 
    <name>hadoop.proxyuser.hduser.groups</name>  
    <value>*</value> 
  </property> 
</configuration>
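
After editing, a quick way to confirm a key is actually picked up (run from the install directory):

# Print the value Hadoop resolves for a given key
bin/hdfs getconf -confKey fs.default.name
# Expected output: hdfs://compute-50-04:9000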

5. Configure hdfs-site.xml:

<configuration> 
  <property> 
    <name>dfs.replication</name>  
    <value>3</value> 
  </property>  
  <property> 
    <name>dfs.namenode.secondary.http-address</name>  
    <value>compute-50-03:9001</value> 
  </property>  
  <property> 
    <name>dfs.ha.fencing.methods</name>  
    <value>shell(/bin/true)</value> 
  </property>  
  <property> 
    <name>dfs.namenode.name.dir</name>  
    <value>/home/hadoop/data/dfs/nn</value> 
  </property>  
  <property> 
    <name>dfs.data.dir</name>  
    <value>/home/hadoop/data/dfs/dn</value> 
  </property>  
  <property> 
    <name>dfs.datanode.failed.volumes.tolerated</name>  
    <value>0</value> 
  </property>  
  <property> 
    <name>ipc.client.ping</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>ipc.ping.interval</name>  
    <value>60000</value> 
  </property>  
  <property> 
    <name>dfs.webhdfs.enabled</name>  
    <value>true</value> 
  </property>  
  <property> 
    <name>dfs.client.read.shortcircuit</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>dfs.permissions.enabled</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>dfs.domain.socket.path</name>  
    <value>${hadoop.tmp.dir}/sockets/dn._PORT</value> 
  </property> 
</configuration>
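
Since dfs.webhdfs.enabled is true above, WebHDFS can be exercised against the NameNode's HTTP port (50070 by default in this release) once HDFS is up after step 12:

# List the HDFS root over the WebHDFS REST API
curl "http://compute-50-04:50070/webhdfs/v1/?op=LISTSTATUS"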

6. Configure mapred-site.xml:

<configuration> 
  <property> 
    <name>mapreduce.framework.name</name>  
    <value>yarn</value> 
  </property>  
  <property> 
    <name>mapreduce.jobhistory.address</name>  
    <value>compute-50-00:10020</value> 
  </property>  
  <property> 
    <name>mapreduce.jobhistory.webapp.address</name>  
    <value>compute-50-00:19888</value> 
  </property>  
  <property> 
    <name>mapreduce.jobhistory.intermediate-done-dir</name>  
    <value>/data2/data/mr/history-tmp</value> 
  </property>  
  <property> 
    <name>mapreduce.jobhistory.done-dir</name>  
    <value>/data2/data/mr/history-done</value> 
  </property>  
  <property> 
    <name>yarn.app.mapreduce.am.staging-dir</name>  
    <value>/user</value> 
  </property>  
  <property> 
    <name>mapreduce.map.memory.mb</name>  
    <value>2048</value> 
  </property>  
  <property> 
    <name>mapreduce.map.speculative</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>mapreduce.job.queuename</name>  
    <value>default</value> 
  </property>  
  <!-- acl -->  
  <property> 
    <name>mapreduce.cluster.acls.enabled</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>mapreduce.job.acl-view-job</name>  
    <value></value> 
  </property>  
  <property> 
    <name>mapreduce.job.acl-modify-job</name>  
    <value></value> 
  </property>  
</configuration>
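
Note that the startup scripts in step 12 do not launch the MapReduce JobHistory Server, so the jobhistory addresses above only become reachable after starting it by hand:

# On compute-50-00: start the JobHistory Server configured above
sbin/mr-jobhistory-daemon.sh start historyserver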

7. Configure yarn-site.xml:

<configuration> 
  <property> 
    <name>yarn.nodemanager.aux-services</name>  
    <value>mapreduce_shuffle</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
    <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.hostname</name>  
    <value>compute-50-04</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.local-dirs</name>  
    <value>/data2/data/yarn/local</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.vmem-pmem-ratio</name>  
    <value>10</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.container-executor.class</name>  
    <value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value> 
  </property>  
  <property> 
    <name>yarn.log-aggregation-enable</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.scheduler.class</name>  
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.vmem-check-enabled</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.scheduler.monitor.enable</name>  
    <value>true</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.scheduler.monitor.policies</name>  
    <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.monitor.capacity.preemption.observe_only</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.resource.memory-mb</name>  
    <value>20720</value> 
  </property>  
  <property> 
    <name>yarn.acl.enable</name>  
    <value>false</value> 
  </property>  
  <property> 
    <name>yarn.admin.acl</name>  
    <value>yarn,hadoop</value> 
  </property>  
  <property> 
    <name>yarn.nodemanager.resource.cpu-vcores</name>  
    <value>8</value> 
  </property> 
</configuration>

8. Configure the slaves file:

compute-28-16
compute-28-17
compute-50-00
compute-50-03
compute-50-04

9. With the configuration done, create the corresponding directories (see the sketch below):

mkdir -p yourpath
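
Concretely, the local paths referenced by the configs above need to exist; creating them on every node is the simplest approach:

# hadoop.tmp.dir plus the sockets subdirectory from dfs.domain.socket.path
mkdir -p /home/hadoop/data/tmp/sockets
# dfs.namenode.name.dir (used on the NameNode)
mkdir -p /home/hadoop/data/dfs/nn
# dfs.data.dir (used on the DataNodes)
mkdir -p /home/hadoop/data/dfs/dn
# yarn.nodemanager.local-dirs (used on the NodeManagers)
mkdir -p /data2/data/yarn/local

The two mapreduce.jobhistory.*-dir values carry no scheme, so they resolve against HDFS rather than the local filesystem; the JobHistory Server should create them on startup.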

10. Distribute the tree to every node:

scp -r /home/hadoop/hadoop-2.3.0-cdh5.0.0 hadoop@hostxxxxx:/home/hadoop/
......
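
A loop over the slaves file saves retyping (a sketch; run from the master as the hadoop user):

# Push the configured tree to each host listed in slaves
for host in $(cat /home/hadoop/hadoop-2.3.0-cdh5.0.0/etc/hadoop/slaves); do
  scp -r /home/hadoop/hadoop-2.3.0-cdh5.0.0 hadoop@"$host":/home/hadoop/
done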

11. Format the NameNode:

bin/hdfs namenode -format

12. Start the cluster:

sbin/start-all.sh
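
start-all.sh is deprecated in Hadoop 2; sbin/start-dfs.sh followed by sbin/start-yarn.sh does the same job. Either way, a quick check that everything came up:

# On each node: list the running Hadoop daemons
jps
# Expect NameNode/SecondaryNameNode and the ResourceManager on their hosts,
# plus a DataNode and NodeManager on each of the five workers

# From the install directory: confirm all NodeManagers registered
bin/yarn node -list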

 

Troubleshooting:

1. Startup fails with: Error: JAVA_HOME is not set and could not be found

Fix:

Explicitly set JAVA_HOME in etc/hadoop/yarn-env.sh and etc/hadoop/hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.6.0_21

 

Other notes:

1. The NameNode has only a SecondaryNameNode for checkpointing; there is no NameNode HA.

2. The ResourceManager runs on a single node with no HA.
