roman_日积跬步-终至千里

一. Purpose

This article covers installing Hadoop on multiple nodes (from a handful up to thousands). High availability and security are out of scope here.

 

二. Prerequisites

 

三. Installation

 

1. Node Planning

Following the recommendations above, two nodes are used here to lay out the components.
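Concretely, based on the start commands later in this article, the daemons land on the two nodes as follows:

```text
node1: NameNode, DataNode, ResourceManager, NodeManager
node2: SecondaryNameNode, DataNode, NodeManager, JobHistoryServer
```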

 

2. Configuring Hadoop in Non-Secure Mode

Note:

 

3. Preparation

Run on every node (node1, node2):

mkdir -p /home/user/hadoop
cd /home/user/hadoop
tar -zxvf hadoop.tar.gz
ln -s hadoop-3.0.3 hadoop

 

Set the environment variables:

vim ~/.bashrc 

# append the following
export HADOOP_HOME=/home/user/hadoop/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_CONF_DIR=/home/user/hadoop/hadoop/etc/hadoop


# then apply the changes
source ~/.bashrc 
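A quick sanity check that the PATH entries took effect (the paths below repeat the assumed layout from `~/.bashrc` above):

```shell
# Re-create the entries from ~/.bashrc (assumed install path) and verify
# that the Hadoop bin directory is actually on PATH.
export HADOOP_HOME=/home/user/hadoop/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin missing from PATH" ;;
esac
```

On a correctly set-up node, `hadoop version` should also resolve after this.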

 

4. Configuration

Under /{user_home}/hadoop/hadoop/etc/hadoop/, edit the following files.

core-site.xml


<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenodeIp:9000</value>
        <description>
            Replace namenodeIp with the IP (or hostname) of the NameNode host.
        </description>
    </property>
</configuration>

 

hdfs-site.xml

<!-- =========== namenode =========== -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/data/hdfs/namenode,/opt/data02/hdfs/namenode</value>
    <description>
        Path on the local filesystem where the NameNode stores the
        namespace and transaction logs persistently. If this is a
        comma-delimited list of directories, the name table is
        replicated in all of the directories, for redundancy.
    </description>
</property>
<!-- =========== namenode =========== -->

<!-- =========== datanode =========== -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/data/hdfs/data,/opt/data02/hdfs/data</value>
    <description>
        If this is a comma-delimited list of directories,
        then data will be stored in all named directories,
        typically on different devices.
    </description>
</property>
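One setting worth considering alongside these (not in the original): `dfs.replication` defaults to 3, but this layout has only two DataNodes, so blocks could never reach full replication unless it is lowered. A hedged sketch:

```xml
<property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>
        With only two DataNodes, the default of 3 can never be met;
        2 keeps one redundant copy of each block.
    </description>
</property>
```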

 

yarn-site.xml

  
<!-- Configurations for ResourceManager: -->
<property>
    <name>yarn.resourcemanager.address</name>
    <value>node1:8832</value>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>node1:8830</value>
</property>

<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>node1:8831</value>
</property>

<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>node1:8833</value>
</property>

<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>node1:8888</value>
</property>

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rmhostname</value>
</property>

<!-- Configurations for NodeManager: -->
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/yarn/nm-local-dir,/data02/yarn/nm-local-dir</value>
</property>

<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/taiyi/hadoop/yarn/userlogs</value>
</property>

<property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/home/taiyi/hadoop/yarn/containerlogs</value>
</property>

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>61440</value>
    <description>
        Set according to the machine's actual memory (check with free -h).
    </description>
</property>
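The value 61440 MB is 60 GB. A plausible reading (an assumption, not stated in the original) is: total RAM from `free -h` minus a few GB reserved for the OS and the Hadoop daemons, e.g. a 64 GB machine with 4 GB held back. A minimal sketch:

```python
def nm_memory_mb(total_gb: int, reserved_gb: int = 4) -> int:
    """Memory to hand to the NodeManager, leaving reserved_gb for the
    OS and Hadoop daemons (a rule of thumb, not an official formula)."""
    if reserved_gb >= total_gb:
        raise ValueError("reservation must be smaller than total RAM")
    return (total_gb - reserved_gb) * 1024

print(nm_memory_mb(64))  # 61440, the value used above
```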

 

mapred-site.xml

<!-- Configurations for MapReduce JobHistory Server: -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>node2:10020</value>
</property>

<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node2:19888</value>
</property>
<!-- Configurations for MapReduce JobHistory Server: -->
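The stock Hadoop setup guides also set `mapreduce.framework.name` in this file; without it, MapReduce jobs run in the default local mode instead of on YARN. A likely-needed addition, flagged as such since the original omits it:

```xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Run MapReduce jobs on YARN rather than locally.</description>
</property>
```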

 

workers

Configure the worker nodes:

node1
node2

 

5. Distributing the Configuration and Creating Directories

Copy the configuration to the other node:

scp -r \
/home/user/hadoop/hadoop/etc/hadoop/ \
root@node2hostname:/home/user/hadoop/hadoop/etc/

Create the directories on all nodes:

mkdir -p /data/yarn/nm-local-dir /data02/yarn/nm-local-dir
chown -R user:user /data/yarn /data02/yarn

mkdir -p /opt/data/hdfs/namenode /opt/data02/hdfs/namenode /opt/data/hdfs/data /opt/data02/hdfs/data
chown -R user:user /opt/data /opt/data02
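The same directory set can be created in one loop. In this sketch, ROOT is a scratch prefix so the commands can be dry-run as a non-root user; on the real nodes set ROOT to the empty string (the chown lines above still apply):

```shell
# Dry-run prefix; use ROOT="" on the actual nodes.
ROOT=$(mktemp -d)

for d in /data/yarn/nm-local-dir /data02/yarn/nm-local-dir \
         /opt/data/hdfs/namenode /opt/data02/hdfs/namenode \
         /opt/data/hdfs/data /opt/data02/hdfs/data; do
  mkdir -p "$ROOT$d"
done

ls "$ROOT/opt/data/hdfs"   # shows: data  namenode
```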

 

6. Formatting

Run on the node hosting the NameNode:

hdfs namenode -format

Formatting succeeded if the output contains messages like:

2022-08-12 17:43:11,039 INFO common.Storage: Storage directory /Users/lianggao/MyWorkSpace/002install/hadoop-3.3.1/hadoop_repo/dfs/name has been successfully formatted.

2022-08-12 17:43:11,069 INFO namenode.FSImageFormatProtobuf: Saving image file /Users/lianggao/MyWorkSpace/002install/hadoop-3.3.1/hadoop_repo/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2022-08-12 17:43:11,200 INFO namenode.FSImageFormatProtobuf: Image file /Users/lianggao/MyWorkSpace/002install/hadoop-3.3.1/hadoop_repo/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 403 bytes saved in 0 seconds .

 

7. Managing the Daemons

7.1. HDFS

Start

node1

hdfs --daemon start namenode
hdfs --daemon start datanode

node2

hdfs --daemon start secondarynamenode
hdfs --daemon start datanode

Stop

hdfs --daemon stop namenode
hdfs --daemon stop secondarynamenode
hdfs --daemon stop datanode

 

7.2. YARN

Start

node1

yarn --daemon start resourcemanager
yarn --daemon start nodemanager

node2

mapred --daemon start historyserver
yarn --daemon start nodemanager

Stop

yarn --daemon stop resourcemanager
yarn --daemon stop nodemanager
mapred --daemon stop historyserver

 

8. Web UIs

http://node1:9870/     NameNode
http://node1:8888/     ResourceManager (per yarn.resourcemanager.webapp.address)
http://node2:19888/    JobHistory Server
