
Problem Description


I am trying to set up HDFS and Cloudera Manager via the Cloudera Manager API. However, I am stuck at a specific point:

I set up all the HDFS roles, but the NameNode refuses to communicate with the DataNodes. The relevant error from the DataNode log:

Initialization failed for Block pool BP-1653676587-172.168.215.10-1435054001015 (Datanode Uuid null) service to master.adastragrp.com/172.168.215.10:8022 Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(172.168.215.11, datanodeUuid=1a114e5d-2243-442f-8603-8905b988bea7, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=cluster4;nsid=103396489;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:917)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5085)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1140)
    at

My DNS is configured via the hosts file, so I thought the following answer applied and tried its solution, without success: https://stackoverflow.com/a/29598059/1319284

However, I have another small cluster with basically the same configuration as far as I can tell, which is working. DNS is configured through /etc/hosts as well, but here I set up the cluster via Cloudera Manager GUI instead of the API.
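Since name resolution here comes from /etc/hosts, one quick sanity check (a diagnostic sketch, not part of the original question) is to see what address a node's own FQDN resolves to locally; if it comes back as 127.0.0.1, that loopback address is what gets reported upstream. The cluster hostname mentioned in the comment is taken from the log excerpt above and will not resolve on other machines:

```python
import socket

def resolved_ip(hostname):
    """Return the IPv4 address the local resolver returns for hostname."""
    return socket.gethostbyname(hostname)

# On a misconfigured node, the cluster FQDN may resolve to loopback, e.g.:
#   resolved_ip("master.adastragrp.com")  ->  "127.0.0.1"
# (hostname taken from the log above; it only resolves inside that cluster)

print(resolved_ip("localhost"))  # -> 127.0.0.1
```

Running this with each cluster FQDN on each host shows immediately whether any of them map to loopback instead of their real address.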

After that, I finally found the configuration directory of the running NameNode process, and in it a dfs_hosts_include file. Opening it revealed that only 127.0.0.1 was included. On the working cluster, all the nodes are listed in that file. I found a similar oddity in topology.map:

<?xml version="1.0" encoding="UTF-8"?>

<!--Autogenerated by Cloudera Manager-->
<topology>
  <node name="master.adastragrp.com" rack="/default"/>
  <node name="127.0.0.1" rack="/default"/>
  <node name="slave.adastragrp.com" rack="/default"/>
  <node name="127.0.0.1" rack="/default"/>
</topology>

... That doesn't look right. Again, on the working cluster the IPs are as expected.
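For comparison, a healthy topology.map pairs each hostname entry with the host's real address rather than loopback. Using the two addresses that appear in the log excerpt above, the expected file would look roughly like this (a sketch, not copied from the working cluster):

```xml
<?xml version="1.0" encoding="UTF-8"?>

<!--Autogenerated by Cloudera Manager-->
<topology>
  <node name="master.adastragrp.com" rack="/default"/>
  <node name="172.168.215.10" rack="/default"/>
  <node name="slave.adastragrp.com" rack="/default"/>
  <node name="172.168.215.11" rack="/default"/>
</topology>
```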

Not only do I not know what went wrong, I also do not know how to influence these files, as they are all auto-generated by Cloudera Manager. Has anyone seen this before and could provide guidance here?

Solution

I finally found the source of the problem: /etc/cloudera-scm-agent/config.ini

I generated this file with a template, and ended up with

listening_ip=127.0.0.1

which the cloudera-cm-agent happily reported to the server. For more information, see the question Salt changing /etc/hosts, but still caching old one?
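The fix that follows is to stop pinning the agent to the loopback address. A minimal sketch of the relevant part of /etc/cloudera-scm-agent/config.ini (the server_host value is a placeholder for this cluster; the key point is that the listening_ip line is removed or commented out so the agent reports the host's real address):

```ini
[General]
# Cloudera Manager server this agent reports to (placeholder hostname).
server_host=master.adastragrp.com

# Do NOT pin this to 127.0.0.1 -- leaving it unset lets the agent
# report the host's real IP to the server:
# listening_ip=127.0.0.1
```

After changing it, restart the agent (e.g. `service cloudera-scm-agent restart`) so it re-registers with the correct address; Cloudera Manager then regenerates dfs_hosts_include and topology.map with the real IPs.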

