This post covers a question about how to resolve node token collisions when starting Cassandra nodes in a VMware cluster, along with the recommended answer.

Problem Description

Are there any known issues with initial_token collision when adding nodes to a cluster in a VM environment? I'm working on a 4-node cluster set up on VMs, and we're running into issues when we attempt to add nodes to the cluster.

In the cassandra.yaml file, initial_token is left blank. Since we're running Cassandra > 1.0, auto_bootstrap should be true by default. It's my understanding that each of the nodes in the cluster should be assigned an initial token at startup. That is not what we're currently seeing. We do not want to manually set the value of initial_token for each node (which kind of defeats the goal of being dynamic). We have also set the partitioner to random: partitioner: org.apache.cassandra.dht.RandomPartitioner

I've outlined the steps we follow and the results we're seeing below. Can someone please advise as to what we're missing here?

Here are the detailed steps we are taking:

1) Kill all cassandra instances and delete the data & commit log files on each node.

2) Start up the first node. Starts up fine.

3) Run nodetool -h W.W.W.W ring and see:

    Address    DC           Rack   Status  State   Load       Effective-Ownership  Token
    S.S.S.S    datacenter1  rack1  Up      Normal  28.37 GB   100.00%              24360745721352799263907128727168388463

4) X.X.X.X startup:

    INFO [GossipStage:1] 2012-11-29 21:16:02,194 Gossiper.java (line 850) Node /X.X.X.X is now part of the cluster
    INFO [GossipStage:1] 2012-11-29 21:16:02,194 Gossiper.java (line 816) InetAddress /X.X.X.X is now UP
    INFO [GossipStage:1] 2012-11-29 21:16:02,195 StorageService.java (line 1138) Nodes /X.X.X.X and /Y.Y.Y.Y have the same token 113436792799830839333714191906879955254. /X.X.X.X is the new owner
    WARN [GossipStage:1] 2012-11-29 21:16:02,195 TokenMetadata.java (line 160) Token 113436792799830839333714191906879955254 changing ownership from /Y.Y.Y.Y to /X.X.X.X

5) Run nodetool -h W.W.W.W ring and see:

    Address    DC           Rack   Status  State   Load       Effective-Ownership  Token
                                                                                   113436792799830839333714191906879955254
    S.S.S.S    datacenter1  rack1  Up      Normal  28.37 GB   100.00%              24360745721352799263907128727168388463
    W.W.W.W    datacenter1  rack1  Up      Normal  123.87 KB  100.00%              113436792799830839333714191906879955254

6) Y.Y.Y.Y startup:

    INFO [GossipStage:1] 2012-11-29 21:17:36,458 Gossiper.java (line 850) Node /Y.Y.Y.Y is now part of the cluster
    INFO [GossipStage:1] 2012-11-29 21:17:36,459 Gossiper.java (line 816) InetAddress /Y.Y.Y.Y is now UP
    INFO [GossipStage:1] 2012-11-29 21:17:36,459 StorageService.java (line 1138) Nodes /Y.Y.Y.Y and /X.X.X.X have the same token 113436792799830839333714191906879955254. /Y.Y.Y.Y is the new owner
    WARN [GossipStage:1] 2012-11-29 21:17:36,459 TokenMetadata.java (line 160) Token 113436792799830839333714191906879955254 changing ownership from /X.X.X.X to /Y.Y.Y.Y

7) Run nodetool -h W.W.W.W ring and see:

    Address    DC           Rack   Status  State   Load       Effective-Ownership  Token
                                                                                   113436792799830839333714191906879955254
    S.S.S.S    datacenter1  rack1  Up      Normal  28.37 GB   100.00%              24360745721352799263907128727168388463
    Y.Y.Y.Y    datacenter1  rack1  Up      Normal  123.87 KB  100.00%              113436792799830839333714191906879955254

8) Z.Z.Z.Z startup:

    INFO [GossipStage:1] 2012-11-30 04:52:28,590 Gossiper.java (line 850) Node /Z.Z.Z.Z is now part of the cluster
    INFO [GossipStage:1] 2012-11-30 04:52:28,591 Gossiper.java (line 816) InetAddress /Z.Z.Z.Z is now UP
    INFO [GossipStage:1] 2012-11-30 04:52:28,591 StorageService.java (line 1138) Nodes /Z.Z.Z.Z and /Y.Y.Y.Y have the same token 113436792799830839333714191906879955254. /Z.Z.Z.Z is the new owner
    WARN [GossipStage:1] 2012-11-30 04:52:28,592 TokenMetadata.java (line 160) Token 113436792799830839333714191906879955254 changing ownership from /Y.Y.Y.Y to /Z.Z.Z.Z

9) Run nodetool -h W.W.W.W ring and see:

    Address    DC           Rack   Status  State   Load       Effective-Ownership  Token
                                                                                   113436792799830839333714191906879955254
    W.W.W.W    datacenter1  rack1  Up      Normal  28.37 GB   100.00%              24360745721352799263907128727168388463
    S.S.S.S    datacenter1  rack1  Up      Normal  28.37 GB   100.00%              24360745721352799263907128727168388463
    Z.Z.Z.Z    datacenter1  rack1  Up      Normal  123.87 KB  100.00%              113436792799830839333714191906879955254

Recommended Answer

Clearly your nodes are holding onto some past cluster information that is being used at startup. Make sure to delete the LocationInfo directories, which contain the data about the cluster. You also have a very strange token layout (where's the 0 token, for example?), so you're certainly going to need to reassign tokens if you want proper ownership.
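As a concrete illustration of that cleanup, here is a minimal shell sketch. It assumes the default packaged install paths under /var/lib/cassandra; substitute whatever data_file_directories and commitlog_directory point to in your cassandra.yaml. Run it on each node while Cassandra is stopped:

    # Stale cluster/token state cached by Cassandra 1.x lives in the
    # LocationInfo system column family under the data directory.
    sudo rm -rf /var/lib/cassandra/data/system/LocationInfo*

    # For a completely clean first boot (step 1 of the question), also
    # remove the data files and commit logs on every node:
    sudo rm -rf /var/lib/cassandra/data/*
    sudo rm -rf /var/lib/cassandra/commitlog/*

Note that the full wipe removes the system keyspace and therefore subsumes the targeted delete; removing only LocationInfo is the lighter option when you want to keep application data.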
It may help to explain how token assignment works, so let me also address that. In a brand-new cluster, the first node gets assigned token 0 by default and has 100% ownership. If you do not specify a token for your next node, Cassandra will calculate a token such that the original node owns the lower 50% and the new node the higher 50%.

When you add node 3, it will insert its token between the first and second, so you'll actually end up with ownership that looks like 25%, 25%, 50%. This is really important, because the lesson to learn here is that Cassandra will NEVER reassign a token by itself to balance the ring. If you want your ownership balanced properly, you must assign your own tokens. This is not hard to do, and there's actually a utility provided to do it.

So Cassandra's initial bootstrap process, while dynamic, may not yield the desired ring balance. You can't simply allow new nodes to join willy-nilly without some intervention to make sure you get the desired result. Otherwise you will end up with the scenario laid out in the question.
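To make the token arithmetic concrete, here is a minimal sketch of computing balanced initial_token values for the RandomPartitioner, whose token space runs from 0 to 2^127 - 1. The formula token_i = i * 2^127 / N is the standard one for this partitioner and is presumably what the token-generation utility mentioned above computes; the node count of 4 matches the question, everything else is an assumption:

    # Balanced RandomPartitioner tokens for an N-node ring: i * 2^127 / N.
    # bc handles the 128-bit arithmetic that plain shell cannot.
    N=4
    for i in $(seq 0 $((N - 1))); do
      echo "node $i initial_token: $(echo "$i * 2^127 / $N" | bc)"
    done

Each node then gets its own value as initial_token in cassandra.yaml before its first start; for a 4-node ring that yields 0, 42535295865117307932921825928971026432, 85070591730234615865843651857942052864, and 127605887595351923798765477786913079296.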
That concludes this look at resolving node token collisions when starting Cassandra nodes in a VMware cluster; hopefully the recommended answer above is helpful.