This article looks at how to deal with a Cassandra problem where truncating a table twice throws a consistency exception. It should be a useful reference for anyone who runs into the same issue.

Problem Description

I have a scalatest suite that's failing, and I have narrowed the cause down to the code that runs before the tests and truncates a data table. If I run the following code, I can reproduce the problem:

session.execute(s"TRUNCATE ${dao.tableName};")
session.execute(s"TRUNCATE ${dao.tableName};")

This throws:

    Error during truncate: Cannot achieve consistency level ALL
    com.datastax.driver.core.exceptions.TruncateException: Error during truncate: Cannot achieve consistency level ALL
        at com.datastax.driver.core.exceptions.TruncateException.copy(TruncateException.java:35)
        at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
        at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
        at com.datastax.driver.core.Session.execute(Session.java:126)
        at com.datastax.driver.core.Session.execute(Session.java:77)
        at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply$mcV$sp(PostingGroupDaoTest.scala:43)
        at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply(PostingGroupDaoTest.scala:39)
        at postingstore.cassandra.dao.PostingGroupDaoTest$$anonfun$2.apply(PostingGroupDaoTest.scala:39)
        at org.scalatest.FunSuite$$anon$1.apply(FunSuite.scala:1265)
        at org.scalatest.Suite$class.withFixture(Suite.scala:1974)
        at ledger.testsupport.JUnitFunSuiteTest.withFixture(JUnitFunSuiteTest.scala:10)
        at org.scalatest.FunSuite$class.invokeWithFixture$1(FunSuite.scala:1262)
        at ...
    Caused by: com.datastax.driver.core.exceptions.TruncateException: Error during truncate: Cannot achieve consistency level ALL
        at com.datastax.driver.core.Responses$Error.asException(Responses.java:91)
        at com.datastax.driver.core.ResultSetFuture$ResponseCallback.onSet(ResultSetFuture.java:122)
        at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:224)
        at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:361)
        at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:510)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

I'm using the datastax driver 2.0.0-RC2, and have a cluster of three nodes.
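For context, TRUNCATE is a cluster-wide operation, and the error message above shows it being checked at consistency level ALL, meaning every node must be up and reachable when it runs. A minimal, self-contained version of the reproduction might look like the sketch below; the contact point, keyspace, and table name are placeholders of mine, not values from the original test:

    import com.datastax.driver.core.Cluster

    object TruncateRepro extends App {
      // Contact point, keyspace, and table name are placeholders.
      val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
      val session = cluster.connect("test_keyspace")

      // TRUNCATE is cluster-wide; either call fails with
      // "Cannot achieve consistency level ALL" if any node is unhealthy.
      session.execute("TRUNCATE test_table;")
      session.execute("TRUNCATE test_table;")

      cluster.shutdown() // driver 2.0.x; later versions renamed this to close()
    }

With all three nodes healthy, both TRUNCATE calls succeed; with any node down or unresponsive, the call fails exactly as in the stack trace above.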

Recommended Answer

It turned out this was an issue with a node that had gotten into an inconsistent state due to running out of disk space.
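Given that root cause, one defensive option for test setup (a sketch of mine, not the original poster's fix; the helper name is hypothetical) is to check the driver's view of host health before truncating, so the suite fails fast with a descriptive message instead of an opaque TruncateException:

    import scala.collection.JavaConverters._
    import com.datastax.driver.core.Cluster

    // Fail fast before TRUNCATE if the driver sees any host as down.
    // `cluster` is assumed to be the Cluster behind the test's session.
    def assertAllHostsUp(cluster: Cluster): Unit = {
      val down = cluster.getMetadata.getAllHosts.asScala.filterNot(_.isUp)
      require(down.isEmpty, s"Refusing to TRUNCATE, hosts down: ${down.mkString(", ")}")
    }

Calling this at the top of the before-test hook surfaces the unhealthy node immediately, rather than partway through the truncates.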

That concludes this article on the consistency exception thrown when truncating a Cassandra table twice. Hopefully the answer above helps anyone facing the same problem.
