Slow timeout 500 msec/cross-node warning

Problem description

I have a three-node Cassandra cluster.

When I request a large amount of data from my Java client, I get the following warnings on the server side:

 WARN SELECT * FROM [...] time 789 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 947 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 1027 msec - slow timeout 500 msec/cross-node
 WARN SELECT * FROM [...] time 819 msec - slow timeout 500 msec/cross-node

On the client side, I eventually end up with the following exception:

java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.TransportException: [/x.y.z.a:9042] Connection has been closed

My server configuration yaml is as follows:

 # How long the coordinator should wait for read operations to complete
 read_request_timeout_in_ms: 5000
 # How long the coordinator should wait for seq or index scans to complete
 range_request_timeout_in_ms: 10000
 # How long the coordinator should wait for writes to complete
 write_request_timeout_in_ms: 2000
 # How long the coordinator should wait for counter writes to complete
 counter_write_request_timeout_in_ms: 5000
 # How long a coordinator should continue to retry a CAS operation
 # that contends with other proposals for the same row
 cas_contention_timeout_in_ms: 1000
 # How long the coordinator should wait for truncates to complete
 # (This can be much longer, because unless auto_snapshot is disabled
 # we need to flush first so we can snapshot before removing the data.)
 truncate_request_timeout_in_ms: 60000
 # The default timeout for other, miscellaneous operations
 request_timeout_in_ms: 10000

I have not found any reference to a "500 msec" timeout. How can I tune this timeout? Is there any option to avoid ending up with an exception when querying a large number of partitions / a lot of data?

As a side note, I retrieve the data asynchronously using futures:

 import com.datastax.driver.core.ResultSetFuture;
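For context, a minimal sketch of how such an asynchronous read might look with the 3.x DataStax Java driver. The contact point, keyspace/table names, and fetch size are assumptions, not taken from the question; limiting the fetch size keeps each page small, which can help avoid the server-side slow-query warnings on large result sets.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class AsyncReadSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("x.y.z.a")               // contact point is an assumption
                .build();
             Session session = cluster.connect()) {
            // Hypothetical table; limit the page size so each round trip stays small
            Statement stmt = new SimpleStatement("SELECT * FROM my_keyspace.my_table");
            stmt.setFetchSize(500);                       // rows fetched per page
            ResultSetFuture future = session.executeAsync(stmt);
            // For the demo, block on the future; real code would add a callback
            ResultSet rs = future.getUninterruptibly();
            for (Row row : rs) {
                System.out.println(row);
            }
        }
    }
}
```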
cassandra cassandra-3.0 datastax-java-driver

1 Answer

The default slow_query_log_timeout_in_ms is 500, and it is not an actual timeout, only notification/logging. If you want it higher, you can update it in your yaml.
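For example, the threshold can be raised in cassandra.yaml (the 2000 ms value below is just an illustration; pick whatever threshold suits your latency expectations):

```yaml
# cassandra.yaml: only log queries slower than 2 seconds
slow_query_log_timeout_in_ms: 2000
```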

That said, 500 msec is quite slow and may indicate a problem with your environment or your queries. Although rare, it could also just be periodic GC, which can be mitigated with client-side speculative retries.
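Speculative retries can be enabled on the client with the 3.x driver's speculative execution policy. A sketch, with the contact point and the delay/attempt values chosen as assumptions; note that speculative executions only apply to statements marked idempotent:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.ConstantSpeculativeExecutionPolicy;

public class SpeculativeRetrySketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("x.y.z.a")               // contact point is an assumption
                .withSpeculativeExecutionPolicy(
                        // if no response after 200 ms, send up to 2 extra attempts
                        // to other replicas
                        new ConstantSpeculativeExecutionPolicy(200, 2))
                .build();
        // Statements must be marked idempotent (stmt.setIdempotent(true))
        // for the policy to kick in.
        cluster.close();
    }
}
```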
