Redisson client exception in Netty thread

Question · votes: 0 · answers: 1

I have my application deployed on OpenShift, and it also uses Redis. While it works most of the time, I still face an intermittent issue related to Redisson. When hitting the application URL, the error trace is as follows:

org.redisson.client.WriteRedisConnectionException: Unable to send command! Node source: NodeSource [slot=null, addr=null, redisClient=null, redirect=null, entry=MasterSlaveEntry [masterEntry=[freeSubscribeConnectionsAmount=0, freeSubscribeConnectionsCounter=value:49:queue:0, freeConnectionsAmount=31, freeConnectionsCounter=value:63:queue:0, freezed=false, freezeReason=null, client=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], nodeType=MASTER, firstFail=0]]], connection: RedisConnection@1568202974 [redisClient=[addr=redis://webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com:6379], channel=[id: 0xceaf7022, L:/10.103.34.74:32826 ! R:webapp-sessionstore.9m6hkf.ng.0001.apse2.cache.amazonaws.com/10.112.17.104:6379], currentCommand=CommandData [promise=RedissonPromise [promise=ImmediateEventExecutor$ImmediatePromise@68b1bc80(failure: java.util.concurrent.CancellationException)], command=(HMSET), params=[redisson:tomcat_session:306A0C0325AD2189A7FDDB695D0755D2, PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), PooledUnsafeDirectByteBuf(freed), ...], codec=org.redisson.codec.CompositeCodec@25e7216]], command: (HMSET), params: [redisson:tomcat_session:77C4BB9FC4252BFC2C8411F3A4DBB6C9, PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 24, cap: 256), PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 256)] after 3 retry attempts
    org.redisson.command.CommandAsyncService.checkWriteFuture(CommandAsyncService.java:872)
    org.redisson.command.CommandAsyncService.access$000(CommandAsyncService.java:97)
    org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:791)
    org.redisson.command.CommandAsyncService$7.operationComplete(CommandAsyncService.java:788)
    io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:502)
    io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:476)
    io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:415)
    io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:540)
    io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:533)
    io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:114)
    io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:1018)
    io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:874)
    io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1365)
    io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:716)
    io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
    io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
    io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
    io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
    io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
    io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:405)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
    io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)
Root Cause

io.netty.channel.ExtendedClosedChannelException
    io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Note: The full stack trace of the root cause is available in the server logs.
Tags: java · redis · openshift · netty
1 Answer

0 votes

This could be due to increased load on the Redis cluster, since it is shared by a number of applications. As a workaround, what I tried was redeploying the application each time I saw this, which forced a connection reset and cleared the error. As I said, this is only a workaround; a permanent solution might be a dedicated Redis cluster for your application, which again depends on your application's architecture and size.
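Beyond a dedicated cluster, it may also help to make Redisson more tolerant of brief connection drops, since the trace shows the command failing "after 3 retry attempts" on a closed channel. The sketch below is a hypothetical client configuration, not taken from the question: the endpoint and all values are assumptions to illustrate which knobs exist.

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonSetup {

    public static RedissonClient create() {
        Config config = new Config();
        config.useSingleServer()
              // Hypothetical endpoint; substitute your ElastiCache address.
              .setAddress("redis://example-sessionstore:6379")
              // Default is 3 attempts; raising it rides out brief channel resets.
              .setRetryAttempts(5)
              .setRetryInterval(1500)          // ms between retry attempts
              // Periodic PING keeps idle connections from being silently
              // closed by intermediate network gear or the server.
              .setPingConnectionInterval(30000)
              .setConnectTimeout(10000)        // ms to establish a connection
              .setTimeout(3000);               // ms to wait for a command reply
        return Redisson.create(config);
    }
}
```

If the `ClosedChannelException` comes from idle connections being dropped somewhere between the pod and ElastiCache, `setPingConnectionInterval` in particular is worth trying; if it comes from genuine overload on a shared cluster, only the dedicated-cluster option above addresses the root cause.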
