Kubernetes on Ubuntu: microservices having problems talking to other hosts via Consul

Problem description (3 votes, 1 answer)

I have been at this for several weeks now and cannot make any progress on the following problem:

This video sums it up: https://www.youtube.com/watch?v=48gb1HBHuC8&t=358s

although since then the code itself / scripts have been updated. There are various shell scripts involved.

The microservice applications are written in Micronaut, and they appear to work fine when run in the documented way without going through Kubernetes (so we know the code itself works).

Now, trying to get this working through Kubernetes, I end up with the following:

kubectl get svc
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                   AGE
billing                      ClusterIP   10.104.228.223   <none>        8085/TCP                                                                  3h
front                        ClusterIP   10.107.198.62    <none>        8080/TCP                                                                  8m
kafka-service                ClusterIP   None             <none>        9093/TCP                                                                  3h
kind-cheetah-consul-dns      ClusterIP   10.101.52.36     <none>        53/TCP,53/UDP                                                             3h
kind-cheetah-consul-server   ClusterIP   None             <none>        8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   3h
kind-cheetah-consul-ui       ClusterIP   10.97.158.51     <none>        80/TCP                                                                    3h
kubernetes                   ClusterIP   10.96.0.1        <none>        443/TCP                                                                   3h
mongodb                      ClusterIP   10.104.205.91    <none>        27017/TCP                                                                 3h
react                        ClusterIP   10.106.74.166    <none>        3000/TCP                                                                  3h
stock                        ClusterIP   10.109.203.36    <none>        8083/TCP                                                                  9m
waiter                       ClusterIP   10.107.166.108   <none>        8084/TCP                                                                  3h
zipkin-deployment            NodePort    10.108.102.81    <none>        9411:31919/TCP                                                            3h
zk-cs                        ClusterIP   10.100.139.233   <none>        2181/TCP                                                                  3h
zk-hs                        ClusterIP   None             <none>        2888/TCP,3888/TCP                                                         3h

Note the service names front and stock; these are the two we will focus on.

The deployments are called front-deployment and stock-deployment; as services, according to Consul, they have been registered under the names:

stock-675d778b7d-bg98c:8083
stock:8083

These are resolvable names: in this case the stock deployment resolves to the IP 10.109.203.36 and is now known simply as stock.

We have the following pods:

kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
billing-59b66cb85d-24mnz             1/1     Running   13         3h
curl-775f9567b5-vzclh                1/1     Running   2          27m
front-7c6d588fd4-ftk7n               1/1     Running   2          18m
kafka-0                              1/1     Running   13         3h
kind-cheetah-consul-server-0         1/1     Running   4          3h
kind-cheetah-consul-wgwfk            1/1     Running   4          3h
mongodb-744f8f5d4-9mgh2              1/1     Running   4          3h
react-6b7f565d96-h5khb               1/1     Running   4          3h
stock-675d778b7d-bg98c               1/1     Running   2          18m
waiter-584b466754-bzs7s              1/1     Running   13         3h
zipkin-deployment-5bf954f879-tbhdf   1/1     Running   4          3h
zk-0    

If I run:

kubectl attach curl-775f9567b5-vzclh -c curl -i -t
If you don't see a command prompt, try pressing enter.
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup stock
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup front
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

If I run:

kubectl exec front-7c6d588fd4-ftk7n -- nslookup stock
nslookup: can't resolve '(null)': Name does not resolve

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local


$ kubectl exec stock-675d778b7d-bg98c -- nslookup front
nslookup: can't resolve '(null)': Name does not resolve

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

With any of these methods, DNS appears to be working fine.

If I run:

minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl 10.109.203.36:8083/stock/lookup/Budweiser
{"name":"Budweiser","bottles":1000,"barrels":2.0,"availablePints":654.636}$ 

The problem is:

 curl 10.107.198.62:8080/lookup/Budweiser
{"message":"Internal Server Error: The source Publisher is empty"}$ 
$ 

The curl call above hits the lookup method of the GatewayController in the beer-front application, which calls stockControllerClient.find; the latter should in turn call the StockController in the beer-stock application:

@Get("/lookup/{name}")
@ContinueSpan
public Maybe<BeerStock> lookup(@SpanTag("gateway.beerLookup") @NotBlank String name) {
    System.out.println("Looking up beer for "+name+" "+new Date());
    return stockControllerClient.find(name)
            .onErrorReturnItem(new BeerStock());
}
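
For reference, here is a minimal sketch of what the declarative client behind stockControllerClient.find probably looks like (the interface shape and the "stock" service id are assumptions on my part; the actual project may differ). Micronaut generates the HTTP call at runtime and resolves the id through the configured discovery clients:

import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.Client;
import io.reactivex.Maybe;

// Hypothetical sketch of a declarative client mirroring the beer-stock
// application's /stock/lookup/{name} endpoint.
@Client(id = "stock", path = "/stock")
public interface StockControllerClient {

    // The "stock" id is resolved through the configured discovery clients
    // (Consul and/or Kubernetes) when the method is invoked.
    @Get("/lookup/{name}")
    Maybe<BeerStock> find(String name);
}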

I can see that it is trying to call the client:

 kubectl logs front-7c6d588fd4-ftk7n
11:54:27.629 [main] INFO  i.m.context.env.DefaultEnvironment - Established active environments: [cloud, k8s]
11:54:31.662 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 4023ms. Server Running: http://front-7c6d588fd4-ftk7n:8080
11:54:32.168 [nioEventLoopGroup-1-3] INFO  i.m.d.registration.AutoRegistration - Registered service [gateway] with Consul
Looking up beer for Budweiser Tue Nov 27 12:13:38 GMT 2018
12:13:38.851 [nioEventLoopGroup-1-14] ERROR i.m.h.s.netty.RoutingInBoundHandler - Unexpected error occurred: The source Publisher is empty
java.util.NoSuchElementException: The source Publisher is empty

But none of the actual client methods ever seem to get through to the remote service.

The main problem is that I am not sure where the HttpClients are going wrong when they fail to connect to the remote services. If Consul were not configured correctly, the applications would fail to register themselves and would not start at all.
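
One way to narrow this down (a debugging sketch under assumed names, not part of the original project) is to ask Micronaut's DiscoveryClient directly which instances it resolves for the "stock" service id. The trace below shows the composite (consul,kubernetes) discovery client being consulted, so if this list comes back empty the HTTP client has nowhere to send the request:

import io.micronaut.discovery.DiscoveryClient;
import io.micronaut.discovery.ServiceInstance;
import io.reactivex.Flowable;

import javax.inject.Inject;
import javax.inject.Singleton;
import java.util.List;

// Hypothetical helper bean: dumps the instances that the composite
// discovery client (consul,kubernetes) resolves for the "stock" id.
@Singleton
public class StockDiscoveryDebugger {

    @Inject
    DiscoveryClient discoveryClient;

    public void dumpStockInstances() {
        List<ServiceInstance> instances = Flowable
                .fromPublisher(discoveryClient.getInstances("stock"))
                .blockingFirst();
        // An empty list here would point at discovery, not DNS, as the problem.
        instances.forEach(i -> System.out.println("stock instance: " + i.getURI()));
    }
}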

Versions:

 kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}


 $ helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}


$ minikube version
minikube version: v0.30.0

The following ports are forwarded to localhost:

ps auwx|grep kubectl
xxx       6916  0.0  0.1  50584  9952 pts/4    Sl   11:51   0:00 kubectl port-forward kind-cheetah-consul-server-0 8500:8500
xxx       7332  0.0  0.1  49524  9936 pts/4    Sl   11:52   0:00 kubectl port-forward react-6b7f565d96-h5khb 3000:3000
xxx       8704  0.0  0.1  49524  9644 pts/4    Sl   11:55   0:00 kubectl port-forward front-7c6d588fd4-ftk7n 8080:8080

As a point of interest, I enabled HTTP client tracing and hit the front application's current ip:8080/stock; this is the log that was produced:

 09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.health.HeartbeatTask@363a3d15 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.discovery.consul.ConsulServiceInstanceList@5d01ea21 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [io.micronaut.context.event.ApplicationEventListener] from candidates [Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList, Definition: io.micronaut.discovery.consul.registration.ConsulAutoRegistration, Definition: io.micronaut.http.client.scope.ClientScope, Definition: io.micronaut.health.HeartbeatTask, Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope] for qualifier: <HeartbeatEvent> 
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.http.client.scope.ClientScope
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope
09:34:27.930 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Found 1 beans for type [<HeartbeatEvent> io.micronaut.context.event.ApplicationEventListener]: [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] 
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Established event listeners [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Invoking event listener [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher pass(String checkId,String note)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.938 [nioEventLoopGroup-1-4] DEBUG i.m.d.registration.AutoRegistration - Successfully reported passing state to Consul
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.server.netty.NettyHttpServer - Server waiter-7dd7998f77-bfkbt:8084 Received Request: GET /waiter/beer/a
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matching route GET - /waiter/beer/a
09:34:30.604 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matched route GET - /waiter/beer/a to controller class micronaut.demo.beer.$WaiterControllerDefinition$Intercepted
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Single serveBeerToCustomer(String customerName)] invocation on target: micronaut.demo.beer.$WaiterControllerDefinition$Intercepted@a624fe7
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.validation.ValidatingInterceptor@6642e95d] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.607 [nioEventLoopGroup-1-12] TRACE o.h.v.i.e.c.SimpleConstraintTree - Validating value a against constraint defined by ConstraintDescriptorImpl{annotation=j.v.c.NotBlank, payloads=[], hasComposingConstraints=true, isReportAsSingleInvalidConstraint=false, elementType=PARAMETER, definedOn=DEFINED_IN_HIERARCHY, groups=[interface javax.validation.groups.Default], attributes={groups=[Ljava.lang.Class;@71cccd2d, message={javax.validation.constraints.NotBlank.message}, payload=[Ljava.lang.Class;@5044372c}, constraintType=GENERIC, valueUnwrapping=DEFAULT}.
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$$Lambda$449/1045761764@6d4672c0] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)] invocation on target: micronaut.demo.beer.client.TicketControllerClient$Intercepted@eaba75d
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.609 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Flowable getInstances(String serviceId)] invocation on target: compositeDiscoveryClient(consul,kubernetes)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.cache.interceptor.CacheInterceptor@2b772100] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$$Lambda$449/1045761764@19a66abd] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.691 [nioEventLoopGroup-1-12] ERROR i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] executed with error: Empty body
io.micronaut.http.client.exceptions.HttpClientResponseException: Empty body
    at io.micronaut.http.client.HttpClient.lambda$null$0(HttpClient.java:161)
    at java.util.Optional.orElseThrow(Optional.java:290)
    at io.micronaut.http.client.HttpClient.lambda$retrieve$1(HttpClient.java:161)
    at io.micronaut.core.async.publisher.Publishers$1.doOnNext(Publishers.java:143)
    at io.micronaut.core.async.subscriber.CompletionAwareSubscriber.onNext(CompletionAwareSubscriber.java:53)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapSubscriber.drain(FlowableSwitchMap.java:307)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapInnerSubscriber.onNext(FlowableSwitchMap.java:391)
    at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.onNext(FlowableSubscribeOn.java:97)
    at io.reactivex.internal.operators.flowable.FlowableOnErrorNext$OnErrorNextSubscriber.onNext(FlowableOnErrorNext.java:79)
    at io.reactivex.internal.operators.flowable.FlowableTimeoutTimed$TimeoutSubscriber.onNext(FlowableTimeoutTimed.java:99)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.lambda$onNext$1(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:53)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:52)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableCreate$NoOverflowBaseAsyncEmitter.onNext(FlowableCreate.java:403)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1773)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1705)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.micronaut.http.netty.stream.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:186)
    at io.micronaut.http.netty.stream.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:181)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    at io.micronaut.tracing.instrument.util.TracingRunnable.run(TracingRunnable.java:54)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
09:34:30.692 [nioEventLoopGroup-1-12] DEBUG i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] resolved fallback: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - No existing bean found for bean key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.693 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Resolving beans for type: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor 
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing beans for key: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Found 2 existing beans for type [<RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor]: [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc, io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] 
09:34:30.694 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Created bean [micronaut.demo.beer.client.NoCostTicket$Intercepted@77053015] from definition [Definition: micronaut.demo.beer.client.NoCostTicket$Intercepted] with qualifier [@Fallback]
 Blank beer from fall back being served
09:34:30.695 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Encoding emitted response object [micronaut.demo.beer.Beer@5caca659] using codec: io.micronaut.jackson.codec.JsonMediaTypeCodec@2ba33e2c

Any help would be greatly appreciated. The project linked above contains various shell scripts, and setting everything up and running it is fairly involved, so watching some of the video may be more practical.

UPDATE: I have basically got nowhere with this, but I really cannot move on. I have now upgraded to the latest consul-helm v0.5.0 and Micronaut 1.0.4 but still face the same problem. I am not sure whether this is normal:

09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id

I eventually made a very basic two-application version of this on this branch.

There is a newer, more comprehensive log, from a fresh install after running ./install-minikube.sh (this script would need the docker username changed if anyone else wants to run it): logs produced

Tags: kubernetes, consul, minikube, micronaut
1 Answer

0 votes

It looks like your beer front cannot connect to Consul, which is defined as a headless service. You will notice that kind-cheetah-consul-server has no ClusterIP. Can you try connecting directly to "kind-cheetah-consul-server-0.[headless service fqdn]", or just "kind-cheetah-consul-server-0"? Since your Consul runs as a StatefulSet, you will have stable pod names and DNS.
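
As a quick way to verify that (a hypothetical check, assuming the chart was installed into the default namespace): a StatefulSet pod behind a headless service gets a stable DNS record of the form <pod>.<headless-service>.<namespace>.svc.cluster.local, so from inside the cluster the Consul server should resolve like this:

import java.net.InetAddress;

// Hypothetical check: resolve the Consul server's stable StatefulSet DNS name
// from inside a pod (namespace "default" assumed).
public class ConsulDnsCheck {
    public static void main(String[] args) throws Exception {
        String host = "kind-cheetah-consul-server-0.kind-cheetah-consul-server.default.svc.cluster.local";
        System.out.println(host + " -> " + InetAddress.getByName(host).getHostAddress());
    }
}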
