Getting a "No available slots for topology" error on Storm Nimbus


I am new to apache-storm and am trying to set up a local Storm cluster. I have set up ZooKeeper using the steps from this link. ZooKeeper starts and runs fine, but when I start Nimbus with the storm nimbus command, I see a "No available slots for topology" error in the nimbus.log file.
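For reference, these are the commands I use to bring the daemons up (the standard Storm and ZooKeeper launch commands; treat this as a sketch that assumes each distribution's bin directory is on the PATH):

    zkServer.sh start     # start ZooKeeper first
    storm nimbus          # the daemon whose log is shown below
    storm supervisor      # supervisors are what provide worker slots
    storm ui              # optional, serves the web UI on ui.port (8081 here)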

My nimbus.log file:

    SendThread(kubernetes.docker.internal:2181) [INFO] Opening socket connection to server kubernetes.docker.internal/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    2020-05-25 14:51:37.260 o.a.s.z.ClientZookeeper main [INFO] Starting ZK Curator
    2020-05-25 14:51:37.260 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl main [INFO] Starting
    2020-05-25 14:51:37.261 o.a.s.s.o.a.z.ZooKeeper main [INFO] Initiating client connection, connectString=127.0.0.1:2181/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState@35beb15e
    2020-05-25 14:51:37.261 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Socket connection established to kubernetes.docker.internal/127.0.0.1:2181, initiating session
    2020-05-25 14:51:37.263 o.a.s.s.o.a.c.f.i.CuratorFrameworkImpl main [INFO] Default schema
    2020-05-25 14:51:37.264 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Opening socket connection to server kubernetes.docker.internal/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    2020-05-25 14:51:37.265 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Session establishment complete on server kubernetes.docker.internal/127.0.0.1:2181, sessionid = 0x1000ebc40020006, negotiated timeout = 20000
    2020-05-25 14:51:37.266 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Socket connection established to kubernetes.docker.internal/127.0.0.1:2181, initiating session
    2020-05-25 14:51:37.266 o.a.s.s.o.a.c.f.s.ConnectionStateManager main-EventThread [INFO] State change: CONNECTED
    2020-05-25 14:51:37.270 o.a.s.s.o.a.z.ClientCnxn main-SendThread(kubernetes.docker.internal:2181) [INFO] Session establishment complete on server kubernetes.docker.internal/127.0.0.1:2181, sessionid = 0x1000ebc40020007, negotiated timeout = 20000
    2020-05-25 14:51:37.271 o.a.s.s.o.a.c.f.s.ConnectionStateManager main-EventThread [INFO] State change: CONNECTED
    2020-05-25 14:51:41.791 o.a.s.n.NimbusInfo main [INFO] Nimbus figures out its name to 7480-GQY29H2.smarshcorp.com
    2020-05-25 14:51:41.817 o.a.s.d.n.Nimbus main [INFO] Starting Nimbus with conf {storm.messaging.netty.min_wait_ms=100, topology.backpressure.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.resource.isolation.plugin=org.apache.storm.container.cgroup.CgroupManager, storm.zookeeper.auth.user=null, storm.messaging.netty.buffer_size=5242880, storm.exhibitor.port=8080, topology.bolt.wait.progressive.level1.count=1, pacemaker.auth.method=NONE, ui.filter=null, worker.profiler.enabled=false, executor.metrics.frequency.secs=60, supervisor.thrift.threads=16, ui.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, supervisor.supervisors.commands=[], supervisor.queue.size=128, logviewer.cleanup.age.mins=10080, topology.tuple.serializer=org.apache.storm.serialization.types.ListDelegateSerializer, storm.cgroup.memory.enforcement.enable=false, drpc.port=3772, topology.max.spout.pending=null, topology.transfer.buffer.size=1000, nimbus.worker.heartbeats.recovery.strategy.class=org.apache.storm.nimbus.TimeOutWorkerHeartbeatsRecoveryStrategy, worker.metrics={CGroupMemory=org.apache.storm.metric.cgroup.CGroupMemoryUsage, CGroupMemoryLimit=org.apache.storm.metric.cgroup.CGroupMemoryLimit, CGroupCpu=org.apache.storm.metric.cgroup.CGroupCpu, CGroupCpuGuarantee=org.apache.storm.metric.cgroup.CGroupCpuGuarantee}, logviewer.port=8000, worker.childopts=-Xmx%HEAP-MEM%m -XX:+PrintGCDetails -Xloggc:artifacts/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=artifacts/heapdump, topology.component.cpu.pcore.percent=10.0, storm.daemon.metrics.reporter.plugins=[org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter], blacklist.scheduler.resume.time.secs=1800, drpc.childopts=-Xmx768m, nimbus.task.launch.secs=120, logviewer.childopts=-Xmx128m, storm.supervisor.hard.memory.limit.overage.mb=2024, storm.zookeeper.servers=[127.0.0.1], storm.messaging.transport=org.apache.storm.messaging.netty.Context, storm.messaging.netty.authentication=false, topology.localityaware.higher.bound=0.8, storm.cgroup.memory.limit.tolerance.margin.mb=0.0, storm.cgroup.hierarchy.name=storm, storm.metricprocessor.class=org.apache.storm.metricstore.NimbusMetricProcessor, topology.kryo.factory=org.apache.storm.serialization.DefaultKryoFactory, nimbus.assignments.service.threads=10, worker.heap.memory.mb=768, storm.network.topography.plugin=org.apache.storm.networktopography.DefaultRackDNSToSwitchMapping, supervisor.slots.ports=[6700, 6701, 6702, 6703], topology.stats.sample.rate=0.05, storm.local.dir=/Users/anshita.singh/storm/datadir/storm, topology.backpressure.wait.park.microsec=100, topology.ras.constraint.max.state.search=10000, topology.testing.always.try.serialize=false, nimbus.assignments.service.thread.queue.size=100, storm.principal.tolocal=org.apache.storm.security.auth.DefaultPrincipalToLocal, java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/lib64, nimbus.local.assignments.backend.class=org.apache.storm.assignments.InMemoryAssignmentBackend, worker.gc.childopts=, storm.group.mapping.service.cache.duration.secs=120, topology.multilang.serializer=org.apache.storm.multilang.JsonSerializer, drpc.request.timeout.secs=600, nimbus.blobstore.class=org.apache.storm.blobstore.LocalFsBlobStore, topology.state.synchronization.timeout.secs=60, topology.bolt.wait.progressive.level2.count=1000, topology.worker.shared.thread.pool.size=4, 
topology.executor.receive.buffer.size=32768, pacemaker.servers=[], supervisor.monitor.frequency.secs=3, storm.nimbus.retry.times=5, topology.transfer.batch.size=1, transactional.zookeeper.port=null, storm.auth.simple-white-list.users=[], topology.scheduler.strategy=org.apache.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy, storm.zookeeper.port=2181, storm.zookeeper.retry.intervalceiling.millis=30000, storm.cluster.state.store=org.apache.storm.cluster.ZKStateStorageFactory, nimbus.thrift.port=6627, blacklist.scheduler.tolerance.count=3, nimbus.thrift.threads=64, supervisor.supervisors=[], nimbus.seeds=[localhost], supervisor.slot.ports=-6700 -6701 -6702 -6703, storm.cluster.metrics.consumer.publish.interval.secs=60, logviewer.filter.params=null, topology.min.replication.count=1, nimbus.blobstore.expiration.secs=600, storm.group.mapping.service=org.apache.storm.security.auth.ShellBasedGroupsMapping, storm.nimbus.retry.interval.millis=2000, topology.max.task.parallelism=null, topology.backpressure.wait.progressive.level2.count=1000, drpc.https.keystore.password=*****, resource.aware.scheduler.constraint.max.state.search=100000, supervisor.heartbeat.frequency.secs=5, nimbus.credential.renewers.freq.secs=600, storm.supervisor.medium.memory.grace.period.ms=30000, storm.thrift.transport=org.apache.storm.security.auth.SimpleTransportPlugin, storm.cgroup.hierarchy.dir=/cgroup/storm_resources, storm.zookeeper.auth.password=null, ui.port=8081, drpc.authorizer.acl.strict=false, topology.message.timeout.secs=30, topology.error.throttle.interval.secs=10, topology.backpressure.check.millis=50, drpc.https.keystore.type=JKS, supervisor.memory.capacity.mb=4096.0, storm.metricstore.class=org.apache.storm.metricstore.rocksdb.RocksDbStore, drpc.authorizer.acl.filename=drpc-auth-acl.yaml, topology.builtin.metrics.bucket.size.secs=60, topology.spout.wait.park.microsec=100, storm.local.mode.zmq=false, pacemaker.client.max.threads=2, ui.header.buffer.bytes=4096, topology.shellbolt.max.pending=100, topology.serialized.message.size.metrics=false, drpc.max_buffer_size=1048576, drpc.disable.http.binding=true, storm.codedistributor.class=org.apache.storm.codedistributor.LocalFileSystemCodeDistributor, worker.profiler.childopts=-XX:+UnlockCommercialFeatures -XX:+FlightRecorder, nimbus.supervisor.timeout.secs=60, storm.supervisor.cgroup.rootdir=storm, topology.worker.max.heap.size.mb=768.0, storm.zookeeper.root=/storm, topology.disable.loadaware.messaging=false, storm.supervisor.hard.memory.limit.multiplier=2.0, nimbus.topology.validator=org.apache.storm.nimbus.DefaultTopologyValidator, worker.heartbeat.frequency.secs=1, storm.messaging.netty.max_wait_ms=1000, topology.backpressure.wait.progressive.level1.count=1, topology.max.error.report.per.interval=5, nimbus.thrift.max_buffer_size=1048576, storm.metricstore.rocksdb.location=storm_rocks, storm.supervisor.low.memory.threshold.mb=1024, pacemaker.max.threads=50, ui.pagination=20, ui.disable.http.binding=true, supervisor.blobstore.download.max_retries=3, topology.enable.message.timeouts=true, logviewer.disable.http.binding=true, storm.messaging.netty.transfer.batch.size=262144, topology.spout.wait.progressive.level2.count=0, blacklist.scheduler.strategy=org.apache.storm.scheduler.blacklist.strategies.DefaultBlacklistStrategy, storm.metricstore.rocksdb.retention_hours=240, supervisor.run.worker.as.user=false, storm.messaging.netty.client_worker_threads=1, topology.tasks=null, supervisor.thrift.socket.timeout.ms=5000, 
storm.group.mapping.service.params=null, drpc.http.port=3774, transactional.zookeeper.root=/transactional, supervisor.blobstore.download.thread.count=5, logviewer.filter=null, pacemaker.kerberos.users=[], topology.spout.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, storm.blobstore.inputstream.buffer.size.bytes=65536, supervisor.worker.heartbeats.max.timeout.secs=600, supervisor.worker.timeout.secs=30, topology.worker.receiver.thread.count=1, logviewer.max.sum.worker.logs.size.mb=4096, topology.executor.overflow.limit=0, topology.batch.flush.interval.millis=1, nimbus.file.copy.expiration.secs=600, pacemaker.port=6699, topology.worker.logwriter.childopts=-Xmx64m, drpc.http.creds.plugin=org.apache.storm.security.auth.DefaultHttpCredentialsPlugin, nimbus.topology.blobstore.deletion.delay.ms=300000, storm.blobstore.acl.validation.enabled=false, ui.filter.params=null, topology.workers=1, blacklist.scheduler.tolerance.time.secs=300, storm.supervisor.medium.memory.threshold.mb=1536, topology.environment=null, drpc.invocations.port=3773, storm.metricstore.rocksdb.create_if_missing=true, nimbus.cleanup.inbox.freq.secs=600, client.blobstore.class=org.apache.storm.blobstore.NimbusBlobStore, topology.fall.back.on.java.serialization=true, storm.nimbus.retry.intervalceiling.millis=60000, storm.nimbus.zookeeper.acls.fixup=true, logviewer.appender.name=A1, ui.users=null, pacemaker.childopts=-Xmx1024m, storm.messaging.netty.server_worker_threads=1, scheduler.display.resource=false, ui.actions.enabled=true, storm.thrift.socket.timeout.ms=600000, storm.topology.classpath.beginning.enabled=false, storm.zookeeper.connection.timeout=15000, topology.tick.tuple.freq.secs=null, nimbus.inbox.jar.expiration.secs=3600, topology.debug=false, storm.zookeeper.retry.interval=1000, storm.messaging.netty.buffer.high.watermark=16777216, storm.blobstore.dependency.jar.upload.chunk.size.bytes=1048576, worker.log.level.reset.poll.secs=30, storm.exhibitor.poll.uripath=/exhibitor/v1/cluster/list, storm.zookeeper.retry.times=5, nimbus.code.sync.freq.secs=120, topology.component.resources.offheap.memory.mb=0.0, topology.spout.wait.progressive.level1.count=0, topology.state.checkpoint.interval.ms=1000, topology.priority=29, supervisor.localizer.cleanup.interval.ms=30000, nimbus.host=127.0.0.1, storm.health.check.dir=healthchecks, supervisor.cpu.capacity=400.0, topology.backpressure.wait.progressive.level3.sleep.millis=1, storm.cgroup.resources=[cpu, memory], storm.worker.min.cpu.pcore.percent=0.0, topology.classpath=null, storm.nimbus.zookeeper.acls.check=true, num.stat.buckets=20, topology.spout.wait.progressive.level3.sleep.millis=1, supervisor.localizer.cache.target.size.mb=10240, topology.worker.childopts=null, drpc.https.port=-1, topology.bolt.wait.park.microsec=100, topology.max.replication.wait.time.sec=60, storm.cgroup.cgexec.cmd=/bin/cgexec, topology.acker.executors=null, topology.bolt.wait.progressive.level3.sleep.millis=1, supervisor.worker.start.timeout.secs=120, supervisor.worker.shutdown.sleep.secs=3, logviewer.max.per.worker.logs.size.mb=2048, topology.trident.batch.emit.interval.millis=500, task.heartbeat.frequency.secs=3, supervisor.enable=true, supervisor.thrift.max_buffer_size=1048576, supervisor.blobstore.class=org.apache.storm.blobstore.NimbusBlobStore, topology.producer.batch.size=1, drpc.worker.threads=64, resource.aware.scheduler.priority.strategy=org.apache.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy, 
blacklist.scheduler.reporter=org.apache.storm.scheduler.blacklist.reporters.LogReporter, storm.messaging.netty.socket.backlog=500, storm.cgroup.inherit.cpuset.configs=false, nimbus.queue.size=100000, drpc.queue.size=128, ui.disable.spout.lag.monitoring=true, topology.eventlogger.executors=0, pacemaker.base.threads=10, nimbus.childopts=-Xmx1024m, topology.spout.recvq.skips=3, storm.resource.isolation.plugin.enable=false, nimbus.monitor.freq.secs=10, storm.supervisor.memory.limit.tolerance.margin.mb=128.0, storm.disable.symlinks=false, topology.localityaware.lower.bound=0.2, transactional.zookeeper.servers=null, nimbus.task.timeout.secs=30, logs.users=null, pacemaker.thrift.message.size.max=10485760, ui.host=0.0.0.0, supervisor.thrift.port=6628, topology.bolt.wait.strategy=org.apache.storm.policy.WaitStrategyProgressive, pacemaker.thread.timeout=10, storm.meta.serialization.delegate=org.apache.storm.serialization.GzipThriftSerializationDelegate, dev.zookeeper.path=/tmp/dev-storm-zookeeper, topology.skip.missing.kryo.registrations=false, drpc.invocations.threads=64, storm.zookeeper.session.timeout=20000, storm.metricstore.rocksdb.metadata_string_cache_capacity=4000, storm.workers.artifacts.dir=workers-artifacts, topology.component.resources.onheap.memory.mb=128.0, storm.log4j2.conf.dir=log4j2, storm.cluster.mode=distributed, ui.childopts=-Xmx768m, task.refresh.poll.secs=10, supervisor.childopts=-Xmx256m, task.credentials.poll.secs=30, storm.health.check.timeout.ms=5000, storm.blobstore.replication.factor=3, worker.profiler.command=flight.bash, storm.messaging.netty.buffer.low.watermark=8388608}
    2020-05-25 14:51:41.877 o.a.s.z.LeaderElectorImp main [INFO] Queued up for leader lock.
    2020-05-25 14:51:41.907 o.a.s.n.NimbusInfo main-EventThread [INFO] Nimbus figures out its name to 7480-GQY29H2.smarshcorp.com
    2020-05-25 14:51:41.929 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] Sync remote assignments and id-info to local
    2020-05-25 14:51:41.963 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormcode.ser/7480-GQY29H2.smarshcorp.com:6627-1
    2020-05-25 14:51:41.999 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormcode.ser/7480-GQY29H2.smarshcorp.com:6627-1
    2020-05-25 14:51:42.015 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormconf.ser/7480-GQY29H2.smarshcorp.com:6627-1
    2020-05-25 14:51:42.035 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormjar.jar/7480-GQY29H2.smarshcorp.com:6627-1
    2020-05-25 14:51:42.052 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/Stock-Topology-1-1589133962-stormconf.ser/7480-GQY29H2.smarshcorp.com:6627-1
    2020-05-25 14:51:42.081 o.a.s.c.StormClusterStateImpl main [INFO] set-path: /blobstore/word-topology-1-1589738489-stormjar.jar/7480-GQY29H2.smarshcorp.com:6627-1
    2020-05-25 14:51:42.104 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] active-topology-blobs [Stock-Topology-1-1589133962,word-topology-1-1589738489] local-topology-blobs [word-topology-1-1589738489-stormcode.ser,Stock-Topology-1-1589133962-stormcode.ser,word-topology-1-1589738489-stormconf.ser,Stock-Topology-1-1589133962-stormjar.jar,Stock-Topology-1-1589133962-stormconf.ser,word-topology-1-1589738489-stormjar.jar] diff-topology-blobs []
    2020-05-25 14:51:42.239 o.a.s.d.m.ClientMetricsUtils main [INFO] Using statistics reporter plugin:org.apache.storm.daemon.metrics.reporters.JmxPreparableReporter
    2020-05-25 14:51:42.297 o.a.s.d.m.r.JmxPreparableReporter main [INFO] Preparing...
    2020-05-25 14:51:42.322 o.a.s.m.StormMetricsRegistry main [INFO] Started statistics report plugin...
    2020-05-25 14:51:42.327 o.a.s.d.n.Nimbus main [INFO] Starting nimbus server for storm version '2.1.0'
    2020-05-25 14:51:42.408 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] active-topology-dependencies [] local-blobs [word-topology-1-1589738489-stormcode.ser,Stock-Topology-1-1589133962-stormcode.ser,word-topology-1-1589738489-stormconf.ser,Stock-Topology-1-1589133962-stormjar.jar,Stock-Topology-1-1589133962-stormconf.ser,word-topology-1-1589738489-stormjar.jar] diff-topology-dependencies []
    2020-05-25 14:51:42.409 o.a.s.n.LeaderListenerCallback main-EventThread [INFO] Accepting leadership, all active topologies and corresponding dependencies found locally.
    2020-05-25 14:51:42.409 o.a.s.z.LeaderListenerCallbackFactory main-EventThread [INFO] 7480-GQY29H2.smarshcorp.com gained leadership.
    2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
    2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
    2020-05-25 14:51:42.603 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
    2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
    2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
    2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
    2020-05-25 14:51:42.604 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
    2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:51:42.618 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:51:42.619 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:51:45.096 o.a.s.d.n.Nimbus timer [INFO] TRANSITION: word-topology-1-1589738489 GAIN_LEADERSHIP null false
    2020-05-25 14:51:45.098 o.a.s.d.n.Nimbus timer [INFO] TRANSITION: Stock-Topology-1-1589133962 GAIN_LEADERSHIP null false
    2020-05-25 14:51:52.682 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
    2020-05-25 14:51:52.682 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
    2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
    2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
    2020-05-25 14:51:52.683 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
    2020-05-25 14:51:52.684 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
    2020-05-25 14:51:52.684 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
    2020-05-25 14:51:52.685 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:51:52.686 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:51:52.686 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:51:52.687 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
    2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
    2020-05-25 14:52:02.734 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
    2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
    2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
    2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
    2020-05-25 14:52:02.735 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
    2020-05-25 14:52:02.736 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:52:02.737 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
    2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
    2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
    2020-05-25 14:52:12.773 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
    2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
    2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
    2020-05-25 14:52:12.774 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
    2020-05-25 14:52:12.774 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:52:12.774 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:52:12.775 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:52:12.775 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology
    2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[4, 4] not alive
    2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[1, 1] not alive
    2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[2, 2] not alive
    2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor word-topology-1-1589738489:[3, 3] not alive
    2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[1, 1] not alive
    2020-05-25 14:52:22.809 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[3, 3] not alive
    2020-05-25 14:52:22.810 o.a.s.d.n.HeartbeatCache timer [INFO] Executor Stock-Topology-1-1589133962:[2, 2] not alive
    2020-05-25 14:52:22.811 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: word-topology
    2020-05-25 14:52:22.811 o.a.s.s.EvenScheduler timer [ERROR] No available slots for topology: Stock-Topology

Here is my storm.yml:

storm.zookeeper.servers:
     - "127.0.0.1"

nimbus.host: "127.0.0.1"
ui.port: 8081
storm.local.dir: "/Users/anshita.singh/storm/datadir/storm"
supervisor.slot.ports: 
    -6700
    -6701
    -6702
    -6703

# storm.zookeeper.servers:
#     - "server1"
#     - "server2"
# 
# nimbus.seeds: ["host1", "host2", "host3"]
# 
# 
# ##### These may optionally be filled in:
#    
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"

## Metrics Consumers
## max.retain.metric.tuples
## - task queue will be unbounded when max.retain.metric.tuples is equal or less than 0.
## whitelist / blacklist
## - when none of configuration for metric filter are specified, it'll be treated as 'pass all'.
## - you need to specify either whitelist or blacklist, or none of them. You can't specify both of them.
## - you can specify multiple whitelist / blacklist with regular expression
## expandMapType: expand metric with map type as value to multiple metrics
## - set to true when you would like to apply filter to expanded metrics
## - default value is false which is backward compatible value
## metricNameSeparator: separator between origin metric name and key of entry from map
## - only effective when expandMapType is set to true
## - default value is "."
# topology.metrics.consumer.register:
#   - class: "org.apache.storm.metric.LoggingMetricsConsumer"
#     max.retain.metric.tuples: 100
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     max.retain.metric.tuples: 100
#     whitelist:
#       - "execute.*"
#       - "^__complete-latency$"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"
#     expandMapType: true
#     metricNameSeparator: "."

## Cluster Metrics Consumers
# storm.cluster.metrics.consumer.register:
#   - class: "org.apache.storm.metric.LoggingClusterMetricsConsumer"
#   - class: "org.mycompany.MyMetricsConsumer"
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"
#
# storm.cluster.metrics.consumer.publish.interval.secs: 60

# Event Logger
# topology.event.logger.register:
#   - class: "org.apache.storm.metric.FileBasedEventLogger"
#   - class: "org.mycompany.MyEventLogger"
#     arguments:
#       endpoint: "event-logger.mycompany.org"

# Metrics v2 configuration (optional)
#storm.metrics.reporters:
#  # Graphite Reporter
#  - class: "org.apache.storm.metrics2.reporters.GraphiteStormReporter"
#    daemons:
#        - "supervisor"
#        - "nimbus"
#        - "worker"
#    report.period: 60
#    report.period.units: "SECONDS"
#    graphite.host: "localhost"
#    graphite.port: 2003
#
#  # Console Reporter
#  - class: "org.apache.storm.metrics2.reporters.ConsoleStormReporter"
#    daemons:
#        - "worker"
#    report.period: 10
#    report.period.units: "SECONDS"
#    filter:
#        class: "org.apache.storm.metrics2.filters.RegexFilter"
#        expression: ".*my_component.*emitted.*"

Can anyone tell me what configuration I am missing? And please let me know if any other information is needed to debug this.

My environment:

  1. Apache-storm-2.1.0
  2. Apache-zookeeper-3.6.1

1 Answer

It looks like there were some corrupt topologies lying around. Running the following command resolved the issue:

storm admin remove_corrupt_topologies
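As far as I understand it, storm admin remove_corrupt_topologies goes through ZooKeeper and the blobstore and removes topologies whose blobs are missing or corrupted, which matches the stale word-topology and Stock-Topology entries that Nimbus keeps trying to schedule in the log above.

A side observation from the posted config, separate from this fix: the Nimbus log dump shows supervisor.slot.ports=-6700 -6701 -6702 -6703 being read back as a single string, while the slots that actually take effect come from the default of the documented key, supervisor.slots.ports=[6700, 6701, 6702, 6703]. If the intent is to set the ports explicitly, the block would need the documented key name and a space after each dash, roughly:

    supervisor.slots.ports:
        - 6700
        - 6701
        - 6702
        - 6703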