On Storm 0.10.0, even though I set workers = 1, two worker processes are started, and the UI reports workers = 1


I have a Storm topology in which I call: setNumWorkers(1);
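For reference, my submission code looks roughly like this; it is only a sketch, and MySpout, MyBolt, and the component names are placeholders for my real classes, not the actual topology:

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class BigStormJobTopology {
    public static void main(String[] args) throws Exception {
        // MySpout and MyBolt stand in for the real spout/bolt classes.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new MySpout());
        builder.setBolt("es-bolt", new MyBolt()).shuffleGrouping("kafka-spout");

        Config conf = new Config();
        conf.setNumWorkers(1);  // request exactly one worker process

        StormSubmitter.submitTopology("big-storm-job", conf, builder.createTopology());
    }
}

The only line that matters for this question is conf.setNumWorkers(1).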

When I look at the Storm UI for this running topology, I see "Num workers" set to 1.

However, when I log into the node running the supervisor, I see two processes with the same -Dworker.id and -Dworker.port settings. I have included the ps output for both processes below.

My question is: if I only requested one worker, why are there two processes that appear to be configured as worker processes? (Note: the Storm UI confirms that I have only one worker.)

This matters to me because when I do any profiling or analysis of the resources consumed by the topology, I want to know which process to zero in on.

ps output

root       787 20.0  0.6 5858228 78388 ?       Sl   05:04   0:00 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /opt/apache-storm-0.10.0/lib/log4j-slf4j-impl-2.1.jar:/opt/apache-storm-0.10.0/lib/servlet-api-2.5.jar:/opt/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/opt/apache-storm-0.10.0/lib/slf4j-api-1.7.7.jar:/opt/apache-storm-0.10.0/lib/hadoop-auth-2.4.0.jar:/opt/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/opt/apache-storm-0.10.0/lib/disruptor-2.10.4.jar:/opt/apache-storm-0.10.0/lib/storm-core-0.10.0.jar:/opt/apache-storm-0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/opt/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/opt/apache-storm-0.10.0/lib/asm-4.0.jar:/opt/apache-storm-0.10.0/lib/kryo-2.21.jar:/opt/apache-storm-0.10.0/lib/reflectasm-1.07-shaded.jar:/opt/apache-storm-0.10.0/lib/minlog-1.2.jar:/opt/apache-storm-0.10.0/conf:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/stormjar.jar -Dlogfile.name=big-storm-job-1-1487739502-worker-6700.log -Dstorm.home=/opt/apache-storm-0.10.0 -Dstorm.id=big-storm-job-1-1487739502 -Dworker.id=e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee -Dworker.port=6700 -Dstorm.log.dir=/opt/apache-storm-0.10.0/logs -Dlog4j.configurationFile=/opt/apache-storm-0.10.0/log4j2/worker.xml backtype.storm.LogWriter /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx768m -Djava.library.path=/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources/Linux-amd64:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources:/usr/local/lib:/opt/local/lib:/usr/lib -Dlogfile.name=big-storm-job-1-1487739502-worker-6700.log -Dstorm.home=/opt/apache-storm-0.10.0 -Dstorm.conf.file= -Dstorm.options= -Dstorm.log.dir=/opt/apache-storm-0.10.0/logs -Dlogging.sensitivity=S3 -Dlog4j.configurationFile=/opt/apache-storm-0.10.0/log4j2/worker.xml -Dstorm.id=big-storm-job-1-1487739502 -Dworker.id=e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee -Dworker.port=6700 -cp /opt/apache-storm-0.10.0/lib/log4j-slf4j-impl-2.1.jar:/opt/apache-storm-0.10.0/lib/servlet-api-2.5.jar:/opt/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/opt/apache-storm-0.10.0/lib/slf4j-api-1.7.7.jar:/opt/apache-storm-0.10.0/lib/hadoop-auth-2.4.0.jar:/opt/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/opt/apache-storm-0.10.0/lib/disruptor-2.10.4.jar:/opt/apache-storm-0.10.0/lib/storm-core-0.10.0.jar:/opt/apache-storm-0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/opt/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/opt/apache-storm-0.10.0/lib/asm-4.0.jar:/opt/apache-storm-0.10.0/lib/kryo-2.21.jar:/opt/apache-storm-0.10.0/lib/reflectasm-1.07-shaded.jar:/opt/apache-storm-0.10.0/lib/minlog-1.2.jar:/opt/apache-storm-0.10.0/conf:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/stormjar.jar backtype.storm.daemon.worker big-storm-job-1-1487739502 8fde2226-4b32-406d-8809-81ed88e5ae1f 6700 e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee

root       805  203  2.0 4308648 255336 ?      Sl   05:04   0:06 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx768m -Djava.library.path=/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources/Linux-amd64:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/resources:/usr/local/lib:/opt/local/lib:/usr/lib -Dlogfile.name=big-storm-job-1-1487739502-worker-6700.log -Dstorm.home=/opt/apache-storm-0.10.0 -Dstorm.conf.file= -Dstorm.options= -Dstorm.log.dir=/opt/apache-storm-0.10.0/logs -Dlogging.sensitivity=S3 -Dlog4j.configurationFile=/opt/apache-storm-0.10.0/log4j2/worker.xml -Dstorm.id=big-storm-job-1-1487739502 -Dworker.id=e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee -Dworker.port=6700 -cp /opt/apache-storm-0.10.0/lib/log4j-slf4j-impl-2.1.jar:/opt/apache-storm-0.10.0/lib/servlet-api-2.5.jar:/opt/apache-storm-0.10.0/lib/clojure-1.6.0.jar:/opt/apache-storm-0.10.0/lib/slf4j-api-1.7.7.jar:/opt/apache-storm-0.10.0/lib/hadoop-auth-2.4.0.jar:/opt/apache-storm-0.10.0/lib/log4j-api-2.1.jar:/opt/apache-storm-0.10.0/lib/disruptor-2.10.4.jar:/opt/apache-storm-0.10.0/lib/storm-core-0.10.0.jar:/opt/apache-storm-0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/opt/apache-storm-0.10.0/lib/log4j-core-2.1.jar:/opt/apache-storm-0.10.0/lib/asm-4.0.jar:/opt/apache-storm-0.10.0/lib/kryo-2.21.jar:/opt/apache-storm-0.10.0/lib/reflectasm-1.07-shaded.jar:/opt/apache-storm-0.10.0/lib/minlog-1.2.jar:/opt/apache-storm-0.10.0/conf:/opt/apache-storm-0.10.0/storm-local/supervisor/stormdist/big-storm-job-1-1487739502/stormjar.jar backtype.storm.daemon.worker big-storm-job-1-1487739502 8fde2226-4b32-406d-8809-81ed88e5ae1f 6700 e8e03e95-1fcc-492a-b5e4-51ef7b8db2ee

In case it helps give a better picture of my environment, here is my docker-compose configuration for Storm (and related services). Hope this helps.

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zk
    hostname: zk
    ports:
      - "2181:2181"
    networks:
      storm:
  kafka:
    image: wurstmeister/kafka:0.8.2.2-1
    container_name: kafka
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 10.211.55.4
      KAFKA_ZOOKEEPER_CONNECT: 10.211.55.4
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  nimbus:
    image: sunside/storm-nimbus
    container_name: storm-nimbus
    hostname: storm-nimbus
    ports:
        - "49773:49772"
        - "49772:49773"
        - "49627:49627"
    environment:
        - "LOCAL_HOSTNAME=nimbus"
        - "ZOOKEEPER_ADDRESS=zk"
        - "ZOOKEEPER_PORT=2181"
        - "NIMBUS_ADDRESS=nimbus"
        - "NIMBUS_THRIFT_PORT=49627"
        - "DRPC_PORT=49772"
        - "DRPCI_PORT=49773"
    volumes:
      - /media/psf/Home/dev/storm-pipeline:/pipeline
    networks:
      storm:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=supervisor"
      - "NIMBUS_ADDRESS=nimbus"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=zk"
      - "ZOOKEEPER_PORT=2181"
    networks:
      storm:
  ui:
    image: sunside/storm-ui
    container_name: storm-ui
    hostname: storm-ui
    ports:
      - "8888:8080"
    environment:
      - "LOCAL_HOSTNAME=ui"
      - "NIMBUS_ADDRESS=nimbus"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DPRCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=zk"
      - "ZOOKEEPER_PORT=2181"
    networks:
      storm:
  elasticsearch:
    image: elasticsearch:2.3
    container_name:  elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    networks:
      storm:
networks:
  storm:
    external: true
1 Answer

The answer to this mystery is that only one of the two processes is a true "worker" process (the class it executes is backtype.storm.daemon.worker). The other process shown in the ps output is a log writer process, whose main class is backtype.storm.LogWriter.

I should have noticed this in the command lines of the two processes. Oh well... now we know!
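To make the relationship between the two processes clearer, here is a rough conceptual sketch (not the actual Storm source) of what a wrapper like backtype.storm.LogWriter appears to do: it is launched with the full worker command line as its arguments, starts that command as a child process, and forwards the child's output so it ends up in the worker log. That is why every worker slot shows up as two JVMs in ps.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class LogWriterSketch {
    public static void main(String[] args) throws Exception {
        // args is assumed to be the worker command line,
        // e.g. "java -server ... backtype.storm.daemon.worker ..."
        ProcessBuilder pb = new ProcessBuilder(args);
        pb.redirectErrorStream(true);
        Process worker = pb.start();  // the real worker JVM: the second process in ps

        // Forward everything the worker prints; in Storm the log4j2 config
        // routes this output to the worker-<port>.log file.
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(worker.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.exit(worker.waitFor());
    }
}

So for profiling, the process to watch is the one whose main class is backtype.storm.daemon.worker (PID 805 in the ps output above), not the LogWriter wrapper.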
