Tika parser slows down StormCrawler

Question (1 vote, 2 answers)

I have a fairly common task: I have thousands of websites and need to parse as many of them as possible (in a proper way, of course).

First, I set up a configuration similar to stormcrawlerfight, using the JSoup parser. Throughput was very good and very stable: around 8k fetches per minute.

Then I wanted to add the ability to parse PDF/doc/etc. documents, so I added the Tika parser to handle non-HTML documents. But then I saw metrics like this: (metrics screenshot)

Sometimes there are good minutes, and sometimes the rate drops to a few hundred per minute. When I remove the Tika stream definitions, everything goes back to normal. So the general question is: how do I find the cause of this behavior, i.e. the bottleneck? Maybe I'm missing some setting?

Here is what I see for the crawler topology in the Storm UI: (screenshot)

ES-injector.flux:

name: "injector"

includes:
- resource: true
  file: "/crawler-default.yaml"
  override: false

- resource: false
  file: "crawler-custom-conf.yaml"
  override: true

- resource: false
  file: "es-conf.yaml"
  override: true

spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.spout.FileSpout"
    parallelism: 1
    constructorArgs:
      - "."
      - "feeds.txt"
      - true

bolts:
  - id: "status"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt"
    parallelism: 1

streams:
  - from: "spout"
    to: "status"
    grouping:
      type: CUSTOM
      customClass:
        className: "com.digitalpebble.stormcrawler.util.URLStreamGrouping"
        constructorArgs:
          - "byHost"
      streamId: "status"

ES crawler.flux:

name: "crawler"

includes:
- resource: true
  file: "/crawler-default.yaml"
  override: false

- resource: false
  file: "crawler-custom-conf.yaml"
  override: true

- resource: false
  file: "es-conf.yaml"
  override: true

spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.AggregationSpout"
    parallelism: 10

bolts:
  - id: "partitioner"
    className: "com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt"
    parallelism: 1

  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 1

  - id: "sitemap"
    className: "com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt"
    parallelism: 1

  - id: "parse"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    parallelism: 5

  - id: "index"
    className: "com.digitalpebble.stormcrawler.elasticsearch.bolt.IndexerBolt"
    parallelism: 1

  - id: "status"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt"
    parallelism: 4

  - id: "status_metrics"
    className: "com.digitalpebble.stormcrawler.elasticsearch.metrics.StatusMetricsBolt"
    parallelism: 1

  - id: "redirection_bolt"
    className: "com.digitalpebble.stormcrawler.tika.RedirectionBolt"
    parallelism: 1

  - id: "parser_bolt"
    className: "com.digitalpebble.stormcrawler.tika.ParserBolt"
    parallelism: 1

streams:
  - from: "spout"
    to: "partitioner"
    grouping:
      type: SHUFFLE

  - from: "spout"
    to: "status_metrics"
    grouping:
      type: SHUFFLE     

  - from: "partitioner"
    to: "fetcher"
    grouping:
      type: FIELDS
      args: ["key"]

  - from: "fetcher"
    to: "sitemap"
    grouping:
      type: LOCAL_OR_SHUFFLE

  - from: "sitemap"
    to: "parse"
    grouping:
      type: LOCAL_OR_SHUFFLE

  # This is not needed as long as redirect_bolt is sending html content to index?
  # - from: "parse"
  #   to: "index"
  #   grouping:
  #     type: LOCAL_OR_SHUFFLE

  - from: "fetcher"
    to: "status"
    grouping:
      type: FIELDS
      args: ["url"]
      streamId: "status"

  - from: "sitemap"
    to: "status"
    grouping:
      type: FIELDS
      args: ["url"]
      streamId: "status"

  - from: "parse"
    to: "status"
    grouping:
      type: FIELDS
      args: ["url"]
      streamId: "status"

  - from: "index"
    to: "status"
    grouping:
      type: FIELDS
      args: ["url"]
      streamId: "status"

  - from: "parse"
    to: "redirection_bolt"
    grouping:
      type: LOCAL_OR_SHUFFLE

  - from: "redirection_bolt"
    to: "parser_bolt"
    grouping:
      type: LOCAL_OR_SHUFFLE
      streamId: "tika"

  - from: "redirection_bolt"
    to: "index"
    grouping:
      type: LOCAL_OR_SHUFFLE

  - from: "parser_bolt"
    to: "index"
    grouping:
      type: LOCAL_OR_SHUFFLE

Update: I found OutOfMemory errors in workers.log. Even though I set workers.heap.size to 4Gb, the worker processes grow to 10-15Gb.
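For reference, a minimal sketch of how worker heap is usually capped in a Storm custom config; these are standard Apache Storm keys, but exact names and behavior depend on the Storm version in use:

```yaml
# Hypothetical excerpt from crawler-custom-conf.yaml.
# worker.heap.memory.mb sets the default heap per worker process;
# topology.worker.max.heap.size.mb caps what a topology may request.
worker.heap.memory.mb: 4096
topology.worker.max.heap.size.mb: 4096
```

Note that off-heap memory (e.g. Tika's native buffers) is not bounded by these settings, which may explain a process footprint well above the configured heap.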

Update 2: After I limited memory usage I no longer see OutOfMemory errors, but performance is very low. (screenshots)

Without Tika, I see 15k fetches per minute. With Tika, there is only an initial spike, then just a few hundred per minute.

I see this in the worker logs: https://paste.ubuntu.com/p/WKBTBf8HMV/

CPU usage is very high, but there is nothing in the logs.

Tags: web-crawler, stormcrawler

2 Answers

1 vote

As you can see from the stats in the UI, the Tika parser bolt is the bottleneck: it has a capacity of 1.6 (a value > 1 means it can't process its input fast enough). Giving it the same parallelism as the JSoup parser, i.e. 4 or more, should improve things.
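Applied to the crawler.flux above, that would mean raising the parallelism of the Tika bolt entry, e.g.:

```yaml
  - id: "parser_bolt"
    className: "com.digitalpebble.stormcrawler.tika.ParserBolt"
    parallelism: 4
```

The value 4 here follows the answer's suggestion; the right number depends on the mix of binary documents in your crawl and on available worker cores.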


0 votes

A late reply, but it may be useful to others.

What happens when you use Tika in an open crawl is that the Tika parser bolt gets everything the JSoupParser bolt can't handle: zips, images, videos, etc. These URLs tend to be very heavy and slow to process, and incoming tuples quickly back up in the internal queues until memory blows up.
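As a generic mitigation for this kind of queue build-up (an assumption on my part, not something the answer prescribes), Storm can bound the number of unacked tuples in flight per spout, so a slow bolt back-pressures the spout instead of filling worker memory:

```yaml
# Hypothetical excerpt from crawler-custom-conf.yaml.
# Caps unacked tuples per spout task; tune to your topology's depth.
topology.max.spout.pending: 100
```

This limits throughput when everything is healthy, so it complements rather than replaces filtering out the heavy content types.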

I've just committed Set mimetype whitelist for Tika Parser #712, which allows you to define a set of regular expressions to be tried against the document's content type. If one matches, the document is processed; otherwise the tuple is sent to the STATUS stream as an error.

You can configure the whitelist like this:

parser.mimetype.whitelist:
 - application/.+word.*
 - application/.+excel.*
 - application/.+powerpoint.*
 - application/.*pdf.*

This should make your topology considerably faster and more stable. Let me know how it goes.
