Scrapy pipelines load, but don't work

Problem description (votes: 3, answers: 2)

I have a Scrapy project that loads its pipelines but never passes any items to them. Any help is appreciated.

A stripped-down version of the spider:

#imports
class MySpider(CrawlSpider):
  #RULES AND STUFF 

  def parse_item(self, response):
    '''Takes HTML response and turns it into an item ready for database.  I hope.
    '''
    #A LOT OF CODE
    return item

At this point printing the item produces the expected output, and settings.py is simple enough:

ITEM_PIPELINES = [
  'mySpider.pipelines.MySpiderPipeline',
  'mySpider.pipelines.PipeCleaner',
  'mySpider.pipelines.DBWriter',
]
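
(The list form above was valid for Scrapy at the time of the question; newer Scrapy releases expect ITEM_PIPELINES to be a dict mapping each pipeline path to an order number, with lower numbers running first. Roughly, with arbitrary order values:)

ITEM_PIPELINES = {
  'mySpider.pipelines.MySpiderPipeline': 100,
  'mySpider.pipelines.PipeCleaner': 200,
  'mySpider.pipelines.DBWriter': 300,
}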

And the pipelines, which seem correct (imports omitted):

class MySpiderPipeline(object):
  def process_item(self, item, spider):
    print 'PIPELINE: got ', item['name']
    return item

class DBWriter(object):
  """Writes each item to a DB.  I hope.
  """
  def __init__(self):
    self.dbpool = adbapi.ConnectionPool('MySQLdb'
                                        , host=settings['HOST']
                                        , port=int(settings['PORT'])
                                        , user=settings['USER']
                                        , passwd=settings['PASS']
                                        , db=settings['BASE']
                                        , cursorclass=MySQLdb.cursors.DictCursor
                                        , charset='utf8'
                                        , use_unicode=True
                                        )
    print('init DBWriter')

  def process_item(self, item, spider):
    print 'DBWriter process_item'
    query = self.dbpool.runInteraction(self._insert, item)
    query.addErrback(self.handle_error)
    return item

  def _insert(self, tx, item):
    print 'DBWriter _insert'
    # A LOT OF UNRELATED CODE HERE
    return item

class PipeCleaner(object):
  def __init__(self):
    print 'Cleaning these pipes.'

  def process_item(self, item, spider):
    print item['name'], ' is cleeeeaaaaannn!!'
    return item
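
For reference, the imports those pipeline classes rely on (omitted above) would look roughly like the following; the scrapy.conf style of settings access matches Scrapy of that era, and MySQLdb as the adbapi driver is inferred from the ConnectionPool call, so treat this as an assumption rather than the poster's actual code:

# Assumed imports for the pipelines above (the question omits them)
import MySQLdb
import MySQLdb.cursors                  # provides MySQLdb.cursors.DictCursor
from twisted.enterprise import adbapi   # Twisted's async DB-API pool used by DBWriter
from scrapy.conf import settings        # old-style global settings access (pre-1.0 Scrapy)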

When I run the spider, I get this startup output:

Cleaning these pipes.
init DBWriter
2012-10-23 15:30:04-0400 [scrapy] DEBUG: Enabled item pipelines: MySpiderPipeline, PipeCleaner, DBWriter

Unlike their __init__ methods, which do print to the screen when the crawler starts, the process_item methods never print (or process) anything. I'm crossing my fingers that I've just forgotten something simple.

scrapy pipeline
2 Answers
1 vote
2012-10-23 15:30:04-0400 [scrapy] DEBUG: Enabled item pipelines: MySpiderPipeline, PipeCleaner, DBWriter

This line shows that your pipelines are initialized and all of them are fine.

The problem is in your crawler class:

class MySpider(CrawlSpider):
  #RULES AND STUFF 

  def parse_item(self, response):
    '''Takes HTML response and turns it into an item ready for database.  I hope.
    '''
    #A LOT OF CODE
    # before returning the item, print it
    return item

I think you should print the item just before returning it from MySpider.
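
A minimal way to do that check, assuming the item really has the 'name' field the pipelines read (this snippet is illustrative, not the original spider code):

  def parse_item(self, response):
    #A LOT OF CODE
    print 'SPIDER about to return:', item['name']   # should show up before any PIPELINE/DBWriter lines
    return item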


1 vote

"Better late than never"

#imports
class MySpider(CrawlSpider):
  #RULES AND STUFF 

  def parse_item(self, response):
    '''Takes HTML response and turns it into an item ready for database.  I hope.
    '''
    #A LOT OF CODE
    yield item       # <-- yield instead of return
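
Using yield turns parse_item into a generator: Scrapy iterates over everything the callback produces and sends each yielded item through the enabled pipelines, and the same callback can also emit several items or follow-up Requests from one response. A minimal sketch of that pattern (the Request line and next_url are placeholders, not part of the original spider):

#imports
from scrapy.http import Request

class MySpider(CrawlSpider):
  #RULES AND STUFF

  def parse_item(self, response):
    '''Same callback as a generator: every yielded object is handed back to Scrapy.'''
    #A LOT OF CODE
    yield item
    # the callback may also yield more items, or Requests to crawl further pages:
    # yield Request(next_url, callback=self.parse_item)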