I'd like to implement some unit tests in Scrapy (screen scraper/web crawler). Since a project is run through the "scrapy crawl" command, I can run it through something like nose. Since scrapy is built on top of Twisted, can I use its unit testing framework, Trial? If so, how? Otherwise I'd like to get nose working.

Update:

I've talked on Scrapy-Users and I guess I am supposed to "build the Response in the test code, then call the method with the response and assert that [I] get the expected items/requests in the output". I can't seem to get this to work though.

I can build a unittest test class and construct the Response in a test, but it ends up generating this traceback. Any insight as to why?
The way I've done it is to create fake responses; this way you can test the parse functions offline. But you get the real situation by using real HTML.

A problem with this approach is that your local HTML file may not reflect the latest state online. So if the HTML changes online, you may have a big bug, but your test cases will still pass. As a result, it may not be the best way to test this way.

My current workflow is: whenever there is an error, I send an email to admin with the url. Then, for that specific error, I create an html file with the content that is causing the error. Then I create a unittest for it.

This is the code I use to create sample Scrapy http responses for testing from a local html file:
# scrapyproject/tests/responses/__init__.py
import os

from scrapy.http import Response, Request


def fake_response_from_file(file_name, url=None):
    """
    Create a Scrapy fake HTTP response from a HTML file

    @param file_name: The relative filename from the responses directory,
                      but absolute paths are also accepted.
    @param url: The URL of the response.

    returns: A scrapy HTTP response which can be used for unittesting.
    """
    if not url:
        url = 'http://www.example.com'

    request = Request(url=url)
    if not file_name[0] == '/':
        responses_dir = os.path.dirname(os.path.realpath(__file__))
        file_path = os.path.join(responses_dir, file_name)
    else:
        file_path = file_name

    file_content = open(file_path, 'r').read()

    response = Response(url=url,
                        request=request,
                        body=file_content)
    response.encoding = 'utf-8'
    return response
The sample html file is placed at scrapyproject/tests/responses/osdir/sample.html.

Then the test case could look as follows (the test case location is scrapyproject/tests/test_osdir.py):
import unittest

from scrapyproject.spiders import osdir_spider
from responses import fake_response_from_file


class OsdirSpiderTest(unittest.TestCase):

    def setUp(self):
        self.spider = osdir_spider.DirectorySpider()

    def _test_item_results(self, results, expected_length):
        count = 0
        for item in results:
            count += 1
            self.assertIsNotNone(item['content'])
            self.assertIsNotNone(item['title'])
        self.assertEqual(count, expected_length)

    def test_parse(self):
        results = self.spider.parse(fake_response_from_file('osdir/sample.html'))
        self._test_item_results(results, 10)
That's basically how I test my parsing methods, but it isn't only for parse methods. If it gets more complex, I suggest looking at Mox.
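Mox is an older Google mocking library; the same idea works with the stdlib's unittest.mock. A minimal sketch, assuming a spider whose callback depends on an expensive helper (MySpider, lookup_category, and parse_link are hypothetical names, not part of any answer above):

```python
# Sketch: replacing a spider's expensive dependency with a stdlib mock,
# so the parsing logic can be tested in isolation.
from unittest import mock


class MySpider:
    def lookup_category(self, href):
        # stands in for something slow, e.g. a live service or database
        raise RuntimeError("talks to a live service")

    def parse_link(self, href):
        return {"href": href, "category": self.lookup_category(href)}


spider = MySpider()
with mock.patch.object(MySpider, "lookup_category", return_value="books"):
    item = spider.parse_link("image1.html")

print(item)  # {'href': 'image1.html', 'category': 'books'}
```

The patch is reverted when the `with` block exits, so other tests still see the real method.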
The newly added Spider Contracts are worth trying. They give you a simple way to add tests without requiring a lot of code.
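For illustration, a contract lives in the callback's docstring and is exercised with `scrapy check <spidername>`; this is only a sketch (the URL and the scraped field names below are placeholders, not from any spider in this thread):

```python
# Scrapy contracts are written in the callback docstring; `scrapy check`
# fetches the @url, runs the callback, and verifies the @returns/@scrapes
# conditions against its output.
def parse(self, response):
    """Parse a listing page.

    @url http://www.example.com/listing.html
    @returns items 1 16
    @returns requests 0 0
    @scrapes title content
    """
```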
I use Betamax to run the test on a real site the first time and keep the http responses locally, so that the next tests run super fast:

Betamax intercepts every request you make and attempts to find a matching request that has already been intercepted and recorded.

When you need to get the latest version of the site, just remove what betamax has recorded and re-run the test.
示例:
from scrapy import Spider, Request
from scrapy.http import HtmlResponse
class Example(Spider):
name = 'example'
url = 'http://doc.scrapy.org/en/latest/_static/selectors-sample1.html'
def start_requests(self):
yield Request(self.url, self.parse)
def parse(self, response):
for href in response.xpath('//a/@href').extract():
yield {'image_href': href}
# Test part
from betamax import Betamax
from betamax.fixtures.unittest import BetamaxTestCase
with Betamax.configure() as config:
# where betamax will store cassettes (http responses):
config.cassette_library_dir = 'cassettes'
config.preserve_exact_body_bytes = True
class TestExample(BetamaxTestCase): # superclass provides self.session
def test_parse(self):
example = Example()
# http response is recorded in a betamax cassette:
response = self.session.get(example.url)
# forge a scrapy response to test
scrapy_response = HtmlResponse(body=response.content, url=example.url)
result = example.parse(scrapy_response)
self.assertEqual({'image_href': u'image1.html'}, result.next())
self.assertEqual({'image_href': u'image2.html'}, result.next())
self.assertEqual({'image_href': u'image3.html'}, result.next())
self.assertEqual({'image_href': u'image4.html'}, result.next())
self.assertEqual({'image_href': u'image5.html'}, result.next())
with self.assertRaises(StopIteration):
result.next()
FYI, I discovered betamax at PyCon 2015 thanks to Ian Cordasco's talk.
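Removing a recording, as mentioned above, is just deleting files on disk; a sketch, assuming the cassette_library_dir configured above and Betamax's default JSON serializer:

```shell
# Cassettes live under the configured cassette_library_dir; deleting them
# forces Betamax to hit the network and re-record on the next run.
rm -f cassettes/*.json
```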
This is a very late answer, but I've been annoyed with scrapy testing, so I wrote scrapy-test, a framework for testing scrapy crawlers against defined specifications.

It works by defining test specifications rather than static outputs. For example, if we are crawling this sort of item:
{
    "name": "Alex",
    "age": 21,
    "gender": "Female"
}
We can define a scrapytest ItemSpec:

from scrapytest.tests import Match, MoreThan, LessThan, Type
from scrapytest.spec import ItemSpec


class MySpec(ItemSpec):
    name_test = Match('.{3,}')  # name should be at least 3 characters long
    age_test = Type(int), MoreThan(18), LessThan(99)
    gender_test = Match('Female|Male')
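The spec-over-static-output idea can be sketched in plain Python, independent of scrapy-test (the spec shape and the validate helper here are made up for illustration):

```python
# Spec-style validation: each field maps to a predicate instead of a fixed
# expected value, so tests survive cosmetic changes in the scraped data.
import re

spec = {
    "name": lambda v: isinstance(v, str) and len(v) >= 3,
    "age": lambda v: isinstance(v, int) and 18 < v < 99,
    "gender": lambda v: re.fullmatch("Female|Male", v) is not None,
}


def validate(item, spec):
    """Return the names of the fields that fail their predicate."""
    return [field for field, check in spec.items() if not check(item[field])]


item = {"name": "Alex", "age": 21, "gender": "Female"}
print(validate(item, spec))  # → []
```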
There are also idea tests for scrapy stats, as a StatsSpec:

from scrapytest.spec import StatsSpec
from scrapytest.tests import MoreThan


class MyStatsSpec(StatsSpec):
    validate = {
        'item_scraped_count': MoreThan(0),
    }

Afterwards it can be run against live or cached results:

$ scrapy-test
# or
$ scrapy-test --cache

I've been running it with cached runs for development changes and with daily cronjobs for detecting website changes.

I'm using Twisted's trial to run tests, similar to Scrapy's own tests. It already starts a reactor, so I make use of CrawlerRunner without worrying about starting and stopping one in the tests.

Stealing some ideas from the check and parse Scrapy commands, I ended up with the following base TestCase class to run assertions against live sites:
from twisted.internet import defer
from twisted.trial import unittest

from scrapy.crawler import CrawlerRunner
from scrapy.http import Request
from scrapy.item import BaseItem
from scrapy.utils.spider import iterate_spider_output


class SpiderTestCase(unittest.TestCase):

    def setUp(self):
        self.runner = CrawlerRunner()

    def make_test_class(self, cls, url):
        """
        Make a class that proxies to the original class,
        sets up a URL to be called, and gathers the items
        and requests returned by the parse function.
        """
        class TestSpider(cls):
            # This is a once used class, so writing into
            # the class variables is fine. The framework
            # will instantiate it, not us.
            items = []
            requests = []

            def start_requests(self):
                req = super(TestSpider, self).make_requests_from_url(url)
                req.meta["_callback"] = req.callback or self.parse
                req.callback = self.collect_output
                yield req

            def collect_output(self, response):
                try:
                    cb = response.request.meta["_callback"]
                    for x in iterate_spider_output(cb(response)):
                        if isinstance(x, (BaseItem, dict)):
                            self.items.append(x)
                        elif isinstance(x, Request):
                            self.requests.append(x)
                except Exception as ex:
                    print("ERROR", "Could not execute callback: ", ex)
                    raise ex

                # Returning any requests here would make the crawler follow them.
                return None

        return TestSpider
Example:

@defer.inlineCallbacks
def test_foo(self):
    tester = self.make_test_class(FooSpider, 'https://foo.com')
    yield self.runner.crawl(tester)
    self.assertEqual(len(tester.items), 1)
    self.assertEqual(len(tester.requests), 2)

Or perform one request in the setup and run multiple tests against the results:

@defer.inlineCallbacks
def setUp(self):
    super(FooTestCase, self).setUp()
    if FooTestCase.tester is None:
        FooTestCase.tester = self.make_test_class(FooSpider, 'https://foo.com')
        yield self.runner.crawl(self.tester)

def test_foo(self):
    self.assertEqual(len(self.tester.items), 1)

Slightly simpler, removing the def fake_response_from_file from the chosen answer:

import unittest
from spiders.my_spider import MySpider
from scrapy.selector import Selector


class TestParsers(unittest.TestCase):

    def setUp(self):
        self.spider = MySpider(limit=1)
        self.html = Selector(text=open('some.htm', 'r').read())

    def test_some_parse(self):
        expected = 'some-text'
        result = self.spider.some_parse(self.html)
        self.assertEqual(result, expected)


if __name__ == '__main__':
    unittest.main()

I'm using scrapy 1.3.0 and the function fake_response_from_file raises an error at:

response = Response(url=url, request=request, body=file_content)

I get:

raise AttributeError("Response content isn't text")

The solution is to use TextResponse instead, and it works ok, for example:

response = TextResponse(url=url, request=request, body=file_content)

Thanks a lot.

You can follow this snippet from the scrapy site to run it from a script. Then you can make any kind of asserts on the returned items.
Here's a package I wrote that greatly extends the functionality of the Scrapy Autounit library and takes it in a different direction (allowing easy dynamic updating of testcases and merging the processes of debugging/testcase generation). It also includes a modified version of the Scrapy parse command: https://github.com/ThomasAitken/Scrapy-Testmaster