Django Scrapy TypeError: RepoSpider.start_requests() missing 1 required positional argument: 'url'

Problem description · votes: 0 · answers: 1

I'm trying to build a web application that fetches data from a repository. It's almost finished, but I'm currently stuck on this error.

Code:

Here is the spider code:

import scrapy
from App.models import Repo


class RepoSpider(scrapy.Spider):
    name = "RepoSpider"
    allowed_domains = ["github.com"]
    start_urls = []

    def start_requests(self, url):
        yield scrapy.Request(url)

    def parse(self, response):
        url = response.url
        url_parts = url.split('/')
        username = url_parts[-1]
        repo = url_parts[-2]

        description = response.css('.f4.my-3::text').get(default='').strip()
        language = response.css('.color-fg-default.text-bold.mr-1::text').get(default='')
        stars = response.css('a.Link.Link--muted strong::text').get(default='0').strip()

        yield {
            'username': username,
            'repo': repo,
            'description': description,
            'top_language': language,
            'stars': stars
        }

        scraped_repo = Repo(
            url=url,
            username=username,
            description=description,
            top_language=language,
            stars=stars
        )
        scraped_repo.save()

The Django view:

from django.shortcuts import render, redirect
from .models import Repo
from scrapy.crawler import CrawlerProcess
from .tester.tester.spiders.repo import RepoSpider

def index(request):

    if request.method =='POST':
        url = request.POST.get('url')

        process = CrawlerProcess()
        process.crawl(RepoSpider, url)
        process.start()
    return render(request, 'index.html')

I've tried everything I can think of, but I'm out of options now. This is a project I need to finish as soon as possible, and it means a lot to me.

python-3.x django scrapy
1 Answer

0 votes

You can update start_requests as shown below:

def start_requests(self):
    yield scrapy.Request(self.url)
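
For self.url to exist on the spider, the URL also has to be passed in as a spider argument: start_requests() is called by Scrapy with no extra arguments, and keyword arguments given to process.crawl() are forwarded to the spider's constructor, where Scrapy's default Spider.__init__ stores them as attributes. A minimal sketch of the adjusted view call, keeping everything else from the question unchanged:

from django.shortcuts import render
from scrapy.crawler import CrawlerProcess
from .tester.tester.spiders.repo import RepoSpider

def index(request):
    if request.method == 'POST':
        url = request.POST.get('url')

        process = CrawlerProcess()
        # Pass the URL as a keyword argument; Scrapy forwards it to the
        # spider's __init__, which makes it available as self.url.
        process.crawl(RepoSpider, url=url)
        process.start()
    return render(request, 'index.html')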