BeautifulSoup - parsing clutch.co while respecting the site's robots rules and regulations

Problem description · 0 votes · 1 answer

I want to use Python and BeautifulSoup to scrape information from the Clutch.co website.

I want to collect data on the companies listed on clutch.co. As an example, take the Israeli IT agencies visible on clutch.co:

https://clutch.co/il/agencies/digital

My approach:

import requests
from bs4 import BeautifulSoup
import time

def scrape_clutch_digital_agencies(url):
    # Set a User-Agent header
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    }

    # Create a session to handle cookies
    session = requests.Session()

    # Check the robots.txt file
    robots_url = urljoin(url, '/robots.txt')
    robots_response = session.get(robots_url, headers=headers)

    # Print robots.txt content (for informational purposes)
    print("Robots.txt content:")
    print(robots_response.text)

    # Wait for a few seconds before making the first request
    time.sleep(2)

    # Send an HTTP request to the URL
    response = session.get(url, headers=headers)

    # Check if the request was successful (status code 200)
    if response.status_code == 200:
        # Parse the HTML content of the page
        soup = BeautifulSoup(response.text, 'html.parser')

        # Find the elements containing agency names (adjust this based on the website structure)
        agency_name_elements = soup.select('.company-info .company-name')

        # Extract and print the agency names
        agency_names = [element.get_text(strip=True) for element in agency_name_elements]

        print("Digital Agencies in Israel:")
        for name in agency_names:
            print(name)
    else:
        print(f"Failed to retrieve the page. Status code: {response.status_code}")

# Example usage
url = 'https://clutch.co/il/agencies/digital'
scrape_clutch_digital_agencies(url)

Well, frankly, I'm struggling with these conditions. I run this in Google Colab, and it returns the following in the developer console:

NameError                                 Traceback (most recent call last)

<ipython-input-1-cd8d48cf2638> in <cell line: 47>()
     45 # Example usage
     46 url = 'https://clutch.co/il/agencies/digital'
---> 47 scrape_clutch_digital_agencies(url)

<ipython-input-1-cd8d48cf2638> in scrape_clutch_digital_agencies(url)
     13 
     14     # Check the robots.txt file
---> 15     robots_url = urljoin(url, '/robots.txt')
     16     robots_response = session.get(robots_url, headers=headers)
     17 

NameError: name 'urljoin' is not defined

Well, I need to gain more insight here. I'm fairly sure I'm running afoul of the site's robot rules - robots are a point of interest for many sites - so I need to add something to my little bs4 script to handle that.

python pandas web-scraping beautifulsoup python-requests
1 Answer
1 vote

You have to import urljoin from the appropriate module before you can use

 urljoin(url, '/robots.txt')

in your code:

from urllib.parse import urljoin
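
For illustration, a minimal check of what urljoin resolves to in this case (using the URL from the question; the inline comment shows the expected output):

from urllib.parse import urljoin

url = 'https://clutch.co/il/agencies/digital'
robots_url = urljoin(url, '/robots.txt')
print(robots_url)  # https://clutch.co/robots.txt

Because '/robots.txt' is an absolute path, urljoin replaces the entire path of the base URL, which is why the result points at the site root.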

However, note that you will still run into errors because of the robots.txt, which is located at https://clutch.co/robots.txt.
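
If you want to check those rules programmatically, the standard library's urllib.robotparser can do it. A minimal sketch (the User-Agent string and target URL below are just taken from the question, not anything the site prescribes):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('https://clutch.co/robots.txt')
rp.read()  # fetches and parses the robots.txt

user_agent = 'Mozilla/5.0'  # assumption: a generic browser-like agent
target = 'https://clutch.co/il/agencies/digital'
print(rp.can_fetch(user_agent, target))  # True only if this agent may fetch the URL

Only request a path when can_fetch returns True; anything else is disallowed by the site's rules.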
