Script to extract all images from a web page


I am trying to extract all images from a web page using the code below, but it raises the error 'NoneType' object has no attribute 'group'. Can anyone tell me what is wrong here?

import re
import requests
from bs4 import BeautifulSoup

site = 'http://pixabay.com'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]


for url in urls:
    filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
    with open(filename.group(1), 'wb') as f:
        if 'http' not in url:
            # sometimes an image source can be relative 
            # if it is provide the base url which also happens 
            # to be the site variable atm. 
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)
Tags: python, python-3.x, beautifulsoup
1 Answer

Edit: for context, since the original question has been updated by someone else and the original code was changed, the pattern the asker originally used was r'/([\w_-]+.)$'. That was the original problem, and this context should make the answer below easier to follow:

I opted for a pattern like r'/([\w_.-]+)$'. The pattern you used did not allow the filename to contain a . except as its very last character, because outside a character class [] the . means "any character", and yours appeared only immediately before $ (end of string). So I moved the . inside the [], where it is treated as a literal dot and allowed anywhere in the group. That lets the pattern capture the image filename at the end of the URL.
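To see the difference concretely (using a made-up image URL for illustration):

```python
import re

url = 'http://pixabay.com/static/img/public/header_logo.png'

# Original pattern: outside a character class, `.` matches any single
# character, so a literal dot is only tolerated as the final character of
# the name -- "header_logo.png" has a dot earlier, so nothing matches.
print(re.search(r'/([\w_-]+.)$', url))           # None

# With the dot inside the class it is literal, so the whole filename
# (dot included) is captured.
print(re.search(r'/([\w_.-]+)$', url).group(1))  # header_logo.png
```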

import re
import requests
from bs4 import BeautifulSoup

site = 'http://pixabay.com'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]

for url in urls:
    # Skip sources whose path does not end in a plausible filename,
    # instead of crashing on a failed match.
    match = re.search(r'/([\w_.-]+)$', url)
    if match is None:
        continue
    if 'http' not in url:
        # Sometimes an image source is relative; if so, prepend the
        # base URL, which here happens to be the site variable.
        url = '{}{}'.format(site, url)
    response = requests.get(url)
    with open(match.group(1), 'wb') as f:
        f.write(response.content)
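As a side note, the standard library already handles relative sources and filename extraction more robustly than string formatting and a regex. A small sketch (the helper names resolve_src and filename_from are my own, not from the original code):

```python
import os
from urllib.parse import urljoin


def resolve_src(base, src):
    """Resolve a possibly-relative img src against the page URL.

    urljoin handles absolute, relative, and protocol-relative
    ("//cdn.example.com/x.png") sources alike.
    """
    return urljoin(base, src)


def filename_from(url):
    """Take the last path segment as the filename, ignoring any query string."""
    return os.path.basename(url.split('?')[0])
```

Replacing the 'http' not in url check with resolve_src(site, url), and the regex with filename_from(url), covers cases such as query strings (logo.png?w=200) that the pattern-based approach misses.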