Trying to extract data and save it to Excel, but getting an error with Python BeautifulSoup

Question · Votes: 0 · Answers: 2

I am trying to extract the data, but I get an error on the last field; I want to save all of the fields to Excel.

I have tried extracting it with BeautifulSoup but could not capture that field, getting the error below:

Traceback (most recent call last):
  File "C:/Users/acer/AppData/Local/Programs/Python/Python37/agri.py", line 30, in <module>
    specimens = soup2.find('h3',class_='trigger expanded').find_next_sibling('div',class_='collapsefaq-content').text
AttributeError: 'NoneType' object has no attribute 'find_next_sibling'

from bs4 import BeautifulSoup
import requests

page1 = requests.get('http://www.agriculture.gov.au/pests-diseases-weeds/plant#identify-pests-diseases')

soup1 = BeautifulSoup(page1.text,'lxml')

for lis in soup1.find_all('li',class_='flex-item'):
    diseases = lis.find('img').next_sibling
    print("Diseases: " + diseases)
    image_link = lis.find('img')['src']
    print("Image_Link:http://www.agriculture.gov.au" + image_link)
    links = lis.find('a')['href']
    if links.startswith("http://"):
        link = links
    else:
        link = "http://www.agriculture.gov.au" + links
    page2 = requests.get(link)
    soup2 = BeautifulSoup(page2.text,'lxml')

    try:
        origin = soup2.find('strong',string='Origin: ').next_sibling
        print("Origin: " + origin)
    except:
        pass
    try:
        imported = soup2.find('strong',string='Pathways: ').next_sibling
        print("Imported: " + imported)
    except:
        pass 
    specimens = soup2.find('h3',class_='trigger expanded').find_next_sibling('div',class_='collapsefaq-content').text
    print("Specimens: " + specimens)

I want to extract that last field and save all the fields to an Excel sheet using Python. Please help, anyone.

html python-3.x web-scraping beautifulsoup
2 Answers
0 votes

It seems the site wants headers to prevent requests from being blocked, and not every page has a specimens section. The following shows one possible way of handling each page for the specimens info:

from bs4 import BeautifulSoup
import requests
import pandas as pd

base = 'http://www.agriculture.gov.au'
headers = {'User-Agent': 'Mozilla/5.0'}
specimens = []
with requests.Session() as s:
    r = s.get('http://www.agriculture.gov.au/pests-diseases-weeds/plant#identify-pests-diseases', headers=headers)
    soup = BeautifulSoup(r.content, 'lxml')
    # gather (name, image url, detail-page link) for every tile on the listing page,
    # making relative links absolute against the base url
    names, images, links = zip(*[(item.text.strip(),
                                  base + item.select_one('img')['src'],
                                  item['href'] if 'http' in item['href'] else base + item['href'])
                                 for item in soup.select('.flex-item > a')])
    for link in links:
        r = s.get(link)
        soup = BeautifulSoup(r.content, 'lxml')
        if soup.select_one('.trigger'):  # could also use: if soup.select_one('.trigger:nth-of-type(3) + div'):
            info = soup.select_one('.trigger:nth-of-type(3) + div').text
        else:
            info = 'None'  # no specimens section on this page
        specimens.append(info)

df = pd.DataFrame([names, images, links, specimens])
df = df.transpose()
df.columns = ['names', 'image_link', 'link', 'specimen']
df.to_csv(r"C:\Users\User\Desktop\Data.csv", sep=',', encoding='utf-8-sig', index=False)
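
A note on the encoding choice: utf-8-sig prefixes the file with a UTF-8 byte-order mark, which lets Excel detect the encoding when opening the csv, so non-ASCII characters in the scraped text display correctly.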

I have run this many times without a problem; however, you can always switch my current test to a try/except block:

from bs4 import BeautifulSoup
import requests
import pandas as pd

base = 'http://www.agriculture.gov.au'
headers = {'User-Agent': 'Mozilla/5.0'}
specimens = []
with requests.Session() as s:
    r = s.get('http://www.agriculture.gov.au/pests-diseases-weeds/plant#identify-pests-diseases', headers=headers)
    soup = BeautifulSoup(r.content, 'lxml')
    names, images, links = zip(*[(item.text.strip(),
                                  base + item.select_one('img')['src'],
                                  item['href'] if 'http' in item['href'] else base + item['href'])
                                 for item in soup.select('.flex-item > a')])
    for link in links:
        r = s.get(link)
        soup = BeautifulSoup(r.content, 'lxml')
        try:
            info = soup.select_one('.trigger:nth-of-type(3) + div').text
        except AttributeError:  # select_one returned None: no specimens section here
            info = 'None'
            print(link)
        specimens.append(info)

df = pd.DataFrame([names, images, links, specimens])
df = df.transpose()
df.columns = ['names', 'image_link', 'link', 'specimen']

Sample csv output:

[image: screenshot of the csv output]
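
If you want an actual .xlsx workbook rather than a csv, pandas can write the same DataFrame with to_excel; a minimal sketch, assuming the openpyxl package is installed to serve as the .xlsx writer engine (the path and sheet name here are illustrative, not from the answer above):

# reuses df built above with columns ['names', 'image_link', 'link', 'specimen']
df.to_excel(r"C:\Users\User\Desktop\Data.xlsx", sheet_name='pests', index=False)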


1 vote

Minor typo:

   data2,append("Image_Link:http://www.agriculture.gov.au" + image_link)

should be:

   data2.append("Image_Link:http://www.agriculture.gov.au" + image_link) #period instead of a comma