How to identify and follow a link, then print data from the new webpage with BeautifulSoup

Question (votes: 1, answers: 4)

I am trying to (1) get a title from a webpage, (2) print the title, (3) follow a link to the next page, (4) get the title from the next page, and (5) print the next page's title.

Steps (1) and (4) are the same function, and steps (2) and (5) are the same function. The only difference is that functions (4) and (5) run on the next page.

#Imports
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re


##Internet
#Link to webpage 
web_page = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
#Soup object
soup = BeautifulSoup(web_page, 'html.parser')

I have no problems with steps 1 and 2. My code gets the title and prints it correctly. Steps 1 and 2:

##Get Data
def get_title():
    #Patent Number
    Patent_Number = soup.title.text
    print(Patent_Number)

get_title()

The output I get is exactly what I want:

#Print Out
United States Patent: 10530579

I am having trouble with step 3. For step (3), I have been able to identify the correct link, but I cannot follow it to the next page. The link I am identifying is the 'href' just above the image tag.

[Image: picture of the link to follow]

The following code is my working draft for steps 3, 4, and 5:

#Get
def get_link():
    ##Internet
    #Link to webpage 
    html = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
    #Soup object
    soup = BeautifulSoup(html, 'html.parser')
    #Find image: <img valign="MIDDLE" src="/netaicon/PTO/nextdoc.gif" border="0" alt="[NEXT_DOC]">
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #The href lives on the <a> tag that wraps the image
    link = image.parent
    #Get new link from the anchor
    new_link = link.attrs['href']
    print(new_link)

get_link()

The output I get:

#Print Out
##/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=32&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"

The output is the exact link I want to follow. In short, the function I want to write would open the new_link variable as a new webpage and perform the same functions as (1) and (2) on it. The resulting output would be two titles instead of one (one for the original webpage, one for the new webpage).

Essentially, I need to write a:

urlopen(new_link)

function, instead of a:

print(new_link)

function, and then perform steps 4 and 5 on the new webpage. However, I am having a hard time figuring out how to open the new page and get the title. One problem is that new_link is not a full URL; it is the link I want to click.
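One way to turn a relative href like that into a full URL is urllib.parse.urljoin from the standard library, which resolves a path against the scheme and host of the page it came from. A minimal sketch (the example values below are shortened stand-ins, not the real query strings):

from urllib.parse import urljoin

#Resolve the relative href against the page it was scraped from
base_url = "http://patft.uspto.gov/netacgi/nph-Parser?r=31"  #page that was scraped (shortened)
new_link = "/netacgi/nph-Parser?r=32"                        #href found on that page (shortened)
full_url = urljoin(base_url, new_link)
print(full_url)  #http://patft.uspto.gov/netacgi/nph-Parser?r=32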

python html web-scraping beautifulsoup urlopen
4 Answers
0 votes

Taking the opportunity to clean up your code: I removed the unnecessary re import and simplified the functions:

from urllib.request import urlopen
from bs4 import BeautifulSoup


def get_soup(web_page):
    web_page = urlopen(web_page)
    return BeautifulSoup(web_page, 'html.parser')

def get_title(soup):
    return soup.title.text  # Patent Number

def get_next_link(soup):
    return soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]").parent['href']

base_url = 'http://patft.uspto.gov'
web_page = base_url + '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22'

soup = get_soup(web_page)

get_title(soup)
> 'United States Patent: 10530579'

get_next_link(soup)
> '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=32&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"'

soup = get_soup(base_url + get_next_link(soup))
get_title(soup)
> 'United States Patent: 10529534'

get_next_link(soup)
> '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=33&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"'
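As a follow-up, the two helpers chain naturally if you want to walk more than two pages. A minimal sketch reusing get_soup, get_title, and get_next_link from above (the page count of 5 is arbitrary):

#Walk the first five result pages with the helpers above
soup = get_soup(web_page)
for _ in range(5):
    print(get_title(soup))
    soup = get_soup(base_url + get_next_link(soup))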

1 vote

Although you already found a solution, here is another approach in case anyone attempts something similar. My solution below is not advisable in every situation, but in this case the URLs of all the pages differ only by the page number, so we can generate them dynamically and request them in bulk, as shown below. Just raise the upper bound of r for as long as the page exists (a probing sketch follows the code).

from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd

head = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r="  # no trailing /
trail = """&f=G&l=50&co1=AND&d=PTXT&s1=("deep+learning".CLTX.+or+"deep+learning".DCTX.)&OS=ACLM/"deep+learning"""

final_url = []
news_data = []
for r in range(32,38): #change the upper range as per requirement
    final_url.append(head + str(r) + trail)
for url in final_url:
    try:
        page = urlopen(url)
        soup = BeautifulSoup(page, 'html.parser')   
        patentNumber = soup.title.text
        news_articles = [{'page_url':  url,
                     'patentNumber':  patentNumber}
                    ]
        news_data.extend(news_articles)     
    except Exception as e:
        print(e)
        print("continuing....")
        continue
df = pd.DataFrame(news_data)
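If guessing the upper bound is a problem, a hedged variation is to probe r upward until the response stops looking like a patent page. The stopping condition below is an assumption about how the site signals a missing record, so treat it as a sketch rather than a guarantee:

#Probe: advance r until the title no longer looks like a patent record
r = 32
while True:
    try:
        page = urlopen(head + str(r) + trail)
        title = BeautifulSoup(page, 'html.parser').title.text
    except Exception:
        break  #request failed; assume we ran past the last page
    if not title.startswith("United States Patent"):
        break  #page exists but is not a patent record (assumed signal)
    print(title)
    r += 1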

0 votes

You can use a regular expression to extract and rebuild the site part of the link (in case it changes). The whole example code is below:

from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

# The first link
url = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22"

# Test loop (to grab 5 records)
for _ in range(5):
    web_page = urlopen(url)
    soup = BeautifulSoup(web_page, 'html.parser')

    # step 1 & 2 - grabbing and printing title from a webpage
    print(soup.title.text) 

    # step 3 - getting the link to the next page from the current page
    next_page_link = soup.find('img', {'alt':'[NEXT_DOC]'}).find_parent('a').get('href')

    # extracting the link (determining the prefix (http or https) and getting the site data (everything until the first /))
    match = re.compile("(?P<prefix>http(s)?://)(?P<site>[^/]+)(?:.+)").search(url)
    if match:
        prefix = match.group('prefix')
        site = match.group('site')

    # formatting the link to the next page
    url = '%s%s%s' % (prefix, site, next_page_link)

    # printing the link just for debug purpose
    print(url)

    # continuing with the loop
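A note on the design choice: the standard library's urllib.parse.urlparse recovers the scheme and host without a regular expression, so an equivalent way to rebuild the link inside the loop would be:

from urllib.parse import urlparse

# Rebuild the next-page URL from the current one without a regex
parsed = urlparse(url)
url = '%s://%s%s' % (parsed.scheme, parsed.netloc, next_page_link)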

0 votes

Instead of print(new_link), this function prints the title from the next page.

def get_link():
    ##Internet
    #Link to webpage 
    html = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
    #Soup object
    soup = BeautifulSoup(html, 'html.parser')
    #Find image
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #Follow link
    link = image.parent
    new_link = link.attrs['href']
    new_page = urlopen('http://patft.uspto.gov' + new_link)  #new_link already starts with "/"
    soup = BeautifulSoup(new_page, 'html.parser')
    #Patent Number
    Patent_Number = soup.title.text
    print(Patent_Number)

get_link()

Prepending 'http://patft.uspto.gov' to new_link (which already begins with a slash) turns the link into a valid URL. I could then open the URL, navigate to the page, and retrieve the title.
