Extracting href URLs with Python requests

Problem description (Votes: 1, Answers: 4)

I want to extract a URL from an XPath using the requests package in Python. I can get the text, but nothing I have tried gives me the URL. Can anyone help?

ipdb> webpage.xpath(xpath_url + '/text()')
['Text of the URL']
ipdb> webpage.xpath(xpath_url + '/a()')
*** lxml.etree.XPathEvalError: Invalid expression
ipdb> webpage.xpath(xpath_url + '/href()')
*** lxml.etree.XPathEvalError: Invalid expression
ipdb> webpage.xpath(xpath_url + '/url()')
*** lxml.etree.XPathEvalError: Invalid expression

I started from this tutorial: http://docs.python-guide.org/en/latest/scenarios/scrape/

It seems like it should be easy, but my searching has not turned up anything.

Thanks.

python python-3.x xpath python-requests lxml
4 Answers
5 votes

Have you tried webpage.xpath(xpath_url + '/@href')?

Here is the full code:

from lxml import html
import requests

page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
webpage = html.fromstring(page.content)   # parse the raw HTML into an lxml element tree

webpage.xpath('//a/@href')                # @href selects the attribute value rather than the element text

The result should be:

[
  'http://econpy.pythonanywhere.com/ex/002.html',
  'http://econpy.pythonanywhere.com/ex/003.html', 
  'http://econpy.pythonanywhere.com/ex/004.html',
  'http://econpy.pythonanywhere.com/ex/005.html'
]
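
If you also want to keep the link text paired with each URL (the question's /text() call was already pulling the text out), a minimal sketch along the same lxml lines, using the demo page above, would be:

from lxml import html
import requests

page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
webpage = html.fromstring(page.content)

# select the <a> elements themselves, then read the text and the href per element
for anchor in webpage.xpath('//a'):
    print(anchor.text_content(), anchor.get('href'))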

1 vote

It would be easier with BeautifulSoup:

import requests
from bs4 import BeautifulSoup

response = requests.get('http://testurl.com')
soup = BeautifulSoup(response.text, "lxml")  # lxml is just the parser used to read the HTML
soup.find_all('a', href=True)                # this finds every <a> tag that has an href attribute

You can print each link, append it to a list, and so on. To iterate over them, use:

links = soup.find_all('a', href=True)
for link in links:
    print(link['href'])   # each link is a Tag; ['href'] gives just the URL
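
The href values collected this way may be relative rather than full URLs. A small sketch of my own, reusing the soup object from the snippet above and the standard library's urljoin to resolve each href against the page it came from:

from urllib.parse import urljoin

base_url = 'http://econpy.pythonanywhere.com/ex/001.html'   # the page the HTML was fetched from
absolute_urls = [urljoin(base_url, a['href']) for a in soup.find_all('a', href=True)]
print(absolute_urls)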

0 votes
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('https://www.***.com')
r.html.links   # a set of every href found on the page

Requests-HTML
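
As a follow-up note of my own: r.html.links returns the hrefs exactly as they appear in the page, while r.html.absolute_links resolves them against the page URL. A minimal sketch using the demo page from the accepted answer:

from requests_html import HTMLSession

session = HTMLSession()
r = session.get('http://econpy.pythonanywhere.com/ex/001.html')
print(r.html.links)           # hrefs as written in the page, possibly relative
print(r.html.absolute_links)  # the same links resolved to absolute URLs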


0 votes

The same thing, with the added benefit of a context manager:

import requests
import requests_html

# the session is closed automatically when the with-block exits
with requests_html.HTMLSession() as s:
    try:
        r = s.get('http://econpy.pythonanywhere.com/ex/001.html')
        links = r.html.links
        for link in links:
            print(link)
    except requests.RequestException:
        pass  # ignore network errors rather than silently swallowing every exception
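
Since the question was specifically about XPath, it is worth adding that requests-html also exposes an xpath() method, so the attribute selector from the accepted answer works here as well. A minimal sketch of my own:

from requests_html import HTMLSession

with HTMLSession() as s:
    r = s.get('http://econpy.pythonanywhere.com/ex/001.html')
    # the same @href attribute selector as in the accepted lxml answer
    print(r.html.xpath('//a/@href'))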