Parsing an XML sitemap with Python

Votes: 0 · Answers: 8

I have a sitemap like this: http://www.site.co.uk/sitemap.xml and its structure is as follows:

<sitemapindex>
  <sitemap>
    <loc>
    http://www.site.co.uk/drag_it/dragitsitemap_static_0.xml
    </loc>
    <lastmod>2015-07-07</lastmod>
  </sitemap>
  <sitemap>
    <loc>
    http://www.site.co.uk/drag_it/dragitsitemap_alpha_0.xml
    </loc>
    <lastmod>2015-07-07</lastmod>
  </sitemap>
...

I want to extract data from it. First, I need to count how many <sitemap> elements are in the XML, and then for each one extract the <loc> and <lastmod> data. Is there an easy way to do this in Python?

I have seen other similar questions, but they all extract, for example, every <loc> element in the XML, whereas I need to extract the data from each element individually.

I tried using lxml with this code:

import urllib2
from lxml import etree

u = urllib2.urlopen('http://www.site.co.uk/sitemap.xml')
doc = etree.parse(u)

element_list = doc.findall('sitemap')

for element in element_list:
    url = element.findtext('loc')
    print url

But element_list is empty.
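For reference, the likely cause is that sitemap files declare a default XML namespace, so a namespace-unaware findall('sitemap') matches nothing. A minimal sketch of the same lookup with the namespace supplied (inline XML is used here in place of the live URL):

```python
from lxml import etree

# Sitemap files declare this default namespace; without supplying it,
# findall('sitemap') finds no elements at all.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

xml = b"""<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.site.co.uk/drag_it/dragitsitemap_static_0.xml</loc>
    <lastmod>2015-07-07</lastmod>
  </sitemap>
</sitemapindex>"""

doc = etree.fromstring(xml)
element_list = doc.findall("sm:sitemap", namespaces=NS)
print(len(element_list))  # 1, not 0
for element in element_list:
    print(element.findtext("sm:loc", namespaces=NS).strip())
```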

python xml parsing
8 Answers

20 votes

I chose to use the Requests and BeautifulSoup libraries. I created a dictionary where the key is the URL and the value is the last-modified date.

from bs4 import BeautifulSoup
import requests

xml_dict = {}

r = requests.get("http://www.site.co.uk/sitemap.xml")
xml = r.text

soup = BeautifulSoup(xml, "lxml")
sitemap_tags = soup.find_all("sitemap")

print(f"The number of sitemaps is {len(sitemap_tags)}")

for sitemap in sitemap_tags:
    xml_dict[sitemap.find_next("loc").text] = sitemap.find_next("lastmod").text

print(xml_dict)

Or using lxml:

from lxml import etree
import requests

xml_dict = {}

r = requests.get("http://www.site.co.uk/sitemap.xml")
root = etree.fromstring(r.content)
print(f"The number of sitemap tags is {len(root)}")
for sitemap in root:
    children = sitemap.getchildren()
    xml_dict[children[0].text] = children[1].text
print(xml_dict)
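Note that children[0]/children[1] assumes <loc> always precedes <lastmod> in each entry. A variant of the same idea that looks children up by their namespace-qualified tag name (sketched on inline XML, with the child order deliberately reversed to show it no longer matters) is more robust:

```python
from lxml import etree

# Qualified tag names instead of positional indexing; the child order
# below is reversed on purpose and the lookup still works.
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

xml = b"""<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <lastmod>2015-07-07</lastmod>
    <loc>http://www.site.co.uk/drag_it/dragitsitemap_static_0.xml</loc>
  </sitemap>
</sitemapindex>"""

root = etree.fromstring(xml)
xml_dict = {s.findtext(NS + "loc"): s.findtext(NS + "lastmod") for s in root}
print(xml_dict)
```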

7 votes

Using Python 3, Requests, Pandas, xmltodict, and a list comprehension:

import requests
import pandas as pd
import xmltodict

url = "https://www.gov.uk/sitemap.xml"
res = requests.get(url)
raw = xmltodict.parse(res.text)

data = [[r["loc"], r["lastmod"]] for r in raw["sitemapindex"]["sitemap"]]
print("Number of sitemaps:", len(data))
df = pd.DataFrame(data, columns=["links", "lastmod"])

Output:

    links                                       lastmod
0   https://www.gov.uk/sitemaps/sitemap_1.xml   2018-11-06T01:10:02+00:00
1   https://www.gov.uk/sitemaps/sitemap_2.xml   2018-11-06T01:10:02+00:00
2   https://www.gov.uk/sitemaps/sitemap_3.xml   2018-11-06T01:10:02+00:00
3   https://www.gov.uk/sitemaps/sitemap_4.xml   2018-11-06T01:10:02+00:00
4   https://www.gov.uk/sitemaps/sitemap_5.xml   2018-11-06T01:10:02+00:00
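If the lastmod values are needed as real timestamps rather than strings, pandas can parse the ISO-8601 column; a small sketch on sample data matching the output above:

```python
import pandas as pd

# Convert the ISO-8601 lastmod strings into timezone-aware timestamps
# so they can be sorted and filtered chronologically.
df = pd.DataFrame(
    [["https://www.gov.uk/sitemaps/sitemap_1.xml", "2018-11-06T01:10:02+00:00"]],
    columns=["links", "lastmod"],
)
df["lastmod"] = pd.to_datetime(df["lastmod"])
print(df["lastmod"].iloc[0].year)  # 2018
```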

4 votes

This function will extract all URLs from the XML:

from bs4 import BeautifulSoup
import requests

def get_urls_of_xml(xml_url):
    r = requests.get(xml_url)
    xml = r.text
    soup = BeautifulSoup(xml, "lxml")

    links_arr = []
    for link in soup.find_all('loc'):
        linkstr = link.get_text(strip=True)
        links_arr.append(linkstr)

    return links_arr



links_data_arr = get_urls_of_xml("https://www.gov.uk/sitemap.xml")
print(links_data_arr)


2 votes

Here BeautifulSoup is used to get the sitemap count and extract the text:

from bs4 import BeautifulSoup as bs

html = """
 <sitemap>
    <loc>
    http://www.site.co.uk/drag_it/dragitsitemap_static_0.xml
    </loc>
    <lastmod>2015-07-07</lastmod>
  </sitemap>
  <sitemap>
    <loc>
    http://www.site.co.uk/drag_it/dragitsitemap_alpha_0.xml
    </loc>
    <lastmod>2015-07-07</lastmod>
  </sitemap>
"""

soup = bs(html, "html.parser")
sitemap_count = len(soup.find_all('sitemap'))
print("sitemap count: %d" % sitemap_count)
print(soup.get_text())

Output:

sitemap count: 2

    http://www.site.co.uk/drag_it/dragitsitemap_static_0.xml

2015-07-07

    http://www.site.co.uk/drag_it/dragitsitemap_alpha_0.xml

2015-07-07

1 vote

You can use advertools, which has a special function for parsing XML sitemaps. By default, it can also parse compressed sitemaps (.xml.gz). If you give it a sitemap index file, it recursively fetches all of them into one DataFrame.


import advertools as adv

economist =  adv.sitemap_to_df('https://www.economist.com/sitemap-2022-Q1.xml')
economist.head()
  loc                                                                                                          lastmod                    changefreq  priority  sitemap                                        etag                              sitemap_last_modified      sitemap_size_mb  download_date
0 https://www.economist.com/printedition/2022-01-22                                                            2022-01-20 15:57:17+00:00  daily       0.6       https://www.economist.com/sitemap-2022-Q1.xml  e2637d17284eefef7d1eafb9ef4ebe3a  2022-01-22 04:00:54+00:00  0.0865097        2022-01-23 00:01:41.026416+00:00
1 https://www.economist.com/the-world-this-week/2022/01/22/kals-cartoon                                        2022-01-20 16:53:34+00:00  daily       0.6       https://www.economist.com/sitemap-2022-Q1.xml  e2637d17284eefef7d1eafb9ef4ebe3a  2022-01-22 04:00:54+00:00  0.0865097        2022-01-23 00:01:41.026416+00:00
2 https://www.economist.com/united-states/2022/01/22/a-new-barbie-doll-commemorates-a-19th-century-suffragist  2022-01-20 16:10:36+00:00  daily       0.6       https://www.economist.com/sitemap-2022-Q1.xml  e2637d17284eefef7d1eafb9ef4ebe3a  2022-01-22 04:00:54+00:00  0.0865097        2022-01-23 00:01:41.026416+00:00
3 https://www.economist.com/britain/2022/01/22/tory-mps-love-to-hate-the-bbc-but-tory-voters-love-to-watch-it  2022-01-20 17:09:59+00:00  daily       0.6       https://www.economist.com/sitemap-2022-Q1.xml  e2637d17284eefef7d1eafb9ef4ebe3a  2022-01-22 04:00:54+00:00  0.0865097        2022-01-23 00:01:41.026416+00:00
4 https://www.economist.com/china/2022/01/22/the-communist-party-revisits-its-egalarian-roots                  2022-01-20 16:48:14+00:00  daily       0.6       https://www.economist.com/sitemap-2022-Q1.xml  e2637d17284eefef7d1eafb9ef4ebe3a  2022-01-22 04:00:54+00:00  0.0865097        2022-01-23 00:01:41.026416+00:00

0 votes

Here is a good library: https://github.com/mediacloud/ultimate-sitemap-parser

A website sitemap parser for Python 3.5+.

Installation:

pip install ultimate-sitemap-parser

Example of extracting all pages of the nytimes.com site from its sitemaps:

from usp.tree import sitemap_tree_for_homepage

tree = sitemap_tree_for_homepage("https://www.nytimes.com/")
for page in tree.all_pages():
    print(page)

0 votes

Using the right libraries in modern Python 3, requests and lxml, this works even when the XML encoding comes from a utf8 declaration:

import requests
from lxml import etree
from pprint import pprint

session = requests.session()

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36'
}

res = session.get('https://example.org/sitemap-xml', headers=headers)
xml_bytes = res.text.encode('utf-8')

# Parse the XML bytes
root = etree.fromstring(xml_bytes)

# Define the namespace
ns = {'sitemap': 'http://www.sitemaps.org/schemas/sitemap/0.9'}

urls = root.xpath('//sitemap:url[./sitemap:loc[contains(., "/en-us/")]]', namespaces=ns)

# List comprehension
urls = [u.xpath('./sitemap:loc/text()', namespaces=ns)[0] for u in urls]

pprint(urls)

-1 votes

I just got this task today. I used requests and re (regular expressions):

import requests
import re

sitemap_url = "https://www.gov.uk/sitemap.xml"
# if you need to send some headers
headers = {'user-agent': 'myApp'}
response = requests.get(sitemap_url, headers=headers)
xml = response.text

list_of_urls = []

for address in re.findall(r"https://.*(?=/</)", xml):
    list_of_urls.append(address + '/')  # I add a trailing slash; you might want to skip it
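Worth noting: the (?=/</) lookahead only matches addresses whose path ends in a slash right before the closing tag, which is why the trailing slash is re-added afterwards. A self-contained run on sample text illustrates this:

```python
import re

# Only the first URL ends in "/" before "</", so only it matches.
xml = "<loc>https://www.gov.uk/browse/</loc>\n<loc>https://www.gov.uk/bank-holidays</loc>"
print(re.findall(r"https://.*(?=/</)", xml))  # ['https://www.gov.uk/browse']
```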