I am looking for a public list of European volunteering services: I don't need full addresses - just names and websites. I imagine the data as XML or CSV with fields like name, country, and a few others; one record per participating country would already be nice. BTW: European volunteering is a great option for young people.
I found a great page that is very, very comprehensive - see below. I want to collect the data on European volunteering opportunities hosted on the European Youth Portal:
https://youth.europa.eu/go-abroad/volunteering/opportunities_en
There are hundreds of volunteering opportunities there, each stored on a page like these:
https://youth.europa.eu/solidarity/placement/39020_en
https://youth.europa.eu/solidarity/placement/38993_en
https://youth.europa.eu/solidarity/placement/38973_en
https://youth.europa.eu/solidarity/placement/38972_en
https://youth.europa.eu/solidarity/placement/38850_en
https://youth.europa.eu/solidarity/placement/38633_en
Idea:
I think it would be great to collect the data - i.e. with a scraper based on BS4 and requests - parse it, and afterwards print it in a dataframe.
Well - I think we can iterate over all the URLs:
placement/39020_en
placement/38993_en
placement/38973_en
placement/38850_en
Update: thanks to @hedgeHog's help, we found a solution.
Idea: I think we could iterate from zero to 100 000 to fetch every result stored under placement. But I have no code for this yet - in other words, at the moment I don't know how to implement iterating over such a large range.
For now, I think this is the basic approach to start from:
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Function to generate placement URLs based on a range of IDs
def generate_urls(start_id, end_id):
    base_url = "https://youth.europa.eu/solidarity/placement/"
    urls = [base_url + str(id) + "_en" for id in range(start_id, end_id+1)]
    return urls

# Function to scrape data from a single URL
def scrape_data(url):
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')
        title = soup.h1.get_text(', ', strip=True)
        location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
        start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
        website_tag = soup.find("a", class_="btn__link--website")
        website = website_tag.get("href") if website_tag else None
        return {
            "Title": title,
            "Location": location,
            "Start Date": start_date,
            "End Date": end_date,
            "Website": website,
            "URL": url
        }
    else:
        print(f"Failed to fetch data from {url}. Status code: {response.status_code}")
        return None

# Set the range of placement IDs we want to scrape
start_id = 1
end_id = 100000

# Generate placement URLs
urls = generate_urls(start_id, end_id)

# Scrape data from all URLs
data = []
for url in urls:
    placement_data = scrape_data(url)
    if placement_data:
        data.append(placement_data)

# Convert data to DataFrame
df = pd.DataFrame(data)

# Print DataFrame
print(df)
This gives me the following:
Failed to fetch data from https://youth.europa.eu/solidarity/placement/154_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/156_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/157_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/159_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/161_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/162_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/163_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/165_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/166_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/169_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/170_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/171_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/173_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/174_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/176_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/177_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/178_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/179_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/180_en. Status code: 404
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-d6272ee535ef> in <cell line: 42>()
41 data = []
42 for url in urls:
---> 43 placement_data = scrape_data(url)
44 if placement_data:
45 data.append(placement_data)
<ipython-input-5-d6272ee535ef> in scrape_data(url)
16 title = soup.h1.get_text(', ', strip=True)
17 location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
---> 18 start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
19 website_tag = soup.find("a", class_="btn__link--website")
20 website = website_tag.get("href") if website_tag else None
ValueError: not enough values to unpack (expected 2, got 0)
Any ideas?
First check whether the response / soup actually contains the elements you want to select; the ones you are addressing do not seem to be present on every page. As @John Gordon mentioned, your selection simply found nothing. The css selectors in question:
# Extracting relevant data
title = soup.h1.get_text(', ', strip=True)
location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
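Since a page that exists may still lack some of these elements (an expired or withdrawn placement, for example), it is safer to check before unpacking. Below is a minimal sketch of a guarded scrape_data - the selectors are the ones from above, while the None/length checks are my addition, not anything the site guarantees:

import requests
from bs4 import BeautifulSoup

def scrape_data(url):
    response = requests.get(url)
    if response.status_code != 200:
        return None
    soup = BeautifulSoup(response.content, 'html.parser')
    # Guard: an existing page may still be missing the expected markup,
    # so bail out instead of unpacking an empty selection.
    location_tag = soup.select_one('p:has(i.fa-location-arrow)')
    dates = soup.select('span.extra strong')[-2:]
    if soup.h1 is None or location_tag is None or len(dates) != 2:
        return None
    website_tag = soup.find("a", class_="btn__link--website")
    return {
        "Title": soup.h1.get_text(', ', strip=True),
        "Location": location_tag.get_text(', ', strip=True),
        "Start Date": dates[0].get_text(strip=True),
        "End Date": dates[1].get_text(strip=True),
        "Website": website_tag.get("href") if website_tag else None,
        "URL": url,
    }

Run over the six example URLs, this produces a dataframe like: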
| | Title | Location | Start Date | End Date | URL |
|---|---|---|---|---|---|
| 0 | Support for GOB's sustainable gardening project "Es Viver" | c/ Camí des Castell, 53, 07702 Maó, Menorca, Spain | 2024-01-06 | 2025-05-31 | https://youth.europa.eu/solidarity/placement/39020_en |
| 1 | European voluntary service VS depopulation 3.0 | 47400 Medina del Campo (Valladolid), Spain | 2024-05-31 | 2025-03-30 | https://youth.europa.eu/solidarity/placement/38993_en |
| 2 | Supporting the local community: ASMISAF/AUNA Inclusión | Gandia, Spain | 2024-01-06 | 2025-06-30 | https://youth.europa.eu/solidarity/placement/38973_en |
| 3 | Supporting the local community: Caritas Gandia | Gandia, Spain | 2024-01-06 | 2025-06-30 | https://youth.europa.eu/solidarity/placement/38972_en |
| 4 | Pedagogical farm based on horse-assisted interventions + social services | Masía Cal Taulé s/n, 08673 Serrateix, Spain | 2024-01-03 | 2025-03-31 | https://youth.europa.eu/solidarity/placement/38850_en |
| 5 | Support in rural areas | Plaza de Tuy, 6, 34440 Frómista, Spain | 2024-04-03 | 2024/04/11 | https://youth.europa.eu/solidarity/placement/38633_en |
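On the zero-to-100 000 idea from the question: firing that many requests one by one, each on a fresh connection, is slow and unfriendly to the server. One way to approach the large range is a requests.Session for connection reuse, silently skipping the (many) missing IDs, plus a small delay between requests. The crawl helper below, its delay value, and the parse callback are my own sketch, not an official API:

import time
import requests
import pandas as pd

BASE_URL = "https://youth.europa.eu/solidarity/placement/{}_en"

def crawl(parse, start_id, end_id, delay=0.5):
    # `parse` is a callable taking (content, url) and returning a dict
    # or None - e.g. a variant of the guarded scrape_data above that
    # works on already-fetched HTML instead of fetching it itself.
    rows = []
    with requests.Session() as session:  # reuse one HTTP connection
        for placement_id in range(start_id, end_id + 1):
            url = BASE_URL.format(placement_id)
            response = session.get(url)
            if response.status_code == 200:
                row = parse(response.content, url)
                if row:
                    rows.append(row)
            # Most IDs in a range this large will 404; skip them quietly
            # and wait a little so the crawl stays polite.
            time.sleep(delay)
    return pd.DataFrame(rows)

With scrape_data refactored to accept the already-fetched content, df = crawl(parse, 1, 100_000) would cover the whole range - at the cost of a long run time, so tune the range and delay to what you actually need.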