How to query arXiv for a specific year?

Problem description · Votes: 0 · Answers: 2

I use the code shown below to retrieve papers from arXiv. I want to retrieve papers whose titles contain the words "machine" and "learning". There are a lot of matching papers, so I would like to slice the retrieval by year (the published field).

How can I request only the 2020 and 2019 records in the search_query? Note that I am not interested in post-filtering.

import urllib.request
import urllib.parse
import time
import feedparser

# Base API query url
base_url = 'http://export.arxiv.org/api/query?'

# Search parameters
search_query = urllib.parse.quote("ti:machine learning")
start = 0
total_results = 5000
results_per_iteration = 1000
wait_time = 3

papers = []

print('Searching arXiv for %s' % search_query)

for i in range(start, total_results, results_per_iteration):

    print("Results %i - %i" % (i, i + results_per_iteration))

    query = 'search_query=%s&start=%i&max_results=%i' % (search_query,
                                                         i,
                                                         results_per_iteration)

    # Perform a GET request using the base_url and query
    response = urllib.request.urlopen(base_url + query).read()

    # Parse the response using feedparser
    feed = feedparser.parse(response)

    # Run through each entry and collect its metadata
    # (feedparser v4.1 only grabs the first author)
    for entry in feed.entries:
        paper = {}
        paper["date"] = entry.published
        paper["title"] = entry.title
        paper["first_author"] = entry.author
        paper["summary"] = entry.summary
        papers.append(paper)

    # Sleep a bit before calling the API again
    time.sleep(wait_time)
Tags: python · api · urllib · feedparser
2 Answers

3 votes

According to the arXiv documentation, there is no published or date field available for search_query.

What you can do is sort the results by date (by adding &sortBy=submittedDate&sortOrder=descending to the query parameters) and stop issuing requests once the results reach 2018.

Basically, your code should be modified like this:

import urllib.request
import urllib.parse
import time
import feedparser

# Base API query url
base_url = 'http://export.arxiv.org/api/query?'

# Search parameters
search_query = urllib.parse.quote("ti:machine learning")
i = 0
results_per_iteration = 1000
wait_time = 3
papers = []
year = ""

print('Searching arXiv for %s' % search_query)

while year != "2018":  # stop requesting once the paper dates reach 2018
    print("Results %i - %i" % (i, i + results_per_iteration))

    query = 'search_query=%s&start=%i&max_results=%i&sortBy=submittedDate&sortOrder=descending' % (search_query,
                                                                                                   i,
                                                                                                   results_per_iteration)

    # Perform a GET request using the base_url and query
    response = urllib.request.urlopen(base_url + query).read()

    # Parse the response using feedparser
    feed = feedparser.parse(response)

    # Guard against an empty page, which would otherwise loop forever
    if not feed.entries:
        break

    # Run through each entry and collect its metadata
    # (feedparser v4.1 only grabs the first author)
    for entry in feed.entries:
        paper = {}
        paper["date"] = entry.published
        year = paper["date"][0:4]
        paper["title"] = entry.title
        paper["first_author"] = entry.author
        paper["summary"] = entry.summary
        papers.append(paper)

    # Sleep a bit before calling the API again
    i += results_per_iteration
    time.sleep(wait_time)

For the "post-filtering" approach, once enough results have been collected, I would do something like this:

papers2019 = [item for item in papers if item["date"][0:4] == "2019"]
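Since the question asks for both years, the same post-filter extends naturally to multiple years. A minimal sketch, assuming papers was populated by the loop above:

# Keep only papers whose published year is 2019 or 2020
papers_2019_2020 = [item for item in papers if item["date"][0:4] in ("2019", "2020")]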

0 votes

They do have a search-by-date API, but it is not listed in the documentation.

See this thread: https://groups.google.com/g/arxiv-api/c/mAFYT2VRpK0
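As a concrete illustration, the thread describes a submittedDate range filter of the form submittedDate:[YYYYMMDDHHMM TO YYYYMMDDHHMM]. A minimal sketch, with the caveat that this syntax is undocumented and could change:

import urllib.parse
import urllib.request
import feedparser

# Undocumented filter: submittedDate:[YYYYMMDDHHMM TO YYYYMMDDHHMM].
# Restrict title matches to papers submitted in 2019 and 2020.
raw_query = 'ti:"machine learning" AND submittedDate:[201901010000 TO 202012312359]'
url = ('http://export.arxiv.org/api/query?search_query=%s&start=0&max_results=10'
       % urllib.parse.quote(raw_query))

feed = feedparser.parse(urllib.request.urlopen(url).read())
for entry in feed.entries:
    print(entry.published, entry.title)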
