Scrape HTML tables from a given URL into CSV

Question (votes: 0, answers: 3)

I'm looking for a tool I can run on the command line like this:

tablescrape 'http://someURL.foo.com' [n]

If n is not specified and there are multiple HTML tables on the page, it should summarize them (header row, total row count) in a numbered list. If n is specified, or there is only one table, it should parse that table and write it to standard output as CSV or TSV.

Potential extra features:

  • If you want to get really fancy, you could parse tables within tables, but for my purposes (pulling data from Wikipedia pages and the like) that's overkill.
  • An option to asciify any Unicode.
  • An option to apply arbitrary regex substitutions to fix quirks in the parsed table.
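The asciify option from the list above can be sketched in a few lines of stdlib Python. This is just one possible approach, not part of any tool mentioned here: NFKD decomposition splits accented letters into base letter plus combining mark, and the lossy ASCII encode drops whatever remains unmappable.

```python
import unicodedata

def asciify(text: str) -> str:
    """Best-effort Unicode -> ASCII.

    NFKD decomposes characters like 'e with acute' into 'e' plus a
    combining accent; encoding to ASCII with errors="ignore" then drops
    the combining marks. Letters with no decomposition (e.g. 'ø') are
    simply dropped, so this is crude but often good enough for tables.
    """
    return (unicodedata.normalize("NFKD", text)
            .encode("ascii", "ignore")
            .decode("ascii"))
```

A smarter version would need a transliteration table (ø → o, ß → ss, and so on), which is exactly the kind of per-site quirk the regex-substitution option is for.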

What would you use to cobble something like this together? The Perl module HTML::TableExtract might be a good starting point, and it can even handle the nested-table case. This could also be a very short Python script using BeautifulSoup. Is YQL a good starting point? Or, ideally, have you written something similar and have a pointer to it? (I'm surely not the first person to need this.)


Tags: html, language-agnostic, parsing, csv, screen-scraping
3 Answers

13 votes

Here's my first stab at it:

http://yootles.com/outbox/tablescrape.py

It needs more work, like better asciifying, but it's usable. For example, if you point it at this list of Olympic records:

./tablescrape http://en.wikipedia.org/wiki/List_of_Olympic_records_in_athletics

it tells you there are 8 tables available, and it's clear that the 2nd and 3rd (men's and women's records) are the ones you want:

1: [  1 cols,   1 rows] Contents 1 Men's rec
2: [  7 cols,  25 rows] Event | Record | Name | Nation | Games | Date | Ref
3: [  7 cols,  24 rows] Event | Record | Name | Nation | Games | Date | Ref
[...]

Then if you run it again, asking for the 2nd table,

./tablescrape http://en.wikipedia.org/wiki/List_of_Olympic_records_in_athletics 2

you get a reasonable plaintext data table:

100 metres | 9.69 | Usain Bolt | Jamaica (JAM) | 2008 Beijing | August 16, 2008 | [ 8 ]
200 metres | 19.30 | Usain Bolt | Jamaica (JAM) | 2008 Beijing | August 20, 2008 | [ 8 ]
400 metres | 43.49 | Michael Johnson | United States (USA) | 1996 Atlanta | July 29, 1996 | [ 9 ]
800 metres | 1:42.58 | Vebjørn Rodal | Norway (NOR) | 1996 Atlanta | July 31, 1996 | [ 10 ]
1,500 metres | 3:32.07 | Noah Ngeny | Kenya (KEN) | 2000 Sydney | September 29, 2000 | [ 11 ]
5,000 metres | 12:57.82 | Kenenisa Bekele | Ethiopia (ETH) | 2008 Beijing | August 23, 2008 | [ 12 ]
10,000 metres | 27:01.17 | Kenenisa Bekele | Ethiopia (ETH) | 2008 Beijing | August 17, 2008 | [ 13 ]
Marathon | 2:06:32 | Samuel Wanjiru | Kenya (KEN) | 2008 Beijing | August 24, 2008 | [ 14 ]
[...]

1 vote

Using TestPlan I produced a rough script. Given how complex web tables can be, it will likely need tailoring for each site.

The first script lists the tables on the page:

# A simple table scraping example. It lists the tables on a page
#
# Cmds.Site = the URL to scan
default %Cmds.Site% http://en.wikipedia.org/wiki/List_of_Olympic_records_in_athletics
GotoURL %Cmds.Site%

set %Count% 1
foreach %Table% in (response //table)
    Notice Table #%Count%
    # find a suitable name, look back for a header
    set %Check% ./preceding::*[name()='h1' or name()='h2' or name()='h3'][1]
    if checkIn %Table% %Check%
        Notice (selectIn %Table% %Check%)
    end

    set %Count% as binOp %Count% + 1
end

The second script extracts the data of one table into a CSV file.

# Generic extract of contents of a table in a webpage
# Use list_tables to get the list of table and indexes
#
# Cmds.Site = the URL to scan
# Cmds.Index = Table index to scan
default %Cmds.Site% http://en.wikipedia.org/wiki/List_of_Olympic_records_in_athletics
default %Cmds.Index% 2

GotoURL %Cmds.Site%

set %Headers% //table[%Cmds.Index%]/tbody/tr[1]
set %Rows% //table[%Cmds.Index%]/tbody/tr[position()>1]

# Get and clean up the header fields
set %Fields% withvector
end
foreach %Header% in (response %Headers%/*)
    putin %Fields% (trim %Header%)
end
Notice %Fields%

# Create an output CSV
call unit.file.CreateDataFile with
    %Name% %This:Dir%/extract_table.csv
    %Format% csv
    %Fields% %Fields%
end
set %DataFile% %Return:Value%

# Now extract each row
foreach %Row% in (response %Rows%)
    set %Record% withvector
    end
    foreach %Cell% in (selectIn %Row% ./td)
        putin %Record% (trim %Cell%)
    end

    call unit.file.WriteDataFile with
        %DataFile% %DataFile%
        %Record% %Record%
    end
end

call unit.file.CloseDataFile with
    %DataFile% %DataFile%
end

My CSV file looks like the sample below. Note that Wikipedia embeds extra (duplicated) information in each cell. There are ways to strip it out, but no generic one.

Shot put,22.47 m,"Timmermann, UlfUlf Timmermann",East Germany (GDR),1988 1988 Seoul,"01988-09-23 September 23, 1988",[25]
Discus throw,69.89 m,"Alekna, VirgilijusVirgilijus Alekna",Lithuania (LTU),2004 2004 Athens,"02004-08-23 August 23, 2004",[26]
Hammer throw,84.80 m,"Litvinov, SergeySergey Litvinov",Soviet Union (URS),1988 1988 Seoul,"01988-09-26 September 26, 1988",[27]
Javelin throw,90.57 m,"Thorkildsen, AndreasAndreas Thorkildsen",Norway (NOR),2008 2008 Beijing,"02008-08-23 August 23, 2008",[28]
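The duplication visible in the sample above ("Timmermann, UlfUlf Timmermann", "1988 1988 Seoul") is where the question's regex-substitution option would help. A hedged sketch of two such cleanups in Python; the patterns below are my guesses fitted to these specific cells, not a general solution:

```python
import re

# "Last, FirstFirst Last" -> "First Last"
# (Wikipedia's hidden sort key concatenated with the visible name)
_NAME_DUPE = re.compile(r"^(.+?), (.+?)\2 \1$")

# "1988 1988 Seoul" -> "1988 Seoul" (year sort key repeated)
_YEAR_DUPE = re.compile(r"^(\d{4}) \1\b")


def clean_cell(cell: str) -> str:
    """Apply the two dedup substitutions; non-matching cells pass through."""
    cell = _NAME_DUPE.sub(r"\2 \1", cell)
    cell = _YEAR_DUPE.sub(r"\1", cell)
    return cell
```

A tool implementing the question's wish list would take such pattern/replacement pairs on the command line rather than hard-coding them.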

0 votes

Using jq and pup, with a hat tip to this SO answer:

#!/bin/bash
# tablescrape - convert nth HTML table on a page to CSV or tab-delimited
# author: https://stackoverflow.com/users/785213
# source: https://stackoverflow.com/q/2611418

set -u
input=${1:?"Expected a file, URL, or '-' as the first argument."}
nth=${2:-1}
mode=${3:-csv}

(
    if [[ -r $input || $input == - ]]; then
        cat "$input"
    else
        # '--location' means "follow redirects"
        curl --silent --show-error --location "$input"
    fi
) \
  | pup "table.wikitable:nth-of-type($nth) tr json{}" \
  | jq --raw-output '.[]
      | [
          .children[]                            # all children of <tr>s
            | select(.tag=="td" or .tag=="th")   # that are <td>s or <th>s
            | [ .. | .text? ]                    # recurse, looking for .text
            | map(select(.))                     # filter out empty nodes
            | join(" ")                          # concatenate together
        ]
      | @'$mode

Usage

RECORDS='https://en.wikipedia.org/wiki/List_of_Olympic_records_in_athletics'

# read from a URL
./tablescrape $RECORDS 2

# read from a pipe or redirection
curl -sS $RECORDS | ./tablescrape - 1 tsv
curl -sS $RECORDS > records.html
< records.html ./tablescrape - 1 tsv

# read from a file
./tablescrape records.html 1 tsv