Scraping multiple pages with jsoup

Problem description · Votes: 0 · Answers: 1

I am trying to scrape the links in the pagination of a GitHub repository. I have already scraped them one by one, but now I want to optimize this with some kind of loop. Any idea how I should do that? Here is the code:

String ComitUrl = "http://github.com/apple/turicreate/commits/master";

Document document2 = Jsoup.connect(ComitUrl).get();

Element pagination = document2.select("div.pagination a").get(0);
String Url1 = pagination.attr("href");
System.out.println("pagination-link1 = " + Url1);

Document document3 = Jsoup.connect(Url1).get();
Element pagination2 = document3.select("div.pagination a").get(1);
String Url2 = pagination2.attr("href");
System.out.println("pagination-link2 = " + Url2);

Document document4 = Jsoup.connect(Url2).get();
Element check = document4.select("span.disabled").first();

if (check.text().equals("Older")) {
    System.out.println("No pagination link more");
} else {
    Element pagination3 = document4.select("div.pagination a").get(1);
    String Url3 = pagination3.attr("href");
    System.out.println("pagination-link3 = " + Url3);
}
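One thing to watch in the last step above: select("span.disabled").first() returns null on pages where both "Newer" and "Older" are still real links, so check.text() can throw a NullPointerException before the else branch is ever reached. A minimal guard, wrapped here in a hypothetical helper (isLastPage is not part of the original code), might look like this:

import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

class PaginationCheck {
    // Returns true only when the "Older" control is rendered as a disabled
    // span, i.e. there is no further page to follow.
    static boolean isLastPage(Document page) {
        Element disabled = page.select("span.disabled").first();
        // first() yields null when no span.disabled exists on the page,
        // so check for null before calling text() to avoid a NullPointerException
        return disabled != null && "Older".equals(disabled.text());
    }
}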
java web-scraping jsoup
1 Answer
2 votes

Try something like the following:

public static void main(String[] args) throws IOException {
    String url = "http://github.com/apple/turicreate/commits/master";
    // get the first pagination link ("Older") from the start page
    String link = Jsoup.connect(url).get().select("div.pagination a").get(0).attr("href");
    // an int just to count up links
    int i = 1;
    System.out.println("pagination-link_" + i + "\t" + link);
    // follow each link, fetching every page only once; keep going while the
    // pagination div on that page has more than one anchor (an "Older" link is still present)
    Elements pagination = Jsoup.connect(link).get().select("div.pagination a");
    while (pagination.size() > 1) {
        link = pagination.get(1).attr("href");
        System.out.println("pagination-link_" + (++i) + "\t" + link);
        pagination = Jsoup.connect(link).get().select("div.pagination a");
    }
}
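If the goal is to reuse the links later rather than just print them, the same loop can collect them into a list. A minimal variant sketch, assuming the same selectors hold on every page (collectPaginationLinks is just an illustrative name, not part of the answer above):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class PaginationCollector {

    // Walks the "Older" pagination links starting from startUrl and
    // returns them in the order they were found.
    static List<String> collectPaginationLinks(String startUrl) throws IOException {
        List<String> links = new ArrayList<>();
        Document page = Jsoup.connect(startUrl).get();
        Elements anchors = page.select("div.pagination a");
        // on the first page the only anchor is "Older"; on later pages the
        // second anchor (index 1) is "Older" until it disappears
        int olderIndex = 0;
        while (anchors.size() > olderIndex) {
            String href = anchors.get(olderIndex).attr("href");
            links.add(href);
            page = Jsoup.connect(href).get();
            anchors = page.select("div.pagination a");
            olderIndex = 1;
        }
        return links;
    }

    public static void main(String[] args) throws IOException {
        List<String> links = collectPaginationLinks("http://github.com/apple/turicreate/commits/master");
        for (int i = 0; i < links.size(); i++) {
            System.out.println("pagination-link_" + (i + 1) + "\t" + links.get(i));
        }
    }
}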