I need to build a web crawler to collect links and information from a particular site. I also need to use Apache HTTP Client for this, and I've spent a few days going through tutorials on the web without getting anywhere. Right now I'm trying to figure out how to fetch the HTML with Apache HttpClient so that I can parse it. Frankly, I may have misunderstood what HttpClient is meant for. Any help would be appreciated.
H-m-m ... that's basically it, but... don't be surprised if what you get is not what you see in a browser. As I said, you'll get whatever the server actually returns for the request:
// This is the Commons HttpClient 3.x API (HostConfiguration/GetMethod),
// not the newer Apache HttpComponents line.
import org.apache.commons.httpclient.HostConfiguration;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.protocol.Protocol;

HttpClient client = new HttpClient();

// Point the client at the target host.
HostConfiguration hostConfig = new HostConfiguration();
hostConfig.setHost("my.site.com", 80, Protocol.getProtocol("http"));
client.setHostConfiguration(hostConfig);

// GET a single page, following any redirects the server sends.
GetMethod getHtmlPageMethod = new GetMethod("/myPage.html");
getHtmlPageMethod.setFollowRedirects(true);

try {
    int responseCode = client.executeMethod(getHtmlPageMethod);
    System.out.println("Got response code: " + responseCode);
    if (200 == responseCode) {
        System.out.println("Response code 200 - SUCCESS ... go for response body...");
        String responseBody = getHtmlPageMethod.getResponseBodyAsString();
        if (null != responseBody) {
            System.out.println("Got body string:" + System.lineSeparator());
            System.out.println(responseBody);
        } else {
            System.out.println("No response body returned!");
        }
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    // Always release the connection back to the pool when done.
    getHtmlPageMethod.releaseConnection();
}