Since WebMagic issues its network requests through Apache HttpClient, all we need is a handle on that object: perform the login with it, then route subsequent requests through the same HttpClient instance to crawl in a logged-in state. The login cookies need no manual management; HttpClient stores and resends them automatically.
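To see why, note that each CloseableHttpClient owns a cookie store, so cookies set by one response are replayed on later requests from the same client. A quick standalone illustration (httpbin.org is just a convenient echo service for this sketch):

import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// One client, one cookie store: the Set-Cookie from the first request is
// resent automatically on the second, with no manual cookie handling.
try (CloseableHttpClient client = HttpClients.createDefault()) {
    client.execute(new HttpGet("https://httpbin.org/cookies/set?session=abc")).close();
    // This request goes out with "Cookie: session=abc".
    client.execute(new HttpGet("https://httpbin.org/cookies")).close();
}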

Reading the source shows that the HttpClient is created and used inside HttpClientDownloader:

@ThreadSafe
public class HttpClientDownloader extends AbstractDownloader {
    private Logger logger = LoggerFactory.getLogger(this.getClass());
    private final Map<String, CloseableHttpClient> httpClients = new HashMap();
    private HttpClientGenerator httpClientGenerator = new HttpClientGenerator();
    private HttpUriRequestConverter httpUriRequestConverter = new HttpUriRequestConverter();
    private ProxyProvider proxyProvider;
    private boolean responseHeader = true;

    public HttpClientDownloader() {
    }

    // ... getHttpClient(Site) and the remaining methods omitted ...
}

As the source shows, the CloseableHttpClient instances live in a Map held by the downloader as a private field, and the getHttpClient method is private as well, so there is no way to obtain the client from outside to log in with it.

To work around this, copy HttpClientDownloader into a class of your own that extends AbstractDownloader, and change the getter from private to public:

public CloseableHttpClient getHttpClient(Site site)
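A minimal sketch of that class (MyDownloader is an assumed name; the getHttpClient body below mirrors the one in WebMagic's HttpClientDownloader, and the rest of the copied class is elided):

import java.util.HashMap;
import java.util.Map;

import org.apache.http.impl.client.CloseableHttpClient;

import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.downloader.AbstractDownloader;
import us.codecraft.webmagic.downloader.HttpClientGenerator;

public class MyDownloader extends AbstractDownloader {

    private final Map<String, CloseableHttpClient> httpClients = new HashMap<>();

    private HttpClientGenerator httpClientGenerator = new HttpClientGenerator();

    // Identical to HttpClientDownloader's version except for the visibility:
    // one client per domain, created lazily and cached.
    public CloseableHttpClient getHttpClient(Site site) {
        if (site == null) {
            return httpClientGenerator.getClient(null);
        }
        String domain = site.getDomain();
        CloseableHttpClient httpClient = httpClients.get(domain);
        if (httpClient == null) {
            synchronized (this) {
                httpClient = httpClients.get(domain);
                if (httpClient == null) {
                    httpClient = httpClientGenerator.getClient(site);
                    httpClients.put(domain, httpClient);
                }
            }
        }
        return httpClient;
    }

    // ... download(Request, Task) and the rest of HttpClientDownloader's
    // body are copied over unchanged ...
}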

With your own class in hand, pass it to the spider via setDownloader when wiring up the crawl:

MyDownloader myDownloader = new MyDownloader();
Spider spider = Spider.create(new GithubRepoPageProcessor())
        .setDownloader(myDownloader)
        .addUrl("https://github.com/code4craft")
        .thread(5);
CloseableHttpClient httpClient = myDownloader.getHttpClient(spider.getSite());
// TODO: log in with httpClient
// ...
// ...
spider.run();
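The TODO above depends on the target site. As a rough sketch of a typical form login (the URL and form field names here are hypothetical), a POST through the shared client is usually enough, since the Set-Cookie response lands in that client's cookie store:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.util.EntityUtils;

// Hypothetical login endpoint and form fields; adjust for the real site.
HttpPost login = new HttpPost("https://example.com/session");
login.setEntity(new UrlEncodedFormEntity(Arrays.asList(
        new BasicNameValuePair("login", "user"),
        new BasicNameValuePair("password", "secret")), StandardCharsets.UTF_8));
try (CloseableHttpResponse response = httpClient.execute(login)) {
    // The session cookie from Set-Cookie now lives in httpClient's cookie
    // store, so every later request through this client is authenticated.
    EntityUtils.consume(response.getEntity());
}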

Before the spider runs, obtain the HttpClient via getHttpClient and log in first; the downloader itself can then be kept as a global variable so that every PageProcessor reaches the same logged-in client and the login state is preserved.
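One simple way to share it (an assumed pattern, not part of WebMagic) is a static holder, so a PageProcessor that needs an authenticated side request can call Downloaders.INSTANCE.getHttpClient(site):

// Assumed helper: one downloader for the whole application, giving every
// PageProcessor access to the same logged-in HttpClient instances.
public final class Downloaders {
    public static final MyDownloader INSTANCE = new MyDownloader();

    private Downloaders() {
    }
}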
