Scraping douban and saving the data to MongoDB

Create a new Scrapy project: scrapy startproject doubantest

Generate a basic spider (a plain scrapy.Spider): scrapy genspider doubanmovie "movie.douban.com"
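
After these two commands the project should look roughly like this (the exact files come from the Scrapy project template and may vary a little by version; doubanmovie.py is the spider we just generated):

doubantest/
    scrapy.cfg
    doubantest/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            doubanmovie.py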

 

Now let's get to the code.

First, define the fields you want to extract in items.py:

class DoubantestItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # movie title
    title = scrapy.Field()
    # rating
    score = scrapy.Field()
    # director / leading actors
    star = scrapy.Field()
    # one-line summary
    info = scrapy.Field()
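
A quick aside: a scrapy.Item behaves like a dict, which is exactly what the MongoDB pipeline relies on later. A small sketch in a Python shell (the field values are made up for illustration):

from doubantest.items import DoubantestItem

item = DoubantestItem(title='肖申克的救赎', score='9.7')
item['info'] = 'some one-line summary'
print(dict(item))   # prints the three fields as a plain dict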

Next, write the spider:

import scrapy
# We defined the DoubantestItem class in doubantest/items.py earlier; import it here
from doubantest.items import DoubantestItem


class DoubanmovieSpider(scrapy.Spider):
    # spider name
    name = 'doubanmovie'
    # allowed domain
    allowed_domains = ['movie.douban.com']

    start = 0
    url = 'https://movie.douban.com/top250?start='

    # the first URL to crawl
    start_urls = [url + str(start)]

    # a basic spider must implement parse()
    def parse(self, response):
        # match the root node of every movie entry
        movies = response.xpath('//div[@class="info"]')

        for each in movies:
            # wrap the data of this entry in a DoubantestItem object
            item = DoubantestItem()

            # extract the fields from the entry;
            # xpath() returns a list of selectors, extract() returns unicode strings
            item['title'] = each.xpath('.//span[@class="title"]/text()').extract()[0]
            item['score'] = each.xpath('.//span[@class="rating_num"]/text()').extract()[0]
            item['star'] = each.xpath('.//p[@class=""]/text()').extract()[0]
            # a few entries have no one-line summary, so fall back to an empty string
            item['info'] = each.xpath('.//span[@class="inq"]/text()').extract_first('')

            # hand the item over to the pipelines
            yield item

        # follow the next page (25 movies per page, 250 in total)
        if self.start < 250:
            self.start += 25
            yield scrapy.Request(self.url + str(self.start), callback=self.parse)
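
If an XPath does not return what you expect, the quickest way to debug it is scrapy shell (a sketch, assuming the page is reachable and douban does not block the default User-Agent):

scrapy shell "https://movie.douban.com/top250?start=0"
>>> len(response.xpath('//div[@class="info"]'))
25
>>> response.xpath('//div[@class="info"]//span[@class="title"]/text()').extract_first()
'肖申克的救赎'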

Modify settings.py:

BOT_NAME = 'doubantest'

SPIDER_MODULES = ['doubantest.spiders']
NEWSPIDER_MODULE = 'doubantest.spiders'
# the robots.txt protocol; setting it to False is fine here
ROBOTSTXT_OBEY = False
# disable cookies
COOKIES_ENABLED = False
# enable the item pipeline
ITEM_PIPELINES = {
    'doubantest.pipelines.DoubantestPipeline': 300,
}

# MongoDB settings

MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'Douban'
MONGODB_DOCNAME = 'Doubanmovie'
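
Before running the crawl it is worth checking that a local mongod is actually listening on the host and port configured above; a minimal check with pymongo (assumes MongoDB is installed and running):

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
# server_info() forces a round trip and raises an error if the server is unreachable
print(client.server_info()['version'])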

Write the pipeline file:

import pymongo
# scrapy.conf has been removed from recent Scrapy versions;
# read the project settings through get_project_settings() instead
from scrapy.utils.project import get_project_settings

settings = get_project_settings()


class DoubantestPipeline(object):
    def __init__(self):
        # read the MongoDB settings defined in settings.py
        host = settings['MONGODB_HOST']
        port = settings['MONGODB_PORT']
        dbname = settings['MONGODB_DBNAME']
        docname = settings['MONGODB_DOCNAME']
        # connect to MongoDB
        client = pymongo.MongoClient(host=host, port=port)
        # select the database
        mdb = client[dbname]
        # select the collection
        self.post = mdb[docname]

    def process_item(self, item, spider):
        # save the item to the collection (insert_one replaces the deprecated insert)
        self.post.insert_one(dict(item))
        # process_item() must return the item
        return item
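
After a crawl has finished, the stored documents can be inspected directly with pymongo; a quick sketch using the database and collection names from the settings above:

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
coll = client['Douban']['Doubanmovie']
print(coll.count_documents({}))      # 250 after a full crawl
for doc in coll.find().limit(3):
    print(doc['title'], doc['score'])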

If you run this project from PyCharm, you also need a small launch script, start.py:

from scrapy import cmdline
cmdline.execute("scrapy crawl doubanmovie".split())

The code above is enough to save the scraped data into a local MongoDB database.
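
If you prefer not to go through cmdline, the same crawl can also be started with CrawlerProcess; a sketch of an alternative start.py (the spider module path assumes the default layout produced by genspider):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from doubantest.spiders.doubanmovie import DoubanmovieSpider

# load the project settings so the MongoDB pipeline is picked up
process = CrawlerProcess(get_project_settings())
process.crawl(DoubanmovieSpider)
process.start()   # blocks until the crawl is finished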

 

Many sites have anti-scraping measures in place, and most of them key on the User-Agent and the client IP. Below we counter both with downloader middlewares that rotate the User-Agent and the proxy IP.


# First register the downloader middleware classes in settings.py,
# together with the User-Agent list and the proxy IP pool
# (the class paths must match the classes defined in middlewares.py below)
DOWNLOADER_MIDDLEWARES = {
    'doubantest.middlewares.RandomUserAgent': 100,
    'doubantest.middlewares.RandomProxy': 110,
}

USER_AGENTS = [
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2)',
    'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
    'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
    'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
    'Mozilla/5.0 (Linux; U; Android 4.0.3; zh-cn; M032 Build/IML74K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30',
    'Mozilla/5.0 (Windows; U; Windows NT 5.2) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13'
]


PROXIES = [
    {"ip_port":"106.75.164.15:3128","user_passwd":""},
    {"ip_port":"61.135.217.7:80","user_passwd":""},
    {"ip_port":"118.190.95.35:9001","user_passwd":""}
]
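
Free proxies like these tend to die quickly, so it is worth checking that an entry still responds before relying on it; a rough sketch with urllib (httpbin.org is just an example echo service, and the address is the first entry from the pool above):

import urllib.request

proxy_handler = urllib.request.ProxyHandler({'http': 'http://106.75.164.15:3128'})
opener = urllib.request.build_opener(proxy_handler)
try:
    # httpbin echoes back the IP it sees, so a working proxy shows the proxy's IP
    print(opener.open('http://httpbin.org/ip', timeout=5).read().decode())
except Exception as exc:
    print('proxy not usable:', exc)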

Write the downloader middlewares in middlewares.py:

import random
import base64

# import the pools defined in settings.py (absolute import so it also works on Python 3)
from doubantest.settings import USER_AGENTS
from doubantest.settings import PROXIES


# attach a random User-Agent to every outgoing request
class RandomUserAgent(object):
    def process_request(self, request, spider):
        useragent = random.choice(USER_AGENTS)
        request.headers.setdefault("User-Agent", useragent)


# route every request through a random proxy from the pool
class RandomProxy(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)

        if not proxy['user_passwd']:
            # proxy without authentication
            request.meta['proxy'] = "http://" + proxy['ip_port']
        else:
            # base64-encode the credentials (b64encode works on bytes, hence encode/decode)
            base64_userpasswd = base64.b64encode(proxy['user_passwd'].encode()).decode()
            # put them into the Proxy-Authorization header the proxy server expects
            request.headers['Proxy-Authorization'] = 'Basic ' + base64_userpasswd
            request.meta['proxy'] = "http://" + proxy['ip_port']

 

Why HTTP proxy credentials are base64-encoded:

The principle of an HTTP proxy is simple: the client talks to the proxy server over HTTP, and the request contains the IP and port of the remote host it wants to reach, plus authorization information if authentication is required. The proxy first verifies the credentials, then opens a connection to the remote host, and returns a 200 response to the client once the tunnel is established. The exchange looks like this:

CONNECT 59.64.128.198:21 HTTP/1.1
Host: 59.64.128.198:21
Proxy-Authorization: Basic bGV2I1TU5OTIz
User-Agent: OpenFetion
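
The Proxy-Authorization value is simply "user:password" run through base64; a small sketch with made-up credentials:

import base64

user_passwd = 'user:password'   # made-up credentials
token = base64.b64encode(user_passwd.encode()).decode()
print('Proxy-Authorization: Basic ' + token)
# Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==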

 
