1. Create a new Scrapy project:

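The screenshot showed the project scaffold being created from the command line. Assuming the project name maoyan and the spider name maoyanTop100 used throughout this post, the equivalent commands would be:

scrapy startproject maoyan
cd maoyan
scrapy genspider maoyanTop100 maoyan.com/board/4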

2. Open the project in PyCharm.


3. Add the following to settings.py:

# Spoof a browser user agent to get past basic anti-scraping checks
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
# Avoid garbled characters when exporting feeds (GBK suits a Chinese Windows console)
FEED_EXPORT_ENCODING = 'gbk'
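Note that FEED_EXPORT_ENCODING only takes effect when you export a feed. For example, once the spider from step 5 exists, you could dump the items to a CSV file (top100.csv is just an illustrative filename):

scrapy crawl maoyanTop100 -o top100.csv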

4. Define the item fields in items.py:

from scrapy import Item, Field

class MaoyanItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    movie = Field()      # movie title
    actor = Field()      # starring actors
    release = Field()    # release date
    score = Field()      # Maoyan rating
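A Scrapy Item behaves like a dict, which is why the pipeline in step 8 can simply call dict(item). A quick illustration (the values here are made up):

item = MaoyanItem()
item['movie'] = '霸王别姬'  # illustrative value only
item['score'] = '9.6'
print(dict(item))  # {'movie': '霸王别姬', 'score': '9.6'}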

5. Open maoyanTop100.py under the spiders folder and add the following code:

import time
import scrapy
from maoyan.items import MaoyanItem


class Maoyantop100Spider(scrapy.Spider):
    name = 'maoyanTop100'
    # allowed_domains = ['maoyan.com/board/4']
    allowed_domains = ['maoyan.com']   # must be the bare domain, otherwise next-page requests get filtered out
    start_urls = ['http://maoyan.com/board/4/']

    def parse(self, response):
        context = response.css('dd')  # every movie entry on the page sits inside a <dd> tag
        for info in context:
            item = MaoyanItem()
            item['movie'] = info.css('p.name a::text').extract_first().strip()
            item['actor'] = info.css('.star::text').extract_first().strip()
            item['release'] = info.css('.releasetime::text').extract_first().strip()
            score = info.css('i.integer::text').extract_first().strip()    # integer part of the rating, e.g. '9.'
            score += info.css('i.fraction::text').extract_first().strip()  # fractional part, e.g. '6'
            item['score'] = score
            yield item

        time.sleep(1)  # pause for one second to ease the anti-scraping pressure

        next_page = response.css('li a::attr(href)').extract()[-1]  # the last <li><a> holds the next-page link
        url = response.urljoin(next_page)
        yield scrapy.Request(url=url, callback=self.parse)  # parse the next page
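One caveat: time.sleep() blocks the whole Scrapy process, not just this spider. The framework's built-in throttle achieves the same pacing without blocking; if you prefer it, remove the sleep and add this to settings.py:

DOWNLOAD_DELAY = 1  # seconds to wait between consecutive requests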

6. Run the following command in the Terminal pane:

scrapy crawl maoyanTop100

7. The console output shows that the items were scraped successfully.


8. Add the following code to pipelines.py:

import pymongo

class MongoPipeline(object):
    def __init__(self, mongo_url, mongo_db):
        self.mongo_url = mongo_url
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # read the connection settings added in step 9
        return cls(
            mongo_url=crawler.settings.get('MONGO_URL'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_url)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        name = item.__class__.__name__  # collection is named after the item class: 'MaoyanItem'
        self.db[name].insert_one(dict(item))  # insert_one() replaces the deprecated insert()
        return item

    def close_spider(self, spider):
        self.client.close()


class MaoyanPipeline(object):
    def process_item(self, item, spider):
        return item
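Before wiring the pipeline in, it is worth checking that MongoDB is actually reachable. A minimal sketch, assuming MongoDB is running locally on the default port:

import pymongo

client = pymongo.MongoClient('localhost')
print(client.server_info()['version'])  # raises ServerSelectionTimeoutError if MongoDB is down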

9. Add the following to settings.py:

ITEM_PIPELINES = {
    'maoyan.pipelines.MongoPipeline': 300,  # priority 0-1000; lower numbers run earlier
}

MONGO_URL = 'localhost'
MONGO_DB = 'maoyan'

10. Run the crawl again from the Terminal:

scrapy crawl maoyanTop100

11. Open the Robo 3T client; the maoyan database now holds the scraped data.

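If you don't have Robo 3T installed, a few lines of pymongo verify the same result. The collection is named MaoyanItem because the pipeline uses the item's class name:

import pymongo

db = pymongo.MongoClient('localhost')['maoyan']
print(db['MaoyanItem'].count_documents({}))  # should print 100 for the Top 100 board
for doc in db['MaoyanItem'].find().limit(3):
    print(doc['movie'], doc['score'])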


This confirms the data was saved to MongoDB successfully.
