Scrapy
Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, from data mining and information processing to archiving historical data. Although it was originally designed for page scraping (more precisely, web scraping), it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy is versatile and is commonly used for data mining, monitoring, and automated testing.
Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows.
Scrapy consists of the following main components:
- Engine (Scrapy Engine): handles the data flow across the whole system and triggers events; it is the core of the framework.
- Scheduler: accepts requests sent by the engine, pushes them onto a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs (the addresses to crawl): it decides which URL to fetch next and removes duplicate URLs.
- Downloader: downloads page content and hands it back to the spiders (the downloader is built on Twisted, an efficient asynchronous model).
- Spiders: the workhorses. They extract the information you need from specific pages, i.e. the items. They can also extract links so that Scrapy continues crawling the next page.
- Item Pipeline: processes the items extracted by the spiders; its main jobs are persisting items, validating them, and stripping unneeded data. After a page is parsed by a spider, its items are sent to the pipeline and processed in a specific order.
- Downloader Middlewares: a layer between the Scrapy engine and the downloader that processes the requests and responses passing between them.
- Spider Middlewares: a layer between the Scrapy engine and the spiders that processes the spiders' response input and request output.
- Scheduler Middlewares: a layer between the Scrapy engine and the scheduler that processes the requests and responses sent between them.
The Scrapy workflow roughly goes like this (a minimal spider tying the steps together is sketched after the list):
- The engine takes a URL from the scheduler for the next crawl.
- The engine wraps the URL in a Request and passes it to the downloader.
- The downloader fetches the resource and wraps it in a Response.
- The spider parses the Response.
- Parsed items are handed to the item pipeline for further processing.
- Parsed links (URLs) are handed back to the scheduler to await crawling.
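The sketch below illustrates the loop: a parse callback that yields both items (which go to the item pipeline) and new Requests (which go back to the scheduler). The spider name, URL and MyItem class are illustrative placeholders, not part of the original text, and a reasonably recent Scrapy is assumed.

import scrapy

class MyItem(scrapy.Item):
    title = scrapy.Field()

class FlowDemoSpider(scrapy.Spider):
    name = "flow_demo"                       # placeholder spider name
    start_urls = ["http://example.com/"]     # placeholder start URL

    def parse(self, response):
        # Yielding an item sends it to the item pipeline.
        yield MyItem(title=response.xpath("//title/text()").extract_first())
        # Yielding a Request hands the URL back to the scheduler for crawling.
        for href in response.xpath("//a/@href").extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)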
1. Installation
Linux:
    pip3 install scrapy

Windows:
    1. Install directly: pip3 install scrapy
    2. If that fails, Twisted may need to be installed first:
       2.1 Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
       2.2 In the download directory, run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
    3. Install pywin32: pip install pywin32
2. Basic usage
2.1 Basic commands
1. scrapy startproject <project_name>
   - Create a project in the current directory (similar to Django).
2. scrapy genspider [-t template] <name> <domain>
   - Create a spider, e.g.:
       scrapy genspider -t basic oldboy oldboy.com
       scrapy genspider -t xmlfeed autohome autohome.com.cn
   - PS: list the available templates: scrapy genspider -l
   - Show a template's contents: scrapy genspider -d <template_name>
3. scrapy list
   - List the spiders in the project.
4. scrapy crawl <spider_name>
   - Run a single spider.
2.2 Project structure and spider basics
project_name/
    scrapy.cfg
    project_name/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider1.py
            spider2.py
            spider3.py
File descriptions:
- scrapy.cfg: the project's top-level configuration. (The settings that actually matter for crawling live in settings.py.)
- items.py: data models used to structure the scraped data, similar to Django's Model.
- pipelines.py: data-processing behaviour, e.g. persisting the structured data.
- settings.py: configuration such as recursion depth, concurrency, download delay, and so on.
- spiders/: the spider directory; create files here and write the crawling rules.
Note: spider files are usually named after the target site's domain.
import scrapy

class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                     # spider name (required)
    allowed_domains = ["xiaohuar.com"]    # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",   # start URL
    ]

    def parse(self, response):
        # callback invoked with the result of fetching each start URL
        pass
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class DigSpider(scrapy.Spider):
    # spider name, used to start the crawl from the command line
    name = "dig"
    # allowed domains
    allowed_domains = ["chouti.com"]
    # start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)
        hxs = HtmlXPathSelector(response)
        page_list = hxs.select(
            '//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href'
        ).extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key
To run this spider, open a terminal in the project directory and execute:
scrapy crawl dig --nolog    # --nolog: suppress log output
The important points in the code above:
- Request is a class that wraps a user request; yielding a Request object from a callback tells Scrapy to keep crawling that URL.
- HtmlXPathSelector structures the HTML and provides selector functionality.
yield scrapy.Request(url, callback=self.parse)
Parsing: a short selector example is sketched below.
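As a quick illustration of the selector API mentioned above, here is a minimal sketch; the spider name, URL and XPath expressions are placeholders, and Selector / response.urljoin are assumed to be available (they are in Scrapy 1.0 and later).

import scrapy
from scrapy.selector import Selector

class ParseDemoSpider(scrapy.Spider):
    name = "parse_demo"                          # placeholder name
    start_urls = ["http://dig.chouti.com/"]

    def parse(self, response):
        sel = Selector(response=response)
        # extract() returns a list of matched strings
        for text in sel.xpath('//a/text()').extract():
            print(text.strip())
        # yield new Requests to keep crawling pagination links
        for href in sel.xpath('//div[@id="dig_lcpage"]//a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)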
2.3 URL deduplication
Request.__init__ in Scrapy takes a number of built-in parameters. One of them is dont_filter=False: by default Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter (see the RFPDupeFilter source for details; a short dont_filter sketch follows the settings below). The related settings:
DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "directory for the record of seen requests, e.g. /root/"   # final path: /root/requests.seen
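If a particular request should bypass the duplicate filter, pass dont_filter=True when building it. A minimal, illustrative sketch (spider name and URL are placeholders; it deliberately re-requests the same page just to show the flag):

import scrapy

class NoDedupSpider(scrapy.Spider):
    name = "no_dedup_demo"                       # placeholder name
    start_urls = ["http://dig.chouti.com/"]

    def parse(self, response):
        # dont_filter=True means this Request is NOT dropped by the dupe filter,
        # e.g. when a page must be deliberately re-fetched (illustration only).
        yield scrapy.Request(
            url="http://dig.chouti.com/",
            callback=self.parse,
            dont_filter=True,
        )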
We can also implement the deduplication ourselves, which gives more flexibility over the data: visited URLs can be kept in memory, a database, a file, a cache, and so on, and the storage can be swapped easily.
1) First, define a custom dedup class, RepeatFilter, and point Scrapy's dedup setting in settings.py at our own class instead of the default:
# settings.py
DUPEFILTER_CLASS = 'project_name.dupefilter.RepeatFilter'   # dotted path to our own class (in the project package), not Scrapy's default
2) Then create a dupefilter.py file and define the RepeatFilter class in it, overriding the following methods of RFPDupeFilter:
- def from_settings(cls, settings): defined as a classmethod; it is the first method called and is used to instantiate RepeatFilter.
- def request_seen(self, request): checks whether the current request has already been seen; returns True if it has, False if it has not.
- def open(self): called when crawling starts.
- def close(self, reason): called when the spider finishes.
- def log(self, request, spider): logging.
Call order: 1. from_settings → 2. __init__ → 3. open → 4. request_seen → 5. close
# dupefilter.py
class RepeatFilter(object):
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """Called at initialization; returns a RepeatFilter instance."""
        return cls()

    def request_seen(self, request):
        """Check whether the current request has been seen.
        Returns True if already visited, False otherwise."""
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """Called when crawling starts."""
        print('open replication')

    def close(self, reason):
        """Called when the spider finishes."""
        print('close replication')

    def log(self, request, spider):
        """Log a duplicate request."""
        print('repeat', request.url)
2.4 Pipelines
In Scrapy, items structure the scraped data, which is then handed to the pipelines for processing (a sketch of an Item definition follows the yield line below).
Yielding an item, as shown next, passes it into the pipelines: the process_item method of each registered pipeline class runs on it to persist the data. The execution order is decided by the ITEM_PIPELINES setting in settings.py; lower numbers run first.
yield item_example   # as soon as this executes, the classes in pipelines.py take over:
                     # process_item is called to persist the data,
                     # open_spider is called when the spider starts crawling
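For context, a minimal Item definition and the spider snippet that yields it might look like this; the field names mirror the ArticleImagePipeline example below but are otherwise illustrative, and extract_first() assumes a reasonably recent Scrapy.

# items.py (illustrative field names)
import scrapy

class ArticleItem(scrapy.Item):
    title = scrapy.Field()
    front_image_url = scrapy.Field()
    front_image_path = scrapy.Field()

# inside a spider's parse method:
#     item = ArticleItem()
#     item["title"] = response.xpath('//title/text()').extract_first()
#     yield item        # sent to every pipeline registered in ITEM_PIPELINES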
Pipeline classes are written in pipelines.py. For a pipeline class to run, it must be registered in settings.py, for example:
# pipelines.py
from scrapy.pipelines.images import ImagesPipeline


class ArticleImagePipeline(ImagesPipeline):
    def item_completed(self, results, item, info):
        if "front_image_url" in item:
            for ok, value in results:
                image_file_path = value["path"]
                item["front_image_path"] = image_file_path
        # Returning the item passes it on to the next pipeline class; if you do not
        # want any further processing, raise DropItem() (from scrapy.exceptions)
        # to discard the item instead.
        return item
Register it in settings.py:
ITEM_PIPELINES = {
    'ArticleSpider.pipelines.ArticleImagePipeline': 2,   # our custom ArticleImagePipeline; the dotted pipeline path must be correct
}
Pipeline classes also have several methods, much like the dedup class above (a minimal sketch follows the list):
- def process_item(self, item, spider): called to process and persist each item.
- def from_crawler(cls, crawler): called at initialization; reads settings and instantiates the pipeline object.
- def open_spider(self, spider): called when the spider starts.
- def close_spider(self, spider): called when the spider finishes.
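A minimal sketch of a pipeline that uses all four hooks. The class name, the JSON_FILE_PATH setting and the default file name are hypothetical, not from the original text.

# pipelines.py - a hedged sketch, not the article's own pipeline
import json


class JsonLinesPipeline(object):
    def __init__(self, file_path):
        self.file_path = file_path
        self.file = None

    @classmethod
    def from_crawler(cls, crawler):
        # read a (hypothetical) setting, falling back to a default path
        return cls(crawler.settings.get('JSON_FILE_PATH', 'items.jl'))

    def open_spider(self, spider):
        # called when the spider starts
        self.file = open(self.file_path, 'w', encoding='utf-8')

    def close_spider(self, spider):
        # called when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        # persist each item as one JSON line, then pass it on
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

Remember to register it in ITEM_PIPELINES, e.g. 'project_name.pipelines.JsonLinesPipeline': 300.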
2.5 Cookies
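The two listings below show two ways of handling cookies when logging in to dig.chouti.com: the first lets Scrapy manage the session by passing meta={'cookiejar': True} on each Request, while the second extracts the cookies manually with CookieJar and passes them explicitly via the cookies argument.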
# -*- coding: utf-8 -*-
import scrapy
from scrapy.http.response.html import HtmlResponse
from scrapy.http import Request
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = (
        'http://www.chouti.com/',
    )

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        yield Request(url=url, callback=self.login, meta={'cookiejar': True})

    def login(self, response):
        print(response.headers.getlist('Set-Cookie'))
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8613121758648&password=woshiniba&oneMonth=1',
            callback=self.check_login,
            meta={'cookiejar': True}
        )
        yield req

    def check_login(self, response):
        print(response.text)
# -*- coding: utf-8 -*-
import sys
import io

import scrapy
from scrapy.http import Request
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http.cookies import CookieJar
from ..items import ChoutiItem

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com", ]
    start_urls = ['http://dig.chouti.com/']
    cookie_dict = None

    def parse(self, response):
        cookie_obj = CookieJar()
        cookie_obj.extract_cookies(response, response.request)
        self.cookie_dict = cookie_obj._cookies   # the cookies

        # log in with username/password plus the cookies
        yield Request(
            url="http://dig.chouti.com/login",
            method='POST',
            body="phone=8615131255089&password=woshiniba&oneMonth=1",
            headers={'Content-Type': "application/x-www-form-urlencoded; charset=UTF-8"},
            cookies=cookie_obj._cookies,
            callback=self.check_login
        )

    def check_login(self, response):
        print(response.text)
        yield Request(url="http://dig.chouti.com/", callback=self.good)

    def good(self, response):
        id_list = Selector(response=response).xpath('//div[@share-linkid]/@share-linkid').extract()
        for nid in id_list:
            print(nid)
            url = "http://dig.chouti.com/link/vote?linksId=%s" % nid
            yield Request(
                url=url,
                method="POST",
                cookies=self.cookie_dict,
                callback=self.show
            )
        # page_urls = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        # for page in page_urls:
        #     url = "http://dig.chouti.com%s" % page
        #     yield Request(url=url, callback=self.good)

    def show(self, response):
        print(response.text)
2.6 Scrapy extensions: use signals to register custom behaviour at specific points
Scrapy provides extension hooks for custom behaviour. In settings.py:
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}
If you look at the TelnetConsole source, you can see that by writing your own __init__ and from_crawler methods you can build your own hook.
scrapy.signals provides a range of signals we can use when writing custom extension hooks:
engine_started = object()        # when the engine starts
engine_stopped = object()        # when the engine stops
spider_opened = object()         # when a spider opens
spider_idle = object()           # when a spider goes idle
spider_closed = object()         # when a spider closes
spider_error = object()          # when a spider raises an error
request_scheduled = object()     # when a request reaches the scheduler
request_dropped = object()       # when a request is dropped
response_received = object()     # when a response is received
response_downloaded = object()   # when a response is downloaded
item_scraped = object()          # when an item is scraped
item_dropped = object()          # when an item is dropped
Custom hook:
Create an extensions.py file and define a MyExtension class:
from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
Register it in settings.py:
EXTENSIONS = {
    # 'scrapy.extensions.telnet.TelnetConsole': None,
    'articlespider.extensions.MyExtension': 300,
}
2.7 settings.py configuration
# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot (crawler) name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. Whether to obey robots.txt (set to False to ignore it)
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored
# and the download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are supported (cookiejar handles the cookies)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler:
#    connect with "telnet <ip> <port>", then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines that process scraped items
# ITEM_PIPELINES = {
#     'step8_king.pipelines.JsonPipeline': 700,
#     'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum allowed crawl depth; the current depth can be read from meta; 0 means no limit
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 means depth-first LIFO (default); 1 means breadth-first FIFO

# LIFO, depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# FIFO, breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

Items 17-22 (auto-throttling, HTTP caching, proxies, HTTPS certificates, and the downloader/spider middlewares) are covered with their full notes in sections 2.8-2.11 below.
2.8 Scrapy configuration: auto-throttling and caching
""" 17. 自动限速算法from scrapy.contrib.throttle import AutoThrottle自动限速设置1. 获取最小延迟 DOWNLOAD_DELAY2. 获取最大延迟 AUTOTHROTTLE_MAX_DELAY3. 设置初始下载延迟 AUTOTHROTTLE_START_DELAY4. 当请求下载完成后,获取其"连接"时间 latency,即:请求连接到接受到响应头之间的时间5. 用于计算的... AUTOTHROTTLE_TARGET_CONCURRENCYtarget_delay = latency / self.target_concurrencynew_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延迟时间new_delay = max(target_delay, new_delay)new_delay = min(max(self.mindelay, new_delay), self.maxdelay)slot.delay = new_delay """# 开始自动限速 # AUTOTHROTTLE_ENABLED = True # The initial download delay # 初始下载延迟 # AUTOTHROTTLE_START_DELAY = 5 # The maximum download delay to be set in case of high latencies # 最大下载延迟 # AUTOTHROTTLE_MAX_DELAY = 10 # The average number of requests Scrapy should be sending in parallel to each remote server # 平均每秒并发数 # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
""" 18. 启用缓存目的用于将已经发送的请求或相应缓存下来,以便以后使用from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddlewarefrom scrapy.extensions.httpcache import DummyPolicyfrom scrapy.extensions.httpcache import FilesystemCacheStorage """ # 是否启用缓存策略 # HTTPCACHE_ENABLED = True# 缓存策略:所有请求均缓存,下次在请求直接访问原来的缓存即可 # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy" # 缓存策略:根据Http响应头:Cache-Control、Last-Modified 等进行缓存的策略 # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"# 缓存超时时间 # HTTPCACHE_EXPIRATION_SECS = 0# 缓存保存路径 # HTTPCACHE_DIR = 'httpcache'# 缓存忽略的Http状态码 # HTTPCACHE_IGNORE_HTTP_CODES = []# 缓存存储的插件 # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
2.9 Scrapy proxies: default and custom
""" 19. 代理,需要在环境变量中设置from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware方式一:使用默认os.environ{http_proxy:http://root:[email protected]:9999/https_proxy:http://192.168.11.11:9999/}方式二:使用自定义下载中间件def to_bytes(text, encoding=None, errors='strict'):if isinstance(text, bytes):return textif not isinstance(text, six.string_types):raise TypeError('to_bytes must receive a unicode, str or bytes ''object, got %s' % type(text).__name__)if encoding is None:encoding = 'utf-8'return text.encode(encoding, errors)class ProxyMiddleware(object):def process_request(self, request, spider):PROXIES = [{'ip_port': '111.11.228.75:80', 'user_pass': ''},{'ip_port': '120.198.243.22:80', 'user_pass': ''},{'ip_port': '111.8.60.9:8123', 'user_pass': ''},{'ip_port': '101.71.27.120:80', 'user_pass': ''},{'ip_port': '122.96.59.104:80', 'user_pass': ''},{'ip_port': '122.224.249.122:8088', 'user_pass': ''},]proxy = random.choice(PROXIES)if proxy['user_pass'] is not None:request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)print "**************ProxyMiddleware have pass************" + proxy['ip_port']else:print "**************ProxyMiddleware no pass************" + proxy['ip_port']request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])DOWNLOADER_MIDDLEWARES = {'step8_king.middlewares.ProxyMiddleware': 500,}"""
2.10 Scrapy: custom HTTPS certificates
""" 20. Https访问Https访问时有两种情况:1. 要爬取网站使用的可信任证书(默认支持)DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"2. 要爬取网站使用的自定义证书DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"# https.pyfrom scrapy.core.downloader.contextfactory import ScrapyClientContextFactoryfrom twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)class MySSLFactory(ScrapyClientContextFactory):def getCertificateOptions(self):from OpenSSL import cryptov1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())return CertificateOptions(privateKey=v1, # pKey对象certificate=v2, # X509对象verify=False,method=getattr(self, 'method', getattr(self, '_ssl_method', None)))其他:相关类scrapy.core.downloader.handlers.http.HttpDownloadHandlerscrapy.core.downloader.webclient.ScrapyHTTPClientFactoryscrapy.core.downloader.contextfactory.ScrapyClientContextFactory相关配置DOWNLOADER_HTTPCLIENTFACTORYDOWNLOADER_CLIENTCONTEXTFACTORY"""
2.11 Scrapy middlewares
1) Downloader middleware
""" 22. 下载中间件class DownMiddleware1(object):def process_request(self, request, spider):'''请求需要被下载时,经过所有下载器中间件的process_request调用:param request::param spider::return:None,继续后续中间件去下载;Response对象,停止process_request的执行,开始执行process_responseRequest对象,停止中间件的执行,将Request重新调度器raise IgnoreRequest异常,停止process_request的执行,开始执行process_exception'''passdef process_response(self, request, response, spider):'''spider处理完成,返回时调用:param response::param result::param spider::return:Response 对象:转交给其他中间件process_responseRequest 对象:停止中间件,request会被重新调度下载raise IgnoreRequest 异常:调用Request.errback'''print('response1')return responsedef process_exception(self, request, exception, spider):'''当下载处理器(download handler)或 process_request() (下载中间件)抛出异常:param response::param exception::param spider::return:None:继续交给后续中间件处理异常;Response对象:停止后续process_exception方法Request对象:停止中间件,request将会被重新调用下载'''return None默认下载中间件{'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,}"""
2) Spider middleware
""" 21. 爬虫中间件class SpiderMiddleware(object):def process_spider_input(self,response, spider):'''下载完成,执行,然后交给parse处理:param response: :param spider: :return: '''passdef process_spider_output(self,response, result, spider):'''spider处理完成,返回时调用:param response::param result::param spider::return: 必须返回包含 Request 或 Item 对象的可迭代对象(iterable)'''return resultdef process_spider_exception(self,response, exception, spider):'''异常调用:param response::param exception::param spider::return: None,继续交给后续中间件处理异常;含 Response 或 Item 的可迭代对象(iterable),交给调度器或pipeline'''return Nonedef process_start_requests(self,start_requests, spider):'''爬虫启动时调用:param start_requests::param spider::return: 包含 Request 对象的可迭代对象'''return start_requests内置爬虫中间件:'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,"""