Scrapy Source Code Analysis 3: Middlewares

2024-05-07 10:08

This is part 3 of a series analyzing the Scrapy source code; it walks through Scrapy's middleware machinery.

1 Introduction

Scrapy has three kinds of middleware: Downloader middlewares, Spider middlewares, and Extensions.

  • Downloader middlewares: sit between the engine and the downloader; they can apply logic before and after a page is downloaded;
  • Spider middlewares: sit between the engine and the spider; they process download results before they are fed to the spider, and process the requests / items the spider outputs;
  • Extensions: run throughout the whole crawl, mainly providing auxiliary functionality and statistics collection;
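
All three kinds are enabled and ordered through settings. A minimal sketch of what that looks like in a project's settings.py; the setting names are Scrapy's, while the myproject paths are hypothetical placeholders:

# settings.py sketch: for middlewares, lower numbers sit closer to the engine.
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware': 543,
}
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
}
EXTENSIONS = {
    'myproject.extensions.CustomExtension': 500,
}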

2 The common base class MiddlewareManager

The MiddlewareManager class lives in scrapy/middleware.py.

class MiddlewareManager:
    """Base class for implementing middleware managers"""

    component_name = 'foo middleware'

    def __init__(self, *middlewares):
        # middleware instances built from settings, e.g. 'scrapy.extensions.corestats.CoreStats'
        self.middlewares = middlewares
        # Optional because process_spider_output and process_spider_exception can be None
        # maps a method name to the bound methods of every middleware instance,
        # e.g. {"open_spider": [CoreStats.open_spider, ...]}
        self.methods: Dict[str, Deque[Optional[Callable]]] = defaultdict(deque)
        for mw in middlewares:
            self._add_middleware(mw)

    @classmethod
    def _get_mwlist_from_settings(cls, settings: Settings) -> list:
        # each subclass implements this to return the full import paths of all its middlewares
        raise NotImplementedError

    @classmethod
    def from_settings(cls, settings: Settings, crawler=None):
        mwlist = cls._get_mwlist_from_settings(settings)
        middlewares = []
        enabled = []
        for clspath in mwlist:
            try:
                # load the class
                mwcls = load_object(clspath)
                # create the middleware instance
                mw = create_instance(mwcls, settings, crawler)
                middlewares.append(mw)
                enabled.append(clspath)
            except NotConfigured as e:
                if e.args:
                    clsname = clspath.split('.')[-1]
                    logger.warning("Disabled %(clsname)s: %(eargs)s",
                                   {'clsname': clsname, 'eargs': e.args[0]},
                                   extra={'crawler': crawler})
        logger.info("Enabled %(componentname)ss:\n%(enabledlist)s",
                    {'componentname': cls.component_name,
                     'enabledlist': pprint.pformat(enabled)},
                    extra={'crawler': crawler})
        return cls(*middlewares)

    @classmethod
    def from_crawler(cls, crawler):
        return cls.from_settings(crawler.settings, crawler)

    def _add_middleware(self, mw) -> None:
        if hasattr(mw, 'open_spider'):
            self.methods['open_spider'].append(mw.open_spider)
        if hasattr(mw, 'close_spider'):
            self.methods['close_spider'].appendleft(mw.close_spider)

    def _process_parallel(self, methodname: str, obj, *args) -> Deferred:
        methods = cast(Iterable[Callable], self.methods[methodname])
        return process_parallel(methods, obj, *args)

    def _process_chain(self, methodname: str, obj, *args) -> Deferred:
        methods = cast(Iterable[Callable], self.methods[methodname])
        return process_chain(methods, obj, *args)

    def open_spider(self, spider: Spider) -> Deferred:
        # calls every middleware's open_spider(spider) method
        return self._process_parallel('open_spider', spider)

    def close_spider(self, spider: Spider) -> Deferred:
        # calls every middleware's close_spider(spider) method
        return self._process_parallel('close_spider', spider)

The open_spider methods run in forward (registration) order, while the close_spider methods run in reverse order.
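
A small standalone sketch (not Scrapy code) of why that is: open_spider handlers are collected with append while close_spider handlers are collected with appendleft, so iterating the two deques yields opposite orders:

from collections import deque

open_spider_methods = deque()
close_spider_methods = deque()

for name in ['MwA', 'MwB', 'MwC']:          # registration order
    open_spider_methods.append(name)         # appended to the right
    close_spider_methods.appendleft(name)    # prepended to the left

print(list(open_spider_methods))   # ['MwA', 'MwB', 'MwC']  -> forward order
print(list(close_spider_methods))  # ['MwC', 'MwB', 'MwA']  -> reverse order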

MiddlewareManager has three subclasses:

  • DownloaderMiddlewareManager manages the Downloader middlewares
  • SpiderMiddlewareManager manages the Spider middlewares
  • ExtensionManager manages the Extensions

3 ExtensionManager

The source is in scrapy/extension.py.

class ExtensionManager(MiddlewareManager):

    component_name = 'extension'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('EXTENSIONS'))
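
For context, a typical component managed by ExtensionManager follows the from_crawler / signals pattern. A minimal sketch; the class and the MYEXT_ENABLED setting name are illustrative, not Scrapy built-ins:

from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenCloseLogging:
    """Illustrative extension: logs when a spider opens and closes."""

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        # Raising NotConfigured makes MiddlewareManager.from_settings skip this
        # component and log it as "Disabled", as seen in the base class above.
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured
        ext = cls(crawler)
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        spider.logger.info("spider %s opened", spider.name)

    def spider_closed(self, spider):
        spider.logger.info("spider %s closed", spider.name)
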
The default value of EXTENSIONS is:
EXTENSIONS = {}
EXTENSIONS_BASE = {
    'scrapy.extensions.corestats.CoreStats': 0,
    'scrapy.extensions.telnet.TelnetConsole': 0,
    'scrapy.extensions.memusage.MemoryUsage': 0,
    'scrapy.extensions.memdebug.MemoryDebugger': 0,
    'scrapy.extensions.closespider.CloseSpider': 0,
    'scrapy.extensions.feedexport.FeedExporter': 0,
    'scrapy.extensions.logstats.LogStats': 0,
    'scrapy.extensions.spiderstate.SpiderState': 0,
    'scrapy.extensions.throttle.AutoThrottle': 0,
}

These settings are merged and ordered by build_component_list (scrapy/utils/conf.py):

def build_component_list(compdict, custom=None, convert=update_classpath):
    """Compose a component list from a { class: order } dictionary."""

    def _check_components(complist):
        if len({convert(c) for c in complist}) != len(complist):
            raise ValueError(
                f'Some paths in {complist!r} convert to the same object, '
                'please update your settings')

    def _map_keys(compdict):
        if isinstance(compdict, BaseSettings):
            compbs = BaseSettings()
            for k, v in compdict.items():
                prio = compdict.getpriority(k)
                if compbs.getpriority(convert(k)) == prio:
                    raise ValueError(
                        f'Some paths in {list(compdict.keys())!r} '
                        'convert to the same '
                        'object, please update your settings')
                else:
                    compbs.set(convert(k), v, priority=prio)
            return compbs
        else:
            _check_components(compdict)
            return {convert(k): v for k, v in compdict.items()}

    def _validate_values(compdict):
        """Fail if a value in the components dict is not a real number or None."""
        for name, value in compdict.items():
            if value is not None and not isinstance(value, numbers.Real):
                raise ValueError(
                    f'Invalid value {value} for component {name}, '
                    'please provide a real number or None instead')

    # BEGIN Backward compatibility for old (base, custom) call signature
    if isinstance(custom, (list, tuple)):
        _check_components(custom)
        return type(custom)(convert(c) for c in custom)

    if custom is not None:
        compdict.update(custom)
    # END Backward compatibility

    _validate_values(compdict)
    compdict = without_none_values(_map_keys(compdict))
    return [k for k, v in sorted(compdict.items(), key=itemgetter(1))]

As the last line shows, the components listed in EXTENSIONS are loaded in ascending order of their numeric value (priority).
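
A tiny standalone sketch of that ordering; the myproject path is hypothetical, the others are Scrapy built-ins:

from operator import itemgetter

components = {
    'scrapy.extensions.corestats.CoreStats': 0,
    'myproject.extensions.CustomExtension': 500,
    'scrapy.extensions.logstats.LogStats': 0,
}
# smallest value first; ties keep their insertion order (sorted() is stable)
ordered = [k for k, v in sorted(components.items(), key=itemgetter(1))]
print(ordered)
# ['scrapy.extensions.corestats.CoreStats',
#  'scrapy.extensions.logstats.LogStats',
#  'myproject.extensions.CustomExtension']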

4 DownloaderMiddlewareManager

The source is in scrapy/core/downloader/middleware.py.

class DownloaderMiddlewareManager(MiddlewareManager):

    component_name = 'downloader middleware'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('DOWNLOADER_MIDDLEWARES'))

    def _add_middleware(self, mw):
        if hasattr(mw, 'process_request'):
            self.methods['process_request'].append(mw.process_request)
        if hasattr(mw, 'process_response'):
            self.methods['process_response'].appendleft(mw.process_response)
        if hasattr(mw, 'process_exception'):
            self.methods['process_exception'].appendleft(mw.process_exception)

    def download(self, download_func: Callable, request: Request, spider: Spider):
        @defer.inlineCallbacks
        def process_request(request: Request):
            for method in self.methods['process_request']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(method(request=request, spider=spider))
                if response is not None and not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return None, Response or "
                        f"Request, got {response.__class__.__name__}")
                if response:
                    return response
            return (yield download_func(request=request, spider=spider))

        @defer.inlineCallbacks
        def process_response(response: Union[Response, Request]):
            if response is None:
                raise TypeError("Received None in process_response")
            elif isinstance(response, Request):
                return response

            for method in self.methods['process_response']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(
                    method(request=request, response=response, spider=spider))
                if not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return Response or Request, "
                        f"got {type(response)}")
                if isinstance(response, Request):
                    return response
            return response

        @defer.inlineCallbacks
        def process_exception(failure: Failure):
            exception = failure.value
            for method in self.methods['process_exception']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(
                    method(request=request, exception=exception, spider=spider))
                if response is not None and not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return None, Response or "
                        f"Request, got {type(response)}")
                if response:
                    return response
            return failure

        deferred = mustbe_deferred(process_request, request)
        deferred.addErrback(process_exception)
        deferred.addCallback(process_response)
        return deferred

The default middlewares are:

DOWNLOADER_MIDDLEWARES = {}
DOWNLOADER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
    # Downloader side
}
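
Because build_component_list drops entries whose value is None (without_none_values), a project can disable one of these built-ins by mapping it to None. A settings.py sketch; the myproject path is a hypothetical placeholder:

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,  # disabled
    'myproject.middlewares.CustomHeaderMiddleware': 543,  # custom, positioned relative to the base dict
}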

A downloader middleware adds three methods:

  1. process_request runs before the download. It may return Response, Request, or None. If it returns None, processing continues and eventually reaches the download function download_func; if it returns a Response or Request, the remaining middlewares and download_func are skipped. If it raises an exception, process_exception takes over.
  2. process_response runs after the download function download_func completes. If the incoming result is a Request, it is returned immediately and re-enqueued on the Slot's task queue. Each middleware must return a Response or a Request; a Request again short-circuits the chain as above, otherwise the final Response is returned.
  3. process_exception runs when an exception occurs. It may return Response, Request, or None. If every middleware returns None, the original failure is returned; as soon as one returns a Response or Request, the remaining middlewares are skipped.

The process_request methods run in forward order; process_response and process_exception run in reverse order.

The method signatures are:

def process_request(self, request: Request, spider: Spider) -> Union[None, Response, Request]:
    pass

def process_response(self, request: Request, response: Response, spider: Spider) -> Union[Response, Request]:
    pass

def process_exception(self, request: Request, exception: Exception, spider: Spider) -> Union[None, Response, Request]:
    pass
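
A minimal sketch of a custom downloader middleware built on these hooks; the class name, header, and log messages are illustrative, not part of Scrapy:

from typing import Union
from scrapy import Request, Spider
from scrapy.http import Response

class CustomHeaderMiddleware:
    """Illustrative downloader middleware: adds a header before the download
    and logs the status code after it."""

    def process_request(self, request: Request, spider: Spider) -> Union[None, Response, Request]:
        request.headers['X-Example'] = 'demo'  # hypothetical header
        return None  # continue to the next middleware and finally download_func

    def process_response(self, request: Request, response: Response,
                         spider: Spider) -> Union[Response, Request]:
        spider.logger.debug("got %s for %s", response.status, request.url)
        return response

    def process_exception(self, request: Request, exception: Exception,
                          spider: Spider) -> Union[None, Response, Request]:
        spider.logger.warning("download failed for %s: %s", request.url, exception)
        return None  # let the remaining process_exception methods handle it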

5 SpiderMiddlewareManager

The source is in scrapy/core/spidermw.py.

class SpiderMiddlewareManager(MiddlewareManager):

    component_name = 'spider middleware'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('SPIDER_MIDDLEWARES'))

    def _add_middleware(self, mw):
        super()._add_middleware(mw)
        if hasattr(mw, 'process_spider_input'):
            self.methods['process_spider_input'].append(mw.process_spider_input)
        if hasattr(mw, 'process_start_requests'):
            self.methods['process_start_requests'].appendleft(mw.process_start_requests)
        process_spider_output = getattr(mw, 'process_spider_output', None)
        self.methods['process_spider_output'].appendleft(process_spider_output)
        process_spider_exception = getattr(mw, 'process_spider_exception', None)
        self.methods['process_spider_exception'].appendleft(process_spider_exception)

    def _process_spider_input(self, scrape_func: ScrapeFunc, response: Response, request: Request,
                              spider: Spider) -> Any:
        for method in self.methods['process_spider_input']:
            method = cast(Callable, method)
            try:
                result = method(response=response, spider=spider)
                if result is not None:
                    msg = (f"Middleware {method.__qualname__} must return None "
                           f"or raise an exception, got {type(result)}")
                    raise _InvalidOutput(msg)
            except _InvalidOutput:
                raise
            except Exception:
                return scrape_func(Failure(), request, spider)
        return scrape_func(response, request, spider)

    def _evaluate_iterable(self, response: Response, spider: Spider, iterable: Iterable,
                           exception_processor_index: int, recover_to: MutableChain) -> Generator:
        try:
            for r in iterable:
                yield r
        except Exception as ex:
            exception_result = self._process_spider_exception(response, spider, Failure(ex),
                                                              exception_processor_index)
            if isinstance(exception_result, Failure):
                raise
            recover_to.extend(exception_result)

    def _process_spider_exception(self, response: Response, spider: Spider, _failure: Failure,
                                  start_index: int = 0) -> Union[Failure, MutableChain]:
        exception = _failure.value
        # don't handle _InvalidOutput exception
        if isinstance(exception, _InvalidOutput):
            return _failure
        method_list = islice(self.methods['process_spider_exception'], start_index, None)
        for method_index, method in enumerate(method_list, start=start_index):
            if method is None:
                continue
            result = method(response=response, exception=exception, spider=spider)
            if _isiterable(result):
                # stop exception handling by handing control over to the
                # process_spider_output chain if an iterable has been returned
                return self._process_spider_output(response, spider, result, method_index + 1)
            elif result is None:
                continue
            else:
                msg = (f"Middleware {method.__qualname__} must return None "
                       f"or an iterable, got {type(result)}")
                raise _InvalidOutput(msg)
        return _failure

    def _process_spider_output(self, response: Response, spider: Spider,
                               result: Iterable, start_index: int = 0) -> MutableChain:
        # items in this iterable do not need to go through the process_spider_output
        # chain, they went through it already from the process_spider_exception method
        recovered = MutableChain()
        method_list = islice(self.methods['process_spider_output'], start_index, None)
        for method_index, method in enumerate(method_list, start=start_index):
            if method is None:
                continue
            try:
                # might fail directly if the output value is not a generator
                result = method(response=response, result=result, spider=spider)
            except Exception as ex:
                exception_result = self._process_spider_exception(response, spider, Failure(ex),
                                                                  method_index + 1)
                if isinstance(exception_result, Failure):
                    raise
                return exception_result
            if _isiterable(result):
                result = self._evaluate_iterable(response, spider, result, method_index + 1, recovered)
            else:
                msg = (f"Middleware {method.__qualname__} must return an "
                       f"iterable, got {type(result)}")
                raise _InvalidOutput(msg)
        return MutableChain(result, recovered)

    def _process_callback_output(self, response: Response, spider: Spider, result: Iterable) -> MutableChain:
        recovered = MutableChain()
        result = self._evaluate_iterable(response, spider, result, 0, recovered)
        return MutableChain(self._process_spider_output(response, spider, result), recovered)

    def scrape_response(self, scrape_func: ScrapeFunc, response: Response, request: Request,
                        spider: Spider) -> Deferred:
        def process_callback_output(result: Iterable) -> MutableChain:
            return self._process_callback_output(response, spider, result)

        def process_spider_exception(_failure: Failure) -> Union[Failure, MutableChain]:
            return self._process_spider_exception(response, spider, _failure)

        dfd = mustbe_deferred(self._process_spider_input, scrape_func, response, request, spider)
        dfd.addCallbacks(callback=process_callback_output, errback=process_spider_exception)
        return dfd

    def process_start_requests(self, start_requests, spider: Spider) -> Deferred:
        return self._process_chain('process_start_requests', start_requests, spider)

The default middlewares are:

SPIDER_MIDDLEWARES = {}
SPIDER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
    'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
    'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
    'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
    # Spider side
}

A spider middleware adds four methods:

  1. process_start_requests runs after the Engine is created but before the Engine's Slot is created; none of the default middlewares currently implements it.
  2. process_spider_input runs after the download completes, before the response is handed to the spider; each middleware must return None or raise an exception, after which Request.callback (or Request.errback on failure) is invoked.
  3. process_spider_output receives the results returned by Request.callback.
  4. process_spider_exception handles exceptions raised in the flow above.

The process_spider_input methods run in forward order; process_start_requests, process_spider_output, and process_spider_exception run in reverse order.

The method signatures are:

def process_start_requests(self, start_requests: Iterable, spider: Spider) -> Iterable[Request]:
    pass

def process_spider_input(self, response: Response, spider: Spider) -> None:
    pass

def process_spider_output(self, response: Response, result: Iterable, spider: Spider) -> Iterable[Union[Request, Item, dict]]:
    pass

def process_spider_exception(self, response: Response, exception: Exception, spider: Spider) -> Union[None, Iterable[Union[Request, Item, dict]]]:
    pass
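
A minimal sketch of a custom spider middleware built on these hooks; the class and log messages are illustrative, not part of Scrapy:

from typing import Iterable, Union
from scrapy import Spider
from scrapy.http import Response

class ResultCountingMiddleware:
    """Illustrative spider middleware: passes responses through unchanged and
    logs how many requests/items each callback produced."""

    def process_spider_input(self, response: Response, spider: Spider) -> None:
        # must return None or raise an exception
        return None

    def process_spider_output(self, response: Response, result: Iterable,
                              spider: Spider) -> Iterable:
        count = 0
        for request_or_item in result:
            count += 1
            yield request_or_item  # pass everything through unchanged
        spider.logger.debug("%s yielded %d results", response.url, count)

    def process_spider_exception(self, response: Response, exception: Exception,
                                 spider: Spider) -> Union[None, Iterable]:
        spider.logger.warning("callback failed for %s: %s", response.url, exception)
        return None  # keep propagating to the next middleware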
