Scrapy Source Code Analysis 3: Middlewares

2024-05-07 10:08


1 Introduction

Scrapy has three kinds of middleware: downloader middlewares, spider middlewares, and extensions. Each kind is enabled through its own settings dictionary (see the sketch after the list below).

  • Downloader middlewares: sit between the engine and the downloader; they can run custom logic before and after a page is downloaded;
  • Spider middlewares: sit between the engine and the spider; they process the download result before it is fed to the spider, and process the requests / items the spider produces afterwards;
  • Extensions: run throughout the whole crawl and mainly provide auxiliary features and statistics.
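As a quick orientation, here is a minimal settings.py sketch showing how each kind is registered. The myproject.* paths are hypothetical placeholders; the integers are the ordering values discussed later in this article.

# Hypothetical entries in a project's settings.py:
# each kind of middleware is enabled through its own { class path: order } dict.
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomProxyMiddleware': 543,
}
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.CustomSpiderMiddleware': 543,
}
EXTENSIONS = {
    'myproject.extensions.CustomStatsExtension': 500,
}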

2 The common base class MiddlewareManager

The MiddlewareManager class lives in scrapy/middleware.py.

class MiddlewareManager:
    """Base class for implementing middleware managers"""

    component_name = 'foo middleware'

    def __init__(self, *middlewares):
        # Instances of all middlewares listed in the settings,
        # e.g. 'scrapy.extensions.corestats.CoreStats'
        self.middlewares = middlewares
        # Optional because process_spider_output and process_spider_exception can be None
        # Collects each instance's hook methods per method name, e.g.
        # {"open_spider": [CoreStats.open_spider, ...]}
        self.methods: Dict[str, Deque[Optional[Callable]]] = defaultdict(deque)
        for mw in middlewares:
            self._add_middleware(mw)

    @classmethod
    def _get_mwlist_from_settings(cls, settings: Settings) -> list:
        # Each subclass must implement this method; it returns the full
        # import paths of all its middlewares
        raise NotImplementedError

    @classmethod
    def from_settings(cls, settings: Settings, crawler=None):
        mwlist = cls._get_mwlist_from_settings(settings)
        middlewares = []
        enabled = []
        for clspath in mwlist:
            try:
                # load the class
                mwcls = load_object(clspath)
                # create the middleware instance
                mw = create_instance(mwcls, settings, crawler)
                middlewares.append(mw)
                enabled.append(clspath)
            except NotConfigured as e:
                if e.args:
                    clsname = clspath.split('.')[-1]
                    logger.warning("Disabled %(clsname)s: %(eargs)s",
                                   {'clsname': clsname, 'eargs': e.args[0]},
                                   extra={'crawler': crawler})

        logger.info("Enabled %(componentname)ss:\n%(enabledlist)s",
                    {'componentname': cls.component_name,
                     'enabledlist': pprint.pformat(enabled)},
                    extra={'crawler': crawler})
        return cls(*middlewares)

    @classmethod
    def from_crawler(cls, crawler):
        return cls.from_settings(crawler.settings, crawler)

    def _add_middleware(self, mw) -> None:
        if hasattr(mw, 'open_spider'):
            self.methods['open_spider'].append(mw.open_spider)
        if hasattr(mw, 'close_spider'):
            self.methods['close_spider'].appendleft(mw.close_spider)

    def _process_parallel(self, methodname: str, obj, *args) -> Deferred:
        methods = cast(Iterable[Callable], self.methods[methodname])
        return process_parallel(methods, obj, *args)

    def _process_chain(self, methodname: str, obj, *args) -> Deferred:
        methods = cast(Iterable[Callable], self.methods[methodname])
        return process_chain(methods, obj, *args)

    def open_spider(self, spider: Spider) -> Deferred:
        # Calls every middleware's open_spider method with the spider as argument:
        # def open_spider(self, spider: Spider):
        #     ......
        return self._process_parallel('open_spider', spider)

    def close_spider(self, spider: Spider) -> Deferred:
        # Calls every middleware's close_spider method with the spider as argument:
        # def close_spider(self, spider: Spider):
        #     ......
        return self._process_parallel('close_spider', spider)

Note that the open_spider methods run in forward (registration) order, while the close_spider methods run in reverse order.
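A small standalone sketch (plain Python, not Scrapy code) of why that is: open_spider hooks are collected with append and close_spider hooks with appendleft, so iterating the two deques visits the middlewares in opposite orders.

from collections import deque

# Pretend three middlewares are registered in this order.
registered = ['MwA', 'MwB', 'MwC']

open_spider_methods = deque()
close_spider_methods = deque()
for name in registered:
    open_spider_methods.append(name + '.open_spider')        # forward order
    close_spider_methods.appendleft(name + '.close_spider')  # reverse order

print(list(open_spider_methods))   # ['MwA.open_spider', 'MwB.open_spider', 'MwC.open_spider']
print(list(close_spider_methods))  # ['MwC.close_spider', 'MwB.close_spider', 'MwA.close_spider']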

MiddlewareManager has three subclasses:

  • DownloaderMiddlewareManager manages the downloader middlewares
  • SpiderMiddlewareManager manages the spider middlewares
  • ExtensionManager manages the extensions

3 ExtensionManager

The source lives in scrapy/extension.py.

class ExtensionManager(MiddlewareManager):

    component_name = 'extension'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('EXTENSIONS'))
The default value of EXTENSIONS is:
EXTENSIONS = {}

EXTENSIONS_BASE = {
    'scrapy.extensions.corestats.CoreStats': 0,
    'scrapy.extensions.telnet.TelnetConsole': 0,
    'scrapy.extensions.memusage.MemoryUsage': 0,
    'scrapy.extensions.memdebug.MemoryDebugger': 0,
    'scrapy.extensions.closespider.CloseSpider': 0,
    'scrapy.extensions.feedexport.FeedExporter': 0,
    'scrapy.extensions.logstats.LogStats': 0,
    'scrapy.extensions.spiderstate.SpiderState': 0,
    'scrapy.extensions.throttle.AutoThrottle': 0,
}
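For reference, a minimal custom extension might look like the sketch below. The class and the SPIDER_OPENED_LOGGER_ENABLED setting are hypothetical, but the pattern is standard: the NotConfigured exception raised in from_crawler is exactly what MiddlewareManager.from_settings catches above to disable a component.

from scrapy import signals
from scrapy.exceptions import NotConfigured

class SpiderOpenedLogger:
    """Hypothetical extension: logs a message when the spider opens."""

    def __init__(self, crawler):
        # Hook into the spider_opened signal.
        crawler.signals.connect(self.spider_opened, signal=signals.spider_opened)

    @classmethod
    def from_crawler(cls, crawler):
        # Raising NotConfigured makes from_settings() skip this extension.
        if not crawler.settings.getbool('SPIDER_OPENED_LOGGER_ENABLED', False):
            raise NotConfigured
        return cls(crawler)

    def spider_opened(self, spider):
        spider.logger.info("Spider %s opened", spider.name)

The build_component_list helper, shown next, is what merges these settings dictionaries and turns them into an ordered list of enabled component paths.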
def build_component_list(compdict, custom=None, convert=update_classpath):
    """Compose a component list from a { class: order } dictionary."""

    def _check_components(complist):
        if len({convert(c) for c in complist}) != len(complist):
            raise ValueError(
                f'Some paths in {complist!r} convert to the same object, '
                'please update your settings')

    def _map_keys(compdict):
        if isinstance(compdict, BaseSettings):
            compbs = BaseSettings()
            for k, v in compdict.items():
                prio = compdict.getpriority(k)
                if compbs.getpriority(convert(k)) == prio:
                    raise ValueError(f'Some paths in {list(compdict.keys())!r} '
                                     'convert to the same '
                                     'object, please update your settings')
                else:
                    compbs.set(convert(k), v, priority=prio)
            return compbs
        else:
            _check_components(compdict)
            return {convert(k): v for k, v in compdict.items()}

    def _validate_values(compdict):
        """Fail if a value in the components dict is not a real number or None."""
        for name, value in compdict.items():
            if value is not None and not isinstance(value, numbers.Real):
                raise ValueError(f'Invalid value {value} for component {name}, '
                                 'please provide a real number or None instead')

    # BEGIN Backward compatibility for old (base, custom) call signature
    if isinstance(custom, (list, tuple)):
        _check_components(custom)
        return type(custom)(convert(c) for c in custom)

    if custom is not None:
        compdict.update(custom)
    # END Backward compatibility

    _validate_values(compdict)
    compdict = without_none_values(_map_keys(compdict))
    return [k for k, v in sorted(compdict.items(), key=itemgetter(1))]

As the last line shows, the components listed in EXTENSIONS are loaded in ascending order of their value.
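A small standalone illustration of that last line, using the same sorted/itemgetter idiom (the myproject.* paths are hypothetical): components sort by their numeric value, and entries set to None are dropped by without_none_values, which is the usual way to disable a default component.

from operator import itemgetter

compdict = {
    'scrapy.extensions.logstats.LogStats': 0,
    'myproject.extensions.LateExtension': 900,       # hypothetical path
    'myproject.extensions.EarlyExtension': 10,       # hypothetical path
    'scrapy.extensions.telnet.TelnetConsole': None,  # None -> disabled
}

# Mimic the last two lines of build_component_list: drop None values, sort by value.
compdict = {k: v for k, v in compdict.items() if v is not None}
print([k for k, v in sorted(compdict.items(), key=itemgetter(1))])
# ['scrapy.extensions.logstats.LogStats',
#  'myproject.extensions.EarlyExtension',
#  'myproject.extensions.LateExtension']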

4 DownloaderMiddlewareManager

The source lives in scrapy/core/downloader/middleware.py.

class DownloaderMiddlewareManager(MiddlewareManager):

    component_name = 'downloader middleware'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('DOWNLOADER_MIDDLEWARES'))

    def _add_middleware(self, mw):
        if hasattr(mw, 'process_request'):
            self.methods['process_request'].append(mw.process_request)
        if hasattr(mw, 'process_response'):
            self.methods['process_response'].appendleft(mw.process_response)
        if hasattr(mw, 'process_exception'):
            self.methods['process_exception'].appendleft(mw.process_exception)

    def download(self, download_func: Callable, request: Request, spider: Spider):
        @defer.inlineCallbacks
        def process_request(request: Request):
            for method in self.methods['process_request']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(method(request=request, spider=spider))
                if response is not None and not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return None, Response or "
                        f"Request, got {response.__class__.__name__}"
                    )
                if response:
                    return response
            return (yield download_func(request=request, spider=spider))

        @defer.inlineCallbacks
        def process_response(response: Union[Response, Request]):
            if response is None:
                raise TypeError("Received None in process_response")
            elif isinstance(response, Request):
                return response

            for method in self.methods['process_response']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(
                    method(request=request, response=response, spider=spider))
                if not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return Response or Request, "
                        f"got {type(response)}"
                    )
                if isinstance(response, Request):
                    return response
            return response

        @defer.inlineCallbacks
        def process_exception(failure: Failure):
            exception = failure.value
            for method in self.methods['process_exception']:
                method = cast(Callable, method)
                response = yield deferred_from_coro(
                    method(request=request, exception=exception, spider=spider))
                if response is not None and not isinstance(response, (Response, Request)):
                    raise _InvalidOutput(
                        f"Middleware {method.__qualname__} must return None, Response or "
                        f"Request, got {type(response)}"
                    )
                if response:
                    return response
            return failure

        deferred = mustbe_deferred(process_request, request)
        deferred.addErrback(process_exception)
        deferred.addCallback(process_response)
        return deferred

The default middlewares are:

DOWNLOADER_MIDDLEWARES = {}

DOWNLOADER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
    # Downloader side
}

Downloader middlewares add three methods:

  1. process_request — runs before the download. It may return a Response, a Request, or None. When every middleware returns None, the download function download_func is executed; returning a Response or a Request skips the remaining middlewares and download_func. If an exception is raised, execution moves to process_exception.
  2. process_response — runs after download_func has finished. If the result is already a Request it is returned immediately and re-scheduled into the Slot's task queue. Each middleware must return either a Response or a Request; a Request again short-circuits as described above, otherwise the final Response instance is returned.
  3. process_exception — runs when an exception occurs. It may return a Response, a Request, or None. If every middleware returns None, the failure itself is finally returned; returning a Response or a Request skips the remaining middlewares.

The process_request methods run in forward order; process_response and process_exception run in reverse order.
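The ordering of the three stages comes from the Deferred chain built at the end of download(): process_exception is attached as an errback before process_response is attached as a callback, so failures from process_request / download_func flow into process_exception, while successful results flow into process_response. A standalone Twisted sketch of that wiring (toy functions, not Scrapy code; maybeDeferred plays the role of Scrapy's mustbe_deferred):

from twisted.internet import defer

def process_request(request):
    # stand-in for the process_request chain plus download_func
    if request.get("fail"):
        raise RuntimeError("simulated download error")
    return {"status": 200, "url": request["url"]}

def process_exception(failure):
    # returning a value from an errback "recovers" the failure,
    # just like a middleware returning a Response from process_exception
    return {"status": 503, "recovered": True}

def process_response(response):
    print("process_response received:", response)
    return response

for request in ({"url": "http://example.com"},
                {"url": "http://example.com", "fail": True}):
    d = defer.maybeDeferred(process_request, request)
    d.addErrback(process_exception)   # handles the failure path
    d.addCallback(process_response)   # handles the success (or recovered) path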

The signatures are:

def process_request(self, request: Request, spider: Spider) -> Union[None, Response, Request]:
    pass

def process_response(self, request: Request, response: Response, spider: Spider) -> Union[Response, Request]:
    pass

def process_exception(self, request: Request, exception: Exception, spider: Spider) -> Union[None, Response, Request]:
    pass
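Putting those signatures to use, here is a hedged example of a custom downloader middleware (a hypothetical class and behaviour, not one of Scrapy's defaults):

from typing import Union
from scrapy import Request, Spider
from scrapy.http import Response

class RetryOn503Middleware:
    """Hypothetical middleware: tag outgoing requests and retry once on HTTP 503."""

    def process_request(self, request: Request, spider: Spider) -> Union[None, Response, Request]:
        request.headers.setdefault('X-Trace', spider.name)
        return None  # None -> continue with the remaining middlewares and the download

    def process_response(self, request: Request, response: Response,
                         spider: Spider) -> Union[Response, Request]:
        if response.status == 503 and not request.meta.get('retried_503'):
            # Returning a Request re-schedules it instead of passing the response on.
            return request.replace(meta={**request.meta, 'retried_503': True}, dont_filter=True)
        return response

    def process_exception(self, request: Request, exception: Exception,
                          spider: Spider) -> Union[None, Response, Request]:
        spider.logger.debug("Download error for %s: %s", request.url, exception)
        return None  # None -> let the remaining process_exception methods handle it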

5 SpiderMiddlewareManager

The source lives in scrapy/core/spidermw.py.

class SpiderMiddlewareManager(MiddlewareManager):

    component_name = 'spider middleware'

    @classmethod
    def _get_mwlist_from_settings(cls, settings):
        return build_component_list(settings.getwithbase('SPIDER_MIDDLEWARES'))

    def _add_middleware(self, mw):
        super()._add_middleware(mw)
        if hasattr(mw, 'process_spider_input'):
            self.methods['process_spider_input'].append(mw.process_spider_input)
        if hasattr(mw, 'process_start_requests'):
            self.methods['process_start_requests'].appendleft(mw.process_start_requests)
        process_spider_output = getattr(mw, 'process_spider_output', None)
        self.methods['process_spider_output'].appendleft(process_spider_output)
        process_spider_exception = getattr(mw, 'process_spider_exception', None)
        self.methods['process_spider_exception'].appendleft(process_spider_exception)

    def _process_spider_input(self, scrape_func: ScrapeFunc, response: Response, request: Request,
                              spider: Spider) -> Any:
        for method in self.methods['process_spider_input']:
            method = cast(Callable, method)
            try:
                result = method(response=response, spider=spider)
                if result is not None:
                    msg = (f"Middleware {method.__qualname__} must return None "
                           f"or raise an exception, got {type(result)}")
                    raise _InvalidOutput(msg)
            except _InvalidOutput:
                raise
            except Exception:
                return scrape_func(Failure(), request, spider)
        return scrape_func(response, request, spider)

    def _evaluate_iterable(self, response: Response, spider: Spider, iterable: Iterable,
                           exception_processor_index: int, recover_to: MutableChain) -> Generator:
        try:
            for r in iterable:
                yield r
        except Exception as ex:
            exception_result = self._process_spider_exception(response, spider, Failure(ex),
                                                              exception_processor_index)
            if isinstance(exception_result, Failure):
                raise
            recover_to.extend(exception_result)

    def _process_spider_exception(self, response: Response, spider: Spider, _failure: Failure,
                                  start_index: int = 0) -> Union[Failure, MutableChain]:
        exception = _failure.value
        # don't handle _InvalidOutput exception
        if isinstance(exception, _InvalidOutput):
            return _failure
        method_list = islice(self.methods['process_spider_exception'], start_index, None)
        for method_index, method in enumerate(method_list, start=start_index):
            if method is None:
                continue
            result = method(response=response, exception=exception, spider=spider)
            if _isiterable(result):
                # stop exception handling by handing control over to the
                # process_spider_output chain if an iterable has been returned
                return self._process_spider_output(response, spider, result, method_index + 1)
            elif result is None:
                continue
            else:
                msg = (f"Middleware {method.__qualname__} must return None "
                       f"or an iterable, got {type(result)}")
                raise _InvalidOutput(msg)
        return _failure

    def _process_spider_output(self, response: Response, spider: Spider,
                               result: Iterable, start_index: int = 0) -> MutableChain:
        # items in this iterable do not need to go through the process_spider_output
        # chain, they went through it already from the process_spider_exception method
        recovered = MutableChain()

        method_list = islice(self.methods['process_spider_output'], start_index, None)
        for method_index, method in enumerate(method_list, start=start_index):
            if method is None:
                continue
            try:
                # might fail directly if the output value is not a generator
                result = method(response=response, result=result, spider=spider)
            except Exception as ex:
                exception_result = self._process_spider_exception(response, spider, Failure(ex),
                                                                  method_index + 1)
                if isinstance(exception_result, Failure):
                    raise
                return exception_result
            if _isiterable(result):
                result = self._evaluate_iterable(response, spider, result, method_index + 1, recovered)
            else:
                msg = (f"Middleware {method.__qualname__} must return an "
                       f"iterable, got {type(result)}")
                raise _InvalidOutput(msg)

        return MutableChain(result, recovered)

    def _process_callback_output(self, response: Response, spider: Spider, result: Iterable) -> MutableChain:
        recovered = MutableChain()
        result = self._evaluate_iterable(response, spider, result, 0, recovered)
        return MutableChain(self._process_spider_output(response, spider, result), recovered)

    def scrape_response(self, scrape_func: ScrapeFunc, response: Response, request: Request,
                        spider: Spider) -> Deferred:
        def process_callback_output(result: Iterable) -> MutableChain:
            return self._process_callback_output(response, spider, result)

        def process_spider_exception(_failure: Failure) -> Union[Failure, MutableChain]:
            return self._process_spider_exception(response, spider, _failure)

        dfd = mustbe_deferred(self._process_spider_input, scrape_func, response, request, spider)
        dfd.addCallbacks(callback=process_callback_output, errback=process_spider_exception)
        return dfd

    def process_start_requests(self, start_requests, spider: Spider) -> Deferred:
        return self._process_chain('process_start_requests', start_requests, spider)

The default middlewares are:

SPIDER_MIDDLEWARES = {}

SPIDER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
    'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
    'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
    'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
    # Spider side
}

Spider middlewares add four methods:

  1. process_start_requests — runs after the Engine is created and before the Engine's Slot is created; none of the default middlewares implement this method.
  2. process_spider_input — runs after the download has finished, before the response is handed to the spider; as the code above shows, each implementation must return None or raise an exception. Afterwards Request.callback or Request.errback is invoked.
  3. process_spider_output — receives the values returned by Request.callback (requests and items) and post-processes them.
  4. process_spider_exception — handles exceptions raised in the flow above.

The process_spider_input methods run in forward order; process_start_requests, process_spider_output, and process_spider_exception run in reverse order.

The signatures are:

def process_start_requests(self, start_requests: Iterable[Request], spider: Spider) -> Iterable[Request]:
    pass

def process_spider_input(self, response: Response, spider: Spider) -> None:
    pass

def process_spider_output(self, response: Response, result: Iterable, spider: Spider) -> Iterable[Union[Request, Item, dict]]:
    pass

def process_spider_exception(self, response: Response, exception: Exception, spider: Spider) -> Union[None, Iterable[Union[Request, Item, dict]]]:
    pass
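And a hedged example of a custom spider middleware following those signatures (a hypothetical class assuming dict items with a 'title' field, not one of the defaults):

from typing import Iterable, Union
from scrapy import Request, Spider
from scrapy.http import Response

class DropShortTitlesMiddleware:
    """Hypothetical middleware: inspect responses on the way in, filter output on the way out."""

    def process_spider_input(self, response: Response, spider: Spider) -> None:
        # Must return None; raising an exception here would route the flow to
        # process_spider_exception / Request.errback instead of Request.callback.
        if not response.body:
            spider.logger.debug("Empty body for %s", response.url)
        return None

    def process_spider_output(self, response: Response, result: Iterable,
                              spider: Spider) -> Iterable[Union[Request, dict]]:
        for entry in result:
            # Pass requests through untouched; drop dict items whose title is too short.
            if isinstance(entry, Request) or len(entry.get('title', '')) >= 3:
                yield entry

    def process_spider_exception(self, response: Response, exception: Exception,
                                 spider: Spider) -> None:
        spider.logger.warning("Callback error for %s: %s", response.url, exception)
        return None  # None -> let the next middleware (or the failure itself) handle it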




