aio-scrapy 2.1.6__py3-none-any.whl → 2.1.8__py3-none-any.whl

This diff shows the content of publicly available package versions as published to their public registry. It is provided for informational purposes only and reflects the changes between the two released versions.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: aio-scrapy
- Version: 2.1.6
+ Version: 2.1.8
  Summary: A high-level Web Crawling and Web Scraping framework based on Asyncio
  Home-page: https://github.com/conlin-huang/aio-scrapy.git
  Author: conlin
@@ -76,115 +76,58 @@ Dynamic: requires-dist
  Dynamic: requires-python
  Dynamic: summary

- <!--
- ![aio-scrapy](./doc/images/aio-scrapy.png)
- -->
- ### aio-scrapy
+ # AioScrapy

- An asyncio + aiolibs crawler imitate scrapy framework
+ AioScrapy是一个基于Python异步IO的强大网络爬虫框架。它的设计理念源自Scrapy,但完全基于异步IO实现,提供更高的性能和更灵活的配置选项。</br>
+ AioScrapy is a powerful asynchronous web crawling framework built on Python's asyncio library. It is inspired by Scrapy but completely reimplemented with asynchronous IO, offering higher performance and more flexible configuration options.

- English | [中文](./doc/README_ZH.md)
+ ## 特性 | Features

- ### Overview
- - aio-scrapy framework is base on opensource project Scrapy & scrapy_redis.
- - aio-scrapy implements compatibility with scrapyd.
- - aio-scrapy implements redis queue and rabbitmq queue.
- - aio-scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages.
- - Distributed crawling/scraping.
- ### Requirements
+ - **完全异步**:基于Python的asyncio库,实现高效的并发爬取
+ - **多种下载处理程序**:支持多种HTTP客户端,包括aiohttp、httpx、requests、pyhttpx、curl_cffi、DrissionPage和playwright
+ - **灵活的中间件系统**:轻松添加自定义功能和处理逻辑
+ - **强大的数据处理管道**:支持多种数据库存储选项
+ - **内置信号系统**:方便的事件处理机制
+ - **丰富的配置选项**:高度可定制的爬虫行为
+ - **分布式爬取**:支持使用Redis和RabbitMQ进行分布式爬取
+ - **数据库集成**:内置支持Redis、MySQL、MongoDB、PostgreSQL和RabbitMQ

- - Python 3.9+
- - Works on Linux, Windows, macOS, BSD
-
- ### Install
-
- The quick way:
-
- ```shell
- # Install the latest aio-scrapy
- pip install git+https://github.com/ConlinH/aio-scrapy
-
- # default
- pip install aio-scrapy
-
- # Install all dependencies
- pip install aio-scrapy[all]
-
- # When you need to use mysql/httpx/rabbitmq/mongo
- pip install aio-scrapy[aiomysql,httpx,aio-pika,mongo]
- ```
-
- ### Usage
-
- #### create project spider:
-
- ```shell
- aioscrapy startproject project_quotes
- ```
-
- ```
- cd project_quotes
- aioscrapy genspider quotes
- ```
-
- quotes.py
-
- ```python
- from aioscrapy.spiders import Spider
-
-
- class QuotesMemorySpider(Spider):
-     name = 'QuotesMemorySpider'
-
-     start_urls = ['https://quotes.toscrape.com']
-
-     async def parse(self, response):
-         for quote in response.css('div.quote'):
-             yield {
-                 'author': quote.xpath('span/small/text()').get(),
-                 'text': quote.css('span.text::text').get(),
-             }

-         next_page = response.css('li.next a::attr("href")').get()
-         if next_page is not None:
-             yield response.follow(next_page, self.parse)
+ - **Fully Asynchronous**: Built on Python's asyncio for efficient concurrent crawling
+ - **Multiple Download Handlers**: Support for various HTTP clients including aiohttp, httpx, requests, pyhttpx, curl_cffi, DrissionPage and playwright
+ - **Flexible Middleware System**: Easily add custom functionality and processing logic
+ - **Powerful Data Processing Pipelines**: Support for various database storage options
+ - **Built-in Signal System**: Convenient event handling mechanism
+ - **Rich Configuration Options**: Highly customizable crawler behavior
+ - **Distributed Crawling**: Support for distributed crawling using Redis and RabbitMQ
+ - **Database Integration**: Built-in support for Redis, MySQL, MongoDB, PostgreSQL, and RabbitMQ

+ ## 安装 | Installation

- if __name__ == '__main__':
-     QuotesMemorySpider.start()
+ ### 要求 | Requirements

- ```
-
- run the spider:
+ - Python 3.9+

- ```shell
- aioscrapy crawl quotes
- ```
+ ### 使用pip安装 | Install with pip

- #### create single script spider:
+ ```bash
+ pip install aio-scrapy

- ```shell
- aioscrapy genspider single_quotes -t single
+ # Install the latest aio-scrapy
+ # pip install git+https://github.com/ConlinH/aio-scrapy
  ```

- single_quotes.py:
-
+ ### 开始 | Start
  ```python
- from aioscrapy.spiders import Spider
+ from aioscrapy import Spider, logger


- class QuotesMemorySpider(Spider):
-     name = 'QuotesMemorySpider'
+ class MyspiderSpider(Spider):
+     name = 'myspider'
      custom_settings = {
-         "USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
-         'CLOSE_SPIDER_ON_IDLE': True,
-         # 'DOWNLOAD_DELAY': 3,
-         # 'RANDOMIZE_DOWNLOAD_DELAY': True,
-         # 'CONCURRENT_REQUESTS': 1,
-         # 'LOG_LEVEL': 'INFO'
+         "CLOSE_SPIDER_ON_IDLE": True
      }
-
-     start_urls = ['https://quotes.toscrape.com']
+     start_urls = ["https://quotes.toscrape.com"]

      @staticmethod
      async def process_request(request, spider):
@@ -203,49 +146,45 @@ class QuotesMemorySpider(Spider):

      async def parse(self, response):
          for quote in response.css('div.quote'):
-             yield {
+             item = {
                  'author': quote.xpath('span/small/text()').get(),
                  'text': quote.css('span.text::text').get(),
              }
-
-         next_page = response.css('li.next a::attr("href")').get()
-         if next_page is not None:
-             yield response.follow(next_page, self.parse)
+             yield item

      async def process_item(self, item):
-         print(item)
+         logger.info(item)


  if __name__ == '__main__':
-     QuotesMemorySpider.start()
-
+     MyspiderSpider.start()
  ```

- run the spider:
-
- ```shell
- aioscrapy runspider quotes.py
- ```
-
-
- ### more commands:
-
- ```shell
- aioscrapy -h
- ```
-
- #### [more example](./example)
-
- ### Documentation
- [doc](./doc/documentation.md)
-
- ### Ready
-
- Please submit your suggestions to the owner by creating an issue
-
- ## Thanks
-
- [aiohttp](https://github.com/aio-libs/aiohttp/)
-
- [scrapy](https://github.com/scrapy/scrapy)
-
+ ## 文档 | Documentation
+
+ ## 文档目录 | Documentation Contents
+ - [安装指南 | Installation Guide](docs/installation.md)
+ - [快速入门 | Quick Start](docs/quickstart.md)
+ - [核心概念 | Core Concepts](docs/concepts.md)
+ - [爬虫指南 | Spider Guide](docs/spiders.md)
+ - [下载器 | Downloaders](docs/downloaders.md)
+ - [中间件 | Middlewares](docs/middlewares.md)
+ - [管道 | Pipelines](docs/pipelines.md)
+ - [队列 | Queues](docs/queues.md)
+ - [请求过滤器 | Request Filters](docs/dupefilters.md)
+ - [代理 | Proxy](docs/proxy.md)
+ - [数据库连接 | Database Connections](docs/databases.md)
+ - [分布式部署 | Distributed Deployment](docs/distributed.md)
+ - [配置参考 | Settings Reference](docs/settings.md)
+ - [API参考 | API Reference](docs/api.md)
+ - [示例 | Example](example)
+
+ ## 许可证 | License
+
+ 本项目采用MIT许可证 - 详情请查看LICENSE文件。</br>
+ This project is licensed under the MIT License - see the LICENSE file for details.
+
+
+ ## 联系
+ QQ: 995018884 </br>
+ WeChat: h995018884
@@ -1,4 +1,4 @@
- aioscrapy/VERSION,sha256=JPUCseOr-o6i21WmLdCf175ZHUFbkfgq8M6QzXpEMGM,5
+ aioscrapy/VERSION,sha256=n5_8BdsibVJ4nz-ATeq6LbtB6k2zft54bCvByqyoWG8,5
  aioscrapy/__init__.py,sha256=esJeH66Mz9WV7XbotvZEjNn49jc589YZ_L2DKoD0JvA,858
  aioscrapy/__main__.py,sha256=rvTdJ0cQwbi29aucPj3jJRpccx5SBzvRcV7qvxvX2NQ,80
  aioscrapy/cmdline.py,sha256=0pusLJXryZAxU9qk6QqN89IO6Kv20gkfJBnZ8UKVg_A,22302
@@ -26,15 +26,15 @@ aioscrapy/core/scheduler.py,sha256=qF_VptLGuFa8E7mXz86tjX5vww6OJTKPxE_g8XsPqsc,2
  aioscrapy/core/scraper.py,sha256=ugO2z-ZJr8xB0S1BhGOpM3zio82a6PNykTrfbAdpd68,34045
  aioscrapy/core/downloader/__init__.py,sha256=LXjkOSuP6wj2lGgmJIH3nbQVf4r9RrlqenSZtWyZvzU,31522
  aioscrapy/core/downloader/handlers/__init__.py,sha256=Rxhrkj3QBo73HY2kb7goApfNKlfc3Mqn5olmoWxT98Q,11006
- aioscrapy/core/downloader/handlers/aiohttp.py,sha256=QQ6WzOZo2Ea_Prck37G7g3RmtfJqeIBZLboEl-8AnkM,13523
- aioscrapy/core/downloader/handlers/curl_cffi.py,sha256=MQJ-7iAZP4jCI6-D-lKHgBEPaWXN7Vy5IeROWp7FZKY,7901
- aioscrapy/core/downloader/handlers/httpx.py,sha256=bEFE8xxhZWz-1Bd1WzihBY9kSdo_k9RIgHKuk0XD_2s,8835
- aioscrapy/core/downloader/handlers/pyhttpx.py,sha256=E32REQf0p6EI6yC_36TTB-OfGbWwuQ7MrDPbdFXxwmA,8455
+ aioscrapy/core/downloader/handlers/aiohttp.py,sha256=V9UenrXzdn7jr0LxpsnFZE_smwncbK76gXW4DEE4EIA,13463
+ aioscrapy/core/downloader/handlers/curl_cffi.py,sha256=OmQl0RqWmlPI59FBD7h1mHqQi6e_VBbntJ35ui4IbY8,7864
+ aioscrapy/core/downloader/handlers/httpx.py,sha256=tsbrhmZfTqTNhxlH4vFU6_0VvPtOPAyKJlhPBiloZOg,8790
+ aioscrapy/core/downloader/handlers/pyhttpx.py,sha256=f1q5e2Cfq8jW-X2wn4ncsCgRnOQk-fqXLHy_UMxrO40,8519
  aioscrapy/core/downloader/handlers/requests.py,sha256=n0KTgbRzgNLnw2PiK2NRAC7lNHTF0d1-ZnHkFNQY41A,7795
- aioscrapy/core/downloader/handlers/webdriver/__init__.py,sha256=mzXpySCSLyzvMyLYVPUpxUNGb3zC4hLsojAaCY7gboM,127
- aioscrapy/core/downloader/handlers/webdriver/drissionpage.py,sha256=tFbUBC07Gj48bjGRWAVFmDVSeT_nx0QdU-ZawgzWcgM,21820
+ aioscrapy/core/downloader/handlers/webdriver/__init__.py,sha256=TxietLeEdQfNO0hAhh6oEKmHPV72s4Z3UXcEwu-w9sw,144
+ aioscrapy/core/downloader/handlers/webdriver/drissionpage.py,sha256=J_OwFHICR4ZQNxYO8Wfg2AQL4z66hu6amD24y_3XB94,21821
  aioscrapy/core/downloader/handlers/webdriver/driverpool.py,sha256=_NoCL_cRFCJtdJwkjArNfdhhfSAUWdmoZ0k7eCx7QwI,8981
- aioscrapy/core/downloader/handlers/webdriver/playwright.py,sha256=uaSUmFpGocMfHKA8hHKj428M-uwq_a4WJWg-4W_R7_w,22518
+ aioscrapy/core/downloader/handlers/webdriver/playwright.py,sha256=O_GQ3Xfs7php7ezwvWK5MReoR4JjV6tiWvh7__-XF3A,22519
  aioscrapy/db/__init__.py,sha256=d3X5cqYBkV6MCXIJa8s88Yli27GKQTX94IEJyK0Gj0w,8575
  aioscrapy/db/absmanager.py,sha256=onGxA2eQJ-kC6JsKhR5afaa6tw_UVstDHyh-kkSiW-o,8480
  aioscrapy/db/aiomongo.py,sha256=MOHqy3uIwWJDXxUGZv7fwup7em7mCsZhGzQB3dEauaI,14750
@@ -47,7 +47,7 @@ aioscrapy/dupefilters/disk.py,sha256=CIOhxJ8M2-caoMIZebnAcSjQC0Pr5RIA-69_Cb2k4BA
  aioscrapy/dupefilters/redis.py,sha256=6MUpIrJsgmWMd-1Xp_oF5dD3BQO2uKuTMu3UDUPKvn4,33223
  aioscrapy/http/__init__.py,sha256=_WrJLH4NQsyG1nUhrpnecWpcy7Bf6ZTfT7xZUIcL_SM,596
  aioscrapy/http/headers.py,sha256=FyIQnUvU2n39l3cDPez5VvtYLvVCWkSpjrUkzP58UTQ,9990
- aioscrapy/http/request/__init__.py,sha256=qEFVQUHFj6WUzEjNDTBWIxxeIMxmPt4147Mbf7K6wYA,16831
+ aioscrapy/http/request/__init__.py,sha256=bRUmUzyjOzsaV1wdywiGcT8VT0xfrlbjUUKwZ9vRqhQ,19990
  aioscrapy/http/request/form.py,sha256=W8Img6A6PyjIiJCOskUF442LLC-0fYnoDKWxExRjVbw,5123
  aioscrapy/http/request/json_request.py,sha256=XVuGHGkd8LLLNjQnW8TaAiqcZaM9X4N7MEEOMrp17kY,7563
  aioscrapy/http/response/__init__.py,sha256=ep9OnMNgEYF0lE6H7HghA7ziEmAOwCRNsujVXMaTsa4,17934
@@ -96,7 +96,7 @@ aioscrapy/queue/redis.py,sha256=KU31ZNciLI9xxZDxsDhtOPLtmkxZQlRPOx_1z8afdwY,4788
  aioscrapy/scrapyd/__init__.py,sha256=Ey14RVLUP7typ2XqP8RWcUum2fuFyigdhuhBBiEheIo,68
  aioscrapy/scrapyd/runner.py,sha256=L0VpRkZD6IOE9MD_QI1A9ipxu3F5mKkqwyl7QxGctFs,6747
  aioscrapy/settings/__init__.py,sha256=3TWbIDf8vtDosXo8QTiypTUGr3z3o8csKJZQnMHnZrs,37108
- aioscrapy/settings/default_settings.py,sha256=GELRr1VQWTE0Nt64euHU31ENOHFM3N7ZzDQX2uAH_2c,16023
+ aioscrapy/settings/default_settings.py,sha256=5iSMC1lLoHTkqGNbQY-eQl5X5xnZFbit26aOfHuPUHE,16040
  aioscrapy/spiders/__init__.py,sha256=U5ZrpW_I-YPUUSi7VDh47mt_rwtzxpj5R0CoYFi0N18,13527
  aioscrapy/templates/project/aioscrapy.cfg,sha256=_nRHP5wtPnZaBi7wCmjWv5BgUu5NYFJZhvCTRVSipyM,112
  aioscrapy/templates/project/module/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
@@ -126,9 +126,9 @@ aioscrapy/utils/template.py,sha256=7tyOvgY7HJJLqBOPcHqpZxtuBjgUwTKkDMZkSyE_4MY,4
  aioscrapy/utils/tools.py,sha256=JdAQM4eqBVMYM5LaOqvV6GURVedcSKOTvquLcHTPWXk,10563
  aioscrapy/utils/trackref.py,sha256=umHeYm9Td8h8OtzyvOiAY6GcTna0QUm8J6PwcV_NMgU,10002
  aioscrapy/utils/url.py,sha256=K0zyUoWoeh2EseYVNe3VswnbCr6-Nj2gukhaxCFvJ9w,19669
- aio_scrapy-2.1.6.dist-info/LICENSE,sha256=QbrHw1tuFHRfXCws2HUcrsOPH93sEJ7F4JO6PcjbMiQ,1083
- aio_scrapy-2.1.6.dist-info/METADATA,sha256=IgYA1l9iPOvyBak7oQNRdDNYrfBDc_1pRcf_f2A7xzU,6767
- aio_scrapy-2.1.6.dist-info/WHEEL,sha256=52BFRY2Up02UkjOa29eZOS2VxUrpPORXg1pkohGGUS8,91
- aio_scrapy-2.1.6.dist-info/entry_points.txt,sha256=WWhoVHZvqhW8a5uFg97K0EP_GjG3uuCIFLkyqDICgaw,56
- aio_scrapy-2.1.6.dist-info/top_level.txt,sha256=8l08KyMt22wfX_5BmhrGH0PgwZdzZIPq-hBUa1GNir4,10
- aio_scrapy-2.1.6.dist-info/RECORD,,
+ aio_scrapy-2.1.8.dist-info/LICENSE,sha256=QbrHw1tuFHRfXCws2HUcrsOPH93sEJ7F4JO6PcjbMiQ,1083
+ aio_scrapy-2.1.8.dist-info/METADATA,sha256=Bezyln2dU_Sp6aU1fgPvDHmzWp4kqFxZd45EbkO6UrQ,7137
+ aio_scrapy-2.1.8.dist-info/WHEEL,sha256=52BFRY2Up02UkjOa29eZOS2VxUrpPORXg1pkohGGUS8,91
+ aio_scrapy-2.1.8.dist-info/entry_points.txt,sha256=WWhoVHZvqhW8a5uFg97K0EP_GjG3uuCIFLkyqDICgaw,56
+ aio_scrapy-2.1.8.dist-info/top_level.txt,sha256=8l08KyMt22wfX_5BmhrGH0PgwZdzZIPq-hBUa1GNir4,10
+ aio_scrapy-2.1.8.dist-info/RECORD,,
aioscrapy/VERSION CHANGED
@@ -1 +1 @@
- 2.1.6
+ 2.1.8
aioscrapy/core/downloader/handlers/aiohttp.py CHANGED
@@ -50,7 +50,7 @@ class AioHttpDownloadHandler(BaseDownloadHandler):

          # Arguments to pass to aiohttp.ClientSession constructor
          # 传递给aiohttp.ClientSession构造函数的参数
-         self.aiohttp_client_session_args: dict = settings.getdict('AIOHTTP_CLIENT_SESSION_ARGS')
+         self.aiohttp_args: dict = settings.getdict('AIOHTTP_ARGS')

          # SSL verification setting
          # SSL验证设置
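The rename above (mirrored by the HTTPX_ARGS, CURL_CFFI_ARGS, PYHTTPX_ARGS, DP_ARGS and PLAYWRIGHT_ARGS renames in the later hunks) is a breaking change for existing settings. A minimal sketch of the new key, assuming the dict is still passed straight to `aiohttp.ClientSession` as this diff shows; the spider name and timeout value are illustrative:

```python
import aiohttp
from aioscrapy import Spider


class AiohttpArgsSpider(Spider):
    name = "aiohttp_args_demo"
    start_urls = ["https://quotes.toscrape.com"]
    custom_settings = {
        "CLOSE_SPIDER_ON_IDLE": True,
        # Formerly AIOHTTP_CLIENT_SESSION_ARGS; any aiohttp.ClientSession kwarg works here.
        "AIOHTTP_ARGS": {"timeout": aiohttp.ClientTimeout(total=30)},
    }

    async def parse(self, response):
        yield {"status": response.status}
```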
@@ -228,13 +228,13 @@ class AioHttpDownloadHandler(BaseDownloadHandler):
          if self.use_session:
              # Not recommended to use session, The abnormal phenomena will occurs when using tunnel proxy
              # 不建议使用会话,使用隧道代理时会出现异常现象
-             session = self.get_session(**self.aiohttp_client_session_args)
+             session = self.get_session(**self.aiohttp_args)
              async with session.request(request.method, request.url, **kwargs) as response:
                  content: bytes = await response.read()
          else:
              # Create a new session for each request (recommended)
              # 为每个请求创建一个新会话(推荐)
-             async with aiohttp.ClientSession(**self.aiohttp_client_session_args) as session:
+             async with aiohttp.ClientSession(**self.aiohttp_args) as session:
                  async with session.request(request.method, request.url, **kwargs) as response:
                      content: bytes = await response.read()

aioscrapy/core/downloader/handlers/curl_cffi.py CHANGED
@@ -44,7 +44,7 @@ class CurlCffiDownloadHandler(BaseDownloadHandler):

          # Arguments to pass to curl_cffi AsyncSession constructor
          # 传递给curl_cffi AsyncSession构造函数的参数
-         self.httpx_client_session_args: dict = self.settings.get('CURL_CFFI_CLIENT_SESSION_ARGS', {})
+         self.curl_cffi_args: dict = self.settings.get('CURL_CFFI_ARGS', {})

          # SSL verification setting
          # SSL验证设置
@@ -156,7 +156,7 @@

          # Configure curl_cffi session
          # 配置curl_cffi会话
-         session_args = self.httpx_client_session_args.copy()
+         session_args = self.curl_cffi_args.copy()

          # Perform the request
          # 执行请求
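Note that this hunk also fixes the misleading attribute name (the old code stored the curl_cffi arguments in `httpx_client_session_args`). Projects that set CURL_CFFI_CLIENT_SESSION_ARGS must switch to CURL_CFFI_ARGS; a hedged sketch, assuming a curl_cffi version whose AsyncSession accepts `impersonate`, and assuming the project is already configured to route requests through the curl_cffi handler:

```python
# In a spider's custom_settings or in settings.py; formerly CURL_CFFI_CLIENT_SESSION_ARGS.
custom_settings = {
    "CLOSE_SPIDER_ON_IDLE": True,
    "CURL_CFFI_ARGS": {"impersonate": "chrome"},  # copied into curl_cffi's AsyncSession
}
```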
aioscrapy/core/downloader/handlers/httpx.py CHANGED
@@ -46,7 +46,7 @@ class HttpxDownloadHandler(BaseDownloadHandler):

          # Arguments to pass to httpx AsyncClient constructor
          # 传递给httpx AsyncClient构造函数的参数
-         self.httpx_client_session_args: dict = self.settings.get('HTTPX_CLIENT_SESSION_ARGS', {})
+         self.httpx_args: dict = self.settings.get('HTTPX_ARGS', {})

          # SSL verification setting
          # SSL验证设置
@@ -147,7 +147,7 @@

          # Configure httpx client session
          # 配置httpx客户端会话
-         session_args = self.httpx_client_session_args.copy()
+         session_args = self.httpx_args.copy()
          session_args.setdefault('http2', True) # Enable HTTP/2 by default
          # 默认启用HTTP/2
          session_args.update({
aioscrapy/core/downloader/handlers/pyhttpx.py CHANGED
@@ -46,7 +46,7 @@ class PyhttpxDownloadHandler(BaseDownloadHandler):

          # Arguments to pass to pyhttpx HttpSession constructor
          # 传递给pyhttpx HttpSession构造函数的参数
-         self.pyhttpx_client_args: dict = self.settings.get('PYHTTPX_CLIENT_ARGS', {})
+         self.pyhttpx_args: dict = self.settings.get('PYHTTPX_ARGS', {})

          # SSL verification setting
          # SSL验证设置
@@ -161,10 +161,13 @@

          # Configure pyhttpx session
          # 配置pyhttpx会话
-         session_args = self.pyhttpx_client_args.copy()
+         session_args = self.pyhttpx_args.copy()
          session_args.setdefault('http2', True) # Enable HTTP/2 by default
          # 默认启用HTTP/2

+         if ja3 := request.meta.get("ja3"):
+             session_args['ja3'] = ja3
+
          # Execute the request in a thread pool since pyhttpx is synchronous
          # 由于pyhttpx是同步的,在线程池中执行请求
          with pyhttpx.HttpSession(**session_args) as session:
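New in this hunk: a per-request JA3 fingerprint can be supplied through `request.meta` and is merged into the `pyhttpx.HttpSession` arguments. A minimal sketch; the top-level `Request` import and the fingerprint string are assumptions for illustration, and the spider must be using the pyhttpx download handler:

```python
from aioscrapy import Request  # assumed export, alongside Spider and logger

# After this change the handler forwards request.meta["ja3"] to
# pyhttpx.HttpSession(ja3=...); the value below is illustrative only.
req = Request(
    "https://quotes.toscrape.com",
    meta={"ja3": "771,4865-4866-4867,0-23-65281-10-11-35,29-23-24,0"},
)
```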
aioscrapy/core/downloader/handlers/webdriver/__init__.py CHANGED
@@ -1,2 +1,2 @@
- from .playwright import PlaywrightDriver, PlaywrightDriver
- from .drissionpage import DrissionPageHandler, DrissionPageDriver
+ from .playwright import PlaywrightDownloadHandler, PlaywrightDriver
+ from .drissionpage import DrissionPageDownloadHandler, DrissionPageDriver
aioscrapy/core/downloader/handlers/webdriver/drissionpage.py CHANGED
@@ -273,7 +273,7 @@ class DrissionPageDriver(WebDriverBase):
          self.page.set.cookies(cookies)


- class DrissionPageHandler(BaseDownloadHandler):
+ class DrissionPageDownloadHandler(BaseDownloadHandler):
      """
      Download handler that uses DrissionPage to perform browser-based HTTP requests.
      使用DrissionPage执行基于浏览器的HTTP请求的下载处理程序。
@@ -298,7 +298,7 @@ class DrissionPageHandler(BaseDownloadHandler):

          # Get DrissionPage client arguments from settings
          # 从设置中获取DrissionPage客户端参数
-         client_args = settings.getdict('DP_CLIENT_ARGS', {})
+         client_args = settings.getdict('DP_ARGS', {})

          # Configure the pool size for browser instances
          # 配置浏览器实例的池大小
aioscrapy/core/downloader/handlers/webdriver/playwright.py CHANGED
@@ -278,7 +278,7 @@ class PlaywrightDriver(WebDriverBase):
          ])


- class PlaywrightHandler(BaseDownloadHandler):
+ class PlaywrightDownloadHandler(BaseDownloadHandler):
      """
      Download handler that uses Playwright to perform browser-based HTTP requests.
      使用Playwright执行基于浏览器的HTTP请求的下载处理程序。
@@ -303,7 +303,7 @@ class PlaywrightHandler(BaseDownloadHandler):

          # Get Playwright client arguments from settings
          # 从设置中获取Playwright客户端参数
-         playwright_client_args = settings.getdict('PLAYWRIGHT_CLIENT_ARGS')
+         playwright_client_args = settings.getdict('PLAYWRIGHT_ARGS')

          # Set the default page load event to wait for
          # 设置要等待的默认页面加载事件
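PlaywrightHandler and DrissionPageHandler become PlaywrightDownloadHandler and DrissionPageDownloadHandler, so any project that imports the handlers directly has to follow the rename. A sketch of the updated imports, matching the exports added to webdriver/__init__.py above:

```python
from aioscrapy.core.downloader.handlers.webdriver import (
    DrissionPageDownloadHandler,
    PlaywrightDownloadHandler,
)
```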
aioscrapy/http/request/__init__.py CHANGED
@@ -11,9 +11,11 @@ It handles URL normalization, fingerprinting, serialization, and other request-r

  import hashlib
  import inspect
- import json
- from typing import Callable, List, Optional, Tuple, Type, TypeVar
+ from collections import Counter
+ from typing import Callable, List, Optional, Tuple, Type, TypeVar, Union
+ from urllib.parse import ParseResult, parse_qsl, urlencode, urlparse

+ import ujson
  from w3lib.url import canonicalize_url
  from w3lib.url import safe_url_string

@@ -23,11 +25,67 @@ from aioscrapy.utils.curl import curl_to_request_kwargs
  from aioscrapy.utils.python import to_unicode
  from aioscrapy.utils.url import escape_ajax

+
  # Type variable for Request class to use in class methods
  # 用于在类方法中使用的Request类的类型变量
  RequestTypeVar = TypeVar("RequestTypeVar", bound="Request")


+ def _update_url_params(url: str, params: Union[dict, list, tuple]) -> str:
+     """Add URL query params to provided URL being aware of existing.
+
+     Args:
+         url: string of target URL
+         params: dict containing requested params to be added
+
+     Returns:
+         string with updated URL
+
+     >> url = 'http://stackoverflow.com/test?answers=true'
+     >> new_params = {'answers': False, 'data': ['some','values']}
+     >> update_url_params(url, new_params)
+     'http://stackoverflow.com/test?data=some&data=values&answers=false'
+     """
+     # No need to unquote, since requote_uri will be called later.
+     parsed_url = urlparse(url)
+
+     # Extracting URL arguments from parsed URL, NOTE the result is a list, not dict
+     parsed_get_args = parse_qsl(parsed_url.query, keep_blank_values=True)
+
+     # Merging URL arguments dict with new params
+     old_args_counter = Counter(x[0] for x in parsed_get_args)
+     if isinstance(params, dict):
+         params = list(params.items())
+     new_args_counter = Counter(x[0] for x in params)
+     for key, value in params:
+         # Bool and Dict values should be converted to json-friendly values
+         if isinstance(value, (bool, dict)):
+             value = ujson.dumps(value)
+         # 1 to 1 mapping, we have to search and update it.
+         if old_args_counter.get(key) == 1 and new_args_counter.get(key) == 1:
+             parsed_get_args = [
+                 (x if x[0] != key else (key, value)) for x in parsed_get_args
+             ]
+         else:
+             parsed_get_args.append((key, value))
+
+     # Converting URL argument to proper query string
+     encoded_get_args = urlencode(parsed_get_args, doseq=True)
+
+     # Creating new parsed result object based on provided with new
+     # URL arguments. Same thing happens inside of urlparse.
+     new_url = ParseResult(
+         parsed_url.scheme,
+         parsed_url.netloc,
+         parsed_url.path,
+         parsed_url.params,
+         encoded_get_args,
+         parsed_url.fragment,
+     ).geturl()
+
+     return new_url
+
+
  class Request(object):
      attributes: Tuple[str, ...] = (
          "url", "callback", "method", "headers", "body",
@@ -42,7 +100,10 @@ class Request(object):
          callback: Optional[Callable] = None,
          method: str = 'GET',
          headers: Optional[dict] = None,
+         params: Optional[Union[dict, list, tuple]] = None,
          body: Optional[str] = None,
+         data: Optional[Union[dict[str, str], list[tuple], str, bytes]] = None,
+         json: Optional[dict | list] = None,
          cookies: Optional[dict] = None,
          meta: Optional[dict] = None,
          encoding: str = 'utf-8',
@@ -77,8 +138,32 @@
          """
          self._encoding = encoding
          self.method = str(method).upper()
+
+         self.headers = Headers(headers or {})
+
+         # url
+         if params:
+             url = _update_url_params(url, params)
          self._set_url(url)
+
+         # body/data/json
+         if data is not None:
+             if isinstance(data, (dict, list, tuple)):
+                 body = urlencode(data)
+             elif isinstance(data, str):
+                 body = data
+             elif isinstance(data, bytes):
+                 body = data.decode(self._encoding)
+             self.headers.setdefault('Content-Type', 'application/x-www-form-urlencoded')
+
+         if json is not None:
+             body = ujson.dumps(json, separators=(",", ":"))
+             # Set default headers for JSON content
+             # 设置JSON内容的默认头部
+             self.headers.setdefault('Content-Type', 'application/json')
+
          self._set_body(body)
+
          assert isinstance(priority, int), f"Request priority not an integer: {priority!r}"
          self.priority = priority

@@ -86,7 +171,6 @@ class Request(object):
  self.errback = errback
 
  self.cookies = cookies or {}
- self.headers = Headers(headers or {})
  self.dont_filter = dont_filter
  self.use_proxy = use_proxy
 
@@ -207,7 +291,7 @@
          """
          return self._body

-     def _set_body(self, body: str) -> None:
+     def _set_body(self, body: Optional[str]) -> None:
          """
          Set the request body.
          设置请求体。
@@ -361,7 +445,7 @@
          The request fingerprint. 请求指纹。
          """
          return hashlib.sha1(
-             json.dumps({
+             ujson.dumps({
                  'method': to_unicode(self.method),
                  'url': canonicalize_url(self.url, keep_fragments=keep_fragments),
                  'body': self.body,
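The Request changes above add `params`, `data` and `json` keyword arguments: `params` is merged into the URL query string via `_update_url_params`, `data` is urlencoded into the body with a form Content-Type, and `json` is serialized with ujson with Content-Type defaulting to application/json. A sketch of how a spider might use them; the top-level `Request` import and the httpbin URL are assumptions for illustration:

```python
from aioscrapy import Request, Spider, logger


class ApiDemoSpider(Spider):
    name = "api_demo"
    custom_settings = {"CLOSE_SPIDER_ON_IDLE": True}
    start_urls = ["https://httpbin.org/anything"]

    async def parse(self, response):
        # params extend the existing query string; json becomes the POST body.
        # (data={"a": "b"} would instead be urlencoded with a form Content-Type.)
        yield Request(
            "https://httpbin.org/anything",
            method="POST",
            params={"page": 1, "q": "aioscrapy"},
            json={"ids": [1, 2, 3]},
            callback=self.parse_api,
            dont_filter=True,
        )

    async def parse_api(self, response):
        logger.info(response.text)
```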
aioscrapy/settings/default_settings.py CHANGED
@@ -167,15 +167,15 @@ DOWNLOAD_HANDLERS_MAP = {
      # playwright handlers (for JavaScript rendering)
      # playwright处理程序(用于JavaScript渲染)
      'playwright': {
-         'http': 'aioscrapy.core.downloader.handlers.webdriver.playwright.PlaywrightHandler',
-         'https': 'aioscrapy.core.downloader.handlers.webdriver.playwright.PlaywrightHandler',
+         'http': 'aioscrapy.core.downloader.handlers.webdriver.playwright.PlaywrightDownloadHandler',
+         'https': 'aioscrapy.core.downloader.handlers.webdriver.playwright.PlaywrightDownloadHandler',
      },

-     # DrissionPageHandler handlers (for JavaScript rendering)
-     # DrissionPageHandler处理程序(用于JavaScript渲染)
+     # DrissionPage handlers (for JavaScript rendering)
+     # DrissionPage处理程序(用于JavaScript渲染)
      'dp': {
-         'http': 'aioscrapy.core.downloader.handlers.webdriver.drissionpage.DrissionPageHandler',
-         'https': 'aioscrapy.core.downloader.handlers.webdriver.drissionpage.DrissionPageHandler',
+         'http': 'aioscrapy.core.downloader.handlers.webdriver.drissionpage.DrissionPageDownloadHandler',
+         'https': 'aioscrapy.core.downloader.handlers.webdriver.drissionpage.DrissionPageDownloadHandler',
      },

      # curl_cffi handlers
@@ -480,4 +480,4 @@ URLLENGTH_LIMIT = 2083

  # Whether to close the spider when it becomes idle (no more requests)
  # 当爬虫变为空闲状态(没有更多请求)时是否关闭爬虫
- CLOSE_SPIDER_ON_IDLE = False
+ CLOSE_SPIDER_ON_IDLE = True
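Because the default flips from False to True, spiders that previously relied on staying alive while idle (for example distributed workers waiting on a Redis or RabbitMQ queue) should now pin the old behaviour explicitly; a minimal sketch:

```python
from aioscrapy import Spider


class LongRunningSpider(Spider):
    name = "long_running"
    # 2.1.8 closes idle spiders by default; restore the pre-2.1.8 behaviour.
    custom_settings = {"CLOSE_SPIDER_ON_IDLE": False}
```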