scrapling 0.2.92__py3-none-any.whl → 0.2.94__py3-none-any.whl

@@ -1,6 +1,6 @@
- Metadata-Version: 2.1
+ Metadata-Version: 2.2
  Name: scrapling
- Version: 0.2.92
+ Version: 0.2.94
  Summary: Scrapling is a powerful, flexible, and high-performance web scraping library for Python. It
  Home-page: https://github.com/D4Vinci/Scrapling
  Author: Karim Shoair
@@ -10,7 +10,7 @@ Project-URL: Documentation, https://github.com/D4Vinci/Scrapling/tree/main/docs
  Project-URL: Source, https://github.com/D4Vinci/Scrapling
  Project-URL: Tracker, https://github.com/D4Vinci/Scrapling/issues
  Classifier: Operating System :: OS Independent
- Classifier: Development Status :: 4 - Beta
+ Classifier: Development Status :: 4 - Beta
  Classifier: Intended Audience :: Developers
  Classifier: License :: OSI Approved :: BSD License
  Classifier: Natural Language :: English
@@ -31,8 +31,7 @@ Classifier: Typing :: Typed
  Requires-Python: >=3.9
  Description-Content-Type: text/markdown
  License-File: LICENSE
- Requires-Dist: requests>=2.3
- Requires-Dist: lxml>=4.5
+ Requires-Dist: lxml>=5.0
  Requires-Dist: cssselect>=1.2
  Requires-Dist: click
  Requires-Dist: w3lib
@@ -41,7 +40,18 @@ Requires-Dist: tldextract
  Requires-Dist: httpx[brotli,socks,zstd]
  Requires-Dist: playwright>=1.49.1
  Requires-Dist: rebrowser-playwright>=1.49.1
- Requires-Dist: camoufox[geoip]>=0.4.9
+ Requires-Dist: camoufox[geoip]>=0.4.11
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: license
+ Dynamic: project-url
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary

  # 🕷️ Scrapling: Undetectable, Lightning-Fast, and Adaptive Web Scraping for Python
  [![Tests](https://github.com/D4Vinci/Scrapling/actions/workflows/tests.yml/badge.svg)](https://github.com/D4Vinci/Scrapling/actions/workflows/tests.yml) [![PyPI version](https://badge.fury.io/py/Scrapling.svg)](https://badge.fury.io/py/Scrapling) [![Supported Python versions](https://img.shields.io/pypi/pyversions/scrapling.svg)](https://pypi.org/project/scrapling/) [![PyPI Downloads](https://static.pepy.tech/badge/scrapling)](https://pepy.tech/project/scrapling)
@@ -78,6 +88,21 @@ Scrapling is a high-performance, intelligent web scraping library for Python tha
  [![Evomi Banner](https://my.evomi.com/images/brand/cta.png)](https://evomi.com?utm_source=github&utm_medium=banner&utm_campaign=d4vinci-scrapling)
  ---

+ [Scrapeless](https://www.scrapeless.com/?utm_source=github&utm_medium=ads&utm_campaign=scraping&utm_term=D4Vinci) is your all-in-one web scraping toolkit, starting at just $0.60 per 1k URLs!
+
+ - 🚀 Scraping API: Effortless and highly customizable data extraction with a single API call, providing structured data from any website.
+ - ⚡ Scraping Browser: AI-powered and LLM-driven, it simulates human-like behavior with genuine fingerprints and headless browser support, ensuring seamless, block-free scraping.
+ - 🔒 Web Unlocker: Bypass CAPTCHAs, IP blocks, and dynamic content in real time, ensuring uninterrupted access.
+ - 🌐 Proxies: Use high-quality, rotating proxies to scrape top platforms like Amazon, Shopee, and more, with global coverage in 195+ countries.
+ - 💼 Enterprise-Grade: Custom solutions for large-scale and complex data needs.
+ - 🎁 Free Trial: Try before you buy—experience our service firsthand.
+ - 💬 Pay-Per-Use: Flexible, cost-effective pricing with no long-term commitments.
+ - 🔧 Easy Integration: Seamlessly integrate with your existing tools and workflows for hassle-free automation.
+
+
+ [![Scrapeless Banner](https://raw.githubusercontent.com/D4Vinci/Scrapling/main/images/scrapeless.jpg)](https://www.scrapeless.com/?utm_source=github&utm_medium=ads&utm_campaign=scraping&utm_term=D4Vinci)
+ ---
+
  ## Table of content
  * [Key Features](#key-features)
  * [Fetch websites as you prefer](#fetch-websites-as-you-prefer-with-async-support)
@@ -122,27 +147,27 @@ Scrapling is a high-performance, intelligent web scraping library for Python tha
  ## Key Features

  ### Fetch websites as you prefer with async support
- - **HTTP requests**: Stealthy and fast HTTP requests with `Fetcher`
- - **Stealthy fetcher**: Annoying anti-bot protection? No problem! Scrapling can bypass almost all of them with `StealthyFetcher` with default configuration!
- - **Your preferred browser**: Use your real browser with CDP, [NSTbrowser](https://app.nstbrowser.io/r/1vO5e5)'s browserless, PlayWright with stealth mode, or even vanilla PlayWright - All is possible with `PlayWrightFetcher`!
+ - **HTTP Requests**: Fast and stealthy HTTP requests with the `Fetcher` class.
+ - **Dynamic Loading & Automation**: Fetch dynamic websites with the `PlayWrightFetcher` class through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or [NSTbrowser](https://app.nstbrowser.io/r/1vO5e5)'s browserless!
+ - **Anti-bot Protections Bypass**: Easily bypass protections with `StealthyFetcher` and `PlayWrightFetcher` classes.
 
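For orientation, here is a minimal sketch of how the three classes above are invoked, pieced together from the usage examples elsewhere in this README (the keyword arguments are illustrative, not exhaustive):

```python
from scrapling import Fetcher, StealthyFetcher, PlayWrightFetcher

# Plain, fast HTTP requests
page = Fetcher().get('https://quotes.toscrape.com/')

# Hardened fetch aimed at anti-bot protections (usable without initializing)
page = StealthyFetcher.fetch('https://example.com')

# Browser automation through Playwright; stealth mode is opt-in
page = PlayWrightFetcher.fetch('https://example.com', stealth=True)
```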
129
154
  ### Adaptive Scraping
130
- - 🔄 **Smart Element Tracking**: Locate previously identified elements after website structure changes, using an intelligent similarity system and integrated storage.
131
- - 🎯 **Flexible Querying**: Use CSS selectors, XPath, Elements filters, text search, or regex - chain them however you want!
132
- - 🔍 **Find Similar Elements**: Automatically locate elements similar to the element you want on the page (Ex: other products like the product you found on the page).
155
+ - 🔄 **Smart Element Tracking**: Relocate elements after website changes, using an intelligent similarity system and integrated storage.
156
+ - 🎯 **Flexible Selection**: CSS selectors, XPath selectors, filters-based search, text search, regex search and more.
157
+ - 🔍 **Find Similar Elements**: Automatically locate elements similar to the element you found!
133
158
  - 🧠 **Smart Content Scraping**: Extract data from multiple websites without specific selectors using Scrapling powerful features.
134
159
 
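A short sketch of the "Find Similar Elements" feature above; `find_similar()` is the method name Scrapling's docs use for this, but treat the exact call here as an assumption:

```python
# Grab one quote element, then let Scrapling locate structurally similar ones
first_quote = page.css_first('.quote')
similar = first_quote.find_similar()  # assumed API: other elements shaped like this one
print(len(similar))
```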
- ### Performance
- - 🚀 **Lightning Fast**: Built from the ground up with performance in mind, outperforming most popular Python scraping libraries (outperforming BeautifulSoup in parsing by up to 620x in our tests).
+ ### High Performance
+ - 🚀 **Lightning Fast**: Built from the ground up with performance in mind, outperforming most popular Python scraping libraries.
  - 🔋 **Memory Efficient**: Optimized data structures for minimal memory footprint.
- - ⚡ **Fast JSON serialization**: 10x faster JSON serialization than the standard json library with more options.
+ - ⚡ **Fast JSON serialization**: 10x faster than the standard library.

- ### Developing Experience
- - 🛠️ **Powerful Navigation API**: Traverse the DOM tree easily in all directions and get the info you want (parent, ancestors, sibling, children, next/previous element, and more).
- - 🧬 **Rich Text Processing**: All strings have built-in methods for regex matching, cleaning, and more. All elements' attributes are read-only dictionaries that are faster than standard dictionaries with added methods.
- - 📝 **Automatic Selector Generation**: Create robust CSS/XPath selectors for any element.
- - 🔌 **API Similar to Scrapy/BeautifulSoup**: Familiar methods and similar pseudo-elements for Scrapy and BeautifulSoup users.
- - 📘 **Type hints and test coverage**: Complete type coverage and almost full test coverage for better IDE support and fewer bugs, respectively.
+ ### Developer Friendly
+ - 🛠️ **Powerful Navigation API**: Easy DOM traversal in all directions.
+ - 🧬 **Rich Text Processing**: All strings have built-in regex, cleaning methods, and more. All elements' attributes are optimized dictionaries that take less memory than standard dictionaries, with added methods.
+ - 📝 **Auto Selectors Generation**: Generate robust short and full CSS/XPath selectors for any element.
+ - 🔌 **Familiar API**: Similar to Scrapy/BeautifulSoup and the same pseudo-elements used in Scrapy.
+ - 📘 **Type hints**: Complete type/doc-strings coverage for future-proofing and best autocompletion support.
 
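To make the navigation and selector-generation bullets concrete, a hedged sketch: the traversal properties come from the removed 0.2.92 wording (parent, children, next/previous), while the selector-generation property names are assumptions based on Scrapling's docs:

```python
quote = page.css_first('.quote')

# DOM traversal in all directions
print(quote.parent)    # one level up the tree
print(quote.next)      # next sibling element
print(quote.previous)  # previous sibling element

# Auto-generated selectors for any element (assumed property names)
print(quote.generate_css_selector)       # short CSS selector
print(quote.generate_full_css_selector)  # full-path CSS selector
```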
  ## Getting Started

@@ -151,21 +176,22 @@ from scrapling import Fetcher

  fetcher = Fetcher(auto_match=False)

- # Fetch a web page and create an Adaptor instance
+ # Do an HTTP GET request to a web page and create an Adaptor instance
  page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
- # Get all strings in the full page
+ # Get all text content from all HTML tags in the page except `script` and `style` tags
  page.get_all_text(ignore_tags=('script', 'style'))

- # Get all quotes, any of these methods will return a list of strings (TextHandlers)
+ # Get all quote elements; any of these methods will return a list of strings directly (TextHandlers)
  quotes = page.css('.quote .text::text') # CSS selector
  quotes = page.xpath('//span[@class="text"]/text()') # XPath
  quotes = page.css('.quote').css('.text::text') # Chained selectors
  quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above

  # Get the first quote element
- quote = page.css_first('.quote') # / page.css('.quote').first / page.css('.quote')[0]
+ quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]

  # Tired of selectors? Use find_all/find
+ # Get all 'div' HTML tags where one of their 'class' values is 'quote'
  quotes = page.find_all('div', {'class': 'quote'})
  # Same as
  quotes = page.find_all('div', class_='quote')
@@ -173,10 +199,10 @@ quotes = page.find_all(['div'], class_='quote')
  quotes = page.find_all(class_='quote') # and so on...

  # Working with elements
- quote.html_content # Inner HTML
- quote.prettify() # Prettified version of Inner HTML
- quote.attrib # Element attributes
- quote.path # DOM path to element (List)
+ quote.html_content # Get Inner HTML of this element
+ quote.prettify() # Prettified version of Inner HTML above
+ quote.attrib # Get that element's attributes
+ quote.path # DOM path to element (list of all ancestors from the <html> tag to the element itself)
  ```
  To keep it simple, all methods can be chained on top of each other!

@@ -241,7 +267,7 @@ then use it right away without initializing like:
  page = StealthyFetcher.fetch('https://example.com')
  ```

- Also, the `Response` object returned from all fetchers is the same as the `Adaptor` object except it has these added attributes: `status`, `reason`, `cookies`, `headers`, and `request_headers`. All `cookies`, `headers`, and `request_headers` are always of type `dictionary`.
+ Also, the `Response` object returned from all fetchers is the same as the `Adaptor` object except it has these added attributes: `status`, `reason`, `cookies`, `headers`, `history`, and `request_headers`. All `cookies`, `headers`, and `request_headers` are always of type `dictionary`.
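A quick illustration of those attributes; `history` is the one newly documented in this release, and the printed values are of course hypothetical:

```python
from scrapling import Fetcher

page = Fetcher().get('https://quotes.toscrape.com/')
print(page.status, page.reason)  # e.g. 200 OK
print(page.cookies)              # plain dict
print(page.headers)              # plain dict
print(page.request_headers)      # dict of the headers that were sent
print(page.history)              # redirect history, newly documented here
```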
  > [!NOTE]
  > The `auto_match` argument is enabled by default which is the one you should care about the most as you will see later.
  ### Fetcher
@@ -292,7 +318,7 @@ True
  | humanize | Humanize the cursor movement. Takes either True or the MAX duration in seconds of the cursor movement. The cursor typically takes up to 1.5 seconds to move across the window. | ✔️ |
  | allow_webgl | Enabled by default. Disabling WebGL is not recommended as many WAFs now check if WebGL is enabled. | ✔️ |
  | geoip | Recommended to use with proxies; Automatically use IP's longitude, latitude, timezone, country, locale, & spoof the WebRTC IP address. It will also calculate and spoof the browser's language based on the distribution of language speakers in the target region. | ✔️ |
- | disable_ads | Enabled by default, this installs `uBlock Origin` addon on the browser if enabled. | ✔️ |
+ | disable_ads | Disabled by default, this installs the `uBlock Origin` addon on the browser if enabled. | ✔️ |
  | network_idle | Wait for the page until there are no network connections for at least 500 ms. | ✔️ |
  | timeout | The timeout in milliseconds that is used in all operations and waits through the page. The default is 30000. | ✔️ |
  | wait_selector | Wait for a specific css selector to be in a specific state. | ✔️ |
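A sketch combining several of the arguments above into one call; the values are illustrative, and note that `disable_ads` now defaults to off per the corrected row above:

```python
page = StealthyFetcher.fetch(
    'https://example.com',
    humanize=1.5,            # cap cursor movement at 1.5 seconds
    geoip=True,              # derive locale/timezone/WebRTC IP from the proxy's IP
    disable_ads=True,        # opt back in to the uBlock Origin addon
    network_idle=True,       # wait until no network connections for 500 ms
    timeout=30000,           # milliseconds; the default
    wait_selector='.quote',  # wait for this CSS selector to appear
)
```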
@@ -574,7 +600,7 @@ Inspired by BeautifulSoup's `find_all` function you can find elements by using `
  * Any string passed is considered a tag name
  * Any iterable passed like List/Tuple/Set is considered an iterable of tag names.
  * Any dictionary is considered a mapping of HTML element(s) attribute names and attribute values.
- * Any regex patterns passed are used as filters
+ * Any regex patterns passed are used to filter elements by their text content
  * Any functions passed are used as filters
  * Any keyword argument passed is considered as an HTML element attribute with its value.
 
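Putting those argument types side by side: the first two calls mirror the README's own examples, while the regex and function filters are illustrative (the lambda assumes `attrib` supports `.get()` like a normal mapping):

```python
import re

quotes = page.find_all('div', {'class': 'quote'})        # tag name + attributes dictionary
quotes = page.find_all(['div', 'span'], class_='quote')  # iterable of tag names + keyword attribute
quotes = page.find_all(re.compile(r'truth'))             # regex filters elements by text content
quotes = page.find_all(lambda el: 'quote' in el.attrib.get('class', ''))  # function filter
```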
@@ -583,7 +609,7 @@ So the way it works is after collecting all passed arguments and keywords, each

  1. All elements with the passed tag name(s).
  2. All elements that match all passed attribute(s).
- 3. All elements that match all passed regex patterns.
+ 3. All elements whose text content matches all passed regex patterns.
  4. All elements that fulfill all passed function(s).

  Note: The filtering process always starts from the first filter it finds in the filtering order above so if no tag name(s) are passed but attributes are passed, the process starts from that layer and so on. **But the order in which you pass the arguments doesn't matter.**
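So, under the note above, these two calls should narrow candidates identically (tag first, then attribute, then the regex), regardless of how the arguments are ordered; the specific filters are illustrative:

```python
import re

a = page.find_all('div', re.compile(r'truth'), class_='quote')
b = page.find_all(re.compile(r'truth'), 'div', class_='quote')
assert len(a) == len(b)  # same filters, same result set
```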
@@ -1,23 +1,23 @@
- scrapling/__init__.py,sha256=0iEOX168f4gLFpReEUemMOhTske8AS2o0UQHJWXn-4o,500
+ scrapling/__init__.py,sha256=pOwvxTBwxLovt0OJNZz2A5FkbfjQC0wKrDmONqoNsL0,500
  scrapling/cli.py,sha256=njPdJKmbLFHeWjtSiGEm9ALBdSyfUp0IaJvxQL5C31Q,1125
- scrapling/defaults.py,sha256=tJAOMB-PMd3aLZz3j_yr6haBxxaklAvWdS_hP-GFFdU,331
- scrapling/fetchers.py,sha256=K3MKBqKDOXItJNwxFY2fe1C21Vz6QSd91fFtN98Mpg4,35402
- scrapling/parser.py,sha256=sT1gh5pnbjpUzFt8K9DGD6x60zKQcAtzmyf8DgiNDCI,55266
+ scrapling/defaults.py,sha256=sdXeZjXEX7PmCtaa0weK0nRrAUzqZukNNqipZ_sltYE,469
+ scrapling/fetchers.py,sha256=qmiJ6S-bnPWvP48Z6rKxBnSuR-tdwHlJwlIsYxGxFM0,35405
+ scrapling/parser.py,sha256=b_1eHxRwHRCidyvm3F6ST6qIYvVEVU6GhTTCI1LblVk,54330
  scrapling/py.typed,sha256=frcCV1k9oG9oKj3dpUqdJg1PxRT2RSN_XKdLCPjaYaY,2
  scrapling/core/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- scrapling/core/_types.py,sha256=OcsP1WeQEOlEVo9OzTrLQfgZZfXuJ0civVs31SynwGA,641
- scrapling/core/custom_types.py,sha256=ZRzpoT6qQ4vU_ejhLXa7WYuYLGl5HwAjLPe01xdhuvM,10808
+ scrapling/core/_types.py,sha256=dKVi_dUxdxNtTr7sj7ySkHXDfrsmjFTfpCQeO5tGuBY,670
+ scrapling/core/custom_types.py,sha256=X5fNOS3E7BDkvoUxrRZpEoPlzbLMlibGhZVGbHb2E74,13393
  scrapling/core/mixins.py,sha256=sozbpaGL1_O_x3U-ABM5aYWpnxpCLfdbcA9SG3P7weY,3532
  scrapling/core/storage_adaptors.py,sha256=l_ZYcdn1y69AcoPuRrPoaxqKysN62pMExrwJWYdu5MA,6220
- scrapling/core/translator.py,sha256=ojDmNi5pFZE6Ke-AiSsTilXiPRdR8yhX3o-uVGMkap8,5236
+ scrapling/core/translator.py,sha256=hFSc3mxG5pYhbwRgingeFbD_E73U799vCsvVv0uFEXw,5237
  scrapling/core/utils.py,sha256=03LzCDzmeK1TXPjIKVzHSUgSfhpe36XE8AwxlgxzJoU,3705
  scrapling/engines/__init__.py,sha256=zA7tzqcDXP0hllwmjVewNHWipIA4JSU9mRG4J-cud0c,267
- scrapling/engines/camo.py,sha256=wJRfaIU0w_hDSlrP2AdpjBU6NNEKw0wSnVbqUoxt1Gk,13682
+ scrapling/engines/camo.py,sha256=SHMRnIrN6599upo5-G3fZQ10455xyB-bB_EsLMjBStA,16072
  scrapling/engines/constants.py,sha256=Gb_nXFoBB4ujJkd05SKkenMe1UDiRYQA3dkmA3DunLg,3723
- scrapling/engines/pw.py,sha256=MCYE5rDx55D2VOIeUNLl44ROXnyFRfku_u2FOcXjqEQ,18534
- scrapling/engines/static.py,sha256=7SVEfeigCPfwC1ukx0zIFFe96Bo5fox6qOq2IWrP6P8,10319
+ scrapling/engines/pw.py,sha256=LvS1jvTf3s7mfdeQo7_OyQ5zpiOzvBu5g88hOLlQBCQ,20856
+ scrapling/engines/static.py,sha256=_bqVKcsTkm8ok6NIH6PDDaXtyQ6B2ZoGWccjZJKwvBo,10414
  scrapling/engines/toolbelt/__init__.py,sha256=VQDdYm1zY9Apno6d8UrULk29vUjllZrQqD8mXL1E2Fc,402
- scrapling/engines/toolbelt/custom.py,sha256=d3qyeCg_qHm1RRE7yv5hyU9b17Y7YDPGBOVhEH1CAT0,12754
+ scrapling/engines/toolbelt/custom.py,sha256=qgONLwpxUoEIAIQBF1RcakYu8cqAAmX8qdyaol5hfjA,12813
  scrapling/engines/toolbelt/fingerprints.py,sha256=ajEHdXHr7W4hw9KcNS7XlyxNBZu37p1bRj18TiICLzU,2929
  scrapling/engines/toolbelt/navigation.py,sha256=xEfZRJefuxOCGxQOSI2llS0du0Y2XmoIPdVGUSHOd7k,4567
  scrapling/engines/toolbelt/bypasses/navigator_plugins.js,sha256=tbnnk3nCXB6QEQnOhDlu3n-s7lnUTAkrUsjP6FDQIQg,2104
@@ -33,17 +33,17 @@ tests/fetchers/test_utils.py,sha256=ANFu-4FFhtyGFGIwJksUO2M2tTTcKU2M_t6F2aav8lM,
  tests/fetchers/async/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  tests/fetchers/async/test_camoufox.py,sha256=BANJ0TVqEdsjkYlsyU-q_spfaMsqTLOBQU8LUDurL9I,3685
  tests/fetchers/async/test_httpx.py,sha256=6WgsvqV1-rYTjZ9na5x-wt49C3Ur9D99HXBFbewO0gc,3888
- tests/fetchers/async/test_playwright.py,sha256=zzSYnfRksjNep_YipTiYAB9eQaIo3fssKLrsGzXEakw,4068
+ tests/fetchers/async/test_playwright.py,sha256=rr_3vB9LWclbl7PBNMH2MNU6CsirvJAIx_LsI9mLil0,4106
  tests/fetchers/sync/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  tests/fetchers/sync/test_camoufox.py,sha256=IcDXPAWSSJnYT6psDFKSbCeym5n7hCrMPYQEghaOX3A,3165
  tests/fetchers/sync/test_httpx.py,sha256=xItYWjnDOIswKJzua2tDq8Oy43nTeFl0O1bci7lzGmg,3615
- tests/fetchers/sync/test_playwright.py,sha256=5eZdPwk3JGeaO7GuExv_QsByLyWDE9joxnmprW0WO6Q,3780
+ tests/fetchers/sync/test_playwright.py,sha256=MEyDRaMyxDIWupG7f_xz0f0jd9Cpbd5rXCPz6qUy8cs,3818
  tests/parser/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
  tests/parser/test_automatch.py,sha256=SxsNdExE8zz8AcPRQFBUjZ3Q_1-tPOd9dzVvMSZpOYQ,4908
  tests/parser/test_general.py,sha256=dyfOsc8lleoY4AxcfDUBUaD1i95xecfYuTUhKBsYjwo,12100
- scrapling-0.2.92.dist-info/LICENSE,sha256=XHgu8DRuT7_g3Hb9Q18YGg8eShp6axPBacbnQxT_WWQ,1499
- scrapling-0.2.92.dist-info/METADATA,sha256=2I-HK-xEkVFFyQBio8NAKR0eQEBB-dLHFuvb5eluCEQ,67415
- scrapling-0.2.92.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91
- scrapling-0.2.92.dist-info/entry_points.txt,sha256=DHyt2Blxy0P5OE2HRcP95Wz9_xo2ERCDcNqrJjYS3o8,49
- scrapling-0.2.92.dist-info/top_level.txt,sha256=ub7FkOEXeYmmYTUxd4pCrwXfBfAMIpZ1sCGmXCc14tI,16
- scrapling-0.2.92.dist-info/RECORD,,
+ scrapling-0.2.94.dist-info/LICENSE,sha256=XHgu8DRuT7_g3Hb9Q18YGg8eShp6axPBacbnQxT_WWQ,1499
+ scrapling-0.2.94.dist-info/METADATA,sha256=nF08IkBzVob418wgav0uHzbNdVXH1-FrTYZAxrTfg24,68878
+ scrapling-0.2.94.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+ scrapling-0.2.94.dist-info/entry_points.txt,sha256=DHyt2Blxy0P5OE2HRcP95Wz9_xo2ERCDcNqrJjYS3o8,49
+ scrapling-0.2.94.dist-info/top_level.txt,sha256=ub7FkOEXeYmmYTUxd4pCrwXfBfAMIpZ1sCGmXCc14tI,16
+ scrapling-0.2.94.dist-info/RECORD,,
@@ -1,5 +1,5 @@
  Wheel-Version: 1.0
- Generator: setuptools (75.6.0)
+ Generator: setuptools (75.8.0)
  Root-Is-Purelib: true
  Tag: py3-none-any

@@ -70,7 +70,7 @@ class TestPlayWrightFetcherAsync:
  @pytest.mark.parametrize("kwargs", [
  {"disable_webgl": True, "hide_canvas": False},
  {"disable_webgl": False, "hide_canvas": True},
- {"stealth": True},
+ # {"stealth": True}, # causes issues with Github Actions
  {"useragent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0'},
  {"extra_headers": {'ayo': ''}}
  ])
@@ -61,7 +61,7 @@ class TestPlayWrightFetcher:
  @pytest.mark.parametrize("kwargs", [
  {"disable_webgl": True, "hide_canvas": False},
  {"disable_webgl": False, "hide_canvas": True},
- {"stealth": True},
+ # {"stealth": True}, # causes issues with Github Actions
  {"useragent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0'},
  {"extra_headers": {'ayo': ''}}
  ])
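The `stealth` case is only commented out because it causes issues on CI, not removed from the library; locally it can still be exercised. A sketch, assuming (as these tests do) that the kwargs are forwarded to the fetcher:

```python
from scrapling import PlayWrightFetcher

# Run the stealth path locally even though GitHub Actions skips it
page = PlayWrightFetcher.fetch('https://example.com', stealth=True)
assert page.status == 200
```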