ultimate-pi 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (51)
  1. package/.agents/skills/caveman/SKILL.md +67 -0
  2. package/.agents/skills/compress/SKILL.md +111 -0
  3. package/.agents/skills/compress/scripts/__init__.py +9 -0
  4. package/.agents/skills/compress/scripts/__main__.py +3 -0
  5. package/.agents/skills/compress/scripts/benchmark.py +78 -0
  6. package/.agents/skills/compress/scripts/cli.py +73 -0
  7. package/.agents/skills/compress/scripts/compress.py +227 -0
  8. package/.agents/skills/compress/scripts/detect.py +121 -0
  9. package/.agents/skills/compress/scripts/validate.py +189 -0
  10. package/.agents/skills/context7-cli/SKILL.md +73 -0
  11. package/.agents/skills/context7-cli/references/docs.md +121 -0
  12. package/.agents/skills/context7-cli/references/setup.md +43 -0
  13. package/.agents/skills/context7-cli/references/skills.md +118 -0
  14. package/.agents/skills/emil-design-eng/SKILL.md +679 -0
  15. package/.agents/skills/lean-ctx/SKILL.md +149 -0
  16. package/.agents/skills/lean-ctx/scripts/install.sh +95 -0
  17. package/.agents/skills/scrapling-official/LICENSE.txt +28 -0
  18. package/.agents/skills/scrapling-official/SKILL.md +390 -0
  19. package/.agents/skills/scrapling-official/examples/01_fetcher_session.py +26 -0
  20. package/.agents/skills/scrapling-official/examples/02_dynamic_session.py +26 -0
  21. package/.agents/skills/scrapling-official/examples/03_stealthy_session.py +26 -0
  22. package/.agents/skills/scrapling-official/examples/04_spider.py +58 -0
  23. package/.agents/skills/scrapling-official/examples/README.md +45 -0
  24. package/.agents/skills/scrapling-official/references/fetching/choosing.md +78 -0
  25. package/.agents/skills/scrapling-official/references/fetching/dynamic.md +352 -0
  26. package/.agents/skills/scrapling-official/references/fetching/static.md +432 -0
  27. package/.agents/skills/scrapling-official/references/fetching/stealthy.md +255 -0
  28. package/.agents/skills/scrapling-official/references/mcp-server.md +214 -0
  29. package/.agents/skills/scrapling-official/references/migrating_from_beautifulsoup.md +86 -0
  30. package/.agents/skills/scrapling-official/references/parsing/adaptive.md +212 -0
  31. package/.agents/skills/scrapling-official/references/parsing/main_classes.md +586 -0
  32. package/.agents/skills/scrapling-official/references/parsing/selection.md +494 -0
  33. package/.agents/skills/scrapling-official/references/spiders/advanced.md +344 -0
  34. package/.agents/skills/scrapling-official/references/spiders/architecture.md +94 -0
  35. package/.agents/skills/scrapling-official/references/spiders/getting-started.md +164 -0
  36. package/.agents/skills/scrapling-official/references/spiders/proxy-blocking.md +235 -0
  37. package/.agents/skills/scrapling-official/references/spiders/requests-responses.md +196 -0
  38. package/.agents/skills/scrapling-official/references/spiders/sessions.md +205 -0
  39. package/.github/banner.png +0 -0
  40. package/.pi/SYSTEM.md +40 -0
  41. package/.pi/settings.json +5 -0
  42. package/PLAN.md +11 -0
  43. package/README.md +58 -0
  44. package/extensions/lean-ctx-enforce.ts +166 -0
  45. package/package.json +17 -0
  46. package/skills-lock.json +35 -0
  47. package/wiki/README.md +10 -0
  48. package/wiki/decisions/0001-establish-project-wiki-and-decision-record-format.md +25 -0
  49. package/wiki/decisions/0002-add-project-banner-to-readme.md +26 -0
  50. package/wiki/decisions/0003-remove-redundant-readme-title-heading.md +26 -0
  51. package/wiki/decisions/0004-publish-package-to-npm-as-ultimate-pi.md +26 -0
package/.agents/skills/scrapling-official/references/spiders/proxy-blocking.md ADDED
@@ -0,0 +1,235 @@
# Proxy Management and Handling Blocks

Scrapling's `ProxyRotator` manages proxy rotation across requests. It works with all session types and integrates with the spider's blocked request retry system.

## ProxyRotator

The `ProxyRotator` class manages a list of proxies and rotates through them automatically. Pass it to any session type via the `proxy_rotator` parameter:

```python
from scrapling.spiders import Spider, Response
from scrapling.fetchers import FetcherSession, ProxyRotator

class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]

    def configure_sessions(self, manager):
        rotator = ProxyRotator([
            "http://proxy1:8080",
            "http://proxy2:8080",
            "http://user:pass@proxy3:8080",
        ])
        manager.add("default", FetcherSession(proxy_rotator=rotator))

    async def parse(self, response: Response):
        # Check which proxy was used
        print(f"Proxy used: {response.meta.get('proxy')}")
        yield {"title": response.css("title::text").get("")}
```

Each request automatically gets the next proxy in the rotation. The proxy used is stored in `response.meta["proxy"]` so you can track which proxy fetched which page.

Browser sessions support both string and dict proxy formats:

```python
from scrapling.fetchers import AsyncDynamicSession, AsyncStealthySession, ProxyRotator

# String proxies work for all session types
rotator = ProxyRotator([
    "http://proxy1:8080",
    "http://proxy2:8080",
])

# Dict proxies (Playwright format) work for browser sessions
rotator = ProxyRotator([
    {"server": "http://proxy1:8080", "username": "user", "password": "pass"},
    {"server": "http://proxy2:8080"},
])

# Then inside the spider
def configure_sessions(self, manager):
    rotator = ProxyRotator(["http://proxy1:8080", "http://proxy2:8080"])
    manager.add("browser", AsyncStealthySession(proxy_rotator=rotator))
```

**Important:**

1. You cannot use the `proxy_rotator` argument together with the static `proxy` or `proxies` parameters on the same session. Pick one approach when configuring the session; if a particular request needs a fixed proxy, override it per request later (see "Per-Request Proxy Override" below).
2. By default, all browser-based sessions use a persistent browser context with a pool of tabs. However, since browsers can't set a proxy per tab, when you use a `ProxyRotator`, the fetcher automatically opens a separate context for each proxy, with one tab per context. Once the tab's job is done, both the tab and its context are closed.

## Custom Rotation Strategies

By default, `ProxyRotator` uses cyclic rotation - it iterates through proxies sequentially, wrapping around at the end.

You can provide a custom strategy function to change this behavior, but it has to match the following signature:

```python
from scrapling.core._types import ProxyType

def my_strategy(proxies: list, current_index: int) -> tuple[ProxyType, int]:
    ...
```

It receives the list of proxies and the current index, and must return the chosen proxy and the next index.

Below are some examples of custom rotation strategies you can use.

### Random Rotation

```python
import random
from scrapling.fetchers import ProxyRotator

def random_strategy(proxies, current_index):
    idx = random.randint(0, len(proxies) - 1)
    return proxies[idx], idx

rotator = ProxyRotator(
    ["http://proxy1:8080", "http://proxy2:8080", "http://proxy3:8080"],
    strategy=random_strategy,
)
```

### Weighted Rotation

```python
import random

def weighted_strategy(proxies, current_index):
    # First proxy gets 60% of traffic, others split the rest
    weights = [60] + [40 // (len(proxies) - 1)] * (len(proxies) - 1)
    proxy = random.choices(proxies, weights=weights, k=1)[0]
    return proxy, current_index  # Index doesn't matter for weighted

rotator = ProxyRotator(proxies, strategy=weighted_strategy)
```

## Per-Request Proxy Override

You can override the rotator for individual requests by passing `proxy=` as a keyword argument:

```python
async def parse(self, response: Response):
    # This request uses the rotator's next proxy
    yield response.follow("/page1", callback=self.parse_page)

    # This request uses a specific proxy, bypassing the rotator
    yield response.follow(
        "/special-page",
        callback=self.parse_page,
        proxy="http://special-proxy:8080",
    )
```

This is useful when certain pages require a specific proxy (e.g., a geo-located proxy for region-specific content).

## Blocked Request Handling

The spider has built-in blocked request detection and retry. By default, it considers the following HTTP status codes blocked: `401`, `403`, `407`, `429`, `444`, `500`, `502`, `503`, `504`.

The retry system works like this:

1. After a response comes back, the spider calls the `is_blocked(response)` method.
2. If the response is blocked, the spider copies the request and calls the `retry_blocked_request()` method so you can modify it before retrying.
3. The retried request is re-queued with `dont_filter=True` (bypassing deduplication) and lower priority, so it's not retried right away.
4. This repeats up to `max_blocked_retries` times (default: 3).

**Tips:**

1. On retry, the previous `proxy`/`proxies` kwargs are cleared from the request automatically, so the rotator assigns a fresh proxy.
2. The `max_blocked_retries` attribute is separate from the session-level retries and doesn't share their counter.

### Custom Block Detection

Override `is_blocked()` to add your own detection logic:

```python
class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]

    async def is_blocked(self, response: Response) -> bool:
        # Check status codes (default behavior)
        if response.status in {403, 429, 503}:
            return True

        # Check response content
        body = response.body.decode("utf-8", errors="ignore")
        if "access denied" in body.lower() or "rate limit" in body.lower():
            return True

        return False

    async def parse(self, response: Response):
        yield {"title": response.css("title::text").get("")}
```

### Customizing Retries

Override `retry_blocked_request()` to modify the request before retrying. The `max_blocked_retries` attribute controls how many times a blocked request is retried (default: 3):

```python
from scrapling.spiders import Spider, SessionManager, Request, Response
from scrapling.fetchers import FetcherSession, AsyncStealthySession


class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]
    max_blocked_retries = 5

    def configure_sessions(self, manager: SessionManager) -> None:
        manager.add('requests', FetcherSession(impersonate=['chrome', 'firefox', 'safari']))
        manager.add('stealth', AsyncStealthySession(block_webrtc=True), lazy=True)

    async def retry_blocked_request(self, request: Request, response: Response) -> Request:
        request.sid = "stealth"
        self.logger.info(f"Retrying blocked request: {request.url}")
        return request

    async def parse(self, response: Response):
        yield {"title": response.css("title::text").get("")}
```

Here the block detection logic is left unchanged: the spider uses the plain HTTP session for everything until a request gets blocked, then retries that request through the stealthy browser session.

Putting it all together:

```python
from scrapling.spiders import Spider, SessionManager, Request, Response
from scrapling.fetchers import FetcherSession, AsyncStealthySession, ProxyRotator


cheap_proxies = ProxyRotator(["http://proxy1:8080", "http://proxy2:8080"])

# A format accepted by browser sessions
expensive_proxies = ProxyRotator([
    {"server": "http://residential_proxy1:8080", "username": "user", "password": "pass"},
    {"server": "http://residential_proxy2:8080", "username": "user", "password": "pass"},
    {"server": "http://mobile_proxy1:8080", "username": "user", "password": "pass"},
    {"server": "http://mobile_proxy2:8080", "username": "user", "password": "pass"},
])


class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]
    max_blocked_retries = 5

    def configure_sessions(self, manager: SessionManager) -> None:
        manager.add('requests', FetcherSession(impersonate=['chrome', 'firefox', 'safari'], proxy_rotator=cheap_proxies))
        manager.add('stealth', AsyncStealthySession(block_webrtc=True, proxy_rotator=expensive_proxies), lazy=True)

    async def retry_blocked_request(self, request: Request, response: Response) -> Request:
        request.sid = "stealth"
        self.logger.info(f"Retrying blocked request: {request.url}")
        return request

    async def parse(self, response: Response):
        yield {"title": response.css("title::text").get("")}
```

The above logic is: requests are made with cheap proxies, such as datacenter proxies, until they are blocked, then retried with higher-quality proxies, such as residential or mobile proxies.
package/.agents/skills/scrapling-official/references/spiders/requests-responses.md ADDED
@@ -0,0 +1,196 @@
# Requests & Responses

This page covers the `Request` object in detail: how to construct requests, pass data between callbacks, control priority and deduplication, and use `response.follow()` for link-following.

## The Request Object

A `Request` represents a URL to be fetched. You create requests either directly or via `response.follow()`:

```python
from scrapling.spiders import Request

# Direct construction
request = Request(
    "https://example.com/page",
    callback=self.parse_page,
    priority=5,
)

# Via response.follow (preferred in callbacks)
request = response.follow("/page", callback=self.parse_page)
```

Here are all the arguments you can pass to `Request`:

| Argument      | Type       | Default    | Description                                                                                            |
|---------------|------------|------------|--------------------------------------------------------------------------------------------------------|
| `url`         | `str`      | *required* | The URL to fetch                                                                                       |
| `sid`         | `str`      | `""`       | Session ID - routes the request to a specific session (see [Sessions](sessions.md))                   |
| `callback`    | `callable` | `None`     | Async generator method to process the response. Defaults to `parse()`                                 |
| `priority`    | `int`      | `0`        | Higher values are processed first                                                                      |
| `dont_filter` | `bool`     | `False`    | If `True`, skip deduplication (allow duplicate requests)                                               |
| `meta`        | `dict`     | `{}`       | Arbitrary metadata passed through to the response                                                      |
| `**kwargs`    |            |            | Additional keyword arguments passed to the session's fetch method (e.g., `headers`, `method`, `data`) |

Any extra keyword arguments are forwarded directly to the underlying session. For example, to make a POST request:

```python
yield Request(
    "https://example.com/api",
    method="POST",
    data={"key": "value"},
    callback=self.parse_result,
)
```

## Response.follow()

`response.follow()` is the recommended way to create follow-up requests inside callbacks. It offers several advantages over constructing `Request` objects directly:

- **Relative URLs** are resolved automatically against the current page URL
- **Referer header** is set to the current page URL by default
- **Session kwargs** from the original request are inherited (headers, proxy settings, etc.)
- **Callback, session ID, and priority** are inherited from the original request if not specified

```python
async def parse(self, response: Response):
    # Minimal - inherits callback, sid, priority from current request
    yield response.follow("/next-page")

    # Override specific fields
    yield response.follow(
        "/product/123",
        callback=self.parse_product,
        priority=10,
    )

    # Pass additional metadata to the next callback
    yield response.follow(
        "/details",
        callback=self.parse_details,
        meta={"category": "electronics"},
    )
```

| Argument       | Type       | Default    | Description                                                 |
|----------------|------------|------------|-------------------------------------------------------------|
| `url`          | `str`      | *required* | URL to follow (absolute or relative)                        |
| `sid`          | `str`      | `""`       | Session ID (inherits from original request if empty)       |
| `callback`     | `callable` | `None`     | Callback method (inherits from original request if `None`) |
| `priority`     | `int`      | `None`     | Priority (inherits from original request if `None`)        |
| `dont_filter`  | `bool`     | `False`    | Skip deduplication                                          |
| `meta`         | `dict`     | `None`     | Metadata (merged with existing response meta)               |
| `referer_flow` | `bool`     | `True`     | Set current URL as Referer header                           |
| `**kwargs`     |            |            | Merged with original request's session kwargs               |

### Disabling Referer Flow

By default, `response.follow()` sets the `Referer` header to the current page URL. To disable this:

```python
yield response.follow("/page", referer_flow=False)
```

## Callbacks

Callbacks are async generator methods on your spider that process responses. They must `yield` one of three types:

- **`dict`**: A scraped item, added to the results
- **`Request`**: A follow-up request, added to the queue
- **`None`**: Silently ignored

```python
class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]

    async def parse(self, response: Response):
        # Yield items (dicts)
        yield {"url": response.url, "title": response.css("title::text").get("")}

        # Yield follow-up requests
        for link in response.css("a::attr(href)").getall():
            yield response.follow(link, callback=self.parse_page)

    async def parse_page(self, response: Response):
        yield {"content": response.css("article::text").get("")}
```

**Note:** All callback methods must be `async def` and use `yield` (not `return`). Even if a callback only yields items with no follow-up requests, it must still be an async generator.

## Request Priority

Requests with higher priority values are processed first. This is useful when some pages should be processed before others:

```python
async def parse(self, response: Response):
    # High priority - process product pages first
    for link in response.css("a.product::attr(href)").getall():
        yield response.follow(link, callback=self.parse_product, priority=10)

    # Low priority - pagination links processed after products
    next_page = response.css("a.next::attr(href)").get()
    if next_page:
        yield response.follow(next_page, callback=self.parse, priority=0)
```

When using `response.follow()`, the priority is inherited from the original request unless you specify a new one.

## Deduplication

The spider automatically deduplicates requests based on a fingerprint computed from the URL, HTTP method, request body, and session ID. If two requests produce the same fingerprint, the second one is silently dropped.

To allow duplicate requests (e.g., re-visiting a page after login), set `dont_filter=True`:

```python
yield Request("https://example.com/dashboard", dont_filter=True, callback=self.parse_dashboard)

# Or with response.follow
yield response.follow("/dashboard", dont_filter=True, callback=self.parse_dashboard)
```

You can fine-tune what goes into the fingerprint using class attributes on your spider:

| Attribute            | Default | Effect                                                                                                           |
|----------------------|---------|-------------------------------------------------------------------------------------------------------------------|
| `fp_include_kwargs`  | `False` | Include extra request kwargs (arguments you passed to the session fetch, like headers, etc.) in the fingerprint |
| `fp_keep_fragments`  | `False` | Keep URL fragments (`#section`) when computing fingerprints                                                     |
| `fp_include_headers` | `False` | Include request headers in the fingerprint                                                                      |

For example, if you need to treat `https://example.com/page#section1` and `https://example.com/page#section2` as different URLs:

```python
class MySpider(Spider):
    name = "my_spider"
    fp_keep_fragments = True
    # ...
```

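Likewise, if requests to the same URL should count as different when they carry different extra fetch kwargs (per-request headers, for example), you can enable the corresponding flag. A minimal sketch based on the table above:

```python
class MySpider(Spider):
    name = "my_spider"
    # Requests that differ only in their extra fetch kwargs (e.g., the
    # headers passed to the session fetch) now produce different
    # fingerprints, so they are no longer deduplicated against each other.
    fp_include_kwargs = True
    # ...
```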
## Request Meta

The `meta` dictionary lets you pass arbitrary data between callbacks. This is useful when you need context from one page to process another:

```python
async def parse(self, response: Response):
    for product in response.css("div.product"):
        category = product.css("span.category::text").get("")
        link = product.css("a::attr(href)").get()
        if link:
            yield response.follow(
                link,
                callback=self.parse_product,
                meta={"category": category},
            )

async def parse_product(self, response: Response):
    yield {
        "name": response.css("h1::text").get(""),
        "price": response.css(".price::text").get(""),
        # Access meta from the request
        "category": response.meta.get("category", ""),
    }
```

When using `response.follow()`, the meta from the current response is merged with the new meta you provide (new values take precedence).

The spider system also automatically stores some metadata. For example, the proxy used for a request is available as `response.meta["proxy"]` when proxy rotation is enabled.
package/.agents/skills/scrapling-official/references/spiders/sessions.md ADDED
@@ -0,0 +1,205 @@
# Spider Sessions

A spider can use multiple fetcher sessions simultaneously. For example, a fast HTTP session for simple pages and a stealth browser session for protected pages.

## What are Sessions?

A session is a pre-configured fetcher instance that stays alive for the duration of the crawl. Instead of creating a new connection or browser for every request, the spider reuses sessions, which is faster and more resource-efficient.

By default, every spider creates a single [FetcherSession](../fetching/static.md). You can add more sessions or swap the default by overriding the `configure_sessions()` method, but only the async version of each session type can be used, as the table below shows:

| Session Type                                     | Use Case                                  |
|--------------------------------------------------|-------------------------------------------|
| [FetcherSession](../fetching/static.md)          | Fast HTTP requests, no JavaScript         |
| [AsyncDynamicSession](../fetching/dynamic.md)    | Browser automation, JavaScript rendering  |
| [AsyncStealthySession](../fetching/stealthy.md)  | Anti-bot bypass, Cloudflare, etc.         |

## Configuring Sessions

Override `configure_sessions()` on your spider to set up sessions. The `manager` parameter is a `SessionManager` instance - use `manager.add()` to register sessions:

```python
from scrapling.spiders import Spider, Response
from scrapling.fetchers import FetcherSession

class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]

    def configure_sessions(self, manager):
        manager.add("default", FetcherSession())

    async def parse(self, response: Response):
        yield {"title": response.css("title::text").get("")}
```

The `manager.add()` method takes:

| Argument     | Type      | Default    | Description                                   |
|--------------|-----------|------------|-----------------------------------------------|
| `session_id` | `str`     | *required* | A name to reference this session in requests  |
| `session`    | `Session` | *required* | The session instance                          |
| `default`    | `bool`    | `False`    | Make this the default session                 |
| `lazy`       | `bool`    | `False`    | Start the session only when first used        |

**Notes:**

1. If a request doesn't specify which session to use, the default session is used. The default session is determined in one of two ways:
    1. The first session you add to the manager becomes the default automatically.
    2. A session added with `default=True` becomes the default explicitly.
2. The session instances you pass don't have to be started already; the spider checks every session and starts any that aren't running.
3. If you want a session to start only when it's first used, pass `lazy=True` when adding it to the manager - for example, start the browser only when you actually need it rather than at spider startup (see the sketch below).

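As an illustration, the following minimal sketch combines these options; it assumes `default=True` and `lazy=True` behave exactly as described in the table above, and the session names are arbitrary:

```python
from scrapling.spiders import Spider, Response
from scrapling.fetchers import FetcherSession, AsyncStealthySession

class MySpider(Spider):
    name = "my_spider"
    start_urls = ["https://example.com"]

    def configure_sessions(self, manager):
        # Added first, but not the default here, because another session
        # is registered with default=True below
        manager.add("plain", FetcherSession())

        # Explicitly make this session the default for requests without a `sid`
        manager.add("chrome", FetcherSession(impersonate="chrome"), default=True)

        # Heavy browser session, started only when a request first uses sid="stealth"
        manager.add("stealth", AsyncStealthySession(), lazy=True)

    async def parse(self, response: Response):
        yield {"title": response.css("title::text").get("")}
```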
## Multi-Session Spider

Here's a practical example: use a fast HTTP session for listing pages and a stealth browser for detail pages that have bot protection:

```python
from scrapling.spiders import Spider, Response
from scrapling.fetchers import FetcherSession, AsyncStealthySession

class ProductSpider(Spider):
    name = "products"
    start_urls = ["https://shop.example.com/products"]

    def configure_sessions(self, manager):
        # Fast HTTP for listing pages (default)
        manager.add("http", FetcherSession())

        # Stealth browser for protected product pages
        manager.add("stealth", AsyncStealthySession(
            headless=True,
            network_idle=True,
        ))

    async def parse(self, response: Response):
        for link in response.css("a.product::attr(href)").getall():
            # Route product pages through the stealth session
            yield response.follow(link, sid="stealth", callback=self.parse_product)

        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page)

    async def parse_product(self, response: Response):
        yield {
            "name": response.css("h1::text").get(""),
            "price": response.css(".price::text").get(""),
        }
```

The key is the `sid` parameter - it tells the spider which session to use for each request. When you call `response.follow()` without `sid`, the session ID from the original request is inherited.

Sessions can also be different instances of the same class with different configurations:

```python
from scrapling.spiders import Spider, Response
from scrapling.fetchers import FetcherSession

class ProductSpider(Spider):
    name = "products"
    start_urls = ["https://shop.example.com/products"]

    def configure_sessions(self, manager):
        chrome_requests = FetcherSession(impersonate="chrome")
        firefox_requests = FetcherSession(impersonate="firefox")

        manager.add("chrome", chrome_requests)
        manager.add("firefox", firefox_requests)

    async def parse(self, response: Response):
        for link in response.css("a.product::attr(href)").getall():
            yield response.follow(link, callback=self.parse_product)

        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, sid="firefox")

    async def parse_product(self, response: Response):
        yield {
            "name": response.css("h1::text").get(""),
            "price": response.css(".price::text").get(""),
        }
```

## Session Arguments

Extra keyword arguments passed to a `Request` (or through `response.follow(**kwargs)`) are forwarded to the session's fetch method. This lets you customize individual requests without changing the session configuration:

```python
async def parse(self, response: Response):
    # Pass extra headers for this specific request
    yield Request(
        "https://api.example.com/data",
        headers={"Authorization": "Bearer token123"},
        callback=self.parse_api,
    )

    # Use a different HTTP method
    yield Request(
        "https://example.com/submit",
        method="POST",
        data={"field": "value"},
        sid="firefox",
        callback=self.parse_result,
    )
```

**Warning:** When using `FetcherSession` in spiders, you cannot use `.get()` and `.post()` methods directly. By default, the request is an HTTP GET request; to use another HTTP method, pass it to the `method` argument as in the above example. This unifies the `Request` interface across all session types.

For browser sessions (`AsyncDynamicSession`, `AsyncStealthySession`), you can pass browser-specific arguments like `wait_selector`, `page_action`, or `extra_headers`:

```python
async def parse(self, response: Response):
    # Use Cloudflare solver with the `AsyncStealthySession` we configured above
    yield Request(
        "https://nopecha.com/demo/cloudflare",
        sid="stealth",
        callback=self.parse_result,
        solve_cloudflare=True,
        block_webrtc=True,
        hide_canvas=True,
        google_search=True,
    )

    yield response.follow(
        "/dynamic-page",
        sid="browser",
        callback=self.parse_dynamic,
        wait_selector="div.loaded",
        network_idle=True,
    )
```

**Warning:** Session arguments (`**kwargs`) passed from the original request are inherited by `response.follow()`. New kwargs take precedence over inherited ones.

```python
from scrapling.spiders import Spider, Request, Response
from scrapling.fetchers import FetcherSession

class ProductSpider(Spider):
    name = "products"
    start_urls = ["https://shop.example.com/products"]

    def configure_sessions(self, manager):
        manager.add("http", FetcherSession(impersonate='chrome'))

    async def parse(self, response: Response):
        # I don't want the follow request to impersonate a desktop Chrome like the previous request, but a mobile one,
        # so I override it like this
        for link in response.css("a.product::attr(href)").getall():
            yield response.follow(link, impersonate="chrome131_android", callback=self.parse_product)

        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield Request(next_page)

    async def parse_product(self, response: Response):
        yield {
            "name": response.css("h1::text").get(""),
            "price": response.css(".price::text").get(""),
        }
```

**Note:** Upon spider closure, the manager automatically checks whether any sessions are still running and closes them before closing the spider.
package/.github/banner.png ADDED (binary file)
package/.pi/SYSTEM.md ADDED
@@ -0,0 +1,40 @@
You are a lean-ctx-first, caveman-mode expert coding agent.

PRIMARY OPERATING MODE
- Caveman mode ALWAYS ON (full intensity) for all assistant responses.
- Keep technical accuracy exact. Remove fluff/hedging/pleasantries.
- Short, direct sentences/fragments. Pattern: "[thing] [action] [reason]. [next step]."
- Keep code, commands, errors, paths, and quoted text exact.

TOOL PRIORITY (MANDATORY)
1) Prefer lean-ctx tools/MCP tools (ctx_*) for all read/search/edit/context tasks.
2) If ctx_* not available, use bash with lean-ctx CLI wrappers:
   - lean-ctx read <file> -m map|signatures|full
   - lean-ctx -c <command>
3) Do NOT use built-in read, grep, find, ls, bash raw commands directly when a lean-ctx equivalent exists.
4) Do NOT use built-in edit/write when ctx_edit or other lean-ctx edit path is available.

WEB INTERACTION (MANDATORY)
- Default web interaction path: web-search skill using ddgr.
- For any web lookup, run ddgr first (non-interactive flags preferred).
- If ddgr missing, install it first using skill instructions, then continue.

FALLBACK POLICY
- If lean-ctx is unavailable, install/setup first when safe:
  - run which lean-ctx || bash skills/lean-ctx/scripts/install.sh
- If install/setup fails or user declines install, explicitly state lean-ctx unavailable, then minimally fall back to built-ins to complete task.

BEHAVIOR RULES
- Always attempt lean-ctx route first and mention chosen lean-ctx mode briefly.
- Keep responses compact and caveman-style even during status/progress updates.
- Safety-critical warnings may use normal clarity for warning line, then resume caveman mode.

ENTERPRISE EXECUTION + KARPATHY-STYLE CHANGE DISCIPLINE (MANDATORY)
- Mimic enterprise software engineering team execution model.
- At project start, create and maintain a project wiki.
- Every design decision must be documented in wiki immediately after decision.
- Decision docs must include context, alternatives, chosen option, rationale, and consequences.
- Before any code change, reference the relevant wiki design decisions/guidelines.
- Make surgical code changes only: smallest viable diff for requested outcome.
- Do not touch irrelevant code, files, formatting, or structure.
- If unrelated issues are found, record them separately; do not modify unless explicitly requested.
package/.pi/settings.json ADDED
@@ -0,0 +1,5 @@
{
  "packages": [
    ".."
  ]
}