@the-convocation/twitter-scraper 0.15.0 → 0.15.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +2 -3
- package/dist/default/cjs/index.js +2 -2
- package/dist/default/cjs/index.js.map +1 -1
- package/dist/default/esm/index.mjs +2 -2
- package/dist/default/esm/index.mjs.map +1 -1
- package/dist/node/cjs/index.cjs +2 -2
- package/dist/node/cjs/index.cjs.map +1 -1
- package/dist/node/esm/index.mjs +2 -2
- package/dist/node/esm/index.mjs.map +1 -1
- package/dist/types/index.d.ts +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -171,14 +171,13 @@ const scraper = new Scraper({
 ### Rate limiting
 The Twitter API heavily rate-limits clients, requiring that the scraper has its own
 rate-limit handling to behave predictably when rate-limiting occurs. By default, the
-scraper uses a rate-limiting strategy that
+scraper uses a rate-limiting strategy that waits for the current rate-limiting period
 to expire before resuming requests.
 
 **This has been known to take a very long time, in some cases (up to 13 minutes).**
 
 You may want to change how rate-limiting events are handled, potentially by pooling
-scrapers logged-in to different accounts (
-README). The rate-limit handling strategy can be configured by passing a custom
+scrapers logged-in to different accounts (refer to [#116](https://github.com/the-convocation/twitter-scraper/pull/116) for how to do this yourself). The rate-limit handling strategy can be configured by passing a custom
 implementation to the `rateLimitStrategy` option in the scraper constructor:
 
 ```ts
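The diffed README passage ends just as its `rateLimitStrategy` example begins. As a rough illustration of the kind of custom strategy the passage describes, here is a minimal sketch; the `RateLimitStrategy` interface, the `RateLimitEvent` shape, and the `rateLimitResetsAt` field below are assumptions modeled on the prose, not the package's confirmed API — check the package's `dist/types/index.d.ts` for the real types.

```typescript
// ASSUMED shapes, not the package's confirmed API: a strategy object with an
// async onRateLimit hook, and an event carrying the rate-limit reset time.
interface RateLimitEvent {
  // Unix epoch seconds at which the rate-limit window is expected to reset.
  rateLimitResetsAt?: number;
}

interface RateLimitStrategy {
  onRateLimit(event: RateLimitEvent): Promise<void>;
}

// A fail-fast alternative to the default "wait out the window" behavior:
// reject immediately so the caller can react (for example, by rotating to
// another logged-in scraper from a pool) instead of blocking for minutes.
class FailFastRateLimitStrategy implements RateLimitStrategy {
  async onRateLimit(event: RateLimitEvent): Promise<void> {
    const resetsAt = event.rateLimitResetsAt
      ? new Date(event.rateLimitResetsAt * 1000).toISOString()
      : "unknown";
    throw new Error(`Rate-limited; window resets at ${resetsAt}`);
  }
}

// Hypothetical wiring into the constructor option named in the README:
// const scraper = new Scraper({
//   rateLimitStrategy: new FailFastRateLimitStrategy(),
// });
```

Throwing from the strategy trades the default's predictable-but-slow stalls for an error the caller must handle, which is what makes the pooled-scrapers approach referenced in the README practical.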