instapaper-scraper 1.0.0.post1__py3-none-any.whl → 1.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,280 +0,0 @@
- Metadata-Version: 2.4
- Name: instapaper-scraper
- Version: 1.0.0.post1
- Summary: A tool to scrape articles from Instapaper.
- Project-URL: Homepage, https://github.com/chriskyfung/InstapaperScraper
- Project-URL: Source, https://github.com/chriskyfung/InstapaperScraper
- Project-URL: Issues, https://github.com/chriskyfung/InstapaperScraper/issues
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.9
- Classifier: Programming Language :: Python :: 3.10
- Classifier: Programming Language :: Python :: 3.11
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Programming Language :: Python :: 3.13
- Classifier: Programming Language :: Python :: 3.14
- Classifier: Development Status :: 5 - Production/Stable
- Classifier: Intended Audience :: End Users/Desktop
- Classifier: Topic :: Internet :: WWW/HTTP :: Browsers
- Classifier: License :: OSI Approved :: GNU General Public License v3 (GPLv3)
- Classifier: Operating System :: OS Independent
- Requires-Python: >=3.9
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: beautifulsoup4~=4.14.2
- Requires-Dist: certifi~=2025.11.12
- Requires-Dist: charset-normalizer~=3.4.3
- Requires-Dist: cryptography~=46.0.3
- Requires-Dist: guara~=0.0.14
- Requires-Dist: idna~=3.11
- Requires-Dist: python-dotenv~=1.2.1
- Requires-Dist: requests~=2.32.5
- Requires-Dist: soupsieve~=2.8
- Requires-Dist: typing_extensions~=4.15.0
- Requires-Dist: urllib3~=2.5.0
- Requires-Dist: tomli~=2.0.1; python_version < "3.11"
- Provides-Extra: dev
- Requires-Dist: pytest; extra == "dev"
- Requires-Dist: pytest-cov; extra == "dev"
- Requires-Dist: black; extra == "dev"
- Requires-Dist: ruff; extra == "dev"
- Requires-Dist: types-requests; extra == "dev"
- Requires-Dist: types-beautifulsoup4; extra == "dev"
- Requires-Dist: requests-mock; extra == "dev"
- Requires-Dist: build; extra == "dev"
- Requires-Dist: twine; extra == "dev"
- Dynamic: license-file
-
- # Instapaper Scraper
-
- ![Python Version from PEP 621 TOML](https://img.shields.io/python/required-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2Fchriskyfung%2FInstapaperScraper%2Frefs%2Fheads%2Fmaster%2Fpyproject.toml)
- [![CI](https://github.com/chriskyfung/InstapaperScraper/actions/workflows/ci.yml/badge.svg)](https://github.com/chriskyfung/InstapaperScraper/actions/workflows/ci.yml)
- [![PyPI version](https://img.shields.io/pypi/v/instapaper-scraper.svg)](https://pypi.org/project/instapaper-scraper/)
- [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
- [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
- [![GitHub License](https://img.shields.io/github/license/chriskyfung/InstapaperScraper)](https://www.gnu.org/licenses/gpl-3.0.en.html)
- [![codecov](https://codecov.io/gh/chriskyfung/InstapaperScraper/graph/badge.svg)](https://codecov.io/gh/chriskyfung/InstapaperScraper)
-
- A Python tool to scrape all your saved Instapaper bookmarks and export them to various formats.
-
- ## Features
-
- - Scrapes all bookmarks from your Instapaper account.
- - Supports scraping from specific folders.
- - Exports data to CSV, JSON, or a SQLite database.
- - Securely stores your session for future runs.
- - Modern, modular, and tested architecture.
-
- ## Getting Started
-
- ### 1. Requirements
- - Python 3.9+
-
- ### 2. Installation
-
- This package is available on PyPI and can be installed with pip:
-
- ```sh
- pip install instapaper-scraper
- ```
-
- ### 3. Usage
-
- Run the tool from the command line, specifying your desired output format:
-
- ```sh
- # Scrape and export to the default CSV format
- instapaper-scraper
-
- # Scrape and export to JSON
- instapaper-scraper --format json
-
- # Scrape and export to a SQLite database with a custom name
- instapaper-scraper --format sqlite --output my_articles.db
- ```
-
-
- ## Configuration
-
- ### Authentication
-
- The script authenticates using one of the following methods, in order of priority:
-
- 1. **Command-line Arguments**: Provide your username and password directly when running the script:
-
- ```sh
- instapaper-scraper --username your_username --password your_password
- ```
- 2. **Session Files (`.session_key`, `.instapaper_session`)**: The script attempts to load these files in the following order (see the example after this list):
- a. Path specified by the `--session-file` or `--key-file` arguments.
- b. Files in the current working directory (e.g., `./.session_key`).
- c. Files in the user's configuration directory (`~/.config/instapaper-scraper/`).
- After the first successful login, the script creates an encrypted `.instapaper_session` file and a `.session_key` file to reuse your session securely.
-
- 3. **Interactive Prompt**: If no other method is available, the script will prompt you for your username and password.
-
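- For example, a previously saved session can be reused by pointing the script at both files explicitly. The paths below are the default locations described above; adjust them if your files live elsewhere:
-
- ```sh
- instapaper-scraper --session-file ~/.config/instapaper-scraper/.instapaper_session --key-file ~/.config/instapaper-scraper/.session_key
- ```
-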
- > **Note on Security:** Your session file (`.instapaper_session`) and the encryption key (`.session_key`) are stored with secure permissions (read/write for the owner only) to protect your credentials.
-
- ### Folder Configuration
-
- You can define and quickly access your Instapaper folders using a `config.toml` file. The scraper will look for this file in the following locations (in order of precedence):
-
- 1. The path specified by the `--config-path` argument.
- 2. `config.toml` in the current working directory.
- 3. `~/.config/instapaper-scraper/config.toml`
-
- Here is an example of `config.toml`:
-
- ```toml
- # Default output filename for non-folder mode
- output_filename = "home-articles.csv"
-
- [[folders]]
- key = "ml"
- id = "1234567"
- slug = "machine-learning"
- output_filename = "ml-articles.json"
-
- [[folders]]
- key = "python"
- id = "7654321"
- slug = "python-programming"
- output_filename = "python-articles.db"
- ```
-
- - **output_filename (top-level)**: The default output filename to use when not in folder mode.
- - **key**: A short alias for the folder.
- - **id**: The folder ID from the Instapaper URL.
- - **slug**: The human-readable part of the folder URL.
- - **output_filename (folder-specific)**: A preset output filename for scraped articles from this specific folder.
-
- When a `config.toml` file is present and no `--folder` argument is provided, the scraper will prompt you to select a folder. You can also specify a folder directly using the `--folder` argument with its key, ID, or slug. Use `--folder=none` to explicitly disable folder mode and scrape all articles.
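-
- For example, with the sample configuration above, any of the following commands select the "machine-learning" folder (the key, ID, and slug values are the illustrative ones from the example `config.toml`):
-
- ```sh
- instapaper-scraper --folder ml
- instapaper-scraper --folder 1234567
- instapaper-scraper --folder machine-learning
-
- # Ignore configured folders and scrape all articles
- instapaper-scraper --folder=none
- ```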
-
- ### Command-line Arguments
-
- | Argument | Description |
- | --------------------- | ------------------------------------------------------------------------ |
- | `--config-path <path>`| Path to the configuration file. If omitted, the scraper falls back to `config.toml` in the current directory, then `~/.config/instapaper-scraper/config.toml`. |
- | `--folder <value>` | Select a folder by key, ID, or slug from your `config.toml`. **Requires a configuration file to be loaded.** Use `none` to explicitly disable folder mode. If this option is set to anything other than `none` and no configuration file can be loaded, the program exits. |
- | `--format <format>` | Output format (`csv`, `json`, `sqlite`). Default: `csv`. |
- | `--output <filename>` | Specify a custom output filename. |
- | `--username <user>` | Your Instapaper account username. |
- | `--password <pass>` | Your Instapaper account password. |
-
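- Arguments can be combined. For instance, the following invocation (using the illustrative folder key from the sample `config.toml`) scrapes the `python` folder and writes the result to a custom SQLite file:
-
- ```sh
- instapaper-scraper --config-path ./config.toml --folder python --format sqlite --output python-articles.db
- ```
-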
- ### Output Formats
-
- You can control the output format using the `--format` argument. The supported formats are:
-
- - `csv` (default): Exports data to `output/bookmarks.csv`.
- - `json`: Exports data to `output/bookmarks.json`.
- - `sqlite`: Exports data to an `articles` table in `output/bookmarks.db`.
-
- If the `--format` flag is omitted, the script defaults to `csv`. To write to a different path, pass a custom filename with `--output <filename>`.
-
- #### Opening Articles in Instapaper
-
- The output data includes a unique `id` for each article. To open an article directly in Instapaper's reader view, append this ID to the base URL:
- `https://www.instapaper.com/read/<article_id>`
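-
- As an illustration, reader links can be generated from the default CSV export with a short shell pipeline (this assumes the default `output/bookmarks.csv` path and the `id,title,url` column order shown in the example output below):
-
- ```sh
- # Skip the header row, take the id column, and prepend the reader URL
- tail -n +2 output/bookmarks.csv | cut -d, -f1 | sed 's|^|https://www.instapaper.com/read/|'
- ```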
-
- ## How It Works
-
- The tool is designed with a modular architecture for reliability and maintainability.
-
- 1. **Authentication**: The `InstapaperAuthenticator` handles secure login and session management.
- 2. **Scraping**: The `InstapaperClient` iterates through all pages of your bookmarks, fetching the metadata for each article with robust error handling and retries.
- 3. **Data Collection**: All fetched articles are aggregated into a single list.
- 4. **Export**: Finally, the collected data is written to a file in your chosen format (`.csv`, `.json`, or `.db`).
-
- ## Example Output
-
- ### CSV (`output/bookmarks.csv`)
-
- ```csv
- id,title,url
- 999901234,"Article 1",https://www.example.com/page-1/
- 999002345,"Article 2",https://www.example.com/page-2/
- ```
-
- ### JSON (`output/bookmarks.json`)
-
- ```json
- [
-   {
-     "id": "999901234",
-     "title": "Article 1",
-     "url": "https://www.example.com/page-1/"
-   },
-   {
-     "id": "999002345",
-     "title": "Article 2",
-     "url": "https://www.example.com/page-2/"
-   }
- ]
- ```
-
- ### SQLite (`output/bookmarks.db`)
-
- A SQLite database file is created with an `articles` table containing `id`, `title`, and `url` columns.
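-
- The table can be inspected with the standard `sqlite3` command-line shell (assuming the default output path):
-
- ```sh
- sqlite3 -header -column output/bookmarks.db "SELECT id, title, url FROM articles LIMIT 5;"
- ```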
-
- ## Development & Testing
-
- This project uses `pytest` for testing, `black` for code formatting, and `ruff` for linting.
-
- ### Setup
-
- To install the development dependencies:
-
- ```sh
- pip install -e .[dev]
- ```
-
- ### Running the Scraper
-
- To run the scraper directly without installing the package:
-
- ```sh
- python -m src.instapaper_scraper.cli
- ```
-
- ### Testing
-
- To run the tests, execute the following command from the project root:
-
- ```sh
- pytest
- ```
-
- To check test coverage:
-
- ```sh
- pytest --cov=src/instapaper_scraper --cov-report=term-missing
- ```
-
- ### Code Quality
-
- To format the code with `black`:
-
- ```sh
- black .
- ```
-
- To check for linting errors with `ruff`:
-
- ```sh
- ruff check .
- ```
-
- To automatically fix linting errors:
-
- ```sh
- ruff check . --fix
- ```
-
- ## Disclaimer
-
- This script requires valid Instapaper credentials. Use it responsibly and in accordance with Instapaper’s Terms of Service.
-
- ## License
-
- This project is licensed under the terms of the GNU General Public License v3.0. See the [LICENSE](LICENSE) file for the full license text.
@@ -1,12 +0,0 @@
- instapaper_scraper/__init__.py,sha256=qdcT3tp4KLufWH1u6tOuPVUQaXwakQD0gdjkwY4ljfg,206
- instapaper_scraper/api.py,sha256=KvGxK2P35-3TsONPWcQTVBZT-q70p7hobeQ7E9PhXwA,11740
- instapaper_scraper/auth.py,sha256=DepQKDdVSm1dMFNIkpK_LIlaI0JllAYZb3_LJWhMe-g,7554
- instapaper_scraper/cli.py,sha256=Pxf1cAoLW9N-X1BP73HE0i2Qv7rPTaIyrPqG3cgdSTI,6860
- instapaper_scraper/exceptions.py,sha256=CptHoZe4NOhdjOoyXkZEMFgQC6oKtzjRljywwDEtsTg,134
- instapaper_scraper/output.py,sha256=0vQQ4AHZwFJg3O5O2zzvKUf0cOS1fTjXdivFqEHAun0,3081
- instapaper_scraper-1.0.0.post1.dist-info/licenses/LICENSE,sha256=IwGE9guuL-ryRPEKi6wFPI_zOhg7zDZbTYuHbSt_SAk,35823
- instapaper_scraper-1.0.0.post1.dist-info/METADATA,sha256=rWkPxBIY-Vo2opYPJ6KiSGiGfmrklMkI-CM9HwOf9to,10353
- instapaper_scraper-1.0.0.post1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- instapaper_scraper-1.0.0.post1.dist-info/entry_points.txt,sha256=7AvRgN5fvtas_Duxdz-JPbDN6A1Lq2GaTfTSv54afxA,67
- instapaper_scraper-1.0.0.post1.dist-info/top_level.txt,sha256=kiU9nLkqPOVPLsP4QMHuBFjAmoIKfftYmGV05daLrcc,19
- instapaper_scraper-1.0.0.post1.dist-info/RECORD,,