tumblrbot 1.4.2__tar.gz → 1.4.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: tumblrbot
- Version: 1.4.2
+ Version: 1.4.4
  Summary: An updated bot that posts to Tumblr, based on your very own blog!
  Requires-Python: >= 3.13
  Description-Content-Type: text/markdown
@@ -9,10 +9,9 @@ Requires-Dist: keyring
  Requires-Dist: more-itertools
  Requires-Dist: niquests[speedups, http3]
  Requires-Dist: openai
+ Requires-Dist: pwinput
  Requires-Dist: pydantic
  Requires-Dist: pydantic-settings
- Requires-Dist: requests
- Requires-Dist: requests-cache
  Requires-Dist: requests-oauthlib
  Requires-Dist: rich
  Requires-Dist: tiktoken
@@ -20,24 +19,29 @@ Requires-Dist: tomlkit
  Project-URL: Source, https://github.com/MaidThatPrograms/tumblrbot
 
  [OAuth]: https://oauth.net/1
- [OpenAI]: https://pypi.org/project/openai
  [Python]: https://python.org/download
- [Tumblr]: https://tumblr.com
 
+ [pip]: https://pypi.org
  [keyring]: https://pypi.org/project/keyring
  [Rich]: https://pypi.org/project/rich
 
+ [OpenAI]: https://pypi.org/project/openai
+ [OpenAI Pricing]: https://platform.openai.com/docs/pricing#fine-tuning
+ [OpenAI Tokens]: https://platform.openai.com/settings/organization/api-keys
+ [Fine-Tuning Portal]: https://platform.openai.com/finetune
  [Moderation API]: https://platform.openai.com/docs/api-reference/moderations
- [pip]: https://pypi.org
+
+ [Tumblr]: https://tumblr.com
+ [Tumblr Tokens]: https://tumblr.com/oauth/apps
 
  [Download]: src/tumblrbot/flow/download.py
  [Examples]: src/tumblrbot/flow/examples.py
  [Fine-Tune]: src/tumblrbot/flow/fine_tune.py
  [Generate]: src/tumblrbot/flow/generate.py
  [Main]: src/tumblrbot/__main__.py
- [README.md]: README.md
 
- [config]: #configuration
+ [Config]: #configuration
+ [Fine-Tuning]: #manual-fine-tuning
 
  # tumblrbot
  [![PyPI - Version](https://img.shields.io/pypi/v/tumblrbot)](https://python.org/pypi/tumblrbot)
@@ -64,6 +68,7 @@ Features:
  1. Provides cost estimates if the currently saved examples are used to fine-tune the [configured][config] model.
  1. [Uploads examples][Fine-Tune] to [OpenAI] and begins the fine-tuning process.
     - Resumes monitoring the same fine-tuning process when restarted.
+    - Deletes the uploaded examples file if fine-tuning does not succeed (optional).
     - Stores the output model automatically when fine-tuning is completed.
  1. [Generates and uploads posts][Generate] to the [configured][config] [Tumblr] blog using the [configured][config] fine-tuned model.
     - Creates tags by extracting keywords at the [configured][config] frequency using the [configured][config] model.
@@ -74,8 +79,10 @@ Features:
 
  **To-Do:**
  - Add code documentation.
- - Fix inaccurate post counts when downloading posts.
- - Fix file not found error when starting fine-tuning.
+
+ **Known Issues:**
+ - Sometimes, you will get an error about the training file not being found when starting fine-tuning. We do not currently have a fix or workaround for this. You should instead use the online portal for fine-tuning if this continues to happen. Read more in [fine-tuning].
+ - Post counts are incorrect when downloading posts. We are not certain what the cause of this is, but our tests suggest this is a [Tumblr] API problem that is giving inaccurate numbers.
 
 
  **Please submit an issue or contact us for features you want added/reimplemented.**
@@ -91,17 +98,17 @@ Features:
  - See [keyring] for additional requirements if you are not on Windows.
 
  ## Usage
- Run `tumblrbot` from anywhere. Run `tumblrbot --help` for command-line options. Every command-line option corresponds to a value from the [config](#configuration).
+ Run `tumblrbot` from anywhere. Run `tumblrbot --help` for command-line options. Every command-line option corresponds to a value from the [config].
 
  ## Obtaining Tokens
  ### OpenAI
- API token can be created [here](https://platform.openai.com/settings/organization/api-keys).
+ API token can be created [here][OpenAI Tokens].
  1. Leave everything at the defaults and set `Project` to `Default Project`.
  1. Press `Create secret key`.
  1. Press `Copy` to copy the API token to your clipboard.
 
  ### Tumblr
- API tokens can be created [here](https://tumblr.com/oauth/apps).
+ API tokens can be created [here][Tumblr Tokens].
  1. Press `+ Register Application`.
  1. Enter anything for `Application Name` and `Application Description`.
  1. Enter any URL for `Application Website` and `Default callback URL`, like `https://example.com`.
@@ -127,9 +134,19 @@ All file options can include directories that will be created when the program i
  ```
  - **`developer_message`** - This message is used in for fine-tuning the AI as well as generating prompts. If you change this, you will need to run the fine-tuning again with the new value before generating posts.
  - **`user_message`** - This message is used in the same way as `developer_message` and should be treated the same.
- - **`expected_epochs`** - The default value here is the default number of epochs for `base_model`. You may have to change this value if you change `base_model`. After running fine-tuning once, you will see the number of epochs used in the [fine-tuning portal](https://platform.openai.com/finetune) under *Hyperparameters*. This value will also be updated automatically if you run fine-tuning through this program.
- - **`token_price`** - The default value here is the default token price for `base_model`. You can find the up-to-date value [here](https://platform.openai.com/docs/pricing#fine-tuning), in the *Training* column.
- - **`job_id`** - If there is any value here, this program will resume monitoring the corresponding job, instead of starting a new one. This gets set when starting the fine-tuning and is cleared when it is completed. You can find job IDs in the [fine-tuning portal](https://platform.openai.com/finetune).
- - **`base_model`** - This value is used to choose the tokenizer for estimating fine-tuning costs. It is also the base model that will be fine-tuned and the model that is used to generate tags. You can find a list of options in the [fine-tuning portal](https://platform.openai.com/finetune) by pressing *+ Create* and opening the drop-down list for *Base Model*. Be sure to update `token_price` if you change this value.
+ - **`expected_epochs`** - The default value here is the default number of epochs for `base_model`. You may have to change this value if you change `base_model`. After running fine-tuning once, you will see the number of epochs used in the [fine-tuning portal] under *Hyperparameters*. This value will also be updated automatically if you run fine-tuning through this program.
+ - **`token_price`** - The default value here is the default token price for `base_model`. You can find the up-to-date value [here][OpenAI Pricing], in the *Training* column.
+ - **`job_id`** - If there is any value here, this program will resume monitoring the corresponding job, instead of starting a new one. This gets set when starting the fine-tuning and is cleared when it is completed. You can read more in [fine-tuning].
+ - **`base_model`** - This value is used to choose the tokenizer for estimating fine-tuning costs. It is also the base model that will be fine-tuned and the model that is used to generate tags. You can find a list of options in the [fine-tuning portal] by pressing `+ Create` and opening the drop-down list for `Base Model`. Be sure to update `token_price` if you change this value.
+ - **`fine_tuned_model`** - Set automatically after monitoring fine-tuning if the job has succeeded. You can read more in [fine-tuning].
  - **`tags_chance`** - This should be between 0 and 1. Setting it to 0 corresponds to a 0% chance (never) to add tags to a post. 1 corresponds to a 100% chance (always) to add tags to a post. Adding tags incurs a very small token cost.
 
+ ## Manual Fine-Tuning
+ You can manually upload the examples file to [OpenAI] and start the fine-tuning [here][fine-tuning portal].
+ 1. Press `+ Create`.
+ 1. Select the desired `Base Model` from the dropdown. This should ideally match the model set in the [config].
+ 1. Upload the generated examples file to the section under `Training data`. You can find the path for this in the [config].
+ 1. Press `Create`.
+ 1. (Optional) Copy the value next to `Job ID` and paste it into the [config] under `job_id`. You can then run the program and monitor its progress as usual.
+ 1. If you do not do the above, you will have to copy the value next to `Output model` once the job is complete and paste it into the [config] under `fine_tuned_model`.
+
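The `tags_chance` setting described in that hunk is a plain Bernoulli draw over the 0–1 range. A minimal sketch of how such a probability option is typically applied — the `should_add_tags` helper is hypothetical and not taken from tumblrbot's source:

```python
import random


def should_add_tags(tags_chance: float) -> bool:
    """Return True with probability tags_chance (0 = never, 1 = always)."""
    if not 0 <= tags_chance <= 1:
        raise ValueError("tags_chance must be between 0 and 1")
    # random.random() yields a float in [0, 1), so 0 can never win
    # the comparison and 1 always does, matching the README's wording.
    return random.random() < tags_chance


assert should_add_tags(0) is False  # 0% chance: never
assert should_add_tags(1) is True   # 100% chance: always
```

The endpoints are exact because `random.random()` is drawn from the half-open interval `[0, 1)`; any value strictly between 0 and 1 gives a genuine coin flip.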
@@ -1,22 +1,27 @@
  [OAuth]: https://oauth.net/1
- [OpenAI]: https://pypi.org/project/openai
  [Python]: https://python.org/download
- [Tumblr]: https://tumblr.com
 
+ [pip]: https://pypi.org
  [keyring]: https://pypi.org/project/keyring
  [Rich]: https://pypi.org/project/rich
 
+ [OpenAI]: https://pypi.org/project/openai
+ [OpenAI Pricing]: https://platform.openai.com/docs/pricing#fine-tuning
+ [OpenAI Tokens]: https://platform.openai.com/settings/organization/api-keys
+ [Fine-Tuning Portal]: https://platform.openai.com/finetune
  [Moderation API]: https://platform.openai.com/docs/api-reference/moderations
- [pip]: https://pypi.org
+
+ [Tumblr]: https://tumblr.com
+ [Tumblr Tokens]: https://tumblr.com/oauth/apps
 
  [Download]: src/tumblrbot/flow/download.py
  [Examples]: src/tumblrbot/flow/examples.py
  [Fine-Tune]: src/tumblrbot/flow/fine_tune.py
  [Generate]: src/tumblrbot/flow/generate.py
  [Main]: src/tumblrbot/__main__.py
- [README.md]: README.md
 
- [config]: #configuration
+ [Config]: #configuration
+ [Fine-Tuning]: #manual-fine-tuning
 
  # tumblrbot
  [![PyPI - Version](https://img.shields.io/pypi/v/tumblrbot)](https://python.org/pypi/tumblrbot)
@@ -43,6 +48,7 @@ Features:
  1. Provides cost estimates if the currently saved examples are used to fine-tune the [configured][config] model.
  1. [Uploads examples][Fine-Tune] to [OpenAI] and begins the fine-tuning process.
     - Resumes monitoring the same fine-tuning process when restarted.
+    - Deletes the uploaded examples file if fine-tuning does not succeed (optional).
     - Stores the output model automatically when fine-tuning is completed.
  1. [Generates and uploads posts][Generate] to the [configured][config] [Tumblr] blog using the [configured][config] fine-tuned model.
     - Creates tags by extracting keywords at the [configured][config] frequency using the [configured][config] model.
@@ -53,8 +59,10 @@ Features:
 
  **To-Do:**
  - Add code documentation.
- - Fix inaccurate post counts when downloading posts.
- - Fix file not found error when starting fine-tuning.
+
+ **Known Issues:**
+ - Sometimes, you will get an error about the training file not being found when starting fine-tuning. We do not currently have a fix or workaround for this. You should instead use the online portal for fine-tuning if this continues to happen. Read more in [fine-tuning].
+ - Post counts are incorrect when downloading posts. We are not certain what the cause of this is, but our tests suggest this is a [Tumblr] API problem that is giving inaccurate numbers.
 
 
  **Please submit an issue or contact us for features you want added/reimplemented.**
@@ -70,17 +78,17 @@ Features:
  - See [keyring] for additional requirements if you are not on Windows.
 
  ## Usage
- Run `tumblrbot` from anywhere. Run `tumblrbot --help` for command-line options. Every command-line option corresponds to a value from the [config](#configuration).
+ Run `tumblrbot` from anywhere. Run `tumblrbot --help` for command-line options. Every command-line option corresponds to a value from the [config].
 
  ## Obtaining Tokens
  ### OpenAI
- API token can be created [here](https://platform.openai.com/settings/organization/api-keys).
+ API token can be created [here][OpenAI Tokens].
  1. Leave everything at the defaults and set `Project` to `Default Project`.
  1. Press `Create secret key`.
  1. Press `Copy` to copy the API token to your clipboard.
 
  ### Tumblr
- API tokens can be created [here](https://tumblr.com/oauth/apps).
+ API tokens can be created [here][Tumblr Tokens].
  1. Press `+ Register Application`.
  1. Enter anything for `Application Name` and `Application Description`.
  1. Enter any URL for `Application Website` and `Default callback URL`, like `https://example.com`.
@@ -106,8 +114,18 @@ All file options can include directories that will be created when the program i
  ```
  - **`developer_message`** - This message is used in for fine-tuning the AI as well as generating prompts. If you change this, you will need to run the fine-tuning again with the new value before generating posts.
  - **`user_message`** - This message is used in the same way as `developer_message` and should be treated the same.
- - **`expected_epochs`** - The default value here is the default number of epochs for `base_model`. You may have to change this value if you change `base_model`. After running fine-tuning once, you will see the number of epochs used in the [fine-tuning portal](https://platform.openai.com/finetune) under *Hyperparameters*. This value will also be updated automatically if you run fine-tuning through this program.
- - **`token_price`** - The default value here is the default token price for `base_model`. You can find the up-to-date value [here](https://platform.openai.com/docs/pricing#fine-tuning), in the *Training* column.
- - **`job_id`** - If there is any value here, this program will resume monitoring the corresponding job, instead of starting a new one. This gets set when starting the fine-tuning and is cleared when it is completed. You can find job IDs in the [fine-tuning portal](https://platform.openai.com/finetune).
- - **`base_model`** - This value is used to choose the tokenizer for estimating fine-tuning costs. It is also the base model that will be fine-tuned and the model that is used to generate tags. You can find a list of options in the [fine-tuning portal](https://platform.openai.com/finetune) by pressing *+ Create* and opening the drop-down list for *Base Model*. Be sure to update `token_price` if you change this value.
+ - **`expected_epochs`** - The default value here is the default number of epochs for `base_model`. You may have to change this value if you change `base_model`. After running fine-tuning once, you will see the number of epochs used in the [fine-tuning portal] under *Hyperparameters*. This value will also be updated automatically if you run fine-tuning through this program.
+ - **`token_price`** - The default value here is the default token price for `base_model`. You can find the up-to-date value [here][OpenAI Pricing], in the *Training* column.
+ - **`job_id`** - If there is any value here, this program will resume monitoring the corresponding job, instead of starting a new one. This gets set when starting the fine-tuning and is cleared when it is completed. You can read more in [fine-tuning].
+ - **`base_model`** - This value is used to choose the tokenizer for estimating fine-tuning costs. It is also the base model that will be fine-tuned and the model that is used to generate tags. You can find a list of options in the [fine-tuning portal] by pressing `+ Create` and opening the drop-down list for `Base Model`. Be sure to update `token_price` if you change this value.
+ - **`fine_tuned_model`** - Set automatically after monitoring fine-tuning if the job has succeeded. You can read more in [fine-tuning].
  - **`tags_chance`** - This should be between 0 and 1. Setting it to 0 corresponds to a 0% chance (never) to add tags to a post. 1 corresponds to a 100% chance (always) to add tags to a post. Adding tags incurs a very small token cost.
+
+ ## Manual Fine-Tuning
+ You can manually upload the examples file to [OpenAI] and start the fine-tuning [here][fine-tuning portal].
+ 1. Press `+ Create`.
+ 1. Select the desired `Base Model` from the dropdown. This should ideally match the model set in the [config].
+ 1. Upload the generated examples file to the section under `Training data`. You can find the path for this in the [config].
+ 1. Press `Create`.
+ 1. (Optional) Copy the value next to `Job ID` and paste it into the [config] under `job_id`. You can then run the program and monitor its progress as usual.
+ 1. If you do not do the above, you will have to copy the value next to `Output model` once the job is complete and paste it into the [config] under `fine_tuned_model`.
@@ -1,6 +1,6 @@
  [project]
  name = "tumblrbot"
- version = "1.4.2"
+ version = "1.4.4"
  description = "An updated bot that posts to Tumblr, based on your very own blog!"
  readme = "README.md"
  requires-python = ">= 3.13"
@@ -10,10 +10,9 @@ dependencies = [
      "more-itertools",
      "niquests[speedups,http3]",
      "openai",
+     "pwinput",
      "pydantic",
      "pydantic-settings",
-     "requests",
-     "requests-cache",
      "requests-oauthlib",
      "rich",
      "tiktoken",
@@ -3,12 +3,14 @@ from typing import Annotated, Any, ClassVar, Literal, Self, override
 
  import rich
  from keyring import get_password, set_password
+ from niquests import Session
  from openai import BaseModel
+ from pwinput import pwinput
  from pydantic import ConfigDict, PlainSerializer, SecretStr
  from pydantic.json_schema import SkipJsonSchema
  from requests_oauthlib import OAuth1Session
  from rich.panel import Panel
- from rich.prompt import Confirm, Prompt
+ from rich.prompt import Confirm
 
  type SerializableSecretStr = Annotated[
      SecretStr,
@@ -44,13 +46,11 @@ class Tokens(FullyValidatedModel):
 
      @staticmethod
      def online_token_prompt(url: str, *tokens: str) -> Generator[SecretStr]:
-         formatted_tokens = [f"[cyan]{token}[/]" for token in tokens]
-         formatted_token_string = " and ".join(formatted_tokens)
+         formatted_token_string = " and ".join(f"[cyan]{token}[/]" for token in tokens)
 
          rich.print(f"Retrieve your {formatted_token_string} from: {url}")
-         for token in formatted_tokens:
-             prompt = f"Enter your {token} [yellow](hidden)"
-             yield SecretStr(Prompt.ask(prompt, password=True).strip())
+         for token in tokens:
+             yield SecretStr(pwinput(f"Enter your {token} (masked): ").strip())
 
          rich.print()
 
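The hunk above does two things: it collapses the temporary list into a single generator-expression join, and it swaps Rich's fully hidden `Prompt.ask(..., password=True)` for `pwinput`, which echoes a mask character per keystroke. The non-interactive half can be sketched in isolation (the `format_token_string` helper name is mine, mirroring the refactored join):

```python
def format_token_string(*tokens: str) -> str:
    """Join Rich-markup token names exactly as the refactored code does:
    one generator expression, no intermediate list."""
    return " and ".join(f"[cyan]{token}[/]" for token in tokens)


print(format_token_string("consumer key", "consumer secret"))
# -> [cyan]consumer key[/] and [cyan]consumer secret[/]

# The interactive half would then read, per token (not executed here):
#     from pwinput import pwinput
#     value = pwinput(f"Enter your {token} (masked): ").strip()
# pwinput shows '*' for each keystroke, whereas the old
# Prompt.ask(..., password=True) call hid the input entirely.
```

The behavioral trade-off is feedback: masked echo reassures the user that paste/typing worked, at the cost of revealing the secret's length.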
@@ -70,6 +70,8 @@ class Tokens(FullyValidatedModel):
          if not all(self.tumblr.model_dump(mode="json").values()) or Confirm.ask("Reset Tumblr API tokens?", default=False):
              self.tumblr.client_key, self.tumblr.client_secret = self.online_token_prompt("https://tumblr.com/oauth/apps", "consumer key", "consumer secret")
 
+         OAuth1Session.__bases__ = (Session,)
+
          with OAuth1Session(
              self.tumblr.client_key.get_secret_value(),
              self.tumblr.client_secret.get_secret_value(),
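The added `OAuth1Session.__bases__ = (Session,)` line re-parents requests-oauthlib's `OAuth1Session` onto `niquests.Session` at runtime, which fits the release's removal of the `requests` dependency. Reassigning `__bases__` is a blunt but legal CPython move when the old and new bases have compatible layouts. A self-contained illustration with stand-in classes (these are not the real libraries):

```python
class LegacySession:
    """Stand-in for requests.Session."""
    backend = "requests"


class ModernSession:
    """Stand-in for niquests.Session."""
    backend = "niquests"


class OAuthSession(LegacySession):
    """Stand-in for requests_oauthlib.OAuth1Session."""


# Re-parent the subclass onto the new base at runtime. Attribute lookup
# and isinstance checks now resolve through ModernSession instead.
OAuthSession.__bases__ = (ModernSession,)

session = OAuthSession()
print(session.backend)                    # resolved via ModernSession
print(issubclass(OAuthSession, ModernSession))
```

Note the caveat: CPython only permits this when the class layouts are compatible, and it rewires every existing instance and subclass of the patched class globally, so it is best done once, before any session objects are created, as the diff does.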
@@ -1,21 +1,18 @@
  from dataclasses import dataclass
  from typing import Self
 
- from niquests import HTTPError, Session
- from requests import Response
- from requests_cache import CacheMixin
+ from niquests import HTTPError, PreparedRequest, Response, Session
  from requests_oauthlib import OAuth1
 
  from tumblrbot.utils.models import Post, Tokens
 
 
  @dataclass
- class TumblrSession(CacheMixin, Session): # pyright: ignore[reportIncompatibleMethodOverride, reportIncompatibleVariableOverride]
+ class TumblrSession(Session):
      tokens: Tokens
 
      def __post_init__(self) -> None:
-         CacheMixin.__init__(self, use_cache_dir=True)
-         Session.__init__(self, happy_eyeballs=True)
+         super().__init__(multiplexed=True, happy_eyeballs=True)
 
          self.auth = OAuth1(**self.tokens.tumblr.model_dump(mode="json"))
          self.hooks["response"].append(self.response_hook)
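`TumblrSession` is a `@dataclass` that also subclasses `Session`, so its `__init__` is generated from the declared fields and the base class must be initialized explicitly in `__post_init__`. Dropping `CacheMixin` lets the new code replace the two explicit `__init__` calls with one clean `super().__init__(...)`. A toy reproduction of the pattern, using a stand-in `Session` rather than niquests:

```python
from dataclasses import dataclass


class Session:
    """Stand-in for niquests.Session; the keyword names mirror the diff."""
    def __init__(self, multiplexed: bool = False, happy_eyeballs: bool = False) -> None:
        self.multiplexed = multiplexed
        self.happy_eyeballs = happy_eyeballs


@dataclass
class TumblrSession(Session):
    tokens: str  # stands in for the Tokens model

    def __post_init__(self) -> None:
        # The dataclass-generated __init__ assigns `tokens`, then calls
        # this hook; the base Session is initialized with one super() call.
        super().__init__(multiplexed=True, happy_eyeballs=True)


s = TumblrSession("secret")
print(s.tokens, s.multiplexed, s.happy_eyeballs)
```

With a single base class, `super()` is unambiguous; the old cooperative-`__init__`-by-hand dance was only needed because `CacheMixin` and `Session` took different constructor arguments.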
@@ -24,21 +21,22 @@ class TumblrSession(CacheMixin, Session): # pyright: ignore[reportIncompatibleM
          super().__enter__()
          return self
 
-     def response_hook(self, response: Response, **_: object) -> None:
-         try:
-             response.raise_for_status()
-         except HTTPError as error:
-             if response.text:
-                 error.add_note(response.text)
-             raise
+     def response_hook(self, response: PreparedRequest | Response) -> None:
+         if isinstance(response, Response):
+             try:
+                 response.raise_for_status()
+             except HTTPError as error:
+                 if response.text:
+                     error.add_note(response.text)
+                 raise
 
      def retrieve_published_posts(self, blog_identifier: str, after: int) -> Response:
          return self.get(
              f"https://api.tumblr.com/v2/blog/{blog_identifier}/posts",
              params={
-                 "after": after,
+                 "after": str(after),
                  "sort": "asc",
-                 "npf": True,
+                 "npf": str(True),
              },
          )
 