not-again-ai 0.5.1__tar.gz → 0.7.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (31)
  1. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/PKG-INFO +22 -77
  2. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/README.md +17 -73
  3. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/pyproject.toml +7 -5
  4. not_again_ai-0.7.0/src/not_again_ai/base/file_system.py +34 -0
  5. not_again_ai-0.7.0/src/not_again_ai/llm/ollama/__init__.py +0 -0
  6. not_again_ai-0.7.0/src/not_again_ai/llm/ollama/chat_completion.py +95 -0
  7. not_again_ai-0.7.0/src/not_again_ai/llm/ollama/ollama_client.py +24 -0
  8. not_again_ai-0.7.0/src/not_again_ai/llm/ollama/service.py +81 -0
  9. not_again_ai-0.7.0/src/not_again_ai/llm/openai/__init__.py +0 -0
  10. {not_again_ai-0.5.1/src/not_again_ai/llm → not_again_ai-0.7.0/src/not_again_ai/llm/openai}/chat_completion.py +8 -4
  11. {not_again_ai-0.5.1/src/not_again_ai/llm → not_again_ai-0.7.0/src/not_again_ai/llm/openai}/context_management.py +1 -1
  12. {not_again_ai-0.5.1/src/not_again_ai/llm → not_again_ai-0.7.0/src/not_again_ai/llm/openai}/prompts.py +5 -61
  13. {not_again_ai-0.5.1/src/not_again_ai/llm → not_again_ai-0.7.0/src/not_again_ai/llm/openai}/tokens.py +2 -0
  14. not_again_ai-0.5.1/src/not_again_ai/base/file_system.py +0 -12
  15. not_again_ai-0.5.1/src/not_again_ai/llm/chat_completion_vision.py +0 -87
  16. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/LICENSE +0 -0
  17. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/__init__.py +0 -0
  18. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/base/__init__.py +0 -0
  19. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/base/parallel.py +0 -0
  20. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/llm/__init__.py +0 -0
  21. {not_again_ai-0.5.1/src/not_again_ai/llm → not_again_ai-0.7.0/src/not_again_ai/llm/openai}/embeddings.py +0 -0
  22. {not_again_ai-0.5.1/src/not_again_ai/llm → not_again_ai-0.7.0/src/not_again_ai/llm/openai}/openai_client.py +0 -0
  23. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/py.typed +0 -0
  24. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/statistics/__init__.py +0 -0
  25. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/statistics/dependence.py +0 -0
  26. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/viz/__init__.py +0 -0
  27. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/viz/barplots.py +0 -0
  28. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/viz/distributions.py +0 -0
  29. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/viz/scatterplot.py +0 -0
  30. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/viz/time_series.py +0 -0
  31. {not_again_ai-0.5.1 → not_again_ai-0.7.0}/src/not_again_ai/viz/utils.py +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: not-again-ai
- Version: 0.5.1
+ Version: 0.7.0
  Summary: Designed to once and for all collect all the little things that come up over and over again in AI projects and put them in one place.
  Home-page: https://github.com/DaveCoDev/not-again-ai
  License: MIT
@@ -21,10 +21,11 @@ Provides-Extra: llm
  Provides-Extra: statistics
  Provides-Extra: viz
  Requires-Dist: numpy (>=1.26.4,<2.0.0) ; extra == "statistics" or extra == "viz"
- Requires-Dist: openai (>=1.16.2,<2.0.0) ; extra == "llm"
- Requires-Dist: pandas (>=2.2.1,<3.0.0) ; extra == "viz"
+ Requires-Dist: ollama (>=0.1.9,<0.2.0) ; extra == "llm"
+ Requires-Dist: openai (>=1.25.1,<2.0.0) ; extra == "llm"
+ Requires-Dist: pandas (>=2.2.2,<3.0.0) ; extra == "viz"
  Requires-Dist: python-liquid (>=1.12.1,<2.0.0) ; extra == "llm"
- Requires-Dist: scikit-learn (>=1.4.1.post1,<2.0.0) ; extra == "statistics"
+ Requires-Dist: scikit-learn (>=1.4.2,<2.0.0) ; extra == "statistics"
  Requires-Dist: scipy (>=1.13.0,<2.0.0) ; extra == "statistics"
  Requires-Dist: seaborn (>=0.13.2,<0.14.0) ; extra == "viz"
  Requires-Dist: tiktoken (>=0.6.0,<0.7.0) ; extra == "llm"
@@ -47,9 +48,9 @@ Description-Content-Type: text/markdown
  [ruff-badge]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
  [mypy-badge]: https://www.mypy-lang.org/static/mypy_badge.svg

- **not-again-ai** is a collection of various functionalities that come up over and over again when developing AI projects as a Data/Applied/Research Scientist. It is designed to be simple and minimize dependencies first and foremost. This is not meant to reinvent the wheel, but instead be a home for functions that don’t belong well elsewhere. Additionally, feel free to **a)** use this as a template for your own Python package. **b)** instead of installing the package, copy and paste functions into your own projects (this is made easier with the limited amount of dependencies and the MIT license).
+ **not-again-ai** is a collection of various building blocks that come up over and over again when developing AI products. The key goals of this package are to have simple, but flexible interfaces and to minimize dependencies. Feel free to **a)** use this as a template for your own Python package. **b)** instead of installing the package, copy and paste functions into your own projects (this is made possible with the limited amount of dependencies and the MIT license).

- **Documentation** available within the [readmes](readmes) or auto-generated at [DaveCoDev.github.io/not-again-ai/](https://DaveCoDev.github.io/not-again-ai/).
+ **Documentation** available within individual **[notebooks](notebooks)**, docstrings within the source, or auto-generated at [DaveCoDev.github.io/not-again-ai/](https://DaveCoDev.github.io/not-again-ai/).

  # Installation

@@ -61,82 +62,26 @@ Install the entire package from [PyPI](https://pypi.org/project/not-again-ai/) w
  $ pip install not_again_ai[llm,statistics,viz]
  ```

- The package is split into subpackages, so you can install only the parts you need.
+ The package is split into subpackages, so you can install only the parts you need. See the **[notebooks](notebooks)** for examples.
  * **Base only**: `pip install not_again_ai`
- * **LLM only**: `pip install not_again_ai[llm]`
+ * **LLM**: `pip install not_again_ai[llm]`
+     1. If you wish to use OpenAI
+         1. Go to https://platform.openai.com/settings/profile?tab=api-keys to get your API key.
+         1. (Optionally) Set the `OPENAI_API_KEY` and the `OPENAI_ORG_ID` environment variables.
+     1. If you wish to use Ollama:
+         1. follow the instructions to install ollama for your system: https://github.com/ollama/ollama
+         1. [Add Ollama as a startup service (recommended)](https://github.com/ollama/ollama/blob/main/docs/linux.md#adding-ollama-as-a-startup-service-recommended)
+         1. If you'd like to make the ollama service accessible on your local network and it is hosted on Linux, add the following to the `/etc/systemd/system/ollama.service` file:
+            ```bash
+            [Service]
+            ...
+            Environment="OLLAMA_HOST=0.0.0.0"
+            ```
+            Now ollama will be available at `http://<local_address>:11434`
  * **Statistics**: `pip install not_again_ai[statistics]`
  * **Visualization**: `pip install not_again_ai[viz]`


- # Quick Tour
-
- ## Base
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/base.md)
-
- The base package includes only functions that have minimal external dependencies and are useful in a variety of situations such as parallelization and filesystem operations.
-
- ## LLM (Large Language Model)
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/llm.md), [Example Notebooks](https://github.com/DaveCoDev/not-again-ai/blob/main/notebooks/llm/)
-
- Supports OpenAI chat completions and text embeddings. Includes functions for creating chat completion prompts, token management, and context management.
-
- One example:
- ```python
- client = openai_client()
- messages = [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}]
- response = chat_completion(messages=messages, model="gpt-3.5-turbo", max_tokens=100, client=client)["message"]
- >>> "Hello! How can I help you today?"
- ```
-
- ## Statistics
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/statistics.md)
-
- We provide a few helpers for data analysis such as:
-
- ```python
- from not_again_ai.statistics.dependence import pearson_correlation
- # quadratic dependence
- >>> x = (rs.rand(500) * 4) - 2
- >>> y = x**2 + (rs.randn(500) * 0.2)
- >>> pearson_correlation(x, y)
- 0.05
- ```
-
- ## Visualization
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/viz.md)
-
- We offer opinionated wrappers around seaborn to make common visualizations easier to create and customize.
-
- ```python
- >>> import numpy as np
- >>> import pandas as pd
- >>> from not_again_ai.viz.time_series import ts_lineplot
- >>> from not_again_ai.viz.distributions import univariate_distplot
-
- # get some time series data
- >>> rs = np.random.RandomState(365)
- >>> values = rs.randn(365, 4).cumsum(axis=0)
- >>> dates = pd.date_range('1 1 2021', periods=365, freq='D')
- # plot the time series and save it to a file
- >>> ts_lineplot(ts_data=values, save_pathname='myplot.png', ts_x=dates, ts_names=['A', 'B', 'C', 'D'])
-
- # get a random distribution
- >>> distrib = np.random.beta(a=0.5, b=0.5, size=1000)
- # plot the distribution and save it to a file
- >>> univariate_distplot(
- ...     data=distrib,
- ...     save_pathname='mydistribution.svg',
- ...     print_summary=False, bins=100,
- ...     title=r'Beta Distribution $\alpha=0.5, \beta=0.5$'
- ... )
- ```
-
- <p float="center">
-   <img src="https://raw.githubusercontent.com/DaveCoDev/not-again-ai/44c53fb7fb07234aaceea40c90d8cb74e5fa6c15/assets/distributions_test4.svg" width="404" />
-   <img src="https://raw.githubusercontent.com/DaveCoDev/not-again-ai/44c53fb7fb07234aaceea40c90d8cb74e5fa6c15/assets/ts_lineplot5.svg" width="404" />
- </p>
-
-
  # Development Information

  The following information is relevant if you would like to contribute or use this package as a template for yourself.
@@ -13,9 +13,9 @@
  [ruff-badge]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
  [mypy-badge]: https://www.mypy-lang.org/static/mypy_badge.svg

- **not-again-ai** is a collection of various functionalities that come up over and over again when developing AI projects as a Data/Applied/Research Scientist. It is designed to be simple and minimize dependencies first and foremost. This is not meant to reinvent the wheel, but instead be a home for functions that don’t belong well elsewhere. Additionally, feel free to **a)** use this as a template for your own Python package. **b)** instead of installing the package, copy and paste functions into your own projects (this is made easier with the limited amount of dependencies and the MIT license).
+ **not-again-ai** is a collection of various building blocks that come up over and over again when developing AI products. The key goals of this package are to have simple, but flexible interfaces and to minimize dependencies. Feel free to **a)** use this as a template for your own Python package. **b)** instead of installing the package, copy and paste functions into your own projects (this is made possible with the limited amount of dependencies and the MIT license).

- **Documentation** available within the [readmes](readmes) or auto-generated at [DaveCoDev.github.io/not-again-ai/](https://DaveCoDev.github.io/not-again-ai/).
+ **Documentation** available within individual **[notebooks](notebooks)**, docstrings within the source, or auto-generated at [DaveCoDev.github.io/not-again-ai/](https://DaveCoDev.github.io/not-again-ai/).

  # Installation

@@ -27,82 +27,26 @@ Install the entire package from [PyPI](https://pypi.org/project/not-again-ai/) w
  $ pip install not_again_ai[llm,statistics,viz]
  ```

- The package is split into subpackages, so you can install only the parts you need.
+ The package is split into subpackages, so you can install only the parts you need. See the **[notebooks](notebooks)** for examples.
  * **Base only**: `pip install not_again_ai`
- * **LLM only**: `pip install not_again_ai[llm]`
+ * **LLM**: `pip install not_again_ai[llm]`
+     1. If you wish to use OpenAI
+         1. Go to https://platform.openai.com/settings/profile?tab=api-keys to get your API key.
+         1. (Optionally) Set the `OPENAI_API_KEY` and the `OPENAI_ORG_ID` environment variables.
+     1. If you wish to use Ollama:
+         1. follow the instructions to install ollama for your system: https://github.com/ollama/ollama
+         1. [Add Ollama as a startup service (recommended)](https://github.com/ollama/ollama/blob/main/docs/linux.md#adding-ollama-as-a-startup-service-recommended)
+         1. If you'd like to make the ollama service accessible on your local network and it is hosted on Linux, add the following to the `/etc/systemd/system/ollama.service` file:
+            ```bash
+            [Service]
+            ...
+            Environment="OLLAMA_HOST=0.0.0.0"
+            ```
+            Now ollama will be available at `http://<local_address>:11434`
  * **Statistics**: `pip install not_again_ai[statistics]`
  * **Visualization**: `pip install not_again_ai[viz]`


- # Quick Tour
-
- ## Base
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/base.md)
-
- The base package includes only functions that have minimal external dependencies and are useful in a variety of situations such as parallelization and filesystem operations.
-
- ## LLM (Large Language Model)
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/llm.md), [Example Notebooks](https://github.com/DaveCoDev/not-again-ai/blob/main/notebooks/llm/)
-
- Supports OpenAI chat completions and text embeddings. Includes functions for creating chat completion prompts, token management, and context management.
-
- One example:
- ```python
- client = openai_client()
- messages = [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}]
- response = chat_completion(messages=messages, model="gpt-3.5-turbo", max_tokens=100, client=client)["message"]
- >>> "Hello! How can I help you today?"
- ```
-
- ## Statistics
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/statistics.md)
-
- We provide a few helpers for data analysis such as:
-
- ```python
- from not_again_ai.statistics.dependence import pearson_correlation
- # quadratic dependence
- >>> x = (rs.rand(500) * 4) - 2
- >>> y = x**2 + (rs.randn(500) * 0.2)
- >>> pearson_correlation(x, y)
- 0.05
- ```
-
- ## Visualization
- [README](https://github.com/DaveCoDev/not-again-ai/blob/main/readmes/viz.md)
-
- We offer opinionated wrappers around seaborn to make common visualizations easier to create and customize.
-
- ```python
- >>> import numpy as np
- >>> import pandas as pd
- >>> from not_again_ai.viz.time_series import ts_lineplot
- >>> from not_again_ai.viz.distributions import univariate_distplot
-
- # get some time series data
- >>> rs = np.random.RandomState(365)
- >>> values = rs.randn(365, 4).cumsum(axis=0)
- >>> dates = pd.date_range('1 1 2021', periods=365, freq='D')
- # plot the time series and save it to a file
- >>> ts_lineplot(ts_data=values, save_pathname='myplot.png', ts_x=dates, ts_names=['A', 'B', 'C', 'D'])
-
- # get a random distribution
- >>> distrib = np.random.beta(a=0.5, b=0.5, size=1000)
- # plot the distribution and save it to a file
- >>> univariate_distplot(
- ...     data=distrib,
- ...     save_pathname='mydistribution.svg',
- ...     print_summary=False, bins=100,
- ...     title=r'Beta Distribution $\alpha=0.5, \beta=0.5$'
- ... )
- ```
-
- <p float="center">
-   <img src="https://raw.githubusercontent.com/DaveCoDev/not-again-ai/44c53fb7fb07234aaceea40c90d8cb74e5fa6c15/assets/distributions_test4.svg" width="404" />
-   <img src="https://raw.githubusercontent.com/DaveCoDev/not-again-ai/44c53fb7fb07234aaceea40c90d8cb74e5fa6c15/assets/ts_lineplot5.svg" width="404" />
- </p>
-
-
  # Development Information

  The following information is relevant if you would like to contribute or use this package as a template for yourself.
@@ -1,6 +1,6 @@
1
1
  [tool.poetry]
2
2
  name = "not-again-ai"
3
- version = "0.5.1"
3
+ version = "0.7.0"
4
4
  description = "Designed to once and for all collect all the little things that come up over and over again in AI projects and put them in one place."
5
5
  authors = ["DaveCoDev <dave.co.dev@gmail.com>"]
6
6
  license = "MIT"
@@ -28,16 +28,17 @@ python = "^3.11, <3.13"
28
28
 
29
29
  # Optional dependencies are defined here, and groupings are defined below.
30
30
  numpy = { version = "^1.26.4", optional = true }
31
- openai = { version = "^1.16.2", optional = true }
32
- pandas = { version = "^2.2.1", optional = true }
31
+ ollama = { version = "^0.1.9", optional = true }
32
+ openai = { version = "^1.25.1", optional = true }
33
+ pandas = { version = "^2.2.2", optional = true }
33
34
  python-liquid = { version = "^1.12.1", optional = true }
34
35
  scipy = { version = "^1.13.0", optional = true }
35
- scikit-learn = { version = "^1.4.1.post1", optional = true }
36
+ scikit-learn = { version = "^1.4.2", optional = true }
36
37
  seaborn = { version = "^0.13.2", optional = true }
37
38
  tiktoken = { version = "^0.6.0", optional = true }
38
39
 
39
40
  [tool.poetry.extras]
40
- llm = ["openai", "python-liquid", "tiktoken"]
41
+ llm = ["ollama", "openai", "python-liquid", "tiktoken"]
41
42
  statistics = ["numpy", "scikit-learn", "scipy"]
42
43
  viz = ["numpy", "pandas", "seaborn"]
43
44
 
@@ -128,6 +129,7 @@ filterwarnings = [
128
129
  # Add additional warning supressions as needed here. For example, if a third-party library
129
130
  # is throwing a deprecation warning that needs to be fixed upstream:
130
131
  # "ignore::DeprecationWarning:typer",
132
+ "ignore::pytest.PytestUnraisableExceptionWarning"
131
133
  ]
132
134
 
133
135
  [tool.coverage.run]
@@ -0,0 +1,34 @@
+ from pathlib import Path
+
+
+ def create_file_dir(filepath: str | Path) -> None:
+     """Creates the parent directories for the specified filepath.
+     Does not throw any errors if the directories already exist.
+
+     Args:
+         filepath (str | Path): path to a file
+     """
+     root_path = Path(filepath).parent
+     root_path.mkdir(parents=True, exist_ok=True)
+
+
+ def readable_size(size: float) -> str:
+     """Convert a file size given in bytes to a human-readable format.
+
+     Args:
+         size (int): file size in bytes
+
+     Returns:
+         str: human-readable file size
+     """
+     # Define the suffixes for each size unit
+     suffixes = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]
+
+     # Start with bytes
+     count = 0
+     while size >= 1024 and count < len(suffixes) - 1:
+         count += 1
+         size /= 1024
+
+     # Format the size to two decimal places and append the appropriate suffix
+     return f"{size:.2f} {suffixes[count]}"
@@ -0,0 +1,95 @@
+ import contextlib
+ import json
+ import re
+ from typing import Any
+
+ from ollama import Client, ResponseError
+
+
+ def _convert_duration(nanoseconds: int) -> float:
+     seconds = nanoseconds / 1_000_000_000
+     return round(seconds, 5)
+
+
+ def chat_completion(
+     messages: list[dict[str, Any]],
+     model: str,
+     client: Client,
+     max_tokens: int | None = None,
+     context_window: int | None = None,
+     temperature: float = 0.8,
+     json_mode: bool = False,
+     seed: int | None = None,
+     **kwargs: Any,
+ ) -> dict[str, Any]:
+     """Gets a Ollama chat completion response, see https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion
+     For a full list of valid parameters: https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values
+
+     Args:
+         messages (list[dict[str, Any]]): A list of messages to send to the model.
+         model (str): The model to use.
+         client (Client): The Ollama client.
+         max_tokens (int, optional): The maximum number of tokens to generate. Ollama calls this `num_predict`.
+         context_window (int, optional): The number of tokens to consider as context. Ollama calls this `num_ctx`.
+         temperature (float, optional): The temperature of the model. Increasing the temperature will make the model answer more creatively.
+         json_mode (bool, optional): This will structure the response as a valid JSON object.
+             It is important to instruct the model to use JSON in the prompt. Otherwise, the model may generate large amounts whitespace.
+         seed (int, optional): The seed to use for the model for reproducible outputs. Defaults to None.
+
+     Returns:
+         dict[str, Any]: A dictionary with the following keys
+             message (str | dict): The content of the generated assistant message.
+                 If json_mode is True, this will be a dictionary.
+             completion_tokens (int): The number of tokens used by the model to generate the completion.
+             response_duration (float): The time taken to generate the response in seconds.
+     """
+
+     options = {
+         "num_predict": max_tokens,
+         "num_ctx": context_window,
+         "temperature": temperature,
+     }
+     if seed is not None:
+         options["seed"] = seed
+     options.update(kwargs)
+
+     all_args = {
+         "model": model,
+         "messages": messages,
+         "options": options,
+     }
+     if json_mode:
+         all_args["format"] = "json"
+
+     try:
+         response = client.chat(**all_args)
+     except ResponseError as e:
+         # If the error says "model 'model' not found" use regex then raise a more specific error
+         expected_pattern = f"model '{model}' not found"
+         if re.search(expected_pattern, e.error):
+             raise ResponseError(
+                 f"Model '{model}' not found. Please use not_again_ai.llm.ollama.service.pull() first."
+             ) from e
+         else:
+             raise ResponseError(e.message) from e
+
+     response_data: dict[str, Any] = {}
+
+     # Handle getting the message returned by the model
+     message = response["message"].get("content", None)
+     if message and json_mode:
+         with contextlib.suppress(json.JSONDecodeError):
+             message = json.loads(message)
+     if message:
+         response_data["message"] = message
+
+     # Get the number of tokens generated
+     response_data["completion_tokens"] = response.get("eval_count", None)
+
+     # Get the latency of the response
+     if response.get("total_duration", None):
+         response_data["response_duration"] = _convert_duration(response["total_duration"])
+     else:
+         response_data["response_duration"] = None
+
+     return response_data
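A minimal sketch of calling the new Ollama chat completion wrapper; the host URL and the `phi3` model tag are only examples and assume a locally running Ollama server with that model already pulled:

```python
from not_again_ai.llm.ollama.chat_completion import chat_completion
from not_again_ai.llm.ollama.ollama_client import ollama_client

# Assumed local Ollama server; see the ollama_client helper below.
client = ollama_client(host="http://localhost:11434")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# "phi3" is a hypothetical example tag; any model available locally works.
response = chat_completion(messages=messages, model="phi3", client=client, max_tokens=100)
print(response.get("message"))
print(response.get("completion_tokens"), response.get("response_duration"))
```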
@@ -0,0 +1,24 @@
+ import os
+
+ from ollama import Client
+
+
+ def ollama_client(host: str | None = None, timeout: float | None = None) -> Client:
+     """Create an Ollama client instance based on the specified host or will read from the OLLAMA_HOST environment variable.
+
+     Args:
+         host (str, optional): The host URL of the Ollama server.
+         timeout (float, optional): The timeout for requests
+
+     Returns:
+         Client: An instance of the Ollama client.
+
+     Examples:
+         >>> client = client(host="http://localhost:11434")
+     """
+     if host is None:
+         host = os.getenv("OLLAMA_HOST")
+     if host is None:
+         raise ValueError("Host must be provided or OLLAMA_HOST environment variable must be set.")
+
+     return Client(host=host, timeout=timeout)
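Note that the docstring example above calls `client(...)`, while the function itself is named `ollama_client`. A small sketch of both ways to point it at a server (the URL is illustrative):

```python
import os

from not_again_ai.llm.ollama.ollama_client import ollama_client

# Option 1: pass the host explicitly.
client = ollama_client(host="http://localhost:11434", timeout=30)

# Option 2: rely on the OLLAMA_HOST environment variable instead.
os.environ["OLLAMA_HOST"] = "http://localhost:11434"
client = ollama_client()
```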
@@ -0,0 +1,81 @@
+ from typing import Any
+
+ from ollama import Client
+
+ from not_again_ai.base.file_system import readable_size
+
+
+ def list_models(client: Client) -> list[dict[str, Any]]:
+     """List models that are available locally.
+
+     Args:
+         client (Client): The Ollama client.
+
+     Returns:
+         list[dict[str, Any]]: A list of dictionaries (each corresponding to an available model) with the following keys:
+             name (str): Name of the model
+             model (str): Name of the model. This should be the same as the name.
+             modified_at (str): The date and time the model was last modified.
+             size (int): The size of the model in bytes.
+             size_readable (str): The size of the model in a human-readable format.
+             details (dict[str, Any]): Additional details about the model.
+     """
+     response = client.list().get("models", [])
+
+     response_data = []
+     for model_data in response:
+         curr_model_data = {}
+         curr_model_data["name"] = model_data["name"]
+         curr_model_data["model"] = model_data["model"]
+         curr_model_data["modified_at"] = model_data["modified_at"]
+         curr_model_data["size"] = model_data["size"]
+         curr_model_data["size_readable"] = readable_size(model_data["size"])
+         curr_model_data["details"] = model_data["details"]
+
+         response_data.append(curr_model_data)
+
+     return response_data
+
+
+ def is_model_available(model_name: str, client: Client) -> bool:
+     """Check if a model is available locally.
+
+     Args:
+         model_name (str): The name of the model.
+         client (Client): The Ollama client.
+
+     Returns:
+         bool: True if the model is available locally, False otherwise.
+     """
+     # If model_name does not have a ":", append ":latest"
+     if ":" not in model_name:
+         model_name = f"{model_name}:latest"
+     models = list_models(client)
+     return any(model["name"] == model_name for model in models)
+
+
+ def show(model_name: str, client: Client) -> dict[str, Any]:
+     """Show information about a model including the modelfile, available parameters, template, and additional details.
+
+     Args:
+         model_name (str): The name of the model.
+         client (Client): The Ollama client.
+     """
+     response = client.show(model_name)
+
+     response_data = {}
+     response_data["modelfile"] = response["modelfile"]
+     response_data["parameters"] = response["parameters"]
+     response_data["template"] = response["template"]
+     response_data["details"] = response["details"]
+     return response_data
+
+
+ def pull(model_name: str, client: Client) -> Any:
+     """Pull a model from the Ollama server and returns the status of the pull operation."""
+     return client.pull(model_name)
+
+
+ def delete(model_name: str, client: Client) -> Any:
+     """Delete a model from the local filesystem and returns the status of the delete operation."""
+     return client.delete(model_name)
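A short sketch tying the service helpers together; the `phi3` tag and host URL are again only examples:

```python
from not_again_ai.llm.ollama.ollama_client import ollama_client
from not_again_ai.llm.ollama.service import is_model_available, list_models, pull

client = ollama_client(host="http://localhost:11434")

# Pull the model once if it is not present locally yet.
if not is_model_available("phi3", client):
    pull("phi3", client)

# List everything available locally, with human-readable sizes from readable_size().
for model in list_models(client):
    print(model["name"], model["size_readable"])
```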
@@ -6,7 +6,7 @@ from openai import OpenAI


  def chat_completion(
-     messages: list[dict[str, str]],
+     messages: list[dict[str, Any]],
      model: str,
      client: OpenAI,
      tools: list[dict[str, Any]] | None = None,
@@ -21,6 +21,10 @@ def chat_completion(
  ) -> dict[str, Any]:
      """Get an OpenAI chat completion response: https://platform.openai.com/docs/api-reference/chat/create

+     NOTE: Depending on the model, certain parameters may not be supported,
+     particularly for older vision-enabled models like gpt-4-1106-vision-preview.
+     Be sure to check the documentation: https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4
+
      Args:
          messages (list): A list of messages comprising the conversation so far.
          model (str): ID of the model to use. See the model endpoint compatibility table:
@@ -29,8 +33,8 @@
          client (OpenAI): An instance of the OpenAI client.
          tools (list[dict[str, Any]], optional): A list of tools the model may generate JSON inputs for.
              Defaults to None.
-         tool_choice (str, optional): The tool choice to use. Can be "auto", "none", or a specific function name.
-             Defaults to "auto".
+         tool_choice (str, optional): The tool choice to use. Can be "auto", "required", "none", or a specific function name.
+             Note the function name cannot be any of "auto", "required", or "none". Defaults to "auto".
          max_tokens (int, optional): The maximum number of tokens to generate in the chat completion.
              Defaults to None, which automatically limits to the model's maximum context length.
          temperature (float, optional): What sampling temperature to use, between 0 and 2.
@@ -83,7 +87,7 @@

      if tools is not None:
          kwargs["tools"] = tools
-         if tool_choice not in ["none", "auto"]:
+         if tool_choice not in ["none", "auto", "required"]:
              kwargs["tool_choice"] = {"type": "function", "function": {"name": tool_choice}}
          else:
              kwargs["tool_choice"] = tool_choice
@@ -1,6 +1,6 @@
  import copy

- from not_again_ai.llm.tokens import num_tokens_from_messages, truncate_str
+ from not_again_ai.llm.openai.tokens import num_tokens_from_messages, truncate_str


  def _inject_variable(
@@ -7,69 +7,13 @@ from typing import Any
  from liquid import Template


- def _validate_message(message: dict[str, str]) -> bool:
-     """Valides that a message has valid fields and if the role is valid.
-     See https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages
-     """
-     valid_fields = ["role", "content", "name", "tool_call_id", "tool_calls"]
-     # Check if the only keys in the message are in valid_fields
-     if not all(key in valid_fields for key in message):
-         raise ValueError(f"Message contains invalid fields: {message.keys()}")
-
-     # Check if the only roles in the message are in valid_fields
-     valid_roles = ["system", "user", "assistant", "tool"]
-     if message["role"] not in valid_roles:
-         raise ValueError(f"Message contains invalid role: {message['role']}")
-
-     return True
-
-
- def chat_prompt(messages_unformatted: list[dict[str, str]], variables: dict[str, str]) -> list[dict[str, str]]:
-     """
-     Formats a list of messages for OpenAI's chat completion API using Liquid templating.
-
-     Args:
-         messages_unformatted: A list of dictionaries where each dictionary
-             represents a message. Each message must have 'role' and 'content'
-             keys with string values, where content is a Liquid template.
-         variables: A dictionary where each key-value pair represents a variable
-             name and its value for template rendering.
-
-     Returns:
-         A list of dictionaries with the same structure as `messages_unformatted`,
-         but with the 'content' of each message with the provided `variables`.
-
-     Examples:
-         >>> messages = [
-         ...     {"role": "system", "content": "You are a helpful assistant."},
-         ...     {"role": "user", "content": "Help me {{task}}"}
-         ... ]
-         >>> vars = {"task": "write Python code for the fibonnaci sequence"}
-         >>> chat_prompt(messages, vars)
-         [
-             {"role": "system", "content": "You are a helpful assistant."},
-             {"role": "user", "content": "Help me write Python code for the fibonnaci sequence"}
-         ]
-     """
-
-     messages_formatted = deepcopy(messages_unformatted)
-     for message in messages_formatted:
-         if not _validate_message(message):
-             raise ValueError()
-
-         liquid_template = Template(message["content"])
-         message["content"] = liquid_template.render(**variables)
-
-     return messages_formatted
-
-
  def _validate_message_vision(message: dict[str, list[dict[str, Path | str]] | str]) -> bool:
      """Validates that a message for a vision model is valid"""
-     valid_fields = ["role", "content"]
+     valid_fields = ["role", "content", "name", "tool_call_id", "tool_calls"]
      if not all(key in valid_fields for key in message):
          raise ValueError(f"Message contains invalid fields: {message.keys()}")

-     valid_roles = ["system", "user", "assistant"]
+     valid_roles = ["system", "user", "assistant", "tool"]
      if message["role"] not in valid_roles:
          raise ValueError(f"Message contains invalid role: {message['role']}")

@@ -126,13 +70,13 @@ def create_image_url(image_path: Path) -> str:
      return f"data:{mime_type};base64,{image_data}"


- def chat_prompt_vision(messages_unformatted: list[dict[str, Any]], variables: dict[str, str]) -> list[dict[str, Any]]:
-     """Formats a list of messages for OpenAI's chat completion API, for vision models only, using Liquid templating.
+ def chat_prompt(messages_unformatted: list[dict[str, Any]], variables: dict[str, str]) -> list[dict[str, Any]]:
+     """Formats a list of messages for OpenAI's chat completion API,
+     including special syntax for vision models, using Liquid templating.

      Args:
          messages_unformatted (list[dict[str, list[dict[str, Path | str]] | str]]):
              A list of dictionaries where each dictionary represents a message.
-             Each message must have 'role' and 'content' keys. `role` must be 'system', 'user', or 'assistant'.
              `content` can be a Liquid template string or a list of dictionaries where each dictionary
              represents a content part. Each content part can be a string or a dictionary with 'image' and 'detail' keys.
              The 'image' key must be a Path or a string representing a URL. The 'detail' key is optional and must be 'low' or 'high'.
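Since the separate text-only `chat_prompt` is gone and `chat_prompt_vision` has been renamed to `chat_prompt`, a hedged sketch of the unified call based on the docstring above; the image path and template variable are hypothetical:

```python
from pathlib import Path

from not_again_ai.llm.openai.prompts import chat_prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            # A string part is rendered as a Liquid template.
            "Describe the attached chart, focusing on {{focus}}.",
            # An image part uses the 'image' and optional 'detail' keys (hypothetical path).
            {"image": Path("assets/chart.png"), "detail": "low"},
        ],
    },
]
formatted = chat_prompt(messages, {"focus": "seasonality"})
```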
@@ -80,6 +80,8 @@ def num_tokens_from_messages(messages: list[dict[str, str]], model: str = "gpt-3
          "gpt-4-1106-preview",
          "gpt-4-turbo-preview",
          "gpt-4-0125-preview",
+         "gpt-4-turbo",
+         "gpt-4-turbo-2024-04-09",
      }:
          tokens_per_message = 3  # every message follows <|start|>{role/name}\n{content}<|end|>\n
          tokens_per_name = 1  # if there's a name, the role is omitted
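The two new model names slot into the per-message token accounting; a brief sketch of counting prompt tokens with one of them:

```python
from not_again_ai.llm.openai.tokens import num_tokens_from_messages

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# "gpt-4-turbo" is one of the newly recognized model names.
print(num_tokens_from_messages(messages, model="gpt-4-turbo"))
```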
@@ -1,12 +0,0 @@
- import pathlib
-
-
- def create_file_dir(filepath: str) -> None:
-     """Creates the parent directories for the specified filepath.
-     Does not throw any errors if the directories already exist.
-
-     Args:
-         filepath (str): path to a file
-     """
-     root_path = pathlib.Path(filepath).parent
-     root_path.mkdir(parents=True, exist_ok=True)
@@ -1,87 +0,0 @@
- from typing import Any
-
- from openai import OpenAI
-
-
- def chat_completion_vision(
-     messages: list[dict[str, Any]],
-     model: str,
-     client: OpenAI,
-     max_tokens: int | None = None,
-     temperature: float = 0.7,
-     seed: int | None = None,
-     n: int = 1,
-     **kwargs: Any,
- ) -> dict[str, Any]:
-     """Get an OpenAI chat completion response for vision models only: https://platform.openai.com/docs/guides/vision
-
-     Args:
-         messages (list): A list of messages comprising the conversation so far.
-             See https://platform.openai.com/docs/api-reference/chat/create for details on the format
-         model (str): ID of the model to use for generating chat completions. Refer to OpenAI's documentation
-             for details on available models.
-         client (OpenAI): An instance of the OpenAI client, used to make requests to the API.
-         max_tokens (int | None, optional): The maximum number of tokens to generate in the chat completion.
-             If None, defaults to the model's maximum context length. Defaults to None.
-         temperature (float, optional): Controls the randomness of the output. A higher temperature produces
-             more varied results, whereas a lower temperature results in more deterministic and predictable text.
-             Must be between 0 and 2. Defaults to 0.7.
-         seed (int | None, optional): A seed used for deterministic generation. Providing a seed ensures that
-             the same input will produce the same output across different runs. Defaults to None.
-         n (int, optional): The number of chat completion choices to generate for each input message.
-             Defaults to 1.
-         **kwargs (Any): Additional keyword arguments to pass to the OpenAI client chat completion method.
-
-     Returns:
-         dict[str, Any]: A dictionary containing the generated responses and metadata. Key components include:
-             'finish_reason' (str): The reason the model stopped generating further tokens.
-                 Can be 'stop' or 'length'
-             'tool_names' (list[str], optional): The names of the tools called by the model.
-             'tool_args_list' (list[dict], optional): The arguments of the tools called by the model.
-             'message' (str | dict): The content of the generated assistant message.
-             'choices' (list[dict], optional): A list of chat completion choices if n > 1 where each dict contains the above fields.
-             'completion_tokens' (int): The number of tokens used by the model to generate the completion.
-                 NOTE: If n > 1 this is the sum of all completions.
-             'prompt_tokens' (int): The number of tokens in the messages sent to the model.
-             'system_fingerprint' (str, optional): If seed is set, a unique identifier for the model used to generate the response.
-     """
-     kwargs.update(
-         {
-             "messages": messages,
-             "model": model,
-             "max_tokens": max_tokens,
-             "temperature": temperature,
-             "n": n,
-         }
-     )
-
-     if seed is not None:
-         kwargs["seed"] = seed
-
-     response = client.chat.completions.create(**kwargs)
-
-     response_data: dict[str, Any] = {"choices": []}
-     for response_choice in response.choices:
-         response_data_curr = {}
-         finish_reason = response_choice.finish_reason
-         response_data_curr["finish_reason"] = finish_reason
-
-         if finish_reason == "stop" or finish_reason == "length":
-             message = response_choice.message.content
-             response_data_curr["message"] = message
-
-         response_data["choices"].append(response_data_curr)
-
-     usage = response.usage
-     if usage is not None:
-         response_data["completion_tokens"] = usage.completion_tokens
-         response_data["prompt_tokens"] = usage.prompt_tokens
-
-     if seed is not None and response.system_fingerprint is not None:
-         response_data["system_fingerprint"] = response.system_fingerprint
-
-     if len(response_data["choices"]) == 1:
-         response_data.update(response_data["choices"][0])
-         del response_data["choices"]
-
-     return response_data