lionagi 0.0.105__tar.gz → 0.0.107__tar.gz
- {lionagi-0.0.105 → lionagi-0.0.107}/PKG-INFO +7 -12
- {lionagi-0.0.105 → lionagi-0.0.107}/README.md +6 -11
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/api/oai_service.py +25 -13
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/session/conversation.py +22 -12
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/session/message.py +57 -30
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/session/session.py +130 -45
- lionagi-0.0.107/lionagi/utils/__init__.py +7 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/utils/api_util.py +91 -45
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/utils/doc_util.py +69 -26
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/utils/log_util.py +15 -4
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/utils/sys_util.py +74 -11
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/utils/tool_util.py +44 -29
- lionagi-0.0.107/lionagi/version.py +1 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi.egg-info/PKG-INFO +7 -12
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi.egg-info/SOURCES.txt +0 -1
- lionagi-0.0.105/lionagi/tools/__init__.py +0 -0
- lionagi-0.0.105/lionagi/utils/__init__.py +0 -10
- lionagi-0.0.105/lionagi/version.py +0 -1
- {lionagi-0.0.105 → lionagi-0.0.107}/LICENSE +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/README.rst +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/__init__.py +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/api/__init__.py +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/api/oai_config.py +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi/session/__init__.py +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi.egg-info/dependency_links.txt +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi.egg-info/requires.txt +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/lionagi.egg-info/top_level.txt +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/pyproject.toml +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/setup.cfg +0 -0
- {lionagi-0.0.105 → lionagi-0.0.107}/setup.py +0 -0
--- lionagi-0.0.105/PKG-INFO
+++ lionagi-0.0.107/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: lionagi
-Version: 0.0.105
+Version: 0.0.107
 Summary: Towards automated general intelligence.
 Author: HaiyangLi
 Author-email: Haiyang Li <ocean@lionagi.ai>
@@ -222,18 +222,13 @@ Requires-Dist: httpx==0.25.1
 
 ![PyPI - Version](https://img.shields.io/pypi/v/lionagi?labelColor=233476aa&color=231fc935) ![PyPI - Downloads](https://img.shields.io/pypi/dm/lionagi?labelColor=233476aa&color=231fc935) ![GitHub License](https://img.shields.io/github/license/lion-agi/lionagi?labelColor=233476aa&color=231fc935)
 
-
-
-- PyPI: https://pypi.org/project/lionagi/
-- Documentation: https://lionagi.readthedocs.io/en/latest/ (still a lot TODO)
-- Website: TODO
-- Discord: [Join Our Discord](https://discord.gg/7RGWqpSxze)
+[PyPI](https://pypi.org/project/lionagi/) | [Documentation](https://lionagi.readthedocs.io/en/latest/) | [Website](https://www.lionagi.ai) | [Discord](https://discord.gg/7RGWqpSxze)
 
 
 # LionAGI
 **Towards Automated General Intelligence**
 
-LionAGI is a Python
+LionAGI is a Python intelligent agent framework that combines data manipulation with AI tools, aiming to simplify the integration of advanced machine learning tools, such as Large Language Models (i.e. OpenAI's GPT), with production-level data-centric projects.
 
 Install LionAGI with pip:
 
@@ -244,13 +239,13 @@ Download the `.env_template` file, input your OPENAI_API_KEY, save the file, ren
 
 ### Features
 
-- Robust performance. LionAGI is written in almost pure python. With minimum external dependency (aiohttp
+- Robust performance. LionAGI is written in almost pure python. With minimum external dependency (`aiohttp`, `httpx`, `python-dotenv`, `tiktoken`)
 - Efficient data operations for reading, chunking, binning, writing, storing and managing data.
-- Fast interaction with LLM services like OpenAI with configurable rate limiting concurrent API calls for maximum throughput.
-- Create a production ready LLM application in hours
+- Fast interaction with LLM services like OpenAI with **configurable rate limiting concurrent API calls** for maximum throughput.
+- Create a production ready LLM application **in hours**. Intuitive workflow management to streamline and expedite the process from idea to market.
 
 ---
-Currently, LionAGI only natively support OpenAI API calls, support for other LLM providers as well as open source models will be integrated in future releases. LionAGI is designed to be async only, please check python documentation [here](https://docs.python.org/3/library/asyncio.html)
+Currently, LionAGI only natively support OpenAI API calls, support for other LLM providers as well as open source models will be integrated in future releases. LionAGI is designed to be async only, please check python official documentation on how `async` work: [here](https://docs.python.org/3/library/asyncio.html)
 
 
 **Notice**:
--- lionagi-0.0.105/README.md
+++ lionagi-0.0.107/README.md
@@ -1,17 +1,12 @@
 ![PyPI - Version](https://img.shields.io/pypi/v/lionagi?labelColor=233476aa&color=231fc935) ![PyPI - Downloads](https://img.shields.io/pypi/dm/lionagi?labelColor=233476aa&color=231fc935) ![GitHub License](https://img.shields.io/github/license/lion-agi/lionagi?labelColor=233476aa&color=231fc935)
 
-
-
-- PyPI: https://pypi.org/project/lionagi/
-- Documentation: https://lionagi.readthedocs.io/en/latest/ (still a lot TODO)
-- Website: TODO
-- Discord: [Join Our Discord](https://discord.gg/7RGWqpSxze)
+[PyPI](https://pypi.org/project/lionagi/) | [Documentation](https://lionagi.readthedocs.io/en/latest/) | [Website](https://www.lionagi.ai) | [Discord](https://discord.gg/7RGWqpSxze)
 
 
 # LionAGI
 **Towards Automated General Intelligence**
 
-LionAGI is a Python
+LionAGI is a Python intelligent agent framework that combines data manipulation with AI tools, aiming to simplify the integration of advanced machine learning tools, such as Large Language Models (i.e. OpenAI's GPT), with production-level data-centric projects.
 
 Install LionAGI with pip:
 
@@ -22,13 +17,13 @@ Download the `.env_template` file, input your OPENAI_API_KEY, save the file, ren
 
 ### Features
 
-- Robust performance. LionAGI is written in almost pure python. With minimum external dependency (aiohttp
+- Robust performance. LionAGI is written in almost pure python. With minimum external dependency (`aiohttp`, `httpx`, `python-dotenv`, `tiktoken`)
 - Efficient data operations for reading, chunking, binning, writing, storing and managing data.
-- Fast interaction with LLM services like OpenAI with configurable rate limiting concurrent API calls for maximum throughput.
-- Create a production ready LLM application in hours
+- Fast interaction with LLM services like OpenAI with **configurable rate limiting concurrent API calls** for maximum throughput.
+- Create a production ready LLM application **in hours**. Intuitive workflow management to streamline and expedite the process from idea to market.
 
 ---
-Currently, LionAGI only natively support OpenAI API calls, support for other LLM providers as well as open source models will be integrated in future releases. LionAGI is designed to be async only, please check python documentation [here](https://docs.python.org/3/library/asyncio.html)
+Currently, LionAGI only natively support OpenAI API calls, support for other LLM providers as well as open source models will be integrated in future releases. LionAGI is designed to be async only, please check python official documentation on how `async` work: [here](https://docs.python.org/3/library/asyncio.html)
 
 
 **Notice**:
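The README change above stresses that LionAGI is async only and points readers at the asyncio docs. As a minimal sketch of what that implies for callers (the `chat` coroutine here is a hypothetical stand-in, not a lionagi API), every interaction has to be awaited inside a running event loop:

```python
import asyncio

async def chat(prompt: str) -> str:
    # Hypothetical stand-in for an awaitable LLM call; lionagi's own
    # calls are likewise coroutines and must be awaited.
    await asyncio.sleep(0)
    return f"echo: {prompt}"

async def main() -> None:
    reply = await chat("hello")
    print(reply)

if __name__ == "__main__":
    asyncio.run(main())  # coroutines only run inside an event loop
```

Calling `chat("hello")` without `await` (or outside `asyncio.run`) returns an unawaited coroutine object, which is the usual stumbling block for users of async-only libraries.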
|
@@ -19,20 +19,25 @@ class OpenAIRateLimiter(RateLimiter):
|
|
19
19
|
and replenishing these limits at regular intervals.
|
20
20
|
|
21
21
|
Attributes:
|
22
|
-
max_requests_per_minute (int):
|
23
|
-
|
22
|
+
max_requests_per_minute (int):
|
23
|
+
Maximum number of requests allowed per minute.
|
24
|
+
max_tokens_per_minute (int):
|
25
|
+
Maximum number of tokens allowed per minute.
|
24
26
|
|
25
27
|
Methods:
|
26
|
-
rate_limit_replenisher:
|
27
|
-
|
28
|
+
rate_limit_replenisher:
|
29
|
+
Coroutine to replenish rate limits over time.
|
30
|
+
calculate_num_token:
|
31
|
+
Calculates the required tokens for a request.
|
28
32
|
"""
|
29
33
|
|
30
34
|
def __init__(self, max_requests_per_minute: int, max_tokens_per_minute: int) -> None:
|
31
35
|
"""
|
32
36
|
Initializes the rate limiter with specific limits for OpenAI API.
|
33
37
|
|
34
|
-
|
38
|
+
Parameters:
|
35
39
|
max_requests_per_minute (int): The maximum number of requests allowed per minute.
|
40
|
+
|
36
41
|
max_tokens_per_minute (int): The maximum number of tokens that can accumulate per minute.
|
37
42
|
"""
|
38
43
|
super().__init__(max_requests_per_minute, max_tokens_per_minute)
|
@@ -66,8 +71,9 @@ class OpenAIRateLimiter(RateLimiter):
|
|
66
71
|
This method should be implemented in a subclass to provide the specific calculation logic
|
67
72
|
for the OpenAI API.
|
68
73
|
|
69
|
-
|
74
|
+
Parameters:
|
70
75
|
payload (Dict[str, Any]): The payload of the request.
|
76
|
+
|
71
77
|
api_endpoint (str): The specific API endpoint for the request.
|
72
78
|
|
73
79
|
Returns:
|
@@ -160,12 +166,17 @@ class OpenAIService(BaseAPIService):
|
|
160
166
|
"""
|
161
167
|
Initializes the OpenAI service with configuration for API interaction.
|
162
168
|
|
163
|
-
|
169
|
+
Parameters:
|
164
170
|
api_key (str): The API key for authenticating with OpenAI.
|
171
|
+
|
165
172
|
token_encoding_name (str): The name of the text encoding used by OpenAI.
|
173
|
+
|
166
174
|
max_attempts (int): The maximum number of attempts for calling an API endpoint.
|
175
|
+
|
167
176
|
status_tracker (Optional[StatusTracker]): Tracker for API call outcomes.
|
168
|
-
|
177
|
+
|
178
|
+
ratelimiter (Optional[OpenAIRateLimiter]): Rate limiter for OpenAI's limits.
|
179
|
+
|
169
180
|
queue (Optional[AsyncQueue]): Queue for managing asynchronous API calls.
|
170
181
|
|
171
182
|
Example:
|
@@ -183,15 +194,16 @@ class OpenAIService(BaseAPIService):
|
|
183
194
|
super().__init__(api_key, token_encoding_name, max_attempts,
|
184
195
|
max_requests_per_minute, max_tokens_per_minute,
|
185
196
|
ratelimiter, status_tracker, queue)
|
186
|
-
|
187
197
|
|
188
|
-
async def call_api(self, http_session, endpoint, payload: Dict[str, any] =None) -> Optional[Dict[str, any]]:
|
198
|
+
async def call_api(self, http_session, endpoint, payload: Dict[str, any] = None) -> Optional[Dict[str, any]]:
|
189
199
|
"""
|
190
200
|
Call an OpenAI API endpoint with a specific payload and handle the response.
|
191
201
|
|
192
|
-
|
193
|
-
|
194
|
-
|
202
|
+
Parameters:
|
203
|
+
http_session: The session object for making HTTP requests.
|
204
|
+
|
205
|
+
endpoint (str): The full URL of the OpenAI API endpoint to be called.
|
206
|
+
|
195
207
|
payload (Dict[str, any]): The payload to send with the API request.
|
196
208
|
|
197
209
|
Returns:
|
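The expanded docstrings in this file describe `rate_limit_replenisher` (a coroutine that replenishes per-minute budgets) and `calculate_num_token`, but the diff does not show their bodies. A generic sketch of the pattern those docstrings describe — the class name, the whitespace-based token count, and the replenishment interval are assumptions for illustration (the real implementation presumably counts tokens with `tiktoken`):

```python
import asyncio
from typing import Any, Dict

class SimpleRateLimiter:
    """Sketch of a per-minute request/token budget replenished by a coroutine."""

    def __init__(self, max_requests_per_minute: int, max_tokens_per_minute: int) -> None:
        self.max_requests_per_minute = max_requests_per_minute
        self.max_tokens_per_minute = max_tokens_per_minute
        # Callers decrement these before issuing a request and wait when exhausted.
        self.available_request_capacity = max_requests_per_minute
        self.available_token_capacity = max_tokens_per_minute

    async def rate_limit_replenisher(self, interval: float = 60.0) -> None:
        # Periodically reset both budgets to their per-minute maxima.
        while True:
            await asyncio.sleep(interval)
            self.available_request_capacity = self.max_requests_per_minute
            self.available_token_capacity = self.max_tokens_per_minute

    def calculate_num_token(self, payload: Dict[str, Any]) -> int:
        # Crude stand-in for real tokenization: whitespace word count
        # over the chat messages in the payload.
        messages = payload.get("messages", [])
        return sum(len(str(m.get("content", "")).split()) for m in messages)
```

The replenisher would typically be started once per service instance with `asyncio.create_task`, alongside the queue that drains pending API calls.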
--- lionagi-0.0.105/lionagi/session/conversation.py
+++ lionagi-0.0.107/lionagi/session/conversation.py
@@ -1,5 +1,6 @@
 from .message import Message
 
+
 class Conversation:
     """
     A class representing a conversation between users and the assistant.
@@ -8,21 +9,22 @@ class Conversation:
     user instructions, and assistant responses.
 
     Attributes:
-        response_counts (int):
-
-
-
+        response_counts (int):
+            The count of assistant responses in the conversation.
+        messages (list):
+            A list to store messages in the conversation.
+        msg (Message):
+            An instance of the Message class for creating messages.
+        responses (list):
+            A list to store assistant responses in the conversation.
 
     Methods:
         initiate_conversation(system, instruction, context=None, name=None):
             Initiate a conversation with a system setting and user instruction.
-
         add_messages(system, instruction, context=None, response=None, tool=None, name=None):
             Add messages to the conversation, including system setting, user instruction, and assistant response.
-
         change_system(system):
             Change the system setting in the conversation.
-
         keep_last_n_exchanges(n: int):
             Keep the last n exchanges in the conversation.
     """
@@ -40,14 +42,17 @@ class Conversation:
         self.msg = Message()
         self.responses = []
 
-    def initiate_conversation(self, system, instruction, context=None, name=None):
+    def initiate_conversation(self, system=None, instruction=None, context=None, name=None):
         """
         Initiate a conversation with a system setting and user instruction.
 
         Parameters:
             system (str): The system setting for the conversation.
+
             instruction (str): The user instruction to initiate the conversation.
+
             context (dict): Additional context for the conversation. Default is None.
+
             name (str): The name associated with the user. Default is None.
         """
         self.messages, self.responses = [], []
@@ -55,19 +60,23 @@ class Conversation:
         self.add_messages(instruction=instruction, context=context, name=name)
 
     # modify the message adding to accomodate tools
-    def add_messages(self, system=None, instruction=None, context=None, response=None,
+    def add_messages(self, system=None, instruction=None, context=None, response=None, name=None):
         """
         Add messages to the conversation, including system setting, user instruction, and assistant response.
 
         Parameters:
             system (str): The system setting for the message. Default is None.
+
             instruction (str): The instruction content for the message. Default is None.
+
             context (dict): Additional context for the message. Default is None.
+
             response (dict): The response content for the message. Default is None.
-
+
             name (str): The name associated with the message. Default is None.
         """
-        msg = self.msg(system=system, instruction=instruction, context=context,
+        msg = self.msg(system=system, instruction=instruction, context=context,
+                       response=response, name=name)
         self.messages.append(msg)
 
     def change_system(self, system):
@@ -92,4 +101,5 @@ class Conversation:
         ]
         if len(response_indices) >= n:
             first_index_to_keep = response_indices[-n] + 1
-            self.messages =
+            self.messages = self.messages[0] + self.messages[first_index_to_keep:]
+
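The final hunk completes the previously truncated `keep_last_n_exchanges` body. A standalone sketch of that slicing logic is below; note that the patched line concatenates `self.messages[0]` directly with a list, which only works if the first message supports `+` — with dict messages one would wrap it as `[self.messages[0]]`, which the sketch does. The assistant-role filter is an assumption matching the truncated `response_indices` comprehension shown in context:

```python
def keep_last_n_exchanges(messages: list, n: int) -> list:
    # Indices of assistant responses, skipping the leading system message.
    response_indices = [
        i for i, msg in enumerate(messages[1:], start=1)
        if msg.get("role") == "assistant"
    ]
    if len(response_indices) >= n:
        # Keep the system message plus everything after the n-th-from-last
        # assistant response (first element wrapped so dicts concatenate).
        first_index_to_keep = response_indices[-n] + 1
        return [messages[0]] + messages[first_index_to_keep:]
    return messages
```

With fewer than `n` assistant responses the history is returned unchanged, mirroring the guard in the patched method.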
--- lionagi-0.0.105/lionagi/session/message.py
+++ lionagi-0.0.107/lionagi/session/message.py
@@ -11,11 +11,16 @@ class Message:
     This class encapsulates messages from users, the assistant, systems, and external tools.
 
     Attributes:
-        role (str):
-
-
-
-
+        role (str):
+            The role of the message, indicating if it's from the user, assistant, system, or tool.
+        content:
+            The content of the message, which can be an instruction, response, system setting, or tool information.
+        name (str):
+            The name associated with the message, specifying the source (user, assistant, system, or tool).
+        metadata (dict):
+            Additional metadata including id, timestamp, and name.
+        _logger (DataLogger):
+            An instance of the DataLogger class for logging message details.
 
     Methods:
         create_message(system, instruction, context, response, tool, name):
@@ -40,37 +45,52 @@ class Message:
         self.metadata = None
         self._logger = DataLogger()
 
-    def create_message(self, system=None, instruction=None, context=None, response=None,
+    def create_message(self, system=None, instruction=None, context=None, response=None, name=None):
+
         """
         Create a message based on the provided information.
 
         Parameters:
             system (str): The system setting for the message. Default is None.
+
             instruction (str): The instruction content for the message. Default is None.
+
             context (dict): Additional context for the message. Default is None.
+
             response (dict): The response content for the message. Default is None.
-
+
             name (str): The name associated with the message. Default is None.
         """
-        if sum(l_call([system, instruction, response
+        if sum(l_call([system, instruction, response], bool)) > 1:
             raise ValueError("Error: Message cannot have more than one role.")
 
         else:
             if response:
                 self.role = "assistant"
-
-
-
-
-
-
-
-
-
-
-
-
-
+                try:
+                    response = response["message"]
+                    if str(response['content']) == "None":
+                        try:
+                            tool_count = 0
+                            func_list = []
+                            while tool_count < len(response['tool_calls']):
+                                if response['tool_calls'][tool_count]['type'] == 'function':
+                                    func_content = {"function": ("func_" + response['tool_calls'][tool_count]['function']['name']),
+                                                    "arguments": response['tool_calls'][tool_count]['function']['arguments']}
+                                    func_list.append(func_content)
+                                tool_count += 1
+
+                            self.name = name or "func_request"
+                            self.content = {'function_list': func_list}
+                        except:
+                            raise ValueError("Response message must be one of regular response or function calling")
+                    else:
+                        self.content = response['content']
+                        self.name = name or "assistant"
+                except:
+                    self.name = name or "func_call"
+                    self.content = response
+
         elif instruction:
             self.role = "user"
             self.content = {"instruction": instruction}
@@ -81,17 +101,13 @@ class Message:
             self.role = "system"
             self.content = system
             self.name = name or "system"
-        elif tool:
-            self.role = "tool"
-            self.content = tool
-            self.name = name or "tool"
 
     def to_json(self):
         """
         Convert the message to a JSON format.
 
         Returns:
-
+            dict: The message in JSON format.
         """
         out = {
             "role": self.role,
@@ -106,22 +122,27 @@ class Message:
         self._logger({**self.metadata, **out})
         return out
 
-    def __call__(self, system=None, instruction=None, context=None,
+    def __call__(self, system=None, instruction=None, context=None,
+                 response=None, name=None):
         """
         Create and return a message in JSON format.
 
         Parameters:
             system (str): The system setting for the message. Default is None.
+
             instruction (str): The instruction content for the message. Default is None.
+
             context (dict): Additional context for the message. Default is None.
+
             response (dict): The response content for the message. Default is None.
+
             name (str): The name associated with the message. Default is None.
-            tool (dict): The tool information for the message. Default is None.
 
         Returns:
             dict: The message in JSON format.
         """
-        self.create_message(system, instruction,
+        self.create_message(system=system, instruction=instruction,
+                            context=context, response=response, name=name)
         return self.to_json()
 
     def to_csv(self, dir=None, filename=None, verbose=True, timestamp=True, dir_exist_ok=True, file_exist_ok=False):
@@ -130,10 +151,16 @@ class Message:
 
         Parameters:
             dir (str): The directory path for saving the CSV file. Default is None.
+
             filename (str): The filename for the CSV file. Default is None.
+
             verbose (bool): Whether to include verbose information in the CSV. Default is True.
+
             timestamp (bool): Whether to include timestamps in the CSV. Default is True.
+
             dir_exist_ok (bool): Whether to allow the directory to exist. Default is True.
+
             file_exist_ok (bool): Whether to allow the file to exist. Default is False.
         """
-        self._logger.to_csv(dir, filename, verbose, timestamp, dir_exist_ok, file_exist_ok)
+        self._logger.to_csv(dir, filename, verbose, timestamp, dir_exist_ok, file_exist_ok)
+
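The new `create_message` branch in this file walks `response['tool_calls']`, collecting function-type calls into `func_list`. The same traversal as a standalone helper, for clarity — this is a sketch, with only the `func_` name prefix and the dict shapes taken from the diff:

```python
from typing import Any, Dict, List

def parse_function_calls(message: Dict[str, Any]) -> List[Dict[str, str]]:
    """Collect function-type tool calls the way create_message builds func_list."""
    func_list = []
    for call in message.get("tool_calls", []):
        if call.get("type") == "function":
            func_list.append({
                # Prefix mirrors the "func_" naming used in the diff.
                "function": "func_" + call["function"]["name"],
                "arguments": call["function"]["arguments"],  # raw JSON string
            })
    return func_list
```

Non-function entries are skipped, matching the `type == 'function'` check in the patched loop; the arguments stay as the unparsed JSON string the API returns.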