aient 1.1.62__tar.gz → 1.1.64__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {aient-1.1.62 → aient-1.1.64}/PKG-INFO +6 -7
- {aient-1.1.62 → aient-1.1.64}/README.md +5 -6
- {aient-1.1.62 → aient-1.1.64}/aient/models/chatgpt.py +79 -188
- {aient-1.1.62 → aient-1.1.64}/aient.egg-info/PKG-INFO +6 -7
- {aient-1.1.62 → aient-1.1.64}/pyproject.toml +1 -1
- {aient-1.1.62 → aient-1.1.64}/LICENSE +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/__init__.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/__init__.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/log_config.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/models.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/request.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/response.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/test/test_base_api.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/test/test_geminimask.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/test/test_image.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/test/test_payload.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/core/utils.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/models/__init__.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/models/audio.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/models/base.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/__init__.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/arXiv.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/config.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/excute_command.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/get_time.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/image.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/list_directory.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/read_file.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/read_image.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/readonly.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/registry.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/run_python.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/websearch.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/plugins/write_file.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/utils/__init__.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/utils/prompt.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient/utils/scripts.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient.egg-info/SOURCES.txt +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient.egg-info/dependency_links.txt +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient.egg-info/requires.txt +0 -0
- {aient-1.1.62 → aient-1.1.64}/aient.egg-info/top_level.txt +0 -0
- {aient-1.1.62 → aient-1.1.64}/setup.cfg +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_Web_crawler.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_ddg_search.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_google_search.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_ollama.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_plugin.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_search.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_url.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_whisper.py +0 -0
- {aient-1.1.62 → aient-1.1.64}/test/test_yjh.py +0 -0
{aient-1.1.62 → aient-1.1.64}/PKG-INFO

```diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: aient
-Version: 1.1.62
+Version: 1.1.64
 Summary: Aient: The Awakening of Agent.
 Requires-Python: >=3.11
 Description-Content-Type: text/markdown
```
```diff
@@ -25,7 +25,7 @@ Dynamic: license-file
 
 [English](./README.md) | [Chinese](./README_CN.md)
 
-aient is a powerful library designed to simplify and unify the use of different large language models, including …
+aient is a powerful library designed to simplify and unify the use of different large language models, including gpt-4.1/5, o3, DALL-E 3, claude4, gemini-2.5-pro/flash, Vertex AI (Claude, Gemini), and Groq. The library supports GPT format function calls and has built-in Google search and URL summarization features, greatly enhancing the practicality and flexibility of the models.
 
 ## ✨ Features
 
```
```diff
@@ -82,14 +82,13 @@ The following is a list of environment variables related to plugin settings:
 
 ## Supported models
 
-- …
-- …
+- gpt-4.1/5
+- o3
 - DALL-E 3
-- …
-- …
+- claude4
+- gemini-2.5-pro/flash
 - Vertex AI (Claude, Gemini)
 - Groq
-- DuckDuckGo(gpt-4o-mini, claude-3-haiku, Meta-Llama-3.1-70B, Mixtral-8x7B)
 
 ## 🧩 Plugin
 
```
{aient-1.1.62 → aient-1.1.64}/README.md

```diff
@@ -2,7 +2,7 @@
 
 [English](./README.md) | [Chinese](./README_CN.md)
 
-aient is a powerful library designed to simplify and unify the use of different large language models, including …
+aient is a powerful library designed to simplify and unify the use of different large language models, including gpt-4.1/5, o3, DALL-E 3, claude4, gemini-2.5-pro/flash, Vertex AI (Claude, Gemini), and Groq. The library supports GPT format function calls and has built-in Google search and URL summarization features, greatly enhancing the practicality and flexibility of the models.
 
 ## ✨ Features
 
```
```diff
@@ -59,14 +59,13 @@ The following is a list of environment variables related to plugin settings:
 
 ## Supported models
 
-- …
-- …
+- gpt-4.1/5
+- o3
 - DALL-E 3
-- …
-- …
+- claude4
+- gemini-2.5-pro/flash
 - Vertex AI (Claude, Gemini)
 - Groq
-- DuckDuckGo(gpt-4o-mini, claude-3-haiku, Meta-Llama-3.1-70B, Mixtral-8x7B)
 
 ## 🧩 Plugin
 
```
{aient-1.1.62 → aient-1.1.64}/aient/models/chatgpt.py

```diff
@@ -676,151 +676,7 @@ class chatgpt(BaseLLM):
             self.conversation[convo_id].pop(-1)
             self.conversation[convo_id].pop(-1)
 
-    def ask_stream(
-        self,
-        prompt: list,
-        role: str = "user",
-        convo_id: str = "default",
-        model: str = "",
-        pass_history: int = 9999,
-        function_name: str = "",
-        total_tokens: int = 0,
-        function_arguments: str = "",
-        function_call_id: str = "",
-        language: str = "English",
-        system_prompt: str = None,
-        stream: bool = True,
-        **kwargs,
-    ):
-        """
-        Ask a question (同步流式响应)
-        """
-        # 准备会话
-        self.system_prompt = system_prompt or self.system_prompt
-        if convo_id not in self.conversation or pass_history <= 2:
-            self.reset(convo_id=convo_id, system_prompt=system_prompt)
-        self.add_to_conversation(prompt, role, convo_id=convo_id, function_name=function_name, total_tokens=total_tokens, function_arguments=function_arguments, function_call_id=function_call_id, pass_history=pass_history)
-
-        # 获取请求体
-        json_post = None
-        async def get_post_body_async():
-            nonlocal json_post
-            url, headers, json_post, engine_type = await self.get_post_body(prompt, role, convo_id, model, pass_history, stream=stream, **kwargs)
-            return url, headers, json_post, engine_type
-
-        # 替换原来的获取请求体的代码
-        # json_post = next(async_generator_to_sync(get_post_body_async()))
-        try:
-            loop = asyncio.get_event_loop()
-            if loop.is_closed():
-                loop = asyncio.new_event_loop()
-                asyncio.set_event_loop(loop)
-        except RuntimeError:
-            loop = asyncio.new_event_loop()
-            asyncio.set_event_loop(loop)
-        url, headers, json_post, engine_type = loop.run_until_complete(get_post_body_async())
-
-        self.truncate_conversation(convo_id=convo_id)
-
-        # 打印日志
-        if self.print_log:
-            self.logger.info(f"api_url: {kwargs.get('api_url', self.api_url.chat_url)}, {url}")
-            self.logger.info(f"api_key: {kwargs.get('api_key', self.api_key)}")
-
-        # 发送请求并处理响应
-        for _ in range(3):
-            if self.print_log:
-                replaced_text = json.loads(re.sub(r';base64,([A-Za-z0-9+/=]+)', ';base64,***', json.dumps(json_post)))
-                replaced_text_str = json.dumps(replaced_text, indent=4, ensure_ascii=False)
-                self.logger.info(f"Request Body:\n{replaced_text_str}")
-
-            try:
-                # 改进处理方式,创建一个内部异步函数来处理异步调用
-                async def process_async():
-                    # 异步调用 fetch_response_stream
-                    # self.logger.info("--------------------------------")
-                    # self.logger.info(prompt)
-                    # self.logger.info(parse_function_xml(prompt))
-                    # self.logger.info(convert_functions_to_xml(parse_function_xml(prompt)))
-                    # self.logger.info(convert_functions_to_xml(parse_function_xml(prompt)).strip() == prompt)
-                    # self.logger.info("--------------------------------")
-                    if prompt and "</" in prompt and "<instructions>" not in prompt and convert_functions_to_xml(parse_function_xml(prompt)).strip() == prompt:
-                        tmp_response = {
-                            "id": "chatcmpl-zXCi5TxWy953TCcxFocSienhvx0BB",
-                            "object": "chat.completion.chunk",
-                            "created": 1754588695,
-                            "model": "gemini-2.5-flash",
-                            "choices": [
-                                {
-                                    "index": 0,
-                                    "delta": {
-                                        "role": "assistant",
-                                        "content": prompt
-                                    },
-                                    "finish_reason": "stop"
-                                }
-                            ],
-                            "system_fingerprint": "fp_d576307f90"
-                        }
-                        async def _mock_response_generator():
-                            yield f"data: {json.dumps(tmp_response)}\n\n"
-                        async_generator = _mock_response_generator()
-                    else:
-                        if stream:
-                            async_generator = fetch_response_stream(
-                                self.aclient,
-                                url,
-                                headers,
-                                json_post,
-                                engine_type,
-                                model or self.engine,
-                            )
-                        else:
-                            async_generator = fetch_response(
-                                self.aclient,
-                                url,
-                                headers,
-                                json_post,
-                                engine_type,
-                                model or self.engine,
-                            )
-                    # 异步处理响应流
-                    async for chunk in self._process_stream_response(
-                        async_generator,
-                        convo_id=convo_id,
-                        function_name=function_name,
-                        total_tokens=total_tokens,
-                        function_arguments=function_arguments,
-                        function_call_id=function_call_id,
-                        model=model,
-                        language=language,
-                        system_prompt=system_prompt,
-                        pass_history=pass_history,
-                        is_async=True,
-                        **kwargs
-                    ):
-                        yield chunk
-
-                # 将异步函数转换为同步生成器
-                return async_generator_to_sync(process_async())
-            except ConnectionError:
-                self.logger.error("连接错误,请检查服务器状态或网络连接。")
-                return
-            except requests.exceptions.ReadTimeout:
-                self.logger.error("请求超时,请检查网络连接或增加超时时间。")
-                return
-            except httpx.RemoteProtocolError:
-                continue
-            except Exception as e:
-                self.logger.error(f"发生了未预料的错误:{e}")
-                if "Invalid URL" in str(e):
-                    e = "您输入了无效的API URL,请使用正确的URL并使用`/start`命令重新设置API URL。具体错误如下:\n\n" + str(e)
-                raise Exception(f"{e}")
-            # 最后一次重试失败,向上抛出异常
-            if _ == 2:
-                raise Exception(f"{e}")
-
-    async def ask_stream_async(
+    async def _ask_stream_handler(
         self,
         prompt: list,
         role: str = "user",
```
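The sync path removed in this release had to run the coroutine `get_post_body` from synchronous code, hence the get/repair/create event-loop dance before `loop.run_until_complete()`. That pattern in isolation, as a runnable sketch (the `make_body` coroutine here is a hypothetical stand-in for `get_post_body`, not aient's actual function):

```python
import asyncio

def run_coro_blocking(coro):
    """Run a coroutine to completion from synchronous code, reusing the
    thread's event loop when possible and creating a fresh one when the
    old loop is closed or missing -- the same sequence the removed
    ask_stream performed before calling loop.run_until_complete()."""
    try:
        loop = asyncio.get_event_loop()
        if loop.is_closed():
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
    except RuntimeError:  # no current event loop in this thread
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    return loop.run_until_complete(coro)

async def make_body():
    # Hypothetical stand-in for the awaited get_post_body() call.
    await asyncio.sleep(0)
    return {"model": "gpt-4.1", "stream": True}

print(run_coro_blocking(make_body()))
```

Managing the loop by hand like this is exactly the boilerplate the refactor consolidates into a single wrapper.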
```diff
@@ -837,7 +693,7 @@ class chatgpt(BaseLLM):
         **kwargs,
     ):
         """
-        Ask a question (异步流式响应)
+        Unified stream handler (async)
         """
         # 准备会话
         self.system_prompt = system_prompt or self.system_prompt
```
```diff
@@ -851,41 +707,31 @@ class chatgpt(BaseLLM):
 
         # 打印日志
         if self.print_log:
-            self.logger.info(f"api_url: {url}")
+            self.logger.info(f"api_url: {kwargs.get('api_url', self.api_url.chat_url)}, {url}")
             self.logger.info(f"api_key: {kwargs.get('api_key', self.api_key)}")
 
         # 发送请求并处理响应
-        for _ in range(3):
+        for i in range(3):
             if self.print_log:
                 replaced_text = json.loads(re.sub(r';base64,([A-Za-z0-9+/=]+)', ';base64,***', json.dumps(json_post)))
                 replaced_text_str = json.dumps(replaced_text, indent=4, ensure_ascii=False)
                 self.logger.info(f"Request Body:\n{replaced_text_str}")
 
             try:
-                # 使用fetch_response_stream处理响应
-                # self.logger.info("--------------------------------")
-                # self.logger.info(prompt)
-                # self.logger.info(parse_function_xml(prompt))
-                # self.logger.info(convert_functions_to_xml(parse_function_xml(prompt)))
-                # self.logger.info(convert_functions_to_xml(parse_function_xml(prompt)).strip() == prompt)
-                # self.logger.info("--------------------------------")
                 if prompt and "</" in prompt and "<instructions>" not in prompt and convert_functions_to_xml(parse_function_xml(prompt)).strip() == prompt:
                     tmp_response = {
                         "id": "chatcmpl-zXCi5TxWy953TCcxFocSienhvx0BB",
                         "object": "chat.completion.chunk",
                         "created": 1754588695,
-                        "model": "gemini-2.5-flash",
+                        "model": model or self.engine,
                         "choices": [
                             {
-                                "index": 0,
-                                "delta": {
-                                    "role": "assistant",
-                                    "content": prompt
-                                },
-                                "finish_reason": "stop"
+                                "index": 0,
+                                "delta": {"role": "assistant", "content": prompt},
+                                "finish_reason": "stop",
                             }
                         ],
-                        "system_fingerprint": "fp_d576307f90"
+                        "system_fingerprint": "fp_d576307f90",
                     }
                     async def _mock_response_generator():
                         yield f"data: {json.dumps(tmp_response)}\n\n"
```
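The `tmp_response` short-circuit above fabricates a single OpenAI-style `chat.completion.chunk` and emits it as one SSE `data:` line whenever the prompt is itself a serialized XML tool call, so the echoed call never round-trips to the API. A standalone sketch of that mock (field values copied from the diff; `make_echo_chunk` and `mock_sse` are illustrative names, not aient APIs):

```python
import asyncio
import json

def make_echo_chunk(content: str, model: str) -> dict:
    # Same shape as the tmp_response literal in the diff: one choice,
    # an assistant delta carrying the echoed content, finish_reason "stop".
    return {
        "id": "chatcmpl-zXCi5TxWy953TCcxFocSienhvx0BB",
        "object": "chat.completion.chunk",
        "created": 1754588695,
        "model": model,
        "choices": [
            {
                "index": 0,
                "delta": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        "system_fingerprint": "fp_d576307f90",
    }

async def mock_sse(content: str, model: str):
    # One-shot async generator, like _mock_response_generator in the diff.
    yield f"data: {json.dumps(make_echo_chunk(content, model))}\n\n"

async def main():
    async for line in mock_sse("<tool>…</tool>", "gpt-4.1"):
        payload = json.loads(line[len("data: "):].strip())
        print(payload["choices"][0]["delta"]["content"])

asyncio.run(main())
```

Downstream consumers then parse this exactly like a real streamed chunk, which is why the fake line must match the provider's wire format.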
```diff
@@ -893,42 +739,27 @@ class chatgpt(BaseLLM):
                 else:
                     if stream:
                         generator = fetch_response_stream(
-                            self.aclient,
-                            url,
-                            headers,
-                            json_post,
-                            engine_type,
-                            model or self.engine,
+                            self.aclient, url, headers, json_post, engine_type, model or self.engine,
                         )
                     else:
                         generator = fetch_response(
-                            self.aclient,
-                            url,
-                            headers,
-                            json_post,
-                            engine_type,
-                            model or self.engine,
+                            self.aclient, url, headers, json_post, engine_type, model or self.engine,
                         )
 
                 # 处理正常响应
                 async for processed_chunk in self._process_stream_response(
-                    generator,
-                    convo_id=convo_id,
-                    function_name=function_name,
-                    total_tokens=total_tokens,
-                    function_arguments=function_arguments,
-                    function_call_id=function_call_id,
-                    model=model,
-                    language=language,
-                    system_prompt=system_prompt,
-                    pass_history=pass_history,
-                    is_async=True,
-                    **kwargs
+                    generator, convo_id=convo_id, function_name=function_name,
+                    total_tokens=total_tokens, function_arguments=function_arguments,
+                    function_call_id=function_call_id, model=model, language=language,
+                    system_prompt=system_prompt, pass_history=pass_history, is_async=True, **kwargs
                 ):
                     yield processed_chunk
 
                 # 成功处理,跳出重试循环
                 break
+            except (httpx.ConnectError, httpx.ReadTimeout):
+                self.logger.error("连接或读取超时错误,请检查服务器状态或网络连接。")
+                return  # Stop iteration
             except httpx.RemoteProtocolError:
                 continue
             except Exception as e:
```
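Both versions wrap the request in a three-attempt loop in which a protocol error retries and a connection failure aborts; the new handler swaps `ConnectionError`/`requests.exceptions.ReadTimeout` for `httpx.ConnectError`/`httpx.ReadTimeout`, matching the httpx client it actually uses. The retry policy in isolation (stand-in exception classes so the sketch runs without httpx installed; `fetch_with_retry` and `flaky` are hypothetical names):

```python
# Stand-ins for httpx.RemoteProtocolError / httpx.ConnectError; the real
# code catches the httpx exception types.
class RemoteProtocolError(Exception):
    pass

class ConnectError(Exception):
    pass

def fetch_with_retry(fetch, attempts=3):
    """Retry policy mirroring the diff's `for i in range(3)` loop:
    protocol errors retry, connection errors abort quietly, and if
    every attempt fails the last error is re-raised."""
    last_exc = None
    for i in range(attempts):
        try:
            return fetch()
        except ConnectError:
            return None               # network down: stop, no retry
        except RemoteProtocolError as e:
            last_exc = e              # transient: retry
            continue
    if last_exc is not None:          # all attempts exhausted
        raise last_exc

calls = {"n": 0}
def flaky():
    # Fails twice with a retryable error, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RemoteProtocolError("peer closed connection")
    return "ok"

print(fetch_with_retry(flaky))  # → ok
```

Note how `RemoteProtocolError` (server dropped the connection mid-stream) is the only error worth retrying here, since a fresh request may simply succeed.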
```diff
@@ -939,9 +770,69 @@ class chatgpt(BaseLLM):
                     e = "您输入了无效的API URL,请使用正确的URL并使用`/start`命令重新设置API URL。具体错误如下:\n\n" + str(e)
                 raise Exception(f"{e}")
             # 最后一次重试失败,向上抛出异常
-            if _ == 2:
+            if i == 2:
                 raise Exception(f"{e}")
 
+    def ask_stream(
+        self,
+        prompt: list,
+        role: str = "user",
+        convo_id: str = "default",
+        model: str = "",
+        pass_history: int = 9999,
+        function_name: str = "",
+        total_tokens: int = 0,
+        function_arguments: str = "",
+        function_call_id: str = "",
+        language: str = "English",
+        system_prompt: str = None,
+        stream: bool = True,
+        **kwargs,
+    ):
+        """
+        Ask a question (同步流式响应)
+        """
+        try:
+            loop = asyncio.get_event_loop()
+            if loop.is_closed():
+                loop = asyncio.new_event_loop()
+                asyncio.set_event_loop(loop)
+        except RuntimeError:
+            loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(loop)
+
+        async_gen = self._ask_stream_handler(
+            prompt, role, convo_id, model, pass_history, function_name, total_tokens,
+            function_arguments, function_call_id, language, system_prompt, stream, **kwargs
+        )
+        for chunk in async_generator_to_sync(async_gen):
+            yield chunk
+
+    async def ask_stream_async(
+        self,
+        prompt: list,
+        role: str = "user",
+        convo_id: str = "default",
+        model: str = "",
+        pass_history: int = 9999,
+        function_name: str = "",
+        total_tokens: int = 0,
+        function_arguments: str = "",
+        function_call_id: str = "",
+        language: str = "English",
+        system_prompt: str = None,
+        stream: bool = True,
+        **kwargs,
+    ):
+        """
+        Ask a question (异步流式响应)
+        """
+        async for chunk in self._ask_stream_handler(
+            prompt, role, convo_id, model, pass_history, function_name, total_tokens,
+            function_arguments, function_call_id, language, system_prompt, stream, **kwargs
+        ):
+            yield chunk
+
     async def ask_async(
         self,
         prompt: str,
```
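The net effect of the chatgpt.py change is one async `_ask_stream_handler` with two thin facades: `ask_stream_async` delegates directly, while `ask_stream` bridges through `async_generator_to_sync`, a helper imported from elsewhere in aient and not shown in this diff. A minimal sketch of what such a bridge can look like (hypothetical implementation, assuming a fresh event loop per call; aient's actual helper may differ):

```python
import asyncio

def async_generator_to_sync(agen):
    """Drive an async generator from synchronous code by pumping
    __anext__ through a private event loop until StopAsyncIteration."""
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                yield loop.run_until_complete(agen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.run_until_complete(agen.aclose())  # release generator resources
        loop.close()

async def _stream_handler(n):
    # Hypothetical stand-in for _ask_stream_handler: yields n chunks.
    for i in range(n):
        await asyncio.sleep(0)
        yield f"chunk-{i}"

def ask_stream(n):
    # Sync facade: the same delegation pattern as the new ask_stream wrapper.
    yield from async_generator_to_sync(_stream_handler(n))

print(list(ask_stream(3)))  # → ['chunk-0', 'chunk-1', 'chunk-2']
```

Keeping a single async implementation and generating the sync view this way is what lets the release delete the ~145-line duplicated sync method.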
{aient-1.1.62 → aient-1.1.64}/aient.egg-info/PKG-INFO

```diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: aient
-Version: 1.1.62
+Version: 1.1.64
 Summary: Aient: The Awakening of Agent.
 Requires-Python: >=3.11
 Description-Content-Type: text/markdown
```
```diff
@@ -25,7 +25,7 @@ Dynamic: license-file
 
 [English](./README.md) | [Chinese](./README_CN.md)
 
-aient is a powerful library designed to simplify and unify the use of different large language models, including …
+aient is a powerful library designed to simplify and unify the use of different large language models, including gpt-4.1/5, o3, DALL-E 3, claude4, gemini-2.5-pro/flash, Vertex AI (Claude, Gemini), and Groq. The library supports GPT format function calls and has built-in Google search and URL summarization features, greatly enhancing the practicality and flexibility of the models.
 
 ## ✨ Features
 
```
```diff
@@ -82,14 +82,13 @@ The following is a list of environment variables related to plugin settings:
 
 ## Supported models
 
-- …
-- …
+- gpt-4.1/5
+- o3
 - DALL-E 3
-- …
-- …
+- claude4
+- gemini-2.5-pro/flash
 - Vertex AI (Claude, Gemini)
 - Groq
-- DuckDuckGo(gpt-4o-mini, claude-3-haiku, Meta-Llama-3.1-70B, Mixtral-8x7B)
 
 ## 🧩 Plugin
 
```
All remaining files listed above (+0 -0) are unchanged between 1.1.62 and 1.1.64.