byllm 0.4.1.post1__tar.gz → 0.4.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


byllm-0.4.3/PKG-INFO ADDED
@@ -0,0 +1,187 @@
1
+ Metadata-Version: 2.4
2
+ Name: byllm
3
+ Version: 0.4.3
4
+ Summary: byLLM Provides Easy to use APIs for different LLM Providers to be used with Jaseci's Jaclang Programming Language.
5
+ License: MIT
6
+ Keywords: llm,jaclang,jaseci,byLLM
7
+ Author: Jason Mars
8
+ Author-email: jason@jaseci.org
9
+ Maintainer: Jason Mars
10
+ Maintainer-email: jason@jaseci.org
11
+ Classifier: License :: OSI Approved :: MIT License
12
+ Classifier: Programming Language :: Python :: 2
13
+ Classifier: Programming Language :: Python :: 2.7
14
+ Classifier: Programming Language :: Python :: 3
15
+ Classifier: Programming Language :: Python :: 3.4
16
+ Classifier: Programming Language :: Python :: 3.5
17
+ Classifier: Programming Language :: Python :: 3.6
18
+ Classifier: Programming Language :: Python :: 3.7
19
+ Classifier: Programming Language :: Python :: 3.8
20
+ Classifier: Programming Language :: Python :: 3.9
21
+ Classifier: Programming Language :: Python :: 3.10
22
+ Classifier: Programming Language :: Python :: 3.11
23
+ Classifier: Programming Language :: Python :: 3.12
24
+ Classifier: Programming Language :: Python :: 3.13
25
+ Classifier: Programming Language :: Python :: 3.14
26
+ Provides-Extra: tools
27
+ Provides-Extra: video
28
+ Requires-Dist: jaclang (>=0.8.8)
29
+ Requires-Dist: litellm (>=1.75.5.post1)
30
+ Requires-Dist: loguru (>=0.7.2,<0.8.0)
31
+ Requires-Dist: pillow (>=10.4.0,<10.5.0)
32
+ Description-Content-Type: text/markdown
33
+
34
+ <div align="center">
35
+ <img src="../docs/docs/assets/byLLM_name_logo.png" height="150">
36
+
37
+ [About byLLM] | [Get started] | [Usage docs] | [Research Paper]
38
+ </div>
39
+
40
+ [About byLLM]: https://www.jac-lang.org/learn/jac-byllm/with_llm/
41
+ [Get started]: https://www.jac-lang.org/learn/jac-byllm/quickstart/
42
+ [Usage docs]: https://www.jac-lang.org/learn/jac-byllm/usage/
43
+ [Research Paper]: https://arxiv.org/abs/2405.08965
44
+
45
+ # byLLM : Less Prompting! More Coding!
46
+
47
+ [![PyPI version](https://img.shields.io/pypi/v/byllm.svg)](https://pypi.org/project/byllm/) [![tests](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml/badge.svg?branch=main)](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml)
48
+
49
+ byLLM is an innovative AI integration framework built for the Jaseci ecosystem, implementing the cutting-edge Meaning Typed Programming (MTP) paradigm. MTP revolutionizes AI integration by embedding prompt engineering directly into code semantics, making AI interactions more natural and maintainable. While primarily designed to complement the Jac programming language, byLLM also provides a powerful Python library interface.
50
+
51
+ Installation is simple via PyPI:
52
+
53
+ ```bash
54
+ pip install byllm
55
+ ```
56
+
57
+ ## Basic Example
58
+
59
+ Consider building an application that translates English into other languages using an LLM. It can be built as simply as this:
60
+
61
+ ```python
62
+ import from byllm { Model }
63
+
64
+ glob llm = Model(model_name="gpt-4o");
65
+
66
+ def translate_to(language: str, phrase: str) -> str by llm();
67
+
68
+ with entry {
69
+ output = translate_to(language="Welsh", phrase="Hello world");
70
+ print(output);
71
+ }
72
+ ```
73
+
74
+ This simple piece of code replaces traditional prompt engineering without introducing additional complexity.
75
+
76
+ ## Power of Types with LLMs
77
+
78
+ Consider a program that detects the personality type of a historical figure from their name. It can be built so that the LLM picks from an enum and the output strictly adheres to that type.
79
+
80
+ ```python
81
+ import from byllm { Model }
82
+ glob llm = Model(model_name="gemini/gemini-2.0-flash");
83
+
84
+ enum Personality {
85
+ INTROVERT, EXTROVERT, AMBIVERT
86
+ }
87
+
88
+ def get_personality(name: str) -> Personality by llm();
89
+
90
+ with entry {
91
+ name = "Albert Einstein";
92
+ result = get_personality(name);
93
+ print(f"{result} personality detected for {name}");
94
+ }
95
+ ```
96
+
97
+ > Similarly, custom types can be used as output types which force the LLM to adhere to the specified type and produce a valid result.
98
+
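+ For example, here is a minimal sketch of generating a custom object as the output type, adapted from byLLM's earlier object-generation example; the `Task` type and its fields are illustrative:
+
+ ```python
+ import from byllm { Model }
+
+ glob llm = Model(model_name="gpt-4o");
+
+ obj Task {
+     has description: str;
+     has priority: int;
+     has estimated_time: int;
+ }
+
+ # The returned value must validate as a Task instance.
+ def create_task(description: str, previous_tasks: list[Task]) -> Task by llm();
+
+ with entry {
+     new_task = create_task("Write documentation for the API", []);
+     print(f"{new_task.description} (priority {new_task.priority}, ~{new_task.estimated_time} min)");
+ }
+ ```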
99
+ ## Control! Control! Control!
100
+
101
+ Even though byLLM eliminates prompt engineering entirely, it still offers specific ways to enrich code semantics through **docstrings** and **semstrings**.
102
+
103
+ ```python
104
+ """Represents the personal record of a person"""
105
+ obj Person {
106
+ has name: str;
107
+ has dob: str;
108
+ has ssn: str;
109
+ }
110
+
111
+ sem Person.name = "Full name of the person";
112
+ sem Person.dob = "Date of Birth";
113
+ sem Person.ssn = "Last four digits of the Social Security Number of a person";
114
+
115
+ """Calculate eligibility for various services based on person's data."""
116
+ def check_eligibility(person: Person, service_type: str) -> bool by llm();
117
+
118
+ ```
119
+
120
+ Docstrings naturally enhance the semantics of their associated code constructs, while the `sem` keyword provides an elegant way to enrich the meaning of class attributes and function arguments. Our research shows these concise semantic strings are more effective than traditional multi-line prompts.
121
+
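+ As a usage sketch (assuming the `Person` object above; the sample values and service name are purely illustrative), the enriched function is called like any other:
+
+ ```python
+ with entry {
+     # Field values below are made up for illustration only.
+     person = Person(name="Jane Doe", dob="1990-01-01", ssn="1234");
+     if check_eligibility(person, "driving license") {
+         print(f"{person.name} is eligible");
+     }
+ }
+ ```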
122
+ ## How well does byLLM work?
123
+
124
+ byLLM is built on the underlying principle of Meaning Typed Programming (MTP). We evaluated it against two comparable AI integration frameworks for Python, DSPy and LMQL: byLLM achieves significant performance gains over LMQL and on-par or better accuracy than DSPy, at lower cost and with faster runtime.
125
+
126
+ <div align="center">
127
+ <img src="../docs/docs/assets/correctness_comparison.png" alt="Correctness Comparison" width="600" style="max-width: 100%;">
128
+ <br>
129
+ <em>Figure: Correctness comparison of byLLM with DSPy and LMQL on benchmark tasks.</em>
130
+ </div>
131
+
132
+ **📚 Full Documentation**: [Jac byLLM Documentation](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
133
+
134
+ **🎮 Complete Examples**:
135
+ - [Fantasy Trading Game](https://www.jac-lang.org/learn/examples/mtp_examples/fantasy_trading_game/) - Interactive RPG with AI-generated characters
136
+ - [RPG Level Generator](https://www.jac-lang.org/learn/examples/mtp_examples/rpg_game/) - AI-powered game level creation
137
+ - [RAG Chatbot Tutorial](https://www.jac-lang.org/learn/examples/rag_chatbot/Overview/) - Building chatbots with document retrieval
138
+
139
+ **🔬 Research**: The research behind MTP is available on [arXiv](https://arxiv.org/abs/2405.08965) and has been accepted at OOPSLA 2025.
140
+
141
+ ## Quick Links
142
+
143
+ - [Getting Started Guide](https://www.jac-lang.org/learn/jac-byllm/quickstart/)
144
+ - [Jac Language Documentation](https://www.jac-lang.org/)
145
+ - [GitHub Repository](https://github.com/jaseci-labs/jaseci)
146
+
147
+ ## Contributing
148
+
149
+ We welcome contributions to byLLM! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
150
+
151
+ Areas we actively seek contributions:
152
+ - 🐛 Bug fixes and improvements
153
+ - 📚 Documentation enhancements
154
+ - ✨ New examples and tutorials
155
+ - 🧪 Test cases and benchmarks
156
+
157
+ Please see our [Contributing Guide](https://www.jac-lang.org/internals/contrib/) for detailed instructions.
158
+
159
+ If you find a bug or have a feature request, please [open an issue](https://github.com/jaseci-labs/jaseci/issues/new/choose).
160
+
161
+ ## Community
162
+
163
+ Join our vibrant community:
164
+ - [Discord Server](https://discord.gg/6j3QNdtcN6) - Chat with the team and community
165
+
166
+ ## License
167
+
168
+ This project is licensed under the MIT License.
169
+
170
+ ### Third-Party Dependencies
171
+
172
+ byLLM integrates with various LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM.
173
+
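+ In practice, switching providers only changes the `model_name` passed to `Model`. The identifiers below follow the LiteLLM naming used in the examples above; the Anthropic model id is an assumption and should be checked against the LiteLLM documentation:
+
+ ```python
+ import from byllm { Model }
+
+ # OpenAI and Gemini ids are taken from the examples earlier in this README.
+ glob gpt = Model(model_name="gpt-4o");
+ glob gemini = Model(model_name="gemini/gemini-2.0-flash");
+ # Assumed Anthropic id; verify the exact name in the LiteLLM docs.
+ glob claude = Model(model_name="claude-3-5-sonnet-20240620");
+
+ def summarize(text: str) -> str by gemini();
+ ```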
174
+ ## Cite our research
175
+
176
+
177
+ > Jayanaka L. Dantanarayana, Yiping Kang, Kugesan Sivasothynathan, Christopher Clarke, Baichuan Li, Savini Kashmira, Krisztian Flautner, Lingjia Tang, and Jason Mars. 2025. MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming. Proc. ACM Program. Lang. 9, OOPSLA2, Article 314 (October 2025), 29 pages. https://doi.org/10.1145/3763092
181
+
182
+
183
+ ## Jaseci Contributors
184
+
185
+ <a href="https://github.com/jaseci-labs/jaseci/graphs/contributors">
186
+ <img src="https://contrib.rocks/image?repo=jaseci-labs/jaseci" />
187
+ </a>
byllm-0.4.3/README.md ADDED
@@ -0,0 +1,154 @@
1
+ <div align="center">
2
+ <img src="../docs/docs/assets/byLLM_name_logo.png" height="150">
3
+
4
+ [About byLLM] | [Get started] | [Usage docs] | [Research Paper]
5
+ </div>
6
+
7
+ [About byLLM]: https://www.jac-lang.org/learn/jac-byllm/with_llm/
8
+ [Get started]: https://www.jac-lang.org/learn/jac-byllm/quickstart/
9
+ [Usage docs]: https://www.jac-lang.org/learn/jac-byllm/usage/
10
+ [Research Paper]: https://arxiv.org/abs/2405.08965
11
+
12
+ # byLLM : Less Prompting! More Coding!
13
+
14
+ [![PyPI version](https://img.shields.io/pypi/v/byllm.svg)](https://pypi.org/project/byllm/) [![tests](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml/badge.svg?branch=main)](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml)
15
+
16
+ byLLM is an innovative AI integration framework built for the Jaseci ecosystem, implementing the cutting-edge Meaning Typed Programming (MTP) paradigm. MTP revolutionizes AI integration by embedding prompt engineering directly into code semantics, making AI interactions more natural and maintainable. While primarily designed to complement the Jac programming language, byLLM also provides a powerful Python library interface.
17
+
18
+ Installation is simple via PyPI:
19
+
20
+ ```bash
21
+ pip install byllm
22
+ ```
23
+
24
+ ## Basic Example
25
+
26
+ Consider building an application that translates English into other languages using an LLM. It can be built as simply as this:
27
+
28
+ ```python
29
+ import from byllm { Model }
30
+
31
+ glob llm = Model(model_name="gpt-4o");
32
+
33
+ def translate_to(language: str, phrase: str) -> str by llm();
34
+
35
+ with entry {
36
+ output = translate_to(language="Welsh", phrase="Hello world");
37
+ print(output);
38
+ }
39
+ ```
40
+
41
+ This simple piece of code replaces traditional prompt engineering without introducing additional complexity.
42
+
43
+ ## Power of Types with LLMs
44
+
45
+ Consider a program that detects the personality type of a historical figure from their name. It can be built so that the LLM picks from an enum and the output strictly adheres to that type.
46
+
47
+ ```python
48
+ import from byllm { Model }
49
+ glob llm = Model(model_name="gemini/gemini-2.0-flash");
50
+
51
+ enum Personality {
52
+ INTROVERT, EXTROVERT, AMBIVERT
53
+ }
54
+
55
+ def get_personality(name: str) -> Personality by llm();
56
+
57
+ with entry {
58
+ name = "Albert Einstein";
59
+ result = get_personality(name);
60
+ print(f"{result} personality detected for {name}");
61
+ }
62
+ ```
63
+
64
+ > Similarly, custom types can be used as output types which force the LLM to adhere to the specified type and produce a valid result.
65
+
66
+ ## Control! Control! Control!
67
+
68
+ Even though byLLM eliminates prompt engineering entirely, it still offers specific ways to enrich code semantics through **docstrings** and **semstrings**.
69
+
70
+ ```python
71
+ """Represents the personal record of a person"""
72
+ obj Person {
73
+ has name: str;
74
+ has dob: str;
75
+ has ssn: str;
76
+ }
77
+
78
+ sem Person.name = "Full name of the person";
79
+ sem Person.dob = "Date of Birth";
80
+ sem Person.ssn = "Last four digits of the Social Security Number of a person";
81
+
82
+ """Calculate eligibility for various services based on person's data."""
83
+ def check_eligibility(person: Person, service_type: str) -> bool by llm();
84
+
85
+ ```
86
+
87
+ Docstrings naturally enhance the semantics of their associated code constructs, while the `sem` keyword provides an elegant way to enrich the meaning of class attributes and function arguments. Our research shows these concise semantic strings are more effective than traditional multi-line prompts.
88
+
89
+ ## How well does byLLM work?
90
+
91
+ byLLM is built on the underlying principle of Meaning Typed Programming (MTP). We evaluated it against two comparable AI integration frameworks for Python, DSPy and LMQL: byLLM achieves significant performance gains over LMQL and on-par or better accuracy than DSPy, at lower cost and with faster runtime.
92
+
93
+ <div align="center">
94
+ <img src="../docs/docs/assets/correctness_comparison.png" alt="Correctness Comparison" width="600" style="max-width: 100%;">
95
+ <br>
96
+ <em>Figure: Correctness comparison of byLLM with DSPy and LMQL on benchmark tasks.</em>
97
+ </div>
98
+
99
+ **📚 Full Documentation**: [Jac byLLM Documentation](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
100
+
101
+ **🎮 Complete Examples**:
102
+ - [Fantasy Trading Game](https://www.jac-lang.org/learn/examples/mtp_examples/fantasy_trading_game/) - Interactive RPG with AI-generated characters
103
+ - [RPG Level Generator](https://www.jac-lang.org/learn/examples/mtp_examples/rpg_game/) - AI-powered game level creation
104
+ - [RAG Chatbot Tutorial](https://www.jac-lang.org/learn/examples/rag_chatbot/Overview/) - Building chatbots with document retrieval
105
+
106
+ **🔬 Research**: The research behind MTP is available on [arXiv](https://arxiv.org/abs/2405.08965) and has been accepted at OOPSLA 2025.
107
+
108
+ ## Quick Links
109
+
110
+ - [Getting Started Guide](https://www.jac-lang.org/learn/jac-byllm/quickstart/)
111
+ - [Jac Language Documentation](https://www.jac-lang.org/)
112
+ - [GitHub Repository](https://github.com/jaseci-labs/jaseci)
113
+
114
+ ## Contributing
115
+
116
+ We welcome contributions to byLLM! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
117
+
118
+ Areas we actively seek contributions:
119
+ - 🐛 Bug fixes and improvements
120
+ - 📚 Documentation enhancements
121
+ - ✨ New examples and tutorials
122
+ - 🧪 Test cases and benchmarks
123
+
124
+ Please see our [Contributing Guide](https://www.jac-lang.org/internals/contrib/) for detailed instructions.
125
+
126
+ If you find a bug or have a feature request, please [open an issue](https://github.com/jaseci-labs/jaseci/issues/new/choose).
127
+
128
+ ## Community
129
+
130
+ Join our vibrant community:
131
+ - [Discord Server](https://discord.gg/6j3QNdtcN6) - Chat with the team and community
132
+
133
+ ## License
134
+
135
+ This project is licensed under the MIT License.
136
+
137
+ ### Third-Party Dependencies
138
+
139
+ byLLM integrates with various LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM.
140
+
141
+ ## Cite our research
142
+
143
+
144
+ > Jayanaka L. Dantanarayana, Yiping Kang, Kugesan Sivasothynathan, Christopher Clarke, Baichuan Li, Savini Kashmira, Krisztian Flautner, Lingjia Tang, and Jason Mars. 2025. MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming. Proc. ACM Program. Lang. 9, OOPSLA2, Article 314 (October 2025), 29 pages. https://doi.org/10.1145/3763092
148
+
149
+
150
+ ## Jaseci Contributors
151
+
152
+ <a href="https://github.com/jaseci-labs/jaseci/graphs/contributors">
153
+ <img src="https://contrib.rocks/image?repo=jaseci-labs/jaseci" />
154
+ </a>
@@ -6,7 +6,7 @@ and to validate instances against these schemas.
6
6
 
7
7
  from dataclasses import is_dataclass
8
8
  from enum import Enum
9
- from types import FunctionType, UnionType
9
+ from types import FunctionType, MethodType, UnionType
10
10
  from typing import Callable, Union, get_args, get_origin, get_type_hints
11
11
 
12
12
  from pydantic import TypeAdapter
@@ -117,7 +117,10 @@ def _type_to_schema(ty: type, title: str = "", desc: str = "") -> dict:
117
117
  f"Enum {ty.__name__} has mixed types. Not supported for schema generation."
118
118
  )
119
119
  enum_type = enum_type or int
120
- enum_desc = f"\nThe value *should* be one in this list: {enum_values}"
120
+
121
+ enum_desc = f"\nThe value *should* be one in this list: {enum_values} where"
122
+ enum_desc += " the names are [" + ", ".join([e.name for e in ty]) + "]."
123
+
121
124
  if enum_type not in (int, str):
122
125
  raise ValueError(
123
126
  f"Enum {ty.__name__} has unsupported type {enum_type}. "
@@ -129,7 +132,7 @@ def _type_to_schema(ty: type, title: str = "", desc: str = "") -> dict:
129
132
  }
130
133
 
131
134
  # Handle functions
132
- if isinstance(ty, FunctionType):
135
+ if isinstance(ty, (FunctionType, MethodType)):
133
136
  hints = get_type_hints(ty)
134
137
  hints.pop("return", None)
135
138
  params = {
@@ -8,6 +8,7 @@ tool calls, and tools that can be used in LLM requests and responses.
8
8
  import base64
9
9
  import mimetypes
10
10
  import os
11
+ from contextlib import suppress
11
12
  from dataclasses import dataclass
12
13
  from enum import StrEnum
13
14
  from io import BytesIO
@@ -90,14 +91,15 @@ class Tool:
90
91
 
91
92
  def __post_init__(self) -> None:
92
93
  """Post-initialization to validate the function."""
93
- self.func.__annotations__ = get_type_hints(self.func)
94
+ annotations = get_type_hints(self.func)
95
+ with suppress(Exception):
96
+ self.func.__annotations__ = annotations
97
+
94
98
  self.description = Tool.get_func_description(self.func)
95
99
  if hasattr(self.func, "_jac_semstr_inner"):
96
100
  self.params_desc = self.func._jac_semstr_inner # type: ignore
97
101
  else:
98
- self.params_desc = {
99
- name: str(type) for name, type in self.func.__annotations__.items()
100
- }
102
+ self.params_desc = {name: str(type) for name, type in annotations.items()}
101
103
 
102
104
  def __call__(self, *args: list, **kwargs: dict) -> object:
103
105
  """Call the tool function with the provided arguments."""
@@ -152,7 +154,13 @@ class Tool:
152
154
  def parse_arguments(self, args_json: dict) -> dict:
153
155
  """Parse the arguments from JSON to the function's expected format."""
154
156
  args = {}
155
- annotations = self.func.__annotations__
157
+
158
+ annotations: dict = {}
159
+ try:
160
+ annotations = self.func.__annotations__
161
+ except AttributeError:
162
+ annotations = get_type_hints(self.func)
163
+
156
164
  for arg_name, arg_json in args_json.items():
157
165
  if arg_type := annotations.get(arg_name):
158
166
  args[arg_name] = TypeAdapter(arg_type).validate_python(arg_json)
@@ -1,6 +1,6 @@
1
1
  [tool.poetry]
2
2
  name = "byllm"
3
- version = "0.4.1.post1"
3
+ version = "0.4.3"
4
4
  description = "byLLM Provides Easy to use APIs for different LLM Providers to be used with Jaseci's Jaclang Programming Language."
5
5
  authors = ["Jason Mars <jason@jaseci.org>"]
6
6
  maintainers = ["Jason Mars <jason@jaseci.org>"]
@@ -9,7 +9,7 @@ readme = "README.md"
9
9
  keywords = ["llm", "jaclang", "jaseci", "byLLM"]
10
10
 
11
11
  [tool.poetry.dependencies]
12
- jaclang = ">=0.8.6"
12
+ jaclang = ">=0.8.8"
13
13
  litellm = ">=1.75.5.post1"
14
14
  loguru = "~0.7.2"
15
15
  pillow = "~10.4.0"
@@ -1,102 +0,0 @@
1
- Metadata-Version: 2.3
2
- Name: byllm
3
- Version: 0.4.1.post1
4
- Summary: byLLM Provides Easy to use APIs for different LLM Providers to be used with Jaseci's Jaclang Programming Language.
5
- License: MIT
6
- Keywords: llm,jaclang,jaseci,byLLM
7
- Author: Jason Mars
8
- Author-email: jason@jaseci.org
9
- Maintainer: Jason Mars
10
- Maintainer-email: jason@jaseci.org
11
- Classifier: License :: OSI Approved :: MIT License
12
- Classifier: Programming Language :: Python :: 2
13
- Classifier: Programming Language :: Python :: 2.7
14
- Classifier: Programming Language :: Python :: 3
15
- Classifier: Programming Language :: Python :: 3.4
16
- Classifier: Programming Language :: Python :: 3.5
17
- Classifier: Programming Language :: Python :: 3.6
18
- Classifier: Programming Language :: Python :: 3.7
19
- Classifier: Programming Language :: Python :: 3.8
20
- Classifier: Programming Language :: Python :: 3.9
21
- Classifier: Programming Language :: Python :: 3.10
22
- Classifier: Programming Language :: Python :: 3.11
23
- Classifier: Programming Language :: Python :: 3.12
24
- Classifier: Programming Language :: Python :: 3.13
25
- Provides-Extra: tools
26
- Provides-Extra: video
27
- Requires-Dist: jaclang (>=0.8.6)
28
- Requires-Dist: litellm (>=1.75.5.post1)
29
- Requires-Dist: loguru (>=0.7.2,<0.8.0)
30
- Requires-Dist: pillow (>=10.4.0,<10.5.0)
31
- Description-Content-Type: text/markdown
32
-
33
- # byLLM - AI Integration Framework for Jac-lang
34
-
35
- [![PyPI version](https://img.shields.io/pypi/v/mtllm.svg)](https://pypi.org/project/mtllm/) [![tests](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml/badge.svg?branch=main)](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml)
36
-
37
- Meaning Typed Programming (MTP) is a programming paradigm for AI integration where prompt engineering is hidden through code semantics. byLLM is the plugin built, exploring this hypothesis. byLLM is built as a plugin to the Jaseci ecosystem. This plugin can be installed as a PyPI package.
38
-
39
- ```bash
40
- pip install byllm
41
- ```
42
-
43
- ## Basic Example
44
-
45
- A basic usecase of MTP can be demonstrated as follows:
46
-
47
- ```python
48
- import from byllm {Model}
49
-
50
- glob llm = Model(model_name="openai\gpt-4o");
51
-
52
- def translate_to(language: str, phrase: str) -> str by llm();
53
-
54
- with entry {
55
- output = translate_to(language="Welsh", phrase="Hello world");
56
- print(output);
57
- }
58
- ```
59
-
60
- ## AI-Powered Object Generation
61
-
62
- ```python
63
- import from byllm {Model}
64
-
65
- glob llm = Model(model_name="gpt-4o");
66
-
67
- obj Task {
68
- has description: str,
69
- priority: int,
70
- estimated_time: int;
71
- }
72
-
73
- sem Task.priority = "priority between 0 (highest priority) and 10(lowest priority)";
74
-
75
- def create_task(description: str, previous_tasks: list[Task]) -> Task by llm();
76
-
77
- with entry {
78
- tasks = [];
79
- new_task = create_task("Write documentation for the API", tasks);
80
- print(f"Task: {new_task.description}, Priority: {new_task.priority}, Time: {new_task.estimated_time}min");
81
- }
82
- ```
83
-
84
- The `by` abstraction allows to automate semantic extraction from existing code semantics, eliminating manual prompt engineering while leveraging type annotations for structured AI responses.
85
-
86
- ## Documentation and Examples
87
-
88
- **📚 Full Documentation**: [Jac byLLM Documentation](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
89
-
90
- **🎮 Complete Examples**:
91
- - [Fantasy Trading Game](https://www.jac-lang.org/learn/examples/mtp_examples/fantasy_trading_game/) - Interactive RPG with AI-generated characters
92
- - [RPG Level Generator](https://www.jac-lang.org/learn/examples/mtp_examples/rpg_game/) - AI-powered game level creation
93
- - [RAG Chatbot Tutorial](https://www.jac-lang.org/learn/examples/rag_chatbot/Overview/) - Building chatbots with document retrieval
94
-
95
- **🔬 Research**: The research journey of MTP is available on [Arxiv](https://arxiv.org/abs/2405.08965).
96
-
97
- ## Quick Links
98
-
99
- - [Getting Started Guide](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
100
- - [Model Configuration](https://www.jac-lang.org/learn/jac-byllm/model_declaration/)
101
- - [Jac Language Documentation](https://www.jac-lang.org/)
102
- - [GitHub Repository](https://github.com/jaseci-labs/jaseci)
@@ -1,70 +0,0 @@
1
- # byLLM - AI Integration Framework for Jac-lang
2
-
3
- [![PyPI version](https://img.shields.io/pypi/v/mtllm.svg)](https://pypi.org/project/mtllm/) [![tests](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml/badge.svg?branch=main)](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml)
4
-
5
- Meaning Typed Programming (MTP) is a programming paradigm for AI integration where prompt engineering is hidden through code semantics. byLLM is the plugin built, exploring this hypothesis. byLLM is built as a plugin to the Jaseci ecosystem. This plugin can be installed as a PyPI package.
6
-
7
- ```bash
8
- pip install byllm
9
- ```
10
-
11
- ## Basic Example
12
-
13
- A basic usecase of MTP can be demonstrated as follows:
14
-
15
- ```python
16
- import from byllm {Model}
17
-
18
- glob llm = Model(model_name="openai\gpt-4o");
19
-
20
- def translate_to(language: str, phrase: str) -> str by llm();
21
-
22
- with entry {
23
- output = translate_to(language="Welsh", phrase="Hello world");
24
- print(output);
25
- }
26
- ```
27
-
28
- ## AI-Powered Object Generation
29
-
30
- ```python
31
- import from byllm {Model}
32
-
33
- glob llm = Model(model_name="gpt-4o");
34
-
35
- obj Task {
36
- has description: str,
37
- priority: int,
38
- estimated_time: int;
39
- }
40
-
41
- sem Task.priority = "priority between 0 (highest priority) and 10(lowest priority)";
42
-
43
- def create_task(description: str, previous_tasks: list[Task]) -> Task by llm();
44
-
45
- with entry {
46
- tasks = [];
47
- new_task = create_task("Write documentation for the API", tasks);
48
- print(f"Task: {new_task.description}, Priority: {new_task.priority}, Time: {new_task.estimated_time}min");
49
- }
50
- ```
51
-
52
- The `by` abstraction allows to automate semantic extraction from existing code semantics, eliminating manual prompt engineering while leveraging type annotations for structured AI responses.
53
-
54
- ## Documentation and Examples
55
-
56
- **📚 Full Documentation**: [Jac byLLM Documentation](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
57
-
58
- **🎮 Complete Examples**:
59
- - [Fantasy Trading Game](https://www.jac-lang.org/learn/examples/mtp_examples/fantasy_trading_game/) - Interactive RPG with AI-generated characters
60
- - [RPG Level Generator](https://www.jac-lang.org/learn/examples/mtp_examples/rpg_game/) - AI-powered game level creation
61
- - [RAG Chatbot Tutorial](https://www.jac-lang.org/learn/examples/rag_chatbot/Overview/) - Building chatbots with document retrieval
62
-
63
- **🔬 Research**: The research journey of MTP is available on [Arxiv](https://arxiv.org/abs/2405.08965).
64
-
65
- ## Quick Links
66
-
67
- - [Getting Started Guide](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
68
- - [Model Configuration](https://www.jac-lang.org/learn/jac-byllm/model_declaration/)
69
- - [Jac Language Documentation](https://www.jac-lang.org/)
70
- - [GitHub Repository](https://github.com/jaseci-labs/jaseci)