@agentica/core 0.12.1 → 0.12.2-dev.20250314

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (77)
  1. package/LICENSE +21 -21
  2. package/README.md +461 -461
  3. package/package.json +1 -1
  4. package/prompts/cancel.md +4 -4
  5. package/prompts/common.md +2 -2
  6. package/prompts/describe.md +6 -6
  7. package/prompts/execute.md +6 -6
  8. package/prompts/initialize.md +2 -2
  9. package/prompts/select.md +6 -6
  10. package/src/Agentica.ts +359 -359
  11. package/src/chatgpt/ChatGptAgent.ts +76 -76
  12. package/src/chatgpt/ChatGptCallFunctionAgent.ts +466 -466
  13. package/src/chatgpt/ChatGptCancelFunctionAgent.ts +280 -280
  14. package/src/chatgpt/ChatGptCompletionMessageUtil.ts +166 -166
  15. package/src/chatgpt/ChatGptDescribeFunctionAgent.ts +122 -122
  16. package/src/chatgpt/ChatGptHistoryDecoder.ts +88 -88
  17. package/src/chatgpt/ChatGptInitializeFunctionAgent.ts +96 -96
  18. package/src/chatgpt/ChatGptSelectFunctionAgent.ts +311 -311
  19. package/src/chatgpt/ChatGptUsageAggregator.ts +62 -62
  20. package/src/context/AgenticaCancelPrompt.ts +32 -32
  21. package/src/context/AgenticaClassOperation.ts +23 -23
  22. package/src/context/AgenticaContext.ts +130 -130
  23. package/src/context/AgenticaHttpOperation.ts +27 -27
  24. package/src/context/AgenticaOperation.ts +66 -66
  25. package/src/context/AgenticaOperationBase.ts +57 -57
  26. package/src/context/AgenticaOperationCollection.ts +52 -52
  27. package/src/context/AgenticaOperationSelection.ts +27 -27
  28. package/src/context/AgenticaTokenUsage.ts +170 -170
  29. package/src/context/internal/AgenticaTokenUsageAggregator.ts +66 -66
  30. package/src/context/internal/__IChatCancelFunctionsApplication.ts +23 -23
  31. package/src/context/internal/__IChatFunctionReference.ts +21 -21
  32. package/src/context/internal/__IChatInitialApplication.ts +15 -15
  33. package/src/context/internal/__IChatSelectFunctionsApplication.ts +24 -24
  34. package/src/events/AgenticaCallEvent.ts +36 -36
  35. package/src/events/AgenticaCancelEvent.ts +28 -28
  36. package/src/events/AgenticaDescribeEvent.ts +66 -66
  37. package/src/events/AgenticaEvent.ts +36 -36
  38. package/src/events/AgenticaEventBase.ts +7 -7
  39. package/src/events/AgenticaEventSource.ts +6 -6
  40. package/src/events/AgenticaExecuteEvent.ts +50 -50
  41. package/src/events/AgenticaInitializeEvent.ts +14 -14
  42. package/src/events/AgenticaRequestEvent.ts +45 -45
  43. package/src/events/AgenticaResponseEvent.ts +48 -48
  44. package/src/events/AgenticaSelectEvent.ts +37 -37
  45. package/src/events/AgenticaTextEvent.ts +62 -62
  46. package/src/functional/assertHttpLlmApplication.ts +55 -55
  47. package/src/functional/validateHttpLlmApplication.ts +66 -66
  48. package/src/index.ts +44 -44
  49. package/src/internal/AgenticaConstant.ts +4 -4
  50. package/src/internal/AgenticaDefaultPrompt.ts +43 -43
  51. package/src/internal/AgenticaOperationComposer.ts +96 -96
  52. package/src/internal/ByteArrayUtil.ts +5 -5
  53. package/src/internal/MPSCUtil.ts +111 -111
  54. package/src/internal/MathUtil.ts +3 -3
  55. package/src/internal/Singleton.ts +22 -22
  56. package/src/internal/StreamUtil.ts +64 -64
  57. package/src/internal/__map_take.ts +15 -15
  58. package/src/json/IAgenticaEventJson.ts +178 -178
  59. package/src/json/IAgenticaOperationJson.ts +36 -36
  60. package/src/json/IAgenticaOperationSelectionJson.ts +19 -19
  61. package/src/json/IAgenticaPromptJson.ts +130 -130
  62. package/src/json/IAgenticaTokenUsageJson.ts +107 -107
  63. package/src/prompts/AgenticaCancelPrompt.ts +32 -32
  64. package/src/prompts/AgenticaDescribePrompt.ts +41 -41
  65. package/src/prompts/AgenticaExecutePrompt.ts +52 -52
  66. package/src/prompts/AgenticaPrompt.ts +14 -14
  67. package/src/prompts/AgenticaPromptBase.ts +27 -27
  68. package/src/prompts/AgenticaSelectPrompt.ts +32 -32
  69. package/src/prompts/AgenticaTextPrompt.ts +31 -31
  70. package/src/structures/IAgenticaConfig.ts +123 -123
  71. package/src/structures/IAgenticaController.ts +133 -133
  72. package/src/structures/IAgenticaExecutor.ts +157 -157
  73. package/src/structures/IAgenticaProps.ts +69 -69
  74. package/src/structures/IAgenticaSystemPrompt.ts +125 -125
  75. package/src/structures/IAgenticaVendor.ts +39 -39
  76. package/src/transformers/AgenticaEventTransformer.ts +165 -165
  77. package/src/transformers/AgenticaPromptTransformer.ts +134 -134
package/README.md CHANGED
@@ -1,461 +1,461 @@
1
- # `@agentica/core`
2
- ![agentica-conceptual-diagram](https://github.com/user-attachments/assets/d7ebbd1f-04d3-4b0d-9e2a-234e29dd6c57)
3
-
4
- [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/wrtnlabs/agentica/blob/master/LICENSE)
5
- [![npm version](https://img.shields.io/npm/v/@agentica/core.svg)](https://www.npmjs.com/package/@agentica/core)
6
- [![Downloads](https://img.shields.io/npm/dm/@agentica/core.svg)](https://www.npmjs.com/package/@agentica/core)
7
- [![Build Status](https://github.com/wrtnlabs/agentica/workflows/build/badge.svg)](https://github.com/wrtnlabs/agentica/actions?query=workflow%3Abuild)
8
-
9
- The simplest **Agentic AI** library, specialized in **LLM Function Calling**.
10
-
11
- Don't compose a complicated agent graph or workflow; just deliver **Swagger/OpenAPI** documents or **TypeScript class** types to `@agentica/core`. Then `@agentica/core` will do everything through function calling.
12
-
13
- Look at the demonstration below to see how easy and powerful `@agentica/core` is.
14
-
15
- ```typescript
16
- import { Agentica } from "@agentica/core";
17
- import typia from "typia";
18
-
19
- const agent = new Agentica({
20
- controllers: [
21
- await fetch(
22
- "https://shopping-be.wrtn.ai/editor/swagger.json",
23
- ).then(r => r.json()),
24
- typia.llm.application<ShoppingCounselor>(),
25
- typia.llm.application<ShoppingPolicy>(),
26
- typia.llm.application<ShoppingSearchRag>(),
27
- ],
28
- });
29
- await agent.conversate("I wanna buy MacBook Pro");
30
- ```
31
-
32
- > https://github.com/user-attachments/assets/01604b53-aca4-41cb-91aa-3faf63549ea6
33
- >
34
- > Demonstration video of Shopping AI Chatbot
35
-
36
-
37
-
38
-
39
- ## How to Use
40
- ### Setup
41
- ```bash
42
- npm install @agentica/core @samchon/openapi typia
43
- npx typia setup
44
- ```
45
-
46
- Install not only `@agentica/core`, but also [`@samchon/openapi`](https://github.com/samchon/openapi) and [`typia`](https://github.com/samchon/typia).
47
-
48
- `@samchon/openapi` is an OpenAPI specification library which can convert a Swagger/OpenAPI document to an LLM function calling schema. And `typia` is a transformer (compiler) library which can compose an LLM function calling schema from a TypeScript class type.
49
-
50
- By the way, as `typia` is a transformer library analyzing TypeScript source code at the compilation level, it needs the additional setup command `npx typia setup`. Also, if you're using a non-standard TypeScript compiler (not `tsc`) or developing the agent in a frontend environment, you have to set up [`@ryoppippi/unplugin-typia`](https://typia.io/docs/setup/#unplugin-typia) too.
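For reference, `npx typia setup` registers typia's transform plugin in your `tsconfig.json`. A minimal sketch of the resulting entry (exact fields may vary by typia version; note typia requires `strict` mode):

```json
{
  "compilerOptions": {
    "strict": true,
    "plugins": [
      { "transform": "typia/lib/transform" }
    ]
  }
}
```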
51
-
52
- ### Chat with Backend Server
53
- ```typescript
54
- import { IHttpLlmApplication } from "@samchon/openapi";
55
- import { Agentica, validateHttpLlmApplication } from "@agentica/core";
56
- import OpenAI from "openai";
57
- import { IValidation } from "typia";
58
-
59
- const main = async (): Promise<void> => {
60
- // LOAD SWAGGER DOCUMENT, AND CONVERT TO LLM APPLICATION SCHEMA
61
- const application: IValidation<IHttpLlmApplication<"chatgpt">> =
62
- validateHttpLlmApplication({
63
- model: "chatgpt",
64
- document: await fetch("https://shopping-be.wrtn.ai/editor/swagger.json").then(
65
- (r) => r.json()
66
- ),
67
- });
68
- if (application.success === false) {
69
- console.error(application.errors);
70
- throw new Error("Type error on the target swagger document");
71
- }
72
-
73
- // CREATE AN AGENT WITH THE APPLICATION
74
- const agent: Agentica<"chatgpt"> = new Agentica({
75
- model: "chatgpt",
76
- vendor: {
77
- api: new OpenAI({
78
- apiKey: "YOUR_OPENAI_API_KEY",
79
- }),
80
- model: "gpt-4o-mini",
81
- },
82
- controllers: [
83
- {
84
- protocol: "http",
85
- name: "shopping",
86
- application: application.data,
87
- connection: {
88
- host: "https://shopping-be.wrtn.ai",
89
- },
90
- },
91
- ],
92
- config: {
93
- locale: "en-US",
94
- },
95
- });
96
-
97
- // ADD EVENT LISTENERS
98
- agent.on("select", async (select) => {
99
- console.log("selected function", select.operation.function.name);
100
- });
101
- agent.on("execute", async (execute) => {
102
- console.log("execute function", {
103
- function: execute.operation.function.name,
104
- arguments: execute.arguments,
105
- value: execute.value,
106
- });
107
- });
108
-
109
- // CONVERSATE TO AI CHATBOT
110
- await agent.conversate("What can you do?");
111
- };
112
- main().catch(console.error);
113
- ```
114
-
115
- Just load your swagger document and put it into `@agentica/core`.
116
-
117
- Then you can start a conversation with your backend server, and its API functions will be called automatically. The AI chatbot analyzes your conversation and executes the proper API functions through the LLM (Large Language Model) function calling feature.
118
-
119
- From now on, every backend developer is also an AI developer.
120
-
121
- ### Chat with TypeScript Class
122
- ```typescript
123
- import { Agentica } from "@agentica/core";
124
- import typia, { tags } from "typia";
125
- import OpenAI from "openai";
126
-
127
- class BbsArticleService {
128
- /**
129
- * Create a new article.
130
- *
131
- * Writes a new article and archives it into the DB.
132
- *
133
- * @param props Properties of create function
134
- * @returns Newly created article
135
- */
136
- public async create(props: {
137
- /**
138
- * Information of the article to create
139
- */
140
- input: IBbsArticle.ICreate;
141
- }): Promise<IBbsArticle>;
142
-
143
- /**
144
- * Update an article.
145
- *
146
- * Updates an article with new content.
147
- *
148
- * @param props Properties of update function
149
- * @param input New content to update
150
- */
151
- public async update(props: {
152
- /**
153
- * Target article's {@link IBbsArticle.id}.
154
- */
155
- id: string & tags.Format<"uuid">;
156
-
157
- /**
158
- * New content to update.
159
- */
160
- input: IBbsArticle.IUpdate;
161
- }): Promise<void>;
162
- }
163
-
164
- const main = async (): Promise<void> => {
165
- const api: OpenAI = new OpenAI({
166
- apiKey: "YOUR_OPENAI_API_KEY",
167
- });
168
- const agent: Agentica<"chatgpt"> = new Agentica({
169
- model: "chatgpt",
170
- vendor: {
171
- api: new OpenAI({
172
- apiKey: "YOUR_OPENAI_API_KEY",
173
- }),
174
- model: "gpt-4o-mini",
175
- },
176
- controllers: [
177
- {
178
- protocol: "class",
179
- name: "vectorStore",
180
- application: typia.llm.application<
181
- BbsArticleService,
182
- "chatgpt"
183
- >(),
184
- execute: new BbsArticleService(),
185
- },
186
- ],
187
- });
188
- await agent.conversate("I wanna write an article.");
189
- };
190
- main().catch(console.error);
191
- ```
192
-
193
- You can also chat with a TypeScript class.
194
-
195
- Just deliver the TypeScript type to `@agentica/core` and start a conversation. Then `@agentica/core` will call the proper class methods by analyzing your conversation through the LLM function calling feature.
196
-
197
- From now on, every TypeScript class you've developed can become an AI chatbot.
198
-
199
- ### Multi Agent Orchestration
200
- ```typescript
201
- import { Agentica } from "@agentica/core";
202
- import typia from "typia";
203
- import OpenAI from "openai";
204
-
205
- class OpenAIVectorStoreAgent {
206
- /**
207
- * Retrieve Vector DB with RAG.
208
- *
209
- * @param props Properties of Vector DB retrieval
210
- */
211
- public query(props: {
212
- /**
213
- * Keywords to look up.
214
- *
215
- * Put all the keywords you want to look up. However, keywords
216
- * should only be included in the core, and all ambiguous things
217
- * should be excluded to achieve accurate results.
218
- */
219
- keywords: string;
220
- }): Promise<IVectorStoreQueryResult>;
221
- }
222
-
223
- const main = async (): Promise<void> => {
224
- const api: OpenAI = new OpenAI({
225
- apiKey: "YOUR_OPENAI_API_KEY",
226
- });
227
- const agent: Agentica<"chatgpt"> = new Agentica({
228
- model: "chatgpt",
229
- vendor: {
230
- api: new OpenAI({
231
- apiKey: "YOUR_OPENAI_API_KEY",
232
- }),
233
- model: "gpt-4o-mini",
234
- },
235
- controllers: [
236
- {
237
- protocol: "class",
238
- name: "vectorStore",
239
- application: typia.llm.application<
240
- OpenAIVectorStoreAgent,
241
- "chatgpt"
242
- >(),
243
- execute: new OpenAIVectorStoreAgent({
244
- api,
245
- id: "YOUR_OPENAI_VECTOR_STORE_ID",
246
- }),
247
- },
248
- ],
249
- });
250
- await agent.conversate("I wanna research economic articles");
251
- };
252
- main().catch(console.error);
253
- ```
254
-
255
- With `@agentica/core`, you can implement multi-agent orchestration remarkably easily.
256
-
257
- Just develop a TypeScript class containing an agent feature like a Vector Store, and deliver its type to `@agentica/core` as above. `@agentica/core` will centralize and realize multi-agent orchestration by applying its LLM function calling strategy to the TypeScript class.
258
-
259
-
260
- ### If you want to drastically improve function selection speed
261
-
262
- Use the [@agentica/pg-vector-selector](../pg-vector-selector/README.md)
263
-
264
- Just initialize the selector and set it in the agent config.
265
- When using this adapter, you must also run the [connector-hive](https://github.com/wrtnlabs/connector-hive) server.
266
-
267
- ```typescript
268
- import { Agentica } from "@agentica/core";
269
- import { AgenticaPgVectorSelector } from "@agentica/pg-vector-selector";
270
-
271
- import OpenAI from "openai";
- import typia from "typia";
272
-
273
-
274
- // Initialize with connector-hive server
275
- const selectorExecute = AgenticaPgVectorSelector.boot<"chatgpt">(
276
- 'https://your-connector-hive-server.com'
277
- );
278
-
279
-
280
- const agent = new Agentica({
281
- model: "chatgpt",
282
- vendor: {
283
- model: "gpt-4o-mini",
284
- api: new OpenAI({
285
- apiKey: process.env.CHATGPT_API_KEY,
286
- }),
287
- },
288
- controllers: [
289
- await fetch(
290
- "https://shopping-be.wrtn.ai/editor/swagger.json",
291
- ).then(r => r.json()),
292
- typia.llm.application<ShoppingCounselor>(),
293
- typia.llm.application<ShoppingPolicy>(),
294
- typia.llm.application<ShoppingSearchRag>(),
295
- ],
296
- config: {
297
- executor: {
298
- select: selectorExecute,
299
- }
300
- }
301
- });
302
- await agent.conversate("I wanna buy MacBook Pro");
303
- ```
304
-
305
-
306
- ## Principles
307
- ### Agent Strategy
308
- ```mermaid
309
- sequenceDiagram
310
- actor User
311
- actor Agent
312
- participant Selector
313
- participant Caller
314
- participant Describer
315
- activate User
316
- User-->>Agent: Conversate:<br/>user says
317
- activate Agent
318
- Agent->>Selector: Deliver conversation text
319
- activate Selector
320
- deactivate User
321
- Note over Selector: Select or remove candidate functions
322
- alt No candidate
323
- Selector->>Agent: Talk like plain ChatGPT
324
- deactivate Selector
325
- Agent->>User: Conversate:<br/>agent says
326
- activate User
327
- deactivate User
328
- end
329
- deactivate Agent
330
- loop Candidate functions exist
331
- activate Agent
332
- Agent->>Caller: Deliver conversation text
333
- activate Caller
334
- alt Contexts are enough
335
- Note over Caller: Call fulfilled functions
336
- Caller->>Describer: Function call histories
337
- deactivate Caller
338
- activate Describer
339
- Describer->>Agent: Describe function calls
340
- deactivate Describer
341
- Agent->>User: Conversate:<br/>agent describes
342
- activate User
343
- deactivate User
344
- else Contexts are not enough
345
- break
346
- Caller->>Agent: Request more information
347
- end
348
- Agent->>User: Conversate:<br/>agent requests
349
- activate User
350
- deactivate User
351
- end
352
- deactivate Agent
353
- end
354
- ```
355
-
356
- When the user speaks, `@agentica/core` delivers the conversation text to the `selector` agent and lets it find (or cancel) candidate functions from the context. If the `selector` agent cannot find any candidate function to call, and no candidate function was previously selected either, the `selector` agent behaves just like plain ChatGPT.
357
-
358
- `@agentica/core` then enters a loop until the list of candidate functions becomes empty. In the loop, the `caller` agent attempts LLM function calling by analyzing the user's conversation text. If the context is sufficient to compose the arguments of the candidate functions, the `caller` agent actually calls the target functions and lets the `describer` agent explain the results. Otherwise, the `caller` agent requests more information from the user.
359
-
360
- This LLM (Large Language Model) function calling strategy, separating the `selector`, `caller`, and `describer` agents, is the key logic of `@agentica/core`.
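The selector/caller/describer loop above can be sketched in TypeScript. This is a simplified, self-contained illustration with stubbed agents, not the actual `@agentica/core` internals:

```typescript
// Simplified sketch of the selector -> caller -> describer loop.
// Every agent here is a stub; the real implementations call an LLM.
type Operation = { name: string; call: () => unknown };

interface Agents {
  select: (text: string) => Operation[]; // pick candidate functions from context
  call: (op: Operation) => { ok: boolean; value?: unknown }; // try function calling
  describe: (value: unknown) => string; // explain the call results to the user
}

function conversate(text: string, agents: Agents): string[] {
  const replies: string[] = [];
  const candidates: Operation[] = agents.select(text);
  if (candidates.length === 0)
    return ["(plain chat reply)"]; // no candidate: talk like plain ChatGPT

  // loop until every candidate function has been handled
  while (candidates.length > 0) {
    const op: Operation = candidates.shift()!;
    const result = agents.call(op);
    if (result.ok)
      replies.push(agents.describe(result.value)); // context was sufficient
    else
      replies.push(`Please provide more information for ${op.name}.`);
  }
  return replies;
}

// Usage with trivial stubs
const replies: string[] = conversate("I wanna buy MacBook Pro", {
  select: () => [{ name: "searchProducts", call: () => ["MacBook Pro"] }],
  call: (op) => ({ ok: true, value: op.call() }),
  describe: (value) => `Found: ${JSON.stringify(value)}`,
});
console.log(replies);
```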
361
-
362
- ### Validation Feedback
363
- ```typescript
364
- import { FunctionCall } from "pseudo";
365
- import { ILlmFunction, IValidation } from "typia";
366
-
367
- export const correctFunctionCall = (p: {
368
- call: FunctionCall;
369
- functions: Array<ILlmFunction<"chatgpt">>;
370
- retry: (reason: string, errors?: IValidation.IError[]) => Promise<unknown>;
371
- }): Promise<unknown> => {
372
- // FIND FUNCTION
373
- const func: ILlmFunction<"chatgpt"> | undefined =
374
- p.functions.find((f) => f.name === p.call.name);
375
- if (func === undefined) {
376
- // never happened in my experience
377
- return p.retry(
378
- "Unable to find the matched function name. Try it again.",
379
- );
380
- }
381
-
382
- // VALIDATE
383
- const result: IValidation<unknown> = func.validate(p.call.arguments);
384
- if (result.success === false) {
385
- // 1st trial: 50% (gpt-4o-mini in shopping mall chatbot)
386
- // 2nd trial with validation feedback: 99%
387
- // 3rd trial with validation feedback again: has never failed
388
- return p.retry(
389
- "Type errors are detected. Correct it through validation errors",
390
- {
391
- errors: result.errors,
392
- },
393
- );
394
- }
395
- return result.data;
396
- }
397
- ```
398
-
399
- Is LLM function calling perfect?
400
-
401
- The answer is no: LLM (Large Language Model) vendors like OpenAI make many type-level mistakes when composing the arguments of the target function. Even though an LLM function calling schema defines an `Array<string>` type, the LLM often fills it with just a `string` value.
402
-
403
- Therefore, when developing an LLM function calling agent, a validation feedback process is essential. If the LLM makes a type-level mistake while composing arguments, the agent must feed back the most detailed validation errors possible, and let the LLM retry the function call referencing those errors.
404
-
405
- For validation feedback, `@agentica/core` utilizes the [`typia.validate<T>()`](https://typia.io/docs/validators/validate) and [`typia.llm.application<Class, Model>()`](https://typia.io/docs/llm/application/#application) functions. They construct validation logic by analyzing TypeScript source code and types at the compilation level, making them more detailed and accurate than any other validator, as the table below shows.
406
-
407
- Through this validation feedback strategy combined with the `typia` runtime validator, `@agentica/core` has achieved nearly ideal LLM function calling. In my experience, OpenAI's `gpt-4o-mini` model constructs invalid function calling arguments on the first trial about 50% of the time. However, after correction through validation feedback with `typia`, the success rate soars to 99%, and I've never seen a failure after two rounds of validation feedback.
408
-
409
- Components | `typia` | `TypeBox` | `ajv` | `io-ts` | `zod` | `C.V.`
410
- -------------------------|--------|-----------|-------|---------|-------|------------------
411
- **Easy to use** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
412
- [Object (simple)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectSimple.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
413
- [Object (hierarchical)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectHierarchical.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
414
- [Object (recursive)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectRecursive.ts) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔
415
- [Object (union, implicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectUnionImplicit.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
416
- [Object (union, explicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectUnionExplicit.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌
417
- [Object (additional tags)](https://github.com/samchon/typia/#comment-tags) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
418
- [Object (template literal types)](https://github.com/samchon/typia/blob/master/test/src/structures/TemplateUnion.ts) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌
419
- [Object (dynamic properties)](https://github.com/samchon/typia/blob/master/test/src/structures/DynamicTemplate.ts) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌
420
- [Array (rest tuple)](https://github.com/samchon/typia/blob/master/test/src/structures/TupleRestAtomic.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
421
- [Array (hierarchical)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayHierarchical.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
422
- [Array (recursive)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursive.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌
423
- [Array (recursive, union)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursiveUnionExplicit.ts) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌
424
- [Array (R+U, implicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursiveUnionImplicit.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
425
- [Array (repeated)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRepeatedNullable.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
426
- [Array (repeated, union)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRepeatedUnionWithTuple.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
427
- [**Ultimate Union Type**](https://github.com/samchon/typia/blob/master/test/src/structures/UltimateUnion.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
428
-
429
- > `C.V.` means `class-validator`
430
-
431
- ### OpenAPI Specification
432
- ```mermaid
433
- flowchart
434
- subgraph "OpenAPI Specification"
435
- v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
436
- v30("OpenAPI v3.0") --upgrades--> emended
437
- v31("OpenAPI v3.1") --emends--> emended
438
- end
439
- subgraph "OpenAPI Generator"
440
- emended --normalizes--> migration[["Migration Schema"]]
441
- migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
442
- lfc --"OpenAI"--> chatgpt("ChatGPT")
443
- lfc --"Anthropic"--> claude("Claude")
444
- lfc --"Google"--> gemini("Gemini")
445
- lfc --"Meta"--> llama("Llama")
446
- end
447
- ```
448
-
449
- `@agentica/core` obtains LLM function calling schemas from both Swagger/OpenAPI documents and TypeScript class types. A TypeScript class type can be converted to an LLM function calling schema by the [`typia.llm.application<Class, Model>()`](https://typia.io/docs/llm/application#application) function. Then what about OpenAPI documents? How can a Swagger document become an LLM function calling schema?
450
-
451
- The secret is on the above diagram.
452
-
453
- The OpenAPI specification has three versions with different definitions, and even within the same version there are many ambiguous and duplicated expressions. To resolve these problems, [`@samchon/openapi`](https://github.com/samchon/openapi) transforms every OpenAPI document into an emended v3.1 specification, which removes every ambiguous and duplicated expression for clarity.
454
-
455
- `@samchon/openapi` then converts the emended v3.1 document into a migration schema that is close to the function structure. As the last step, the migration schema is transformed into a specific LLM vendor's function calling schema. This is how LLM function calling schemas are composed.
456
-
457
- > **Why not convert directly, instead of through an intermediate?**
458
- >
459
- > If I converted directly from each OpenAPI specification version to each LLM's function calling schema, the number of converters would grow as a Cartesian product. With the current models, that would be 12 converters (3 × 4).
460
- >
461
- > However, with an intermediate schema, the number of converters shrinks to a sum. With the current models, I only need to develop 7 converters (3 + 4), and this is why I've defined the intermediate specification. It is more economical.
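The arithmetic above generalizes: direct conversion needs `specs × models` converters, while an intermediate schema needs only `specs + models`. A trivial check:

```typescript
// Converter counts for S source specifications and M target LLM models
const specs: string[] = ["Swagger v2.0", "OpenAPI v3.0", "OpenAPI v3.1"];
const models: string[] = ["ChatGPT", "Claude", "Gemini", "Llama"];

const direct: number = specs.length * models.length; // one converter per (spec, model) pair
const intermediate: number = specs.length + models.length; // spec -> emended v3.1, migration -> model

console.log(direct, intermediate); // → 12 7
```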
1
+ # `@agentica/core`
2
+ ![agentica-conceptual-diagram](https://github.com/user-attachments/assets/d7ebbd1f-04d3-4b0d-9e2a-234e29dd6c57)
3
+
4
+ [![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/wrtnlabs/agentica/blob/master/LICENSE)
5
+ [![npm version](https://img.shields.io/npm/v/@agentica/core.svg)](https://www.npmjs.com/package/@agentica/core)
6
+ [![Downloads](https://img.shields.io/npm/dm/@agentica/core.svg)](https://www.npmjs.com/package/@agentica/core)
7
+ [![Build Status](https://github.com/wrtnlabs/agentica/workflows/build/badge.svg)](https://github.com/wrtnlabs/agentica/actions?query=workflow%3Abuild)
8
+
9
+ The simplest **Agentic AI** library, specialized in **LLM Function Calling**.
10
+
11
+ Don't compose complicate agent graph or workflow, but just deliver **Swagger/OpenAPI** documents or **TypeScript class** types linearly to the `@agentica/core`. Then `@agentica/core` will do everything with the function calling.
12
+
13
+ Look at the below demonstration, and feel how `@agentica/core` is easy and powerful.
14
+
15
+ ```typescript
16
+ import { Agentica } from "@agentica/core";
17
+ import typia from "typia";
18
+
19
+ const agent = new Agentica({
20
+ controllers: [
21
+ await fetch(
22
+ "https://shopping-be.wrtn.ai/editor/swagger.json",
23
+ ).then(r => r.json()),
24
+ typia.llm.application<ShoppingCounselor>(),
25
+ typia.llm.application<ShoppingPolicy>(),
26
+ typia.llm.application<ShoppingSearchRag>(),
27
+ ],
28
+ });
29
+ await agent.conversate("I wanna buy MacBook Pro");
30
+ ```
31
+
32
+ > https://github.com/user-attachments/assets/01604b53-aca4-41cb-91aa-3faf63549ea6
33
+ >
34
+ > Demonstration video of Shopping AI Chatbot
35
+
36
+
37
+
38
+
39
+ ## How to Use
40
+ ### Setup
41
+ ```bash
42
+ npm install @agentica/core @samchon/openapi typia
43
+ npx typia setup
44
+ ```
45
+
46
+ Install not only `@agentica/core`, but also [`@samchon/openapi`](https://github.com/samchon/openapi) and [`typia`](https://github.com/samchon/typia).
47
+
48
+ `@samchon/openapi` is an OpenAPI specification library which can convert Swagger/OpenAPI document to LLM function calling schema. And `typia` is a transformer (compiler) library which can compose LLM function calling schema from a TypeScript class type.
49
+
50
+ By the way, as `typia` is a transformer library analyzing TypeScript source code in the compilation level, it needs additional setup command `npx typia setup`. Also, if you're not using non-standard TypeScript compiler (not `tsc`) or developing the agent in the frontend environment, you have to setup [`@ryoppippi/unplugin-typia`](https://typia.io/docs/setup/#unplugin-typia) too.
51
+
52
+ ### Chat with Backend Server
53
+ ```typescript
54
+ import { IHttpLlmApplication } from "@samchon/openapi";
55
+ import { Agentica, validateHttpLlmApplication } from "@agentica/core";
56
+ import OpenAI from "openai";
57
+ import { IValidation } from "typia";
58
+
59
+ const main = async (): Promise<void> => {
60
+ // LOAD SWAGGER DOCUMENT, AND CONVERT TO LLM APPLICATION SCHEMA
61
+ const application: IValidation<IHttpLlmApplication<"chatgpt">> =
62
+ validateHttpLlmApplication({
63
+ model: "chatgpt",
64
+ document: await fetch("https://shopping-be.wrtn.ai/editor/swagger.json").then(
65
+ (r) => r.json()
66
+ ),
67
+ });
68
+ if (application.success === false) {
69
+ console.error(application.errors);
70
+ throw new Error("Type error on the target swagger document");
71
+ }
72
+
73
+ // CREATE AN AGENT WITH THE APPLICATION
74
+ const agent: Agentica<"chatgpt"> = new Agentica({
75
+ model: "chatgpt",
76
+ vendor: {
77
+ api: new OpenAI({
78
+ apiKey: "YOUR_OPENAI_API_KEY",
79
+ }),
80
+ model: "gpt-4o-mini",
81
+ },
82
+ controllers: [
83
+ {
84
+ protocol: "http",
85
+ name: "shopping",
86
+ application: application.data,
87
+ connection: {
88
+ host: "https://shopping-be.wrtn.ai",
89
+ },
90
+ },
91
+ ],
92
+ config: {
93
+ locale: "en-US",
94
+ },
95
+ });
96
+
97
+ // ADD EVENT LISTENERS
98
+ agent.on("select", async (select) => {
99
+ console.log("selected function", select.operation.function.name);
100
+ });
101
+ agent.on("execute", async (execute) => {
102
+ consoe.log("execute function", {
103
+ function: execute.operation.function.name,
104
+ arguments: execute.arguments,
105
+ value: execute.value,
106
+ });
107
+ });
108
+
109
+ // CONVERSATE TO AI CHATBOT
110
+ await agent.conversate("What you can do?");
111
+ };
112
+ main().catch(console.error);
113
+ ```
114
+
115
+ Just load your swagger document, and put it into the `@agentica/core`.
116
+
117
+ Then you can start conversation with your backend server, and the API functions of the backend server would be automatically called. AI chatbot will analyze your conversation texts, and executes proper API functions by the LLM (Large Language Model) function calling feature.
118
+
119
+ From now on, every backend developer is also an AI developer.
120
+
121
+ ### Chat with TypeScript Class
122
```typescript
import { Agentica } from "@agentica/core";
import typia, { tags } from "typia";
import OpenAI from "openai";

class BbsArticleService {
  /**
   * Create a new article.
   *
   * Writes a new article and archives it into the DB.
   *
   * @param props Properties of create function
   * @returns Newly created article
   */
  public async create(props: {
    /**
     * Information of the article to create
     */
    input: IBbsArticle.ICreate;
  }): Promise<IBbsArticle>;

  /**
   * Update an article.
   *
   * Updates an article with new content.
   *
   * @param props Properties of update function
   */
  public async update(props: {
    /**
     * Target article's {@link IBbsArticle.id}.
     */
    id: string & tags.Format<"uuid">;

    /**
     * New content to update.
     */
    input: IBbsArticle.IUpdate;
  }): Promise<void>;
}

const main = async (): Promise<void> => {
  const agent: Agentica<"chatgpt"> = new Agentica({
    model: "chatgpt",
    vendor: {
      api: new OpenAI({
        apiKey: "YOUR_OPENAI_API_KEY",
      }),
      model: "gpt-4o-mini",
    },
    controllers: [
      {
        protocol: "class",
        name: "bbsArticleService",
        application: typia.llm.application<
          BbsArticleService,
          "chatgpt"
        >(),
        execute: new BbsArticleService(),
      },
    ],
  });
  await agent.conversate("I wanna write an article.");
};
main().catch(console.error);
```

You can also chat with a TypeScript class.

Just deliver the TypeScript class type to `@agentica/core` and start a conversation. Then `@agentica/core` will call the proper class methods by analyzing your conversation with the LLM function calling feature.

From now on, every TypeScript class you've developed can become an AI chatbot.

### Multi Agent Orchestration
```typescript
import { Agentica } from "@agentica/core";
import typia from "typia";
import OpenAI from "openai";

class OpenAIVectorStoreAgent {
  public constructor(
    private readonly props: {
      api: OpenAI;
      id: string;
    },
  ) {}

  /**
   * Retrieve from the Vector DB with RAG.
   *
   * @param props Properties of Vector DB retrieval
   */
  public query(props: {
    /**
     * Keywords to look up.
     *
     * Put all the keywords you want to look up. However, keywords
     * should only include the core terms, and anything ambiguous
     * should be excluded to achieve accurate results.
     */
    keywords: string;
  }): Promise<IVectorStoreQueryResult>;
}

const main = async (): Promise<void> => {
  const api: OpenAI = new OpenAI({
    apiKey: "YOUR_OPENAI_API_KEY",
  });
  const agent: Agentica<"chatgpt"> = new Agentica({
    model: "chatgpt",
    vendor: {
      api,
      model: "gpt-4o-mini",
    },
    controllers: [
      {
        protocol: "class",
        name: "vectorStore",
        application: typia.llm.application<
          OpenAIVectorStoreAgent,
          "chatgpt"
        >(),
        execute: new OpenAIVectorStoreAgent({
          api,
          id: "YOUR_OPENAI_VECTOR_STORE_ID",
        }),
      },
    ],
  });
  await agent.conversate("I wanna research economic articles");
};
main().catch(console.error);
```

With `@agentica/core`, you can implement multi-agent orchestration with ease.

Just develop a TypeScript class containing an agent feature, such as a Vector Store, and deliver the TypeScript class type to `@agentica/core` as above. `@agentica/core` will then centralize the agents and realize multi-agent orchestration by applying the LLM function calling strategy to the TypeScript class.

### Drastically Improving Function Selection Speed

Use [`@agentica/pg-vector-selector`](../pg-vector-selector/README.md).

Just initialize it and set the config. Note that when using this adapter, you should also run a [connector-hive](https://github.com/wrtnlabs/connector-hive) server.

```typescript
import { Agentica } from "@agentica/core";
import { AgenticaPgVectorSelector } from "@agentica/pg-vector-selector";
import OpenAI from "openai";
import typia from "typia";

// Initialize with the connector-hive server
const selectorExecute = AgenticaPgVectorSelector.boot<"chatgpt">(
  "https://your-connector-hive-server.com",
);

const agent = new Agentica({
  model: "chatgpt",
  vendor: {
    model: "gpt-4o-mini",
    api: new OpenAI({
      apiKey: process.env.CHATGPT_API_KEY,
    }),
  },
  controllers: [
    await fetch(
      "https://shopping-be.wrtn.ai/editor/swagger.json",
    ).then(r => r.json()),
    typia.llm.application<ShoppingCounselor>(),
    typia.llm.application<ShoppingPolicy>(),
    typia.llm.application<ShoppingSearchRag>(),
  ],
  config: {
    executor: {
      select: selectorExecute,
    },
  },
});
await agent.conversate("I wanna buy MacBook Pro");
```

## Principles

### Agent Strategy
```mermaid
sequenceDiagram
  actor User
  actor Agent
  participant Selector
  participant Caller
  participant Describer
  activate User
  User-->>Agent: Conversate:<br/>user says
  activate Agent
  Agent->>Selector: Deliver conversation text
  activate Selector
  deactivate User
  Note over Selector: Select or remove candidate functions
  alt No candidate
    Selector->>Agent: Talk like plain ChatGPT
    deactivate Selector
    Agent->>User: Conversate:<br/>agent says
    activate User
    deactivate User
  end
  deactivate Agent
  loop Candidate functions exist
    activate Agent
    Agent->>Caller: Deliver conversation text
    activate Caller
    alt Contexts are enough
      Note over Caller: Call fulfilled functions
      Caller->>Describer: Function call histories
      deactivate Caller
      activate Describer
      Describer->>Agent: Describe function calls
      deactivate Describer
      Agent->>User: Conversate:<br/>agent describes
      activate User
      deactivate User
    else Contexts are not enough
      break
        Caller->>Agent: Request more information
      end
      Agent->>User: Conversate:<br/>agent requests
      activate User
      deactivate User
    end
    deactivate Agent
  end
```

When the user speaks, `@agentica/core` delivers the conversation text to the `selector` agent and lets it find (or cancel) candidate functions from the context. If the `selector` agent cannot find any candidate function to call, and no candidate function has been selected previously either, the `selector` agent works just like plain ChatGPT.

`@agentica/core` then enters a loop that runs until the candidate functions become empty. In the loop, the `caller` agent attempts LLM function calling by analyzing the user's conversation text. If the context is sufficient to compose the arguments of the candidate functions, the `caller` agent actually calls the target functions and lets the `describer` agent explain the results of the function calls. If the context is not sufficient, the `caller` agent requests more information from the user.

This LLM (Large Language Model) function calling strategy, separating the `selector`, `caller`, and `describer`, is the key logic of `@agentica/core`.
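The control flow above can be sketched in a few lines of plain TypeScript. This is a hypothetical, simplified illustration, not `@agentica/core`'s actual code: the real `selector`, `caller`, and `describer` are LLM-backed, while here they are stubbed with deterministic logic so that only the orchestration loop is visible.

```typescript
// Simplified sketch of the selector → caller → describer loop (stubbed agents).
interface Operation {
  name: string;
  execute: (args: Record<string, unknown>) => unknown;
}

// "selector": picks candidate functions mentioned in the user's text
const select = (text: string, operations: Operation[]): Operation[] =>
  operations.filter((op) => text.includes(op.name));

// "caller": composes arguments and calls the target function
const call = (op: Operation): unknown => op.execute({});

// "describer": explains the function calling result in prose
const describe = (op: Operation, result: unknown): string =>
  `Called ${op.name}, which returned ${JSON.stringify(result)}.`;

const conversate = (text: string, operations: Operation[]): string[] => {
  const candidates: Operation[] = select(text, operations);
  if (candidates.length === 0)
    return ["(no candidate functions; talk like plain ChatGPT)"];
  const messages: string[] = [];
  while (candidates.length > 0) {
    // loop until the candidate functions become empty
    const op: Operation = candidates.shift()!;
    messages.push(describe(op, call(op)));
  }
  return messages;
};

const messages = conversate("please create an article", [
  { name: "create", execute: () => ({ id: "1234" }) },
]);
console.log(messages[0]); // Called create, which returned {"id":"1234"}.
```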
### Validation Feedback
```typescript
import { FunctionCall } from "pseudo";
import { ILlmFunction, IValidation } from "typia";

export const correctFunctionCall = (p: {
  call: FunctionCall;
  functions: Array<ILlmFunction<"chatgpt">>;
  retry: (reason: string, errors?: IValidation.IError[]) => Promise<unknown>;
}): Promise<unknown> => {
  // FIND FUNCTION
  const func: ILlmFunction<"chatgpt"> | undefined =
    p.functions.find((f) => f.name === p.call.name);
  if (func === undefined) {
    // never happened in my experience
    return p.retry(
      "Unable to find the matched function name. Try it again.",
    );
  }

  // VALIDATE
  const result: IValidation<unknown> = func.validate(p.call.arguments);
  if (result.success === false) {
    // 1st trial: 50% (gpt-4o-mini in shopping mall chatbot)
    // 2nd trial with validation feedback: 99%
    // 3rd trial with validation feedback again: never failed
    return p.retry(
      "Type errors are detected. Correct them through the validation errors.",
      result.errors,
    );
  }
  return Promise.resolve(result.data);
};
```

Is LLM function calling perfect?

The answer is no. LLM (Large Language Model) vendors like OpenAI make a lot of type-level mistakes when composing the arguments of the target function to call. Even though an LLM function calling schema defines an `Array<string>` type, the LLM often fills it with just a `string` value.

Therefore, when developing an LLM function calling agent, a validation feedback process is essential. If the LLM makes a type-level mistake when composing arguments, the agent must feed back the most detailed validation errors, and let the LLM retry the function call referencing those errors.
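The feedback loop can be demonstrated end to end with a self-contained sketch. This is an illustration only: the validator is a hand-rolled stand-in for `typia.validate<T>()`, and the "LLM" is a stub that makes exactly the `Array<string>`-vs-`string` mistake described above on its first trial, correcting itself once it receives the validation error.

```typescript
// Stand-in validation result type (shaped loosely after typia's IValidation).
type Validation =
  | { success: true; data: { keywords: string[] } }
  | { success: false; errors: Array<{ path: string; expected: string; value: unknown }> };

// Hand-rolled validator: expects { keywords: Array<string> }.
const validate = (input: unknown): Validation => {
  const keywords = (input as { keywords?: unknown }).keywords;
  if (Array.isArray(keywords) && keywords.every((k) => typeof k === "string"))
    return { success: true, data: { keywords: keywords as string[] } };
  return {
    success: false,
    errors: [{ path: "$input.keywords", expected: "Array<string>", value: keywords }],
  };
};

// Stub LLM: fills Array<string> with a bare string at first,
// and repairs its output only when given validation errors.
const llmCompose = (feedback?: Validation & { success: false }): unknown =>
  feedback === undefined
    ? { keywords: "economy" } // 1st trial: type-level mistake
    : { keywords: ["economy"] }; // retrial referencing the errors

let trials = 0;
let result: Validation = validate(llmCompose());
trials++;
while (result.success === false) {
  result = validate(llmCompose(result)); // retry with validation feedback
  trials++;
}
console.log(trials); // 2
```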

For validation feedback, `@agentica/core` utilizes the [`typia.validate<T>()`](https://typia.io/docs/validators/validate) and [`typia.llm.application<Class, Model>()`](https://typia.io/docs/llm/application/#application) functions. They construct validation logic by analyzing TypeScript source code and types at compile time, so they are more detailed and accurate than any other validators, as shown below.

With this validation feedback strategy combined with the `typia` runtime validator, `@agentica/core` has achieved nearly ideal LLM function calling. In my experience, OpenAI's `gpt-4o-mini` model tends to compose invalid function calling arguments on the first trial about 50% of the time. However, when corrected through validation feedback with `typia`, the success rate soars to 99%, and I've never seen a failure when applying validation feedback twice.

Components | `typia` | `TypeBox` | `ajv` | `io-ts` | `zod` | `C.V.`
-------------------------|--------|-----------|-------|---------|-------|------------------
**Easy to use** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
[Object (simple)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectSimple.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
[Object (hierarchical)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectHierarchical.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
[Object (recursive)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectRecursive.ts) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔
[Object (union, implicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectUnionImplicit.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
[Object (union, explicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ObjectUnionExplicit.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌
[Object (additional tags)](https://github.com/samchon/typia/#comment-tags) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
[Object (template literal types)](https://github.com/samchon/typia/blob/master/test/src/structures/TemplateUnion.ts) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌
[Object (dynamic properties)](https://github.com/samchon/typia/blob/master/test/src/structures/DynamicTemplate.ts) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌
[Array (rest tuple)](https://github.com/samchon/typia/blob/master/test/src/structures/TupleRestAtomic.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
[Array (hierarchical)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayHierarchical.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
[Array (recursive)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursive.ts) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌
[Array (recursive, union)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursiveUnionExplicit.ts) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌
[Array (R+U, implicit)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRecursiveUnionImplicit.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
[Array (repeated)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRepeatedNullable.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
[Array (repeated, union)](https://github.com/samchon/typia/blob/master/test/src/structures/ArrayRepeatedUnionWithTuple.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌
[**Ultimate Union Type**](https://github.com/samchon/typia/blob/master/test/src/structures/UltimateUnion.ts) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌

> `C.V.` means `class-validator`

### OpenAPI Specification
```mermaid
flowchart
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Meta"--> llama("Llama")
  end
```

`@agentica/core` obtains LLM function calling schemas from both Swagger/OpenAPI documents and TypeScript class types. A TypeScript class type can be converted to an LLM function calling schema by the [`typia.llm.application<Class, Model>()`](https://typia.io/docs/llm/application#application) function. Then what about OpenAPI documents? How can a Swagger document become an LLM function calling schema?

The secret is in the diagram above.

The OpenAPI specification has three versions with different definitions, and even within the same version there are many ambiguous and duplicated expressions. To resolve these problems, [`@samchon/openapi`](https://github.com/samchon/openapi) transforms every OpenAPI document into the emended v3.1 specification. `@samchon/openapi`'s emended v3.1 specification removes every ambiguous and duplicated expression for clarity.

From the emended v3.1 OpenAPI document, `@samchon/openapi` then derives a migration schema whose shape is close to a function structure. As the last step, the migration schema is transformed into a specific LLM vendor's function calling schema. This is how the LLM function calling schemas are composed.

> **Why not convert directly, instead of through an intermediate schema?**
>
> If each version of the OpenAPI specification were converted directly to each LLM's function calling schema, the number of converters would grow as a Cartesian product. With the current models, that would be 12 (3 × 4) converters.
>
> With an intermediate schema, however, the number of converters shrinks to a sum: only 7 (3 + 4) converters are needed. This is why I've defined the intermediate specification; it is the economical way.
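The arithmetic behind this design choice can be checked in a couple of lines. This is just a back-of-the-envelope illustration, not library code, using the three source specifications and four target vendors from the diagram above:

```typescript
// Converter counts with and without an intermediate schema.
const sources = ["Swagger v2.0", "OpenAPI v3.0", "OpenAPI v3.1"];
const targets = ["ChatGPT", "Claude", "Gemini", "Llama"];

// Direct conversion: one converter per (source, target) pair.
const direct = sources.length * targets.length;

// Via the emended intermediate: one upgrader per source, one emitter per target.
const viaIntermediate = sources.length + targets.length;

console.log(direct, viaIntermediate); // 12 7
```

Adding a fifth LLM vendor would cost four more converters in the direct design, but only one more emitter with the intermediate schema.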