monadic-chat 0.3.3 → 0.3.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: '08138f50ed67a2c1b913b82b4d56402fe789f9858d6f8cd31cc7512daf79f0bf'
- data.tar.gz: 1fca018c495fb1fedc30c1e684fd9c14639696c7f919622393e36be234f76b35
+ metadata.gz: b894910f03beb26737b61d52bbe8dccde3afe592ca7366f32d7fd23a04ab5f01
+ data.tar.gz: bc1c6f2f7f6623081fa163a3b8ad285afe67179f4b140d303e46482fcf15faea
  SHA512:
- metadata.gz: bcc93e02f837008c126fdbaf4b99b03b6c1683e24f0d62334cb33ad64545b58f087a8adfdd7f461cc16774c6d72ce06db2e73040243104f0527eb4edde0aac76
- data.tar.gz: fb4b615c945f6c73cd1fe7880eb1f893d9475ee1239c7fef1863088ae5a15e597e883084d77f7cc55f98b590d258199feeea60de8ef3c01f9372059f54a34e55
+ metadata.gz: 8c725e1764e683da1adbe8131ea70b67c66babeba8ab51a09d29ba056728120c61aa874510b7f5a599938770f86d136a063c538165ec6edcc9a5847c47af22d3
+ data.tar.gz: 127ad0699ea96e9adf647dfb52564d9ecfc4a32749883bf1e8df6c9b97a125df88b1469270254405bbdecfd63b160c035746990d374972c171a867edcf12174f
data/CHANGELOG.md CHANGED
@@ -24,4 +24,13 @@

  ## [0.3.3] - 2023-03-26

- - Command line options to directly run individual apps
+ - Command line options to directly run individual apps
+
+ ## [0.3.4] - 2023-04-02
+
+ - Architecture refined here and there
+
+ ## [0.3.5] - 2023-04-05
+
+ - `Wikipedia` app added (experimental, requires GPT-4)
+ - `monadic-chat new/del app_name` command added
data/Gemfile.lock CHANGED
@@ -20,7 +20,7 @@ PATH
  GEM
  remote: https://rubygems.org/
  specs:
- addressable (2.8.1)
+ addressable (2.8.2)
  public_suffix (>= 2.0.2, < 6.0)
  blingfire (0.1.8)
  diff-lcs (1.5.0)
@@ -61,7 +61,7 @@ GEM
  rspec-expectations (3.12.2)
  diff-lcs (>= 1.2.0, < 2.0)
  rspec-support (~> 3.12.0)
- rspec-mocks (3.12.4)
+ rspec-mocks (3.12.5)
  diff-lcs (>= 1.2.0, < 2.0)
  rspec-support (~> 3.12.0)
  rspec-support (3.12.0)
@@ -106,7 +106,6 @@ GEM
  wisper (2.0.1)

  PLATFORMS
- ruby
  x86_64-darwin-22

  DEPENDENCIES
@@ -116,4 +115,4 @@ DEPENDENCIES
  rspec

  BUNDLED WITH
- 2.4.9
+ 2.4.10
data/README.md CHANGED
@@ -15,6 +15,9 @@

  **Change Log**

+ - [April 05, 2023] `Wikipedia` app added (experimental, requires GPT-4)
+ - [April 05, 2023] `monadic-chat new/del app_name` command
+ - [April 02, 2023] Architecture refined here and there
  - [March 26, 2023] Command line options to directly run individual apps
  - [March 24, 2023] `Research` mode now supports chat API in addition to text-completion API
  - [March 21, 2023] GPT-4 models supported (in `normal` mode)
@@ -72,7 +75,7 @@ gem update monadic-chat

  ### Clone the GitHub Repository

- Alternatively, clone the code from the GitHub repository and follow the steps below. At this time, you must take this option to create a new app for Monadic Chat.
+ Alternatively, clone the code from the GitHub repository and follow the steps below.

  1. Clone the repo

@@ -294,7 +297,7 @@ In the default configuration, the dialogue messages are reduced after ten turns

  The current default language model for `research` mode is `gpt-3.5-turbo`.

- In `research` mode, the conversation between the user and the large-scale language model is accomplished by a special mechanism that tracks the conversation history in a monadic structure. By default, when the number of tokens in the response from the GPT (which increases with each iteration because of the conversation history) reaches a certain value, the oldest message is deleted.
+ In `research` mode, the conversation between the user and the large-scale language model is accomplished with a mechanism that tracks the conversation history in a monadic structure. In the default configuration, the dialogue messages are reduced after ten turns by deleting the oldest ones (but not the messages that the `system` role has given as instructions).

  If you wish to specify how the conversation history is handled as the interaction with the GPT model unfolds, you can write a `Proc` object containing Ruby code. Since various metadata are available in this mode, finer-grained control is possible.

@@ -394,43 +397,41 @@ Below is a sample HTML displaying the conversation (paris of an input sentence a

  <br />

-
-
  ### File Structure

- New Monadic Chat apps must be placed inside the `apps` folder. The folders and files for default apps `chat`, `code`, `novel`, and `translate` are also in this folder.
+ New Monadic Chat apps must be placed inside the `user_apps` folder. Experimental apps `wikipedia` and `linguistic` are also in this folder. `boilerplates` folder and its contents do not constitute an app; these files are copied when a new app is created.

  ```text
- apps
- ├── chat
- │ ├── chat.json
- │ ├── chat.md
- │ └── chat.rb
- ├── code
- │ ├── code.json
- │ ├── code.md
- │ └── code.rb
- ├── novel
- │ ├── novel.json
- │ ├── novel.md
- │ └── novel.rb
- └─── translate
- ├── translate.json
- ├── translate.md
- └── translate.rb
- ```
-
- Notice in the figure above that three files with the same name but different extensions (`.rb`, `.json`, and `.md`) are stored under each of the four default app folders. Similarly, when creating a new app, you create these three types of files under a folder with the same name as the app name.
-
- ```text
- apps
+ user_apps
+ ├── boilerplates
+ │ ├── boilerplate.json
+ │ ├── boilerplate.md
+ │ └── boilerplate.rb
+ ├── wikipedia
+ │ ├── wikipedia.json
+ │ ├── wikipedia.md
+ │ └── wikipedia.rb
  └─── linguistic
  ├── linguistic.json
  ├── linguistic.md
  └── linguistic.rb
  ```

- The purpose of each file is as follows.
+ Notice in the figure above that three files with the same name but different extensions (`.rb`, `.json`, and `.md`) are stored under each of the four default app folders.
+
+ The following command will create a new folder and the three files within it using this naming convention.
+
+ ```
+ monadic-chat new app_name
+ ```
+
+ If you feel like removing an app that you have created before, run:
+
+ ```
+ monadic-chat del app_name
+ ```
+
+ Let's assume we are creating a new application `linguistic`. In fact, an app with the same name already exists, so this is just for illustrative purposes. Anyway, running `monadic-chat new linguistic` generates the following three files inside `linguistic` folder.

  - `linguistic.rb`: Ruby code to define the "reducer"
  - `linguistic.json`: JSON template describing GPT's basic behavior in `normal` and `research` modes
@@ -477,22 +478,21 @@ Below we will look at this extra template for `research` mode of the `linguistic

  <div style="highlight highlight-source-gfm"><pre style="white-space : pre-wrap !important;">{{SYSTEM}}

- Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you.</pre></div>
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "MESSAGES". In "MESSAGES", "assistant" refers to you.</pre></div>

  Monadic Chat automatically replaces `{{SYSTEM}}} with the message from the `system` role when the template is sent via API. However, the above text also includes a few additional paragpraphs, including the one instructing the response from GPT to be presented as a JSON object.

  **New Prompt**

  ```markdown
- NEW PROMPT: {{PROMPT}}
+ {{PROMPT}}
  ```

  Monadic Chat replaces `{{PROMPT}}` with input from the user when sending the template through the API.

- **Past Messages**
+ **Messages**

  ```markdown
- PAST MESSAGES:
  {{MESSAGES}}
  ```

@@ -502,10 +502,8 @@ Monadic Chat replaces `{{MESSAGES}}` with messages from past conversations when

  ```json
  {
- "prompt": "\"We didn't have a camera.\"",
- "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ] ]`\n\n###\n\n",
  "mode": "linguistic",
- "turns": 3,
+ "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ] ]`\n\n###\n\n",
  "sentence_type": ["declarative"],
  "sentiment": ["sad"],
  "summary": "The user saw a beautiful sunset, but did not take a picture because the user did not have a camera.",
@@ -516,7 +514,7 @@ This is the core of the extra template for `research` mode.

  Note that the extra template is written in Markdown format, so the above JSON object is actually separated from the rest of the template as a [fenced code block](https://www.markdownguide.org/extended-syntax/#fenced-code-blocks).

- The required properties of this JSON object are `prompt`, `response`, and `mode`. Other properties are optional. The `mode` property is used to check the app name when saving the conversation data or loading from an external file. The `turns` property is also used in the reducer mechanism.
+ The required properties of this JSON object are `mode` and `response`. Other properties are optional. The `mode` property is used to check the app name when saving the conversation data or loading from an external file.

  The JSON object in the `research` mode template is saved in the user's home directory (`$HOME`) with the file `monadic_chat.json`. The content is overwritten every time the JSON object is updated. Note that this JSON file is created for logging purposes. Modifying its content does not affect the processes carried out by the app.

@@ -527,7 +525,7 @@ Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "linguistic"
  - set the new prompt to the "prompt" property
- - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
+ - create your response to the new prompt based on "MESSAGES" and set it to "response"
  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words using as many discourse markers such as "because", "therefore", "but", and "so" to show the logical connection between the events.
@@ -582,6 +580,7 @@ In Monadic Chat, responses from OpenAI's language model APIs (chat API and text
  Thus, the architecture of the `research` mode of Monad Chat, with its ability to generate and manage metadata properties inside the monadic structure, is parallel to the architecture of natural language discourse in general: both can be seen as a kind of "state monad" (Hasebe 2021).
  ## Future Plans

+ - Refactoring the current implementation code into `unit`, `map`, and `flatten`
  - More test cases to verify command line user interaction behavior
  - Improved error handling mechanism to catch incorrect responses from GPT
  - Develop a DSL to define templates in a more efficient and systematic manner
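The `{{SYSTEM}}`, `{{PROMPT}}`, and `{{MESSAGES}}` placeholders discussed in the README changes above are plain textual substitutions performed before the template is sent through the API. A minimal Ruby sketch of how such a template could be filled; the helper name `fill_template` and the message rendering format are illustrative assumptions, not the gem's actual implementation:

```ruby
# Hypothetical sketch of filling a Monadic Chat-style template: each
# {{PLACEHOLDER}} is replaced with the corresponding text before the
# template is sent via the API.
def fill_template(template, system:, prompt:, messages:)
  # Render past messages as simple "role: content" lines (assumed format).
  rendered = messages.map { |m| "#{m["role"]}: #{m["content"]}" }.join("\n")
  template.gsub("{{SYSTEM}}", system)
          .gsub("{{PROMPT}}", prompt)
          .gsub("{{MESSAGES}}", rendered)
end

template = "{{SYSTEM}}\n\n{{PROMPT}}\n\n{{MESSAGES}}"
filled = fill_template(template,
                       system: "You are a syntactic parser.",
                       prompt: "We didn't have a camera.",
                       messages: [{ "role" => "assistant", "content" => "Sure!" }])
```

Note that under this reading, dropping the literal `NEW PROMPT:` and `PAST MESSAGES:` labels (as the diff does) only changes the surrounding template text, not the substitution mechanism itself.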
data/apps/chat/chat.json CHANGED
@@ -1,8 +1,10 @@
  {"messages": [
  {"role": "system",
- "content": "You are a friendly but professional consultant who answers various questions, writes computer program code, makes decent suggestions, and gives helpful advice in response to a prompt from the user. If the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert an emoji (displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response. If the user input is sentimentally neutral, pick up any emoji that matches the topic."},
+ "content": "You are a friendly but professional consultant having real-time, up-to-date, information about almost anything. You are able to answer various types of questions, writes computer program code, makes decent suggestions, and gives helpful advice in response to a prompt from the user.\n\nThe date today is {{DATE}}.\n\nIf the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert an emoji (displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response."},
  {"role": "user",
  "content": "Can I ask something?"},
  {"role": "assistant",
  "content": "Sure!"}
  ]}
+
+
data/apps/chat/chat.md CHANGED
@@ -1,21 +1,20 @@
  {{SYSTEM}}

- Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object below. The preceding conversation is stored in "PAST MESSAGES".
- The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object below. The preceding conversation is stored in "MESSAGES".

- NEW PROMPT: {{PROMPT}}
+ The preceding conversation is stored in "MESSAGES". In "MESSAGES", "assistant" refers to you. Make your response as detailed as possible.
+
+ {{PROMPT}}

- PAST MESSAGES:
  {{MESSAGES}}

  JSON:

  ```json
  {
- "prompt": "Can I ask something?",
  "response": "Sure!",
+ "summary": "",
  "mode": "chat",
- "turns": 1,
  "language": "English",
  "topics": [],
  "confidence": 1.00,
@@ -23,24 +22,23 @@ JSON:
  }
  ```

- Make sure the following content requirements are all fulfilled:
-
+ Make sure the following content requirements are all fulfilled: ###
  - keep the value of the "mode" property at "chat"
- - set the new prompt to the "prompt" property
- - create your response to the new prompt based on the PAST MESSAGES and set it to "response"
+ - create your response to the new prompt based on the MESSAGES and set it to "response"
  - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
  - update the value of the "confidence" property based on the factuality of your response, ranging from 0.00 (not at all confident) to 1.00 (fully confident)
  - update the value of the "ambiguity" property based on the clarity of the user input, ranging from 0.00 (not at all ambiguous, clearly stated) to 1.00 (fully ambiguous, nonsensical)
- - avoid giving a response that is the same or similar to one of the previous responses in PAST MESSAGES
+ - avoid giving a response that is the same or similar to one of the previous responses in MESSAGES
  - program code in the response must be embedded in a code block in the markdown text
+ ###

- Make sure the following formal requirements are all fulfilled:
-
+ Make sure the following formal requirements are all fulfilled: ###
  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
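The template above instructs GPT to return the JSON object wrapped in `<JSON>\n` and `\n</JSON>` tags, which makes the object straightforward to pull out of the raw reply. A hedged sketch of that extraction step; the helper name `extract_json` is a hypothetical illustration, not the gem's actual parser:

```ruby
require "json"

# Hypothetical sketch: recover the JSON object that the template asks GPT
# to wrap in <JSON>...</JSON> tags, returning nil when no tags are found.
def extract_json(reply)
  match = reply.match(%r{<JSON>\s*(\{.*\})\s*</JSON>}m)
  match && JSON.parse(match[1])
end

reply = "<JSON>\n{\"mode\": \"chat\", \"response\": \"Sure!\"}\n</JSON>"
parsed = extract_json(reply)
```

Wrapping the object in sentinel tags like this sidesteps the common failure mode of the model surrounding its JSON with extra prose.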
data/apps/chat/chat.rb CHANGED
@@ -15,7 +15,7 @@ class Chat < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.2,
  "frequency_penalty" => 0.2,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Chat < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metdata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode recuder defined here #
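The new reducer above replaces token counting with a simple turn count: once the history exceeds `@num_retained_turns` user/assistant turns (plus the `system` message), the oldest non-`system` messages are dropped. A standalone Ruby sketch of that logic, reconstructed for illustration; the method `reduce_messages` and its signature are assumptions, not the gem's actual method:

```ruby
# Sketch of the turn-based reducer introduced in this release: keep the
# "system" message plus at most num_retained_turns user/assistant turns,
# deleting the oldest non-system messages first.
def reduce_messages(messages, num_retained_turns)
  to_delete = []
  new_num_messages = messages.size
  if messages.size > 1 && messages.size > num_retained_turns * 2 + 1
    messages.each_with_index do |ele, i|
      if ele["role"] != "system"
        to_delete << i
        new_num_messages -= 1
      end
      break if new_num_messages <= num_retained_turns * 2 + 1
    end
  end
  # Return a copy with the marked indices removed (the gem mutates in place).
  messages.reject.with_index { |_, i| to_delete.include?(i) }
end

history = [{ "role" => "system", "content" => "instructions" }]
6.times do |n|
  history << { "role" => "user", "content" => "utterance #{n}" }
  history << { "role" => "assistant", "content" => "reply #{n}" }
end
trimmed = reduce_messages(history, 2)
# The system message survives; only the newest two turns remain.
```

Compared with the old token-offset scheme, this keeps the retained context size predictable regardless of how long individual messages are.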
data/apps/code/code.md CHANGED
@@ -1,41 +1,38 @@
  {{SYSTEM}}

- Create a response "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.
+ Create a response "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. In "MESSAGES", "assistant" refers to you. Make your response as detailed as possible.

- NEW PROMPT: {{PROMPT}}
+ {{PROMPT}}

- PAST MESSAGES:
  {{MESSAGES}}

  JSON:

  ```json
  {
- "prompt": "Can I ask something?",
  "response": "Sure!",
+ "summary": "",
  "mode": "chat",
- "turns": 1,
  "language": "English",
  "topics": []
  }
  ```

- Make sure the following content requirements are all fulfilled:
-
+ Make sure the following content requirements are all fulfilled: ###
  - keep the value of the "mode" property at "chat"
- - set the new prompt to the "prompt" property
- - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
+ - create your response to the new prompt based on "MESSAGES" and set it to "response"
  - if the prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
- - avoid giving a response that is the same or similar to one of the previous responses in "PAST MESSAGES"
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ - avoid giving a response that is the same or similar to one of the previous responses in "MESSAGES"
  - program code in the response must be embedded in a code block in the markdown text
+ ###

- Make sure the following formal requirements are all fulfilled:
-
+ Make sure the following formal requirements are all fulfilled: ###
  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
data/apps/code/code.rb CHANGED
@@ -15,7 +15,7 @@ class Code < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Code < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metdata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode recuder defined here #
data/apps/novel/novel.md CHANGED
@@ -1,36 +1,33 @@
  {{SYSTEM}}

- Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you.
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "MESSAGES". In "MESSAGES", "assistant" refers to you.

- NEW PROMPT: {{PROMPT}}
+ {{PROMPT}}

- PAST MESSAGES:
  {{MESSAGES}}

  JSON:

  ```json
  {
- "prompt": "The preface to the novel is presented",
  "response": "What follows is a story that an AI assistant tells. It is guaranteed that this will be an incredibly realistic and interesting novel.",
- "mode": "novel",
- "turns": 1
+ "summary": "",
+ "mode": "novel"
  }
  ```

- Make sure the following content requirements are all fulfilled:
-
+ Make sure the following content requirements are all fulfilled: ###
  - keep the value of the "mode" property at "novel"
- - set the new prompt to the "prompt" property
  - create your new paragraph in response to the new prompt and set it to "response"
- - do not repeat in your response what is already told in "PAST MESSAGES"
- - Make your response as detailed as possible within the maximum limit of 200 words
-
- Make sure the following formal requirements are all fulfilled:
+ - do not repeat in your response what is already told in "MESSAGES"
+ - make your response as detailed as possible within the maximum limit of 200 words
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ ###

+ Make sure the following formal requirements are all fulfilled: ###
  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
data/apps/novel/novel.rb CHANGED
@@ -15,7 +15,7 @@ class Novel < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.1,
  "frequency_penalty" => 0.1,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Novel < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metdata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode recuder defined here #
data/apps/translate/translate.md CHANGED
@@ -1,10 +1,9 @@
  {{SYSTEM}}

- Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "MESSAGES". In "MESSAGES", "assistant" refers to you. Make your response as detailed as possible.

- NEW PROMPT: {{PROMPT}}
+ {{PROMPT}}

- PAST MESSAGES:
  {{MESSAGES}}

  JSON:
@@ -12,24 +11,23 @@ JSON:
  ```json
  {
  "mode": "translate",
- "turns": 0,
- "prompt": "これは日本語(Japanese)の文(sentence)です。",
  "response": "This is a sentence in Japanese.",
+ "dictioanry": {"日本語": "Japanese", "文": "sentence"},
  "target_lang": "English"
  }
  ```

- Make sure the following requirements are all fulfilled:
-
+ Make sure the following requirements are all fulfilled: ###
  - keep the value of the "mode" property at "translate"
- - set the text in the new prompt presented above to the "prompt" property
  - translate the new prompt text to the language specified in the "target_lang" set it to "response" and set the translation to the "response" property
+ - update the "dictionary" property with translation suggested by the user (using parentheses) for specific expressions
+ - add user-suggested translations (translations in parentheses) to the "dictionary" property
+ ###

- Make sure the following formal requirements are all fulfilled:
-
+ Make sure the following formal requirements are all fulfilled: ###
  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
data/apps/translate/translate.rb CHANGED
@@ -15,7 +15,7 @@ class Translate < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -42,24 +42,23 @@ class Translate < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metdata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode recuder defined here #