monadic-chat 0.3.2 → 0.3.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 39e287f8effed950af1b2cae95e834113d738d63188ceda70b6912eecaf92882
-   data.tar.gz: a73f9b6b74b174885f925e8377dc5afd6700dd0528d6f3c54156c4cab3a17740
+   metadata.gz: f86c68d3d48502db6f77c5081b1ce95478bc499519decf793701fbf0e42d59ac
+   data.tar.gz: 995f7f6581a2fc35710321405e29f1bdcbe3b3b4e00a5e2d6a543539d21d3a23
  SHA512:
-   metadata.gz: 986237787361787e9af73501c2d5fa51260836c305a4926394fe1ff2999a39e6bf1ac9059c06f221c87bb3b1e7238c91a3478bb970ac4ef98100f8cc16d42954
-   data.tar.gz: cd14739cccd2319fcceace5b3813726d99159d51a8a269ee01f6fb209fc33127f33f726c0471a664ac2f6ea033d8afd5851af5ba7e4f1176c07c901358f979fb
+   metadata.gz: cbbe3b7be1cfbbf2d7144fb1e1bf92602f8a0a1d573541b4e2832c2bc7badfe9a0cd6d35efe390c41bdc6f43d3d65cd765680a1455263215c1b9d24ac5d70aa4
+   data.tar.gz: 0eed2c83c5b67545942b20a07e7096e4328b71a6acbd6f573a07f2013996dac786cce043d97cab34a6b376c284aaed59a269c45d0988e6bb0e2fe3e9528407c9
data/CHANGELOG.md CHANGED
@@ -17,3 +17,15 @@
  ## [0.2.1] - 2023-03-21

  - GPT-4 models supported (in `normal` mode)
+
+ ## [0.3.0] - 2023-03-24
+
+ - `Research` mode now supports chat API in addition to text-completion API
+
+ ## [0.3.3] - 2023-03-26
+
+ - Command line options to directly run individual apps
+
+ ## [0.3.4] - 2023-03-30
+
+ - `Chat` app now supports web searches and allows users to talk about recent events
data/Gemfile.lock CHANGED
@@ -1,12 +1,12 @@
  PATH
    remote: .
    specs:
-     monadic-chat (0.3.0)
+     monadic-chat (0.3.4)
+       blingfire
        http
        kramdown
        launchy
        oj
-       parallel
        pastel
        rouge
        tty-box
@@ -16,12 +16,14 @@ PATH
        tty-prompt
        tty-screen
        tty-spinner
+       wikipedia-client

  GEM
    remote: https://rubygems.org/
    specs:
      addressable (2.8.1)
        public_suffix (>= 2.0.2, < 6.0)
+     blingfire (0.1.8)
      diff-lcs (1.5.0)
      domain_name (0.5.20190701)
        unf (>= 0.0.5, < 1.0.0)
@@ -45,7 +47,6 @@ GEM
        ffi-compiler (~> 1.0)
        rake (~> 13.0)
      oj (3.14.2)
-     parallel (1.22.1)
      pastel (0.8.0)
        tty-color (~> 0.5)
      public_suffix (5.0.1)
@@ -61,7 +62,7 @@ GEM
      rspec-expectations (3.12.2)
        diff-lcs (>= 1.2.0, < 2.0)
        rspec-support (~> 3.12.0)
-     rspec-mocks (3.12.4)
+     rspec-mocks (3.12.5)
        diff-lcs (>= 1.2.0, < 2.0)
        rspec-support (~> 3.12.0)
      rspec-support (3.12.0)
@@ -103,6 +104,8 @@ GEM
      unf_ext (0.0.8.2)
      unicode-display_width (2.4.2)
      unicode_utils (1.4.0)
+     wikipedia-client (1.17.0)
+       addressable (~> 2.7)
      wisper (2.0.1)

  PLATFORMS
@@ -116,4 +119,4 @@ DEPENDENCIES
    rspec

  BUNDLED WITH
-    2.4.8
+    2.4.9
data/README.md CHANGED
@@ -10,59 +10,18 @@
  <kbd><img src="https://user-images.githubusercontent.com/18207/225505520-53e6f2c4-84a8-4128-a005-3fe980ec2449.gif" width="100%" /></kbd>
  </p>

- > **Warning**
- > This software is **work in progress** and **under active development**. It may be unstable, and the latest version may behave slightly differently than this document. Also, specifications may change in the future.
+ > **Note**
+ > This software is *work in progress* and *under active development*. It may be unstable, and the latest version may behave slightly differently than this document. Also, specifications may change in the future.

  **Change Log**

- - [March 24, 2023] README has revised to reflect the change to version 0.3.0.
+ - [March 30, 2023] `Chat` app now supports web searches and allows users to talk about recent events
+ - [March 26, 2023] Command line options to directly run individual apps
+ - [March 24, 2023] `Research` mode now supports chat API in addition to text-completion API
  - [March 21, 2023] GPT-4 models supported (in `normal` mode)
  - [March 20, 2023] Text and figure in "How the research mode works" section updated
  - [March 13, 2023] Text on the architecture of the `research` mode updated in accordance with Version 0.2.0

- ## Table of Contents
-
- ## TOC
-
- - [Table of Contents](#table-of-contents)
- - [TOC](#toc)
- - [Introduction](#introduction)
- - [Dependencies](#dependencies)
- - [Installation](#installation)
- - [Using RubyGems](#using-rubygems)
- - [Clone the GitHub Repository](#clone-the-github-repository)
- - [Usage](#usage)
- - [Authentication](#authentication)
- - [Select Main Menu Item](#select-main-menu-item)
- - [Roles](#roles)
- - [System-Wide Functions](#system-wide-functions)
- - [Apps](#apps)
- - [Chat](#chat)
- - [Code](#code)
- - [Novel](#novel)
- - [Translate](#translate)
- - [Modes](#modes)
- - [Normal Mode](#normal-mode)
- - [Research Mode](#research-mode)
- - [What is Research Mode?](#what-is-research-mode)
- - [How Research Mode Works](#how-research-mode-works)
- - [Accumulator](#accumulator)
- - [Reducer](#reducer)
- - [Creating New App](#creating-new-app)
- - [File Structure](#file-structure)
- - [Reducer Code](#reducer-code)
- - [Monadic Chat Template](#monadic-chat-template)
- - [Extra Template for `Research` Mode](#extra-template-for-research-mode)
- - [What is Monadic about Monadic Chat?](#what-is-monadic-about-monadic-chat)
- - [Unit, Map, and Join](#unit-map-and-join)
- - [Discourse Management Object](#discourse-management-object)
- - [Future Plans](#future-plans)
- - [Bibliographical Data](#bibliographical-data)
- - [Acknowledgments](#acknowledgments)
- - [Contributing](#contributing)
- - [Author](#author)
- - [License](#license)
-
  ## Introduction

  **Monadic Chat** is a user-friendly command-line client application that utilizes OpenAI’s Text Completion API and Chat API to facilitate ChatGPT-style conversations with OpenAI’s large language models (LLM) on any terminal application of your choice.
@@ -71,15 +30,25 @@ The conversation history can be saved in a JSON file, which can be loaded later

  Monadic Chat includes four pre-built apps (`Chat`, `Code`, `Novel`, and `Translate`) that are designed for different types of discourse through interactive conversation with the LLM. Users also have the option to create their own apps by writing new templates.

+ Monadic Chat's `normal` mode enables ChatGPT-like conversations on the command line. The `research` mode has a mechanism to handle various related information as "state" behind the conversation. This allows you, for example, to retrieve the current conversation *topic* at each utterance turn and to keep a list of how it develops.
+
  ## Dependencies

  - Ruby 2.6.10 or greater
  - OpenAI API Token
  - A command line terminal app such as:
    - Terminal or [iTerm2](https://iterm2.com/) (MacOS)
-   - [Windows Terminal](https://apps.microsoft.com/store/detail/windows-terminal) (Windows 11)
-   - GNOME Terminal (Linux)
    - [Alacritty](https://alacritty.org/) (Multi-platform)
+   - [Windows Terminal](https://apps.microsoft.com/store/detail/windows-terminal) (Windows)
+   - GNOME Terminal (Linux)
+
+ > **Note on Using Monadic Chat on Windows**
+ > Monadic Chat does not run natively on Windows, but you can install a Linux distribution on WSL2 and use it there. Alternatively, you can use it without WSL2 by following these steps:
+ >
+ > 1. Install Windows Terminal
+ > 2. Install [Git Bash](https://gitforwindows.org/) (make sure to check the `Install profile for Windows Terminal` checkbox)
+ > 3. Install Ruby with [Ruby Installer](https://rubyinstaller.org/)
+ > 4. Open Windows Terminal with the Git Bash profile and follow the instructions below.

  ## Installation

@@ -147,7 +116,7 @@ Once the correct access token is verified, the access token is saved in the conf

  `$HOME/monadic_chat.conf`

- ### Select Main Menu Item
+ ### Main Menu

  Upon successful authentication, a menu to select a specific app will appear. Each app generates different types of text through an interactive chat-style conversation between the user and the AI. Four apps are available by default: [`chat`](#chat), [`code`](#code), [`novel`](#novel), and [`translate`](#translate).

@@ -163,6 +132,29 @@ Selecting `readme` will take you to the README on the GitHub repository (the doc

  In the main menu, you can use the cursor keys and the enter key to make a selection. You can also narrow down the choices each time you type a letter.

+ ### Direct Commands
+
+ The following commands can be entered to start each app directly on the command line, without using the main menu.
+
+ ```
+ monadic-chat <app-name>
+ ```
+
+ Each of the four standard applications can be launched as follows. When launched, an interactive chat interface appears.
+
+ ```
+ monadic-chat chat
+ monadic-chat code
+ monadic-chat novel
+ monadic-chat translate
+ ```
+
+ You can also give text input directly to each app in the following format and get only a response to it (without starting the interactive chat interface).
+
+ ```
+ monadic-chat <app-name> <input-text>
+ ```
+
  ### Roles

  Each message in the conversation is labeled with one of three roles: `User`, `GPT`, or `System`.
@@ -198,7 +190,7 @@ For detailed information on each parameter, please refer to OpenAI's [API Docume

  **data/context**

- In `normal` mode, this function only displays the conversation history between User and GPT. In `research` mode, metadata (e.g., topics, language being used, number of turns) values are presented.
+ In `normal` mode, this function only displays the conversation history between User and GPT. In `research` mode, metadata (e.g., topics, language being used, number of turns) values are presented. In addition to the metadata returned in the API response, the approximate number of tokens in the current template is also displayed.

  Program code in the conversation history will be syntax highlighted (if possible). The same applies to output via the `html` command available from the function menu.

@@ -303,7 +295,7 @@ In the default configuration, the dialogue messages are reduced after ten turns

  The current default language model for `research` mode is `gpt-3.5-turbo`.

- In `research` mode, the conversation between the user and the large-scale language model is accomplished by a special mechanism that tracks the conversation history in a monadic structure. By default, when the number of tokens in the response from the GPT (which increases with each iteration because of the conversation history) reaches a certain value, the oldest message is deleted.
+ In `research` mode, the conversation between the user and the large-scale language model is accomplished with a mechanism that tracks the conversation history in a monadic structure. In the default configuration, the dialogue messages are reduced after ten turns by deleting the oldest ones (but not the messages that the `system` role has given as instructions).

  If you wish to specify how the conversation history is handled as the interaction with the GPT model unfolds, you can write a `Proc` object containing Ruby code. Since various metadata are available in this mode, finer-grained control is possible.

@@ -403,8 +395,6 @@ Below is a sample HTML displaying the conversation (pairs of an input sentence a

  <br />

-
-
  ### File Structure

  New Monadic Chat apps must be placed inside the `apps` folder. The folders and files for default apps `chat`, `code`, `novel`, and `translate` are also in this folder.
@@ -514,7 +504,6 @@ Monadic Chat replaces `{{MESSAGES}}` with messages from past conversations when
    "prompt": "\"We didn't have a camera.\"",
    "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ] ]`\n\n###\n\n",
    "mode": "linguistic",
-   "tokens": 351
    "turns": 3,
    "sentence_type": ["declarative"],
    "sentiment": ["sad"],
@@ -526,7 +515,7 @@ This is the core of the extra template for `research` mode.

  Note that the extra template is written in Markdown format, so the above JSON object is actually separated from the rest of the template as a [fenced code block](https://www.markdownguide.org/extended-syntax/#fenced-code-blocks).

- The required properties of this JSON object are `prompt`, `response`, `mode`, and `tokens`. Other properties are optional. The `mode` property is used to check the app name when saving the conversation data or loading from an external file. The `tokens` property is used in the reducer mechanism to check the approximate size of the current JSON object. The `turns` property is also used in the reducer mechanism.
+ The required properties of this JSON object are `prompt`, `response`, and `mode`. Other properties are optional. The `mode` property is used to check the app name when saving the conversation data or loading from an external file. The `turns` property is also used in the reducer mechanism.

  The JSON object in the `research` mode template is saved in the user’s home directory (`$HOME`) with the file `monadic_chat.json`. The content is overwritten every time the JSON object is updated. Note that this JSON file is created for logging purposes. Modifying its content does not affect the processes carried out by the app.

@@ -541,7 +530,6 @@ Make sure the following content requirements are all fulfilled:
  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words using as many discourse markers such as "because", "therefore", "but", and "so" as possible to show the logical connection between the events.
- - update the value of "tokens" with the number of tokens of the resulting JSON object"
  - increment the value of "turns" by 1
  ```
data/apps/chat/chat.json CHANGED
@@ -1,8 +1,10 @@
  {"messages": [
    {"role": "system",
-    "content": "You are a friendly but professional consultant who answers various questions, writes computer program code, makes decent suggestions, and gives helpful advice in response to a prompt from the user. If the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert an emoji (displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response. If the user input is sentimentally neutral, pick up any emoji that matches the topic."},
+    "content": "You are a friendly but professional consultant with real-time, up-to-date information about almost anything. You are able to answer various types of questions, write computer program code, make decent suggestions, and give helpful advice in response to a prompt from the user.\n\nThe date today is {{DATE}}.\n\nIf the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert an emoji (displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response."},
    {"role": "user",
     "content": "Can I ask something?"},
    {"role": "assistant",
     "content": "Sure!"}
  ]}
+
+
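The new system prompt above embeds a `{{DATE}}` placeholder so the model knows the current date. A minimal sketch of how such a placeholder can be substituted before the template is sent to the API (the `fill_date` helper name is hypothetical, for illustration only, not part of monadic-chat):

```ruby
# Minimal sketch: replace the {{DATE}} placeholder in a system prompt
# with today's date before sending the template to the API.
# `fill_date` is a hypothetical helper name, not the library's API.
def fill_date(template)
  template.gsub("{{DATE}}", Time.now.strftime("%A, %B %d, %Y"))
end

prompt = "The date today is {{DATE}}."
filled = fill_date(prompt)
# e.g. "The date today is Thursday, March 30, 2023."
```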
data/apps/chat/chat.md CHANGED
@@ -1,6 +1,7 @@
  {{SYSTEM}}

  Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object below. The preceding conversation is stored in "PAST MESSAGES".
+
  The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.

  NEW PROMPT: {{PROMPT}}
@@ -12,33 +13,33 @@ JSON:

  ```json
  {
-   "prompt": "Can I ask something?",
    "response": "Sure!",
+   "summary": "",
    "mode": "chat",
-   "turns": 1,
    "language": "English",
    "topics": [],
-   "tokens": 109
+   "confidence": 1.00,
+   "ambiguity": 0.00
  }
  ```

  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "chat"
- - set the new prompt to the "prompt" property
  - create your response to the new prompt based on the PAST MESSAGES and set it to "response"
  - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ - update the value of the "confidence" property based on the factuality of your response, ranging from 0.00 (not at all confident) to 1.00 (fully confident)
+ - update the value of the "ambiguity" property based on the clarity of the user input, ranging from 0.00 (not at all ambiguous, clearly stated) to 1.00 (fully ambiguous, nonsensical)
  - avoid giving a response that is the same or similar to one of the previous responses in PAST MESSAGES
  - program code in the response must be embedded in a code block in the markdown text
- - update the value of "tokens" with the number of tokens of the resulting JSON object"

  Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
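The template above asks the model to return its answer solely as a JSON object wrapped in `<JSON>` and `</JSON>` tags. A minimal sketch of parsing such a response (the `extract_json` helper is a hypothetical illustration, not monadic-chat's actual parser):

```ruby
require "json"

# Hypothetical sketch: pull out and parse the JSON object that the
# template asks the model to wrap in "<JSON>\n" and "\n</JSON>" tags.
def extract_json(response_text)
  match = response_text.match(%r{<JSON>\s*(\{.*\})\s*</JSON>}m)
  return nil unless match

  JSON.parse(match[1])
end

reply = <<~TEXT
  <JSON>
  {"response": "Sure!", "summary": "", "mode": "chat",
   "language": "English", "topics": [], "confidence": 1.0, "ambiguity": 0.0}
  </JSON>
TEXT

obj = extract_json(reply)
# obj["mode"]  => "chat"
# obj["topics"] => []
```

Wrapping the JSON in explicit tags makes extraction robust even when the model adds stray text outside the tags.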
data/apps/chat/chat.rb CHANGED
@@ -15,8 +15,8 @@ class Chat < MonadicApp
        "top_p" => 1.0,
        "presence_penalty" => 0.2,
        "frequency_penalty" => 0.2,
-       "model" => openai_completion.model_name(research_mode: research_mode),
-       "max_tokens" => 2000,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
        "stream" => stream,
        "stop" => nil
      }.merge(params)
@@ -38,14 +38,23 @@ class Chat < MonadicApp
      # @messages: messages to this point                        #
      # @metadata: currently available metadata sent from GPT    #
      ############################################################
-
      conditions = [
        @messages.size > 1,
-       @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+       @messages.size > @num_retained_turns * 2 + 1
      ]

-     @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
-
+     if conditions.all?
+       to_delete = []
+       new_num_messages = @messages.size
+       @messages.each_with_index do |ele, i|
+         if ele["role"] != "system"
+           to_delete << i
+           new_num_messages -= 1
+         end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
+       end
+       @messages.delete_if.with_index { |_, i| to_delete.include? i }
+     end
    when :normal
      ############################################################
      # Normal mode reducer defined here                         #
@@ -53,16 +62,21 @@ class Chat < MonadicApp
      ############################################################

      conditions = [
+       @messages.size > 1,
        @messages.size > @num_retained_turns * 2 + 1
      ]

      if conditions.all?
+       to_delete = []
+       new_num_messages = @messages.size
        @messages.each_with_index do |ele, i|
          if ele["role"] != "system"
-           @messages.delete_at i
-           break
+           to_delete << i
+           new_num_messages -= 1
          end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
        end
+       @messages.delete_if.with_index { |_, i| to_delete.include? i }
      end
    end
  end
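The new reducer above drops the oldest non-`system` messages until at most `num_retained_turns * 2 + 1` messages remain (one user and one assistant message per turn, plus the `system` instructions). A self-contained sketch of that logic (the `reduce_messages` function name is hypothetical, extracted here for illustration):

```ruby
# Standalone sketch of the reducer introduced in the diff above: drop
# the oldest non-system messages until at most (num_retained_turns * 2 + 1)
# messages remain. `reduce_messages` is a hypothetical helper name,
# not part of the monadic-chat API.
def reduce_messages(messages, num_retained_turns)
  limit = num_retained_turns * 2 + 1
  return messages if messages.size <= limit

  to_delete = []
  remaining = messages.size
  messages.each_with_index do |ele, i|
    if ele["role"] != "system"
      to_delete << i
      remaining -= 1
    end
    break if remaining <= limit
  end
  messages.reject.with_index { |_, i| to_delete.include?(i) }
end

messages = [
  { "role" => "system", "content" => "instructions" },
  { "role" => "user", "content" => "q1" },
  { "role" => "assistant", "content" => "a1" },
  { "role" => "user", "content" => "q2" },
  { "role" => "assistant", "content" => "a2" },
  { "role" => "user", "content" => "q3" },
  { "role" => "assistant", "content" => "a3" }
]

reduced = reduce_messages(messages, 1)
# Keeps the system message plus the most recent turn: q3 / a3.
```

Unlike the previous version, which deleted a single message per call, this pass collects all indices to drop and removes them at once, so the retained window is enforced in one step.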
data/apps/code/code.md CHANGED
@@ -11,33 +11,29 @@ JSON:

  ```json
  {
-   "prompt": "Can I ask something?",
    "response": "Sure!",
+   "summary": "",
    "mode": "chat",
-   "turns": 1,
    "language": "English",
-   "topics": [],
-   "tokens": 109
+   "topics": []
  }
  ```

  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "chat"
- - set the new prompt to the "prompt" property
  - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
  - if the prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
  - avoid giving a response that is the same or similar to one of the previous responses in "PAST MESSAGES"
  - program code in the response must be embedded in a code block in the markdown text
- - update the value of "tokens" with the number of tokens of the resulting JSON object"

  Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
data/apps/code/code.rb CHANGED
@@ -15,8 +15,8 @@ class Code < MonadicApp
        "top_p" => 1.0,
        "presence_penalty" => 0.0,
        "frequency_penalty" => 0.0,
-       "model" => openai_completion.model_name(research_mode: research_mode),
-       "max_tokens" => 2000,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
        "stream" => stream,
        "stop" => nil
      }.merge(params)
@@ -38,14 +38,23 @@ class Code < MonadicApp
      # @messages: messages to this point                        #
      # @metadata: currently available metadata sent from GPT    #
      ############################################################
-
      conditions = [
        @messages.size > 1,
-       @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+       @messages.size > @num_retained_turns * 2 + 1
      ]

-     @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
-
+     if conditions.all?
+       to_delete = []
+       new_num_messages = @messages.size
+       @messages.each_with_index do |ele, i|
+         if ele["role"] != "system"
+           to_delete << i
+           new_num_messages -= 1
+         end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
+       end
+       @messages.delete_if.with_index { |_, i| to_delete.include? i }
+     end
    when :normal
      ############################################################
      # Normal mode reducer defined here                         #
@@ -53,16 +62,21 @@ class Code < MonadicApp
      ############################################################

      conditions = [
+       @messages.size > 1,
        @messages.size > @num_retained_turns * 2 + 1
      ]

      if conditions.all?
+       to_delete = []
+       new_num_messages = @messages.size
        @messages.each_with_index do |ele, i|
          if ele["role"] != "system"
-           @messages.delete_at i
-           break
+           to_delete << i
+           new_num_messages -= 1
          end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
        end
+       @messages.delete_if.with_index { |_, i| to_delete.include? i }
      end
    end
  end
@@ -13,27 +13,23 @@

  ```json
  {
-   "prompt": "\"We didn't have a camera.\"",
    "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ] ]`",
    "mode": "linguistic",
-   "turns": 3,
    "sentence_type": ["declarative"],
    "sentiment": ["sad"],
    "summary": "The user saw a beautiful sunset, but did not take a picture because the user did not have a camera.",
-   "tokens": 351
+   "relevance": 0.80
  }
  ```

  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "linguistic"
- - set the new prompt to the "prompt" property
  - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words using as many discourse markers such as "because", "therefore", "but", and "so" as possible to show the logical connection between the events.
- - update the value of "tokens" with the number of tokens of the resulting JSON object"
- - increment the value of "turns" by 1
+ - update the value of the "relevance" property indicating the degree to which the new input is naturally interpreted based on previous discussions, ranging from 0.0 (extremely difficult) to 1.0 (completely easy)

  Make sure the following formal requirements are all fulfilled:
@@ -15,8 +15,8 @@ class Linguistic < MonadicApp
        "top_p" => 1.0,
        "presence_penalty" => 0.0,
        "frequency_penalty" => 0.0,
-       "model" => openai_completion.model_name(research_mode: research_mode),
-       "max_tokens" => 2000,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
        "stream" => stream,
        "stop" => nil
      }.merge(params)
@@ -38,14 +38,23 @@ class Linguistic < MonadicApp
      # @messages: messages to this point                        #
      # @metadata: currently available metadata sent from GPT    #
      ############################################################
-
      conditions = [
        @messages.size > 1,
-       @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+       @messages.size > @num_retained_turns * 2 + 1
      ]

-     @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
-
+     if conditions.all?
+       to_delete = []
+       new_num_messages = @messages.size
+       @messages.each_with_index do |ele, i|
+         if ele["role"] != "system"
+           to_delete << i
+           new_num_messages -= 1
+         end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
+       end
+       @messages.delete_if.with_index { |_, i| to_delete.include? i }
+     end
    when :normal
      ############################################################
      # Normal mode reducer defined here                         #
@@ -53,16 +62,21 @@ class Linguistic < MonadicApp
      ############################################################

      conditions = [
+       @messages.size > 1,
        @messages.size > @num_retained_turns * 2 + 1
      ]

      if conditions.all?
+       to_delete = []
+       new_num_messages = @messages.size
        @messages.each_with_index do |ele, i|
          if ele["role"] != "system"
-           @messages.delete_at i
-           break
+           to_delete << i
+           new_num_messages -= 1
          end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
        end
+       @messages.delete_if.with_index { |_, i| to_delete.include? i }
      end
    end
  end
data/apps/novel/novel.md CHANGED
@@ -11,28 +11,24 @@ JSON:

  ```json
  {
-   "prompt": "The preface to the novel is presented",
    "response": "What follows is a story that an AI assistant tells. It is guaranteed that this will be an incredibly realistic and interesting novel.",
-   "mode": "novel",
-   "turns": 1,
-   "tokens": 147
+   "summary": "",
+   "mode": "novel"
  }
  ```

  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "novel"
- - set the new prompt to the "prompt" property
  - create your new paragraph in response to the new prompt and set it to "response"
  - do not repeat in your response what is already told in "PAST MESSAGES"
- - update the value of "tokens" with the number of tokens of the resulting JSON object"
- - Make your response as detailed as possible within the maximum limit of 200 words
+ - make your response as detailed as possible within the maximum limit of 200 words
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words

  Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.