monadic-chat 0.3.3 → 0.3.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: '08138f50ed67a2c1b913b82b4d56402fe789f9858d6f8cd31cc7512daf79f0bf'
- data.tar.gz: 1fca018c495fb1fedc30c1e684fd9c14639696c7f919622393e36be234f76b35
+ metadata.gz: f86c68d3d48502db6f77c5081b1ce95478bc499519decf793701fbf0e42d59ac
+ data.tar.gz: 995f7f6581a2fc35710321405e29f1bdcbe3b3b4e00a5e2d6a543539d21d3a23
  SHA512:
- metadata.gz: bcc93e02f837008c126fdbaf4b99b03b6c1683e24f0d62334cb33ad64545b58f087a8adfdd7f461cc16774c6d72ce06db2e73040243104f0527eb4edde0aac76
- data.tar.gz: fb4b615c945f6c73cd1fe7880eb1f893d9475ee1239c7fef1863088ae5a15e597e883084d77f7cc55f98b590d258199feeea60de8ef3c01f9372059f54a34e55
+ metadata.gz: cbbe3b7be1cfbbf2d7144fb1e1bf92602f8a0a1d573541b4e2832c2bc7badfe9a0cd6d35efe390c41bdc6f43d3d65cd765680a1455263215c1b9d24ac5d70aa4
+ data.tar.gz: 0eed2c83c5b67545942b20a07e7096e4328b71a6acbd6f573a07f2013996dac786cce043d97cab34a6b376c284aaed59a269c45d0988e6bb0e2fe3e9528407c9
data/CHANGELOG.md CHANGED
@@ -25,3 +25,7 @@
  ## [0.3.3] - 2023-03-26

  - Command line options to directly run individual apps
+
+ ## [0.3.4] - 2023-03-30
+
+ - `Chat` app now supports web searches and allows users to talk about recent events
data/Gemfile.lock CHANGED
@@ -16,6 +16,7 @@ PATH
  tty-prompt
  tty-screen
  tty-spinner
+ wikipedia-client

  GEM
  remote: https://rubygems.org/
@@ -61,7 +62,7 @@ GEM
  rspec-expectations (3.12.2)
  diff-lcs (>= 1.2.0, < 2.0)
  rspec-support (~> 3.12.0)
- rspec-mocks (3.12.4)
+ rspec-mocks (3.12.5)
  diff-lcs (>= 1.2.0, < 2.0)
  rspec-support (~> 3.12.0)
  rspec-support (3.12.0)
@@ -103,6 +104,8 @@ GEM
  unf_ext (0.0.8.2)
  unicode-display_width (2.4.2)
  unicode_utils (1.4.0)
+ wikipedia-client (1.17.0)
+ addressable (~> 2.7)
  wisper (2.0.1)

  PLATFORMS
data/README.md CHANGED
@@ -15,6 +15,7 @@

  **Change Log**

+ - [March 30, 2023] `Chat` app now supports web searches and allows users to talk about recent events
  - [March 26, 2023] Command line options to directly run individual apps
  - [March 24, 2023] `Research` mode now supports chat API in addition to text-completion API
  - [March 21, 2023] GPT-4 models supported (in `normal` mode)
@@ -294,7 +295,7 @@ In the default configuration, the dialogue messages are reduced after ten turns

  The current default language model for `research` mode is `gpt-3.5-turbo`.

- In `research` mode, the conversation between the user and the large-scale language model is accomplished by a special mechanism that tracks the conversation history in a monadic structure. By default, when the number of tokens in the response from the GPT (which increases with each iteration because of the conversation history) reaches a certain value, the oldest message is deleted.
+ In `research` mode, the conversation between the user and the large-scale language model is accomplished with a mechanism that tracks the conversation history in a monadic structure. In the default configuration, the dialogue messages are reduced after ten turns by deleting the oldest ones (but not the messages that the `system` role has given as instructions).

  If you wish to specify how the conversation history is handled as the interaction with the GPT model unfolds, you can write a `Proc` object containing Ruby code. Since various metadata are available in this mode, finer-grained control is possible.

@@ -394,8 +395,6 @@ Below is a sample HTML displaying the conversation (pairs of an input sentence a

  <br />

-
-
  ### File Structure

  New Monadic Chat apps must be placed inside the `apps` folder. The folders and files for default apps `chat`, `code`, `novel`, and `translate` are also in this folder.
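The README's new description of history reduction — drop the oldest non-system messages once the conversation exceeds a retention limit — can be sketched as a `Proc` of the kind the README mentions. This is a minimal illustration assuming a `num_retained_turns` setting, not the gem's actual implementation.

```ruby
# Minimal sketch of a history reducer: once the conversation exceeds the
# retention limit, delete the oldest messages, but never the ones the
# "system" role has given as instructions. An assumption for illustration.
reducer = proc do |messages, num_retained_turns|
  limit = num_retained_turns * 2 + 1 # user/assistant message per turn, plus the system message
  next messages if messages.size <= limit

  to_delete = []
  remaining = messages.size
  messages.each_with_index do |msg, i|
    if msg["role"] != "system"
      to_delete << i
      remaining -= 1
    end
    break if remaining <= limit
  end
  messages.reject.with_index { |_, i| to_delete.include?(i) }
end
```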
data/apps/chat/chat.json CHANGED
@@ -1,8 +1,10 @@
  {"messages": [
  {"role": "system",
- "content": "You are a friendly but professional consultant who answers various questions, writes computer program code, makes decent suggestions, and gives helpful advice in response to a prompt from the user. If the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert an emoji (displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response. If the user input is sentimentally neutral, pick up any emoji that matches the topic."},
+ "content": "You are a friendly but professional consultant having real-time, up-to-date, information about almost anything. You are able to answer various types of questions, writes computer program code, makes decent suggestions, and gives helpful advice in response to a prompt from the user.\n\nThe date today is {{DATE}}.\n\nIf the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert an emoji (displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response."},
  {"role": "user",
  "content": "Can I ask something?"},
  {"role": "assistant",
  "content": "Sure!"}
  ]}
+
+
data/apps/chat/chat.md CHANGED
@@ -1,6 +1,7 @@
  {{SYSTEM}}

  Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object below. The preceding conversation is stored in "PAST MESSAGES".
+
  The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.

  NEW PROMPT: {{PROMPT}}
@@ -12,10 +13,9 @@ JSON:

  ```json
  {
- "prompt": "Can I ask something?",
  "response": "Sure!",
+ "summary": "",
  "mode": "chat",
- "turns": 1,
  "language": "English",
  "topics": [],
  "confidence": 1.00,
@@ -26,11 +26,11 @@ JSON:
  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "chat"
- - set the new prompt to the "prompt" property
  - create your response to the new prompt based on the PAST MESSAGES and set it to "response"
  - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
  - update the value of the "confidence" property based on the factuality of your response, ranging from 0.00 (not at all confident) to 1.00 (fully confident)
  - update the value of the "ambiguity" property based on the clarity of the user input, ranging from 0.00 (not at all ambiguous, clearly stated) to 1.00 (fully ambiguous, nonsensical)
  - avoid giving a response that is the same or similar to one of the previous responses in PAST MESSAGES
@@ -40,7 +40,6 @@ Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
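Since the template asks the model to return the JSON object wrapped in `<JSON>`/`</JSON>` tags, the caller has to pull it back out. The following is a sketch of that extraction step; it is an assumption about the surrounding pipeline, not the gem's actual code.

```ruby
require "json"

# Hypothetical extraction of the monadic JSON object from a model reply
# wrapped in <JSON> ... </JSON> tags; returns nil when no object is found.
def extract_json(reply)
  m = reply.match(%r{<JSON>\s*(\{.*\})\s*</JSON>}m)
  m && JSON.parse(m[1])
end

reply = "<JSON>\n{\"response\": \"Sure!\", \"summary\": \"\", \"mode\": \"chat\"}\n</JSON>"
obj = extract_json(reply)
```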
data/apps/chat/chat.rb CHANGED
@@ -15,7 +15,7 @@ class Chat < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.2,
  "frequency_penalty" => 0.2,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Chat < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode reducer defined here #
data/apps/code/code.md CHANGED
@@ -11,10 +11,9 @@ JSON:

  ```json
  {
- "prompt": "Can I ask something?",
  "response": "Sure!",
+ "summary": "",
  "mode": "chat",
- "turns": 1,
  "language": "English",
  "topics": []
  }
@@ -23,11 +22,11 @@ JSON:
  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "chat"
- - set the new prompt to the "prompt" property
  - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
  - if the prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
  - avoid giving a response that is the same or similar to one of the previous responses in "PAST MESSAGES"
  - program code in the response must be embedded in a code block in the markdown text

@@ -35,7 +34,6 @@ Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
data/apps/code/code.rb CHANGED
@@ -15,7 +15,7 @@ class Code < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Code < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode reducer defined here #
@@ -13,10 +13,8 @@ JSON:

  ```json
  {
- "prompt": "\"We didn't have a camera.\"",
  "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ] ]`",
  "mode": "linguistic",
- "turns": 3,
  "sentence_type": ["declarative"],
  "sentiment": ["sad"],
  "summary": "The user saw a beautiful sunset, but did not take a picture because the user did not have a camera.",
@@ -27,12 +25,10 @@ JSON:
  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "linguistic"
- - set the new prompt to the "prompt" property
  - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words using as many discourse markers such as "because", "therefore", "but", and "so" to show the logical connection between the events.
- - increment the value of "turns" by
  - update the value of the "relevance" property indicating the degree to which the new input is naturally interpreted based on previous discussions, ranging from 0.0 (extremely difficult) to 1.0 (completely easy)

  Make sure the following formal requirements are all fulfilled:
@@ -15,7 +15,7 @@ class Linguistic < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Linguistic < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode reducer defined here #
data/apps/novel/novel.md CHANGED
@@ -11,26 +11,24 @@ JSON:

  ```json
  {
- "prompt": "The preface to the novel is presented",
  "response": "What follows is a story that an AI assistant tells. It is guaranteed that this will be an incredibly realistic and interesting novel.",
- "mode": "novel",
- "turns": 1
+ "summary": "",
+ "mode": "novel"
  }
  ```

  Make sure the following content requirements are all fulfilled:

  - keep the value of the "mode" property at "novel"
- - set the new prompt to the "prompt" property
  - create your new paragraph in response to the new prompt and set it to "response"
  - do not repeat in your response what is already told in "PAST MESSAGES"
- - Make your response as detailed as possible within the maximum limit of 200 words
+ - make your response as detailed as possible within the maximum limit of 200 words
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words

  Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
data/apps/novel/novel.rb CHANGED
@@ -15,7 +15,7 @@ class Novel < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.1,
  "frequency_penalty" => 0.1,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -38,24 +38,23 @@ class Novel < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode reducer defined here #
@@ -12,9 +12,8 @@ JSON:
  ```json
  {
  "mode": "translate",
- "turns": 0,
- "prompt": "これは日本語(Japanese)の文(sentence)です。",
  "response": "This is a sentence in Japanese.",
+ "dictionary": {"日本語": "Japanese", "文": "sentence"},
  "target_lang": "English"
  }
  ```
@@ -22,14 +21,14 @@ JSON:
  Make sure the following requirements are all fulfilled:

  - keep the value of the "mode" property at "translate"
- - set the text in the new prompt presented above to the "prompt" property
  - translate the new prompt text to the language specified in the "target_lang" property and set the translation to the "response" property
+ - update the "dictionary" property with translations suggested by the user (using parentheses) for specific expressions
+ - add user-suggested translations (translations in parentheses) to the "dictionary" property

  Make sure the following formal requirements are all fulfilled:

  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
- - increment the value of "turns" by 1
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it

  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
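The "dictionary" behavior the translate template describes — the user suggests a translation for a specific expression by writing it in parentheses — can be sketched on the client side as follows. This is a hypothetical illustration; the `collect_suggestions` helper is an assumption, not the gem's actual code.

```ruby
# Hypothetical sketch: pull user-suggested translations written in
# parentheses, e.g. "bunsho(sentence)", out of a prompt and merge them
# into the running dictionary.
def collect_suggestions(prompt, dictionary = {})
  prompt.scan(/([^\s()]+)\(([^()]+)\)/).each { |term, gloss| dictionary[term] = gloss }
  dictionary
end

dict = collect_suggestions("This is a bunsho(sentence) in nihongo(Japanese).")
```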
@@ -15,7 +15,7 @@ class Translate < MonadicApp
  "top_p" => 1.0,
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
- "model" => openai_completion.model_name(research_mode: research_mode),
+ "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
  "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
@@ -42,24 +42,23 @@ class Translate < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
- current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- current_template_tokens > params["max_tokens"].to_i / 2
+ @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
  to_delete = []
- offset = current_template_tokens - params["max_tokens"].to_i / 2
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
- break if offset <= 0
-
- to_delete << i if ele["role"] != "system"
- offset -= count_tokens(ele.to_json)
+ if ele["role"] != "system"
+ to_delete << i
+ new_num_messages -= 1
+ end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
  @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
-
  when :normal
  ############################################################
  # Normal mode reducer defined here #
data/bin/monadic-chat CHANGED
@@ -54,11 +54,12 @@ module MonadicMenu
  clear_screen
  print "\n", banner.strip, "\n"

+ print TTY::Cursor.save
  openai_completion ||= MonadicChat.authenticate
  exit unless openai_completion

  max_app_name_width = APPS.reduce(8) { |accum, app| app.length > accum ? app.length : accum } + 2
- parameter = PROMPT_SYSTEM.select(" Current mode: #{print_mode.call(mode)}\n\nSelect item:",
+ parameter = PROMPT_SYSTEM.select("Current mode: #{print_mode.call(mode)}\n\nSelect item:",
  per_page: 10,
  cycle: true,
  filter: true,