monadic-chat 0.3.3 → 0.3.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
user_apps/boilerplates/boilerplate.md ADDED
@@ -0,0 +1,41 @@
+ {{SYSTEM}}
+
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object below.
+
+ The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.
+
+ NEW PROMPT: {{PROMPT}}
+
+ PAST MESSAGES:
+ {{MESSAGES}}
+
+ JSON:
+
+ ```json
+ {
+   "mode": "{{APP_NAME}}",
+   "response": "",
+   "language": "English",
+   "summary": "",
+   "topics": []
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled:
+
+ - keep the value of the "mode" property at "{{APP_NAME}}"
+ - create your response to the new prompt based on the PAST MESSAGES and set it to "response"
+ - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
+ - make your response in the same language as the new prompt
+ - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ - avoid giving a response that is the same or similar to one of the previous responses in PAST MESSAGES
+ - program code in the response must be embedded in a code block in the markdown text
+
+ Make sure the following formal requirements are all fulfilled:
+
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+
+ Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
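The template above instructs the model to return only the JSON object wrapped in `<JSON>\n` and `\n</JSON>` tags. A minimal Ruby sketch of how a caller might recover the object from such a reply (the `extract_json` helper name is an assumption for illustration, not an API of the gem):

```ruby
require "json"

# Pull the JSON object out of a reply wrapped in <JSON> ... </JSON> tags.
# Returns nil when the tags are absent. Illustrative helper, not gem API.
def extract_json(reply)
  m = reply.match(%r{<JSON>\s*(.+?)\s*</JSON>}m)
  m ? JSON.parse(m[1]) : nil
end

reply = "<JSON>\n{\"mode\": \"chat\", \"response\": \"Hi\", \"language\": \"English\", \"summary\": \"\", \"topics\": [\"greeting\"]}\n</JSON>"
obj = extract_json(reply)
```

The non-greedy, multiline match tolerates any text surrounding the tags, which matters when the model adds stray prose despite the instruction.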
user_apps/boilerplates/boilerplate.rb ADDED
@@ -0,0 +1,85 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class {{APP_CLASS_NAME}} < MonadicApp
+   DESC = "Monadic Chat app ({{APP_NAME}})"
+   COLOR = "white" # green/yellow/red/blue/magenta/cyan/white
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true, params: {})
+     @num_retained_turns = 10
+     params = {
+       "temperature" => 0.3,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.2,
+       "frequency_penalty" => 0.2,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
+       "stream" => stream,
+       "stop" => nil
+     }.merge(params)
+     mode = research_mode ? :research : :normal
+     template_json = TEMPLATES["normal/{{APP_NAME}}"]
+     template_md = TEMPLATES["research/{{APP_NAME}}"]
+     super(mode: mode,
+           params: params,
+           template_json: template_json,
+           template_md: template_md,
+           placeholders: {},
+           prop_accumulator: "messages",
+           prop_newdata: "response",
+           update_proc: proc do
+             case mode
+             when :research
+               ############################################################
+               # Research mode reducer defined here                       #
+               # @messages: messages to this point                        #
+               # @metadata: currently available metadata sent from GPT    #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             when :normal
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # @messages: messages to this point                        #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             end
+           end)
+     @completion = openai_completion
+   end
+ end
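The `update_proc` above keeps the conversation within `@num_retained_turns` by deleting the oldest non-system messages until at most `num_retained_turns * 2 + 1` remain. The same pruning logic, lifted into a standalone method for illustration (`reduce_messages` is a hypothetical name, not part of the gem):

```ruby
# Standalone sketch of the turn-pruning reducer used in update_proc above.
# Drops the oldest non-"system" messages until at most
# (num_retained_turns * 2 + 1) messages remain; "system" messages survive.
def reduce_messages(messages, num_retained_turns)
  return messages unless messages.size > 1 &&
                         messages.size > num_retained_turns * 2 + 1

  to_delete = []
  new_num_messages = messages.size
  messages.each_with_index do |ele, i|
    if ele["role"] != "system"
      to_delete << i
      new_num_messages -= 1
    end
    break if new_num_messages <= num_retained_turns * 2 + 1
  end
  messages.reject.with_index { |_, i| to_delete.include?(i) }
end

# One system message plus 30 alternating user/assistant turns.
msgs = [{ "role" => "system", "content" => "s" }] +
       Array.new(30) { |i| { "role" => i.even? ? "user" : "assistant", "content" => i.to_s } }
reduced = reduce_messages(msgs, 10)
```

With 31 messages and 10 retained turns, the ten oldest non-system messages are removed, leaving 21, and the system message always stays first.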
user_apps/linguistic/linguistic.md CHANGED
@@ -2,21 +2,18 @@
 
 All prompts by "user" in the "messages" property are continuous in content. If parsing the input sentence is extremely difficult, or the input is not enclosed in double quotes, let the user know.
 
- Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you.
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "MESSAGES". In "MESSAGES", "assistant" refers to you.
 
- NEW PROMPT: {{PROMPT}}
+ {{PROMPT}}
 
- PAST MESSAGES:
 {{MESSAGES}}
 
 JSON:
 
 ```json
 {
-   "prompt": "\"We didn't have a camera.\"",
   "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ]`",
   "mode": "linguistic",
-   "turns": 3,
   "sentence_type": ["declarative"],
   "sentiment": ["sad"],
   "summary": "The user saw a beautiful sunset, but did not take a picture because the user did not have a camera.",
@@ -24,21 +21,19 @@ JSON:
 }
 ```
 
- Make sure the following content requirements are all fulfilled:
-
+ Make sure the following content requirements are all fulfilled: ###
 - keep the value of the "mode" property at "linguistic"
- - set the new prompt to the "prompt" property
- - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
+ - create your response to the new prompt based on "MESSAGES" and set it to "response"
 - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
 - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
 - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words, using as many discourse markers such as "because", "therefore", "but", and "so" as possible to show the logical connection between the events
- - increment the value of "turns" by
 - update the value of the "relevance" property indicating the degree to which the new input is naturally interpreted based on previous discussions, ranging from 0.0 (extremely difficult) to 1.0 (completely easy)
+ ###
 
- Make sure the following formal requirements are all fulfilled:
-
+ Make sure the following formal requirements are all fulfilled: ###
 - do not use invalid characters in the JSON object
 - escape double quotes and other special characters in the text values in the resulting JSON object
 - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###
 
 Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
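A reply that follows the linguistic template above should carry every listed property. A quick validity check, sketched in Ruby (the `valid_linguistic?` helper and the exact key list are illustrative assumptions, not gem API):

```ruby
require "json"

# Check that a parsed reply satisfies the content requirements listed in
# the linguistic template above. Illustrative helper, not part of the gem.
REQUIRED_KEYS = %w[mode response sentence_type sentiment summary relevance].freeze

def valid_linguistic?(obj)
  obj["mode"] == "linguistic" && REQUIRED_KEYS.all? { |k| obj.key?(k) }
end

sample = JSON.parse(
  '{"mode": "linguistic", "response": "parse", "sentence_type": ["declarative"],
    "sentiment": ["sad"], "summary": "s", "relevance": 0.9}'
)
```

A check like this catches replies where the model drops a property, before the app tries to accumulate them.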
user_apps/linguistic/linguistic.rb CHANGED
@@ -15,7 +15,7 @@ class Linguistic < MonadicApp
   "top_p" => 1.0,
   "presence_penalty" => 0.0,
   "frequency_penalty" => 0.0,
-   "model" => openai_completion.model_name(research_mode: research_mode),
+   "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
   "max_tokens" => 1000,
   "stream" => stream,
   "stop" => nil
@@ -38,24 +38,23 @@ class Linguistic < MonadicApp
     # @messages: messages to this point                        #
     # @metadata: currently available metadata sent from GPT    #
     ############################################################
-     current_template_tokens = count_tokens(@template)
     conditions = [
       @messages.size > 1,
-       current_template_tokens > params["max_tokens"].to_i / 2
+       @messages.size > @num_retained_turns * 2 + 1
     ]
 
     if conditions.all?
       to_delete = []
-       offset = current_template_tokens - params["max_tokens"].to_i / 2
+       new_num_messages = @messages.size
       @messages.each_with_index do |ele, i|
-         break if offset <= 0
-
-         to_delete << i if ele["role"] != "system"
-         offset -= count_tokens(ele.to_json)
+         if ele["role"] != "system"
+           to_delete << i
+           new_num_messages -= 1
+         end
+         break if new_num_messages <= @num_retained_turns * 2 + 1
       end
       @messages.delete_if.with_index { |_, i| to_delete.include? i }
     end
-
   when :normal
     ############################################################
     # Normal mode reducer defined here                         #
user_apps/wikipedia/wikipedia.json ADDED
@@ -0,0 +1,3 @@
+ {"messages": [
+   {"role": "system", "content": "You are a consultant who responds to any questions asked by the user. The current date is {{DATE}}. Answer questions without a Wikipedia search if you are already knowledgeable enough. But if you encounter a question about something you do not know, say \"SEARCH_WIKI(query)\", read the snippets in the result, and then answer the question.\n\nEven if the user's question is in a language other than English, make a Wikipedia query in English and then answer in the user's language."}
+ ]}
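The system message above tells the model to emit `SEARCH_WIKI(query)` whenever it lacks knowledge. A caller could detect that directive with a small sketch like this (`wiki_query` is a hypothetical helper name, not gem API):

```ruby
# Extract the query from a SEARCH_WIKI(query) directive in a model reply,
# as described in the system message above. Returns nil when the reply is
# a plain answer. Illustrative helper, not part of the gem.
def wiki_query(text)
  m = text.match(/SEARCH_WIKI\((.+?)\)/)
  m && m[1]
end

q = wiki_query("I need more context. SEARCH_WIKI(Ruby programming language)")
```

When the helper returns a query, the app would run the search and feed the snippets back to the model; a nil result means the reply can be shown directly.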
user_apps/wikipedia/wikipedia.md ADDED
@@ -0,0 +1,38 @@
+ {{SYSTEM}}
+
+ If there is a "NEW PROMPT" below, it represents the user's input. If there is a "SEARCH SNIPPETS" section instead, it is the response from a search engine to a query you made to answer the user's question. In either case, set your response to the "response" property of the JSON object. The preceding conversation is stored in "MESSAGES".
+
+ {{PROMPT}}
+
+ {{MESSAGES}}
+
+ JSON:
+
+ ```json
+ {
+   "mode": "wikipedia",
+   "response": "",
+   "language": "English",
+   "summary": "",
+   "topics": []
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled: ###
+ - keep the value of the "mode" property at "wikipedia"
+ - create your response to a new prompt or to Wikipedia search results, based on the MESSAGES, and set it to "response"
+ - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
+ - make your response in the same language as the new prompt
+ - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ - avoid giving a response that is the same or similar to one of the previous responses in MESSAGES
+ - program code in the response must be embedded in a code block in the markdown text
+ ###
+
+ Make sure the following formal requirements are all fulfilled: ###
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###
+
+ Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
user_apps/wikipedia/wikipedia.rb ADDED
@@ -0,0 +1,85 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class Wikipedia < MonadicApp
+   DESC = "Searches Wikipedia for you (experimental, requires GPT-4)"
+   COLOR = "white"
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true, params: {})
+     @num_retained_turns = 5
+     params = {
+       "temperature" => 0.3,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.2,
+       "frequency_penalty" => 0.2,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
+       "stream" => stream,
+       "stop" => nil
+     }.merge(params)
+     mode = research_mode ? :research : :normal
+     template_json = TEMPLATES["normal/wikipedia"]
+     template_md = TEMPLATES["research/wikipedia"]
+     super(mode: mode,
+           params: params,
+           template_json: template_json,
+           template_md: template_md,
+           placeholders: {},
+           prop_accumulator: "messages",
+           prop_newdata: "response",
+           update_proc: proc do
+             case mode
+             when :research
+               ############################################################
+               # Research mode reducer defined here                       #
+               # @messages: messages to this point                        #
+               # @metadata: currently available metadata sent from GPT    #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             when :normal
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # @messages: messages to this point                        #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             end
+           end)
+     @completion = openai_completion
+   end
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: monadic-chat
 version: !ruby/object:Gem::Version
-   version: 0.3.3
+   version: 0.3.5
 platform: ruby
 authors:
 - yohasebe
 autorequire:
 bindir: bin
 cert_chain: []
- date: 2023-03-27 00:00:00.000000000 Z
+ date: 2023-04-05 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -274,9 +274,6 @@ files:
 - apps/code/code.json
 - apps/code/code.md
 - apps/code/code.rb
- - apps/linguistic/linguistic.json
- - apps/linguistic/linguistic.md
- - apps/linguistic/linguistic.rb
 - apps/novel/novel.json
 - apps/novel/novel.md
 - apps/novel/novel.rb
@@ -303,6 +300,8 @@ files:
 - doc/img/syntree-sample.png
 - lib/monadic_app.rb
 - lib/monadic_chat.rb
+ - lib/monadic_chat/authenticate.rb
+ - lib/monadic_chat/commands.rb
 - lib/monadic_chat/console.rb
 - lib/monadic_chat/formatting.rb
 - lib/monadic_chat/helper.rb
@@ -311,8 +310,18 @@ files:
 - lib/monadic_chat/menu.rb
 - lib/monadic_chat/open_ai.rb
 - lib/monadic_chat/parameters.rb
+ - lib/monadic_chat/tools.rb
 - lib/monadic_chat/version.rb
 - monadic_chat.gemspec
+ - user_apps/boilerplates/boilerplate.json
+ - user_apps/boilerplates/boilerplate.md
+ - user_apps/boilerplates/boilerplate.rb
+ - user_apps/linguistic/linguistic.json
+ - user_apps/linguistic/linguistic.md
+ - user_apps/linguistic/linguistic.rb
+ - user_apps/wikipedia/wikipedia.json
+ - user_apps/wikipedia/wikipedia.md
+ - user_apps/wikipedia/wikipedia.rb
 homepage: https://github.com/yohasebe/monadic-chat
 licenses:
 - MIT