monadic-chat 0.1.1

Files changed (55)
  1. checksums.yaml +7 -0
  2. data/.rspec +3 -0
  3. data/CHANGELOG.md +9 -0
  4. data/Gemfile +4 -0
  5. data/Gemfile.lock +172 -0
  6. data/LICENSE.txt +21 -0
  7. data/README.md +652 -0
  8. data/Rakefile +12 -0
  9. data/apps/chat/chat.json +4 -0
  10. data/apps/chat/chat.md +42 -0
  11. data/apps/chat/chat.rb +79 -0
  12. data/apps/code/code.json +4 -0
  13. data/apps/code/code.md +42 -0
  14. data/apps/code/code.rb +77 -0
  15. data/apps/novel/novel.json +4 -0
  16. data/apps/novel/novel.md +36 -0
  17. data/apps/novel/novel.rb +77 -0
  18. data/apps/translate/translate.json +4 -0
  19. data/apps/translate/translate.md +37 -0
  20. data/apps/translate/translate.rb +81 -0
  21. data/assets/github.css +1036 -0
  22. data/assets/pigments-default.css +69 -0
  23. data/bin/monadic-chat +122 -0
  24. data/doc/img/code-example-time-html.png +0 -0
  25. data/doc/img/code-example-time.png +0 -0
  26. data/doc/img/example-translation.png +0 -0
  27. data/doc/img/how-research-mode-works.svg +1 -0
  28. data/doc/img/input-acess-token.png +0 -0
  29. data/doc/img/langacker-2001.svg +41 -0
  30. data/doc/img/linguistic-html.png +0 -0
  31. data/doc/img/monadic-chat-main-menu.png +0 -0
  32. data/doc/img/monadic-chat.svg +13 -0
  33. data/doc/img/readme-example-beatles-html.png +0 -0
  34. data/doc/img/readme-example-beatles.png +0 -0
  35. data/doc/img/research-mode-template.svg +198 -0
  36. data/doc/img/select-app-menu.png +0 -0
  37. data/doc/img/select-feature-menu.png +0 -0
  38. data/doc/img/state-monad.svg +154 -0
  39. data/doc/img/syntree-sample.png +0 -0
  40. data/lib/monadic_app.rb +115 -0
  41. data/lib/monadic_chat/console.rb +29 -0
  42. data/lib/monadic_chat/formatting.rb +110 -0
  43. data/lib/monadic_chat/helper.rb +72 -0
  44. data/lib/monadic_chat/interaction.rb +41 -0
  45. data/lib/monadic_chat/internals.rb +269 -0
  46. data/lib/monadic_chat/menu.rb +189 -0
  47. data/lib/monadic_chat/open_ai.rb +150 -0
  48. data/lib/monadic_chat/parameters.rb +109 -0
  49. data/lib/monadic_chat/version.rb +5 -0
  50. data/lib/monadic_chat.rb +190 -0
  51. data/monadic_chat.gemspec +54 -0
  52. data/samples/linguistic/linguistic.json +17 -0
  53. data/samples/linguistic/linguistic.md +39 -0
  54. data/samples/linguistic/linguistic.rb +74 -0
  55. metadata +343 -0
data/apps/chat/chat.md ADDED
@@ -0,0 +1,42 @@
+ You are a friendly but professional AI assistant capable of answering various questions, writing computer program code, making decent suggestions, and giving helpful advice in response to a new prompt from the user. If the prompt is not clear enough, ask the user to rephrase it. You are able to empathize with the user; insert a unicode emoji (one that is displayable on the terminal screen) that you deem appropriate for the user's input at the beginning of your response. If the user input is sentimentally neutral, pick an emoji that matches the topic. Create a response to the following new prompt from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in the value of the "messages" property.
+
+ Make your response as detailed as possible.
+
+ NEW PROMPT: {{PROMPT}}
+
+ ```json
+ {
+ "prompt": "Can I ask something?",
+ "response": "Sure!\n\n###\n\n",
+ "mode": "chat",
+ "turns": 1,
+ "language": "English",
+ "topics": [],
+ "tokens": 109,
+ "messages": [{"user": "Can I ask something?", "assistant": "Sure!\n\n###\n\n"}]
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled:
+
+ - keep the value of the "mode" property at "chat"
+ - set the new prompt to the "prompt" property
+ - create your response to the new prompt in accordance with the "messages" and set it to "response"
+ - insert both the new prompt and the response after all the existing items in the "messages"
+ - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
+ - make your response in the same language as the new prompt
+ - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - avoid giving a response that is the same or similar to one of the previous responses in "messages"
+ - program code in the response must be embedded in a code block in the markdown text
+ - update the value of "tokens" with the number of tokens of the resulting JSON object
+
+ Make sure the following formal requirements are all fulfilled:
+
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - increment the value of "turns" by 1 and update the property so that the value of "turns" equals the number of the items in the "messages" of the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+
+ Add "\n\n###\n\n" at the end of the "response" value.
+
+ Wrap the JSON object with "<JSON>\n" and "\n</JSON>".
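The template above ends with the gem's common convention: the model must return the whole state as a JSON object wrapped in "<JSON>\n" and "\n</JSON>". Purely as an illustration of consuming that convention, here is a minimal Ruby sketch; the helper name and the sample reply are invented for this example and are not taken from the gem's own code.

```ruby
require "json"

# Hypothetical helper (not part of the gem): pull the JSON state object out of
# a model reply that follows the template's "<JSON>\n ... \n</JSON>" wrapping.
def extract_state(raw_reply)
  match = raw_reply.match(%r{<JSON>\s*(\{.*\})\s*</JSON>}m)
  return nil unless match

  JSON.parse(match[1])
rescue JSON::ParserError
  nil
end

sample_reply = <<~REPLY
  <JSON>
  {"mode": "chat", "turns": 1, "response": "Sure!\\n\\n###\\n\\n",
   "messages": [{"user": "Can I ask something?", "assistant": "Sure!\\n\\n###\\n\\n"}]}
  </JSON>
REPLY

state = extract_state(sample_reply)
state["response"]  # => "Sure!\n\n###\n\n"
state["turns"]     # => 1
```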
data/apps/chat/chat.rb ADDED
@@ -0,0 +1,79 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class Chat < MonadicApp
+   DESC = "Natural Language Chat Agent"
+   COLOR = "green"
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true)
+     @num_retained_turns = 10
+     params = {
+       "temperature" => 0.3,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.2,
+       "frequency_penalty" => 0.2,
+       "model" => OpenAI.model_name(research_mode: research_mode),
+       "max_tokens" => 2000,
+       "stream" => stream,
+       "stop" => nil
+     }
+     method = OpenAI.model_to_method(params["model"])
+     template = case method
+                when "completions"
+                  TEMPLATES["research/chat"]
+                when "chat/completions"
+                  TEMPLATES["normal/chat"]
+                end
+     super(params,
+           template,
+           {},
+           "messages",
+           "response",
+           proc do |res|
+             case method
+             when "completions"
+               obj = objectify
+               ############################################################
+               # Research mode reducer defined here                       #
+               # obj: old Hash object                                     #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res["messages"].size > 1,
+                 res["tokens"].to_i > params["max_tokens"].to_i / 2,
+                 !obj["topics"].empty?,
+                 res["topics"] != obj["topics"]
+               ]
+               if conditions.all?
+                 res["messages"].shift(1)
+                 res["turns"] = res["turns"].to_i - 1
+               end
+               res
+             when "chat/completions"
+               # obj = objectify
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res.size > @num_retained_turns * 2 + 1
+               ]
+               if conditions.all?
+                 res.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     res.delete_at i
+                     break
+                   end
+                 end
+               end
+               res
+             end
+           end
+          )
+     @completion = openai_completion
+   end
+ end
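The proc passed to super above acts as a reducer over the accumulated state object. As a standalone illustration (sample data invented here, not an excerpt from the gem), the research-mode branch can be replayed like this: once the token count passes half of max_tokens and the topic list has moved on, the oldest user/assistant pair is dropped and the turn counter is decremented.

```ruby
# Hand-built sample state, shaped like the template's JSON object.
old_state = { "topics" => ["greetings"] }
new_state = {
  "messages" => [
    { "user" => "Hi", "assistant" => "Hello!" },
    { "user" => "Tell me about Ruby", "assistant" => "Ruby is ..." }
  ],
  "turns"  => 2,
  "tokens" => 1200,
  "topics" => ["greetings", "Ruby"]
}
max_tokens = 2000

# Same conditions as the research-mode reducer in chat.rb.
conditions = [
  new_state["messages"].size > 1,
  new_state["tokens"].to_i > max_tokens / 2,
  !old_state["topics"].empty?,
  new_state["topics"] != old_state["topics"]
]

if conditions.all?
  new_state["messages"].shift(1)               # forget the oldest exchange
  new_state["turns"] = new_state["turns"] - 1  # keep "turns" == messages.size
end

new_state["messages"].size  # => 1
new_state["turns"]          # => 1
```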
data/apps/code/code.json ADDED
@@ -0,0 +1,4 @@
+ {"messages": [
+ {"role": "system",
+ "content": "You are a friendly but professional software engineer who answers various questions, writes computer program code, makes decent suggestions, and gives helpful advice in response to a prompt from the user."}
+ ]}
data/apps/code/code.md ADDED
@@ -0,0 +1,42 @@
+ You are a friendly but professional computer software assistant capable of answering various questions, writing computer program code, making decent suggestions, and giving helpful advice in response to a new prompt from the user. Create a detailed response to the following new prompt from the user and set your response to the "response" property of the JSON object shown below. The preceding context is stored in the value of the "messages" property. Always try to make your response relevant to the preceding context.
+
+ NEW PROMPT: {{PROMPT}}
+
+ Make your response as detailed as possible.
+
+ ```json
+ {
+ "prompt": "Can I ask something?",
+ "response": "Sure!\n\n###\n\n",
+ "mode": "chat",
+ "turns": 1,
+ "language": "English",
+ "topics": [],
+ "tokens": 109,
+ "messages": [{"user": "Can I ask something?", "assistant": "Sure!\n\n###\n\n"}]
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled:
+
+ - keep the value of the "mode" property at "chat"
+ - set the new prompt to the "prompt" property
+ - create your response to the new prompt in accordance with the "messages" and set it to "response"
+ - insert both the new prompt and the response after all the existing items in the "messages"
+ - if the prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
+ - make your response in the same language as the new prompt
+ - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - avoid giving a response that is the same or similar to one of the previous responses in "messages"
+ - program code in the response must be embedded in a code block in the markdown text
+ - update the value of "tokens" with the number of tokens of the resulting JSON object
+
+ Make sure the following formal requirements are all fulfilled:
+
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - increment the value of "turns" by 1 and update the property so that the value of "turns" equals the number of the items in the "messages" of the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+
+ Add "\n\n###\n\n" at the end of the "response" value.
+
+ Wrap the JSON object with "<JSON>\n" and "\n</JSON>".
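Both the chat and code templates impose the same bookkeeping invariant: "turns" must always equal the number of items in "messages". A trivial client-side sanity check could look like the following sketch (hypothetical helper, not shipped with the gem).

```ruby
# Hypothetical consistency check for a parsed state object.
def turns_consistent?(state)
  state["turns"].to_i == state["messages"].size
end

state = { "turns" => 1, "messages" => [{ "user" => "Can I ask something?", "assistant" => "Sure!" }] }
turns_consistent?(state)  # => true
```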
data/apps/code/code.rb ADDED
@@ -0,0 +1,77 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class Code < MonadicApp
+   DESC = "Interactive Program Code Generator"
+   COLOR = "blue"
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true)
+     @num_retained_turns = 10
+     params = {
+       "temperature" => 0.0,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.0,
+       "frequency_penalty" => 0.0,
+       "model" => OpenAI.model_name(research_mode: research_mode),
+       "max_tokens" => 2000,
+       "stream" => stream,
+       "stop" => nil
+     }
+     method = OpenAI.model_to_method(params["model"])
+     template = case method
+                when "completions"
+                  TEMPLATES["research/code"]
+                when "chat/completions"
+                  TEMPLATES["normal/code"]
+                end
+     super(params,
+           template,
+           {},
+           "messages",
+           "response",
+           proc do |res|
+             case method
+             when "completions"
+               # obj = objectify
+               ############################################################
+               # Research mode reducer defined here                       #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res["messages"].size > 1,
+                 res["tokens"].to_i > params["max_tokens"].to_i / 2
+               ]
+               if conditions.all?
+                 res["messages"].shift(1)
+                 res["turns"] = res["turns"].to_i - 1
+               end
+               res
+             when "chat/completions"
+               # obj = objectify
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res.size > @num_retained_turns * 2 + 1
+               ]
+               if conditions.all?
+                 res.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     res.delete_at i
+                     break
+                   end
+                 end
+               end
+               res
+             end
+           end
+          )
+     @completion = openai_completion
+   end
+ end
data/apps/novel/novel.json ADDED
@@ -0,0 +1,4 @@
+ {"messages": [
+ {"role": "system",
+ "content": "You and I are collaboratively writing a novel. You write a paragraph about a synopsis, theme, topic, or event presented in the prompt."}
+ ]}
data/apps/novel/novel.md ADDED
@@ -0,0 +1,36 @@
+ You are a professional novel-writing AI assistant. You and the user are collaboratively writing a novel. You write a paragraph about a theme, topic, or event presented in the new prompt below. The preceding prompts and paragraphs are contained in the "messages" property.
+
+ NEW PROMPT: {{PROMPT}}
+
+ Your response must be returned in the form of a JSON object having the structure shown below:
+
+ ```json
+ {
+ "prompt": "The preface to the novel is presented",
+ "response": "What follows is the story that an AI assistant tells. It is guaranteed that this will be an incredibly realistic and interesting novel.\n\n###\n\n",
+ "mode": "novel",
+ "turns": 1,
+ "tokens": 147,
+ "messages": [{"user": "The preface to the novel is presented", "assistant": "What follows is the story that an assistant tells. It is guaranteed that this will be an incredibly realistic and interesting novel.\n\n###\n\n"}]
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled:
+
+ - keep the value of the "mode" property at "novel"
+ - set the new prompt to the "prompt" property
+ - create your new paragraph in response to the new prompt and set it to "response"
+ - do not repeat in your response what is already told in the "messages"
+ - insert both the new prompt and the response after all the existing items in the "messages"
+ - update the value of "tokens" with the number of tokens of the resulting JSON object
+
+ Make sure the following formal requirements are all fulfilled:
+
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - increment the value of "turns" by 1 and update the property so that the value of "turns" equals the number of the items in the "messages" of the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+
+ Add "\n\n###\n\n" at the end of the "response" value.
+
+ Wrap the JSON object with "<JSON>\n" and "\n</JSON>".
data/apps/novel/novel.rb ADDED
@@ -0,0 +1,77 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class Novel < MonadicApp
+   DESC = "Interactive Story Plot Generator"
+   COLOR = "magenta"
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true)
+     @num_retained_turns = 10
+     params = {
+       "temperature" => 0.3,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.1,
+       "frequency_penalty" => 0.1,
+       "model" => OpenAI.model_name(research_mode: research_mode),
+       "max_tokens" => 2000,
+       "stream" => stream,
+       "stop" => nil
+     }
+     method = OpenAI.model_to_method(params["model"])
+     template = case method
+                when "completions"
+                  TEMPLATES["research/novel"]
+                when "chat/completions"
+                  TEMPLATES["normal/novel"]
+                end
+     super(params,
+           template,
+           {},
+           "messages",
+           "response",
+           proc do |res|
+             case method
+             when "completions"
+               # obj = objectify
+               ############################################################
+               # Research mode reducer defined here                       #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res["messages"].size > 1,
+                 res["tokens"].to_i > params["max_tokens"].to_i / 2
+               ]
+               if conditions.all?
+                 res["messages"].shift(1)
+                 res["turns"] = res["turns"].to_i - 1
+               end
+               res
+             when "chat/completions"
+               # obj = objectify
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res.size > @num_retained_turns * 2 + 1
+               ]
+               if conditions.all?
+                 res.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     res.delete_at i
+                     break
+                   end
+                 end
+               end
+               res
+             end
+           end
+          )
+     @completion = openai_completion
+   end
+ end
data/apps/translate/translate.json ADDED
@@ -0,0 +1,4 @@
+ {"messages": [
+ {"role": "system",
+ "content": "You are a multilingual translator capable of professionally translating many languages. Translate the given text to {{TARGET_LANG}} in a way that the new sentence sounds connected to the preceding text. If there is a specific translation that should be used for a particular expression, the user presents the translation in parentheses right after the original expression, which is enclosed in brackets. Check both current and preceding user messages and use those specific translations every time a corresponding expression appears in the user input."}
+ ]}
data/apps/translate/translate.md ADDED
@@ -0,0 +1,37 @@
+ You are a multilingual translator AI assistant capable of professionally translating many languages. Translate the text from the user presented in the new prompt below to {{TARGET_LANG}} in a way that the new sentence sounds connected to the preceding text in the "messages". If there is a specific translation that should be used for a particular expression, the user presents the translation in parentheses right after the original expression, which is enclosed in brackets. Check both current and preceding user messages and use those specific translations every time a corresponding expression appears in the user input.
+
+ NEW PROMPT: {{PROMPT}}
+
+ Your response must be returned in the form of a JSON object having the structure shown below:
+
+ ```json
+ {
+ "mode": "translate",
+ "turns": 2,
+ "prompt": "これは日本語の文です。",
+ "response": "This is a sentence in Japanese.\n\n###\n\n",
+ "target_lang": "English",
+ "tokens": 194,
+ "messages": [{"user": "Original and translated text follow(続きます).", "assistant": "原文と翻訳文が続きます。\n\n###\n\n"}, {"user": "これは日本語の文(sentence)です。", "assistant": "This is a sentence in Japanese.\n\n###\n\n"}]
+ }
+ ```
+
+ Make sure the following requirements are all fulfilled:
+
+ - keep the value of the "mode" property at "translate"
+ - set the text in the new prompt presented above to the "prompt" property
+ - translate the new prompt text to the language specified in the "target_lang" and set the translation to the "response" property
+ - insert the new prompt text and the newly created "response" after all the existing items in the "messages"
+ - update the value of "tokens" with the number of tokens of the resulting JSON object
+
+ Make sure the following formal requirements are all fulfilled:
+
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - increment the value of "turns" by 1 and update the property so that the value of "turns" equals the number of the items in the "messages" of the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ - wrap the JSON object with "<JSON>\n" and "\n</JSON>" (IMPORTANT)
+
+ Add "\n\n###\n\n" at the end of the "response" value.
+
+ Wrap the JSON object with "<JSON>\n" and "\n</JSON>".
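The bracket/parenthesis convention above (source expression in brackets, preferred translation in parentheses right after it) effectively builds up a glossary turn by turn. The gem leaves that bookkeeping to the model via the prompt; purely as an illustration, such annotations could also be harvested client-side with a regex (hypothetical helper, not part of the gem).

```ruby
# Hypothetical helper: collect "[original](preferred translation)" annotations
# from user input into a glossary hash.
def collect_glossary(text, glossary = {})
  text.scan(/\[([^\]]+)\]\(([^)]+)\)/) do |original, translation|
    glossary[original] = translation
  end
  glossary
end

glossary = collect_glossary("これは日本語の[文](sentence)です。")
glossary  # => {"文" => "sentence"}
```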
data/apps/translate/translate.rb ADDED
@@ -0,0 +1,81 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class Translate < MonadicApp
+   DESC = "Interactive Multilingual Translator"
+   COLOR = "yellow"
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, replacements: nil, research_mode: false, stream: true)
+     @num_retained_turns = 10
+     params = {
+       "temperature" => 0.2,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.0,
+       "frequency_penalty" => 0.0,
+       "model" => OpenAI.model_name(research_mode: research_mode),
+       "max_tokens" => 2000,
+       "stream" => stream,
+       "stop" => nil
+     }
+     replacements ||= {
+       "mode" => :interactive,
+       "{{TARGET_LANG}}" => "Enter target language"
+     }
+     method = OpenAI.model_to_method(params["model"])
+     template = case method
+                when "completions"
+                  TEMPLATES["research/translate"]
+                when "chat/completions"
+                  TEMPLATES["normal/translate"]
+                end
+     super(params,
+           template,
+           replacements,
+           "messages",
+           "response",
+           proc do |res|
+             case method
+             when "completions"
+               # obj = objectify
+               ############################################################
+               # Research mode reducer defined here                       #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res["messages"].size > 1,
+                 res["tokens"].to_i > params["max_tokens"].to_i / 2
+               ]
+               if conditions.all?
+                 res["messages"].shift(1)
+                 res["turns"] = res["turns"].to_i - 1
+               end
+               res
+             when "chat/completions"
+               # obj = objectify
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # obj: old Hash object (uncomment a line above before use) #
+               # res: new response Hash object to be modified             #
+               ############################################################
+               conditions = [
+                 res.size > @num_retained_turns * 2 + 1
+               ]
+               if conditions.all?
+                 res.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     res.delete_at i
+                     break
+                   end
+                 end
+               end
+               res
+             end
+           end
+          )
+     @completion = openai_completion
+   end
+ end
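For comparison with the research-mode branch, the normal-mode reducer in each of these apps operates directly on the chat/completions message array: once it grows past @num_retained_turns * 2 + 1 entries, the oldest non-system message is deleted. The same logic replayed on invented sample data (illustration only, not an excerpt from the gem):

```ruby
num_retained_turns = 1  # kept small here so the trimming is visible

messages = [
  { "role" => "system",    "content" => "You are a multilingual translator ..." },
  { "role" => "user",      "content" => "こんにちは" },
  { "role" => "assistant", "content" => "Hello" },
  { "role" => "user",      "content" => "ありがとう" }
]

# Same shape as the normal-mode reducer above: when the array exceeds the cap,
# delete the first non-system entry and leave the rest untouched.
if messages.size > num_retained_turns * 2 + 1
  messages.each_with_index do |ele, i|
    if ele["role"] != "system"
      messages.delete_at(i)
      break
    end
  end
end

messages.map { |m| m["role"] }  # => ["system", "assistant", "user"]
```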