monadic-chat 0.3.4 → 0.3.5

@@ -0,0 +1,41 @@
+ {{SYSTEM}}
+
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object below.
+
+ The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you. Make your response as detailed as possible.
+
+ NEW PROMPT: {{PROMPT}}
+
+ PAST MESSAGES:
+ {{MESSAGES}}
+
+ JSON:
+
+ ```json
+ {
+   "mode": "{{APP_NAME}}",
+   "response": "",
+   "language": "English",
+   "summary": "",
+   "topics": []
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled:
+
+ - keep the value of the "mode" property at "{{APP_NAME}}"
+ - create your response to the new prompt based on the PAST MESSAGES and set it to "response"
+ - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
+ - make your response in the same language as the new prompt
+ - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ - avoid giving a response that is the same or similar to one of the previous responses in PAST MESSAGES
+ - program code in the response must be embedded in a code block in the markdown text
+
+ Make sure the following formal requirements are all fulfilled:
+
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+
+ Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
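The template ends by asking the model to return the JSON object wrapped in "&lt;JSON&gt;\n" and "\n&lt;/JSON&gt;" tags. As a minimal sketch (the helper name and regex are our assumptions, not the gem's actual code), a client could unwrap and parse that envelope like this:

```ruby
require "json"

# Hypothetical helper: pull the JSON object that the template asks the
# model to wrap in <JSON> ... </JSON> tags, and parse it. Returns nil
# when no such envelope is present.
def extract_monad(text)
  match = text.match(%r{<JSON>\s*(\{.*\})\s*</JSON>}m)
  match && JSON.parse(match[1])
end

reply = <<~TEXT
  <JSON>
  {"mode": "chat", "response": "Hello!", "language": "English", "summary": "", "topics": ["greeting"]}
  </JSON>
TEXT

monad = extract_monad(reply)
puts monad["response"]
```

Parsing defensively like this matters because the formal requirements above exist precisely to keep the model's JSON machine-readable.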
@@ -0,0 +1,85 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class {{APP_CLASS_NAME}} < MonadicApp
+   DESC = "Monadic Chat app ({{APP_NAME}})"
+   COLOR = "white" # green/yellow/red/blue/magenta/cyan/white
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true, params: {})
+     @num_retained_turns = 10
+     params = {
+       "temperature" => 0.3,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.2,
+       "frequency_penalty" => 0.2,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
+       "stream" => stream,
+       "stop" => nil
+     }.merge(params)
+     mode = research_mode ? :research : :normal
+     template_json = TEMPLATES["normal/{{APP_NAME}}"]
+     template_md = TEMPLATES["research/{{APP_NAME}}"]
+     super(mode: mode,
+           params: params,
+           template_json: template_json,
+           template_md: template_md,
+           placeholders: {},
+           prop_accumulator: "messages",
+           prop_newdata: "response",
+           update_proc: proc do
+             case mode
+             when :research
+               ############################################################
+               # Research mode reducer defined here                       #
+               # @messages: messages to this point                        #
+               # @metadata: currently available metadata sent from GPT    #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             when :normal
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # @messages: messages to this point                        #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             end
+           end
+     )
+     @completion = openai_completion
+   end
+ end
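The `update_proc` above is the app's message reducer: once the history grows past `@num_retained_turns * 2 + 1` entries, the oldest non-system messages are dropped so the context sent to the API stays bounded while system messages survive. The same logic can be sketched as a standalone function (the name `prune` is ours, for illustration only):

```ruby
# Standalone sketch of the reducer in update_proc: delete the oldest
# non-system messages until at most (num_retained_turns * 2 + 1) remain.
def prune(messages, num_retained_turns)
  limit = num_retained_turns * 2 + 1
  return messages unless messages.size > 1 && messages.size > limit

  to_delete = []
  remaining = messages.size
  messages.each_with_index do |ele, i|
    if ele["role"] != "system"
      to_delete << i
      remaining -= 1
    end
    break if remaining <= limit
  end
  messages.reject.with_index { |_, i| to_delete.include?(i) }
end

# One system message plus eight alternating user/assistant turns.
history = [{ "role" => "system", "content" => "sys" }] +
          Array.new(8) { |i| { "role" => i.even? ? "user" : "assistant", "content" => "m#{i}" } }

pruned = prune(history, 3)
# The system message is kept; the oldest user/assistant turns are dropped.
```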
@@ -2,11 +2,10 @@
 
  All prompts by "user" in the "messages" property are continuous in content. If parsing the input sentence is extremely difficult, or the input is not enclosed in double quotes, let the user know.
 
- Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "PAST MESSAGES". In "PAST MESSAGES", "assistant" refers to you.
+ Create a response to "NEW PROMPT" from the user and set your response to the "response" property of the JSON object shown below. The preceding conversation is stored in "MESSAGES". In "MESSAGES", "assistant" refers to you.
 
- NEW PROMPT: {{PROMPT}}
+ {{PROMPT}}
 
- PAST MESSAGES:
  {{MESSAGES}}
 
  JSON:
@@ -22,19 +21,19 @@ JSON:
  }
  ```
 
- Make sure the following content requirements are all fulfilled:
-
+ Make sure the following content requirements are all fulfilled: ###
  - keep the value of the "mode" property at "linguistic"
- - create your response to the new prompt based on "PAST MESSAGES" and set it to "response"
+ - create your response to the new prompt based on "MESSAGES" and set it to "response"
  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words, using discourse markers such as "because", "therefore", "but", and "so" to show the logical connections between the events
  - update the value of the "relevance" property indicating the degree to which the new input is naturally interpreted based on previous discussions, ranging from 0.0 (extremely difficult) to 1.0 (completely easy)
+ ###
 
- Make sure the following formal requirements are all fulfilled:
-
+ Make sure the following formal requirements are all fulfilled: ###
  - do not use invalid characters in the JSON object
  - escape double quotes and other special characters in the text values in the resulting JSON object
  - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###
 
  Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
@@ -0,0 +1,3 @@
+ {"messages": [
+   {"role": "system", "content": "You are a consultant who responds to any questions asked by the user. The current date is {{DATE}}. Answer questions without a Wikipedia search if you are already knowledgeable enough. But if you encounter a question about something you do not know, say \"SEARCH_WIKI(query)\", read the snippets in the result, and then answer the question.\n\nEven if the user's question is in a language other than English, make a Wikipedia query in English and then answer in the user's language."}
+ ]}
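The system message above defines an ad-hoc tool protocol: when the model lacks the knowledge to answer, it is told to emit "SEARCH_WIKI(query)" instead of a final reply. The diff does not show the gem's parsing code, but a client could detect such a directive along these lines (the function name and regex are assumptions for illustration):

```ruby
# Hypothetical directive detector: return the query inside the first
# SEARCH_WIKI(...) occurrence, or nil if the reply is a plain answer.
# Note: the non-greedy capture stops at the first ")", so queries
# containing parentheses would need a more careful parser.
def wiki_query(reply)
  match = reply.match(/SEARCH_WIKI\((.+?)\)/)
  match && match[1]
end

puts wiki_query("I will look that up: SEARCH_WIKI(Alan Turing)")
```

If a query is found, the client would run the Wikipedia search and feed the snippets back to the model as the "SEARCH SNIPPETS" input described in the template below.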
@@ -0,0 +1,38 @@
+ {{SYSTEM}}
+
+ If there is a "NEW PROMPT" below, it represents the user's input. If instead there is a "SEARCH SNIPPETS" section below, it is the response from a search engine to a query you made to answer the user's question. In either case, set your response to the "response" property of the JSON object. The preceding conversation is stored in "MESSAGES".
+
+ {{PROMPT}}
+
+ {{MESSAGES}}
+
+ JSON:
+
+ ```json
+ {
+   "mode": "wikipedia",
+   "response": "",
+   "language": "English",
+   "summary": "",
+   "topics": []
+ }
+ ```
+
+ Make sure the following content requirements are all fulfilled: ###
+ - keep the value of the "mode" property at "wikipedia"
+ - create your response to a new prompt or to Wikipedia search results, based on the MESSAGES, and set it to "response"
+ - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
+ - make your response in the same language as the new prompt
+ - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words
+ - avoid giving a response that is the same or similar to one of the previous responses in MESSAGES
+ - program code in the response must be embedded in a code block in the markdown text
+ ###
+
+ Make sure the following formal requirements are all fulfilled: ###
+ - do not use invalid characters in the JSON object
+ - escape double quotes and other special characters in the text values in the resulting JSON object
+ - check the validity of the generated JSON object and correct any possible parsing problems before returning it
+ ###
+
+ Return your response consisting solely of the JSON object wrapped in "<JSON>\n" and "\n</JSON>" tags.
@@ -0,0 +1,85 @@
+ # frozen_string_literal: true
+
+ require_relative "../../lib/monadic_app"
+
+ class Wikipedia < MonadicApp
+   DESC = "Searches Wikipedia for you (experimental, requires GPT-4)"
+   COLOR = "white"
+
+   attr_accessor :template, :config, :params, :completion
+
+   def initialize(openai_completion, research_mode: false, stream: true, params: {})
+     @num_retained_turns = 5
+     params = {
+       "temperature" => 0.3,
+       "top_p" => 1.0,
+       "presence_penalty" => 0.2,
+       "frequency_penalty" => 0.2,
+       "model" => research_mode ? SETTINGS["research_model"] : SETTINGS["normal_model"],
+       "max_tokens" => 1000,
+       "stream" => stream,
+       "stop" => nil
+     }.merge(params)
+     mode = research_mode ? :research : :normal
+     template_json = TEMPLATES["normal/wikipedia"]
+     template_md = TEMPLATES["research/wikipedia"]
+     super(mode: mode,
+           params: params,
+           template_json: template_json,
+           template_md: template_md,
+           placeholders: {},
+           prop_accumulator: "messages",
+           prop_newdata: "response",
+           update_proc: proc do
+             case mode
+             when :research
+               ############################################################
+               # Research mode reducer defined here                       #
+               # @messages: messages to this point                        #
+               # @metadata: currently available metadata sent from GPT    #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             when :normal
+               ############################################################
+               # Normal mode reducer defined here                         #
+               # @messages: messages to this point                        #
+               ############################################################
+               conditions = [
+                 @messages.size > 1,
+                 @messages.size > @num_retained_turns * 2 + 1
+               ]
+
+               if conditions.all?
+                 to_delete = []
+                 new_num_messages = @messages.size
+                 @messages.each_with_index do |ele, i|
+                   if ele["role"] != "system"
+                     to_delete << i
+                     new_num_messages -= 1
+                   end
+                   break if new_num_messages <= @num_retained_turns * 2 + 1
+                 end
+                 @messages.delete_if.with_index { |_, i| to_delete.include? i }
+               end
+             end
+           end
+     )
+     @completion = openai_completion
+   end
+ end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: monadic-chat
  version: !ruby/object:Gem::Version
-   version: 0.3.4
+   version: 0.3.5
  platform: ruby
  authors:
  - yohasebe
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2023-04-02 00:00:00.000000000 Z
+ date: 2023-04-05 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
@@ -248,20 +248,6 @@ dependencies:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
- - !ruby/object:Gem::Dependency
-   name: wikipedia-client
-   requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - ">="
-       - !ruby/object:Gem::Version
-         version: '0'
-   type: :runtime
-   prerelease: false
-   version_requirements: !ruby/object:Gem::Requirement
-     requirements:
-     - - ">="
-       - !ruby/object:Gem::Version
-         version: '0'
  description: 'Monadic Chat is a command-line client application program that uses
    OpenAI''s Text Completion API and Chat API to enable chat-style conversations with
    OpenAI''s artificial intelligence system in a ChatGPT-like style.
@@ -288,9 +274,6 @@ files:
  - apps/code/code.json
  - apps/code/code.md
  - apps/code/code.rb
- - apps/linguistic/linguistic.json
- - apps/linguistic/linguistic.md
- - apps/linguistic/linguistic.rb
  - apps/novel/novel.json
  - apps/novel/novel.md
  - apps/novel/novel.rb
@@ -317,6 +300,8 @@ files:
  - doc/img/syntree-sample.png
  - lib/monadic_app.rb
  - lib/monadic_chat.rb
+ - lib/monadic_chat/authenticate.rb
+ - lib/monadic_chat/commands.rb
  - lib/monadic_chat/console.rb
  - lib/monadic_chat/formatting.rb
  - lib/monadic_chat/helper.rb
@@ -328,6 +313,15 @@ files:
  - lib/monadic_chat/tools.rb
  - lib/monadic_chat/version.rb
  - monadic_chat.gemspec
+ - user_apps/boilerplates/boilerplate.json
+ - user_apps/boilerplates/boilerplate.md
+ - user_apps/boilerplates/boilerplate.rb
+ - user_apps/linguistic/linguistic.json
+ - user_apps/linguistic/linguistic.md
+ - user_apps/linguistic/linguistic.rb
+ - user_apps/wikipedia/wikipedia.json
+ - user_apps/wikipedia/wikipedia.md
+ - user_apps/wikipedia/wikipedia.rb
  homepage: https://github.com/yohasebe/monadic-chat
  licenses:
  - MIT