monadic-chat 0.3.2 → 0.3.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 39e287f8effed950af1b2cae95e834113d738d63188ceda70b6912eecaf92882
- data.tar.gz: a73f9b6b74b174885f925e8377dc5afd6700dd0528d6f3c54156c4cab3a17740
+ metadata.gz: '08138f50ed67a2c1b913b82b4d56402fe789f9858d6f8cd31cc7512daf79f0bf'
+ data.tar.gz: 1fca018c495fb1fedc30c1e684fd9c14639696c7f919622393e36be234f76b35
  SHA512:
- metadata.gz: 986237787361787e9af73501c2d5fa51260836c305a4926394fe1ff2999a39e6bf1ac9059c06f221c87bb3b1e7238c91a3478bb970ac4ef98100f8cc16d42954
- data.tar.gz: cd14739cccd2319fcceace5b3813726d99159d51a8a269ee01f6fb209fc33127f33f726c0471a664ac2f6ea033d8afd5851af5ba7e4f1176c07c901358f979fb
+ metadata.gz: bcc93e02f837008c126fdbaf4b99b03b6c1683e24f0d62334cb33ad64545b58f087a8adfdd7f461cc16774c6d72ce06db2e73040243104f0527eb4edde0aac76
+ data.tar.gz: fb4b615c945f6c73cd1fe7880eb1f893d9475ee1239c7fef1863088ae5a15e597e883084d77f7cc55f98b590d258199feeea60de8ef3c01f9372059f54a34e55
data/CHANGELOG.md CHANGED
@@ -17,3 +17,11 @@
  ## [0.2.1] - 2023-03-21

  - GPT-4 models supported (in `normal` mode)
+
+ ## [0.3.0] - 2023-03-24
+
+ - `Research` mode now supports chat API in addition to text-completion API
+
+ ## [0.3.3] - 2023-03-26
+
+ - Command line options to directly run individual apps
data/Gemfile.lock CHANGED
@@ -1,12 +1,12 @@
  PATH
  remote: .
  specs:
- monadic-chat (0.3.0)
+ monadic-chat (0.3.4)
+ blingfire
  http
  kramdown
  launchy
  oj
- parallel
  pastel
  rouge
  tty-box
@@ -22,6 +22,7 @@ GEM
  specs:
  addressable (2.8.1)
  public_suffix (>= 2.0.2, < 6.0)
+ blingfire (0.1.8)
  diff-lcs (1.5.0)
  domain_name (0.5.20190701)
  unf (>= 0.0.5, < 1.0.0)
@@ -45,7 +46,6 @@ GEM
  ffi-compiler (~> 1.0)
  rake (~> 13.0)
  oj (3.14.2)
- parallel (1.22.1)
  pastel (0.8.0)
  tty-color (~> 0.5)
  public_suffix (5.0.1)
@@ -116,4 +116,4 @@ DEPENDENCIES
  rspec

  BUNDLED WITH
- 2.4.8
+ 2.4.9
data/README.md CHANGED
@@ -10,59 +10,17 @@
  <kbd><img src="https://user-images.githubusercontent.com/18207/225505520-53e6f2c4-84a8-4128-a005-3fe980ec2449.gif" width="100%" /></kbd>
  </p>

- > **Warning**
- > This software is **work in progress** and **under active development**. It may be unstable, and the latest version may behave slightly differently than this document. Also, specifications may change in the future.
+ > **Note**
+ > This software is *work in progress* and *under active development*. It may be unstable, and the latest version may behave slightly differently than this document. Also, specifications may change in the future.

  **Change Log**

- - [March 24, 2023] README has been revised to reflect the change to version 0.3.0
+ - [March 26, 2023] Command line options to directly run individual apps
+ - [March 24, 2023] `Research` mode now supports chat API in addition to text-completion API
  - [March 21, 2023] GPT-4 models supported (in `normal` mode)
  - [March 20, 2023] Text and figure in "How the research mode works" section updated
  - [March 13, 2023] Text on the architecture of the `research` mode updated in accordance with Version 0.2.0

- ## Table of Contents
-
- ## TOC
-
- - [Table of Contents](#table-of-contents)
- - [TOC](#toc)
- - [Introduction](#introduction)
- - [Dependencies](#dependencies)
- - [Installation](#installation)
- - [Using RubyGems](#using-rubygems)
- - [Clone the GitHub Repository](#clone-the-github-repository)
- - [Usage](#usage)
- - [Authentication](#authentication)
- - [Select Main Menu Item](#select-main-menu-item)
- - [Roles](#roles)
- - [System-Wide Functions](#system-wide-functions)
- - [Apps](#apps)
- - [Chat](#chat)
- - [Code](#code)
- - [Novel](#novel)
- - [Translate](#translate)
- - [Modes](#modes)
- - [Normal Mode](#normal-mode)
- - [Research Mode](#research-mode)
- - [What is Research Mode?](#what-is-research-mode)
- - [How Research Mode Works](#how-research-mode-works)
- - [Accumulator](#accumulator)
- - [Reducer](#reducer)
- - [Creating New App](#creating-new-app)
- - [File Structure](#file-structure)
- - [Reducer Code](#reducer-code)
- - [Monadic Chat Template](#monadic-chat-template)
- - [Extra Template for `Research` Mode](#extra-template-for-research-mode)
- - [What is Monadic about Monadic Chat?](#what-is-monadic-about-monadic-chat)
- - [Unit, Map, and Join](#unit-map-and-join)
- - [Discourse Management Object](#discourse-management-object)
- - [Future Plans](#future-plans)
- - [Bibliographical Data](#bibliographical-data)
- - [Acknowledgments](#acknowledgments)
- - [Contributing](#contributing)
- - [Author](#author)
- - [License](#license)
-
  ## Introduction

  **Monadic Chat** is a user-friendly command-line client application that utilizes OpenAI’s Text Completion API and Chat API to facilitate ChatGPT-style conversations with OpenAI’s large language models (LLM) on any terminal application of your choice.
@@ -71,15 +29,25 @@ The conversation history can be saved in a JSON file, which can be loaded later

  Monadic Chat includes four pre-built apps (`Chat`, `Code`, `Novel`, and `Translate`) that are designed for different types of discourse through interactive conversation with the LLM. Users also have the option to create their own apps by writing new templates.

+ Monadic Chat's `normal` mode enables ChatGPT-like conversations on the command line. The `research` mode has a mechanism to handle various related information as "state" behind the conversation. This allows you, for example, to retrieve the current conversation *topic* at each utterance turn and to keep track of its development as a list.
+
  ## Dependencies

  - Ruby 2.6.10 or greater
  - OpenAI API Token
  - A command line terminal app such as:
  - Terminal or [iTerm2](https://iterm2.com/) (MacOS)
- - [Windows Terminal](https://apps.microsoft.com/store/detail/windows-terminal) (Windows 11)
- - GNOME Terminal (Linux)
  - [Alacritty](https://alacritty.org/) (Multi-platform)
+ - [Windows Terminal](https://apps.microsoft.com/store/detail/windows-terminal) (Windows)
+ - GNOME Terminal (Linux)
+
+ > **Note on Using Monadic Chat on Windows**
+ > Monadic Chat does not natively support Windows, but you can install and use a Linux distribution on WSL2. Alternatively, you can use it without WSL2 by following these steps:
+ >
+ > 1. Install Windows Terminal
+ > 2. Install [Git Bash](https://gitforwindows.org/) (make sure to check the `Install profile for Windows Terminal` checkbox)
+ > 3. Install Ruby with [Ruby Installer](https://rubyinstaller.org/)
+ > 4. Open Windows Terminal with the Git Bash profile and follow the instructions below.

  ## Installation

@@ -147,7 +115,7 @@ Once the correct access token is verified, the access token is saved in the conf

  `$HOME/monadic_chat.conf`

- ### Select Main Menu Item
+ ### Main Menu

  Upon successful authentication, a menu to select a specific app will appear. Each app generates different types of text through an interactive chat-style conversation between the user and the AI. Four apps are available by default: [`chat`](#chat), [`code`](#code), [`novel`](#novel), and [`translate`](#translate).

@@ -163,6 +131,29 @@ Selecting `readme` will take you to the README on the GitHub repository (the doc

  In the main menu, you can use the cursor keys and the enter key to make a selection. You can also narrow down the choices each time you type a letter.

+ ### Direct Commands
+
+ The following commands can be entered to start each app directly on the command line, without using the main menu.
+
+ ```
+ monadic-chat <app-name>
+ ```
+
+ Each of the four standard applications can be launched as follows. When launched, an interactive chat interface appears.
+
+ ```
+ monadic-chat chat
+ monadic-chat code
+ monadic-chat novel
+ monadic-chat translate
+ ```
+
+ You can also give text input directly to each app in the following format and get only a response to it (without starting the interactive chat interface):
+
+ ```
+ monadic-chat <app-name> <input-text>
+ ```
+
  ### Roles

  Each message in the conversation is labeled with one of three roles: `User`, `GPT`, or `System`.
@@ -198,7 +189,7 @@ For detailed information on each parameter, please refer to OpenAI's [API Docume

  **data/context**

- In `normal` mode, this function only displays the conversation history between User and GPT. In `research` mode, metadata (e.g., topics, language being used, number of turns) values are presented.
+ In `normal` mode, this function only displays the conversation history between User and GPT. In `research` mode, metadata (e.g., topics, language being used, number of turns) values are presented. In addition to the metadata returned in the API response, the approximate number of tokens in the current template is also displayed.

  Program code in the conversation history will be syntax highlighted (if possible). The same applies to output via the `html` command available from the function menu.

@@ -514,7 +505,6 @@ Monadic Chat replaces `{{MESSAGES}}` with messages from past conversations when
  "prompt": "\"We didn't have a camera.\"",
  "response": "`[S [NP We] [VP [V didn't] [VP [V have] [NP [Det a] [N camera] ] ] ] ] ]`\n\n###\n\n",
  "mode": "linguistic",
- "tokens": 351
  "turns": 3,
  "sentence_type": ["declarative"],
  "sentiment": ["sad"],

@@ -526,7 +516,7 @@ This is the core of the extra template for `research` mode.

  Note that the extra template is written in Markdown format, so the above JSON object is actually separated from the rest of the template as a [fenced code block](https://www.markdownguide.org/extended-syntax/#fenced-code-blocks).

- The required properties of this JSON object are `prompt`, `response`, `mode`, and `tokens`. Other properties are optional. The `mode` property is used to check the app name when saving the conversation data or loading from an external file. The `tokens` property is used in the reducer mechanism to check the approximate size of the current JSON object. The `turns` property is also used in the reducer mechanism.
+ The required properties of this JSON object are `prompt`, `response`, and `mode`. Other properties are optional. The `mode` property is used to check the app name when saving the conversation data or loading from an external file. The `turns` property is also used in the reducer mechanism.

  The JSON object in the `research` mode template is saved in the user’s home directory (`$HOME`) in the file `monadic_chat.json`. The content is overwritten every time the JSON object is updated. Note that this JSON file is created for logging purposes. Modifying its content does not affect the processes carried out by the app.
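
To make the revised schema concrete, here is a minimal Ruby sketch of a `research` mode object after this change; the property values are illustrative assumptions, not output from a real session:

```ruby
require "json"

# Hypothetical research-mode state object under the new schema:
# "prompt", "response", and "mode" are required, "turns" is optional
# (consumed by the reducer). "tokens" is gone because the client now
# counts tokens locally instead of asking the model to report them.
state = {
  "prompt"   => "\"We didn't have a camera.\"",
  "response" => "That must have been disappointing.", # illustrative reply
  "mode"     => "linguistic",
  "turns"    => 3
}

json = JSON.generate(state)
```

Since only `mode` is checked when saving or loading conversation data, any optional properties such as `turns` round-trip through the JSON file untouched.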
 
@@ -541,7 +531,6 @@ Make sure the following content requirements are all fulfilled:

  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words using as many discourse markers such as "because", "therefore", "but", and "so" to show the logical connection between the events.
- - update the value of "tokens" with the number of tokens of the resulting JSON object
  - increment the value of "turns" by 1
  ```

data/apps/chat/chat.md CHANGED
@@ -18,7 +18,8 @@ JSON:
  "turns": 1,
  "language": "English",
  "topics": [],
- "tokens": 109
+ "confidence": 1.00,
+ "ambiguity": 0.00
  }
  ```

@@ -30,9 +31,10 @@ Make sure the following content requirements are all fulfilled:
  - if the new prompt is in a language other than the current value of "language", set the name of the new prompt language to "language" and make sure that "response" is in that language
  - make your response in the same language as the new prompt
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
+ - update the value of the "confidence" property based on the factuality of your response, ranging from 0.00 (not at all confident) to 1.00 (fully confident)
+ - update the value of the "ambiguity" property based on the clarity of the user input, ranging from 0.00 (not at all ambiguous, clearly stated) to 1.00 (fully ambiguous, nonsensical)
  - avoid giving a response that is the same or similar to one of the previous responses in PAST MESSAGES
  - program code in the response must be embedded in a code block in the markdown text
- - update the value of "tokens" with the number of tokens of the resulting JSON object

  Make sure the following formal requirements are all fulfilled:

data/apps/chat/chat.rb CHANGED
@@ -16,7 +16,7 @@ class Chat < MonadicApp
  "presence_penalty" => 0.2,
  "frequency_penalty" => 0.2,
  "model" => openai_completion.model_name(research_mode: research_mode),
- "max_tokens" => 2000,
+ "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
  }.merge(params)
@@ -38,13 +38,23 @@ class Chat < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
-
+ current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+ current_template_tokens > params["max_tokens"].to_i / 2
  ]

- @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
+ if conditions.all?
+ to_delete = []
+ offset = current_template_tokens - params["max_tokens"].to_i / 2
+ @messages.each_with_index do |ele, i|
+ break if offset <= 0
+
+ to_delete << i if ele["role"] != "system"
+ offset -= count_tokens(ele.to_json)
+ end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
+ end

  when :normal
  ############################################################
@@ -53,16 +63,21 @@ class Chat < MonadicApp
  ############################################################

  conditions = [
+ @messages.size > 1,
  @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
+ to_delete = []
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
  if ele["role"] != "system"
- @messages.delete_at i
- break
+ to_delete << i
+ new_num_messages -= 1
  end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
  end
  end
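
The research-mode reducer above no longer trusts a model-reported `tokens` value; it counts tokens locally and drops the oldest non-system messages until the conversation fits the budget (`max_tokens / 2`). A self-contained sketch of that trimming logic, using a whitespace word count as a stand-in for the gem's BlingFire tokenizer (an assumption for illustration only):

```ruby
# Stand-in for MonadicApp#count_tokens; the gem tokenizes with a
# BlingFire GPT-2 model, but a word count is enough for this sketch.
def count_tokens(text)
  text.split.size
end

# Drop the oldest non-system messages until the total token count is
# within the budget, mirroring the research-mode reducer in the diff.
def reduce_messages(messages, budget)
  offset = messages.sum { |m| count_tokens(m["content"]) } - budget
  to_delete = []
  messages.each_with_index do |m, i|
    break if offset <= 0
    next if m["role"] == "system" # the system prompt is never deleted

    to_delete << i
    offset -= count_tokens(m["content"])
  end
  messages.reject.with_index { |_, i| to_delete.include?(i) }
end

messages = [
  { "role" => "system",    "content" => "You are a helpful assistant" },
  { "role" => "user",      "content" => "first question about ruby" },
  { "role" => "assistant", "content" => "first answer with some detail" },
  { "role" => "user",      "content" => "second question" }
]
reduced = reduce_messages(messages, 10)
```

Collecting indices first and deleting in a single `delete_if.with_index` pass, as the diff does, avoids mutating the array while iterating over it, which is exactly the hazard in the old `delete_at`-inside-`each_with_index` code.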
data/apps/code/code.md CHANGED
@@ -16,8 +16,7 @@ JSON:
  "mode": "chat",
  "turns": 1,
  "language": "English",
- "topics": [],
- "tokens": 109
+ "topics": []
  }
  ```

@@ -31,7 +30,6 @@ Make sure the following content requirements are all fulfilled:
  - analyze the topic of the new prompt and insert it at the end of the value list of the "topics" property
  - avoid giving a response that is the same or similar to one of the previous responses in "PAST MESSAGES"
  - program code in the response must be embedded in a code block in the markdown text
- - update the value of "tokens" with the number of tokens of the resulting JSON object

  Make sure the following formal requirements are all fulfilled:

data/apps/code/code.rb CHANGED
@@ -16,7 +16,7 @@ class Code < MonadicApp
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
  "model" => openai_completion.model_name(research_mode: research_mode),
- "max_tokens" => 2000,
+ "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
  }.merge(params)
@@ -38,13 +38,23 @@ class Code < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
-
+ current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+ current_template_tokens > params["max_tokens"].to_i / 2
  ]

- @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
+ if conditions.all?
+ to_delete = []
+ offset = current_template_tokens - params["max_tokens"].to_i / 2
+ @messages.each_with_index do |ele, i|
+ break if offset <= 0
+
+ to_delete << i if ele["role"] != "system"
+ offset -= count_tokens(ele.to_json)
+ end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
+ end

  when :normal
  ############################################################
@@ -53,16 +63,21 @@ class Code < MonadicApp
  ############################################################

  conditions = [
+ @messages.size > 1,
  @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
+ to_delete = []
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
  if ele["role"] != "system"
- @messages.delete_at i
- break
+ to_delete << i
+ new_num_messages -= 1
  end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
  end
  end
@@ -20,7 +20,7 @@ JSON:
  "sentence_type": ["declarative"],
  "sentiment": ["sad"],
  "summary": "The user saw a beautiful sunset, but did not take a picture because the user did not have a camera.",
- "tokens": 351
+ "relevance": 0.80
  }
  ```

@@ -32,8 +32,8 @@ Make sure the following content requirements are all fulfilled:
  - analyze the new prompt's sentence type and set a sentence type value such as "interrogative", "imperative", "exclamatory", or "declarative" to the "sentence_type" property
  - analyze the new prompt's sentiment and set one or more sentiment types such as "happy", "excited", "troubled", "upset", or "sad" to the "sentiment" property
  - summarize the user's messages so far and update the "summary" property with a text of fewer than 100 words using as many discourse markers such as "because", "therefore", "but", and "so" to show the logical connection between the events.
- - update the value of "tokens" with the number of tokens of the resulting JSON object
- - increment the value of "turns" by 1
+ - increment the value of "turns" by 1
+ - update the value of the "relevance" property indicating the degree to which the new input is naturally interpreted based on previous discussions, ranging from 0.0 (extremely difficult) to 1.0 (completely easy)

  Make sure the following formal requirements are all fulfilled:

@@ -16,7 +16,7 @@ class Linguistic < MonadicApp
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
  "model" => openai_completion.model_name(research_mode: research_mode),
- "max_tokens" => 2000,
+ "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
  }.merge(params)
@@ -38,13 +38,23 @@ class Linguistic < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
-
+ current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+ current_template_tokens > params["max_tokens"].to_i / 2
  ]

- @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
+ if conditions.all?
+ to_delete = []
+ offset = current_template_tokens - params["max_tokens"].to_i / 2
+ @messages.each_with_index do |ele, i|
+ break if offset <= 0
+
+ to_delete << i if ele["role"] != "system"
+ offset -= count_tokens(ele.to_json)
+ end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
+ end

  when :normal
  ############################################################
@@ -53,16 +63,21 @@ class Linguistic < MonadicApp
  ############################################################

  conditions = [
+ @messages.size > 1,
  @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
+ to_delete = []
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
  if ele["role"] != "system"
- @messages.delete_at i
- break
+ to_delete << i
+ new_num_messages -= 1
  end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
  end
  end
data/apps/novel/novel.md CHANGED
@@ -14,8 +14,7 @@ JSON:
  "prompt": "The preface to the novel is presented",
  "response": "What follows is a story that an AI assistant tells. It is guaranteed that this will be an incredibly realistic and interesting novel.",
  "mode": "novel",
- "turns": 1,
- "tokens": 147
+ "turns": 1
  }
  ```

@@ -25,7 +24,6 @@ Make sure the following content requirements are all fulfilled:
  - set the new prompt to the "prompt" property
  - create your new paragraph in response to the new prompt and set it to "response"
  - do not repeat in your response what is already told in "PAST MESSAGES"
- - update the value of "tokens" with the number of tokens of the resulting JSON object
  - Make your response as detailed as possible within the maximum limit of 200 words

  Make sure the following formal requirements are all fulfilled:
data/apps/novel/novel.rb CHANGED
@@ -16,7 +16,7 @@ class Novel < MonadicApp
  "presence_penalty" => 0.1,
  "frequency_penalty" => 0.1,
  "model" => openai_completion.model_name(research_mode: research_mode),
- "max_tokens" => 2000,
+ "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
  }.merge(params)
@@ -38,13 +38,23 @@ class Novel < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
-
+ current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+ current_template_tokens > params["max_tokens"].to_i / 2
  ]

- @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
+ if conditions.all?
+ to_delete = []
+ offset = current_template_tokens - params["max_tokens"].to_i / 2
+ @messages.each_with_index do |ele, i|
+ break if offset <= 0
+
+ to_delete << i if ele["role"] != "system"
+ offset -= count_tokens(ele.to_json)
+ end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
+ end

  when :normal
  ############################################################
@@ -53,16 +63,21 @@ class Novel < MonadicApp
  ############################################################

  conditions = [
+ @messages.size > 1,
  @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
+ to_delete = []
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
  if ele["role"] != "system"
- @messages.delete_at i
- break
+ to_delete << i
+ new_num_messages -= 1
  end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
  end
  end
@@ -15,8 +15,7 @@ JSON:
  "turns": 0,
  "prompt": "これは日本語(Japanese)の文(sentence)です。",
  "response": "This is a sentence in Japanese.",
- "target_lang": "English",
- "tokens": 194
+ "target_lang": "English"
  }
  ```

@@ -24,9 +23,7 @@ Make sure the following requirements are all fulfilled:
  - keep the value of the "mode" property at "translate"
  - set the text in the new prompt presented above to the "prompt" property
- - translate the new prompt text to the language specified in the "target_lang" set it to "response"
- and set the translation to the "response" property
- - update the value of "tokens" with the number of tokens of the resulting JSON object
+ - translate the new prompt text into the language specified in "target_lang" and set the translation to the "response" property

  Make sure the following formal requirements are all fulfilled:

@@ -16,7 +16,7 @@ class Translate < MonadicApp
  "presence_penalty" => 0.0,
  "frequency_penalty" => 0.0,
  "model" => openai_completion.model_name(research_mode: research_mode),
- "max_tokens" => 2000,
+ "max_tokens" => 1000,
  "stream" => stream,
  "stop" => nil
  }.merge(params)
@@ -42,13 +42,23 @@ class Translate < MonadicApp
  # @messages: messages to this point #
  # @metadata: currently available metadata sent from GPT #
  ############################################################
-
+ current_template_tokens = count_tokens(@template)
  conditions = [
  @messages.size > 1,
- @metadata["tokens"].to_i > params["max_tokens"].to_i / 2
+ current_template_tokens > params["max_tokens"].to_i / 2
  ]

- @metadata["turns"] = @metadata["turns"].to_i - 1 if conditions.all?
+ if conditions.all?
+ to_delete = []
+ offset = current_template_tokens - params["max_tokens"].to_i / 2
+ @messages.each_with_index do |ele, i|
+ break if offset <= 0
+
+ to_delete << i if ele["role"] != "system"
+ offset -= count_tokens(ele.to_json)
+ end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
+ end

  when :normal
  ############################################################
@@ -57,16 +67,21 @@ class Translate < MonadicApp
  ############################################################

  conditions = [
+ @messages.size > 1,
  @messages.size > @num_retained_turns * 2 + 1
  ]

  if conditions.all?
+ to_delete = []
+ new_num_messages = @messages.size
  @messages.each_with_index do |ele, i|
  if ele["role"] != "system"
- @messages.delete_at i
- break
+ to_delete << i
+ new_num_messages -= 1
  end
+ break if new_num_messages <= @num_retained_turns * 2 + 1
  end
+ @messages.delete_if.with_index { |_, i| to_delete.include? i }
  end
  end
  end
data/assets/gpt2.bin ADDED
Binary file
data/bin/monadic-chat CHANGED
@@ -120,5 +120,34 @@ module MonadicMenu
  end
  end

- MonadicMenu.clear_screen
- MonadicMenu.run
+ case ARGV.size
+ when 0
+ MonadicMenu.clear_screen
+ MonadicMenu.run
+ when 1
+ case ARGV[0]
+ when "readme", "-h"
+ MonadicChat.open_readme
+ when "version", "-v"
+ puts MonadicChat::VERSION
+ else
+ MonadicChat::APPS.each do |app|
+ next unless app == ARGV[0]
+
+ openai_completion ||= MonadicChat.authenticate(message: false)
+ eval(app.capitalize, binding, __FILE__, __LINE__).new(openai_completion, research_mode: false).run
+ exit
+ end
+ puts "Unknown command"
+ end
+ else
+ MonadicChat::APPS.each do |app|
+ next unless app == ARGV[0]
+
+ openai_completion ||= MonadicChat.authenticate(message: false)
+ app_obj = eval(app.capitalize, binding, __FILE__, __LINE__).new(openai_completion, research_mode: false, params: { "model" => "gpt-4" })
+ app_obj.bind(ARGV[1..].join(" "), num_retry: 2)
+ exit
+ end
+ puts "Unknown command"
+ end
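
The new entry point dispatches on `ARGV`: no arguments opens the menu, one argument launches an app interactively (or handles `readme`/`version`), and two or more arguments run a one-shot query. A simplified, side-effect-free sketch of that dispatch (the `:menu`/`:run_app`/`:one_shot` symbols are invented labels for this illustration; no authentication or API call happens here):

```ruby
APPS = %w[chat code novel translate].freeze

# Map ARGV to a symbolic action, mirroring the dispatch logic in
# bin/monadic-chat without launching anything.
def dispatch(argv)
  case argv.size
  when 0
    [:menu]
  when 1
    case argv[0]
    when "readme", "-h"  then [:readme]
    when "version", "-v" then [:version]
    else APPS.include?(argv[0]) ? [:run_app, argv[0]] : [:unknown]
    end
  else
    if APPS.include?(argv[0])
      # One-shot mode: everything after the app name is the input text.
      [:one_shot, argv[0], argv[1..].join(" ")]
    else
      [:unknown]
    end
  end
end
```

Note how the real script memoizes `openai_completion` with `||=` and passes `message: false` to `authenticate`, so a one-shot invocation prints only the model's answer, not the configuration chatter.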
data/lib/monadic_app.rb CHANGED
@@ -93,6 +93,7 @@ class MonadicApp
  end

  def run
+ clear_screen
  banner("MONADIC::CHAT / #{self.class.name}", self.class::DESC, self.class::COLOR)
  show_greet

@@ -14,6 +14,7 @@ class MonadicApp

  contextual << "- **#{key.split("_").map(&:capitalize).join(" ")}**: #{val.to_s.strip}"
  end
+ contextual << "- **Num of Tokens in Template**: #{count_tokens(@template)}"
  end

  @messages.each do |m|
@@ -65,7 +66,7 @@ class MonadicApp
  "`#{m[1].gsub("<br/>\n") { "\n" }}`"
  end

- `touch #{filepath}` unless File.exist?(filepath)
+ FileUtils.touch(filepath) unless File.exist?(filepath)
  File.open(filepath, "w") do |f|
  html = <<~HTML
  <!doctype html>
@@ -5,6 +5,10 @@ class MonadicApp
  # methods for preparation and updating
  ##################################################

+ def count_tokens(text)
+ MonadicChat.tokenize(text).size
+ end
+
  def fulfill_placeholders
  input = nil
  replacements = []
@@ -5,7 +5,6 @@ require "oj"
  require "net/http"
  require "uri"
  require "strscan"
- require "parallel"
  require "tty-progressbar"

  Oj.mimic_JSON
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module MonadicChat
- VERSION = "0.3.2"
+ VERSION = "0.3.3"
  end
data/lib/monadic_chat.rb CHANGED
@@ -1,5 +1,6 @@
  # frozen_string_literal: true
 
+ require "blingfire"
  require "tty-cursor"
  require "tty-screen"
  require "tty-markdown"
@@ -21,6 +22,8 @@ require_relative "./monadic_chat/helper"
  Oj.mimic_JSON
 
  module MonadicChat
+ gpt2model_path = File.absolute_path(File.join(__dir__, "..", "assets", "gpt2.bin"))
+ BLINGFIRE = BlingFire.load_model(gpt2model_path)
  CONFIG = File.join(Dir.home, "monadic_chat.conf")
  NUM_RETRY = 2
  MIN_LENGTH = 5
@@ -105,25 +108,12 @@ module MonadicChat
 
  def self.open_readme
  url = "https://github.com/yohasebe/monadic-chat/"
- shellscript = <<~SHELL
- if [[ "$OSTYPE" == "darwin"* ]]; then
- open "#{url}"
- elif [[ "$OSTYPE" == "linux-gnu"* ]]; then
- if command -v xdg-open >/dev/null 2>&1; then
- xdg-open "#{url}"
- else
- echo "#{url}"
- fi
- else
- echo "#{url}"
- fi
- SHELL
- `#{shellscript}`
+ Launchy.open(url)
  end
 
- def self.authenticate(overwrite: false)
+ def self.authenticate(overwrite: false, message: true)
  check = lambda do |token, normal_mode_model, research_mode_model|
- print "Checking configuration\n"
+ print "Checking configuration\n" if message
  SPINNER.auto_spin
  begin
  models = OpenAI.models(token)
@@ -131,28 +121,28 @@ module MonadicChat
 
  SPINNER.stop
 
- print "Success\n"
+ print "Success\n" if message
 
  if normal_mode_model && !models.map { |m| m["id"] }.index(normal_mode_model)
  SPINNER.stop
- print "Normal mode model set in config file not available.\n"
+ print "Normal mode model set in config file not available.\n" if message
  normal_mode_model = false
  end
  normal_mode_model ||= OpenAI.model_name(research_mode: false)
- print "Normal mode model: #{normal_mode_model}\n"
+ print "Normal mode model: #{normal_mode_model}\n" if message
 
  if research_mode_model && !models.map { |m| m["id"] }.index(research_mode_model)
  SPINNER.stop
- print "Normal mode model set in config file not available.\n"
- print "Fallback to the default model (#{OpenAI.model_name(research_mode: true)}).\n"
+ print "Normal mode model set in config file not available.\n" if message
+ print "Fallback to the default model (#{OpenAI.model_name(research_mode: true)}).\n" if message
  end
  research_mode_model ||= OpenAI.model_name(research_mode: true)
- print "Research mode model: #{research_mode_model}\n"
+ print "Research mode model: #{research_mode_model}\n" if message
 
  OpenAI::Completion.new(token, normal_mode_model, research_mode_model)
  rescue StandardError
  SPINNER.stop
- print "Authentication: failure.\n"
+ print "Authentication: failure.\n" if message
  false
  end
  end
@@ -169,7 +159,7 @@ module MonadicChat
  File.open(CONFIG, "w") do |f|
  config = { "access_token" => access_token }
  f.write(JSON.pretty_generate(config))
- print "New access token has been saved to #{CONFIG}\n"
+ print "New access token has been saved to #{CONFIG}\n" if message
  end
  end
  elsif File.exist?(CONFIG)
@@ -192,7 +182,7 @@ module MonadicChat
  config = { "access_token" => access_token }
  f.write(JSON.pretty_generate(config))
  end
- print "Access token has been saved to #{CONFIG}\n"
+ print "Access token has been saved to #{CONFIG}\n" if message
  end
  end
  completion || authenticate(overwrite: true)
@@ -219,6 +209,10 @@ module MonadicChat
  "\n#{PASTEL.send(:"on_#{color}", name)}"
  end
 
+ def self.tokenize(text)
+ BLINGFIRE.text_to_ids(text)
+ end
+
  PROMPT_USER = TTY::PromptX.new(active_color: :blue, prefix: prompt_user)
  PROMPT_SYSTEM = TTY::PromptX.new(active_color: :blue, prefix: "#{prompt_system} ")
  PROMPT_ASSISTANT = TTY::PromptX.new(active_color: :red, prefix: "#{prompt_assistant} ")
data/monadic_chat.gemspec CHANGED
@@ -37,11 +37,11 @@ Gem::Specification.new do |spec|
  spec.add_development_dependency "rake"
  spec.add_development_dependency "rspec"
 
+ spec.add_dependency "blingfire"
  spec.add_dependency "http"
  spec.add_dependency "kramdown"
  spec.add_dependency "launchy"
  spec.add_dependency "oj"
- spec.add_dependency "parallel"
  spec.add_dependency "pastel"
  spec.add_dependency "rouge"
  spec.add_dependency "tty-box"
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: monadic-chat
  version: !ruby/object:Gem::Version
- version: 0.3.2
+ version: 0.3.3
  platform: ruby
  authors:
  - yohasebe
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2023-03-25 00:00:00.000000000 Z
+ date: 2023-03-27 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -53,7 +53,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: http
+ name: blingfire
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -67,7 +67,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: kramdown
+ name: http
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -81,7 +81,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: launchy
+ name: kramdown
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -95,7 +95,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: oj
+ name: launchy
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -109,7 +109,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: parallel
+ name: oj
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -284,6 +284,7 @@ files:
  - apps/translate/translate.md
  - apps/translate/translate.rb
  - assets/github.css
+ - assets/gpt2.bin
  - assets/pigments-default.css
  - bin/monadic-chat
  - doc/img/code-example-time-html.png