pure-chat 0.1.1__tar.gz
- pure_chat-0.1.1/LICENSE +8 -0
- pure_chat-0.1.1/PKG-INFO +87 -0
- pure_chat-0.1.1/README.md +72 -0
- pure_chat-0.1.1/pyproject.toml +18 -0
- pure_chat-0.1.1/setup.cfg +4 -0
- pure_chat-0.1.1/src/pure_chat/__init__.py +0 -0
- pure_chat-0.1.1/src/pure_chat/ai_manager.py +34 -0
- pure_chat-0.1.1/src/pure_chat/db_manager.py +251 -0
- pure_chat-0.1.1/src/pure_chat/main.py +167 -0
- pure_chat-0.1.1/src/pure_chat/util.py +105 -0
- pure_chat-0.1.1/src/pure_chat.egg-info/PKG-INFO +87 -0
- pure_chat-0.1.1/src/pure_chat.egg-info/SOURCES.txt +14 -0
- pure_chat-0.1.1/src/pure_chat.egg-info/dependency_links.txt +1 -0
- pure_chat-0.1.1/src/pure_chat.egg-info/entry_points.txt +2 -0
- pure_chat-0.1.1/src/pure_chat.egg-info/requires.txt +5 -0
- pure_chat-0.1.1/src/pure_chat.egg-info/top_level.txt +1 -0
pure_chat-0.1.1/LICENSE
ADDED
@@ -0,0 +1,8 @@
Copyright 2026 Mats Heemeyer

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
pure_chat-0.1.1/PKG-INFO
ADDED
@@ -0,0 +1,87 @@
Metadata-Version: 2.4
Name: pure-chat
Version: 0.1.1
Summary: A high-performance Terminal User Interface (TUI) designed to replicate the experience of browser-based LLM chat applications directly in your terminal.
Author-email: Mats Heemeyer <matsheemeyer@gmail.com>
Requires-Python: >=3.14
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: google-genai>=1.72.0
Requires-Dist: prompt-toolkit>=3.0.52
Requires-Dist: python-dotenv>=1.2.2
Requires-Dist: questionary>=2.1.1
Requires-Dist: rich>=15.0.0
Dynamic: license-file

# PureChat

A high-performance Terminal User Interface (TUI) designed to replicate the experience of browser-based LLM chat applications directly in your terminal. This tool provides persistent conversation memory, real-time streaming, and interactive session management.

## 💡 Motivation

Most modern AI CLI tools have become heavily "agentic." While powerful for automation, they often force massive context windows, auto-execute commands, and prioritize task completion over simple conversation. This results in high token consumption, slower response times, and a lack of control over what data is being sent.

PureChat was built to fill the gap for a "regular" chat app in the terminal. It provides a familiar web-like experience for discussing complicated topics in a controlled environment where you don't want the tool to send excessive context or execute commands autonomously.

## ✨ Key Features

- **Persistent Memory**: Conversations are stored in a local SQLite database (`gemini_vault.db`), allowing you to resume any chat session at any time.
- **Live Streaming & Markdown**: Responses stream in real time with full Markdown rendering, including syntax-highlighted code blocks, tables, and lists.
- **Intelligent Context Window**: Automatically manages token usage using a sliding window of the last 12 messages to maintain context without exceeding limits.
- **Interactive Session Switcher**: Use the `/conversations` command to browse, search, and switch between previous chat sessions using an arrow-key menu.
- **Global Command History**: Navigate your previous prompts across all sessions using the UP and DOWN arrows (powered by `prompt_toolkit`).
- **Google Search Integration**: The assistant is equipped with the Google Search tool to provide up-to-date information on current events.
- **Dynamic Personalities**: Load custom system instructions from a `GEMINI.md` file to change the AI's behavior and tone.

## 🛠️ Installation (using uv)

1. Clone the repository:

```
git clone https://github.com/yourusername/gemini-cli-vault.git
cd gemini-cli-vault
```

2. Configure environment variables:

```
# Create a .env file in the project root:
GEMINI_API_KEY=your_google_api_key_here
GEMINI_MODEL=gemini-3-flash-preview
```

3. (Optional) Define system instructions. Create a `GEMINI.md` file to set the AI's system prompt, e.g.:

```
You are an expert software architect. Provide concise, high-level advice and always include code snippets in Python or Rust.
```

## 🚀 Usage

Start a new or default session:

```
uv run main.py
```

Start or resume a specific session by name:

```
python main.py --name Project-Alpha
```

### ⌨️ In-Chat Commands

- `/conversations` : Opens the interactive session manager to switch or create chats.
- `/search <query>` : Full-text search across all conversations (use quotes for exact phrases).
- `/model` : Switch the active Gemini model.
- `/exit` : Safely saves and exits the application.
- `UP / DOWN` : Cycle through your entire history of user prompts.
- `Ctrl+C` : Interrupt the current input line.

## 🏗️ Project Architecture

- `main.py`: The entry point and TUI controller. Handles the input loop and the Rich live display.
- `db_manager.py`: The data layer. Manages SQLite tables, message logging, and session retrieval.
- `ai_manager.py`: The AI integration layer. Configures the google-genai client, tools, and system instructions.
- `gemini_vault.db`: The local database generated on first run.

## 🤖 AI Disclosure

This project was primarily developed with the assistance of AI. While the core logic, architecture, and feature set were human-directed, the majority of the code implementation and boilerplate was generated and refined using Large Language Models.
pure_chat-0.1.1/README.md
ADDED
@@ -0,0 +1,72 @@
# PureChat

A high-performance Terminal User Interface (TUI) designed to replicate the experience of browser-based LLM chat applications directly in your terminal. This tool provides persistent conversation memory, real-time streaming, and interactive session management.

## 💡 Motivation

Most modern AI CLI tools have become heavily "agentic." While powerful for automation, they often force massive context windows, auto-execute commands, and prioritize task completion over simple conversation. This results in high token consumption, slower response times, and a lack of control over what data is being sent.

PureChat was built to fill the gap for a "regular" chat app in the terminal. It provides a familiar web-like experience for discussing complicated topics in a controlled environment where you don't want the tool to send excessive context or execute commands autonomously.

## ✨ Key Features

- **Persistent Memory**: Conversations are stored in a local SQLite database (`gemini_vault.db`), allowing you to resume any chat session at any time.
- **Live Streaming & Markdown**: Responses stream in real time with full Markdown rendering, including syntax-highlighted code blocks, tables, and lists.
- **Intelligent Context Window**: Automatically manages token usage using a sliding window of the last 12 messages to maintain context without exceeding limits.
- **Interactive Session Switcher**: Use the `/conversations` command to browse, search, and switch between previous chat sessions using an arrow-key menu.
- **Global Command History**: Navigate your previous prompts across all sessions using the UP and DOWN arrows (powered by `prompt_toolkit`).
- **Google Search Integration**: The assistant is equipped with the Google Search tool to provide up-to-date information on current events.
- **Dynamic Personalities**: Load custom system instructions from a `GEMINI.md` file to change the AI's behavior and tone.

## 🛠️ Installation (using uv)

1. Clone the repository:

```
git clone https://github.com/yourusername/gemini-cli-vault.git
cd gemini-cli-vault
```

2. Configure environment variables:

```
# Create a .env file in the project root:
GEMINI_API_KEY=your_google_api_key_here
GEMINI_MODEL=gemini-3-flash-preview
```

3. (Optional) Define system instructions. Create a `GEMINI.md` file to set the AI's system prompt, e.g.:

```
You are an expert software architect. Provide concise, high-level advice and always include code snippets in Python or Rust.
```

## 🚀 Usage

Start a new or default session:

```
uv run main.py
```

Start or resume a specific session by name:

```
python main.py --name Project-Alpha
```

### ⌨️ In-Chat Commands

- `/conversations` : Opens the interactive session manager to switch or create chats.
- `/search <query>` : Full-text search across all conversations (use quotes for exact phrases).
- `/model` : Switch the active Gemini model.
- `/exit` : Safely saves and exits the application.
- `UP / DOWN` : Cycle through your entire history of user prompts.
- `Ctrl+C` : Interrupt the current input line.

## 🏗️ Project Architecture

- `main.py`: The entry point and TUI controller. Handles the input loop and the Rich live display.
- `db_manager.py`: The data layer. Manages SQLite tables, message logging, and session retrieval.
- `ai_manager.py`: The AI integration layer. Configures the google-genai client, tools, and system instructions.
- `gemini_vault.db`: The local database generated on first run.

## 🤖 AI Disclosure

This project was primarily developed with the assistance of AI. While the core logic, architecture, and feature set were human-directed, the majority of the code implementation and boilerplate was generated and refined using Large Language Models.
pure_chat-0.1.1/pyproject.toml
ADDED
@@ -0,0 +1,18 @@
[project]
name = "pure-chat"
version = "0.1.1"
description = "A high-performance Terminal User Interface (TUI) designed to replicate the experience of browser-based LLM chat applications directly in your terminal."
readme = "README.md"
requires-python = ">=3.14"
authors = [
    { name="Mats Heemeyer", email="matsheemeyer@gmail.com" },
]
dependencies = [
    "google-genai>=1.72.0",
    "prompt-toolkit>=3.0.52",
    "python-dotenv>=1.2.2",
    "questionary>=2.1.1",
    "rich>=15.0.0",
]
[project.scripts]
purechat = "pure_chat.main:main"
pure_chat-0.1.1/src/pure_chat/__init__.py
File without changes
pure_chat-0.1.1/src/pure_chat/ai_manager.py
ADDED
@@ -0,0 +1,34 @@
import os
from google import genai
from google.genai import types


class GeminiAssistant:
    def __init__(self, history=None, model_id=None):
        api_key = os.getenv("GEMINI_API_KEY")
        model_id = model_id or os.getenv("GEMINI_MODEL", "gemini-3-flash-preview")

        self.client = genai.Client(api_key=api_key)
        self.model_id = model_id

        # 1. Load instructions from GEMINI.md
        instr = "You are a helpful assistant."
        if os.path.exists("GEMINI.md"):
            with open("GEMINI.md", "r") as f:
                instr = f.read()

        # 2. Configure tools and token caps
        search_tool = types.Tool(google_search=types.GoogleSearch())
        self.config = types.GenerateContentConfig(
            tools=[search_tool],
            system_instruction=instr,
            max_output_tokens=1000,  # Cap output to reduce token cost
            temperature=0.7,
        )

        self.chat = self.client.chats.create(
            model=self.model_id, config=self.config, history=history or []
        )

    def send_stream(self, text):
        return self.chat.send_message_stream(text)
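The GEMINI.md fallback above is plain stdlib logic; a minimal sketch of just that step (the `load_instructions` helper is hypothetical, introduced here for illustration and not part of the package):

```python
import os

# Hypothetical helper mirroring the fallback in GeminiAssistant.__init__:
# read the instructions file if it exists, otherwise use a default prompt.
def load_instructions(path="GEMINI.md", default="You are a helpful assistant."):
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            return f.read()
    return default

print(load_instructions("missing-instructions.md"))  # → You are a helpful assistant.
```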
pure_chat-0.1.1/src/pure_chat/db_manager.py
ADDED
@@ -0,0 +1,251 @@
import sqlite3
import random
from datetime import datetime

DB_NAME = "gemini_vault.db"

# Friendly word lists for random names
ADJECTIVES = ["sparkling", "brave", "swift", "quiet", "neon", "mighty", "clever"]
NOUNS = ["phoenix", "glitch", "nebula", "cipher", "wizard", "orbit", "atlas"]


def init_db():
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            """CREATE TABLE IF NOT EXISTS sessions
            (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE, created_at DATETIME)"""
        )
        cursor.execute("""CREATE TABLE IF NOT EXISTS messages
            (id INTEGER PRIMARY KEY AUTOINCREMENT, session_id INTEGER,
            role TEXT, content TEXT, timestamp DATETIME,
            FOREIGN KEY(session_id) REFERENCES sessions(id))""")

        # Create FTS5 virtual table for full-text search
        cursor.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS messages_fts USING fts5(
            content, role, session_id, content_rowid=rowid
        )""")

        # Create triggers to keep FTS in sync
        cursor.execute(
            """CREATE TRIGGER IF NOT EXISTS messages_ai AFTER INSERT ON messages BEGIN
                INSERT INTO messages_fts(rowid, content, role, session_id)
                VALUES (new.id, new.content, new.role, new.session_id);
            END"""
        )
        cursor.execute(
            """CREATE TRIGGER IF NOT EXISTS messages_ad AFTER DELETE ON messages BEGIN
                DELETE FROM messages_fts WHERE rowid = old.id;
            END"""
        )
        cursor.execute(
            """CREATE TRIGGER IF NOT EXISTS messages_au AFTER UPDATE ON messages BEGIN
                UPDATE messages_fts SET content = new.content, role = new.role, session_id = new.session_id
                WHERE rowid = new.id;
            END"""
        )

        # Backfill: index existing messages that aren't in FTS yet
        cursor.execute("""INSERT INTO messages_fts(rowid, content, role, session_id)
            SELECT m.id, m.content, m.role, m.session_id
            FROM messages m
            LEFT JOIN messages_fts fts ON m.id = fts.rowid
            WHERE fts.rowid IS NULL""")


def generate_random_name():
    return (
        f"{random.choice(ADJECTIVES)}-{random.choice(NOUNS)}-{random.randint(100, 999)}"
    )


def get_or_create_session(name=None):
    if not name:
        name = generate_random_name()

    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "INSERT OR IGNORE INTO sessions (name, created_at) VALUES (?, ?)",
            (name, datetime.now()),
        )
        cursor.execute("SELECT id, name FROM sessions WHERE name = ?", (name,))
        return cursor.fetchone()


def get_chat_history(session_id, window_size=12):
    """Token reduction: only fetch the last N messages (sliding window)."""
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            """
            SELECT role, content FROM (
                SELECT role, content, timestamp FROM messages
                WHERE session_id = ?
                ORDER BY timestamp DESC LIMIT ?
            ) ORDER BY timestamp ASC
            """,
            (session_id, window_size),
        )
        rows = cursor.fetchall()
        return [{"role": r, "parts": [{"text": c}]} for r, c in rows]


def save_message(session_id, role, content):
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "INSERT INTO messages (session_id, role, content, timestamp) VALUES (?, ?, ?, ?)",
            (session_id, role, content, datetime.now()),
        )


def list_all_sessions():
    with sqlite3.connect(DB_NAME) as conn:
        return conn.execute(
            "SELECT name, created_at FROM sessions ORDER BY created_at DESC"
        ).fetchall()


def get_session_user_messages(session_id):
    """Fetches only user messages to populate terminal history."""
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT content FROM messages WHERE session_id = ? AND role = 'user' ORDER BY timestamp ASC",
            (session_id,),
        )
        return [row[0] for row in cursor.fetchall()]


def get_last_n_messages(session_id):
    """Fetches the full message history of a specific session for display."""
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            """
            SELECT role, content FROM messages
            WHERE session_id = ?
            ORDER BY timestamp ASC
            """,
            (session_id,),
        )
        return cursor.fetchall()


def get_all_user_messages_global():
    """Fetches every user prompt ever recorded for the global UP-arrow history."""
    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT content FROM messages WHERE role = 'user' ORDER BY timestamp ASC"
        )
        return [row[0] for row in cursor.fetchall()]


def _escape_fts_token(token: str) -> str:
    """Escape FTS5 special characters in a single token."""
    # Remove characters that have special meaning in FTS5 MATCH expressions
    special = '"*():^'
    for ch in special:
        token = token.replace(ch, "")
    return token


def parse_search_query(query: str) -> str:
    """
    Parse a search query for FTS5 syntax.
    - Quoted phrases: "exact match" → use phrase search
    - Unquoted: use prefix search per word for fuzzy matching
    """
    if not query or not query.strip():
        return ""

    query = query.strip()

    # Check if the query is wrapped in quotes (exact phrase)
    if query.startswith('"') and query.endswith('"') and len(query) > 2:
        # Keep the quotes: FTS5 handles phrases natively
        inner = query[1:-1]
        return f'"{inner}"'

    # Fuzzy search: add * at the end of each word for prefix matching
    tokens = query.split()
    escaped = [_escape_fts_token(t) for t in tokens]
    # Filter out empty tokens after escaping
    escaped = [t for t in escaped if t]
    if not escaped:
        return ""
    return " ".join(f"{t}*" for t in escaped)


# Invisible sentinel markers for FTS5 snippets (won't conflict with Rich markup)
_MARK_START = "\x01"
_MARK_END = "\x02"


def search_messages(query: str, limit: int = 10):
    """
    Search messages using FTS5 with snippet extraction.
    Deduplicates by session (best match per session).
    Returns a list of dicts: [{session_id, session_name, snippet, timestamp, role}]
    """
    if not query or not query.strip():
        return []

    fts_query = parse_search_query(query)

    with sqlite3.connect(DB_NAME) as conn:
        cursor = conn.cursor()

        # Search the FTS table and join with messages for metadata.
        # Invisible sentinel markers avoid Rich markup conflicts.
        cursor.execute(
            """
            SELECT
                m.session_id,
                s.name as session_name,
                snippet(messages_fts, -1, ?, ?, '...', 64) as snippet,
                m.timestamp,
                m.role
            FROM messages_fts
            JOIN messages m ON messages_fts.rowid = m.id
            JOIN sessions s ON m.session_id = s.id
            WHERE messages_fts MATCH ?
            ORDER BY rank
            LIMIT ?
            """,
            (_MARK_START, _MARK_END, fts_query, limit),
        )

        # Deduplicate by session_id: keep the best (first) match per session
        seen_sessions = set()
        results = []
        for row in cursor.fetchall():
            sid = row[0]
            if sid in seen_sessions:
                continue
            seen_sessions.add(sid)

            # Replace sentinel markers with Rich markup tags
            snippet = (
                row[2]
                .replace(_MARK_START, "[bold green]")
                .replace(_MARK_END, "[/bold green]")
            )

            results.append(
                {
                    "session_id": sid,
                    "session_name": row[1],
                    "snippet": snippet,
                    "timestamp": row[3],
                    "role": row[4],
                }
            )

        return results
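The nested SELECT in `get_chat_history` (newest N rows fetched descending, then re-sorted ascending) can be checked in isolation with an in-memory database; the table here is a reduced stand-in for the real `messages` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (session_id INTEGER, role TEXT, content TEXT, timestamp TEXT)"
)
# 20 alternating user/model messages with increasing timestamps
rows = [
    (1, "user" if i % 2 == 0 else "model", f"msg-{i}", f"2026-01-01 00:00:{i:02d}")
    for i in range(20)
]
conn.executemany("INSERT INTO messages VALUES (?, ?, ?, ?)", rows)

# Same shape as get_chat_history: take the newest 12, flip back to chronological
window = conn.execute(
    """
    SELECT role, content FROM (
        SELECT role, content, timestamp FROM messages
        WHERE session_id = ? ORDER BY timestamp DESC LIMIT ?
    ) ORDER BY timestamp ASC
    """,
    (1, 12),
).fetchall()

print(len(window), window[0][1], window[-1][1])  # → 12 msg-8 msg-19
```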
pure_chat-0.1.1/src/pure_chat/main.py
ADDED
@@ -0,0 +1,167 @@
import argparse
from dotenv import load_dotenv
from rich.console import Console
from rich.markdown import Markdown
from rich.live import Live
from rich.panel import Panel

# Terminal-style input imports
from prompt_toolkit.formatted_text import HTML

from pure_chat import util
from pure_chat import db_manager

load_dotenv()
console = Console()


def main():
    parser = argparse.ArgumentParser(description="Gemini CLI with SQLite & Search")
    parser.add_argument("--name", type=str, help="Start/continue a specific session")
    args = parser.parse_args()

    db_manager.init_db()

    # Initial session setup
    session_id, session_name = db_manager.get_or_create_session(args.name)
    ai, input_session = util.setup_chat_session(session_id)

    console.print(
        Panel(
            f"Active Session: [bold green]{session_name}[/bold green]\n"
            "[dim]• Use UP/DOWN arrows for question history\n"
            "• Type /search <query> to search conversations\n"
            "• Type /conversations to switch sessions\n"
            "• Type /model to switch the active model\n"
            "• Type /exit to quit[/dim]",
            title="Gemini CLI",
            expand=False,
        )
    )

    while True:
        try:
            # Styled prompt using prompt_toolkit
            user_input = input_session.prompt(
                HTML("<ansicyan><b>You > </b></ansicyan>")
            )

            if not user_input.strip():
                continue

            # --- COMMANDS ---
            if user_input.lower() == "/exit":
                console.print("[yellow]Goodbye![/yellow]")
                break

            if user_input.lower().startswith("/search "):
                query = user_input[8:]  # Remove "/search "
                if not query.strip():
                    console.print(
                        "[yellow]Usage: /search <query> (use quotes for exact phrases)[/yellow]"
                    )
                    continue

                results = db_manager.search_messages(query, limit=10)

                if not results:
                    console.print("[yellow]No matches found.[/yellow]")
                    continue

                console.print(
                    f"\n[bold cyan]Found {len(results)} results for:[/bold cyan] {query}\n"
                )

                for idx, result in enumerate(results, 1):
                    role_label = "You" if result["role"] == "user" else "Gemini"
                    console.print(
                        f"[bold white][{idx}][/bold white] [bold green]{result['session_name']}[/bold green]"
                    )
                    console.print(f"[dim]{result['timestamp']} | {role_label}[/dim]")
                    # Escape Rich markup in the snippet, then restore our highlight tags
                    safe_snippet = (
                        result["snippet"]
                        .replace("[bold green]", "\x01")
                        .replace("[/bold green]", "\x02")
                    )
                    safe_snippet = safe_snippet.replace("[", "\\[")
                    safe_snippet = safe_snippet.replace("\x01", "[bold green]").replace(
                        "\x02", "[/bold green]"
                    )
                    console.print(safe_snippet)
                    console.print()

                # Arrow-key selection (consistent with /conversations)
                search_choices = [
                    f"{r['session_name']} ({r['timestamp']} | {'You' if r['role'] == 'user' else 'Gemini'})"
                    for r in results
                ]
                search_choices.append("Cancel")

                selected = util.pick_from_list("Jump to session:", search_choices)

                if selected is None or selected == "Cancel":
                    console.print("[dim]Search cancelled.[/dim]\n")
                else:
                    selected_name = selected.rsplit(" (", 1)[0]
                    session_id, session_name = db_manager.get_or_create_session(
                        selected_name
                    )
                    ai, input_session = util.setup_chat_session(session_id)
                    console.print(
                        Panel(
                            f"Switched to: [bold green]{session_name}[/bold green]",
                            expand=False,
                        )
                    )
                    util.print_session_tail(session_id, console)

                continue

            if user_input.lower() == "/conversations":
                session_id, session_name = util.select_session_interactive()
                ai, input_session = util.setup_chat_session(session_id)
                console.print(
                    Panel(
                        f"Switched to: [bold green]{session_name}[/bold green]",
                        expand=False,
                    )
                )
                util.print_session_tail(session_id, console)
                continue

            if user_input.lower() == "/model":
                result = util.select_model(session_id)
                if result:
                    ai, input_session = result
                continue

            # --- PROCESS CHAT ---
            db_manager.save_message(session_id, "user", user_input)

            full_res = ""
            console.print("\n[bold magenta]Gemini:[/bold magenta]")

            # Streaming with a Rich Live display
            with Live(Markdown(""), console=console, refresh_per_second=10) as live:
                try:
                    for chunk in ai.send_stream(user_input):
                        if chunk.text:
                            full_res += chunk.text
                            live.update(Markdown(full_res))
                except Exception as e:
                    console.print(f"[bold red]API Error:[/bold red] {e}")

            if full_res:
                db_manager.save_message(session_id, "model", full_res)
            print()  # Spacer

        except KeyboardInterrupt:
            # Standard terminal behavior: Ctrl+C clears the current line
            continue
        except EOFError:
            # Ctrl+D exits
            break


if __name__ == "__main__":
    main()
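The `/search` command hands its query to `db_manager.parse_search_query`; the two modes it distinguishes (quoted phrase vs. per-word prefix match) can be sketched standalone as a simplified restatement of that function:

```python
def escape_fts_token(token: str) -> str:
    # Strip characters with special meaning in FTS5 MATCH expressions
    for ch in '"*():^':
        token = token.replace(ch, "")
    return token

def parse_search_query(query: str) -> str:
    query = (query or "").strip()
    if not query:
        return ""
    # Quoted input passes through as an FTS5 phrase query
    if query.startswith('"') and query.endswith('"') and len(query) > 2:
        return query
    # Otherwise each word becomes a prefix query for fuzzy matching
    tokens = [escape_fts_token(t) for t in query.split()]
    return " ".join(f"{t}*" for t in tokens if t)

print(parse_search_query("sliding window"))  # → sliding* window*
print(parse_search_query('"exact phrase"'))  # → "exact phrase"
```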
@@ -0,0 +1,105 @@
|
|
|
1
|
+
import questionary
|
|
2
|
+
import os
|
|
3
|
+
from pure_chat import db_manager
|
|
4
|
+
from rich.markdown import Markdown
|
|
5
|
+
from prompt_toolkit import PromptSession
|
|
6
|
+
from prompt_toolkit.history import InMemoryHistory
|
|
7
|
+
from pure_chat.ai_manager import GeminiAssistant
|
|
8
|
+
from google import genai
|
|
9
|
+
|
|
10
|
+
QUESTIONARY_STYLE = questionary.Style(
|
|
    [
        ("pointer", "fg:#00ff00 bold"),
        ("highlighted", "fg:#00ff00 bold"),
        ("selected", "fg:#00ff00"),
    ]
)


def pick_from_list(prompt, choices):
    """Shared arrow-key selection using questionary."""
    return questionary.select(
        prompt,
        choices=choices,
        style=QUESTIONARY_STYLE,
    ).ask()


def select_session_interactive():
    """Fetches sessions and lets the user pick one with arrow keys."""
    sessions = db_manager.list_all_sessions()

    # Add an option to create a new session
    choices = [f"{s[0]} ({s[1]})" for s in sessions]
    choices.append("Create New Session")

    selected = pick_from_list("Choose a conversation:", choices)

    if selected == "Create New Session" or selected is None:
        return db_manager.get_or_create_session()

    # Extract the name back out (before the date in parentheses)
    session_name = selected.rsplit(" (", 1)[0]
    return db_manager.get_or_create_session(session_name)


def print_session_tail(session_id, console):
    """Prints the last 50 messages to the console with Rich formatting."""
    tail = db_manager.get_last_n_messages(session_id)
    if not tail:
        return

    for role, content in tail:
        color = "blue" if role == "user" else "magenta"
        label = "You" if role == "user" else "Gemini"
        console.print(f"[bold {color}]{label}:[/bold {color}]")
        console.print(Markdown(content))
        console.print("")  # Spacer
    console.print("[dim]--- End of history ---\n[/dim]")


def setup_chat_session(session_id, model_id=None):
    """Initializes history and the AI assistant."""
    # 1. Load context history for Gemini (sliding window)
    history = db_manager.get_chat_history(session_id, window_size=12)
    ai = GeminiAssistant(history=history, model_id=model_id)

    # 2. Set up UP-ARROW history for the terminal input
    past_user_prompts = db_manager.get_all_user_messages_global()
    terminal_history = InMemoryHistory()
    for prompt in past_user_prompts:
        terminal_history.append_string(prompt)

    input_session = PromptSession(history=terminal_history)

    return ai, input_session


def select_model(session_id):
    """
    Lists available Gemini models that support content generation
    and allows the user to select one via a CLI menu.
    """
    try:
        api_key = os.getenv("GEMINI_API_KEY")
        client = genai.Client(api_key=api_key)
        available_models = [
            m
            for m in client.models.list()
            if m.supported_actions and "generateContent" in m.supported_actions
        ]

        choices = [
            {"name": f"{m.display_name} ({m.name})", "value": m.name}
            for m in available_models
        ]

        selected_model_id = pick_from_list(
            "Select the Gemini model you wish to use:", choices
        )

        return setup_chat_session(session_id, selected_model_id)

    except Exception as e:
        print(f"Error fetching models: {e}")
        return None
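One subtlety in `select_session_interactive` above: the menu displays each session as `name (date)`, and the name is recovered with `rsplit(" (", 1)` rather than a plain `split`, so parentheses inside the session name itself survive the round trip. A standalone sketch (the helper name is hypothetical, not part of this package):

```python
def extract_session_name(display: str) -> str:
    # Split once, from the right: only the trailing " (date)" suffix is dropped.
    return display.rsplit(" (", 1)[0]

print(extract_session_name("Project-Alpha (2026-01-05)"))   # Project-Alpha
print(extract_session_name("RAG (v2) notes (2026-01-05)"))  # RAG (v2) notes
```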
pure_chat-0.1.1/src/pure_chat.egg-info/PKG-INFO
ADDED
@@ -0,0 +1,87 @@
Metadata-Version: 2.4
Name: pure-chat
Version: 0.1.1
Summary: A high-performance Terminal User Interface (TUI) designed to replicate the experience of browser-based LLM chat applications directly in your terminal.
Author-email: Mats Heemeyer <matsheemeyer@gmail.com>
Requires-Python: >=3.14
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: google-genai>=1.72.0
Requires-Dist: prompt-toolkit>=3.0.52
Requires-Dist: python-dotenv>=1.2.2
Requires-Dist: questionary>=2.1.1
Requires-Dist: rich>=15.0.0
Dynamic: license-file

# PureChat

A high-performance Terminal User Interface (TUI) designed to replicate the experience of browser-based LLM chat applications directly in your terminal. This tool provides persistent conversation memory, real-time streaming, and interactive session management.

## 💡 Motivation

Most modern AI CLI tools have become heavily "agentic." While powerful for automation, they often force massive context windows, auto-execute commands, and prioritize task completion over simple conversation. This results in high token consumption, slower response times, and a lack of control over what data is being sent.

Gemini CLI Vault was built to fill the gap for a "regular" chat app in the terminal. It provides a familiar web-like experience for discussing complicated topics in a controlled environment where you don't want the tool to send excessive context or execute commands autonomously.

## ✨ Key Features

- **Persistent Memory**: Conversations are stored in a local SQLite database (`gemini_vault.db`), allowing you to resume any chat session at any time.
- **Live Streaming & Markdown**: Responses stream in real time with full Markdown rendering, including syntax-highlighted code blocks, tables, and lists.
- **Intelligent Context Window**: Automatically manages token usage using a sliding window of the last 12 messages to maintain context without exceeding limits.
- **Interactive Session Switcher**: Use the `/conversations` command to browse, search, and switch between previous chat sessions using an arrow-key menu.
- **Global Command History**: Navigate your previous prompts across all sessions using the UP and DOWN arrows (powered by `prompt_toolkit`).
- **Google Search Integration**: The assistant is equipped with the Google Search tool to provide up-to-date information on current events.
- **Dynamic Personalities**: Load custom system instructions from a `GEMINI.md` file to change the AI's behavior and tone.
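The "sliding window" feature above can be pictured as a `LIMIT`/`ORDER BY` query over a messages table. The real schema lives in `db_manager.py` and is not shown in this diff, so the table layout below is a guess, not the package's actual code:

```python
import sqlite3

# Hypothetical messages table; the real one is defined in db_manager.py.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages ("
    "id INTEGER PRIMARY KEY, session_id INTEGER, role TEXT, content TEXT)"
)
for i in range(20):
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (1, ?, ?)",
        ("user" if i % 2 == 0 else "model", f"msg {i}"),
    )

def get_chat_history(session_id, window_size=12):
    # Fetch the newest N rows, then reverse back into chronological order.
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? "
        "ORDER BY id DESC LIMIT ?",
        (session_id, window_size),
    ).fetchall()
    return rows[::-1]

history = get_chat_history(1)
print(len(history), history[0][1])  # 12 msg 8
```

Older messages simply fall out of the window, which is what keeps token usage bounded regardless of conversation length.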
## 🛠️ Installation (using uv)

1. Clone the repository:

```
git clone https://github.com/yourusername/gemini-cli-vault.git
cd gemini-cli-vault
```

2. Configure Environment Variables:

Create a `.env` file in the project root:

```
GEMINI_API_KEY=your_google_api_key_here
GEMINI_MODEL=gemini-3-flash-preview
```

3. (Optional) Define System Instructions:

Create a `GEMINI.md` file to set the AI's System Prompt, for example:

```
You are an expert software architect. Provide concise, high-level advice and always include code snippets in Python or Rust.
```

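The three pieces of configuration above come together at startup roughly like this. This is a sketch only: it assumes python-dotenv's `load_dotenv()` has already copied the `.env` entries into `os.environ`, and the function name and `GEMINI_MODEL` fallback are assumptions, not code from this package:

```python
import os
from pathlib import Path

def load_config(root: Path):
    # API key and model come from the environment (populated from .env).
    api_key = os.environ.get("GEMINI_API_KEY")  # required for the genai client
    model = os.environ.get("GEMINI_MODEL", "gemini-3-flash-preview")
    # GEMINI.md, if present, becomes the system instruction.
    prompt_file = root / "GEMINI.md"
    system_instruction = (
        prompt_file.read_text(encoding="utf-8") if prompt_file.exists() else None
    )
    return api_key, model, system_instruction
```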
## 🚀 Usage

Start a new or default session:

```
uv run main.py
```

Start or resume a specific session by name:

```
python main.py --name Project-Alpha
```

### ⌨️ In-Chat Commands

- `/conversations` : Opens the interactive session manager to switch or create chats.
- `/exit` : Safely saves and exits the application.
- `UP / DOWN` : Cycle through your entire history of user prompts.
- `Ctrl+C` : Interrupt the current input line.

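The `--name` flag shown above is presumably parsed in `main.py`; a minimal, hypothetical sketch of that CLI surface (only `--name` is documented in this README, everything else here is an assumption):

```python
import argparse

# Hypothetical parser mirroring the documented invocation:
#   python main.py --name Project-Alpha
parser = argparse.ArgumentParser(prog="pure-chat")
parser.add_argument("--name", default=None, help="session name to create or resume")

args = parser.parse_args(["--name", "Project-Alpha"])
print(args.name)  # Project-Alpha
```

With no flag, `args.name` stays `None`, which would fall through to the default session.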
## 🏗️ Project Architecture

- `main.py`: The entry point and TUI controller. Handles the input loop and Rich live display.
- `db_manager.py`: The data layer. Manages SQLite tables, message logging, and session retrieval.
- `ai_manager.py`: The AI integration layer. Configures the `google-genai` client, tools, and system instructions.
- `gemini_vault.db`: The local database generated on first run.

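The session half of the data layer can be sketched as a name-keyed upsert: look up a session by name and resume it, or create a fresh row. The real table names and columns in `db_manager.py` are not visible in this diff, so everything below is a hypothetical illustration:

```python
import sqlite3
from datetime import date

# Hypothetical sessions table; the real schema lives in db_manager.py.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sessions (id INTEGER PRIMARY KEY, name TEXT UNIQUE, created TEXT)"
)

def get_or_create_session(name="default"):
    # Resume an existing session by name, or insert a fresh row.
    row = conn.execute("SELECT id FROM sessions WHERE name = ?", (name,)).fetchone()
    if row:
        return row[0]
    cur = conn.execute(
        "INSERT INTO sessions (name, created) VALUES (?, ?)",
        (name, date.today().isoformat()),
    )
    return cur.lastrowid

first = get_or_create_session("Project-Alpha")
second = get_or_create_session("Project-Alpha")  # resumes the same row
print(first == second)  # True
```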
## 🤖 AI Disclosure

This project was primarily developed with the assistance of AI. While the core logic, architecture, and feature set were human-directed, the majority of the code implementation and boilerplate was generated and refined using Large Language Models.
pure_chat-0.1.1/src/pure_chat.egg-info/SOURCES.txt
ADDED
@@ -0,0 +1,14 @@
LICENSE
README.md
pyproject.toml
src/pure_chat/__init__.py
src/pure_chat/ai_manager.py
src/pure_chat/db_manager.py
src/pure_chat/main.py
src/pure_chat/util.py
src/pure_chat.egg-info/PKG-INFO
src/pure_chat.egg-info/SOURCES.txt
src/pure_chat.egg-info/dependency_links.txt
src/pure_chat.egg-info/entry_points.txt
src/pure_chat.egg-info/requires.txt
src/pure_chat.egg-info/top_level.txt
pure_chat-0.1.1/src/pure_chat.egg-info/dependency_links.txt
ADDED
@@ -0,0 +1 @@

pure_chat-0.1.1/src/pure_chat.egg-info/top_level.txt
ADDED
@@ -0,0 +1 @@
pure_chat