ask_log-0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
ask_log-0.1.0/LICENSE ADDED
@@ -0,0 +1,23 @@
+ MIT License
+
+ Copyright (c) 2024 Vandan Savla
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+
ask_log-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,174 @@
+ Metadata-Version: 2.4
+ Name: ask-log
+ Version: 0.1.0
+ Summary: An AI log analyzer with chat interface
+ Author-email: Vandan Savla <vsavla21@gmail.com>
+ Project-URL: Homepage, https://github.com/vandan-savla/ask-log
+ Project-URL: Repository, https://github.com/vandan-savla/ask-log
+ Project-URL: Issues, https://github.com/vandan-savla/ask-log/issues
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: System :: Logging
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
+ Requires-Python: >=3.12
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: click
+ Requires-Dist: rich
+ Requires-Dist: prompt-toolkit
+ Requires-Dist: questionary
+ Requires-Dist: pyyaml
+ Requires-Dist: numpy
+ Requires-Dist: tiktoken
+ Requires-Dist: chromadb
+ Requires-Dist: litellm
+ Requires-Dist: fastembed
+ Requires-Dist: langchain
+ Requires-Dist: langchain-core
+ Requires-Dist: langchain-community
+ Requires-Dist: langchain-text-splitters
+ Requires-Dist: langchain-classic
+ Requires-Dist: langchain-huggingface
+ Provides-Extra: openai
+ Requires-Dist: langchain-openai; extra == "openai"
+ Provides-Extra: anthropic
+ Requires-Dist: langchain-anthropic; extra == "anthropic"
+ Provides-Extra: google
+ Requires-Dist: langchain-google-genai; extra == "google"
+ Requires-Dist: langchain-google-vertexai; extra == "google"
+ Provides-Extra: azure
+ Requires-Dist: langchain-openai; extra == "azure"
+ Dynamic: license-file
+
+ ## Ask Log
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+ [![Python](https://img.shields.io/badge/Python-3.12%20|%203.13-blue)](https://www.python.org/downloads/)
+ [![OS](https://img.shields.io/badge/OS-Windows%20|%20macOS%20|%20Linux-555)](#)
+ [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](CONTRIBUTING.md)
+
+ Ask Log is a CLI-first assistant that helps you explore, summarize, and reason about application logs through a conversational interface. Point it at a log file, ask questions, and iterate quickly.
+
+ This project uses LangChain under the hood and supports multiple LLM providers (OpenAI, Anthropic, Google), configurable via a simple guided setup. Conversations can optionally be saved so you can resume them later.
+
+ ---
+
+ ### Quick links
+
+ [![Configure](https://img.shields.io/badge/Step%201-Configure-blue?logo=terminal)](#-configuration) [![Start Chat](https://img.shields.io/badge/Step%202-Start%20Chat-green?logo=gnu-bash)](#-quickstart) [![Status](https://img.shields.io/badge/Status-Check-informational?logo=gnometerminal)](#-commands)
+
+ ---
+
+ ### Features
+
+ - Conversational log analysis over any text log
+ - Local in-memory vector index (Chroma + FastEmbed) for retrieval-augmented answers
+ - Provider-agnostic via LangChain: OpenAI, Anthropic, Google Gemini
+ - Persistent per-log conversation history for continuity
+ - Colorful, ergonomic CLI using `rich` and `click`
+
+ ---
+
+ ## Installation (local, editable)
+
+ No PyPI release yet. Install locally in editable mode for fast iteration.
+
+ #### 1) Clone
+
+ ```bash
+ git clone https://github.com/vandan-savla/ask-log.git
+ cd ask-log
+ ```
+
+ #### 2) Create and activate a virtual environment
+
+ - Windows (PowerShell)
+
+ ```powershell
+ py -m venv .venv
+ .\.venv\Scripts\Activate.ps1
+ python -m pip install -U pip setuptools wheel
+ ```
+
+ - macOS/Linux (bash)
+
+ ```bash
+ python3 -m venv .venv
+ source .venv/bin/activate
+ python -m pip install -U pip setuptools wheel
+ ```
+
+ #### 3) Install the project in editable mode
+
+ Core installation:
+
+ ```bash
+ pip install -e .
+ ```
+
+ ---
+
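To confirm the editable install worked, one option is to check that the console script resolved onto your PATH from Python; a minimal sketch, assuming the entry point is named `ask-log`:

```python
import shutil

# After `pip install -e .`, the console script should be discoverable on PATH
def is_installed(cmd: str = "ask-log") -> bool:
    return shutil.which(cmd) is not None

print(is_installed())
```

If this prints `False`, double-check that the virtual environment from step 2 is still active.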
+ ## Quickstart
+
+ Once installed, the `ask-log` command is available.
+
+ ```bash
+ # 1) Configure your preferred provider and model
+ ask-log configure
+
+ # 2) Check configuration
+ ask-log status
+
+ # 3) Analyze a log file interactively; optionally save the conversation
+ ask-log chat --log-file /path/to/app.log --save ~/.ask-log/last-session.json
+ ```
+
+ During configuration, you’ll be prompted for provider credentials and a model. Supported providers and their API key environment variables:
+
+ - OpenAI: `OPENAI_API_KEY`
+ - Anthropic: `ANTHROPIC_API_KEY`
+ - Google: `GOOGLE_API_KEY`
+
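Before running `ask-log configure`, it can help to verify which of the keys above are actually visible in your shell; a small sketch using only the variable names from the list:

```python
import os

# Provider key names from the list above
PROVIDER_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY")

def available_providers() -> list[str]:
    """Return the key names that are set and non-empty in this shell."""
    return [name for name in PROVIDER_KEYS if os.environ.get(name)]

print(available_providers())
```

An empty list means no provider key is exported, and configuration will prompt you for credentials instead.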
+ ---
+
+ ## Commands
+
+ ```bash
+ # Guided config flow (choose provider, model, and options)
+ ask-log configure
+
+ # Show current provider configuration
+ ask-log status
+
+ # Start an interactive session on a specific log file
+ ask-log chat --log-file /path/to/logfile.log --save path/to/convo.json
+
+ # Reset configuration (removes ~/.ask-log/config.yaml)
+ ask-log reset
+ ```
+
+ Configuration is stored at `~/.ask-log/config.yaml`.
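The exact keys in `config.yaml` are written by `ask-log configure` and depend on the provider chosen; as a rough sketch, the file holds a provider name, a model, and credentials. The field names below are illustrative assumptions, not the actual schema:

```python
# Illustrative shape only -- the real file is produced by `ask-log configure`
example_config = {
    "provider": "openai",        # one of: openai, anthropic, google
    "model": "gpt-4o-mini",      # provider-specific model name (hypothetical)
    "api_key": "<redacted>",     # or taken from the provider's environment variable
}

def provider_summary(cfg: dict) -> str:
    """Mirror the kind of line `ask-log status` reports: provider + model."""
    return f"{cfg['provider']} / {cfg['model']}"

print(provider_summary(example_config))
```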
+
+ ---
+
+ ## Tips
+
+ - Use natural language questions like “What errors do you see?”, “Summarize main events”, “Any anomalies around 10:32?”
+ - Use `--save` to capture the conversation so you can resume context later.
+ - The first run on a large log may build a local vector index; subsequent runs will be faster.
+
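Sessions captured with `--save` are plain JSON, so they can be inspected or post-processed outside the CLI. Based on the save logic in `chat.py`, each file holds a timestamp, the log file path, a session id, and the message list (the values below are made-up examples of that shape):

```python
import json

# Shape written by LogAnalyzer._save_conversation in chat.py (example values)
session = {
    "timestamp": "2024-01-01T10:32:00",
    "log_file": "/path/to/app.log",
    "session_id": "session-0123456789ab",
    "conversation": [
        {"type": "human", "content": "What errors do you see?", "timestamp": "2024-01-01T10:32:05"},
        {"type": "ai", "content": "I found several ERROR lines...", "timestamp": "2024-01-01T10:32:09"},
    ],
}

# Round-trip the way the CLI does (json.dump on save, json.load on resume)
text = json.dumps(session, indent=2, ensure_ascii=False)
restored = json.loads(text)
print(len(restored["conversation"]))
```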
+ ---
+
+ ## Contributing
+
+ Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines, a suggested workflow, and development tips.
+
+ ---
+
+ ## License
+
+ This project is licensed under the [MIT License](LICENSE).
@@ -0,0 +1,9 @@
+ """
+ ask-log
+ An AI-powered CLI tool for analyzing log files locally
+ """
+ __version__ = "0.1.0"
+
+ from .chat import LogAnalyzer
+
+ __all__ = ["LogAnalyzer"]
@@ -0,0 +1,341 @@
+ """
+ Chat interface with memory persistence for log analysis
+ """
+ import os
+ import json
+ from datetime import datetime
+ from pathlib import Path
+ from typing import Dict, Any, Optional, List
+ import click
+ from rich.console import Console
+ from rich.markdown import Markdown
+ from rich.panel import Panel
+ from rich.progress import Progress, SpinnerColumn, TextColumn, BarColumn, TaskProgressColumn
+ import rich.spinner
+ from prompt_toolkit import prompt
+ from prompt_toolkit.history import FileHistory
+ from langchain_core.messages import HumanMessage, AIMessage
+ from langchain_text_splitters import RecursiveCharacterTextSplitter
+ from langchain_community.vectorstores import Chroma
+ from langchain_community.embeddings import FastEmbedEmbeddings
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+ from langchain_classic.chains.combine_documents import create_stuff_documents_chain
+ from langchain_classic.chains.retrieval import create_retrieval_chain
+ import hashlib
+ from langchain_core.runnables.history import RunnableWithMessageHistory
+ from langchain_community.chat_message_histories import FileChatMessageHistory
+
+ from .config import Config
+ from .llm_factory import llm_factory
+
+ console = Console()
+
+
+ class LogAnalyzer:
+     """Main chat interface for log analysis"""
+
+     def __init__(self, log_file_path: str, save_path: Optional[str] = None):
+         self.log_file_path = Path(log_file_path)
+         self.save_path = Path(save_path) if save_path else None
+         self.config = Config()
+         self.llm = None
+         self.conversation_history = []
+         self.retriever = None
+         self.rag_chain = None
+         self.fallback_chain = None
+         self.session_id = self._compute_session_id()
+
+         # Load log file content
+         self.log_content = self._load_log_file()
+
+         # Initialize LLM
+         self._initialize_llm()
+
+         # RAG chain will be initialized lazily on first use to speed startup
+
+         # Load previous conversation if save path exists
+         if self.save_path and self.save_path.exists():
+             self._load_conversation()
+
+     def _load_log_file(self) -> str:
+         """Load and return log file content"""
+         try:
+             with open(self.log_file_path, 'r', encoding='utf-8') as f:
+                 content = f.read()
+             console.print(f"[green]✓ Loaded log file: {self.log_file_path}[/green]")
+             console.print(f"[dim]Log file size: {len(content)} characters[/dim]")
+             return content
+         except Exception as e:
+             console.print(f"[red]✗ Failed to load log file: {e}[/red]")
+             raise
+
+     def _initialize_llm(self):
+         """Initialize the LLM from configuration"""
+         provider_config = self.config.get_provider_config()
+         if not provider_config:
+             raise ValueError("No LLM provider configured. Please run 'ask-log configure' first.")
+
+         try:
+             self.llm = llm_factory.create_llm(
+                 provider_config["provider"],
+                 provider_config["model"],
+                 provider_config
+             )
+             console.print(f"[green]✓ Initialized {provider_config['provider']} with model {provider_config['model']}[/green]")
+         except Exception as e:
+             console.print(f"[red]✗ Failed to initialize LLM: {e}[/red]")
+             raise
+
+     def _load_conversation(self):
+         """Load previous conversation from save file"""
+         try:
+             with open(self.save_path, 'r', encoding='utf-8') as f:
+                 data = json.load(f)
+
+             self.conversation_history = data.get('conversation', [])
+
+             # Memory is derived on the fly from conversation_history; nothing else to do here
+
+             console.print(f"[green]✓ Loaded previous conversation with {len(self.conversation_history)} messages[/green]")
+
+         except Exception as e:
+             console.print(f"[yellow]Warning: Could not load previous conversation: {e}[/yellow]")
+
+     def _save_conversation(self):
+         """Save conversation to file (internal or user-specified)"""
+         if not self.save_path:
+             return
+
+         try:
+             self.save_path.parent.mkdir(parents=True, exist_ok=True)
+
+             data = {
+                 'timestamp': datetime.now().isoformat(),
+                 'log_file': str(self.log_file_path),
+                 'session_id': self.session_id,
+                 'conversation': self.conversation_history
+             }
+
+             with open(self.save_path, 'w', encoding='utf-8') as f:
+                 json.dump(data, f, indent=2, ensure_ascii=False)
+
+         except Exception as e:
+             console.print(f"[red]Warning: Could not save conversation: {e}[/red]")
+
+     def _handle_exit_save(self):
+         """Prompt user to save if they haven't specified a save path already"""
+         # Only prompt if there is actually a conversation to save
+         if self.save_path or not self.conversation_history:
+             return
+
+         # click.confirm/click.prompt do not render rich markup, so keep these prompts plain
+         if click.confirm("\nWould you like to save this conversation history?"):
+             filename = click.prompt("Enter a filename (e.g., 'analysis_v1')")
+
+             # Ensure it has a .json extension
+             if not filename.endswith('.json'):
+                 filename += '.json'
+
+             # Destination: {config_dir}/history/<filename>.json
+             history_dir = self.config.config_dir / "history"
+             self.save_path = history_dir / filename
+
+             # Execute the save
+             self._save_conversation()
+             console.print(f"[green]✓ Conversation saved to: {self.save_path}[/green]")
+
+
+
148
+ def _get_system_instructions(self) -> str:
149
+ """System instructions for the RAG chain (no full log in prompt)."""
150
+ current_dir = os.path.dirname(os.path.abspath(__file__))
151
+
152
+
153
+ prompt_path = os.path.join(current_dir, "prompts", "system_prompt.txt")
154
+
155
+ try:
156
+ with open(prompt_path, "r", encoding="utf-8") as f:
157
+ return f.read()
158
+ except FileNotFoundError:
159
+ return "Default system instructions..."
160
+
161
+ def _compute_session_id(self) -> str:
162
+ digest = hashlib.sha256(str(self.log_file_path.resolve()).encode("utf-8")).hexdigest()[:12]
163
+ return f"session-{digest}"
164
+
165
+ def _messages_store_path(self) -> Path:
166
+ base = self.config.config_dir / "messages"
167
+ base.mkdir(parents=True, exist_ok=True)
168
+ return base / f"{self.session_id}.json"
169
+
170
+ def _get_chat_history(self, session_id: str) -> FileChatMessageHistory:
171
+ # Single-session per log file. Persist messages for agent memory only.
172
+ return FileChatMessageHistory(str(self._messages_store_path()))
173
+
174
+     def _initialize_rag(self, force_rebuild: bool = False) -> None:
+         """Create or load a vector store retriever and retrieval chain over the log file.
+
+         Uses in-memory Chroma with FastEmbed for efficiency and MMR for varied context.
+         """
+         with console.status("[yellow] Retrieving Embeddings...[/yellow]", spinner="material"):
+             try:
+                 embeddings = FastEmbedEmbeddings(model_name="BAAI/bge-small-en-v1.5")
+
+                 splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200, add_start_index=True)
+
+                 documents = splitter.create_documents(
+                     [self.log_content], metadatas=[{"source": str(self.log_file_path)}]
+                 )
+
+                 # Use ephemeral in-memory Chroma vector store
+                 vector_store = Chroma.from_documents(documents, embeddings)
+
+                 # Silence ChromaDB warning when number of chunks is less than fetch_k
+                 doc_count = len(documents)
+                 fetch_k = min(20, doc_count)
+                 k = min(6, doc_count)
+
+                 # Use MMR for diverse chunk selection
+                 self.retriever = vector_store.as_retriever(
+                     search_type="mmr" if doc_count > 0 else "similarity",
+                     search_kwargs={"k": k, "fetch_k": fetch_k}
+                 )
+
+                 # Prompt and retrieval chain with persisted chat history
+                 prompt = ChatPromptTemplate.from_messages([
+                     ("system", "{system_instructions}\n\nRetrieved context:\n{context}"),
+                     MessagesPlaceholder("history"),
+                     ("human", "{input}")
+                 ])
+                 document_chain = create_stuff_documents_chain(self.llm, prompt)
+                 base_chain = create_retrieval_chain(self.retriever, document_chain)
+                 self.rag_chain = RunnableWithMessageHistory(
+                     base_chain,
+                     self._get_chat_history,
+                     input_messages_key="input",
+                     history_messages_key="history",
+                     output_messages_key="answer",
+                 )
+                 console.print("[green]✓ Logs retrieved and ready to be analyzed[/green]")
+             except Exception as e:
+                 console.print(f"[yellow]Warning: Failed to initialize log retriever: {e}[/yellow]")
+                 self.retriever = None
+                 self.rag_chain = None
+
+     def _initialize_fallback_chain(self) -> None:
+         try:
+             prompt = ChatPromptTemplate.from_messages([
+                 ("system", "{system_instructions}"),
+                 MessagesPlaceholder("history"),
+                 ("human", "{input}")
+             ])
+             base_chain = prompt | self.llm
+             self.fallback_chain = RunnableWithMessageHistory(
+                 base_chain,
+                 self._get_chat_history,
+                 input_messages_key="input",
+                 history_messages_key="history",
+             )
+         except Exception:
+             self.fallback_chain = None
+
+     def _format_response(self, response: str) -> None:
+         """Format and display AI response"""
+         panel = Panel(
+             Markdown(response),
+             title="[bold blue]Agent[/bold blue]",
+             border_style="blue",
+             padding=(1, 2)
+         )
+         console.print(panel)
+
+     def _add_to_history(self, message_type: str, content: str):
+         """Add message to conversation history"""
+         self.conversation_history.append({
+             'type': message_type,
+             'content': content,
+             'timestamp': datetime.now().isoformat()
+         })
+
+     def start_chat(self):
+         """Start the interactive chat session"""
+         # Welcome message
+         welcome_msg = f"""🔍 **Welcome to Ask Log!**
+
+ I'm ready to help you analyze your log file: `{self.log_file_path.name}`
+
+ You can ask me questions like:
+ - "What errors do you see in this log?"
+ - "Summarize the main events"
+ - "Are there any patterns or anomalies?"
+ - "What happened around timestamp X?"
+
+ Type '/quit', '/exit', or press Ctrl+C to end the session.
+ """
+         self._initialize_rag()
+
+         self._format_response(welcome_msg)
+
+         # Set up prompt history
+         history_file = self.config.config_dir / "chat_history"
+         history = FileHistory(str(history_file))
+
+         try:
+             while True:
+                 try:
+                     # Get user input
+                     user_input = prompt(
+                         "You: ",
+                         history=history
+                     ).strip()
+
+                     if not user_input:
+                         continue
+
+                     if user_input.lower() in ['/quit', '/exit']:
+                         break
+
+                     # Add user message to history
+                     self._add_to_history('human', user_input)
+
+                     # Get AI response (RAG if available; fall back to direct LLM) with transient status
+                     with console.status("[dim]Analyzing...[/dim]", spinner="dots"):
+                         if self.rag_chain is not None:
+                             result = self.rag_chain.invoke(
+                                 {"input": user_input, "system_instructions": self._get_system_instructions()},
+                                 config={"configurable": {"session_id": self.session_id}},
+                             )
+                             ai_response = result.get("answer") or str(result)
+                         else:
+                             # Fallback chain with persisted chat history
+                             if self.fallback_chain is None:
+                                 self._initialize_fallback_chain()
+                             if self.fallback_chain is not None:
+                                 response_msg = self.fallback_chain.invoke(
+                                     {"input": user_input, "system_instructions": self._get_system_instructions()},
+                                     config={"configurable": {"session_id": self.session_id}},
+                                 )
+                                 ai_response = getattr(response_msg, "content", str(response_msg))
+                             else:
+                                 response = self.llm.invoke([HumanMessage(content=user_input)])
+                                 ai_response = response.content if hasattr(response, "content") else str(response)
+
+                     # Add AI response to history
+                     self._add_to_history('ai', ai_response)
+
+                     # Display response
+                     self._format_response(ai_response)
+
+                 except KeyboardInterrupt:
+                     break
+                 except EOFError:
+                     break
+                 except Exception as e:
+                     console.print(f"\n[red]Error: {e}[/red]")
+                     continue
+
+         finally:
+             # Persist the conversation if a save path was provided (no-op otherwise)
+             self._save_conversation()
+             if self.save_path:
+                 console.print("\n[yellow]Goodbye! Your conversation has been saved.[/yellow]")
+             else:
+                 console.print("\n[yellow]Goodbye![/yellow]")
+             # self._handle_exit_save()