tracecli-0.1.0.tar.gz

tracecli-0.1.0/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Haseeb Arshad
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,136 @@
+ Metadata-Version: 2.4
+ Name: tracecli
+ Version: 0.1.0
+ Summary: The terminal's black box for your digital life. A privacy-first activity monitor.
+ Author-email: Haseeb Arshad <haseebarshad@users.noreply.github.com>
+ License: MIT
+ Project-URL: Homepage, https://github.com/Haseeb-Arshad/trace-cli
+ Project-URL: Repository, https://github.com/Haseeb-Arshad/trace-cli
+ Project-URL: Issues, https://github.com/Haseeb-Arshad/trace-cli/issues
+ Keywords: productivity,tracker,cli,windows,privacy,copilot
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Environment :: Console
+ Classifier: Operating System :: Microsoft :: Windows
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: pywin32>=306
+ Requires-Dist: rich>=13.0
+ Requires-Dist: click>=8.1
+ Requires-Dist: psutil>=5.9
+ Provides-Extra: dev
+ Requires-Dist: pytest>=7.0; extra == "dev"
+ Requires-Dist: pytest-cov; extra == "dev"
+ Dynamic: license-file
+
+ # TraceCLI
+
+ **The terminal's black box for your digital life.**
+
+ TraceCLI is a privacy-first, AI-powered activity tracker and productivity coach that lives in your terminal. It monitors your digital habits, actively protects your focus, and provides AI-powered coaching to improve your workflow—all without your data ever leaving your machine.
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+ [![GitHub Repository](https://img.shields.io/badge/GitHub-trace--cli-blue.svg?logo=github)](https://github.com/Haseeb-Arshad/trace-cli)
+
+ ## Why TraceCLI?
+
+ TraceCLI isn't just a time tracker. It's an intelligent agent designed to understand your workflow.
+
+ - **Privacy First**: All data is stored locally in `~/.tracecli/trace.db`. Your activity data stays on your machine.
+ - **Zero Friction**: Auto-starts with Windows. Tracks active windows, browser searches, and system resources automatically in the background.
+ - **AI-Powered**: Chat with your productivity data using Natural Language. Get personalized insights and weekly digests to understand where your time goes.
+
+ ---
+
+ ## Key Features
+
+ ### Productivity Heatmap
+ Visualize your consistency with a GitHub-style contribution graph.
+ ```bash
+ tracecli heatmap --weeks 52
+ ```
+
+ ### Focus Mode
+ A Pomodoro timer with active distraction detection. If you switch to a non-productive app (e.g., social media) during a session, TraceCLI alerts you immediately to get you back on track.
+ ```bash
+ tracecli focus --duration 25 --goal "Core Feature Implementation"
+ ```
+
+ ### AI Insights
+ Get personalized productivity digests or ask questions about your habits using natural language.
+ ```bash
+ tracecli insights # Weekly coaching report
+ tracecli ask "What was my most used app on Monday?"
+ ```
+
+ ### Configurable Rules
+ Define exactly what counts as "Productive" or "Distraction" for you using simple configuration rules.
+
+ ---
+
+ ## Installation
+
+ You can install TraceCLI directly from PyPI (coming soon) or from source.
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/Haseeb-Arshad/trace-cli.git
+ cd trace-cli
+
+ # Install in editable mode
+ pip install -e .
+
+ # Verify installation
+ tracecli --version
+ ```
+
+ ### AI Configuration (Optional)
+ Supported providers: `gemini`, `openai`, `claude`.
+ ```bash
+ tracecli config --provider gemini --key YOUR_API_KEY
+ ```
+
+ ---
+
+ ## Command Reference
+
+ ### Core & Tracking
+ - `tracecli start`: Start activity tracking (use `--background` to run silently).
+ - `tracecli live`: View real-time activity feed.
+ - `tracecli status`: Check CLI and Database status.
+ - `tracecli autostart enable`: Enable start on Windows login.
+
+ ### Analytics & Reports
+ - `tracecli stats`: View daily productivity summary.
+ - `tracecli heatmap`: GitHub-style productivity graph.
+ - `tracecli report`: Detailed daily report with app breakdowns.
+ - `tracecli timeline`: Visual daily timeline of activities.
+ - `tracecli app [NAME]`: Deep dive into a specific application's usage.
+
+ ### System & Browsing
+ - `tracecli urls`: Browser history and domain breakdown.
+ - `tracecli searches`: Recent browser search queries.
+ - `tracecli system`: System resource overview (CPU/RAM snapshots).
+
+ ---
+
+ ## Architecture
+
+ TraceCLI is built on a modular architecture designed for performance and extensibility.
+
+ - **Tracker**: Polls the Windows API to detect active applications and measures engagement.
+ - **Categorizer**: Maps process names and window titles to categories (Development, Communication, Productivity, etc.) using a flexible rule engine.
+ - **FocusMonitor**: A dedicated background thread that enforces focus rules during active sessions.
+ - **BrowserBridge**: Safely reads browser history databases (Chrome/Edge) to provide context on web usage without invasive extensions.
+
+ ---
+
+ ## Contributing
+
+ We welcome contributions! Please see `CONTRIBUTING.md` for details on how to get started.
+
+ ## License
+
+ TraceCLI is released under the [MIT License](LICENSE).
@@ -0,0 +1,109 @@
+ # TraceCLI
+
+ **The terminal's black box for your digital life.**
+
+ TraceCLI is a privacy-first, AI-powered activity tracker and productivity coach that lives in your terminal. It monitors your digital habits, actively protects your focus, and provides AI-powered coaching to improve your workflow—all without your data ever leaving your machine.
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
+ [![GitHub Repository](https://img.shields.io/badge/GitHub-trace--cli-blue.svg?logo=github)](https://github.com/Haseeb-Arshad/trace-cli)
+
+ ## Why TraceCLI?
+
+ TraceCLI isn't just a time tracker. It's an intelligent agent designed to understand your workflow.
+
+ - **Privacy First**: All data is stored locally in `~/.tracecli/trace.db`. Your activity data stays on your machine (see the inspection sketch after this list).
+ - **Zero Friction**: Auto-starts with Windows. Tracks active windows, browser searches, and system resources automatically in the background.
+ - **AI-Powered**: Chat with your productivity data using Natural Language. Get personalized insights and weekly digests to understand where your time goes.
+
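Because everything lives in a single local SQLite file, you can audit the data yourself with nothing but the Python standard library. The sketch below is an editorial illustration rather than part of TraceCLI: it assumes the default database path named above, and the table names it surfaces (`activity_log`, `daily_stats`, and friends) are the ones listed in the package's AI module.

```python
# Illustrative only (not shipped with TraceCLI): list the tables in the local
# trace.db and how many rows each one holds.
import sqlite3
from pathlib import Path

db_path = Path.home() / ".tracecli" / "trace.db"  # default location per the bullet above
with sqlite3.connect(db_path) as conn:
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    for (name,) in tables:
        count = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()[0]
        print(f"{name}: {count} rows")
```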
+ ---
+
+ ## Key Features
+
+ ### Productivity Heatmap
+ Visualize your consistency with a GitHub-style contribution graph.
+ ```bash
+ tracecli heatmap --weeks 52
+ ```
+
+ ### Focus Mode
+ A Pomodoro timer with active distraction detection. If you switch to a non-productive app (e.g., social media) during a session, TraceCLI alerts you immediately to get you back on track.
+ ```bash
+ tracecli focus --duration 25 --goal "Core Feature Implementation"
+ ```
+
+ ### AI Insights
+ Get personalized productivity digests or ask questions about your habits using natural language.
+ ```bash
+ tracecli insights # Weekly coaching report
+ tracecli ask "What was my most used app on Monday?"
+ ```
+
+ ### Configurable Rules
+ Define exactly what counts as "Productive" or "Distraction" for you using simple configuration rules.
+
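The package does not spell out its rule syntax here, so purely as a hypothetical illustration of the idea (keyword rules that map process names or window titles to a category), it boils down to something like the following. The category labels are the ones used in the package's AI module; the rule format itself is invented for this example and may differ from TraceCLI's real configuration.

```python
# Hypothetical keyword -> category rules; TraceCLI's actual rule syntax may differ.
RULES = {
    "code.exe": "💻 Development",
    "terminal": "💻 Development",
    "slack": "💬 Communication",
    "youtube": "🎮 Distraction",
}

def categorize(process_name: str, window_title: str) -> str:
    """Return the first category whose keyword appears in the process name or title."""
    haystack = f"{process_name} {window_title}".lower()
    for keyword, category in RULES.items():
        if keyword in haystack:
            return category
    return "❓ Other"
```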
+ ---
+
+ ## Installation
+
+ You can install TraceCLI directly from PyPI (coming soon) or from source.
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/Haseeb-Arshad/trace-cli.git
+ cd trace-cli
+
+ # Install in editable mode
+ pip install -e .
+
+ # Verify installation
+ tracecli --version
+ ```
+
+ ### AI Configuration (Optional)
+ Supported providers: `gemini`, `openai`, `claude`.
+ ```bash
+ tracecli config --provider gemini --key YOUR_API_KEY
+ ```
+
+ ---
+
+ ## Command Reference
+
+ ### Core & Tracking
+ - `tracecli start`: Start activity tracking (use `--background` to run silently).
+ - `tracecli live`: View real-time activity feed.
+ - `tracecli status`: Check CLI and Database status.
+ - `tracecli autostart enable`: Enable start on Windows login.
+
+ ### Analytics & Reports
+ - `tracecli stats`: View daily productivity summary.
+ - `tracecli heatmap`: GitHub-style productivity graph.
+ - `tracecli report`: Detailed daily report with app breakdowns.
+ - `tracecli timeline`: Visual daily timeline of activities.
+ - `tracecli app [NAME]`: Deep dive into a specific application's usage.
+
+ ### System & Browsing
+ - `tracecli urls`: Browser history and domain breakdown.
+ - `tracecli searches`: Recent browser search queries.
+ - `tracecli system`: System resource overview (CPU/RAM snapshots).
+
+ ---
+
+ ## Architecture
+
+ TraceCLI is built on a modular architecture designed for performance and extensibility.
+
+ - **Tracker**: Polls the Windows API to detect active applications and measures engagement (a minimal sketch follows this list).
+ - **Categorizer**: Maps process names and window titles to categories (Development, Communication, Productivity, etc.) using a flexible rule engine.
+ - **FocusMonitor**: A dedicated background thread that enforces focus rules during active sessions.
+ - **BrowserBridge**: Safely reads browser history databases (Chrome/Edge) to provide context on web usage without invasive extensions.
+
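The Tracker's polling approach can be pictured with the declared `pywin32` and `psutil` dependencies. This is an editorial sketch of that idea, not TraceCLI's actual Tracker code.

```python
# Editorial sketch of foreground-window polling (not TraceCLI's implementation).
import time

import psutil
import win32gui
import win32process

def poll_active_window(interval_seconds: float = 2.0):
    """Yield (process_name, window_title) for the current foreground window."""
    while True:
        hwnd = win32gui.GetForegroundWindow()
        title = win32gui.GetWindowText(hwnd)
        _, pid = win32process.GetWindowThreadProcessId(hwnd)
        try:
            name = psutil.Process(pid).name()
        except psutil.NoSuchProcess:
            name = "unknown"
        yield name, title
        time.sleep(interval_seconds)
```

A categorizer along the lines of the rule sketch shown earlier would then map each `(name, title)` pair to a category before the sample is stored.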
+ ---
+
+ ## Contributing
+
+ We welcome contributions! Please see `CONTRIBUTING.md` for details on how to get started.
+
+ ## License
+
+ TraceCLI is released under the [MIT License](LICENSE).
@@ -0,0 +1,41 @@
+ [build-system]
+ requires = ["setuptools>=68.0", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "tracecli"
+ version = "0.1.0"
+ description = "The terminal's black box for your digital life. A privacy-first activity monitor."
+ readme = "README.md"
+ requires-python = ">=3.9"
+ license = {text = "MIT"}
+ authors = [{name = "Haseeb Arshad", email = "haseebarshad@users.noreply.github.com"}]
+ keywords = ["productivity", "tracker", "cli", "windows", "privacy", "copilot"]
+
+ classifiers = [
+     "Development Status :: 4 - Beta",
+     "Environment :: Console",
+     "Operating System :: Microsoft :: Windows",
+     "Programming Language :: Python :: 3",
+     "Topic :: Utilities",
+ ]
+ dependencies = [
+     "pywin32>=306",
+     "rich>=13.0",
+     "click>=8.1",
+     "psutil>=5.9",
+ ]
+
+ [project.urls]
+ Homepage = "https://github.com/Haseeb-Arshad/trace-cli"
+ Repository = "https://github.com/Haseeb-Arshad/trace-cli"
+ Issues = "https://github.com/Haseeb-Arshad/trace-cli/issues"
+
+ [project.optional-dependencies]
+ dev = ["pytest>=7.0", "pytest-cov"]
+
+ [project.scripts]
+ tracecli = "src.cli:main"
+
+ [tool.setuptools.packages.find]
+ include = ["src*"]
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,3 @@
+ """TraceCLI — The terminal's black box for your digital life."""
+
+ __version__ = "2.0.0"
@@ -0,0 +1,6 @@
+ """TraceCLI — Run as module: python -m src"""
+
+ from .cli import main
+
+ if __name__ == "__main__":
+     main()
@@ -0,0 +1,368 @@
+ """
+ TraceCLI AI Agent
+ ~~~~~~~~~~~~~~~~~
+ Zero-dependency AI integration using urllib.
+ Handles Text-to-SQL generation and natural language summarization.
+ """
+
+ import json
+ import sqlite3
+ import urllib.request
+ import urllib.error
+ from datetime import date
+ from typing import Optional, Dict, Any
+
+ from rich.console import Console
+ from rich.panel import Panel
+
+ from . import config
+ from .database import get_connection, DB_PATH
+
+ console = Console()
+
+ # ── API Clients ────────────────────────────────────────────────────────────
+
+ def _post_json(url: str, headers: Dict[str, str], data: Dict[str, Any]) -> Dict:
+     """Helper to send JSON POST request using standard library (with retries)."""
+     import time
+
+     max_retries = 3
+     backoff = 1  # seconds
+
+     for attempt in range(max_retries):
+         try:
+             req = urllib.request.Request(
+                 url,
+                 data=json.dumps(data).encode("utf-8"),
+                 headers=headers,
+                 method="POST"
+             )
+             with urllib.request.urlopen(req) as response:
+                 return json.load(response)
+         except urllib.error.HTTPError as e:
+             if e.code == 429:
+                 if attempt < max_retries - 1:
+                     console.print(f"[yellow]Rate limited (429). Retrying in {backoff}s...[/yellow]")
+                     time.sleep(backoff)
+                     backoff *= 2
+                     continue
+
+             try:
+                 err_body = e.read().decode()
+                 console.print(f"[red]API Error {e.code}: {err_body}[/red]")
+             except Exception:
+                 console.print(f"[red]API Error: {e}[/red]")
+             return {}
+         except Exception as e:
+             console.print(f"[red]Network Error: {e}[/red]")
+             return {}
+     return {}
+
+
+ def call_gemini(api_key: str, prompt: str, model: str = ""):
+     """Call Google Gemini API."""
+     model = model or "gemini-flash-latest"
+     url = f"https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}"
+     headers = {"Content-Type": "application/json"}
+     data = {
+         "contents": [{"parts": [{"text": prompt}]}]
+     }
+
+     resp = _post_json(url, headers, data)
+     try:
+         return resp["candidates"][0]["content"]["parts"][0]["text"]
+     except (KeyError, IndexError):
+         return None
+
+
+ def call_openai(api_key: str, prompt: str, model: str = ""):
+     """Call OpenAI API."""
+     model = model or "gpt-3.5-turbo"
+     url = "https://api.openai.com/v1/chat/completions"
+     headers = {
+         "Content-Type": "application/json",
+         "Authorization": f"Bearer {api_key}"
+     }
+     data = {
+         "model": model,
+         "messages": [{"role": "user", "content": prompt}],
+         "temperature": 0
+     }
+
+     resp = _post_json(url, headers, data)
+     try:
+         return resp["choices"][0]["message"]["content"]
+     except (KeyError, IndexError):
+         return None
+
+
+ def call_claude(api_key: str, prompt: str, model: str = ""):
+     """Call Anthropic Claude API."""
+     model = model or "claude-3-haiku-20240307"
+     url = "https://api.anthropic.com/v1/messages"
+     headers = {
+         "x-api-key": api_key,
+         "anthropic-version": "2023-06-01",
+         "content-type": "application/json"
+     }
+     data = {
+         "model": model,
+         "max_tokens": 1024,
+         "messages": [{"role": "user", "content": prompt}]
+     }
+
+     resp = _post_json(url, headers, data)
+     try:
+         return resp["content"][0]["text"]
+     except (KeyError, IndexError):
+         return None
+
+
+ def ask_llm(provider: str, api_key: str, prompt: str, model: str = "") -> Optional[str]:
+     """Dispatch to the configured provider."""
+     provider = provider.lower()
+     if provider == "gemini":
+         return call_gemini(api_key, prompt, model)
+     elif provider == "openai":
+         return call_openai(api_key, prompt, model)
+     elif provider == "claude":
+         return call_claude(api_key, prompt, model)
+     else:
+         console.print(f"[red]Unknown provider: {provider}[/red]")
+         return None
+
+
+ # ── Text-to-SQL Logic ──────────────────────────────────────────────────────
+
+ def get_schema_summary() -> str:
+     """Get a summary of the database schema for the LLM."""
+     conn = get_connection()
+     cursor = conn.cursor()
+
+     schema = []
+     tables = ["activity_log", "daily_stats", "process_snapshots", "browser_urls", "search_history"]
+
+     for table in tables:
+         cursor.execute(f"PRAGMA table_info({table})")
+         cols = [f"{row['name']} ({row['type']})" for row in cursor.fetchall()]
+         schema.append(f"Table {table}: " + ", ".join(cols))
+
+     return "\n".join(schema)
+
+
+ VALID_CATEGORIES = [
+     "💻 Development", "🌐 Browsing", "📚 Research", "💬 Communication",
+     "📝 Productivity", "🎮 Distraction", "❓ Other"
+ ]
+
+
+ def generate_sql(question: str, provider: str, api_key: str, model: str) -> Optional[str]:
+     """Ask LLM to generate SQL."""
+     schema = get_schema_summary()
+     today = date.today().isoformat()
+
+     prompt = f"""
+     You are a SQLite expert. Given the database schema below, write a single SQL query to answer the user's question.
+
+     Current Date: {today}
+     Schema:
+     {schema}
+
+     Rules:
+     1. Return ONLY the SQL query. No markdown, no explanations.
+     2. Use safe localized time comparison: `start_time LIKE '2023-01-01%'` or `timestamp LIKE '2023-01-01%'`.
+     3. Calculate durations in minutes if relevant (duration_seconds / 60 or visit_duration / 60).
+     4. Only use SELECT statements.
+
+     Data Quirks:
+     - `category` column uses these exact strings: {VALID_CATEGORIES}
+     - If user asks for "Entertainment", "Social", or "Chat", look for keywords in `app_name` OR include `category = '❓ Other'` to find uncategorized apps.
+     - `search_history` entries with `url = ''` are captured from browser titles.
+     - To join `search_history` with `browser_urls` for productivity time, use:
+       `ON s.url = b.url OR (s.url = '' AND b.title LIKE '%' || s.query || '%')`
+
+     Question: {question}
+     SQL:
+     """
+
+     response = ask_llm(provider, api_key, prompt, model)
+     if not response:
+         return None
+
+     # Clean up response (remove markdown if any)
+     sql = response.strip()
+     if sql.startswith("```"):
+         sql = sql.split("\n", 1)[1]
+     if sql.endswith("```"):
+         sql = sql.rsplit("\n", 1)[0]
+     return sql.strip()
+
+
+ def summarize_result(question: str, sql: str, rows: list, provider: str, api_key: str, model: str) -> str:
+     """Ask LLM to summarize the SQL result."""
+     # Truncate rows if too large
+     data_str = str(rows[:20])
+     if len(rows) > 20:
+         data_str += f"... (+{len(rows)-20} more rows)"
+
+     prompt = f"""
+     You are a helpful assistant analyzing activity data.
+     User Question: "{question}"
+     SQL Query Run: "{sql}"
+     Result Data: {data_str}
+
+     Provide a concise, friendly answer based on the data.
+     - If you see apps with `category='❓ Other'`, analyze their `app_name` or `window_title`. If they look like games, social media, or entertainment, point them out as potential distractions.
+     - If the data is empty, say so politely.
+     Answer:
+     """
+
+     return ask_llm(provider, api_key, prompt, model) or "Error generating summary."
+
+
+ def handle_ask(question: str):
+     """Main entry point for 'tracecli ask'."""
+     provider, api_key, model = config.get_ai_config()
+
+     if not api_key:
+         console.print("[yellow]⚠️ AI API Key not configured.[/yellow]")
+         console.print("Run [bold]tracecli config --key YOUR_KEY[/bold] to set it up.")
+         return
+
+     console.print(f"[dim]Thinking with {provider}...[/dim]")
+
+     # 1. Generate SQL
+     sql = generate_sql(question, provider, api_key, model)
+     if not sql:
+         console.print("[red]Failed to generate SQL query.[/red]")
+         return
+
+     # Safety check: only allow SELECT statements through.
+     if not sql.upper().startswith("SELECT"):
+         console.print("[red]Safety Violation: Generated SQL is not a SELECT statement.[/red]")
+         return
+
+     console.print(f"[dim]Executing: {sql}[/dim]")
+
+     # 2. Execute SQL
+     try:
+         conn = get_connection()
+         # sqlite3 cannot easily enforce read-only execution per query,
+         # so we rely on the SELECT-only check above.
+         cursor = conn.execute(sql)
+         cols = [description[0] for description in cursor.description]
+         rows = [dict(zip(cols, row)) for row in cursor.fetchall()]
+     except Exception as e:
+         console.print(f"[red]SQL Error: {e}[/red]")
+         return
+
+     if not rows:
+         console.print("\n[yellow]No data found matching your query.[/yellow]\n")
+         return
+
+     # 3. Summarize
+     console.print("[dim]Analyzing results...[/dim]")
+     answer = summarize_result(question, sql, rows, provider, api_key, model)
+
+     console.print()
+     console.print(Panel(answer, title="🤖 AI Answer", border_style="cyan"))
+     console.print()
+
+
+ # ── Weekly Productivity Digest ─────────────────────────────────────────────
+
+ def generate_weekly_digest(days: int = 7) -> Optional[str]:
+     """
+     Generate an AI-powered productivity digest for the past N days.
+
+     Collects daily stats, app usage, and search patterns, then asks
+     the LLM to generate a personalized productivity coaching report.
+     """
+     provider, api_key, model = config.get_ai_config()
+     if not api_key:
+         return None
+
+     from .database import (
+         get_stats_range, get_category_breakdown, get_app_breakdown,
+         get_focus_stats, query_searches,
+     )
+     from datetime import date, timedelta
+
+     # Gather data
+     stats = get_stats_range(days)
+     today = date.today()
+
+     if not stats:
+         return "No activity data found for the past week."
+
+     # Build structured summary
+     total_seconds = sum(s["total_seconds"] for s in stats)
+     productive_seconds = sum(s["productive_seconds"] for s in stats)
+     distraction_seconds = sum(s["distraction_seconds"] for s in stats)
+
+     # App breakdown for latest day
+     app_breakdown = get_app_breakdown(today)
+     top_apps = [
+         {"app": a["app_name"], "time_hours": round(a["total_seconds"] / 3600, 1)}
+         for a in app_breakdown[:5]
+     ]
+
+     # Category breakdown for latest day
+     cat_breakdown = get_category_breakdown(today)
+     categories = [
+         {"category": c["category"], "time_hours": round(c["total_seconds"] / 3600, 1)}
+         for c in cat_breakdown
+     ]
+
+     # Focus stats
+     focus = get_focus_stats()
+
+     # Search patterns
+     searches = query_searches(today)
+     recent_queries = [s["query"] for s in searches[:10]]
+
+     summary = {
+         "period_days": days,
+         "total_tracked_hours": round(total_seconds / 3600, 1),
+         "productive_hours": round(productive_seconds / 3600, 1),
+         "distraction_hours": round(distraction_seconds / 3600, 1),
+         "productivity_score": round((productive_seconds / total_seconds) * 100) if total_seconds > 0 else 0,
+         "days_tracked": len(stats),
+         "daily_breakdown": [
+             {
+                 "date": s["date"],
+                 "hours": round(s["total_seconds"] / 3600, 1),
+                 "productive_hours": round(s["productive_seconds"] / 3600, 1),
+                 "score": round((s["productive_seconds"] / s["total_seconds"]) * 100) if s["total_seconds"] > 0 else 0,
+             }
+             for s in stats
+         ],
+         "top_apps": top_apps,
+         "categories": categories,
+         "focus_sessions": focus.get("total_sessions", 0),
+         "avg_focus_score": round(focus.get("avg_focus_score", 0), 1),
+         "recent_searches": recent_queries,
+     }
+
+     prompt = f"""You are a friendly, insightful productivity coach analyzing someone's computer usage data.
+
+ Here is their activity summary for the past {days} days:
+ {json.dumps(summary, indent=2)}
+
+ Generate a concise, personalized productivity digest with these sections:
+ 1. 🏆 **Top Achievement** — Highlight their best metric or pattern (be specific with numbers)
+ 2. ⚠️ **Biggest Distraction Pattern** — Identify when/where they get distracted most
+ 3. 💡 **Actionable Suggestion** — One specific, practical tip to improve tomorrow
+ 4. 📊 **Week-over-Week** — Comment on trends if multiple days of data exist
+
+ Keep each section to 1-2 sentences. Be encouraging but honest.
+ Use emoji and make it engaging. Don't use markdown headers, just the emoji labels above.
+ If there's limited data, acknowledge it and focus on what you can see."""
+
+     result = ask_llm(provider, api_key, prompt, model)
+     return result
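Since the AI module above exposes `handle_ask` and `generate_weekly_digest` as plain functions, a minimal programmatic usage sketch would look like the following. It is editorial, and it assumes the top-level `src` package layout declared in `pyproject.toml` and an API key already stored via `tracecli config`.

```python
# Editorial sketch: calling the AI helpers shown in the diff above directly.
from src.ai import generate_weekly_digest, handle_ask

# Natural-language question -> generated SQL -> summarized answer (printed via rich).
handle_ask("How many hours did I spend in my editor yesterday?")

# Weekly coaching digest; returns None when no API key is configured.
digest = generate_weekly_digest(days=7)
print(digest or "No digest available (missing API key or data).")
```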