zerottmm 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,176 @@
1
+ Metadata-Version: 2.4
2
+ Name: zerottmm
3
+ Version: 0.1.0
4
+ Summary: Time‑to‑Mental‑Model: a local‑first code reading assistant (Phase A)
5
+ Author-email: Gaurav Sood <contact@gsood.com>
6
+ License-Expression: MIT
7
+ Keywords: code-analysis,static-analysis,code-reading,python,ast
8
+ Classifier: Development Status :: 4 - Beta
9
+ Classifier: Intended Audience :: Developers
10
+ Classifier: Programming Language :: Python :: 3
11
+ Classifier: Programming Language :: Python :: 3.10
12
+ Classifier: Programming Language :: Python :: 3.11
13
+ Classifier: Programming Language :: Python :: 3.12
14
+ Classifier: Programming Language :: Python :: 3.13
15
+ Classifier: Topic :: Software Development :: Code Generators
16
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
17
+ Classifier: Topic :: Utilities
18
+ Requires-Python: >=3.10
19
+ Description-Content-Type: text/markdown
20
+ Provides-Extra: ui
21
+ Requires-Dist: streamlit>=1.23; extra == "ui"
22
+ Requires-Dist: openai>=1.0.0; extra == "ui"
23
+ Provides-Extra: test
24
+ Requires-Dist: pytest; extra == "test"
25
+ Provides-Extra: ai
26
+ Requires-Dist: openai>=1.0.0; extra == "ai"
27
+ Provides-Extra: all
28
+ Requires-Dist: streamlit>=1.23; extra == "all"
29
+ Requires-Dist: pytest; extra == "all"
30
+ Requires-Dist: openai>=1.0.0; extra == "all"
31
+
32
+ # ttmm: Time‑to‑Mental‑Model
33
+
34
+ `ttmm` is a local‑first code reading assistant designed to reduce the time it takes to build a mental model of a codebase. It provides static indexing, simple call graph navigation, hotspot detection and dynamic tracing. You can use it either from the command line or through a Streamlit web UI.
35
+
36
+ **New**: `ttmm` now supports remote repositories via Git URLs and GitIngest integration, making it easy to analyze any public Python repository without cloning manually.
37
+
38
+ ## Key features (Phase A)
39
+
40
+ * **Index your repository** – builds a lightweight SQLite database of all Python functions/methods, their definitions, references and coarse call edges using only the standard library.
41
+ * **Remote repository support** – analyze any GitHub, GitLab, or Bitbucket repository directly via URL or GitIngest links without manual cloning.
42
+ * **Hotspot detection** – computes a hotspot score by combining cyclomatic complexity and recent git churn to help you prioritise where to read first.
43
+ * **Static call graph navigation** – shows callers and callees for any symbol using conservative AST analysis. Attribute calls that cannot be resolved are marked as `<unresolved>`.
44
+ * **Keyword search** – a tiny TF‑IDF engine lets you ask a natural language question and returns a minimal reading set of relevant symbols.
45
+ * **Dynamic tracing** – run a module, function or script with `sys.settrace` to capture the actual call sequence executed at runtime and persist it in the database (a minimal sketch follows this list).
46
+
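+ As a concrete picture of the dynamic-tracing bullet above, here is a minimal sketch of the `sys.settrace` approach. The names `make_tracer` and `events` are illustrative assumptions; the actual implementation in `zerottmm.trace` decides which frames to keep and persists the call sequence to SQLite rather than an in-memory list.
+
+ ```python
+ import sys
+
+ def make_tracer(repo_root, events):
+     """Build a trace function that records calls to code under repo_root."""
+     def tracer(frame, event, arg):
+         if event == "call":
+             filename = frame.f_code.co_filename
+             # Skip stdlib and third-party frames that live outside the repository.
+             if filename.startswith(repo_root):
+                 events.append((filename, frame.f_code.co_name, frame.f_lineno))
+         return tracer  # returning the tracer keeps nested calls traced
+     return tracer
+
+ events = []
+ sys.settrace(make_tracer("/path/to/repo", events))
+ try:
+     pass  # run the module, function or script being traced here
+ finally:
+     sys.settrace(None)  # always detach the tracer
+ ```
+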
47
+ ## Installation
48
+
49
+ Requirements:
50
+
51
+ * Python 3.10 or later, matching the package's `requires-python` constraint
52
+ * A `git` executable in your `PATH` if you want churn‑based hotspot scores
53
+
54
+ Install from PyPI:
55
+
56
+ ```bash
57
+ pip install zerottmm
58
+ ```
59
+
60
+ Or install in development mode from this repository:
61
+
62
+ ```bash
63
+ pip install -e .
64
+ ```
65
+
66
+ To enable optional extras:
67
+
68
+ * `[ui]` – install `streamlit` and `openai` for the web UI with AI features
69
+ * `[test]` – install `pytest` for running the test suite
70
+ * `[ai]` – install `openai` for AI-enhanced analysis
71
+
72
+ For example:
73
+
74
+ ```bash
75
+ pip install "zerottmm[ui,test]"
76
+ ```
77
+
78
+ ## Command line usage
79
+
80
+ After installation, a `zerottmm` command is available:
81
+
82
+ ```bash
83
+ zerottmm index PATH_OR_URL # index a Python repository (local or remote)
84
+ zerottmm hotspots PATH # show the top hotspots (default 10)
85
+ zerottmm callers PATH SYMBOL
86
+ zerottmm callees PATH SYMBOL
87
+ zerottmm trace PATH [--module pkg.mod:func | --script file.py] [-- args...]
88
+ zerottmm answer PATH "your question"
89
+ ```
90
+
91
+ * **PATH_OR_URL** – local repository path, Git URL, or GitIngest URL
92
+ * **PATH** – local repository path that has been indexed previously
93
+ * **SYMBOL** – a fully‑qualified name like `package.module:Class.method` or `package.module:function`.
94
+ * **--module** – run a function or module entry point (e.g. `package.module:main`) and trace calls within the repository.
95
+ * **--script** – run an arbitrary Python script in the repository and trace calls.
96
+
97
+ Use `zerottmm --help` for full documentation.
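+
+ For example, to trace a single entry point and capture the calls it makes inside the repository (the `pkg.mod:main` path below is hypothetical; substitute an entry point from the indexed repo):
+
+ ```bash
+ # Arguments after -- are forwarded to the traced entry point.
+ zerottmm trace /path/to/repo --module pkg.mod:main -- --verbose
+ ```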
98
+
99
+ ## Examples
100
+
101
+ Here are some examples analyzing popular Python repositories:
102
+
103
+ ### Analyze the Python requests library
104
+ ```bash
105
+ # Index directly from GitHub
106
+ zerottmm index https://github.com/psf/requests.git
107
+
108
+ # Find hotspots (complex functions with high churn)
109
+ zerottmm hotspots /tmp/ttmm_repo_*/
110
+ # Output: PreparedRequest.prepare_body, super_len, RequestEncodingMixin._encode_files
111
+
112
+ # Ask natural language questions
113
+ zerottmm answer /tmp/ttmm_repo_*/ "how to make HTTP requests"
114
+ # Output: HTTPAdapter.send, Session.request, HTTPAdapter.cert_verify
115
+ ```
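+
+ The ordering of the hotspot list above comes from a score that combines cyclomatic complexity with recent git churn. The exact weighting is internal to `zerottmm`, so the formula below is an illustrative assumption rather than the shipped implementation:
+
+ ```python
+ import math
+
+ def hotspot_score(complexity: int, churn: int) -> float:
+     """Illustrative only: complex code that also changes often ranks highest.
+     Without git history churn is 0, and the score falls back to raw complexity."""
+     return complexity * (1.0 + math.log1p(churn))  # log damps extreme churn counts
+ ```
+
+ Either signal alone is noisy; combining them surfaces functions that are both hard to read and frequently edited.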
116
+
117
+ ### Analyze a mathematical optimization library
118
+ ```bash
119
+ # Index a specialized repo via GitIngest URL
120
+ zerottmm index "https://gitingest.com/?url=https://github.com/finite-sample/rank_preserving_calibration"
121
+
122
+ # Find the main algorithmic components
123
+ zerottmm answer /tmp/ttmm_repo_*/ "main calibration algorithm"
124
+ # Output: calibrate_dykstra, calibrate_admm, _isotonic_regression
125
+
126
+ # Explore function relationships
127
+ zerottmm callers /tmp/ttmm_repo_*/ "calibrate_dykstra"
128
+ # Shows all the places this core algorithm is used
129
+ ```
130
+
131
+ ### Analyze FastAPI core (subpath example)
132
+ ```bash
133
+ # Index just the FastAPI core module using GitIngest subpath
134
+ zerottmm index "https://gitingest.com/?url=https://github.com/tiangolo/fastapi&subpath=fastapi"
135
+
136
+ # Find entry points and main interfaces
137
+ zerottmm answer /tmp/ttmm_repo_*/ "main application interface"
138
+ ```
139
+
140
+ ## Streamlit UI
141
+
142
+ A simple web UI is provided under `app/app.py`. To run it locally:
143
+
144
+ ```bash
145
+ pip install -e ".[ui]"
146
+ streamlit run app/app.py
147
+ ```
148
+
149
+ The app allows you to index repositories (local or remote via GitIngest), explore hotspots, get AI-powered insights, and search interactively. Features include:
150
+
151
+ * **Repository indexing** from local paths, Git URLs, or GitIngest links
152
+ * **Automatic repository summary** with key metrics and analysis
153
+ * **AI-enhanced analysis** with OpenAI integration (optional)
154
+ * **Hotspot detection** and complexity analysis
155
+ * **Natural language search** over code symbols
156
+
157
+ The app is designed to run on [Streamlit Community Cloud](https://streamlit.io/cloud) – simply push this repository to GitHub and deploy the app by pointing to `app/app.py`. The `requirements.txt` file ensures all dependencies (including OpenAI) are automatically installed.
158
+
159
+ ## Development & tests
160
+
161
+ Tests live in `tests/test_ttmm.py` and cover indexing, hotspot scoring and search. To run them:
162
+
163
+ ```bash
164
+ pip install -e ".[test]"
165
+ pytest -q
166
+ ```
167
+
168
+ Continuous integration is configured via `.github/workflows/ci.yml` to run the test suite on the supported Python versions (3.10 through 3.13). If you fork this repository on GitHub, the workflow runs automatically.
169
+
170
+ ## Limitations
171
+
172
+ * Phase A supports Python only and uses conservative static analysis. Many dynamic method calls cannot be resolved statically; these appear as `<unresolved>` in the call graph.
173
+ * Hotspot scores require `git` to compute churn – if `git` is not installed or the directory is not a git repository, churn is assumed to be zero.
174
+ * Dynamic tracing only captures calls within the repository root. Calls to the standard library or external packages are ignored.
175
+
176
+ Future phases (not implemented here) would add richer language support, deeper type‑aware call resolution and integration with your editor.
@@ -0,0 +1,145 @@
1
+ # ttmm: Time‑to‑Mental‑Model
2
+
3
+ `ttmm` is a local‑first code reading assistant designed to reduce the time it takes to build a mental model of a codebase. It provides static indexing, simple call graph navigation, hotspot detection and dynamic tracing. You can use it either from the command line or through a Streamlit web UI.
4
+
5
+ **New**: `ttmm` now supports remote repositories via Git URLs and GitIngest integration, making it easy to analyze any public Python repository without cloning manually.
6
+
7
+ ## Key features (Phase A)
8
+
9
+ * **Index your repository** – builds a lightweight SQLite database of all Python functions/methods, their definitions, references and coarse call edges using only the standard library.
10
+ * **Remote repository support** – analyze any GitHub, GitLab, or Bitbucket repository directly via URL or GitIngest links without manual cloning.
11
+ * **Hotspot detection** – computes a hotspot score by combining cyclomatic complexity and recent git churn to help you prioritise where to read first.
12
+ * **Static call graph navigation** – shows callers and callees for any symbol using conservative AST analysis. Attribute calls that cannot be resolved are marked as `<unresolved>`.
13
+ * **Keyword search** – a tiny TF‑IDF engine lets you ask a natural language question and returns a minimal reading set of relevant symbols.
14
+ * **Dynamic tracing** – run a module, function or script with `sys.settrace` to capture the actual call sequence executed at runtime and persist it in the database (a minimal sketch follows this list).
15
+
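+ As a concrete picture of the dynamic-tracing bullet above, here is a minimal sketch of the `sys.settrace` approach. The names `make_tracer` and `events` are illustrative assumptions; the actual implementation in `zerottmm.trace` decides which frames to keep and persists the call sequence to SQLite rather than an in-memory list.
+
+ ```python
+ import sys
+
+ def make_tracer(repo_root, events):
+     """Build a trace function that records calls to code under repo_root."""
+     def tracer(frame, event, arg):
+         if event == "call":
+             filename = frame.f_code.co_filename
+             # Skip stdlib and third-party frames that live outside the repository.
+             if filename.startswith(repo_root):
+                 events.append((filename, frame.f_code.co_name, frame.f_lineno))
+         return tracer  # returning the tracer keeps nested calls traced
+     return tracer
+
+ events = []
+ sys.settrace(make_tracer("/path/to/repo", events))
+ try:
+     pass  # run the module, function or script being traced here
+ finally:
+     sys.settrace(None)  # always detach the tracer
+ ```
+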
16
+ ## Installation
17
+
18
+ Requirements:
19
+
20
+ * Python 3.10 or later, matching the package's `requires-python` constraint
21
+ * A `git` executable in your `PATH` if you want churn‑based hotspot scores
22
+
23
+ Install from PyPI:
24
+
25
+ ```bash
26
+ pip install zerottmm
27
+ ```
28
+
29
+ Or install in development mode from this repository:
30
+
31
+ ```bash
32
+ pip install -e .
33
+ ```
34
+
35
+ To enable optional extras:
36
+
37
+ * `[ui]` – install `streamlit` and `openai` for the web UI with AI features
38
+ * `[test]` – install `pytest` for running the test suite
39
+ * `[ai]` – install `openai` for AI-enhanced analysis
40
+
41
+ For example:
42
+
43
+ ```bash
44
+ pip install "zerottmm[ui,test]"
45
+ ```
46
+
47
+ ## Command line usage
48
+
49
+ After installation, a `zerottmm` command is available:
50
+
51
+ ```bash
52
+ zerottmm index PATH_OR_URL # index a Python repository (local or remote)
53
+ zerottmm hotspots PATH # show the top hotspots (default 10)
54
+ zerottmm callers PATH SYMBOL
55
+ zerottmm callees PATH SYMBOL
56
+ zerottmm trace PATH [--module pkg.mod:func | --script file.py] [-- args...]
57
+ zerottmm answer PATH "your question"
58
+ ```
59
+
60
+ * **PATH_OR_URL** – local repository path, Git URL, or GitIngest URL
61
+ * **PATH** – local repository path that has been indexed previously
62
+ * **SYMBOL** – a fully‑qualified name like `package.module:Class.method` or `package.module:function`.
63
+ * **--module** – run a function or module entry point (e.g. `package.module:main`) and trace calls within the repository.
64
+ * **--script** – run an arbitrary Python script in the repository and trace calls.
65
+
66
+ Use `zerottmm --help` for full documentation.
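+
+ For example, to trace a single entry point and capture the calls it makes inside the repository (the `pkg.mod:main` path below is hypothetical; substitute an entry point from the indexed repo):
+
+ ```bash
+ # Arguments after -- are forwarded to the traced entry point.
+ zerottmm trace /path/to/repo --module pkg.mod:main -- --verbose
+ ```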
67
+
68
+ ## Examples
69
+
70
+ Here are some examples analyzing popular Python repositories:
71
+
72
+ ### Analyze the Python requests library
73
+ ```bash
74
+ # Index directly from GitHub
75
+ zerottmm index https://github.com/psf/requests.git
76
+
77
+ # Find hotspots (complex functions with high churn)
78
+ zerottmm hotspots /tmp/ttmm_repo_*/
79
+ # Output: PreparedRequest.prepare_body, super_len, RequestEncodingMixin._encode_files
80
+
81
+ # Ask natural language questions
82
+ zerottmm answer /tmp/ttmm_repo_*/ "how to make HTTP requests"
83
+ # Output: HTTPAdapter.send, Session.request, HTTPAdapter.cert_verify
84
+ ```
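+
+ The ordering of the hotspot list above comes from a score that combines cyclomatic complexity with recent git churn. The exact weighting is internal to `zerottmm`, so the formula below is an illustrative assumption rather than the shipped implementation:
+
+ ```python
+ import math
+
+ def hotspot_score(complexity: int, churn: int) -> float:
+     """Illustrative only: complex code that also changes often ranks highest.
+     Without git history churn is 0, and the score falls back to raw complexity."""
+     return complexity * (1.0 + math.log1p(churn))  # log damps extreme churn counts
+ ```
+
+ Either signal alone is noisy; combining them surfaces functions that are both hard to read and frequently edited.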
85
+
86
+ ### Analyze a mathematical optimization library
87
+ ```bash
88
+ # Index a specialized repo via GitIngest URL
89
+ zerottmm index "https://gitingest.com/?url=https://github.com/finite-sample/rank_preserving_calibration"
90
+
91
+ # Find the main algorithmic components
92
+ zerottmm answer /tmp/ttmm_repo_*/ "main calibration algorithm"
93
+ # Output: calibrate_dykstra, calibrate_admm, _isotonic_regression
94
+
95
+ # Explore function relationships
96
+ zerottmm callers /tmp/ttmm_repo_*/ "calibrate_dykstra"
97
+ # Shows all the places this core algorithm is used
98
+ ```
99
+
100
+ ### Analyze FastAPI core (subpath example)
101
+ ```bash
102
+ # Index just the FastAPI core module using GitIngest subpath
103
+ zerottmm index "https://gitingest.com/?url=https://github.com/tiangolo/fastapi&subpath=fastapi"
104
+
105
+ # Find entry points and main interfaces
106
+ zerottmm answer /tmp/ttmm_repo_*/ "main application interface"
107
+ ```
108
+
109
+ ## Streamlit UI
110
+
111
+ A simple web UI is provided under `app/app.py`. To run it locally:
112
+
113
+ ```bash
114
+ pip install -e ".[ui]"
115
+ streamlit run app/app.py
116
+ ```
117
+
118
+ The app allows you to index repositories (local or remote via GitIngest), explore hotspots, get AI-powered insights, and search interactively. Features include:
119
+
120
+ * **Repository indexing** from local paths, Git URLs, or GitIngest links
121
+ * **Automatic repository summary** with key metrics and analysis
122
+ * **AI-enhanced analysis** with OpenAI integration (optional)
123
+ * **Hotspot detection** and complexity analysis
124
+ * **Natural language search** over code symbols
125
+
126
+ The app is designed to run on [Streamlit Community Cloud](https://streamlit.io/cloud) – simply push this repository to GitHub and deploy the app by pointing to `app/app.py`. The `requirements.txt` file ensures all dependencies (including OpenAI) are automatically installed.
127
+
128
+ ## Development & tests
129
+
130
+ Tests live in `tests/test_ttmm.py` and cover indexing, hotspot scoring and search. To run them:
131
+
132
+ ```bash
133
+ pip install -e ".[test]"
134
+ pytest -q
135
+ ```
136
+
137
+ Continuous integration is configured via `.github/workflows/ci.yml` to run the test suite on the supported Python versions (3.10 through 3.13). If you fork this repository on GitHub, the workflow runs automatically.
138
+
139
+ ## Limitations
140
+
141
+ * Phase A supports Python only and uses conservative static analysis. Many dynamic method calls cannot be resolved statically; these appear as `<unresolved>` in the call graph.
142
+ * Hotspot scores require `git` to compute churn – if `git` is not installed or the directory is not a git repository, churn is assumed to be zero.
143
+ * Dynamic tracing only captures calls within the repository root. Calls to the standard library or external packages are ignored.
144
+
145
+ Future phases (not implemented here) would add richer language support, deeper type‑aware call resolution and integration with your editor.
@@ -0,0 +1,44 @@
1
+ [build-system]
2
+ requires = ["setuptools>=65", "wheel"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [project]
6
+ name = "zerottmm"
7
+ version = "0.1.0"
8
+ description = "Time‑to‑Mental‑Model: a local‑first code reading assistant (Phase A)"
9
+ readme = "README.md"
10
+ requires-python = ">=3.10"
11
+ authors = [
12
+ { name = "Gaurav Sood", email = "contact@gsood.com" }
13
+ ]
14
+ license = "MIT"
15
+ keywords = ["code-analysis", "static-analysis", "code-reading", "python", "ast"]
16
+ classifiers = [
17
+ "Development Status :: 4 - Beta",
18
+ "Intended Audience :: Developers",
19
+ "Programming Language :: Python :: 3",
20
+ "Programming Language :: Python :: 3.10",
21
+ "Programming Language :: Python :: 3.11",
22
+ "Programming Language :: Python :: 3.12",
23
+ "Programming Language :: Python :: 3.13",
24
+ "Topic :: Software Development :: Code Generators",
25
+ "Topic :: Software Development :: Libraries :: Python Modules",
26
+ "Topic :: Utilities",
27
+ ]
28
+
29
+ # core dependencies – pure standard library so far
30
+ dependencies = []
31
+
32
+ [project.optional-dependencies]
33
+ # extras for the Streamlit UI and testing
34
+ ui = ["streamlit>=1.23", "openai>=1.0.0"] # Include OpenAI for UI
35
+ test = ["pytest"]
36
+ ai = ["openai>=1.0.0"] # Future AI integration
37
+ all = ["streamlit>=1.23", "pytest", "openai>=1.0.0"]
38
+
39
+ [project.scripts]
40
+ zerottmm = "zerottmm.cli:main"
41
+
42
+ [tool.setuptools.packages.find]
43
+ include = ["zerottmm*"]
44
+ exclude = ["app*"]
@@ -0,0 +1,4 @@
1
+ [egg_info]
2
+ tag_build =
3
+ tag_date = 0
4
+
@@ -0,0 +1,91 @@
1
+ """Basic tests for the ttmm package.
2
+
3
+ These tests create a small temporary repository on the fly, index it,
4
+ and verify that symbol extraction, call resolution, hotspot scoring and
5
+ search behave as expected. They are intentionally simple and do not
6
+ cover every corner case; the goal is to provide smoke coverage for
7
+ continuous integration.
8
+ """
9
+
10
+ from __future__ import annotations
11
+
12
+ import os
13
+ import tempfile
14
+ import textwrap
15
+
16
+ import pytest
17
+
18
+ from zerottmm import index as ttmm_index
19
+ from zerottmm import store as ttmm_store
20
+ from zerottmm import search as ttmm_search
21
+
22
+
23
+ def create_sample_repo(tmp_path) -> str:
24
+ """Create a small sample Python project for testing.
25
+
26
+ The project has two files:
27
+
28
+ * ``alpha.py`` defines ``foo`` which calls ``bar``; ``bar`` does nothing.
29
+ * ``beta.py`` defines a class ``Widget`` with method ``ping`` which
30
+ calls ``foo``.
31
+ """
32
+ alpha = textwrap.dedent(
33
+ """
34
+ def foo():
35
+ '''Foo does something and calls bar.'''
36
+ bar()
37
+
38
+ def bar():
39
+ return None
40
+ """
41
+ )
42
+ beta = textwrap.dedent(
43
+ """
44
+ class Widget:
45
+ def ping(self):
46
+ # ping calls foo from alpha
47
+ from alpha import foo
48
+ foo()
49
+ """
50
+ )
51
+ repo = tmp_path
52
+ (repo / "alpha.py").write_text(alpha)
53
+ (repo / "beta.py").write_text(beta)
54
+ return str(repo)
55
+
56
+
57
+ def test_index_and_hotspots_and_call_graph(tmp_path):
58
+ repo = create_sample_repo(tmp_path)
59
+ ttmm_index.index_repo(repo)
60
+ conn = ttmm_store.connect(repo)
61
+ try:
62
+ # Check that foo and bar and Widget.ping are present
63
+ sid_foo = ttmm_store.resolve_symbol(conn, "alpha:foo")
64
+ sid_bar = ttmm_store.resolve_symbol(conn, "alpha:bar")
65
+ sid_ping = ttmm_store.resolve_symbol(conn, "beta:Widget.ping")
66
+ assert sid_foo and sid_bar and sid_ping
67
+ # Check callees: foo should call bar
68
+ callees = ttmm_store.get_callees(conn, sid_foo)
69
+ assert any(name == "alpha:bar" and not unresolved for name, path, unresolved in callees)
70
+ # Check callers: bar should have foo as caller
71
+ callers_bar = ttmm_store.get_callers(conn, sid_bar)
72
+ assert any(name == "alpha:foo" for name, _ in callers_bar)
73
+ # Hotspots should include these three symbols
74
+ hotspots = ttmm_store.get_hotspots(conn, limit=10)
75
+ qualnames = [row["qualname"] for row in hotspots]
76
+ assert "alpha:foo" in qualnames
77
+ assert "alpha:bar" in qualnames
78
+ assert "beta:Widget.ping" in qualnames
79
+ finally:
80
+ ttmm_store.close(conn)
81
+
82
+
83
+ def test_search_answers(tmp_path):
84
+ repo = create_sample_repo(tmp_path)
85
+ ttmm_index.index_repo(repo)
86
+ # Ask about bar
87
+ results = ttmm_search.answer_question(repo, "call bar", top=3, include_scores=False)
88
+ # Expect foo to appear because foo calls bar
89
+ qualnames = [r[0] for r in results]
90
+ assert any(qn.endswith(":foo") for qn in qualnames)
91
+ assert any(qn.endswith(":bar") for qn in qualnames)
@@ -0,0 +1,38 @@
1
+ """Top‑level package for ttmm.
2
+
3
+ `ttmm` (Time‑to‑Mental‑Model) helps you build a mental model of a Python codebase
4
+ faster. It can index a repository, compute hotspots, navigate static call graphs,
5
+ run dynamic traces and answer natural language questions about your code. The core
6
+ functionality lives in submodules:
7
+
8
+ * `zerottmm.index` – parse and index a Python repository
9
+ * `zerottmm.store` – SQLite persistence layer
10
+ * `zerottmm.metrics` – compute cyclomatic complexity and other metrics
11
+ * `zerottmm.gitutils` – git churn calculations
12
+ * `zerottmm.trace` – runtime tracing using `sys.settrace`
13
+ * `zerottmm.search` – tiny TF‑IDF search over your codebase
14
+ * `zerottmm.cli` – command line entry point
15
+
16
+ Importing this package will expose the `__version__` attribute. For most use cases
17
+ you should call into `zerottmm.cli` via the `zerottmm` command line, or import functions
18
+ from the specific submodules.
19
+ """
20
+
21
+ from importlib.metadata import version, PackageNotFoundError
22
+
23
+ try: # pragma: no cover - during development version metadata may be missing
24
+ __version__ = version(__name__)
25
+ except PackageNotFoundError:
26
+ __version__ = "0.0.0"
27
+
28
+ __all__ = [
29
+ "index",
30
+ "store",
31
+ "metrics",
32
+ "gitutils",
33
+ "trace",
34
+ "search",
35
+ "gitingest",
36
+ "ai_analysis",
37
+ "cli",
38
+ ]
@@ -0,0 +1,149 @@
1
+ """AI-powered code analysis using OpenAI API.
2
+
3
+ This module provides AI-enhanced analysis capabilities for ttmm,
4
+ allowing users to get natural language explanations and insights
5
+ about their codebases using OpenAI's GPT models.
6
+ """
7
+
8
+ from __future__ import annotations
9
+
10
+ from typing import Dict, List, Optional
11
+
12
+
13
+ def analyze_code_with_ai(
14
+ api_key: str,
15
+ analysis_type: str,
16
+ hotspots_context: List[str],
17
+ repo_info: Dict,
18
+ custom_prompt: Optional[str] = None,
19
+ ) -> str:
20
+ """Analyze code using OpenAI API.
21
+
22
+ Parameters
23
+ ----------
24
+ api_key : str
25
+ OpenAI API key
26
+ analysis_type : str
27
+ Type of analysis to perform
28
+ hotspots_context : List[str]
29
+ List of hotspot descriptions
30
+ repo_info : Dict
31
+ Repository metadata
32
+ custom_prompt : str, optional
33
+ Custom analysis prompt
34
+
35
+ Returns
36
+ -------
37
+ str
38
+ AI analysis result
39
+ """
40
+ try:
41
+ import openai
42
+ except ImportError:
43
+ return ("❌ **OpenAI library not installed**\n\n"
44
+ "Please install it with: `pip install openai`")
45
+
46
+ # Set up OpenAI client
47
+ client = openai.OpenAI(api_key=api_key)
48
+
49
+ # Prepare context
50
+ repo_context = f"""
51
+ Repository Information:
52
+ - Path: {repo_info.get('path', 'Unknown')}
53
+ - Remote URL: {repo_info.get('remote_url', 'Local repository')}
54
+ - Branch: {repo_info.get('branch', 'Unknown')}
55
+ - Commit: {repo_info.get('commit', 'Unknown')}
56
+
57
+ Top Code Hotspots (high complexity functions):
58
+ {chr(10).join(hotspots_context[:5])}
59
+ """
60
+
61
+ # Define analysis prompts
62
+ analysis_prompts = {
63
+ "Explain hotspots": (
64
+ "Analyze the code hotspots listed above. Explain what makes these functions "
65
+ "complex and suggest potential improvements or areas that might need attention. "
66
+ "Focus on maintainability and potential refactoring opportunities."
67
+ ),
68
+ "Summarize architecture": (
69
+ "Based on the hotspots and repository information, provide a high-level "
70
+ "architectural summary of this codebase. Identify the main components, "
71
+ "patterns, and overall structure."
72
+ ),
73
+ "Identify design patterns": (
74
+ "Analyze the code hotspots and identify any design patterns being used. "
75
+ "Comment on the appropriateness of these patterns and suggest alternatives "
76
+ "if beneficial."
77
+ ),
78
+ "Find potential issues": (
79
+ "Review the code hotspots for potential issues like performance bottlenecks, "
80
+ "security concerns, maintainability problems, or technical debt. Provide "
81
+ "specific recommendations."
82
+ ),
83
+ "Custom analysis": custom_prompt or "Provide a general analysis of the codebase.",
84
+ }
85
+
86
+ prompt = analysis_prompts.get(analysis_type, analysis_prompts["Custom analysis"])
87
+
88
+ try:
89
+ response = client.chat.completions.create(
90
+ model="gpt-3.5-turbo",
91
+ messages=[
92
+ {
93
+ "role": "system",
94
+ "content": (
95
+ "You are a senior software engineer helping to analyze a Python "
96
+ "codebase. Provide clear, actionable insights based on the code "
97
+ "metrics and hotspots provided. Be concise but thorough."
98
+ )
99
+ },
100
+ {
101
+ "role": "user",
102
+ "content": f"{repo_context}\n\nAnalysis Request: {prompt}"
103
+ }
104
+ ],
105
+ max_tokens=1000,
106
+ temperature=0.3,
107
+ )
108
+
109
+ return response.choices[0].message.content or "No analysis generated."
110
+
111
+ except openai.OpenAIError as e:
112
+ return f"❌ **OpenAI API Error**: {str(e)}"
113
+ except Exception as e:
114
+ return f"❌ **Analysis Error**: {str(e)}"
115
+
116
+
117
+ def test_openai_connection(api_key: str) -> tuple[bool, str]:
118
+ """Test if OpenAI API key is valid.
119
+
120
+ Parameters
121
+ ----------
122
+ api_key : str
123
+ OpenAI API key to test
124
+
125
+ Returns
126
+ -------
127
+ tuple[bool, str]
128
+ (success, message)
129
+ """
130
+ try:
131
+ import openai
132
+ except ImportError:
133
+ return False, "OpenAI library not installed"
134
+
135
+ try:
136
+ client = openai.OpenAI(api_key=api_key)
137
+ # Make a minimal API call to test the key
138
+ client.chat.completions.create(
139
+ model="gpt-3.5-turbo",
140
+ messages=[{"role": "user", "content": "Hello"}],
141
+ max_tokens=5
142
+ )
143
+ return True, "API key is valid"
144
+ except openai.AuthenticationError:
145
+ return False, "Invalid API key"
146
+ except openai.OpenAIError as e:
147
+ return False, f"OpenAI API error: {str(e)}"
148
+ except Exception as e:
149
+ return False, f"Connection error: {str(e)}"