prepforge 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Himanshu Arora
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,149 @@
+ Metadata-Version: 2.4
+ Name: prepforge
+ Version: 0.1.0
+ Summary: AI Exam Preparation System with TUI, GUI, and Streamlit
+ Author: Himanshu Arora
+ License-Expression: MIT
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Operating System :: OS Independent
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: transformers
+ Requires-Dist: peft
+ Requires-Dist: rich
+ Requires-Dist: prompt_toolkit
+ Provides-Extra: gui
+ Requires-Dist: PyQt6; extra == "gui"
+ Provides-Extra: web
+ Requires-Dist: streamlit; extra == "web"
+ Provides-Extra: train
+ Requires-Dist: datasets; extra == "train"
+ Requires-Dist: trl; extra == "train"
+ Provides-Extra: gpu
+ Requires-Dist: torch; extra == "gpu"
+ Provides-Extra: fast
+ Requires-Dist: unsloth; extra == "fast"
+ Dynamic: license-file
+
+ # šŸŽ“ PrepForge
+
+ PrepForge is an AI-powered exam preparation system with multiple interfaces and support for base LLMs and LoRA adapters.
+
+ ---
+
+ ## šŸ“¦ Installation
+
+ Install directly from PyPI:
+
+ ```bash
+ pip install prepforge
+ ```
+
+ PyPI: https://pypi.org/project/prepforge/0.1.0/
+
+ ---
+
+ ## ✨ Features
+
+ - šŸ–„ Terminal UI (TUI)
+ - 🪟 Desktop GUI (PyQt6)
+ - 🌐 Web interface (Streamlit)
+ - 🧠 Supports base models and LoRA adapters
+ - šŸ‹ļø Built-in training pipeline (QLoRA / Unsloth)
+ - ⚔ Offline inference support
+
+ ---
+
+ ## šŸš€ Usage
+
+ ### Terminal Interface (TUI)
+ ```bash
+ prepforge run tui
+ ```
+
+ ### Desktop GUI
+ ```bash
+ prepforge run gui
+ ```
+
+ ### Web Interface
+ ```bash
+ prepforge run streamlit
+ ```
+
+ ---
+
+ ## āš™ļø Configuration
+
+ Set default model:
+
+ ```bash
+ prepforge config --model <model_name_or_path>
+ ```
+
+ Set LoRA adapter:
+
+ ```bash
+ prepforge config --lora <adapter_path>
+ ```
+
+ ---
+
+ ## šŸ‹ļø Training
+
+ ```bash
+ prepforge train \
+     --dataset dataset.jsonl \
+     --epochs 3 \
+     --output trained_model
+ ```
+
+ ---
+
+ ## šŸ“Š Dataset Format
+
+ ```json
+ {
+   "instruction": "Explain Newton's laws",
+   "input": "",
+   "output": "Newton's laws describe motion..."
+ }
+ ```
+
+ ---
+
+ ## šŸ“‚ Project Structure
+
+ ```
+ major_project/
+ ā”œā”€ā”€ core/
+ │   ā”œā”€ā”€ model.py
+ │   ā”œā”€ā”€ train.py
+ │   └── utils.py
+ ā”œā”€ā”€ tui_app.py
+ ā”œā”€ā”€ gui_app.py
+ ā”œā”€ā”€ streamlit_app.py
+ ā”œā”€ā”€ cli.py
+ └── config.py
+ ```
+
+ ---
+
+ ## āš ļø Notes
+
+ - Models are not bundled with the package
+ - Users must download base models separately
+ - GPU (CUDA) is recommended for best performance
+
+ ---
+
+ ## šŸ§‘ā€šŸ’» Author
+
+ Himanshu Arora
+
+ ---
+
+ ## šŸ“œ License
+
+ MIT License
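
The instruction/input/output records described in the "Dataset Format" section above can be sanity-checked before training. The sketch below is illustrative only: `validate_record` and `REQUIRED_FIELDS` are hypothetical names, not part of the prepforge package; only the three field names come from the README.

```python
import json

# Field names taken from the README's "Dataset Format" section;
# the validator itself is a hypothetical helper, not prepforge API.
REQUIRED_FIELDS = ("instruction", "input", "output")


def validate_record(line: str) -> dict:
    """Parse one JSONL line and check the expected string fields exist."""
    record = json.loads(line)
    for field in REQUIRED_FIELDS:
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], str):
            raise ValueError(f"field {field!r} must be a string")
    return record


sample = '{"instruction": "Explain Newton\'s laws", "input": "", "output": "Newton\'s laws describe motion..."}'
record = validate_record(sample)
print(record["instruction"])
```

Running each line of a `dataset.jsonl` through such a check catches malformed rows early, before they reach the training pipeline.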
@@ -0,0 +1,121 @@
+ # šŸŽ“ PrepForge
+
+ PrepForge is an AI-powered exam preparation system with multiple interfaces and support for base LLMs and LoRA adapters.
+
+ ---
+
+ ## šŸ“¦ Installation
+
+ Install directly from PyPI:
+
+ ```bash
+ pip install prepforge
+ ```
+
+ PyPI: https://pypi.org/project/prepforge/0.1.0/
+
+ ---
+
+ ## ✨ Features
+
+ - šŸ–„ Terminal UI (TUI)
+ - 🪟 Desktop GUI (PyQt6)
+ - 🌐 Web interface (Streamlit)
+ - 🧠 Supports base models and LoRA adapters
+ - šŸ‹ļø Built-in training pipeline (QLoRA / Unsloth)
+ - ⚔ Offline inference support
+
+ ---
+
+ ## šŸš€ Usage
+
+ ### Terminal Interface (TUI)
+ ```bash
+ prepforge run tui
+ ```
+
+ ### Desktop GUI
+ ```bash
+ prepforge run gui
+ ```
+
+ ### Web Interface
+ ```bash
+ prepforge run streamlit
+ ```
+
+ ---
+
+ ## āš™ļø Configuration
+
+ Set default model:
+
+ ```bash
+ prepforge config --model <model_name_or_path>
+ ```
+
+ Set LoRA adapter:
+
+ ```bash
+ prepforge config --lora <adapter_path>
+ ```
+
+ ---
+
+ ## šŸ‹ļø Training
+
+ ```bash
+ prepforge train \
+     --dataset dataset.jsonl \
+     --epochs 3 \
+     --output trained_model
+ ```
+
+ ---
+
+ ## šŸ“Š Dataset Format
+
+ ```json
+ {
+   "instruction": "Explain Newton's laws",
+   "input": "",
+   "output": "Newton's laws describe motion..."
+ }
+ ```
+
+ ---
+
+ ## šŸ“‚ Project Structure
+
+ ```
+ major_project/
+ ā”œā”€ā”€ core/
+ │   ā”œā”€ā”€ model.py
+ │   ā”œā”€ā”€ train.py
+ │   └── utils.py
+ ā”œā”€ā”€ tui_app.py
+ ā”œā”€ā”€ gui_app.py
+ ā”œā”€ā”€ streamlit_app.py
+ ā”œā”€ā”€ cli.py
+ └── config.py
+ ```
+
+ ---
+
+ ## āš ļø Notes
+
+ - Models are not bundled with the package
+ - Users must download base models separately
+ - GPU (CUDA) is recommended for best performance
+
+ ---
+
+ ## šŸ§‘ā€šŸ’» Author
+
+ Himanshu Arora
+
+ ---
+
+ ## šŸ“œ License
+
+ MIT License
File without changes
@@ -0,0 +1,186 @@
+ import argparse
+ import subprocess
+ import os
+ import sys
+
+
+ def main():
+     parser = argparse.ArgumentParser(
+         prog="prepforge",
+         description="prepforge - Multi-interface AI assistant with training support",
+     )
+
+     subparsers = parser.add_subparsers(dest="command", help="Available commands")
+     subparsers.required = True
+
+     # -------------------------
+     # RUN APPS
+     # -------------------------
+     run_parser = subparsers.add_parser(
+         "run",
+         help="Run the application (tui | gui | streamlit)",
+         description=(
+             "Run prepforge in one of the following modes:\n\n"
+             "  tui        Terminal interface\n"
+             "  gui        Desktop application\n"
+             "  streamlit  Web interface\n"
+         ),
+         formatter_class=argparse.RawTextHelpFormatter,
+     )
+
+     run_parser.add_argument(
+         "mode",
+         choices=["tui", "gui", "streamlit"],
+         help="Mode to run",
+     )
+
+     # -------------------------
+     # TRAIN
+     # -------------------------
+     train_parser = subparsers.add_parser(
+         "train",
+         help="Train a LoRA adapter",
+         description=(
+             "Train a model using your dataset\n\n"
+             "Dataset control options:\n"
+             "  --limit N   Train on the first N samples\n"
+             "  --subset F  Train on a fraction (0.1 = 10%)\n\n"
+             "Examples:\n"
+             "  prepforge train --dataset data.jsonl\n"
+             "  prepforge train --dataset data.jsonl --limit 500\n"
+             "  prepforge train --dataset data.jsonl --subset 0.05\n"
+         ),
+         formatter_class=argparse.RawTextHelpFormatter,
+     )
+
+     train_parser.add_argument(
+         "--dataset",
+         required=True,
+         help="Path to dataset file (JSON/JSONL format)",
+     )
+
+     train_parser.add_argument(
+         "--output",
+         default="trained_model",
+         help="Output directory for the trained model",
+     )
+
+     train_parser.add_argument(
+         "--epochs",
+         type=int,
+         default=2,
+         help="Number of training epochs",
+     )
+
+     train_parser.add_argument(
+         "--lr",
+         type=float,
+         default=2e-4,
+         help="Learning rate",
+     )
+
+     # šŸ”„ --limit and --subset are mutually exclusive
+     group = train_parser.add_mutually_exclusive_group()
+
+     group.add_argument(
+         "--limit",
+         type=int,
+         help="Limit number of samples (e.g. 500)",
+     )
+
+     group.add_argument(
+         "--subset",
+         type=float,
+         help="Use a fraction of the dataset (0.05 = 5%%)",
+     )
+
+     # -------------------------
+     # CONFIG
+     # -------------------------
+     config_parser = subparsers.add_parser(
+         "config",
+         help="Set default model or LoRA",
+         description=(
+             "Configure the default model and LoRA adapter\n\n"
+             "Examples:\n"
+             "  prepforge config --model meta-llama/Llama-3-8B\n"
+             "  prepforge config --lora ~/adapter\n"
+         ),
+         formatter_class=argparse.RawTextHelpFormatter,
+     )
+
+     config_parser.add_argument(
+         "--model",
+         help="Set default base model",
+     )
+
+     config_parser.add_argument(
+         "--lora",
+         help="Set default LoRA adapter path",
+     )
+
+     args = parser.parse_args()
+
+     try:
+         # -------------------------
+         # RUN MODES
+         # -------------------------
+         if args.command == "run":
+             if args.mode == "tui":
+                 from .tui_app import main
+
+                 main()
+
+             elif args.mode == "gui":
+                 from .gui_app import main
+
+                 main()
+
+             elif args.mode == "streamlit":
+                 base_dir = os.path.dirname(__file__)
+                 app_path = os.path.join(base_dir, "streamlit_app.py")
+
+                 subprocess.run([sys.executable, "-m", "streamlit", "run", app_path])
+
+         # -------------------------
+         # TRAIN
+         # -------------------------
+         elif args.command == "train":
+             from .core.train import train_model
+
+             train_model(
+                 dataset_path=args.dataset,
+                 output_dir=args.output,
+                 epochs=args.epochs,
+                 lr=args.lr,
+                 limit=args.limit,
+                 subset=args.subset,
+             )
+
+         # -------------------------
+         # CONFIG
+         # -------------------------
+         elif args.command == "config":
+             from .core.config_store import load_user_config, save_user_config
+
+             config = load_user_config()
+
+             if args.model:
+                 config["model"] = args.model
+                 print(f"āœ… Default model set to: {args.model}")
+
+             if args.lora:
+                 config["lora"] = args.lora
+                 print(f"āœ… Default LoRA set to: {args.lora}")
+
+             if not args.model and not args.lora:
+                 print("āš ļø No changes provided. Use --model or --lora")
+
+             save_user_config(config)
+
+     except Exception as e:
+         print(f"\nāŒ Error: {e}\n")
+
+
+ if __name__ == "__main__":
+     main()
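
The mutually exclusive `--limit` / `--subset` pair in the `train` subcommand above can be reproduced in a few lines to show why argparse is a good fit here: the conflict is rejected by the parser itself, before any training code runs. The parser below is a stripped-down illustration, not the shipped CLI.

```python
import argparse

# Minimal reproduction of the train subcommand's mutually exclusive
# dataset-control options; only the option names mirror the real CLI.
parser = argparse.ArgumentParser(prog="prepforge-demo")
group = parser.add_mutually_exclusive_group()
group.add_argument("--limit", type=int)
group.add_argument("--subset", type=float)

# One flag at a time is fine; the other stays None.
args = parser.parse_args(["--limit", "500"])
print(args.limit, args.subset)

# Passing both flags makes argparse print a usage error and exit(2).
try:
    parser.parse_args(["--limit", "500", "--subset", "0.05"])
except SystemExit as e:
    print("rejected, exit code:", e.code)
```

Because the rejection happens at parse time, `train_model` never needs to check for the conflicting combination itself.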
@@ -0,0 +1,17 @@
+ import os
+ from major_project.core.config_store import load_user_config
+
+ DEFAULT_MODEL = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"
+ DEFAULT_LORA = None
+ DEFAULT_APP_NAME = "PrepForge"
+ MAX_HISTORY = 6
+
+
+ def get_model():
+     user_config = load_user_config()
+     return user_config.get("model") or os.getenv("MENTORAI_MODEL") or DEFAULT_MODEL
+
+
+ def get_lora():
+     user_config = load_user_config()
+     return user_config.get("lora") or os.getenv("MENTORAI_LORA") or DEFAULT_LORA
File without changes
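
The `get_model` function above resolves its value through a three-level precedence chain: user config file, then the `MENTORAI_MODEL` environment variable, then the built-in default. A minimal sketch of that chain, isolated from the config file (`resolve_model` is a hypothetical stand-in, not the package's function):

```python
import os

DEFAULT_MODEL = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"


def resolve_model(user_config: dict) -> str:
    # Same precedence as config.get_model(): user config wins,
    # then the MENTORAI_MODEL environment variable, then the default.
    return user_config.get("model") or os.getenv("MENTORAI_MODEL") or DEFAULT_MODEL


os.environ.pop("MENTORAI_MODEL", None)
print(resolve_model({}))                     # falls through to DEFAULT_MODEL
print(resolve_model({"model": "my/model"}))  # user config wins

os.environ["MENTORAI_MODEL"] = "env/model"
print(resolve_model({}))                     # env var beats the default
```

Note the `or`-chain treats empty strings like missing values, so a blank `"model": ""` entry in the config file also falls through to the next source.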
@@ -0,0 +1,20 @@
+ import os
+ import json
+
+ CONFIG_DIR = os.path.expanduser("~/.prepforge")
+ CONFIG_FILE = os.path.join(CONFIG_DIR, "config.json")
+
+
+ def load_user_config():
+     if not os.path.exists(CONFIG_FILE):
+         return {}
+
+     with open(CONFIG_FILE, "r") as f:
+         return json.load(f)
+
+
+ def save_user_config(config):
+     os.makedirs(CONFIG_DIR, exist_ok=True)
+
+     with open(CONFIG_FILE, "w") as f:
+         json.dump(config, f, indent=2)
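
The save/load pair above round-trips cleanly: a missing file reads back as an empty dict, and anything saved reads back unchanged. The sketch below demonstrates this against a temporary directory so the user's real config file is never touched; `save_config`/`load_config` are illustrative copies parameterized on the path, not the module's functions.

```python
import json
import os
import tempfile


def save_config(config, config_file):
    # Mirrors save_user_config: create the directory, write indented JSON.
    os.makedirs(os.path.dirname(config_file), exist_ok=True)
    with open(config_file, "w") as f:
        json.dump(config, f, indent=2)


def load_config(config_file):
    # Mirrors load_user_config: a missing file means an empty config.
    if not os.path.exists(config_file):
        return {}
    with open(config_file, "r") as f:
        return json.load(f)


with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "prepforge", "config.json")
    print(load_config(path))                     # {} before anything is saved
    save_config({"model": "my/model"}, path)
    print(load_config(path))                     # round-trips the saved dict
```

Keeping the config as plain indented JSON means users can also inspect or edit the file by hand.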
@@ -0,0 +1,112 @@
+ import os
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ # āœ… Use shared config
+ from major_project.config import get_model, get_lora
+
+
+ def _check_cuda():
+     """
+     Ensure an NVIDIA GPU is available.
+     """
+     if not torch.cuda.is_available():
+         raise RuntimeError(
+             "\nāŒ CUDA GPU not detected.\n"
+             "This application requires an NVIDIA GPU.\n"
+             "CPU execution is not supported.\n"
+             "\nšŸ‘‰ Install CUDA-enabled PyTorch:\n"
+             "pip install torch --index-url https://download.pytorch.org/whl/cu121\n"
+         )
+
+
+ def load_model(
+     base_model: str | None = None,
+     lora_path: str | None = None,
+     device_map: str = "auto",
+     dtype: torch.dtype = torch.float16,
+ ):
+     """
+     Load the base model and optionally apply a LoRA adapter.
+
+     Args:
+         base_model (str | None): Hugging Face model name or local path
+         lora_path (str | None): Path to a LoRA adapter folder
+         device_map (str): Device mapping ("auto", "cpu", etc.)
+         dtype (torch.dtype): Torch dtype
+
+     Returns:
+         model, tokenizer
+     """
+
+     # -----------------------------
+     # šŸ”„ CUDA CHECK
+     # -----------------------------
+     _check_cuda()
+
+     # -----------------------------
+     # USE DEFAULT CONFIG (if not provided)
+     # -----------------------------
+     if base_model is None:
+         base_model = get_model()
+
+     if lora_path is None:
+         lora_path = get_lora()
+
+     # -----------------------------
+     # VALIDATE INPUTS
+     # -----------------------------
+     if not base_model:
+         raise ValueError("Base model path/name must be provided")
+
+     if lora_path:
+         lora_path = os.path.expanduser(lora_path)
+
+         if not os.path.exists(lora_path):
+             raise FileNotFoundError(f"LoRA path not found: {lora_path}")
+
+         expected = os.path.join(lora_path, "adapter_config.json")
+         if not os.path.exists(expected):
+             raise ValueError(
+                 f"Invalid LoRA adapter folder (missing adapter_config.json): {lora_path}"
+             )
+
+     # -----------------------------
+     # LOAD BASE MODEL
+     # -----------------------------
+     print(f"[MODEL] Loading base model: {base_model}")
+
+     model = AutoModelForCausalLM.from_pretrained(
+         base_model,
+         device_map=device_map,
+         torch_dtype=dtype,
+         local_files_only=True,
+     )
+
+     tokenizer = AutoTokenizer.from_pretrained(
+         base_model,
+         local_files_only=True,
+         use_fast=True,
+     )
+
+     # -----------------------------
+     # APPLY LORA (OPTIONAL)
+     # -----------------------------
+     if lora_path:
+         print(f"[MODEL] Applying LoRA adapter: {lora_path}")
+
+         model = PeftModel.from_pretrained(
+             model,
+             lora_path,
+             local_files_only=True,
+         )
+
+         print("[MODEL] LoRA successfully loaded")
+
+     # -----------------------------
+     # FINAL SETUP
+     # -----------------------------
+     model.eval()
+
+     return model, tokenizer
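
The adapter-folder validation inside `load_model` (expand `~`, require the folder, require `adapter_config.json` inside it) can be exercised without torch or a GPU. The sketch below extracts just those checks into a hypothetical helper for testing; `validate_lora_path` is not a function the package exports.

```python
import os
import tempfile


def validate_lora_path(lora_path: str) -> str:
    # Same pre-flight checks load_model() performs before applying
    # an adapter: expand ~, require the folder, require the
    # adapter_config.json file that PEFT adapters ship with.
    lora_path = os.path.expanduser(lora_path)
    if not os.path.exists(lora_path):
        raise FileNotFoundError(f"LoRA path not found: {lora_path}")
    if not os.path.exists(os.path.join(lora_path, "adapter_config.json")):
        raise ValueError(f"Invalid LoRA adapter folder: {lora_path}")
    return lora_path


with tempfile.TemporaryDirectory() as tmp:
    try:
        validate_lora_path(tmp)  # folder exists but has no adapter_config.json
    except ValueError as e:
        print("rejected:", e)

    open(os.path.join(tmp, "adapter_config.json"), "w").close()
    print("accepted:", validate_lora_path(tmp) == tmp)
```

Failing fast here is the design point: a bad adapter path is reported before the (slow, memory-hungry) base-model load even starts.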