ins-pricing 0.3.4__py3-none-any.whl → 0.4.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,78 @@
LOSS FUNCTIONS

Overview
This document describes the loss-function changes in ins_pricing. The training
stack now supports multiple regression losses (not just Tweedie deviance) and
propagates the selected loss into tuning, training, and inference.

Supported loss_name values
- auto (default): keep legacy behavior based on model name
- tweedie: Tweedie deviance (uses tw_power / tweedie_variance_power when tuning)
- poisson: Poisson deviance (power=1)
- gamma: Gamma deviance (power=2)
- mse: mean squared error
- mae: mean absolute error

Loss name mapping (all options)
- Tweedie deviance -> tweedie
- Poisson deviance -> poisson
- Gamma deviance -> gamma
- Mean squared error -> mse
- Mean absolute error -> mae
- Classification log loss -> logloss (classification only)
- Classification BCE -> bce (classification only)

Classification tasks
- loss_name can be auto, logloss, or bce
- training continues to use BCEWithLogits for torch models; evaluation uses logloss

Where to set loss_name
Add to any BayesOpt config JSON:

  {
    "task_type": "regression",
    "loss_name": "mse"
  }

Behavior changes
1) Tuning and metrics
- When loss_name is mse/mae, tuning does not sample Tweedie power.
- When loss_name is poisson/gamma, power is fixed (1.0/2.0).
- When loss_name is tweedie, power is sampled as before.

2) Torch training (ResNet/FT/GNN)
- Loss computation is routed by loss_name.
- For tweedie/poisson/gamma, predictions are clamped positive.
- For mse/mae, no Tweedie power is used.

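The routing and clamping described above can be sketched in plain Python. This is an illustrative helper, not the package's actual torch code: the formulas are the standard Tweedie/Poisson/Gamma unit deviances, and the epsilon clamp mirrors the positive clamping applied to model predictions.

```python
import math

EPS = 1e-8  # floor used to keep deviance losses finite

def regression_loss(y, mu, loss_name, power=1.5):
    """Per-observation loss routed by loss_name (sketch only).

    For the deviance-based losses the prediction mu is clamped
    positive, mirroring the clamping applied to torch outputs.
    """
    if loss_name in ("tweedie", "poisson", "gamma"):
        mu = max(mu, EPS)  # clamp predictions positive
    if loss_name == "mse":
        return (y - mu) ** 2
    if loss_name == "mae":
        return abs(y - mu)
    if loss_name == "poisson":  # Tweedie power = 1
        term = y * math.log(y / mu) if y > 0 else 0.0
        return 2.0 * (term - (y - mu))
    if loss_name == "gamma":  # Tweedie power = 2
        return 2.0 * ((y - mu) / mu + math.log(mu / max(y, EPS)))
    if loss_name == "tweedie":  # 1 < power < 2
        p = power
        return 2.0 * (max(y, 0.0) ** (2 - p) / ((1 - p) * (2 - p))
                      - y * mu ** (1 - p) / (1 - p)
                      + mu ** (2 - p) / (2 - p))
    raise ValueError(f"unknown loss_name: {loss_name}")
```

All three deviances are zero when y == mu, which is a quick sanity check on the routing.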
3) XGBoost objective
- loss_name controls the XGB objective:
  - tweedie -> reg:tweedie
  - poisson -> count:poisson
  - gamma -> reg:gamma
  - mse -> reg:squarederror
  - mae -> reg:absoluteerror

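The mapping above is a straightforward lookup. A minimal sketch (the helper name and parameter assembly are illustrative, not the package's internals):

```python
def xgb_objective_params(loss_name, tw_power=1.5):
    """Translate loss_name into XGBoost params per the mapping above."""
    objectives = {
        "tweedie": "reg:tweedie",
        "poisson": "count:poisson",
        "gamma": "reg:gamma",
        "mse": "reg:squarederror",
        "mae": "reg:absoluteerror",
    }
    params = {"objective": objectives[loss_name]}
    if loss_name == "tweedie":
        # only the Tweedie objective takes a variance power
        params["tweedie_variance_power"] = tw_power
    return params
```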
4) Inference
- ResNet/GNN constructors now receive loss_name.
- When loss_name is not tweedie, tw_power is not applied at inference.

Legacy defaults (auto)
- If loss_name is omitted, behavior is unchanged:
  - model name contains "f" -> poisson
  - model name contains "s" -> gamma
  - otherwise -> tweedie

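The legacy rule can be written as a small resolver. The function name is hypothetical; the substring checks (checked in the order listed above) are taken from the rule itself:

```python
def resolve_loss_name(model_name, loss_name="auto"):
    """Resolve the effective loss, reproducing the legacy 'auto' rule."""
    if loss_name != "auto":
        return loss_name
    if "f" in model_name:   # e.g. frequency-style model names
        return "poisson"
    if "s" in model_name:   # e.g. severity-style model names
        return "gamma"
    return "tweedie"
```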
Examples
- ResNet direct training (MSE):
    "loss_name": "mse"

- FT embed -> ResNet (MSE):
    "loss_name": "mse"

- XGB direct training (unchanged):
    omit loss_name or set "loss_name": "auto"

Notes
- loss_name is global per config. If you need different losses for different
  models, split into separate configs and run them independently.
@@ -0,0 +1,152 @@
# Quick Start Guide

Get started with the Insurance Pricing Model Training Frontend in 3 easy steps.

## Prerequisites

1. Install the `ins_pricing` package
2. Install Gradio:
   ```bash
   pip install "gradio>=4.0.0"
   ```

## Step 1: Launch the Application

### On Windows

Double-click `start_app.bat` or run:
```bash
python -m ins_pricing.frontend.app
```

### On Linux/Mac

Run the shell script:
```bash
./start_app.sh
```

Or use Python directly:
```bash
python -m ins_pricing.frontend.app
```

The web interface will automatically open at `http://localhost:7860`

## Step 2: Configure Your Model

### Option A: Upload Existing Config (Recommended)

1. Go to the **Configuration** tab
2. Click **"Upload JSON Config File"**
3. Select a config file (e.g., `config_xgb_direct.json` from `examples/`)
4. Click **"Load Config"**

### Option B: Manual Configuration

1. Go to the **Configuration** tab
2. Scroll to **"Manual Configuration"**
3. Fill in the required fields:
   - **Data Directory**: Path to your data folder
   - **Model List**: Model name(s)
   - **Target Column**: Your target variable
   - **Weight Column**: Your weight variable
   - **Feature List**: Comma-separated features
   - **Categorical Features**: Comma-separated categorical features
4. Adjust other settings as needed
5. Click **"Build Configuration"**

## Step 3: Run Training

1. Switch to the **Run Task** tab
2. Click **"Run Task"**
3. Watch real-time logs appear below

Training will start automatically and logs will update in real-time!

## New Features

### FT Two-Step Workflow

For advanced FT-Transformer → XGB/ResN training:

1. **Prepare Base Config**: Create or load a base configuration
2. **Go to FT Two-Step Workflow tab**
3. **Step 1 - FT Embedding Generation**:
   - Configure DDP settings
   - Click "Prepare Step 1 Config"
   - Copy the config to the Configuration tab
   - Run it in the "Run Task" tab
4. **Step 2 - Train XGB/ResN**:
   - After Step 1 completes, click "Prepare Step 2 Configs"
   - Choose which models to train (XGB, ResN, or both)
   - Copy the generated configs and run them

### Open Results Folder

- In the **Run Task** tab, click **"📁 Open Results Folder"**
- Automatically opens the output directory in your file explorer
- Works on Windows, macOS, and Linux

## Example Configuration

Here's a minimal example to get started:

```json
{
  "data_dir": "./Data",
  "model_list": ["od"],
  "model_categories": ["bc"],
  "target": "response",
  "weight": "weights",
  "feature_list": ["age", "gender", "region"],
  "categorical_features": ["gender", "region"],
  "runner": {
    "mode": "entry",
    "model_keys": ["xgb"],
    "max_evals": 50
  }
}
```

Save this as `my_first_config.json` and upload it!

## Tips

- **Save Your Config**: After building a configuration, save it using the "Save Configuration" button for reuse
- **Check Logs**: Training logs update in real-time - watch for errors or progress indicators
- **GPU Usage**: Toggle the "Use GPU" checkbox in Training Settings to enable/disable GPU acceleration
- **Model Selection**: Specify which models to train in "Model Keys" (xgb, resn, ft, gnn)
- **Open Results**: Use the "📁 Open Results Folder" button to quickly access output files
- **FT Workflow**: Use the dedicated FT tab for automated two-step FT → XGB/ResN training

## Troubleshooting

**Problem**: Interface doesn't load
- **Solution**: Check that port 7860 is not in use, or specify a different port

**Problem**: Configuration validation fails
- **Solution**: Ensure all required fields are filled and feature lists are properly formatted

**Problem**: Training doesn't start
- **Solution**: Verify that data paths exist and the configuration is valid

**Problem**: Results folder won't open
- **Solution**: Make sure the task has run at least once to create the output directory

**Problem**: Step 2 configs fail to generate
- **Solution**: Ensure Step 1 completed successfully and the embedding files exist

## Next Steps

- Explore advanced options in the Configuration tab
- Try the FT Two-Step Workflow for better model performance
- Experiment with different model combinations (xgb, resn, ft)
- Try different split strategies
- Use the Explain mode for model interpretability
- Check the full [README.md](README.md) for detailed documentation

## Support

For issues or questions, refer to:
- Full documentation: [README.md](README.md)
- Example configs: `ins_pricing/examples/`
- Package documentation: `ins_pricing/docs/`

Happy modeling!
@@ -0,0 +1,419 @@
# Insurance Pricing Model Frontend

A Gradio-based web interface for configuring and running all insurance pricing model tasks from the examples folder.

## Features

- **Multiple Task Modes**: Supports all task types, automatically detected from the config
  - **Training** (entry mode): Train XGB, ResNet, FT-Transformer, and GNN models
  - **Explanation** (explain mode): Generate permutation importance, SHAP values, integrated gradients
  - **Incremental** (incremental mode): Incremental batch training
  - **Watchdog** (watchdog mode): Automated monitoring and retraining
- **Dual Configuration Modes**: Manual parameter configuration or JSON file upload
- **Real-time Logging**: Live task logs displayed in the UI
- **Parameter Validation**: Automatic validation of configuration parameters
- **Config Export**: Save the current configuration as a JSON file for reuse
- **User-friendly Interface**: Intuitive web UI without writing code
- **Auto-Detection**: Automatically detects the task mode from `config.runner.mode`
- **Plotting & Prediction Tools**: Run the plotting, prediction, and compare steps from the example notebooks

## Supported Examples

This frontend provides dedicated tabs or workflows that match the notebooks in `ins_pricing/examples/`:

| Example Notebook | Task Mode | Description |
|------------------|-----------|-------------|
| `01 Plot_Oneway_Pre.ipynb` | Manual plotting | Pre-model oneway analysis (can run manually, see examples) |
| `02 PricingSingle.ipynb` | `entry` | Legacy training; use config-based training tab |
| `02 Train_XGBResN.ipynb` | `entry` | Direct training of XGB/ResN models |
| `02 Train_FT_Embed_XGBResN.ipynb` | `entry` | FT-Transformer embedding + XGB/ResN training |
| `03 Plot_Embed_Model.ipynb` | Manual plotting | Post-model plotting (oneway, lift, double-lift) |
| `04 Explain_Run.ipynb` | `explain` | Model explanation and interpretability |
| `05 Predict_FT_Embed_XGB.ipynb` | Prediction | Model prediction (load config + run) |
| `06 Compare_*.ipynb` | Manual plotting | Model comparison plots |

## Installation

```bash
pip install "gradio>=4.0.0"
```

Or install from the requirements file:

```bash
pip install -r ins_pricing/frontend/requirements.txt
```

### Recommended (Cross-Platform) Install

To avoid dependency mismatches on Linux/macOS, install the pinned frontend extras:

```bash
pip install "ins_pricing[frontend]"
```

If installing from source:

```bash
pip install -e ".[frontend]"
```

### Linux Note (gradio + huggingface_hub)

If you see `ImportError: cannot import name 'HfFolder'`, your `huggingface_hub` is too new.
Fix it with:

```bash
pip install "gradio>=4,<5" "huggingface_hub<0.24"
```

### Apple Silicon (MPS) Note

For MPS usage, install a PyTorch build with MPS support, and optionally enable the CPU fallback:

```bash
export PYTORCH_ENABLE_MPS_FALLBACK=1
```

## Launch Methods

### Method 1: Direct Run

```bash
python -m ins_pricing.frontend.app
```

### Method 2: Launch in a Python Script

```python
from ins_pricing.frontend.app import create_ui

demo = create_ui()
demo.launch()
```

### Method 3: Custom Host and Port

```python
from ins_pricing.frontend.app import create_ui

demo = create_ui()
demo.launch(
    server_name="localhost",  # or "0.0.0.0" for external access
    server_port=8080,         # custom port
    share=False,              # set True to generate a public link
)
```

## Usage Guide

### 1. Configure Model Parameters

#### Option A: Upload JSON Config File (Recommended)

1. Click the **"Configuration"** tab
2. In the **"Load Configuration"** section, click **"Upload JSON Config File"**
3. Select a config file from `examples/`:
   - `config_template.json` - Full template
   - `config_xgb_direct.json` - XGBoost training
   - `config_resn_direct.json` - ResNet training
   - `config_explain_template.json` - Model explanation
   - `config_ft_unsupervised_*.json` - FT-Transformer configs
4. Click the **"Load Config"** button
5. The configuration will display in the **"Current Configuration"** panel

**Important**: The `runner.mode` field in the config determines which task runs:
- `"mode": "entry"` → Training
- `"mode": "explain"` → Model explanation
- `"mode": "incremental"` → Incremental training
- `"mode": "watchdog"` → Watchdog monitoring

#### Option B: Manual Parameter Entry

Fill in parameters in the **"Manual Configuration"** section:

**Data Settings**
- **Data Directory**: Directory containing data files (e.g., `./Data`)
- **Model List**: Comma-separated model names (e.g., `od`)
- **Model Categories**: Comma-separated model categories (e.g., `bc`)
- **Target Column**: Target column name (e.g., `response`)
- **Weight Column**: Weight column name (e.g., `weights`)

**Features**
- **Feature List**: Comma-separated feature names
- **Categorical Features**: Comma-separated categorical feature names

**Model Settings**
- **Task Type**: Task type (`regression`/`binary`/`multiclass`)
- **Test Proportion**: Test set ratio (0.1-0.5)
- **Holdout Ratio**: Holdout validation ratio (0.1-0.5)
- **Validation Ratio**: Validation ratio (0.1-0.5)
- **Split Strategy**: Data split strategy (`random`/`stratified`/`time`/`group`)
- **Random Seed**: Random seed for reproducibility
- **Epochs**: Number of training epochs

**Training Settings**
- **Output Directory**: Output directory (e.g., `./Results`)
- **Use GPU**: Whether to use GPU
- **Model Keys**: Comma-separated model types (e.g., `xgb, resn`)
- **Max Evaluations**: Maximum number of evaluations

**XGBoost Settings**
- **XGB Max Depth**: XGBoost maximum depth
- **XGB Max Estimators**: XGBoost maximum number of estimators

### 2. Build Configuration

1. After filling in parameters, click the **"Build Configuration"** button
2. The generated JSON config will display in the **"Generated Config (JSON)"** textbox
3. You can review and edit the generated configuration
4. **Note**: Manual configuration defaults to `runner.mode = "entry"` (training)

### 3. Save Configuration (Optional)

1. Enter a filename in the **"Save Filename"** textbox (e.g., `my_config.json`)
2. Click the **"Save Configuration"** button
3. The configuration will be saved to the specified file

### 4. Run Task

1. Switch to the **"Run Task"** tab
2. Click the **"Run Task"** button to execute
3. Task status will display in the **"Task Status"** section
4. Real-time logs will appear in the **"Task Logs"** textbox below

**The system automatically detects the task mode from your config and runs the appropriate task!**

### 5. Plotting / Prediction / Compare

Use the **Plotting**, **Prediction**, and **Compare** tabs to run:
- Pre-model oneway plots
- Post-model plots (direct or FT-embed workflows)
- FT-embed predictions
- Direct vs FT-embed model comparisons

## Task Modes Explained

### Entry Mode (Training)

Standard model training mode. Trains one or more models specified in `runner.model_keys`.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "entry",
    "model_keys": ["xgb", "resn"],
    "max_evals": 50
  }
}
```

**Equivalent to**: `ins_pricing/examples/02 Train_XGBResN.ipynb`

### Explain Mode

Generates model explanations using various methods.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "explain"
  },
  "explain": {
    "model_keys": ["xgb"],
    "methods": ["permutation", "shap"],
    "on_train": false,
    "permutation": {
      "n_repeats": 5,
      "max_rows": 5000
    },
    "shap": {
      "n_background": 500,
      "n_samples": 200
    }
  }
}
```

**Equivalent to**: `ins_pricing/examples/04 Explain_Run.ipynb`

**Supported methods**:
- `permutation`: Permutation feature importance
- `shap`: SHAP values
- `integrated_gradients`: Integrated gradients (for neural models)

### Incremental Mode

Incremental batch training for continuous model updates.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "incremental",
    "incremental_args": [
      "--incremental-dir", "./IncrementalBatches",
      "--incremental-template", "{model_name}_2025Q1.csv",
      "--merge-keys", "policy_id", "vehicle_id",
      "--model-keys", "xgb",
      "--update-base-data"
    ]
  }
}
```

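The `incremental_args` list is a standard argv-style flag list. A hypothetical `argparse` parser shows how such a list could be consumed (flag names copied from the snippet above; the package's real CLI parser may define these options differently):

```python
import argparse

# Illustrative parser mirroring the incremental_args flags shown above.
parser = argparse.ArgumentParser()
parser.add_argument("--incremental-dir")
parser.add_argument("--incremental-template")
parser.add_argument("--merge-keys", nargs="+")
parser.add_argument("--model-keys", nargs="+")
parser.add_argument("--update-base-data", action="store_true")

# Parse the exact list from the config snippet instead of sys.argv.
args = parser.parse_args([
    "--incremental-dir", "./IncrementalBatches",
    "--incremental-template", "{model_name}_2025Q1.csv",
    "--merge-keys", "policy_id", "vehicle_id",
    "--model-keys", "xgb",
    "--update-base-data",
])
```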
### Watchdog Mode

Automated monitoring and retraining when new data arrives.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "watchdog",
    "use_watchdog": true,
    "idle_seconds": 7200,
    "max_restarts": 50
  }
}
```

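`idle_seconds` and `max_restarts` interact as an idle check gated by a restart budget. A minimal sketch of that decision logic (illustrative only; the real watchdog also supervises the training process):

```python
def watchdog_decision(last_activity, now, idle_seconds, restarts, max_restarts):
    """Return 'keep' while activity is recent, 'restart' when the task
    has been idle too long and the budget allows it, and 'give_up'
    once max_restarts is exhausted."""
    if now - last_activity < idle_seconds:
        return "keep"
    if restarts < max_restarts:
        return "restart"
    return "give_up"
```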
## Configuration Examples

### Minimal Training Config

```json
{
  "data_dir": "./Data",
  "model_list": ["od"],
  "model_categories": ["bc"],
  "target": "response",
  "weight": "weights",
  "feature_list": ["age", "gender", "region"],
  "categorical_features": ["gender", "region"],
  "runner": {
    "mode": "entry",
    "model_keys": ["xgb"],
    "max_evals": 50
  }
}
```

### Minimal Explain Config

```json
{
  "data_dir": "./Data",
  "model_list": ["od"],
  "model_categories": ["bc"],
  "target": "response",
  "weight": "weights",
  "output_dir": "./Results",
  "runner": {
    "mode": "explain"
  },
  "explain": {
    "model_keys": ["xgb"],
    "methods": ["permutation"]
  }
}
```

### Full Configuration Examples

Refer to the configuration files in the `ins_pricing/examples/` directory:
- `config_template.json` - Complete training template
- `config_xgb_direct.json` - XGBoost training
- `config_resn_direct.json` - ResNet training
- `config_explain_template.json` - Model explanation template
- `config_ft_unsupervised_*.json` - FT-Transformer configs
- `config_incremental_template.json` - Incremental training template

## FAQ

### Q: How do I access the frontend interface?

A: After launching, the browser will open automatically, or manually navigate to `http://localhost:7860`

### Q: Which task mode will run?

A: The task mode is determined by `config.runner.mode` in your configuration file:
- `"entry"` = Training
- `"explain"` = Explanation
- `"incremental"` = Incremental training
- `"watchdog"` = Watchdog mode

### Q: Can I interrupt the task?

A: The current version does not support interruption. Tasks must complete once started.

### Q: How do I run explanation after training?

A: First, run training with a config file. Then, load an explain config that points to the same output directory, and set `runner.mode` to `"explain"`.

### Q: What if logs don't display?

A: Check that the configuration is correct and data paths exist. Check the console for error messages.

### Q: Can I run multiple tasks simultaneously?

A: Not recommended. Wait for the current task to complete before starting a new one.

### Q: How do I run on a remote server?

A: Set `server_name="0.0.0.0"` when launching, then access via the server IP and port.

```python
demo.launch(server_name="0.0.0.0", server_port=7860)
```

### Q: Where are configuration files saved?

A: By default, saved in the current working directory. You can specify a full path in "Save Filename".

### Q: How do I run plotting tasks?

A: Plotting tasks (oneway, lift, double-lift) can be run by using config files with plotting enabled. See the `config_plot.json` example or manually run the plotting notebooks in `examples/`.

## Technical Architecture

- **Frontend Framework**: Gradio 4.x
- **Configuration Management**: ConfigBuilder class
- **Task Execution**: TaskRunner class (with real-time log capture and auto-detection)
- **Backend Interface**: `ins_pricing.cli.utils.notebook_utils.run_from_config` (unified entry point)

## Development Guide

### File Structure

```
ins_pricing/frontend/
├── __init__.py          # Package initialization
├── app.py               # Main application entry
├── config_builder.py    # Configuration builder
├── runner.py            # Unified task runner
├── requirements.txt     # Dependencies
├── README.md            # This document
├── QUICKSTART.md        # Quick start guide
├── example_config.json  # Example configuration
├── start_app.bat        # Windows launcher
└── start_app.sh         # Linux/Mac launcher
```

### Extending Functionality

To add new features:

1. **Add new config parameters**: Modify the `ConfigBuilder` class in `config_builder.py`
2. **Modify the UI layout**: Edit the `create_ui()` function in `app.py`
3. **Customize task handling**: Modify the `TaskRunner` class in `runner.py`

### How Task Detection Works

The `TaskRunner` reads `config.runner.mode` from your JSON file and automatically calls the appropriate backend function via `run_from_config()`. No manual routing needed!

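That detection step amounts to a small dispatch on the parsed config. A sketch under stated assumptions: the handler names below are illustrative placeholders, and the `"entry"` default mirrors the manual-configuration default described earlier; the real routing lives in `run_from_config()`.

```python
def detect_mode(config: dict) -> str:
    """Return the task mode from config['runner']['mode'] ('entry' if absent)."""
    return config.get("runner", {}).get("mode", "entry")

# Illustrative handler names only.
HANDLERS = {
    "entry": "run_training",
    "explain": "run_explain",
    "incremental": "run_incremental",
    "watchdog": "run_watchdog",
}

def route(config: dict) -> str:
    """Pick the handler name for a parsed config dict."""
    return HANDLERS[detect_mode(config)]
```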
## License

This project follows the same license as the `ins_pricing` package.
@@ -0,0 +1,10 @@
"""
Insurance Pricing Frontend Package
Web-based interface for configuring and running insurance pricing model tasks.
"""

from .config_builder import ConfigBuilder
from .runner import TaskRunner, TrainingRunner
from .ft_workflow import FTWorkflowHelper

__all__ = ['ConfigBuilder', 'TaskRunner', 'TrainingRunner', 'FTWorkflowHelper']