ins-pricing 0.3.4__py3-none-any.whl → 0.4.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,78 @@
LOSS FUNCTIONS

Overview
This document describes the loss-function changes in ins_pricing. The training
stack now supports multiple regression losses (not just Tweedie deviance) and
propagates the selected loss into tuning, training, and inference.

Supported loss_name values
- auto (default): keep legacy behavior based on model name
- tweedie: Tweedie deviance (uses tw_power / tweedie_variance_power when tuning)
- poisson: Poisson deviance (power=1)
- gamma: Gamma deviance (power=2)
- mse: mean squared error
- mae: mean absolute error

Loss name mapping (all options)
- Tweedie deviance -> tweedie
- Poisson deviance -> poisson
- Gamma deviance -> gamma
- Mean squared error -> mse
- Mean absolute error -> mae
- Classification log loss -> logloss (classification only)
- Classification BCE -> bce (classification only)

Classification tasks
- loss_name can be auto, logloss, or bce
- Training continues to use BCEWithLogits for torch models; evaluation uses logloss.

Where to set loss_name
Add it to any BayesOpt config JSON:

{
  "task_type": "regression",
  "loss_name": "mse"
}

Behavior changes
1) Tuning and metrics
- When loss_name is mse/mae, tuning does not sample Tweedie power.
- When loss_name is poisson/gamma, power is fixed (1.0/2.0).
- When loss_name is tweedie, power is sampled as before.

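The power-handling rules above can be sketched as a small helper. This is an
illustration only: resolve_tweedie_power and the sample_power callback are
hypothetical names, not the actual ins_pricing API.

```python
def resolve_tweedie_power(loss_name, sample_power=None):
    """Return the Tweedie power implied by loss_name, or None.

    sample_power: callable invoked only for loss_name == "tweedie"
    (e.g. a BayesOpt trial suggesting tweedie_variance_power).
    """
    if loss_name in ("mse", "mae"):
        return None            # no Tweedie power for plain regression losses
    if loss_name == "poisson":
        return 1.0             # Poisson deviance is Tweedie with power 1
    if loss_name == "gamma":
        return 2.0             # Gamma deviance is Tweedie with power 2
    if loss_name == "tweedie":
        return sample_power()  # sampled during tuning, as before
    raise ValueError(f"unknown loss_name: {loss_name!r}")
```
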
2) Torch training (ResNet/FT/GNN)
- Loss computation is routed by loss_name.
- For tweedie/poisson/gamma, predictions are clamped positive.
- For mse/mae, no Tweedie power is used.

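For reference, the quantity minimized for tweedie-family losses is the unit
Tweedie deviance; the positive clamp exists because mu^(1-p) and mu^(2-p) are
undefined for mu <= 0. A plain-Python sketch of the formula for 1 < p < 2
(not the package's torch implementation) is:

```python
import math

def tweedie_unit_deviance(y, mu, p):
    """Unit Tweedie deviance for 1 < p < 2 (compound Poisson-gamma).

    y >= 0 is the observed value; mu > 0 is the prediction, which is why
    predictions are clamped positive upstream. Zero when y == mu.
    """
    if not 1.0 < p < 2.0:
        raise ValueError("this sketch covers only 1 < p < 2")
    if mu <= 0:
        raise ValueError("mu must be positive")
    return 2.0 * (
        y ** (2.0 - p) / ((1.0 - p) * (2.0 - p))
        - y * mu ** (1.0 - p) / (1.0 - p)
        + mu ** (2.0 - p) / (2.0 - p)
    )
```

The Poisson (p=1) and Gamma (p=2) deviances are the limiting cases of the same
family, which is why poisson/gamma are handled as fixed-power Tweedie losses.
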
3) XGBoost objective
- loss_name controls the XGBoost objective:
  - tweedie -> reg:tweedie
  - poisson -> count:poisson
  - gamma -> reg:gamma
  - mse -> reg:squarederror
  - mae -> reg:absoluteerror

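The mapping above as a lookup table (the objective strings are real XGBoost
objective names; the helper itself is illustrative, not the package's function):

```python
# loss_name -> XGBoost objective, mirroring the list above
XGB_OBJECTIVES = {
    "tweedie": "reg:tweedie",
    "poisson": "count:poisson",
    "gamma": "reg:gamma",
    "mse": "reg:squarederror",
    "mae": "reg:absoluteerror",
}

def xgb_objective(loss_name):
    try:
        return XGB_OBJECTIVES[loss_name]
    except KeyError:
        raise ValueError(f"no XGBoost objective for loss_name={loss_name!r}")
```

Note that reg:absoluteerror requires a reasonably recent XGBoost (it was added
in the 1.7 series).
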
4) Inference
- ResNet/GNN constructors now receive loss_name.
- When loss_name is not tweedie, tw_power is not applied at inference.

Legacy defaults (auto)
- If loss_name is omitted, behavior is unchanged:
  - model name contains "f" -> poisson
  - model name contains "s" -> gamma
  - otherwise -> tweedie

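The legacy rules can be sketched as follows (the function name is hypothetical;
the substring checks and their precedence are taken directly from the list above):

```python
def resolve_auto_loss(model_name):
    """Legacy loss selection used when loss_name is omitted or "auto"."""
    if "f" in model_name:
        return "poisson"   # model name contains "f"
    if "s" in model_name:
        return "gamma"     # model name contains "s"
    return "tweedie"       # everything else
```
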
Examples
- ResNet direct training (MSE):
  "loss_name": "mse"

- FT embed -> ResNet (MSE):
  "loss_name": "mse"

- XGB direct training (unchanged):
  omit loss_name or set "loss_name": "auto"

Notes
- loss_name is global per config. If you need different losses for different
  models, split into separate configs and run them independently.
@@ -0,0 +1,152 @@
# Quick Start Guide

Get started with the Insurance Pricing Model Training Frontend in 3 easy steps.

## Prerequisites

1. Install the `ins_pricing` package
2. Install Gradio (quoted so the shell does not treat `>` as a redirect):
   ```bash
   pip install "gradio>=4.0.0"
   ```

## Step 1: Launch the Application

### On Windows:
Double-click `start_app.bat` or run:
```bash
python -m ins_pricing.frontend.app
```

### On Linux/Mac:
Run the shell script:
```bash
./start_app.sh
```

Or use Python directly:
```bash
python -m ins_pricing.frontend.app
```

The web interface will automatically open at `http://localhost:7860`.

## Step 2: Configure Your Model

### Option A: Upload Existing Config (Recommended)
1. Go to the **Configuration** tab
2. Click **"Upload JSON Config File"**
3. Select a config file (e.g., `config_xgb_direct.json` from `examples/`)
4. Click **"Load Config"**

### Option B: Manual Configuration
1. Go to the **Configuration** tab
2. Scroll to **"Manual Configuration"**
3. Fill in the required fields:
   - **Data Directory**: Path to your data folder
   - **Model List**: Model name(s)
   - **Target Column**: Your target variable
   - **Weight Column**: Your weight variable
   - **Feature List**: Comma-separated features
   - **Categorical Features**: Comma-separated categorical features
4. Adjust other settings as needed
5. Click **"Build Configuration"**

## Step 3: Run Training

1. Switch to the **Run Task** tab
2. Click **"Run Task"**
3. Watch real-time logs appear below

Training will start automatically and logs will update in real time!

## New Features

### FT Two-Step Workflow

For advanced FT-Transformer → XGB/ResN training:

1. **Prepare Base Config**: Create or load a base configuration
2. **Go to the FT Two-Step Workflow tab**
3. **Step 1 - FT Embedding Generation**:
   - Configure DDP settings
   - Click "Prepare Step 1 Config"
   - Copy the config to the Configuration tab
   - Run it in the "Run Task" tab
4. **Step 2 - Train XGB/ResN**:
   - After Step 1 completes, click "Prepare Step 2 Configs"
   - Choose which models to train (XGB, ResN, or both)
   - Copy the generated configs and run them

### Open Results Folder

- In the **Run Task** tab, click **"📁 Open Results Folder"**
- Automatically opens the output directory in your file explorer
- Works on Windows, macOS, and Linux

## Example Configuration

Here's a minimal example to get started:

```json
{
  "data_dir": "./Data",
  "model_list": ["od"],
  "model_categories": ["bc"],
  "target": "response",
  "weight": "weights",
  "feature_list": ["age", "gender", "region"],
  "categorical_features": ["gender", "region"],
  "runner": {
    "mode": "entry",
    "model_keys": ["xgb"],
    "max_evals": 50
  }
}
```

Save this as `my_first_config.json` and upload it!

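If you would rather generate the file from a script than type it by hand, the
standard library is enough (same values as the example above):

```python
import json

# Minimal training config, matching the example in this guide
config = {
    "data_dir": "./Data",
    "model_list": ["od"],
    "model_categories": ["bc"],
    "target": "response",
    "weight": "weights",
    "feature_list": ["age", "gender", "region"],
    "categorical_features": ["gender", "region"],
    "runner": {"mode": "entry", "model_keys": ["xgb"], "max_evals": 50},
}

with open("my_first_config.json", "w") as f:
    json.dump(config, f, indent=2)
```
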
## Tips

- **Save Your Config**: After building a configuration, save it using the "Save Configuration" button for reuse
- **Check Logs**: Training logs update in real time; watch for errors or progress indicators
- **GPU Usage**: Toggle the "Use GPU" checkbox in Training Settings to enable/disable GPU acceleration
- **Model Selection**: Specify which models to train in "Model Keys" (xgb, resn, ft, gnn)
- **Open Results**: Use the "📁 Open Results Folder" button to quickly access output files
- **FT Workflow**: Use the dedicated FT tab for automated two-step FT → XGB/ResN training

## Troubleshooting

**Problem**: Interface doesn't load
- **Solution**: Check that port 7860 is not in use, or specify a different port

**Problem**: Configuration validation fails
- **Solution**: Ensure all required fields are filled and feature lists are properly formatted

**Problem**: Training doesn't start
- **Solution**: Verify that data paths exist and the configuration is valid

**Problem**: Results folder won't open
- **Solution**: Make sure the task has run at least once to create the output directory

**Problem**: Step 2 configs fail to generate
- **Solution**: Ensure Step 1 completed successfully and the embedding files exist

## Next Steps

- Explore advanced options in the Configuration tab
- Try the FT Two-Step Workflow for better model performance
- Experiment with different model combinations (xgb, resn, ft)
- Try different split strategies
- Use the Explain mode for model interpretability
- Check the full [README.md](README.md) for detailed documentation

## Support

For issues or questions, refer to:
- Full documentation: [README.md](README.md)
- Example configs: `ins_pricing/examples/`
- Package documentation: `ins_pricing/docs/`

Happy modeling!
@@ -0,0 +1,388 @@
# Insurance Pricing Model Frontend

A Gradio-based web interface for configuring and running all insurance pricing model tasks from the examples folder.

## Features

- **Multiple Task Modes**: Supports all task types, automatically detected from the config
  - **Training** (entry mode): Train XGB, ResNet, FT-Transformer, and GNN models
  - **Explanation** (explain mode): Generate permutation importance, SHAP values, integrated gradients
  - **Incremental** (incremental mode): Incremental batch training
  - **Watchdog** (watchdog mode): Automated monitoring and retraining
- **Dual Configuration Modes**: Manual parameter configuration or JSON file upload
- **Real-time Logging**: Live task logs displayed in the UI
- **Parameter Validation**: Automatic validation of configuration parameters
- **Config Export**: Save the current configuration as a JSON file for reuse
- **User-friendly Interface**: Intuitive web UI without writing code
- **Auto-Detection**: Automatically detects the task mode from `config.runner.mode`
- **Plotting & Prediction Tools**: Run the plotting, prediction, and compare steps from the example notebooks

## Supported Examples

This frontend provides dedicated tabs or workflows that match the notebooks in `ins_pricing/examples/`:

| Example Notebook | Task Mode | Description |
|-----------------|-----------|-------------|
| `01 Plot_Oneway_Pre.ipynb` | Manual plotting | Pre-model oneway analysis (can run manually, see examples) |
| `02 PricingSingle.ipynb` | `entry` | Legacy training; use config-based training tab |
| `02 Train_XGBResN.ipynb` | `entry` | Direct training of XGB/ResN models |
| `02 Train_FT_Embed_XGBResN.ipynb` | `entry` | FT-Transformer embedding + XGB/ResN training |
| `03 Plot_Embed_Model.ipynb` | Manual plotting | Post-model plotting (oneway, lift, double-lift) |
| `04 Explain_Run.ipynb` | `explain` | Model explanation and interpretability |
| `05 Predict_FT_Embed_XGB.ipynb` | Prediction | Model prediction (load config + run) |
| `06 Compare_*.ipynb` | Manual plotting | Model comparison plots |

## Installation

```bash
pip install "gradio>=4.0.0"
```

Or install from the requirements file:

```bash
pip install -r ins_pricing/frontend/requirements.txt
```

## Launch Methods

### Method 1: Direct Run

```bash
python -m ins_pricing.frontend.app
```

### Method 2: Launch in a Python Script

```python
from ins_pricing.frontend.app import create_ui

demo = create_ui()
demo.launch()
```

### Method 3: Custom Host and Port

```python
from ins_pricing.frontend.app import create_ui

demo = create_ui()
demo.launch(
    server_name="localhost",  # or "0.0.0.0" for external access
    server_port=8080,         # custom port
    share=False               # set True to generate a public link
)
```

## Usage Guide

### 1. Configure Model Parameters

#### Option A: Upload a JSON Config File (Recommended)

1. Click the **"Configuration"** tab
2. In the **"Load Configuration"** section, click **"Upload JSON Config File"**
3. Select a config file from `examples/`:
   - `config_template.json` - Full template
   - `config_xgb_direct.json` - XGBoost training
   - `config_resn_direct.json` - ResNet training
   - `config_explain_template.json` - Model explanation
   - `config_ft_unsupervised_*.json` - FT-Transformer configs
4. Click the **"Load Config"** button
5. The configuration will display in the **"Current Configuration"** panel

**Important**: The `runner.mode` field in the config determines which task runs:
- `"mode": "entry"` → Training
- `"mode": "explain"` → Model explanation
- `"mode": "incremental"` → Incremental training
- `"mode": "watchdog"` → Watchdog monitoring

#### Option B: Manual Parameter Entry

Fill in parameters in the **"Manual Configuration"** section:

**Data Settings**
- **Data Directory**: Directory containing data files (e.g., `./Data`)
- **Model List**: Comma-separated model names (e.g., `od`)
- **Model Categories**: Comma-separated model categories (e.g., `bc`)
- **Target Column**: Target column name (e.g., `response`)
- **Weight Column**: Weight column name (e.g., `weights`)

**Features**
- **Feature List**: Comma-separated feature names
- **Categorical Features**: Comma-separated categorical feature names

**Model Settings**
- **Task Type**: Task type (`regression`/`binary`/`multiclass`)
- **Test Proportion**: Test set ratio (0.1-0.5)
- **Holdout Ratio**: Holdout validation ratio (0.1-0.5)
- **Validation Ratio**: Validation ratio (0.1-0.5)
- **Split Strategy**: Data split strategy (`random`/`stratified`/`time`/`group`)
- **Random Seed**: Random seed for reproducibility
- **Epochs**: Number of training epochs

**Training Settings**
- **Output Directory**: Output directory (e.g., `./Results`)
- **Use GPU**: Whether to use the GPU
- **Model Keys**: Comma-separated model types (e.g., `xgb, resn`)
- **Max Evaluations**: Maximum number of evaluations

**XGBoost Settings**
- **XGB Max Depth**: XGBoost maximum tree depth
- **XGB Max Estimators**: XGBoost maximum number of estimators

### 2. Build Configuration

1. After filling in parameters, click the **"Build Configuration"** button
2. The generated JSON config will display in the **"Generated Config (JSON)"** textbox
3. You can review and edit the generated configuration
4. **Note**: Manual configuration defaults to `runner.mode = "entry"` (training)

### 3. Save Configuration (Optional)

1. Enter a filename in the **"Save Filename"** textbox (e.g., `my_config.json`)
2. Click the **"Save Configuration"** button
3. The configuration will be saved to the specified file

### 4. Run Task

1. Switch to the **"Run Task"** tab
2. Click the **"Run Task"** button to execute
3. Task status will display in the **"Task Status"** section
4. Real-time logs will appear in the **"Task Logs"** textbox below

**The system automatically detects the task mode from your config and runs the appropriate task!**

### 5. Plotting / Prediction / Compare

Use the **Plotting**, **Prediction**, and **Compare** tabs to run:
- Pre-model oneway plots
- Post-model plots (direct or FT-embed workflows)
- FT-embed predictions
- Direct vs FT-embed model comparisons

## Task Modes Explained

### Entry Mode (Training)

Standard model training mode. Trains one or more models specified in `runner.model_keys`.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "entry",
    "model_keys": ["xgb", "resn"],
    "max_evals": 50
  }
}
```

**Equivalent to**: `ins_pricing/examples/02 Train_XGBResN.ipynb`

### Explain Mode

Generates model explanations using various methods.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "explain"
  },
  "explain": {
    "model_keys": ["xgb"],
    "methods": ["permutation", "shap"],
    "on_train": false,
    "permutation": {
      "n_repeats": 5,
      "max_rows": 5000
    },
    "shap": {
      "n_background": 500,
      "n_samples": 200
    }
  }
}
```

**Equivalent to**: `ins_pricing/examples/04 Explain_Run.ipynb`

**Supported methods**:
- `permutation`: Permutation feature importance
- `shap`: SHAP values
- `integrated_gradients`: Integrated gradients (for neural models)

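As a rough illustration of what the `permutation` method measures, the sketch
below shuffles one feature column and reports the average increase in MSE. It
is a generic, self-contained example, not the package's implementation; the
function name and the per-row `predict` callable are illustrative.

```python
import random

def permutation_importance(predict, X, y, col, n_repeats=5, seed=0):
    """Average MSE increase when column `col` is shuffled.

    predict: callable taking a row dict and returning a prediction.
    X: list of row dicts; y: list of targets.
    """
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    deltas = []
    for _ in range(n_repeats):
        vals = [r[col] for r in X]
        rng.shuffle(vals)                       # break the feature-target link
        shuffled = [dict(r, **{col: v}) for r, v in zip(X, vals)]
        deltas.append(mse(shuffled) - base)     # error increase for this repeat
    return sum(deltas) / n_repeats
```

An informative feature yields a large positive score; a feature the model
ignores scores near zero. The `n_repeats` and `max_rows` options in the explain
config control the number of shuffles and the sample size used.
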
### Incremental Mode

Incremental batch training for continuous model updates.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "incremental",
    "incremental_args": [
      "--incremental-dir", "./IncrementalBatches",
      "--incremental-template", "{model_name}_2025Q1.csv",
      "--merge-keys", "policy_id", "vehicle_id",
      "--model-keys", "xgb",
      "--update-base-data"
    ]
  }
}
```

### Watchdog Mode

Automated monitoring and retraining when new data arrives.

**Example config snippet**:
```json
{
  "runner": {
    "mode": "watchdog",
    "use_watchdog": true,
    "idle_seconds": 7200,
    "max_restarts": 50
  }
}
```

## Configuration Examples

### Minimal Training Config

```json
{
  "data_dir": "./Data",
  "model_list": ["od"],
  "model_categories": ["bc"],
  "target": "response",
  "weight": "weights",
  "feature_list": ["age", "gender", "region"],
  "categorical_features": ["gender", "region"],
  "runner": {
    "mode": "entry",
    "model_keys": ["xgb"],
    "max_evals": 50
  }
}
```

### Minimal Explain Config

```json
{
  "data_dir": "./Data",
  "model_list": ["od"],
  "model_categories": ["bc"],
  "target": "response",
  "weight": "weights",
  "output_dir": "./Results",
  "runner": {
    "mode": "explain"
  },
  "explain": {
    "model_keys": ["xgb"],
    "methods": ["permutation"]
  }
}
```

### Full Configuration Examples

Refer to the configuration files in the `ins_pricing/examples/` directory:
- `config_template.json` - Complete training template
- `config_xgb_direct.json` - XGBoost training
- `config_resn_direct.json` - ResNet training
- `config_explain_template.json` - Model explanation template
- `config_ft_unsupervised_*.json` - FT-Transformer configs
- `config_incremental_template.json` - Incremental training template

## FAQ

### Q: How do I access the frontend interface?

A: After launching, the browser will open automatically; otherwise, manually navigate to `http://localhost:7860`.

### Q: Which task mode will run?

A: The task mode is determined by `config.runner.mode` in your configuration file:
- `"entry"` = Training
- `"explain"` = Explanation
- `"incremental"` = Incremental training
- `"watchdog"` = Watchdog mode

### Q: Can I interrupt the task?

A: The current version does not support interruption. Tasks must run to completion once started.

### Q: How do I run explanation after training?

A: First, run training with a config file. Then, load an explain config that points to the same output directory, and set `runner.mode` to `"explain"`.

### Q: What if logs don't display?

A: Check that the configuration is correct and the data paths exist. Check the console for error messages.

### Q: Can I run multiple tasks simultaneously?

A: Not recommended. Wait for the current task to complete before starting a new one.

### Q: How do I run on a remote server?

A: Set `server_name="0.0.0.0"` when launching, then access the UI via the server's IP and port.

```python
demo.launch(server_name="0.0.0.0", server_port=7860)
```

### Q: Where are configuration files saved?

A: By default, in the current working directory. You can specify a full path in "Save Filename".

### Q: How do I run plotting tasks?

A: Plotting tasks (oneway, lift, double-lift) can be run from config files with plotting enabled. See the `config_plot.json` example, or run the plotting notebooks in `examples/` manually.

## Technical Architecture

- **Frontend Framework**: Gradio 4.x
- **Configuration Management**: `ConfigBuilder` class
- **Task Execution**: `TaskRunner` class (with real-time log capture and auto-detection)
- **Backend Interface**: `ins_pricing.cli.utils.notebook_utils.run_from_config` (unified entry point)

## Development Guide

### File Structure

```
ins_pricing/frontend/
├── __init__.py           # Package initialization
├── app.py                # Main application entry
├── config_builder.py     # Configuration builder
├── runner.py             # Unified task runner
├── requirements.txt      # Dependencies
├── README.md             # This document
├── QUICKSTART.md         # Quick start guide
├── example_config.json   # Example configuration
├── start_app.bat         # Windows launcher
└── start_app.sh          # Linux/Mac launcher
```

### Extending Functionality

To add new features:

1. **Add new config parameters**: Modify the `ConfigBuilder` class in `config_builder.py`
2. **Modify the UI layout**: Edit the `create_ui()` function in `app.py`
3. **Customize task handling**: Modify the `TaskRunner` class in `runner.py`

### How Task Detection Works

The `TaskRunner` reads `config.runner.mode` from your JSON file and automatically calls the appropriate backend function via `run_from_config()`. No manual routing needed!

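Conceptually, the detection amounts to a mode-keyed dispatch. The sketch below
is hypothetical: the real routing lives in
`ins_pricing.cli.utils.notebook_utils.run_from_config`, and the handler bodies
here are placeholders, not the actual backend functions.

```python
import json

# Placeholder handlers, one per runner.mode value
HANDLERS = {
    "entry": lambda cfg: f"train {cfg['runner'].get('model_keys', [])}",
    "explain": lambda cfg: "explain",
    "incremental": lambda cfg: "incremental",
    "watchdog": lambda cfg: "watchdog",
}

def dispatch(config_path):
    """Load a config JSON and route on config.runner.mode."""
    with open(config_path) as f:
        cfg = json.load(f)
    mode = cfg.get("runner", {}).get("mode", "entry")  # manual configs default to entry
    return HANDLERS[mode](cfg)
```
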
## License

This project follows the same license as the `ins_pricing` package.
@@ -0,0 +1,10 @@
"""
Insurance Pricing Frontend Package
Web-based interface for configuring and running insurance pricing model tasks.
"""

from .config_builder import ConfigBuilder
from .runner import TaskRunner, TrainingRunner
from .ft_workflow import FTWorkflowHelper

__all__ = ['ConfigBuilder', 'TaskRunner', 'TrainingRunner', 'FTWorkflowHelper']