synth-ai 0.2.14__py3-none-any.whl → 0.2.16__py3-none-any.whl

This diff shows the changes between two publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.

Potentially problematic release.

Files changed (236)
  1. examples/README.md +1 -0
  2. examples/multi_step/SFT_README.md +147 -0
  3. examples/multi_step/configs/crafter_rl_stepwise_hosted_judge.toml +9 -9
  4. examples/multi_step/configs/crafter_sft_qwen30b_lora.toml +62 -0
  5. examples/multi_step/convert_traces_to_sft.py +84 -0
  6. examples/multi_step/run_sft_qwen30b.sh +45 -0
  7. examples/qwen_coder/configs/coder_lora_30b.toml +2 -1
  8. examples/qwen_coder/configs/coder_lora_4b.toml +2 -1
  9. examples/qwen_coder/configs/coder_lora_small.toml +2 -1
  10. examples/qwen_vl/BUGS_AND_FIXES.md +232 -0
  11. examples/qwen_vl/IMAGE_VALIDATION_COMPLETE.md +271 -0
  12. examples/qwen_vl/IMAGE_VALIDATION_SUMMARY.md +260 -0
  13. examples/qwen_vl/INFERENCE_SFT_TESTS.md +412 -0
  14. examples/qwen_vl/NEXT_STEPS_2B.md +325 -0
  15. examples/qwen_vl/QUICKSTART.md +327 -0
  16. examples/qwen_vl/QUICKSTART_RL_VISION.md +110 -0
  17. examples/qwen_vl/README.md +154 -0
  18. examples/qwen_vl/RL_VISION_COMPLETE.md +475 -0
  19. examples/qwen_vl/RL_VISION_TESTING.md +333 -0
  20. examples/qwen_vl/SDK_VISION_INTEGRATION.md +328 -0
  21. examples/qwen_vl/SETUP_COMPLETE.md +275 -0
  22. examples/qwen_vl/VISION_TESTS_COMPLETE.md +490 -0
  23. examples/qwen_vl/VLM_PIPELINE_COMPLETE.md +242 -0
  24. examples/qwen_vl/__init__.py +2 -0
  25. examples/qwen_vl/collect_data_via_cli.md +423 -0
  26. examples/qwen_vl/collect_vision_traces.py +368 -0
  27. examples/qwen_vl/configs/crafter_rl_vision_qwen3vl4b.toml +127 -0
  28. examples/qwen_vl/configs/crafter_vlm_sft_example.toml +60 -0
  29. examples/qwen_vl/configs/eval_gpt4o_mini_vision.toml +43 -0
  30. examples/qwen_vl/configs/eval_gpt4o_vision_proper.toml +29 -0
  31. examples/qwen_vl/configs/eval_gpt5nano_vision.toml +45 -0
  32. examples/qwen_vl/configs/eval_qwen2vl_vision.toml +44 -0
  33. examples/qwen_vl/configs/filter_qwen2vl_sft.toml +50 -0
  34. examples/qwen_vl/configs/filter_vision_sft.toml +53 -0
  35. examples/qwen_vl/configs/filter_vision_test.toml +8 -0
  36. examples/qwen_vl/configs/sft_qwen3_vl_2b_test.toml +54 -0
  37. examples/qwen_vl/crafter_gpt5nano_agent.py +308 -0
  38. examples/qwen_vl/crafter_qwen_vl_agent.py +300 -0
  39. examples/qwen_vl/run_vision_comparison.sh +62 -0
  40. examples/qwen_vl/run_vision_sft_pipeline.sh +175 -0
  41. examples/qwen_vl/test_image_validation.py +201 -0
  42. examples/qwen_vl/test_sft_vision_data.py +110 -0
  43. examples/rl/README.md +1 -1
  44. examples/rl/configs/eval_base_qwen.toml +17 -0
  45. examples/rl/configs/eval_rl_qwen.toml +13 -0
  46. examples/rl/configs/rl_from_base_qwen.toml +37 -0
  47. examples/rl/configs/rl_from_base_qwen17.toml +76 -0
  48. examples/rl/configs/rl_from_ft_qwen.toml +37 -0
  49. examples/rl/run_eval.py +436 -0
  50. examples/rl/run_rl_and_save.py +111 -0
  51. examples/rl/task_app/README.md +22 -0
  52. examples/rl/task_app/math_single_step.py +990 -0
  53. examples/rl/task_app/math_task_app.py +111 -0
  54. examples/sft/README.md +5 -5
  55. examples/sft/configs/crafter_fft_qwen0p6b.toml +4 -2
  56. examples/sft/configs/crafter_lora_qwen0p6b.toml +4 -3
  57. examples/sft/evaluate.py +2 -4
  58. examples/sft/export_dataset.py +7 -4
  59. examples/swe/task_app/README.md +1 -1
  60. examples/swe/task_app/grpo_swe_mini.py +0 -1
  61. examples/swe/task_app/grpo_swe_mini_task_app.py +0 -12
  62. examples/swe/task_app/hosted/envs/mini_swe/environment.py +13 -13
  63. examples/swe/task_app/hosted/policy_routes.py +0 -2
  64. examples/swe/task_app/hosted/rollout.py +0 -8
  65. examples/task_apps/crafter/task_app/grpo_crafter.py +4 -7
  66. examples/task_apps/crafter/task_app/synth_envs_hosted/envs/crafter/policy.py +59 -1
  67. examples/task_apps/crafter/task_app/synth_envs_hosted/inference/openai_client.py +30 -0
  68. examples/task_apps/crafter/task_app/synth_envs_hosted/policy_routes.py +62 -31
  69. examples/task_apps/crafter/task_app/synth_envs_hosted/rollout.py +16 -14
  70. examples/task_apps/enron/__init__.py +1 -0
  71. examples/vlm/README.md +3 -3
  72. examples/vlm/configs/crafter_vlm_gpt4o.toml +2 -0
  73. examples/vlm/crafter_openai_vlm_agent.py +3 -5
  74. examples/vlm/filter_image_rows.py +1 -1
  75. examples/vlm/run_crafter_vlm_benchmark.py +2 -2
  76. examples/warming_up_to_rl/_utils.py +92 -0
  77. examples/warming_up_to_rl/analyze_trace_db.py +1 -1
  78. examples/warming_up_to_rl/configs/crafter_fft.toml +2 -0
  79. examples/warming_up_to_rl/configs/crafter_fft_4b.toml +2 -0
  80. examples/warming_up_to_rl/configs/eval_fft_qwen4b.toml +2 -0
  81. examples/warming_up_to_rl/configs/eval_groq_qwen32b.toml +2 -0
  82. examples/warming_up_to_rl/configs/eval_modal_qwen4b.toml +2 -1
  83. examples/warming_up_to_rl/configs/rl_from_base_qwen4b.toml +2 -1
  84. examples/warming_up_to_rl/configs/rl_from_ft.toml +2 -0
  85. examples/warming_up_to_rl/export_trace_sft.py +174 -60
  86. examples/warming_up_to_rl/readme.md +63 -132
  87. examples/warming_up_to_rl/run_fft_and_save.py +1 -1
  88. examples/warming_up_to_rl/run_rl_and_save.py +1 -1
  89. examples/warming_up_to_rl/task_app/README.md +42 -0
  90. examples/warming_up_to_rl/task_app/grpo_crafter.py +696 -0
  91. examples/warming_up_to_rl/task_app/grpo_crafter_task_app.py +135 -0
  92. examples/warming_up_to_rl/task_app/synth_envs_hosted/README.md +173 -0
  93. examples/warming_up_to_rl/task_app/synth_envs_hosted/__init__.py +5 -0
  94. examples/warming_up_to_rl/task_app/synth_envs_hosted/branching.py +143 -0
  95. examples/warming_up_to_rl/task_app/synth_envs_hosted/environment_routes.py +1226 -0
  96. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/__init__.py +1 -0
  97. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/__init__.py +6 -0
  98. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/app.py +1 -0
  99. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/environment.py +522 -0
  100. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/policy.py +478 -0
  101. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/react_agent.py +108 -0
  102. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/shared.py +305 -0
  103. examples/warming_up_to_rl/task_app/synth_envs_hosted/envs/crafter/tools.py +47 -0
  104. examples/warming_up_to_rl/task_app/synth_envs_hosted/hosted_app.py +204 -0
  105. examples/warming_up_to_rl/task_app/synth_envs_hosted/inference/__init__.py +5 -0
  106. examples/warming_up_to_rl/task_app/synth_envs_hosted/inference/openai_client.py +618 -0
  107. examples/warming_up_to_rl/task_app/synth_envs_hosted/main.py +100 -0
  108. examples/warming_up_to_rl/task_app/synth_envs_hosted/policy_routes.py +1081 -0
  109. examples/warming_up_to_rl/task_app/synth_envs_hosted/registry.py +195 -0
  110. examples/warming_up_to_rl/task_app/synth_envs_hosted/rollout.py +1861 -0
  111. examples/warming_up_to_rl/task_app/synth_envs_hosted/storage/__init__.py +5 -0
  112. examples/warming_up_to_rl/task_app/synth_envs_hosted/storage/volume.py +211 -0
  113. examples/warming_up_to_rl/task_app/synth_envs_hosted/test_agents.py +161 -0
  114. examples/warming_up_to_rl/task_app/synth_envs_hosted/test_service.py +137 -0
  115. examples/warming_up_to_rl/task_app/synth_envs_hosted/utils.py +62 -0
  116. synth_ai/__init__.py +44 -30
  117. synth_ai/_utils/__init__.py +47 -0
  118. synth_ai/_utils/base_url.py +10 -0
  119. synth_ai/_utils/http.py +10 -0
  120. synth_ai/_utils/prompts.py +10 -0
  121. synth_ai/_utils/task_app_state.py +12 -0
  122. synth_ai/_utils/user_config.py +10 -0
  123. synth_ai/api/models/supported.py +144 -7
  124. synth_ai/api/train/__init__.py +13 -1
  125. synth_ai/api/train/cli.py +30 -7
  126. synth_ai/api/train/config_finder.py +18 -11
  127. synth_ai/api/train/env_resolver.py +13 -10
  128. synth_ai/cli/__init__.py +62 -78
  129. synth_ai/cli/_modal_wrapper.py +7 -5
  130. synth_ai/cli/_typer_patch.py +0 -2
  131. synth_ai/cli/_validate_task_app.py +22 -4
  132. synth_ai/cli/legacy_root_backup.py +3 -1
  133. synth_ai/cli/lib/__init__.py +10 -0
  134. synth_ai/cli/lib/task_app_discovery.py +7 -0
  135. synth_ai/cli/lib/task_app_env.py +518 -0
  136. synth_ai/cli/recent.py +2 -1
  137. synth_ai/cli/setup.py +266 -0
  138. synth_ai/cli/status.py +1 -1
  139. synth_ai/cli/task_app_deploy.py +16 -0
  140. synth_ai/cli/task_app_list.py +25 -0
  141. synth_ai/cli/task_app_modal_serve.py +16 -0
  142. synth_ai/cli/task_app_serve.py +18 -0
  143. synth_ai/cli/task_apps.py +71 -31
  144. synth_ai/cli/traces.py +1 -1
  145. synth_ai/cli/train.py +18 -0
  146. synth_ai/cli/tui.py +7 -2
  147. synth_ai/cli/turso.py +1 -1
  148. synth_ai/cli/watch.py +1 -1
  149. synth_ai/demos/__init__.py +10 -0
  150. synth_ai/demos/core/__init__.py +28 -1
  151. synth_ai/demos/crafter/__init__.py +1 -0
  152. synth_ai/demos/crafter/crafter_fft_4b.toml +55 -0
  153. synth_ai/demos/crafter/grpo_crafter_task_app.py +185 -0
  154. synth_ai/demos/crafter/rl_from_base_qwen4b.toml +74 -0
  155. synth_ai/demos/demo_registry.py +176 -0
  156. synth_ai/demos/math/__init__.py +1 -0
  157. synth_ai/demos/math/_common.py +16 -0
  158. synth_ai/demos/math/app.py +38 -0
  159. synth_ai/demos/math/config.toml +76 -0
  160. synth_ai/demos/math/deploy_modal.py +54 -0
  161. synth_ai/demos/math/modal_task_app.py +702 -0
  162. synth_ai/demos/math/task_app_entry.py +51 -0
  163. synth_ai/environments/environment/core.py +7 -1
  164. synth_ai/environments/examples/bandit/engine.py +0 -1
  165. synth_ai/environments/examples/bandit/environment.py +0 -1
  166. synth_ai/environments/examples/wordle/environment.py +0 -1
  167. synth_ai/evals/base.py +16 -5
  168. synth_ai/evals/client.py +1 -1
  169. synth_ai/inference/client.py +1 -1
  170. synth_ai/judge_schemas.py +8 -8
  171. synth_ai/learning/client.py +1 -1
  172. synth_ai/learning/health.py +1 -1
  173. synth_ai/learning/jobs.py +1 -1
  174. synth_ai/learning/rl/client.py +1 -1
  175. synth_ai/learning/rl/env_keys.py +1 -1
  176. synth_ai/learning/rl/secrets.py +1 -1
  177. synth_ai/learning/sft/client.py +1 -1
  178. synth_ai/learning/sft/data.py +407 -4
  179. synth_ai/learning/validators.py +4 -1
  180. synth_ai/task/apps/__init__.py +4 -2
  181. synth_ai/task/config.py +6 -4
  182. synth_ai/task/rubrics/__init__.py +1 -2
  183. synth_ai/task/rubrics/loaders.py +14 -10
  184. synth_ai/task/rubrics.py +219 -0
  185. synth_ai/task/trace_correlation_helpers.py +24 -11
  186. synth_ai/task/tracing_utils.py +14 -3
  187. synth_ai/task/validators.py +2 -3
  188. synth_ai/tracing_v3/abstractions.py +3 -3
  189. synth_ai/tracing_v3/config.py +15 -13
  190. synth_ai/tracing_v3/constants.py +21 -0
  191. synth_ai/tracing_v3/db_config.py +3 -1
  192. synth_ai/tracing_v3/decorators.py +10 -7
  193. synth_ai/tracing_v3/llm_call_record_helpers.py +5 -5
  194. synth_ai/tracing_v3/session_tracer.py +7 -7
  195. synth_ai/tracing_v3/storage/base.py +29 -29
  196. synth_ai/tracing_v3/storage/config.py +3 -3
  197. synth_ai/tracing_v3/turso/daemon.py +8 -9
  198. synth_ai/tracing_v3/turso/native_manager.py +80 -72
  199. synth_ai/tracing_v3/utils.py +2 -2
  200. synth_ai/tui/cli/query_experiments.py +4 -4
  201. synth_ai/tui/cli/query_experiments_v3.py +4 -4
  202. synth_ai/tui/dashboard.py +14 -9
  203. synth_ai/utils/__init__.py +101 -0
  204. synth_ai/utils/base_url.py +94 -0
  205. synth_ai/utils/cli.py +131 -0
  206. synth_ai/utils/env.py +287 -0
  207. synth_ai/utils/http.py +169 -0
  208. synth_ai/utils/modal.py +308 -0
  209. synth_ai/utils/process.py +212 -0
  210. synth_ai/utils/prompts.py +39 -0
  211. synth_ai/utils/sqld.py +122 -0
  212. synth_ai/utils/task_app_discovery.py +882 -0
  213. synth_ai/utils/task_app_env.py +186 -0
  214. synth_ai/utils/task_app_state.py +318 -0
  215. synth_ai/utils/user_config.py +137 -0
  216. synth_ai/v0/config/__init__.py +1 -5
  217. synth_ai/v0/config/base_url.py +1 -7
  218. synth_ai/v0/tracing/config.py +1 -1
  219. synth_ai/v0/tracing/decorators.py +1 -1
  220. synth_ai/v0/tracing/upload.py +1 -1
  221. synth_ai/v0/tracing_v1/config.py +1 -1
  222. synth_ai/v0/tracing_v1/decorators.py +1 -1
  223. synth_ai/v0/tracing_v1/upload.py +1 -1
  224. {synth_ai-0.2.14.dist-info → synth_ai-0.2.16.dist-info}/METADATA +85 -31
  225. {synth_ai-0.2.14.dist-info → synth_ai-0.2.16.dist-info}/RECORD +229 -117
  226. synth_ai/cli/man.py +0 -106
  227. synth_ai/compound/cais.py +0 -0
  228. synth_ai/core/experiment.py +0 -13
  229. synth_ai/core/system.py +0 -15
  230. synth_ai/demo_registry.py +0 -295
  231. synth_ai/handshake.py +0 -109
  232. synth_ai/http.py +0 -26
  233. {synth_ai-0.2.14.dist-info → synth_ai-0.2.16.dist-info}/WHEEL +0 -0
  234. {synth_ai-0.2.14.dist-info → synth_ai-0.2.16.dist-info}/entry_points.txt +0 -0
  235. {synth_ai-0.2.14.dist-info → synth_ai-0.2.16.dist-info}/licenses/LICENSE +0 -0
  236. {synth_ai-0.2.14.dist-info → synth_ai-0.2.16.dist-info}/top_level.txt +0 -0
examples/README.md ADDED
@@ -0,0 +1 @@
+ ### The instructions for how to create and configure a task app are documented at https://docs.usesynth.ai/sdk/task-apps
examples/multi_step/SFT_README.md ADDED
@@ -0,0 +1,147 @@
+ # SFT Training for Qwen3-Coder-30B with LoRA
+
+ Supervised Fine-Tuning configuration for the same 30B MoE model used in RL training.
+
+ ## Configuration Overview
+
+ **Model:** `Qwen/Qwen3-Coder-30B-A3B-Instruct` (Mixture of Experts)
+
+ **Hardware:** 4x H200 GPUs (564GB total VRAM)
+
+ **Parallelism Strategy:**
+ - **Tensor Parallel (TP)**: 2 GPUs - Splits the model across 2 GPUs for inference/forward pass
+ - **Data Parallel (DP)**: 2 GPUs - Splits batches across 2 GPUs for training throughput
+
+ **LoRA Configuration:**
+ - Rank (r): 16
+ - Alpha: 32
+ - Dropout: 0.05
+ - Target modules: `["all-linear"]` - Applies LoRA to all linear layers
+
+ ## Memory Breakdown per GPU
+
+ With 4x H200 (141GB each):
+
+ **Model Split (TP=2):**
+ - 2 GPUs hold the base model (70GB each)
+ - ~70GB free per GPU for activations and gradients
+
+ **Training (DP=2):**
+ - 2 GPUs process different batches
+ - LoRA adapters: ~5-10GB per GPU
+ - Gradients/optimizer states: ~20-30GB per GPU
+ - **Total per training GPU: ~50-60GB** ✅
+
+ ## Quick Start
+
+ ### 1. Prepare Your Dataset
+
+ Your dataset should be in JSONL format with conversation turns:
+
+ ```jsonl
+ {"messages": [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
+ {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
+ ```
+
+ ### 2. Run Training
+
+ ```bash
+ # Using the helper script
+ ./examples/multi_step/run_sft_qwen30b.sh path/to/your/dataset.jsonl
+
+ # Or directly with synth-ai CLI
+ uvx synth-ai train \
+   --type sft \
+   --config examples/multi_step/configs/crafter_sft_qwen30b_lora.toml \
+   --dataset path/to/your/dataset.jsonl \
+   --env-file backend/.env.dev
+ ```
+
+ ### 3. Monitor Training
+
+ Check the Synth dashboard for:
+ - Training loss curve
+ - Validation metrics (if validation set provided)
+ - GPU utilization
+ - Training throughput (tokens/sec)
+
+ ## Hyperparameters
+
+ **Batch Configuration:**
+ - Per-device batch size: 1
+ - Gradient accumulation: 64 steps
+ - **Effective global batch size: 128** (1 × 64 × 2 GPUs)
+
+ **Learning Rate:**
+ - Initial LR: 5e-6
+ - Warmup ratio: 3%
+ - Schedule: Linear decay
+
+ **Sequence Length:** 4096 tokens
+
+ **Training:**
+ - Epochs: 1
+ - Mixed precision: BF16
+ - DeepSpeed: Stage 2 (optimizer state sharding)
+ - Activation checkpointing: Enabled
+
+ ## Configuration File Structure
+
+ ```toml
+ [algorithm]
+ type = "offline" # Supervised (not RL)
+ method = "sft" # Supervised fine-tuning
+ variety = "lora" # Using LoRA adapters
+
+ [compute]
+ gpu_type = "H200"
+ gpu_count = 4
+
+ [data.topology]
+ tensor_parallel = 2 # Split model across 2 GPUs
+ data_parallel = 2 # Split batches across 2 GPUs
+
+ [training]
+ mode = "lora"
+ use_qlora = true # Quantized LoRA (4-bit base model)
+
+ [lora]
+ r = 16 # LoRA rank
+ alpha = 32 # LoRA scaling
+ dropout = 0.05
+ target_modules = ["all-linear"] # Apply to all linear layers
+ ```
+
+ ## Comparison with RL Config
+
+ | Aspect | SFT | RL |
+ |--------|-----|-----|
+ | Purpose | Supervised learning | Reinforcement learning |
+ | Data | Labeled examples | Environment interactions |
+ | Topology | TP=2, DP=2 | Split: 2 inference + 2 training |
+ | Batch size | 128 (effective) | Variable (episode-based) |
+ | Training | Standard backprop | Policy gradient (GSPO) |
+
+ ## Tips
+
+ 1. **Start Small:** Test with a small dataset first to verify the pipeline
+ 2. **Validation:** Add a validation set to monitor overfitting
+ 3. **Checkpointing:** Training saves checkpoints every 100 steps
+ 4. **Resume:** Can resume from checkpoint if training is interrupted
+ 5. **Inference:** After training, use the LoRA adapter with the base model
+
+ ## Output
+
+ After training completes, you'll get:
+ - LoRA adapter weights (saved to volume)
+ - Training metrics and logs
+ - Best checkpoint (based on validation loss)
+ - Model ready for inference or RL initialization
+
+ ## Next Steps
+
+ 1. **Evaluate:** Test your fine-tuned model on held-out data
+ 2. **RL Training:** Use this as initialization for RL (`init_from_sft = true`)
+ 3. **Deploy:** Load LoRA adapter for inference
+ 4. **Iterate:** Adjust hyperparameters based on performance
+
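Before submitting a job against the README above, it can help to confirm a dataset actually matches the `{"messages": [...]}` JSONL shape described in step 1. The snippet below is a minimal illustrative sketch, not part of the released package; the function name and file path are hypothetical.

```python
import json
import sys

ALLOWED_ROLES = {"system", "user", "assistant"}

def check_sft_jsonl(path: str) -> None:
    """Lightweight structural check for a messages[]-style SFT dataset."""
    with open(path) as fh:
        for line_no, line in enumerate(fh, 1):
            if not line.strip():
                continue
            row = json.loads(line)
            messages = row.get("messages")
            assert isinstance(messages, list) and messages, f"line {line_no}: missing messages[]"
            for msg in messages:
                assert msg.get("role") in ALLOWED_ROLES, f"line {line_no}: unexpected role {msg.get('role')!r}"
                assert isinstance(msg.get("content"), str), f"line {line_no}: content must be a string"
            # Each example should end on the assistant turn being learned.
            assert messages[-1]["role"] == "assistant", f"line {line_no}: last turn is not assistant"
    print("dataset looks structurally valid")

if __name__ == "__main__":
    check_sft_jsonl(sys.argv[1])
```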
examples/multi_step/configs/crafter_rl_stepwise_hosted_judge.toml CHANGED
@@ -16,24 +16,24 @@ judge_url = "https://synth-backend-dev-docker.onrender.com/api"
 
 [compute]
 gpu_type = "H200"
- gpu_count = 2
+ gpu_count = 4
 
 [topology]
 type = "single_node_split"
- gpus_for_vllm = 1
- gpus_for_training = 1
+ gpus_for_vllm = 2
+ gpus_for_training = 2
 gpus_for_ref = 0
- tensor_parallel = 1
+ tensor_parallel = 2
 
 [vllm]
- tensor_parallel_size = 1
- max_model_len = 8192
+ tensor_parallel_size = 2
+ max_model_len = 4096
 
 [reference]
 placement = "none"
 
 [model]
- base = "Qwen/Qwen3-4B"
+ base = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
 trainer_mode = "lora"
 label = "crafter-rl-stepwise-hosted-judge"
 
@@ -74,7 +74,7 @@ seeds = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
 
 [training]
 num_epochs = 1
- iterations_per_epoch = 20
+ iterations_per_epoch = 5
 gradient_accumulation_steps = 1
 max_accumulated_minibatch = 1
 max_turns = 10
@@ -84,7 +84,7 @@ learning_rate = 5e-5
 log_interval = 1
 weight_sync_interval = 1
 event_rewards_kind = "unique"
- async_semaphore_max = 40 # Max concurrent rollouts in streaming pipeline
+ async_semaphore_max = 4 # Max concurrent rollouts in streaming pipeline
 
 # Enable dense decision rewards in the trainer to mirror env_config step rewards.
 step_rewards_enabled = true
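The change above moves this job from a 1+1 GPU split to 2 GPUs for vLLM and 2 for training on a 4-GPU node. A quick arithmetic check can catch a split that no longer matches `gpu_count` before a job is submitted; the sketch below uses only the standard library and assumes nothing beyond the key names visible in the TOML above (it is not a synth-ai utility).

```python
import tomllib  # Python 3.11+

def check_gpu_split(path: str) -> None:
    """Verify that a single_node_split topology adds up to the provisioned GPUs."""
    with open(path, "rb") as fh:
        cfg = tomllib.load(fh)
    compute, topo = cfg["compute"], cfg["topology"]
    used = topo["gpus_for_vllm"] + topo["gpus_for_training"] + topo["gpus_for_ref"]
    assert used == compute["gpu_count"], (
        f"topology uses {used} GPUs but [compute] provisions {compute['gpu_count']}"
    )
    assert cfg["vllm"]["tensor_parallel_size"] <= topo["gpus_for_vllm"], (
        "vLLM tensor parallel size cannot exceed the GPUs reserved for inference"
    )

check_gpu_split("examples/multi_step/configs/crafter_rl_stepwise_hosted_judge.toml")
```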
examples/multi_step/configs/crafter_sft_qwen30b_lora.toml ADDED
@@ -0,0 +1,62 @@
+ # Crafter SFT LoRA configuration
+ # Train Qwen3-Coder-30B on Crafter agent traces
+
+ [algorithm]
+ type = "offline"
+ method = "sft"
+ variety = "lora"
+
+ [job]
+ model = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
+ # Default dataset - can override with --dataset flag
+ data = "traces/crafter_sft_converted.jsonl"
+
+ [compute]
+ gpu_type = "H200"
+ gpu_count = 2
+ nodes = 1
+
+ [data]
+ # Forwarded into metadata.effective_config
+ topology = {}
+ # Optional validation set if you have one locally
+ # validation_path = "examples/multi_step/ft_data/crafter_sft.val.jsonl"
+
+ [training]
+ mode = "lora"
+ use_qlora = true
+
+ [training.validation]
+ enabled = true
+ evaluation_strategy = "steps"
+ eval_steps = 100
+ save_best_model_at_end = true
+ metric_for_best_model = "val.loss"
+ greater_is_better = false
+
+ [hyperparameters]
+ n_epochs = 1
+ train_kind = "peft"
+ per_device_batch = 1
+ gradient_accumulation_steps = 64
+ sequence_length = 4096
+ learning_rate = 5e-6
+ warmup_ratio = 0.03
+ lora_rank = 16
+ lora_alpha = 32
+ lora_dropout = 0.05
+ lora_target_modules = ["all-linear"]
+
+ [hyperparameters.parallelism]
+ use_deepspeed = true
+ deepspeed_stage = 2
+ fsdp = false
+ bf16 = true
+ fp16 = false
+ activation_checkpointing = true
+
+ [tags]
+ experiment = "crafter_sft_lora_qwen_coder_30b"
+ task = "crafter_agent"
+ model_size = "30b"
+
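The SFT README quotes an effective global batch size of 128 for this setup; the number follows from the hyperparameters in this config plus the degree of data parallelism. A minimal sketch of the arithmetic (the data-parallel factor of 2 is taken from the README's TP=2/DP=2 layout and is an assumption here, since this file leaves `topology` empty):

```python
per_device_batch = 1     # [hyperparameters] per_device_batch
grad_accum_steps = 64    # [hyperparameters] gradient_accumulation_steps
data_parallel = 2        # assumed from the README's DP=2 layout; topology = {} in this file

effective_global_batch = per_device_batch * grad_accum_steps * data_parallel
print(effective_global_batch)  # 128
```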
examples/multi_step/convert_traces_to_sft.py ADDED
@@ -0,0 +1,84 @@
+ #!/usr/bin/env python3
+ """Convert Crafter trace format to SFT format with messages[] structure."""
+
+ import json
+ import sys
+ from pathlib import Path
+
+ def convert_trace_to_sft(trace: dict) -> dict:
+     """Convert a single trace to SFT format."""
+     # Extract dialogue from trace
+     dialogue = trace.get("dialogue", [])
+     assistant = trace.get("assistant", {})
+
+     # Build messages list
+     messages = []
+
+     # Add dialogue history
+     for msg in dialogue:
+         messages.append({
+             "role": msg["role"],
+             "content": msg["content"]
+         })
+
+     # Add assistant response if present
+     if assistant:
+         content = assistant.get("content", "")
+         tool_calls = assistant.get("tool_calls", [])
+
+         # If there are tool calls, format them
+         if tool_calls:
+             # Convert tool calls to a simple text format for SFT
+             tool_text = "\n".join([
+                 f"Tool: {tc['name']}\nArguments: {json.dumps(tc.get('arguments', {}))}"
+                 for tc in tool_calls
+             ])
+             content = f"{content}\n\n{tool_text}".strip()
+
+         messages.append({
+             "role": "assistant",
+             "content": content
+         })
+
+     return {"messages": messages}
+
+ def main():
+     if len(sys.argv) < 2:
+         print("Usage: python convert_traces_to_sft.py <input.jsonl> [output.jsonl]")
+         sys.exit(1)
+
+     input_path = Path(sys.argv[1])
+     output_path = Path(sys.argv[2]) if len(sys.argv) > 2 else input_path.with_name(f"{input_path.stem}_sft_format.jsonl")
+
+     if not input_path.exists():
+         print(f"Error: Input file not found: {input_path}")
+         sys.exit(1)
+
+     print(f"Converting {input_path} → {output_path}")
+
+     converted = 0
+     skipped = 0
+
+     with open(input_path) as f_in, open(output_path, "w") as f_out:
+         for line_no, line in enumerate(f_in, 1):
+             try:
+                 trace = json.loads(line.strip())
+                 sft_entry = convert_trace_to_sft(trace)
+
+                 # Only write if we have messages
+                 if sft_entry["messages"]:
+                     f_out.write(json.dumps(sft_entry) + "\n")
+                     converted += 1
+                 else:
+                     skipped += 1
+
+             except Exception as e:
+                 print(f"Warning: Skipping line {line_no}: {e}")
+                 skipped += 1
+
+     print(f"✅ Converted {converted} entries, skipped {skipped}")
+     print(f"Output: {output_path}")
+
+ if __name__ == "__main__":
+     main()
+
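For a quick smoke test of the converter above, `convert_trace_to_sft()` can be called on a hand-built trace. The sample below is hypothetical: it only mirrors the keys the function actually reads (`dialogue`, `assistant`, `tool_calls`) and assumes the script is importable from the working directory.

```python
# Hypothetical smoke test; the trace layout is inferred from what convert_trace_to_sft() reads.
from convert_traces_to_sft import convert_trace_to_sft  # assumes the script is on the path

sample_trace = {
    "dialogue": [
        {"role": "system", "content": "You are playing Crafter."},
        {"role": "user", "content": "Observation: a tree is nearby."},
    ],
    "assistant": {
        "content": "I will gather wood.",
        "tool_calls": [{"name": "interact", "arguments": {"action": "do"}}],
    },
}

print(convert_trace_to_sft(sample_trace))
# {'messages': [..., {'role': 'assistant',
#                     'content': 'I will gather wood.\n\nTool: interact\nArguments: {"action": "do"}'}]}
```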
examples/multi_step/run_sft_qwen30b.sh ADDED
@@ -0,0 +1,45 @@
+ #!/bin/bash
+ # Run SFT for Qwen3-Coder-30B with LoRA on Crafter data
+
+ # Usage:
+ #   ./run_sft_qwen30b.sh <dataset_path> [env_file]
+ #
+ # Example:
+ #   ./run_sft_qwen30b.sh examples/multi_step/ft_data/crafter_traces.jsonl
+ #   ./run_sft_qwen30b.sh examples/multi_step/ft_data/crafter_traces.jsonl backend/.env.dev
+
+ set -e
+
+ DATASET_PATH="${1:-examples/sft/ft_data/crafter_traces.jsonl}"
+ ENV_FILE="${2:-backend/.env.dev}"
+
+ if [ ! -f "$DATASET_PATH" ]; then
+     echo "Error: Dataset not found at $DATASET_PATH"
+     echo "Usage: $0 <dataset_path> [env_file]"
+     exit 1
+ fi
+
+ if [ ! -f "$ENV_FILE" ]; then
+     echo "Error: Env file not found at $ENV_FILE"
+     echo "Usage: $0 <dataset_path> [env_file]"
+     exit 1
+ fi
+
+ echo "🚀 Starting SFT training for Qwen3-Coder-30B with LoRA"
+ echo "   Model: Qwen/Qwen3-Coder-30B-A3B-Instruct"
+ echo "   Dataset: $DATASET_PATH"
+ echo "   Config: examples/multi_step/configs/crafter_sft_qwen30b_lora.toml"
+ echo "   GPUs: 4x H200"
+ echo "   LoRA: r=16, alpha=32, all-linear"
+ echo ""
+
+ uvx synth-ai train \
+     --type sft \
+     --config examples/multi_step/configs/crafter_sft_qwen30b_lora.toml \
+     --dataset "$DATASET_PATH" \
+     --env-file "$ENV_FILE"
+
+ echo ""
+ echo "✅ SFT training job submitted!"
+ echo "   Monitor progress in your Synth dashboard"
+
examples/qwen_coder/configs/coder_lora_30b.toml CHANGED
@@ -1,5 +1,7 @@
 # Qwen3 Coder 30B LoRA SFT – all-linear adapters
 
+ type = "sft"
+
 [algorithm]
 type = "offline"
 method = "sft"
@@ -58,4 +60,3 @@ alpha = 32
 dropout = 0.05
 target_modules = ["all-linear"]
 
-
examples/qwen_coder/configs/coder_lora_4b.toml CHANGED
@@ -1,5 +1,7 @@
 # Qwen3 Coder 4B LoRA SFT – all-linear adapters
 
+ type = "sft"
+
 [job]
 model = "Qwen/Qwen3-4B"
 
@@ -54,4 +56,3 @@ dropout = 0.05
 target_modules = ["all-linear"]
 
 
-
examples/qwen_coder/configs/coder_lora_small.toml CHANGED
@@ -1,5 +1,7 @@
 # Qwen3 Coder LoRA SFT – all-linear adapters
 
+ type = "sft"
+
 [algorithm]
 type = "offline"
 method = "sft"
@@ -55,4 +57,3 @@ alpha = 32
 dropout = 0.05
 target_modules = ["all-linear"]
 
-
examples/qwen_vl/BUGS_AND_FIXES.md ADDED
@@ -0,0 +1,232 @@
+ # Vision SFT Pipeline - Bugs and Fixes
+
+ Complete log of issues encountered and resolved during vision data collection setup.
+
+ ## ✅ Issue #1: Import Error - CrafterEnvironment
+
+ **Problem:**
+ ```python
+ ImportError: cannot import name 'CrafterEnvironment' from 'examples.task_apps.crafter.task_app.synth_envs_hosted.envs.crafter.environment'
+ ```
+
+ **Root Cause:**
+ The class is named `CrafterEnvironmentWrapper`, not `CrafterEnvironment`.
+
+ **Fix:**
+ Updated imports and usages in:
+ - `crafter_gpt5nano_agent.py`
+ - `crafter_qwen_vl_agent.py`
+ - `collect_vision_traces.py`
+
+ ```python
+ # Before
+ from ...environment import CrafterEnvironment
+ wrapper = CrafterEnvironment(env, seed=seed)
+
+ # After
+ from ...environment import CrafterEnvironmentWrapper
+ wrapper = CrafterEnvironmentWrapper(env, seed=seed)
+ ```
+
+ **Status:** FIXED ✓
+
+ ---
+
+ ## ✅ Issue #2: OpenAI API Parameter - max_tokens
+
+ **Problem:**
+ ```
+ openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead."}}
+ ```
+
+ **Root Cause:**
+ gpt-5 models require the `max_completion_tokens` parameter instead of `max_tokens`.
+
+ **Fix:**
+ Updated the `_normalise_openai_request()` function to detect gpt-5 models:
+
+ ```python
+ def _normalise_openai_request(payload, model, temperature):
+     request = dict(payload)
+     request["model"] = model
+
+     # gpt-5 models use max_completion_tokens, not max_tokens
+     if "gpt-5" in model.lower():
+         request.setdefault("max_completion_tokens", 512)
+         request.pop("max_tokens", None)  # Remove if present
+     else:
+         # Older models use max_tokens
+         request.setdefault("max_tokens", 512)
+
+     return request
+ ```
+
+ **Files Updated:**
+ - `crafter_gpt5nano_agent.py`
+ - `collect_vision_traces.py`
+
+ **Status:** FIXED ✓
+
+ ---
+
+ ## ✅ Issue #3: OpenAI API Parameter - temperature
+
+ **Problem:**
+ ```
+ openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'temperature' does not support 0.6 with this model. Only the default (1) value is supported."}}
+ ```
+
+ **Root Cause:**
+ gpt-5-nano only supports `temperature=1` (the default); custom temperature values are not allowed.
+
+ **Fix:**
+ Remove the temperature parameter for gpt-5 models:
+
+ ```python
+ def _normalise_openai_request(payload, model, temperature):
+     # ...
+
+     if "gpt-5" in model.lower():
+         # gpt-5-nano only supports temperature=1 (default)
+         request.pop("temperature", None)  # Remove custom temperature
+         request.setdefault("max_completion_tokens", 512)
+         request.pop("max_tokens", None)
+     else:
+         # Older models support custom temperature
+         request.setdefault("temperature", temperature)
+         request.setdefault("max_tokens", 512)
+
+     return request
+ ```
+
+ **Files Updated:**
+ - `crafter_gpt5nano_agent.py`
+ - `collect_vision_traces.py`
+
+ **Status:** FIXED ✓
+
+ ---
+
+ ## ⚠️ Issue #4: gpt-5-nano Tool Calling Support
+
+ **Problem:**
+ ```
+ Seed 0: no tool calls returned by model; ending episode early at step 0.
+ ```
+
+ **Root Cause:**
+ gpt-5-nano does not appear to support function/tool calling yet, or requires a different prompt format for tool use.
+
+ **Testing Results:**
+ - API returned 200 OK (auth and network fine)
+ - Model processed vision inputs successfully
+ - Model did not return tool calls even with a tools schema provided
+ - Both episodes stopped immediately (step 0)
+
+ **Workaround:**
+ Switch to `gpt-4o-mini-2024-07-18` for data collection:
+ - Confirmed to support both vision AND tool calling
+ - Successfully completed 10 episodes with good quality
+ - Mean 2.6 achievements per episode
+ - 685 total tool calls across 10 episodes
+
+ **Status:** WORKAROUND APPLIED (use gpt-4o-mini) ✓
+
+ **Note:**
+ This is a model capability limitation, not a code bug. gpt-5-nano can be revisited when tool calling support is confirmed by OpenAI.
+
+ ---
+
+ ## 📊 Final Validation Results
+
+ ### Test Run #5: 10-Episode Collection with gpt-4o-mini
+
+ **Command:**
+ ```bash
+ uv run python examples/qwen_vl/crafter_gpt5nano_agent.py \
+   --model gpt-4o-mini-2024-07-18 \
+   --seeds 10 \
+   --steps 50
+ ```
+
+ **Results:**
+ ```
+ ✓ All 10 episodes completed (50 steps each)
+ ✓ Mean achievements: 2.6 per episode
+ ✓ Total tool calls: 685
+ ✓ Vision processing: Working (64x64 PNG frames)
+ ✓ Tool calling: Working (proper tool call format)
+ ✓ Frame saving: Working (saved to output directory)
+ ✓ Performance: ~5-6 minutes for 10 episodes
+ ```
+
+ **Quality Metrics:**
+ - Episode 1: 4 achievements, 72 tool calls, reward: 97.3
+ - Episode 5: 3 achievements, 62 tool calls, reward: 120.0
+ - Episode 8: 1 achievement, 71 tool calls, reward: 12.9
+ - Good variety in performance (1-4 achievements)
+
+ ---
+
+ ## 🔧 Code Changes Summary
+
+ ### Files Modified:
+ 1. **crafter_gpt5nano_agent.py**
+    - Import: `CrafterEnvironment` → `CrafterEnvironmentWrapper`
+    - Function: `_normalise_openai_request()` - handle gpt-5 parameters
+
+ 2. **crafter_qwen_vl_agent.py**
+    - Import: `CrafterEnvironment` → `CrafterEnvironmentWrapper`
+
+ 3. **collect_vision_traces.py**
+    - Import: `CrafterEnvironment` → `CrafterEnvironmentWrapper`
+    - Function: `_normalise_openai_request()` - handle gpt-5 parameters
+
+ ### Key Learnings:
+ 1. ✅ Always check actual class names in source code
+ 2. ✅ OpenAI's API evolves - newer models have different parameter requirements
+ 3. ✅ Test with known-working models first (gpt-4o-mini) before trying cutting-edge ones
+ 4. ✅ Vision + tool calling combo requires mature model support
+
+ ---
+
+ ## 🎯 Recommendations
+
+ ### For Production:
+ - **Teacher model:** Use `gpt-4o-mini-2024-07-18` for data collection
+   - Proven to work with vision + tools
+   - Good quality (2-4 achievements per episode)
+   - Reasonable cost
+
+ - **Monitor gpt-5-nano:** Revisit when tool calling support is confirmed
+
+ ### For Configs:
+ - Update eval configs to use `gpt-4o-mini` by default:
+   ```toml
+   [eval]
+   model = "gpt-4o-mini-2024-07-18" # Not gpt-5-nano
+   ```
+
+ ---
+
+ ## ✅ All Issues Resolved
+
+ **Infrastructure Status:** READY FOR PRODUCTION ✓
+
+ - Vision processing: Working
+ - Tool calling: Working
+ - Frame saving: Working
+ - OpenAI API integration: Working
+ - 10-episode test: Successful
+
+ **Next Steps:**
+ 1. Scale to 100 episodes for full dataset
+ 2. Apply filters and export to SFT format
+ 3. Train VLM with LoRA
+ 4. Fine-tune with RL
+
+ ---
+
+ **Last Updated:** 2025-10-26
+ **Test Environment:** synth-ai dev, macOS, Python 3.11
+
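For reference, the vision-plus-tool-calling request shape that the log above depends on looks roughly like the following when sent through the OpenAI Python client. This is a minimal sketch: the `interact` tool schema and the frame filename are made up for illustration, and the real Crafter tool definitions live in the task app.

```python
# Minimal sketch of a vision + tool-calling chat completion request (illustrative only).
import base64
from openai import OpenAI

client = OpenAI()

with open("frame_000.png", "rb") as fh:  # hypothetical 64x64 Crafter frame
    frame_b64 = base64.b64encode(fh.read()).decode()

tools = [{
    "type": "function",
    "function": {
        "name": "interact",  # illustrative stand-in for the task app's tool schema
        "description": "Take one Crafter action.",
        "parameters": {
            "type": "object",
            "properties": {"action": {"type": "string"}},
            "required": ["action"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    max_tokens=512,  # gpt-5 models would need max_completion_tokens instead
    tools=tools,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Choose the next action."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.tool_calls)
```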