liger-kernel-nightly 0.4.2.dev20241210002150__tar.gz → 0.4.2.dev20241210030943__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (69)
  1. {liger_kernel_nightly-0.4.2.dev20241210002150/src/liger_kernel_nightly.egg-info → liger_kernel_nightly-0.4.2.dev20241210030943}/PKG-INFO +5 -1
  2. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/README.md +4 -0
  3. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/pyproject.toml +1 -1
  4. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943/src/liger_kernel_nightly.egg-info}/PKG-INFO +5 -1
  5. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/LICENSE +0 -0
  6. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/NOTICE +0 -0
  7. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/setup.cfg +0 -0
  8. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/__init__.py +0 -0
  9. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/__init__.py +0 -0
  10. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/cpo_loss.py +0 -0
  11. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/dpo_loss.py +0 -0
  12. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/functional.py +0 -0
  13. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/fused_linear_distillation.py +0 -0
  14. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/fused_linear_preference.py +0 -0
  15. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/orpo_loss.py +0 -0
  16. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/chunked_loss/simpo_loss.py +0 -0
  17. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/env_report.py +0 -0
  18. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/__init__.py +0 -0
  19. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/cross_entropy.py +0 -0
  20. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/experimental/embedding.py +0 -0
  21. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/experimental/mm_int8int2.py +0 -0
  22. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/fused_linear_cross_entropy.py +0 -0
  23. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/fused_linear_jsd.py +0 -0
  24. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/geglu.py +0 -0
  25. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/group_norm.py +0 -0
  26. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/jsd.py +0 -0
  27. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/kl_div.py +0 -0
  28. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/layer_norm.py +0 -0
  29. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/qwen2vl_mrope.py +0 -0
  30. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/rms_norm.py +0 -0
  31. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/rope.py +0 -0
  32. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/swiglu.py +0 -0
  33. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/ops/utils.py +0 -0
  34. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/__init__.py +0 -0
  35. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/auto_model.py +0 -0
  36. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/cross_entropy.py +0 -0
  37. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/experimental/embedding.py +0 -0
  38. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/functional.py +0 -0
  39. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/fused_linear_cross_entropy.py +0 -0
  40. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/fused_linear_jsd.py +0 -0
  41. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/geglu.py +0 -0
  42. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/group_norm.py +0 -0
  43. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/jsd.py +0 -0
  44. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/kl_div.py +0 -0
  45. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/layer_norm.py +0 -0
  46. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/__init__.py +0 -0
  47. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/gemma.py +0 -0
  48. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/gemma2.py +0 -0
  49. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/llama.py +0 -0
  50. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/mistral.py +0 -0
  51. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/mixtral.py +0 -0
  52. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/mllama.py +0 -0
  53. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/phi3.py +0 -0
  54. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/qwen2.py +0 -0
  55. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/model/qwen2_vl.py +0 -0
  56. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/monkey_patch.py +0 -0
  57. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/orpo_trainer.py +0 -0
  58. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/qwen2vl_mrope.py +0 -0
  59. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/rms_norm.py +0 -0
  60. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/rope.py +0 -0
  61. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/swiglu.py +0 -0
  62. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/transformers/trainer_integration.py +0 -0
  63. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/triton/__init__.py +0 -0
  64. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/triton/monkey_patch.py +0 -0
  65. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel/utils.py +0 -0
  66. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel_nightly.egg-info/SOURCES.txt +0 -0
  67. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel_nightly.egg-info/dependency_links.txt +0 -0
  68. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel_nightly.egg-info/requires.txt +0 -0
  69. {liger_kernel_nightly-0.4.2.dev20241210002150 → liger_kernel_nightly-0.4.2.dev20241210030943}/src/liger_kernel_nightly.egg-info/top_level.txt +0 -0
PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: liger_kernel_nightly
-Version: 0.4.2.dev20241210002150
+Version: 0.4.2.dev20241210030943
 Summary: Efficient Triton kernels for LLM Training
 License: BSD 2-CLAUSE LICENSE
 Copyright 2024 LinkedIn Corporation
@@ -115,6 +115,7 @@ Requires-Dist: triton>=3.0.0; extra == "amd"
 <details>
 <summary>Latest News 🔥</summary>
 
+ - [2024/12/15] We release LinkedIn Engineering Blog - [Liger-Kernel: Empowering an open source ecosystem of Triton Kernels for Efficient LLM Training](https://www.linkedin.com/blog/engineering/open-source/liger-kernel-open-source-ecosystem-for-efficient-llm-training)
 - [2024/11/6] We release [v0.4.0](https://github.com/linkedin/Liger-Kernel/releases/tag/v0.4.0): Full AMD support, Tech Report, Modal CI, Llama-3.2-Vision!
 - [2024/10/21] We have released the tech report of Liger Kernel on Arxiv: https://arxiv.org/pdf/2410.10989
 - [2024/9/6] We release v0.2.1 ([X post](https://x.com/liger_kernel/status/1832168197002510649)). 2500+ Stars, 10+ New Contributors, 50+ PRs, 50k Downloads in two weeks!
@@ -126,6 +127,8 @@ Requires-Dist: triton>=3.0.0; extra == "amd"
 
 **Liger Kernel** is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU **training throughput by 20%** and reduces **memory usage by 60%**. We have implemented **Hugging Face Compatible** `RMSNorm`, `RoPE`, `SwiGLU`, `CrossEntropy`, `FusedLinearCrossEntropy`, and more to come. The kernel works out of the box with [Flash Attention](https://github.com/Dao-AILab/flash-attention), [PyTorch FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html), and [Microsoft DeepSpeed](https://github.com/microsoft/DeepSpeed). We welcome contributions from the community to gather the best kernels for LLM training.
 
+ We've also added optimized Post-Training kernels that deliver **up to 80% memory savings** for alignment and distillation tasks. We support losses like DPO, CPO, ORPO, SimPO, JSD, and many more.
+
 ## Supercharge Your Model with Liger Kernel
 
 ![Banner](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/docs/images/banner.GIF)
@@ -149,6 +152,7 @@ With one line of code, Liger Kernel can increase throughput by more than 20% and
 | [**Lightning Trainer**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/lightning) | Increase 15% throughput and reduce memory usage by 40% with LLaMA3-8B on MMLU dataset using 8 A100s with DeepSpeed ZeRO3 |
 | [**Medusa Multi-head LLM (Retraining Phase)**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/medusa) | Reduce memory usage by 80% with 5 LM heads and improve throughput by 40% using 8 A100s with FSDP |
 | [**Vision-Language Model SFT**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/huggingface/run_qwen2_vl.sh) | Finetune Qwen2-VL on image-text data using 4 A100s with FSDP |
+ | [**Liger ORPO Trainer**](https://github.com/linkedin/Liger-Kernel/blob/main/examples/alignment/run_orpo.py) | Align Llama 3.2 using Liger ORPO Trainer with FSDP with 50% memory reduction |
 
 ## Key Features
 
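The Post-Training losses named in the paragraph above (DPO, CPO, ORPO, SimPO, JSD) all score a chosen/rejected response pair. As an illustration only, here is a minimal plain-Python sketch of the DPO objective on summed sequence log-probabilities; the function name is hypothetical, and the actual kernels under `src/liger_kernel/chunked_loss/` fuse this computation with the LM head and process the batch in chunks:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, given summed log-probs of the
    chosen/rejected responses under the policy and a frozen reference model."""
    # Reward margin: beta * (log-ratio of chosen minus log-ratio of rejected).
    margin = beta * ((policy_chosen - ref_chosen) - (policy_rejected - ref_rejected))
    # -log sigmoid(margin): small when the policy prefers the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With no preference signal (all log-probs equal) the loss is log 2; it shrinks as the policy's margin for the chosen response over the reference grows.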
README.md

@@ -60,6 +60,7 @@
 <details>
 <summary>Latest News 🔥</summary>
 
+ - [2024/12/15] We release LinkedIn Engineering Blog - [Liger-Kernel: Empowering an open source ecosystem of Triton Kernels for Efficient LLM Training](https://www.linkedin.com/blog/engineering/open-source/liger-kernel-open-source-ecosystem-for-efficient-llm-training)
 - [2024/11/6] We release [v0.4.0](https://github.com/linkedin/Liger-Kernel/releases/tag/v0.4.0): Full AMD support, Tech Report, Modal CI, Llama-3.2-Vision!
 - [2024/10/21] We have released the tech report of Liger Kernel on Arxiv: https://arxiv.org/pdf/2410.10989
 - [2024/9/6] We release v0.2.1 ([X post](https://x.com/liger_kernel/status/1832168197002510649)). 2500+ Stars, 10+ New Contributors, 50+ PRs, 50k Downloads in two weeks!
@@ -71,6 +72,8 @@
 
 **Liger Kernel** is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU **training throughput by 20%** and reduces **memory usage by 60%**. We have implemented **Hugging Face Compatible** `RMSNorm`, `RoPE`, `SwiGLU`, `CrossEntropy`, `FusedLinearCrossEntropy`, and more to come. The kernel works out of the box with [Flash Attention](https://github.com/Dao-AILab/flash-attention), [PyTorch FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html), and [Microsoft DeepSpeed](https://github.com/microsoft/DeepSpeed). We welcome contributions from the community to gather the best kernels for LLM training.
 
+ We've also added optimized Post-Training kernels that deliver **up to 80% memory savings** for alignment and distillation tasks. We support losses like DPO, CPO, ORPO, SimPO, JSD, and many more.
+
 ## Supercharge Your Model with Liger Kernel
 
 ![Banner](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/docs/images/banner.GIF)
@@ -94,6 +97,7 @@ With one line of code, Liger Kernel can increase throughput by more than 20% and
 | [**Lightning Trainer**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/lightning) | Increase 15% throughput and reduce memory usage by 40% with LLaMA3-8B on MMLU dataset using 8 A100s with DeepSpeed ZeRO3 |
 | [**Medusa Multi-head LLM (Retraining Phase)**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/medusa) | Reduce memory usage by 80% with 5 LM heads and improve throughput by 40% using 8 A100s with FSDP |
 | [**Vision-Language Model SFT**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/huggingface/run_qwen2_vl.sh) | Finetune Qwen2-VL on image-text data using 4 A100s with FSDP |
+ | [**Liger ORPO Trainer**](https://github.com/linkedin/Liger-Kernel/blob/main/examples/alignment/run_orpo.py) | Align Llama 3.2 using Liger ORPO Trainer with FSDP with 50% memory reduction |
 
 ## Key Features
 
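The ORPO trainer added to the examples table above optimizes an odds-ratio preference term on top of the usual NLL loss. As a rough illustration, the scalar core of that term can be sketched in plain Python (hypothetical function names; the chunked kernel in `src/liger_kernel/chunked_loss/orpo_loss.py` fuses this with the linear layer and works on token logits):

```python
import math

def log_odds(p):
    # Log-odds of a probability p: log(p / (1 - p)).
    return math.log(p) - math.log1p(-p)

def orpo_odds_ratio_term(p_chosen, p_rejected, beta=0.1):
    # ORPO's preference penalty: -beta * log(sigmoid(log odds ratio)),
    # where the odds ratio compares the chosen vs. rejected response.
    log_or = log_odds(p_chosen) - log_odds(p_rejected)
    return -beta * math.log(1.0 / (1.0 + math.exp(-log_or)))
```

The penalty is largest when the model assigns the rejected response higher odds than the chosen one, and decays toward zero as the chosen response becomes clearly preferred.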
pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "liger_kernel_nightly"
-version = "0.4.2.dev20241210002150"
+version = "0.4.2.dev20241210030943"
 description = "Efficient Triton kernels for LLM Training"
 urls = { "Homepage" = "https://github.com/linkedin/Liger-Kernel" }
 readme = { file = "README.md", content-type = "text/markdown" }
src/liger_kernel_nightly.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: liger_kernel_nightly
-Version: 0.4.2.dev20241210002150
+Version: 0.4.2.dev20241210030943
 Summary: Efficient Triton kernels for LLM Training
 License: BSD 2-CLAUSE LICENSE
 Copyright 2024 LinkedIn Corporation
@@ -115,6 +115,7 @@ Requires-Dist: triton>=3.0.0; extra == "amd"
 <details>
 <summary>Latest News 🔥</summary>
 
+ - [2024/12/15] We release LinkedIn Engineering Blog - [Liger-Kernel: Empowering an open source ecosystem of Triton Kernels for Efficient LLM Training](https://www.linkedin.com/blog/engineering/open-source/liger-kernel-open-source-ecosystem-for-efficient-llm-training)
 - [2024/11/6] We release [v0.4.0](https://github.com/linkedin/Liger-Kernel/releases/tag/v0.4.0): Full AMD support, Tech Report, Modal CI, Llama-3.2-Vision!
 - [2024/10/21] We have released the tech report of Liger Kernel on Arxiv: https://arxiv.org/pdf/2410.10989
 - [2024/9/6] We release v0.2.1 ([X post](https://x.com/liger_kernel/status/1832168197002510649)). 2500+ Stars, 10+ New Contributors, 50+ PRs, 50k Downloads in two weeks!
@@ -126,6 +127,8 @@ Requires-Dist: triton>=3.0.0; extra == "amd"
 
 **Liger Kernel** is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU **training throughput by 20%** and reduces **memory usage by 60%**. We have implemented **Hugging Face Compatible** `RMSNorm`, `RoPE`, `SwiGLU`, `CrossEntropy`, `FusedLinearCrossEntropy`, and more to come. The kernel works out of the box with [Flash Attention](https://github.com/Dao-AILab/flash-attention), [PyTorch FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html), and [Microsoft DeepSpeed](https://github.com/microsoft/DeepSpeed). We welcome contributions from the community to gather the best kernels for LLM training.
 
+ We've also added optimized Post-Training kernels that deliver **up to 80% memory savings** for alignment and distillation tasks. We support losses like DPO, CPO, ORPO, SimPO, JSD, and many more.
+
 ## Supercharge Your Model with Liger Kernel
 
 ![Banner](https://raw.githubusercontent.com/linkedin/Liger-Kernel/main/docs/images/banner.GIF)
@@ -149,6 +152,7 @@ With one line of code, Liger Kernel can increase throughput by more than 20% and
 | [**Lightning Trainer**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/lightning) | Increase 15% throughput and reduce memory usage by 40% with LLaMA3-8B on MMLU dataset using 8 A100s with DeepSpeed ZeRO3 |
 | [**Medusa Multi-head LLM (Retraining Phase)**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/medusa) | Reduce memory usage by 80% with 5 LM heads and improve throughput by 40% using 8 A100s with FSDP |
 | [**Vision-Language Model SFT**](https://github.com/linkedin/Liger-Kernel/tree/main/examples/huggingface/run_qwen2_vl.sh) | Finetune Qwen2-VL on image-text data using 4 A100s with FSDP |
+ | [**Liger ORPO Trainer**](https://github.com/linkedin/Liger-Kernel/blob/main/examples/alignment/run_orpo.py) | Align Llama 3.2 using Liger ORPO Trainer with FSDP with 50% memory reduction |
 
 ## Key Features
 