warpgbm 0.1.13.tar.gz → 0.1.14.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: warpgbm
- Version: 0.1.13
+ Version: 0.1.14
  Summary: A fast GPU-accelerated Gradient Boosted Decision Tree library with PyTorch + CUDA
  License: GNU GENERAL PUBLIC LICENSE
  Version 3, 29 June 2007
@@ -700,7 +700,6 @@ WarpGBM is a high-performance, GPU-accelerated Gradient Boosted Decision Tree (G
  - GPU-accelerated training and histogram construction using custom CUDA kernels
  - Drop-in scikit-learn style interface
  - Supports pre-binned data or automatic quantile binning
- - Fully differentiable prediction path
  - Simple install with `pip`

  ---
@@ -713,7 +712,7 @@ In our initial tests on an NVIDIA 3090 (local) and A100 (Google Colab Pro), Warp

  ## Installation

- ### 🔧 Recommended (GitHub, always latest):
+ ### Recommended (GitHub, always latest):

  ```bash
  pip install git+https://github.com/jefferythewind/warpgbm.git
@@ -721,7 +720,7 @@ pip install git+https://github.com/jefferythewind/warpgbm.git

  This installs the latest version directly from GitHub and compiles CUDA extensions on your machine using your **local PyTorch and CUDA setup**. It's the most reliable method for ensuring compatibility and staying up to date with the latest features.

- ### 📦 Alternatively (PyPI, stable releases):
+ ### Alternatively (PyPI, stable releases):

  ```bash
  pip install warpgbm
@@ -729,7 +728,7 @@ pip install warpgbm

  This installs from PyPI and also compiles CUDA code locally during installation. This method works well **if your environment already has PyTorch with GPU support** installed and configured.

- > 💡 **Tip:**\
+ > **Tip:**\
  > If you encounter an error related to mismatched or missing CUDA versions, try installing with the following flag:
  >
  > ```bash
@@ -737,7 +736,7 @@ This installs from PyPI and also compiles CUDA code locally during installation.
  > ```

  Before either method, make sure you’ve installed PyTorch with GPU support:\
- 👉 [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
+ [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)

  ---

@@ -774,7 +773,7 @@ print(f"LightGBM: corr = {np.corrcoef(lgb_preds, y)[0,1]:.4f}, time = {lgb_tim
  print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, y)[0,1]:.4f}, time = {wgbm_time:.2f}s")
  ```

- **🧪 Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**
+ **Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**

  ```
  LightGBM: corr = 0.8742, time = 37.33s
@@ -824,6 +823,23 @@ print(f"LightGBM: corr = {np.corrcoef(lgb_preds, Y_np)[0,1]:.4f}, time = {lgb_
  print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, Y_np)[0,1]:.4f}, time = {wgbm_time:.2f}s")
  ```

+ **Results (Google Colab Pro, A100 GPU):**
+
+ ```
+ LightGBM: corr = 0.0703, time = 643.88s
+ WarpGBM: corr = 0.0660, time = 49.16s
+ ```
+
+ ---
+
+ ### Run it live in Colab
+
+ You can try WarpGBM in a live Colab notebook using real pre-binned Numerai tournament data:
+
+ [Open in Colab](https://colab.research.google.com/drive/10mKSjs9UvmMgM5_lOXAylq5LUQAnNSi7?usp=sharing)
+
+ No installation required — just press **"Open in Playground"**, then **Run All**!
+
  ---

  ## Documentation
@@ -12,7 +12,6 @@ WarpGBM is a high-performance, GPU-accelerated Gradient Boosted Decision Tree (G
  - GPU-accelerated training and histogram construction using custom CUDA kernels
  - Drop-in scikit-learn style interface
  - Supports pre-binned data or automatic quantile binning
- - Fully differentiable prediction path
  - Simple install with `pip`

  ---
@@ -25,7 +24,7 @@ In our initial tests on an NVIDIA 3090 (local) and A100 (Google Colab Pro), Warp

  ## Installation

- ### 🔧 Recommended (GitHub, always latest):
+ ### Recommended (GitHub, always latest):

  ```bash
  pip install git+https://github.com/jefferythewind/warpgbm.git
@@ -33,7 +32,7 @@ pip install git+https://github.com/jefferythewind/warpgbm.git

  This installs the latest version directly from GitHub and compiles CUDA extensions on your machine using your **local PyTorch and CUDA setup**. It's the most reliable method for ensuring compatibility and staying up to date with the latest features.

- ### 📦 Alternatively (PyPI, stable releases):
+ ### Alternatively (PyPI, stable releases):

  ```bash
  pip install warpgbm
@@ -41,7 +40,7 @@ pip install warpgbm

  This installs from PyPI and also compiles CUDA code locally during installation. This method works well **if your environment already has PyTorch with GPU support** installed and configured.

- > 💡 **Tip:**\
+ > **Tip:**\
  > If you encounter an error related to mismatched or missing CUDA versions, try installing with the following flag:
  >
  > ```bash
@@ -49,7 +48,7 @@ This installs from PyPI and also compiles CUDA code locally during installation.
  > ```

  Before either method, make sure you’ve installed PyTorch with GPU support:\
- 👉 [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
+ [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)

  ---

@@ -86,7 +85,7 @@ print(f"LightGBM: corr = {np.corrcoef(lgb_preds, y)[0,1]:.4f}, time = {lgb_tim
  print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, y)[0,1]:.4f}, time = {wgbm_time:.2f}s")
  ```

- **🧪 Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**
+ **Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**

  ```
  LightGBM: corr = 0.8742, time = 37.33s
@@ -136,6 +135,23 @@ print(f"LightGBM: corr = {np.corrcoef(lgb_preds, Y_np)[0,1]:.4f}, time = {lgb_
  print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, Y_np)[0,1]:.4f}, time = {wgbm_time:.2f}s")
  ```

+ **Results (Google Colab Pro, A100 GPU):**
+
+ ```
+ LightGBM: corr = 0.0703, time = 643.88s
+ WarpGBM: corr = 0.0660, time = 49.16s
+ ```
+
+ ---
+
+ ### Run it live in Colab
+
+ You can try WarpGBM in a live Colab notebook using real pre-binned Numerai tournament data:
+
+ [Open in Colab](https://colab.research.google.com/drive/10mKSjs9UvmMgM5_lOXAylq5LUQAnNSi7?usp=sharing)
+
+ No installation required — just press **"Open in Playground"**, then **Run All**!
+
  ---

  ## Documentation
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "warpgbm"
- version = "0.1.13"
+ version = "0.1.14"
  description = "A fast GPU-accelerated Gradient Boosted Decision Tree library with PyTorch + CUDA"
  readme = "README.md"
  requires-python = ">=3.8"
@@ -0,0 +1 @@
+ 0.1.14
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: warpgbm
- Version: 0.1.13
+ Version: 0.1.14
  Summary: A fast GPU-accelerated Gradient Boosted Decision Tree library with PyTorch + CUDA
  License: GNU GENERAL PUBLIC LICENSE
  Version 3, 29 June 2007
@@ -700,7 +700,6 @@ WarpGBM is a high-performance, GPU-accelerated Gradient Boosted Decision Tree (G
  - GPU-accelerated training and histogram construction using custom CUDA kernels
  - Drop-in scikit-learn style interface
  - Supports pre-binned data or automatic quantile binning
- - Fully differentiable prediction path
  - Simple install with `pip`

  ---
@@ -713,7 +712,7 @@ In our initial tests on an NVIDIA 3090 (local) and A100 (Google Colab Pro), Warp

  ## Installation

- ### 🔧 Recommended (GitHub, always latest):
+ ### Recommended (GitHub, always latest):

  ```bash
  pip install git+https://github.com/jefferythewind/warpgbm.git
@@ -721,7 +720,7 @@ pip install git+https://github.com/jefferythewind/warpgbm.git

  This installs the latest version directly from GitHub and compiles CUDA extensions on your machine using your **local PyTorch and CUDA setup**. It's the most reliable method for ensuring compatibility and staying up to date with the latest features.

- ### 📦 Alternatively (PyPI, stable releases):
+ ### Alternatively (PyPI, stable releases):

  ```bash
  pip install warpgbm
@@ -729,7 +728,7 @@ pip install warpgbm

  This installs from PyPI and also compiles CUDA code locally during installation. This method works well **if your environment already has PyTorch with GPU support** installed and configured.

- > 💡 **Tip:**\
+ > **Tip:**\
  > If you encounter an error related to mismatched or missing CUDA versions, try installing with the following flag:
  >
  > ```bash
@@ -737,7 +736,7 @@ This installs from PyPI and also compiles CUDA code locally during installation.
  > ```

  Before either method, make sure you’ve installed PyTorch with GPU support:\
- 👉 [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
+ [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)

  ---

@@ -774,7 +773,7 @@ print(f"LightGBM: corr = {np.corrcoef(lgb_preds, y)[0,1]:.4f}, time = {lgb_tim
  print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, y)[0,1]:.4f}, time = {wgbm_time:.2f}s")
  ```

- **🧪 Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**
+ **Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**

  ```
  LightGBM: corr = 0.8742, time = 37.33s
@@ -824,6 +823,23 @@ print(f"LightGBM: corr = {np.corrcoef(lgb_preds, Y_np)[0,1]:.4f}, time = {lgb_
  print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, Y_np)[0,1]:.4f}, time = {wgbm_time:.2f}s")
  ```

+ **Results (Google Colab Pro, A100 GPU):**
+
+ ```
+ LightGBM: corr = 0.0703, time = 643.88s
+ WarpGBM: corr = 0.0660, time = 49.16s
+ ```
+
+ ---
+
+ ### Run it live in Colab
+
+ You can try WarpGBM in a live Colab notebook using real pre-binned Numerai tournament data:
+
+ [Open in Colab](https://colab.research.google.com/drive/10mKSjs9UvmMgM5_lOXAylq5LUQAnNSi7?usp=sharing)
+
+ No installation required — just press **"Open in Playground"**, then **Run All**!
+
  ---

  ## Documentation
@@ -1 +0,0 @@
- 0.1.13
7 files without changes
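The only functional change in this release is the version bump and README edits shown above. If you want to confirm which version you actually have installed after upgrading, a minimal check using only the standard library (it reads pip's packaging metadata, so it makes no assumptions about warpgbm's internal module layout):

```python
# Query the installed distribution's version from packaging metadata.
# importlib.metadata is in the standard library since Python 3.8.
from importlib.metadata import version, PackageNotFoundError

try:
    # Prints "0.1.14" once this release is installed.
    print(version("warpgbm"))
except PackageNotFoundError:
    print("warpgbm is not installed")
```

This works for any pip-installed package, whether it came from PyPI or from the GitHub install path described in the README.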