langtune 0.0.2.tar.gz → 0.0.3.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of langtune might be problematic.

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: langtune
- Version: 0.0.2
+ Version: 0.0.3
  Summary: A package for finetuning text models.
  Author-email: Pritesh Raj <priteshraj41@gmail.com>
  License: MIT License
@@ -39,7 +39,7 @@ Requires-Dist: pyyaml
  Requires-Dist: scipy
  Dynamic: license-file

- # Langtune: Large Language Models (LLMs) with Efficient LoRA Fine-Tuning for Text
+ # Langtune: Efficient LoRA Fine-Tuning for Text LLMs

  <hr/>
  <p align="center">
@@ -60,8 +60,8 @@ Dynamic: license-file
  </p>

  <p align="center">
- <b>Langtune provides modular components for text models and LoRA-based fine-tuning.</b><br/>
- <span style="font-size:1.1em"><i>Adapt and fine-tune language models for a range of NLP tasks.</i></span>
+ <b>Langtune is a Python package for fine-tuning large language models on text data using LoRA.</b><br/>
+ <span style="font-size:1.1em"><i>Provides modular components for adapting language models to various NLP tasks.</i></span>
  </p>
  <hr/>

@@ -89,41 +89,40 @@ Dynamic: license-file
  - [Examples & Use Cases](#examples--use-cases)
  - [Extending the Framework](#extending-the-framework)
  - [Contributing](#contributing)
- - [FAQ](#faq)
+ - [License](#license)
  - [Citation](#citation)
  - [Acknowledgements](#acknowledgements)
- - [License](#license)

  ---

  ## Features
- - LoRA adapters for parameter-efficient fine-tuning of LLMs
+ - LoRA adapters for efficient fine-tuning
  - Modular transformer backbone
- - Model zoo for open-source language models
+ - Model zoo for language models
  - Configurable and extensible codebase
- - Checkpointing and resume support
+ - Checkpointing and resume
  - Mixed precision and distributed training
- - Built-in metrics and visualization tools
- - CLI for fine-tuning and evaluation
- - Extensible callbacks (early stopping, logging, etc.)
+ - Metrics and visualization tools
+ - CLI for training and evaluation
+ - Callback support (early stopping, logging, etc.)

  ---

  ## Showcase

- Langtune is a framework for building and fine-tuning large language models with LoRA support. It is suitable for tasks such as text classification, summarization, question answering, and other NLP applications.
+ Langtune is intended for building and fine-tuning large language models with LoRA. It can be used for text classification, summarization, question answering, and other NLP tasks.

  ---

  ## Getting Started

- Install with pip:
+ Install:

  ```bash
  pip install langtune
  ```

- Minimal example:
+ Example usage:

  ```python
  import torch
@@ -145,28 +144,28 @@ with torch.no_grad():
      print('Output shape:', out.shape)
  ```

- For more details, see the [Documentation](docs/index.md) and `src/langtune/cli/finetune.py`.
+ See the [Documentation](docs/index.md) and `src/langtune/cli/finetune.py` for more details.

  ---

  ## Supported Python Versions
- - Python 3.8+
+ - Python 3.8 or newer

  ---

  ## Why langtune?

- - Parameter-efficient fine-tuning with LoRA adapters
- - Modular transformer backbone for flexible model design
- - Unified interface for open-source language models
- - Designed for both research and production
- - Efficient memory usage for large models
+ - Fine-tuning with LoRA adapters
+ - Modular transformer design
+ - Unified interface for language models
+ - Suitable for research and production
+ - Efficient memory usage

  ---

  ## Architecture Overview

- Langtune uses a modular transformer backbone with LoRA adapters in attention and MLP layers. This allows adaptation of pre-trained models with fewer trainable parameters.
+ Langtune uses a transformer backbone with LoRA adapters in attention and MLP layers. This enables adaptation of pre-trained models with fewer trainable parameters.

  ### Model Data Flow

@@ -337,12 +336,14 @@ model.finetune(dataset, config_path="configs/custom_config.yaml")
  - For advanced usage, see `src/langtune/cli/finetune.py`.

  ## Contributing
- We welcome contributions. See the [Contributing Guide](CONTRIBUTING.md) for details.
+ Contributions are welcome. See the [Contributing Guide](CONTRIBUTING.md) for details.

- ## License & Citation
+ ## License

  This project is licensed under the MIT License. See [LICENSE](LICENSE) for details.

+ ## Citation
+
  If you use langtune in your research, please cite:

  ```bibtex
@@ -1,4 +1,4 @@
- # Langtune: Large Language Models (LLMs) with Efficient LoRA Fine-Tuning for Text
+ # Langtune: Efficient LoRA Fine-Tuning for Text LLMs

  <hr/>
  <p align="center">
@@ -19,8 +19,8 @@
  </p>

  <p align="center">
- <b>Langtune provides modular components for text models and LoRA-based fine-tuning.</b><br/>
- <span style="font-size:1.1em"><i>Adapt and fine-tune language models for a range of NLP tasks.</i></span>
+ <b>Langtune is a Python package for fine-tuning large language models on text data using LoRA.</b><br/>
+ <span style="font-size:1.1em"><i>Provides modular components for adapting language models to various NLP tasks.</i></span>
  </p>
  <hr/>

@@ -48,41 +48,40 @@
  - [Examples & Use Cases](#examples--use-cases)
  - [Extending the Framework](#extending-the-framework)
  - [Contributing](#contributing)
- - [FAQ](#faq)
+ - [License](#license)
  - [Citation](#citation)
  - [Acknowledgements](#acknowledgements)
- - [License](#license)

  ---

  ## Features
- - LoRA adapters for parameter-efficient fine-tuning of LLMs
+ - LoRA adapters for efficient fine-tuning
  - Modular transformer backbone
- - Model zoo for open-source language models
+ - Model zoo for language models
  - Configurable and extensible codebase
- - Checkpointing and resume support
+ - Checkpointing and resume
  - Mixed precision and distributed training
- - Built-in metrics and visualization tools
- - CLI for fine-tuning and evaluation
- - Extensible callbacks (early stopping, logging, etc.)
+ - Metrics and visualization tools
+ - CLI for training and evaluation
+ - Callback support (early stopping, logging, etc.)

  ---

  ## Showcase

- Langtune is a framework for building and fine-tuning large language models with LoRA support. It is suitable for tasks such as text classification, summarization, question answering, and other NLP applications.
+ Langtune is intended for building and fine-tuning large language models with LoRA. It can be used for text classification, summarization, question answering, and other NLP tasks.

  ---

  ## Getting Started

- Install with pip:
+ Install:

  ```bash
  pip install langtune
  ```

- Minimal example:
+ Example usage:

  ```python
  import torch
@@ -104,28 +103,28 @@ with torch.no_grad():
      print('Output shape:', out.shape)
  ```

- For more details, see the [Documentation](docs/index.md) and `src/langtune/cli/finetune.py`.
+ See the [Documentation](docs/index.md) and `src/langtune/cli/finetune.py` for more details.

  ---

  ## Supported Python Versions
- - Python 3.8+
+ - Python 3.8 or newer

  ---

  ## Why langtune?

- - Parameter-efficient fine-tuning with LoRA adapters
- - Modular transformer backbone for flexible model design
- - Unified interface for open-source language models
- - Designed for both research and production
- - Efficient memory usage for large models
+ - Fine-tuning with LoRA adapters
+ - Modular transformer design
+ - Unified interface for language models
+ - Suitable for research and production
+ - Efficient memory usage

  ---

  ## Architecture Overview

- Langtune uses a modular transformer backbone with LoRA adapters in attention and MLP layers. This allows adaptation of pre-trained models with fewer trainable parameters.
+ Langtune uses a transformer backbone with LoRA adapters in attention and MLP layers. This enables adaptation of pre-trained models with fewer trainable parameters.

  ### Model Data Flow

@@ -296,12 +295,14 @@ model.finetune(dataset, config_path="configs/custom_config.yaml")
  - For advanced usage, see `src/langtune/cli/finetune.py`.

  ## Contributing
- We welcome contributions. See the [Contributing Guide](CONTRIBUTING.md) for details.
+ Contributions are welcome. See the [Contributing Guide](CONTRIBUTING.md) for details.

- ## License & Citation
+ ## License

  This project is licensed under the MIT License. See [LICENSE](LICENSE) for details.

+ ## Citation
+
  If you use langtune in your research, please cite:

  ```bibtex
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "langtune"
- version = "0.0.2"
+ version = "0.0.3"
  description = "A package for finetuning text models."
  authors = [
      { name = "Pritesh Raj", email = "priteshraj41@gmail.com" }
File without changes
File without changes
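The README changes diffed above describe Langtune as placing LoRA adapters in the attention and MLP layers of a transformer backbone, so that pre-trained models can be adapted while training far fewer parameters. As background for readers reviewing this release, the snippet below is a minimal, generic PyTorch sketch of that LoRA idea; it is not taken from the langtune codebase, and the `LoRALinear` class, its parameter names, and the chosen `r`/`alpha` values are illustrative assumptions only.

```python
# Generic LoRA sketch (illustrative only; not langtune's actual implementation).
# A frozen pre-trained linear layer is augmented with a trainable low-rank
# update B @ A, so only r * (in_features + out_features) parameters are trained.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r  # common LoRA scaling convention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank update; lora_b starts at zero, so the
        # wrapped layer initially behaves exactly like the original one.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), r=8)
    out = layer(torch.randn(2, 16, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # torch.Size([2, 16, 512]) 8192
```

With r=8 on a 512x512 projection, roughly 8k parameters train instead of about 262k for the full weight matrix, which is the parameter saving the README's "fewer trainable parameters" claim refers to.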