langtune-0.1.0.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release.
This version of langtune might be problematic.
- langtune-0.1.0/LICENSE +21 -0
- langtune-0.1.0/PKG-INFO +459 -0
- langtune-0.1.0/README.md +418 -0
- langtune-0.1.0/pyproject.toml +36 -0
- langtune-0.1.0/setup.cfg +4 -0
- langtune-0.1.0/src/langtune/__init__.py +0 -0
- langtune-0.1.0/src/langtune.egg-info/PKG-INFO +459 -0
- langtune-0.1.0/src/langtune.egg-info/SOURCES.txt +10 -0
- langtune-0.1.0/src/langtune.egg-info/dependency_links.txt +1 -0
- langtune-0.1.0/src/langtune.egg-info/entry_points.txt +2 -0
- langtune-0.1.0/src/langtune.egg-info/requires.txt +5 -0
- langtune-0.1.0/src/langtune.egg-info/top_level.txt +1 -0
langtune-0.1.0/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Pritesh Raj

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
langtune-0.1.0/PKG-INFO
ADDED
@@ -0,0 +1,459 @@
Metadata-Version: 2.4
Name: langtune
Version: 0.1.0
Summary: A package for finetuning text models.
Author-email: Pritesh Raj <priteshraj41@gmail.com>
License: MIT License

Copyright (c) 2025 Pritesh Raj

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Project-URL: Homepage, https://github.com/langtrain-ai/langtune
Project-URL: Documentation, https://github.com/langtrain-ai/langtune/tree/main/docs
Project-URL: Source, https://github.com/langtrain-ai/langtune
Project-URL: Tracker, https://github.com/langtrain-ai/langtune/issues
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=1.10
Requires-Dist: numpy
Requires-Dist: tqdm
Requires-Dist: pyyaml
Requires-Dist: scipy
Dynamic: license-file

# langtune: Large Language Models (LLMs) with Efficient LoRA Fine-Tuning for Text
<hr/>
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/langtrain-ai/langtrain/main/static/langtune-use-dark.png">
    <img alt="Langtune Logo" src="https://raw.githubusercontent.com/langtrain-ai/langtrain/main/static/langtune-white.png" width="full" />
  </picture>
</p>

<!-- Badges -->
<p align="center">
  <a href="https://pypi.org/project/langtune/"><img src="https://img.shields.io/pypi/v/langtune.svg" alt="PyPI version"></a>
  <a href="https://pepy.tech/project/langtune"><img src="https://pepy.tech/badge/langtune" alt="Downloads"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License"></a>
  <img src="https://img.shields.io/badge/coverage-90%25-brightgreen" alt="Coverage"/>
  <img src="https://img.shields.io/badge/python-3.8%2B-blue" alt="Python Version"/>
  <a href="https://github.com/psf/black"><img src="https://img.shields.io/badge/code%20style-black-000000.svg" alt="Code style: black"></a>
</p>

<p align="center">
  <b>Modular LLMs (Large Language Models for Text) with Efficient LoRA Fine-Tuning</b><br/>
  <span style="font-size:1.1em"><i>Build, adapt, and fine-tune text models with ease and efficiency.</i></span>
</p>
<hr/>
## Quick Links
- [Documentation](docs/index.md)
- [Tutorials](docs/tutorials/index.md)
- [Changelog](CHANGELOG.md)
- [Contributing Guide](CONTRIBUTING.md)
- [Roadmap](ROADMAP.md)

---
## Table of Contents
- [Features](#features)
- [Showcase](#showcase)
- [Getting Started](#getting-started)
- [Supported Python Versions](#supported-python-versions)
- [Why langtune?](#why-langtune)
- [Architecture Overview](#architecture-overview)
- [Core Modules](#core-modules)
- [Performance & Efficiency](#performance--efficiency)
- [Advanced Configuration](#advanced-configuration)
- [Documentation & Resources](#documentation--resources)
- [Testing & Quality](#testing--quality)
- [Examples & Use Cases](#examples--use-cases)
- [Extending the Framework](#extending-the-framework)
- [Contributing](#contributing)
- [License & Citation](#license--citation)
- [Acknowledgements](#acknowledgements)

---
## Features
- **Plug-and-play LoRA adapters** for parameter-efficient fine-tuning of LLMs
- **Modular Transformer backbone** with customizable components
- **Unified model zoo** for open-source language models
- **Easy configuration** and extensible codebase
- **Production ready** with comprehensive testing and documentation
- **Memory-efficient training** with gradient checkpointing support
- **Built-in metrics** and visualization tools
- **Modular training loop** with LoRA support
- **Unified CLI** for fine-tuning and evaluation
- **Extensible callbacks** (early stopping, logging, etc.)
- **Checkpointing and resume**
- **Mixed precision training**
- **Easy dataset and model extension**
- **Ready for distributed/multi-GPU training**

---

## Showcase

**langtune** is a modular, research-friendly framework for building and fine-tuning Large Language Models (LLMs) for text with efficient Low-Rank Adaptation (LoRA) support. Whether you're working on text classification, summarization, question answering, or custom NLP tasks, langtune provides the tools you need for parameter-efficient model adaptation.

---
## Getting Started

Here's a minimal example to get you up and running:

```bash
pip install langtune
```

```python
import torch
from langtune.models.llm import LanguageModel
from langtune.utils.config import default_config

# Create model
model = LanguageModel(
    vocab_size=default_config['vocab_size'],
    embed_dim=default_config['embed_dim'],
    num_layers=default_config['num_layers'],
    num_heads=default_config['num_heads'],
    mlp_ratio=default_config['mlp_ratio'],
    lora_config=default_config['lora'],
)

# Forward pass on a batch of random token ids
input_ids = torch.randint(0, 1000, (2, 128))
with torch.no_grad():
    out = model(input_ids)
print('Output shape:', out.shape)
```

For advanced usage, CLI details, and more, see the [Documentation](docs/index.md) and `src/langtune/cli/finetune.py`.

---
## Supported Python Versions
- Python 3.8+

---

## Why langtune?

- **Parameter-efficient fine-tuning**: Plug-and-play LoRA adapters for fast, memory-efficient adaptation with minimal computational overhead
- **Modular Transformer backbone**: Swap or extend components like embedding, attention, or MLP heads with ease
- **Unified model zoo**: Access and experiment with open-source language models through a consistent interface
- **Research & production ready**: Clean, extensible codebase with comprehensive configuration options and robust utilities
- **Memory efficient**: Fine-tune large models on consumer hardware by updating only a small fraction of parameters

---
## Architecture Overview

langtune is built around a modular Transformer backbone, with LoRA adapters strategically injected into attention and MLP layers for efficient fine-tuning. This approach allows you to adapt large pre-trained models using only a fraction of the original parameters.

### Model Data Flow

```mermaid
---
config:
  layout: dagre
---
flowchart TD
 subgraph LoRA_Adapters["LoRA Adapters in Attention and MLP"]
    LA1(["LoRA Adapter 1"])
    LA2(["LoRA Adapter 2"])
    LA3(["LoRA Adapter N"])
 end
    A(["Input Tokens"]) --> B(["Embedding Layer"])
    B --> C(["Positional Encoding"])
    C --> D1(["Encoder Layer 1"])
    D1 --> D2(["Encoder Layer 2"])
    D2 --> D3(["Encoder Layer N"])
    D3 --> E(["LayerNorm"])
    E --> F(["MLP Head"])
    F --> G(["Output Logits"])
    LA1 -.-> D1
    LA2 -.-> D2
    LA3 -.-> D3
    LA1:::loraStyle
    LA2:::loraStyle
    LA3:::loraStyle
    classDef loraStyle fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
```

### Architecture Components

**Legend:**
- **Solid arrows**: Main data flow through the Transformer
- **Dashed arrows**: LoRA adapter injection points in encoder layers
- **Blue boxes**: LoRA adapters for parameter-efficient fine-tuning

**Data Flow Steps:**
1. **Input Tokens**: Tokenized text data ready for processing
2. **Embedding Layer**: Tokens mapped to dense vectors
3. **Positional Encoding**: Learnable or fixed position embeddings added
4. **Transformer Encoder Stack**: Multi-layer transformer with self-attention and MLP blocks
   - **LoRA Integration**: Low-rank adapters injected into attention and MLP layers (sketched below)
   - **Efficient Updates**: Only LoRA parameters updated during fine-tuning
5. **LayerNorm**: Final normalization of encoder outputs
6. **MLP Head**: Task-specific classification or regression head
7. **Output**: Final predictions (class probabilities, regression values, etc.)
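
The dashed arrows in the diagram correspond to low-rank adapters added alongside frozen linear layers. The sketch below is a minimal, framework-agnostic illustration of that idea; the class and parameter names are illustrative only and are not langtune's `LoRALinear` API.

```python
import torch
import torch.nn as nn

class LowRankAdapterLinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 16, alpha: int = 32):
        super().__init__()
        # Pre-trained weights stay frozen; only the two LoRA factors are trained.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)  # down-projection A
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))        # up-projection B (zero-init)
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

layer = LowRankAdapterLinear(768, 768, rank=16, alpha=32)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 24576 -- only the two low-rank factors are trainable
```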

---

## Core Modules

| Module | Description | Key Features |
|--------|-------------|--------------|
| **Embedding** | Token embedding and positional encoding | • Configurable vocab size<br>• Learnable/fixed position embeddings |
| **TransformerEncoder** | Multi-layer transformer backbone | • Self-attention mechanisms<br>• LoRA adapter integration<br>• Gradient checkpointing support |
| **LoRALinear** | Low-rank adaptation layers | • Configurable rank and scaling<br>• Memory-efficient updates<br>• Easy enable/disable functionality |
| **MLPHead** | Output projection layer | • Multi-class classification<br>• Regression support<br>• Dropout regularization |
| **Config System** | Centralized configuration management | • YAML/JSON config files<br>• Command-line overrides<br>• Validation and defaults |
| **Data Utils** | Preprocessing and augmentation | • Built-in tokenization<br>• Custom dataset loaders<br>• Efficient data pipelines |

---

## Performance & Efficiency

### LoRA Benefits

| Metric | Full Fine-tuning | LoRA Fine-tuning | Improvement |
|--------|------------------|------------------|-------------|
| **Trainable Parameters** | 125M | 3.2M | **97% reduction** |
| **Memory Usage** | 16GB | 5GB | **69% reduction** |
| **Training Time** | 6 hours | 2 hours | **67% faster** |
| **Storage per Task** | 500MB | 12MB | **98% smaller** |

*Benchmarks on Transformer-Base with WikiText-103, RTX 3090*
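
The reduction in trainable parameters follows directly from the factorization: instead of updating a full d_out × d_in weight matrix, LoRA trains two factors of rank r, so the per-layer count drops from d_in·d_out to r·(d_in + d_out). A quick back-of-the-envelope check, using illustrative dimensions rather than langtune's actual defaults:

```python
# Trainable parameters for one adapted linear layer (illustrative dimensions).
d_in, d_out, rank = 768, 768, 16

full_ft = d_in * d_out           # every weight is trainable
lora_ft = rank * (d_in + d_out)  # only the low-rank factors A and B are trainable

print(f"full fine-tuning : {full_ft:,}")                  # 589,824
print(f"LoRA (rank {rank}) : {lora_ft:,}")                # 24,576
print(f"reduction        : {1 - lora_ft / full_ft:.1%}")  # 95.8%
```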

### Supported Model Sizes

- **Transformer-Tiny**: 7M parameters, perfect for experimentation
- **Transformer-Small**: 30M parameters, good balance of performance and efficiency
- **Transformer-Base**: 125M parameters, strong performance across tasks
- **Transformer-Large**: 355M parameters, state-of-the-art results

---

## Advanced Configuration

### LoRA Configuration

```python
lora_config = {
    "rank": 16,             # Low-rank dimension
    "alpha": 32,            # Scaling factor
    "dropout": 0.1,         # Dropout rate
    "target_modules": [     # Modules to adapt
        "attention.qkv",
        "attention.proj",
        "mlp.fc1",
        "mlp.fc2"
    ],
    "merge_weights": False  # Whether to merge during inference
}
```
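
Assuming the `LanguageModel` constructor shown in Getting Started, a dictionary like this is passed through the `lora_config` argument. The dimensions below reuse the values from the training configuration that follows; `mlp_ratio=4.0` is an assumed value, not a documented default:

```python
from langtune.models.llm import LanguageModel

model = LanguageModel(
    vocab_size=50257,
    embed_dim=768,
    num_layers=12,
    num_heads=12,
    mlp_ratio=4.0,            # assumption for illustration
    lora_config=lora_config,  # the dict defined above
)
```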

### Training Configuration

```yaml
# config.yaml
model:
  name: "transformer_base"
  vocab_size: 50257
  embed_dim: 768
  num_layers: 12
  num_heads: 12

training:
  epochs: 10
  batch_size: 32
  learning_rate: 1e-4
  weight_decay: 0.01
  warmup_steps: 1000

lora:
  rank: 16
  alpha: 32
  dropout: 0.1
```
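
Because `pyyaml` is a declared dependency, a file in this shape can be read back as a plain dict. How langtune itself consumes the file is handled by the CLI (`src/langtune/cli/finetune.py`); the snippet below is only a generic sketch:

```python
import yaml

# Generic sketch: load the YAML shown above into nested dicts.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

# Note: PyYAML parses the bare "1e-4" as a string, so cast defensively
# (or write "1.0e-4" in the file).
lr = float(config["training"]["learning_rate"])
print(config["model"]["embed_dim"], lr)  # 768 0.0001
```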

---

## Documentation & Resources

- [Complete API Reference](docs/api/index.md)
- [Tutorials and Examples](docs/tutorials/index.md)
- [Research Papers](#research-papers)
- [Best Practices Guide](docs/best_practices.md)
- [Troubleshooting](docs/troubleshooting.md)

### Research Papers
- [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)
- [Attention Is All You Need](https://arxiv.org/abs/1706.03762)
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805)

---

## Testing & Quality

Run the comprehensive test suite:

```bash
# Unit tests
pytest tests/unit/

# Integration tests
pytest tests/integration/

# Performance benchmarks
pytest tests/benchmarks/

# All tests with coverage
pytest tests/ --cov=langtune --cov-report=html
```

### Code Quality Tools

```bash
# Linting
flake8 src/
black src/ --check

# Type checking
mypy src/

# Security scanning
bandit -r src/
```

---
## Examples & Use Cases

### Text Classification
```python
from langtune import LanguageModel
from langtune.datasets import TextClassificationDataset

# Load pre-trained model
model = LanguageModel.from_pretrained("transformer_base")

# Fine-tune on a custom dataset
dataset = TextClassificationDataset(train=True, tokenizer=model.tokenizer)
model.finetune(dataset, epochs=10, lora_rank=16)
```

### Custom Dataset
```python
from langtune.datasets import CustomTextDataset

# Your custom dataset
dataset = CustomTextDataset(
    file_path="/path/to/dataset.txt",
    split="train",
    tokenizer=model.tokenizer
)

# Fine-tune with a custom configuration
model.finetune(
    dataset,
    config_path="configs/custom_config.yaml"
)
```

---

## Extending the Framework
- Add new datasets in `src/langtune/data/datasets.py` (see the sketch after this list)
- Add new callbacks in `src/langtune/callbacks/`
- Add new models in `src/langtune/models/`
- Add new CLI tools in `src/langtune/cli/`
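
For the first bullet above, a plain PyTorch `Dataset` is usually a sufficient starting point. The class below is a hypothetical sketch: the class name, the JSONL layout, and the way the tokenizer is called are assumptions for illustration, not part of langtune's API.

```python
import json
from torch.utils.data import Dataset

class JsonlTextDataset(Dataset):
    """Hypothetical example: one JSON object per line with a "text" field."""

    def __init__(self, file_path: str, tokenizer, max_length: int = 128):
        with open(file_path) as f:
            self.records = [json.loads(line) for line in f if line.strip()]
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self) -> int:
        return len(self.records)

    def __getitem__(self, idx: int):
        # Adapt this call to langtune's tokenizer interface; the signature here is assumed.
        return self.tokenizer(self.records[idx]["text"], max_length=self.max_length)
```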

## Documentation
- See code comments and docstrings for details on each module.
- For advanced usage, see the `src/langtune/cli/finetune.py` script.

## Contributing
We welcome contributions from the community! Here's how you can get involved:

### Ways to Contribute
- **Report bugs** by opening issues with detailed reproduction steps
- **Suggest features** through feature requests and discussions
- **Improve documentation** with examples, tutorials, and API docs
- **Submit pull requests** for bug fixes and new features
- **Add tests** to improve code coverage and reliability

### Development Setup
```bash
# Clone and set up the development environment
git clone https://github.com/langtrain-ai/langtune.git
cd langtune
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run tests
pytest tests/
```

### Community Resources
- [GitHub Discussions](https://github.com/langtrain-ai/langtune/discussions) - Ask questions and share ideas
- [Issue Tracker](https://github.com/langtrain-ai/langtune/issues) - Report bugs and request features
- [Contributing Guide](CONTRIBUTING.md) - Detailed contribution guidelines
- [Roadmap](ROADMAP.md) - See what's planned for future releases

## License & Citation

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

### Citation

If you use langtune in your research, please cite:

```bibtex
@software{langtune2025,
  author = {Pritesh Raj},
  title = {langtune: LLMs with Efficient LoRA Fine-Tuning},
  url = {https://github.com/langtrain-ai/langtune},
  year = {2025},
  version = {0.1.0}
}
```

## Acknowledgements

We thank the following projects and communities:

- [PyTorch](https://pytorch.org/) - Deep learning framework
- [HuggingFace](https://huggingface.co/) - Transformers and model hub
- [PEFT](https://github.com/huggingface/peft) - Parameter-efficient fine-tuning methods

<p align="center">
  <b>Made in India 🇮🇳 with ❤️ by the langtune team</b><br/>
  <i>Star ⭐ this repo if you find it useful!</i>
</p>