aibt-fl 1.0.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,247 @@
+ Metadata-Version: 2.4
+ Name: aibt-fl
+ Version: 1.0.0
+ Summary: AIBT: Adversarial Information Bottleneck Training for Privacy-Preserving Federated Learning
+ Author-email: AIBT Research Team <aibt@example.com>
+ Maintainer-email: AIBT Research Team <aibt@example.com>
+ License: MIT
+ Project-URL: Homepage, https://github.com/aibt/aibt
+ Project-URL: Documentation, https://aibt.readthedocs.io
+ Project-URL: Repository, https://github.com/aibt/aibt
+ Project-URL: Issues, https://github.com/aibt/aibt/issues
+ Project-URL: Changelog, https://github.com/aibt/aibt/blob/main/CHANGELOG.md
+ Keywords: federated-learning,privacy-preserving,machine-learning,deep-learning,adversarial-training,information-bottleneck,pytorch,neural-networks,differential-privacy,secure-aggregation
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: Intended Audience :: Science/Research
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Security :: Cryptography
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: torch>=1.9.0
+ Requires-Dist: numpy>=1.19.0
+ Requires-Dist: scikit-learn>=0.24.0
+ Provides-Extra: dev
+ Requires-Dist: pytest>=6.0; extra == "dev"
+ Requires-Dist: pytest-cov>=2.0; extra == "dev"
+ Requires-Dist: black>=22.0; extra == "dev"
+ Requires-Dist: isort>=5.0; extra == "dev"
+ Requires-Dist: flake8>=4.0; extra == "dev"
+ Provides-Extra: docs
+ Requires-Dist: sphinx>=4.0; extra == "docs"
+ Requires-Dist: sphinx-rtd-theme>=1.0; extra == "docs"
+ Dynamic: license-file
+
+ # AIBT: Adversarial Information Bottleneck Training
+
+ [![PyPI version](https://badge.fury.io/py/aibt-fl.svg)](https://badge.fury.io/py/aibt-fl)
+ [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
+ [![PyTorch](https://img.shields.io/badge/PyTorch-1.9+-red.svg)](https://pytorch.org/)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+ > Privacy-preserving federated learning with information-theoretic privacy guarantees
+
+ ## Overview
+
+ **AIBT** (Adversarial Information Bottleneck Training) is a privacy-preserving federated learning framework that combines information bottleneck theory with adversarial training to achieve strong privacy guarantees while maintaining high model utility.
+
+ ### Key Features
+
+ - 🔒 **Privacy-Preserving**: Combines the Information Bottleneck (IB) principle with adversarial training
+ - 🌐 **Federated Learning**: Distributed training with FedAvg aggregation
+ - 🛡️ **Attack Resistant**: Defends against membership and attribute inference attacks
+ - 🚀 **Easy to Use**: Simple API for training and evaluation
+ - 📊 **Built-in Metrics**: Privacy attack evaluation included
+
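The FedAvg aggregation mentioned above combines client updates as a sample-size-weighted average of their parameters. A minimal framework-free sketch (the `fedavg` helper and the list-of-dicts layout are ours for illustration, not part of the package API):

```python
def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameter dicts, weighted by local dataset size."""
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        # Each parameter is the size-weighted mean of the clients' copies.
        averaged[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return averaged

# Two clients with scalar "parameters"; client 0 holds three times the data.
clients = [{"w": 1.0, "b": 0.0}, {"w": 5.0, "b": 4.0}]
print(fedavg(clients, [300, 100]))  # → {'w': 2.0, 'b': 1.0}
```

In the package itself this averaging presumably operates on `torch` state dicts rather than plain floats, but the weighting scheme is the same.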
+ ## Installation
+
+ ```bash
+ pip install aibt-fl
+ ```
+
+ ### From Source
+
+ ```bash
+ git clone https://github.com/aibt/aibt.git
+ cd aibt/aibt_package
+ pip install -e .
+ ```
+
+ ## Quick Start
+
+ ```python
+ import torch
+ from aibt import AIBTFL, AIBTModel, create_aibt_model, evaluate_privacy
+
+ # Create an AIBT model for your data
+ model = create_aibt_model(
+     input_dim=13,             # Number of input features
+     num_classes=2,            # Number of output classes
+     latent_dim=64,            # Latent space dimension
+     num_sensitive_classes=2   # Number of sensitive attribute classes
+ )
+
+ # Initialize AIBT Federated Learning
+ aibt = AIBTFL(
+     model=model,
+     num_clients=10,
+     device="cpu",
+     learning_rate=0.001,
+     lambda_kl=0.01,   # KL divergence weight (Information Bottleneck)
+     lambda_adv=1.0,   # Adversarial loss weight
+ )
+
+ # Set up clients with their data (client_data, client_sensitive, and the
+ # train/test splits below are assumed to be prepared beforehand)
+ aibt.setup_clients(
+     client_datasets=client_data,      # List of (X, y) tuples, one per client
+     sensitive_data=client_sensitive   # Optional: sensitive attributes
+ )
+
+ # Train with federated learning
+ history = aibt.train(
+     num_rounds=100,
+     local_epochs=5,
+     test_data=(X_test, y_test),
+     verbose=True
+ )
+
+ # Evaluate privacy
+ privacy_metrics = evaluate_privacy(
+     model=model,
+     train_data=(X_train, y_train),
+     test_data=(X_test, y_test),
+     device="cpu"
+ )
+
+ print(f"Membership Inference AUC: {privacy_metrics['membership_auc']:.4f}")
+ print(f"Privacy preserved: {privacy_metrics['membership_auc'] < 0.55}")
+ ```
+
+ ## Architecture
+
+ AIBT combines three key components:
+
+ ```
+ Input → Encoder → Compressed Representation → Predictor → Output
+                           │
+                           ▼
+                Gradient Reversal Layer
+                           │
+                           ▼
+       Adversary (tries to infer sensitive info)
+ ```
+
+ **Loss Function:**
+ ```
+ L = L_task + λ₁ L_KL - λ₂ L_adv
+ ```
+
+ - `L_task`: Task-specific loss (e.g., cross-entropy)
+ - `L_KL`: KL divergence for information bottleneck compression
+ - `L_adv`: Adversarial loss for privacy (with gradient reversal)
+
+ ## API Reference
+
+ ### Core Classes
+
+ #### `AIBTFL`
+ Main federated learning class with AIBT training.
+
+ ```python
+ AIBTFL(
+     model,                 # AIBTModel instance
+     num_clients=10,        # Number of federated clients
+     device="cpu",          # Device ("cpu" or "cuda")
+     learning_rate=0.001,   # Learning rate
+     batch_size=32,         # Batch size
+     lambda_kl=0.01,        # KL divergence weight
+     lambda_adv=1.0,        # Adversarial loss weight
+     lambda_grl=1.0,        # Gradient reversal strength
+ )
+ ```
+
+ #### `AIBTModel`
+ Complete AIBT model with encoder, predictor, and adversary.
+
+ ```python
+ AIBTModel(
+     encoder,          # Encoder network
+     predictor,        # Task predictor
+     adversary,        # Adversary network
+     lambda_kl=0.01,   # KL weight
+     lambda_adv=1.0,   # Adversarial weight
+     lambda_grl=1.0,   # GRL strength
+ )
+ ```
+
+ ### Model Components
+
+ - `GradientReversalLayer`: Reverses gradients during backpropagation for adversarial training
+ - `VariationalEncoder`: Information bottleneck encoder with reparameterization
+ - `MLPEncoder`: MLP encoder for tabular data
+ - `Predictor`: Task prediction head
+ - `Adversary`: Sensitive attribute classifier
+
193
+ ### Privacy Metrics
194
+
195
+ ```python
196
+ from aibt import evaluate_privacy, evaluate_membership_inference, evaluate_attribute_inference
197
+
198
+ # Complete privacy evaluation
199
+ metrics = evaluate_privacy(model, train_data, test_data, sensitive_train, sensitive_test)
200
+
201
+ # Individual attacks
202
+ mia_metrics = evaluate_membership_inference(model, train_data, test_data)
203
+ aia_metrics = evaluate_attribute_inference(model, X, sensitive_attrs)
204
+ ```
205
+
206
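Membership inference is typically scored as the AUC of an attacker that ranks examples by some per-example statistic (commonly the model's loss) to separate training members from non-members; 0.5 means the attacker does no better than chance, which is why the Quick Start checks `membership_auc < 0.55`. A framework-free sketch of the scoring (the helper and the negative-loss convention are ours for illustration):

```python
def membership_auc(member_scores, nonmember_scores):
    """Pairwise-ranking AUC: higher score = 'more likely a training member'."""
    wins = ties = 0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1
            elif m == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(member_scores) * len(nonmember_scores))

# Members tend to have lower loss, so use negative loss as the attack score.
train_losses = [0.10, 0.20, 0.15]   # seen during training
test_losses = [0.50, 0.40, 0.45]    # held out
auc = membership_auc([-l for l in train_losses], [-l for l in test_losses])
print(auc)  # → 1.0 — perfectly separable, i.e. no membership privacy
```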
+ ## Hyperparameters
+
+ | Parameter | Default | Description |
+ |-----------|---------|-------------|
+ | `lambda_kl` | 0.01 | KL divergence weight (compression) |
+ | `lambda_adv` | 1.0 | Adversarial loss weight (privacy) |
+ | `lambda_grl` | 1.0 | Gradient reversal strength |
+ | `latent_dim` | 128 | Latent space dimension |
+ | `learning_rate` | 0.001 | Optimizer learning rate |
+ | `batch_size` | 32 | Training batch size |
+
+ ### Hyperparameter Tuning
+
+ - **Higher `lambda_kl`**: More compression, potentially lower accuracy
+ - **Higher `lambda_adv`**: Stronger privacy, may affect utility
+ - **Recommended range**: `lambda_kl ∈ [0.005, 0.02]`, `lambda_adv ∈ [0.5, 2.0]`
+
+ ## Citation
+
+ If you use AIBT in your research, please cite:
+
+ ```bibtex
+ @article{aibt2025,
+   title={Adversarial Information Bottleneck Training for Privacy-Preserving Federated Learning},
+   journal={IEEE Transactions on Neural Networks and Learning Systems},
+   year={2025}
+ }
+ ```
+
+ ## References
+
+ - Tishby et al., "The Information Bottleneck Method", Allerton 1999
+ - Ganin & Lempitsky, "Domain-Adversarial Training of Neural Networks", JMLR 2016
+ - McMahan et al., "Communication-Efficient Learning of Deep Networks from Decentralized Data", AISTATS 2017
+
+ ## License
+
+ MIT License - see [LICENSE](LICENSE) for details.
+
+ ## Contributing
+
+ Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
@@ -0,0 +1,13 @@
+ aibt/__init__.py,sha256=zFLXOiDrNTT4mcZdJxn51P9r66-TvEDF0I__9rmv0As,1973
+ aibt/aggregation.py,sha256=NwtAKKTwOGhxka_V_rgA-7wzg1auwRRFDKifrgmBkP4,1873
+ aibt/client.py,sha256=KH4KKNLMQ4EPoAvE4m7cps23Sj6R2huiSaBmG1iyY_M,8370
+ aibt/core.py,sha256=26YgiQiS2ccNjpF2HG_2ILehUXxqGab-BWmCQkN9icg,10409
+ aibt/metrics.py,sha256=BUlmpZGVarwl1rmCd-vW4AELF_zKlSlHOjXniHJIfbM,13019
+ aibt/models.py,sha256=xZmUi04ohZ9GbYw9KES_ebcBj5HBjPs-1eMXwaKIqqY,16215
+ aibt/py.typed,sha256=a5K6xx9qkO2ovz5hHopVYtLfXXtb1or--wJahADDHGU,87
+ aibt/utils.py,sha256=xVznSZWaLU5hO-MJi7VdacclcAz4qIn7X6ZwnyBLEL4,4026
+ aibt_fl-1.0.0.dist-info/licenses/LICENSE,sha256=Mu9ua711JBvrfP99dUNRoELBtzhdtQJsstlyLprYxws,1096
+ aibt_fl-1.0.0.dist-info/METADATA,sha256=j7-Gi-JGSnez1pPluKa0ykVoNMnwxO-XFXx_SmCks7U,8552
+ aibt_fl-1.0.0.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
+ aibt_fl-1.0.0.dist-info/top_level.txt,sha256=3xRK-_2gN4esOyiSWpisUQmkbcMXAbYNYSTP_HDgk0E,5
+ aibt_fl-1.0.0.dist-info/RECORD,,
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (80.10.2)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 AIBT Research Team
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1 @@
+ aibt