Federated Machine Learning: Privacy, Explainability, and Performance
Master thesis
Permanent link: https://hdl.handle.net/11250/3141899
Publication date: 2024
Abstract
There is an increasing need for explainable and private machine learning. The European Union's AI Act is recent legislation aimed at regulating the development and use of artificial intelligence in the European Union. Trustworthy AI is an important part of this, and among its key requirements are data privacy and model transparency. Takagi-Sugeno-Kang fuzzy rule-based systems (TSK-FRBS) are inherently explainable, and federated learning (FL) is a way to train machine learning (ML) models while preserving data privacy. Training an inherently explainable ML model with FL therefore has the potential to provide both data privacy and a transparent model.
This thesis empirically investigates the possible trade-offs between privacy and the performance of inherently explainable ML models and deep learning (DL) models. Does federated learning reduce the performance of either TSK-FRBS or deep learning models? To answer this question, a central TSK-FRBS, a federated TSK-FRBS, a central DL model, and a federated DL model have been implemented on ten datasets from different application areas.
The federated models have been trained using i.i.d. data and five collaborators. The experiments show that federated learning did not significantly impact the performance or explainability of the different models. The central models performed comparably to the federated models trained on the same datasets. The deep learning models perform slightly better than the TSK-FRBS model overall, with some exceptions. The TSK-FRBS model is explainable because it is transparent and consists of rules. Training the TSK-FRBS model using FL does not alter the rules in any way that reduces their explainability. The results suggest that, with respect to explainability, there is no reason not to train a TSK-FRBS using FL. To the researcher's knowledge, this is the only research that compares inherently explainable ML and DL models to investigate the impact of FL.
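The federated setup described above (five collaborators, i.i.d. data, only model parameters shared) can be sketched with federated averaging. This is a minimal illustration under assumed details: the thesis does not specify this model or update rule, and the linear-regression clients and synthetic data below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One local gradient step of linear regression (MSE) on a client's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Five clients with i.i.d. synthetic data drawn from y = 3*x + noise.
# Raw data never leaves a client; only weights are communicated.
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 1))
    y = 3 * X[:, 0] + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(1)  # global model held by the server
for _ in range(100):  # communication rounds
    # Each collaborator trains locally from the current global weights.
    local_weights = [local_step(w, X, y) for X, y in clients]
    # The server averages client weights (equal client sizes -> simple mean).
    w = np.mean(local_weights, axis=0)

print(round(float(w[0]), 2))  # converges near the true slope of 3
```

The privacy argument in the abstract rests on exactly this pattern: the server only ever sees aggregated parameters, never any collaborator's training examples.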