Show simple item record

dc.contributor.advisor: Christian Walter Peter Omlin
dc.contributor.author: Skribeland, Halvor
dc.date.accessioned: 2024-07-17T16:23:50Z
dc.date.available: 2024-07-17T16:23:50Z
dc.date.issued: 2024
dc.identifier: no.uia:inspera:222274016:50197232
dc.identifier.uri: https://hdl.handle.net/11250/3141899
dc.description.abstract: There is an increasing need for explainable and private machine learning. The European Union’s AI Act is recent legislation aimed at regulating the development and use of artificial intelligence. Trustworthy AI is an important part of this, and two of its key requirements are data privacy and model transparency. Takagi-Sugeno-Kang fuzzy rule-based systems (TSK-FRBS) are inherently explainable, and federated learning (FL) is a way to train machine learning (ML) models while ensuring data privacy. Training an inherently explainable ML model using FL therefore has the potential to ensure data privacy while producing a transparent model. This thesis empirically investigates the possible trade-offs between privacy and the performance of inherently explainable ML models and deep learning (DL) models: does federated learning reduce the performance of either TSK-FRBS or deep learning models? To answer this question, a central TSK-FRBS, a federated TSK-FRBS, a central DL model, and a federated DL model have been implemented on ten datasets from different application areas. The federated models were trained using i.i.d. data and five collaborators. The experiments show that federated learning did not significantly impact the performance or explainability of the models: the central models performed comparably to the federated models trained on the same datasets. The deep learning models performed slightly better than the TSK-FRBS models overall, with some exceptions. The TSK-FRBS model is explainable because it is transparent and consists of rules, and training it using FL does not alter the rules in any way that reduces their explainability. With respect to explainability, the results therefore suggest there is no reason not to train a TSK-FRBS using FL. To the researcher’s knowledge, this is the only work that compares inherently explainable ML and DL models to investigate the impact of FL.
dc.publisher: University of Agder
dc.title: Federated Machine Learning: Privacy, Explainability, and Performance
dc.type: Master thesis
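As a rough illustration of the setup described in the abstract, and not the thesis implementation, the following Python sketch trains a small first-order TSK fuzzy rule-based system by federated averaging of its consequent parameters across five collaborators holding i.i.d. shards of a toy regression dataset, and compares it with the same model fitted centrally. The toy data, the Gaussian antecedents, the least-squares consequent fitting, and all names in the code are assumptions chosen for illustration only.

# Illustrative sketch (assumed setup, not the thesis code): federated averaging
# of the consequent parameters of a first-order TSK-FRBS across five
# collaborators with i.i.d. shards of a toy dataset.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, split i.i.d. across five collaborators.
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=500)
shards = np.array_split(np.arange(500), 5)

# Shared antecedents: Gaussian membership functions on the single input.
centers = np.linspace(-1.0, 1.0, 5)   # one rule per center
sigma = 0.4

def firing_strengths(x):
    """Normalized rule activations for inputs x of shape (n, 1)."""
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)   # (n, 5)
    return w / w.sum(axis=1, keepdims=True)

def fit_consequents(x, t):
    """Least-squares fit of first-order consequents y_r = a_r * x + b_r."""
    w = firing_strengths(x)                 # (n, R)
    phi = np.hstack([w * x, w])             # (n, 2R): weighted [x, 1] per rule
    theta, *_ = np.linalg.lstsq(phi, t, rcond=None)
    return theta                            # (2R,)

def predict(x, theta):
    w = firing_strengths(x)
    phi = np.hstack([w * x, w])
    return phi @ theta

# Each collaborator fits consequents on its own shard (data stays local);
# the server averages the parameter vectors, weighted by shard size (FedAvg-style).
local_thetas = [fit_consequents(X[idx], y[idx]) for idx in shards]
sizes = np.array([len(idx) for idx in shards], dtype=float)
theta_fed = np.average(local_thetas, axis=0, weights=sizes)

# Central baseline: the same model fitted on the pooled data.
theta_central = fit_consequents(X, y)

for name, th in [("federated", theta_fed), ("central", theta_central)]:
    mse = np.mean((predict(X, th) - y) ** 2)
    print(f"{name:9s} MSE: {mse:.4f}")

Because the federated model in this sketch is still a small set of readable if-then rules whose consequent parameters are simply averaged, the rule base remains as inspectable as the centrally trained one, mirroring the abstract's observation that FL need not reduce the explainability of a TSK-FRBS.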

