dc.description.abstract | There is an increasing need for explainable and private machine learning. The European
Union's AI Act is recent legislation aimed at regulating the development and use of
artificial intelligence in the EU. Trustworthy AI is an important part of this legislation,
and two of its key requirements are data privacy and model transparency.
Takagi-Sugeno-Kang fuzzy rule-based systems (TSK-FRBS) are inherently explainable, and
federated learning (FL) is a way to train machine learning (ML) models while preserving data
privacy. Training an inherently explainable ML model using FL therefore has the potential to
yield a transparent model while ensuring data privacy.
This thesis empirically investigates the possible trade-offs between privacy and the performance
of inherently explainable ML models and deep learning (DL) models. Does federated learning
reduce the performance of either TSK-FRBS or DL models? To answer this question,
a central TSK-FRBS, a federated TSK-FRBS, a central DL model, and a federated DL model have been
implemented and evaluated on ten datasets from different application areas.
The federated models have been trained using i.i.d. data and five collaborators. The experiments
show that federated learning did not significantly impact the performance or explainability
of the models. The central models performed comparably to the federated models
trained on the same datasets. Overall, the DL models performed slightly better than the
TSK-FRBS models, with some exceptions. The TSK-FRBS model is explainable because
it is transparent and consists of rules, and training it using FL does not alter
the rules in any way that reduces their explainability. The results suggest that, with respect
to explainability, there is no reason not to train a TSK-FRBS using FL. To the researcher's
knowledge, this is the only study that compares inherently explainable ML models with DL models
to investigate the impact of FL. | |