
Explainable AI in Industrial Control Systems; Key attributes, requirements and proof of concept

Siqveland, Erlend Sundet
Master thesis
File
no.uia:inspera:222274016:47013114.pdf (3.326Mb)
URI
https://hdl.handle.net/11250/3141893
Date
2024
Collections
  • Master's theses in Information and Communication Technology [509]
Abstract
The increased digitization of Industrial Control Systems (ICSs) and the integration of Internet of Things (IoT) technologies into Critical Infrastructure (CI) have introduced new vulnerabilities to the security and operational safety of ICS and CI. At the same time, advances in Machine Learning (ML) and Artificial Intelligence (AI) have enabled the development of Intrusion Detection Systems (IDSs) that use ML/AI methods (e.g. Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN)) for anomaly detection and classification of cyberattacks on IT and Industrial IoT networks. ICSs in critical infrastructure are real-time systems with zero fault tolerance and high safety requirements for operational continuity, because these systems are vital to society. Hence, the integration of ML/AI into applications such as industrial control systems (e.g. for detecting operational failures or cyberattacks) has created a need for strict requirements on the development and integration of ML/AI applications in critical infrastructure. Several governing bodies have begun developing guidelines and regulations for AI technologies and applications. Explainable AI (XAI) is one approach to increasing the trustworthiness and transparency of ML/AI methods, enabling better integration in ICSs and helping to reduce their vulnerabilities.

This thesis explores AI applications for detecting cyberattacks on IoT networks in an ICS environment. A Systematic Literature Review (SLR) was conducted to identify the key attributes of AI applications in ICSs and to determine the requirements for XAI methods. The key findings of the SLR show that AI applications in ICSs have differing attributes, that new metrics are needed to validate ML/AI models for ICSs, and that a critical time bound applies to anomaly detection/classification where operational safety and information security are concerned. The key attributes of XAI methods are their time constraints; in addition, an intuitive explanation and validation of the XAI output are required. Based on these findings, an unsupervised anomaly detector combined with an RF ML model for explainability was proposed for detecting and classifying anomalies related to cyberattacks on IoT networks. Furthermore, this research uncovered a gap in the existing work on anomaly detection and classification with respect to system errors, operational safety, and cybersecurity. Due to resource constraints (time, and the lack of a simulation environment to generate a new dataset), existing datasets of IoT network and physical sensor data were used to develop a logic layer as a proof of concept for detecting and classifying combinations of cyberattacks. While the results are promising, further work is needed to develop a test bed and data covering both operational safety and cybersecurity cases. Finally, methods for evaluating the vulnerability of ML/AI methods and for validating XAI requirements are needed.
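The two-stage approach the abstract describes (an unsupervised anomaly detector paired with a Random Forest for explainability) can be sketched as follows. This is an illustrative sketch only, not the thesis's actual implementation: the feature names and traffic data are synthetic placeholders, and Random Forest feature importances stand in here as a simple proxy for the XAI component.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
features = ["pkt_rate", "pkt_size", "conn_duration"]  # hypothetical IoT traffic features

# Synthetic "normal" traffic plus injected anomalies (e.g. a flooding attack).
normal = rng.normal(loc=[10, 500, 2.0], scale=[1, 50, 0.5], size=(500, 3))
attack = rng.normal(loc=[80, 100, 0.1], scale=[5, 20, 0.05], size=(25, 3))
X = np.vstack([normal, attack])

# Stage 1: unsupervised anomaly detection -- no attack labels are required.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
flags = detector.predict(X)  # +1 = normal, -1 = flagged as anomalous

# Stage 2: a Random Forest is trained to mimic the detector's decisions;
# its feature importances indicate which traffic attributes drove the flags.
surrogate = RandomForestClassifier(n_estimators=100, random_state=0)
surrogate.fit(X, flags)
for name, importance in zip(features, surrogate.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

In an ICS setting the time constraints identified by the SLR would additionally apply: both the detector's scoring and the explanation would have to complete within the critical time bound for the process being protected.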
Publisher
University of Agder

DSpace software copyright © 2002-2019  DuraSpace

Service from Unit