Show simple item record

dc.contributor.advisor	Spagnoletti, Paolo
dc.contributor.author	Tenge Hansen, Henrik
dc.contributor.author	Røsand Valø, Truls
dc.date.accessioned	2023-07-11T16:23:11Z
dc.date.available	2023-07-11T16:23:11Z
dc.date.issued	2023
dc.identifier	no.uia:inspera:143804570:98868477
dc.identifier.uri	https://hdl.handle.net/11250/3077668
dc.description.abstract	Focus: This master's thesis investigates how AI tools, such as Large Language Models (LLMs), impact cybersecurity operations in organizations regarded as highly reliable. To understand the impact of AI tools on such operations, we also need to understand the nature of these tools, their context of use, and the experience of the users who rely on them. Research approach: The thesis is structured around two methods of investigation. First, a systematic literature review was conducted, in which related articles were identified in sources including Google Scholar, Web of Science and the Basket of Eight journals. A qualitative multiple case study then followed, in which a total of 8 informants, selected by random sampling, were interviewed for roughly 30 minutes each, with questions based on the findings from the literature. Findings: The literature shows that AIs, while outperforming humans at tasks such as analyzing Big Data, intrusion detection and other pattern-recognition activities, bring many difficulties for both the individual and the organization. AIs and LLMs encourage overreliance: users accept their answers because of their own biases, even though the information may be fundamentally wrong or even deceitful, a phenomenon known as AI hallucination that is vital to understanding an AI's effect on individuals. The literature highlights that when using such a tool it is important to remember that it is simply a machine and may be wrong, to question everything, and not to accept any information at face value; quite simply, to think things through. LLMs also have a transparency problem: it is impossible to know the 'reasoning' behind the information they provide, a fact supported by both the literature and the interviews. Overreliance, hallucination, cultivating the wrong kind of trust and lack of transparency all lead to individuals acting mindlessly and taking the information as true, when in fact they have been deceived into trusting something that is essentially untrustworthy or at the very least should have been examined more closely. Implication: The practical implication of this study is that an organization, especially a high-reliability one, should carefully identify measures to avoid the negative impacts of AI assistants used in day-to-day cybersecurity operations.
dc.description.abstract
dc.language
dc.publisher	University of Agder
dc.title	Cybersecurity Mindfulness in the Age of Mindless AIs: Investigating AI Assistants Impact in High-Reliability Organizations
dc.type	Master thesis

