Optimization of Multivariate Hawkes Processes via Decentralized Learning Automata: Towards Learning How to Boost Factual Information on Social Networks
Original version: Abouzeid, A. A. O. (2023). Optimization of Multivariate Hawkes Processes via Decentralized Learning Automata: Towards Learning How to Boost Factual Information on Social Networks. [Doctoral dissertation]. University of Agder.
The increasing amount of misleading information on Social Media (SM) platforms is problematic because these platforms have become one of the primary sources of information, owing to their ease of use and the low cost of information acquisition. Misleading information can disrupt social order and hinder recovery from emergencies, as recently demonstrated by the COVID-19 infodemic and the Russian-Ukrainian information war. As the amount of misleading information on SM grows, we risk consumers starting to mistrust even reliable information sources. To this end, a wide range of Artificial Intelligence (AI)-based solutions have been proposed to combat this issue. One common approach is intervention-based misinformation mitigation on SM, where the task is to reduce exposure to misinformation by instead boosting exposure to factual information. Traditionally, a user's exposure to a particular information type is defined as the count of content of that type propagated by the user's adjacents, e.g., followees on Twitter. To boost exposure to factual online content, SM users are incentivized to propagate these facts first, so that their network adjacents, such as their followers on Twitter, can then reach and interact with them. Hence, in this context, intervening with users means incentivizing them to change their information dissemination behavior by propagating more factual information. Because users differ in their number of adjacents and in their level of exposure to misinformation, individual incentives should be determined according to the following questions: (A) How much exposure to misinformation does the user have? (B) How much exposure to misinformation do the user's adjacents have? And in all cases, (C) how likely is it that the user or their adjacents will accept, or be influenced by, the determined incentive?
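The exposure definition above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation; the function name and the toy data are hypothetical.

```python
# Hypothetical sketch of the exposure definition: a user's exposure to an
# information type is the count of content of that type propagated by the
# user's adjacents (e.g., followees on Twitter).
from collections import defaultdict

def exposure_counts(followees, propagated):
    """followees: user -> set of users they follow
    propagated: user -> {information type: number of items propagated}
    returns: user -> {information type: exposure count}"""
    exposure = {}
    for user, adj in followees.items():
        counts = defaultdict(int)
        for a in adj:
            for info_type, n in propagated.get(a, {}).items():
                counts[info_type] += n
        exposure[user] = dict(counts)
    return exposure

follows = {"u1": {"u2", "u3"}}
posts = {"u2": {"misinfo": 3, "fact": 1}, "u3": {"fact": 2}}
print(exposure_counts(follows, posts))  # {'u1': {'misinfo': 3, 'fact': 3}}
```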
Traditionally, the learning of individual incentives is facilitated through the Reinforcement Learning (RL) framework, in which the dynamics of SM users' online engagements are modeled through a simulated social network environment from which the RL agents can learn about users' behavior. However, there have been relatively few attempts at learning and evaluating the optimal individual incentivization required to achieve optimal mitigation outcomes. For instance, existing criterion functions and representations have mainly focused on the quantities of misinformation and factual information to which each user is exposed, without considering the root causes that drive these exposures. Hence, question (C) has not been investigated adequately. We believe this is a noticeable drawback in the proposed solutions, because the simulated network and the incentivization procedure should be conducted over the best possible representation and criterion function that reflect real-world dynamics on SM. In this research, we propose a novel approach utilizing RL, specifically Learning Automata (LA). Our method combines the principles of RL with the adaptive decision-making capabilities of LA to address the challenges of user needs-based incentivization learning. Further, we propose a novel simulation-based optimization framework with a novel representation of users' activity to model the task of intervention-based misinformation mitigation. Driven by this activity representation, we propose a novel criterion function that considers the key factors influencing information propagation on SM, instead of only counting misinformation and factual information exposures. These key factors were proposed in recent Social Science literature, which illustrates what dictates misinformation spread on today's SM platforms.
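To give a flavor of the LA approach, the sketch below shows a two-action Learning Automaton with the classic linear reward-inaction (L_RI) update, as one way an agent could learn which incentive level a user responds to. The action semantics, reward probabilities, and environment here are illustrative assumptions, not the dissertation's actual setup.

```python
# Two-action Learning Automaton with the linear reward-inaction (L_RI)
# update: probability mass moves toward the chosen action only when the
# environment rewards it; penalties leave the probabilities unchanged.
import random

def l_ri_update(p, chosen, rewarded, lr=0.1):
    """p: action probability vector; chosen: index of the action taken;
    rewarded: whether the environment returned a reward."""
    if not rewarded:
        return p
    q = [pi * (1 - lr) for pi in p]
    q[chosen] += lr
    return q

# Toy environment (hypothetical): action 1 ("high incentive") is rewarded
# with probability 0.8, action 0 ("low incentive") with probability 0.2.
random.seed(0)
p = [0.5, 0.5]
for _ in range(2000):
    a = 0 if random.random() < p[0] else 1
    reward = random.random() < (0.2 if a == 0 else 0.8)
    p = l_ri_update(p, a, reward)
print(p)  # p typically converges toward the better-rewarded action
```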
In that manner, we represent users' temporal activities in terms of societal bias, content engagement, and the propagation patterns of both misinformation and factual information. Thus, incentives are not assigned based solely on the quantity of exposure to misinformation; rather, they are evaluated and assigned based on the probability of agreeing with content or an opinion carrying a particular bias, in addition to the probability of engaging with it in the first place. Further, the study proposes preliminary algorithms for verifying and self-learning SM activity categories such as political bias and information type. Finally, our empirical results highlight three significant properties. First, they demonstrate that our novel mitigation algorithms outperform traditional RL algorithms in most scenarios. Second, the results indicate that our proposed criterion functions are robust to different network statistics, in terms of varying percentages of misinformation exposure among users. Third, our novel activity representation is more transparent and extends the analytical capacity of a misinformation mitigation solution; this is reflected in the ability to trace changes in the probabilities of societal bias and content engagement as a consequence of the intervention. In brief, this research investigates questions and problem variables beyond the existing misinformation benchmark datasets and their underlying representations. The study gathered additional comprehensive and crucial data to create a well-developed learning setting and standard for the proposed LA agent. Our research aims to connect the realms of AI and Social Science by examining pertinent theoretical studies concerning the spread of misinformation on SM and the interconnected dynamics that govern this process.
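The title's multivariate Hawkes process, the point-process model underlying the simulated activity dynamics, can be sketched via its conditional intensity with exponential kernels. All parameter values below are illustrative assumptions.

```python
# Sketch of a multivariate Hawkes process conditional intensity with
# exponential kernels: each user's event rate is a base rate plus
# excitation from past events of all users, decaying over time.
import math

def intensity(i, t, history, mu, alpha, beta):
    """lambda_i(t) = mu[i] + sum over past events (s, j) with s < t of
    alpha[i][j] * exp(-beta * (t - s))."""
    lam = mu[i]
    for s, j in history:
        if s < t:
            lam += alpha[i][j] * math.exp(-beta * (t - s))
    return lam

mu = [0.1, 0.2]          # base posting rates of two users (hypothetical)
alpha = [[0.0, 0.5],     # user 0 is excited by user 1's events
         [0.3, 0.0]]     # user 1 is excited by user 0's events
beta = 1.0               # decay rate of the excitation
events = [(0.0, 0), (0.5, 1)]   # (time, user) pairs
print(intensity(0, 1.0, events, mu, alpha, beta))
```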
Has parts:
Paper I: Abouzeid, A. A. O., Granmo, O.-C., Webersik, C. & Goodwin, M. (2019). Causality-based Social Media Analysis for Normal Users Credibility Assessment in a Political Crisis. In S. Balandin, V. Niemi & T. Tuytina (Eds.), Proceedings of the 25th Conference of Open Innovations Association FRUCT (3-14). Finland: FRUCT. https://doi.org/10.23919/FRUCT48121.2019.8981500. Published version. Full-text is available in AURA as a separate file: https://hdl.handle.net/11250/2648706
Paper II: Abouzeid, A. A. O., Granmo, O.-C., Webersik, C. & Goodwin, M. (2020). Learning Automata-based Misinformation Mitigation via Hawkes Processes. Information Systems Frontiers, 23, 1169-1188. https://doi.org/10.1007/s10796-020-10102-8. Published version. Full-text is available in AURA as a separate file: https://hdl.handle.net/11250/3070107
Paper III: Abouzeid, A., Granmo, O.-C. & Goodwin, M. (2021). Modelling Emotion Dynamics in Chatbots with Neural Hawkes Processes. In M. Bramer & R. Ellis (Eds.), Artificial Intelligence XXXVIII. SGAI-AI 2021. Lecture Notes in Computer Science, vol. 13101. https://doi.org/10.1007/978-3-030-91100-3_12. Author's accepted manuscript. Full-text is not available in AURA as a separate file.
Paper IV: Abouzeid, A., Granmo, O.-C., Webersik, C., & Goodwin, M. (2022). Socially Fair Mitigation of Misinformation on Social Networks via Constraint Stochastic Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 11801-11809. https://doi.org/10.1609/aaai.v36i11.21436. Author's accepted manuscript. Full-text is not available in AURA as a separate file.
Paper V: Abouzeid, A. A. O. & Granmo, O-C. (2022). MMSS: A storytelling simulation software to mitigate misinformation on social media. Software Impacts, 13, 1-4. https://doi.org/10.1016/j.simpa.2022.100341. Published version. Full-text is available in AURA as a separate file: https://hdl.handle.net/11250/3033432
Paper VI: Abouzeid, A., Granmo, O.-C., Goodwin, M. & Webersik, C. (2022). Label-Critic Tsetlin Machine: A Novel Self-supervised Learning Scheme for Interpretable Clustering. 2022 International Symposium on the Tsetlin Machine (ISTM). IEEE. https://doi.org/10.1109/ISTM54910.2022.00016. Author's accepted manuscript. Full-text is not available in AURA as a separate file.
Paper VII: Abouzeid, A. (Forthcoming). Novel Users' Activity Representation for Modeling Societal Acceptance Towards Misinformation Mitigation on Social Media. Journal of Computational Social Science. Submitted version. Full-text is not available in AURA as a separate file.