Explaining graph-based misinformation detection models
Social media and social networking platforms have connected people worldwide and democratised information creation and propagation by enabling seamless, near-instantaneous sharing between individuals and communities. However, such platforms can also become vectors of misinformation, which can have deleterious consequences in the real world.
Automatic misinformation detection systems have emerged as a way for platforms to combat the spread of misinformation and mitigate its impact before serious damage is done, with Graph Neural Network (GNN) approaches gaining prominence for their ability to model the structural information of online propagation events alongside their textual content. Despite their success, automated detection systems have raised questions about their trustworthiness and transparency, as the public has become increasingly sceptical about the fairness of such systems. There is therefore an urgent need to reassure the public that the automated systems deployed by platforms can be trusted, which requires explaining the workings of misinformation detection methods that would otherwise remain opaque. Explainable Artificial Intelligence (XAI) aims to address this gap by explaining and interpreting model decisions in a way that humans can understand and that faithfully reflects the behaviour of the model.
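To illustrate the kind of model the abstract refers to (and not the specific architectures studied in the thesis), a minimal propagation-graph classifier might look like the sketch below. It assumes PyTorch Geometric; the class name `PropagationGNN`, the feature dimensions, and the two-class setup are all illustrative choices.

```python
# Illustrative sketch only: a two-layer GCN that classifies a news propagation cascade,
# where each node is a post/repost carrying a text embedding and edges follow the
# propagation structure. Not the models analysed in the thesis.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class PropagationGNN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)     # aggregate neighbouring posts' embeddings
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.classifier = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        # x: [num_nodes, in_dim] node features (e.g. post text embeddings)
        # edge_index: [2, num_edges] propagation edges; batch: graph id for each node
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)               # pool node states into a graph embedding
        return self.classifier(g)                    # logits, e.g. real vs. misinformation
```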
In this thesis, we first analyse GNNs applied to misinformation detection and examine the role of graph structure and graph modelling mechanisms in the detection task. From this analysis, we identify two shortcomings of existing GNN explanations: (i) the smoothing effect of node aggregation mechanisms, which leaves node-level explanations lacking class-discriminativeness, and (ii) node-feature-level explanations that correspond to latent features in the high-dimensional text embedding space and are therefore not interpretable to humans. To tackle these two issues, we develop a framework that produces class-contrastive explanations and applies them at the token level, yielding more granular explanations that more closely reflect the behaviour of GNN misinformation detection models while also improving interpretability by identifying important natural-language tokens rather than latent features. Finally, to address the impact of the GNN inductive bias and further improve explanation class-discriminativeness, we extend the idea of class-contrastive feature selection to consider the relative importance of features between classes and formulate explanation generation as a multiobjective optimisation problem.
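To make the class-contrastive idea concrete, the following is a minimal, generic gradient-saliency sketch (not the framework proposed in the thesis): features are scored by how much they favour the predicted class over a chosen contrast class, rather than by the predicted-class score alone. It reuses the illustrative `PropagationGNN` above; the function name and signature are hypothetical.

```python
# Generic sketch of class-contrastive saliency: score node features by the gradient of
# the *difference* between the target-class and contrast-class logits. Not the method
# developed in the thesis; shown only to illustrate the class-contrastive idea.
import torch


def class_contrastive_saliency(model, x, edge_index, batch, target_class, contrast_class):
    x = x.clone().requires_grad_(True)               # node features (e.g. text embeddings)
    logits = model(x, edge_index, batch)             # [num_graphs, num_classes]
    # Contrastive objective: push towards the target class and away from the contrast class.
    contrastive_score = (logits[:, target_class] - logits[:, contrast_class]).sum()
    contrastive_score.backward()
    # Per-node importance: gradient magnitude aggregated over the feature dimension.
    return x.grad.abs().sum(dim=-1)                  # [num_nodes] node-level saliency
```

Note that such scores live in the latent embedding space; obtaining token-level explanations, as described in the abstract, additionally requires mapping feature importance back to the natural-language tokens of each post.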
Speaker’s profile

Chin Wai Kit Daniel is currently a PhD student in the Information Systems Technology and Design Pillar, Singapore University of Technology and Design. His research interests are in Explainable Artificial Intelligence, Graph Neural Networks and Social Computing. He also holds a Bachelor’s degree in Engineering from Singapore University of Technology and Design.