PhD Thesis Proposal Defense of Vítor Nascimento Lourenço, on 04/04/24 at 10:00 a.m., via videoconference

Defense link: https://meet.google.com/xyo-sskb-gyi


An Explainable Perspective on Multimodal and Multilingual Misinformation Detection on Social Networks


Abstract:


The global proliferation of misleading or entirely false information on social networks, referred to as misinformation, is a mounting concern that has prompted the emergence of fact-checking organisations. Nevertheless, manual fact-checking is slow, hindering the timely assessment of information trustworthiness. Hence, the need for automatic misinformation detection becomes evident. For such methods to be adopted in real-world scenarios and convince individuals of their reliability, it is of foremost importance that they have an explainable decision-making process; furthermore, they should take advantage of the rich information encoded within the multiple modalities present in social media posts. This thesis addresses the challenge of detecting misinformation on social networks from an explainable perspective. For this purpose, we propose mu2X, a conceptual framework for tackling misinformation detection on social networks from a topic-agnostic, multimodal, multilingual, and explainable perspective. The framework leverages the multiple modalities of social network posts to build a multimodal representation space that enables machine learning techniques to classify posts at scale as fact or misinformation. Moreover, we address the classification from an explainable perspective by identifying and generating explanations at the modality level. We evaluate the generated explanations with respect to their completeness and interpretability facets and, to that end, propose robustness and trustworthiness metrics, as well as a novel human-centred protocol to assess explanations’ interpretability. We conduct experiments on an X/Twitter dataset, and our findings show that the misinformation classifier benefits from encoding multimodal information. We qualitatively and quantitatively assess the interpretability of the provided explanations, and we conduct protocols to evaluate the trustworthiness and robustness of the generated explanations. Our analyses show the progress achieved by the proposed method in explaining the classification through relevant features whilst providing interpretability for end users. As future work, we plan to expand the types of modalities supported, broaden the techniques used to provide explanations, and further evaluate the completeness and interpretability of explanations.


Examination committee:


Prof. Aline Marins Paes Carvalho, UFF – Chair

Prof. Flavia Cristina Bernardini, UFF

Prof. Tillman Weyde, City, University of London

Prof. Ana Cristina Bicharra Garcia, UNIRIO

Prof. Fabrício Benevenuto de Souza, UFMG
