[PhD position] Multimodal automatic hate speech detection

Multispeech team                                 

Supervisors:

Irina Illina, MdC, HDR, UL    illina@loria.fr

Dominique Fohr, CR CNRS    dominique.fohr@loria.fr

Context and motivations

Hate speech is a form of antisocial behavior. In many countries, online hate speech is an offense punishable by law, and social media platforms are held responsible and accountable if they do not remove hateful content promptly. Manual analysis and moderation of such content are impossible because of the huge amount of data circulating on the internet. An effective solution to this problem is the automatic detection of hateful comments.

Automatic hate speech detection is a challenging problem in the field of Natural Language Processing (NLP). The approaches proposed for it are based on representing the text in numerical form and on classification models applied to this representation.

Deep learning techniques have been shown to be very powerful for classifying hate speech. Recently, a powerful transformer-based model, BERT, was proposed by Devlin et al. (2019). This model is pre-trained on huge text corpora with two tasks: masked language modelling and next sentence prediction. BERT has obtained state-of-the-art results on several NLP tasks.

The Multispeech team started working on automatic hate speech detection three years ago (D’Sa et al., 2021; Bose et al., 2022). Until now, most research on hate speech detection has used only text documents. We would like to advance the state of the art by exploring a new type of document: audio documents.

Multimodality

On social media platforms, data are multimodal by nature: posted content can mix text, video, audio, and various meta-information. Beyond this content, the data are even richer, because users can interact with it in different ways: comments, shares, likes/dislikes, etc. We aim to design architectures capable of exploiting the diversity of these multimodal and interconnected data to detect and characterize hate speech. We will then develop a new methodology for automatic hate speech detection, based on Machine Learning (ML) and Deep Neural Networks (DNNs), using the text and audio modalities.

Current ML methods use only certain task-specific text features to model hate speech. Only a few works have begun to use audio (Yousef, 2019). We propose to develop an innovative DNN-based approach that combines audio and text information in a multi-feature framework, so that the weaknesses of individual features are compensated by the strengths of others (audio, lexical, and semantic features, etc.). DNNs will be used because this paradigm has led to major improvements in NLP technologies. Multimodal data will be investigated for hate speech detection: transcribed videos, audio features, text, and comments. A multi-feature approach will be designed to integrate all these features, with specific embedding approaches to map all the modalities into the same space.
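As an illustration, the multi-feature idea can be sketched as a late fusion of pre-computed audio and text embeddings followed by a classifier head. This is a minimal sketch under illustrative assumptions: the dimensions, the random placeholder features, and the linear head are not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder embeddings (in practice: e.g. BERT vectors for
# the text modality, acoustic features for the audio modality).
text_emb = rng.normal(size=(4, 768))   # 4 comments, 768-dim text features
audio_emb = rng.normal(size=(4, 128))  # 4 comments, 128-dim audio features

# Multi-feature fusion: concatenate both modalities into one joint space.
fused = np.concatenate([text_emb, audio_emb], axis=1)  # shape (4, 896)

# A minimal linear classifier head on the fused representation.
w = rng.normal(size=fused.shape[1])
scores = 1.0 / (1.0 + np.exp(-(fused @ w)))  # sigmoid: score per comment

print(fused.shape)   # (4, 896)
print(scores.shape)  # (4,)
```

In a DNN realization, the concatenation would feed learned layers instead of a fixed linear head, so that complementary strengths of the modalities can be exploited jointly.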

The text modality can provide additional relevant information: writing specificities (repeated punctuation, upper case, etc.), lexical categories, and semantic features. One possibility for representing these linguistic features is to use transformer-based models such as BERT or GPT-2.
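For instance, the writing specificities mentioned above can be turned into simple numerical features. The function name and feature set below are hypothetical, chosen only to illustrate the kind of cues (repeated punctuation, upper-case usage) the text modality offers.

```python
import re

def writing_features(text: str) -> dict:
    """Extract simple writing-specificity features from a comment.

    Hypothetical feature set for illustration: runs of repeated
    punctuation and the proportion of upper-case letters.
    """
    letters = [c for c in text if c.isalpha()]
    return {
        # Number of runs of repeated punctuation such as "!!" or "??"
        "repeated_punct": len(re.findall(r"[!?.,]{2,}", text)),
        # Fraction of alphabetic characters written in upper case
        "upper_ratio": (sum(c.isupper() for c in letters) / len(letters))
                       if letters else 0.0,
    }

feats = writing_features("STOP posting this!!!")
print(feats["repeated_punct"])  # 1
```

Such hand-crafted features could complement the dense representations produced by transformer-based models like BERT or GPT-2.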

We plan to use available corpora with speech and text data, for example, IEMOCAP (Busso et al., 2008) or HSDVD (Rana et al., 2022).

Since this project studies user-generated content in a highly sensitive setting, we will ensure that correct ethical and GDPR procedures are followed, so as to protect privacy, respect individual rights, and implement good ethical practice.

Required skills: the candidate should hold a Master 2 or engineering degree in computer science, with theoretical knowledge of and some practical experience with Deep Learning, including good Python skills and an understanding of deep learning libraries such as PyTorch. Knowledge of NLP or signal processing would be helpful.

References

T. Bose, N. Aletras, I. Illina, D. Fohr, “Dynamically Refined Regularization for Improving Cross-domain Hate Speech Detection,” Conference of the North American Chapter of the Association for Computational Linguistics, 2022.

C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, S. S. Narayanan, “IEMOCAP: Interactive Emotional Dyadic Motion Capture Database,” Journal of Language Resources and Evaluation, vol. 42, no. 4, pp. 335–359, 2008.

J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019.

A. Rana, S. Jha, “Emotion Based Hate Speech Detection using Multimodal Learning,” arXiv:2202.06218, 2022.

A. G. D’Sa, I. Illina, D. Fohr, D. Klakow, D. Ruiter, “Exploring Conditional Language Model Based Data Augmentation Approaches for Hate Speech Classification,” International Conference on Text, Speech, and Dialogue, pp. 135–146, 2021.

Yousef, D. Emmanouilidou, “Audio-based Toxic Language Classification using Self-attentive Convolutional Neural Network,” European Signal Processing Conference (EUSIPCO), 2019.