Detecting Unintended Social Bias in Toxic Language Datasets

Abstract

With the rise of online hate speech, automatic detection of hate speech and offensive text has become a popular natural language processing task. However, very little research has been done on detecting unintended social bias in these toxic language datasets. This paper introduces ToxicBias, a new dataset curated from the existing Kaggle competition dataset ‘Jigsaw Unintended Bias in Toxicity Classification’. We aim to detect social biases, their categories, and targeted groups. The dataset contains instances annotated for five bias categories, viz., gender, race/ethnicity, religion, political, and LGBTQ. We train transformer-based models on our curated dataset and report baseline performance for bias identification, target generation, and bias implications. Model biases and their mitigation are also discussed in detail. Our study motivates a systematic extraction of social bias data from toxic language datasets.
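As a rough illustration of the bias identification task described above, the sketch below fine-tunes a BERT-style classifier over the five bias categories named in the abstract. The model choice, column names, and hyperparameters are assumptions for illustration only, not the paper's reported setup.

```python
# Minimal sketch: fine-tuning a transformer for bias-category classification.
# Assumptions: text/label pairs with labels drawn from the five categories
# named in the abstract. This is NOT the paper's exact configuration.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CATEGORIES = ["gender", "race/ethnicity", "religion", "political", "LGBTQ"]
LABEL2ID = {c: i for i, c in enumerate(CATEGORIES)}

class BiasDataset(Dataset):
    """Tokenizes raw comments and maps category names to integer labels."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor([LABEL2ID[l] for l in labels])

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(CATEGORIES))

# Toy rows standing in for ToxicBias instances (hypothetical placeholders).
texts = ["example toxic comment one", "example toxic comment two"]
labels = ["gender", "religion"]
loader = DataLoader(BiasDataset(texts, labels, tokenizer), batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:
    optimizer.zero_grad()
    out = model(**batch)   # loss is cross-entropy over the five categories
    out.loss.backward()
    optimizer.step()
```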

Publication
CoNLL 2022
Nihar Ranjan Sahoo
PhD Student

My research interests include bias and fairness in NLP, interpretable NLP, and natural language generation.