
CYBERBULLYING DETECTION: A COMPARATIVE STUDY OF CLASSIFICATION ALGORITHMS
PARTHAV NUTHALAPATI, SRINIVAS ADITYA ABBARAJU, G. HEMANTH VARMA, SITANATH BISWAS
VNR Vignana Jyothi College of Engineering and Technology

Corresponding author: parthavnuthalapati2019@gmail.com

Abstract

In the realm of social media, cyberbullying's pervasive impact raises urgent concerns about its emotional and psychological toll on victims. This study addresses the need to detect cyberbullying effectively. By leveraging machine learning (ML) and deep learning (DL) techniques, we aim to develop reliable methods that accurately identify instances of cyberbullying in social media data, improving both detection efficiency and accuracy and thereby facilitating timely intervention and support for affected individuals. In this comprehensive analysis of existing systems, various ML and DL models are extensively tested for cyberbullying detection: Random Forest, XGBoost, Naive Bayes, SVM, CNN, RNN, and BERT. Pre-processed datasets are used to train and evaluate the models, and performance metrics such as F1 score, recall, precision, and accuracy assess each model's ability to reliably identify cyberbullying in social media data. The findings demonstrate the efficacy of different ML and DL models for monitoring cyberbullying in social media data. Among the models evaluated, BERT exhibits exceptional performance, achieving the highest accuracy rates of 88.8% for binary classification and 86.6% for multiclass classification.
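To make the evaluation protocol concrete, the following is a minimal sketch, not the paper's actual implementation, of one classical baseline from the study: a TF-IDF feature extractor feeding a Multinomial Naive Bayes classifier, scored with the same four metrics the abstract reports (accuracy, precision, recall, F1). The toy corpus, pipeline settings, and train/test split below are illustrative assumptions standing in for the paper's pre-processed social media datasets.

```python
# Minimal sketch of one baseline from the study: TF-IDF + Multinomial Naive
# Bayes for binary cyberbullying detection, evaluated with accuracy,
# precision, recall, and F1. The texts/labels here are placeholders; the
# paper's pre-processed datasets are assumed to be loaded in their place.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder corpus: label 1 = bullying, 0 = not bullying.
texts = ["you are worthless", "great game last night",
         "nobody likes you", "see you at lunch"] * 25
labels = [1, 0, 1, 0] * 25

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels)

# Unigram + bigram TF-IDF features feeding a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(X_train, y_train)
pred = model.predict(X_test)

print(f"accuracy:  {accuracy_score(y_test, pred):.3f}")
print(f"precision: {precision_score(y_test, pred):.3f}")
print(f"recall:    {recall_score(y_test, pred):.3f}")
print(f"f1:        {f1_score(y_test, pred):.3f}")
```

The same scoring loop applies unchanged to the other models compared in the study (SVM, Random Forest, XGBoost, or a fine-tuned BERT), since each ultimately produces per-example predictions that these metrics consume.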
Submitted to Security and Privacy on 21 Sep 2023.