Specializing General-purpose LLM Embeddings for Implicit Hate Speech Detection across Datasets
| Type of publication: | Conference paper |
| Citation: | Cheremetiev_ACMDHOWWORKSHOP_2025 |
| Booktitle: | Proceedings of the 2nd International Workshop on Diffusion of Harmful Content on Online Web (DHOW '25), October 27--28, 2025, Dublin, Ireland |
| Year: | 2025 |
| Abstract: | Implicit hate speech (IHS) is indirect language that conveys prejudice or hatred through subtle cues, sarcasm, or coded terminology. IHS is challenging to detect because it does not include explicitly derogatory or inflammatory words. To address this challenge, task-specific pipelines can be complemented with external knowledge or additional information such as context, emotions, and sentiment data. In this paper, we show that, by solely fine-tuning recent general-purpose embedding models based on large language models (LLMs), such as Stella, Jasper, NV-Embed, and E5, we achieve state-of-the-art performance. Experiments on multiple IHS datasets show improvements in macro-F1 score of up to 1.10 percentage points in in-dataset evaluation and up to 20.35 percentage points in cross-dataset evaluation. |
| Main Research Program: | Human-AI Teaming |
| Keywords: | Context, detection, embeddings, implicit hate speech |
| Authors: | |
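The abstract describes the approach only at a high level: take a recent general-purpose LLM-based embedding model and fine-tune it directly on implicit hate speech data, with no external knowledge or auxiliary signals. The snippet below is a minimal sketch of that kind of setup, not the authors' code: the checkpoint name (`intfloat/e5-base-v2`), the `query:` prefix, the mean-pooling plus linear-head design, the hyperparameters, and the toy batch are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): fine-tuning a
# general-purpose embedding model for binary implicit-hate-speech detection.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

model_name = "intfloat/e5-base-v2"  # assumed checkpoint; any E5 variant works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

class EmbeddingClassifier(nn.Module):
    """Embedding backbone + linear head for 2-way (hate / not hate) prediction."""
    def __init__(self, encoder, hidden_size, num_labels=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.head(pooled)

model = EmbeddingClassifier(encoder, hidden_size=encoder.config.hidden_size)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

# Toy batch for illustration; real training would iterate over an IHS corpus.
texts = ["query: they always ruin every neighborhood they move into",
         "query: lovely weather in Dublin today"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```

The same wrapper would apply, in principle, to the other backbones named in the abstract (Stella, Jasper, NV-Embed), up to each model's own pooling and prompting conventions.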