Idiap Research Institute
Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction
Type of publication: Conference paper
Citation: Fajcik_ACL2023_2023
Publication status: Published
Booktitle: Findings of the Association for Computational Linguistics: ACL 2023
Publisher: Association for Computational Linguistics
Year: 2023
Month: July
Pages: 10184–10205
Note: https://aclanthology.org/2023.findings-acl.647/
Crossref: Fajcik_Idiap-Com-03-2022
URL: https://aclanthology.org/2023.findings-acl.647/
Abstract: We present Claim-Dissector: a novel latent variable model for fact-checking and analysis which, given a claim and a set of retrieved evidence, jointly learns to identify (i) the evidence relevant to the given claim and (ii) the veracity of the claim. We propose to disentangle the per-evidence relevance probability and its contribution to the final veracity probability in an interpretable way: the final veracity probability is proportional to a linear ensemble of per-evidence relevance probabilities. In this way, the individual contribution of each piece of evidence to the final predicted probability can be identified. Within the per-evidence relevance probability, our model can further distinguish whether each relevant piece of evidence supports (S) or refutes (R) the claim. This makes it possible to quantify how much the S/R probability contributes to the final verdict, and to detect disagreeing evidence. Despite its interpretable nature, our system achieves results competitive with the state of the art on the FEVER dataset, compared to typical two-stage pipelines, while using significantly fewer parameters. Furthermore, our analysis shows that our model can learn fine-grained relevance cues from coarse-grained supervision, which we demonstrate in two ways. (i) We show that our model achieves competitive sentence recall while using only paragraph-level relevance supervision. (ii) Moving towards the finest granularity of relevance, we show that our model is capable of identifying relevance at the token level. To do this, we present a new benchmark, TLR-FEVER, focusing on token-level interpretability: human annotators mark the tokens in relevant evidence that they considered essential when making their judgment. We then measure how similar these annotations are to the tokens our model focuses on.
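For illustration, the linear-ensemble readout described in the abstract can be sketched in a few lines of Python. This is a hypothetical minimal example, not the authors' implementation: the function name, the toy scores, and the shared softmax over evidence-polarity pairs are assumptions made here; only the idea that the verdict is a linear ensemble (a sum) of per-evidence supporting/refuting relevance probabilities comes from the abstract.

    # Minimal sketch (assumptions noted above, not the paper's actual code) of
    # the interpretability scheme: the final veracity probability is a linear
    # ensemble of per-evidence relevance probabilities, split into supporting
    # (S) and refuting (R) mass.
    import numpy as np

    def veracity_from_evidence(s_scores: np.ndarray, r_scores: np.ndarray):
        """s_scores[i] / r_scores[i]: model scores that evidence i supports /
        refutes the claim. Returns per-evidence relevance probabilities and
        the final SUPPORTS/REFUTES distribution."""
        # Shared softmax over all (evidence, polarity) pairs, so the S/R
        # masses of different evidences sum to 1 and are directly comparable.
        logits = np.concatenate([s_scores, r_scores])
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        p_support, p_refute = probs[: len(s_scores)], probs[len(s_scores):]

        # Relevance of evidence i = its total S+R mass; its signed contribution
        # to the verdict is readable from p_support[i] vs p_refute[i], which is
        # what allows disagreeing evidence to be detected.
        relevance = p_support + p_refute

        # Final veracity: a linear ensemble (here, a plain sum) of the
        # per-evidence probabilities for each polarity.
        verdict = {"SUPPORTS": p_support.sum(), "REFUTES": p_refute.sum()}
        return relevance, p_support, p_refute, verdict

    # Toy example with three retrieved evidences: two supporting, one refuting.
    relevance, ps, pr, verdict = veracity_from_evidence(
        s_scores=np.array([2.0, 1.5, -1.0]),
        r_scores=np.array([-1.0, -0.5, 1.0]),
    )
    print("per-evidence relevance:", relevance.round(3))
    print("verdict:", {k: round(v, 3) for k, v in verdict.items()})

Because the verdict is a sum of per-evidence terms, each evidence's contribution to the final probability can be read off directly, which is the interpretability property the abstract emphasizes.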
Keywords:
Projects: Idiap, CRITERIA
Authors: Fajcik, Martin; Motlicek, Petr; Smrz, Pavel