SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data
Mael Jullien, Marco Valentino, Hannah Frost, Paul O'Regan, Donal Landers, Andre Freitas
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), 2023
Association for Computational Linguistics
URL: https://aclanthology.org/2023.semeval-1.307
DOI: 10.18653/v1/2023.semeval-1.307

Abstract: This paper describes the results of SemEval-2023 Task 7, Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT), which consists of two tasks: a Natural Language Inference (NLI) task and an evidence selection task on clinical trial data. The proposed challenges require multi-hop biomedical and numerical reasoning, capabilities of significant importance for developing systems able to interpret and retrieve medical evidence at scale and to support personalized evidence-based care. Task 1, the entailment task, received 643 submissions from 40 participants, and Task 2, the evidence selection task, received 364 submissions from 23 participants. The tasks are challenging: the majority of submitted systems fail to significantly outperform the majority-class baseline on the entailment task, and performance on the evidence selection task is significantly better than on the entailment task. Increasing the number of model parameters leads to a direct increase in performance, far more significant than the effect of biomedical pre-training. Future work could explore the limitations of large models with respect to generalization and numerical inference, and investigate methods to augment clinical datasets to allow for more rigorous testing and to facilitate fine-tuning. We envisage that the dataset, models, and results of this task will be useful to the biomedical NLI and evidence retrieval communities. The dataset, competition leaderboard, and website are publicly available.