CONF Nastase_CLIC-IT2024-2_2024/IDIAP
Title: Exploring syntactic information in sentence embeddings through multilingual subject-verb agreement
Authors: Nastase, Vivi; Jiang, Chunyang; Samo, Giuseppe; Merlo, Paola
Keywords: cross-lingual diagnostic studies of deep learning models; multilingual syntactic information; synthetic structured data
Venue: Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)
Year: 2024
Abstract: In this paper, our goal is to investigate to what degree multilingual pretrained language models capture cross-linguistically valid abstract linguistic representations. We take the approach of developing large-scale curated synthetic data with specific properties and using it to study sentence representations built with pretrained language models. We use a new multiple-choice task and accompanying datasets, Blackbird Language Matrices (BLMs), to focus on a specific grammatical structural phenomenon -- subject-verb agreement across a variety of sentence structures -- in several languages. Solving this task requires a system that detects complex linguistic patterns and paradigms in text representations. Using a two-level architecture that solves the problem in two steps -- detecting syntactic objects and their properties in individual sentences, then finding patterns across an input sequence of sentences -- we show that, despite having been trained on multilingual texts in a consistent manner, multilingual pretrained language models exhibit language-specific differences, and syntactic structure is not shared, even across closely related languages.
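
To make the two-level setup described in the abstract concrete, the following is a minimal illustrative sketch in Python/PyTorch. All module names, dimensions, and the cosine-similarity scoring rule are assumptions made for illustration, not the authors' implementation: level 1 compresses each pretrained sentence embedding into a compact representation meant to retain syntactic properties, and level 2 reads the sequence of compressed context sentences from a multiple-choice BLM instance and scores each candidate answer.

# Hypothetical two-level sketch for a BLM-style multiple-choice task.
# Level 1 compresses individual sentence embeddings; level 2 detects the
# pattern across the context sequence and scores candidate answers.
import torch
import torch.nn as nn


class SentenceCompressor(nn.Module):
    """Level 1: compress a pretrained sentence embedding into a small latent."""

    def __init__(self, emb_dim: int = 768, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(emb_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )

    def forward(self, sent_emb: torch.Tensor) -> torch.Tensor:
        return self.encoder(sent_emb)


class SequencePatternModel(nn.Module):
    """Level 2: read the sequence of compressed context sentences and predict
    a representation that the correct answer should match."""

    def __init__(self, latent_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.project = nn.Linear(hidden_dim, latent_dim)

    def forward(self, context_latents: torch.Tensor) -> torch.Tensor:
        # context_latents: (batch, n_context_sentences, latent_dim)
        _, h = self.rnn(context_latents)
        return self.project(h[-1])  # (batch, latent_dim)


def score_candidates(predicted: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the predicted representation and each candidate;
    the highest-scoring candidate is the model's choice (an assumed scoring rule)."""
    return torch.cosine_similarity(predicted.unsqueeze(1), candidates, dim=-1)


if __name__ == "__main__":
    batch, n_context, n_candidates, emb_dim = 2, 7, 6, 768
    compressor = SentenceCompressor(emb_dim)
    pattern_model = SequencePatternModel()

    # Random stand-ins for pretrained multilingual sentence embeddings.
    context = torch.randn(batch, n_context, emb_dim)
    answers = torch.randn(batch, n_candidates, emb_dim)

    ctx_latents = compressor(context)       # level 1 on each context sentence
    ans_latents = compressor(answers)       # level 1 on each candidate answer
    predicted = pattern_model(ctx_latents)  # level 2 over the sentence sequence
    scores = score_candidates(predicted, ans_latents)
    print(scores.argmax(dim=-1))            # index of the chosen answer per item

In this sketch the choice of a recurrent level-2 model and of cosine-similarity scoring is arbitrary; any sequence model over the compressed sentence representations would instantiate the same two-step idea of first extracting sentence-level syntactic information and then finding the pattern across the input sequence.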