Are there identifiable structural parts in the sentence embedding whole?
Type of publication: | Conference paper |
Citation: | Nastase_BLACKBOXNLP_2024 |
Publication status: | Published |
Booktitle: | Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP |
Year: | 2024 |
Abstract: | Sentence embeddings from transformer models encode much linguistic information in a fixed-length vector. We investigate whether structural information – specifically, information about chunks and their structural and semantic properties – can be detected in these representations. We use a dataset consisting of sentences with known chunk structure, and two linguistic intelligence datasets whose solutions rely on detecting chunks and, respectively, their grammatical number and their semantic roles. Through an approach involving indirect supervision, and through analyses of task performance and of the internal representations built during learning, we show that information about chunks and their properties can be obtained from sentence embeddings. |
Keywords: | |
Projects: | Idiap |
Authors: | |
Notes
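The probing idea described in the abstract can be illustrated with a minimal sketch: train a linear probe to predict a chunk-related property (here, the number of chunks in a sentence) from off-the-shelf sentence embeddings. This is a hypothetical illustration, not the paper's indirect-supervision setup; the encoder name, toy sentences, and chunk-count labels below are assumptions.

# Hypothetical sketch (not the paper's method): probe sentence embeddings
# for a chunk-related property -- here, the number of chunks per sentence.
# The encoder name, toy sentences, and chunk-count labels are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Toy sentences with hand-assigned chunk counts, e.g. [The dog][chased][the ball] -> 3.
sentences = [
    "The cat sleeps.",                # 2 chunks
    "The dog chased the ball.",       # 3 chunks
    "A bird sings.",                  # 2 chunks
    "The child found a shiny coin.",  # 3 chunks
    "Dogs bark.",                     # 2 chunks
    "My friend bought a new car.",    # 3 chunks
]
labels = [2, 3, 2, 3, 2, 3]

# Encode each sentence into a fixed-length vector.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf model
embeddings = encoder.encode(sentences)

# Train a linear probe on part of the data and test on the rest.
probe = LogisticRegression(max_iter=1000).fit(embeddings[:4], labels[:4])
print(probe.predict(embeddings[4:]))  # chunk-count predictions for held-out sentences

If such a probe predicts the held-out chunk counts better than chance, the fixed-length embeddings carry some recoverable signal about chunk structure; the paper pursues this question with controlled datasets and indirect supervision rather than a direct probe of this kind.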