Idiap Research Institute
Learning to Abstract with Nonparametric Variational Information Bottleneck
Type of publication: Conference paper
Citation: Behjati_EMNLP_2023
Publication status: Accepted
Booktitle: The 2023 Conference on Empirical Methods in Natural Language Processing
Year: 2023
URL: https://openreview.net/forum?i...
Abstract: Learned representations at the level of characters, sub-words, words, and sentences have each contributed to advances in understanding different NLP tasks and linguistic phenomena. However, learning textual embeddings is costly as they are tokenization-specific and require different models to be trained for each level of abstraction. We introduce a novel language representation model which can learn to compress to different levels of abstraction at different layers of the same model. We apply Nonparametric Variational Information Bottleneck (NVIB) to stacked Transformer self-attention layers in the encoder, which encourages an information-theoretic compression of the representations through the model. We find that the layers within the model correspond to increasing levels of abstraction and that their representations are more linguistically informed. Finally, we show that NVIB compression results in a model which is more robust to adversarial perturbations.
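The mechanism the abstract describes, an information-bottleneck compression applied after each stacked self-attention layer of the encoder, can be illustrated with a minimal sketch. The snippet below is an assumption, not the paper's implementation: it substitutes an ordinary Gaussian variational bottleneck for the nonparametric (Dirichlet-process-based) NVIB layer, and the names GaussianBottleneck, BottleneckedEncoder, and the 0.01 KL weight are hypothetical.

```python
# Hypothetical sketch: a Transformer encoder whose layers each pass their
# output through a simple Gaussian variational bottleneck. This is a
# stand-in for NVIB, which instead uses a nonparametric prior over the
# attention-based representations; only the layer-by-layer compression
# idea is illustrated here.
import torch
import torch.nn as nn

class GaussianBottleneck(nn.Module):
    """Simplified stand-in for an NVIB layer: reparameterised Gaussian compression."""
    def __init__(self, d_model: int):
        super().__init__()
        self.mu = nn.Linear(d_model, d_model)
        self.logvar = nn.Linear(d_model, d_model)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        if self.training:
            # Sample a compressed representation during training.
            x = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        else:
            x = mu
        # KL divergence to a standard normal prior acts as the compression penalty.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return x, kl

class BottleneckedEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=4, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)]
        )
        self.bottlenecks = nn.ModuleList(
            [GaussianBottleneck(d_model) for _ in range(num_layers)]
        )

    def forward(self, x):
        total_kl = 0.0
        for layer, bottleneck in zip(self.layers, self.bottlenecks):
            x = layer(x)
            x, kl = bottleneck(x)   # compress after each self-attention block
            total_kl = total_kl + kl
        return x, total_kl

# Usage: the summed KL term is added to the task loss so that deeper layers
# are encouraged to keep only increasingly abstract information.
enc = BottleneckedEncoder()
hidden, kl = enc(torch.randn(2, 16, 256))     # (batch, seq, d_model)
loss = hidden.pow(2).mean() + 0.01 * kl       # placeholder task loss + compression penalty
```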
Keywords: Analysis of Neural Networks, deep learning, Nonparametric Variational Information Bottleneck, representation learning
Projects: Idiap, EVOLANG
Authors: Behjati, Melika; Fehr, Fabio; Henderson, James
Attachments
  • Behjati_EMNLP_2023.pdf