CONF
Fehr_COLM_2024/IDIAP
Nonparametric Variational Regularisation of Pretrained Transformers
Fehr, Fabio
Henderson, James
Nonparametric VIB
Out-of-domain generalisation
Post-training regularisation
Reinterpretation
Transformers
EXTERNAL
https://publications.idiap.ch/attachments/papers/2024/Fehr_COLM_2024.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/Fehr_NVRegularisation_2023
Related documents
First Conference on Language Modeling (COLM)
2024
https://openreview.net/forum?id=Zu8OWNUC0u#discussion
URL
Pretrained transformers have demonstrated impressive abilities, but tend not to generalise well out-of-domain and are very expensive to fine-tune on new domain data. Nonparametric Variational Information Bottleneck (NVIB) has been proposed as a regulariser for training cross-attention in transformers, potentially addressing this domain overfitting problem. We extend the NVIB framework to replace all types of attention functions in transformers. We show that existing pretrained transformers can be reinterpreted as nonparametric variational models using an empirical prior distribution and identity initialisation with controllable hyperparameters. We then show that changing the initialisation introduces a novel, information-theoretic post-training regularisation in the attention mechanism, which improves out-of-domain generalisation on NLP tasks without any additional training. This success supports the hypothesis that the way pretrained transformer embeddings represent information is accurately characterised by nonparametric variational Bayesian models.
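To make the described mechanism concrete, the following is a minimal NumPy sketch of the idea, not the paper's exact parameterisation: single-head softmax attention is augmented with an empirical prior component whose weight is controlled by a pseudo-count hyperparameter. Setting the pseudo-count to zero recovers standard attention (the "identity initialisation"), while increasing it smooths attention toward the prior, giving a post-training regularisation effect. The choice of the mean key/value as the empirical prior, the `prior_pseudo_count` name, and the function itself are illustrative assumptions.

```python
# Hypothetical sketch: softmax attention with an added prior pseudo-count.
# The actual NVIB parameterisation differs; this only illustrates the idea of
# post-training regularisation by mixing an empirical prior into attention.
import numpy as np

def attention_with_prior(Q, K, V, prior_pseudo_count=0.0):
    """Single-head scaled dot-product attention in which a prior component
    (here: the mean key/value, an illustrative choice of empirical prior)
    receives `prior_pseudo_count` extra weight in the softmax.
    With prior_pseudo_count = 0 this reduces to standard attention,
    matching the identity initialisation of the reinterpretation."""
    d = K.shape[-1]
    # Append the empirical prior as an extra key/value pair.
    K_aug = np.vstack([K, K.mean(axis=0, keepdims=True)])
    V_aug = np.vstack([V, V.mean(axis=0, keepdims=True)])
    scores = Q @ K_aug.T / np.sqrt(d)                       # (n_q, n_k + 1)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    # Scale the prior component by its pseudo-count (0 => no regularisation).
    weights[:, -1] *= prior_pseudo_count
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V_aug

# Usage: zero pseudo-count recovers standard attention; a positive value
# smooths the attention distribution toward the prior component.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 2))
out_identity    = attention_with_prior(Q, K, V, prior_pseudo_count=0.0)
out_regularised = attention_with_prior(Q, K, V, prior_pseudo_count=1.0)
```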
ARTICLE
Fehr_NVRegularisation_2023/IDIAP
Nonparametric Variational Regularisation of Pretrained Transformers
Fehr, Fabio
Henderson, James
ArXiv
2023
https://arxiv.org/abs/2312.00662
URL
https://doi.org/10.48550/arXiv.2312.00662
doi
The current paradigm of large-scale pre-training and fine-tuning Transformer large language models has led to significant improvements across the board in natural language processing. However, such large models are susceptible to overfitting to their training data, and as a result the models perform poorly when the domain changes. Also, due to the models' scale, the cost of fine-tuning them on new domain data is large. Nonparametric Variational Information Bottleneck (NVIB) has been proposed as a regulariser for training cross-attention in Transformers, potentially addressing the overfitting problem. We extend the NVIB framework to replace all types of attention functions in Transformers, and show that existing pretrained Transformers can be reinterpreted as Nonparametric Variational (NV) models using a proposed identity initialisation. We then show that changing the initialisation introduces a novel, information-theoretic post-training regularisation in the attention mechanism, which improves out-of-domain generalisation without any training. This success supports the hypothesis that pretrained Transformers are implicitly NV Bayesian models.