Keywords:
- Analysis of Neural Networks
- attention-based models
- autoencoders
- deep learning
- distributional semantics
- entailment
- evaluation
- generation
- hallucination detection
- Hierarchical Attention Networks
- human annotations
- language modeling
- latent space learning
- LLM
- Machine Translation
- Multilingual Learning
- multilinguality
- natural language
- natural language generation
- Natural language processing
- natural language semantics
- neural machine translation
- neural networks
- NLP
- Nonparametric Variational Information Bottleneck
- Nonparametric VIB
- Out-of-domain generalisation
- output representation learning
- Parsing
- Post-training regularisation
- QA
- Reinterpretation
- representation learning
- style transfer
- Summarization
- Text classification
- Transformer
- transformers
- unsupervised learning
- VAE
- variable-size
- VIB
- word embedding
- word sense disambiguation
- Zero-shot Learning
Publications of James Henderson sorted by title
A
A Corpus and Evaluation for Predicting Semi-Structured Human Annotations, in: Workshop on Generation, Evaluation and Metrics (GEM), 2022
A VAE for Transformers with Nonparametric Variational Information Bottleneck, in: The Eleventh International Conference on Learning Representations, 2023
A Variational AutoEncoder for Transformers with Nonparametric Variational Information Bottleneck, in: arXiv, 2022
B
Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation, Idiap-RR-21-2021
Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation, in: Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics, Online, pages 468–488, 2022
Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural Machine Translation, in: Proceedings of the Third Conference on Machine Translation (WMT), 2018
C
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers, in: NeurIPS, 2021
D
Deep Residual Output Layers for Neural Language Generation, in: Proceedings of the 36th International Conference on Machine Learning (ICML), 2019
Document-Level Neural Machine Translation with Hierarchical Attention Networks, in: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018
E
End-to-End Bias Mitigation by Modelling Biases in Corpora, in: ACL, 2020
G
GADePo: Graph-Assisted Declarative Pooling Transformers for Document-Level Relation Extraction, in: Proceedings of the 3rd Workshop on Knowledge Augmented Methods for NLP, Association for Computational Linguistics, 2024
GILE: A Generalized Input-Label Embedding for Text Classification, in: Transactions of the Association for Computational Linguistics (TACL), 2019
Graph Refinement for Coreference Resolution, in: Findings of the Association for Computational Linguistics: ACL 2022, 2022
Graph-to-Graph Transformer for Transition-based Dependency Parsing, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2020
Graph-to-Graph Transformer for Transition-based Dependency Parsing, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, ACL, Online, pages 3278–3289, Association for Computational Linguistics, 2020
H
HyperMixer: An MLP-based Green AI Alternative to Transformers, in: arXiv, 2022
HyperMixer: An MLP-based Low Cost Alternative to Transformers, in: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Toronto, Canada, pages 15632–15654, 2023
I
Idiap Scientific Report 2022, Idiap-RR-05-2023
Implicit discourse relation classification with syntax-aware contextualized word representations, in: Proceedings of the 32nd International Florida Artificial Intelligence Research Society Conference, 2019
Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning, in: Proceedings of the 25th Conference on Computational Natural Language Learning, Online, pages 337–348, Association for Computational Linguistics, 2021
Inducing Meaningful Units from Character Sequences with Dynamic Capacity Slot Attention, in: Transactions on Machine Learning Research, 2023
Integrating Weakly Supervised Word Sense Disambiguation into Neural Machine Translation, in: Transactions of the Association for Computational Linguistics (TACL), 2018
L
Learning Entailment-Based Sentence Embeddings from Natural Language Inference, Idiap-RR-20-2019
Learning to Abstract with Nonparametric Variational Information Bottleneck, in: The 2023 Conference on Empirical Methods in Natural Language Processing, 2023
M
Multi-Adversarial Learning for Cross-Lingual Word Embeddings, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, pages 463–472, 2021
Multilingual Extraction and Categorization of Lexical Collocations with Graph-aware Transformers, in: Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, Seattle, USA, pages 89–100, 2022
N
Nonparametric Variational Regularisation of Pretrained Transformers, in: arXiv, 2023
Nonparametric Variational Regularisation of Pretrained Transformers, in: First Conference on Language Modeling, 2024
P
Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks, in: ACL, 2021
Partially-supervised Mention Detection, in: Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, 2020
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models, in: ACL, 2022
Plug and Play Autoencoders for Conditional Text Generation, Idiap-RR-24-2020
Plug and Play Autoencoders for Conditional Text Generation, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Online, 2020
R
Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement, in: Transactions of the Association for Computational Linguistics (under submission), 2020
Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement, in: Transactions of the Association for Computational Linguistics, 2020