Alleviating Forgetfulness of Linear Attention by Hybrid Sparse Attention and Contextualized Learnable Token Eviction
Type of publication: Idiap-RR
Citation: He_Idiap-RR-01-2026
Number: Idiap-RR-01-2026
Year: 2026
Month: 3
Institution: Idiap
Abstract: Linear-attention models that compress the entire input sequence into a fixed-size recurrent state offer an efficient alternative to Transformers, but their finite memory induces forgetfulness that harms retrieval-intensive tasks. To mitigate this issue, we explore a series of hybrid models that restore direct access to past tokens. We interleave token mixers whose time and space complexity lies between linear and full attention, including sparse attention with token eviction and query-aware native sparse attention. In particular, we propose a novel learnable token-eviction approach: combined with sliding-window attention, an end-to-end trainable lightweight CNN aggregates information from both past and future adjacent tokens to adaptively retain a limited set of critical KV pairs per head, preserving linear attention's constant time and space complexity. We provide efficient Triton kernels for the sparse attention mechanisms. Empirical evaluations on retrieval-intensive benchmarks support the effectiveness of our approaches.
URL: https://arxiv.org/abs/2510.207...
Main Research Program: Human-AI Teaming
Keywords:
Projects: Idiap
Authors: He, Mutian; Garner, Philip N.
Attachments
  • He_Idiap-RR-01-2026.pdf (MD5: ca31bb22e683da8907139d9ff6cdaa10)
Notes
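As a rough illustration of the learnable token-eviction mechanism described in the abstract (not the authors' released code), the sketch below scores each cached KV pair with a lightweight 1D CNN whose receptive field spans both past and future neighbouring tokens, then keeps only the top-k pairs per head. All names (LearnableTokenEviction, keep_per_head, etc.) are hypothetical, and scaling the kept values by their sigmoid scores is an assumed trick to let gradients reach the scorer through the hard top-k, not the paper's stated method.

```python
import torch
import torch.nn as nn

class LearnableTokenEviction(nn.Module):
    """Hypothetical sketch of contextualized learnable token eviction:
    a small 1D CNN scores each cached KV pair using both past and
    future adjacent tokens, and only the top-k pairs per head are
    retained outside the sliding window."""

    def __init__(self, d_model: int, n_heads: int, keep_per_head: int,
                 kernel_size: int = 5):
        super().__init__()
        self.keep = keep_per_head
        # Same-padded conv: each token's score sees (kernel_size - 1) / 2
        # tokens on either side, i.e. local past *and* future context.
        self.scorer = nn.Conv1d(d_model, n_heads, kernel_size,
                                padding=kernel_size // 2)

    def forward(self, x, k, v):
        # x: (B, T, d_model) token representations
        # k, v: (B, H, T, d_head) per-head key/value caches
        B, H, T, Dh = k.shape
        scores = self.scorer(x.transpose(1, 2))                  # (B, H, T)
        keep = min(self.keep, T)
        idx = scores.topk(keep, dim=-1).indices.sort(-1).values  # (B, H, keep)
        gather = idx.unsqueeze(-1).expand(B, H, keep, Dh)
        k_kept = k.gather(2, gather)                             # (B, H, keep, Dh)
        # Assumed trick (not from the paper): scale kept values by their
        # sigmoid scores so the scorer is trainable end to end despite
        # the non-differentiable top-k selection.
        gate = torch.sigmoid(scores.gather(2, idx)).unsqueeze(-1)
        v_kept = v.gather(2, gather) * gate
        return k_kept, v_kept

# Usage: the retained (k_kept, v_kept) form a constant-size cache that a
# query attends to alongside a sliding window of recent tokens, keeping
# per-step time and memory independent of sequence length.
evict = LearnableTokenEviction(d_model=512, n_heads=8, keep_per_head=64)
x = torch.randn(2, 1024, 512)
k = torch.randn(2, 8, 1024, 64)
v = torch.randn(2, 8, 1024, 64)
k_kept, v_kept = evict(x, k, v)   # both (2, 8, 64, 64)
```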