CONF
Bredin_ICASSP_2020/IDIAP
pyannote.audio: neural building blocks for speaker diarization
Bredin, Hervé
Yin, Ruiqing
Coria, Juan Manuel
Korshunov, Pavel
Lavechin, Marvin
Fustes, Diego
Titeux, Hadrien
Bouaziz, Wassim
Gill, Marie-Philippe
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
2020
https://arxiv.org/pdf/1911.01255.pdf
URL
We introduce pyannote.audio, an open-source toolkit written in Python for speaker diarization. Based on the PyTorch machine learning framework, it provides a set of trainable end-to-end neural building blocks that can be combined and jointly optimized to build speaker diarization pipelines. pyannote.audio also comes with pre-trained models covering a wide range of domains for voice activity detection, speaker change detection, overlapped speech detection, and speaker embedding, reaching state-of-the-art performance for most of them.
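A minimal usage sketch of the pre-trained pipelines mentioned in the abstract, assuming the 1.x-era torch.hub interface of the toolkit (the 'dia' entry-point name and the dictionary-based call are assumptions; later pyannote.audio releases expose a different Pipeline API):

    import torch

    # Load a pre-trained speaker diarization pipeline via torch.hub
    # (entry-point name 'dia' is an assumption).
    pipeline = torch.hub.load('pyannote/pyannote-audio', 'dia')

    # Apply the pipeline to an audio file; the result is a
    # pyannote.core.Annotation with one label per speaker.
    diarization = pipeline({'audio': 'audio.wav'})

    # Dump the result to disk in RTTM format.
    with open('audio.rttm', 'w') as f:
        diarization.write_rttm(f)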