ARTICLE  Parthasarathi_TASLP_2012/IDIAP
Title: Wordless Sounds: Robust Speaker Diarization using Privacy-Preserving Audio Representations
Authors: Parthasarathi, Sree Hari Krishnan; Bourlard, Hervé; Gatica-Perez, Daniel
PDF (external, public): https://publications.idiap.ch/attachments/papers/2012/Parthasarathi_TASLP_2012.pdf
Related documents: https://publications.idiap.ch/index.php/publications/showcite/Parthasarathi_Idiap-RR-28-2012
Published in: IEEE Transactions on Audio, Speech, and Language Processing, 2012
Abstract: This paper investigates robust privacy-sensitive audio features for speaker diarization in multiparty conversations, i.e., a set of audio features carrying low linguistic information, for use in single and multiple distant microphone scenarios. We systematically investigate the Linear Prediction (LP) residual; issues such as the prediction order and the choice of representation of the LP residual are studied. Additionally, we explore combining the LP residual with subband information from 2.5 kHz to 3.5 kHz and with the spectral slope. Next, we propose a supervised framework using a deep neural architecture for deriving privacy-sensitive audio features. We benchmark these approaches against traditional Mel Frequency Cepstral Coefficient (MFCC) features for speaker diarization in both microphone scenarios. Experiments on the RT07 evaluation dataset show that the proposed approaches yield diarization performance close to that of MFCC features on the single distant microphone dataset. To objectively evaluate the notion of privacy in terms of linguistic information, we perform human and automatic speech recognition tests, showing that the proposed privacy-sensitive audio features yield much lower recognition accuracies than MFCC features.

REPORT  Parthasarathi_Idiap-RR-28-2012/IDIAP
Title: Wordless Sounds: Robust Speaker Diarization using Privacy-Preserving Audio Representations
Authors: Parthasarathi, Sree Hari Krishnan; Bourlard, Hervé; Gatica-Perez, Daniel
PDF (external, public): https://publications.idiap.ch/attachments/reports/2011/Parthasarathi_Idiap-RR-28-2012.pdf
Report number: Idiap-RR-28-2012, Idiap, September 2012
Abstract: identical to the journal article abstract above.
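Since the abstract centres on the LP residual as the privacy-sensitive representation, a minimal sketch of how such a residual can be computed per frame is given below. This is an illustrative assumption, not the authors' exact pipeline: the sampling rate, frame length, prediction order, and synthetic input are placeholders, and it assumes librosa and scipy are available.

```python
# Minimal sketch: LP residual of a single speech frame, the core
# privacy-sensitive representation discussed in the paper.
# All parameter values below are illustrative assumptions.
import numpy as np
import librosa
import scipy.signal

sr = 16000                                   # assumed sampling rate
frame = np.random.randn(400)                 # stand-in for a 25 ms speech frame

order = 10                                   # LP prediction order (a design choice studied in the paper)
a = librosa.lpc(frame, order=order)          # inverse-filter coefficients, a[0] == 1.0

# The LP residual is the output of the inverse (whitening) filter A(z):
#   e[n] = x[n] + sum_{k=1..p} a_k * x[n-k]
residual = scipy.signal.lfilter(a, [1.0], frame)

# Downstream feature extraction (e.g., subband energies, spectral slope,
# or learned features) would operate on this residual signal.
print(residual.shape)
```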