CONF
Khelif_EISIC2017_2017/IDIAP
Towards a breakthrough Speaker Identification approach for Law Enforcement Agencies: SIIP
Khelif, Khaled
Mombrun, Yann
Backfried, Gerhard
Sahito, Farhan
Scarpatto, Luca
Motlicek, Petr
Kelly, Damien
Hazzani, Gideon
Chatzigavriil, Emmanouil
Madikeri, Srikanth
audio and voice analysis
Forensics
LEA
OSINT
Speaker identification
EXTERNAL
https://publications.idiap.ch/attachments/papers/2017/Khelif_EISIC2017_2017.pdf
PUBLIC
https://publications.idiap.ch/index.php/publications/showcite/Khelif_Idiap-RR-29-2017
Related documents
European Intelligence and Security Informatics Conference (EISIC) 2017
Athens, Greece
2017
IEEE Computer Society
32-39
978-1-5386-2385-5/17
http://www.eisic.eu/eisic2017/organization.aspx
URL
10.1109/EISIC.2017.14
doi
This paper describes SIIP (Speaker Identification Integrated Project), a high-performance, innovative and sustainable Speaker Identification (SID)
solution running over large databases of voice samples. The solution is based on the development, integration and fusion of a series of speech analytic algorithms, which include speaker model recognition, gender identification, age identification, language and accent identification, and keyword and taxonomy
spotting. A fully integrated system is proposed, ensuring multi-source data management, advanced voice analysis, information sharing, and efficient and consistent man-machine interactions.
REPORT
Khelif_Idiap-RR-29-2017/IDIAP
Towards a breakthrough speaker identification approach for law enforcement agencies
Khelif, Khaled
Mombrun, Yann
Motlicek, Petr
Backfried, Gerhard
Kelly, Damien
Sahito, Farhan
Hazzani, Gideon
Scarpatto, Luca
Chatzigavriil, Emmanouil
Madikeri, Srikanth
audio and voice analysis
Forensics
LEA
OSINT
Speaker identification
EXTERNAL
https://publications.idiap.ch/attachments/reports/2017/Khelif_Idiap-RR-29-2017.pdf
PUBLIC
Idiap-RR-29-2017
2017
Idiap
Rue Marconi 19, Martigny, Switzerland
October 2017
This paper describes a high-performance, innovative and sustainable Speaker Identification (SID) solution running over large databases of voice samples. The solution is based on the development, integration and fusion of a series of speech analytic algorithms, which include speaker model recognition,
gender identification, age identification, language and accent identification, and keyword and taxonomy spotting. A fully integrated system is proposed, ensuring multi-source data management, advanced voice analysis, information sharing, and efficient and consistent man-machine interactions.