Idiap Research Institute
Petr Motlicek
Publications of Petr Motlicek sorted by first author


K

Towards a Breakthrough Speaker Identification Approach for Law Enforcement Agencies: SIIP, Khaled Khelif, Yann Mombrun, Gerhard Backfried, Farhan Sahito, Luca Scarpatto, Petr Motlicek, Damien Kelly, Gideon Hazzani, Emmanouil Chatzigavriil and Srikanth Madikeri, in: European Intelligence and Security Informatics Conference (EISIC) 2017, Athens, Greece, pages 32-39, IEEE Computer Society, 2017
SIIP: An Innovative Speaker Identification Approach for Law Enforcement Agencies, Khaled Khelif, Yann Mombrun, Gideon Hazzani, Petr Motlicek, Srikanth Madikeri, Farhan Sahito, Damien Kelly, Luca Scarpatto, Emmanouil Chatzigavriil and Gerhard Backfried, in: Big Data and Artificial Intelligence for Military Decision Making, http://www.sto.nato.int/, pages PT-1-1 - PT-1-14, STO, 2018
Adaptation of Assistant Based Speech Recognition to New Domains and Its Acceptance by Air Traffic Controllers, Matthias Kleinert, Hartmut Helmke, Gerald Siol, Heiko Ehr, Dietrich Klakow, Mittul Singh, Petr Motlicek, Christian Kern, Aneta Cerna and Petr Hlousek, in: Proceedings of the 2nd International Conference on Intelligent Human Systems Integration (IHSI 2019): Integrating People and Intelligent Systems, San Diego, California, USA, pages 820-826, 2019
Multimodal Cue Detection Engine for Orchestrated Entertainment, Danil Korchagin, Stefan Duffner, Petr Motlicek and Carl Scheffler, in: Proceedings International Conference on MultiMedia Modeling, Klagenfurt, Austria, 2012
Hands Free Audio Analysis from Home Entertainment, Danil Korchagin, Philip N. Garner and Petr Motlicek, in: Proceedings of Interspeech, Makuhari, Japan, 2010
Just-in-Time Multimodal Association and Fusion from Home Entertainment, Danil Korchagin, Petr Motlicek, Stefan Duffner and Hervé Bourlard, in: Proceedings IEEE International Conference on Multimedia & Expo, Barcelona, Spain, 2011
Multitask Speech Recognition and Speaker Change Detection for Unknown Number of Speakers, Shashi Kumar, Srikanth Madikeri, Iuliia Nigmatulina, Esaú Villatoro-Tello, Petr Motlicek, Karthik Pandia D S, S. Pavankumar Dubagunta and Aravind Ganapathiraju, in: Proceedings of the 49th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) 2024, Seoul, Republic of Korea, pages 12592-12596, IEEE, 2024
TokenVerse: Towards Unifying Speech and NLP Tasks via Transducer-based ASR, Shashi Kumar, Srikanth Madikeri, Juan Zuluaga-Gomez, Iuliia Thorbecke, Esaú Villatoro-Tello, Sergio Burdisso, Petr Motlicek, Karthik Pandia D S and Aravind Ganapathiraju, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20988–20995, Association for Computational Linguistics (ACL), 2024

L

Impact du degré de supervision sur l'adaptation à un domaine d'un modèle de langage à partir du Web [Impact of the degree of supervision on Web-based domain adaptation of a language model], Gwénolé Lecorvé, John Dines, Thomas Hain and Petr Motlicek, in: Actes de la conférence conjointe JEP-TALN-RECITAL 2012, Grenoble, France, pages 193-200, ATALA/AFCP, 2012
Supervised and Unsupervised Web-based Language Model Domain Adaptation, Gwénolé Lecorvé, John Dines, Thomas Hain and Petr Motlicek, in: Proceedings of Interspeech, Portland, Oregon, USA, to appear, 2012

M

A Bayesian Approach to Inter-task Fusion for Speaker Recognition, Srikanth Madikeri, Subhadeep Dey and Petr Motlicek, in: Proceedings of ICASSP 2019, Brighton, UK, pages 5786-5790, 2019
Analysis of Language Dependent Front-End for Speaker Recognition, Srikanth Madikeri, Subhadeep Dey and Petr Motlicek, in: Proceedings of Interspeech 2018, Hyderabad, India, pages 1101-1105, 2018
Intra-class Covariance Adaptation in PLDA Back-ends for Speaker Verification, Srikanth Madikeri, Marc Ferras, Petr Motlicek and Subhadeep Dey, in: Proceedings of International Conference on Acoustics, Speech and Signal Processing, pages 5365-5369, 2017