Idiap Research Institute
Philip N. Garner

Publications of Philip N. Garner sorted by first author


L

DNN-based Speech Synthesis: Importance of input features and training data, Alexandros Lazaridis, Blaise Potard and Philip N. Garner, in: International Conference on Speech and Computer (SPECOM), pages 193-200, Springer Berlin Heidelberg, 2015
Conversational Speech Recognition Needs Data? Experiments with Austrian German, Julian Linke, Philip N. Garner, Gernot Kubin and Barbara Schuppler, in: Proceedings of the 13th Language Resources and Evaluation Conference, European Language Resources Association, pages 4684-4691, 2022

M

An End-to-end Network to Synthesize Intonation Using a Generalized Command Response Model, François Marelli, Bastian Schnell, Hervé Bourlard, T. Dutoit and Philip N. Garner, in: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, pages 7040-7044, IEEE, 2019
Accent Adaptation Using Subspace Gaussian Mixture Models, Petr Motlicek, Philip N. Garner, Namhoon Kim and Jeongmi Cho, in: The 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, BC, Canada, pages 7170-7174, IEEE, 2013
Crosslingual Tandem-SGMM: Exploiting Out-Of-Language Data for Acoustic Model and Feature Level Adaptation, Petr Motlicek, David Imseng and Philip N. Garner, in: Proceedings of the 14th Annual Conference of the International Speech Communication Association (Interspeech 2013), Lyon, France, pages 510-514, ISCA, 2013
Exploiting foreign resources for DNN-based ASR, Petr Motlicek, David Imseng, Blaise Potard, Philip N. Garner and Ivan Himawan, in: EURASIP Journal on Audio, Speech, and Music Processing (2015:17), 2015
English Spoken Term Detection in Multilingual Recordings, Petr Motlicek, Fabio Valente and Philip N. Garner, in: Proceedings of Interspeech, Makuhari, Japan, ISCA, 2010

N

Empirical Evaluation and Combination of Punctuation Prediction Models Applied to Broadcast News, Alexandre Nanchen and Philip N. Garner, in: Proceedings of 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2019

P

A Speech-based Just-in-Time Retrieval System using Semantic Search, Andrei Popescu-Belis, Majid Yazdani, Alexandre Nanchen and Philip N. Garner, in: Proceedings of the ACL-HLT 2011 System Demonstrations (49th Annual Meeting of the Association for Computational Linguistics), Portland, OR, pages 80-86, 2011
A Just-in-Time Document Retrieval System for Dialogues or Monologues, Andrei Popescu-Belis, Majid Yazdani, Alexandre Nanchen and Philip N. Garner, in: SIGDIAL 2011 (12th annual SIGDIAL Meeting on Discourse and Dialogue), Demonstration Session, Portland, OR, pages 350-352, 2011

R

Context-Aware Attention Mechanism for Speech Emotion Recognition, Gaetan Ramet, Philip N. Garner, Michael Baeriswyl and Alexandros Lazaridis, in: IEEE Workshop on Spoken Language Technology, Athens, Greece, pages 126-131, 2018
ROCKIT: Roadmap for Conversational Interaction Technologies, Steve Renals, Jean Carletta, K. Edwards, Hervé Bourlard, Philip N. Garner, Andrei Popescu-Belis, Dietrich Klakow, A. Girenko, Volha Petukhova, P. Wacker, A. Joscelyne, C. Kompis, S. Aliwell, W. Stevens and Y. Sabbah, in: Proceedings of the 2014 Workshop on Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges (RFMIR '14), pages 39-42, ACM, 2014

S

Vocal Tract Length Normalization for Statistical Parametric Speech Synthesis, Lakshmi Saheer, John Dines and Philip N. Garner, in: IEEE Transactions on Audio, Speech and Language Processing, 2012
Implementation of VTLN for Statistical Speech Synthesis, Lakshmi Saheer, John Dines, Philip N. Garner and Hui Liang, in: Proceedings of ISCA Speech Synthesis Workshop, Kyoto, Japan, 2010