Idiap Research Institute
All publications

2020
Towards Multilingual Sign Language Recognition, Sandrine Tornay, Marzieh Razavi and Mathew Magimai.-Doss, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Detection of S1 and S2 locations in phonocardiogram signals using zero frequency filter, RaviShankar Prasad, Gürkan Yilmaz, Olivier Chetelat and Mathew Magimai.-Doss, in: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Synthetic Speech References for Automatic Pathological Speech Intelligibility Assessment, Parvaneh Janbakhshi, Ina Kodrasi and Hervé Bourlard, in: International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Barcelona, Spain, 2020
Estimating the Degree of Sleepiness by Integrating Articulatory Feature Knowledge in Raw Waveform Based CNNs, Julian Fritsch, S. Pavankumar Dubagunta and Mathew Magimai.-Doss, in: International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Barcelona, Spain, 2020
Dysarthric Speech Recognition with Lattice-Free MMI, Enno Hermann and Mathew Magimai.-Doss, in: International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 6109-6113, 2020
Efficient Convolutional Neural Networks for Depth-Based Multi-Person Pose Estimation, Angel Martínez-González, Michael Villamizar, Olivier Canévet and Jean-Marc Odobez, in: IEEE Transactions on Circuits and Systems for Video Technology, 30(11):4207-4221, 2020
2019
End-to-End Accented Speech Recognition, Thibault Viglino, Petr Motlicek and Milos Cernak, in: International Conference on Speech and Language Processing, Interspeech, ISCA, Graz, Austria, pages 2140-2144, 2019
AM-FM Decomposition of Speech Signal: Applications for Speech Privacy and Diagnosis, Petr Motlicek, Hynek Hermansky, Srikanth Madikeri, Amrutha Prasad and Sriram Ganapathy, in: 11th International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications, Universita Degli Studi Firenze, Firenze, Italy, 2019
Implicit discourse relation classification with syntax-aware contextualized word representations, D. N. Popa, J. Perez, James Henderson and E. Gaussier, in: Proceedings of the 32nd International Florida Artificial Intelligence Research Society Conference, 2019
Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings, Haozhou Wang, James Henderson and Paola Merlo, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019
Joint estimation of RETF vector and power spectral densities for speech enhancement based on alternating least squares, Marvin Tammen, Ina Kodrasi and Simon Doclo, in: IEEE International Conference on Acoustics, Speech and Signal Processing, pages 795-799, 2019
Learning an event sequence embedding for event-based deep stereo, Stepan Tulyakov, Francois Fleuret, Martin Kiefel, Peter Gehler and Michael Hirsch, in: Proceedings of the IEEE International Conference on Computer Vision, 2019
Reducing Noise in GAN Training with Variance Reduced Extragradient, Tatjana Chavdarova, Gauthier Gidel, Francois Fleuret and Simon Lacoste-Julien, in: Proceedings of the international conference on Neural Information Processing Systems, 2019
Uncertainty-aware imitation learning using kernelized movement primitives, J. Silverio, Y. Huang, F. J. Abu-Dakka, L. Rozo and D. G. Caldwell, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task, Alireza Mohammadshahi, Karl Aberer and Rémi Lebret, in: Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), Hong Kong, pages 27-33, Association for Computational Linguistics, 2019
Neural VTLN for Speaker Adaptation in TTS, Bastian Schnell and Philip N. Garner, in: Proc. 10th ISCA Speech Synthesis Workshop, ISCA, Vienna, Austria, 6 pages, 2019